- Modify the "learning rate" to see if it helps the agent to escape the local maximum.
- The "clip factor" in PPO controls how much the new policy can deviate from the old policy. Adjusting this parameter could help the agent explore more.
- Adding noise to the actions can encourage exploration. For example, you can add Gaussian noise to the action values. Here is an interesting article by OpenAI on the same: https://openai.com/index/better-exploration-with-parameter-noise/
- Lastly, consider the "Entropy Loss Weight," which you have already tried.
Trying to train PPO RL agent
19 views (last 30 days)
Hello,
I'm trying to train a PPO agent, but I'm encountering the following issue:
After a certain point, the agent doesn't learn anymore (even though it has only reached a local maximum). Let's say that for the first ten episodes the agent gets a very bad reward, since it is actually performing badly. Then, on the 11th episode (see graph below), the agent finds a local maximum by updating its action values to 30 and -30 (these are the gain coefficients of a PI controller). Finally, starting from the 12th episode (i.e. the next one), the agent doesn't update its action values anymore.
As a solution, I've already tried increasing the EntropyLossWeight from 0.02 to 1. I've tried many values in this range, and none of them seems to help.
Another factor may influence the result: over a very wide range of action values (e.g. [1; ∞] for the first action), no variation in the system output is perceptible, and thus no variation in the reward can be seen whatever action value is taken within this range. In other words, in the picture below the agent tried three different gain values, but all three produced the same result, so maybe the agent can't learn from them.
So, I would like the agent to keep exploring even after it has found a better reward, since that reward is still not the best it can achieve.
Link to PPO agent options, including EntropyLossWeight: Options for PPO agent - MATLAB - MathWorks Switzerland
Any help would be greatly appreciated!
Thanks a lot in advance!
Nicolas
0 comments
Accepted Answer
Karan Singh
2024-7-23
Hi Nicolas,
The problem you are facing is a common scenario, and in my view the only way forward is trial and error over various parameters. Here are some that I have tried in my personal projects and that may be useful for you (two short MATLAB sketches follow the list):
- Modify the learning rate to see if it helps the agent escape the local maximum.
- The clip factor in PPO controls how much the new policy can deviate from the old one; adjusting this parameter could help the agent explore more.
- Adding noise to the actions can encourage exploration, for example by adding Gaussian noise to the action values. Here is an interesting article by OpenAI on this topic: https://openai.com/index/better-exploration-with-parameter-noise/
- Lastly, consider the EntropyLossWeight, which you have already tried.
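For reference, here is a minimal sketch of how these knobs map onto rlPPOAgentOptions (assuming Reinforcement Learning Toolbox R2022a or later for the ActorOptimizerOptions/CriticOptimizerOptions API; all numeric values are illustrative starting points, not recommendations):

```matlab
% Illustrative PPO hyperparameter setup; tune each value by trial and error.
opt = rlPPOAgentOptions( ...
    'ClipFactor',        0.3, ...   % default 0.2; larger allows bigger policy updates
    'EntropyLossWeight', 0.05, ...  % raise to encourage exploration, lower once stable
    'ExperienceHorizon', 512, ...
    'MiniBatchSize',     128);

% Separate learning rates for the actor and critic (try e.g. 1e-4 to 1e-3).
opt.ActorOptimizerOptions.LearnRate  = 3e-4;
opt.CriticOptimizerOptions.LearnRate = 1e-3;

% agent = rlPPOAgent(actor, critic, opt);  % actor/critic defined elsewhere
```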
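And here is a hypothetical sketch of the action-noise idea, written as a custom environment step function in the style of rlFunctionEnv. The helper name myPlantStep, the noise scale, and the action bounds are all assumptions for illustration; also note that PPO's stochastic actor already provides some exploration, so this is an extra nudge rather than a standard toolbox option:

```matlab
% Wrap the true plant step so every action is perturbed by Gaussian noise
% before being applied, then clipped back into the valid action range.
function [nextObs, reward, isDone, loggedSignals] = noisyStep(action, loggedSignals)
    sigma = 0.5;                                  % noise scale: tune by hand
    aLow  = [1; -60];                             % illustrative action bounds
    aHigh = [60; -1];
    noisyAction = action + sigma .* randn(size(action));
    noisyAction = min(max(noisyAction, aLow), aHigh);
    % Delegate to your real step function (hypothetical helper).
    [nextObs, reward, isDone, loggedSignals] = myPlantStep(noisyAction, loggedSignals);
end
```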
3 comments
More Answers (0)