Reinforcement Learning Toolbox: Episode Q0 stopped predicting after a few thousand simulations. DQN Agent.

The Q0 values looked reasonable until around episode 2360; after that they are not exactly stuck, they just increase very, very slowly.
I'm using the default generated DQN agent (continuous observations, discrete actions) with only a few modifications. I'm not sure whether this is a problem, or whether it is the expected behaviour and simply means my agent has converged to a somewhat stable result.
From the documentation I understood that Episode Q0 should be a prediction of the "true discounted long-term reward". I assumed this meant the discounted reward of each individual episode, regardless of convergence or the lack thereof, but maybe I misunderstood something.
Please help clarify. I made several runs and they all show the same behaviour after a few thousand episodes (not always the same number).
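For reference, my current understanding is that Episode Q0 is the critic's own estimate of the discounted long-term reward evaluated at the first observation of the episode, while the quantity I expected it to track is the discounted sum of the rewards actually collected. Here is a rough sketch of the two quantities I am comparing; obs0, episodeRewards and the use of getValue here are my own assumptions about how to read them out, not something taken from the generated training script:
% Assumed comparison: critic's estimate at the initial observation vs.
% the discounted return actually obtained in one episode.
obs0 = reset(env);                         % first observation of the episode
qAll = getValue(critic, {obs0});           % Q-value estimates, one per discrete action
q0   = max(qAll);                          % what I believe Episode Q0 reports

gamma = agent.AgentOptions.DiscountFactor; % same discount factor used in training
k = 0:numel(episodeRewards)-1;             % episodeRewards: logged per-step rewards (assumed)
discountedReturn = sum(gamma.^k .* episodeRewards(:)');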
____
The changes I made were only these ones:
% modify critic representation options (train on the GPU)
critic.Options = rlRepresentationOptions(...
    'LearnRate',1e-3,...
    'GradientThreshold',1,...
    'UseDevice','gpu');
% extract agent options
agentOpts = agent.AgentOptions;
% modify agent options
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 0.005;
agentOpts.DiscountFactor = 0.1;
% resave agent with new options
agent = rlDQNAgent(critic,agentOpts);
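Just to make explicit what that EpsilonDecay value implies: based on the documented update Epsilon = Epsilon*(1 - EpsilonDecay), applied while Epsilon is above EpsilonMin, exploration dies off within roughly the first thousand steps. The starting Epsilon = 1 and EpsilonMin = 0.01 below are the defaults I am assuming, not values I set explicitly:
% Rough sketch of how fast exploration decays with EpsilonDecay = 0.005
epsilon = 1;  epsilonMin = 0.01;  decay = 0.005;   % assumed default start/min values
steps = 0;
while epsilon > epsilonMin
    epsilon = epsilon*(1 - decay);                 % documented per-step decay rule
    steps = steps + 1;
end
fprintf('Epsilon falls below %.2g after roughly %d steps\n', epsilonMin, steps);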
2 Comments
Cecilia S. 2021-6-9
Hello! It happens every time, after a few thousand episodes. Here is another example where you can see the value keeps decreasing very slowly after "getting stuck".
I thought it might be the "correct" behaviour and that I was misunderstanding the concept of "true discounted reward", based on this example:
in which Q0 also seems to be "stuck", and that appears to be the expected result.
Perhaps the problem is my reward definition? My reward function gives an increasingly negative reward as the system moves away from a target output value, and a single positive reward when the output is in range, which also ends the episode.
Pseudocode for reward in case it helps:
% output1/output2, target1/target2 and tol1/tol2 stand in for my actual
% signals, target values and tolerance limits.
if ~IsDone
    if abs(output1 - target1) > tol1
        % parameter 1 out of range
        Reward = -100*10^(abs(output1 - target1));
    elseif abs(output2 - target2) > tol2
        % parameter 1 in range, but parameter 2 out of range
        Reward = -1*10^(abs(output2 - target2));
    end
else
    % both parameters in range: single positive reward, episode ends
    Reward = 10;
end


Answers (0)

Release: R2021a
