Reinforcement Learning Toolbox: Discount factor issue

5 views (last 30 days)
Hi,
I am trying to apply some RL algorithms from the RL toolbox, such as the actor-critic algorithm, to a problem where the reward for each step in an episode is discounted. However, in the training manager window the episode reward is shown as the cumulative reward rather than the discounted sum of rewards. I wonder if this is a bug, as it seems confusing.
Thanks,

Answers (3)

Ajay Pattassery 2019-8-26
Edited: Ajay Pattassery 2019-8-26
In the Episode Manager you can view the discounted sum of rewards for each episode, displayed as Episode Reward. This should be the discounted sum of rewards over the time steps if you have set a discount factor in rlACAgentOptions as below:
opt = rlACAgentOptions('DiscountFactor',0.95)
If the reward you observe for each episode is not the discounted sum of rewards, please reply with the env, critic, actor, and trainOpts (or the code you used) so the issue can be reproduced.

EBRAHIM ALEBRAHIM 2019-8-26
Hi Ajay,
I already have the discount factor set in the agent options as you mentioned, and the problem still persists. I have a test simulator that returns a reward of 1 for each step in the episode, and I have set the maximum number of steps per episode in the training options to 500, since in my problem the episode never ends and hence the 'IsDone' variable is always 0. If the episode reward in the training manager were the discounted reward, it should be (1 - 0.95^500)/(1 - 0.95) ≈ 20. But the training manager reports 500 (the undiscounted sum of rewards). A quick check of that arithmetic is shown below.
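For reference, here is that arithmetic checked in MATLAB (a quick sketch, assuming a reward of 1 on every step and a discount factor of 0.95):
% Discounted vs. undiscounted return for 500 steps of reward = 1
gamma = 0.95;
N = 500;
discounted = sum(gamma.^(0:N-1))          % geometric series, approximately 20
closedForm = (1 - gamma^N)/(1 - gamma)    % same value in closed form
undiscounted = N                          % what the training manager reports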
Thanks
  1 Comment
Ajay Pattassery 2019-8-29
Hello,
I have tried an Actor-Critic example by following the model given in the link, and I can see the effect of the discount factor in that example.



EBRAHIM ALEBRAHIM 2019-8-29
I would appreciate it if you could provide a screenshot of that, because I definitely don't see the effect of discounting, even in the CartPole example. In that example, the episode reward I get is basically the sum of rewards, even though the discount factor is set below 1 (to 0.99). As you can see from the screenshot below, the episode reward is 10, which is the sum of rewards of 15 successful balancing steps (each giving 1 unit of reward) and a final failure step, which gives -5.
The discounted reward in this situation is supposed to be (1 - 0.99^15)/(1 - 0.99) + (0.99^15)*(-5) ≈ 9.7, not 10.
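A quick sanity check of that sum (a sketch, assuming 15 steps of +1 followed by a terminal -5 and a discount factor of 0.99):
% Discounted return: 15 steps of +1, then -5 on the failing step
gamma = 0.99;
r = [ones(1,15) -5];
discounted = sum(gamma.^(0:numel(r)-1) .* r)   % approximately 9.7
undiscounted = sum(r)                          % 10, as displayed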
The CartPole code that I ran is below, with a screenshot of the training (I set the maximum number of training episodes in the training options to 1).
Thanks
[Screenshot: CartPoleEx.png - Episode Manager showing Episode Reward = 10]
clear
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
% Critic network: maps the 4-dimensional observation to a scalar state value
criticNetwork = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(1,'Name','CriticFC')];
criticOpts = rlRepresentationOptions('LearnRate',8e-3,'GradientThreshold',1);
critic = rlRepresentation(criticNetwork,obsInfo,'Observation',{'state'},criticOpts);
% Actor network: maps the observation to the two discrete actions
actorNetwork = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(2,'Name','action')];
actorOpts = rlRepresentationOptions('LearnRate',8e-3,'GradientThreshold',1);
actor = rlRepresentation(actorNetwork,obsInfo,actInfo,...
    'Observation',{'state'},'Action',{'action'},actorOpts);
% Set up the agent with a discount factor below 1
agentOpts = rlACAgentOptions(...
    'NumStepsToLookAhead',32, ...
    'DiscountFactor',0.99);
agent = rlACAgent(actor,critic,agentOpts);
% Train the agent for a single episode
rng(0)
trainOpts = rlTrainingOptions;
trainOpts.MaxEpisodes = 1;
trainOpts.MaxStepsPerEpisode = 500;
trainOpts.StopTrainingCriteria = "AverageReward";
trainOpts.StopTrainingValue = 500;
trainOpts.ScoreAveragingWindowLength = 5;
trainStats = train(agent,env,trainOpts)
  2 Comments
Ajay Pattassery 2019-9-5
The Episode Manager shows the undiscounted cumulative reward from the environment. The discount factor, however, does have an impact on training and hence on the learned policy. You can observe this by comparing the average reward over a reasonable number of episodes with a discount factor close to zero against one close to one.
Srivatsank 2024-5-28
Hey @Ajay Pattassery. Is it possible to change this display to the discounted reward? It would be helpful for debugging the reward functions that we are working with.
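As far as I can tell this display is not configurable, but a workaround is to compute the discounted return yourself from a simulated episode. A minimal sketch, assuming the env and agent from the example above and that gamma is set to the same value as the agent's DiscountFactor:
simOpts = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOpts);           % experience.Reward is a timeseries
r = squeeze(experience.Reward.Data);           % per-step rewards for the episode
gamma = 0.99;                                  % assumed: same value as the agent's DiscountFactor
discountedReturn = sum(gamma.^(0:numel(r)-1) .* r(:)')   % discounted sum of rewards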


Release: R2019a