Reinforcement Learning: how should normalization be applied, and to which parts?
Hi,
I have a model in Simulink and I am trying to solve a scheduling problem for control. Since the agent uses neural networks (I am using an actor-critic agent because of the continuous action and state spaces), I believe normalization should help the training find at least a local optimum for the problem.
However, it does not work (at least in R2021a) if I change the 'Normalization' option from 'none' to any other value when creating the state and action input layers.
Here is an example of the line of code:
featureInputLayer(numObservations,'Normalization','none','Name','State')
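For reference, this is roughly what I expected to be able to write instead; the mean and standard deviation values here are only placeholders for the statistics of my observation signals:
% Placeholder statistics for my observation channels (assumed values)
obsMean = zeros(numObservations,1);
obsStd  = ones(numObservations,1);
% z-score normalization specified directly in the input layer
featureInputLayer(numObservations, ...
    'Normalization','zscore', ...
    'Mean',obsMean, ...
    'StandardDeviation',obsStd, ...
    'Name','State')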
That is why I am applying the normalization in my Simulink file instead.
I was wondering what the best way to implement normalization in the .slx file would be. Currently, I apply it to my observations and to the reward function; a sketch of the observation scaling I use is included below.
Should I also apply it to some other component, or should I not apply it to any of these parts at all?
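For context, the scaling I currently do inside the Simulink observation subsystem (with Gain and Bias blocks) is equivalent to something like this; the obsMin/obsMax limits are just placeholders for the physical ranges of my signals:
% Rescale each observation channel to [-1, 1] before it reaches the agent
% obsMin/obsMax are assumed per-channel physical limits
normalizeObs = @(obs, obsMin, obsMax) 2*(obs - obsMin)./(obsMax - obsMin) - 1;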
Any comments would be helpful, thanks.