Replace RL type (PPO with DDPG) in a MATLAB example

There is a MATLAB example about coverage path planning using PPO reinforcement learning in the following link:
I think the environment is fine, and I only need to change the parts where PPO is used. I am trying to replace PPO with DDPG using the following code:
opt = rlDDPGAgentOptions( ...
    ActorOptimizerOptions=actorOpts, ...
    CriticOptimizerOptions=criticOpts, ...
    MiniBatchSize=64, ...
    SampleTime=Ts, ...
    DiscountFactor=0.995);
agentA = rlDDPGAgent(actor(1),critic(1),opt);
agentB = rlDDPGAgent(actor(2),critic(2),opt);
agentC = rlDDPGAgent(actor(3),critic(3),opt);
but I get this error: "First argument must be a rlDeterministicActorRepresentation object or an observation specification created using 'rlNumericSpec' or 'rlFiniteSetSpec' objects."
Do you have any idea?

Accepted Answer

PPO is a stochastic agent whereas DDPG is deterministic. This means that you cannot just use actors and critics designed for PPO with DDPG and vice versa. Your best bet is to either recreate those neural nets or use the default agent feature to get an initial architecture you can iterate upon.
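
For reference, the default-agent route looks roughly like this. This is a minimal sketch, not code from the example: it assumes oinfo and ainfo come from getObservationInfo and getActionInfo on the environment, and that ainfo is a continuous rlNumericSpec, since DDPG only supports continuous action spaces.

% Default-agent sketch (assumes a continuous rlNumericSpec action space,
% which DDPG requires; the 128 hidden units are illustrative):
oinfo = getObservationInfo(env);
ainfo = getActionInfo(env);
initOpts = rlAgentInitializationOptions(NumHiddenUnit=128);
agent = rlDDPGAgent(oinfo,ainfo,initOpts);

getModel(getActor(agent)) and getModel(getCritic(agent)) then return the generated networks, which you can use as a starting architecture to iterate upon.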

3 Comments

Thanks for your input, Emmanouil. Could you please guide me on recreating neural networks that fit the deterministic DDPG agent? To be honest, I am a bit confused about how to modify the neural networks. Some of the actor/critic networks are straight chains of layers, while others are different. I would appreciate it if you could point me to some resources for reading.
I would start here, and then take a look at the examples and how we set up the neural networks for each scenario.
Thanks for your guidance. I recreated those neural nets, but there is still an error: "Model input sizes must match the dimensions specified in the corresponding observation and action info specifications." Here is my code:
for idx = 1:3
    % Create actor deep neural network.
    actorNetwork = [
        imageInputLayer(obsSize,Normalization="none")
        convolution2dLayer(8,16, ...
            Stride=1,Padding=1,WeightsInitializer="he")
        reluLayer
        convolution2dLayer(4,8, ...
            Stride=1,Padding="same",WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(256,WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(128,WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(64,WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(numAct)
        softmaxLayer];
    actorNetwork = dlnetwork(actorNetwork);

    % Create critic deep neural network.
    statePath = [
        featureInputLayer(prod(oinfo.Dimension),Name="NetObsInLayer")
        fullyConnectedLayer(128)
        reluLayer
        fullyConnectedLayer(200,Name="sPathOut")];
    actionPath = [
        featureInputLayer(prod(ainfo.Dimension),Name="NetActInLayer")
        fullyConnectedLayer(200,Name="aPathOut",BiasLearnRateFactor=0)];
    commonPath = [
        additionLayer(2,Name="add")
        reluLayer
        fullyConnectedLayer(1,Name="CriticOutput")];

    % Create layerGraph object and add layers.
    criticNetwork = layerGraph(statePath);
    criticNetwork = addLayers(criticNetwork,actionPath);
    criticNetwork = addLayers(criticNetwork,commonPath);

    % Connect paths and convert to dlnetwork object.
    criticNetwork = connectLayers(criticNetwork,"sPathOut","add/in1");
    criticNetwork = connectLayers(criticNetwork,"aPathOut","add/in2");
    criticNetwork = dlnetwork(criticNetwork);

    % Create actor and critic.
    actor(idx) = rlDiscreteCategoricalActor(actorNetwork,oinfo,ainfo); %#ok<*SAGROW>
    critic(idx) = rlQValueFunction(criticNetwork,oinfo,ainfo);
end
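
Note that the actor above is still a stochastic discrete one (rlDiscreteCategoricalActor ending in a softmaxLayer), which rlDDPGAgent will not accept; DDPG needs an rlContinuousDeterministicActor over a continuous action space. Also, every input layer's size must exactly match the Dimension of the corresponding specification, which is what the "model input sizes must match" error refers to. A minimal sketch of a DDPG-compatible actor, with illustrative layer sizes and assuming oinfo and ainfo are continuous rlNumericSpec objects whose observations form a flat vector:

% DDPG-compatible deterministic actor (sketch; sizes are illustrative):
actorNetwork = [
    featureInputLayer(prod(oinfo.Dimension))    % must match oinfo.Dimension
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(prod(ainfo.Dimension))  % one output per action element
    tanhLayer];                                 % bounded deterministic output
actorNetwork = dlnetwork(actorNetwork);
actor(idx) = rlContinuousDeterministicActor(actorNetwork,oinfo,ainfo);

If the action range is not [-1,1], a scalingLayer from Reinforcement Learning Toolbox can map the tanh output onto the limits in ainfo.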



Release

R2022a
