Reinforcement Learning Toolbox error: "Error encountered while creating actor representation: Observation names must match the names of the deep neural network's input layers."
Error encountered while creating actor representation:
Observation names must match the names of the deep neural network's input layers. Make sure all observation names appear in the neural network.
My code:
% Clear workspace, command window, and close all figures
clear all; clc; close all;
% Define State and Action Dimensions
stateDim = 5; % State dimension
actionDim = 3; % Action dimension
% Create Observation and Action Specifications
ObservationInfo = rlNumericSpec([stateDim 1]);
ObservationInfo.Name = "state";
ActionInfo = rlNumericSpec([actionDim 1], 'LowerLimit', [-1; -1; -1], 'UpperLimit', [1; 1; 1]);
ActionInfo.Name = "action";
% Display the properties to ensure consistency
disp(ObservationInfo);
disp(ActionInfo);
% Create the environment with the step and reset functions
try
    env = rlFunctionEnv(ObservationInfo, ActionInfo, @stepFunction, @resetFunction);
catch ME
    disp('Error setting up environment:');
    disp(ME.message);
    return;
end
% Create a minimal critic network
criticNetwork = [
    featureInputLayer(stateDim, 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(1, 'Name', 'output')];
% Create a minimal actor network
actorNetwork = [
    featureInputLayer(stateDim, 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(actionDim, 'Name', 'output')];
% Display layer names for verification
disp(['Critic Network Input Layer Name: ', criticNetwork(1).Name]);
disp(['Actor Network Input Layer Name: ', actorNetwork(1).Name]);
% Attempt to create the actor and critic representations
try
    critic = rlValueFunction(layerGraph(criticNetwork), ObservationInfo);
    actor = rlStochasticActorRepresentation(layerGraph(actorNetwork), ObservationInfo, ActionInfo);
catch ME
    disp('Error encountered while creating actor representation:');
    disp(ME.message);
    disp('Observation Info and Actor Network Input Layer Names:');
    disp(['ObservationInfo Name: ', ObservationInfo.Name]);
    disp(['Actor Network Input Layer Name: ', actorNetwork(1).Name]);
    return; % Stop execution if there's a mismatch error
end
% Create the PPO agent and specify agent options
agentOptions = rlPPOAgentOptions('ClipFactor', 0.2, 'EntropyLossWeight', 0.01, ...
    'SampleTime', 0.1, 'MiniBatchSize', 64, 'ExperienceHorizon', 128);
agent = rlPPOAgent(actor, critic, agentOptions);
% Specify training options and run training
trainOpts = rlTrainingOptions('MaxEpisodes', 1000, 'MaxStepsPerEpisode', 500, ...
    'Verbose', true, 'Plots', 'training-progress', 'StopTrainingCriteria', 'AverageReward', ...
    'StopTrainingValue', 500);
trainingStats = train(agent, env, trainOpts);
% Custom reset function to initialize the environment
function [initialObs, loggedSignals] = resetFunction()
    stateDim = 5;
    % Start each episode from a random standard-normal state
    initialObs = randn(stateDim, 1);
    loggedSignals.State = initialObs;
end
% Custom step function to define environment behavior
function [nextObs, reward, isDone, loggedSignals] = stepFunction(action, loggedSignals)
    state = loggedSignals.State;
    % Shift the first three state components by the scaled action
    nextObs = state + [0.1 * action; zeros(2, 1)];
    % Penalize the squared distance between the shifted components and the action
    reward = -sum((nextObs(1:3) - action).^2);
    % End the episode when any controlled component leaves [-10, 10]
    isDone = any(abs(nextObs(1:3)) > 10);
    loggedSignals.State = nextObs;
end
Answers (1)
Gayathri
2024-9-30
I understand that you are getting the following error when creating the actor representation.
“Observation names must match the names of the deep neural network's input layers. Make sure all observation names appear in the neural network.”
Please change the line of code as shown below to pass the “Observation” name as an argument; this explicitly maps the observation channel onto the network’s input layer.
actor = rlStochasticActorRepresentation(layerGraph(actorNetwork), ObservationInfo, ActionInfo,'Observation','state');
Also, for a continuous stochastic actor representation, the number of network outputs must be twice the number of actions, because the network returns a mean and a standard deviation for each action. Hence, please update the “actorNetwork” as shown below.
actorNetwork = [
    featureInputLayer(stateDim, 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(2*actionDim, 'Name', 'output')];
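Putting both changes together, here is a minimal sketch of the corrected actor construction (the “getAction” call at the end is an assumed sanity check, not part of the original code):

% Gaussian actor: the output layer emits a mean and a standard deviation per action
actorNetwork = [
    featureInputLayer(stateDim, 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(2*actionDim, 'Name', 'output')];

% Map the "state" observation channel onto the network's "state" input layer
actor = rlStochasticActorRepresentation(layerGraph(actorNetwork), ...
    ObservationInfo, ActionInfo, 'Observation', 'state');

% Assumed sanity check: draw one action from the stochastic policy
sampleAction = getAction(actor, {randn(stateDim, 1)});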
For more information about “rlStochasticActorRepresentation”, please refer to the examples on its documentation page.
Hope you find this information helpful.