Error: Undefined function 'getActionInfo' for input arguments of type 'struct'.

Hi,
This worked previously. I now get an error when I try to test an RL agent. Is this an issue with the expected data type?
I have provided the error message as well as the properties of the agent's action and observation specification objects.
Error message -> Undefined function 'getActionInfo' for input arguments of type 'struct'.
Action and observation specification objects:
>> DDPG_agent.agent.getActionInfo
rlNumericSpec with properties:
     LowerLimit: 0
     UpperLimit: 100
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [1 1]
       DataType: "double"
>> DDPG_agent.agent.getObservationInfo
rlNumericSpec with properties:
     LowerLimit: [3×1 double]
     UpperLimit: [3×1 double]
           Name: "observations"
    Description: "controlled flow, error, integral of error"
      Dimension: [3 1]
       DataType: "double"
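Note that DDPG_agent above is the struct returned by load; the .agent field holds the actual agent object, which is why these calls succeed. A minimal sketch of the distinction, assuming the agent was saved to the MAT-file under the variable name agent:
DDPG_agent = load("DDPG_TEST.mat", "agent"); % load returns a 1x1 struct with the field 'agent'
getActionInfo(DDPG_agent.agent)              % works: the argument is an agent object
getActionInfo(DDPG_agent)                    % fails: Undefined function 'getActionInfo' for input arguments of type 'struct'.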
2 Comments
Rajesh Siraskar 2021-9-13 (edited 2021-9-13)
Hi -- can someone please help?
Dear Emmanouil (Tzorakoleftherakis) - you've helped me in the past - do you know if I am doing something wrong?
Rajesh
Emmanouil Tzorakoleftherakis
Where is the error in the code above? I don't see an error when you call getActionInfo. Can you attach a MAT-file with the required variables to reproduce this? Also, which release are you using?


2 Answers

Rajesh Siraskar 2021-9-22
Hello Emmanouil,
Thank you for your help. The error is not in the code, as that runs fine; it is the simulation run that generates the errors. I have added the block diagram and the original simulation output below.
=== Simulation (Elapsed: 2 sec) ===
Error:Error in 'sm_DDPG_PPO_Experimental_Setup/DDPG_Sub_System/DDPG_Agent': Failed to evaluate mask initialization commands.
Caused by:
MATLAB System block 'sm_DDPG_PPO_Experimental_Setup/DDPG_Sub_System/DDPG_Agent/AgentWrapper' error occurred when invoking 'getSampleTime' method of 'rl.simulink.blocks.AgentWrapper'. The error was thrown from '
'C:\Program Files\MATLAB\R2020b\toolbox\rl\rl\simulink\+rl\+simulink\+blocks\AgentWrapper.m' at line 152
'C:\Program Files\MATLAB\R2020b\toolbox\rl\rl\simulink\+rl\+simulink\+blocks\AgentWrapper.m' at line 202
'C:\Program Files\MATLAB\R2020b\toolbox\rl\rl\simulink\+rl\+simulink\+blocks\AgentWrapper.m' at line 257
'C:\Program Files\MATLAB\R2020b\toolbox\rl\rl\simulink\+rl\+simulink\+blocks\agentmaskinit.m' at line 11'.
Undefined function 'getActionInfo' for input arguments of type 'struct'.
Error:Error in 'sm_DDPG_PPO_Experimental_Setup/PPO_Sub_System/PPO_Agent': Failed to evaluate mask initialization commands.
Caused by:
MATLAB System block 'sm_DDPG_PPO_Experimental_Setup/PPO_Sub_System/PPO_Agent/AgentWrapper' error occurred when invoking 'getSampleTime' method of 'rl.simulink.blocks.AgentWrapper'. The error was thrown from '
'C:\Program Files\MATLAB\R2020b\toolbox\rl\rl\simulink\+rl\+simulink\+blocks\AgentWrapper.m' at line 152
'C:\Program Files\MATLAB\R2020b\toolbox\rl\rl\simulink\+rl\+simulink\+blocks\AgentWrapper.m' at line 202
'C:\Program Files\MATLAB\R2020b\toolbox\rl\rl\simulink\+rl\+simulink\+blocks\AgentWrapper.m' at line 257
'C:\Program Files\MATLAB\R2020b\toolbox\rl\rl\simulink\+rl\+simulink\+blocks\agentmaskinit.m' at line 11'.
Undefined function 'getActionInfo' for input arguments of type 'struct'.
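The stack trace indicates that the block mask calls getActionInfo on whatever the RL Agent block's 'Agent object' field evaluates to; the same message is reproducible at the command line when that value is a struct rather than an agent. A minimal sketch, assuming a MAT-file that stores the agent under the variable name agent:
s = load("models/DDPG_TEST.mat"); % s is a 1x1 struct with the field 'agent'
getActionInfo(s)                  % reproduces: Undefined function 'getActionInfo' for input arguments of type 'struct'.
getActionInfo(s.agent)            % succeeds once the agent object itself is passed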
1 Comment
Emmanouil Tzorakoleftherakis
Thanks. It is hard to pinpoint the error without a reproduction model, but it seems like you are using a struct somewhere you are not supposed to. Can you double-check?



Rajesh Siraskar 2021-9-23
Hi Emmanouil - below is my full "simulation" code.
Basically, I have trained two models using PPO and DDPG and am trying to run them in parallel to compare their "trajectories".
[Just in case someone else is interested: my paper and Git code location.]
Thanks a lot Emmanouil - you are always very helpful.
%--------------------------------------------------------------------------
% Reinforcement Learning for Valve Control. V.5.4: 11-Mar. 11pm
% Author: Rajesh Siraskar
% e-mail: rajeshsiraskar@gmail.com; siraskar@coventry.ac.uk
% University: Coventry University, UK, MTech Automotive Engineering
%
% Code: Experiment and validate a trained RL controller. Compare
% against PID control.
% This code accompanies the paper titled "Reinforcement Learning for Control of Valves"
% https://arxiv.org/abs/2012.14668
% -------------------------------------------------------------------------
%
% To experiment with a trained RL controller/agent, launch the Simulink model `sm_Experimental_Setup.slx`, then ensure the
% variables are correctly set in the code file `code_Experimental_Setup.m` and execute the code.
% Variables to be set:
% 1. `MODELS_PATH`: Points to your base path for storing the models. Default 'models/'
% 2. `VALVE_SIMULATION_MODEL = sm_Experimental_Setup`: Points to Simulink model used for validation against PID and experimenting with different noise sources etc.
% 3. `PRE_TRAINED_MODEL_FILE = 'Grade_V.mat'`: Pre-trained model (RL controller) to be tested or validated. Example shows a model called `Grade_V.mat`
% 4. `TIME_DELAY`, `fS` (stiction) and `fD` (dynamic friction): Variables that represent the physical parameters. Set these to the values that you want
% test the RL controller against.
% Suggested Graded Learning stages:
% - GRADE_I: TIME_DELAY=0.1; fS = 8.4/10; fD = 3.5243/10
% - GRADE_II: TIME_DELAY=0.5; fS = 8.4/5; fD = 3.5243/5
% - GRADE_III: TIME_DELAY=1.5; fS = 8.4/2; fD = 3.5243/2
% - GRADE_IV: TIME_DELAY=1.5; fS = 8.4/1.5; fD = 3.5243/1.5
% - GRADE_V: TIME_DELAY=2.0, fS = 8.4/1.5; fD = 3.5243/1.5
% - GRADE_VI: TIME_DELAY=2.5, fS = 8.4/1.0; fD = 3.5243/1.0
%--------------------------------------------------------------------------
%clear all;
warning('off', 'all');
%% Set paths
MODELS_PATH = "models\";
VALVE_SIMULATION_MODEL = "sm_DDPG_PPO_Experimental_Setup"; % Simulink experimentation circuit
DDPG_AGENT = "/DDPG_Sub_System/DDPG_Agent";
PPO_AGENT = "/PPO_Sub_System/PPO_Agent";
%% GRADED LEARNING models
PRE_TRAINED_DDPG_MODEL_FILE = "DDPG_TEST.mat";
PRE_TRAINED_PPO_MODEL_FILE = "PPO_TEST.mat";
% Physical system parameters. Use iteratively: successively increase the
% difficulty of the training task and apply Graded Learning to train the agent
TIME_DELAY = 2.5; % Time delay for process controlled by valve
fS = 8.4000; % Valve static friction (stiction)
fD = 3.5243; % Valve dynamic friction
% Agent stage to be tested
DDPG_MODEL_FILE = strcat(MODELS_PATH, PRE_TRAINED_DDPG_MODEL_FILE);
PPO_MODEL_FILE = strcat(MODELS_PATH, PRE_TRAINED_PPO_MODEL_FILE);
% Time step. Tf/Ts gives Simulink's simulation time
Ts = 1.0; % Ts: Sample time (secs)
Tf = 200; % Tf: Simulation length (secs)
ACCEPTABLE_DELTA = 0.05;
% Load the pre-trained agents
sprintf('- Load DDPG model: %s', DDPG_MODEL_FILE)
sprintf('- Load PPO model: %s', PPO_MODEL_FILE)
DDPG_agent = load(DDPG_MODEL_FILE,"agent");
PPO_agent = load(PPO_MODEL_FILE,"agent");
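% Optional sanity check (sketch): confirm the loaded variables hold agent
% objects, not structs, before they reach the RL Agent blocks
disp(class(DDPG_agent.agent)); % expected: an agent class such as rlDDPGAgent, not 'struct'
disp(class(PPO_agent.agent));  % expected: an agent class such as rlPPOAgent, not 'struct'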
% ----------------------------------------------------------------
% Validate the learned agent against the model by simulation
% ----------------------------------------------------------------
% Define observation and action space
NUMBER_OBSERVATIONS = 3;
% Observation Vector
% (1) U(k)
% (2) Error signal
% (3) Error integral
obsInfo = rlNumericSpec([3 1],...
'LowerLimit',[-inf -inf 0]',...
'UpperLimit',[ inf inf inf]');
obsInfo.Name = "observations";
obsInfo.Description = "controlled flow, error, integral of error";
numObservations = obsInfo.Dimension(1);
actionInfo_DDPG = rlNumericSpec([1 1],'LowerLimit', 0,'UpperLimit', 100);
actionInfo_PPO = rlNumericSpec([2 1],'LowerLimit', 0,'UpperLimit', 100);
actionInfo_DDPG.Name = "flow";
actionInfo_PPO.Name = "flow";
% Initialise the environment with the serialised agent and run the test
sprintf ('\n\n ==== RL for control of valves V.5.1 ====================')
sprintf (' ---- Testing model: %s, %s', DDPG_MODEL_FILE, PPO_MODEL_FILE)
sprintf (' ---- Parameters: Time-Delay: %3.2f, fS: %3.2f, fD: %3.2f', TIME_DELAY, fS, fD)
ObservationInfo = [obsInfo, obsInfo];
ActionInfo = [actionInfo_DDPG, actionInfo_PPO];
arObservationInfo = num2cell(ObservationInfo, 1);
arActionInfo = num2cell(ActionInfo, 1);
% open_system(VALVE_SIMULATION_MODEL);
AgentBlocks = VALVE_SIMULATION_MODEL + [DDPG_AGENT, PPO_AGENT];
env = rlSimulinkEnv(VALVE_SIMULATION_MODEL, AgentBlocks, arObservationInfo, arActionInfo);
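% Note (sketch of the multi-agent setup): rlSimulinkEnv is given both RL Agent
% block paths, so each block's 'Agent object' dialog field must also reference
% the corresponding agent object (e.g. DDPG_agent.agent), not the loaded struct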
simOpts = rlSimulationOptions('MaxSteps', 2000);
expr = sim(env, [DDPG_agent.agent, PPO_agent.agent]);
% ------------------------------------------------------------------------
% Environment Reset function
% Randomize Reference_Signal (roughly between 20 and 100)
% Reset if the controlled flow drops below zero or exceeds 100
% ------------------------------------------------------------------------
function in = localResetFcn(in, RL_System)
block_Reference_Signal = strcat (RL_System, '/Reference_Signal');
Reference_Signal = 20+randi(80) + rand;
in = setBlockParameter(in, block_Reference_Signal, ...
'Value', num2str(Reference_Signal));
% Randomize initial condition of the flow (0 and 100)
block_Actual_Flow = strcat (RL_System, '/Plant/Process/FLOW');
Actual_Flow = 20+randi(80) + rand;
in = setBlockParameter(in, block_Actual_Flow, 'Bias', num2str(Actual_Flow));
end
1 Comment
Emmanouil Tzorakoleftherakis
A couple of suggestions:
1) Make sure DDPG_agent.agent and PPO_agent.agent are the actual agent objects and not structs
2) In the Simulink model, make sure to change the 'Agent object' field in the RL Agent block to be PPO_agent.agent or DDPG_agent.agent as needed (I suspect you may have forgotten to do this)
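A rough sketch of both checks (the MAT-file variable name agent matches the code above; the block dialog parameter name 'Agent' is an assumption - verify it with get_param(gcb,'DialogParameters') on your release):
DDPG_agent = load('models/DDPG_TEST.mat', 'agent');
disp(class(DDPG_agent.agent)) % should print rlDDPGAgent (or similar), not 'struct'
% Point the RL Agent block at the agent object itself:
set_param('sm_DDPG_PPO_Experimental_Setup/DDPG_Sub_System/DDPG_Agent', ...
    'Agent', 'DDPG_agent.agent'); % 'Agent' parameter name is an assumption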

