Adding further detail:
I also tried generating an agent programmatically in the .m file using the following commands:
% Number of neurons per hidden layer
initOpts = rlAgentInitializationOptions(NumHiddenUnit=256);
% Actor-critic agent, since it supports a continuous policy
agent = rlACAgent(ObservationInfo,ActionInfo,initOpts)
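ObservationInfo and ActionInfo are the specs returned by my environment. A minimal illustrative stand-in that exercises the same code path (continuous action space, which routes default agent creation through rlContinuousGaussianActor) would be something like:
% Illustrative specs only -- my real dimensions and limits differ
ObservationInfo = rlNumericSpec([4 1]);
ActionInfo = rlNumericSpec([1 1],LowerLimit=-1,UpperLimit=1);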
The rlACAgent call fails with the following error:
Error using dlnetwork/connectLayers (line 250)
Dot indexing is not supported for variables of this type.
Error in rl.function.rlContinuousGaussianActor>localCreateMeanStdOutput (line 335)
net = connectLayers(net,bridgeOutputName,'fc_mean');
Error in rl.function.rlContinuousGaussianActor.createDefault (line 263)
[actorNet,meanName,stdName] = localCreateMeanStdOutput(inputGraph,bridgeOutputName,actionInfo,initOptions);
Error in rl.internal.util.parseOnPolicyAgentInitializationInputs (line 12)
actor = rl.function.rlContinuousGaussianActor.createDefault(observationInfo, actionInfo, useSquashInNetwork, initOptions);
Error in rlACAgent (line 87)
[Actor, Critic, AgentOptions] = rl.internal.util.parseOnPolicyAgentInitializationInputs(AgentType,varargin{:});
Error in trainRLAgent (line 59)
agent = rlACAgent(ObservationInfo,ActionInfo,initOpts)
This is why I tried using the "Reinforcement Learning Designer" app.
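For completeness, the workaround I am considering in the .m file is to build the actor and critic by hand instead of relying on default initialization, which should sidestep the failing connectLayers call. This is a rough, untested sketch; the layer and variable names are my own choices, and it essentially does by hand what rlContinuousGaussianActor.createDefault attempts before it errors:
% Common observation path ending in a "bridge" layer
obsPath = [
    featureInputLayer(prod(ObservationInfo.Dimension),Name="obsIn")
    fullyConnectedLayer(256)
    reluLayer
    fullyConnectedLayer(256,Name="bridge")
    ];
% Separate heads for the action mean and standard deviation
meanPath = fullyConnectedLayer(prod(ActionInfo.Dimension),Name="fc_mean");
stdPath = [
    fullyConnectedLayer(prod(ActionInfo.Dimension),Name="fc_std")
    softplusLayer(Name="stdOut")   % keeps the standard deviation positive
    ];
actorNet = layerGraph(obsPath);
actorNet = addLayers(actorNet,meanPath);
actorNet = addLayers(actorNet,stdPath);
actorNet = connectLayers(actorNet,"bridge","fc_mean");
actorNet = connectLayers(actorNet,"bridge","fc_std");
actor = rlContinuousGaussianActor(dlnetwork(actorNet), ...
    ObservationInfo,ActionInfo, ...
    ActionMeanOutputNames="fc_mean", ...
    ActionStandardDeviationOutputNames="stdOut", ...
    ObservationInputNames="obsIn");
% A simple value-function critic to pair with the actor
criticNet = dlnetwork([
    featureInputLayer(prod(ObservationInfo.Dimension))
    fullyConnectedLayer(256)
    reluLayer
    fullyConnectedLayer(1)
    ]);
critic = rlValueFunction(criticNet,ObservationInfo);
agent = rlACAgent(actor,critic);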