rlMultiAgentFunctionEnv
Description
Use rlMultiAgentFunctionEnv to create a custom multiagent reinforcement learning environment in which all agents execute in the same step. To create your custom environment, you supply the observation and action specifications as well as your own reset and step MATLAB® functions. To verify the operation of your environment, rlMultiAgentFunctionEnv automatically calls validateEnvironment after creating the environment.
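As a minimal sketch of this workflow (the two-agent setup, the specification sizes, the discrete action values, and the function names myStepFcn and myResetFcn are illustrative assumptions, not taken from this page), the environment could be created as follows:

% Observation and action specifications, one cell element per agent
% (a hypothetical two-agent setup with continuous observations and
% discrete actions).
obsInfo = {rlNumericSpec([4 1]), rlNumericSpec([4 1])};
actInfo = {rlFiniteSetSpec([-1 0 1]), rlFiniteSetSpec([-1 0 1])};

% Create the environment from handles to your own step and reset functions.
% rlMultiAgentFunctionEnv calls validateEnvironment automatically after
% construction, so errors in myStepFcn or myResetFcn surface here.
env = rlMultiAgentFunctionEnv(obsInfo,actInfo,@myStepFcn,@myResetFcn);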
Creation
Description
env = rlMultiAgentFunctionEnv(observationInfo,actionInfo,stepFcn,resetFcn) creates a multiagent environment in which all agents execute in the same step. The arguments are the observation and action specifications and the custom step and reset functions. The cell arrays observationInfo and actionInfo contain the observation and action specifications, respectively, for each agent. The stepFcn and resetFcn arguments are the names of your step and reset MATLAB functions, respectively, and they are used to set the StepFcn and ResetFcn properties of env.
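The sketch below illustrates the general shape of the custom functions, assuming the multiagent step and reset functions follow the same pattern as other function-handle environments but operate on cell arrays with one element per agent; the dimensions, dynamics, rewards, and the info variable used here are placeholder assumptions for a hypothetical two-agent environment.

function [initialObs,info] = myResetFcn()
% Return the initial observation for each agent as a cell array.
initialObs = {zeros(4,1), zeros(4,1)};
info = [];                           % data passed to the first call of the step function
end

function [nextObs,reward,isDone,info] = myStepFcn(action,info)
% action is a cell array with one element per agent; all agents act in the same step.
nextObs = {rand(4,1), rand(4,1)};    % placeholder next observations, one cell per agent
reward  = [0 0];                     % one reward element per agent
isDone  = false;                     % scalar flag ending the episode for all agents
end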
Input Arguments
Properties
Object Functions
getActionInfo | Obtain action data specifications from reinforcement learning environment, agent, or experience buffer |
getObservationInfo | Obtain observation data specifications from reinforcement learning environment, agent, or experience buffer |
train | Train reinforcement learning agents within a specified environment |
sim | Simulate trained reinforcement learning agents within a specified environment |
validateEnvironment | Validate custom reinforcement learning environment |
Examples
Version History
Introduced in R2023b
See Also
Functions
rlPredefinedEnv | rlCreateEnvTemplate | validateEnvironment | rlSimulinkEnv | getObservationInfo | getActionInfo