rlSimulinkEnv
Create environment object from a Simulink model that already contains the agent and environment
Syntax
env = rlSimulinkEnv(mdl,agentBlocks)
env = rlSimulinkEnv(mdl,agentBlocks,observationInfo,actionInfo)
env = rlSimulinkEnv(___,'UseFastRestart',fastRestartToggle)
Description
The rlSimulinkEnv function creates an environment object from a Simulink® model that already includes your agent block. The environment object acts as an interface so that when you call sim or train, these functions in turn call the (compiled) Simulink model to generate experiences for the agents.

To create an environment object from a Simulink model that does not include an agent block, use createIntegratedEnv instead. For more information on reinforcement learning environments, see Create Custom Simulink Environments.
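As a minimal sketch of this workflow (all names are hypothetical: a model 'myModel' containing an agent block at 'myModel/RL Agent', and an agent object agent already in the MATLAB workspace):

env = rlSimulinkEnv('myModel','myModel/RL Agent');   % hypothetical model and block names

% Training and simulation both compile and run the Simulink model to generate experiences.
trainOpts = rlTrainingOptions('MaxEpisodes',100);
trainResults = train(agent,env,trainOpts);

simOpts = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOpts);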
env = rlSimulinkEnv(mdl,agentBlocks) creates the reinforcement learning environment object env for the Simulink model mdl. agentBlocks contains the paths to one or more reinforcement learning agent blocks in mdl. If you use this syntax, each agent block must reference an agent object already in the MATLAB® workspace.
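For example, a minimal sketch for a model with two agent blocks, each of which already references an agent object in the workspace (the model and block names are hypothetical):

% Paths to the agent blocks; the agents are grouped in this order.
agentBlocks = {'myModel/Agent A','myModel/Agent B'};   % hypothetical block paths
env = rlSimulinkEnv('myModel',agentBlocks);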
env = rlSimulinkEnv(mdl,agentBlocks,observationInfo,actionInfo) creates the reinforcement learning environment object env for the model mdl. The two cell arrays observationInfo and actionInfo must contain the observation and action specifications for each agent block in mdl, in the same order as they appear in agentBlocks.
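For example, a minimal sketch for a single agent block with hypothetical specifications (a 3-element continuous observation and a scalar action from the set {-1, 0, 1}); adjust the dimensions and values to match the ports of your own model:

obsInfo = rlNumericSpec([3 1]);      % hypothetical observation dimensions
obsInfo.Name = 'observations';
actInfo = rlFiniteSetSpec([-1 0 1]); % hypothetical discrete action set
actInfo.Name = 'action';
env = rlSimulinkEnv('myModel','myModel/RL Agent',obsInfo,actInfo);
% With several agent blocks, pass cell arrays ordered like agentBlocks, for example:
% env = rlSimulinkEnv('myModel',agentBlocks,{obsInfo1,obsInfo2},{actInfo1,actInfo2});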
env = rlSimulinkEnv(___,'UseFastRestart',fastRestartToggle) creates a reinforcement learning environment object env and additionally enables fast restart. Use this syntax after any of the input arguments in the previous syntaxes.
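For example, reusing the hypothetical model and specifications from the previous sketches, this call sets the toggle to 'off' to explicitly turn fast restart off (fastRestartToggle accepts 'on' or 'off'), which can be useful when a model does not support fast restart:

env = rlSimulinkEnv('myModel','myModel/RL Agent',obsInfo,actInfo,'UseFastRestart','off');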
Examples
Input Arguments
Output Arguments
Version History
Introduced in R2019a
See Also
Objects
SimulinkEnvWithAgent | rlNumericSpec | rlFiniteSetSpec | rlFunctionEnv | rlMultiAgentFunctionEnv | rlTurnBasedFunctionEnv
Topics
- Compare DDPG Agent to LQR Controller
- Train DDPG Agent to Swing Up and Balance Pendulum
- Train DDPG Agent to Swing Up and Balance Cart-Pole System
- Train DDPG Agent to Swing Up and Balance Pendulum with Bus Signal
- Train DDPG Agent to Swing Up and Balance Pendulum with Image Observation
- Train DDPG Agent for Adaptive Cruise Control
- Create Custom Simulink Environments
- How Fast Restart Improves Iterative Simulations (Simulink)