
rlSimulinkEnv

Create environment object from a Simulink model already containing agent and environment

Description

The rlSimulinkEnv function creates an environment object from a Simulink® model that already includes your agent block. The environment object acts as an interface so that when you call sim or train, these functions in turn call the (compiled) Simulink model to generate experiences for the agents.

To create an environment object from a Simulink model that does not include an agent block, use createIntegratedEnv instead. For more information on reinforcement learning environments, see Create Custom Simulink Environments.

env = rlSimulinkEnv(mdl,agentBlocks) creates the reinforcement learning environment object env for the Simulink model mdl. agentBlocks contains the paths to one or more reinforcement learning agent blocks in mdl. If you use this syntax, each agent block must reference an agent object already in the MATLAB® workspace.


env = rlSimulinkEnv(mdl,agentBlocks,observationInfo,actionInfo) creates the reinforcement learning environment object env for the model mdl. observationInfo and actionInfo must contain the observation and action specifications for each agent block in mdl, in the same order in which the agent blocks appear in agentBlocks. If mdl contains multiple agent blocks, specify these arguments as cell arrays.
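For instance, the following is a minimal sketch of this syntax for a hypothetical two-agent model. The model name myMultiAgentModel, the block names Agent A and Agent B, and the specification dimensions are assumptions for illustration only.

obsInfoA = rlNumericSpec([4 1]);      % observation specification for the first agent block
actInfoA = rlNumericSpec([1 1]);      % action specification for the first agent block
obsInfoB = rlNumericSpec([6 1]);      % observation specification for the second agent block
actInfoB = rlFiniteSetSpec([-1 0 1]); % action specification for the second agent block

agentBlks = ["myMultiAgentModel/Agent A","myMultiAgentModel/Agent B"];
env = rlSimulinkEnv("myMultiAgentModel",agentBlks, ...
    {obsInfoA,obsInfoB},{actInfoA,actInfoB});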


env = rlSimulinkEnv(___,'UseFastRestart',fastRestartToggle) creates a reinforcement learning environment object env and additionally enables or disables fast restart. Use this name-value argument after any of the input argument combinations in the previous syntaxes.
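For example, the following call (using the water tank model from the first example below) disables fast restart, which you might do if the model structure changes between simulations.

env = rlSimulinkEnv("rlwatertank","rlwatertank/RL Agent", ...
    'UseFastRestart','off');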

Examples


Create a Simulink environment using the trained agent and corresponding Simulink model from the Control Water Level in a Tank Using a DDPG Agent example.

Load the agent in the MATLAB® workspace.

load rlWaterTankDDPGAgent

Create an environment for the rlwatertank model, which contains an RL Agent block. Since the agent used by the block is already in the workspace, you do not need to pass the observation and action specifications to create the environment.

env = rlSimulinkEnv("rlwatertank","rlwatertank/RL Agent")
env = 
SimulinkEnvWithAgent with properties:

           Model : rlwatertank
      AgentBlock : rlwatertank/RL Agent
        ResetFcn : []
  UseFastRestart : on

Validate the environment by performing a short simulation for two sample times.

validateEnvironment(env)

You can now train and simulate the agent within the environment by using train and sim, respectively.
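For example, the following sketch simulates the agent for one episode. It assumes that the variable loaded from rlWaterTankDDPGAgent is named agent; the step limit of 200 is an arbitrary illustrative value.

simOpts = rlSimulationOptions("MaxSteps",200); % limit the episode length
experience = sim(env,agent,simOpts);           % run one simulation episode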

For this example, consider the rlSimplePendulumModel Simulink® model. The model is a simple frictionless pendulum that initially hangs in a downward position.

Open the model.

mdl = "rlSimplePendulumModel";
open_system(mdl)

Create rlNumericSpec and rlFiniteSetSpec objects for the observation and action specifications, respectively.

The observation is a vector containing three signals: the sine, cosine, and time derivative of the angle.

obsInfo = rlNumericSpec([3 1]) 
obsInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: [0x0 string]
    Description: [0x0 string]
      Dimension: [3 1]
       DataType: "double"

The action is a scalar representing the applied torque, which can take one of three possible values: -2 Nm, 0 Nm, or 2 Nm.

actInfo = rlFiniteSetSpec([-2 0 2])
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [3x1 double]
           Name: [0x0 string]
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

You can use dot notation to assign property values for the rlNumericSpec and rlFiniteSetSpec objects.

obsInfo.Name = "observations";
actInfo.Name = "torque";

Assign the agent block path information, and create the reinforcement learning environment for the Simulink model using the information extracted in the previous steps.

agentBlk = mdl + "/RL Agent";
env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo)
env = 
SimulinkEnvWithAgent with properties:

           Model : rlSimplePendulumModel
      AgentBlock : rlSimplePendulumModel/RL Agent
        ResetFcn : []
  UseFastRestart : on

You can also specify a reset function using dot notation. For this example, randomly initialize theta0 in the model workspace.

env.ResetFcn = @(in) setVariable(in,"theta0",randn,"Workspace",mdl)
env = 
SimulinkEnvWithAgent with properties:

           Model : rlSimplePendulumModel
      AgentBlock : rlSimplePendulumModel/RL Agent
        ResetFcn : @(in)setVariable(in,"theta0",randn,"Workspace",mdl)
  UseFastRestart : on

Create an environment for the Simulink model from the example Train Multiple Agents to Perform Collaborative Task.

Load the file containing the agents. For this example, load agents that have already been trained using decentralized learning.

load decentralizedAgents.mat

Create an environment for the rlCollaborativeTask model, which has two agent blocks. Since the agents used by the two blocks (agentA and agentB) are already in the workspace, you do not need to pass their observation and action specifications to create the environment.

env = rlSimulinkEnv( ...
    "rlCollaborativeTask", ...
    ["rlCollaborativeTask/Agent A","rlCollaborativeTask/Agent B"])
env = 
SimulinkEnvWithAgent with properties:

           Model : rlCollaborativeTask
      AgentBlock : [
                     rlCollaborativeTask/Agent A
                     rlCollaborativeTask/Agent B
                   ]
        ResetFcn : []
  UseFastRestart : on

It is good practice to specify a reset function for the environment such that agents start from random initial positions at the beginning of each episode. For an example, see the resetRobots function defined in Train Multiple Agents to Perform Collaborative Task.
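As a minimal sketch only (the variable names xA0 and xB0 are hypothetical, and the resetRobots function in the referenced example performs a more complete reset), a reset function for this environment could look like the following.

% Randomize hypothetical initial-position variables at the start of each episode.
env.ResetFcn = @(in) setVariable( ...
    setVariable(in,"xA0",rand(2,1),"Workspace","rlCollaborativeTask"), ...
    "xB0",rand(2,1),"Workspace","rlCollaborativeTask");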

You can now simulate or train the agents within the environment using sim or train, respectively.
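For example, the following sketch simulates both agents for one episode, assuming agentA and agentB are the agent variables loaded earlier and using an arbitrary step limit.

simOpts = rlSimulationOptions("MaxSteps",300);  % hypothetical step limit
experiences = sim(env,[agentA agentB],simOpts); % simulate both agents together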

Input Arguments


mdl — Simulink model name
string | character vector

Simulink model name, specified as a string or character vector. The model must contain at least one RL Agent block.

agentBlocks — Agent block paths
string | character vector | string array

Agent block paths, specified as a string, character vector, or string array.

If mdl contains a single RL Agent block, specify agentBlocks as a string or character vector containing the block path.

If mdl contains multiple RL Agent blocks, specify agentBlocks as a string array, where each element contains the path of one agent block.

mdl can contain RL Agent blocks whose path is not included in agentBlocks. Such agent blocks behave as part of the environment, selecting actions based on their current policies. When you call sim or train, the experiences of these agents are not returned and their policies are not updated.
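For instance, with the rlCollaborativeTask model used in the examples above (and assuming both agent variables are in the workspace), passing only the path of Agent A makes Agent B behave as part of the environment:

env = rlSimulinkEnv("rlCollaborativeTask","rlCollaborativeTask/Agent A");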

Multi-agent simulation is not supported for MATLAB environments.

The agent blocks can be inside a model reference. For more information on configuring an agent block for reinforcement learning, see RL Agent.

observationInfo — Observation information
specification object | array of specification objects | cell array

Observation information, specified as a specification object, an array of specification objects, or a cell array.

If mdl contains a single agent block, specify observationInfo as an rlNumericSpec object, an rlFiniteSetSpec object, or an array containing a mix of such objects.

If mdl contains multiple agent blocks, specify observationInfo as a cell array, where each cell contains a specification object or array of specification objects for the corresponding block in agentBlocks.

For more information, see getObservationInfo.

actionInfo — Action information
specification object | cell array

Action information, specified as a specification object or a cell array.

If mdl contains a single agent block, specify actionInfo as an rlNumericSpec or rlFiniteSetSpec object.

If mdl contains multiple agent blocks, specify actionInfo as a cell array, where each cell contains a specification object for the corresponding block in agentBlocks.

For more information, see getActionInfo.

fastRestartToggle — Option to toggle fast restart
'on' (default) | 'off'

Option to toggle fast restart, specified as either 'on' or 'off'. Fast restart allows you to perform iterative simulations without compiling a model or terminating the simulation each time.

For more information on fast restart, see How Fast Restart Improves Iterative Simulations (Simulink).

Output Arguments


env — Reinforcement learning environment
SimulinkEnvWithAgent object

Reinforcement learning environment, returned as a SimulinkEnvWithAgent object.

Note

Before training or simulating an agent within a Simulink environment, set the SampleTime property of your agent object appropriately to make sure that the RL Agent block runs at the intended sample time.
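For example, you can set the agent sample time through its options object. The value 0.05 is a hypothetical value; match it to the sample time your model expects.

agent.AgentOptions.SampleTime = 0.05; % sample time in seconds (hypothetical value)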

For more information on reinforcement learning environments, see Create Custom Simulink Environments.

Version History

Introduced in R2019a