createIntegratedEnv
Create environment object from a Simulink environment model that does not contain an agent block
Syntax

env = createIntegratedEnv(refModel,newModel)
[env,agentBlock,obsInfo,actInfo] = createIntegratedEnv(___)
[___] = createIntegratedEnv(___,Name=Value)
Description
Given a Simulink® environment model that does not include your agent block, the createIntegratedEnv function generates a new closed-loop Simulink model that contains an agent block and references your original environment model from its environment block. The function also returns an environment object that you can use for training and simulation. The environment object acts as an interface so that when you call sim or train, these functions in turn call the created (and compiled) Simulink model to generate experiences for the agents.

To create an environment object from a Simulink model that already includes an agent block, use rlSimulinkEnv instead. For more information on Simulink reinforcement learning environments, see Create Custom Simulink Environments.
env = createIntegratedEnv(refModel,newModel) creates a Simulink model with the name specified by newModel and returns a reinforcement learning environment object, env, for this model. The new model contains an RL Agent block and references refModel within its Environment block. For more information on model referencing, see Model Reference Basics (Simulink).
[env,agentBlock,obsInfo,actInfo] = createIntegratedEnv(___) returns the block path to the RL Agent block in the new model and the observation and action data specifications for the reference model, obsInfo and actInfo, respectively.
[___] = createIntegratedEnv(___,Name=Value) creates a model and environment interface using port, observation, and action sets specified using one or more Name=Value arguments.
Examples
Create Environment from Simulink Model
This example shows how to use createIntegratedEnv to create an environment object starting from a Simulink model that implements the system with which the agent will interact, and that does not have an agent block. Such a system is often referred to as the plant, open-loop system, or reference system, while the whole (integrated) system that includes the agent is often referred to as the closed-loop system.
For this example, use the flying robot model described in Train DDPG Agent to Control Sliding Robot as the reference (open-loop) system.
Open the flying robot model.
open_system("rlFlyingRobotEnv")
Initialize the state variables and sample time.
% initial model state variables
theta0 = 0;
x0 = -15;
y0 = 0;

% sample time
Ts = 0.4;
Create the Simulink model myIntegratedEnv containing the flying robot model connected in a closed loop to the agent block. The function also returns the reinforcement learning environment object env to be used for training.
env = createIntegratedEnv( ...
    "rlFlyingRobotEnv", ...
    "myIntegratedEnv")
env = 
SimulinkEnvWithAgent with properties:
             Model : myIntegratedEnv
        AgentBlock : myIntegratedEnv/RL Agent
          ResetFcn : []
    UseFastRestart : on
The function can also return the block path to the RL Agent block in the new integrated model, as well as the observation and action specifications for the reference model.
[~,agentBlk,observationInfo,actionInfo] = ...
    createIntegratedEnv( ...
    "rlFlyingRobotEnv","myIntegratedEnv")
agentBlk = "myIntegratedEnv/RL Agent"
observationInfo = 
rlNumericSpec with properties:
     LowerLimit: -Inf
     UpperLimit: Inf
           Name: "observation"
    Description: [0x0 string]
      Dimension: [7 1]
       DataType: "double"
actionInfo = 
rlNumericSpec with properties:
     LowerLimit: -Inf
     UpperLimit: Inf
           Name: "action"
    Description: [0x0 string]
      Dimension: [2 1]
       DataType: "double"
Returning the block path and specifications is useful in cases in which you need to modify descriptions, limits, or names in observationInfo and actionInfo. After modifying the specifications, you can then create an environment from the integrated model myIntegratedEnv using the rlSimulinkEnv function.
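For instance, the following minimal sketch tightens the action limits and then rebuilds the environment from the generated integrated model. The limit values are illustrative assumptions, not part of the flying robot example.

% illustrative limits, not taken from the flying robot model
actionInfo.LowerLimit = -1;
actionInfo.UpperLimit = 1;

% recreate the environment from the already-generated integrated model
env = rlSimulinkEnv("myIntegratedEnv",agentBlk,observationInfo,actionInfo);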
Create Integrated Environment with Specified Port Names
Open the open-loop water tank model.
open_system("rlWatertankOpenloop")
Set the sample time of the discrete integrator block used to generate the observation, so the simulation can run.
Ts = 1;
Call createIntegratedEnv using name-value pairs to specify port names. The first argument of createIntegratedEnv is the name of the reference Simulink model that contains the system with which the agent must interact. Such a system is often referred to as the plant, or open-loop system.
For this example, the reference system is the model of a water tank. The input port is called u (instead of action), and the first and third output ports are called y and stop (instead of observation and isdone). Specify the port names using name-value pairs.
env = createIntegratedEnv( ...
    "rlWatertankOpenloop", ...
    "IntegratedWatertank", ...
    ActionPortName="u", ...
    ObservationPortName="y", ...
    IsDonePortName="stop")
env = 
SimulinkEnvWithAgent with properties:
             Model : IntegratedWatertank
        AgentBlock : IntegratedWatertank/RL Agent
          ResetFcn : []
    UseFastRestart : on
The new model IntegratedWatertank contains the reference model connected in a closed loop with the agent block. The function also returns the reinforcement learning environment object to be used for training.
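As a next step, you can create an agent for the returned environment and run a short simulation. The following is a minimal sketch; the choice of a default DDPG agent and the simulation option values are assumptions, not part of this example.

% get the specifications from the environment
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% create a default continuous-action agent (assumption: DDPG)
agent = rlDDPGAgent(obsInfo,actInfo);
agent.AgentOptions.SampleTime = Ts;   % match the model sample time

% run a short simulation of the integrated model
simOpts = rlSimulationOptions(MaxSteps=100);
experience = sim(env,agent,simOpts);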
Input Arguments
refModel
— Name of environment reference model
string | character vector
Name of environment reference model, specified as a string or character vector. This is the Simulink model implementing the environment the agent interacts with. Such a system is often referred to as the plant or open-loop system, while the whole (integrated) system that includes both agent and environment is often referred to as the closed-loop system. The generated model newModel is a closed-loop system that contains an RL Agent block and references refModel within its Environment block. For more information on model referencing, see Model Reference Basics (Simulink).
Note
The reward signal at time t must be the one corresponding to the transition between the observation output at time t-1 and the observation output at time t.
If your observation contains multiple channels, group the signals carried by the channels into a single observation bus. For more information about bus signals, see Simulink Bus Capabilities (Simulink).
Note
To avoid (potentially unsolvable) algebraic loops, you must avoid any direct feedthrough (that is, any direct dependency in the same time step) from the action to the observation output signal. This is because in the Simulink implementation of the agent block, the action at a given time step depends on the observation at the same time step. In other words, the agent block has a direct feedthrough from its observation input to its action output.
Additionally, for models created using createIntegratedEnv, the environment block is a referenced subsystem. Referenced subsystems are normally treated as direct feedthrough blocks unless the Minimize algebraic loop occurrences parameter in the referenced subsystem is enabled. When the referenced model has no direct feedthrough from an input port that participates in an artificial algebraic loop to any of its output ports, enabling this parameter can remove artificial algebraic loops involving the model.
In general, adding a Delay (Simulink) or Memory (Simulink) block to the action signal between the agent and environment blocks removes the algebraic loop (alternatively you can add delay or memory blocks to all the environment output signals). For more information on algebraic loops and how to remove some of them, see Algebraic Loop Concepts (Simulink) and Remove Algebraic Loops (Simulink).
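For example, the following hedged sketch inserts a Delay block on the action signal of a generated integrated model. It assumes the action line runs from output port 1 of the RL Agent block to input port 1 of the Environment block; the model name is illustrative.

mdl = "myIntegratedEnv";                                    % illustrative model name
delete_line(mdl,"RL Agent/1","Environment/1");              % remove the direct action connection
add_block("simulink/Discrete/Delay",mdl + "/ActionDelay");  % insert a Delay block
add_line(mdl,"RL Agent/1","ActionDelay/1");                 % agent output -> delay
add_line(mdl,"ActionDelay/1","Environment/1");              % delay -> environment input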
newModel
— Name of the generated model
string | character vector
Name of the generated model, specified as a string or character vector. createIntegratedEnv creates a Simulink model with this name, but does not save the model. The created model newModel is a closed-loop system that contains an RL Agent block and references refModel within its Environment block. For more information on model referencing, see Model Reference Basics (Simulink).
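Because the generated model is not saved automatically, you can save it yourself if you want to keep it. A minimal sketch (the model names are illustrative):

env = createIntegratedEnv("myPlantModel","myIntegratedModel");  % illustrative names
save_system("myIntegratedModel")                                % save the generated model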
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: IsDonePortName="stopSim" sets the stopSim port of the reference model as the source of the isdone signal.
ObservationPortName
— Name of the observation output port in the environment reference model
"observation" (default) | string | character vector

Name of the observation output port in the environment reference model, specified as a string or character vector. Specify ObservationPortName when the name of the observation output port of the reference model is not "observation".
Example: ObservationPortName="x"
ActionPortName
— Name of the action input port in the environment reference model
"action" (default) | string | character vector

Name of the action input port in the environment reference model, specified as a string or character vector. Specify ActionPortName when the name of the action input port of the reference model is not "action".
Example: ActionPortName="u"
RewardPortName
— Name of the reward output port in the environment reference model
"reward" (default) | string | character vector

Name of the reward output port in the environment reference model, specified as a string or character vector. Specify RewardPortName when the name of the reward output port of the reference model is not "reward".
Example: RewardPortName="r"
IsDonePortName
— Name of the is-done output port in the environment reference model
"isdone" (default) | string | character vector

Name of the is-done output port in the environment reference model, specified as a string or character vector. Specify IsDonePortName when the name of the is-done flag output port of the reference model is not "isdone".
Example: IsDonePortName="done"
ObservationBusElementNames
— Names of observation bus leaf elements
string array
Names of observation bus leaf elements for which to create specifications, specified as a string array. To create observation specifications for a subset of the elements in a Simulink bus object, specify ObservationBusElementNames. If you do not specify ObservationBusElementNames, a data specification is created for each leaf element in the bus.

ObservationBusElementNames is applicable only when the observation output port is a bus signal.
Example: ObservationBusElementNames=["sin" "cos"] creates specifications for the observation bus elements with the names "sin" and "cos".
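For instance, a hedged sketch of a full call using this argument; the reference model name "myBusPlant" and the new model name are illustrative, and the reference model is assumed to have an observation bus with leaf elements "sin" and "cos".

env = createIntegratedEnv( ...
    "myBusPlant","myBusIntegrated", ...
    ObservationBusElementNames=["sin" "cos"]);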
ObservationDiscreteElements
— Elements of finite observation set
cell array of name-value pairs
Elements of finite observation set, specified as a cell array of name-value pairs. Each name-value pair consists of an element name and an array of discrete values.
If the observation output port of the reference model is:
A bus signal, specify the name of one of the leaf elements of the bus, as specified by ObservationBusElementNames.
A nonbus signal, specify the name of the observation port, as specified by ObservationPortName.
The specified discrete values must be castable to the data type of the observation signal arriving at the observation output port in the environment reference model.
If you do not specify discrete values for an observation channel, the signals carried by the channel are continuous.
Example: ObservationDiscreteElements={"observation",[-1 0 1]} specifies discrete values for a nonbus observation signal with port name "observation".
Example: ObservationDiscreteElements={"gear",[-1 0 1 2],"direction",[1 2 3 4]} specifies discrete values for the "gear" and "direction" leaf elements of a bus observation signal.
ActionDiscreteElements
— Elements of finite action set
cell array of name-value pairs
Elements of finite action set, specified as a cell array of name-value pairs. Each name-value pair consists of an element name and an array of discrete values.
If the action input port of the reference model is:
A bus signal, specify the name of a leaf element of the bus.
A nonbus signal, specify the name of the action port, as specified by ActionPortName.
The specified discrete values must be castable to the data type of the action signal accepted by the action input port in the environment reference model.
If you do not specify discrete values for the action channel, the signals carried by the channel are continuous.
Example: ActionDiscreteElements={"action",[-1 0 1]} specifies discrete values for a nonbus action signal with port name "action".
Example: ActionDiscreteElements={"force",[-10 0 10],"torque",[-5 0 5]} specifies discrete values for the "force" and "torque" leaf elements of a bus action signal.
Note
While creating an integrated environment with more than one action channel is possible, Reinforcement Learning Toolbox™ agents only allow a single action channel.
Output Arguments
env
— Reinforcement learning environment
SimulinkEnvWithAgent object

Reinforcement learning environment interface, returned as a SimulinkEnvWithAgent object. You can use this object to train and simulate agents in the same way as with any other environment.
Note
Before training or simulating an agent within a Simulink environment, set the SampleTime property of your agent object appropriately to make sure that the RL Agent block runs at the intended sample time.
For more information on reinforcement learning environments, see Create Custom Simulink Environments.
agentBlock
— Block path to the agent block
string
Block path to the agent block in the new model, returned as a string. To train an agent in the new Simulink model, you must create an agent and specify the agent name in the RL Agent block indicated by agentBlock.
For more information on creating agents, see Reinforcement Learning Agents.
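For example, a minimal sketch of training an agent in the returned environment; it assumes an agent object agent has already been created, and the training option values are illustrative.

% illustrative training options
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=200, ...
    MaxStepsPerEpisode=400);

% train the agent against the integrated environment
trainStats = train(agent,env,trainOpts);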
obsInfo
— Observation data specifications
rlNumericSpec object | rlFiniteSetSpec object | array of data specification objects

Observation data specifications, returned as one of the following:
rlNumericSpec object for a single continuous observation specification
rlFiniteSetSpec object for a single discrete observation specification
Array of data specification objects for multiple specifications
actInfo
— Action data specifications
rlNumericSpec object | rlFiniteSetSpec object | array of data specification objects

Action data specifications, returned as one of the following:
rlNumericSpec object for a single continuous action specification
rlFiniteSetSpec object for a single discrete action specification
Array of data specification objects for multiple action specifications
Note
While creating an integrated environment with more than one action channel is possible, Reinforcement Learning Toolbox agents only allow a single action channel.
Version History
Introduced in R2019a
See Also
Functions
Objects
Blocks