
rlMultiAgentFunctionEnv

Create custom multiagent reinforcement learning environment

Since R2023b

    Description

    Use rlMultiAgentFunctionEnv to create a custom multiagent reinforcement learning environment in which all agents execute in the same step. To create your custom environment, you supply the observation and action specifications as well as your own reset and step MATLAB® functions. To verify the operation of your environment, rlMultiAgentFunctionEnv automatically calls validateEnvironment after creating the environment.

    Creation

    Description

    env = rlMultiAgentFunctionEnv(observationInfo,actionInfo,stepFcn,resetFcn) creates a multiagent environment in which all agents execute in the same step. The arguments are the observation and action specifications and the custom step and reset functions. The cell arrays observationInfo and actionInfo contain the observation and action specifications, respectively, for each agent. The stepFcn and resetFcn arguments are the names of your step and reset MATLAB functions, respectively, and they are used to set the StepFcn and ResetFcn properties of env.
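    For reference, a minimal sketch of the calling pattern might look like the following. Here, myStepFcn and myResetFcn are hypothetical placeholders for your own step and reset functions, and the specifications describe two assumed agents.

    % Observation and action specifications for two hypothetical agents
    obsInfo = {rlNumericSpec([3 1]), rlNumericSpec([2 1])};
    actInfo = {rlFiniteSetSpec([-1 1]), rlNumericSpec([1 1])};

    % Create the environment from custom step and reset functions
    % (myStepFcn and myResetFcn stand in for your own functions).
    env = rlMultiAgentFunctionEnv(obsInfo,actInfo,@myStepFcn,@myResetFcn);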


    Input Arguments


    Observation specifications, specified as a cell array with as many elements as the number of agents. Every element of the cell array must contain the observation specifications for a corresponding agent. The observation specification for an agent must be an rlFiniteSetSpec or rlNumericSpec object or a vector containing a mix of such objects (in which case every element of the vector defines the properties of a specific observation channel for the agent).

    Action specifications, specified as a cell array with as many elements as the number of agents. Every element of the cell array must contain the action specifications for a corresponding agent.

    The action specification for an agent must be an rlFiniteSetSpec object (for a discrete action space), an rlNumericSpec object (for a continuous action space), or, for a hybrid action space, a vector containing one object of each type.

    The action specification defines the properties of an environment action channel, such as its dimensions, data type, and name.

    Note

    For non-hybrid action spaces (either discrete or continuous), only one action channel is allowed. For hybrid action spaces, you must have two action channels: the first for the discrete part of the action and the second for the continuous part of the action.
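    As an illustration of the hybrid case, the following sketch (with assumed dimensions and set values) defines a two-channel action specification for one agent, with the discrete channel first and the continuous channel second.

    % Hybrid action specification for one agent (assumed example values):
    % the first channel is discrete, the second channel is continuous.
    hybridActInfo = [rlFiniteSetSpec([1 2 3]) rlNumericSpec([2 1])];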

    Properties


    Environment step function, specified as a function name, function handle, or handle to an anonymous function. The sim and train functions call StepFcn to update the environment at every simulation or training step.

    This function must have two inputs and four outputs, as illustrated by the following signature.

    [NextObservation,Reward,IsDone,UpdatedInfo] = myStepFunction(Action,Info)

    For a given action input, the step function returns the values of the next observation and reward, a logical value indicating whether the episode is terminated, and an updated environment information variable.

    Specifically, the required input and output arguments are:

    • Action — Cell array containing the current actions from the agents. This must contain as many elements as the number of agents, matching the order specified in actionInfo. Each element must match the dimensions and data type specified in the corresponding element of the actionInfo cell.

    • Info and UpdatedInfo — Any data that you want to pass from one step to the next. This can be the environment state or a structure containing state and parameters. The simulation and training functions (train and sim) handle this variable by:

      1. Initializing Info using the second output argument returned by ResetFcn, at the beginning of the episode

      2. Passing Info as the second input argument to StepFcn at each training or simulation step

      3. Updating Info using the fourth output argument returned by StepFcn, UpdatedInfo

    • NextObservation — Cell array containing the next observations for all the agents. These are the observations related to the next state (the transition to the next state is caused by the current actions contained in Action). Therefore, NextObservation must contain as many elements as the number of agents and each element must match the dimensions and data types specified in the corresponding element of the observationInfo cell.

    • Reward — Vector containing the rewards for all the agents. These are the rewards generated by the transition from the current state to the next one. Each element of the vector must be a numeric scalar.

    • IsDone — Logical value indicating whether to end the simulation or training episode.

    To use additional input arguments beyond the allowed two, define your additional arguments in the MATLAB workspace, then specify stepFcn as an anonymous function that in turn calls your custom function with the additional arguments defined in the workspace, as shown in the example Create Custom Environment Using Step and Reset Functions.
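    For instance, a minimal sketch of this pattern might look like the following, where myStepFcn is a hypothetical step function that takes an additional parameter structure envParams defined in the workspace.

    % Define the additional argument in the MATLAB workspace
    % (envParams is a hypothetical parameter structure).
    envParams.gravity = 9.81;

    % Wrap the custom step function in an anonymous function with two inputs.
    stepHandle = @(action,info) myStepFcn(action,info,envParams);

    env = rlMultiAgentFunctionEnv(obsInfo,actInfo,stepHandle,@myResetFcn);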

    Example: StepFcn="myStepFcn"

    Environment reset function, specified as a function name, function handle, or handle to an anonymous function. The sim function calls your reset function to reset the environment at the start of each simulation, and the train function calls it at the start of each training episode.

    The reset function that you provide must have no inputs and two outputs, as illustrated by the following signature.

    [InitialObservation,Info] = myResetFunction()

    The reset function sets the environment to an initial state and computes the initial value of the observation. For example, you can create a reset function that randomizes certain state values such that each training episode begins from different initial conditions. The InitialObservation must be a cell array containing the initial observations for all the agents. Therefore, InitialObservation must contain as many elements as the number of agents and each element must match the dimensions and data types specified in the corresponding element of the observationInfo cell.

    The Info output of ResetFcn initializes the Info property of your environment and contains any data that you want to pass from one step to the next. This can be the environment state or a structure containing state and parameters. The simulation or training function (train or sim) supplies the current value of Info as the second input argument of StepFcn, then uses the fourth output argument returned by StepFcn to update the value of Info.

    To use input arguments in your reset function, define your arguments in the MATLAB workspace, then specify resetFcn as an anonymous function that in turn calls your custom reset function with the arguments defined in the workspace, as shown in the example Create Custom Environment Using Step and Reset Functions.
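    For instance, a minimal sketch of this pattern might look like the following, where myResetFcn is a hypothetical reset function that takes an initial-state argument x0 defined in the workspace.

    % Define the additional argument in the MATLAB workspace
    % (x0 is a hypothetical initial state).
    x0 = [0.5; -0.5];

    % Wrap the custom reset function in an anonymous function with no inputs.
    resetHandle = @() myResetFcn(x0);

    env = rlMultiAgentFunctionEnv(obsInfo,actInfo,@myStepFcn,resetHandle);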

    Example: ResetFcn="myResetFcn"

    Information to pass to the next step, specified as any MATLAB data type. This can be the environment state or a structure containing state and parameters. When ResetFcn is called, whatever you define as the Info output of ResetFcn initializes this property. When a step occurs, the simulation or training function (train or sim) passes the current value of Info as the second input argument to StepFcn. Once StepFcn completes, the simulation or training function updates the current value of Info using the fourth output argument returned by StepFcn.

    Example: Info.State=[-1.1 0 2.2]
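    As an illustration of this mechanism, the following hypothetical two-agent step function reads a shared state from Info, updates it with assumed dynamics, and returns the updated value so that train or sim can pass it to the next step.

    function [nextObs,reward,isdone,info] = myStepFcn(action,info)
    % Hypothetical sketch: Info carries the shared environment state.
    x = info.State;                        % state saved by reset or by the previous step
    x = x + 0.1*(action{1} + action{2});   % assumed dynamics driven by both agents' actions
    info.State = x;                        % returned as the fourth output to update Info
    nextObs = {x, x};                      % each agent observes the shared state
    reward = [-norm(x) -norm(x)];          % assumed rewards
    isdone = norm(x) > 10;                 % assumed termination condition
    end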

    Object Functions

    getActionInfo - Obtain action data specifications from reinforcement learning environment, agent, or experience buffer
    getObservationInfo - Obtain observation data specifications from reinforcement learning environment, agent, or experience buffer
    train - Train reinforcement learning agents within a specified environment
    sim - Simulate trained reinforcement learning agents within specified environment
    validateEnvironment - Validate custom reinforcement learning environment

    Examples


    Create a custom multiagent environment by supplying custom MATLAB® functions. Using rlMultiAgentFunctionEnv, you can create a custom MATLAB reinforcement learning environment with a universal sample time, that is, an environment in which all agents execute in the same step. To create your custom multiagent environment, you must define observation specifications, action specifications, and step and reset functions.

    For this example, consider an environment containing two agents. The first agent receives an observation belonging to a four-dimensional continuous space and returns an action that can have two values, -1 and 1.

    The second agent receives an observation belonging to a mixed observation space with two channels. The first channel carries a two-dimensional continuous vector and the second channel carries a value that is either 0 or 1. The action returned by the second agent is a continuous scalar.

    To define the observation and action spaces of the two agents, use cell arrays.

    obsInfo = { rlNumericSpec([4 1]) , ... 
               [rlNumericSpec([2 1]) rlFiniteSetSpec([0 1])] };
    actInfo = {rlFiniteSetSpec([-1 1]), rlNumericSpec([1 1])};

    Next, specify your step and reset functions. For this example, use the functions resetFcn and stepFcn defined at the end of the example.

    To create the custom multiagent function environment, use rlMultiAgentFunctionEnv.

    env = rlMultiAgentFunctionEnv( ...
        obsInfo,actInfo, ...
        @stepFcn,@resetFcn)
    env = 
      rlMultiAgentFunctionEnv with properties:
    
         StepFcn: @stepFcn
        ResetFcn: @resetFcn
            Info: {[4x1 double]  {1x2 cell}}
    
    

    Note that while the custom reset and step functions that you pass to rlMultiAgentFunctionEnv must have exactly zero and two input arguments, respectively, you can work around this limitation by using anonymous functions. For an example of how to do this, see Create Custom Environment Using Step and Reset Functions.

    You can now create agents for env and train or simulate them as you would for any other environment.
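    For reference, one possible sketch of that workflow is shown below. The agent types, training options, and the assumption that default agent creation supports these observation and action specifications are illustrative choices, not requirements.

    % Create one default PPO agent per set of specifications
    % (assuming default agent creation supports these specifications).
    agent1 = rlPPOAgent(obsInfo{1},actInfo{1});   % discrete-action agent
    agent2 = rlPPOAgent(obsInfo{2},actInfo{2});   % continuous-action agent

    % Train both agents together, then simulate them in the environment.
    trainOpts = rlMultiAgentTrainingOptions( ...
        MaxEpisodes=100, ...
        MaxStepsPerEpisode=50);
    results = train([agent1 agent2],env,trainOpts);
    experience = sim(env,[agent1 agent2]);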

    Environment Functions

    Environment reset function.

    function [initialObs, info] = resetFcn()
    % RESETFCN sets the default state of the environment.
    %
    % - INITIALOBS is a 1xN cell array (N is the total number of agents).
    % - INFO contains any data that you want to pass between steps.
    %
    % To pass information from one step to the next, such as the environment 
    % state, use INFO.
    
    % For this example, initialize the agent observations randomly 
    % (but set to 1 the value carried by the second observation channel
    %  of the second agent).
    initialObs = {rand(4,1), {rand(2,1) 1} };
    
    % Set the info argument equal to the observation cell. 
    info = initialObs;
    
    end

    Environment step function.

    function [nextObs, reward, isdone, info] = stepFcn(action, info)
    % STEPFCN specifies how the environment advances to the next state given
    % the actions from all the agents. 
    % 
    % If N is the total number of agents, then the arguments are as follows.
    % - NEXTOBS is a 1xN cell array.
    % - ACTION is a 1xN cell array.
    % - REWARD is a 1xN numeric array.
    % - ISDONE is a logical or numeric scalar.
    % - INFO contains any data that you want to pass between steps.
    
    % For this example, just return to each agent a random observation 
    % multiplied by the norm of its respective action. The second observation 
    % channel of the second agent carries a value that can only be 0 or 1.
    nextObs = {  rand([4 1])*norm(action{1}) , ...
                 {rand([2 1])*norm(action{2}) 0} };
    
    % Return a random reward vector and a false is-done value.
    reward = rand(2,1);
    isdone = false;
    
    end

    Version History

    Introduced in R2023b