Multi-agent reinforcement learning for gain tuning in power electronics

I am trying to set up the environment and agents for DRL. The environment setup code (.mlx) is as follows:
% Load the Simulink model
mdl = 'RL_two_agents.slx';
open_system(mdl);
% I/O specifications for agent A
obsInfo1 = rlNumericSpec([3 1]);
obsInfo1.Name = 'observations1';
obsInfo1.Description = 'a, b, c';
actInfo1 = rlNumericSpec([1 1], 'LowerLimit', 1, 'UpperLimit', 20);
actInfo1.Name = 'gain_1';
% I/O specifications for agent B
obsInfo2 = rlNumericSpec([3 1]);
obsInfo2.Name = 'observations2';
obsInfo2.Description = 'a2, b2, c2';
actInfo2 = rlNumericSpec([1 1], 'LowerLimit', 0.01, 'UpperLimit', 1);
actInfo2.Name = 'gain_2';
% Combine observation and action specs into cell arrays
observationInfo = {obsInfo1, obsInfo2};
actionInfo = {actInfo1, actInfo2};
% Create the reinforcement learning environment
env = rlSimulinkEnv('RL_two_agents', ...
    'RL_two_agents/RL Agent1', ...
    'RL_two_agents/RL Agent2', ...
    observationInfo, actionInfo);
% Set the reset function
env.ResetFcn = @(in)localResetFcn(in);
The error message: the observationInfo and actionInfo parameters should be either rl.util.RLDataSpec objects or cell arrays of rl.util.RLDataSpec objects.
When I check the class with disp(class(obsInfo2)), it returns rl.util.rlNumericSpec.
What steps should I take to resolve this issue?
  2 Comments
Aravind 2024-10-22
Could you please share your Simulink model file and other related files? This will help in identifying and debugging the issue.
Muhammad 2024-10-22
@Aravind Unfortunately, I'm unable to share the model files at the moment, but if you have any questions about the model that could help resolve this issue, I’d be glad to provide further details. I'd also like to mention that the setup works fine for a single agent with two actions. However, when I attempt to use two agents with separate actions within the same environment, I encounter this error.
Both agents have different observations and actions, and the reward function is customized for each agent.


Answers (1)

Umar 2024-10-22

Hi @Muhammad,

Upon reviewing the code, the specifications themselves look fine: rlNumericSpec is a valid class for defining observation and action spaces, and it derives from rl.util.RLDataSpec, so the error is not about the spec objects. The issue is how the arguments are passed to rlSimulinkEnv. For a multi-agent environment, the agent block paths must be supplied as a single cell array, with the observation and action specs as matching cell arrays. Passing the two block paths as separate arguments shifts the remaining arguments into the wrong positions, which is what triggers the type-check error. Here is a revised version of the code:

% Load the Simulink model (rlSimulinkEnv expects the model name
% without the .slx extension)
mdl = 'RL_two_agents';
open_system(mdl);
% I/O specifications for agent A
obsInfo1 = rlNumericSpec([3 1]);
obsInfo1.Name = 'observations1';
obsInfo1.Description = 'a, b, c';
actInfo1 = rlNumericSpec([1 1], 'LowerLimit', 1, 'UpperLimit', 20);
actInfo1.Name = 'gain_1';
% I/O specifications for agent B
obsInfo2 = rlNumericSpec([3 1]);
obsInfo2.Name = 'observations2';
obsInfo2.Description = 'a2, b2, c2';
actInfo2 = rlNumericSpec([1 1], 'LowerLimit', 0.01, 'UpperLimit', 1);
actInfo2.Name = 'gain_2';
% Combine observation and action specs into cell arrays
observationInfo = {obsInfo1, obsInfo2}; % cell array of observation specs
actionInfo = {actInfo1, actInfo2};      % cell array of action specs
% Create the multi-agent environment: the block paths go in ONE cell array
env = rlSimulinkEnv(mdl, ...
    {'RL_two_agents/RL Agent1', 'RL_two_agents/RL Agent2'}, ...
    observationInfo, actionInfo);
% Set the reset function
env.ResetFcn = @(in)localResetFcn(in);
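
The code above references localResetFcn without defining it. As a minimal sketch (the block path 'RL_two_agents/Reference' and its 'Value' parameter are placeholders, not names from your model), such a reset function could look like:

function in = localResetFcn(in)
% 'in' is a Simulink.SimulationInput object. This sketch randomizes a
% reference value at the start of each episode; replace the placeholder
% block path and parameter name with the actual blocks in your model.
in = setBlockParameter(in, 'RL_two_agents/Reference', ...
    'Value', num2str(20*rand));
end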

Make sure the agent block paths are passed as a single cell array to rlSimulinkEnv, and verify that observationInfo and actionInfo are cell arrays of rl.util.RLDataSpec objects.
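
As a quick sanity check (assuming the specs above are already in the workspace), you can confirm that every entry derives from rl.util.RLDataSpec:

% Each call should print 1 (true) for every spec, because rlNumericSpec
% is a subclass of rl.util.RLDataSpec.
disp(cellfun(@(s) isa(s, 'rl.util.RLDataSpec'), observationInfo));
disp(cellfun(@(s) isa(s, 'rl.util.RLDataSpec'), actionInfo));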

Hope this helps.

  11 Comments
Muhammad 2024-10-24
Hey @Umar, I'm mostly using MATLAB Function blocks and the RL Agent block, together with power-electronic switches and a load.
When adding an agent in the app, I got this error:
Error in executing callback registered with ViewModel:
Error using matlab.ui.control.internal.model.AbstractStateComponent/set.Items (line 204)
'Items' must be a 1-by-N cell array of character vectors or a string array.
Umar 2024-10-24

Hi @Muhammad,

This error typically arises when the Items property of a UI control (such as a dropdown or list box) is not set correctly. In MATLAB, the Items property must be either a 1-by-N cell array of character vectors or a string array. First, make sure the data you assign to Items is formatted correctly, for example:

Cell Array

app.DropDown.Items = {'Item1', 'Item2', 'Item3'}; 

String Array

app.DropDown.Items = ["Item1", "Item2", "Item3"];

Next, verify the source of the data being assigned to Items. If it is generated dynamically (e.g., from a simulation), print it out before the assignment to confirm its structure, and make sure any function producing the list returns one of the supported formats. Here is an example of how to correctly set up a dropdown in your app:

function startupFcn(app)
    % Example items for the dropdown
    items = {'Agent1', 'Agent2', 'Agent3'};
    app.DropDown.Items = items; % assign items in a supported format
end
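
If the list comes from somewhere dynamic, a defensive check before the assignment can catch a badly typed value early (itemsFromSim is a hypothetical variable name standing in for whatever your code produces):

% Coerce dynamically generated items into a supported type before
% assigning them. string() handles numeric, char, and string inputs;
% cellstr() then converts the result to a cell array of character vectors.
if ~iscellstr(itemsFromSim) && ~isstring(itemsFromSim)
    itemsFromSim = cellstr(string(itemsFromSim));
end
app.DropDown.Items = itemsFromSim;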

If you are setting Items within a callback function, make sure that it is being executed after all necessary variables are initialized and available.

Hope this helps.

