
ali farid


Last seen: 3 months ago | Active since 2023

Followers: 0   Following: 0

Statistics

  • Thankful Level 2
  • Explorer


Feeds


Question


Problem with single agent Simulink using RL toolbox
I am using the RL Toolbox to train a single agent with the following specifications: for type=1 % obsMat = [1 1];...

4 months ago | 1 answer | 0

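For context, a minimal sketch of the usual single-agent Simulink workflow is below. The model name myModel, the agent block path, the spec sizes, and the choice of a DDPG agent are placeholders for illustration, not the poster's actual setup.

% Minimal single-agent Simulink training sketch (model/block names and
% spec sizes are placeholders, not the poster's configuration)
obsInfo = rlNumericSpec([1 1]);                      % one scalar observation channel
actInfo = rlNumericSpec([1 1]);                      % one continuous scalar action
env = rlSimulinkEnv('myModel', 'myModel/RL Agent', obsInfo, actInfo);
agent = rlDDPGAgent(obsInfo, actInfo);               % agent with default networks
trainOpts = rlTrainingOptions('MaxEpisodes', 500, 'MaxStepsPerEpisode', 200);
results = train(agent, env, trainOpts);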

Question


How to set up a multi-agent DDPG
Hi, I am trying to simulate a number of agents that are collaboratively doing mapping. I designed the actor-critic networks, but I...

4 months ago | 1 answer | 0

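A rough sketch of one way to wire up several DDPG agents in a single Simulink model follows. The model name, block paths, and spec sizes are assumptions, and the exact multi-agent training options vary by Toolbox release.

% Sketch of a multi-agent DDPG setup in Simulink (model/block names and
% spec sizes are placeholders; each agent block gets its own specs)
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1]);
blks = ["mapModel/Agent1", "mapModel/Agent2", "mapModel/Agent3"];
env = rlSimulinkEnv('mapModel', blks, ...
                    {obsInfo, obsInfo, obsInfo}, {actInfo, actInfo, actInfo});
agents = [rlDDPGAgent(obsInfo, actInfo), ...
          rlDDPGAgent(obsInfo, actInfo), ...
          rlDDPGAgent(obsInfo, actInfo)];
trainOpts = rlTrainingOptions('MaxEpisodes', 1000, 'MaxStepsPerEpisode', 300);
results = train(agents, env, trainOpts);             % one result per agent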

Question


Reinforcement Learning: competitive or collaborative options in MARL Matlab
Hello, I am trying to set up three explorer agents to explore an unknown area in collaborative or competitive manners. I am won...

5 months ago | 1 answer | 0

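Collaborative or competitive behaviour in multi-agent RL is largely a matter of how the per-agent rewards are defined rather than a toolbox switch. A toy illustration of the two reward styles, using made-up numbers for three explorers:

% Toy reward-shaping illustration (numbers are made up for the example)
newCells = [3 1 2];                          % cells newly covered by each agent this step

% Collaborative: every agent receives the shared team reward
teamReward    = sum(newCells);
collabRewards = repmat(teamReward, 1, numel(newCells));   % [6 6 6]

% Competitive: each agent is rewarded relative to the other agents
competRewards = newCells - mean(newCells);                % [1 -1 0]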

Question


Problem with bus input of RL agent
I used a block diagram of an RL agent in Simulink that was used in a MATLAB example, but I modified the inputs of the RL agent and I...

9 months ago | 1 answer | 0

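When the RL Agent block in Simulink receives its observations as a bus, the specs are usually derived from a Simulink.Bus object with bus2RLSpec. A sketch with hypothetical element names and dimensions follows; the bus object must exist in the base workspace.

% Sketch: derive observation specs from a Simulink bus object
% (element names and dimensions are hypothetical)
elems(1) = Simulink.BusElement;   elems(1).Name = 'image';   elems(1).Dimensions = [50 50 1];
elems(2) = Simulink.BusElement;   elems(2).Name = 'speed';   elems(2).Dimensions = 1;
obsBus = Simulink.Bus;
obsBus.Elements = elems;
assignin('base', 'obsBus', obsBus);     % bus2RLSpec looks the bus up by name
obsInfo = bus2RLSpec('obsBus');         % one spec per bus element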

Question


Cannot propagate non-bus signal to block because the block has a bus object specified.
I have a Simulink model whose observation was only an image, and I added two other vectors to the observation in the RL Toolbox. Since...

9 months ago | 1 answer | 0


Question


Observation specification must be scalar if not created by bus2RLSpec.
I am using an RL system that was initially designed for one type of observation, which is an image. Recently I added two scalar observ...

10 months ago | 1 answer | 1


Question


A problem with RL toolbox: wrong size of inputs of actor network.
I have a problem with getSize, which shows a wrong size: my input is a scalar with size [1 1], but getSize returns 2. I am usi...

10 months ago | 1 answer | 0

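One way to double-check what a data spec actually reports, independent of getSize, is to look at its Dimension property directly; a quick sketch:

% Inspecting a spec's size directly via its Dimension property
obsInfo = rlNumericSpec([1 1]);   % scalar observation channel
obsInfo.Dimension                 % [1 1]
prod(obsInfo.Dimension)           % 1 element in total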

Question


Reinforcement Learning Error with two scalar inputs
I have a strange error from a critic network that has 3 inputs: an image and two scalars. I see the following error: Error ...

10 months ago | 0 answers | 0

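A common pattern for a critic that takes an image plus two scalars is to give each channel its own input path and join them with a concatenation layer. The sketch below assumes a 50-by-50 grayscale image and uses hypothetical layer names and sizes; it targets the newer rlValueFunction interface, so the constructor may differ by release.

% Sketch: value-function critic with one image channel and two scalar channels
imgPath = [imageInputLayer([50 50 1], 'Normalization', 'none', 'Name', 'img')
           convolution2dLayer(8, 16, 'Stride', 4, 'Name', 'conv')
           reluLayer('Name', 'reluImg')
           fullyConnectedLayer(64, 'Name', 'fcImg')];
s1Path  = featureInputLayer(1, 'Name', 'scalar1');
s2Path  = featureInputLayer(1, 'Name', 'scalar2');
common  = [concatenationLayer(1, 3, 'Name', 'concat')
           fullyConnectedLayer(64, 'Name', 'fc1')
           reluLayer('Name', 'relu1')
           fullyConnectedLayer(1, 'Name', 'value')];

lg = layerGraph(imgPath);
lg = addLayers(lg, s1Path);
lg = addLayers(lg, s2Path);
lg = addLayers(lg, common);
lg = connectLayers(lg, 'fcImg',   'concat/in1');
lg = connectLayers(lg, 'scalar1', 'concat/in2');
lg = connectLayers(lg, 'scalar2', 'concat/in3');

obsInfo = [rlNumericSpec([50 50 1]), rlNumericSpec([1 1]), rlNumericSpec([1 1])];
critic  = rlValueFunction(dlnetwork(lg), obsInfo, ...
              'ObservationInputNames', {'img', 'scalar1', 'scalar2'});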

Question


Add scalar inputs to the actor network
I have a CNN-based PPO actor-critic, and it is working fine, but now I am trying to add three scalar values to the actor network...

10 months ago | 1 answer | 0


Question


Design an actor critic network for non-image inputs
I have a robot with 3 inputs: wind, the current location, and the current action. I use these three inputs to predict the...

11 months ago | 1 answer | 0

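For an agent driven purely by a small vector observation, both networks can be built from featureInputLayer paths. The sketch below pairs a Q-value critic with a deterministic actor (a DDPG-style pairing chosen for brevity); the sizes, limits, and layer names are assumptions, and the poster's action space may differ.

% Sketch: actor-critic built from vector inputs only (sizes/limits assumed)
obsInfo = rlNumericSpec([3 1]);                                % wind, location, previous action
actInfo = rlNumericSpec([1 1], 'LowerLimit', -1, 'UpperLimit', 1);

% Q-value critic: observation and action paths joined by concatenation
obsPath = [featureInputLayer(3, 'Name', 'obs')
           fullyConnectedLayer(64, 'Name', 'fcObs')];
actPath = [featureInputLayer(1, 'Name', 'act')
           fullyConnectedLayer(64, 'Name', 'fcAct')];
common  = [concatenationLayer(1, 2, 'Name', 'concat')
           reluLayer('Name', 'relu1')
           fullyConnectedLayer(1, 'Name', 'q')];
lg = layerGraph(obsPath);
lg = addLayers(lg, actPath);
lg = addLayers(lg, common);
lg = connectLayers(lg, 'fcObs', 'concat/in1');
lg = connectLayers(lg, 'fcAct', 'concat/in2');
critic = rlQValueFunction(dlnetwork(lg), obsInfo, actInfo, ...
             'ObservationInputNames', "obs", 'ActionInputNames', "act");

% Deterministic actor mapping the observation to a bounded action
actorNet = [featureInputLayer(3, 'Name', 'obs')
            fullyConnectedLayer(64, 'Name', 'fc1')
            reluLayer('Name', 'relu2')
            fullyConnectedLayer(1, 'Name', 'fcOut')
            tanhLayer('Name', 'action')];
actor = rlContinuousDeterministicActor(dlnetwork(actorNet), obsInfo, actInfo);
agent = rlDDPGAgent(actor, critic);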

Question


I see a zero mean reward for the first agent in multi-agent RL Toolbox
Hello, I have extended MATLAB's PPO coverage path planning example to 5 agents. I can see now that always, I...

1 year ago | 0 answers | 0


Question


Replace RL type (PPO with DDPG) in a MATLAB example
There is a MATLAB example about coverage path planning using PPO reinforcement learning at the following link: https://www.math...

1 year ago | 1 answer | 0

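As a general note on that kind of swap, the sketch below replaces a default PPO agent with a default DDPG agent against the same specs. The spec sizes are placeholders; DDPG additionally requires a continuous action specification and a replay buffer, so it is not a pure drop-in for examples with discrete actions.

% Sketch: swapping a PPO agent for a DDPG agent against the same specs
% (spec sizes are placeholders; DDPG needs a continuous action space)
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1], 'LowerLimit', -1, 'UpperLimit', 1);

% ppoAgent = rlPPOAgent(obsInfo, actInfo);               % on-policy agent type used in the example
ddpgAgent = rlDDPGAgent(obsInfo, actInfo);               % off-policy replacement with default networks
ddpgAgent.AgentOptions.ExperienceBufferLength = 1e6;     % DDPG trains from a replay buffer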