Community Profile


Emmanouil Tzorakoleftherakis

MathWorks

Last seen: 1 day ago | Active since 2018

Followers: 0   Following: 0

Statistics

All
  • Thankful Level 3
  • 12 Month Streak
  • Personal Best Downloads Level 1
  • Pro
  • Knowledgeable Level 5
  • GitHub Submissions Level 1
  • First Submission
  • Revival Level 2
  • First Answer


Feeds


Answered
Reaching observation data and pass them to the learning process
In general, you cannot change the observation/action space definitions once they are defined. That said, it seems to me that what...

30 days ago | 0

| Accepted
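A minimal sketch of the point in the answer above, assuming a Simulink environment; the model name, block path, and spec dimensions are illustrative:

    % Define the specs with their final dimensions before creating the environment;
    % changing them later means recreating the environment and agent.
    obsInfo = rlNumericSpec([4 1]);                                   % 4 continuous observations
    actInfo = rlNumericSpec([1 1], 'LowerLimit', -1, 'UpperLimit', 1);
    env = rlSimulinkEnv('myModel', 'myModel/RL Agent', obsInfo, actInfo);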

Answered
How to build a reinforcement learning environment for a DCDC converter?
I would look at this example which starts with a model that has a PID controller and shows how to replace it with an RL Agent.

2 months ago | 0

| Accepted

Answered
PPO minibatch size for parallel training with variable number of steps
No data will be discarded actually. As of R2023b, the 4 experiences that are left in your example form their own minibatch and a...

2 months ago | 0

Answered
Why is my DDPG agent converging to a state where it gets continuous penalization, while having a state it can go with 0 penalization?
My guess is that this happens due to the specifics of the problem. You want to build a controller that generates 'zeroes' when t...

2 months ago | 0

Answered
Reinforcement learning: Step function "AvoidObstaclesUsingReinforcementLearningForMobileRobotsExample" example
This example trains the agent against a Simulink environment, not a MATLAB one. The equivalent of the 'step' function is inside ...

2 months ago | 0

Answered
How can I deploy the trained DRL model in a microprocessor, such as DSP or STM32?
You can follow the steps here to generate code from the trained policy. We also have hardware support for STM32 processors, so i...

2 months ago | 1

| Accepted
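A hedged sketch of the policy code generation workflow referenced in that answer; the observation dimension and coder configuration are assumptions for illustration:

    % Export a standalone policy evaluation function from the trained agent
    generatePolicyFunction(agent);               % creates evaluatePolicy.m and agentData.mat
    % Generate embeddable C code from it (obsDim = 4 is assumed here)
    obsDim = 4;
    cfg = coder.config('lib');                   % static library suitable for embedded targets
    codegen('-config', cfg, 'evaluatePolicy', '-args', {ones(obsDim,1)});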

Answered
Parallel Training of Multiple RL Agents in same environment
Parallel training is currently not supported for multi-agent reinforcement learning. One thing you could do is train the agents ...

2 months ago | 0

| Accepted

Answered
Augmenting MPC Block with Integral Action
Hello, Let me paste a couple of links here that show how we formulate the underlying QP problem in linear mpc in Model Predicti...

2 months ago | 0

| Accepted

Answered
Does my PI + MPC (feedforward controller) configuration make sense?
Looks like the whole point of using an MPC controller was to provide deltas on the PI output based on the output of the piezo ac...

2 months ago | 0

Answered
how to freeze and reset the weights to initial values of neural network.?
You can accomplish what you asked with something along the lines of: init_model = getModel(getCritic(agent)); new_model_layers...

2 months ago | 0
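A hedged completion of the truncated snippet above: take a snapshot of the initial critic network and restore it later to reset the weights; the surrounding training step is only indicated by a comment:

    critic     = getCritic(agent);
    init_model = getModel(critic);               % snapshot of the initial network
    % ... training happens here ...
    critic = setModel(critic, init_model);       % restore the saved weights
    agent  = setCritic(agent, critic);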

Answered
Cannot propagate non-bus signal to block because the block has a bus object specified.
Looking at the first screenshot, looks like the output of the grid world block is not a bus, but the observations in your RL Age...

2 months ago | 0

| Accepted

Answered
Constraint to state derivatives with NLMPC
See here for all available constraint options with nlmpc. If the state derivatives you need are part of the state vector, you ca...

3 months ago | 0

| Accepted
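A small sketch of the case mentioned in that answer where the derivative you want to bound is itself a state; the problem sizes and limits are illustrative:

    nlobj = nlmpc(4, 2, 1);                      % nx, ny, nu chosen for illustration
    % If state 2 is the derivative of state 1 (e.g. a velocity), bound it directly:
    nlobj.States(2).Min = -5;
    nlobj.States(2).Max =  5;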

Answered
Cannot generate C code from MPC object
Please take a look at the example here that uses 'codegen' command.

3 months ago | 0

| Accepted
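For an mpc object, the 'codegen' route goes through mpcmoveCodeGeneration rather than the controller object itself; a hedged sketch assuming an existing mpcobj:

    % Convert the mpc object into code-generation-friendly data structures
    [coredata, statedata, onlinedata] = getCodeGenerationData(mpcobj);
    % Generate a MEX (or C) version of the controller move computation
    cfg = coder.config('mex');
    codegen('-config', cfg, 'mpcmoveCodeGeneration', '-args', {coredata, statedata, onlinedata});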

Answered
RL Agent learns a constant trajectory instead of actual trajectory
Thanks for adding all the details. The first thing I will say is that the average reward on the Episode Manager is moving in the...

3 months ago | 0

Answered
Tune PI Controller Using Reinforcement Learning
Do you maybe have linearize shadowed somewhere on your path? If not, a reproduction model would be good.

3 months ago | 0

| Accepted

Answered
Training Reinforcement Learning Agents --> Use ResetFcn to delay the agent's behaviour in the environment
You can place the RL Agent block inside a triggered subsystem and set the agent's sample time to -1 (see e.g. here). Then set th...

3 months ago | 0

| Accepted
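On the MATLAB side, the inherited sample time mentioned in that answer is set through the agent options; a DDPG agent is used below purely as an example, and editing options on an existing agent depends on the release:

    agentOpts = rlDDPGAgentOptions('SampleTime', -1);   % -1 = inherit from the triggered subsystem
    % or, on an already-created agent:
    agent.AgentOptions.SampleTime = -1;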

Answered
How to specify the training algorithm of an agent - Reinforcement Learning
'train' takes an agent object as input, so yes the algorithm will be selected depending on the agent.

3 months ago | 1

| Accepted
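A short illustration of that point: the constructor you choose fixes the algorithm, and train simply runs it; the environment and spec variables are assumed to exist:

    agent = rlPPOAgent(obsInfo, actInfo);                % PPO, because of the agent class
    trainOpts = rlTrainingOptions('MaxEpisodes', 500);
    trainingStats = train(agent, env, trainOpts);        % runs the PPO algorithm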

Answered
DDPG training converges to the worst results obtained during exploration
I cannot see your training options, but what do you mean by "converges"? The training plot only shows about 1800 episodes. There...

3 months ago | 1

Answered
Not able to use multiple GPUs when training a DDPG agent
Can you share your agent options and the architecture of the actor and critic networks? As mentioned here, "Using GPUs is likely...

3 months ago | 0

Answered
Problem with RL agent block
You can use a delay block for the last observation and set the initial value of the delay in the block dialog. That should resol...

3 months ago | 1

| Accepted

Answered
decaying clip factor or entropy loss weight for PPO
These parameters are fixed and cannot be changed after training begins. One workaround would be to train the agent for a certain...

3 months ago | 0

| Accepted
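A hedged sketch of the stop-adjust-resume workaround described above; whether AgentOptions can be edited in place on an existing agent depends on the release, so treat the assignment as illustrative:

    trainOpts = rlTrainingOptions('MaxEpisodes', 500);
    train(agent, env, trainOpts);                        % first training stage
    agent.AgentOptions.EntropyLossWeight = 0.005;        % lower the weight between stages
    train(agent, env, trainOpts);                        % resume with the new setting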

Answered
How to solve the error "Error using sqpInterface Nonlinear constraint function is undefined at initial point. Fmincon cannot continue." Error occurred when calling NLP solver
The dynamics/state functions are turned into constraints internally when creating an NLP for fmincon. You don't provide all the fu...

3 months ago | 0

| Accepted

Answered
How do I Tune Model Predictive Controller (MPC) in the Real Time?
There could be many reasons why you don't see the expected results. First thing I would check is whether the controller can actu...

3 months ago | 0

| Accepted

Answered
How Fast is Simulink Real Time? Is Simulink Real Time Faster than Rasberry Pi?
Hi, First of all, how fast (wall clock time) does the Simulink model run with your current MPC implementation? This would be a ...

3 months ago | 0

Answered
Design an actor critic network for non-image inputs
I may be missing something but why don't you frame your observations as a [4 1] vector? That way it would be consistent with how...

3 months ago | 0
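A minimal sketch of that framing, with illustrative names; the network input layer then matches the spec dimensions:

    obsInfo = rlNumericSpec([4 1]);
    obsInfo.Name = 'observations';
    actInfo = rlNumericSpec([1 1]);
    % The first layer of the actor/critic networks then takes 4 features:
    obsInput = featureInputLayer(prod(obsInfo.Dimension), 'Name', 'obsIn');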

Answered
Pausing reinforcement learning by forcing
The proper way to stop it would be through the Episode Manager (top right of the window). Does this not work for you?

3 months ago | 0

| Accepted

Answered
Why DQN training always fails to converge to the optimal value
What I am seeing here is that the average reward tends to converge to the Q0 profile which is the expected behavior of a converg...

3 months ago | 0

Answered
My reinforcement learning simulation runs for only 0 steps and 0 times in Simulink. I am not getting any error messages so I cannot pinpoint the issue, so I decided to ask.
It likely has to do with the priority of execution of the data store blocks. I would look more into it, but honestly I think you...

4 months ago | 0

Answered
PPO and LSTM agent creation
Hi, With lstm policies, BOTH the actor and the critic should have lstm layers. That's why you are getting this error. LSTM po...

4 months ago | 0
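A rough sketch of the requirement that both networks are recurrent; sizes are illustrative, and the layer arrays still need to be wrapped into actor/critic function objects before building the PPO agent:

    obsDim = 4; numActions = 2;                          % illustrative sizes
    criticLayers = [
        sequenceInputLayer(obsDim)
        lstmLayer(64, 'OutputMode', 'sequence')
        fullyConnectedLayer(1)];
    actorLayers = [
        sequenceInputLayer(obsDim)
        lstmLayer(64, 'OutputMode', 'sequence')
        fullyConnectedLayer(numActions)
        softmaxLayer];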

Answered
Matlab reinforcement learning stop working after a while
125 episodes are not that many. Is it always freezing on that episode? Given that you are not getting any errors/crashes, my hun...

4 months ago | 0
