Feeds
Answered
Error: Undefined function 'getActionInfo' for input arguments of type 'struct'.
Hi Emmanouil - Below is my full "simulation" code. Basically I have trained two models using PPO and DDPG and am trying to...
3 years ago | 0
Answered
Error: Undefined function 'getActionInfo' for input arguments of type 'struct'.
Hello Emmanouil, Thank you for your help. The error is not in the code, as that runs fine. It is the Simulation run that genera...
3 years ago | 0
Asked
Error: Undefined function 'getActionInfo' for input arguments of type 'struct'.
Hi, This worked previously. I now get an error when I try to test an RL agent. Is this an issue with the expected data type? I hav...
3 years ago | 2 answers | 0
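A common cause of this particular error (a sketch based on an assumption, since the full code above is truncated): `load` returns a struct wrapping the saved variables, and calling `getActionInfo` on that struct, rather than on the agent or environment object inside it, produces exactly this message. The file and variable names below are hypothetical.

```matlab
% Hypothetical sketch: load() returns a struct, not the agent itself.
% Calling getActionInfo on the struct raises:
%   Undefined function 'getActionInfo' for input arguments of type 'struct'.
s = load("trainedAgent.mat");    % s is a struct holding the saved variables
agent = s.agent;                 % extract the actual agent object first
actInfo = getActionInfo(agent);  % dispatches to the RL Toolbox method
```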
Asked
Reinforcement Learning multiple agent validation: Can I have a Simulink model host TWO agents and test them
Hi, I am conducting research to see how PPO performs versus DDPG - for non-linear plants. I have trained two agents. Can I hav...
3 years ago | 1 answer | 1
Asked
Toolbox .zip / .tar files: Where can I find Toolbox installables?
Hi, I need to install the RL toolbox directly from a .zip/tar file. Where can I find Toolbox installables, please? Thanks for ...
3 years ago | 1 answer | 0
Asked
CodeOcean - how do I install Toolboxes from .tar files
Hello, I am submitting MATLAB code that came out of my research on Reinforcement Learning to CodeOcean (https://codeocean.co...
3 years ago | 1 answer | 0
Answered
How do I save Episode Manager training data for *plotting* later
Thank you Asvin and Emmanouil. I didn't realize that I could store trainingStats and do anything with the data later. Awes...
4 years ago | 1
| Accepted
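The point made in the answer above, that the statistics returned by `train` can be stored and plotted later, can be sketched as follows. The file and option names are assumptions for illustration.

```matlab
% Sketch (assumed variable/file names): train() returns an episode-statistics
% object that can be saved and plotted later, independent of Episode Manager.
trainingStats = train(agent, env, trainOpts);
save("ddpgStats.mat", "trainingStats");

% Later, in a fresh session, compare against a second run the same way:
s = load("ddpgStats.mat");
plot(s.trainingStats.EpisodeIndex, s.trainingStats.EpisodeReward);
xlabel("Episode"); ylabel("Episode reward");
```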
Asked
How do I save Episode Manager training data for *plotting* later
Hi, Let's say I am using two algorithms to train for a task (DDPG and PPO). How do I save the data to plot a comparison later? T...
4 years ago | 2 answers | 0
Asked
Linear Analyzer: PID + Valve with delay
Hello, I am using the Linear Analyzer to analyze a simple PID + valve system and am facing the following issue. The plant is a ...
5 years ago | 1 answer | 0
Asked
How do I count the number of times zero is being crossed by a signal?
Hi, I am trying to build a control system and I want to count the oscillations in a time-span of say 100 seconds. Can I count ...
5 years ago | 2 answers | 0
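One way to count the oscillations asked about above is to count sign changes of the logged signal. A minimal sketch, assuming the signal has been logged to the workspace as time vector `t` and value vector `y` (names assumed):

```matlab
% Minimal sketch: count zero crossings of a sampled signal over a window.
window = t <= 100;                      % restrict to the first 100 seconds
s = sign(y(window));
s(s == 0) = 1;                          % treat exact zeros as positive
numCrossings = sum(abs(diff(s)) == 2);  % each sign flip is one crossing
```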
Answered
DDPG Agent: Not stabilizing, creating an unstable model
Based on several rounds of training, my personal observation is that RL will converge initially to an optimal expected value. A...
5 years ago | 0
Asked
DDPG Agent: Not stabilizing, creating an unstable model
Dear MATLAB, I am training a DDPG agent on randomly set straight lines (levels) and later testing on a benchmark waveform. Should...
5 years ago | 1 answer | 0
Answered
How to TRAIN further a previously trained agent?
Hi Sourav, I figured it out after reading the documentation more carefully! I need to also set the ResetExperienceBufferBeforeT...
5 years ago | 6
| Accepted
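The option named in the answer above can be sketched as below. This is a hedged sketch using option names from older Reinforcement Learning Toolbox releases; the file name, variable names, and the new reward target are assumptions.

```matlab
% Sketch: continue training a saved agent past its original stop condition.
% Keep the experience buffer between runs and raise the average-reward target.
load("savedAgent.mat", "agent");                          % previously trained agent (assumed file)
agent.AgentOptions.ResetExperienceBufferBeforeTraining = false;
agent.AgentOptions.SaveExperienceBufferWithAgent = true;
trainOpts.StopTrainingValue = 2000;                       % assumed new, higher target
trainingStats = train(agent, env, trainOpts);             % resumes rather than restarts
```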
Asked
How to TRAIN further a previously trained agent?
Hi, My agent was programmed to stop after reaching an average reward of X. How do I load and extend the training further? I di...
5 years ago | 4 answers | 3
Asked
DDPG Control - for non-linear plant control - Q0 does not converge even after 5,000 episodes
Dear MATLAB, Firstly, I must say having RL in the MATLAB platform, with the capability to integrate with Simulink, is just so ...
5 years ago | 1 answer | 1
Asked
Are Simscape models idealistic or realistic?
Hello, Do valve components in Simscape Fluids exhibit non-linearity just like real valves? I want to model a realistic system...
5 years ago | 1 answer | 0
Asked
Simscape Fluids Valve models: Do they exhibit non-linearity too?
Hello, Simscape Fluids VALVE models: Non-linearity I want to model a realistic system and therefore want to view effects of a...
5 years ago | 1 answer | 0