Why does this error occur when using rlDQNAgent: "First argument must be a Q value function object"?
I am new to the Reinforcement Learning Toolbox, and I am trying to simulate an MEC (mobile edge computing) environment. The agent observes 3 states from the environment: a scalar SNR value, a scalar data value, and a 1x3 row vector of channel gains, and it produces an action in the range 0 to 1 that determines the amount of data to be offloaded (the offloading factor). My observation info is a cell array, which I concatenate into a vector when I run my custom environment code. But I keep getting this error:

Error using rlDQNAgent
First argument must be a Q value function object, a Q value representation object, or an observation specification created using 'rlNumericSpec' or 'rlFiniteSetSpec' objects.

Error in DRL_trial_dLL (line 52)
agent = rlDQNAgent(layers, obsInfo, actInfo);

I have attached my codes.
Answers (1)
Sachin Lodhi
2023-9-18
Hi Emmanuella,
Based on the information provided, the issue comes from the arguments passed to the “rlDQNAgent” function. Its first argument must be either a critic (a Q-value function object) or an observation specification of type “rlFiniteSetSpec” or “rlNumericSpec” (or an array combining such objects). In your code, however, you pass a layer array as the first input, which is what triggers the error message.
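For example, if you only need a default agent, you can pass the specifications directly. A minimal sketch, assuming “env” is your custom environment object:

obsInfo = getObservationInfo(env);     % observation specification from your environment
actInfo = getActionInfo(env);          % action specification from your environment
agent = rlDQNAgent(obsInfo, actInfo);  % MATLAB builds a default critic network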
If your intention is to use the network (layers) that you have created, you will have to create a critic with “rlVectorQValueFunction” and then pass that critic as the first argument instead of the layers. This adjustment should allow your code to run without the error.
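A minimal sketch of that approach is below. The specifications and network here are hypothetical placeholders; substitute your own layers and the specs returned by your environment. Also note that DQN requires a discrete action set, so a continuous offloading factor in [0, 1] has to be discretized (the 0:0.1:1 grid is only an example):

% Hypothetical specs -- replace with your environment's actual specifications
obsInfo = rlNumericSpec([5 1]);      % e.g. SNR + data + 1x3 channel gain, concatenated
actInfo = rlFiniteSetSpec(0:0.1:1);  % discrete offloading levels (example grid)

% Example critic network: one output per discrete action
layers = [
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements))
    ];

% Wrap the network in a vector Q-value function (the critic)
critic = rlVectorQValueFunction(layers, obsInfo, actInfo);

% Pass the critic, not the raw layers, to rlDQNAgent
agent = rlDQNAgent(critic);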
I recommend referring to the “rlDQNAgent” documentation, which includes an example of creating a DQN agent from a critic.
I hope this helps in simulating your environment and resolves the error.