Stopping conditions for DQN training

7 views (last 30 days)
Hello all,
I am currently playing around with DQN training. I am trying to find a systematic way to stop the training process rather than stopping it manually. However, for my training process I have no idea what the final rewards will be, and I don't have a target value to reach, so I do not know when to stop.
Is there a way to stop the DQN agent without that information and still guarantee some form of convergence?
Thanks for helping!

Answers (1)

Madhav Thakker 2020-11-25
Hi Zonghao zou,
One possible signal to consider for stopping training is the Q-values. If the Q-values have saturated, the network is no longer learning. You could monitor your Q-values and choose a threshold on how much they change between checks to perform early stopping. You don't need the final reward or a target value to do early stopping based on Q-values.
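This is not a built-in stopping criterion, but here is a rough MATLAB sketch of the idea: train in short chunks and stop once the critic's Q-values on a fixed set of probe observations stop changing. It assumes an rlDQNAgent "agent" and an environment "env" already exist in your workspace; the chunk size, number of chunks, number of probe observations, and the tolerance "tol" are all placeholders you would tune for your problem.

% Sketch only: stop DQN training once Q-values saturate on probe observations.
chunkOpts = rlTrainingOptions( ...
    'MaxEpisodes', 50, ...                 % episodes per training chunk (placeholder)
    'StopTrainingCriteria', 'EpisodeCount', ...
    'StopTrainingValue', 50, ...
    'Verbose', false, ...
    'Plots', 'none');

% A few representative observations, here taken from environment resets.
probeObs = arrayfun(@(k) reset(env), 1:5, 'UniformOutput', false);

tol   = 1e-2;   % saturation tolerance on the Q-value change (assumption)
prevQ = [];

for chunk = 1:40                           % hard cap: at most 40*50 = 2000 episodes
    train(agent, env, chunkOpts);          % continues from the agent's current state

    % Q-values for every action at each probe observation.
    critic = getCritic(agent);
    Q = cell2mat(cellfun(@(obs) getValue(critic, {obs}), probeObs, ...
                         'UniformOutput', false));

    if ~isempty(prevQ) && max(abs(Q(:) - prevQ(:))) < tol
        fprintf('Q-values saturated after chunk %d, stopping.\n', chunk);
        break
    end
    prevQ = Q;
end

The repeated calls to train keep building on the same agent (its parameters and experience buffer persist), so this behaves like one long training run that you interrupt once the Q-value change falls below your chosen tolerance.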
Hope this helps.
