Q-learning reinforcement learning agent
The Q-learning algorithm is a model-free, online, off-policy reinforcement learning method. A Q-learning agent is a value-based reinforcement learning agent that trains a critic to estimate the return, or future rewards.
For more information on Q-learning agents, see Q-Learning Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
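As a simplified illustration of what the critic learns, the tabular Q-learning update applied after each environment step can be sketched as follows (the agent performs this update internally during training; the variable names here are illustrative):

% Q      : Q-table, one row per state, one column per action
% s, a   : current state and action; sNext: next state; r: reward
% alpha  : learning rate; gamma: discount factor
Q(s,a) = Q(s,a) + alpha*(r + gamma*max(Q(sNext,:)) - Q(s,a));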
|Function|Description|
|train|Train reinforcement learning agents within a specified environment|
|sim|Simulate trained reinforcement learning agents within a specified environment|
|getAction|Obtain action from agent or actor representation given environment observations|
|getActor|Get actor representation from reinforcement learning agent|
|setActor|Set actor representation of reinforcement learning agent|
|getCritic|Get critic representation from reinforcement learning agent|
|setCritic|Set critic representation of reinforcement learning agent|
|generatePolicyFunction|Create function that evaluates trained policy of reinforcement learning agent|
Create an environment interface.
env = rlPredefinedEnv("BasicGridWorld");
Create a critic Q-value function representation using a Q-table derived from the environment observation and action specifications.
qTable = rlTable(getObservationInfo(env),getActionInfo(env));
critic = rlQValueRepresentation(qTable,getObservationInfo(env),getActionInfo(env));
Create a Q-learning agent using the specified critic value function and an epsilon value of 0.05.
opt = rlQAgentOptions;
opt.EpsilonGreedyExploration.Epsilon = 0.05;
agent = rlQAgent(critic,opt)
agent = 
  rlQAgent with properties:

    AgentOptions: [1x1 rl.option.rlQAgentOptions]
To check your agent, use getAction to return the action from a random observation.
getAction(agent,{randi(25)})
ans = 1
You can now test and train the agent against the environment.
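For example, a typical training and simulation workflow looks like the following sketch (the option values shown here are illustrative, not taken from this example):

% Train the agent; stop when the average reward criterion is met
trainOpts = rlTrainingOptions("MaxEpisodes",200,"StopTrainingCriteria","AverageReward");
trainStats = train(agent,env,trainOpts);

% Simulate the trained agent for up to 50 steps
simOpts = rlSimulationOptions("MaxSteps",50);
experience = sim(env,agent,simOpts);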