Q-learning is one of the most popular and frequently used model-free reinforcement learning methods. Among its advantages are that it requires no prior knowledge of the environment and that its convergence to the optimal policy is provable. One of its main limitations is a low convergence speed, especially in high-dimensional problems, so accelerating its convergence is a challenge. Q-learning can be accelerated with the notion of the opposite action, since two Q-values are then updated simultaneously at each learning step. In this paper, an adaptive policy and the notion of the opposite action are combined in an integrated approach to speed up the learning process. The methods are simulated on the grid-world problem. The results demonstrate a substantial improvement in learning in terms of success rate, the percentage of optimal states, the number of steps to the goal, and the average reward.
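As a rough illustration of the opposite-action idea, the sketch below runs tabular Q-learning on a small grid world and performs two updates per step: one for the executed action and one for its opposite. The grid size, reward values, learning rates, and the decaying-epsilon schedule (one plausible reading of "adaptive policy") are all illustrative assumptions, not details given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5x5 grid world; sizes, rewards, and rates are illustrative.
SIZE = 5
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
OPPOSITE = {0: 1, 1: 0, 2: 3, 3: 2}            # up<->down, left<->right
ALPHA, GAMMA = 0.1, 0.95

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))

def step(state, action_idx):
    """Deterministic transition: -1 per move, +10 on reaching the goal."""
    dr, dc = ACTIONS[action_idx]
    r = min(max(state[0] + dr, 0), SIZE - 1)
    c = min(max(state[1] + dc, 0), SIZE - 1)
    nxt = (r, c)
    return nxt, (10.0 if nxt == GOAL else -1.0)

def update(s, a, r, s_next):
    """Standard Q-learning backup for one (state, action) pair."""
    Q[s][a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s][a])

for episode in range(500):
    # "Adaptive policy" is interpreted here as a decaying epsilon-greedy
    # schedule -- an assumption, since the abstract gives no details.
    eps = max(0.05, 1.0 - episode / 250)
    s = (0, 0)
    while s != GOAL:
        a = rng.integers(4) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = step(s, a)
        update(s, a, r, s_next)        # update for the executed action
        # Opposition-based extra update: two Q-values per learning step.
        # In a grid world the opposite transition is known by symmetry,
        # so no additional environment interaction is needed.
        a_opp = OPPOSITE[a]
        s_opp, r_opp = step(s, a_opp)
        update(s, a_opp, r_opp, s_opp)
        s = s_next
```

The intended benefit is that each experienced transition informs two table entries instead of one, which is where the claimed speed-up in convergence comes from.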