Stochastic games, the generalization of Markov decision processes to the multi-agent case, have long been used to model multi-agent systems and provide a suitable framework for multi-agent reinforcement learning. Learning automata (LA) have recently been shown to be valuable tools for designing multi-agent reinforcement learning algorithms. In this paper, a model based on learning automata and the concept of entropy is proposed for finding optimal policies in stochastic games. In the proposed model, an S-model variable-structure learning automaton is placed at each state of the game environment for each agent and tries to learn the optimal action probabilities in that state. The number of actions of each learning automaton in a state is determined by the number of adjacent states, and every joint action corresponds to a transition to an adjacent state. The entropy of the probability vector of the next state's learning automaton is used to guide the learning process and improve learning performance; it also serves as a quantitative, problem-independent measure of learning progress. We have also implemented a version of the proposed algorithm that balances exploration with exploitation, yielding improved performance. Experimental results show that the proposed algorithm outperforms other learning algorithms in terms of cost and the speed of reaching the optimal policy.
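To make the ingredients of the abstract concrete, the following is a minimal sketch of an S-model variable-structure learning automaton together with the entropy of its action-probability vector. It is an illustration only, not the paper's exact formulation: the update rule shown is the standard S-model linear reward-inaction (S-L<sub>R-I</sub>) scheme, and the class name, method names, and learning-rate parameter `lam` are assumptions.

```python
import math
import random

class SModelLA:
    """S-model variable-structure learning automaton (S-LR-I update).

    The environment response beta lies in [0, 1]: beta = 0 is full
    reward, beta = 1 is full penalty. Entropy of the probability
    vector tends to 0 as the automaton converges to one action,
    giving a problem-independent measure of learning progress.
    """

    def __init__(self, n_actions: int, lam: float = 0.1):
        # Start from a uniform action-probability vector.
        self.p = [1.0 / n_actions] * n_actions
        self.lam = lam  # learning rate (assumed parameter)

    def choose(self) -> int:
        """Sample an action according to the current probabilities."""
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def update(self, action: int, beta: float) -> None:
        """S-LR-I: move probability toward the chosen action,
        scaled by the reward strength (1 - beta)."""
        reward = 1.0 - beta
        for j in range(len(self.p)):
            if j == action:
                self.p[j] += self.lam * reward * (1.0 - self.p[j])
            else:
                self.p[j] -= self.lam * reward * self.p[j]

    def entropy(self) -> float:
        """Shannon entropy of the action-probability vector (nats)."""
        return -sum(q * math.log(q) for q in self.p if q > 0)
```

In the paper's setting there would be one such automaton per state per agent, with the action set sized by the number of adjacent states; the entropy of the next state's automaton would then feed back into the learning signal.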