Simple reinforcement learning agents: Pareto beats Nash in an algorithmic game theory study |
| |
Authors: | Steven O. Kimbrough; Ming Lu |
| |
Affiliation: | (1) Operations & Information Management Department, The Wharton School, University of Pennsylvania, 500 Jon M. Huntsman Hall, Philadelphia, PA 19104-6340, USA |
| |
Abstract: | Repeated play in games by simple adaptive agents is investigated. The agents use Q-learning, a special form of reinforcement learning, to direct learning of behavioral strategies in a number of 2×2 games. The agents effectively maximize the total wealth extracted, which often leads to Pareto optimal outcomes. When the reward signals are sufficiently clear, Pareto optimal outcomes will largely be achieved. The effect can select Pareto optimal outcomes that are not Nash equilibria, and it can select Pareto optimal outcomes among Nash equilibria. |
| |
Acknowledgement: | This material is based upon work supported, in whole or in part, by NSF grant number SES-9709548. We wish to thank an anonymous referee for a number of very helpful suggestions. |
| |
Keywords: | Q-learning; algorithmic game theory; games; learning and games |
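The setting described in the abstract — independent Q-learners repeatedly playing a 2×2 game — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the payoff values (a standard Prisoner's Dilemma), the learning parameters, and the choice of state (the previous round's joint action) are all assumptions for the sake of the example.

```python
import random

# Hypothetical 2x2 Prisoner's Dilemma payoffs (row player, column player).
# Actions: 0 = cooperate, 1 = defect.
# (0, 0) is Pareto optimal but not a Nash equilibrium; (1, 1) is the Nash equilibrium.
PAYOFFS = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def train(rounds=20000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Two independent epsilon-greedy Q-learners in a repeated 2x2 game.

    State = previous joint action (None on the first round). Parameter
    values are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    states = list(PAYOFFS) + [None]
    # One Q-table per agent: state -> [Q(cooperate), Q(defect)].
    q = [{s: [0.0, 0.0] for s in states} for _ in range(2)]
    state = None
    for _ in range(rounds):
        acts = tuple(
            rng.randrange(2) if rng.random() < eps
            else max(range(2), key=lambda a: q[i][state][a])
            for i in range(2)
        )
        rewards = PAYOFFS[acts]
        for i in range(2):
            # Standard Q-learning update toward reward plus discounted
            # best continuation from the new state.
            best_next = max(q[i][acts])
            q[i][state][acts[i]] += alpha * (
                rewards[i] + gamma * best_next - q[i][state][acts[i]]
            )
        state = acts
    return q
```

Whether such learners settle on the Pareto optimal joint action or the Nash equilibrium depends on the game, the clarity of the reward signal, and the exploration schedule, which is the question the paper investigates.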
|