We introduce a generalization of smooth fictitious play with bounded $m$-memory strategies. We use this learning algorithm to prove a Folk theorem based on learning in repeated potential games. If a payoff profile is supported by an $m$-memory pure strategy subgame perfect equilibrium, then there is a non-zero probability of learning an $m$-memory strategy profile that is arbitrarily close to the desired equilibrium, so that the desired payoff profile is (approximately) achieved in an appropriate continuation game.
Our results thus establish a connection between Folk theorems and player learning in games.
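To fix ideas, the baseline dynamic can be illustrated with a minimal sketch of standard (memoryless) smooth fictitious play, in which each player logit-best-responds to the opponent's empirical action frequencies. This is only the classical version, not the paper's bounded $m$-memory generalization, and all names and parameter values below are illustrative assumptions.

```python
import numpy as np

def smooth_fictitious_play(A1, A2, beta=5.0, T=2000, seed=0):
    """Smooth fictitious play in a two-player game via logit best responses.

    A1[i, j]: payoff to player 1 when she plays i and player 2 plays j.
    A2[i, j]: payoff to player 2 in the same action profile.
    beta: precision of the logit (smoothed) best response.
    Returns each player's empirical mixed strategy after T rounds.
    """
    rng = np.random.default_rng(seed)
    n1, n2 = A1.shape
    # Empirical action counts, initialized with a uniform prior.
    c1, c2 = np.ones(n1), np.ones(n2)
    for _ in range(T):
        p1, p2 = c1 / c1.sum(), c2 / c2.sum()
        u1 = A1 @ p2      # player 1's expected payoff per action
        u2 = A2.T @ p1    # player 2's expected payoff per action
        # Logit best responses (max-shifted for numerical stability).
        br1 = np.exp(beta * (u1 - u1.max())); br1 /= br1.sum()
        br2 = np.exp(beta * (u2 - u2.max())); br2 /= br2.sum()
        c1[rng.choice(n1, p=br1)] += 1
        c2[rng.choice(n2, p=br2)] += 1
    return c1 / c1.sum(), c2 / c2.sum()

# A 2x2 coordination game, a simple example of a potential game.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
p1, p2 = smooth_fictitious_play(A, A)
```

In this coordination game the empirical play typically concentrates on one of the pure Nash equilibria, consistent with the convergence properties of fictitious-play dynamics in potential games.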
Short Bio: José Penalva is Associate Professor of Finance in the Business Department of the Universidad Carlos III and a fellow of the Oxford-Man Institute. He received his doctorate in economics from the University of California, Los Angeles (UCLA) in 1997. He currently teaches Information in Markets and Market Microstructure, and Financial Mathematics.
His research interests are the economics of information, with special emphasis on learning and financial applications, market microstructure, and risk assignment and distribution in insurance markets. He has published in prestigious international journals such as Econometrica, the Journal of Banking and Finance, the Quarterly Journal of Finance, the Review of Economic Dynamics, and the Journal of Risk and Insurance. He has also contributed to several edited volumes, and has co-authored (with Alvaro Cartea and Sebastian Jaimungal) the book "Algorithmic and High-Frequency Trading".