Sebastian Jaimungal, University of Toronto

Algorithmic Trading and Deep Q-Learning for Nash Equilibria
Date
Oct 16, 2019, 4:30 pm - 5:30 pm
Location
101 - Sherrerd Hall
Event Description

Model-free learning for multi-agent stochastic games is an active area of research. Existing reinforcement learning algorithms, however, are often restricted to zero-sum games, and are applicable only in small state-action spaces or other simplified settings. Here, we develop a new data-efficient deep Q-learning methodology for model-free learning of Nash equilibria in general-sum stochastic games. The algorithm uses a local linear-quadratic expansion of the stochastic game, which leads to analytically solvable optimal actions. The expansion is parametrized by deep neural networks to give it sufficient flexibility to learn the environment without the need to experience all state-action pairs. We study symmetry properties of the algorithm stemming from label-invariant stochastic games and, as a proof of concept, apply our algorithm to learning optimal trading strategies in competitive electronic markets.
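
To give a flavor of the approach, here is a minimal single-agent sketch (in PyTorch, not the speaker's code) of a Q-function that is locally quadratic in the action with neural-network-parametrized coefficients, so its maximizer is available in closed form; the class name LocalLQQNetwork and all architecture choices are illustrative assumptions.

```python
# Illustrative sketch only: a Q-function of the form
#   Q(s, a) = V(s) - 0.5 * (a - mu(s))^T P(s) (a - mu(s)),
# where V, mu, and P come from a neural network. Because P(s) is kept
# positive semidefinite, the maximizing action is simply a* = mu(s).
import torch
import torch.nn as nn

class LocalLQQNetwork(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.action_dim = action_dim
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.v_head = nn.Linear(hidden, 1)            # state value V(s)
        self.mu_head = nn.Linear(hidden, action_dim)  # analytic maximizer mu(s)
        # Cholesky factor of the curvature matrix P(s)
        self.l_head = nn.Linear(hidden, action_dim * action_dim)

    def forward(self, state: torch.Tensor, action: torch.Tensor):
        h = self.body(state)
        v = self.v_head(h)
        mu = self.mu_head(h)
        l = self.l_head(h).view(-1, self.action_dim, self.action_dim)
        l = torch.tril(l)                 # lower-triangular factor L
        p = l @ l.transpose(-1, -2)       # P(s) = L L^T is positive semidefinite
        diff = (action - mu).unsqueeze(-1)
        quad = 0.5 * (diff.transpose(-1, -2) @ p @ diff).squeeze(-1)
        return v - quad, mu               # Q(s, a) and its closed-form argmax
```

In the general-sum, multi-agent setting of the talk, each agent carries such a quadratic expansion in the joint action, and the equilibrium actions are obtained analytically from the resulting coupled first-order conditions rather than from a single argmax.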

Bio: Sebastian is a Full Professor of Mathematical Finance in the University of Toronto's Department of Statistical Sciences, where he directs the Master of Financial Insurance program. He is the current chair of the SIAM Activity Group on Financial Mathematics and Engineering, serves on the editorial boards of the SIAM Journal on Financial Mathematics and Quantitative Finance, among others, and is a Fields-CQAM lab leader. Sebastian's current research interests span applied stochastic control and games, algorithmic trading, mean-field games, and machine learning.