Prof. Yishay Mansour received his PhD from MIT in 1990, after which he was a postdoctoral fellow at Harvard and a Research Staff Member at the IBM T. J. Watson Research Center. Since 1992 he has been at Tel-Aviv University, where he is currently a Professor of Computer Science; he served as the first head of the Blavatnik School of Computer Science during 2000-2002. He is currently the director of the Israeli Center of Research Excellence in Algorithms.
Prof. Mansour holds a part-time position at Microsoft Research in Israel, and has held visiting positions at Bell Labs, AT&T Labs Research, IBM Research, and Google Research. He has mentored start-ups such as Riverhead (acquired by Cisco), Ghoonet, and Verix.
Prof. Mansour has published over 50 journal papers and over 100 proceedings papers in various areas of computer science, with special emphasis on communication networks, machine learning, and algorithmic game theory, and has supervised over a dozen graduate students in these areas.
Prof. Mansour is currently an associate editor of a number of distinguished journals and has served on numerous conference program committees. He was the program chair of COLT (1998) and has served on the COLT steering committee.
The “wisdom of the crowds” has become a hot topic in the last decade with the rapid adoption of the Internet. At the core of the phenomenon is the fact that users not only consume information but also produce it. This dual role of the users leads to a fundamental design question: how to incentivize users to explore (produce new information) rather than exploit (use existing information).
We provide a first step toward understanding this new aspect of the classical tradeoff between exploration and exploitation in the face of agents’ incentives. We study a novel model in which agents arrive sequentially, one after the other, and each in turn chooses one action from a fixed set of actions so as to maximize his expected reward given the information he possesses at the time of arrival. (More concretely, each agent faces a two-armed bandit problem and maximizes his own utility given the information he has observed.)
The information that becomes available affects an agent's incentive to explore and generate new information. We characterize the optimal disclosure policy of a planner whose goal is to maximize social welfare, and show that it is intuitive and very simple to implement. As the number of agents increases, the social welfare converges to the optimal welfare of the unconstrained mechanism, and the regret is bounded by a constant.
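To make the setting concrete, here is a minimal Python sketch of the core tension, under simplifying assumptions of my own (two arms with fixed but initially unknown rewards, and a planner who simply induces one early agent to explore before steering the rest to the empirically better arm). This is an illustration of the exploration/exploitation tradeoff, not the paper's actual optimal disclosure policy; all names (`simulate`, `explore`, etc.) are hypothetical.

```python
import random

def simulate(n_agents, explore=True, seed=0):
    """Toy two-armed bandit with sequentially arriving agents.

    Each arm has a fixed reward drawn once from U[0, 1], unknown in
    advance. Without a planner (explore=False), every myopic agent
    takes arm 0, the a-priori default, so arm 1 is never tried.
    With a planner (explore=True), one early agent is steered to
    arm 1; afterwards everyone is pointed at the better arm, so the
    total regret comes from a single step and stays constant in n.
    Returns the average welfare (mean reward per agent).
    """
    rng = random.Random(seed)
    r = [rng.uniform(0, 1), rng.uniform(0, 1)]  # fixed, unknown arm rewards
    total = 0.0
    for t in range(n_agents):
        if not explore:
            a = 0                         # myopic herd: default arm forever
        elif t == 0:
            a = 0                         # first agent tries the default arm
        elif t == 1:
            a = 1                         # planner induces one exploration
        else:
            a = 0 if r[0] >= r[1] else 1  # recommend the empirically better arm
        total += r[a]
    return total / n_agents

# Averaged over many reward draws, inducing a single exploration step
# yields higher welfare than letting every agent act myopically.
avg_planner = sum(simulate(500, explore=True, seed=s) for s in range(100)) / 100
avg_myopic = sum(simulate(500, explore=False, seed=s) for s in range(100)) / 100
```

Note how this mirrors the abstract's claim in miniature: the per-run cumulative regret of the exploring policy is at most the reward gap of one step (a constant), so the average welfare approaches the unconstrained optimum as the number of agents grows.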
[Based on a joint work with Ilan Kremer and Motty Perry.]