David Brown, Duke University

Index Policies and Performance Bounds for Dynamic Selection Problems
Date
Dec 5, 2017, 4:30 pm – 5:30 pm
Location
101 - Sherrerd Hall
Event Description

Abstract: We consider dynamic selection problems, where a decision maker repeatedly selects a set of items from a larger collection of available items. A classic example is the dynamic assortment problem, where a retailer chooses items to offer for sale subject to a display space constraint. The retailer may adjust the chosen assortment over time in response to the observed demand. These dynamic selection problems are naturally formulated as stochastic dynamic programs (DPs) but are difficult to solve because optimal selection decisions depend on the states of all items. In this paper, we study heuristic policies for dynamic selection problems and provide upper bounds on the performance of an optimal policy that can be used to assess the performance of a heuristic policy. The policies and bounds that we consider are based on a Lagrangian relaxation of the DP that relaxes the constraint limiting the number of items that may be selected. We characterize the performance of the optimal Lagrangian index policy and bound and show that, under mild conditions, these policies and bounds are both asymptotically optimal for problems with many items; tiebreaking plays an essential role in the analysis of these index policies and has a surprising impact on performance. We also develop an efficient cutting-plane method for solving the Lagrangian dual problem and develop an information relaxation bound that improves on the standard Lagrangian bound. We demonstrate these policies and bounds in two large-scale examples: a dynamic assortment problem with demand learning and an applicant screening problem. [Joint work with Jim Smith (Duke).]
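A minimal sketch of the kind of Lagrangian relaxation described in the abstract, under assumed notation (infinite-horizon discounted rewards r_i, discount factor β, n items with states x_i, and at most m items selected per period; none of this notation is taken from the paper itself): dualizing the selection constraint with a multiplier λ ≥ 0 lets the relaxed DP decouple into single-item problems, and the relaxed value function upper-bounds the optimal value for every λ ≥ 0.

% Illustrative sketch only; notation (r_i, beta, m, n, x_i) is assumed, not from the paper.
% Relaxed DP: the cardinality constraint sum_i a_i <= m is moved into the objective with multiplier lambda >= 0.
\[
V^{\lambda}(x) \;=\; \max_{a \in \{0,1\}^n}\;
\mathbb{E}\!\left[\,\sum_{i=1}^{n} r_i(x_i, a_i)
\;-\; \lambda\!\left(\sum_{i=1}^{n} a_i - m\right)
\;+\; \beta\, V^{\lambda}(x')\right]
\]
% The relaxed problem decouples across items into single-item DPs:
\[
V^{\lambda}(x) \;=\; \frac{\lambda m}{1-\beta} \;+\; \sum_{i=1}^{n} V_i^{\lambda}(x_i),
\qquad
V_i^{\lambda}(x_i) \;=\; \max_{a_i \in \{0,1\}}\,
\mathbb{E}\big[\, r_i(x_i, a_i) - \lambda a_i + \beta\, V_i^{\lambda}(x_i') \,\big].
\]
% For any lambda >= 0, V^lambda(x) >= V^*(x), giving an upper (performance) bound;
% the Lagrangian dual problem is min_{lambda >= 0} V^lambda(x).

The single-item value functions V_i^λ then suggest index-style selection rules of the kind analyzed in the talk, and minimizing V^λ over λ ≥ 0 (the Lagrangian dual) tightens the performance bound.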

Event Category
ORFE Department Colloquia