Georgina Hall, a fifth-year Ph.D. student and a Gordon Y. S. Wu Fellow in the Department of Operations Research and Financial Engineering, was awarded the 2016 INFORMS Computing Society (ICS) Student Paper Award for her paper, "DC Decomposition of Nonconvex Polynomials with Algebraic Techniques." The award is given annually to the best paper on computing and operations research by a student author, as judged by an ICS panel. She is advised by Assistant Professor Amir Ali Ahmadi.

We consider the optimal learning problem of optimizing an expensive function with a known parametric form but unknown parameters. Observations of the function, which may come from simulations, laboratory experiments, or field experiments, are both expensive and noisy.

We consider the problem of estimating the expected value of information for Bayesian learning problems where the belief model is nonlinear in the parameters. Our goal is to maximize a given performance metric while simultaneously learning the unknown parameters of the nonlinear belief model, by guiding an expensive sequential experimentation process.

We present a technique for adaptively choosing a sequence of experiments for materials design and optimization. Specifically, we consider the problem of identifying the experimental control variables that optimize the kinetic stability of a nanoemulsion, which we formulate as a ranking and selection problem.
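A standard way to guide such a ranking and selection problem is the knowledge-gradient policy for independent normal beliefs, which scores each alternative by the expected improvement in the best posterior mean from one more measurement. The sketch below is illustrative, not the paper's exact method; the function name `kg_factors` and the example numbers are hypothetical, and the formula is the well-known independent-normal KG factor of Frazier, Powell, and Dayanik (2008).

```python
import math

def kg_factors(mu, sigma, sigma_w):
    """Knowledge-gradient score for measuring each alternative once, under
    independent normal beliefs N(mu[i], sigma[i]^2) and measurement noise
    with standard deviation sigma_w."""
    phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)   # normal pdf
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))          # normal cdf
    f = lambda z: z * Phi(z) + phi(z)
    scores = []
    for i in range(len(mu)):
        # reduction in posterior std from one more observation of alternative i
        sigma_tilde = sigma[i] ** 2 / math.sqrt(sigma[i] ** 2 + sigma_w ** 2)
        best_other = max(mu[j] for j in range(len(mu)) if j != i)
        z = -abs(mu[i] - best_other) / sigma_tilde
        scores.append(sigma_tilde * f(z))
    return scores

# hypothetical example: three alternatives, measure the highest-scoring one
mu = [1.0, 1.2, 0.8]
sigma = [0.5, 0.1, 0.5]
scores = kg_factors(mu, sigma, sigma_w=0.3)
choice = max(range(len(mu)), key=scores.__getitem__)
```

In this example the policy prefers alternative 0: it is both uncertain and close in mean to the current best, so a measurement there carries the most information, while the well-estimated alternative 1 scores near zero.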

We consider the choices and subsequent costs associated with ensemble averaging and extrapolating experimental measurements in the context of optimizing material properties using Optimal Learning (OL). We demonstrate how these two general techniques lead to a trade-off between measurement error and experimental costs, and incorporate this trade-off in the OL framework.
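The basic arithmetic behind the ensemble-averaging side of this trade-off can be sketched in a few lines: averaging m i.i.d. measurements shrinks the noise standard deviation by a factor of sqrt(m) while multiplying the cost by m. The function names below are hypothetical, not from the paper.

```python
import math

def averaged_measurement(sigma, unit_cost, m):
    """Averaging m i.i.d. measurements with noise std `sigma` yields an
    effective noise std of sigma / sqrt(m) at a total cost of m * unit_cost."""
    return sigma / math.sqrt(m), m * unit_cost

def replicates_for_target(sigma, target):
    """Smallest ensemble size whose averaged noise std is at most `target`."""
    return math.ceil((sigma / target) ** 2)
```

For instance, halving the noise of a measurement with unit noise requires four replicates, quadrupling the experimental cost; it is exactly this coupling that the OL framework must weigh against the value of more precise information.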


In this paper, we cast online PCA as a stochastic nonconvex optimization problem, and we analyze the online PCA algorithm as a stochastic approximation iteration.

I study the problem of learning the unknown parameters of an expensive function whose true underlying surface can be described by a quadratic polynomial. The motivation is that even though the optimal region of most functions may be unknown, it can still be well approximated by a quadratic function.
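In one dimension, fitting the unknown quadratic parameters from noisy observations and locating its stationary point is a small least-squares problem. The sketch below is illustrative only (the function name is hypothetical, and the paper's belief model is Bayesian rather than a plain least-squares fit).

```python
import numpy as np

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 to noisy observations.
    Returns the coefficients (a, b, c) and the stationary point -b / (2c),
    which is the optimum when c < 0 (maximum) or c > 0 (minimum)."""
    X = np.column_stack([np.ones_like(xs), xs, xs ** 2])  # design matrix
    a, b, c = np.linalg.lstsq(X, ys, rcond=None)[0]
    return (a, b, c), -b / (2 * c)
```

Once the surface is estimated, the stationary point gives the implied optimal region, which is exactly the quantity a sequential experimentation policy would steer its measurements toward.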

We research how to help laboratory scientists discover new science through the use of computers, data analysis, machine learning and decision theory. We collaborate with experimentalist teams trying to optimize material properties, or to discover novel materials, using the framework of Optimal Learning, guided by domain expert knowledge and relevant physical modeling.

Our problem is motivated by healthcare applications where the high sparsity of the data and the relatively small number of patients make learning more difficult. By adapting an online boosting framework, we develop a knowledge-gradient (KG) type policy that guides the experiments by maximizing the expected value of information from labeling each alternative, in order to reduce the number of expensive physical experiments.

We derive the first finite-time bounds for a knowledge gradient policy. We also introduce the Modular Optimal Learning Testing Environment (MOLTE), which provides a highly flexible environment for testing a range of learning policies on a library of test problems.