18:00–20:00 | Welcome Reception (Institut d'Estudis Catalans) / Registration
8:30–9:00 | Registration
9:00–10:00 | Unsupervised Learning; Dictionary Learning; Latent Variable Models (chair: Philippe Rigollet)
- New Algorithms for Learning Incoherent and Overcomplete Dictionaries by Sanjeev Arora, Rong Ge and Ankur Moitra
- Belief Propagation, Robust Reconstruction and Optimal Recovery of Block Models by Elchanan Mossel, Joe Neeman and Allan Sly (BEST PAPER AWARD)
- Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability by Aditya Bhaskara, Moses Charikar and Aravindan Vijayaraghavan
10:00–11:00 | Invited Talk: Michael Jordan
11:00–11:30 | Coffee Break
11:30–12:30 | Concentration (chair: Karthik Sridharan)
- Localized Complexities for Transductive Learning by Ilya Tolstikhin, Gilles Blanchard and Marius Kloft
- Learning without Concentration by Shahar Mendelson
- An Inequality with Applications to Structured Sparsity and Multitask Dictionary Learning by Andreas Maurer, Massimiliano Pontil and Bernardino Romera-Paredes
12:30–12:50 | Unsupervised Learning; Dictionary Learning; Latent Variable Models II
- (short) Edge Label Inference in Generalized Stochastic Block Models: from Spectral Theory to Impossibility Results by Jiaming Xu, Marc Lelarge and Laurent Massoulie
- (short) Learning Sparsely Used Overcomplete Dictionaries by Prateek Jain, Praneeth Netrapalli and Rashish Tandon
- (short) Density-preserving quantization with application to graph downsampling by Morteza Alamgir, Ulrike von Luxburg and Gabor Lugosi
- (short) A Convex Formulation for Mixed Regression: Minimax Optimal Rates by Yudong Chen, Xinyang Yi and Constantine Caramanis
12:50–14:50 | Lunch Break
14:50–16:10 | Statistical Learning Theory (chair: Peter Auer)
- Uniqueness of ordinal embedding by Matthäus Kleindessner and Ulrike von Luxburg (FINALIST BEST STUDENT PAPER)
- On the Consistency of Output Code Based Learning Algorithms for Multiclass Learning Problems by Harish Ramaswamy, Balaji S.B., Shivani Agarwal and Robert Williamson
- The complexity of learning halfspaces using generalized linear methods by Amit Daniely, Nati Linial and Shai Shalev-Shwartz (FINALIST BEST STUDENT PAPER)
- (short) Sample Compression for Multi-label Concept Classes by Rahim Samei, Pavel Semukhin, Boting Yang and Sandra Zilles
- (short) The sample complexity of agnostic learning under deterministic labels by Shai Ben-David and Ruth Urner
- (short) Bayes-Optimal Scorers for Bipartite Ranking by Aditya Menon and Robert Williamson
- (short) Elicitation and Identification of Properties by Ingo Steinwart, Chloe Pasin and Robert Williamson
16:10–16:40 | Coffee Break
16:40–17:40 | Unsupervised Learning; Mixture Models (chair: Bob Williamson)
- The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures by Joseph Anderson, Mikhail Belkin, Navin Goyal, Luis Rademacher and James Voss
- Faster and Sample Near-Optimal Algorithms for Proper Learning Mixtures of Gaussians by Constantinos Daskalakis and Gautam Kamath
- Learning Mixtures of Discrete Product Distributions using Spectral Decompositions by Prateek Jain and Sewoong Oh
17:40–18:40 | Impromptu Talks
9:00–10:40 | Online Learning (chair: Elad Hazan)
- Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations by Brendan McMahan and Francesco Orabona
- Follow the Leader with Dropout Perturbations by Tim van Erven, Wojciech Kotlowski and Manfred K. Warmuth
- A Second-order Bound with Excess Losses by Pierre Gaillard, Gilles Stoltz and Tim van Erven (FINALIST BEST STUDENT PAPER)
- Online Nonparametric Regression by Alexander Rakhlin and Karthik Sridharan
- (short) Learning with Perturbations via Gaussian Smoothing by Jacob Abernethy, Chansoo Lee, Abhinav Sinha and Ambuj Tewari
- (short) Online Learning with Composite Loss Functions Can Be Hard by Ofer Dekel, Jian Ding, Tomer Koren and Yuval Peres
- (short) Higher-Order Regret Bounds with Switching Costs by Eyal Gofer
- (short) Most Correlated Arms Identification by Che-Yu Liu and Sébastien Bubeck
10:40–11:15 | Coffee Break
11:15–12:15 | Invited Talk: Yishay Mansour
12:15–13:15 | Open Problems Session
13:15–15:15 | Lunch Break
15:15–16:15 | Statistical and Online Learning (chair: Sasha Rakhlin)
- Efficiency of conformalized ridge regression by Evgeny Burnaev and Vladimir Vovk
- Community Detection via Random and Adaptive Sampling by Se-Young Yun and Alexandre Proutiere
- Logistic Regression: Tight Bounds for Stochastic and Online Optimization by Kfir Levy, Elad Hazan and Tomer Koren
16:15–16:45 | Coffee Break
16:45–17:55 | Learning with Partial Feedback (chair: Sebastien Bubeck)
- Resourceful Contextual Bandits by Ashwinkumar Badanidiyuru, John Langford and Aleksandrs Slivkins
- On the Complexity of A/B Testing by Emilie Kaufmann, Olivier Cappé and Aurélien Garivier
- Finding a most biased coin with fewest flips by Karthekeyan Chandrasekaran and Richard M. Karp
- (short) Stochastic Regret Minimization via Thompson Sampling by Sudipto Guha and Kamesh Munagala
- (short) Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms by Stefan Magureanu, Richard Combes and Alexandre Proutière
17:55–19:45 | Poster Session
20:30 | Banquet
9:00–10:20 | Computational Learning Theory / Algorithmic Results (chair: Yishay Mansour)
- Near-Optimal Herding by Samira Samadi and Nick Harvey
- Learning Coverage Functions and Private Release of Marginals by Vitaly Feldman and Pravesh Kothari
- Distribution-Independent Reliable Learning by Varun Kanade and Justin Thaler
- Fast Matrix Completion Without the Condition Number by Moritz Hardt and Mary Wootters
10:20–10:50 | Coffee Break
10:50–11:30 | Computational Learning Theory / Lower Bounds (chair: Vitaly Feldman)
- Computational Limits for Matrix Completion by Moritz Hardt, Raghu Meka, Prasad Raghavendra and Benjamin Weitz
- Lower bounds on the performance of polynomial-time algorithms for sparse linear regression by Yuchen Zhang, Martin Wainwright and Michael Jordan
11:30–12:00 | Coffee Break
12:00–13:00 | Learning with Partial Feedback (chair: Nicolo Cesa-Bianchi)
- lil’ UCB: An Optimal Exploration Algorithm for Multi-Armed Bandits by Kevin Jamieson, Matthew Malloy, Robert Nowak and Sebastien Bubeck
- Multiarmed Bandits With Limited Expert Advice by Satyen Kale
- Volumetric Spanners: an Efficient Exploration Basis for Learning by Elad Hazan, Zohar Karnin and Raghu Meka
13:00–15:00 | Lunch Break / Business Meeting
15:00–16:20 | Statistical Learning Theory (chair: Shai Ben-David)
- Optimal Learners for Multiclass Problems by Amit Daniely and Shai Shalev-Shwartz
- Principal Component Analysis and Higher Correlations for Distributed Data by Ravindran Kannan, Santosh S. Vempala and David Woodruff
- The Geometry of Losses by Robert Williamson
- Sample Complexity Bounds on Differentially Private Learning via Communication Complexity by Vitaly Feldman and David Xiao
16:20–16:50 | Coffee Break
16:50–17:50 | Sequential Learning (chair: Ohad Shamir)
- Robust Multi-objective Learning with Mentor Feedback by Alekh Agarwal, Ashwin Badanidiyuru, Miroslav Dudik, Robert Schapire and Aleksandrs Slivkins
- Approachability in unknown games: Online learning meets multi-objective optimization by Shie Mannor, Vianney Perchet and Gilles Stoltz
- Compressed Counting Meets Compressed Sensing by Ping Li, Cun-Hui Zhang and Tong Zhang