Thursday, February 7, 2019
12:20pm - 1:20pm
Operations Research Seminar
Nonconvex Stochastic Optimization: from Conditional Gradient to Newton's Method

Developing algorithms with efficient non-asymptotic guarantees for nonconvex stochastic optimization has attracted great interest in recent years, motivated especially by applications in machine learning and data science. In particular, such algorithms underlie state-of-the-art approaches to training contemporary deep neural networks, reinforcement learning, low-rank matrix estimation, and simulation-based optimization.

In the first part of this talk, I will present several algorithmic frameworks, namely stochastic conditional gradient, (accelerated) gradient, and Newton's methods, for solving the aforementioned class of problems. I will also present zeroth-order variants of these algorithms, in which only noisy objective values are available, as sketched below. A particular emphasis will be on non-asymptotic guarantees for these algorithms.
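To illustrate the zeroth-order setting, here is a minimal sketch, not the speaker's algorithm, of a two-point Gaussian-smoothing gradient estimator that uses only noisy function evaluations, plugged into a plain stochastic gradient loop; the objective `noisy_f`, the smoothing radius `mu`, and the step size are illustrative assumptions.

```python
import numpy as np

def zeroth_order_gradient(noisy_f, x, mu=1e-2):
    """Two-point zeroth-order gradient estimate via Gaussian smoothing.

    Uses only noisy function values: g = (f(x + mu*u) - f(x)) / mu * u,
    where u is a standard Gaussian direction. This estimates the gradient
    of a smoothed version of the objective.
    """
    u = np.random.randn(*x.shape)
    return (noisy_f(x + mu * u) - noisy_f(x)) / mu * u

def zeroth_order_sgd(noisy_f, x0, step=1e-3, iters=1000):
    """Plain stochastic gradient descent driven by the zeroth-order estimate."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * zeroth_order_gradient(noisy_f, x)
    return x

# Illustrative usage on a noisy quadratic (hypothetical test problem).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_f = lambda x: np.sum(x ** 2) + 0.01 * rng.standard_normal()
    print(zeroth_order_sgd(noisy_f, x0=np.ones(5)))
```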

In the second part, I will focus specifically on a class of nested stochastic optimization problems, where the objective function is a composition of two expectations. I will present a Nested Averaging Stochastic Approximation (NASA) algorithm for solving this non-classical problem and show that its sample complexity matches the well-known complexity of the stochastic gradient method for classical single-level nonconvex stochastic optimization. Moreover, the convergence analysis of the NASA method is the same for both constrained and unconstrained problems; in particular, it does not require a batch of samples per iteration. This property makes NASA attractive for online learning, where samples arrive one by one.
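To make the problem class concrete, a nested objective of this kind, a composition of two expectations, can be written as follows; the notation is generic and chosen here for illustration, not necessarily the speaker's:

```latex
\min_{x \in X} \; f(x) = \phi\big(g(x)\big),
\qquad
\phi(y) = \mathbb{E}_{\xi}\big[\Phi(y,\xi)\big],
\qquad
g(x) = \mathbb{E}_{\eta}\big[G(x,\eta)\big],
```

where X is the feasible set (all of R^n in the unconstrained case) and the outer and inner expectations are accessible only through samples of the random variables ξ and η.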
Speaker: Saeed Ghadimi
Affiliation: Princeton University
Location: SCB 4125

