Adaptive Sampling Methods for Stochastic Optimization
Tuesday, November 24, 2020
3:30PM – 5PM
Zoom Meeting

Raghu Bollapragada

Nonlinear stochastic optimization problems arise in a wide range of applications, from acoustic/geophysical inversion to deep learning. The scale, computational cost, and difficulty of these models make classical optimization techniques impractical. To address these challenges, we have developed new optimization methods that are also well suited to distributed computing implementations. Our techniques employ adaptive sampling strategies that gradually increase the accuracy of the step computation in order to achieve efficiency and scalability, and they incorporate second-order information by exploiting the stochastic nature of the problem. We provide global convergence rates for strongly convex and non-convex functions, and illustrate our algorithms' performance on large-scale machine learning models.
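To make the adaptive sampling idea concrete, here is a minimal, hypothetical sketch of a stochastic gradient loop whose sample size grows when the sample variance of the gradients is large relative to the squared gradient norm (a norm-test-style condition). This is an illustration only, not the speaker's actual algorithm; all parameter names and the test problem are assumptions.

```python
import numpy as np

def adaptive_sampling_sgd(grad_sample, x0, n_iters=100, batch0=8,
                          theta=0.9, alpha=0.1, max_batch=4096, seed=0):
    """Illustrative adaptive-sampling gradient method (hypothetical parameters).

    grad_sample(x, rng, m) returns an (m, d) array of per-sample gradients.
    The batch size grows whenever the sample variance dominates the squared
    norm of the mini-batch gradient estimate.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = batch0
    for _ in range(n_iters):
        g = grad_sample(x, rng, m)          # (m, d) per-sample gradients
        g_bar = g.mean(axis=0)              # mini-batch gradient estimate
        var = g.var(axis=0, ddof=1).sum()   # trace of the sample covariance
        # Norm-test-style condition: increase accuracy if noise dominates.
        if var / m > theta**2 * np.dot(g_bar, g_bar):
            m = min(2 * m, max_batch)
        x = x - alpha * g_bar               # plain gradient step
    return x, m

# Toy stochastic problem: minimize E[||x - (1 + noise)||^2]; minimizer near 1.
def grad_sample(x, rng, m):
    noise = rng.normal(scale=0.5, size=(m, x.size))
    return 2.0 * (x - (1.0 + noise))

x_star, final_m = adaptive_sampling_sgd(grad_sample, np.zeros(2))
```

As the iterates approach the minimizer, the gradient norm shrinks while the sampling noise does not, so the test fires and the batch size increases, trading per-step cost for step accuracy.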

Raghu Bollapragada is an assistant professor in the Operations Research and Industrial Engineering graduate program at the University of Texas at Austin (UT). Before joining UT, he was a postdoctoral researcher in the Mathematics and Computer Science Division at Argonne National Laboratory. He received his PhD and MS degrees in Industrial Engineering and Management Sciences from Northwestern University. During his graduate studies, he was a visiting researcher at INRIA, Paris. His current research interests are in nonlinear optimization and its applications in machine learning. He has received the IEMS Nemhauser Dissertation Award for best dissertation, the IEMS Arthur P. Hurter Award for outstanding academic excellence, the McCormick terminal year fellowship for outstanding terminal-year PhD candidates, and the Walter P. Murphy Fellowship at Northwestern University.

**Note: Please join this Zoom seminar online with the "Audio Only" function (no video)**

For questions, please contact:

Hosted by Omar Ghattas

Live Zoom Meeting Link: Click Here to Watch