Oden Institute Past Seminars

In-person seminars are held Tuesdays and Thursdays in POB 6.304 from 3:30-5:00 pm, unless otherwise noted. Zoom seminars usually occur during the same time period. Speakers include scientists, researchers, visiting scholars, potential faculty, and Oden Institute/UT Faculty or staff. Everyone is welcome to attend. Refreshments are served at in-person seminars at 3:15 pm.

Tuesday, Dec 8

ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning

Tuesday, Dec 8, 2020 from 3:30PM to 5PM | Zoom Meeting

  • Additional Information

    Hosted by George Biros

    Sponsor: Oden Institute Virtual Seminar

    Speaker: Michael Mahoney

    Speaker Affiliation: Professor, ICSI and Department of Statistics, UC Berkeley

  • Abstract

    Second order optimization algorithms have a long history in scientific computing, but they tend not to be used much in machine learning. This is in spite of the fact that they gracefully handle step size issues, poor conditioning, communication-computation tradeoffs, etc., all problems which are increasingly important in large-scale and high performance machine learning. A large part of the reason is that their implementation requires some care (a good implementation is not possible in a few lines of Python after taking a data science boot camp), and a naive implementation typically performs worse than heavily parameterized/hyperparameterized stochastic first order methods. We describe ADAHESSIAN, a second order stochastic optimization algorithm which dynamically incorporates the curvature of the loss function via ADAptive estimates of the Hessian. ADAHESSIAN includes several novel performance-improving features: (i) a fast Hutchinson-based method to approximate the curvature matrix with low computational overhead; (ii) spatial averaging to reduce the variance of the second derivative; and (iii) a root-mean-square exponential moving average to smooth out variations of the second derivative across iterations. Extensive tests on natural language processing, computer vision, and recommendation system tasks demonstrate that ADAHESSIAN achieves state-of-the-art results. The cost per iteration of ADAHESSIAN is comparable to that of first-order methods, and ADAHESSIAN exhibits improved robustness to variations in hyperparameter values.

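    The Hutchinson trick in (i) is easy to state: for a Rademacher vector z (entries ±1), the expectation of z ⊙ (Hz) equals the diagonal of H, and Hz can be formed by automatic differentiation without ever building H. The sketch below, which is not the authors' code, checks this on a toy quadratic whose Hessian is known; the sizes and sample count are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        M = rng.standard_normal((n, n))
        H = M @ M.T / n                    # known SPD Hessian of f(x) = 0.5 x^T H x

        def hessian_diag_estimate(hvp, n, num_samples=500):
            """Hutchinson estimate of diag(H) from Hessian-vector products."""
            est = np.zeros(n)
            for _ in range(num_samples):
                z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
                est += z * hvp(z)                     # z * (Hz) has mean diag(H)
            return est / num_samples

        # In training code, hvp would come from autodiff (Pearlmutter's trick);
        # the toy quadratic lets us write it explicitly and check the answer.
        d = hessian_diag_estimate(lambda z: H @ z, n)
        print("relative error:",
              np.linalg.norm(d - np.diag(H)) / np.linalg.norm(np.diag(H)))

    ADAHESSIAN then smooths this noisy diagonal with the spatial averaging of (ii) and the moving average of (iii) before using it to rescale the gradient step.
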
    Bio
    Michael W. Mahoney is a Professor at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. Among other things, he is on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council's Committee on the Analysis of Massive Data, he co-organized the Simons Institute's fall 2013 and 2018 programs on the foundations of data science, and he runs the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets. He is currently the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley. He holds several patents for work done at Yahoo Research and as Lead Data Scientist for Vieu Labs, Inc., a startup reimagining consumer video for billions of users. More information is available at https://www.stat.berkeley.edu/~mmahoney/.



Friday, Dec 4

An Efficient 3D Direct Solver for the Variable Coefficient Helmholtz Equation

Friday, Dec 4, 2020 from 1PM to 2PM | Zoom Meeting

  • Additional Information

    Hosted by Shane McQuarrie

    Sponsor: Oden Institute Virtual Seminar - CSEM Student Forum series

    Speaker: Anna Yesypenko

    Speaker Affiliation: CSEM PhD student, Oden Institute, UT Austin

  • Abstract

    We present an efficient direct solver for the variable-coefficient Helmholtz equation discretized on rectangular domains, with applications to electromagnetic and acoustic scattering. The solver is based on a domain decomposition approach, where the domain is partitioned into a collection of thin strips for 2D problems, or thin slabs for 3D problems. Competitive computational complexity is attained by using a hierarchical technique both for the initial computation within each slab, and for the merger of all the slabs to build a global solution operator. The new solver is designed to work well for local high-order discretizations, to parallelize well, and to attain high practical performance on modern latency-bound architectures.

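    To make the strip decomposition concrete, the sketch below assembles a small 2D finite-difference Helmholtz problem and eliminates it strip by strip with a block-Thomas sweep. It illustrates only the ordering of the computation; it omits the hierarchical compression of the strip operators that gives the solver in the talk its competitive complexity, and the grid size and coefficient field are made up for illustration.

        import numpy as np

        n = 40                                    # interior grid points per direction
        h = 1.0 / (n + 1)
        xs = np.linspace(h, 1 - h, n)
        kappa2 = (16.0 * (1.0 + 0.5 * np.sin(2 * np.pi * xs)))**2  # variable coefficient

        # One strip = one grid column; strips couple only to their neighbors,
        # so the global matrix is block tridiagonal.
        T = (np.diag(np.full(n, -4.0)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / h**2
        B = np.eye(n) / h**2                      # coupling to adjacent strips
        A = [T + kappa2[j] * np.eye(n) for j in range(n)]

        rng = np.random.default_rng(1)
        f = [rng.standard_normal(n) for _ in range(n)]

        # Forward sweep (block Thomas): eliminate strip by strip.
        D, y = [A[0]], [f[0]]
        for j in range(1, n):
            D.append(A[j] - B @ np.linalg.solve(D[j - 1], B))
            y.append(f[j] - B @ np.linalg.solve(D[j - 1], y[j - 1]))

        # Backward sweep: recover the solution strip by strip.
        u = [np.empty(n)] * n
        u[n - 1] = np.linalg.solve(D[n - 1], y[n - 1])
        for j in range(n - 2, -1, -1):
            u[j] = np.linalg.solve(D[j], y[j] - B @ u[j + 1])

        # Verify against the block-tridiagonal equations.
        res = max(np.linalg.norm((B @ u[j - 1] if j else 0) + A[j] @ u[j]
                                 + (B @ u[j + 1] if j < n - 1 else 0) - f[j])
                  for j in range(n))
        print("max strip residual:", res)

    The hierarchical part of the talk's solver replaces the dense blocks D_j with rank-structured approximations so that the same sweep scales to thin slabs in 3D.
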
    Bio
    Anna Yesypenko is a 4th year PhD student advised by Professor Per-Gunnar Martinsson. She holds a Bachelor's degree in Computer Science from Cornell University and a Master's degree in CSEM from the University of Texas at Austin.

    The CSEM Student Forum is a seminar series given by current CSEM graduate students to their peers. The aim of the forum is to expose students to each other's research, encourage collaboration, and provide opportunities to practice presentation skills. First- and second-year CSEM students receive seminar credit for attending.



Thursday, Dec 3

Towards simulations on the Exascale hardware and beyond

Thursday, Dec 3, 2020 from 3:30PM to 5PM | Zoom Meeting

  • Additional Information

    Hosted by George Biros

    Sponsor: Oden Institute Virtual Seminar

    Speaker: Sivasankaran (Siva) Rajamanickam

    Speaker Affiliation: Sandia National Laboratories, Albuquerque, NM

  • Abstract

    This talk will cover two paths that we are pursuing to adapt scientific simulations as we march towards exascale systems. The first path relies on adapting codes to be performance portable. This talk will cover the Kokkos ecosystem as a programming model, and then key design principles for developing high-performance scientific simulations that are also portable using Kokkos. I will show examples from the Exascale Computing Project.

    The second path is to pursue novel machine learning techniques that build on recent advances, advances that have helped in tasks such as image classification and board games. The question we are all interested in is whether, and how, machine learning can also advance science. This talk will cover a portion of this space from the perspective of applications and computer architectures. I will cover recent advances that enabled us to use machine learning within a materials science workflow; specifically, our recent ML model that serves as a surrogate for Density Functional Theory calculations. Finally, I will cover recent advances in computer architectures focused on machine learning, and the impact of such architectures on future performance.

    Bio
    Siva Rajamanickam has a PhD in Computer Science and Engineering from the University of Florida. He is a principal member of technical staff in the Scalable Algorithms department at the Center for Computing Research at Sandia National Laboratories. His focus is at the intersection of high performance computing, combinatorial scientific computing, graph algorithms, and machine learning. He is the Sandia PI for ARIAA, a co-design center funded by the Office of Science's Advanced Scientific Computing Research program that focuses on upcoming machine learning accelerators. Dr. Rajamanickam also leads a Sandia LDRD on accelerating material simulations with machine learning, and he is part of the Exascale Computing Project's (ECP) ExaLearn project, which addresses the machine learning needs of ECP applications. On the scientific computing side, he leads the linear solver efforts in Sandia's Trilinos framework and the Kokkos Kernels library for performance portability.



Tuesday, Dec 1

State Estimation for Parabolic Problems

Tuesday, Dec 1, 2020 from 3:30PM to 5PM | Zoom Meeting

  • Additional Information

    Hosted by Leszek Demkowicz

    Sponsor: Oden Institute Virtual Seminar

    Speaker: Wolfgang Dahmen

    Speaker Affiliation: Professor, SmartState Endowed Chair, Mathematics Center in Data Analysis, Simulation, Imaging and Visualization, University of South Carolina

  • Abstract

    State estimation, or data assimilation, is about estimating "physical states" of interest from two sources of partial information: data produced by external sensors, and a (typically incomplete or uncalibrated) background model given in terms of a partial differential equation. In this talk we focus on states that ideally satisfy a parabolic equation with known right-hand side but unknown initial values. Additional partial information is given in terms of data that represent the unknown state in a subdomain of the whole space-time cylinder up to a fixed time horizon. Recovering the state from this information is known to be a (mildly) ill-posed problem. Earlier contributions employ mesh-dependent regularizations in a fully discrete setting. In contrast, we start from a regularized least-squares formulation on an infinite-dimensional level that respects a stable space-time variational formulation of the parabolic problem. We argue that this has several principal advantages. First, it allows one to disentangle regularization and discretization parameters and to identify reasonable "target objects" even in the presence of inconsistent data. More specifically, exploiting the equivalence of errors and residuals in appropriate norms, given by the variational formulation, we derive rigorous, computable a posteriori error bounds that quantify the uncertainties of numerical outcomes. Moreover, these quantities provide stopping criteria for an iterative Schur complement solver that is shown to exhibit optimal performance. Finally, the formulation gives rise to estimates for consistency errors and suggests a "doubly nested" iteration striving for an optimal balance of regularization and discretization. The theoretical results are illustrated by numerical tests, including tests with inconsistent data. If time permits, we indicate possible combinations with reduced basis techniques and related recovery methods or stochastic interpretations.

    This is joint work with R. Stevenson and J. Westerdiep.

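    As a toy version of the regularized least-squares formulation (not the space-time variational method of the talk), one can recover the initial value of a 1D heat equation from observations on a subdomain by solving a Tikhonov problem. The forward map, observation window, noise level, and regularization weight below are all illustrative assumptions.

        import numpy as np

        n, nt = 60, 40                            # space and time resolution
        h, dt = 1.0 / (n + 1), 0.0125
        lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1)) / h**2
        step = np.linalg.inv(np.eye(n) - dt * lap)   # implicit Euler propagator

        obs = slice(10, 25)                       # sensors see only this subdomain
        blocks, P = [], np.eye(n)
        for _ in range(nt):
            P = step @ P                          # u(t_k) = P @ u0
            blocks.append(P[obs, :])              # observed part of the state
        G = np.vstack(blocks)                     # maps u0 to all observations

        x = np.linspace(h, 1 - h, n)
        u0_true = np.exp(-100 * (x - 0.35)**2) + 0.5 * np.exp(-200 * (x - 0.7)**2)
        rng = np.random.default_rng(2)
        data = G @ u0_true + 1e-3 * rng.standard_normal(G.shape[0])  # noisy data

        # Tikhonov-regularized least squares for the unknown initial state.
        alpha = 1e-6                              # regularization weight (assumed)
        u0_hat = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ data)
        print("relative error in recovered u0:",
              np.linalg.norm(u0_hat - u0_true) / np.linalg.norm(u0_true))

    Because the heat flow damps high frequencies of u0, the reconstruction is only as good as the regularization allows; the a posteriori bounds in the talk quantify exactly this kind of uncertainty.
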
    Bio
    After receiving his habilitation at the University of Bonn, Germany, in 1981, Wolfgang Dahmen held professorships at the University of Bielefeld, the Free University of Berlin, and RWTH Aachen. Since August 2017 he has held the Williams-Hedberg-Hedberg Chair in Mathematics at the University of South Carolina. In 2002 he was awarded the Gottfried Wilhelm Leibniz Prize, in 2009 he was elected to the German National Academy of Sciences Leopoldina, and in 2011 he received, together with P. Binev and T. Vogt, a Keck Futures Initiative Award of the US National Academies. In 2019 he became a SIAM Fellow and was awarded the Robert Piloty Prize of the Technical University of Darmstadt. Since 2003 he has been co-Editor-in-Chief of Constructive Approximation.

    (Note: Please join this Zoom seminar online with the "Audio Only" function (no video).)


Tuesday, Dec 1

Introducing a dedicated home for computational science from Nature Research

Tuesday, Dec 1, 2020 from 12PM to 1PM | Zoom Meeting

  • Additional Information

    Hosted by Karen Willcox

    Sponsor: Oden Institute Virtual Seminar

    Speaker: Fernando Chirigati

    Speaker Affiliation: Chief Editor, Nature Computational Science

  • Abstract

    Nature Computational Science, launching Jan 2021, will be a multidisciplinary journal that focuses on publishing research related to the development and use of computational techniques and mathematical models, as well as to their novel application to address complex problems across a range of scientific disciplines. In this talk, Fernando Chirigati, the chief editor at Nature Computational Science, will provide more information about the aims and scope of the journal, the editorial process, and his vision for this journal in the growing field of computational science.

    Bio
    Fernando Chirigati is the chief editor at Nature Computational Science, a new journal from Nature Research launching in January 2021. Before that, he was a Postdoctoral Research Associate at New York University, where his research focused mainly on scientific data management applied to different real-world applications, including provenance management and analytics, large-scale data analytics, data mining, computational reproducibility, and data visualization. He has a PhD in Computer Science from New York University and a BE in Computer and Information Engineering from the Federal University of Rio de Janeiro, Brazil.

    (Note: This seminar is being held at a different time.)



Tuesday, Nov 24

Adaptive Sampling Methods for Stochastic Optimization

Tuesday, Nov 24, 2020 from 3:30PM to 5PM | Zoom Meeting

  • Additional Information

    Hosted by Omar Ghattas

    Sponsor: Oden Institute Virtual Seminar

    Speaker: Raghu Bollapragada

    Speaker Affiliation: Operations Research & Industrial Engineering, UT Austin

  • Abstract

    Nonlinear stochastic optimization problems arise in a wide range of applications, from acoustic/geophysical inversion to deep learning. The scale, computational cost, and difficulty of these models make classical optimization techniques impractical. To address these challenges, we have developed new optimization methods that are also well suited to distributed computing implementations. Our techniques employ adaptive sampling strategies that gradually increase the accuracy of the step computation in order to achieve efficiency and scalability, and they incorporate second-order information by exploiting the stochastic nature of the problem. We provide global convergence rates for strongly convex and non-convex functions, and illustrate our algorithm's performance on large-scale machine learning models.

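    One widely used ingredient of this line of work is to grow the sample size whenever the sampled gradient is too noisy relative to its own length, in the spirit of the norm and inner-product tests studied by the speaker and collaborators. The sketch below applies such a rule to a toy least-squares problem; the constants, growth factor, and step size are illustrative assumptions, not the talk's methods verbatim.

        import numpy as np

        rng = np.random.default_rng(3)
        N, d = 5000, 20
        Adat = rng.standard_normal((N, d))
        x_star = rng.standard_normal(d)
        b = Adat @ x_star + 0.1 * rng.standard_normal(N)  # f = mean of 0.5*(a_i^T x - b_i)^2

        def sample_grads(x, idx):
            r = Adat[idx] @ x - b[idx]
            return Adat[idx] * r[:, None]         # per-sample gradients

        x = np.zeros(d)
        batch, theta, lr = 32, 0.9, 0.05
        for it in range(200):
            idx = rng.choice(N, size=batch, replace=False)
            g_i = sample_grads(x, idx)
            g = g_i.mean(axis=0)
            # Norm test: if the variance of the gradient estimate exceeds
            # theta^2 * ||g||^2, the sample is too small; enlarge it.
            var_of_mean = g_i.var(axis=0, ddof=1).sum() / batch
            if var_of_mean > theta**2 * np.dot(g, g):
                batch = min(N, int(1.5 * batch))
            x -= lr * g
        print("batch size at exit:", batch,
              " parameter error:", np.linalg.norm(x - x_star))

    Early iterations get by with tiny, cheap batches; as the gradient shrinks near the solution, the test forces larger samples, which is what delivers the fast global convergence rates described in the abstract.
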
    Bio
    Raghu Bollapragada is an assistant professor in the Operations Research and Industrial Engineering graduate program at the University of Texas at Austin (UT). Before joining UT, he was a postdoctoral researcher in the Mathematics and Computer Science Division at Argonne National Laboratory. He received his PhD and MS degrees in Industrial Engineering and Management Sciences from Northwestern University. During his graduate study, he was a visiting researcher at INRIA, Paris. His current research interests are in nonlinear optimization and its applications in machine learning. He has received the IEMS Nemhauser Dissertation Award for best dissertation, the IEMS Arthur P. Hurter Award for outstanding academic excellence, the McCormick terminal-year fellowship for an outstanding terminal-year PhD candidate, and the Walter P. Murphy Fellowship at Northwestern University.

    (Note: Please join this Zoom seminar online with the "Audio Only" function (no video).)


Friday, Nov 20

How to use quantum mechanics to make better semiconductors

Friday, Nov 20, 2020 from 10AM to 11AM | Zoom Meeting - Babuška Forum series

  • Additional Information

    Hosted by Stefan Henneking

    Sponsor: Oden Institute Virtual Seminar - Babuška Forum series

    Speaker: Feliciano Giustino

    Speaker Affiliation: Professor, Department of Physics, Oden Institute, UT Austin

  • Abstract

    One of the fundamental properties of semiconductors is their ability to support electric currents in the presence of electric fields (drift) or carrier concentration gradients (diffusion). These properties are described by transport coefficients such as electron and hole mobilities and diffusion coefficients. During the past decade, there has been considerable progress in quantum-mechanical calculations of carrier transport combining the Boltzmann transport equation with density functional theory. The reliability and accuracy of these calculations are improving at a fast pace, and the first applications of these recent developments in device engineering are emerging [1,2]. Within this context, I will review the formalism leading to the Boltzmann transport equation, and discuss the key approximations underlying this approach. I will describe recent algorithmic implementations, including the software package EPW [3], and discuss the computational challenges associated with quantum-mechanical calculations of transport properties. I will illustrate these concepts using recent calculations on standard semiconductors such as silicon [4], photovoltaic materials such as lead halide perovskites [5], and semiconductors for power electronics such as gallium nitride [6] and gallium oxide. I will conclude by discussing outstanding challenges and promising avenues for future research.

    [1] S. Poncé, W. Li, S. Reichardt, and F. Giustino, Rep. Prog. Phys. 83, 036501 (2020).
    [2] F. Giustino, Rev. Mod. Phys. 89, 015003 (2017).
    [3] S. Poncé, E. R. Margine, C. Verdi, and F. Giustino, Comput. Phys. Commun. 209, 116 (2016).
    [4] S. Poncé, E. R. Margine, and F. Giustino, Phys. Rev. B 97, 121201(R) (2018).
    [5] S. Poncé, M. Schlipf, and F. Giustino, ACS Energy Lett. 4, 456 (2019).
    [6] S. Poncé, D. Jena, and F. Giustino, Phys. Rev. Lett. 123, 096602 (2019).

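    For a sense of scale, the simplest output of the Boltzmann transport equation in the relaxation-time approximation is the Drude-like mobility mu = e tau / m*. The sketch below plugs in an assumed scattering time and a silicon-like effective mass; first-principles calculations of the kind discussed in the talk instead compute the scattering time from electron-phonon matrix elements (as in EPW) rather than assuming it.

        # Drude-like mobility mu = e * tau / m_eff in the relaxation-time
        # approximation; tau and m_eff are assumed, illustrative values.
        E_CHARGE = 1.602176634e-19       # elementary charge, C
        M_ELECTRON = 9.1093837015e-31    # electron mass, kg

        tau = 1.0e-13                    # assumed scattering time, s
        m_eff = 0.26 * M_ELECTRON        # silicon-like effective mass (assumed)

        mu = E_CHARGE * tau / m_eff      # m^2 / (V s)
        print(f"mobility ~ {mu * 1e4:.0f} cm^2/(V s)")  # ~680, the right order for Si
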
    Bio
    Feliciano Giustino holds the Moncrief Chair in Quantum Materials Engineering and is Professor of Physics at the University of Texas at Austin. He earned his Ph.D. in Physics at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, and held a postdoctoral appointment at the University of California, Berkeley. Prior to joining the University of Texas he spent over a decade at the University of Oxford as Professor of Materials Science, and one year at Cornell University as the Mary Shepard B. Upson Visiting Professor in Engineering. He is a Clarivate Analytics Highly Cited Researcher, the recipient of a Leverhulme Research Leadership Award, and a Fellow of the American Physical Society. Giustino specializes in electronic structure theory, high-performance computing, and the atomic-scale design of advanced materials using quantum mechanics. He is the author of 140+ scientific publications and one book on density-functional theory. He initiated the open-source software project EPW for ab initio calculations of electron-phonon interactions, which is regularly used by research groups around the world.

    (The Babuška Forum series was started by Professor Ivo Babuška several years ago to expose students to interesting and curious topics relevant to computational engineering and science, with technical content at the graduate student level; the focus of the lectures is on main ideas. Seminar credit is given to students who attend.)



Thursday, Nov 19

Finding low-dimensional structure in messy data

Thursday, Nov 19, 2020 from 3:30PM to 5PM | Zoom Meeting

  • Additional Information

    Hosted by Per-Gunnar J. Martinsson

    Sponsor: Oden Institute Virtual Seminar

    Speaker: Laura Balzano

    Speaker Affiliation: Associate Professor, Electrical Engineering and Computer Science, University of Michigan

  • Abstract

    In order to draw inferences from large, high-dimensional datasets, we often seek simple structures that model the phenomena represented in those data. Low-rank linear structure is one of the most flexible and efficient such models, allowing efficient prediction, inference, and anomaly detection. However, classical techniques for learning low-rank models assume the data have only minor corruptions that are uniform over samples. Modern research in optimization has begun to develop new techniques to handle realistic messy data, where data are missing, have wide variations in quality, and/or are observed through nonlinear measurement systems.

    In this talk we will focus on two problems. In the first, our data are heteroscedastic, i.e., corrupted by one of several noise variances. This is common in problems like sensor networks or medical imaging, where different measurements of the same phenomenon are taken with sensing of different quality (e.g., high or low radiation). In this context, learning the low-rank structure via PCA suffers from treating all data samples as if they were equally informative. We will discuss our theoretical results on weighted PCA and new algorithms for the non-convex probabilistic PCA formulation of this problem. In the second part of the talk we will extend the matrix completion problem to cases where the columns are points on low-dimensional nonlinear algebraic varieties. We discuss two optimization approaches to this problem: one kernelized algorithm and one that leverages existing low-rank matrix completion (LRMC) techniques on a tensorized representation of the data. We also provide a formal mathematical justification for the success of our method, and experimental results showing that the new approach outperforms existing state-of-the-art methods for matrix completion in many situations.

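    A quick way to see the heteroscedastic effect: draw samples from a 3-dimensional subspace, corrupt half of them with much larger noise, and compare plain PCA with a simple inverse-noise weighting. The weighting below is a heuristic for illustration only; the talk's results concern the analysis of weighted PCA, including how the weights should actually be chosen, and non-convex probabilistic formulations.

        import numpy as np

        rng = np.random.default_rng(4)
        d, n, r = 50, 400, 3
        U_true = np.linalg.qr(rng.standard_normal((d, r)))[0]   # true subspace
        sigma = np.where(rng.random(n) < 0.5, 0.1, 3.0)         # two noise levels
        X = (U_true @ rng.standard_normal((r, n))
             + sigma * rng.standard_normal((d, n)))

        def top_subspace(Y):
            return np.linalg.svd(Y, full_matrices=False)[0][:, :r]

        def subspace_error(Uh):                   # distance between projectors
            return np.linalg.norm(Uh @ Uh.T - U_true @ U_true.T, 2)

        print("plain PCA error:   ", subspace_error(top_subspace(X)))
        print("weighted PCA error:", subspace_error(top_subspace(X / sigma)))

    Plain PCA treats the noisy half of the samples as equally informative and can miss the subspace entirely; downweighting noisy samples recovers it, which is the phenomenon the talk's theory makes precise.
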
    Bio
    Laura Balzano is an Associate Professor in Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. She leads the Signal Processing Algorithm Design and Analysis (SPADA) lab, and is a member of the Michigan Center for Single-Cell Data Analytics. Her current research interests include statistical signal processing, matrix factorization, and optimization, particularly dealing with large and messy data.



Tuesday, Nov 17

Structure preserving methods for approximating fluid stresses and velocities

Tuesday, Nov 17, 2020 from 3:30PM to 5PM | Zoom Meeting

  • Additional Information

    Hosted by Leszek Demkowicz

    Sponsor: Oden Institute Virtual Seminar

    Speaker: Jay Gopalakrishnan

    Speaker Affiliation: Professor, Department of Mathematics & Maseeh Distinguished Chair, Portland State University

  • Abstract

    An age-old topic of discussion in computational fluid dynamics is the proper treatment of the incompressibility constraint on the fluid velocity u, namely div(u)=0. To obtain numerical velocities that satisfy this constraint exactly, there are (at least) two categories of methods, one that requires square integrability of all derivatives of the velocity, and another that requires square integrability of only the divergence of velocity. In the latter, instead of using the standard Lagrange finite element spaces, one uses H(div)-conforming finite elements for velocity approximation. A natural question to ask in this context is what would be a natural Sobolev space for viscous fluid stresses to pair with an H(div) velocity? We report on our research into a mixed formulation with a stress space that pairs well with such spaces for velocity. The main new insight is that stresses should lie in a nonstandard Sobolev space H(curl div). We shall see that finite elements of matrix fields with continuous normal-tangential components are appropriate for approximating viscous stresses. Prized structure-preservation properties like mass conservation and pressure robustness are immediate in our newly introduced framework.

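    The flavor of structure preservation by construction, though not the H(curl div) stress elements of the talk, can be seen in a few lines: a 2D velocity defined as the perpendicular gradient of a stream function has zero divergence, and if the same centered differences are used for both operators the discrete divergence vanishes at round-off level too. The grid and stream function below are arbitrary choices.

        import numpy as np

        n, h = 64, 1.0 / 64
        xx, yy = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
        psi = np.sin(2 * np.pi * xx) * np.cos(4 * np.pi * yy)  # arbitrary stream function

        def d_dx(f):   # centered difference, periodic grid
            return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)

        def d_dy(f):
            return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)

        u, v = d_dy(psi), -d_dx(psi)      # velocity = perpendicular gradient of psi
        div = d_dx(u) + d_dy(v)           # same discrete operators for div
        print("max |div u| =", np.abs(div).max())  # round-off level, not O(h^2)

    The H(div)-conforming velocity spaces in the talk achieve div(u)=0 exactly at the discrete level in the same spirit, and the new H(curl div) stress space extends that structure to the viscous stresses, yielding the mass conservation and pressure robustness mentioned above.
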
    Bio
    Jay Gopalakrishnan is a computational mathematician whose research centers on the design of numerical methods for partial differential equations and their rapid solution by iterative techniques. He co-invented two classes of numerical methods, the HDG and DPG methods, and has authored over eighty publications. He currently serves on the editorial board of a journal of the Society for Industrial and Applied Mathematics and has served on the boards of six other journals. He has worked at Bell Labs, the University of Minnesota, Medtronic Inc., and, for over a decade, the University of Florida. In 2012 he resigned his full professorship at the University of Florida to take up the Maseeh Distinguished Chair in Mathematical Sciences at Portland State University, where you will currently find him engaged in a variety of regional activities to bolster scientific computation.



Monday, Nov 16

Continuous Linear Algebra and Chebfun

Monday, Nov 16, 2020 from 12PM to 1PM | Zoom Meeting

  • Additional Information

    Hosted by Afzal Shadab and Tyler Masthay

    Sponsor: Oden Institute Virtual Seminar - SIAM Distinguished Seminar series

    Speaker: Nick Trefethen

    Speaker Affiliation: Professor, Numerical Analysis and head of the Numerical Analysis Group, Oxford University

  • Abstract

    Linear algebra deals with discrete vectors and matrices, and MATLAB was built on giving easy access to these structures and the best algorithms for working with them. But almost everything in linear algebra has a continuous analogue, where vectors become functions and matrices become operators. The Chebfun project develops this analogy with mathematics, algorithms, and software, and the talk will present ideas and Chebfun demonstrations in areas including matrix factorizations, complex variables, differential equations, and data science. Once you start thinking and computing this way, it's hard to go back.

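    Chebfun itself lives in MATLAB, but the "functions as vectors" idea can be tasted in NumPy's Chebyshev class: interpolate a smooth function at Chebyshev points, then differentiate and locate extrema by operating on the polynomial object rather than on samples. The function and degree below are arbitrary choices for illustration.

        import numpy as np
        from numpy.polynomial.chebyshev import Chebyshev

        f = lambda x: np.exp(x) * np.sin(5 * x)
        c = Chebyshev.interpolate(f, 40, domain=[-1, 1])  # a rough "chebfun" of f

        xs = np.linspace(-1, 1, 1000)
        print("max interpolation error:", np.abs(c(xs) - f(xs)).max())

        dc = c.deriv()                                    # differentiate the object
        true_df = lambda x: np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
        print("max derivative error:  ", np.abs(dc(xs) - true_df(xs)).max())

        roots = np.atleast_1d(dc.roots())                 # extrema of f
        real = roots.real[(np.abs(roots.imag) < 1e-10) & (np.abs(roots.real) <= 1)]
        print("extrema of f in [-1,1]:", np.sort(real))

    Chebfun carries this much further, with continuous analogues of matrix factorizations, operators, and ODE solves, but the principle is the same: once the function is represented to machine precision, you compute with it as if it were exact.
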
    Bio
    Nick Trefethen is a Professor of Numerical Analysis and head of the Numerical Analysis Group at Oxford University. His work spans numerous areas including pseudospectra for non-normal matrices, spectral methods, numerical linear algebra, fluid mechanics, and approximation theory. He was educated at Harvard and Stanford and held positions at NYU, MIT, and Cornell before moving to Oxford in 1997. He is a Fellow of the Royal Society and a member of the US National Academy of Engineering and served during 2011-2012 as President of SIAM. He has won many prizes including the IMA Gold Medal of the Institute for Mathematics and its Applications, the Naylor Prize of the London Mathematical Society, and the Polya and von Neumann Prizes from SIAM. He holds honorary doctorates from the University of Fribourg and Stellenbosch University.
    As an author, Trefethen is known for his books including Numerical Linear Algebra (1997), Spectral Methods in MATLAB (2000), Spectra and Pseudospectra (2005), Approximation Theory and Approximation Practice (2013/2019), and Exploring ODEs (2018). He organized the SIAM 100-Dollar, 100-Digit Challenge in 2002 and is the inventor of Chebfun.

    (The SIAM Distinguished Seminar Series aims to host accomplished researchers to talk about an insightful topic of their choice to help researchers both in terms of research and life in general.)
