Jack Poulson

Seminar:

High-performance sampling of generic determinantal point processes
Friday, April 9, 2021
10AM – 11AM
Zoom Meeting

Determinantal Point Processes (DPPs) were introduced by Macchi as a model for repulsive (fermionic) particle distributions, but their recent popularization is largely due to their usefulness for encouraging diversity in the final stage of a recommender system. The standard sampling scheme for finite DPPs is a spectral decomposition followed by the equivalent of a randomly diagonally-pivoted Cholesky factorization of an orthogonal projection, which applies only to Hermitian kernels and has an expensive setup cost. Researchers have begun to connect DPP sampling to LDL' factorizations as a means of avoiding the initial spectral decomposition, but existing approaches have only outperformed the spectral decomposition approach in special circumstances, where the number of kept modes is a small percentage of the ground-set size.
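For concreteness, the following is a minimal NumPy sketch of the classical spectral scheme just described; the function name and structure are illustrative choices and are not taken from the talk or from Catamari. Phase 1 keeps each eigenvector of a (here real symmetric) marginal kernel independently with probability equal to its eigenvalue; phase 2 samples items one at a time from the resulting orthogonal-projection kernel.

    import numpy as np

    def sample_dpp_spectral(K, rng=None):
        """Illustrative spectral sampler for a real symmetric marginal
        kernel K whose eigenvalues lie in [0, 1]."""
        rng = rng if rng is not None else np.random.default_rng()
        # Expensive setup: a full eigendecomposition of the kernel.
        eigvals, eigvecs = np.linalg.eigh(K)
        # Phase 1: keep eigenvector i independently with probability eigvals[i].
        V = eigvecs[:, rng.random(eigvals.shape[0]) < eigvals]
        sample = []
        while V.shape[1] > 0:
            # The diagonal of the projection V V^T gives the (unnormalized)
            # inclusion probabilities for the next item.
            probs = np.maximum(np.einsum("ij,ij->i", V, V), 0.0)
            i = int(rng.choice(K.shape[0], p=probs / probs.sum()))
            sample.append(i)
            # Condition on item i: eliminate one column against row i and
            # re-orthonormalize, so the projection rank drops by one.
            j = int(np.argmax(np.abs(V[i, :])))
            pivot = V[:, j].copy()
            V = np.delete(V, j, axis=1)
            V -= np.outer(pivot, V[i, :] / pivot[i])
            if V.shape[1] > 0:
                V, _ = np.linalg.qr(V)
        return sorted(sample)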
We show that trivial modifications of LU and LDL' factorizations yield efficient direct sampling schemes for non-Hermitian and Hermitian DPP kernels, respectively. Further, we show experimentally that even dynamically-scheduled, shared-memory parallelizations of high-performance dense and sparse-direct factorizations can be trivially modified to yield DPP sampling schemes with essentially identical performance.
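The flavor of the modification can be conveyed in a few lines. In a dense, real-symmetric setting, the current pivot of the elimination is the conditional probability that the corresponding item belongs to the sample: one flips a coin with that probability, subtracts one from the pivot on a failure, and proceeds with the usual Schur-complement update. The sketch below is a hedged, unpivoted dense illustration of this idea, not Catamari's implementation:

    import numpy as np

    def sample_dpp_ldl(K, rng=None):
        """Illustrative LDL'-style direct sampler for a real symmetric
        marginal kernel K with eigenvalues in [0, 1]."""
        rng = rng if rng is not None else np.random.default_rng()
        A = np.array(K, dtype=float)  # working copy, overwritten in place
        sample = []
        for j in range(A.shape[0]):
            # The pivot is P(item j in the sample | decisions on items < j).
            if rng.random() < A[j, j]:
                sample.append(j)
            else:
                A[j, j] -= 1.0  # condition on the exclusion of item j
            # Standard elimination step: rank-one Schur-complement update.
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])
        return sample

The non-Hermitian case replaces the symmetric elimination step with an ordinary LU step, which is why existing high-performance factorization codes can be reused nearly verbatim.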
The software developed as part of this research, Catamari (https://gitlab.com/hodge_star/catamari), is released under the Mozilla Public License v2.0. It contains header-only C++14 (plus OpenMP 4.0) implementations of dense and sparse-direct, Hermitian and non-Hermitian DPP samplers. Its extension to high-precision, homogeneous self-dual embedding interior point methods within the package Conic (https://gitlab.com/hodge_star/conic) is also briefly discussed.

Bio
Dr. Jack Poulson defended his dissertation on fast, distributed-memory, quasi-direct solvers for heterogeneous 3D time-harmonic wave equations in 2012. He completed a brief postdoc in Stanford's mathematics department focusing on fast, distributed-memory algorithms for applying Egorov operators and inverting so-called strongly-admissible H-matrices, then spent a few years as an Assistant Professor at Georgia Tech and then Stanford University focusing on fast computation of pseudospectra, lattice-reduction techniques, and conic interior point methods. He then spent a few years as a research scientist at Google Research working at the intersection of large-scale recommendation systems and natural language processing.

For questions, please contact: anna@oden.utexas.edu

Hosted by Anna Yesypenko