Description
Estimating the trace of a matrix that is accessible only through matrix-vector products is a fundamental task in scientific computing and data science, with applications including network analysis, estimation of matrix norms and spectral densities, and estimation of log-determinants. Monte Carlo methods are among the prevalent approaches for estimating the trace of such a matrix. We consider Monte Carlo methods for minimizing the trace of a symmetric matrix $A(\theta)$, parameterized (perhaps nonlinearly) by a parameter $\theta$ from a compact space $\Theta \subset \mathbb{R}^d$. We derive bounds on the number of samples required to estimate the backward error of the Monte Carlo estimator to a given accuracy with high probability. The bounds are based on epsilon nets and chaining, and use ideas from statistical learning theory and high-dimensional probability. We illustrate the problem with applications to optimal experimental design and hyperparameter estimation in Bayesian inverse problems. Joint work with Ilse Ipsen.
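The abstract does not specify the estimator, but the standard Monte Carlo approach it alludes to — estimating a trace from matrix-vector products alone — can be sketched with a Hutchinson-type estimator. The sketch below is illustrative only; the function names and sample counts are our own choices, not part of the talk.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=100, rng=None):
    """Estimate tr(A) using only products v -> A v.

    Uses Rademacher probe vectors: if the entries of v are
    independent +/-1 with equal probability, then E[v^T A v] = tr(A),
    so averaging v^T A v over many probes estimates the trace.
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += v @ matvec(v)               # quadratic form v^T A v
    return total / num_samples

# Example: a symmetric matrix accessed only through its matvec
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T  # symmetric positive semidefinite test matrix
est = hutchinson_trace(lambda v: A @ v, n=50, num_samples=2000, rng=1)
print(est, np.trace(A))  # estimate vs. exact trace
```

In the setting of the talk, `A` would be replaced by a parameterized family $A(\theta)$, and the sample-complexity bounds quantify how many probes suffice so that such an estimate is accurate uniformly over the compact parameter space $\Theta$.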