The maximization of the (generalized) Rayleigh quotient is a central problem in numerical linear algebra.
Conventional algorithms for this task typically rely on matrix-adjoint products,
making them sensitive to errors arising from adjoint mismatches.
To address this issue, we introduce a stochastic zeroth-order Riemannian algorithm
that maximizes the generalized Rayleigh quotient without requiring adjoint or matrix inverse computations.
Moreover, the construction extends to other quantities,
such as $\|A\|$ and $\|A - V\|$,
in settings where only evaluations of the linear maps $x \mapsto A x$
and $y \mapsto V^* y$ are available.
Those are of interest in inverse problems and uncertainty quantification.
We provide theoretical convergence guarantees
showing that the iterates converge to the set of global maximizers of the (generalized) Rayleigh quotient
at a sublinear rate with probability one.
Our theoretical results are supported by numerical experiments,
which demonstrate the strong performance of the proposed method relative to state-of-the-art algorithms.
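To illustrate the adjoint-free idea (this is a minimal sketch, not the speaker's exact algorithm), note that $\|A\|^2$ is the maximum of $f(x) = \|A x\|^2$ over the unit sphere, so it can be approached by a stochastic zeroth-order ascent that only evaluates $x \mapsto A x$. The function name, step size, and smoothing parameter below are illustrative choices:

```python
import numpy as np

def zo_norm_estimate(matvec, n, steps=5000, mu=1e-4, eta=0.02, seed=0):
    """Estimate ||A|| via zeroth-order Riemannian ascent on the unit sphere.

    Only forward evaluations x -> A x are used; no adjoint A^* is required,
    since ||A||^2 = max over the unit sphere of f(x) = ||A x||^2.
    Hyperparameters (steps, mu, eta) are illustrative, not tuned.
    """
    rng = np.random.default_rng(seed)
    f = lambda v: np.linalg.norm(matvec(v)) ** 2
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        # Draw a random unit direction in the tangent space of the sphere at x.
        u = rng.standard_normal(n)
        u -= (u @ x) * x
        u /= np.linalg.norm(u)
        # Finite-difference estimate of the directional derivative along u,
        # using only forward evaluations of the map.
        y = x + mu * u
        y /= np.linalg.norm(y)      # retract the perturbed point to the sphere
        g = (f(y) - f(x)) / mu
        # Ascent step followed by retraction (normalization) to the sphere.
        x += eta * g * u
        x /= np.linalg.norm(x)
    return np.sqrt(f(x))
```

For a diagonal test matrix, `zo_norm_estimate(lambda v: A @ v, n)` returns an approximation of the largest singular value of `A`, using nothing but matrix-vector products.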