Speaker
Description
There is a mystery at the heart of operator learning: how can one recover a non-self-adjoint operator from data without probing its adjoint? Current practical approaches suggest that an operator can be accurately recovered using only data generated by its forward action, with no access to the adjoint. Naively, however, sampling the action of the adjoint seems essential. We prove that, without querying the adjoint, one can approximate a family of non-self-adjoint, infinite-dimensional compact operators via projection onto a Fourier basis. We then apply this result to recover Green's functions of elliptic partial differential operators and derive an adjoint-free sample complexity bound.
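As a rough illustration of the adjoint-free idea (a sketch under simplifying assumptions, not the speakers' construction or proof): discretize a smooth, non-symmetric integral kernel as a matrix A, and approximate A by composing it with the orthogonal projection onto a band of low-frequency Fourier modes. Only forward products A @ f_j with basis vectors are used, never the adjoint. The kernel and bandwidth below are invented for the demo.

```python
import numpy as np

n = 256                       # grid size
x = np.arange(n) / n          # periodic grid on [0, 1)

# Non-self-adjoint compact operator: smooth periodic kernel K(x, y),
# with K(x, y) != K(y, x); the 1/n factor is the quadrature weight.
A = np.exp(np.cos(2 * np.pi * (x[:, None] - 2 * x[None, :]))) / n

# Orthonormal DFT basis; keep the 2b+1 lowest frequencies (|freq| <= b).
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)
b = 16
idx = np.r_[0:b + 1, n - b:n]
Fk = F[:, idx]

# Forward queries only: 2b+1 matrix-vector products A @ f_j.
AFk = A @ Fk

# Adjoint-free approximation: A_k = A P_k = (A F_k) F_k^*.
A_k = AFk @ Fk.conj().T

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"relative error with {len(idx)} forward queries: {rel_err:.2e}")
```

Because the kernel is smooth and periodic in its second argument, its Fourier coefficients decay rapidly, so projecting the input space onto a few low-frequency modes already captures the operator to near machine precision; this is the finite-dimensional analogue of sampling a compact operator against a Fourier basis.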
We also discuss the following question in the discrete, transpose-free setting: for which problems in numerical linear algebra is it necessary to access matrix-vector products with both a matrix and its transpose? The same question arises in the "unmatched backprojector" setting of certain imaging problems. In particular, we describe the role of the transpose in low-rank approximation, least-squares problems, and several varieties of norm estimation.
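One norm-estimation task that demonstrably needs no transpose queries is estimating the Frobenius norm (a standard Monte Carlo sketch, not necessarily the estimator analyzed in the talk): for a vector w with i.i.d. standard Gaussian entries, E[||A w||^2] = ||A||_F^2, so averaging ||A w||^2 over random draws uses only forward products A @ w. The matrix and query count below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 300, 200
A = rng.standard_normal((n, m)) / np.sqrt(m)   # any matrix we can only multiply by

# Transpose-free estimator: average ||A w||^2 over Gaussian test vectors w.
num_queries = 2000
W = rng.standard_normal((m, num_queries))
est = np.mean(np.sum((A @ W) ** 2, axis=0))

true = np.linalg.norm(A, "fro") ** 2
print(f"estimate {est:.3f} vs true {true:.3f}")
```

By contrast, tasks such as building a near-optimal low-rank approximation or solving least-squares problems typically also require products with the transpose, which is part of what the talk examines.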