Krylov subspace methods are among the most widely used iterative algorithms, and in practice they almost always run with finite memory. Yet several well-known practical behaviors remain only partially understood within classical orthogonality-based analysis: the frequent failure of restarted methods, the success of truncated and s-step recurrences, and the large gap between the truncation depths and restart lengths observed in applications.
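As a minimal illustration of the restart failure (a standard textbook example, not taken from the talk): on the cyclic shift matrix, full GMRES solves the system exactly at step n, while GMRES(m) with m < n makes no progress at all, because every restart discards the accumulated basis. The sketch below is a plain dense Arnoldi plus least-squares cycle in NumPy; the sizes and restart length are illustrative.

```python
import numpy as np

def gmres_cycle(A, b, x0, m):
    """One GMRES(m) cycle: Arnoldi process plus a small least-squares solve
    (dense and unpreconditioned, for clarity rather than efficiency)."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                # happy breakdown: exact solve
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x0 + Q[:, :m] @ y

# Cyclic shift matrix: A e_i = e_{i+1 (mod n)}.  Full GMRES converges
# exactly at step n, but GMRES(m) with m < n stagnates forever.
n = 30
A = np.roll(np.eye(n), 1, axis=0)
b = np.zeros(n)
b[0] = 1.0

x = np.zeros(n)
for _ in range(50):                            # 50 cycles of GMRES(5)
    x = gmres_cycle(A, b, x, 5)
print("GMRES(5), 50 restarts:", np.linalg.norm(b - A @ x))   # stays at 1.0
print("full GMRES, one cycle:",
      np.linalg.norm(b - A @ gmres_cycle(A, b, np.zeros(n), n)))  # ~0
```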
In this talk, we present a framework based on an intrinsic information dimension, a problem-dependent quantity that characterizes how much spectral information a Krylov process can resolve above the floating-point noise floor. The analysis rests on the deterministic decay of correlations in Krylov moment and Gram matrices induced by polynomial recurrences. We show that several notions of saturation, including approximation limits, numerical rank, and restart behavior, are closely linked and describe the same underlying limitation.
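As a toy proxy for such a quantity (an assumption of this sketch, not the talk's definition), one can count how many singular values of a normalized Krylov Gram matrix survive above a size-scaled floating-point noise floor:

```python
import numpy as np

# Toy proxy for an "information dimension": the epsilon-rank of the Gram
# matrix of the column-normalized monomial Krylov basis.  The test matrix,
# sizes, and threshold are illustrative choices, not from the talk.
rng = np.random.default_rng(0)
n, m = 200, 40
A = np.diag(np.linspace(1.0, 100.0, n))       # simple SPD test problem
b = rng.standard_normal(n)

K = np.empty((n, m))
v = b / np.linalg.norm(b)
for j in range(m):                            # K[:, j] ~ A^j b, normalized
    K[:, j] = v
    v = A @ v
    v /= np.linalg.norm(v)

G = K.T @ K                                   # Krylov Gram matrix
s = np.linalg.svd(G, compute_uv=False)
eps_floor = m * np.finfo(float).eps * s[0]    # size-scaled noise floor
print("Krylov dimension:", m, "epsilon-rank of G:", int(np.sum(s > eps_floor)))
```

On this example the epsilon-rank is far below the Krylov dimension m: the later basis vectors align with the dominant eigenvector, and the extra directions they nominally contribute sit below the noise floor.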
This perspective clarifies why a restart discards information irreversibly, while truncation and s-step methods can preserve convergence with bounded memory. It also leads to a refined reading of the classical Faber-Manteuffel theorem: although no short recurrence can produce an orthogonal Krylov basis for general nonsymmetric matrices, truncated recurrences built on suitable polynomial bases can still retain the essential Krylov information. For nonsymmetric matrices, the analysis highlights the role of spectral geometry rather than the condition number alone.
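The s-step point can be illustrated with a standard computation: generating the basis by a three-term Chebyshev recurrence instead of the monomial recurrence v_{j+1} = A v_j keeps the basis far better conditioned, so more Krylov information survives in floating point. The spectral bounds lmin, lmax below are assumed known; in practice they come from eigenvalue estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 300, 15
A = np.diag(np.linspace(1.0, 100.0, n))        # illustrative SPD test matrix
b = rng.standard_normal(n)

# Monomial basis [b, Ab, ..., A^s b]
V_mono = np.empty((n, s + 1))
V_mono[:, 0] = b
for j in range(s):
    V_mono[:, j + 1] = A @ V_mono[:, j]

# Chebyshev basis: T_{j+1}(B)b = 2 B T_j(B)b - T_{j-1}(B)b, where
# B = (2A - (lmax + lmin)I) / (lmax - lmin) maps [lmin, lmax] to [-1, 1].
lmin, lmax = 1.0, 100.0                        # assumed spectral bounds
Bmul = lambda x: (2.0 * (A @ x) - (lmax + lmin) * x) / (lmax - lmin)
V_cheb = np.empty((n, s + 1))
V_cheb[:, 0] = b
V_cheb[:, 1] = Bmul(b)
for j in range(1, s):
    V_cheb[:, j + 1] = 2.0 * Bmul(V_cheb[:, j]) - V_cheb[:, j - 1]

# Column-normalize both bases so the comparison reflects directions, not scale
for name, V in (("monomial", V_mono), ("Chebyshev", V_cheb)):
    V = V / np.linalg.norm(V, axis=0)
    print(f"{name:9s} basis condition number: {np.linalg.cond(V):.2e}")
```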
Overall, the results provide a unifying explanation for restarted, truncated, s-step, and recycling methods, and offer a new viewpoint on finite-memory effects in Krylov algorithms.