The article presents PreCorrector, a novel approach to preconditioner construction that uses neural networks to outperform classical numerical preconditioners.
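As a minimal sketch (an assumption about the general construction, not the authors' exact code), PreCorrector-style methods start from a classical incomplete factor L0 and learn a correction to its nonzero entries, L(θ) = L0 + α · GNN_θ(L0), keeping the sparsity pattern fixed; the hypothetical `correction` callable below stands in for the trained GNN:

```python
import scipy.sparse as sp

def precorrector_factor(L0, correction, alpha=0.1):
    """Correct the nonzeros of a lower-triangular factor L0 (e.g. IC(0)).

    `correction` is a hypothetical stand-in for a trained GNN mapping the
    vector of nonzero values to a same-size update; only stored nonzeros
    change, so sparsity and triangular structure are preserved.  The
    preconditioner is then P = L @ L.T.
    """
    L = L0.tocoo(copy=True)
    L.data = L.data + alpha * correction(L.data)  # learned elementwise update
    return sp.csc_matrix((L.data, (L.row, L.col)), shape=L.shape)
```

Note that α = 0 recovers the classical factor, so the classical preconditioner is a natural starting point for training; α is the correction coefficient discussed in the appendix.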
The authors note the challenge of selecting an effective preconditioner: the choice depends on the specific problem and requires both theoretical and numerical understanding.
Li et al. introduced a method that uses a graph neural network (GNN) and a new loss function to approximate matrix factorization for efficient preconditioning.
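Stated as an assumption about the general form of such objectives rather than the exact loss of Li et al., losses in this family minimize the factorization residual over a fixed sparsity pattern $S$:

$$\min_\theta \; \lVert L_\theta L_\theta^\top - A \rVert_F^2 \quad \text{s.t.} \quad \operatorname{sparsity}(L_\theta) \subseteq S,$$

so that $P = L_\theta L_\theta^\top$ approximates $A$ while remaining cheap to apply via triangular solves.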
With FCG-NO, Rudikov et al. combined neural operators with the flexible conjugate gradient method for PDE solving and showed the approach to be computationally efficient.
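A hedged sketch of the flexible CG iteration that FCG-NO builds on (the `precond` callable is a hypothetical stand-in for the neural operator; the flexible, Polak-Ribière-style β is used because a learned preconditioner need not be a fixed linear map):

```python
import numpy as np

def fcg(A, b, precond, maxiter=200, rtol=1e-8):
    """Flexible CG: tolerates a preconditioner that varies between steps."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = precond(r)            # preconditioned residual from the black box
    p = z.copy()
    rz = r @ z
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= rtol * bnorm:
            break
        z_new = precond(r_new)
        beta = (z_new @ (r_new - r)) / rz  # flexible (Polak-Ribiere) beta
        rz = r_new @ z_new
        r, z, p = r_new, z_new, z_new + beta * p
    return x
```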
Kopanicáková and Karniadakis proposed hybrid preconditioners that combine DeepONet with iterative methods for solving parametric equations.
The HINTS method by Zhang et al. combines classical relaxation methods with DeepONet to solve differential equations effectively and accurately.
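A minimal sketch of a HINTS-style hybrid loop, under the assumption that the scheme alternates cheap relaxation sweeps (which damp high-frequency error) with a learned correction (which handles smooth, low-frequency modes); `surrogate` is a hypothetical stand-in for the trained DeepONet:

```python
import numpy as np

def hints_solve(A, b, surrogate, n_iters=50, sweeps=4, omega=2/3):
    """Alternate damped-Jacobi relaxation with a learned residual correction."""
    x = np.zeros_like(b, dtype=float)
    D = A.diagonal()                      # Jacobi uses the diagonal of A
    for _ in range(n_iters):
        for _ in range(sweeps):           # relaxation damps oscillatory error
            x = x + omega * (b - A @ x) / D
        r = b - A @ x                     # residual fed to the network
        x = x + surrogate(r)              # hypothetical DeepONet correction
    return x
```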
PreCorrector aims to reduce the condition number κ(A) by learning a universal transformation for sparse matrices, and it shows superiority over classical preconditioners on complex datasets.
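To illustrate why reducing κ(A) matters (a generic demo with scipy's ILU standing in for a learned preconditioner, not the paper's experiment), a good preconditioner sharply cuts CG iteration counts on an SPD system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix: SPD, with condition number growing with n.
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)  # apply P^{-1} via triangular solves

for name, M_op in [("plain CG", None), ("ILU-preconditioned CG", M)]:
    iters = []
    x, info = spla.cg(A, b, M=M_op, maxiter=2000,
                      callback=lambda xk: iters.append(None))
    print(f"{name}: {len(iters)} iterations (info={info})")
```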
Future work includes a theoretical analysis of the loss function, variations of the target objective, and extending PreCorrector to other classes of sparse matrices.
The article provides references to related works on iterative methods, neural solvers, and preconditioning strategies for linear systems.
The appendix details the training data, additional experiments with the ICt(5) preconditioner, and information about the correction coefficient α.
The paper is available on arXiv under the CC BY 4.0 license, which permits sharing and adaptation with attribution.