COALA is a new framework for context-aware low-rank approximation in neural networks, aiming to overcome numerical instabilities seen in existing methods.
Existing methods rely on classical formulas that can degrade approximation quality or require inverting numerically singular matrices.
To address these limitations, COALA adopts an inversion-free, regularized formulation built on stable matrix decompositions.
The method handles challenging scenarios, including large calibration matrices, nearly singular activation matrices, and calibration data insufficient to determine a unique approximation, and it provides explicit error bounds.
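As a rough illustration of the inversion-free idea (a minimal sketch, not COALA's exact algorithm), the snippet below assumes NumPy and hypothetical names `X` for the calibration/activation matrix, `W` for the layer weights, and `r` for the target rank. It replaces explicit inversion of the Gram matrix `X.T @ X` with a thin QR factorization, a truncated SVD, and a least-squares solve.

```python
import numpy as np

def context_aware_lowrank(X, W, r):
    """Rank-r matrix W_hat minimizing ||X @ W - X @ W_hat||_F.

    Sketch of an inversion-free route: a thin QR of the calibration
    matrix X avoids explicitly inverting the Gram matrix X.T @ X,
    which may be numerically singular.
    """
    # With X = Q @ R and Q having orthonormal columns,
    # ||X @ (W - W_hat)||_F equals ||R @ (W - W_hat)||_F.
    Q, R = np.linalg.qr(X, mode="reduced")

    # Best rank-r approximation of R @ W via a truncated SVD.
    U, s, Vt = np.linalg.svd(R @ W, full_matrices=False)
    RW_r = (U[:, :r] * s[:r]) @ Vt[:r, :]

    # Least-squares solve instead of forming R^{-1}; lstsq returns a
    # minimum-norm solution even when R is (nearly) rank deficient.
    W_hat, *_ = np.linalg.lstsq(R, RW_r, rcond=None)
    return W_hat

# Toy usage with synthetic calibration activations.
rng = np.random.default_rng(0)
X = rng.standard_normal((512, 64))   # calibration activations
W = rng.standard_normal((64, 64))    # layer weights
W_hat = context_aware_lowrank(X, W, r=16)
print(np.linalg.matrix_rank(W_hat))  # at most 16
```

Solving a least-squares problem instead of forming an explicit inverse is one way to stay well defined when the activations are nearly singular; the specific regularization, decompositions, and error bounds that COALA uses are given in the paper itself.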