Neural networks can be viewed as dynamical systems operating on a lower-dimensional latent space, according to a new study.
Autoencoders implicitly define a vector field on their latent manifold at no extra training cost, and the training process shapes where the attractor points of this field emerge.
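One way to make this concrete (a minimal sketch, not the paper's reference implementation): assuming the field is built from the decode-encode round trip, the displacement v(z) = E(D(z)) - z assigns a vector to every latent point. The `encoder` and `decoder` modules below are hypothetical stand-ins for any trained pair.

```python
import torch
import torch.nn as nn

# Toy autoencoder; encoder/decoder stand in for any trained pair.
latent_dim, data_dim = 2, 32
encoder = nn.Sequential(nn.Linear(data_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, data_dim))

@torch.no_grad()
def latent_vector_field(z: torch.Tensor) -> torch.Tensor:
    """Displacement induced by one decode-encode round trip.

    Assumption: the field is v(z) = E(D(z)) - z, i.e. each latent point
    is moved by re-encoding its own reconstruction. Fixed points
    (v(z) = 0) are latents the autoencoder reproduces exactly.
    """
    return encoder(decoder(z)) - z

# Evaluate the field on a grid of latent points (e.g. for a quiver plot).
grid = torch.stack(torch.meshgrid(
    torch.linspace(-3, 3, 20), torch.linspace(-3, 3, 20), indexing="ij"
), dim=-1).reshape(-1, latent_dim)
vectors = latent_vector_field(grid)  # shape: (400, 2)
```

Note that nothing here requires retraining: the field is read directly off the frozen encoder and decoder.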
This vector-field representation exposes properties of the trained network, enabling analysis of generalization and memorization and the extraction of prior knowledge directly from the network's parameters.
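Continuing the sketch above, one hypothetical probe of this kind iterates the round-trip map until it converges; whether the resulting attractors decode to individual training samples (suggesting memorization) or to broader, sample-independent regions (suggesting generalization) is what such an analysis would inspect.

```python
@torch.no_grad()
def find_attractor(z0: torch.Tensor, max_steps: int = 1000, tol: float = 1e-5) -> torch.Tensor:
    """Follow the latent dynamics z_{t+1} = E(D(z_t)) until convergence.

    Assumption: attractors of this iteration are the diagnostic objects;
    comparing their decodings against the training set is one way to
    probe memorization versus generalization.
    """
    z = z0
    for _ in range(max_steps):
        z_next = encoder(decoder(z))
        if torch.norm(z_next - z) < tol:
            break
        z = z_next
    return z

# Start from a random latent and inspect what the dynamics converge to.
attractor = find_attractor(torch.randn(1, latent_dim))
reconstruction = decoder(attractor)  # compare against training data to probe memorization
```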
The study demonstrates the approach on vision foundation models, indicating its practical utility beyond toy settings.