- A theoretical framework is proposed to analyze learning dynamics in deep neural networks through the lens of dynamical systems theory.
- The framework introduces order-preserving and non-order-preserving transformations at the neuron level to redefine linearity and nonlinearity (see the first sketch after this list).
- Different transformation modes lead to distinct forms of weight-vector organization, information extraction, and learning phases.
- Transitions between phases, including phenomena such as grokking, can occur during training.
- The concept of attraction basins in sample and weight spaces is introduced to characterize generalization and structural stability (the second sketch below illustrates one way to estimate them).
- Metrics based on neuron transformation modes and attraction basins help analyze the performance of learning models.
- Hyperparameters such as depth, width, learning rate, and batch size influence these metrics and can guide model optimization.
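
The summary does not give the formal definition of a neuron-level transformation mode. A minimal sketch, assuming a ReLU layer where a neuron is counted as order-preserving (identity-like) on a sample when its pre-activation is positive and non-order-preserving otherwise, might look like this; the function name and this reading of the framework are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def transformation_modes(W, b, X):
    """Classify each neuron's transformation mode per sample.

    Assumption: for a ReLU neuron, a positive pre-activation means
    the neuron acts as the identity (order-preserving), while a
    non-positive one clamps the signal to zero (non-order-preserving).

    W: (n_neurons, n_inputs) weight matrix
    b: (n_neurons,) bias vector
    X: (n_samples, n_inputs) batch of inputs
    Returns a boolean (n_samples, n_neurons) mode matrix and the
    per-neuron fraction of samples handled in the order-preserving mode,
    one candidate for a mode-based metric.
    """
    pre_act = X @ W.T + b            # pre-activations, shape (n_samples, n_neurons)
    order_preserving = pre_act > 0   # ReLU passes the signal through unchanged here
    frac_linear = order_preserving.mean(axis=0)
    return order_preserving, frac_linear

# Toy usage: random layer, random batch
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
b = rng.normal(size=8)
X = rng.normal(size=(32, 4))
modes, frac = transformation_modes(W, b, X)
print(frac)  # per-neuron share of samples in the order-preserving mode
```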
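
For attraction basins, one plausible operationalization in sample space is the largest perturbation radius at which the model's prediction around a sample never changes; perturbing the weights instead of the input would give the weight-space analogue. The helper below is a hypothetical sketch under that assumption, with `predict` standing in for any trained model:

```python
import numpy as np

def basin_radius(predict, x, radii, n_trials=100, rng=None):
    """Estimate an attraction-basin radius around sample x.

    Assumption: the basin radius is approximated by the largest tested
    noise level at which the predicted label stays constant across
    random perturbations. `predict` maps an input vector to a label.
    """
    rng = rng or np.random.default_rng()
    base = predict(x)
    radius = 0.0
    for r in radii:  # radii assumed sorted in ascending order
        stable = all(
            predict(x + r * rng.normal(size=x.shape)) == base
            for _ in range(n_trials)
        )
        if not stable:
            break
        radius = r
    return radius

# Toy usage: a linear classifier as the stand-in model
w = np.array([1.0, -1.0])
predict = lambda v: int(v @ w > 0)
x = np.array([2.0, 1.0])
print(basin_radius(predict, x, radii=np.linspace(0.1, 3.0, 30)))
```

A larger radius under this proxy would indicate a flatter, more stable neighborhood, which is one way to connect the basin picture to generalization and structural stability as the summary describes.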