Understanding and controlling the informational complexity of neural networks is a central concern in machine learning, with direct consequences for generalization, optimization, and model capacity.
A new approach grounded in algorithmic information theory is proposed, focusing on Binarized Neural Networks (BNNs), to capture algorithmic regularities in network structure.
The Block Decomposition Method (BDM), which rests on algorithmic probability, is applied to BNNs and tracks structural changes during training more faithfully than entropy-based measures, correlating more strongly with training loss.
This shift towards algorithmic information theory offers insight into learning dynamics: training can be viewed as algorithmic compression, providing a foundation for complexity-aware learning and regularization.
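To make the BDM-versus-entropy comparison concrete, the following is a minimal sketch, not the authors' pipeline: it binarizes a weight matrix by sign, estimates its algorithmic complexity with the third-party pybdm package (assumed here as the BDM implementation), and contrasts this with a simple Shannon-entropy baseline. The layer shapes, random weights, and helper names are illustrative assumptions.

```python
import numpy as np
from pybdm import BDM  # assumed third-party implementation of the Block Decomposition Method


def binarize(weights: np.ndarray) -> np.ndarray:
    """Map real-valued weights to {0, 1} by sign, mimicking a binarized layer."""
    return (weights > 0).astype(int)


def bdm_complexity(binary_matrix: np.ndarray) -> float:
    """Approximate algorithmic complexity of a 2D binary matrix via BDM."""
    bdm = BDM(ndim=2)
    return bdm.bdm(binary_matrix)


def shannon_entropy(binary_matrix: np.ndarray) -> float:
    """Entropy baseline: treats the matrix as a bag of independent bits."""
    p = binary_matrix.mean()
    if p in (0.0, 1.0):
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))


# Hypothetical usage: evaluate both measures on a stand-in weight matrix.
# In the setting described above, these values would be recorded over the
# course of training and compared against the training-loss trajectory.
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64))  # placeholder for a layer's weights
b = binarize(weights)
print("BDM complexity:", bdm_complexity(b))
print("Shannon entropy (bits/weight):", shannon_entropy(b))
```

The contrast this sketch illustrates is that the entropy baseline depends only on the fraction of ones, while BDM is sensitive to the arrangement of bits, which is the kind of structural regularity the approach aims to capture.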