A new paper introduces a visualisation tool for assessing how closely the embedded representations learned by deep learning models align with a privileged basis.
The method evaluates the distribution of activations around planes defined by the network's privileged basis vectors, providing a holistic metric for interpreting activations.
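To make the idea concrete, below is a minimal sketch of one way such a metric could be computed: measure the angle between each activation vector and each coordinate hyperplane of the neuron basis, then summarise how concentrated those angles are near the basis directions. The function names, the arcsin formulation, and the `scale` exponent are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def basis_plane_angles(acts: np.ndarray) -> np.ndarray:
    """Angle (radians) between each activation vector and each coordinate
    hyperplane {x_i = 0}, computed as arcsin(|x_i| / ||x||)."""
    norms = np.linalg.norm(acts, axis=1, keepdims=True)
    norms = np.where(norms == 0, 1.0, norms)          # guard against zero vectors
    return np.arcsin(np.abs(acts) / norms)

def alignment_score(acts: np.ndarray, scale: float = 1.0) -> float:
    """Summarise the angle distribution as a single score in [0, 1].

    `scale` stands in for the angular-scale hyperparameter: larger values
    weight activations that hug the basis planes more heavily."""
    angles = basis_plane_angles(acts)
    # abs(cos(2*theta)) is 1 when a vector lies in a basis plane or along the
    # corresponding axis, and 0 at 45 degrees, so its mean is high for
    # basis-aligned activations.
    closeness = np.abs(np.cos(2.0 * angles)) ** scale
    return float(closeness.mean())
```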
Multiple variations of the technique are presented, including a hyperparameter for probing different angular scales, along with examples showing that learned representations tend to align with the privileged basis.
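Continuing the sketch above, the angular-scale hyperparameter could be probed with a simple sweep; the activations below are stand-in random data, so the printed scores only illustrate the interface.

```python
acts = np.random.randn(10_000, 512)        # stand-in activations: (samples, neurons)
for s in (0.5, 1.0, 2.0, 4.0):
    print(f"scale={s}: alignment={alignment_score(acts, scale=s):.3f}")
```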
The paper also establishes a causal link between activation functions, symmetry breaking in the network's functional form, and alignment with the neuron basis, offering insight into representation alignment in neural networks.
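The symmetry-breaking point can be illustrated independently of the paper's tooling: a purely linear layer is equivariant under orthogonal changes of basis, so no direction is special, whereas an elementwise nonlinearity is not, which singles out the neuron (coordinate) basis. A minimal check, with ReLU assumed here only as a representative activation function:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # random orthogonal change of basis

def relu(v):
    return np.maximum(v, 0.0)

# Elementwise ReLU does not commute with a change of basis:
# relu(Q @ x) != Q @ relu(x) in general, so the activation function
# privileges the neuron (coordinate) basis.
print(np.allclose(relu(Q @ x), Q @ relu(x)))     # -> False
```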