Performance predictors are essential in neural architecture search (NAS): by estimating the performance of new candidate architectures from a set of previously trained ones, they greatly speed up the evaluation phase.
Current predictors, however, often generalize poorly because of the distribution shift between training and test samples, which leads them to rely on spurious correlations and produce inaccurate predictions.
To combat this issue, the Causality-guided Architecture Representation Learning (CARL) method was introduced to separate critical architectural features from redundant ones, enabling more accurate and interpretable performance prediction; a minimal sketch of this idea is given below.
Extensive experiments across five NAS search spaces show that CARL achieves state-of-the-art accuracy and improved interpretability, for example reaching 97.67% top-1 accuracy on CIFAR-10 within the DARTS search space.
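To make the critical/redundant split concrete, the sketch below shows one way such a predictor could be structured: an architecture encoding is mapped to a shared latent representation, two heads split it into a critical part and a redundant part, and performance is predicted from the critical part only. The module names, dimensions, toy data, and plain MSE objective are illustrative assumptions, not the authors' CARL implementation.

```python
# Minimal sketch (assumptions, not the CARL implementation): split an
# architecture representation into critical and redundant parts and predict
# performance from the critical part only.
import torch
import torch.nn as nn


class CausalSplitPredictor(nn.Module):
    def __init__(self, arch_dim: int = 32, hidden_dim: int = 64, crit_dim: int = 16):
        super().__init__()
        # Shared encoder over a flat architecture encoding
        # (e.g., one-hot operation choices plus flattened adjacency).
        self.encoder = nn.Sequential(nn.Linear(arch_dim, hidden_dim), nn.ReLU())
        # Two heads disentangle the latent into critical and redundant features.
        self.critical_head = nn.Linear(hidden_dim, crit_dim)
        self.redundant_head = nn.Linear(hidden_dim, crit_dim)
        # Performance is predicted from the critical features only.
        self.predictor = nn.Linear(crit_dim, 1)

    def forward(self, arch_encoding: torch.Tensor):
        h = self.encoder(arch_encoding)
        critical = self.critical_head(h)
        redundant = self.redundant_head(h)
        return self.predictor(critical).squeeze(-1), critical, redundant


# Toy usage: one optimization step on stand-in (architecture, accuracy) pairs.
model = CausalSplitPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
archs = torch.rand(8, 32)   # stand-in architecture encodings
accs = torch.rand(8)        # stand-in measured accuracies
opt.zero_grad()
pred, critical, redundant = model(archs)
loss = nn.functional.mse_loss(pred, accs)
loss.backward()
opt.step()
```

In practice such a predictor would be fit on encodings and measured accuracies from the target search space, presumably with additional objectives (not shown here) that discourage the redundant branch from carrying performance-relevant signal.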