Neural Architecture Search (NAS) aims to improve deep learning models by automatically refining their architectures, but the search process itself incurs high energy consumption.
Energy-aware benchmarking with surrogate-based methods estimates model quality from a learned predictor rather than full training, substantially reducing the cost of evaluating NAS methods.
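As a minimal sketch of the surrogate idea (the regressor class, feature encoding, and prediction targets here are illustrative assumptions, not the paper's exact setup), one can fit a model that maps an architecture encoding to measured quality and energy, then query it for unseen candidates:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative only: architectures encoded as fixed-length vectors of
# operation choices, with (accuracy, energy) targets measured on a small
# set of fully trained models.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 5, size=(200, 12))   # 200 architectures, 12 choice slots
y_train = rng.random((200, 2))                 # columns: [accuracy, energy_kwh]

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

# Predict quality and energy for new candidates instead of training them,
# which is where the reduced benchmarking cost comes from.
candidates = rng.integers(0, 5, size=(5, 12))
pred_acc, pred_energy = surrogate.predict(candidates).T
```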
Three design principles for energy-aware benchmarks are proposed: reliable power measurements, coverage of a wide range of GPU utilization levels, and holistic cost reporting.
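To make the "holistic cost reporting" principle concrete, a report would account not only for the search itself but also for one-off costs such as training the models behind the surrogate and the measurement overhead. A hypothetical sketch of such an aggregate (field names and values are assumptions, not the benchmark's schema):

```python
from dataclasses import dataclass

@dataclass
class CostReport:
    """Hypothetical aggregate of all energy spent to produce a benchmark result."""
    search_kwh: float       # energy of the NAS search itself
    training_kwh: float     # energy to train the models backing the surrogate
    measurement_kwh: float  # overhead of power logging and calibration runs

    @property
    def total_kwh(self) -> float:
        # Holistic reporting: every cost component is summed, not just the search.
        return self.search_kwh + self.training_kwh + self.measurement_kwh

report = CostReport(search_kwh=3.2, training_kwh=41.0, measurement_kwh=0.4)
print(f"total: {report.total_kwh:.1f} kWh")
```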
The study highlights the impact of the GPU measurement API on benchmark quality and emphasizes calibration experiments as a prerequisite for reliable energy reporting.
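For the power-measurement side, NVIDIA GPUs expose instantaneous board power through NVML. A minimal sketch of sampling it from Python via the pynvml bindings and integrating to energy (the sampling interval and measurement window are assumptions):

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

samples = []
interval_s = 0.1
t_end = time.time() + 5.0  # assumed 5-second measurement window
while time.time() < t_end:
    # nvmlDeviceGetPowerUsage returns milliwatts; convert to watts.
    samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
    time.sleep(interval_s)

# Riemann-sum integration of power over time gives energy in joules.
energy_j = sum(p * interval_s for p in samples)
print(f"mean power: {sum(samples) / len(samples):.1f} W, energy: {energy_j:.1f} J")

pynvml.nvmlShutdown()
```

One simple calibration step in this spirit is to run the same sampling loop on an idle GPU to record a baseline, and subtract that baseline from workload measurements before reporting energy.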