The learning rate is one of the most important hyperparameters in deep learning, and how to control it effectively has been studied in both the AutoML and deep learning communities.
This paper compares different approaches to learning rate control, ranging from classic hyperparameter optimization to online scheduling based on gradient statistics.
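As one concrete illustration of an online scheduler driven by gradient statistics, the sketch below implements the well-known hypergradient-descent rule (Baydin et al., 2018) on top of plain SGD, adapting the learning rate from the dot product of consecutive gradients. The function name, toy objective, and default step sizes are illustrative choices, not the exact setup evaluated in this paper.

```python
import torch

def hypergradient_sgd(params, loss_fn, lr=0.01, hyper_lr=1e-4, steps=100):
    """Plain SGD whose learning rate is adapted online from gradient
    statistics: the hypergradient-descent rule raises lr when consecutive
    gradients align and lowers it when they oppose each other."""
    prev_grads = None
    for _ in range(steps):
        loss = loss_fn(params)
        grads = torch.autograd.grad(loss, params)
        if prev_grads is not None:
            # The dot product of the current and previous gradients
            # drives the learning rate update.
            h = sum((g * pg).sum() for g, pg in zip(grads, prev_grads))
            lr += hyper_lr * h.item()
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= lr * g  # standard SGD step with the adapted lr
        prev_grads = [g.detach() for g in grads]
    return lr

# Illustrative usage on a toy quadratic objective (hypothetical example).
w = torch.randn(10, requires_grad=True)
final_lr = hypergradient_sgd([w], lambda ps: (ps[0] ** 2).sum(), lr=0.05)
```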
The results show that while individual methods perform well on specific deep learning tasks, none is reliable across different settings, underscoring the need for better algorithm selection in learning rate control.
Moreover, there is a growing trend of hyperparameter optimization approaches becoming less effective as models and tasks grow more complex, pointing to the importance of exploring new directions such as fine-tunable methods and meta-learning in AutoML.