Automated Machine Learning (AutoML) frameworks are commonly evaluated with the AutoML Benchmark (AMLB), which proposed 1- and 4-hour time budgets. This work argues that the benchmark should also include shorter time constraints, which better reflect practical use. Evaluations on 104 tasks show that framework rankings remain consistent across time constraints, while early stopping produces greater variety in model performance.