Imitation Learning (IL) is widely used in machine learning, yet it often fails to fully recover expert behavior, even in single-agent games. This study investigates the impact of scaling up model and data size on IL performance. The findings show that IL loss and mean return scale smoothly with the compute budget, yielding power laws for training compute-optimal agents. NetHack agents trained with IL outperform the previous state of the art by 1.5x.
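As an illustration of the kind of scaling relationship described above, the sketch below fits a power law of the form L(C) = a · C^(−b) to loss-versus-compute measurements by linear regression in log-log space. This is a minimal example, not the paper's fitting procedure; the data points, the functional form, and the variable names are all assumptions for demonstration.

```python
# Illustrative sketch (not from the paper): fit a power law
# L(C) = a * C**(-b) to hypothetical (compute, loss) pairs
# via ordinary least squares in log-log space.
import numpy as np

# Hypothetical compute budgets (FLOPs) and IL losses -- placeholder data.
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
loss = np.array([2.10, 1.65, 1.30, 1.02, 0.80])

# log L = log a - b * log C, so a straight-line fit in log-log
# coordinates recovers the scale a and the exponent b.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope
print(f"L(C) ~ {a:.3g} * C^(-{b:.3g})")
```

The same recipe applies to mean return versus compute; once the exponents are estimated, the fitted curve can be extrapolated to forecast performance at larger compute budgets.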