The AI industry faces a crucial question about the future of building massive AI models with ever-larger amounts of data and computing power. The 'Chinchilla' approach, which pairs extensive training data with a proportionally large compute budget during pre-training, has been successful so far, but it may not be sustainable: returns are diminishing, and high-quality training data is becoming scarce.

The potential emergence of reasoning models that require less infrastructure poses a significant challenge to this expensive pre-training strategy. By spending 'test-time compute' (extra computation at inference, for example by generating longer chains of thought or sampling multiple candidate answers) a reasoning model can improve accuracy without a correspondingly larger pre-training run. Companies are also exploring mixture-of-experts (MoE) architectures, which activate only a fraction of a model's parameters for each input, to reduce reliance on massive dense pre-training runs.

To keep feeding models as human-written data runs out, synthetic training data generated by existing models is being considered, potentially enabling a form of recursive self-improvement that sustains rapid growth in capability without matching growth in raw data.

The industry is at a critical juncture: a shift from the Chinchilla recipe toward reasoning models and MoE could redirect, or save, trillions of dollars in planned infrastructure investment.
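For concreteness, the 'Chinchilla' recipe is often summarized by two rules of thumb from the Hoffmann et al. (2022) paper: training cost is roughly C ≈ 6·N·D FLOPs for N parameters and D tokens, and compute-optimal training uses about 20 tokens per parameter. A minimal sketch of sizing a model from a FLOP budget under those approximations:

```python
def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
    """Compute-optimal parameter count N and token count D for a FLOP
    budget C, using the common approximations C ~= 6*N*D and D ~= 20*N.
    """
    # Substituting D = 20*N into C = 6*N*D gives C = 120 * N**2.
    n_params = (compute_flops / 120) ** 0.5
    n_tokens = 20 * n_params
    return n_params, n_tokens

# Roughly the compute budget reported for the Chinchilla model (~5.7e23 FLOPs):
n, d = chinchilla_optimal(5.7e23)
print(f"{n:.2e} params, {d:.2e} tokens")  # ~7e10 params, ~1.4e12 tokens
```

This recovers Chinchilla's own configuration (about 70B parameters trained on about 1.4T tokens), and it also makes the data-scarcity problem visible: every 100x increase in compute demands roughly 10x more training tokens.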
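The test-time-compute idea can be illustrated with one simple technique, self-consistency voting: sample many answers from the same model and keep the most common one, so accuracy rises with inference compute rather than with pre-training scale. The sketch below uses a hypothetical `noisy_solver` as a stand-in for sampling from a model; it is an illustration of the voting mechanism, not any particular lab's method.

```python
import random
from collections import Counter

def noisy_solver(question: str, rng: random.Random) -> int:
    """Stand-in for one sampled model answer: correct 60% of the time,
    otherwise a scattered wrong answer."""
    return 42 if rng.random() < 0.6 else rng.randrange(100)

def majority_vote(question: str, n_samples: int, seed: int = 0) -> int:
    """Self-consistency: draw many samples, return the most common answer.
    More samples = more inference compute = higher accuracy."""
    rng = random.Random(seed)
    answers = [noisy_solver(question, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote("what is 6 * 7?", n_samples=25))
```

A single sample is right only ~60% of the time, but because wrong answers are scattered while correct ones agree, a 25-sample vote is almost always right — the extra cost is paid per query at inference, not up front in pre-training.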
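The mixture-of-experts efficiency argument can likewise be sketched in a few lines: a gating function scores the experts for each input and only the top-scoring expert actually runs, so compute per token stays roughly constant even as total parameters grow. This is a toy top-1 router with made-up experts and gate weights, not a real MoE layer:

```python
def moe_layer(x: float, gate_weights: list[float], experts) -> float:
    """Toy top-1 mixture-of-experts: score every expert, run only the best.
    Total parameters scale with len(experts); compute per input does not."""
    scores = [w * x for w in gate_weights]  # linear gating scores
    best = max(range(len(experts)), key=lambda i: scores[i])
    return experts[best](x)

# Hypothetical experts, each "specializing" in part of the input space.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
gate_weights = [-1.0, 0.1, 1.0]

print(moe_layer(3.0, gate_weights, experts))   # gate routes to expert 2 -> 9.0
print(moe_layer(-3.0, gate_weights, experts))  # gate routes to expert 0 -> -2.0
```

The design point is the decoupling: adding a fourth expert would grow the model's capacity without changing the cost of either call above, which is why MoE is attractive as a way to keep scaling capability without Chinchilla-style growth in training compute.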