A debate has emerged over whether AI scaling can continue, or whether innovation must take a different path. The concern is that scaling may not extend to the next generation of models, and that future gains may come from better integration, model architecture, optimisation techniques and data use rather than from sheer size. Similar diminishing returns were observed in the semiconductor industry. Hybrid AI models that combine symbolic reasoning with neural networks are expected, as are agent technologies that enable LLMs to perform tasks autonomously. Quantum computing is also expected to be applied soon to accelerate AI training and inference.
The AI research community has consistently shown ingenuity in overcoming challenges and unlocking new capabilities, and there may yet be further gains through novel techniques.
Many have speculated that AI development is hitting a scaling wall, but OpenAI CEO Sam Altman has stated there is no wall. Former Google CEO Eric Schmidt has likewise said he believes scaling will continue, delivering a factor of two, three or four in capability over the next five years.
Scaling challenges dominate the discourse on LLMs, but studies show that current LLMs are already highly capable, and scaling may not be the sole path to future innovation. Current LLMs can outperform experts on complex tasks, challenging assumptions that further scaling is necessary for impactful results.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.