Open-source large language models (LLMs) have lagged behind closed-source LLMs by five to 22 months in benchmark performance, according to a report by Epoch AI. Researchers found Meta’s Llama 3.1 405B was the most recent open model to close the gap across multiple benchmarks.
In a future where information is mediated by AI systems, those systems will constitute the repository of all human knowledge, according to Meta’s chief AI scientist Yann LeCun. “You cannot have this kind of dependency on a proprietary, closed system,” said LeCun.
Meta’s AI assistant, built on its open models, has close to 500 million users, while ChatGPT, which operates on closed models, has around 350 million users.
Llama 3.1 405B, released alongside 70B and 8B versions, is a frontier-level open-source AI model that performs on par with the best closed-source models.
Meta aims for Llama 4 to be the “most advanced model in the industry next year”, and training it will require nearly 10 times more compute than Llama 3.
Meta’s quantised models, Microsoft’s Phi, Hugging Face’s SmolLM and OpenAI’s GPT Mini point to strong efforts to build efficient, small-sized models.
Indian IT firms such as Infosys are also developing small language models for banking and IT applications.
The study stated that the time lag of the best open-source models may remain stable rather than shorten, even as it expected the performance gap between the best open and closed models to narrow next year.
Meta is offering developers free access to its model weights and code, enabling fine-tuning, distillation, and deployment.
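As a minimal sketch of what that access looks like in practice (not Meta’s official workflow), the snippet below assumes the Hugging Face transformers and peft libraries and the gated "meta-llama/Llama-3.1-8B" checkpoint, and shows how a developer might load the open weights and attach LoRA adapters for fine-tuning.

```python
# Illustrative sketch only: assumes `transformers`, `peft`, and licensed access
# to the "meta-llama/Llama-3.1-8B" weights on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.1-8B"  # assumed checkpoint name; requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Attach lightweight LoRA adapters so only a small fraction of the weights is trained.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here the adapted model can be trained with a standard loop or the
# transformers Trainer, then merged and deployed like any other checkpoint.
```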
The report noted that closed models outperform open ones not only on accuracy benchmarks but also in user-preference rankings.