Mistral AI introduces Magistral, a new lineup of reasoning-optimized large language models, including Magistral Small and Magistral Medium.
Magistral Small, with 24 billion parameters, is open source, while Magistral Medium is a proprietary model available through Mistral AI's cloud services.
Both models in the Magistral series offer multilingual capabilities and support chain-of-thought reasoning for breaking down complex tasks. Magistral Medium outperformed Magistral Small at solving math problems, scoring 73.6% outright and 90% with optimized settings.
Mistral AI used reinforcement learning to develop the Magistral series, training without a critic model. Magistral Small's weights are released on Hugging Face, while Magistral Medium is accessible via Le Chat and the company's API for developers.
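As a rough sketch of that API access, a developer might assemble a chat-completion request for Magistral Medium like the one below. The endpoint URL follows Mistral's standard chat-completions pattern, and the model identifier "magistral-medium-latest" is an assumption for illustration, not confirmed by the announcement.

```python
import json

# Mistral's chat-completions endpoint (standard pattern; verify against the docs).
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "magistral-medium-latest") -> dict:
    """Assemble a chat-completion payload for a reasoning model.

    The model name here is a hypothetical identifier; reasoning models
    like Magistral emit a chain of thought before the final answer, so
    a generous token budget is typically set.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 4096,
    }

payload = build_request("Prove that the sum of two even numbers is even.")
print(json.dumps(payload, indent=2))
```

Sending this payload would also require an `Authorization: Bearer <API key>` header; the sketch stops at constructing the request body.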