Mistral AI introduces Magistral, a series of large language models (LLMs) optimized for inference-time reasoning.
The Magistral series includes Magistral Small, an open-source model, and Magistral Medium, an enterprise-tier variant.
Key features include chain-of-thought supervision, multilingual reasoning support, and optimized deployment options.
Reported benchmark results show competitive accuracy for both the Magistral Small and Magistral Medium models.
Magistral Medium offers high throughput, with speeds reaching 1,000 tokens per second, and is optimized for latency-sensitive environments.
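The quoted 1,000 tokens-per-second figure translates directly into response-time budgets for latency-sensitive deployments. A minimal sketch of that arithmetic (the throughput number comes from the text; the output lengths are illustrative assumptions, not published Magistral parameters):

```python
# Estimate end-to-end generation time from a quoted decode throughput.
# The 1,000 tok/s rate is taken from the text; the example output
# lengths below are illustrative assumptions only.

def generation_time_s(output_tokens: int, tokens_per_second: float = 1000.0) -> float:
    """Time (seconds) to stream `output_tokens` at a sustained decode rate."""
    return output_tokens / tokens_per_second

# A 500-token answer at 1,000 tok/s streams in about half a second.
print(generation_time_s(500))   # -> 0.5
print(generation_time_s(2000))  # -> 2.0
```

Under this simple model, even a long 2,000-token reasoning trace fits within a ~2-second budget, which is why the throughput claim matters for interactive use cases.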
The models feature a bespoke reinforcement learning (RL) fine-tuning pipeline and reasoning-language alignment for consistent outputs.
Magistral is positioned for adoption in regulated industries, emphasizes model efficiency, and seeks strategic differentiation through its open-plus-enterprise release strategy.
Broader public benchmarking is still awaited, and the models are positioned as efficient, transparent, and aligned with European AI leadership goals.