Recent studies challenge the assumption that only large language models (LLMs) can achieve competitive reasoning performance.
Small language models (SLMs) have likewise been shown to exhibit strong reasoning capabilities, and they are favored for their efficiency and ease of deployment.
A systematic study examines 72 SLMs from six model families across 14 reasoning benchmarks.
The findings suggest that SLMs with strong reasoning abilities can be developed through structured training or post-training compression, making them efficient alternatives to LLMs.
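As an illustrative sketch only (not the study's actual pipeline), post-training compression can be as simple as loading an existing SLM checkpoint with 8-bit weight quantization; the model identifier below is a placeholder, and the generation prompt is just a sanity check.

```python
# Minimal sketch, assuming Hugging Face Transformers + bitsandbytes are installed.
# The checkpoint name is hypothetical; any causal SLM could be substituted.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "some-org/some-small-model"  # placeholder checkpoint

# Quantize linear-layer weights to 8-bit at load time (no retraining required).
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on available devices
)

# Sanity check: generate a short reasoning-style completion with the compressed model.
prompt = "Q: If a train travels 60 km in 1.5 hours, what is its average speed?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

This kind of weight-only quantization roughly halves memory relative to 16-bit weights, which is one reason compressed SLMs are attractive for deployment, though the reasoning accuracy of any specific quantized model would still need to be verified on the benchmarks themselves.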