The Allen Institute for AI (Ai2) has released a new family of language models, OLMo 2, to improve natural language understanding and generation.
OLMo 2 models are available in 7 billion and 13 billion parameter versions, trained on a diverse mix of datasets to improve performance across a broad range of language tasks.
The models are open-source, accessible to researchers, and have shown competitive performance against models like Qwen and Llama.
Ai2's emphasis on transparency and reproducibility in AI research is reflected in the open access to models, training data, and code.
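Because the weights are released openly, researchers can experiment with them using standard tooling. The sketch below assumes the Hugging Face transformers library and a repository ID following Ai2's public naming convention; the exact checkpoint name may differ.

```python
# A minimal sketch of loading an OLMo 2 checkpoint with Hugging Face
# transformers. The repository ID below is an assumption based on Ai2's
# naming convention and may need to be adjusted for other model sizes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "allenai/OLMo-2-1124-7B"  # assumed 7B checkpoint ID

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The Allen Institute for AI released OLMo 2 to"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```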