OpenAI launched the o1 model in the API, with support for function calling, structured outputs, reasoning effort controls, developer messages, and vision inputs.
The o1 model offers stronger reasoning, lower latency, and better overall performance than o1-preview.
Developers can set the reasoning effort parameter to trade response quality against speed depending on the task, as shown in the sketch below.
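A minimal sketch of setting reasoning effort with the official OpenAI Python SDK, assuming an `OPENAI_API_KEY` environment variable and access to the o1 model; the example prompt and developer instructions are illustrative only.

```python
# Minimal sketch: calling o1 with a reduced reasoning effort for faster,
# cheaper responses. Assumes the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1",
    reasoning_effort="low",  # "low", "medium", or "high": depth of reasoning vs. speed/cost
    messages=[
        # o-series models take "developer" messages in place of "system" messages
        {"role": "developer", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Summarize the tradeoffs of a B-tree index vs. a hash index."},
    ],
)

print(response.choices[0].message.content)
```

Lower effort settings suit simple or latency-sensitive requests, while higher settings let the model spend more reasoning tokens on harder problems.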
OpenAI also updated its Realtime API with WebRTC support, a 60% price cut for GPT-4o audio usage, and new controls for managing concurrent responses and longer sessions; a server-side sketch of the WebRTC flow follows.
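A hedged sketch of the server-side step in the announced WebRTC flow: the server mints a short-lived client secret that the browser then uses for the SDP exchange, so the standard API key never reaches the client. The endpoint path, model name, voice, and response shape below are assumptions based on the announcement and should be checked against the current Realtime API documentation.

```python
# Sketch: mint an ephemeral client secret for a browser WebRTC session.
# Assumes the /v1/realtime/sessions endpoint and the gpt-4o-realtime-preview
# model; verify both against the current Realtime API docs.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/realtime/sessions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-realtime-preview",
        "voice": "verse",
    },
    timeout=30,
)
resp.raise_for_status()

session = resp.json()
# The browser completes the WebRTC offer/answer handshake with this
# ephemeral secret rather than the real API key.
print(session["client_secret"]["value"])
```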