Alibaba's Qwen team has launched Qwen3, a new series of open-source AI models that the team reports surpassing OpenAI o1 and DeepSeek R1 in performance. Key points:

- Qwen3 comprises two mixture-of-experts (MoE) models and six dense models, all released under the Apache 2.0 open-source license and varying in size and architecture.
- The 235-billion-parameter flagship outperforms DeepSeek R1 and approaches the performance of Google's Gemini 2.5 Pro.
- Qwen3 offers hybrid reasoning: users can toggle between fast responses and a more deliberate "Thinking Mode" for complex tasks (see the Transformers sketch below).
- Multilingual support spans 119 languages and dialects, widening the models' global applicability.
- Pretraining used a substantially larger dataset than Qwen2.5, alongside improvements to the training process.
- The models can be accessed and deployed across a range of platforms and frameworks, catering to different user needs (see the serving sketch below).
- The team frames Qwen3 as a significant step toward its AGI and ASI goals, with further scaling and enhancements planned.
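To make the hybrid-reasoning point concrete, here is a minimal sketch of toggling Thinking Mode through Hugging Face Transformers. It assumes the publicly released `Qwen/Qwen3-8B` checkpoint and the `enable_thinking` switch exposed by Qwen3's chat template; treat it as illustrative rather than the team's reference code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the Qwen/Qwen3-8B checkpoint and its chat template's
# enable_thinking flag, as described in the Qwen3 release materials.
model_name = "Qwen/Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Is 9.11 larger than 9.9? Explain briefly."}]

# enable_thinking=True asks the model to emit intermediate reasoning before
# its final answer; set it to False to request a fast, direct response.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```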
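On the deployment point, one common pattern is to serve a Qwen3 checkpoint behind an OpenAI-compatible endpoint (for example with vLLM or SGLang) and query it from any OpenAI client. The sketch below assumes such a server is already running at `http://localhost:8000/v1` and hosting `Qwen/Qwen3-30B-A3B`; the address, port, and model name are placeholders for this illustration, not part of the release.

```python
from openai import OpenAI

# Assumption: a locally running OpenAI-compatible server (e.g. vLLM or
# SGLang) exposing the Qwen/Qwen3-30B-A3B MoE model at this address.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",
    messages=[
        {"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."},
    ],
    temperature=0.6,
)
print(response.choices[0].message.content)
```

The same client code works unchanged against any of the Qwen3 sizes; only the served model name differs, which is part of what makes the deployment story flexible across frameworks.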