Meta is exploring ways to improve Llama models not only in planning but also in evaluating decisions as conditions change in real time.
The ultimate goal for Meta is to achieve autonomous machine intelligence that can combine perception, reasoning, and planning.
Manohar Paluri, VP of AI at Meta, emphasized the need to break complex tasks into manageable steps so that Llama models can adapt dynamically.
Meta combines versatile and task-oriented approaches: self-supervised learning (SSL) lets models acquire general knowledge autonomously from vast amounts of data, while reinforcement learning from human feedback (RLHF) handles task-specific alignment.
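To make the distinction concrete, the sketch below contrasts a self-supervised next-token-prediction loss with a heavily simplified reward-weighted update. The tiny embedding/linear model, toy tokens, and fixed reward are placeholders for illustration only, not Meta's training recipe; production RLHF pipelines typically use algorithms such as PPO or DPO on top of a learned reward model.

```python
# Minimal sketch contrasting the two objectives. All components here are toy
# placeholders (assumed for illustration), not Meta's actual training setup.
import torch
import torch.nn.functional as F

vocab_size, d_model = 100, 32
embed = torch.nn.Embedding(vocab_size, d_model)
lm_head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 16))  # toy token sequence

# Self-supervised objective: predict each next token from the previous ones.
hidden = embed(tokens[:, :-1])                  # stand-in for a transformer forward pass
logits = lm_head(hidden)
ssl_loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))

# RLHF-style objective (highly simplified): scale the policy's log-probability
# of a completion by a scalar reward, as a reward model would provide.
log_probs = F.log_softmax(logits, dim=-1)
chosen = log_probs.gather(-1, tokens[:, 1:].unsqueeze(-1)).squeeze(-1)
reward = torch.tensor(1.0)                      # placeholder reward score
rlhf_loss = -(reward * chosen.mean())

print(f"SSL loss: {ssl_loss.item():.3f}  RLHF-style loss: {rlhf_loss.item():.3f}")
```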
Llama has become a preferred choice for synthetic data generation, enabling startups like Sarvam AI to build models that serve multilingual speakers with region-specific AI solutions.
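As an illustration of what such a synthetic-data pipeline might look like, the sketch below uses the Hugging Face transformers text-generation pipeline to draft multilingual question-answer pairs. The model id, prompt template, and target languages are assumptions made for this example and are not details published by Sarvam AI.

```python
# Illustrative sketch of Llama-based synthetic data generation, assuming a
# Llama chat checkpoint available on the Hugging Face Hub; the model id,
# prompt, and languages are placeholders, not Sarvam AI's pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model id; swap for any Llama chat model
)

languages = ["Hindi", "Tamil", "Telugu"]  # example target languages
synthetic_pairs = []

for lang in languages:
    prompt = (
        f"Write one short customer-support question in {lang}, "
        f"followed by a helpful answer in the same language."
    )
    out = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.8)
    synthetic_pairs.append({"language": lang, "text": out[0]["generated_text"]})

print(f"Generated {len(synthetic_pairs)} synthetic examples")
```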
Mark Zuckerberg, CEO of Meta, confirmed that pre-training for Llama 4 is underway and expects it to be a major advancement over Llama 3, while Ragavan Srinivasan, VP of Product at Meta, hinted that next-generation Llama models could arrive by 2025.
Meta also introduced quantized versions of Llama 3.2 models to enhance on-device AI performance with up to four times faster inference speeds.
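For a sense of how quantization shrinks a model for faster, cheaper inference, the sketch below loads a Llama 3.2 checkpoint with 4-bit post-training quantization via bitsandbytes. This is a generic quantization technique chosen for illustration, not the pipeline Meta used for its official quantized on-device releases, and the checkpoint name is an assumption.

```python
# Illustrative sketch: 4-bit post-training quantization with bitsandbytes.
# This approximates the memory/latency benefit of quantization but is NOT
# Meta's on-device quantization stack. The model id is an assumed example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-3B-Instruct"  # assumed checkpoint name

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit precision
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # up-cast for matrix multiplies
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

inputs = tokenizer(
    "Summarize why quantization helps on-device inference:", return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```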