Meta has launched Llama 3.2, its first major release of vision models that understand both images and text. The lineup includes small and medium-sized vision models as well as lightweight text-only models. The larger Llama 3.2 models can interpret charts and graphs, caption images, and extract details from images. Meta AI now also supports new multimodal features and celebrity voices for more natural interaction.