Introducing Llama 3.2 models from Meta in Amazon Bedrock: A new generation of multimodal vision and lightweight models

  • Meta's latest Llama 3.2 models are now available in Amazon Bedrock: a new generation of multimodal vision and lightweight models offering enhanced capabilities and broader applicability across use cases.
  • The lightweight models make Llama more accessible for edge applications, while the new vision models offer builders image reasoning, unlocking more possibilities with AI.
  • Llama 3.2 is available in a range of sizes, from lightweight text-only 1B and 3B parameter models to small and medium-sized 11B and 90B parameter models.
  • The 11B and 90B models are the first Llama models to support vision tasks, integrating image encoder representations into the language model (a request sketch follows this list).
  • All Llama 3.2 models are designed to be more energy-efficient for AI workloads, reducing latency and improving performance.
  • The new models offer improved multilingual support across eight languages and a 128K context length.
  • The new Llama 3.2 models from Meta are available in Amazon Bedrock for text, vision, and other use cases.
  • Llama 3.2 is built on the Llama Stack, making building and deploying applications easier than ever.
  • The new Llama models are openly available, allowing customers to fine-tune them for their own needs.
  • Llama 3.2 models use an optimized transformer architecture, with instruction tuning and reinforcement learning applied to produce contextually relevant and safe outputs.
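The vision-capable models can be reached through the Amazon Bedrock Converse API. The following is a minimal sketch of an image-reasoning request using boto3; the specific model ID ("us.meta.llama3-2-11b-instruct-v1:0"), region, and image file are illustrative assumptions and should be checked against the model identifiers enabled in your Bedrock console.

```python
# Minimal sketch: image-reasoning request to a Llama 3.2 vision model via the
# Amazon Bedrock Converse API. Assumes AWS credentials are configured and the
# model ID / region below match what is enabled in your account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

# Read the image to analyze as raw bytes (hypothetical local file).
with open("chart.png", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="us.meta.llama3-2-11b-instruct-v1:0",  # assumed ID; verify in the console
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "Summarize the key trend shown in this chart."},
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            ],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.5},
)

# The Converse API returns the assistant message as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```

The same Converse call works for the text-only 1B and 3B models by dropping the image content block and switching the model ID.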
