- Google DeepMind introduced Gemini Robotics On-Device, a vision-language-action (VLA) model that runs locally on robotic hardware without requiring a data network.
- The model offers task generalization, dexterity, and fast adaptation.
- It is designed for bi-arm robots and enables rapid experimentation with tasks like folding clothes, unzipping bags, and assembling products.
- It is Google DeepMind's first VLA model available for fine-tuning, so developers can adapt it for better performance on specific applications.
- The model can adapt to new tasks from a small number of demonstrations, generalizing its foundational knowledge to unfamiliar settings.
- Google DeepMind, like other robotics companies, is working toward humanoid robots capable of general-purpose tasks.
- Advances in Gemini Robotics aim to produce multimodal robots that understand natural-language commands and perform complex tasks; Gemini's multimodality also opens up possibilities for new consumer products at Google.
- Multiple companies are making progress on AI-powered robots that handle general tasks, producing an increasingly competitive market, and AI-powered robots are spreading across sectors as robotics and AI continue to merge.
- Running the model on-device reduces dependency on external networks and cuts control latency, reflecting the industry's shift toward localized AI and more autonomous, adaptable robotic systems (a minimal sketch of such a local control loop follows this list).
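
Where the on-device claim matters most is inside the control loop: because perception, language grounding, and action prediction all execute locally, no network round trip sits between a camera frame and the next joint command. The sketch below illustrates that pattern in Python. It is hypothetical throughout: `LocalVLAPolicy`, `Observation`, the checkpoint path, and the loop structure are illustrative stand-ins, not the actual Gemini Robotics SDK API.

```python
# Hypothetical sketch of an on-device VLA control loop. None of these
# names come from the Gemini Robotics SDK; LocalVLAPolicy and Observation
# are stand-ins to illustrate the pattern: the image, the robot state,
# and the natural-language instruction are fused locally, so no network
# call sits inside the control loop.
import time
from dataclasses import dataclass

import numpy as np


@dataclass
class Observation:
    image: np.ndarray        # camera frame, e.g. (H, W, 3) uint8
    joint_state: np.ndarray  # current joint positions for both arms


class LocalVLAPolicy:
    """Stand-in for a locally loaded vision-language-action model."""

    def __init__(self, checkpoint_path: str):
        # A real model would load its weights from local storage here.
        self.checkpoint_path = checkpoint_path

    def predict_action(self, obs: Observation, instruction: str) -> np.ndarray:
        # Placeholder: a real VLA model would condition on the image,
        # joint state, and instruction to output the next joint targets.
        return obs.joint_state + np.zeros_like(obs.joint_state)


def control_loop(policy: LocalVLAPolicy, instruction: str, hz: float = 10.0):
    period = 1.0 / hz
    joint_state = np.zeros(14)  # e.g. 7 DoF per arm on a bi-arm robot
    for _ in range(20):  # bounded loop for the sketch
        obs = Observation(
            image=np.zeros((224, 224, 3), dtype=np.uint8),  # stubbed camera
            joint_state=joint_state,
        )
        # Local inference: latency here is bounded by on-device compute,
        # not by network availability or round-trip time.
        action = policy.predict_action(obs, instruction)
        joint_state = action  # a real robot would execute the command here
        time.sleep(period)


if __name__ == "__main__":
    policy = LocalVLAPolicy("/models/vla_checkpoint")  # hypothetical path
    control_loop(policy, "fold the shirt on the table")
```

Keeping the policy call synchronous and local is what makes a steady control rate feasible; a cloud-hosted model would add network latency and a connectivity failure mode to every step of the loop, which is precisely what an on-device model avoids.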