
Gemini Robotics AI can now operate robots without internet

DeepMind’s new on-device Gemini Robotics AI model allows robots to operate independently of the cloud while maintaining strong performance and adaptability.

By Shubham Sawarkar, Editor-in-Chief
Jun 24, 2025, 2:06 PM EDT
Collage of imagery demonstrating Gemini Robotics On-Device. A central abstract image of a toy busy box shaped like a human head alludes to neural functions and problem-solving.
Image: Google DeepMind

Google DeepMind has quietly shifted robotics a step closer to true on-device intelligence by unveiling Gemini Robotics On-Device, an optimized version of its flagship vision-language-action (VLA) model that runs entirely on a robot without needing a network connection. Announced on June 24, 2025, this on-device iteration retains many of the dexterous capabilities of the cloud-enabled hybrid model introduced in March, yet is compact and efficient enough to live entirely on the robot’s own compute hardware. In effect, it offers a “starter” robotics foundation model suited for environments with unreliable connectivity or stringent privacy and security requirements, while still delivering surprisingly robust performance on a range of physical tasks.

Robotics has long grappled with the tension between powerful cloud AI and the demands of real-world deployment. High-capacity models often rely on constant network connectivity for heavy inference, posing challenges for latency-sensitive tasks, operations in remote or industrial settings, and applications where data privacy is paramount. Gemini Robotics On-Device addresses this tension by running fully locally: the model requires minimal computational overhead yet can generalize to novel situations, follow natural language instructions, and execute fine-grained manipulation. This on-device approach aligns with broader industry trends toward edge AI, where running inference locally can reduce latency, lower bandwidth costs, and improve reliability in environments with intermittent or zero connectivity.

In March 2025, Google DeepMind introduced the original Gemini Robotics model, a VLA system leveraging Gemini 2.0’s multimodal reasoning capabilities to perform a wide array of tasks across different robot embodiments. That hybrid model could distribute computation between on-device hardware and cloud resources, enabling high power for complex planning or fine motor tasks while retaining some offline functionality. Carolina Parada, Head of Robotics at DeepMind, explains that the hybrid approach remains the most capable, but the new on-device version surprisingly closes much of the gap in scenarios where connectivity is limited or where simpler deployment is desired.

Despite its lightweight footprint, Gemini Robotics On-Device demonstrates impressive dexterity and generalization. It can tackle a variety of out-of-the-box tasks—including unzipping bags, folding clothes, or placing items into containers—by following natural language commands, and it can adapt to new tasks with as few as 50 to 100 demonstrations. In DeepMind’s evaluations, the on-device model outperforms previous on-device baselines on challenging out-of-distribution tasks and approaches the instruction-following performance of the full Gemini Robotics model under local inference conditions. This speaks to careful model optimization and distillation work that balances compute efficiency with the broad world understanding inherited from Gemini 2.0.
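DeepMind has not published the fine-tuning recipe, but adapting a pretrained policy with a few dozen demonstrations typically means supervised imitation: each demonstration is recorded as a sequence of observations, instructions, and actions, and the model is trained to reproduce the demonstrated actions. The sketch below shows that idea in generic PyTorch; the policy's forward signature and the dataset layout are hypothetical stand-ins, not the Gemini Robotics SDK.

```python
# Minimal behaviour-cloning sketch (hypothetical; not the Gemini Robotics SDK).
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset


class DemoDataset(Dataset):
    """Wraps 50-100 teleoperated demonstrations flattened into per-step samples."""

    def __init__(self, steps):
        # Each step is a dict: {"image": Tensor, "instruction": str, "action": Tensor}
        self.steps = steps

    def __len__(self):
        return len(self.steps)

    def __getitem__(self, i):
        s = self.steps[i]
        return s["image"], s["instruction"], s["action"]


def finetune(policy, steps, epochs=10, lr=1e-5):
    """Regress the policy's predicted actions onto the demonstrated actions."""
    loader = DataLoader(DemoDataset(steps), batch_size=8, shuffle=True)
    opt = torch.optim.AdamW(policy.parameters(), lr=lr)
    policy.train()
    for _ in range(epochs):
        for image, instruction, action in loader:
            pred = policy(image, instruction)   # hypothetical forward signature
            loss = F.mse_loss(pred, action)     # imitation loss on continuous actions
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```

In practice, a vision-language-action model would predict tokenized or chunked actions rather than a single vector, but the training loop has the same shape, which is why so few demonstrations can go a long way on top of a strong pretrained model.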


A hallmark of foundation models in robotics is the ability to transfer across embodiments. Although Gemini Robotics On-Device was primarily trained on Google’s own ALOHA bi-arm platform, DeepMind has shown it can be fine-tuned to run on a variety of robots—such as Apptronik’s Apollo humanoid and the Franka FR3 bi-arm—without redesigning the core architecture. On the Franka arms, it handled fine manipulation like folding garments or executing precision industrial assembly steps; on Apollo, it performed general grasping and object handling in human-centric environments. This adaptability is crucial: robotics deployments often involve bespoke hardware, and the fewer assumptions a model makes about morphology, the broader its potential use cases across research labs and industry pilots.

To help roboticists experiment with and tailor the on-device model, Google DeepMind is releasing a software development kit (SDK) as part of a trusted tester program. The SDK allows developers to evaluate performance in simulation (e.g., via MuJoCo), fine-tune on custom tasks, and integrate with existing control pipelines. Sign-up is initially limited to a select group as DeepMind collects safety feedback and refines deployment guidelines. This marks the first time Google DeepMind has provided such an SDK for a VLA model, signaling a shift toward broader developer engagement in robotics applications.
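The SDK itself is available only to trusted testers, so its exact API is not public. As a rough illustration of what simulation-based evaluation looks like, the sketch below steps a MuJoCo scene with the open-source mujoco Python bindings and queries a placeholder policy object for actions; the scene file, policy interface, and success check are all hypothetical.

```python
# Sketch of simulation-based evaluation with the open-source mujoco bindings.
# The scene file, policy interface, and success check are hypothetical placeholders.
import mujoco
import numpy as np


def task_succeeded(data):
    # Placeholder success check; a real task would inspect object poses in the scene.
    return False


def run_episode(policy, instruction, model, data, max_steps=2000):
    """Roll out the policy for one episode and report success."""
    mujoco.mj_resetData(model, data)
    for _ in range(max_steps):
        obs = {
            "qpos": data.qpos.copy(),  # joint positions
            "qvel": data.qvel.copy(),  # joint velocities
        }
        action = policy.predict_action(obs, instruction)  # placeholder call
        data.ctrl[:] = np.clip(action, -1.0, 1.0)         # write actuator commands
        mujoco.mj_step(model, data)                       # advance the physics
        if task_succeeded(data):
            return True
    return False


model = mujoco.MjModel.from_xml_path("bi_arm_scene.xml")  # hypothetical scene file
data = mujoco.MjData(model)
# success = run_episode(my_policy, "fold the shirt", model, data)
```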

Physical AI introduces unique safety considerations. DeepMind emphasizes a holistic safety approach: semantic content filters guard against harmful instructions, while low-level controllers enforce collision avoidance and force limits. The on-device model undergoes red-teaming and semantic safety benchmarking before new testers gain access; real-world trials will feed back into model improvements. Parada notes that limiting the rollout to trusted testers is vital for understanding edge-case behaviors in uncontrolled environments. As robotics applications move toward homes, factories, and healthcare settings, this cautious introduction underscores the importance of thoroughly vetting any system that can physically interact with people and objects.
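The low-level controllers described above map onto a familiar pattern: a thin guard layer that sits between the model's raw output and the actuators and clamps anything outside safe bounds. The snippet below illustrates that pattern only; the limits, action layout, and names are illustrative, not DeepMind's implementation.

```python
# Illustrative low-level guard between model output and the actuators
# (pattern only; not DeepMind's implementation).
from dataclasses import dataclass

import numpy as np


@dataclass
class SafetyLimits:
    max_joint_velocity: float = 1.0   # rad/s per joint
    max_gripper_force: float = 20.0   # newtons


def guard_command(action: np.ndarray, limits: SafetyLimits) -> np.ndarray:
    """Clamp a raw action before it reaches the robot.

    Assumes a hypothetical layout: action[:-1] are joint-velocity targets,
    action[-1] is a gripper force command.
    """
    safe = action.copy()
    safe[:-1] = np.clip(safe[:-1], -limits.max_joint_velocity, limits.max_joint_velocity)
    safe[-1] = np.clip(safe[-1], 0.0, limits.max_gripper_force)
    return safe
```

Semantic filtering would sit a level higher, rejecting or rewriting harmful instructions before they ever reach the policy.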

By enabling advanced AI reasoning locally, Gemini Robotics On-Device could accelerate automation in sectors where connectivity is unreliable or data privacy is critical: think remote agriculture, offshore maintenance, field robotics in disaster zones, or secure facilities handling sensitive materials. Small labs and startups may benefit from lower infrastructure costs compared to cloud-dependent models, fostering innovation in settings where high-bandwidth links are impractical. Moreover, this development aligns with broader edge AI trends seen in autonomous vehicles, mobile devices, and IoT, where local inference reduces latency and dependency on central servers.

Running sophisticated VLA models on-device still entails challenges. Hardware constraints vary widely across robot platforms, and ensuring real-time performance for safety-critical tasks demands tight optimization. Battery life and thermal limits may constrain prolonged operation. Additionally, while 50–100 demonstrations suffice for many tasks, certain highly specialized or novel tasks could require more data or cloud-based fine-tuning to reach production-grade reliability. DeepMind’s ongoing work will likely explore further compression techniques, hardware-software co-design, and on-device lifelong learning methods to continuously refine capabilities in situ.

Gemini Robotics On-Device represents an important step toward democratizing access to advanced robotic AI by reducing reliance on heavy cloud infrastructure and lowering the barrier to experimentation. As more teams join the trusted tester program and share insights, the robotics community may see rapid iterations and creative applications emerge. For now, Google DeepMind’s cautious, safety-first rollout aims to gather real-world feedback, tune the system, and establish best practices. Over time, on-device VLA models could unlock a new wave of autonomous robots performing valuable tasks in places where connectivity, cost, or security concerns have previously stood in the way. The path forward will involve continued collaboration between AI researchers, roboticists, and domain experts to ensure these models are both powerful and responsibly integrated into human environments.



Topics: Gemini AI (formerly Bard), Google DeepMind