Ever found yourself staring at a fridge full of ingredients and wishing for an instant recipe tutor? Or maybe you’ve wandered into a hardware store debating which screw size to grab? At Google I/O 2025, Google unveiled a brand-new way to turn your camera into a personal AI guide—right within Search itself.
Last year, Google dipped its toes into the world of visual AI with Project Astra, an experimental playground for getting machines to “see” through your lens. That early demo let users chat with an AI assistant about everything in view—think unpacking the contents of your pantry or identifying plants on a hike. After a successful Android rollout under the name Gemini Live, Google is now folding that functionality directly into Search’s AI Mode as Search Live.
In practical terms, Search Live adds a fourth icon alongside Text, Voice and Lens in Google’s revamped AI Mode. Tap the new Live button and grant camera access, and you instantly start a real-time visual chat with Search. Need cooking tips? Point at ingredients. Planning a garden? Scan your backyard. The AI will not just snap a single photo—it maintains a continuous feed, letting you follow up with “What about that green leaf over there?” or “Which of these peppers is spiciest?”
If you’re itching to try it, you’ll need to be in the U.S. and signed up for Labs in Google Search. Search Live is slated to hit beta later this summer, with a wider rollout to follow in the months ahead. Google hasn’t pinned down an exact date yet, but you can expect to see the Labs toggle light up by August 2025 at the latest.
Search Live is just one piece of Google’s vision for an AI-driven search ecosystem. Alongside it, Google announced:
- Deep Search: a research-focused mode that synthesizes multiple sources for scholarly or investigative queries, letting you dive far deeper than a standard answer snippet.
- AI agents: automated helpers that can carry out tasks like booking tickets or comparing prices, directly within Search.
- Enhanced shopping features: tools to visualize products in your space via AR, compare items side-by-side, and even automatically apply discount codes at checkout.
Together, these additions aim to transform Search from a passive lookup tool into an active partner in problem-solving.
Over on the Gemini app, Google is rolling out the same camera-sharing capability, and then some. The iOS version will support both camera and screen sharing, so you can show Gemini what's on your device, whether that's a queued-up recipe video or a confusing spreadsheet. This mirrors the Android rollout, which began on Pixel 9 and Galaxy S25 models last month before expanding to all Android devices a few weeks later.
Originally, camera and screen sharing were earmarked for the paid Gemini Advanced tier, but Google reversed course and made them free for everyone on both iOS and Android.
Real-world use cases
- Home cooking: Scan your fridge or pantry and have Gemini whip up recipe ideas or suggest substitutes on the fly.
- Shopping: In-store and unsure which lamp fits your living room? Show it the display, ask for style tips, and even see a virtual preview in your space.
- Learning and troubleshooting: Whether it’s fixing a leaky faucet or diagnosing a car issue, the AI’s ongoing visual context means you can work through multi-step problems interactively.
Of course, always be mindful when sharing live video. Google emphasizes that you’re in control: you must explicitly tap to start sharing, and you can pause or stop at any time. All feeds run through encrypted channels, and you can review or delete any stored snapshots within your Google account settings.