Google is quietly turning Opal from a neat no‑code AI toy into something much closer to a real agent platform—and the new “agent step” is the moment that shift becomes obvious. What used to be a fixed pipeline of model calls is now an agent that can understand your goal, plan a path, pick tools like web search, Veo, or Google Sheets, and talk to you along the way when it needs more context.
If you haven’t been following Opal closely, it started life as a Google Labs experiment where you type “make an app that…” in plain English and Opal turns that idea into a visual workflow and shareable mini‑app—no code required. You describe what you want, Opal breaks it into steps (inputs, AI calls, outputs), draws the flowchart for you, and hosts it so others can use it via a simple link. Over the last year, Google has rolled it out worldwide and embedded it into the Gemini web app, so anyone can spin up lightweight AI tools as easily as making a Google Doc.
The new piece is that Opal’s core “generate” step can now be an agent instead of a single model call, which is a subtle change with big consequences. Instead of you deciding “use this Gemini model, then this tool, then that one,” you hand your objective to the agent and let it decide which tools and models to chain together: web search for research, Veo for video, Sheets for long‑term memory, and so on. Under the hood, it follows a “plan then act” pattern—breaking your goal into smaller actions, choosing tools, and updating its plan as it goes—so non‑technical users can get fairly sophisticated workflows without touching branching logic or APIs.
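Opal is a no‑code product and exposes none of this as an API, but the “plan then act” pattern described above is easy to picture in code. Here is a minimal, purely illustrative Python sketch—every name (`run_agent`, `ask_model`, the tool keys) is invented for this example:

```python
def run_agent(goal, tools, ask_model):
    """Plan-then-act sketch: draft a plan, execute one step, re-plan, repeat.

    `ask_model` stands in for a planning model call; `tools` maps tool names
    (e.g. "web_search", "veo", "sheets") to callables. All hypothetical.
    """
    # Ask the model to break the goal into an ordered list of actions.
    plan = ask_model("plan", goal)  # -> list of {"tool": ..., "input": ...}
    results = []
    while plan:
        step = plan.pop(0)
        # Act: run the tool the model chose for this step.
        output = tools[step["tool"]](step["input"])
        results.append(output)
        # Re-plan: let the model revise the remaining steps in light of
        # what it just learned.
        plan = ask_model("replan", {"done": output, "remaining": plan})
    return results
```

The key point the sketch makes is that the tool choice and the step list both come from the model at run time, rather than being fixed by the workflow author.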
Google’s own examples show how big the UX change is. Take their storybook demo: before, you had to predefine details like page count, questions, and prompts to generate a children’s book. Now, a “Visual Storyteller” Opal uses the agent step to figure out what information it needs, suggest plot points, and adjust the narrative in real time based on how you respond, so each run feels less like filling out a form and more like co‑writing with an assistant that has opinions. The same thing happens in design workflows: the older interior design Opal was basically “upload a photo, choose a style, get one redesigned image back,” but the new Room Styler version can ask follow‑up questions, show alternatives, and refine its understanding of your personal aesthetic over multiple turns.
To make that work, Google is layering in a few new capabilities that give Opal agents more memory and a brain. First is persistent memory: Opals can now remember things like your name, brand, or content preferences across sessions, so a video‑idea generator, for example, can store your brand identity once and instantly pitch new hooks in the same voice every time you come back. That sounds simple, but it’s the difference between “fun demo” and “tool you can rely on every morning.” Then there’s dynamic routing: as a builder, you can define multiple possible paths in your workflow and let the agent decide which one to follow based on conditions you describe in plain language. Google’s Executive Briefing Opal, for instance, automatically branches depending on whether you’re preparing for a new or existing client—searching the web for background in one case or pulling from internal notes in the other.
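Dynamic routing, as described above, amounts to letting a model classify the situation against plain‑language branch descriptions and then running whichever branch it picks. A hypothetical Python sketch (the branch names and the Executive Briefing scenario mirror Google's demo; everything else is invented for illustration):

```python
def route(context, branches, classify):
    """Dynamic-routing sketch: a model (stubbed here as `classify`) picks a
    branch from plain-language descriptions of when each path applies."""
    descriptions = {name: b["when"] for name, b in branches.items()}
    choice = classify(context, descriptions)  # model returns a branch name
    return branches[choice]["run"](context)

# Branches loosely mirroring the Executive Briefing example:
branches = {
    "new_client": {
        "when": "the client has no prior history with us",
        "run": lambda ctx: f"web-search background on {ctx['client']}",
    },
    "existing_client": {
        "when": "we already have notes on this client",
        "run": lambda ctx: f"pull internal notes for {ctx['client']}",
    },
}
```

The builder writes only the `when` descriptions in natural language; deciding which one matches a given run is the agent's job.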
The last big piece is interactive chat, which is Google’s way of saying the agent is allowed to admit it doesn’t know enough yet. Instead of silently producing a mediocre result from incomplete inputs, the agent step can pause the workflow, start a mini chat with the user, ask clarifying questions, or offer options before it moves on. In practice, that means your Room Styler Opal can come back with, “Do you want more natural wood or bold color accents?” rather than guessing and forcing you to manually rerun everything. It’s a small behavioral tweak that makes these Opals feel less like scripts and more like collaborators.
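The pause‑and‑ask behavior boils down to a simple guard before generation: if required context is missing, ask the user instead of guessing. A minimal sketch, assuming hypothetical `ask_user` and `generate` callbacks (none of these names come from Opal):

```python
def generate_with_clarification(inputs, required, ask_user, generate):
    """Pause-and-ask sketch: fill in missing context via a mini chat with
    the user before generating, rather than producing a guess."""
    for field in required:
        if not inputs.get(field):
            # e.g. "Do you want more natural wood or bold color accents?"
            inputs[field] = ask_user(f"Please specify {field}:")
    return generate(inputs)
```

The win is exactly the one the article describes: the user answers one question mid‑run instead of rerunning the whole workflow after a bad guess.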
From Google’s perspective, this strikes an interesting balance between automation and control. If you’re new to AI tools, you can mostly ignore the complexity: describe your goal, drop in an agent step, and the workflow “just works” because the agent can self‑correct, ask questions, remember your preferences, and string tools together on its own. If you’re a power user, all the old fixed steps are still there, so you can combine rigid, highly controlled logic with agentic blocks where flexibility and adaptation matter more. It’s very much in line with how other platforms are thinking about agents right now: don’t replace workflows, but inject agents into them where they add the most value.
Stepping back, this is also part of a broader pattern in Google’s AI stack. Gemini 3 Flash—the fast, lightweight model that already powers a lot of real‑time experiences—is behind Opal’s new agent behavior, choosing tools and planning steps on the fly. The same “agentic” thinking is showing up elsewhere, too, like Agentic Vision in Gemini 3 Flash, which uses multi‑step reasoning over visual inputs. Opal becomes the no‑code, visual front‑end to all of that: a place where you can build mini‑apps that lean on increasingly capable agents without having to understand the model names or API docs.
For teams and solo creators, the practical implications are pretty straightforward. A marketer can build an Opal that pulls campaign metrics, drafts an executive summary tailored to different stakeholders, and remembers the tone and format each exec prefers, all without writing a line of code. A small business can spin up an agent that monitors incoming inquiries, looks up order history in Sheets, drafts responses, and flags only the tricky cases for a human to review. Educators, creators, and internal tools folks get a playground to prototype agents that are not just “answer bots” but multi‑step, tool‑using workflows wrapped in a simple interface.
It’s also a bit of a signal about where Google thinks everyday AI is heading. Instead of everyone talking directly to a single general‑purpose chatbot, the company envisions a layer of small, focused agents—built in Opal, shared like links, tailored to specific jobs—that sit between Gemini and the rest of your work. Today’s agent step is just one upgrade in one Labs product, but it pushes Opal from “cool way to chain prompts” to “DIY agent platform” for people who would never call themselves developers. And that, quietly, is a pretty big deal for how accessible agentic AI is about to feel.