GadgetBond


Google adds Generative UI to Search and Gemini for fully built AI experiences

With Generative UI, Google enables Gemini to design and render complete interfaces—trip planners, educational modules, visual guides—generated instantly from a prompt.

By Shubham Sawarkar, Editor-in-Chief
Nov 20, 2025, 4:06 AM EST
We may get a commission from retail offers.
Image: Google. Three AI-generated interface examples: a clothing style showroom, a dark-themed fractals explorer, and a colorful children’s math training dashboard.

Google is taking its next big swing at how we interact with AI – not just through chat boxes and text responses, but through full-blown, custom-made interfaces that appear on demand.

With its new Generative UI system, now rolling out in the Gemini app and Google Search (AI Mode), the company is giving its models the ability to generate entire user experiences – web pages, tools, games, simulations, dashboards – in real time, directly from a prompt.

From answers to experiences

Until now, most AI interactions have looked the same: you type a prompt, the model replies in text (maybe with a few images or a table). Even when AI output is powerful, it still lives inside a static container designed by humans in advance.

Generative UI flips that model. Instead of just replying, the AI decides what kind of interface would best fit your request, and then builds it – complete with layout, visuals, and interactive elements – on the fly.

A single prompt like:

  • “Help me teach fractions to my 7-year-old.”
  • “Plan a 3-day trip to Kyoto with a visual itinerary.”
  • “Show me how RNA polymerase works and compare transcription in bacteria vs humans.”

no longer has to end in a long paragraph. It can become:

  • An interactive learning game
  • A drag-and-drop trip planner with maps and timelines
  • A visual biology explainer with diagrams, stages, and side-by-side comparisons

Google’s research team describes this as moving from static, predefined UIs to dynamic, AI-generated interfaces tailored to each individual query.
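The core shift is that the model first decides what kind of interface fits a request, then builds it. A toy sketch makes that "decide" step concrete; the keyword heuristic below is purely illustrative (the real system uses the LLM itself to make this choice):

```python
# Toy illustration, NOT Google's implementation: route a prompt to an
# interface archetype. In Generative UI this decision is made by the
# model; a crude keyword check stands in for it here.

def choose_interface(prompt: str) -> str:
    """Pick an interface archetype for a prompt (hypothetical heuristic)."""
    p = prompt.lower()
    if any(w in p for w in ("teach", "learn", "explain")):
        return "interactive-explainer"
    if any(w in p for w in ("trip", "itinerary", "plan")):
        return "visual-planner"
    return "text-answer"

print(choose_interface("Help me teach fractions to my 7-year-old"))
# interactive-explainer
```

The point is the control flow, not the heuristic: the interface type is an output of the system, not a fixed template chosen in advance.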

How it shows up in Google products

Google is first bringing Generative UI to users in two places:

1. Gemini app: Dynamic view and visual layout

In the Gemini app, Generative UI powers two experimental features: dynamic view and visual layout.

With dynamic view, Gemini doesn’t simply draft text and images – it designs and codes an entire interactive response for each prompt, using its own agentic coding capabilities.

The same underlying model can:

  • Explain the microbiome to a 5‑year‑old with a colorful, simplified, interactive explainer
  • Explain the microbiome to an adult with technical diagrams, layered sections, and references
  • Build a social content gallery for a small business, complete with preview tiles and templates
  • Generate a trip planner, timeline, packing list, and budget tracker in a unified, interactive view

Google’s demos show examples like:

  • A fashion advisor interface tailored to a person’s style query
  • A fractals explorer for learning math and visual patterns
  • A basketball-themed math game that turns exercises into a playful experience

These are not prebuilt templates. Each UI is generated in real time from the user’s request.
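To make the audience-tailoring idea concrete, here is a hedged sketch in which the same topic yields different self-contained HTML depending on the audience. `generate_interface` is a stand-in for an agentic Gemini call, not a real API:

```python
# Hypothetical sketch of the dynamic-view idea: one request, different
# complete interfaces per audience. The branching below fakes what the
# model would decide on its own.

def generate_interface(topic: str, audience: str) -> str:
    """Stub for an agentic model call that emits a complete web UI."""
    if audience == "child":
        style = "font-family:'Comic Sans MS';font-size:24px"
        body = f"<h1>Meet your {topic} buddies!</h1><button>Start exploring</button>"
    else:
        style = "font-family:Georgia;font-size:16px"
        body = f"<h2>The {topic}</h2><p>Layered technical sections with references.</p>"
    return f"<!doctype html><html><body style=\"{style}\">{body}</body></html>"

child_ui = generate_interface("microbiome", "child")
adult_ui = generate_interface("microbiome", "adult")
```

In the actual product the layout, copy, and interactivity are all generated, so the two results would differ far more than a font swap.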


2. Google Search: AI Mode with dynamic interfaces

In Google Search, Generative UI is being integrated into AI Mode, starting with subscribers to Google AI Pro and Ultra in the U.S.

Here, instead of just summarizing web content, Gemini 3 in AI Mode can:

  • Interpret the intent behind a complex query
  • Build bespoke tools and simulations tailored to that intent
  • Present the result as a dynamic, interactive environment for deeper understanding and task completion

For example, a biology student who asks:

“show me how rna polymerase works. what are the stages of transcription and how is it different in prokaryotic and eukaryotic cells”

might receive an interface that:

  • Visualizes the stages of transcription step by step
  • Lets them toggle between prokaryotic and eukaryotic workflows
  • Highlights differences interactively instead of burying them in text

Users can access this by selecting “Thinking” from the model dropdown in AI Mode.
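The kind of artifact AI Mode would emit for that query is an ordinary web page with the comparison built in. The snippet below hand-writes a minimal version of such a page for illustration; in the real feature this HTML is model-generated:

```python
# Minimal sketch of a generated transcription interface: both workflows
# in one document, plus a toggle. Stage names are standard biology; the
# markup and ids are invented for this example.

STAGES = ["initiation", "elongation", "termination"]

def transcription_ui() -> str:
    panels = ""
    for cell in ("prokaryotic", "eukaryotic"):
        steps = "".join(f"<li>{s}</li>" for s in STAGES)
        # Eukaryotes additionally process the RNA transcript.
        extra = "<li>RNA processing (eukaryotes only)</li>" if cell == "eukaryotic" else ""
        panels += f"<ol id='{cell}'>{steps}{extra}</ol>"
    toggle = ("<button onclick=\"document.getElementById('eukaryotic')"
              ".classList.toggle('hidden')\">Compare cell types</button>")
    return f"<html><body>{toggle}{panels}</body></html>"

ui = transcription_ui()
```

Even this toy version shows why an interface beats prose for the query: the differences live in the structure, not in a paragraph the reader must parse.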


Under the hood: how Generative UI works

Behind the scenes, Generative UI is powered by Google’s Gemini 3 Pro model, but the raw model is only part of the story. Google’s implementation adds three crucial components:

  1. Tool access
    A server orchestrates access to external tools such as:
    • Image generation
    • Web search
    • Other services needed to enrich the UI
      Some results are passed back to the model to improve quality; others go directly to the browser for speed.
  2. Carefully crafted system instructions
    The model is guided by a rich, structured system prompt that includes:
    • Overall goals
    • Planning strategies
    • Technical specifications (HTML/CSS/JS formats, tool manuals, constraints)
    • Examples of correct behavior
    • Tips for avoiding common UI and coding errors
      In effect, the model is not just “answering” – it is acting like a UI engineer and product designer inside strict guardrails.
  3. Post‑processing
    Once the model emits HTML/CSS/JS, the output is passed through post‑processors to fix common issues:
    • Structural errors in markup
    • Broken or unsafe JavaScript patterns
    • Formatting corrections
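A post-processing pass of the kind listed above might look like the sketch below. These particular fixes are my guesses at the categories the article names (structural errors, unsafe JavaScript patterns), not Google’s actual code:

```python
import re

# Hypothetical post-processor for model-emitted HTML: strip inline event
# handlers and javascript: URLs (common unsafe patterns), and repair one
# common structural error, a missing </body>.

def postprocess(html: str) -> str:
    # Remove inline event handlers like onclick="..."
    html = re.sub(r'\son\w+="[^"]*"', "", html)
    # Neutralize javascript: URLs.
    html = html.replace("javascript:", "")
    # Fix a missing </body> before </html>.
    if "<body" in html and "</body>" not in html:
        html = html.replace("</html>", "</body></html>")
    return html

fixed = postprocess('<html><body><a href="javascript:alert(1)" onclick="x()">hi</a></html>')
```

Real pipelines would use a proper HTML parser and a sanitizer allowlist rather than regexes, but the role in the pipeline is the same: catch the model’s recurring mistakes before the browser sees them.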

The result is a pipeline where:

  • The user prompt and underlying system instructions go into the LLM
  • The model calls tools as needed
  • It outputs fully formed web code
  • The browser renders a complete, interactive experience
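The four bullets above can be sketched as one function chain. Everything here is a stub – the model call, the tool server, and the post-processing step are placeholders standing in for Gemini 3 Pro and Google’s orchestration, not real APIs:

```python
# End-to-end shape of the Generative UI pipeline, fully stubbed.

SYSTEM_PROMPT = (
    "You are a UI engineer and product designer. Plan the layout, then "
    "emit complete HTML/CSS/JS. Call tools for images and search."
)  # abbreviated sketch of the rich system instructions described above

def call_model(system: str, user: str, tools: dict) -> str:
    """Stand-in for Gemini 3 Pro: pretend it made one image-tool call."""
    img = tools["image"]("hero illustration for: " + user)
    return f"<html><body><img src='{img}'><h1>{user}</h1></body></html>"

def postprocess(html: str) -> str:
    """Trivial stand-in for the real post-processing pass."""
    return html.replace("javascript:", "")

def pipeline(prompt: str) -> str:
    # Tool server: here just a fake image generator.
    tools = {"image": lambda desc: f"generated/{len(desc)}.png"}
    return postprocess(call_model(SYSTEM_PROMPT, prompt, tools))

page = pipeline("Plan a 3-day trip to Kyoto")
```

The browser then renders `page` directly; nothing in the flow is a prebuilt template.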

Style, branding, and consistency

While Generative UI can freely invent layouts and visuals, Google notes that in product settings, companies may want consistent visual identity.

The system can therefore be configured to:

  • Always produce interfaces in a specific stylistic theme
  • Apply uniform colors, typography, and component shapes
  • Ensure any generated images match that brand style

In one example, Google shows multiple very different experiences – a game, a food planner, a fashion interface – all sharing the same “Wizard Green” theme, making them feel like parts of a single product family.

Users can also influence style directly in their prompts (for example, “make it look like a sci‑fi HUD” or “use a kids’ storybook aesthetic”) in contexts like dynamic view.
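One plausible way to enforce such consistency is to append a fixed theme spec to the system instructions so every generated UI inherits it. The “Wizard Green” values below are invented for illustration; Google has not published its theme format:

```python
# Sketch of brand-consistent generation: a theme dict folded into the
# system instructions. All values here are hypothetical.

THEME = {
    "name": "Wizard Green",
    "primary": "#1e7a46",      # invented brand color
    "font": "Inter, sans-serif",
    "corner_radius": "12px",
}

def themed_instructions(base: str, theme: dict) -> str:
    rules = "; ".join(f"{k}: {v}" for k, v in theme.items())
    return f"{base}\nAlways style interfaces with this theme: {rules}."

instructions = themed_instructions("You are a UI engineer.", THEME)
```

User-level style prompts (“sci-fi HUD”, “storybook aesthetic”) would then layer on top of, or override, this baseline depending on the product’s policy.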

Do people actually prefer this?

To test whether Generative UI is more than just a flashy demo, Google ran evaluations comparing different types of outputs for the same prompts:

  • A human‑designed website made by experts for that prompt
  • A Generative UI interface
  • The top Google Search result
  • Standard LLM outputs (raw text or markdown)

Findings:

  • Human expert sites still came out on top in terms of user preference.
  • Generative UI experiences ranked second, close behind the human-designed sites and well ahead of traditional LLM outputs.
  • Plain text and basic markdown responses were clearly less preferred compared to both human and Generative UI interfaces.

Importantly, these evaluations ignored generation speed. Google also observed that the quality of Generative UI is highly correlated with the strength of the underlying model: newer Gemini models perform substantially better.

To support research, Google created PAGEN, a dataset of expert‑built websites for consistent benchmarking, which it plans to release to the community.

Limitations and open questions

Despite the strong early results, Google is candid that Generative UI is still in its early days.

Current limitations include:

  • Latency: some interfaces can take a minute or more to generate, depending on complexity.
  • Occasional inaccuracies: like any LLM system, Generative UI can still produce incorrect or incomplete content, now embedded inside a polished interface.
  • Scope of tools: today’s implementation taps a specific set of tools; broader capabilities will require more integrations and safeguards.

These constraints hint at open questions for the future:

  • How do you debug an AI‑generated interface when something goes wrong?
  • How do you ensure accessibility, performance, and security at scale when UIs are constantly changing?
  • How much control should designers and developers retain versus what’s delegated to the model?

The bigger picture: a “magic cycle” of research and product

Google frames Generative UI as part of what it calls the “magic cycle of research”:

  1. Research breakthroughs (like LLMs that can generate functioning UIs)
  2. Lead to product innovations (Gemini dynamic view, AI Mode experiences)
  3. Which then unlock new usage patterns and user needs
  4. Feeding back into new research questions and refinements

Looking ahead, Google sees several promising directions:

  • Letting Generative UI access more services and APIs, so experiences can take real actions across the web or in apps.
  • Adapting to richer context – including history, preferences, and real‑time signals – so interfaces feel even more personal and situationally aware.
  • Using human feedback more deeply to refine not just content, but layout, interaction patterns, and visual language.
  • Pushing toward “fully AI‑generated user experiences,” where people no longer choose from an app store or template gallery, but simply describe what they need and receive a one‑off, made‑to‑measure interface.

For now, Generative UI is still labeled an experiment. But for anyone watching the evolution of AI interfaces, it marks a clear inflection point: the interface itself has become part of what the model can generate, not just the words inside it.


Copyright © 2026 GadgetBond. All Rights Reserved.