GadgetBond


Google adds Generative UI to Search and Gemini for fully built AI experiences

With Generative UI, Google enables Gemini to design and render complete interfaces—trip planners, educational modules, visual guides—generated instantly from a prompt.

By Shubham Sawarkar, Editor-in-Chief
Nov 20, 2025, 4:06 AM EST
Image: Google. Three examples of AI-generated interfaces: a clothing style showroom, a fractals explorer, and a children's math training dashboard.

Google is taking its next big swing at how we interact with AI – not just through chat boxes and text responses, but through full-blown, custom-made interfaces that appear on demand.

With its new Generative UI system, now rolling out in the Gemini app and Google Search (AI Mode), the company is giving its models the ability to generate entire user experiences – web pages, tools, games, simulations, dashboards – in real time, directly from a prompt.

From answers to experiences

Until now, most AI interactions have looked the same: you type a prompt, the model replies in text (maybe with a few images or a table). Even when AI output is powerful, it still lives inside a static container designed by humans in advance.

Generative UI flips that model. Instead of just replying, the AI decides what kind of interface would best fit your request, and then builds it – complete with layout, visuals, and interactive elements – on the fly.

A single prompt like:

  • “Help me teach fractions to my 7-year-old.”
  • “Plan a 3-day trip to Kyoto with a visual itinerary.”
  • “Show me how RNA polymerase works and compare transcription in bacteria vs humans.”

no longer has to end in a long paragraph. It can become:

  • An interactive learning game
  • A drag-and-drop trip planner with maps and timelines
  • A visual biology explainer with diagrams, stages, and side-by-side comparisons

Google’s research team describes this as moving from static, predefined UIs to dynamic, AI-generated interfaces tailored to each individual query.

How it shows up in Google products

Google is first bringing Generative UI to users in two places:

1. Gemini app: Dynamic view and visual layout

In the Gemini app, Generative UI powers two experimental features: dynamic view and visual layout.

With dynamic view, Gemini doesn’t simply draft text and images – it designs and codes an entire interactive response for each prompt, using its own agentic coding capabilities.

The same underlying model can:

  • Explain the microbiome to a 5‑year‑old with a colorful, simplified, interactive explainer
  • Explain the microbiome to an adult with technical diagrams, layered sections, and references
  • Build a social content gallery for a small business, complete with preview tiles and templates
  • Generate a trip planner, timeline, packing list, and budget tracker in a unified, interactive view

Google’s demos show examples like:

  • A fashion advisor interface tailored to a person’s style query
  • A fractals explorer for learning math and visual patterns
  • A basketball-themed math game that turns exercises into a playful experience

These are not prebuilt templates. Each UI is generated in real time from the user’s request.

2. Google Search: AI Mode with dynamic interfaces

In Google Search, Generative UI is being integrated into AI Mode, starting with subscribers to Google AI Pro and Ultra in the U.S.

Here, instead of just summarizing web content, Gemini 3 in AI Mode can:

  • Interpret the intent behind a complex query
  • Build bespoke tools and simulations tailored to that intent
  • Present the result as a dynamic, interactive environment for deeper understanding and task completion

For example, a biology student who asks:

show me how rna polymerase works. what are the stages of transcription and how is it different in prokaryotic and eukaryotic cells

might receive an interface that:

  • Visualizes the stages of transcription step by step
  • Lets them toggle between prokaryotic vs eukaryotic workflows
  • Highlights differences interactively instead of burying them in text

Users can access this by selecting “Thinking” from the model dropdown in AI Mode.

Under the hood: how Generative UI works

Behind the scenes, Generative UI is powered by Google’s Gemini 3 Pro model, but the raw model is only part of the story. Google’s implementation adds three crucial components:

  1. Tool access
    A server orchestrates access to external tools such as:
    • Image generation
    • Web search
    • Other services needed to enrich the UI
      Some results are passed back to the model to improve quality; others go directly to the browser for speed.
  2. Carefully crafted system instructions
    The model is guided by a rich, structured system prompt that includes:
    • Overall goals
    • Planning strategies
    • Technical specifications (HTML/CSS/JS formats, tool manuals, constraints)
    • Examples of correct behavior
    • Tips for avoiding common UI and coding errors
      In effect, the model is not just “answering” – it is acting like a UI engineer and product designer inside strict guardrails.
  3. Post‑processing
    Once the model emits HTML/CSS/JS, the output is passed through post‑processors to fix common issues:
    • Structural errors in markup
    • Broken or unsafe JavaScript patterns
    • Formatting corrections
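Google has not published its post-processors, but a toy sketch of the kind of cleanup step 3 describes (stripping unsafe inline JavaScript patterns from model-emitted markup) might look like this. The function name and both rules are illustrative assumptions, not Google's actual implementation:

```python
import re

# Toy post-processor: the real pipeline is not public, so these rules are
# illustrative assumptions, not Google's actual implementation.
def sanitize_generated_html(html: str) -> str:
    # Drop inline event handlers such as onclick="..." or onload='...'
    html = re.sub(r'\son\w+\s*=\s*(".*?"|\'.*?\')', '', html,
                  flags=re.IGNORECASE | re.DOTALL)
    # Neutralize javascript: URLs in href/src attributes
    html = re.sub(r'(href|src)\s*=\s*(["\'])\s*javascript:[^"\']*\2',
                  r'\1=\2#\2', html, flags=re.IGNORECASE)
    return html

dirty = '<a href="javascript:steal()" onclick="run()">Open</a>'
print(sanitize_generated_html(dirty))  # <a href="#">Open</a>
```

A production system would more likely parse the markup into a DOM and sanitize it structurally rather than with regexes, but the intent is the same: the model's raw output is never trusted to reach the browser unmodified.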

The result is a pipeline where:

  • The user prompt and underlying system instructions go into the LLM
  • The model calls tools as needed
  • It outputs fully formed web code
  • The browser renders a complete, interactive experience
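As a mental model only, that loop could be sketched roughly as follows. Every name here (`call_model`, `run_tool`, `post_process`, and the reply format) is a hypothetical stand-in, since Google has not published an API for this pipeline:

```python
# Hypothetical sketch of the described pipeline; none of these functions
# correspond to a real, published Google API.
def generate_ui(prompt, system_instructions, call_model, run_tool, post_process):
    transcript = [("system", system_instructions), ("user", prompt)]
    while True:
        reply = call_model(transcript)           # LLM step
        if reply["type"] == "tool_call":         # model requests a tool
            result = run_tool(reply["name"], reply["args"])
            transcript.append(("tool", result))  # feed result back to model
        else:                                    # model emitted final web code
            return post_process(reply["html"])   # fix markup/JS, then render
```

The point of the sketch is the shape of the loop: tool results flow back into the model's context until it emits complete HTML/CSS/JS, which is post-processed before the browser ever sees it.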

Style, branding, and consistency

While Generative UI can freely invent layouts and visuals, Google notes that in product settings, companies may want a consistent visual identity.

The system can therefore be configured to:

  • Always produce interfaces in a specific stylistic theme
  • Apply uniform colors, typography, and component shapes
  • Ensure any generated images match that brand style

In one example, Google shows multiple very different experiences – a game, a food planner, a fashion interface – all sharing the same “Wizard Green” theme, making them feel like parts of a single product family.

Users can also influence style directly in their prompts (for example, “make it look like a sci‑fi HUD” or “use a kids’ storybook aesthetic”) in contexts like dynamic view.
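One plausible way to picture the theming hook is a theme spec folded into the system instructions before generation. This is purely an assumption (Google describes the behavior, not the mechanism), and the "Wizard Green" values below are invented placeholders:

```python
# Illustrative only: a theme spec prepended to the system prompt.
# These "Wizard Green" values are invented placeholders, not Google's palette.
WIZARD_GREEN = {
    "primary_color": "#2e7d32",
    "font_family": "Inter, sans-serif",
    "corner_radius": "12px",
}

def themed_instructions(base_instructions: str, theme: dict) -> str:
    # Turn each theme entry into an explicit constraint line for the model.
    rules = "\n".join(f"- Always use {k.replace('_', ' ')}: {v}"
                      for k, v in theme.items())
    return f"{base_instructions}\n\nStyle constraints:\n{rules}"

print(themed_instructions("You design complete interactive UIs.", WIZARD_GREEN))
```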

Do people actually prefer this?

To test whether Generative UI is more than just a flashy demo, Google ran evaluations comparing different types of outputs for the same prompts:

  • A human‑designed website made by experts for that prompt
  • A Generative UI interface
  • The top Google Search result
  • Standard LLM outputs (raw text or markdown)

Findings:

  • Human expert sites still came out on top in terms of user preference.
  • Generative UI experiences ranked second – and close behind humans, with a large gap between them and the traditional LLM outputs.
  • Plain text and basic markdown responses were clearly less preferred compared to both human and Generative UI interfaces.

Importantly, these evaluations ignored generation speed. Google also observed that the quality of Generative UI is highly correlated with the strength of the underlying model: newer Gemini models perform substantially better.

To support research, Google created PAGEN, a dataset of expert‑built websites for consistent benchmarking, which it plans to release to the community.
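Google does not detail how these rankings were scored. One common way such preference comparisons are aggregated (an assumption here, not Google's stated protocol) is a simple pairwise win rate over head-to-head votes; the vote data below is invented for illustration:

```python
from collections import defaultdict

# Toy aggregation of pairwise preference votes. The votes are invented and
# the method is a generic win-rate tally, not Google's evaluation protocol.
def win_rates(votes):
    wins, games = defaultdict(int), defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    return {k: wins[k] / games[k] for k in games}

votes = [("human_site", "generative_ui"), ("generative_ui", "plain_text"),
         ("human_site", "plain_text"), ("generative_ui", "top_search_result")]
rates = win_rates(votes)
print(sorted(rates, key=rates.get, reverse=True))
```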

Limitations and open questions

Despite the strong early results, Google is candid that Generative UI is still in its early days.

Current limitations include:

  • Latency: some interfaces can take a minute or more to generate, depending on complexity.
  • Occasional inaccuracies: like any LLM system, Generative UI can still produce incorrect or incomplete content, now embedded inside a polished interface.
  • Scope of tools: today’s implementation taps a specific set of tools; broader capabilities will require more integrations and safeguards.

These constraints hint at open questions for the future:

  • How do you debug an AI‑generated interface when something goes wrong?
  • How do you ensure accessibility, performance, and security at scale when UIs are constantly changing?
  • How much control should designers and developers retain versus what’s delegated to the model?

The bigger picture: a “magic cycle” of research and product

Google frames Generative UI as part of what it calls the “magic cycle of research”:

  1. Research breakthroughs (like LLMs that can generate functioning UIs)
  2. Lead to product innovations (Gemini dynamic view, AI Mode experiences)
  3. Which then unlock new usage patterns and user needs
  4. Feeding back into new research questions and refinements

Looking ahead, Google sees several promising directions:

  • Letting Generative UI access more services and APIs, so experiences can take real actions across the web or in apps.
  • Adapting to richer context – including history, preferences, and real‑time signals – so interfaces feel even more personal and situationally aware.
  • Using human feedback more deeply to refine not just content, but layout, interaction patterns, and visual language.
  • Pushing toward “fully AI‑generated user experiences,” where people no longer choose from an app store or template gallery, but simply describe what they need and receive a one‑off, made‑to‑measure interface.

For now, Generative UI is still labeled an experiment. But for anyone watching the evolution of AI interfaces, it marks a clear inflection point: the interface itself has become part of what the model can generate, not just the words inside it.




Copyright © 2026 GadgetBond. All Rights Reserved.