
GadgetBond


Google’s AI video generator Flow gets realistic lighting and new audio tools

Flow users can now generate videos with natural shadows, lighting, and audio as Google upgrades its Veo 3.1 model for more realistic AI filmmaking.

By Shubham Sawarkar, Editor-in-Chief
Oct 16, 2025, 6:26 AM EDT

Google Veo 3.1 (Image: Google)

On Oct. 15, 2025, Google quietly took another step forward in the race to make AI-generated video feel less like a novelty and more like a practical creative tool. The update pairs a new Veo model (Veo 3.1) with a set of tightened editing controls inside Flow — Google’s filmmaking interface — and, crucially, adds generated audio to many of Flow’s features. What used to be a short, silent demo can now arrive with natural-sounding soundscapes and camera-friendly lighting tricks that make the result read, at first glance, like real footage.

This matters because Flow isn’t just a lab toy. Google says Flow users have already generated hundreds of millions of short videos, and Veo 3.1 is explicitly built to give creators more control over cinematic details — the kind of things humans notice first: shadows, lens movement, and the way objects sit in a room. Flow’s new “Insert” tool now attempts to match shadows and lighting when adding an object; “Remove” will reconstruct the scene so the erased item looks like it was never there; and “Extend” (aka Scene Extension) can lengthen an existing clip by generating more frames that pick up from the last second of action. Those are not tiny quality-of-life improvements — they change what a creator can iterate on without leaving the Flow interface.

Technically speaking, Veo 3.1 brings two things in one package: better audiovisual fidelity and new generation modes. Google’s developer docs and product pages describe Veo 3.1 as a model that natively generates audio in its video outputs, and the Gemini product materials still show Veo producing high-quality, 8-second clips by default — but Flow’s Extend feature can stitch together additional generated footage to create longer sequences, “even lasting for a minute or more.” In short, the base generation unit remains short, but Flow gives you practical tools to make those units feel continuous and cinematic.
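The arithmetic behind that “short units, long sequences” model is worth making concrete. As a minimal sketch, assuming an 8-second base clip and a hypothetical per-pass extension length (Google doesn’t publish the exact figure), you can estimate how many Extend passes it takes to reach a “minute or more”:

```python
import math

# Back-of-the-envelope sketch: Veo's base generation unit is an 8-second
# clip, and Flow's Extend stitches on additional generated footage that
# picks up from the clip's final second. The per-pass extension length
# (7 s here) is an illustrative assumption, not a documented figure.
BASE_CLIP_SECONDS = 8

def extend_passes_needed(target_seconds: float,
                         seconds_per_extend: float = 7.0) -> int:
    """How many Extend passes to grow an 8-second base clip to a target length."""
    if target_seconds <= BASE_CLIP_SECONDS:
        return 0
    remaining = target_seconds - BASE_CLIP_SECONDS
    return math.ceil(remaining / seconds_per_extend)

# Reaching a one-minute sequence under these assumptions:
print(extend_passes_needed(60))  # 8 passes: 8 s base + 8 * 7 s = 64 s
```

Under these assumptions, a one-minute shot is roughly eight iterative generations rather than one long render — which is why artifact accumulation across passes (discussed below) matters so much.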

Some of the new Flow workflows are worth calling out by name because they signal where Google thinks creators want to spend time:

  • Ingredients to Video. Feed Flow up to three reference images (people, objects, styles) and Veo will use them as ingredients for a finished scene — now with generated audio tied to that visual prompt. This is useful for product spots, character tests, and stylized shorts.
  • Frames to Video. Give Flow a start frame and an end frame and it generates the footage that bridges them — effectively a controlled morph or cinematic transition with audio. Handy for motion directors and social clips.
  • Scene Extension (Extend). Pick the last second of a clip and extend the action; Flow will continue the shot, and add synchronized audio where appropriate. That’s how you can turn a neat 8-second idea into an establishing shot or a longer sequence without manual compositing.
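To make the inputs of the three workflows concrete, here is a hypothetical sketch of their request shapes as plain dataclasses. These types and field names are illustrations of the descriptions above, not Google’s actual Flow or Gemini API:

```python
from dataclasses import dataclass

# Hypothetical request shapes for Flow's three generation modes.
# Every class and field name here is an assumption for illustration.

@dataclass
class IngredientsToVideo:
    prompt: str
    reference_images: list  # up to three: people, objects, styles
    generate_audio: bool = True

    def __post_init__(self):
        if not 1 <= len(self.reference_images) <= 3:
            raise ValueError("Flow accepts up to three reference images")

@dataclass
class FramesToVideo:
    start_frame: str  # path or asset ID of the opening frame
    end_frame: str    # path or asset ID of the closing frame
    generate_audio: bool = True

@dataclass
class SceneExtension:
    source_clip: str             # existing clip to continue
    pickup_seconds: float = 1.0  # Extend picks up from the last second

req = IngredientsToVideo(prompt="a stylized product spot",
                         reference_images=["mug.png", "studio-style.png"])
print(len(req.reference_images))  # 2
```

The common thread is that each mode constrains generation with different inputs — reference images, boundary frames, or a trailing second of footage — rather than relying on a text prompt alone.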

Access and availability are familiar Google-tier choices: Veo 3.1 is rolling out in Flow and in Google’s Gemini app, and developers can use the model through the Gemini API (it’s being offered as a paid preview). Enterprise customers will see the capabilities appear in Vertex AI. In consumer-facing channels — the Gemini mobile app — Google also layers visible watermarks and embedded SynthID markings into videos to indicate they were AI-generated, part of its attempt to balance capability with disclosure.

If you’ve been following the “video-AI” arms race, this update looks calibrated to one big goal: reduce the visible gap between synthesis and reality. Competitors such as OpenAI’s Sora and other research-lab models are racing on length, character consistency and audio fidelity; Google’s play here is to combine granular editing controls (lighting, shadows, object insertion/removal) with native audio so creators get a holistic filmmaking workflow rather than separate tools for image, motion and sound. That strategy makes Veo 3.1 less of a “generator” and more of an integrated production assistant.

But every technical advance carries trade-offs. Better lighting, natural shadows and coherent audio — the very things that make a clip feel authentic — also make it easier for misuse: deepfakes, manipulated footage, and misleading edits. Google is explicitly trying to thread that needle; the company points to watermarking, SynthID, red-teaming and safety reviews as mitigations. Those measures matter, but they’re not a silver bullet. As Google itself admits, features like “Remove” (which reconstructs the background and makes it “look as though the object was never there”) will inevitably be used in both delightful and deceptive ways. That tension — power vs. provenance — is the central storyline for AI video right now.

For creators, the immediate upside is practical: less time wrestling with rotoscoping, better first-draft audio, and a faster path from moodboard to shareable footage. For journalists and investigators, it’s an arms race: newsrooms and verification teams must update toolkits that relied on audio or minor lighting inconsistencies as clues indicating fakery. For regulators and platforms, it’s a policy test: how to encourage creative experimentation while preventing malicious reuse.

What to watch next: adoption and artifacts. Will Flow’s insertion and removal tools produce clean, artifact-free frames across a range of real-world footage? Can Extend retain lip sync and dialogue if the last second lacks speech? Google’s docs warn that voice is hard to extend if it isn’t present in the final second — a practical limitation with creative implications. And while Veo 3.1’s “fast” variant targets speed, the quality vs. throughput trade-off will decide whether these clips land on brand videos, indie shorts, or just as social curiosities.

Veo 3.1 and the Flow updates mark a step toward AI video that’s truly usable — not just impressive. They make the output more convincing at a glance by tying together visuals and sound and by giving creators tools that understand cinematic logic (lighting, shadows, continuity). But with that usable power comes a renewed urgency around provenance, platform controls and verification. If AI video is going to be an honest craft, the industry will need both better tools for creation and better systems for disclosure and detection — at the same speed that the models themselves continue to improve.

