GadgetBond

Midjourney users can now create short videos from static art

The new Midjourney video tool transforms static images into moving visuals using AI, with options to control motion style and extend clip duration.

By Shubham Sawarkar, Editor-in-Chief
Jun 19, 2025, 1:26 PM EDT
A close-up of a computer screen with the word Midjourney on it.
Photo by Jonathan Kemper / Unsplash

Midjourney, known for its trailblazing AI image-generation tools, has unveiled its first video-generation model, V1, marking a significant pivot toward multimedia creation. This initial release enables users to animate still images into short video clips, reflecting the company’s ambition to expand beyond static visuals into dynamic content generation. The announcement comes amid a broader industry push into AI-driven video synthesis, with competitors like OpenAI, Google, and Meta also rolling out or experimenting with similar capabilities.

At its core, V1 is an image-to-video model: after creating or uploading an image on Midjourney’s platform, users see an “animate” button. Pressing it generates a 5-second clip from a default text prompt (“just making things move”), which users can override in a “manual” mode to specify the motion themselves. Users may also choose an uploaded image as the “starting frame.” Two motion settings control the dynamism of the output: “low motion,” where typically only the subject moves, and “high motion,” where both camera and subject may shift. After the initial 5 seconds, users can extend the clip by 4 seconds at a time, up to four extensions, for a maximum of 21 seconds total.
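The duration rules above reduce to simple arithmetic. As a quick sketch (the function name is ours, not a Midjourney API; the numbers are the ones quoted in this article):

```python
def max_clip_seconds(base: int = 5, extension: int = 4, max_extensions: int = 4) -> int:
    """Longest V1 clip: a 5-second base plus up to four 4-second extensions."""
    return base + extension * max_extensions

print(max_clip_seconds())  # 21
```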


V1 is accessible only via Midjourney’s web interface and Discord server, keeping the company’s familiar workflow rather than launching a standalone application. Access requires a subscription: the entry-level plan starts at $10/month and provides around 3.3 hours of “fast” GPU time (roughly 200 image generations). Video jobs cost about eight times as much as image jobs, which works out to roughly one image’s worth of cost per second of video. Higher-tier plans offer more GPU time and access to a “Relax” mode for queued, slower processing. Midjourney says it will review and potentially adjust video pricing after early feedback, a common approach in nascent AI services as usage patterns emerge.
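To see what those figures imply in practice, here is a rough back-of-the-envelope sketch (our own estimate built only from the numbers quoted above, not official Midjourney pricing; actual costs will vary):

```python
IMAGES_PER_MONTH = 200  # rough image budget of the $10 entry-level plan, per the article

def clips_per_month(clip_seconds: int, cost_per_second_in_images: int = 1) -> int:
    """Estimate how many clips of a given length the monthly budget covers,
    assuming each second of video costs about one image generation."""
    return IMAGES_PER_MONTH // (clip_seconds * cost_per_second_in_images)

print(clips_per_month(5))   # ~40 base-length clips
print(clips_per_month(21))  # ~9 maximum-length clips
```

On these assumptions, the entry plan stretches to dozens of short 5-second experiments but fewer than ten fully extended 21-second clips, which helps explain why Midjourney flags the pricing as subject to review.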

Midjourney’s move comes as part of an intensifying AI video generation race. OpenAI recently debuted Sora, Google has rolled out Veo 3, Adobe’s Firefly includes video features, and startups like Runway have models such as Gen 4. Each platform balances controllability, realism, speed, and cost differently. Midjourney distinguishes itself by targeting its existing user base—artists and creative explorers—retaining a focus on aesthetic exploration over purely commercial B-roll generation. As many competitors emphasize enterprise integration (e.g., advertising, film pre-production), Midjourney’s approach remains centered on experimental creativity, though broader commercial usage may follow once quality and controls improve.

According to founder David Holz, V1 is “a stepping stone” toward models capable of real-time open-world simulations, 3D rendering, and beyond. Transitioning from 2D clips to fully interactive environments poses substantial challenges: generating coherent 3D structures, ensuring temporal consistency over longer durations, and managing computational demands for real-time performance. Midjourney’s roadmap likely involves iterative improvements: enhancing resolution, extending allowed durations, refining motion realism, integrating multi-modal inputs (e.g., text-to-video directly), and eventually supporting 3D outputs. GPU infrastructure and cost-efficiency will be critical; the current subscription model and usage-based pricing must scale to more intensive tasks. Additionally, user feedback will inform model tuning—balancing artistic flexibility with guardrails against problematic content or infringing outputs.

For creative communities on Discord and beyond, V1 adds a new dimension: users can animate favorite artworks or memes, share short clips, and collaborate on animated storytelling. This fosters engagement, community learning, and experimentation with motion design principles. Tutorials and showcase channels will likely emerge rapidly, as happened with image-generation prompts. Educators and content creators may incorporate V1 demos into workshops on AI creativity, highlighting both potential and pitfalls. Meanwhile, limitations in realism and duration may spur hybrid workflows: combining AI-generated clips with traditional editing, compositing, or manual animation tweaks. As AI video tools proliferate, skillsets around prompt engineering, post-processing, and ethical sourcing will become increasingly valuable.

Midjourney’s V1 underscores a broader shift: AI is encroaching on domains once reserved for specialized skills, democratizing motion content creation for non-experts. This can empower individuals and small teams to prototype ideas rapidly, but it also stirs debates about originality, authorship, and value. If anyone can generate short video clips with minimal technical know-how, how will professional animators adapt? Historically, technological leaps (e.g., digital cameras, video editing software) have lowered barriers while creating new opportunities; AI video likely follows a similar trajectory, with initial novelty giving way to standard toolsets integrated into creative pipelines. Yet the novelty phase is exciting: seeing static art come alive in unexpected ways, exploring surreal motions that might be arduous to animate by hand, and envisioning new narrative forms.

