
GadgetBond

Adobe · AI · Android · Apps · Creators

Adobe launches Firefly mobile app with generative AI tools for iOS and Android

Adobe Firefly now supports partner AI models for video and image generation, all accessible through a new mobile app designed for flexible workflows.

By Shubham Sawarkar, Editor-in-Chief
Jun 17, 2025, 3:46 PM EDT
Adobe Firefly mobile app hero image. Image: Adobe

Adobe has taken a significant step in democratizing AI-powered creativity by launching a dedicated Firefly mobile app for iOS and Android devices on June 17, 2025. This move extends its generative AI ecosystem beyond desktop and web interfaces, allowing creators to ideate, generate, and edit content on the go while staying within Adobe’s trusted workflow. The announcement arrives alongside an expansion of Firefly’s partner model integrations—bringing in third-party image and video models from Ideogram, Luma AI, Pika, Runway, Google, OpenAI, and Black Forest Labs—as part of Adobe’s Partner Model Integration Program.

Over the past two years, generative AI has rapidly moved from niche experiments to core features in mainstream creative tools. Adobe Firefly, first teased in 2022 and publicly launched in beta in March 2023, has led this charge within the Adobe ecosystem by offering text-to-image, text-to-video, and other media-generation capabilities trained on commercially safe datasets (i.e., images and videos for which Adobe holds rights or that are in the public domain). As competing platforms (from standalone AI tools to integrations in other suites) proliferate, Adobe has focused on integrating generative AI tightly into its Creative Cloud applications—Photoshop, Illustrator, Premiere Pro, Express, and more—so that generated assets can flow seamlessly from ideation to production. The new Firefly mobile app extends this seamless experience to wherever creators are, reflecting the broader industry trend toward on-demand, device-agnostic creative workflows.

A hallmark of Adobe’s recent Firefly updates has been the Partner Model Integration Program, which invites third-party AI models into the Firefly ecosystem. On June 17, 2025, Adobe announced additions from Ideogram (e.g., Ideogram 3.0), Luma AI (Ray2), Pika (2.2 text-to-video), Runway (Gen-4 Image), and Google’s latest Imagen 4 and Veo 3 models, joining earlier integrations such as OpenAI’s image generation and Black Forest Labs’ Flux variants. These integrations are initially available in Firefly Boards (Adobe’s AI-powered moodboarding and collaborative ideation surface) and will soon be accessible directly in the Firefly mobile and web apps.

By offering multiple aesthetic “personalities” and technical strengths—some models excel at photorealism, others at stylized renderings, others at dynamic video generation—Adobe empowers creators to experiment and iterate more broadly without leaving a single interface. Alexandru Costin, Vice President of Generative AI and Sensei at Adobe, notes that the Firefly app is “the ultimate one-stop shop for creative experimentation—where you can explore different AI models, aesthetics, and media types all in one place.” Having all these models under one sign-in and subscription plan removes friction: there is no need to juggle separate accounts, payment methods, or portal logins when trying out different engines.

Firefly Boards, which entered public beta earlier in 2025, transforms ideation by providing an infinite canvas where teams can explore hundreds of concepts across various media types. The June 2025 update brings advanced video capabilities into Boards: users can now remix uploaded clips or generate new footage using Adobe’s own Firefly Video model as well as partner video models (Google Veo 3, Luma AI’s Ray2, Pika’s text-to-video generator). Teams can make iterative edits to images using conversational text prompts via Black Forest Labs’ Flux.1 Kontext or OpenAI’s image generation, then seamlessly pivot to video generation within the same board.

Additionally, Boards can automatically organize visual elements into a clean, presentation-ready layout with a single click, facilitating quick concept reviews. Integration with Adobe documents means that when a linked asset is updated (e.g., a Photoshop file or Premiere Pro sequence), changes propagate to Boards content in real time. These collaborative and organizational tools reflect Adobe’s push to support not only solo creators but also distributed teams working on campaigns, client pitches, storyboards, and more.

The centerpiece of the announcement is the Firefly mobile app for iOS and Android, available starting June 17, 2025. According to Adobe, the app brings AI-first creativity to creators no matter where they are: users can generate images and videos from text prompts (Text to Image, Text to Video), transform existing images into videos (Image to Video), and apply editing tools like Generative Fill (removing or adding elements) and Generative Expand (extending scene boundaries) directly on their devices. All creations sync automatically with the user’s Creative Cloud account, enabling workflows such as starting a concept sketch or video storyboard on the phone and refining it later on desktop Photoshop or Premiere Pro.
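Adobe has not documented the app's internals, but generation services of this kind are typically fronted by a REST API. As a rough illustration of what a text-to-image request might look like, here is a minimal sketch; the payload fields, header, and endpoint below are hypothetical, not Adobe's published API:

```python
# Hypothetical sketch of a text-to-image generation request.
# Field names, headers, and the endpoint are illustrative only --
# they are NOT Adobe's documented Firefly API.

def build_generation_request(prompt: str, model: str = "firefly-image",
                             width: int = 1024, height: int = 1024) -> dict:
    """Assemble the JSON payload a client might POST to a generation endpoint."""
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    return {
        "prompt": prompt,
        "model": model,  # a partner model id could be selected here instead
        "size": {"width": width, "height": height},
        "contentClass": "photo",  # hypothetical style hint
    }

payload = build_generation_request("a lighthouse at dusk, watercolor style")
# A real client would then POST `payload` with an OAuth bearer token, e.g.:
# requests.post("https://<generation-endpoint>", json=payload,
#               headers={"Authorization": f"Bearer {token}"})
```

The same payload shape would cover the app's Text to Image path; Image to Video and Generative Fill would add an input-asset reference, which is omitted here for brevity.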

The app offers the same model choice as Firefly on the web: creators can opt for Adobe’s commercially safe Firefly models or select partner models (Google’s Imagen/Veo, OpenAI’s image generator, Ideogram, Luma AI, Pika, Runway, etc.) to suit their creative needs. While basic Firefly features are included in standard Creative Cloud subscriptions, early reporting notes that some premium or partner models may consume extra Firefly credits or require a higher plan tier. Adobe emphasizes that unified sign-in and billing through Creative Cloud simplifies budget management for individuals and teams.

A consistent theme in Adobe’s approach to Firefly is “commercially safe” AI: models are trained only on assets Adobe has rights to (Adobe Stock, public domain, Creative Commons) and partner agreements stipulate that user uploads will not be used to train models further. Adobe reiterates that content produced via any model integrated in Firefly—Adobe’s own or partner’s—will not be ingested for model training, aligning with user expectations around data privacy and IP protection.

Moreover, Adobe automatically attaches Content Credentials to AI-generated outputs, indicating whether the asset was produced by Adobe’s Firefly models or a partner model. This transparency empowers creators and end-users to know when and how AI was involved, which is increasingly important for attribution, ethical considerations, and compliance in commercial or editorial contexts.
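Content Credentials are implemented as C2PA manifests embedded in the file (in JUMBF boxes). Full verification requires the C2PA SDK or the open-source `c2patool` CLI, but a quick heuristic check for the presence of a manifest can be sketched as follows; note this only looks for the characteristic marker bytes and does not validate signatures:

```python
# Rough heuristic: does this byte stream appear to carry an embedded
# C2PA manifest? Manifests live in JUMBF boxes whose signatures include
# the "jumb" box type and the "c2pa" manifest-store label. This does NOT
# validate the manifest or its signatures -- use the C2PA SDK or
# c2patool for real verification.

def looks_like_c2pa(data: bytes) -> bool:
    """Return True if the data contains C2PA/JUMBF marker signatures."""
    return b"jumb" in data and b"c2pa" in data

# Example with synthetic bytes standing in for file contents:
with_manifest = b"\xff\xd8...jumb....c2pa...manifest..."
plain = b"\xff\xd8...ordinary jpeg data..."
print(looks_like_c2pa(with_manifest))  # True
print(looks_like_c2pa(plain))          # False
```

In practice, a viewer or asset pipeline would hand the file to a C2PA-aware tool to read who generated the asset and with which model, which is exactly the provenance signal Content Credentials carry.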

For solo creators—illustrators, videographers, social media managers—having a powerful generative AI toolkit in a mobile app lowers barriers: brainstorming visuals while commuting, editing assets on location, or experimenting with video ideas on the fly becomes feasible. For agencies and teams, Firefly Boards plus mobile accessibility support distributed workflows: a designer can capture an on-site reference photo, upload it, generate concepts via AI, and share instantly with colleagues for feedback.

From an industry perspective, Adobe’s bet on integrating third-party models signals a shift from walled-garden approaches to an “aggregator” model: Adobe positions Firefly as the hub where creators access the best-in-class AI engines across providers, rather than forcing users to choose between separate platforms. This mirrors broader trends in cloud services where interoperability and seamless user experience trump isolated offerings. However, balancing this openness with commercial safety and clear licensing is critical—Adobe’s emphasis on non-training of user content and Content Credentials aims to address these concerns.

Despite the promise, some creators may worry about costs if premium partner models consume additional credits, or about learning curves when switching between multiple model “personalities.” Adobe will need to provide clear guidance on credit usage, model strengths, and best practices for prompt engineering across engines. Mobile device limitations (processing power, battery life, network connectivity) may also affect user experience, though much of the heavy lifting likely occurs in the cloud. Ensuring smooth performance and responsive UX will be key to adoption.

Additionally, as generative AI features proliferate, concerns around originality and overreliance on AI may arise: Adobe and the creative community will need to foster responsible usage, emphasizing AI as an augmentation rather than replacement of human creativity. The Content Credentials system helps by signaling AI involvement, but broader education around ethics and IP remains essential.

With Firefly mobile and expanded partner integrations, Adobe is doubling down on generative AI as a core component of creative workflows. Future enhancements might include deeper on-device inference for offline use, more advanced collaborative features in Boards (real-time co-editing, integrated feedback loops), or AI-driven asset management and versioning. Adobe’s ongoing partnerships (e.g., potential integrations with emerging AI research labs) and user feedback will shape the next wave of features. As Alexandru Costin and other Adobe leaders have articulated, the goal is to “push ideas further” and “empower creators” by offering flexibility, control, and safety within a unified ecosystem.

For creators already invested in Adobe’s suite, the Firefly mobile app and partner model ecosystem represent a welcome extension of familiar workflows into more fluid, on-the-go contexts. For those exploring generative AI, having a one-stop hub reduces friction and highlights the diverse creative possibilities unlocked by different models. In a landscape where AI capabilities evolve rapidly, Adobe’s strategy of integration, transparency, and user-centric design aims to keep Firefly at the forefront of next-generation creative tools.



Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved.