GadgetBond


Adobe launches Firefly mobile app with generative AI tools for iOS and Android

Adobe Firefly now supports partner AI models for video and image generation, all accessible through a new mobile app designed for flexible workflows.

By Shubham Sawarkar, Editor-in-Chief
Jun 17, 2025, 3:46 PM EDT
Adobe Firefly mobile app hero image. (Image: Adobe)

Adobe has taken a significant step in democratizing AI-powered creativity by launching a dedicated Firefly mobile app for iOS and Android devices on June 17, 2025. This move extends its generative AI ecosystem beyond desktop and web interfaces, allowing creators to ideate, generate, and edit content on the go while staying within Adobe’s trusted workflow. The announcement arrives alongside an expansion of Firefly’s partner model integrations—bringing in third-party image and video models from Ideogram, Luma AI, Pika, Runway, Google, OpenAI, and Black Forest Labs—as part of Adobe’s Partner Model Integration Program.

Over the past two years, generative AI has rapidly moved from niche experiments to core features in mainstream creative tools. Adobe Firefly, first teased in 2022 and publicly launched in beta in March 2023, has led this charge within the Adobe ecosystem by offering text-to-image, text-to-video, and other media-generation capabilities trained on commercially safe datasets (i.e., images and videos for which Adobe holds rights or that are in the public domain). As competing platforms (from standalone AI tools to integrations in other suites) proliferate, Adobe has focused on integrating generative AI tightly into its Creative Cloud applications—Photoshop, Illustrator, Premiere Pro, Express, and more—so that generated assets can flow seamlessly from ideation to production. The new Firefly mobile app extends this seamless experience to wherever creators are, reflecting the broader industry trend toward on-demand, device-agnostic creative workflows.

A hallmark of Adobe’s recent Firefly updates has been the Partner Model Integration Program, which invites third-party AI models into the Firefly ecosystem. On June 17, 2025, Adobe announced additions from Ideogram (e.g., Ideogram 3.0), Luma AI (Ray2), Pika (2.2 text-to-video), Runway (Gen-4 Image), and Google’s latest Imagen 4 and Veo 3 models, joining earlier integrations such as OpenAI’s image generation and Black Forest Labs’ Flux variants. These integrations are initially available in Firefly Boards (Adobe’s AI-powered moodboarding and collaborative ideation surface) and will soon be accessible directly in the Firefly mobile and web apps.

By offering multiple aesthetic “personalities” and technical strengths—some models excel at photorealism, others at stylized renderings, others at dynamic video generation—Adobe empowers creators to experiment and iterate more broadly without leaving a single interface. Alexandru Costin, Vice President of Generative AI and Sensei at Adobe, notes that the Firefly app is “the ultimate one-stop shop for creative experimentation—where you can explore different AI models, aesthetics, and media types all in one place.” Having all these models under one sign-in and subscription plan removes friction: there is no need to juggle separate accounts, payment methods, or portal logins when trying out different engines.

Firefly Boards, which entered public beta earlier in 2025, transforms ideation by providing an infinite canvas where teams can explore hundreds of concepts across various media types. The June 2025 update brings advanced video capabilities into Boards: users can now remix uploaded clips or generate new footage using Adobe’s own Firefly Video model as well as partner video models (Google Veo 3, Luma AI’s Ray2, Pika’s text-to-video generator). Teams can make iterative edits to images using conversational text prompts via Black Forest Labs’ Flux.1 Kontext or OpenAI’s image generation, then seamlessly pivot to video generation within the same board.

Additionally, Boards can automatically organize visual elements into a clean, presentation-ready layout with a single click, facilitating quick concept reviews. Integration with Adobe documents means that when a linked asset is updated (e.g., a Photoshop file or Premiere Pro sequence), changes propagate to Boards content in real time. These collaborative and organizational tools reflect Adobe’s push to support not only solo creators but also distributed teams working on campaigns, client pitches, storyboards, and more.

The centerpiece of the announcement is the Firefly mobile app for iOS and Android, available starting June 17, 2025. According to Adobe, the app brings AI-first creativity to creators no matter where they are: users can generate images and videos from text prompts (Text to Image, Text to Video), transform existing images into videos (Image to Video), and apply editing tools like Generative Fill (removing or adding elements) and Generative Expand (extending scene boundaries) directly on their devices. All creations sync automatically with the user’s Creative Cloud account, enabling workflows such as starting a concept sketch or video storyboard on the phone and refining it later on desktop Photoshop or Premiere Pro.

The app retains feature parity in terms of model choice: creators can opt for Adobe’s commercially safe Firefly models or select partner models (Google’s Imagen/Veo, OpenAI’s image generator, Ideogram, Luma AI, Pika, Runway, etc.) based on their creative needs and preferences. While basic Firefly features are included in standard Creative Cloud subscriptions, some premium or partner models may consume additional Firefly credits or require a higher plan tier, according to early reporting. Adobe emphasizes that unified sign-in and billing through Creative Cloud simplifies budget management for individuals and teams.
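To make the credit model concrete, here is a minimal client-side accounting sketch. The model names mirror those mentioned in the article, but the per-generation credit costs are invented placeholders: Adobe has not published the figures referenced here, so treat this purely as an illustration of how tiered, per-model billing could be tracked.

```python
# Hypothetical sketch of per-model Firefly credit accounting.
# Credit costs below are PLACEHOLDER values, not Adobe's real pricing.

CREDIT_COST = {
    "firefly-image": 1,        # Adobe's own image model
    "firefly-video": 10,       # Adobe's own video model
    "partner/ideogram-3.0": 2, # partner models may cost more per generation
    "partner/veo-3": 20,
}

def estimate_credits(jobs: list[tuple[str, int]]) -> int:
    """Sum credits for (model, generation_count) pairs, rejecting unknown models."""
    total = 0
    for model, count in jobs:
        if model not in CREDIT_COST:
            raise ValueError(f"unknown model: {model}")
        total += CREDIT_COST[model] * count
    return total

# Four Firefly images plus one Veo 3 video under the placeholder pricing:
print(estimate_credits([("firefly-image", 4), ("partner/veo-3", 1)]))  # 24
```

A real implementation would pull costs and remaining balance from the user's Creative Cloud account rather than a hard-coded table; the point is only that a single billing surface can meter heterogeneous engines.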

A consistent theme in Adobe’s approach to Firefly is “commercially safe” AI: models are trained only on assets Adobe has rights to (Adobe Stock, public domain, Creative Commons) and partner agreements stipulate that user uploads will not be used to train models further. Adobe reiterates that content produced via any model integrated in Firefly—Adobe’s own or partner’s—will not be ingested for model training, aligning with user expectations around data privacy and IP protection.

Moreover, Adobe automatically attaches Content Credentials to AI-generated outputs, indicating whether the asset was produced by Adobe’s Firefly models or a partner model. This transparency empowers creators and end-users to know when and how AI was involved, which is increasingly important for attribution, ethical considerations, and compliance in commercial or editorial contexts.
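Content Credentials are built on the C2PA standard, which records AI involvement in a signed manifest. The sketch below shows how a consumer might inspect such a manifest once it has been decoded to a dictionary. The field names (`c2pa.actions`, `digitalSourceType`) follow the public C2PA specification, but the exact manifest shape Adobe's Firefly outputs carry is an assumption here, not something confirmed by the article.

```python
# Sketch: detect generative-AI involvement in a C2PA-style manifest that has
# already been parsed into a dict. Structure is illustrative; real manifests
# are signed binary structures read with a C2PA SDK, not raw JSON.

AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def ai_involvement(manifest: dict) -> list[str]:
    """Return the digitalSourceType values that indicate generative AI use."""
    found = []
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue  # only the actions assertion records how content was made
        for action in assertion.get("data", {}).get("actions", []):
            source_type = action.get("digitalSourceType", "")
            if source_type in AI_SOURCE_TYPES:
                found.append(source_type)
    return found

# Example manifest fragment shaped like an AI-generated image's credentials:
manifest = {
    "claim_generator": "Adobe Firefly",
    "assertions": [
        {"label": "c2pa.actions",
         "data": {"actions": [
             {"action": "c2pa.created",
              "digitalSourceType":
                  "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"}]}},
    ],
}

print(bool(ai_involvement(manifest)))  # True
```

In practice a verifier would validate the manifest's cryptographic signature before trusting any of these fields; the snippet only covers the final "was AI involved?" check.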

For solo creators—illustrators, videographers, social media managers—having a powerful generative AI toolkit in a mobile app lowers barriers: brainstorming visuals while commuting, editing assets on location, or experimenting with video ideas on the fly becomes feasible. For agencies and teams, Firefly Boards plus mobile accessibility support distributed workflows: a designer can capture an on-site reference photo, upload it, generate concepts via AI, and share instantly with colleagues for feedback.

From an industry perspective, Adobe’s bet on integrating third-party models signals a shift from walled-garden approaches to an “aggregator” model: Adobe positions Firefly as the hub where creators access the best-in-class AI engines across providers, rather than forcing users to choose between separate platforms. This mirrors broader trends in cloud services where interoperability and seamless user experience trump isolated offerings. However, balancing this openness with commercial safety and clear licensing is critical—Adobe’s emphasis on non-training of user content and Content Credentials aims to address these concerns.

Despite the promise, some creators may worry about costs if premium partner models consume additional credits, or about learning curves when switching between multiple model “personalities.” Adobe will need to provide clear guidance on credit usage, model strengths, and best practices for prompt engineering across engines. Mobile device limitations (processing power, battery life, network connectivity) may also affect user experience, though much of the heavy lifting likely occurs in the cloud. Ensuring smooth performance and responsive UX will be key to adoption.

Additionally, as generative AI features proliferate, concerns around originality and overreliance on AI may arise: Adobe and the creative community will need to foster responsible usage, emphasizing AI as an augmentation rather than replacement of human creativity. The Content Credentials system helps by signaling AI involvement, but broader education around ethics and IP remains essential.

With Firefly mobile and expanded partner integrations, Adobe is doubling down on generative AI as a core component of creative workflows. Future enhancements might include deeper on-device inference for offline use, more advanced collaborative features in Boards (real-time co-editing, integrated feedback loops), or AI-driven asset management and versioning. Adobe’s ongoing partnerships (e.g., potential integrations with emerging AI research labs) and user feedback will shape the next wave of features. As Alexandru Costin and other Adobe leaders have articulated, the goal is to “push ideas further” and “empower creators” by offering flexibility, control, and safety within a unified ecosystem.

For creators already invested in Adobe’s suite, the Firefly mobile app and partner model ecosystem represent a welcome extension of familiar workflows into more fluid, on-the-go contexts. For those exploring generative AI, having a one-stop hub reduces friction and highlights the diverse creative possibilities unlocked by different models. In a landscape where AI capabilities evolve rapidly, Adobe’s strategy of integration, transparency, and user-centric design aims to keep Firefly at the forefront of next-generation creative tools.


Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.