AI tools promised us faster assets. But if you’ve ever tried to build a full campaign off a single prompt, you know how quickly things fall apart: one hero image looks great, the next feels off-brand, and by the time you adapt it for social, web, and product screens, the visual language has drifted into something else entirely.
Figma Weave steps straight into that chaos and quietly replaces it with something that feels more like a production pipeline than a prompt toy. Instead of typing into a single text box and hoping for the best, you lay out your thinking as nodes on a canvas—inputs, models, effects, and outputs all chained together so you can see how every decision leads to a final image, video, or 3D model.
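To make the node-graph idea concrete, here is a minimal sketch of how such a pipeline can be represented and evaluated in code. This is not Weave's actual data model or API; the node kinds, the `run` callback, and the string outputs are placeholders standing in for real media processing.

```typescript
// Hypothetical sketch of a node-graph pipeline -- not Weave's real data
// model, just an illustration of chaining inputs, models, and effects.

type NodeId = string;

interface GraphNode {
  id: NodeId;
  kind: "input" | "model" | "effect" | "output";
  inputs: NodeId[];                    // upstream nodes this node consumes
  run: (upstream: string[]) => string; // stand-in for real media processing
}

// Evaluate nodes in dependency order so every decision feeds the next step.
function evaluate(nodes: GraphNode[]): Map<NodeId, string> {
  const results = new Map<NodeId, string>();
  const byId = new Map<NodeId, GraphNode>();
  for (const n of nodes) byId.set(n.id, n);

  const resolve = (id: NodeId): string => {
    if (results.has(id)) return results.get(id)!;
    const node = byId.get(id)!;
    const out = node.run(node.inputs.map(resolve)); // walk up the chain
    results.set(id, out);
    return out;
  };

  nodes.forEach((n) => resolve(n.id));
  return results;
}

// A tiny Epoch-style chain: reference image -> style model -> final output.
const graph: GraphNode[] = [
  { id: "ref", kind: "input", inputs: [], run: () => "hibiscus reference" },
  { id: "style", kind: "model", inputs: ["ref"], run: ([r]) => `styled(${r})` },
  { id: "hero", kind: "output", inputs: ["style"], run: ([s]) => `export(${s})` },
];

console.log(evaluate(graph).get("hero")); // export(styled(hibiscus reference))
```

The point of the structure is visibility: because each node declares its inputs, you can trace any final asset back through the exact steps that produced it.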
Under the hood, Weave is the evolution of Weavy, the AI-native media creation startup Figma acquired and rebranded in 2025, with the promise of bringing image, video, animation, motion design, and VFX into the same orbit as your design system. In practice, that means the jump from “idea” to “production asset” is no longer a disconnected journey through three different apps and half a dozen exports—it’s one continuous workflow you can tweak, replay, and scale.
To show what that looks like in reality, the Figma team built out five workflows around a fictional brand called Epoch—a contemporary sound and video shop with a visual language rooted in distorted textures, natural materials, and 3D forms. Think hibiscus petals fused with sandstone, plants made of stone, and rocks that feel like they’ve been pulled out of a glitchy sci‑fi title sequence. Those two starting references—just a flower and a rock—turn into an entire brand system: new imagery, adaptive layouts, 3D models, and finally a motion-rich homepage, all without a single photoshoot.
The interesting part isn’t just what you can generate; it’s how these five workflows stack together to quietly rewrite the way design teams think about AI in production.
Workflow 1: Turning two images into a reusable style, not just a one-off prompt
The first workflow starts in a place every brand designer knows: you have a few images you love, and you need “more like this”—same mood, same texture, same lighting, but new compositions and subjects. Traditionally, that’s either a painstaking search for more references or a costly shoot.
In Weave, the team feeds Epoch’s reference images—a hibiscus flower and a rock face—into an Image Describer node. Instead of guessing at what makes them special, Weave breaks each image down into a text description of its visual DNA: color palette, texture (velvety petals vs. layered stone), lighting style, composition, and overall mood. Those descriptions are editable, so art directors can dial the language up or down the way they’d refine a brand guideline.
Once both descriptions are ready, the team blends them into a single new style definition—imagine the organic structure of a flower fused with the striated, mineral feel of carved rock. Crucially, this isn’t just “type a clever prompt and hit generate.” Because every step sits on the node graph, you can literally adjust the influence of each reference: more flower, less rock; more harsh shadows, less saturation; more macro photography, less wide shot.
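As a rough illustration of that "more flower, less rock" dial, the sketch below blends two editable style descriptions with an adjustable weight. The field names and the blending logic are assumptions for illustration only; the article doesn't show how Weave represents these descriptions internally.

```typescript
// Illustrative only: a weighted blend of two editable style descriptions.
// Field names and the blend rules are assumptions, not Weave's format.

interface StyleDescription {
  palette: string;
  texture: string;
  lighting: string;
  mood: string;
}

const flower: StyleDescription = {
  palette: "deep crimson and coral",
  texture: "velvety petals",
  lighting: "soft macro light",
  mood: "organic, delicate",
};

const rock: StyleDescription = {
  palette: "warm sandstone neutrals",
  texture: "layered, striated stone",
  lighting: "harsh directional shadows",
  mood: "monumental, mineral",
};

// aWeight is the dial: 1.0 = all flower, 0.0 = all rock.
function blendStyles(a: StyleDescription, b: StyleDescription, aWeight: number): string {
  const pct = Math.round(aWeight * 100);
  return (
    `Palette: ${pct}% ${a.palette}, ${100 - pct}% ${b.palette}. ` +
    `Texture: ${a.texture} fused with ${b.texture}. ` +
    `Lighting: ${aWeight >= 0.5 ? a.lighting : b.lighting}. ` +
    `Mood: ${a.mood} meets ${b.mood}.`
  );
}

console.log(blendStyles(flower, rock, 0.7)); // more flower, less rock
```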
From there, they run that hybrid style through different image-generation models to see where it holds up best, stress-testing it the way you’d test a logo across print, web, and signage. Out of that comes something more stable than a one-off prompt: a reusable style guide expressed as text, ready to be plugged into any later workflow.
For teams, this is the big mental shift: style becomes a first-class asset, not a lucky accident. You define it once, you keep it in your Weave canvas, and you use it everywhere.
Workflow 2: Scaling that style across subjects, channels, and aspect ratios
Once Epoch’s hybrid style exists, the next problem is the one every product or marketing team hits: how do we turn one look into a complete asset family—mobile hero, desktop banner, social story—without everything drifting off-brand?
To do that, the team pipes their favorite style outputs into an Any LLM node, which lets them use a text model as a kind of style editor-in-chief. They ask it to produce a master style description—a tighter, more universal specification they can apply to new subjects.
Epoch’s visual language is grounded in nature, so they apply that master style to a begonia plant: the same fused flower–rock texture now wraps a completely different organic form. The result is six variations of plants that all look like they belong in the same universe—same lighting, same material logic—but with enough diversity to work across product cards, playlists, or editorial slots.
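Conceptually, the "master style" step looks something like the sketch below: several winning descriptions go into a text model, one tightened spec comes out, and new subjects are rendered against that fixed spec. The `callTextModel` function is a placeholder, not a real Weave or vendor API.

```typescript
// Sketch of the "master style" idea: compress several winning descriptions
// into one reusable spec, then apply it to a new subject.
// `callTextModel` is a placeholder, not a real Weave or vendor API.

async function callTextModel(prompt: string): Promise<string> {
  // Stand-in for whatever LLM the Any LLM node is pointed at.
  return `MASTER STYLE derived from: ${prompt.slice(0, 60)}...`;
}

async function buildMasterStyle(favorites: string[]): Promise<string> {
  const prompt =
    "Merge these style descriptions into one tight, universal spec " +
    "covering palette, texture, lighting, and mood:\n" +
    favorites.join("\n");
  return callTextModel(prompt);
}

function applyToSubject(masterStyle: string, subject: string): string {
  // The master style stays fixed; only the subject changes.
  return `${subject}, rendered as: ${masterStyle}`;
}

buildMasterStyle(["flower-rock blend v3", "flower-rock blend v5"]).then((master) => {
  console.log(applyToSubject(master, "a begonia plant"));
});
```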
The clever part happens next. From a single chosen favorite, Weave automatically generates three output formats tailored to real surfaces a product team cares about:
- 1:1 for mobile UI or app cards
- 967×420 for desktop layouts and web hero slots
- 9:16 for social stories and vertical video covers
Instead of manually cropping and praying a composition still works, those outputs are generated as intentional frames, ready to drop straight into Figma Design. The designer’s job shifts from endless resizing to picking the most compelling version and refining details.
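A minimal sketch of that fan-out is below: each target from the list above becomes its own generation request rather than a crop of the master image. The pixel sizes for the 1:1 and 9:16 frames are assumptions; only 967×420 comes from the article.

```typescript
// Rough sketch of "thinking in channels": one master frame fanned out into
// the three formats named above. Sizes other than 967x420 are assumptions.

interface Frame {
  channel: string;
  width: number;
  height: number;
}

const targets: Frame[] = [
  { channel: "mobile card (1:1)", width: 1080, height: 1080 },
  { channel: "desktop hero", width: 967, height: 420 },
  { channel: "social story (9:16)", width: 1080, height: 1920 },
];

// Each target is its own generation request, so composition can be re-posed
// per channel instead of trimmed down from one crop.
function fanOut(masterStyle: string, subject: string): string[] {
  return targets.map(
    (t) => `${subject} | ${masterStyle} | ${t.width}x${t.height} (${t.channel})`
  );
}

fanOut("Epoch master style", "begonia hero").forEach((req) => console.log(req));
```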
In other words, Weave doesn’t just create assets—it thinks in channels the way a modern design system does.
Workflow 3: Turning distortion and effects into a controlled exploration, not random filters
Epoch’s brand leans heavily on displacement and distortion, the kind of visual language that can easily turn from “artful” to “overcooked” if you’re experimenting blind. In most tools, you try effects one at a time, stack layers, and hope you remember which combination you liked. Weave flips that by making “trying everything” the fastest path, not the slowest.
The team takes their now-iconic flower–rock–plant image and passes it through a chain of nodes representing different distortion styles, using Epoch’s previous references as guides. The result is eight distinct distorted outcomes generated in a single pass.
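The single-pass fan-out looks roughly like this in code: one source image mapped across a set of distortion branches at once, rather than tried one at a time. The preset names here are invented examples; the article doesn't list which distortion styles the team used.

```typescript
// Illustration of the single-pass fan-out: one source image mapped through
// eight distortion branches at once. Preset names are invented examples.

const source = "epoch-flower-rock-plant.png";

const distortionPresets = [
  "wave displacement",
  "glass refraction",
  "scanline glitch",
  "liquid smear",
  "chromatic split",
  "grain erosion",
  "ripple fold",
  "pixel stretch",
];

// Every preset becomes its own branch on the canvas, generated together
// rather than stacked and re-tried by hand.
const variants = distortionPresets.map((preset, i) => ({
  id: `distort-${i + 1}`,
  request: `${source} -> ${preset}, guided by Epoch reference textures`,
}));

console.log(`${variants.length} variants queued in one pass`);
```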
Because everything lands on the same canvas, they can instantly strip backgrounds, place each variant on brand colors, and see the effect in context—does this one work better on deep charcoal? Does that one read clearer on soft gray? Which distortion feels like “Epoch” when it sits beside the app UI?
What’s powerful here is the side‑by‑side decision‑making. You’re not choosing based on memory or a messy Photoshop history; you’re looking at all options at once and picking the one that best fits the story you’re telling.
This is where Weave’s node-based approach shows its editorial side: you’re no longer “prompting for a vibe,” you’re directing a set of controlled experiments.
Workflow 4: From single image to rotation-ready 3D object
Static imagery will take you far, but as soon as you want more dynamic compositions—or a product hero you can reframe endlessly—you hit the limits of flat assets. Epoch’s world is full of rocks, plants, and tactile objects, which naturally raises the question: what if these weren’t just pictures, but 3D models you could spin, light, and recompose at will?
In the fourth workflow, the team leans on Rodin 3D V2, one of the 3D models supported inside the Weave ecosystem. They start from a set of natural references—a leaf, a cactus, a cluster of rocks—and generate a new white rock that fits Epoch’s visual universe.
Instead of asking AI to “imagine different angles,” they take a more structured route: they generate front, back, left, and right views of that rock as separate images, then feed those into Rodin 3D V2 to reconstruct a coherent 3D model.
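The shape of that input is worth pausing on: four deliberately generated views, bundled and handed to an image-to-3D model. The sketch below models only that bundle; `reconstructModel` is a stand-in, not Rodin 3D V2's actual interface, which the article doesn't show.

```typescript
// A hedged sketch of the multi-view step: four generated views bundled as
// input to an image-to-3D model. `reconstructModel` is a stand-in, not
// Rodin 3D V2's real interface.

type ViewAngle = "front" | "back" | "left" | "right";

interface MultiViewInput {
  subject: string;
  views: Record<ViewAngle, string>; // one generated image per angle
}

const whiteRock: MultiViewInput = {
  subject: "white rock in the Epoch style",
  views: {
    front: "rock_front.png",
    back: "rock_back.png",
    left: "rock_left.png",
    right: "rock_right.png",
  },
};

// Placeholder: a real reconstruction call would return mesh and textures.
function reconstructModel(input: MultiViewInput): string {
  const count = Object.keys(input.views).length;
  return `${input.subject}: coherent 3D model reconstructed from ${count} views`;
}

console.log(reconstructModel(whiteRock));
```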
Once the model exists, the creative freedom kicks in. They can:
- Rotate the rock to any angle that best suits a homepage hero.
- Experiment with compositions without worrying about “the shot we captured on set.”
- Export stills for static layouts or pass the model into a later video workflow.
The upshot is that composition drives the shot, not the other way around. No reshoots. No “we can’t get that angle because the lighting rig is fixed.” Just a 3D object ready to be art‑directed like any other digital asset.
For design teams used to treating 3D as a specialized, siloed pipeline, this is a big shift: 3D becomes another node in the same canvas, manipulated with the same logic as images and video.
Workflow 5: Compositing everything into motion, then handing it back to design
The final workflow asks the natural follow-up: once you have on‑brand imagery, a hero 3D object, and a distortion language, how do you turn that into a living interface—something that moves, reacts, and feels designed rather than thrown together?
Here, the Weave canvas becomes a mini production studio. The team starts with Epoch’s homepage layout from the previous workflow and introduces a simple animation reference that defines how a distorted image at the bottom of the page should move.
The 3D rock is driven by a combination of a 3D node and a Kling Element node, a setup that gives the system a precise understanding of the object’s shape and angles. That allows the animation to treat the rock like a real subject—rotating, drifting, or reacting in space—rather than just sliding a flat texture around.
Alongside it, the distorted texture at the bottom of the page is controlled by a motion mask, shaping its movement so it feels like a cohesive part of the layout rather than an overlay floating on top.
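Put together, the motion setup amounts to two coordinated layers: a 3D subject with its own rotation and a masked texture whose movement is confined to its slot in the layout. The sketch below models that arrangement; the layer names, file names, and keyframe properties are illustrative assumptions, not Weave's or Kling's API.

```typescript
// Conceptual sketch of the final composite: the 3D rock gets a subject-aware
// rotation while the distorted texture is constrained by a motion mask.
// Names and properties are illustrative, not Weave's or Kling's API.

interface MotionLayer {
  name: string;
  source: string;
  mask?: string; // region the movement is confined to
  keyframes: { t: number; rotationY?: number; offsetX?: number }[];
}

const homepageMotion: MotionLayer[] = [
  {
    name: "hero-rock",
    source: "white_rock.glb",
    keyframes: [
      { t: 0, rotationY: 0 },
      { t: 3, rotationY: 35 }, // slow drift, treated as a real 3D subject
    ],
  },
  {
    name: "footer-distortion",
    source: "distorted_texture.png",
    mask: "footer_motion_mask.png", // keeps movement inside the layout slot
    keyframes: [
      { t: 0, offsetX: 0 },
      { t: 3, offsetX: 24 },
    ],
  },
];

console.log(`${homepageMotion.length} motion layers ready for export`);
```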
Once the motion feels right, the final video is exported from Weave and dropped back into Figma, ready for handoff to developers. No round‑tripping through a separate motion tool, no “final final FINAL_v7.mp4” buried in a drive. The motion asset lives in the same broader workflow that created the stills, the style, and the 3D model.
At this point, those two original references—a flower and a rock—have become a full brand system: style guide, image set, responsive layout imagery, hero 3D object, and animated homepage. All inside a single ecosystem.
Why this matters for actual design teams, not just AI enthusiasts
Zoom out from the specifics of Epoch, and Figma Weave is clearly aiming at something larger than “yet another AI image generator.” It’s trying to build a new production layer for teams that already live in Figma.
A few things stand out:
- Node-based workflows mirror real production thinking. Art direction, experimentation, approvals, and final production all become visible steps—not opaque magic tied to whoever happened to click “generate.”
- Consistency becomes a system, not a superstition. Once you define a style, you can replay it across models, subjects, and channels, instead of hoping a prompt “feels the same” tomorrow.
- Different media types live in one canvas. Images, video, 3D, and even audio live side by side, with clear inputs and outputs, tied to the same brand logic.
- Handoff back to design is direct. Assets are designed to flow smoothly into Figma Design, with deeper integration promised later this year, so you’re not stuck in export hell.
Figma has been open about that roadmap: Weave exists today as its own environment, but the long‑term plan is to fold AI-native media generation into the core Figma experience, letting designers jump from canvas to canvas without losing context. In that sense, these five workflows are less a how‑to and more a preview of a future production stack, where AI is treated as another creative department rather than a gimmick bolted on at the end.
If you want to try any of this yourself, Figma has published 20+ workflow templates in the Figma Community and is actively showcasing Weave use cases through tutorials, livestreams, and a dedicated knowledge center—all aimed at helping teams move from “fun experiments” to repeatable, shareable pipelines.
The throughline across all five workflows is simple but easy to miss: AI isn’t the star—workflow is. Figma Weave just happens to be the place where that workflow finally gets a canvas of its own.
