GadgetBond


Figma partners with Anthropic to bridge code and design

Instead of screenshotting AI‑built interfaces, you can now capture live screens and paste them straight into Figma, where layers, hierarchy, and layouts are ready for design‑grade edits.

By Shubham Sawarkar, Editor-in-Chief
Feb 17, 2026, 1:03 PM EST

Image: Figma

Figma is tightening its embrace of AI—this time by meeting developers where they actually work: in code. In a new partnership with Anthropic, the company behind Claude, Figma is rolling out a “Claude Code to Figma” (also described as “Code to Canvas”) flow that turns live, AI-generated interfaces into fully editable Figma designs with a couple of clicks. For teams already experimenting with Claude Code as an AI coding agent in their terminal or IDE, this effectively closes a workflow loop that used to be held together with screenshots, copy‑paste, and a lot of manual re‑creation.

At the center of this is a simple idea: many product teams no longer start in a design file. A developer or product engineer might open Claude Code, describe a sign‑up flow or dashboard, and get a working UI scaffolded by AI in their local environment or staging build. Until now, moving that interface into a shared design space meant either painstakingly rebuilding it screen by screen in Figma or trying to iterate directly in code while designers watched from the sidelines. The new Claude Code to Figma capability flips that dynamic: you can capture a real, running screen from your browser—production, staging, or localhost—and send it straight into a Figma file as an editable frame.

The workflow is intentionally lightweight. From a Claude Code‑powered session, you capture UI pages or states; those captures can be copied to your clipboard and pasted into any Figma design file, where they appear as frames, just as if a designer had drawn them by hand. Layout, components, and visual hierarchy come across as editable layers rather than flattened images, so teams can rearrange sections, tweak visual language, or experiment with entirely different flows without ever touching the underlying code. For longer journeys—say, a checkout funnel or onboarding—multiple screens can be captured in one session, preserving their sequence so that flow reviews still make sense on the canvas.
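To make the distinction concrete, here is a toy model of what "editable layers rather than flattened images" implies: each captured screen carries a full tree of named elements, and a session preserves screen order. The names (`Layer`, `Frame`, `Capture`) are hypothetical illustrations, not Figma's or Anthropic's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One editable element; containers nest children, preserving hierarchy."""
    name: str
    kind: str                      # e.g. "text", "button", "container"
    children: list["Layer"] = field(default_factory=list)

@dataclass
class Frame:
    """A captured screen: a full layer tree, not a single flat bitmap."""
    title: str
    root: Layer

@dataclass
class Capture:
    """A session of captures; list order preserves the user flow."""
    frames: list[Frame] = field(default_factory=list)

    def add(self, frame: Frame) -> None:
        self.frames.append(frame)

# A two-screen onboarding flow captured in sequence:
signup = Frame("Sign up", Layer("page", "container", [
    Layer("email", "text"),
    Layer("submit", "button"),
]))
welcome = Frame("Welcome", Layer("page", "container", [
    Layer("headline", "text"),
]))

session = Capture()
session.add(signup)
session.add(welcome)

titles = [f.title for f in session.frames]
```

Because every element survives as a named node, a designer can retitle the submit button or reorder the two screens without ever regenerating the code that produced them.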

This is where the partnership earns its keep: AI makes it trivial to get “something” on screen, but that first version is rarely the right one. Claude Code is good at quickly assembling UI from a description—hooking up forms, states, and basic interaction logic in a way that compiles. Figma, by contrast, is where teams argue about taste, usability, and product strategy. Bringing AI‑generated UIs into Figma reframes the conversation from “can we build this?” to “is this actually the best experience?”—and does it at a moment when changing your mind is still relatively cheap.

Internally, Figma is positioning this as part of a larger move away from rigid, linear pipelines and toward more fluid, “round‑trip” workflows between design and code. On one side, there’s Figma Make, which lets people turn natural‑language prompts directly into working prototypes, then push those previews onto the canvas via features like Copy design. On the other, there’s this new Claude Code to Figma path, which respects the reality that a lot of experimentation happens in code first, especially now that AI tools can scaffold frontends at speed. Different starting points; same end game: a shared, editable artifact in Figma where designers, PMs, and engineers can converge.

Around this sits the Figma MCP (Model Context Protocol) server, which has quietly become the connective tissue between design tools and AI agents like Claude. MCP is an open standard for letting AI assistants talk to external tools and data sources, and Figma’s implementation exposes design files, components, and tokens in a way that AI models can understand. Initially, that emphasis was very much “design‑to‑code”—use Claude Code plus the Figma MCP server to read your design system and spit out production‑ready UI code that actually matches your mockups. With Claude Code to Figma, Figma is now making that loop bidirectional: agents can generate interfaces from design context, and those interfaces can be captured back into design space for further refinement.
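Under the hood, MCP is built on JSON-RPC 2.0: an agent discovers a server's tools via `tools/list` and invokes one with a `tools/call` request. The sketch below shows that message shape; the tool name `get_design_context` and its arguments are hypothetical placeholders, not the Figma MCP server's actual tool catalog.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0, sent over stdio or HTTP)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical example: an agent asking a design-aware server for file context.
msg = mcp_tool_call(1, "get_design_context", {"file_key": "abc123", "node_id": "42:7"})
decoded = json.loads(msg)
```

The point of the standard is exactly this uniformity: whether the server exposes design files, a database, or a browser, the agent speaks the same `tools/call` envelope, which is what lets Claude treat Figma as just another tool at its disposal.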

For teams that already live in Dev Mode or have wired up the MCP server, the promise is a genuine round trip rather than a one‑way handoff. You might start with a high‑level product conversation in Claude, generate a first pass of UI in code, capture that into Figma, run a structured design critique on the canvas, then send updated frames back into the coding workflow using the MCP server and Claude’s design‑aware prompts. It’s closer to an ongoing loop than the traditional “design, then hand off specs to engineering” model that design tools have historically supported.

The practical upside is pretty obvious if you’ve ever tried to iterate on an AI‑generated UI. Today, developers using Claude Code or other AI coding assistants can get realistic, data‑aware frontends running quickly—but small UX changes are still bottlenecked by code edits, rebuilds, and redeploys. With Claude Code to Figma, design teams no longer need to file tickets for every tweak they want to explore. They can duplicate frames, try alternate layouts, explore different copy, or re‑order steps visually, then converge on one direction before anyone spends time rewriting the implementation. Even “failed” explorations remain valuable, because they’re persisted on the canvas as options to revisit later rather than disappearing in Git history.

Strategically, this move also says a lot about how Figma sees AI reshaping the design stack. Rather than focusing solely on generative tools inside its own UI, Figma is acknowledging a fragmented reality: people are using Claude in the browser, Claude Code in the terminal, specialized editors like Cursor or VS Code, and a growing ecosystem of MCP‑compatible tools. By plugging into that world instead of trying to replace it, Figma positions itself as the central collaboration surface where all those AI‑driven explorations eventually land. It’s essentially betting that “design context” is the scarce resource AI will need most—and that Figma is the best place to maintain it.

Anthropic, for its part, gets a showcase use case for Claude Code as more than just a smart autocomplete. The terminal‑based agent already understands entire codebases, navigates repositories, and can orchestrate multi‑file edits; adding a clean bridge into design tools makes it more compelling for teams that care about crafting polished frontends, not just shipping backend logic. With Claude now distributed via platforms like Amazon Bedrock and used heavily in enterprise settings, tying into Figma—arguably the default interface design tool for modern SaaS—strengthens Anthropic’s story around “AI that collaborates across the whole product lifecycle.”

If you zoom out, this partnership lands at a moment when both design and development are being pulled apart and reassembled around AI agents. Agentic coding tools like Claude Code, Cursor, and others are making it normal to “ask” for features rather than write every line by hand, while AI‑driven design tools are turning prompts into prototypes in seconds. The weak link has been the glue between them: design files that don’t reflect reality, frontends that drift away from shared UX intent, and a constant back‑and‑forth over edge cases. By letting AI‑generated code flow into design, and AI agents consume design context through the MCP server, Figma and Anthropic are trying to make that glue a little less brittle.

Will it instantly fix every handoff problem? Of course not. Production teams will still have to worry about code quality, performance, accessibility, and design systems drift. But it does shift the default from “design and code live in parallel universes” to “they’re two views on the same evolving artifact.” In a world where AI is already generating more UI than humans could ever manually keep in sync, that’s a pretty meaningful step forward.



Topics: Claude AI, Claude Code, Figma

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.