GadgetBond


Figma expands MCP server to let AI agents access design code directly

With the latest MCP update, Figma allows AI models to read the actual code behind Make prototypes instead of relying only on visuals.

By Shubham Sawarkar, Editor-in-Chief
Sep 24, 2025, 3:46 AM EDT
Illustration showing a UI component called Card connected to code with import statements and a visual preview. Image: Figma

Figma’s latest move isn’t another flashy feature buried in a menu — it’s plumbing. This week, the company widened the reach of its Model Context Protocol (MCP) server so that AI coding agents can stop making educated guesses from screenshots and instead read the actual code behind a prototype built in Figma Make. That shift — from “what the app looks like” to “how the app is built” — is a small change in wording but a big one for anyone who’s ever watched an LLM try to reconstruct a design from a flat image.

What changed

Previously, tools trying to turn a Figma design into working UI had to infer structure and behavior from rendered artboards or screenshots. The MCP server acts as a translator between Figma files and external tools: it exposes structured design data and, now, the code that Figma Make generates from prompts. In short, AI agents that speak MCP can ask Figma, “Show me the code that makes this button behave like this,” instead of guessing from pixels. That code is indexed by the MCP server, so clients request and receive only the pieces they need.
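MCP messages are JSON-RPC 2.0 under the hood, so the request an agent sends has a predictable shape. Here is a minimal client-side sketch; the tool name `get_code` and the `nodeId` argument are illustrative assumptions, not Figma’s documented schema:

```python
import json

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request. MCP transports carry
    JSON-RPC 2.0 messages, so every tool invocation has this shape."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical example: ask a Figma MCP server for the code behind one node.
# "get_code" and "nodeId" are placeholder names for illustration only.
payload = build_tool_call("get_code", {"nodeId": "123:456"})
print(payload)
```

The point of the sketch is the division of labor: the client names a tool and its arguments, and the server decides what indexed code to return, which is why agents no longer need the rendered pixels at all.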

Kris Rasmussen, Figma’s chief technologist, summed it up: “By using a Figma Make file via an MCP client, AI models can see the underlying code instead of a rendered prototype or image.” It’s the company’s way of saying: don’t guess the construction — look at the blueprint.

Who can use it today?

Figma says the Make → MCP path is available to a handful of tools and editors right away: Anthropic, Cursor, Windsurf and VS Code are listed as supported clients. That means you don’t need the desktop Figma app to let an AI assistant inspect your Make files — code editors and browser-based agents can now query the MCP server remotely. Figma also plans to open the door wider later so third-party MCP servers can plug into Make.
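In practice, “supported client” mostly means pointing an editor’s MCP configuration at Figma’s server. As a sketch only, using VS Code’s MCP config format: the file location, server name, and URL below are assumptions for illustration, not Figma’s documented endpoint.

```json
{
  "servers": {
    "figma": {
      "type": "http",
      "url": "https://mcp.figma.com/mcp"
    }
  }
}
```

With a config like this in a file such as `.vscode/mcp.json`, an agent running in the editor could list and call the server’s tools remotely, which is what makes the no-desktop-app workflow possible.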

Community projects and open-source MCP implementations already exist — you’ll find adapters on GitHub that show how editor agents can be wired into Figma’s MCP story — which suggests this won’t be a single-vendor ecosystem for long.

Why this matters (and why it might feel like magic)

If you’ve ever used a prompt-to-app tool, you know the friction: the model describes a layout, the design team polishes it, then an engineer ports it to code — often reworking visual decisions to fit a codebase’s patterns. With MCP indexing Make’s code, an AI agent can generate (or regenerate) components that match both the visual design and the implementation details (naming, variables, layout constraints) that your app relies on.

For designers, this could mean faster prototypes that are also more faithful to production. For engineers, it can reduce the tedious translation work. For product teams, it could shorten the loop between idea and working demo. That’s the promise Figma is selling: tighter design-to-code continuity where AI is a collaborator that references the source of truth rather than guessing from screenshots.

New features that tag along

Figma also flagged two concrete features that ride alongside the MCP expansion:

  • Design Snapshot — converts Figma Make snapshots into editable layers inside Figma Design, meaning a snapshot of a generated app becomes material you can edit directly. That feature was slated to land this week, per Figma’s update.
  • In-canvas AI editing — a tool that would let users manipulate designs with AI prompts without leaving the Design canvas is in testing. That’s the more visible, designer-facing side of the updates — little prompts that tweak a component, right where you’re already working.

A few caveats (because plumbing needs valves)

This is powerful, but not risk-free. Exposing design and code via an API-style protocol raises obvious questions:

  • Access control & privacy. You don’t want arbitrary models or cloud agents scraping internal product code or design tokens. Figma’s docs and MCP guides emphasize that integrations must be explicitly configured and that connections run through supported clients. Even so, teams should treat MCP endpoints like any other sensitive dev resource and enforce permissions.
  • Dependence on generated code. Figma Make creates code that’s useful for prototypes and iteration; whether that code is production-grade or matches an org’s architecture is still design- and team-dependent. The MCP server makes it easier for agents to use that code, but engineers still need to validate and integrate it thoughtfully.

What this does to workflows

Think of three plausible short-term workflows:

  1. Designer + AI assistant in the editor. A designer prompts Make to generate an app screen, snapshots it into editable layers, and then asks an AI in VS Code to scaffold a React component that matches naming conventions and existing style tokens. Result: fewer handoffs.
  2. Agent-led bug fixes. An AI agent that understands both your Figma design and your codebase could propose a fix when a component visually drifts from the spec, and even suggest the code change needed to bring the implementation back in line.
  3. Prototyping at speed. Product teams can iterate on feature ideas by generating live prototypes via Make and letting MCP-backed agents convert them into working demos that engineers can refine.

The bigger picture

Figma has been leaning into AI for a while — from asset search to prompt-driven features — but this feels less like feature-creep and more like infrastructure. Making design context available “everywhere you build” reframes Figma: not just a canvas, but a live source of truth that tools can query programmatically. That’s the direction lots of platform vendors have been hinting at, and Figma is trying to make it practical for teams that want AI to help implement rather than just imagine.

Bottom line

If you care about faster, less error-prone design-to-code workflows, Figma’s MCP expansion for Make is a notable step. It doesn’t replace engineers or rigorous review, but it hands AI agents better blueprints to work from — and in the near future, that could shave hours or days off iteration cycles. For teams that treat design files as living artifacts, not static pictures, this is the sort of behind-the-scenes upgrade that quietly changes how work gets done.



Topic: Figma