Inspiration 💡

The "handoff" from design to development is a notorious bottleneck in software delivery. Engineers spend days manually translating pixels into boilerplate code, often losing design fidelity. I wanted to build an agent that could "see" a designer’s intent and instantly provide a production-ready codebase, allowing developers to focus on high-value logic rather than UI repetition.

What it does 🧩

Figma to Code is an autonomous AI agent that interprets Figma prototypes through computer vision and generates pixel-perfect React Native code. It doesn't just export assets; it reasons through UI hierarchies and brand systems, persisting the results in Google Cloud Storage and deploying a live preview via Google Cloud Run for instant stakeholder validation.

How I built it 🏗️

The solution uses a Hybrid Edge-Cloud Architecture:

  • The Brain & Hands: Built with Gemini 3.1 Flash and the Gemini Agent Development Kit (ADK). It uses Playwright and PyAutoGUI to physically navigate and capture Figma canvases.

  • The Persistence: Every build is automatically uploaded to Google Cloud Storage using the Python SDK.

  • The Presentation: A Google Cloud Run microservice (Flask/Docker) serves a live dashboard that renders the generated .tsx builds in real time.
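The persistence step above can be sketched as follows. This is a minimal illustration, not the project's actual code: the bucket name and helper names are hypothetical, and the real agent wires this into its Playwright/PyAutoGUI capture loop.

```python
import datetime
from pathlib import Path

# Hypothetical bucket name; the real project configures its own.
BUCKET_NAME = "figma-to-code-builds"


def build_blob_prefix(build_dir: Path, timestamp: str) -> str:
    """Derive a deterministic object prefix for one generated build."""
    return f"builds/{timestamp}/{build_dir.name}"


def upload_build(build_dir: Path, bucket_name: str = BUCKET_NAME) -> list[str]:
    """Upload every generated .tsx file in a build directory to GCS."""
    # Imported lazily so the pure helper above stays dependency-free.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket(bucket_name)
    timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    prefix = build_blob_prefix(build_dir, timestamp)

    uploaded = []
    for tsx in build_dir.glob("**/*.tsx"):
        blob = bucket.blob(f"{prefix}/{tsx.relative_to(build_dir)}")
        blob.upload_from_filename(str(tsx))
        uploaded.append(blob.name)
    return uploaded
```

Keeping every build under a timestamped prefix gives the dashboard an easy way to list and diff successive generations.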

Challenges I ran into 🛠️

Navigating Figma’s infinite canvas required precise coordinate mapping and robust multimodal feedback loops. Additionally, configuring the IAM security layer for automated local-to-cloud deployments was a significant hurdle that required fine-tuning service account permissions to ensure a secure and seamless pipeline.
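The coordinate-mapping problem comes down to converting points on Figma's infinite canvas into absolute screen pixels that PyAutoGUI can click. A rough sketch of that transform, assuming a simplified viewport model (pan offset, zoom factor, and the viewport's on-screen origin; the exact state Figma exposes may differ):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Viewport:
    """Assumed model of Figma's viewport: pan offset in canvas units,
    zoom factor, and the viewport's top-left corner in screen pixels."""
    pan_x: float
    pan_y: float
    zoom: float
    screen_x: int
    screen_y: int


def canvas_to_screen(vp: Viewport, cx: float, cy: float) -> tuple[int, int]:
    """Map a canvas point to absolute screen pixels, e.g. for
    pyautogui.click(*canvas_to_screen(vp, cx, cy))."""
    sx = vp.screen_x + (cx - vp.pan_x) * vp.zoom
    sy = vp.screen_y + (cy - vp.pan_y) * vp.zoom
    return round(sx), round(sy)
```

Because pan and zoom drift as the agent scrolls, the multimodal feedback loop re-reads the viewport state after each action rather than trusting stale coordinates.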

Accomplishments that I'm proud of 🏅

I successfully closed the loop between Physical Automation and Cloud Deployment. Seeing the agent "move the mouse" in Figma and then seeing the resulting code live on a Cloud Run URL seconds later is a testament to the power of autonomous agents in modern DevOps.
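The serving side of that loop, a Cloud Run dashboard listing the generated builds, might look like the sketch below. Function and bucket names are illustrative, not the project's actual code; the HTML renderer is kept as a pure helper so it can be exercised without a running server.

```python
def render_dashboard(build_names: list[str]) -> str:
    """Render a minimal HTML index of generated builds (pure helper)."""
    items = "".join(
        f"<li><a href='/preview/{name}'>{name}</a></li>" for name in build_names
    )
    return f"<html><body><h1>Generated Builds</h1><ul>{items}</ul></body></html>"


def create_app(bucket_name: str = "figma-to-code-builds"):
    """Wire the helper into a Flask app suitable for Cloud Run."""
    # Lazy imports keep the pure helper usable without Flask or GCS installed.
    from flask import Flask
    from google.cloud import storage

    app = Flask(__name__)

    @app.get("/")
    def index():
        blobs = storage.Client().list_blobs(bucket_name, prefix="builds/")
        # Object names look like builds/<timestamp>/<app>/...; group by timestamp.
        names = sorted({b.name.split("/")[1] for b in blobs}, reverse=True)
        return render_dashboard(names)

    return app
```

Cloud Run only needs the container to listen on `$PORT`, so the same app runs locally with `flask run` and in production unchanged.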

What I learned 🎓

This project highlighted the immense potential of the Gemini ADK to handle tasks that traditional APIs cannot. I learned how to orchestrate local hardware events with cloud-scale AI reasoning, and the importance of building "Cloud-Native" from the start to ensure scalability.

What's next for Figma to Code powered by Gemini 🗺️ 🤖

The vision is to expand the agent's capabilities to include Component Library Integration. Instead of generic code, the agent will be trained to use a company's specific internal design system components, creating a truly seamless bridge between brand identity and production code.

Built With

Gemini · Gemini ADK · Playwright · PyAutoGUI · Google Cloud Storage · Google Cloud Run · Flask · Docker · Python · React Native
