Inspiration

While working on earlier projects, I spent hours trying to recreate ideas in Figma despite having no prior prototyping or design experience. The friction between a rough idea and a usable prototype made early iteration slow and frustrating. SketchToApp was built to close that gap by turning simple sketches directly into working applications.

What it does

SketchToApp transforms hand-drawn wireframes into production-ready React code in seconds. Upload a photo of a sketch and get back a fully functional, accessible UI component with semantic HTML, responsive layouts, and interactive elements. It infers layout patterns (forms, dashboards, navigation), follows WCAG accessibility standards, and produces immediately usable code.

How I built it

SketchToApp uses Gemini's multimodal reasoning to analyze UI sketches and infer layout structure, components, and user intent. A constrained generation pipeline ensures predictable React output with semantic HTML and accessibility-first patterns. Generated code is rendered safely in a sandboxed iframe and can be edited and re-run live in the browser.
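The sandboxing step can be sketched as follows. This is a minimal illustration of the approach described above, not the actual SketchToApp source; `buildSandboxedFrame` and its shape are assumptions:

```javascript
// Build the attributes for an <iframe> that runs untrusted, AI-generated
// component code. Granting "allow-scripts" WITHOUT "allow-same-origin"
// puts the frame in an opaque origin: scripts run, but they cannot read
// cookies or local storage, and cannot touch the parent page's DOM.
function buildSandboxedFrame(generatedJs) {
  const srcdoc = [
    '<!doctype html><html><head><meta charset="utf-8"></head>',
    '<body><div id="root"></div>',
    `<script type="module">${generatedJs}<\/script>`,
    '</body></html>',
  ].join('');
  return { sandbox: 'allow-scripts', srcdoc };
}

// In the browser, each edit-and-re-run cycle would create a fresh
// iframe, assign these attributes, and swap it into the preview pane:
//   const frame = document.createElement('iframe');
//   frame.setAttribute('sandbox', attrs.sandbox);
//   frame.srcdoc = attrs.srcdoc;
```

Discarding the old frame on every re-run also resets any state the generated code created, which keeps repeated edits predictable.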

Challenges I ran into

One of the biggest challenges was working with Gemini's API rate limits while keeping the user experience smooth. I also had to ensure that AI-generated React code could be rendered safely without introducing security risks. These issues were addressed by adding strict prompt constraints to reduce retries, implementing client-side cooldown handling to prevent burst requests, and rendering generated code inside a sandboxed iframe with limited permissions.
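The client-side cooldown handling can be sketched like this (an illustrative reconstruction; `CooldownGate` is an assumed name, not the real class):

```javascript
// A minimal client-side cooldown gate. Calls attempted during the
// cooldown window are rejected locally instead of being sent to the
// rate-limited API, so bursts of clicks never become bursts of requests.
class CooldownGate {
  constructor(cooldownMs, now = Date.now) {
    this.cooldownMs = cooldownMs;
    this.now = now;          // injectable clock, handy for testing
    this.nextAllowed = 0;    // timestamp (ms) when the next call may go out
  }

  // Returns true and starts a new cooldown window if a request may be
  // sent now; returns false while still cooling down.
  tryAcquire() {
    const t = this.now();
    if (t < this.nextAllowed) return false;
    this.nextAllowed = t + this.cooldownMs;
    return true;
  }
}
```

In the UI, a rejected `tryAcquire()` would disable the "Generate" button and show the remaining wait, rather than letting the request fail server-side.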

Accomplishments that I'm proud of

I eliminated UI hallucinations by ensuring the generated output includes only elements present in the original sketch. I built a secure, sandboxed execution environment that renders AI-generated code without crashes or security risks. The accessibility scoring system runs automatically with zero manual configuration. Most importantly, the entire pipeline operates reliably despite Gemini's rate limits, thanks to client-side queuing and cooldown handling.
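For illustration, a check of the kind such an accessibility scorer might run could look like this. This is an assumption sketched from the description above, using crude regex rules as a stand-in for a real DOM-based audit:

```javascript
// Score a generated HTML string against a few common WCAG failures.
// Each rule flags one violation pattern; the score is the fraction of
// rules that found no violation (1.0 means all checks passed).
function scoreAccessibility(html) {
  const rules = [
    /<img(?![^>]*\balt=)[^>]*>/i,                          // <img> missing alt text
    /<input(?![^>]*\baria-label=)(?![^>]*\bid=)[^>]*>/i,   // input with no label hook
    /<button[^>]*>\s*<\/button>/i,                         // button with no accessible name
  ];
  const failures = rules.filter((rule) => rule.test(html)).length;
  return (rules.length - failures) / rules.length;
}
```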

What I learned

Building SketchToApp taught me how to design reliable multimodal prompts that produce deterministic UI outputs rather than unpredictable code. I learned how to balance automation with user control in AI-assisted UX tools, and how to integrate accessibility considerations directly into the generation pipeline instead of treating them as an afterthought.

What's next for SketchToApp

Future work includes multi-screen flows, component-level regeneration, deeper Figma integration, and collaborative iteration features.
