Inspiration
I've always been fascinated by urban design and how policy shapes the world around us. Over the past few years I've read hundreds of Substack essays and public-policy blog posts, and watched hundreds of urbanism- and architecture-related YouTube videos.
But policy documents and urban planning proposals are incredibly dense and abstract. It's hard for anyone who isn't a professional to visualize what a proposed change will actually look like or what its unintended consequences might be.
My inspiration was to bridge that gap. The goal was to make policy reasoning easy and accessible, almost like a game. I wanted to create a "SimCity" but for real-world neighborhoods, allowing anyone to experiment with changes in their own city. This is just the first step.
What it does
My app is an AI-powered simulator for urban planning. Here's the flow:
- Upload: You start by uploading an image of any city block or neighborhood.
- Analyze: Clicking "Analyze" sends the image to Gemini, which generates three high-level recommendations for improvement (e.g., "Increase Green Space").
- Act: When you click a recommendation, the AI generates a list of specific, actionable steps (e.g., "Convert parking lot to a park"). At the same time, it generates the potential "Second-order effects" of this entire strategy, showing you the good and the bad.
- Simulate: You select the actions you like and hit "Simulate." The app sends the original image and your chosen actions to a multimodal AI (Nano Banana) which edits the image to show your changes.
- Iterate: Every generated image is saved in a history gallery below. You can click on any previous version and start a new branch of edits, allowing for non-destructive, creative exploration.
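The branching history behind the Iterate step can be sketched as a small tree of image nodes. This is an illustrative model, not the app's actual code; the `HistoryNode` shape and `SimulationHistory` class are assumptions about how non-destructive branching could be tracked:

```typescript
// Each generated image is a node; branching from an earlier node
// starts a new, non-destructive line of edits.
interface HistoryNode {
  id: number;
  imageUrl: string;        // URL of the rendered image
  parentId: number | null; // null for the originally uploaded image
  actions: string[];       // actions applied to produce this image
}

class SimulationHistory {
  private nodes: HistoryNode[] = [];
  private nextId = 0;

  // Record a new image derived from `parentId` (null = the upload itself).
  add(imageUrl: string, actions: string[], parentId: number | null): HistoryNode {
    const node: HistoryNode = { id: this.nextId++, imageUrl, parentId, actions };
    this.nodes.push(node);
    return node;
  }

  // Walk parent links to reconstruct the chain of edits behind a node.
  lineage(id: number): HistoryNode[] {
    const byId = new Map<number, HistoryNode>();
    this.nodes.forEach(n => byId.set(n.id, n));
    const chain: HistoryNode[] = [];
    let cur = byId.get(id);
    while (cur) {
      chain.unshift(cur);
      cur = cur.parentId !== null ? byId.get(cur.parentId) : undefined;
    }
    return chain;
  }
}

// Example: one upload, then two independent branches from it.
const history = new SimulationHistory();
const upload = history.add("upload.png", [], null);
const parkEdit = history.add("v1.png", ["Convert parking lot to a park"], upload.id);
const bikeEdit = history.add("v2.png", ["Add protected bike lanes"], upload.id);
console.log(history.lineage(parkEdit.id).map(n => n.imageUrl)); // ["upload.png", "v1.png"]
```

Because each node only stores a parent reference, clicking any earlier image and simulating from it just adds a sibling branch; nothing downstream is overwritten.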
How we built it
I built this solo using a Vibe coding platform for the frontend UI. The entire application is architected around a series of chained AI API calls.
- Frontend: The UI is a clean three-column layout built to be simple and intuitive. The state management keeps track of the currently selected image, recommendations, and actions to feed into the AI prompts.
- AI Logic (Text): I use the Gemini Pro model for all text-based tasks. It's a sequence of prompts: the first analyzes the image for recommendations, the next generates actions based on the user's choice, and a final one deduces the second-order effects from the proposed strategy. I focused heavily on prompt engineering to get structured, concise outputs.
- AI Vision (Image): The core "simulation" feature is powered by a multimodal model, Nano Banana. It takes the current image and a text prompt describing the selected actions (e.g., "add a park where the parking lot is") and generates the new, edited cityscape.
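The chain of calls described above can be sketched roughly as follows. The `callTextModel` and `callImageModel` helpers, the prompt wording, and the return shapes are all illustrative placeholders (stubbed here so the chaining logic runs standalone); a real version would call the Gemini and Nano Banana endpoints inside them:

```typescript
// Placeholder for a text-model call (e.g., Gemini). A real implementation
// would send `prompt` to the API and parse a structured response.
async function callTextModel(prompt: string): Promise<string[]> {
  return [`stub result for: ${prompt.slice(0, 30)}`];
}

// Placeholder for a multimodal image-editing call (e.g., Nano Banana).
async function callImageModel(imageUrl: string, instruction: string): Promise<string> {
  return `edited(${imageUrl})`; // a real call would return the new image
}

// Step 1: analyze the uploaded image for high-level recommendations.
async function getRecommendations(imageUrl: string): Promise<string[]> {
  return callTextModel(`Given this city image (${imageUrl}), list 3 improvement recommendations.`);
}

// Step 2: expand the chosen recommendation into concrete actions and,
// in parallel, deduce the second-order effects of the strategy.
async function expandRecommendation(rec: string) {
  const [actions, effects] = await Promise.all([
    callTextModel(`List specific, actionable steps for: ${rec}`),
    callTextModel(`List positive and negative second-order effects of: ${rec}`),
  ]);
  return { actions, effects };
}

// Step 3: feed the selected actions plus the current image to the
// image model to render the simulated scene.
async function simulate(imageUrl: string, selected: string[]): Promise<string> {
  return callImageModel(imageUrl, selected.join("; "));
}

// Wiring the chain together, the output of each step feeding the next:
async function run() {
  const recs = await getRecommendations("block.png");
  const { actions } = await expandRecommendation(recs[0]);
  const newImage = await simulate("block.png", actions);
  console.log(newImage); // "edited(block.png)"
}
run();
```

The key design point is the hand-off: each call's parsed output becomes part of the next call's prompt, so the system reasons visually, strategically, and consequentially in sequence.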
Challenges we ran into
I did a lot of back-and-forth planning with Gemini before vibe-coding anything, which helped a lot.
But I did run into some frontend issues. It took a few tries to fix responsiveness and image-gallery bugs: uploading an image would break the whole UI, but I finally coaxed Gemini into fixing it!
Another challenge was designing the user flow. At first, I thought about having the second-order effects update in real-time with every click, but it felt chaotic. I simplified it so the analysis happens in one clear step after you choose a recommendation. It made the tool much more predictable and less overwhelming to use.
Accomplishments that we're proud of
I'm most proud of getting the full feedback loop working. It's not just a concept; the app successfully goes from a static image to AI analysis, to user choice, to a new, visually simulated image. Seeing the AI edit a real picture based on abstract policy ideas is the magic moment.
I'm also really happy with the Image History Gallery. It turns the tool from a simple editor into a non-destructive creative canvas. The ability to go back to a previous version and try a different path makes it feel like a true simulator where you can explore different futures for a place.
I started late, so I'm happy I got a functional, nice-looking web app deployed. I couldn't have done it without AI Studio's quick iteration via the App builder.
I also really like the Art Deco look of the app; I chose the font specifically for that vibe.
What we learned
I learned that the real power of generative AI isn't just in one-off generation, but in chaining models together to create a workflow. The output of one model becomes the input for the next, creating a system that can reason about a problem from multiple angles (visual, strategic, and consequential).
I also learned how crucial a simple UI is when the backend is complex. The user shouldn't have to think about the AI prompts; they should just feel like they're playing with ideas. Hiding the complexity was as important as building it.
What's next for Urban Planning AI Simulator
This is just the first version. There are a few key features I'm excited to build next:
- Link Support: Instead of just uploading images, I want users to be able to paste a link to a news article or policy PDF. The AI would first summarize the policy and then apply its effects to the simulation.
- Targeted Editing: Implement a drawing tool to let users circle a specific area of the image to apply changes, rather than editing the whole scene.
- Global Sliders: Add sliders for high-level concepts like "Walkability," "Density," or "Green Space" that would act as a baseline for the AI's recommendations, giving users more control.
- Sharing: A "Share Simulation" button that generates a unique link to your session, allowing others to see your vision and even build upon it.
Built With
- gemini
- google-ai-studio
- nano-banana
- typescript