Inspiration
Our inspiration started with the hackathon's theme: 20 years of Google Maps. Here in Ho Chi Minh City, 20 years isn't just a number; it's a complete transformation. It's the difference between a skyline with a few tall buildings and the iconic silhouette of Landmark 81. It's Nguyễn Huệ Street changing from a busy roundabout into the beautiful walking plaza it is today.
We realized that while Google Maps perfectly captures the what and the where, the who and the why—the human stories behind these changes—are often lost. Our inspiration came from past winners like City of Mural Arts, which gave a city's art a voice. We wanted to do the same for a city's memories. We saw the AI-powered summaries demo and a lightbulb went off: what if we could use AI to not just summarize data, but to help articulate the feeling of a memory? Our project is our answer to that question: to build a canvas for our collective memory.
What it does
Urban Canvas AI turns Google Maps into an immersive, 4D time capsule. Users can:
Fly through a photorealistic 3D world: We use Google's 3D Tiles to create a stunning, interactive landscape, not a flat map.
Discover 20 years of memories: Explore crowd-sourced memories pinned to locations around the globe, filtering them with a timeline slider from 2005 to 2025.
Travel back in time: With our "Then & Now" feature, users can view a memory and see the historical Google Street View from that era side-by-side with the present day.
Become a storyteller with AI's help: Users can drop a pin, upload a photo, and provide a few keywords. Our AI Memory Weaver, powered by the Gemini model, helps them craft a beautiful narrative, turning a simple photo into a rich story.
See the city's soul: We use data visualization to render heatmaps of a city's most cherished locations, showing the emotional pulse of a neighborhood.
How we built it
This was an intense sprint. Our stack is modern, scalable, and built entirely on Google's ecosystem.
Frontend: We used React with Vite for a fast development experience. The core is the Maps JavaScript API, where we integrated Photorealistic 3D Tiles and the Aerial View API for the immersive experience. For data visualization layers like the heatmap, we used the Deck.gl library.
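As a sketch of the heatmap layer described above: Deck.gl's `HeatmapLayer` consumes an array of weighted points, so we shape our memory records into that format first. The field names here (`lng`, `lat`, `cherishCount`) are assumptions for illustration, not the exact production schema.

```javascript
// Convert memory records into the weighted-point shape that
// Deck.gl's HeatmapLayer accessors will read. "cherishCount" is a
// hypothetical per-memory engagement field.
function toHeatmapPoints(memories) {
  return memories
    .filter((m) => Number.isFinite(m.lng) && Number.isFinite(m.lat))
    .map((m) => ({
      position: [m.lng, m.lat],          // Deck.gl convention: [lng, lat]
      weight: 1 + (m.cherishCount || 0), // more-cherished spots glow hotter
    }));
}

// The points then feed the layer, roughly:
// new HeatmapLayer({
//   id: "memory-heat",
//   data: toHeatmapPoints(memories),
//   getPosition: (d) => d.position,
//   getWeight: (d) => d.weight,
// });
```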
Backend & Database: We chose a serverless architecture using Google Firebase. Firestore stores all the memory data (text, coordinates, timestamps), Firebase Storage holds the user-uploaded photos and audio clips, and Firebase Authentication manages user accounts securely. We also integrated the Google Home API, specifically as a smart home Action.
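To keep Firestore writes consistent, a memory can be built in one place before saving. This is a minimal sketch; the field names are assumptions based on the data described above (text, coordinates, timestamps), and in production `createdAt` would use Firestore's `serverTimestamp()` instead of the client clock.

```javascript
// Assemble a memory document for Firestore (hypothetical schema).
// Validates that the memory falls inside the app's 2005-2025 timeline.
function buildMemoryDoc({ uid, lat, lng, text, photoUrl, year }) {
  if (year < 2005 || year > 2025) {
    throw new RangeError("year outside the 2005-2025 timeline");
  }
  return {
    ownerUid: uid,
    location: { lat, lng },
    text,
    photoUrl: photoUrl || null,
    year,                  // drives the timeline slider filter
    createdAt: Date.now(), // production: firebase.firestore.FieldValue.serverTimestamp()
  };
}

// Usage (with an initialized Firestore instance):
// await db.collection("memories").add(buildMemoryDoc(input));
```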
AI Integration: Our backend makes a secure server-side call to the Vertex AI platform to access the Gemini model. We engineered a specific prompt that feeds the model the user's keywords, the location's name and address (from the Places API), and instructs it to adopt a nostalgic, narrative tone.
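The prompt assembly can be sketched as a small pure function run server-side before the Vertex AI call. The exact wording below is illustrative, not our production prompt; the inputs mirror what the section describes (user keywords plus the place name and address from the Places API).

```javascript
// Build the nostalgic-narrative prompt sent to Gemini via Vertex AI.
// The instruction text here is a simplified stand-in for the refined
// production prompt.
function buildMemoryPrompt({ keywords, placeName, address, year }) {
  return [
    "You are a nostalgic storyteller for a city memory archive.",
    `Write a short first-person memory set in ${year} at ${placeName} (${address}).`,
    `Weave in these keywords: ${keywords.join(", ")}.`,
    "Focus on sensory details and emotion; avoid encyclopedic facts.",
  ].join("\n");
}
```

Keeping prompt construction separate from the API call made it easy to iterate on tone without touching the Vertex AI plumbing.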
Challenges we ran into
Performance on 3D Maps: Rendering thousands of memory pins on top of high-resolution 3D tiles was initially very slow. We solved this by implementing smart clustering and viewport-based data fetching. We only load memories for the visible area of the map, ensuring a smooth experience even in densely populated areas.
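The viewport-based loading above can be sketched as a bounds check. In production this runs as a Firestore range query (e.g. on a geohash field) rather than an in-memory filter, but the filtering logic it implements looks like this; the bounds shape mirrors the Maps JavaScript API's `LatLngBounds` corners, and the simple comparison below does not handle viewports crossing the antimeridian.

```javascript
// Keep only memories inside the currently visible map bounds, so we
// never render (or fetch) pins the user cannot see.
function inViewport(memories, bounds) {
  const { north, south, east, west } = bounds;
  return memories.filter(
    (m) => m.lat <= north && m.lat >= south && m.lng <= east && m.lng >= west
  );
}
```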
Getting the AI tone right: Our first prompts for the Gemini model produced very generic, Wikipedia-like descriptions. The challenge was to give it a soul. We spent a full day refining the prompt, feeding it examples, and instructing it to focus on sensory details and emotion. That was the breakthrough for making the AI feel like a creative partner.
Inconsistent Historical Data: Not every street corner in the world has Street View imagery from 2008. We couldn't promise the "Then & Now" feature for every memory. We solved this with graceful degradation in the UI: the button to activate the feature only appears if our backend confirms with the Street View API that historical imagery for that location and timeframe actually exists.
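The graceful-degradation check can be sketched as a decision helper over the Street View metadata response, which reports a `status` and an imagery `date` in `"YYYY-MM"` form. The tolerance window below is an assumption for illustration; the actual backend query to the Street View API is omitted.

```javascript
// Decide whether to show the "Then & Now" button: only when Street
// View metadata confirms imagery exists near the memory's era.
// `metadata` mirrors the API response shape ({ status, date }).
function canShowThenAndNow(metadata, memoryYear, toleranceYears = 2) {
  if (!metadata || metadata.status !== "OK" || !metadata.date) return false;
  const imageryYear = parseInt(metadata.date.slice(0, 4), 10); // date is "YYYY-MM"
  return Math.abs(imageryYear - memoryYear) <= toleranceYears;
}
```

When this returns `false`, the UI simply never renders the button, so users are never promised a comparison we can't deliver.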
Accomplishments that we're proud of
Honestly, we're proud that it works! But specifically, we're most proud of the first time we tested the full loop: we pinned a memory of a favorite Bánh Mì cart from 2012 that's no longer there, let the AI draft a beautiful tribute to it, and then used the "Then & Now" feature to see the empty street corner where it used to be. It was a powerful, emotional moment. That seamless integration of 3D flight, AI storytelling, and historical data is our biggest accomplishment. It felt less like a feature and more like magic.
What we learned
AI as an Assistant, not a Replacement: We learned that the true power of generative AI in a creative context is to overcome "blank page syndrome." It's an incredible tool to assist and augment human creativity, not replace it.
The Power of Context: A map is just geometry until you add context. By layering personal stories, historical imagery, and emotional data, a simple map becomes an infinitely deep document.
Technical Humility: We learned that even with powerful tools like 3D Tiles, performance is a feature. Thoughtful data loading and optimization are just as important as the flashy visuals.
What's next for Urban Canvas AI
This feels like the beginning, not the end.
Build the Community: We want to add social features—user profiles, the ability to follow your favorite storytellers, and comment threads to create conversations around memories.
Curated "Memory Walks": Partnering with local historians or tourism boards to create guided tours through a city's past, like "A Walk Through Lost Saigon Cinemas" or "The Evolution of Street Art in District 1."
AR Integration: The ultimate goal is to bring these memories into the real world. Imagine pointing your phone at a building and seeing a memory from 2007 appear as an AR overlay, letting you truly stand in two moments at once.
Built With
- deck.gl
- firebase
- gemini
- google-maps
- react
- vertex-ai
- vite