Inspiration
I was inspired by Snap Maps, which lets users make geo-tagged public image and video posts. I wanted to build something similar with the added benefit of searchability. I have also seen a lot of cool apps recently using the RAG architecture to enhance AI prompting and retrieve content semantically relevant to the user's query.
What it does
NYC 3D is a map-based platform for sharing and discovering experiences. Users can post geo-tagged stories and updates, creating a vibrant, real-time view of the world around them. Businesses can also leverage NYC 3D to promote events with eye-catching, location-specific posts featuring embedded links and images, reaching their local audience where they are.
NYC 3D uses AI curation, powered by Retrieval-Augmented Generation (RAG), so users see the posts most relevant to their query. Whether you're a user exploring your city or a business wanting to boost visibility, NYC 3D brings local insights and events to life like never before.
How we built it
I used Convex to host the backend functions, file storage, and metadata/vector storage.
The vector embeddings for the post text are generated with the Hugging Face Serverless Inference API. Once posts are indexed with their embedding vectors, the same embedding model generates an embedding of the query, which is used for similarity matching.
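The similarity-matching step can be sketched as follows. This is a minimal illustration, not the actual NYC 3D code: the `IndexedPost` shape, function names, and the use of plain cosine similarity (rather than Convex's built-in vector index) are assumptions.

```typescript
// Hypothetical sketch: rank stored post embeddings against a query
// embedding by cosine similarity and keep the top k matches.

type IndexedPost = { id: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topKPosts(query: number[], posts: IndexedPost[], k: number): IndexedPost[] {
  // Sort a copy descending by similarity to the query, keep the first k
  return [...posts]
    .sort((p, q) => cosineSimilarity(query, q.embedding) - cosineSimilarity(query, p.embedding))
    .slice(0, k);
}
```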
These relevant posts are shown on screen as pins. If posts fall within a certain distance of one another, they are clustered, and a polygon, generated by computing the convex hull of the clustered points, is drawn on the map.
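The cluster-outline step can be sketched with Andrew's monotone chain algorithm for the convex hull. This is a generic illustration under assumed types; the point shape and function names are not from the actual project.

```typescript
// Hedged sketch: compute the convex hull of a cluster's points to get
// the polygon outline drawn on the map (Andrew's monotone chain).

type Point = { lng: number; lat: number };

// Cross product of vectors OA and OB; positive means a counter-clockwise turn
function cross(o: Point, a: Point, b: Point): number {
  return (a.lng - o.lng) * (b.lat - o.lat) - (a.lat - o.lat) * (b.lng - o.lng);
}

function convexHull(points: Point[]): Point[] {
  const pts = [...points].sort((a, b) => a.lng - b.lng || a.lat - b.lat);
  if (pts.length <= 2) return pts;

  const lower: Point[] = [];
  for (const p of pts) {
    while (lower.length >= 2 && cross(lower[lower.length - 2], lower[lower.length - 1], p) <= 0) lower.pop();
    lower.push(p);
  }
  const upper: Point[] = [];
  for (const p of [...pts].reverse()) {
    while (upper.length >= 2 && cross(upper[upper.length - 2], upper[upper.length - 1], p) <= 0) upper.pop();
    upper.push(p);
  }
  // Each half repeats the other's endpoint, so drop the last point of each
  return lower.slice(0, -1).concat(upper.slice(0, -1));
}
```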
For users with [Chrome Built-in AI](https://developer.chrome.com/docs/ai/built-in), the in-browser LLM is fed these relevant posts as prompt context, allowing it to generate a relevant response to the user's query.
Users can then explore posts on their own or activate the fly-through, which uses a series of setTimeout calls to interpolate between the top 5 posts and display their content.
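The setTimeout-driven fly-through can be sketched as a recursive chain with a depth limit. The `flyTo` callback, dwell time, and `Post` shape are placeholders, not the real implementation.

```typescript
// Hypothetical sketch: visit the top posts one at a time, moving the
// camera to each and waiting before recursing to the next.

type Post = { lat: number; lng: number; content: string };

const MAX_DEPTH = 5; // the fly-through stops after the top 5 posts

function flyThrough(
  posts: Post[],
  flyTo: (p: Post) => void, // would move the 3D map camera and show the post
  dwellMs = 3000,           // assumed time spent on each post
  depth = 0,
): void {
  if (depth >= MAX_DEPTH || depth >= posts.length) return;
  flyTo(posts[depth]);
  // Schedule the next hop; recursion depth is bounded by MAX_DEPTH
  setTimeout(() => flyThrough(posts, flyTo, dwellMs, depth + 1), dwellMs);
}
```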
Challenges we ran into
Chaining together the fly-through experience was a bit hectic; I tried to make it iterative, but in the end I left it as a recursive function and limited the depth to 5.
The clustering logic was a bit difficult to reason through, and figuring out the right level of object to pair with the associated 3D map components took a couple of iterations.
Accomplishments that we're proud of
- The RAG system was very fun to build. I have built toy RAG projects before, but this is the first one I have deployed with live ingestion and updating.
- I am really proud of the curated fly-through functionality.
- The tiered structure of Posts and Clusters and their association with 3D map items was also a nice, clean refactor that I am proud of.
What we learned
- I learned how to properly use the 3D maps API for loading 3D elements and controlling the camera
What's next for NYC 3D
- Once the 3D maps API is released for full use, I plan to launch this app.
- We will add a TTL to posts.
- Once we have a large enough user base, we will work with businesses to let them add sponsored posts and custom AI agents for their business inside NYC 3D.
- Adding videos to posts
- Adding relevant 3D models to the map (e.g., when you zoom in on a park, loading and displaying animated duck GLBs walking around, cars on streets, boats in the water, etc.)
Built With
- 3dmaps
- built-in-ai
- convex
- huggingface
- vite
