Inspiration

I wanted to build something that makes places feel alive. Most map apps are good at showing where things are, but they do not explain why a place matters or make exploring feel personal. MaxiGO was inspired by the idea of having a smart local guide with you, one that can turn live location data into short, spoken stories and recommendations.
What it does

MaxiGO is an AI-powered location guide that uses your live position or map clicks to describe what is around you. It can identify nearby landmarks, churches, statues, museums, restaurants, cafés, and other named places, then turn that data into short guide-style explanations. You can tap the map to explore, tap nearby pins for focused descriptions, ask follow-up questions by voice or text, and use Walk/Drive Mode for movement-based spoken updates.
How we built it

We built MaxiGO with a React + Vite frontend and an Express + TypeScript backend. The frontend handles the map, user interaction, transcript, and voice controls. The backend collects and structures place data, enriches it, and generates the final responses.
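The backend's collect-then-narrate flow could be sketched roughly like this. This is a minimal illustration, not MaxiGO's actual code: `fetchNearbyPlaces` and `buildNarration` are hypothetical stand-ins for the real Overpass/Wikidata lookups and the OpenRouter call.

```typescript
// A nearby place after the backend has structured the raw map data.
interface Place {
  name: string;
  kind: string;      // e.g. "museum", "church", "statue"
  distanceM: number; // distance from the user in meters
}

// Stand-in for the Overpass/Nominatim lookups: the real app queries live data.
async function fetchNearbyPlaces(lat: number, lon: number): Promise<Place[]> {
  return [
    { name: "City Museum", kind: "museum", distanceM: 120 },
    { name: "Old Town Church", kind: "church", distanceM: 340 },
  ];
}

// Stand-in for the OpenRouter call: format places into guide-style text.
function buildNarration(places: Place[]): string {
  const parts = places.map(
    (p) => `${p.name} (${p.kind}, ${p.distanceM} m away)`
  );
  return `Around you: ${parts.join("; ")}.`;
}

// The end-to-end step behind a map tap or position update.
async function describeLocation(lat: number, lon: number): Promise<string> {
  const places = await fetchNearbyPlaces(lat, lon);
  return buildNarration(places);
}
```

In the real app, the narration string would then be sent to ElevenLabs for speech output.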
We combined:
- OpenStreetMap, Nominatim, and Overpass for location and nearby places
- Wikidata and Wikipedia for historical and contextual information
- OpenRouter for turning structured place data into natural guide-style narration
- ElevenLabs for text-to-speech and speech-to-text

Challenges we ran into

One of the biggest challenges was turning raw map data into good spoken output. A map may contain lots of nearby labels, but not all of them are relevant or useful in a spoken guide. We had to improve filtering, ranking, and grouping so the app would focus on landmarks, sights, and interesting nearby places.
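As a rough illustration of that filtering and ranking step, a scoring pass might look like this. The weights and the list of "interesting" kinds are invented for the example; they are not MaxiGO's actual heuristics.

```typescript
// A raw map feature before filtering; unnamed features are common in OSM data.
interface RawPlace {
  name?: string;
  kind: string;
  distanceM: number;
}

// Illustrative weights: favor landmarks and sights over everyday amenities.
const KIND_WEIGHT: Record<string, number> = {
  museum: 3, church: 2.5, statue: 2, restaurant: 1, cafe: 1,
};

function rankPlaces(raw: RawPlace[], limit = 5): RawPlace[] {
  return raw
    // Drop unnamed or unrecognized features: they rarely work in spoken narration.
    .filter((p) => p.name && KIND_WEIGHT[p.kind] !== undefined)
    // Score = kind weight minus a small distance penalty (per km).
    .sort((a, b) =>
      (KIND_WEIGHT[b.kind] - b.distanceM / 1000) -
      (KIND_WEIGHT[a.kind] - a.distanceM / 1000)
    )
    .slice(0, limit);
}
```

With this scoring, a museum 100 m away outranks a café 50 m away, which matches the goal of focusing the narration on sights rather than the nearest label.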
We also ran into speed and reliability issues. Public map-data endpoints can time out, AI generation can be slow, and browser audio playback can fail because of autoplay restrictions. We had to build fallbacks, shorter response paths, and more resilient voice playback behavior to make the experience actually usable.
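The fallback pattern described above can be sketched as a generic "first attempt to succeed within a timeout wins" helper. This is an assumption-level sketch of the idea, not the project's actual implementation.

```typescript
// Reject if a task does not settle within ms milliseconds.
async function withTimeout<T>(task: () => Promise<T>, ms: number): Promise<T> {
  return await Promise.race([
    task(),
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("timeout")), ms)
    ),
  ]);
}

// Try each attempt in order (e.g. primary endpoint, then a mirror),
// returning the first result that arrives in time.
async function firstSuccessful<T>(
  attempts: Array<() => Promise<T>>,
  timeoutMs: number
): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await withTimeout(attempt, timeoutMs);
    } catch (err) {
      lastError = err; // timeout or failure: fall through to the next attempt
    }
  }
  throw lastError;
}
```

The same shape works for a "shorter response path": the fallback attempt can return a briefer, cheaper narration instead of hitting a second endpoint.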
Accomplishments that we're proud of

We are proud that MaxiGO became more than a simple map app or chatbot. It can now connect live location data, cultural context, AI narration, and voice interaction into a single experience. We also added focused pin explanations, live Walk/Drive Mode, and location-aware voice output, which made the project feel much closer to a real guided experience.
What we learned

We learned that good AI output depends heavily on the quality of the structured context behind it. Better prompts helped, but better place selection and cleaner raw data helped even more. We also learned how important UX details are in voice products, especially around timing, interruptions, permissions, and mobile browser behavior.
What's next for MaxiGO

The next step is to make MaxiGO feel even more seamless in motion. That means improving Walk/Drive Mode, tuning update timing, adding smarter caching for faster responses, and making the narration even more adaptive to where the user is and how they are moving. We also want to improve production polish with domain setup, HTTPS, and a smoother mobile-first audio experience.
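One simple shape that caching could take is a TTL cache keyed by a rounded map cell, so nearby requests and small GPS jitter reuse the same place data. The cell size and TTL here are invented for illustration.

```typescript
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

// A minimal time-to-live cache: entries silently expire after ttlMs.
class TtlCache<T> {
  private store = new Map<string, CacheEntry<T>>();
  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// Round coordinates to ~100 m cells so slightly different positions share a key.
function cellKey(lat: number, lon: number): string {
  return `${lat.toFixed(3)},${lon.toFixed(3)}`;
}
```

In Walk/Drive Mode this would let consecutive updates along the same street skip redundant Overpass and Wikidata lookups entirely.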
Built With
- api
- css
- elevenlabs
- express.js
- html
- leaflet.js
- node.js
- nominatim
- openrouter
- openstreetmap
- overpass
- react
- tailwind
- typescript
- vite
- wikidata
- wikipedia