Inspiration

When you're out walking, biking, or exploring a new area, you often find yourself wondering: "What's that building?" or "Is there something interesting nearby?" Mapwise was born from that everyday curiosity — a desire to effortlessly access real-time information about the world around you without having to stop and type.

What it does

Mapwise uses your real-time location and voice input to answer natural language queries about your surroundings. By combining OpenAI’s real-time APIs with geospatial data, it can identify landmarks, describe nearby places, and even surface relevant local news — all with a simple spoken question.
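To give a feel for the location piece, here is a minimal sketch of how the browser's Geolocation API can supply coordinates to attach to each spoken question. The function names and the `/api/ask` route are illustrative placeholders, not the actual Mapwise code:

```typescript
// Grab the user's current position so it can travel with the spoken query.
// getUserLocation and askAboutSurroundings are illustrative names only.
export function getUserLocation(): Promise<GeolocationCoordinates> {
  return new Promise((resolve, reject) => {
    if (!("geolocation" in navigator)) {
      reject(new Error("Geolocation is not available in this browser"));
      return;
    }
    navigator.geolocation.getCurrentPosition(
      (position) => resolve(position.coords),
      (error) => reject(error),
      { enableHighAccuracy: true, timeout: 10_000 }
    );
  });
}

// Example: send the transcript plus coordinates to a (hypothetical) API route.
export async function askAboutSurroundings(transcript: string) {
  const { latitude, longitude } = await getUserLocation();
  return fetch("/api/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: transcript, latitude, longitude }),
  });
}
```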

How we built it

We used Next.js for the frontend, integrated OpenAI’s real-time APIs for transcription and response generation, and tapped into Tavily to provide local news and context-aware content. It’s a location-aware, voice-first experience designed to feel effortless and conversational.
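As a rough sketch of the Tavily side, a Next.js route handler can forward the transcribed question and the user's coordinates to Tavily's search endpoint and return the results for the assistant to ground its answer on. The route path, the request shape, and the `TAVILY_API_KEY` variable are assumptions for illustration; the body fields follow Tavily's documented REST API but should be checked against the current docs:

```typescript
// app/api/ask/route.ts — hypothetical route, not the actual Mapwise source.
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { query, latitude, longitude } = await request.json();

  // Bias the web search toward the user's location by folding the
  // coordinates into the query text (a simple heuristic).
  const localizedQuery = `${query} near latitude ${latitude}, longitude ${longitude}`;

  // Tavily's search endpoint; verify parameter names against the current docs.
  const tavilyResponse = await fetch("https://api.tavily.com/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      api_key: process.env.TAVILY_API_KEY,
      query: localizedQuery,
      topic: "news",
      max_results: 5,
    }),
  });

  if (!tavilyResponse.ok) {
    return NextResponse.json({ error: "Search failed" }, { status: 502 });
  }

  return NextResponse.json(await tavilyResponse.json());
}
```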

Challenges we ran into

  • Getting voice input to reliably convert to text across different browsers
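For anyone hitting the same issue, one way to cope with the inconsistency (a sketch, not the Mapwise implementation) is to feature-detect the Web Speech API, which Chrome and Edge expose under a webkit prefix, and fall back to recording audio with `MediaRecorder` for server-side transcription where it is missing. The `transcribeOnServer` helper below is hypothetical:

```typescript
// Posts recorded audio to a speech-to-text backend; hypothetical helper.
declare function transcribeOnServer(audio: Blob): Promise<string>;

// Feature-detect the Web Speech API; Firefox and some mobile browsers lack it.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

export async function captureSpeech(): Promise<string> {
  if (SpeechRecognitionImpl) {
    // Native in-browser speech-to-text.
    return new Promise((resolve, reject) => {
      const recognition = new SpeechRecognitionImpl();
      recognition.lang = "en-US";
      recognition.interimResults = false;
      recognition.onresult = (event: any) =>
        resolve(event.results[0][0].transcript);
      recognition.onerror = (event: any) => reject(event.error);
      recognition.start();
    });
  }

  // Fallback: record a short clip and transcribe it on the server.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);

  return new Promise((resolve, reject) => {
    recorder.onstop = async () => {
      try {
        resolve(await transcribeOnServer(new Blob(chunks, { type: "audio/webm" })));
      } catch (err) {
        reject(err);
      }
    };
    recorder.start();
    setTimeout(() => recorder.stop(), 5_000); // capture ~5 seconds of audio
  });
}
```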

Accomplishments that we're proud of

  • Successfully integrating Windsurf for real-time voice interactions
  • Building a working, voice-controlled conversational map assistant
  • Creating a fluid UI without relying on Google Maps navigation

What we learned

  • How to make a live conversational agent feel natural and responsive
  • How to handle (and recover from) AI hallucinations and errors
  • The power and quirks of Windsurf, Lovable, and OpenAI’s real-time APIs
  • That building a multimodal experience is very possible — and very fun

What's next for Mapwise

  • Supporting longer conversations with context memory
  • Letting users save their conversations and revisit their voice history to find points of interest
  • Enhancing map personalization and proactive discovery suggestions

Built With

  • gpt
  • next.js
  • openai-realtime-api
  • tavily