⁉️ Why?
In 2020, there were 43.3 million blind individuals and 295 million people with Moderate or Severe Visual Impairment (MSVI) worldwide. Navigating large, complex buildings can be overwhelming for those with visual impairments, limiting their independence. Existing tools like canes or guide dogs are helpful but can't provide detailed, real-time navigation in indoor spaces. EyeMap was created to address this gap, offering a tech-driven solution that empowers visually impaired users to navigate confidently and independently through voice guidance and object recognition.
🌟 Inspiration
We often get lost in large, complex buildings like E7 🏢, and it made us wonder 🤔 how visually impaired individuals handle such situations without assistance. This challenge inspired us to create a solution that empowers them to navigate independently, providing real-time guidance and object recognition to simplify their experience in unfamiliar environments.
🧭 What it does
EyeMap is an accessory that "sees" 👁️ the world around you. We created the AI agent "Joe," who provides voice-guided navigation 🗣️, helping users move through complex buildings by detecting obstacles 🚧 and describing what's in front of them. It can also accurately detect people. Whether it's understanding the space around them or navigating to hard-to-find areas, EyeMap offers a seamless experience that lets users get around independently.
🛠️ How we built it
We developed EyeMap using React ⚛️ for the front-end, Python 🐍 for back-end processing, and FastAPI 🚀 for API management. Firebase 🔥 handles user data storage, and the Mappedin SDK powers the indoor navigation 🗺️. We also used OpenCV for image processing and Google Speech for speech-to-text.

We used Mappedin's provided 3D render for the front-end UI, using it to calculate the shortest routes, combining algorithms to find the closest rooms, and tracking where the user is. Although the tracking is currently only a simulation, we see great potential to improve on the technology, such as triangulating each user's 3D location within the building using wireless beacons, giving visually impaired users access to where everyone is and how they could navigate toward them.
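To give a flavour of how the back-end pieces fit together, here's a minimal sketch of a FastAPI endpoint that accepts a camera frame and detects people with OpenCV. The `/detect-people` route and the HOG-based pedestrian detector are illustrative assumptions for this sketch, not necessarily the exact route names or models EyeMap uses:

```python
# Minimal sketch: a FastAPI back-end accepting a camera frame and
# detecting people with OpenCV. Endpoint name and detector choice
# are assumptions for illustration, not EyeMap's exact pipeline.
import cv2
import numpy as np
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

# OpenCV ships a pre-trained HOG + linear-SVM pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

@app.post("/detect-people")
async def detect_people(frame: UploadFile = File(...)):
    # Decode the uploaded JPEG/PNG bytes into a BGR image.
    data = np.frombuffer(await frame.read(), dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_COLOR)

    # Run the detector; each box is (x, y, width, height) in pixels.
    boxes, weights = hog.detectMultiScale(image, winStride=(8, 8))

    return {
        "people_detected": len(boxes),
        "boxes": [[int(v) for v in box] for box in boxes],
    }
```

The voice agent can then turn the returned boxes into spoken descriptions like "one person ahead," keeping all heavy image processing on the server side.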
🚧 Challenges we ran into
One of the biggest challenges was figuring out how to efficiently integrate the Mappedin SDK for indoor navigation. Mapping out complex venues accurately and in real time ⏱️ was tough, and because the SDK runs in client-side JavaScript, it was hard for our back-end to receive and process its data. Additionally, getting the speech recognition to handle many different types of commands with consistent accuracy proved challenging, given all the variables that could go wrong.
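To make that last point concrete, here's a hedged sketch of the transcript-to-intent problem, assuming the `speech_recognition` Python package; the intent names and phrase lists are hypothetical, not our actual command set:

```python
# Sketch of why consistent command handling is hard: the transcript
# comes back from Google Speech as free-form text, so the same request
# can arrive phrased many different ways. Intents here are hypothetical.
import speech_recognition as sr

INTENTS = {
    "navigate": ["take me to", "navigate to", "go to", "find"],
    "describe": ["what's in front", "what is around", "describe"],
}

def transcribe(recognizer: sr.Recognizer, mic: sr.Microphone) -> str:
    # Capture one utterance and send it to Google's speech-to-text API.
    with mic as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio).lower()

def match_intent(transcript: str) -> str | None:
    # Naive keyword matching: brittle across phrasings, which is
    # exactly why consistent accuracy was hard to achieve.
    for intent, phrases in INTENTS.items():
        if any(phrase in transcript for phrase in phrases):
            return intent
    return None
```

"Where's the nearest washroom?" matches none of these phrases, so each new phrasing either grows the list or falls through, which is the consistency problem we kept running into.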
🏆 Accomplishments that we're proud of
We’re proud of creating a working solution that can guide visually impaired individuals through buildings 🏨 with real-time feedback. EyeMap successfully integrates voice navigation 🎙️ and object recognition 📦 to provide users with both orientation and awareness of their surroundings. Seeing the prototype perform in a complex environment is a huge accomplishment for our team.
📚 What we learned
We learned how crucial it is to consider accessibility in technology development. This project gave us insight into user-centered design 🎯, especially the importance of balancing features with simplicity for the end-user. We also deepened our technical knowledge in API development 💻, navigation SDKs, and how to make various technologies work together seamlessly.
🚀 What's next for EyeMap
We plan to refine EyeMap's voice recognition and improve its ability to navigate even more complex environments 🌍. Another important improvement is reducing the LLMs' response time to provide more responsive feedback. Future iterations might include more voice customization 🎙️ and support for multiple languages 🌐. We also hope to test EyeMap in real-world scenarios with visually impaired users to ensure it meets their needs effectively.
