Inspiration: PathSense was born from a desire to make the world more accessible for the visually impaired. Our journey began with a simple question: "How can we leverage technology to bridge the gap between vision and navigation?" We were inspired by the challenges faced by visually impaired individuals and driven by the potential of AI and sensory technology to transform their experience of the world.

What it does: PathSense revolutionizes navigation for the visually impaired. It uses advanced object detection to identify obstacles and points of interest, letting users select and track objects in their environment. The system provides detailed descriptions of, and navigational guidance toward, selected objects, integrating with a controller interface for intuitive, responsive feedback. PathSense also includes face recognition and emotion detection to enrich social interactions. At its core is a voice feedback system that offers a friendly, interactive experience, complemented by nuanced haptic feedback through the controller that sharpens users' perception of their surroundings.
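The select-and-announce interaction described above can be sketched in a few lines. This is a minimal illustration, not the actual PathSense code: the detections are hard-coded stand-ins for YOLO output, and `announce` is a hypothetical placeholder for the ElevenLabs voice call.

```python
def announce(label):
    # Placeholder for the ElevenLabs text-to-speech call; here we just
    # build the phrase the voice system would speak.
    return f"Selected: {label}"

def cycle_selection(detections, current_index, step=1):
    """Move the selection forward/backward through the detected objects,
    wrapping around at either end (a controller button press per step)."""
    return (current_index + step) % len(detections)

# Stand-in detections; in the real system these come from the object detector.
detections = ["door", "chair", "person"]
idx = 0
idx = cycle_selection(detections, idx)  # user presses "next" on the controller
print(announce(detections[idx]))        # Selected: chair
```

Once an object is selected this way, the tracker and haptic loop take over, keeping the user oriented toward it.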

How we built it: We developed PathSense with a blend of cutting-edge technologies. Object detection and tracking are powered by the YOLO algorithm, while navigation combines MiDaS depth estimation with pathfinding algorithms. Face recognition and emotion inference use the DeepFace framework. The user interface is built on PyQt, offering a robust, interactive experience. The system is controlled via a PS Move controller programmed to deliver real-time haptic feedback, and we integrated ElevenLabs' voice technology for dynamic, natural voice feedback.
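How depth estimation drives the haptics can be sketched as follows. This is a hedged sketch, not the shipped pipeline: the depth map is a hard-coded stand-in for MiDaS output (MiDaS produces relative inverse depth, where larger values mean closer), the bounding box stands in for a YOLO detection, and `rumble_intensity` is a hypothetical helper mapping closeness to the PS Move's 0-255 rumble range.

```python
from statistics import median

def bbox_depth(depth_map, box):
    """Median inverse-depth inside an (x1, y1, x2, y2) bounding box,
    which is more robust to outlier pixels than the mean."""
    x1, y1, x2, y2 = box
    values = [depth_map[y][x] for y in range(y1, y2) for x in range(x1, x2)]
    return median(values)

def rumble_intensity(inv_depth, max_inv_depth=10.0):
    """Map inverse depth (larger = closer) to a 0-255 rumble value:
    the closer the tracked object, the stronger the vibration."""
    scaled = min(inv_depth / max_inv_depth, 1.0)
    return int(round(scaled * 255))

# Stand-in inverse-depth map: the tracked object's region reads ~8.0 (close),
# the background ~1.0 (far).
depth_map = [[1.0] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        depth_map[y][x] = 8.0

detection = {"label": "door", "box": (2, 2, 6, 6)}  # YOLO-style detection
inv_depth = bbox_depth(depth_map, detection["box"])
print(detection["label"], rumble_intensity(inv_depth))  # door 204
```

Taking the median depth over the detection box keeps the rumble stable when a few depth pixels are noisy, which matters for feedback the user feels continuously.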

Challenges we ran into: Our journey was not without its challenges. Perfecting real-time object detection and tracking for dynamic environments was a complex task. Implementing intuitive navigation that adapts to various obstacles and ensuring seamless integration of the controller's haptic feedback were significant hurdles. Balancing technical functionality with user-friendly design was critical in making PathSense accessible and effective for the visually impaired.

Accomplishments that we're proud of: We are proud of creating a tool that enhances independence and safety for visually impaired individuals. The successful integration of various technologies into a cohesive and intuitive system stands as a testament to our team's dedication. The positive impact PathSense can have on the daily lives of many is our greatest accomplishment.

What we learned: Throughout this project, we learned the importance of empathetic design in technology. Understanding the challenges faced by the visually impaired was crucial in shaping PathSense. We gained insights into integrating diverse technologies like AI, computer vision, and haptic feedback into a user-centered solution.

What's next for PathSense: The future of PathSense is bright and full of potential. We plan to enhance our AI algorithms for more accurate object detection and smoother navigation. Integrating more natural language processing for better voice interaction and exploring additional sensory feedback mechanisms are on our roadmap. We envision PathSense evolving into a comprehensive solution, not just for navigation but as a bridge connecting the visually impaired with the digital world.
