Inspiration
PathFinder was inspired by the need to make navigation safer and more accessible for visually impaired individuals. Current mobility aids lack real-time object detection and contextual guidance, which are crucial for navigating dynamic environments. PathFinder aims to fill this gap by combining computer vision, AI, and wearable technology to enhance spatial awareness and promote independence.
What it does
PathFinder is a wearable AI navigation assistant that detects objects in real time and provides distance alerts using ultrasonic and visual sensing. Through conversational AI, PathFinder describes nearby objects and offers guidance on safe movement. It alerts users to people, obstacles, and other potential hazards while offering directional suggestions for avoiding them.
How I built it
PathFinder brings together various technologies: 1) YOLO (You Only Look Once) for real-time object detection via a webcam, identifying potential obstacles. 2) Ultrasonic Sensor for precise, short-range distance measurement, triggering alerts based on proximity. 3) Anthropic API for generating context-aware descriptions of surroundings and guidance, converting object data into conversational audio feedback through text-to-speech. 4) Python and microcontroller programming to coordinate sensor data, process detection models, and deliver real-time user alerts.
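The coordination loop described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: it assumes an HC-SR04-style ultrasonic sensor that reports round-trip echo time in microseconds, the alert thresholds are placeholder values, and the object list stands in for real YOLO detections.

```python
# Minimal sketch of the PathFinder sensing loop (illustrative only).
# Assumes an HC-SR04-style ultrasonic sensor reporting echo time in
# microseconds; hardware I/O and YOLO inference are stubbed out.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound at ~20 °C

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert round-trip echo time (µs) to one-way distance (cm)."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def proximity_alert(distance_cm: float,
                    warn_cm: float = 150.0,
                    danger_cm: float = 50.0) -> str:
    """Map a distance reading to an alert level (thresholds are assumed)."""
    if distance_cm <= danger_cm:
        return "danger"
    if distance_cm <= warn_cm:
        return "warning"
    return "clear"

def describe_frame(detections: list[str], distance_cm: float) -> str:
    """Combine (stubbed) YOLO labels with the ultrasonic reading into a
    short status string that could be handed to a language model / TTS."""
    level = proximity_alert(distance_cm)
    objects = ", ".join(detections) if detections else "no obstacles"
    return f"{level}: {objects} at ~{distance_cm:.0f} cm"

# Example with a simulated 2915 µs echo (~50 cm) and two detections:
reading = echo_to_distance_cm(2915)
print(describe_frame(["person", "chair"], reading))
```

In the real device, the stubbed detection list would come from a YOLO model running on webcam frames, and the status string would be passed to the Anthropic API to generate conversational guidance before text-to-speech playback.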
Challenges I ran into
1) Data Collection: Finding or generating a labeled image dataset with relevant objects for training the detection model required significant effort. 2) Real-Time Processing: Achieving low-latency response times while integrating object detection and distance measurement was challenging. 3) Distance and Object Relevance: Calculating relevance scores for detected objects based on distance and type required careful tuning to ensure alerts were helpful without being intrusive. 4) API Integration: Crafting prompts and parsing API responses to produce natural, actionable guidance took several rounds of iteration.
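The relevance-scoring challenge in point 3 can be illustrated with a toy scoring function: weight each object class by how hazardous it is, scale by inverse distance, and announce only detections above a threshold. The class weights, threshold, and clamping value below are illustrative assumptions, not the tuned values used in the project.

```python
# Toy relevance score: hazard weight of the object class divided by
# distance, so nearer and more hazardous objects rank higher.
# The class weights and threshold are illustrative assumptions.

HAZARD_WEIGHT = {
    "person": 1.0,
    "car": 1.0,
    "bicycle": 0.8,
    "chair": 0.5,
    "backpack": 0.2,
}

def relevance(label: str, distance_m: float, min_distance_m: float = 0.3) -> float:
    """Score a detection: hazard weight scaled by inverse distance.
    Distances are clamped to avoid division blow-up at very close range."""
    weight = HAZARD_WEIGHT.get(label, 0.3)  # default for unknown classes
    return weight / max(distance_m, min_distance_m)

def top_alerts(detections: list[tuple[str, float]], threshold: float = 0.4):
    """Keep only detections relevant enough to announce, most urgent first."""
    scored = [(label, d, relevance(label, d)) for label, d in detections]
    scored.sort(key=lambda item: item[2], reverse=True)
    return [(label, d) for label, d, score in scored if score >= threshold]

# A distant backpack is filtered out; the nearby person ranks first.
print(top_alerts([("person", 1.5), ("chair", 0.8), ("backpack", 4.0)]))
```

Tuning here means adjusting the weights and threshold until low-stakes, far-away objects stop triggering speech while nearby people and obstacles always do.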
Accomplishments that I'm proud of
1) Successfully integrating object detection with ultrasonic distance alerts to provide actionable, real-time feedback. 2) Implementing an intuitive conversational AI system that translates complex sensor data into user-friendly guidance for visually impaired users. 3) Designing a wearable system that empowers users with enhanced awareness and safety in various environments.
What I learned
1) AI and Sensor Fusion: Gained practical experience in object detection, real-time sensor integration, and data handling. 2) User-Centered Design: Emphasized the importance of designing accessible technology with clear, relevant information for end-users. 3) API Optimization: Learned to work with conversational APIs to yield contextually aware responses that are helpful in real-world applications.
What's next for PathFinder
1) Enhanced Object Relevance: Incorporate additional machine learning techniques to improve object relevance and prioritize essential alerts. 2) Extended Sensor Compatibility: Explore other cost-effective sensors, like stereo cameras or IR sensors, to expand depth perception capabilities. 3) User Testing and Feedback: Conduct trials with visually impaired users to gather insights and refine the user experience. 4) Mobile Companion App: Develop a mobile app to log data, provide additional guidance, and allow users to adjust settings. 5) Edge Processing Optimization: Enhance processing speed and battery life to make the wearable device more efficient for daily use.