Inspiration

The idea for AccessMate was born from observing the daily challenges faced by individuals with visual and mobility impairments. Whether it's navigating a crowded train station or trying to locate a restroom in a public mall, these situations often lead to frustration, dependency, or even complete avoidance of public spaces.

What it does

How we built it

Python – For backend logic and object detection

OpenCV – For real-time image recognition

TensorFlow Lite – Lightweight ML models for edge devices

Google Maps API – For route guidance and geolocation

Arduino + Ultrasonic Sensors – For physical obstacle detection

Text-to-Speech (TTS) – For vocal instructions

React Native – To build a cross-platform mobile app
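As a rough sketch of how these pieces fit together, the core loop takes object detections and turns them into spoken guidance. The detection list below is stubbed, and `announce` is a hypothetical helper; in the app, detections would come from TensorFlow Lite inference on OpenCV frames, and the resulting string would be passed to the TTS engine:

```python
def announce(detections):
    """Turn raw detections into a short spoken instruction.

    Each detection is assumed to be a dict with a `label` and an
    estimated `distance_m` (illustrative schema, not the app's actual one).
    """
    if not detections:
        return "Path clear."
    # Announce the nearest obstacle first.
    nearest = min(detections, key=lambda d: d["distance_m"])
    return f"{nearest['label']} ahead, {nearest['distance_m']:.0f} meters."

# Example with stubbed detector output:
frame_detections = [
    {"label": "bench", "distance_m": 4.0},
    {"label": "person", "distance_m": 2.0},
]
print(announce(frame_detections))  # person ahead, 2 meters.
```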

Challenges we ran into

Sensor Noise and Accuracy: Getting reliable readings from ultrasonic sensors in noisy environments was a major challenge.
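One standard way to tame this kind of noise, sketched here for illustration, is a rolling median over the last few samples, which discards single-spike outliers far better than a mean (the window size of 5 is an arbitrary choice):

```python
from collections import deque
from statistics import median

class DistanceFilter:
    """Smooth ultrasonic readings with a rolling median (illustrative sketch)."""

    def __init__(self, window=5):
        # deque with maxlen automatically drops the oldest sample.
        self.samples = deque(maxlen=window)

    def update(self, reading_cm):
        self.samples.append(reading_cm)
        return median(self.samples)

f = DistanceFilter()
for r in [100, 102, 980, 101, 99]:  # 980 is a spurious echo
    smoothed = f.update(r)
print(smoothed)  # 101 — the spike is never returned as the distance
```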

Latency in Object Detection: Ensuring near real-time feedback without draining the mobile battery was difficult.
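A common latency/battery trade-off, shown here as a sketch with a stubbed detector rather than the app's real inference call, is to run the expensive detector only on every Nth frame and reuse the last result in between:

```python
class ThrottledDetector:
    """Run an expensive detector on every Nth frame only (illustrative)."""

    def __init__(self, detect_fn, every_n=3):
        self.detect_fn = detect_fn
        self.every_n = every_n
        self.frame_count = 0
        self.last_result = []

    def process(self, frame):
        # Only pay the inference cost on every Nth frame;
        # in between, reuse the cached detections.
        if self.frame_count % self.every_n == 0:
            self.last_result = self.detect_fn(frame)
        self.frame_count += 1
        return self.last_result

# Count how often the (fake) detector actually runs over 6 frames.
calls = []
def fake_detect(frame):
    calls.append(frame)
    return [frame]

d = ThrottledDetector(fake_detect, every_n=3)
for i in range(6):
    d.process(i)
print(len(calls))  # 2 — the detector ran on frames 0 and 3 only
```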

Voice Recognition Errors: In noisy places, speech inputs often failed, leading us to add keyword-based controls.
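The keyword-based fallback can be sketched as matching the transcript against a small command vocabulary instead of parsing free-form speech, which is far more robust when the transcript is partially garbled (the command set and names below are illustrative, not the app's actual vocabulary):

```python
# Map spoken keywords to app actions (illustrative command set).
COMMANDS = {
    "navigate": "start_navigation",
    "stop": "stop_navigation",
    "repeat": "repeat_last_instruction",
    "help": "read_help",
}

def parse_command(transcript):
    """Return the action for the first known keyword in a noisy transcript."""
    for word in transcript.lower().split():
        if word in COMMANDS:
            return COMMANDS[word]
    return None  # no recognizable keyword — ask the user to repeat

print(parse_command("uh please STOP now"))  # stop_navigation
print(parse_command("static noise"))        # None
```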

Inclusive UX Design: Making an interface that’s fully operable through voice, vibration, or screen-readers required several iterations.
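One pattern that could underpin such an interface, sketched here with hypothetical channel names, is routing every user-facing message through a single dispatcher so that whichever channels the user has enabled (speech, vibration, screen-reader text) all receive it:

```python
class FeedbackDispatcher:
    """Fan one message out to all enabled output channels (illustrative)."""

    def __init__(self):
        self.channels = {}

    def register(self, name, handler):
        self.channels[name] = handler

    def notify(self, message):
        # Every enabled channel gets the same message, so no
        # interaction depends on a single modality.
        for handler in self.channels.values():
            handler(message)

log = []
d = FeedbackDispatcher()
d.register("tts", lambda m: log.append(("speak", m)))
d.register("haptic", lambda m: log.append(("vibrate", m)))
d.notify("Obstacle ahead")
print(log)
```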

Accomplishments that we're proud of

What we learned

What's next for AccessMate

Built With
