Inspiration
While researching online, we learned that many people with disabilities face daily challenges such as navigating safely, accessing information about their surroundings, and communicating with others.
We wanted to build a solution that uses mobile technology and AI to help people with disabilities live more independently and participate more easily in everyday community life.
What it does
We developed a cross-platform mobile app (Android & iOS) with several accessibility features:
- Obstacle Detection: Uses the phone camera to detect obstacles and provide voice feedback for visually impaired users.
- Object Recognition: Identifies objects through the camera and lets users ask the app questions about the object or the information on its packaging.
- Sign Language Recognition: Recognizes hand gestures and converts sign language into text and speech to assist communication.
- Dangerous Sound Detection: Detects loud or dangerous sounds (e.g., car horns or fire alarms) and alerts hearing-impaired users through vibration and visual notifications (see the alert sketch after this list).
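To illustrate how the dangerous-sound alert path can be wired up in Flutter, here is a minimal sketch. The `SoundLabel` type, the confidence threshold, and the message strings are hypothetical stand-ins rather than our exact implementation; only the pattern (haptic pulse plus a high-contrast visual notice) reflects the feature described above.

```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

/// Hypothetical label type for the sound classifier's output.
enum SoundLabel { carHorn, fireAlarm, other }

/// Alert a hearing-impaired user through vibration and a visual banner.
/// Assumes [context] sits under a Scaffold so the SnackBar can be shown.
void handleSoundDetection(
    BuildContext context, SoundLabel label, double confidence) {
  const threshold = 0.8; // assumed confidence cutoff, not a tuned value
  if (label == SoundLabel.other || confidence < threshold) return;

  // Strong haptic pulse for users who cannot hear the sound itself.
  HapticFeedback.heavyImpact();

  // High-contrast visual notification as a second channel.
  ScaffoldMessenger.of(context).showSnackBar(
    SnackBar(
      backgroundColor: Colors.red,
      duration: const Duration(seconds: 5),
      content: Text(
        label == SoundLabel.fireAlarm
            ? 'Fire alarm detected nearby!'
            : 'Car horn detected nearby!',
        style: const TextStyle(fontSize: 20, fontWeight: FontWeight.bold),
      ),
    ),
  );
}
```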
How we built it
We built the app using Flutter and Dart, allowing one codebase to run on both Android and iOS. Key technologies include:
- Computer vision and deep learning models for obstacle and object detection (see the on-device inference sketch after this list)
- A trained sign language recognition model
- Audio signal analysis for detecting dangerous environmental sounds
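The writeup does not name a specific inference library, but assuming an on-device TensorFlow Lite model loaded through the `tflite_flutter` package, a single frame-classification step could look roughly like the sketch below. The asset path, input size, and label list are placeholders, and in a real app the interpreter would be created once and reused across camera frames rather than per call.

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

// Assumed model details -- placeholders, not our exact configuration.
const modelAsset = 'assets/obstacle_model.tflite';
const labels = ['clear', 'obstacle_left', 'obstacle_center', 'obstacle_right'];

/// Classify one preprocessed camera frame (normalized RGB values,
/// shaped height x width x 3) and return the best-scoring label.
Future<String> classifyFrame(List<List<List<double>>> rgbFrame) async {
  final interpreter = await Interpreter.fromAsset(modelAsset);

  // Input tensor shaped [1, H, W, 3]; output tensor shaped [1, numLabels].
  final input = [rgbFrame];
  final output = [List.filled(labels.length, 0.0)];

  interpreter.run(input, output);
  interpreter.close();

  // Pick the highest-scoring class.
  final scores = output[0];
  var best = 0;
  for (var i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return labels[best];
}
```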
Challenges we ran into
- Learning Flutter and Dart quickly since we had limited mobile development experience.
- Training and tuning machine learning models for accurate obstacle and sign language recognition.
- Designing a user interface that works for different types of disabilities while staying simple, intuitive, and accessible through voice and vibration feedback (a minimal feedback sketch follows this list).
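To show the multi-modal feedback pattern we aimed for, here is a minimal sketch of an announcement helper that reaches users through both speech and vibration. The `flutter_tts` plugin is one common text-to-speech option and is assumed here for illustration, not confirmed as part of our stack.

```dart
import 'package:flutter/services.dart';
import 'package:flutter_tts/flutter_tts.dart';

final FlutterTts _tts = FlutterTts();

/// Speak a message for visually impaired users and vibrate for
/// hearing-impaired users, so every alert reaches at least one channel.
Future<void> announce(String message) async {
  HapticFeedback.vibrate(); // fire-and-forget haptic cue
  await _tts.setLanguage('en-US');
  await _tts.setSpeechRate(0.5); // slower rate for clarity
  await _tts.speak(message);
}
```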
Accomplishments that we're proud of
- Successfully trained and integrated obstacle detection and sign language recognition models.
- Built a working cross-platform mobile prototype within a limited time.
- Designed accessible interaction modes, including voice feedback for visually impaired users and vibration alerts for hearing-impaired users.
- Worked efficiently as a team to turn an idea into a functional product.
What we learned
- How to train and debug machine learning models
- How to quickly learn new frameworks like Flutter
- How to define product features under time constraints
- How to collaborate effectively in a team
What's next for SenseBridge
- Expand sign language recognition from alphabet signs to common words and phrases.
- Improve the accuracy of dangerous sound detection with more training data.
- Optimize obstacle detection speed and robustness in complex environments.
- Expand the object recognition database to support more daily-use items.