Inspiration
Millions of visually impaired individuals face difficulties in performing everyday tasks that most people take for granted. Reading printed text, identifying objects around them, or understanding signs in public spaces can be challenging without assistance.
While assistive technologies exist, many of them are expensive or require specialized devices. However, smartphones are widely available and already contain powerful cameras and computing capabilities.
This inspired the idea behind SightSpeak — a simple AI-powered assistant that converts visual information into voice feedback, helping visually impaired individuals understand their surroundings more independently.
What it does
SightSpeak is an AI-based accessibility tool that helps visually impaired users interpret visual information through voice output.
The system analyzes images captured from a phone camera and provides spoken feedback describing what is detected.
Key features include:

1. **Text Reading Assistance** – Detects printed text in images and reads it aloud.
2. **Object Identification** – Recognizes common objects and describes them to the user.
3. **Voice-Based Interaction** – Users can interact using simple voice commands.
4. **Accessible Interface** – A minimal interface with large buttons and simple navigation.
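The voice-output side of these features maps naturally onto the browser's Web Speech API. A minimal sketch of how detections could be turned into speech follows; `describeDetections` and `speak` are illustrative names, not the project's actual code:

```javascript
// Compose a single spoken sentence from a list of detected object labels.
function describeDetections(detections) {
  if (detections.length === 0) return "No objects detected.";
  return "I can see: " + detections.join(", ") + ".";
}

// Browser-only: read the description aloud via the Web Speech API.
// Guarded so it is a no-op outside a browser environment.
function speak(text) {
  if (typeof window !== "undefined" && "speechSynthesis" in window) {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.rate = 0.9; // slightly slower speech for clarity
    window.speechSynthesis.speak(utterance);
  }
}

speak(describeDetections(["a chair", "a table"]));
```

Keeping the sentence-building logic separate from the `speechSynthesis` call also makes it easy to swap in other output channels, such as braille displays, later.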
The goal is to make everyday environments more accessible using affordable technology.
How we built it
The project was developed as a web-based prototype demonstrating how AI and accessibility tools can work together.
Technologies used:
- HTML, CSS, and JavaScript for the interface
- Web Speech API for voice input and speech output
- AI vision models for text detection and object recognition
- Figma / Canva for UI design and mockups
- Visual Studio Code for development
The prototype allows users to upload or capture an image, which is processed by AI to detect objects or text. The result is then converted into spoken feedback for the user.
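The upload → detect → speak flow described above might be wired together along these lines. The `/api/detect` endpoint and its `{ description }` response shape are assumptions for illustration, not the team's actual API:

```javascript
// Hypothetical wiring for the upload -> detect -> speak flow.
async function handleImage(file) {
  const form = new FormData();
  form.append("image", file);
  const res = await fetch("/api/detect", { method: "POST", body: form });
  const { description } = await res.json();
  // Some browsers cut off long utterances, so speak in smaller chunks.
  for (const chunk of chunkText(description, 200)) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(chunk));
  }
}

// Split text at sentence boundaries into chunks of at most maxLen characters.
function chunkText(text, maxLen) {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) || [text];
  const chunks = [];
  let current = "";
  for (const s of sentences) {
    if ((current + s).length > maxLen && current) {
      chunks.push(current.trim());
      current = "";
    }
    current += s;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}
```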
Challenges we ran into
One of the main challenges was designing a system that focuses on **accessibility and simplicity**. Since the primary users are visually impaired, the interface had to rely on **voice feedback rather than visual interaction**.
Another challenge was building a meaningful prototype within a limited hackathon timeframe while still demonstrating the core idea effectively.
Balancing **technical feasibility and usability** was an important part of the development process.
Accomplishments that we're proud of
- Designing a solution focused on accessibility and inclusion
- Demonstrating how AI vision technology can assist visually impaired users
- Building a working prototype within a short hackathon period
- Creating a concept that could be expanded into a real-world accessibility tool
What we learned
Through this project, we learned the importance of human-centered design in technology development.
We also explored how AI technologies such as computer vision and speech interaction can help solve real-world accessibility challenges.
This experience reinforced that innovation should focus not only on advanced technology but also on creating inclusive solutions that benefit everyone.
What's next for SightSpeak-AI Assistant for Independent Living
Future improvements could include:

- Real-time camera processing instead of image uploads
- Indoor navigation assistance
- Multi-language voice support
- Integration with wearable devices like smart glasses
- Offline functionality for better accessibility in rural areas
With further development, SightSpeak could evolve into a fully functional mobile application that supports visually impaired individuals in navigating everyday life more independently.
Built With
- ai-vision-api
- canva
- css3
- figma
- html5
- javascript
- visual-studio-code
- web-speech-api