Inspiration
Millions of visually impaired individuals struggle to navigate their surroundings independently. Existing solutions often require expensive hardware or are not user-friendly. EchoVision was inspired by the need for an intuitive assistant that empowers visually impaired users to interact with their environment using just a smartphone.
What it does
EchoVision (EV) is an application designed to assist visually impaired users by providing voice-controlled navigation and real-time object recognition. Using AI-powered computer vision and voice interaction, EchoVision enhances accessibility by describing surroundings, identifying objects, and guiding users through their environment.
How we built it
We used React Native (Expo) for cross-platform mobile development, ensuring accessibility and ease of use. Our key technologies include:
- Frontend: Figma, Tailwind CSS, shadcn/ui, Radix UI, Lucide React
- Text-to-Speech: Chrome's built-in speech synthesis API (part of the Web Speech API; see the sketch after this list)
- Object Detection: Meta's Detectron2, DeepSeek API
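
Here is a minimal sketch of the speech output, assuming a browser or Expo web context where the Web Speech API is available. The `speak` helper name is ours for illustration, not the app's actual code:

```typescript
// Minimal text-to-speech helper using the browser's Web Speech API.
// Assumes a web runtime (e.g., Expo web) where window.speechSynthesis exists.
export function speak(text: string, rate = 1.0): void {
  if (!("speechSynthesis" in window)) {
    console.warn("Speech synthesis is not supported in this environment.");
    return;
  }
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = rate;    // speaking speed (1 is normal)
  utterance.lang = "en-US"; // voice language
  window.speechSynthesis.cancel(); // stop any utterance already playing
  window.speechSynthesis.speak(utterance);
}
```

Cancelling the current utterance before speaking keeps queued navigation prompts from piling up behind one another.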
The app opens with a voice-guided introduction and lets users navigate menus with simple voice commands. A camera scan feature identifies objects in view and relays the results via speech.
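
To illustrate how the scan step could fit together, here is a hedged sketch that uploads a camera frame to a detection backend and speaks the labels that come back. The `/detect` endpoint URL, its response shape, and the reuse of the `speak` helper above are all assumptions for illustration, not the project's actual API:

```typescript
// Hypothetical scan flow: send a camera frame to a detection backend
// (e.g., a server wrapping Detectron2), then read the labels aloud
// with the speak() helper sketched earlier.
interface Detection {
  label: string;      // e.g., "chair"
  confidence: number; // 0-1 score from the model
}

async function scanAndDescribe(imageUri: string): Promise<void> {
  const form = new FormData();
  // React Native-style file object; `as any` satisfies the web FormData typing.
  form.append("image", { uri: imageUri, name: "frame.jpg", type: "image/jpeg" } as any);

  // Assumed endpoint for illustration only.
  const response = await fetch("https://example.com/detect", {
    method: "POST",
    body: form,
  });
  const detections: Detection[] = await response.json();

  if (detections.length === 0) {
    speak("No objects detected.");
    return;
  }
  const summary = detections
    .filter((d) => d.confidence > 0.5) // keep only confident detections
    .map((d) => d.label)
    .join(", ");
  speak(`I can see: ${summary}`);
}
```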
Challenges we ran into
- Building an AI-powered accessibility app within the hackathon timeframe was challenging.
- We had some difficulty with team cohesion and dividing up tasks.
Accomplishments that we're proud of
We collaborated effectively under pressure and delivered a working solution within the tight timeframe.
What we learned
The value of clear team communication and testing.
What's next for 27-EchoVision
Improving AI accuracy by enhancing speech recognition and object detection for better real-world performance, and adding a voice control feature.
Built With
- deepseekapi
- detectron2
- figma
- lucidereact
- radix
- shadcn
- synthesizerapi
- tailwindcss