πŸ’‘ Inspiration

The inspiration for VisionAI came from the everyday challenges faced by people with visual impairments, such as recognizing currency, identifying clothing colors, and reading menus. These tasks, often taken for granted, can become significant obstacles without constant assistance, undermining independence and equal access. VisionAI addresses these challenges with a hands-free, wearable device that empowers users to navigate their daily lives more independently and confidently.

❓ What it does

VisionAI is a camera application that acts as an assistive visual aid for the visually impaired. Users interact with it through hand gestures: when the 'Text to Speech' gesture is detected, VisionAI reads aloud text from sources such as books or signs, making printed information accessible through speech. Similarly, the 'Object Recognition' gesture prompts the app to announce the objects it sees, giving the user a verbal understanding of their surroundings. Together, these features support everyday navigation and decision-making, fostering greater independence.
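As a rough illustration of the flow described above (not the team's actual code), here is a minimal Python sketch of the gesture-to-action loop. The `classify_gesture`, `read_text_aloud`, and `describe_objects` functions are hypothetical placeholders for the real gesture model, OCR, and speech calls.

```python
import cv2  # pip install opencv-python

# Hypothetical gesture labels; the real model and its labels are assumptions.
GESTURE_TEXT_TO_SPEECH = "text_to_speech"
GESTURE_OBJECT_RECOGNITION = "object_recognition"

def classify_gesture(frame):
    """Placeholder for the hand-gesture classifier (e.g., a TensorFlow model).
    Returns one of the gesture labels above, or None if no gesture is seen."""
    return None  # stub: plug the real model in here

def read_text_aloud(frame):
    """Placeholder for the OCR + text-to-speech pipeline."""
    print("Reading detected text aloud...")

def describe_objects(frame):
    """Placeholder for the object-recognition + speech pipeline."""
    print("Announcing the objects in view...")

def main():
    cap = cv2.VideoCapture(0)  # default camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gesture = classify_gesture(frame)
            if gesture == GESTURE_TEXT_TO_SPEECH:
                read_text_aloud(frame)
            elif gesture == GESTURE_OBJECT_RECOGNITION:
                describe_objects(frame)
            cv2.imshow("VisionAI", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```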

πŸ› οΈ How we built it

App Development:

  • Frontend: React, JavaScript
  • Backend: Python, Flask, TensorFlow
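As one way the React frontend might hand camera frames to the Python backend, here is a minimal Flask sketch; the `/recognize` route and the `run_object_recognition` helper are illustrative assumptions, not the project's actual API.

```python
import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_object_recognition(frame):
    """Stub for the TensorFlow/Gemini inference step (assumed interface)."""
    return []

@app.route("/recognize", methods=["POST"])
def recognize():
    # Decode the uploaded camera frame into an OpenCV image.
    upload = request.files["frame"]
    data = np.frombuffer(upload.read(), dtype=np.uint8)
    frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
    if frame is None:
        return jsonify({"error": "could not decode frame"}), 400
    return jsonify({"labels": run_object_recognition(frame)})

if __name__ == "__main__":
    app.run(debug=True)
```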

AI Processing:

  • Computer Vision: OpenCV, Gemini
  • Data Handling: NumPy
  • Cloud Services: Google Cloud API
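For the text-reading path, the sketch below shows one way to call Google's Cloud Vision text detection; whether VisionAI uses this exact Google Cloud service (rather than, say, Gemini) is an assumption.

```python
from google.cloud import vision  # pip install google-cloud-vision; needs credentials

def detect_text(image_bytes: bytes) -> str:
    """Send an image to the Cloud Vision API and return any detected text.
    Assumed to approximate VisionAI's OCR step, not confirmed by the team."""
    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)
    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    annotations = response.text_annotations
    return annotations[0].description if annotations else ""

if __name__ == "__main__":
    with open("sign.jpg", "rb") as f:  # hypothetical sample image
        print(detect_text(f.read()))
```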

🚧 Challenges we ran into

While building VisionAI, one challenge was ensuring seamless integration across the multiple APIs we were using. The task was further complicated by outdated libraries we were constrained to use: their limited support kept us from adopting newer, more efficient alternatives. The inability to efficiently train new models with our existing tools was another unexpected setback.

πŸ† Accomplishments that we're proud of

The qualities of VisionAI we take the most pride in are its adaptability and the intuitive experience it offers users. The idea behind the device is practical and accessible, which is an achievement in itself. A major highlight is the gesture recognition feature, which lets users control the device with simple hand movements for tasks like reading text and identifying objects; it took a lot of effort to develop and works efficiently!

πŸŽ“ What we learned

Working with AI was exciting and broadened our view of its applications across different projects. We learned that while AI can enable innovative solutions, integrating it effectively takes considerable effort.

πŸš€ What's next for VisionAI

  • Conduct user studies and gather feedback for iterative improvements
  • Expand gestural command library
  • Enhance contextual awareness
  • Explore collaborations with wearable tech manufacturers
  • Integrate translation feature for global applications
