Our eyes let us see the world and carry out everyday tasks with ease, something we often overlook and take for granted. Visually impaired people struggle with tasks many of us consider basic, such as shopping, and the challenges they face are hard to imagine. Globally, an estimated 285 million people of all ages are visually impaired, of whom 39 million are blind. This project is an initiative to help make their lives easier.

What it does

Mobile App

beMyEyes helps the visually impaired shop conveniently by:

  • Detecting and identifying objects around the user
  • Narrating the type of object identified
  • Reading labels on products using OCR (Optical Character Recognition) and narrating them to the user
  • Connecting the user with a nearby human assistant to help with shopping
  • It's compatible with both Android and iOS.
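The label-reading flow described above can be sketched roughly as follows. This is a minimal illustration, not the app's actual code: `VISION_API_KEY`, the helper names, and the wiring are assumptions. It builds a request for Google Cloud Vision's `images:annotate` REST endpoint with `TEXT_DETECTION`, extracts the recognized text, and narrates it with expo-speech.

```javascript
// Sketch of the label-reading flow: send a photo to Google Cloud Vision's
// TEXT_DETECTION feature, extract the recognized text, and narrate it.
// VISION_API_KEY and the surrounding wiring are placeholders, not the
// real app code.

// Build the JSON body for Vision's images:annotate REST endpoint.
function buildOcrRequest(base64Image) {
  return {
    requests: [
      {
        image: { content: base64Image },
        features: [{ type: 'TEXT_DETECTION' }],
      },
    ],
  };
}

// Pull the full recognized text out of a Vision API response.
function extractLabelText(visionResponse) {
  const annotation = visionResponse.responses?.[0]?.fullTextAnnotation;
  return annotation ? annotation.text.trim() : '';
}

// In the app this would run on a captured photo, then narrate the result:
//
//   const res = await fetch(
//     `https://vision.googleapis.com/v1/images:annotate?key=${VISION_API_KEY}`,
//     { method: 'POST', body: JSON.stringify(buildOcrRequest(photoBase64)) }
//   );
//   const labelText = extractLabelText(await res.json());
//   Speech.speak(labelText);  // expo-speech
```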

Website

The website enables volunteers to help visually impaired shoppers in their area. It collects their information on the sign-up page so that they can be notified via text when a volunteer opportunity arises nearby.
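The text notification could look something like the sketch below, assuming a Node backend with the Twilio SDK. The field names and message wording are illustrative assumptions, not the app's actual schema.

```javascript
// Compose the SMS sent to a signed-up volunteer when a shopper nearby
// requests help. Field names and wording are illustrative assumptions.
function composeVolunteerSms(volunteer, request) {
  return {
    to: volunteer.phone,
    body:
      `Hi ${volunteer.name}, a visually impaired shopper needs help at ` +
      `${request.storeName} (${request.address}). ` +
      `Reply YES to accept this volunteer opportunity.`,
  };
}

// With the Twilio Node SDK, sending it would look roughly like:
//
//   const client = require('twilio')(ACCOUNT_SID, AUTH_TOKEN);
//   const sms = composeVolunteerSms(volunteer, request);
//   await client.messages.create({ from: TWILIO_NUMBER, ...sms });
```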

How we built it

Mobile App
  • Twilio - Used for messaging between users, helpers, and the system
  • Google Cloud Vision - Used for ML-based object recognition
  • Clarifai - Used for identifying food items
  • Google Cloud Vision OCR - Used for text recognition on labels
  • React Native - Used for building a cross-platform app
  • Expo-Speech - Used for narrating results and providing instructions to the user
  • Adobe Illustrator - Used for designing the logo, assets, and UI for the app
  • MongoDB - Used for storing user, helper, and match data
Website
  • Maps JavaScript API, Geocoding API, and Places API - Used for the embedded interactive Google Map and the auto-fill feature for the address field in the sign-up form
  • Semantic UI - Used to create the button on the Join page
  • Figma - Used to create the website prototype
  • Languages - HTML, CSS, and JavaScript were used to build the website
  • API deployment and testing - A SashiDo app was created and used for testing the various API endpoints used in the system
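The write-up doesn't describe how a shopper gets paired with a helper, so the following is only one plausible sketch: pick the nearest signed-up volunteer by great-circle (Haversine) distance from the coordinates collected at sign-up. All names and the distance cutoff are assumptions.

```javascript
// Hypothetical sketch of matching a shopper to the nearest volunteer
// using the Haversine (great-circle) distance between coordinates.
// The matching logic is an assumption; the real system may differ.

const EARTH_RADIUS_KM = 6371;

function haversineKm(a, b) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(h));
}

// Pick the closest volunteer within maxKm, or null if none qualifies.
function nearestVolunteer(shopper, volunteers, maxKm = 10) {
  let best = null;
  let bestDist = Infinity;
  for (const v of volunteers) {
    const d = haversineKm(shopper, v.location);
    if (d <= maxKm && d < bestDist) {
      best = v;
      bestDist = d;
    }
  }
  return best;
}
```

In a real deployment this nearest-neighbor query would more likely be a geospatial index lookup (for example MongoDB's `$near` on a `2dsphere` index) rather than a linear scan.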

Challenges we ran into

First of all, our teammates are in different time zones, so maintaining real-time communication was a challenge. For some of us, this was our first experience with React Native, image recognition, image captioning, and working with APIs. Still, we managed to pull it off together.

Accomplishments that we are proud of

Overcoming the above challenges was our biggest accomplishment.

What we learned

Through this project we had the opportunity to learn React Native, object detection, OCR, image captioning, text generation, and text-to-speech conversion.

What's next for BeMyEye

In the future, we plan to integrate the app with shopping malls and markets for better reach, so that the product can more easily help people in need. We could also integrate it with sensors and spectral analysis for obstacle detection, which would help visually impaired users walk with minimal risk.
