In the current scenario, it is important for people to be mindful of their surroundings: keep your distance and always wear a mask! We realized how inconvenient this must be for the visually challenged. So our vision was to develop a sensory enhancer: a video-screening application that not only alerts people who are not wearing a mask but also judges how far the nearest person is from the user. To make this more user-friendly, we wanted to convert the alerts into audio messages!

What it does

Our application takes in images and generates an audio message telling the user how close the nearest person is. If someone is 130 feet or closer, it also announces whether that person is wearing a mask.
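As a sketch of the alert logic described above (the function name and the exact message wording are illustrative assumptions; only the 130-foot threshold comes from our app's behavior):

```python
def build_alert(distance_ft, wearing_mask):
    """Compose the audio alert text for the nearest detected person.

    distance_ft: estimated distance to the nearest person, in feet.
    wearing_mask: True/False from the mask classifier, or None if unknown.
    """
    message = f"The nearest person is about {distance_ft:.0f} feet away."
    # Only report mask status once the person is within the 130-foot range.
    if distance_ft <= 130 and wearing_mask is not None:
        if wearing_mask:
            message += " They are wearing a mask."
        else:
            message += " Warning: they are not wearing a mask!"
    return message
```

The returned string would then be handed to Google Text-to-Speech to produce the spoken alert.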

How we built it

We used the following machine learning frameworks and tools:

  • OpenCV - to read in images and manipulate them
  • MobileNetV2 - a lightweight convolutional network used as our base model
  • TensorFlow / Keras - a machine learning library to build, train, test, and validate our model
  • Google Text-to-Speech - to convert alerts to audio
  • Matplotlib - to modify the images and display them
  • Android - backend connectivity to the model / ML application
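The distance-judging step can be sketched with the standard pinhole-camera relation, using the height of a person's bounding box from the detector. The focal length and average person height below are illustrative assumptions, not values from our code:

```python
def estimate_distance_ft(bbox_height_px, focal_length_px=800.0, person_height_ft=5.5):
    """Estimate how far a detected person is, via the pinhole-camera model.

    bbox_height_px: height in pixels of the person's bounding box
                    (e.g. from an OpenCV detector).
    focal_length_px: camera focal length in pixels (illustrative value;
                     in practice obtained by camera calibration).
    person_height_ft: assumed real-world height of a person, in feet.
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    # distance = (real height * focal length) / apparent height in pixels
    return person_height_ft * focal_length_px / bbox_height_px

def nearest_person_ft(bbox_heights_px):
    """The nearest person corresponds to the tallest bounding box."""
    return min(estimate_distance_ft(h) for h in bbox_heights_px)
```

A larger bounding box means a closer person, so taking the minimum estimated distance picks out the nearest one.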

Challenges we ran into

  • Data collection - finding a good data set for any model is the most important step!
  • Best ML model - researching the possible models and selecting the one with the best accuracy.
  • Google Colab wasn't able to accommodate the huge dataset our application relies on, so we used the power of Google Drive to store our train, test, and validation data sets.
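A minimal sketch of how a dataset can be divided into the train, validation, and test splits mentioned above (the 80/10/10 ratio and function name are illustrative assumptions, not our exact setup):

```python
import random

def split_dataset(filenames, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle filenames and split them into train/validation/test lists.

    The remaining fraction (1 - train_frac - val_frac) becomes the test set.
    The resulting file lists can then be copied to separate folders, e.g.
    on Google Drive, so Colab never has to hold the whole dataset at once.
    """
    files = list(filenames)
    random.Random(seed).shuffle(files)  # deterministic shuffle for reproducibility
    n_train = int(len(files) * train_frac)
    n_val = int(len(files) * val_frac)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])
```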

Accomplishments that we're proud of

We created a potentially life-saving application in a short span of time, and our app is inclusive of the visually impaired.

What we learned

Using Google Colab as a team, applying machine learning models, connecting our model to the Android platform, and making the best use of the open-source resources available!

What's next for Corona Lens

Developing a mobile application and connecting it to a video sensor. Our application could also be extended to video surveillance in public places.
