Inspiration

There are visually impaired people walking around us every day, but do we really help them, or do we just pass by? Life for a blind person is no easy task. We built this hackathon project to help the blind: someone close to one of our team members is visually impaired, and a close friend of mine is completely blind.

We will never fully understand the struggles blind people go through, but we do know how they are looked down upon and sometimes mocked, even in malls and supermarkets.

What it does

We created this project to help visually impaired people while they shop for food and other products, or simply to tell them what kind of item is in front of them. Anyone can use this handy tool and help us make it better.

How we built it

Created a Linux Compute Engine instance on GCP. (We used Ubuntu since it ships with Python by default.)

Scraped high-quality images from Google Image Search, filtering them to improve the efficiency of the model being trained. In our case, we achieved an accuracy of 80.7%.
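As a rough sketch of this step, one way to bulk-download training images into one folder per food category is the third-party `icrawler` package; the category names and sizes below are illustrative, and our actual scraping script is not shown here.

```python
# Hypothetical bulk downloader for training images, one folder per food category.
# Uses the third-party `icrawler` package (pip install icrawler); this is not
# the exact scraper used in the project.
from icrawler.builtin import GoogleImageCrawler

FOOD_CATEGORIES = ["apple", "banana", "pizza", "samosa"]  # 40 categories in the real dataset

for food in FOOD_CATEGORIES:
    crawler = GoogleImageCrawler(storage={"root_dir": f"dataset/{food}"})
    # Grab roughly 100 images per class and skip thumbnails that are too small.
    crawler.crawl(keyword=f"{food} food", max_num=100, min_size=(299, 299))
```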

The model is a CNN (Convolutional Neural Network) based on Google's Inception-v3, trained on a GPU instance. It performs image classification over 40 diverse food items, using about 100 images per category with a 10% evaluation split, a 22-level network, and a maximum of 8,000 training cycles.
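The sketch below shows the same idea as a minimal transfer-learning setup in `tf.keras`, re-using Inception-v3 with a new 40-class head. Only the class count and the 299 x 299 input come from the project description; the dataset path, optimizer, batch size, and epoch count are assumptions, and the original project used TensorFlow's Inception-v3 retraining flow rather than this exact code.

```python
# Minimal transfer-learning sketch with tf.keras; hyperparameters other than
# the 40 classes and the 299x299 input are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 40          # 40 food categories
IMG_SIZE = (299, 299)     # Inception-v3 input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.1, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.1, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         pooling="avg")
base.trainable = False    # only the new classification head is trained

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```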

Imported the model files generated by TensorFlow on GCP into an Android application using the TensorFlow SDK. Used the NDK so that the detection code runs in C++, which computes faster than Java and, with GPU support, keeps inference lag-free.
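For the Python side of this hand-off, a present-day equivalent of exporting the trained model for a phone is a TensorFlow Lite conversion, sketched below. This is not the project's actual flow (which shipped TensorFlow model files consumed via the TensorFlow Android SDK and NDK), and the model file path is hypothetical.

```python
# Illustrative export step only: converting a trained Keras model to TensorFlow
# Lite for on-device inference. The original project used the TensorFlow
# Android SDK / NDK path instead of TFLite.
import tensorflow as tf

model = tf.keras.models.load_model("sensefood_inception.h5")  # hypothetical path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # smaller binary for phones
tflite_model = converter.convert()

with open("sensefood.tflite", "wb") as f:
    f.write(tflite_model)
```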

Each 720p video frame captured by the phone's camera is converted to a 299 x 299 NumPy array. A confidence threshold of 51% is applied, and to avoid confusion between two categories, the top prediction must lead the runner-up by a minimum margin of 10% before a decision is made.
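A minimal sketch of that decision rule is below: resize the frame to 299 x 299, then accept the top label only if its confidence exceeds 51% and it leads the second-best class by at least 10 percentage points. The function names are stand-ins; the real app implements this in its NDK/C++ code path.

```python
# Sketch of the per-frame preprocessing and decision rule described above.
import numpy as np
from PIL import Image

CONFIDENCE_THRESHOLD = 0.51   # top class must exceed 51%
MIN_MARGIN = 0.10             # and lead the runner-up by at least 10 points

def preprocess(frame: Image.Image) -> np.ndarray:
    """Resize a captured 720p frame to the 299x299 input the model expects."""
    resized = frame.resize((299, 299))
    return np.asarray(resized, dtype=np.float32)[np.newaxis, ...]

def decide(probabilities: np.ndarray, labels: list[str]) -> str | None:
    """Return a label only when the prediction is unambiguous, else None."""
    order = np.argsort(probabilities)[::-1]
    top, runner_up = probabilities[order[0]], probabilities[order[1]]
    if top >= CONFIDENCE_THRESHOLD and (top - runner_up) >= MIN_MARGIN:
        return labels[order[0]]
    return None  # stay silent rather than guess between two close categories
```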

Used a TTS (Text-to-Speech) service to tell the user which food item has been detected, with JNI used extensively for this purpose.

Locale support for different languages is provided by calling Google's Translate API on GCP at initialization time only, so the machine-learned application does not depend on a network connection afterwards.
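One way to implement that one-time localization pass is sketched below: translate the 40 label strings once while online via the Google Cloud Translation API and cache them, so later detections can be announced in the user's locale offline. The function name, caching format, and example languages are assumptions, not the app's actual code.

```python
# Hypothetical one-time localization pass: translate the model's label strings
# once (while online) and cache them so detections can be spoken offline later.
# Requires the google-cloud-translate package and GCP credentials.
import json
from google.cloud import translate_v2 as translate

def cache_labels(labels: list[str], target_language: str, path: str) -> None:
    client = translate.Client()
    translated = {
        label: client.translate(label, target_language=target_language)["translatedText"]
        for label in labels
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(translated, f, ensure_ascii=False)

# Example: cache Hindi names for the food labels during first launch.
# cache_labels(["apple", "banana", "pizza"], "hi", "labels_hi.json")
```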

Challenges we ran into

Not sleeping for three days doesn't seem like much when your code keeps crashing. And when you find out it is because of a corrupt file you are parsing for machine learning, there is not much you can do other than shake your head.

Accomplishments that we're proud of

I'm proud that we didn't try to incorporate any of the sponsor companies' APIs just to be eligible for their prizes, because that would have added unnecessary complexity and distracted us from the main goal.

What we learned

We are cybersecurity graduate students. TensorFlow was completely new to us, but its great documentation makes it really simple to pick up. We were never going to leave until we completed this project.

What's next for SenseFood

We do not create projects at hackathons to win; we create them to build actual products. Every product I have built at a hackathon has been put to use somewhere or other, and I published this one on the Google Play Store at 2:30 AM. We are going to take this fully functional project much further. The response we have received from across the planet just after publishing is heartwarming, and it has made our efforts worth it.

https://play.google.com/store/apps/details?id=com.codenza.sherlock

Built With

Android, Java, C++ (NDK/JNI), Python, TensorFlow, Google Cloud Platform, Google Translate API
