As people who are extremely passionate about food, we realized that many people are curious about foods from around the world but cannot actually identify them. That is why we built this app: so that anyone can easily identify different food items and, in the process, be exposed to different cultures and cuisines from around the world.
What it does
Our iOS app takes in an image of a food and, with some ML magic, outputs the name of the dish by classifying it into one of 70 food categories. Try it out for yourself!
How we built it
We built the model in Python using Keras and TensorFlow, with pre-trained layers from InceptionResNetV2. The model was trained on Food-101, a dataset of labelled food images from Kaggle with 100 images in each of 101 categories. Training was done on Colab with a GPU runtime. We applied augmentations to the images via Keras's ImageDataGenerator to extend the dataset. The model was then converted with coremltools so that it could be integrated into the app, which we built and deployed using Xcode and Swift.
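The training setup can be sketched roughly as follows. This is a minimal illustration of the approach (frozen InceptionResNetV2 base, small classification head, ImageDataGenerator augmentation), not our literal code: the head architecture, augmentation parameters, and directory paths are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 70  # trimmed down from the original 101 Food-101 categories

# Pre-trained InceptionResNetV2 base, frozen so only the head is trained.
# (weights=None here to keep the sketch self-contained; in practice weights="imagenet".)
base = InceptionResNetV2(include_top=False, weights=None,
                         input_shape=(299, 299, 3), pooling="avg")
base.trainable = False

# Small classification head on top of the pooled base features.
model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation via Keras's ImageDataGenerator to stretch the small dataset;
# the specific transform ranges are illustrative.
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
)
# train_flow = train_gen.flow_from_directory("food101/train",
#                                            target_size=(299, 299),
#                                            class_mode="categorical")
# model.fit(train_flow, epochs=...)

# Conversion for the iOS app (run separately; requires coremltools):
# import coremltools as ct
# mlmodel = ct.convert(model, source="tensorflow")
# mlmodel.save("FoodClassifier.mlmodel")
```

Freezing the base keeps training feasible on a Colab GPU runtime, since only the dropout-plus-dense head is updated.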
Challenges we ran into
Since we were working with images, training the model took a very long time, so time expenditure was one of the biggest challenges we ran into. We initially started with all 101 categories, then trimmed down to 70; increasing the model's accuracy with so many categories was difficult. It was also tricky to convert the model for integration into the app, and we ran into many issues while building the app itself.
Accomplishments that we're proud of
We are extremely proud of building a functioning iOS app, as well as achieving 93.7% accuracy on the test set given the challenge of classifying into 70 categories.
What's next for Reverse Food Image Search
We would like to add a feature that fetches the ingredients and a recipe for the recognized food item. We would also like to add more food categories while maintaining or improving accuracy.