Amidst the rising interest in medical problems, one condition goes largely unnoticed: allergies. As of 2016, around 32 million Americans suffer from allergic reactions. In the past few decades, the overall number of people with allergies has increased by 50%. Not only is this affliction common, but it is dangerous: every three minutes, an allergic reaction sends someone to the emergency room. Despite the prevalence of allergic reactions, no cure has been found for this deadly condition. The only practical way to avoid symptoms is simply to avoid the triggers, and even that can be a daunting task.
As half of our team suffers from anaphylactic allergic reactions (one of us has gone to the emergency room twice because of this!), we are well aware of the horrors of allergic reactions. We were inspired to build a program to help others avoid the hours of agony and panic that ensue after consuming an allergen.
What it does
AI Allergy helps people explore new foods while avoiding allergens. A state-of-the-art neural network classifies a picture of food, and a search algorithm then scans a database of recipes for potential allergens. The machine learning model can classify food into 404 categories, including dishes from Hawaiian, Korean, Indonesian, Japanese, Italian, American, German, Chinese, French, Canadian, English, and Spanish cuisines. The recipe database contains more than 1,000 recipes per food category, over 400,000 recipes in total, all of which can be searched for potential allergens.
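The allergen-search step described above can be sketched as a keyword scan over recipe ingredient lists. Everything in this sketch (the allergen list, the recipe entries, and the function names) is illustrative, not the project's actual data or code:

```python
# Minimal sketch of scanning recipes for a predicted food label and
# flagging known allergens. All data and names here are hypothetical.

ALLERGENS = {"peanut", "tree nut", "milk", "egg", "wheat", "soy", "fish", "shellfish"}

# Hypothetical slice of the recipe database: label -> list of ingredient lists.
RECIPES = {
    "pad_thai": [
        ["rice noodles", "peanut", "egg", "bean sprouts"],
        ["rice noodles", "tofu", "soy", "lime"],
    ],
}

def find_allergens(label, recipes=RECIPES, allergens=ALLERGENS):
    """Return the set of known allergens appearing in any recipe for `label`."""
    found = set()
    for ingredients in recipes.get(label, []):
        for item in ingredients:
            for allergen in allergens:
                if allergen in item.lower():
                    found.add(allergen)
    return found
```

In practice the lookup would run against the full SQL recipe database rather than an in-memory dictionary, but the scan itself works the same way.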
How we built it
AI Allergy was built in two parts: frontend and backend. The frontend is the app's user interface and was made in Android Studio using Dart and Flutter. The connection between the frontend and the backend was a Django server running through an Apache instance on a Google Cloud server. Using HTTP requests, the app communicates with the machine learning algorithm and the SQL recipe database.
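As an illustration of the app-to-server exchange, here is a minimal Python sketch of how a photo might be packaged for the HTTP request and how the JSON reply might be parsed. The real client is written in Dart/Flutter; the endpoint URL and JSON field names below are assumptions, not the project's actual API:

```python
# Sketch of the request/response contract between the app and the Django
# backend. The URL and field names are assumptions for illustration only.
import base64
import json

API_URL = "https://example.com/classify"  # placeholder, not the real server

def build_classify_payload(image_bytes):
    """Encode a food photo as base64 inside a JSON body for an HTTP POST."""
    return json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

def parse_classify_response(body):
    """Extract the predicted dish and any flagged allergens from the reply."""
    data = json.loads(body)
    return data["label"], data.get("allergens", [])
```

The Flutter client would do the equivalent with its HTTP library and the in-app camera, POSTing the payload to the Django endpoint and rendering the parsed result.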
The backend consists of two parts: the classifier and the database. The classifier was written in Keras and TensorFlow and was trained on a custom dataset (a combination of existing datasets and images scraped from Google Images using Bash). This dataset was created by our team's machine learning engineer and, to our knowledge, is currently the largest public food image dataset. The machine learning model combines a network developed by Google with a custom classification block developed by our team, allowing for increased accuracy. It was trained using an AWS t2.micro instance and an NVIDIA GeForce RTX 2060; together they allowed us to fine-tune the model structure more efficiently. The second part of the backend is the SQL recipe database, which was generated using Edamam's API, Python, and MySQL. A Django instance serves as a bridge between the app and the backend framework.
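The database half of the backend can be sketched as follows. To keep the example self-contained and runnable, Python's built-in sqlite3 module stands in for the project's MySQL database, and the schema and column names are assumptions rather than the actual design:

```python
# Sketch of loading standardized recipe rows into a relational table.
# sqlite3 stands in for MySQL here; the schema is illustrative.
import sqlite3

def build_recipe_db(rows):
    """Create an in-memory recipe table and load (category, title, ingredients) rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE recipes (category TEXT, title TEXT, ingredients TEXT)"
    )
    conn.executemany("INSERT INTO recipes VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

def recipes_for(conn, category):
    """Fetch all (title, ingredients) pairs for a predicted food category."""
    cur = conn.execute(
        "SELECT title, ingredients FROM recipes WHERE category = ?", (category,)
    )
    return cur.fetchall()
```

In the real pipeline, rows like these would come from the Edamam API responses and be bulk-loaded into MySQL, with the classifier's predicted label used as the query key.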
Challenges we ran into
We ran into challenges at every part of the application. Originally, we had trouble connecting the frontend with the backend via a server. Eventually, we were able to figure this out using Google Cloud and its documentation. In the backend, our machine learning algorithm (derived from a Google AI Research paper) initially performed poorly. We solved this by augmenting Google's machine learning algorithm with our own task-specific customizations. Using somewhat hackish code and algorithms, we were able to raise the accuracy of our algorithm to an acceptable level. Additionally, the Edamam API was used to generate a CSV file with different recipes. However, the recipes were formatted in many different ways, and they needed to be standardized before being added to the SQL database.
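The standardization step might look something like this in Python; the unit list and cleanup rules below are illustrative guesses, not the project's actual logic:

```python
# Hypothetical sketch of normalizing ingredient strings from the CSV
# before insertion into the SQL database.
import re

# Illustrative unit vocabulary; the real list would be much longer.
UNITS = r"(?:cups?|tbsp|tsp|oz|grams?|g|lbs?)"

def normalize_ingredient(raw):
    """Lowercase an ingredient, drop leading quantities/units, trim whitespace."""
    text = raw.strip().lower()
    # Remove a leading quantity like "2", "1/2", or "1.5", plus an optional unit.
    text = re.sub(rf"^[\d/.\s]+{UNITS}?\s*", "", text)
    return text.strip()
```

Normalizing every row to one canonical form this way makes the later allergen keyword scan much more reliable, since "2 cups All-Purpose Flour" and "all-purpose flour" collapse to the same string.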
Accomplishments that we're proud of
On the frontend, we're proud of our simple yet elegant app design that effectively implements Google Cloud. On the backend, we're proud of our customized machine learning algorithm, the dataset we created to train it, and the comprehensive set of recipes that can be referenced.
What we learned
We learned a lot about using Google Cloud and AWS servers, using in-app cameras, web crawling, and machine learning.
What's next for AI Allergy
We believe that AI Allergy is practical enough to deploy. Hopefully, we will be able to take it from a PennApps project to a real application.