More than 1 in 3 Americans requires a special diet for health reasons, but even if you are aware of your condition, caution only goes so far. Mistakes are bound to happen, and the consequences can be deadly. In fact, according to the Global Dietary Database (GDD), poor diet is the leading cause of death and disability in the world.
Nutrition labels are often packed with dense information that is difficult to read in a reasonable amount of time. These labels can be especially hard to scrutinize for people in developing countries, and risky ingredients buried in the ingredients list can cause serious harm or even death.
We ourselves live with various dietary restrictions, including gluten intolerance, nut allergies, and lactose intolerance. Having to be extremely careful with everything we eat inspired us to build AugmenTable.
What it does
AugmenTable scans a product barcode to retrieve the nutrition facts, matches the product against a database, checks it against a number of custom conditions, and finally creates interactive AR popups that guide your shopping experience. What makes our program so easy to use is our custom-made conditions system: you pick and choose from an exhaustive list of possible dietary restrictions, all of which are saved and checked against every barcode you scan, so you never buy anything potentially harmful to you. Our custom condition-analyzing API takes in conditions and ingredients, letting us reach people in developing countries who find it extremely difficult to read, parse, and analyze an ingredients list, a task our app performs instantaneously.
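The per-scan check can be sketched roughly as follows. This is an illustrative sketch only: the names `RESTRICTIONS` and `check_ingredients`, and the trigger lists, are hypothetical and not the app's actual API or data.

```python
# Hypothetical mapping from a saved condition to ingredient keywords that
# should trigger a warning (illustrative, not the app's real database).
RESTRICTIONS = {
    "gluten intolerance": {"wheat", "barley", "rye", "malt"},
    "nut allergy": {"peanut", "almond", "cashew", "hazelnut"},
    "lactose intolerance": {"milk", "lactose", "whey", "casein"},
}

def check_ingredients(ingredients, user_conditions):
    """Return {condition: [flagged ingredients]} for the user's saved conditions."""
    flagged = {}
    for condition in user_conditions:
        triggers = RESTRICTIONS.get(condition, set())
        # Substring match so "Wheat Flour" still trips the "wheat" trigger.
        hits = [i for i in ingredients if any(t in i.lower() for t in triggers)]
        if hits:
            flagged[condition] = hits
    return flagged

print(check_ingredients(
    ["Water", "Wheat Flour", "Milk", "Sugar"],
    ["gluten intolerance", "lactose intolerance"],
))
# → {'gluten intolerance': ['Wheat Flour'], 'lactose intolerance': ['Milk']}
```

Every scanned barcode would run its ingredients list through a check like this against the user's saved conditions, and any non-empty result becomes an AR warning popup.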
How we built it
This app contains a lot of moving parts that all had to be tuned and integrated. The backend is written in Python and serves condition and nutrition-facts data to the React Native frontend, which bundles in our camera (with barcode scanner) and AR to display the results of the report. We spent a lot of time experimenting with few-shot in-context learning with GPT-3 to get the model to act as a custom food-analysis API.
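Few-shot in-context learning here means prepending labeled examples to the query so the model infers the task from context. The sketch below shows only the prompt-assembly step; the examples, labels, and `build_prompt` helper are hypothetical stand-ins, and the actual GPT-3 call (with API credentials) is assumed to happen elsewhere.

```python
# Hypothetical few-shot examples: (query, desired answer) pairs that teach
# the model the task format in-context. The real prompts differ.
FEW_SHOT_EXAMPLES = [
    (
        "Conditions: nut allergy\nIngredients: sugar, peanut oil, salt",
        "UNSAFE: peanut oil conflicts with nut allergy",
    ),
    (
        "Conditions: lactose intolerance\nIngredients: water, rice, salt",
        "SAFE",
    ),
]

def build_prompt(conditions, ingredients):
    """Prepend labeled examples, then leave the final answer for the model."""
    parts = []
    for query, answer in FEW_SHOT_EXAMPLES:
        parts.append(f"{query}\nAnswer: {answer}\n")
    parts.append(
        f"Conditions: {', '.join(conditions)}\n"
        f"Ingredients: {', '.join(ingredients)}\n"
        "Answer:"
    )
    return "\n".join(parts)

prompt = build_prompt(["gluten intolerance"], ["wheat flour", "water", "yeast"])
```

The assembled prompt would then be sent as the completion input, with the model's continuation after the final "Answer:" parsed as the verdict.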
Challenges we ran into
A big challenge for us (mostly me, Neil, as the frontend dev) was getting React Native building and deployed for the first time. Some of the modules we used required extending class components, some couldn't autolink on RN >= 0.60, some didn't compile with Gradle 7.0, and others (ahem, Redux) broke each other through their connection APIs, so I sure learned a lot about React Native.
I (Anthony) gained a lot of experience with machine learning, as I spent much of my time getting in-context learning to work properly and produce valid output. The problem with working with machine learning is that it is exceedingly hard to debug: even when all of your code compiles, you have no clue what you are doing wrong.
Accomplishments that we're proud of
We are very proud of creating a system that allows anyone, regardless of education, to scan a food, parse its entire ingredients list, and locate which ingredients may cause adverse side effects. We believe this can prevent many avoidable deaths: users simply input their health conditions and scan everything they buy.
What we learned
- Effective time management and the importance of check-ins
- Lots of design principles and concepts for mobile development
- How to use AR and integrate with a camera
- A lot more about food and how certain conditions can affect your diet
What's next for AugmenTable
- Custom user profiling, for family members or for people to share presets
- Amazon and Uber Eats integration, to track when you shop for food online
- Diet plans to keep you eating healthy by combining the estimated nutrition of all food you buy
- Image analysis of meals without barcode, using CV AI
- Multilingual support, for those reading ingredients in developing countries