Nutritional diseases - such as obesity, cardiovascular disease, and type II diabetes - are part of a global health epidemic that causes millions of deaths annually in the industrialized world. One of our members has a grandfather, Tom, who suffers from hypertension. Tom needs proper nutrition management and recommendations to manage his heart disease, and would benefit from a simple, convenient way to report his dietary log to his nutritionist. Meanwhile, nutritionists could provide better care if they received a more timely, accurate record of the foods their patients eat, along with a system that rapidly compiles and extracts conclusions about a patient's dietary habits without requiring them to read through lengthy food logs.
What it does
Our app allows users to upload photos of every meal they eat. Using machine learning image recognition, the app identifies the food contents of the plate, determines important parameters about the meal - such as approximate calorie content, cholesterol content, sodium content, and food group breakdown - and sends this data to the patient's nutritionist. The nutritionist can analyze this time-stamped data to quickly put together graphics that offer better holistic insight into the dietary habits of their patient, empowering nutritionists to give better, more accurate nutritional recommendations.
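The aggregation step can be sketched as a lookup from recognized food labels to per-serving nutrient estimates, summed over the plate. This is a minimal illustration: the food names and all nutrient values below are made-up placeholders, not real nutrition data or our production table.

```python
# Sketch: aggregate approximate nutrition facts for a recognized meal.
# All labels and per-serving values are illustrative placeholders.

NUTRITION_DB = {
    # label: (calories, cholesterol_mg, sodium_mg, food_group)
    "apple":          (95,   0,  2, "fruit"),
    "grilled salmon": (230, 60, 70, "protein"),
    "white rice":     (205,  0,  2, "grain"),
}

def summarize_meal(labels):
    """Sum nutrient estimates for every recognized food on the plate."""
    totals = {"calories": 0, "cholesterol_mg": 0, "sodium_mg": 0}
    groups = set()
    for label in labels:
        cal, chol, sod, group = NUTRITION_DB[label]
        totals["calories"] += cal
        totals["cholesterol_mg"] += chol
        totals["sodium_mg"] += sod
        groups.add(group)
    totals["food_groups"] = sorted(groups)
    return totals

meal = summarize_meal(["grilled salmon", "white rice", "apple"])
```

A summary like `meal` can then be time-stamped and forwarded to the nutritionist's dashboard.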
How we built it
We used IBM Watson's Visual Recognition machine learning service to enable a computer to process an image of a plate of food, rapidly identify its contents, and provide approximate estimates of the meal's important nutritional parameters (e.g., calorie content, cholesterol content, portion size).
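A rough sketch of that step: the Watson Visual Recognition v3 SDK call (commented out because it needs an API key and network access) returns a JSON response of classes with confidence scores, which we filter down to food labels. The sample response here is fabricated to illustrate the response shape as we understand it, and the `"food"` classifier id and score threshold are assumptions.

```python
# Sketch: extract food labels from an IBM Watson Visual Recognition
# classify() response. The live SDK call is commented out; the sample
# response below is a fabricated illustration of the v3 response shape.

# from ibm_watson import VisualRecognitionV3
# service = VisualRecognitionV3(version="2018-03-19", authenticator=...)
# with open("plate.jpg", "rb") as img:
#     response = service.classify(
#         images_file=img, classifier_ids=["food"]).get_result()

sample_response = {
    "images": [{
        "classifiers": [{
            "classes": [
                {"class": "grilled salmon", "score": 0.91},
                {"class": "white rice", "score": 0.84},
                {"class": "broccoli", "score": 0.42},
            ]
        }]
    }]
}

def food_labels(response, threshold=0.5):
    """Keep only labels the classifier is reasonably confident about."""
    labels = []
    for image in response["images"]:
        for classifier in image["classifiers"]:
            for cls in classifier["classes"]:
                if cls["score"] >= threshold:
                    labels.append(cls["class"])
    return labels

labels = food_labels(sample_response)
```

The threshold trades recall for precision: low-confidence guesses (like "broccoli" above) are dropped rather than fed into the nutrient estimates.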
Challenges we ran into
Building an artificial intelligence system that can recognize many types of common food requires lots of time to train the computer, and thus the model we built this weekend was only constructed to recognize a few common foods as an initial proof-of-concept.
Accomplishments that we're proud of
The model we trained can process an image of a plate containing multiple different foods, identify each distinct food in a single pass, and calculate approximate calorie content, food group breakdown, and other parameters. Most alternative calorie-tracking apps on the market today require users to scan nutrition labels or enter food names (and quantities) one at a time. By simplifying this entire process into taking one photo, we significantly improve the ease and accuracy with which users can report their dietary habits to their nutritionists.
What we learned
We learned how to use IBM Watson's tools to train a model to accurately recognize specific images. We also learned how to use HTML and CSS to build a website.
What's next for FoodAI
We want to train the algorithm to be capable of recognizing a more diverse array of food items, develop a functional user interface downloadable on iOS or Android, and implement the option of translating the app's content to other languages in order to make it more accessible to communities where English is not the first language.