We wanted to build an app that helps people with dietary restrictions, as well as the elderly, decide which foods they can or cannot safely eat. We went with IBM Watson's image recognition because we thought it would be great to have an app that does all of this from just a picture.

What it does

The user can take a photo from inside the app. Once they tap to analyze it, the app tries to recognize the food items present. It returns every food it finds in the picture, and the user can tap each one to see more information. The user can also set up a profile with daily limits on specific nutrients. The app compares a food's nutrient values against those limits and decides whether it is okay to eat.
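The daily-limit check described above could be sketched roughly as follows. This is a minimal illustration, not the app's actual code: the class name, field names, and `isOkayToEat` method are our own hypothetical choices.

```java
import java.util.Map;

// Hypothetical sketch of the daily nutrient-limit check.
public class NutrientProfile {
    private final Map<String, Double> dailyLimits;   // e.g. "sugar" -> grams allowed per day
    private final Map<String, Double> consumedToday; // running totals for today

    public NutrientProfile(Map<String, Double> dailyLimits,
                           Map<String, Double> consumedToday) {
        this.dailyLimits = dailyLimits;
        this.consumedToday = consumedToday;
    }

    // A food is okay if no tracked nutrient would push the user past a limit.
    public boolean isOkayToEat(Map<String, Double> foodNutrients) {
        for (Map.Entry<String, Double> limit : dailyLimits.entrySet()) {
            double eaten = consumedToday.getOrDefault(limit.getKey(), 0.0);
            double inFood = foodNutrients.getOrDefault(limit.getKey(), 0.0);
            if (eaten + inFood > limit.getValue()) {
                return false;
            }
        }
        return true;
    }
}
```

For example, a user limited to 30 g of sugar per day who has already eaten 25 g would be warned off a food containing 10 g of sugar, but not one containing 4 g.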

How we built it

For the image recognition we used IBM Watson's Visual Recognition API (with its default classifier), and we parsed the output using the JSON-java library. We used the FatSecret API to look up information in their database about the recognized foods, and we put it all together in Android Studio.
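Pulling food labels out of the Watson response with JSON-java (`org.json`) could look roughly like this. The sketch assumes the v3 Visual Recognition response shape (`images` → `classifiers` → `classes`); the class name `WatsonParser` and the sample JSON string are illustrative, not taken from the project.

```java
import java.util.ArrayList;
import java.util.List;
import org.json.JSONArray;
import org.json.JSONObject;

// Hypothetical sketch: extract recognized class labels from a Watson
// Visual Recognition JSON response using the JSON-java library.
public class WatsonParser {
    public static List<String> extractLabels(String json) {
        List<String> labels = new ArrayList<>();
        JSONObject root = new JSONObject(json);
        JSONArray images = root.getJSONArray("images");
        for (int i = 0; i < images.length(); i++) {
            JSONArray classifiers = images.getJSONObject(i).getJSONArray("classifiers");
            for (int j = 0; j < classifiers.length(); j++) {
                JSONArray classes = classifiers.getJSONObject(j).getJSONArray("classes");
                for (int k = 0; k < classes.length(); k++) {
                    labels.add(classes.getJSONObject(k).getString("class"));
                }
            }
        }
        return labels;
    }
}
```

Each label returned here could then be sent to the FatSecret lookup to fetch its nutrition details.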

Challenges we ran into

Settling on a specific idea whose scope was achievable within the duration of the hackathon. Learning how to use the API libraries for our project. Dealing with many bugs and crashes in the mobile application.

Accomplishments that we're proud of/What we learned

Understanding the jargon of the various services we used. Teaching ourselves mobile app development concepts.

What's next for CANiEatThis

Adding a way to recognize which foods can or cannot be eaten given the user's medical conditions (pulled from a database). Using FatSecret's JavaScript library to look up more detailed information about recognized foods (the Java library we used does not return as much detail). Adding a more interactive UI, where the user can tap each food returned by the image recognition to view more details. Training the IBM Watson model for more accurate results.
