It's very important for people to know what's in their food.
What it does
Tells users how many carbs, and how much fat and protein, are in their unpackaged food items.
How I built it
We developed this application in Swift using ARKit. First, we used the Google Cloud Vision API to put a name to the food item. Next, we queried a nutrition API to retrieve the nutritional data for that food. Finally, we used ARKit to render a graph in augmented reality, placed near the food item, displaying its carbs, protein, and fat.
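The nutrition-API step can be sketched in Swift with `Codable`. The `NutritionInfo` struct and its field names below are assumptions for illustration, not the actual API's schema, and the JSON literal stands in for a live response:

```swift
import Foundation

// Hypothetical shape of a nutrition API response; a real
// API's field names will differ.
struct NutritionInfo: Codable {
    let name: String
    let carbs: Double    // grams
    let protein: Double  // grams
    let fat: Double      // grams
}

// Sample payload standing in for a live API response.
let sampleJSON = """
{"name": "apple", "carbs": 25.0, "protein": 0.5, "fat": 0.3}
""".data(using: .utf8)!

let info = try JSONDecoder().decode(NutritionInfo.self, from: sampleJSON)
print("\(info.name): \(info.carbs)g carbs, \(info.protein)g protein, \(info.fat)g fat")
// prints "apple: 25.0g carbs, 0.5g protein, 0.3g fat"
```

Once decoded, those three numbers are what the AR graph visualizes.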
Challenges I ran into
It was very difficult to get accurate image recognition. Even with models trained on large datasets, similar-looking foods were easy to confuse, such as an apple being recognized as a peach.
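For context, a label-detection call against Cloud Vision's public `images:annotate` REST endpoint can be built roughly like this (the API-key handling and result count here are placeholders). Requesting several candidate labels rather than only the top one is one way to spot apple-versus-peach confusions:

```swift
import Foundation

// Build a Cloud Vision `images:annotate` request asking for
// label detection on a base64-encoded image.
func makeVisionRequest(imageData: Data, apiKey: String) -> URLRequest {
    let url = URL(string: "https://vision.googleapis.com/v1/images:annotate?key=\(apiKey)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "requests": [[
            "image": ["content": imageData.base64EncodedString()],
            // maxResults is a placeholder; more candidates help
            // disambiguate visually similar foods.
            "features": [["type": "LABEL_DETECTION", "maxResults": 5]]
        ]]
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)
    return request
}
```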
Accomplishments that I'm proud of
I'm really proud of the UI we built with ARKit. Seeing how we could manipulate objects in 3D space was fascinating, and combining that AR interface with nutritional information made for a really cool product.
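As a rough sketch of the AR piece, a floating label can be built with SceneKit's `SCNText` and placed at a world-space position (for example, one obtained from a hit test). The font size, scale, and label text below are placeholder styling choices, not our exact values:

```swift
import ARKit
import SceneKit
import UIKit

// Build a floating text label for a recognized food item and
// place it near a world-space position (e.g. from a hit test).
func makeNutritionLabel(text: String, at position: SCNVector3) -> SCNNode {
    let textGeometry = SCNText(string: text, extrusionDepth: 0.5)
    textGeometry.font = UIFont.systemFont(ofSize: 6)
    textGeometry.firstMaterial?.diffuse.contents = UIColor.white

    let node = SCNNode(geometry: textGeometry)
    node.scale = SCNVector3(0.005, 0.005, 0.005) // shrink text to scene scale
    node.position = position
    return node
}

// Usage inside an ARSCNView-backed view controller (sketch):
// let label = makeNutritionLabel(text: "Apple: 25g carbs", at: hitPosition)
// sceneView.scene.rootNode.addChildNode(label)
```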
What I learned
I learned how to develop with ARKit and Swift, and more about making REST API calls. I still have a lot to learn about AR and Swift, but this was a great start.
What's next for NutriViz
Making the object recognition more accurate.