Inspiration

Obesity is an often overlooked health problem that affects young people and the elderly alike. While genetics is one of the main reasons people experience obesity, 3 of the top 9 causes come down to diet: overeating, a high-carbohydrate diet, and frequent eating. The economic cost of obesity is also beginning to outweigh the national cost of tobacco smoking: $23.3 billion compared to $18.7 billion. In the United States, these costs are estimated to be even higher, at around $150 billion.

This is not to mention the less tangible costs of obesity, ranging from weight bias & weight-based discrimination to productivity loss and higher mortality. Programs like Weight Watchers & surgery promise people the chance to slim down and live better, but we know that the best and simplest of the only 2 real ways to that better life is a healthy diet.

Currently, nutrition apps require tiresome manual entry at every meal, which is hard to keep up with while adjusting to a new lifestyle. They’re also heavily calorie-oriented and restrictive, which isn’t what healthy eating should be about. It should help someone feel more confident and happier, oriented towards their own goals, and that was our goal with ütrition: to create a user-centred, healthier, and automated alternative.

What it does

ütrition is a simple and easy alternative that makes nutrition tracking user-friendly, attractive, and automatic. Using your phone’s pre-existing photo gallery, ütrition automatically analyses the meals you’ve had and displays them on a summary screen for you to look at conveniently. Using geolocation, Google Photos, Google Maps data, and Yelp, our app can pinpoint exactly which restaurant you had that last burger in, and match your food to its most likely tag based on its similarity to photos on the menu. We use all that information to give you specific nutritional information about the food you eat when you go out, so you can keep up with your diet with just a simple click of your phone camera at each meal.
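As a minimal sketch of that restaurant lookup (the findRestaurant helper, the environment-variable key handling, and the 50 m radius are our illustrative assumptions, not production values), the real Yelp Fusion business search endpoint can rank nearby restaurants by distance from a photo’s GPS coordinates:

```typescript
// Sketch: match a photo's GPS coordinates to the nearest restaurant using the
// Yelp Fusion business search endpoint. Requires Node 18+ for global fetch.
const YELP_API_KEY = process.env.YELP_API_KEY;

async function findRestaurant(latitude: number, longitude: number) {
  const params = new URLSearchParams({
    latitude: String(latitude),
    longitude: String(longitude),
    categories: "restaurants",
    radius: "50",        // metres; photos are usually taken at the table
    sort_by: "distance",
    limit: "1",
  });
  const res = await fetch(`https://api.yelp.com/v3/businesses/search?${params}`, {
    headers: { Authorization: `Bearer ${YELP_API_KEY}` },
  });
  const { businesses } = await res.json();
  return businesses[0]; // closest candidate: name, id, photos, etc.
}
```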

Our app also offers several ways to view your food in more detail, down to its breakdown of nutrients, such as Vitamin A, and the macronutrients you may want to limit, like fat or carbohydrates. It also lets you view your overall progress, toggling between daily, weekly, and monthly growth in the fitness score of your diet & your diet’s components (calculated with respect to the goals you set). All this data processing happens behind the scenes without your intervention and is presented in an interface eager to please.
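The fitness score is simple in spirit; as a sketch only (the nutrient names and the linear penalty below are hypothetical illustrations, not our exact formula), it averages how closely each tracked nutrient lands on its goal:

```typescript
// Sketch: score a day's intake against the user's goals, 0-100.
type Totals = Record<string, number>; // e.g. { calories: 1850, fat: 60, carbs: 220 }

function fitnessScore(intake: Totals, goals: Totals): number {
  const nutrients = Object.keys(goals);
  const perNutrient = nutrients.map((name) => {
    const eaten = intake[name] ?? 0;
    // 1.0 at an exact match, falling off linearly; double the goal scores 0.
    const relativeError = Math.abs(eaten - goals[name]) / goals[name];
    return Math.max(0, 1 - relativeError);
  });
  const mean = perNutrient.reduce((a, b) => a + b, 0) / nutrients.length;
  return Math.round(mean * 100);
}

// The daily/weekly/monthly toggle just runs this over totals for the chosen window.
```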

How we built it

For the app's design, we started with user research: user interviews and contextual analysis. After uncovering the pain points and brainstorming opportunities to solve them, we designed the prototype in Figma, conducted quick user testing, and coded it in React Native.

UiPath facilitated our computer-vision food identification by letting us build an automated robot that scraped image data and GPS coordinates from Google Photos (this function is not actually offered in the Google Photos API, and we used UiPath to bridge that gap). The data was then forwarded into Firebase on the Google Cloud Platform, where a listener built with Google Cloud Functions would trigger our image classification model, self-trained with Google AutoML Vision, whenever new images or GPS coordinates came in. The model was trained on our own dataset of menu images: we geolocated the GPS coordinates forwarded by the UiPath robot through the Google Places API, then pulled the matching restaurants’ menu photos with the Yelp Fusion API. We also used Postman frequently as we integrated all these platforms and APIs together.
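A minimal sketch of that listener, assuming hypothetical collection and field names (the Firestore onCreate trigger and the AutoML Vision predict call are the real APIs; the project, region, model IDs, and document shape are placeholders):

```typescript
// Sketch: Cloud Function that fires when the UiPath robot writes a new photo
// record to Firestore, then classifies it with our AutoML Vision model.
import * as functions from "firebase-functions";
import { PredictionServiceClient } from "@google-cloud/automl";

const client = new PredictionServiceClient();

export const classifyMeal = functions.firestore
  .document("photos/{photoId}")
  .onCreate(async (snap) => {
    const { imageBytes } = snap.data(); // base64-encoded photo from the robot
    const [response] = await client.predict({
      // Full resource name of the AutoML Vision model (IDs are placeholders)
      name: client.modelPath("my-project", "us-central1", "ICN1234567890"),
      payload: { image: { imageBytes: Buffer.from(imageBytes, "base64") } },
    });
    // Write the top food label back so a second function can fetch nutrition data
    const top = response.payload?.[0];
    await snap.ref.update({ label: top?.displayName ?? "unknown" });
  });
```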

In addition, another Google Cloud Function would push that classification data back to Firebase, along with the nutritional information retrieved through the Nutritionix API, creating a centralized database for our project. From there, our mobile app could easily access the Firestore database & load the data whenever needed.
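A sketch of that second step, using the real Nutritionix natural-language endpoint and a standard Firestore write (the meals collection and field names are our illustrative assumptions):

```typescript
// Sketch: given a food label from the classifier, fetch nutrition facts from
// Nutritionix and store them where the mobile app reads its meal data.
import * as admin from "firebase-admin";

admin.initializeApp();

async function saveNutrition(photoId: string, label: string) {
  const res = await fetch("https://trackapi.nutritionix.com/v2/natural/nutrients", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-app-id": process.env.NUTRITIONIX_APP_ID!,
      "x-app-key": process.env.NUTRITIONIX_APP_KEY!,
    },
    body: JSON.stringify({ query: label }), // e.g. "cheeseburger"
  });
  const { foods } = await res.json();
  const f = foods[0];
  await admin.firestore().collection("meals").doc(photoId).set({
    label,
    calories: f.nf_calories,
    fat: f.nf_total_fat,
    carbs: f.nf_total_carbohydrate,
    protein: f.nf_protein,
  });
}
```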

Challenges we ran into

Due to the short time frame, we had to conduct our user research and iterate quickly. In addition, UiPath never quite behaved the way we directed it to. Another unexpected issue was the irregular, slow wifi at the hackathon: web pages loaded slower than UiPath could recognize, causing errors in the automation. To overcome this, we relied on cellular data (the attached video shows the backend web scraping done through UiPath).

We also hit a lot of setbacks simply because of the lack of APIs that would accomplish what we needed: extracting the location metadata from a photo, easily queuing up menu photos (a few APIs provided general restaurant photos, but always with limits, and the Locu API for restaurant menu photos shut down 3 years ago), and compatibility issues in some of the newer Google Cloud libraries that needed fixing. However, we pushed through in the short amount of time given and still managed to train our model on several hundred photos.
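For context on that first gap: when a photo still carries its EXIF tags, reading the coordinates is straightforward, e.g. with the open-source exifr library (a sketch, not our pipeline; Google Photos strips location from API downloads, which is exactly why we needed the UiPath robot):

```typescript
// Sketch: read GPS coordinates from a photo's EXIF metadata with exifr.
import exifr from "exifr";

async function photoLocation(path: string) {
  const gps = await exifr.gps(path); // { latitude, longitude } or undefined
  if (!gps) throw new Error("no GPS metadata in this photo");
  return gps;
}
```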

Accomplishments that we're proud of

That leads into our next point: even though we weren’t used to hackathons being such a short span, we accomplished a lot: designing a final mock-up, running focus groups to improve our user interface, creating a progressive React web app built for mobile, integrating UiPath neatly into our project, extracting hundreds of images from local restaurants we’ve visited using a combination of APIs, and training & beginning to apply our image classification model. We also all first met on the day of the hackathon, and we’re proud of how well we worked together as a team, complementing each other’s skills.

What we learned

2 of our team members are UI/UX designers, so coding was an almost entirely new experience for them. One of those designers actually ended up creating the beautiful mobile-optimized app we presented today using React. We also learned a lot about what goes into designing a thorough application end-to-end, with the 2 backend developers getting a glimpse of the design process & the focus groups as well. One team member is now a seasoned UiPath professional; we even think she should mentor future students in this challenge! She learned how to use UiPath to do computer vision. The last team member has learned enough about GET/POST requests in the past 2 days to fill a book. She also learned a lot about the Google Maps APIs, beyond the plain map interface, and is excited to learn more in the future.

What's next for ütrition

With more computing power and time, we would reach out to Locu so we could expand this service to restaurants that don’t have menu pictures on Yelp, and train our model on all possible restaurants to deliver the best, most targeted experience for our users. We’d likely run that model training on high-powered GPUs, since the data collection & model building can be slow at times, and use that extra computing power to make the app more real-time as well. Finally, we would expand our app’s code base to fully catch up with the mock-ups we designed, though we’re still proud of how far we came with the actual app.
