Inspiration

Originally, our idea was to use the Google Cloud Vision image recognition API to determine what objects can be recycled in Massachusetts. Although we were passionate about this idea, the recycling API we wanted to use for our data was not accessible in time for this hackathon. So, instead of giving up on the cool image recognition feature we had already programmed, we decided (inspired by the food all around us) to pivot: food safety in the U.S. is mediocre at best, and it affects everyone.

What it does

Healthy Vision scans the ingredient list of any product and, using image-to-text recognition, tells the user which harmful ingredients their food may contain.

How we built it

The image-to-text recognition uses Google Cloud Vision to convert the text inside an image into data our backend can query. Firebase serves as the backend database. We used React Native to design and build the mobile app, and it also ties together the app's other key pieces: the Firebase database and the Google Cloud Vision API.
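As a rough sketch of the middle step, here is how the raw text block that Cloud Vision's text detection returns (its `fullTextAnnotation.text` field) might be turned into a clean list of ingredient names to look up in a database. The function name and parsing rules are our own illustration, not part of any API:

```javascript
// Hypothetical helper: normalize an OCR'd label into ingredient names.
// Assumes the label contains a comma-separated run after "Ingredients:".
function extractIngredients(ocrText) {
  const match = ocrText.match(/ingredients:?/i);
  const tail = match ? ocrText.slice(match.index + match[0].length) : ocrText;
  return tail
    .split(/[,;\n]/)                                        // common separators
    .map((s) => s.trim().toLowerCase().replace(/[.()]/g, "").trim())
    .filter((s) => s.length > 0);                           // drop empty pieces
}

// Example OCR output from a snack label:
const sample = "Nutrition Facts\nIngredients: Sugar, Palm Oil, RED 40.";
console.log(extractIngredients(sample));
// → [ 'sugar', 'palm oil', 'red 40' ]
```

Each normalized name can then be checked against the harmful-ingredient records stored in Firebase.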

Challenges we ran into

In the early stages of our app's development, we ran into many issues with the way information was displayed. At first, the app showed the full list of labels that the Google Vision API thought might describe the picture the user had taken. Because much of this was confusing and irrelevant, we had to filter it out before showing anything to the user. We also ran into issues matching the picture taken to items in our database. We did our best to fix these problems; for some we succeeded, but for others we still have a long way to go.
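The filtering we settled on can be sketched like this. The Vision API's label results come back as objects with a description and a confidence score, and most low-score entries ("Font", "Rectangle", ...) are noise; the cutoff value and field handling here are illustrative, not our exact production values:

```javascript
// Hypothetical sketch: hide low-confidence labels from the user.
function filterLabels(labels, minScore = 0.8) {
  return labels
    .filter((label) => label.score >= minScore) // keep confident guesses only
    .map((label) => label.description);         // show just the text
}

const raw = [
  { description: "Ingredient list", score: 0.93 },
  { description: "Font", score: 0.71 },
  { description: "Rectangle", score: 0.55 },
];
console.log(filterLabels(raw));
// → [ 'Ingredient list' ]
```

A simple confidence threshold like this was enough to keep the irrelevant guesses off the screen.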

Accomplishments that we're proud of

We are very proud that we were able to figure out how to use the Google Vision API at all. It is a very cool tool that we are glad we got to learn about and incorporate into our hackathon project. This was one of our first accomplishments, and it really set the tone for what our app could become. We are also proud of figuring out how to use React Native, since none of us had ever used it before. Although our app isn't perfect, it shows a lot of hard work, endurance, and resilience. Lastly, we're proud of all the little mistakes we were able to fix along the way, and of meeting new and amazing people to collaborate with. We met each other at this hackathon, and it allowed us to struggle together.

What we learned

We learned how to work with a lot of new tools. As mentioned earlier, it was our first time working with React Native and Google Vision, so we're proud of what we got out of them and hope to learn more. We also learned how to change direction quickly: after having an entire idea set up, we had to scrap it and come up with something new, original, and interesting. Although it's a bit cliche, we also learned a lot about teamwork. There were many things some of us did not know how to do (like navigating GitHub) that others on the team taught us. It was nice to learn from a peer rather than from online sources, although we used those as well.

What's next for Healthy Vision

We hope to fine-tune our database connection and expand our ingredient database so it can provide more information to the user in the future. We also hope to polish the overall design of the app. After all, nice-looking apps are always more user friendly! Lastly, we hope to make the app more efficient, both in the database and in how we parse information. Overall though, TechTogether was a great experience and we're very grateful!
