Inspiration

Inspired by the recent wave of student activism against climate change, we wanted to join the cause through our coding efforts and make the average citizen aware of their contribution to global warming. As young innovators, we hold a unique place in society as the inheritors of the planet in a time of rapid technological progress.

What it does

SeeFood uses a trainable image-recognition machine learning model to analyze pictures of users' meals and classify each component of the meal. Using a ranking algorithm based on data from the EPA, SeeFood then calculates the environmental impact of the meal. Although any individual meal may seem to have a minute impact on the environment, those impacts accumulate over time if we are not aware of their true consequences.
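As a rough sketch of the ranking idea, the impact score can be modeled as a sum of per-category emission factors over the items recognized in a photo. The category names and factor values below are illustrative placeholders, not the app's actual EPA-derived dataset.

```swift
import Foundation

// Hypothetical per-category emission factors in kg CO2e per serving.
// These values are illustrative, not the app's actual EPA-derived data.
let emissionFactors: [String: Double] = [
    "beef": 6.6,
    "chicken": 1.1,
    "soda": 0.4,
    "vegetables": 0.2
]

/// Estimates a meal's impact by summing the factors of each recognized item.
func mealImpact(of items: [String]) -> Double {
    items.reduce(0.0) { total, item in
        total + (emissionFactors[item] ?? 0.0)
    }
}

print(mealImpact(of: ["beef", "soda"]))  // 7.0
```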

How I built it

We trained a Core ML machine learning model on thousands of images to recognize a wide range of food types, such as soda and junk food. We then used Swift in Xcode 10 to integrate the Core ML model into our app's image recognition functions.
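A minimal sketch of that integration, assuming a generated model class named SeeFoodClassifier (the real class name comes from the .mlmodel file) and Apple's Vision framework to feed the image to the model:

```swift
import UIKit
import CoreML
import Vision

// Sketch of hooking the trained Core ML model into Vision.
// `SeeFoodClassifier` stands in for the class Xcode generates
// from the .mlmodel file; the real name depends on that file.
func classifyFood(in image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: SeeFoodClassifier().model) else {
        completion(nil)
        return
    }

    // Vision handles resizing and cropping the image to the model's input size.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Return the top classification label, e.g. "soda" or "junk food".
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```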

Challenges I ran into

Accumulating the massive number of photos we needed to train our model was a long and arduous process, but the more photos we added, the better our model performed.

Accomplishments that I'm proud of

We created a fully functional machine learning model that recognizes food items well. We are also proud of the teamwork and collaboration that made the overall process much smoother.

What I learned

We gained extensive knowledge of Swift and Xcode, as some of our team members had never used either before. We also learned a great deal about machine learning and how it can be applied in extremely useful ways.

What's next for SeeFood

We hope to expand our coverage to all sorts of foods and provide better-informed suggestions for restaurants and recipes that help users lessen their environmental impact.

Built With

swift, xcode, core-ml