In 2015, NASA released satellite data illustrating the alarming rate of global water depletion. Scientists announced that 21 of the world's 37 largest aquifers had passed their sustainability tipping points. Given humans' wasteful use of water, these numbers are hardly surprising. To conserve water, cities like Los Angeles, California, have enforced 'restricted garden-watering' days, and appliance companies are innovating water-conserving toilets, showers, and sinks. While these efforts are necessary, humans are refusing to confront water conservation's biggest enemy––agriculture.
The Guardian reports that the global agriculture sector accounts for 70% of human water use, so changing food consumption habits holds the most potential in the mission to conserve water. Producing 1 kilogram of meat can require up to 20,000 liters of water, whereas producing 1 kilogram of wheat requires at most 4,000 liters. Eliminating meat from just a few meals a week can therefore have a significant impact on a person's water footprint. With footprint, we want to help others take this next step towards being mindful users of water.
What it does
Our application, footprint, allows the user to take or upload a photo of their anticipated meal and receive a breakdown of their meal's water footprint. The resulting page makes the user aware of their ecological impact, specifically in the context of their food choices. The application also allows the user to manually input components of their meal or edit the automatically loaded components of their meal.
How we built it
Our application was developed using React. After the user takes or uploads an image, tags identifying the food types in the photo are retrieved using Microsoft's Cognitive Services' Computer Vision API. The application allows the user to delete or add tags to account for any errors made by the API. These tags are then compared against a data set of foods and their respective production water footprints to compute the total displayed on the final page.
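The tag-to-footprint lookup can be sketched roughly as follows. The footprint values and function names here are illustrative placeholders, not the app's actual data set, which is larger and paired with tags returned by the Computer Vision API:

```javascript
// Illustrative liters-per-kilogram values (the real app uses a
// larger data set of production water footprints).
const WATER_FOOTPRINT_L_PER_KG = {
  beef: 15400,
  chicken: 4300,
  wheat: 1800,
  rice: 2500,
};

// Sum the footprints of every recognized tag; unrecognized tags are
// skipped here (in the app they would instead prompt the user).
function totalFootprintLiters(tags) {
  return tags
    .filter((tag) => tag in WATER_FOOTPRINT_L_PER_KG)
    .reduce((sum, tag) => sum + WATER_FOOTPRINT_L_PER_KG[tag], 0);
}

console.log(totalFootprintLiters(["beef", "rice"])); // 17900
```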
Challenges we ran into
The main difficulty we ran into was the specificity of the image analysis. Our data set didn't include items the API identified, like 'bun', though it did include terms like 'bread'. Other times, the API would tag images with generalized terms like 'meat' or 'vegetable' without specifying the type of meat or vegetable. We had to develop an algorithm that would identify these tags and prompt the user for a more specific one.
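That tag-resolution step might look something like the sketch below. The synonym and category lists are hypothetical stand-ins for our data set; the real app compares against its full list of foods:

```javascript
// Hypothetical sample data standing in for the app's food data set.
const KNOWN_FOODS = new Set(["bread", "beef", "chicken", "carrot"]);
const SYNONYMS = { bun: "bread" }; // API term -> data-set term
const GENERIC_TAGS = { meat: ["beef", "chicken"], vegetable: ["carrot"] };

// Map one API tag to a known food, or flag it so the UI can
// prompt the user for something more specific.
function resolveTag(tag) {
  if (KNOWN_FOODS.has(tag)) return { status: "ok", food: tag };
  if (tag in SYNONYMS) return { status: "ok", food: SYNONYMS[tag] };
  if (tag in GENERIC_TAGS) return { status: "prompt", options: GENERIC_TAGS[tag] };
  return { status: "unknown" };
}

console.log(resolveTag("bun")); // { status: "ok", food: "bread" }
console.log(resolveTag("meat")); // { status: "prompt", options: ["beef", "chicken"] }
```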
Accomplishments that we're proud of
All of us are first-years who had no experience with React until yesterday's workshop! We each spent two hours yesterday delving further into the React documentation, watching YouTube videos, and completing code-alongs before we finally began our hack. Not only did we successfully build an application using React, but we built one that also implemented an API.
What we learned
We learned that caffeine can be consumed in more ways than a person would ever want to fathom. And that tile floors don't make good mattresses. Less importantly though, we challenged ourselves and learned a completely new framework. We also learned how to work with APIs to perform image analysis and retrieve information based on the analysis.
What's next for footprint
Currently, our use of the API to retrieve tags based on the meal image isn't completely accurate or comprehensive: some components of the meal are ignored or incorrectly tagged. We hope to continue optimizing our tag sorting algorithm to generate a more accurate list of tags for the image. Furthermore, the footprint is calculated in liters per kilogram; the calculation would be more accurate if it were based on a typical serving size of each food instead. After these implementations, we hope to share this app with family and friends to spread awareness of individual ecological impact!
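The serving-size idea reduces to a small unit conversion, sketched below with illustrative serving sizes in grams (these values are assumptions, not data the app currently uses):

```javascript
// Illustrative per-kilogram footprints and typical serving sizes.
const FOOTPRINT_L_PER_KG = { beef: 15400, rice: 2500 };
const SERVING_SIZE_G = { beef: 120, rice: 75 };

// Convert a liters-per-kilogram footprint to liters per serving:
// L/kg * (g / 1000 g-per-kg) = liters for one serving.
function footprintPerServingLiters(food) {
  return (FOOTPRINT_L_PER_KG[food] * SERVING_SIZE_G[food]) / 1000;
}

console.log(footprintPerServingLiters("beef")); // 1848
console.log(footprintPerServingLiters("rice")); // 187.5
```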