Inspiration: None of us, not even our mentors, knew what our actual carbon footprint was, let alone how to offset it. Everyone should be aware of how their purchases and activities affect the world and how they can compensate for their emissions. The problem with currently available apps is that they require manual data entry for every item, which is extremely time-consuming, and even then they only produce rough, averaged footprint estimates.

What it does: foodPrint lets you easily calculate your overall and daily carbon footprint, even for the smallest of purchases. All you have to do is take a picture of your grocery receipt. The app automatically recognizes the items, aggregates the individual item scores, and gives you the overall carbon emissions of that purchase. Based on your buying behavior, the app also recommends the easiest ways to compensate for these emissions. You can track your progress and earn benefits from our green partners.
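The aggregation step above can be sketched as follows. This is an illustrative Python sketch, not our R code; the item names and per-item emission factors are made up for the example:

```python
# Hypothetical per-purchase aggregation: each recognized receipt item is
# matched to an emission factor and the scores are summed. The lookup
# table below is illustrative, not foodPrint's real data.

EMISSION_FACTORS_KG_CO2E = {
    "milk 1l": 1.3,
    "beef 500g": 13.5,
    "apples 1kg": 0.4,
}

def purchase_footprint(items):
    """Return (per-item scores, total kg CO2e) for a list of item names."""
    scores = {name: EMISSION_FACTORS_KG_CO2E.get(name, 0.0) for name in items}
    return scores, sum(scores.values())

scores, total = purchase_footprint(["milk 1l", "beef 500g", "apples 1kg"])
print(round(total, 2))  # 15.2
```

Unrecognized items fall back to a score of 0.0 here; in practice they would be flagged for the user instead of silently ignored.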

How we built it: We wrote the main code in R. We used Google Cloud's Vision API to detect text in images, and we integrated Apache Cassandra as a NoSQL database from which we access food nutrition and carbon emissions data.
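The Vision API's text detection returns annotations with the recognized text and a bounding polygon for each word; the positions are what we later use to reconstruct receipt lines. A minimal Python sketch of pulling word/position pairs out of such a response (the response dict here is a hand-made stand-in, not real API output):

```python
# Sketch: extracting (word, x, y) tuples from a Vision-API-style OCR
# response. The sample dict mimics the shape of a text_detection result:
# the first textAnnotations entry is the full recognized text, and each
# following entry is one word with its bounding polygon.

sample_response = {
    "textAnnotations": [
        {"description": "MILK\n1.29"},  # full-text entry
        {"description": "MILK",
         "boundingPoly": {"vertices": [{"x": 10, "y": 40}]}},
        {"description": "1.29",
         "boundingPoly": {"vertices": [{"x": 180, "y": 42}]}},
    ]
}

def words_with_positions(response):
    """Skip the full-text entry; return (word, x, y) per annotation."""
    out = []
    for ann in response["textAnnotations"][1:]:
        v = ann["boundingPoly"]["vertices"][0]  # top-left vertex
        out.append((ann["description"], v["x"], v["y"]))
    return out

print(words_with_positions(sample_response))
# [('MILK', 10, 40), ('1.29', 180, 42)]
```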

Challenges we ran into: Integrating DataStax Astra into R, as no package exists; we had to write our own ORM-style layer over curl. The main challenge, however, was parsing the relevant food items out of the OCR text. For this, we developed an algorithm that takes into account the spatial location of every word and how it relates to neighboring text items. Using this information, line breaks were introduced to get a proper representation of the receipt, and the relevant food items were then parsed from the resulting lines.
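The core of the line-reconstruction idea can be sketched like this. This is a simplified Python illustration of the approach (our implementation is in R): words whose y coordinates fall within a tolerance are clustered into the same printed line, and each line is then ordered left to right by x. The tolerance and sample data are assumptions for the example:

```python
# Group OCR words into receipt lines by their vertical position, then
# sort each line horizontally to recover the printed reading order.

def group_into_lines(words, y_tol=10):
    """words: list of (text, x, y) tuples. Returns one string per line."""
    lines = []  # each entry: (representative_y, [(x, text), ...])
    for text, x, y in sorted(words, key=lambda w: w[2]):
        if lines and abs(y - lines[-1][0]) <= y_tol:
            lines[-1][1].append((x, text))   # same printed line
        else:
            lines.append((y, [(x, text)]))   # start a new line
    return [" ".join(t for _, t in sorted(ws)) for _, ws in lines]

words = [("1.29", 180, 42), ("MILK", 10, 40),
         ("BEEF", 10, 80), ("6.99", 180, 81)]
print(group_into_lines(words))
# ['MILK 1.29', 'BEEF 6.99']
```

A fixed pixel tolerance is a simplification; a robust version would scale it with the detected text height.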

Accomplishments that we're proud of: Our use case works. We can now scan a receipt (from a reasonably well-lit, clearly taken picture) and extract all the information. With a 90% success rate, our code matched the parsed items against the database and returned both the individual item scores and the overall carbon emission details.

What we learned:

Technical skills: How to use Google Cloud as a backend, and how to use Figma to create interactive prototypes.

Personal learnings: With all the hard-hitting facts we came across, we are definitely more aware of our own carbon footprints, and of how our regular purchases and the food we buy impact the world at a much more granular level. We also learned that buying completely locally does not automatically mean you are doing good.

What's next for foodPrint: Given the current lack of data availability, our first steps are to complete full-fledged data integration and finish prototype development. With that, we can do an initial beta launch and market testing. Once we have enough market adoption, i.e. proof of concept, we will proceed to secure funding for a country-wide launch and for establishing partnerships.
