Canada ranks seventh among the top ten countries responsible for roughly 70% of global greenhouse gas emissions. While a country like Bangladesh emits about 1.1 tonnes of greenhouse gases per person each year, Canadians produce 20.96 tonnes of CO2 per person.
Canadians need to be more conscious about the choices they make every day and improve the way they impact the environment. Climate change is an overwhelming problem to tackle, and it affects us in a variety of ways; for example, climate-related changes are making it increasingly difficult for Inuit hunters to reach the locations where they are used to hunting.
As users of emerging technologies such as AR and computer vision in Google Lens and Microsoft Office Lens, we have seen how these tools open up pathways to information. We have not yet seen them used to raise awareness of current issues, however. For this project we combined this technology with the issue of environmental destruction, creating a means of improving our awareness of how we impact the environment at a global scale.
What it does
With 73% of millennials willing to pay more for products sold by purpose-driven brands, and 81% of them expecting companies to publicly share their environmental efforts, there is a clear demand for environmental responsibility. Yet although the majority of these consumers want impact awareness and environmental accountability, that information is not widely available. This is where EcoVis comes in: it provides a bridge for companies that don't disclose their impact.
EcoVis uses AR and real-world data to give a physical presence to a complex, abstract problem: climate change. With EcoVis, our mission is to mitigate greenhouse gas emissions by spreading awareness of where our consumables originate and of the impact that consuming goods from these origins has on our environment. The primary function of EcoVis is to use computer vision to detect objects such as foods, then display relevant, global-scale data about each food's environmental impact.
How we built it
To recognize objects, we use augmented images as the basis of our computer vision, which lets us tailor the experience to the type of object recognized. To obtain the data it presents, the app uses flexible scrapers that pull from swappable, real-world datasets created by credible sources; the main dataset we used was produced by the Food and Agriculture Organization of the United Nations. After obtaining the data, the app uses AR to anchor it spatially to the object, creating room for user engagement. In building this tool, it was important to stay relatable to our users, so we crafted the app with a human-centric design approach, treating accessibility not just as a consideration but as a top priority, and using tools like Stark to check for problems that users with accessibility needs might encounter.
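The lookup step described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the CSV column names, the per-kilogram CO2 unit, and the function names are all assumptions standing in for whatever the real FAO-derived dataset uses.

```python
import csv
import io

# Hypothetical sketch of EcoVis's data-lookup step: once the vision layer
# returns a food label, we look up its footprint in a swappable dataset.
# The column names ("food", "co2_kg_per_kg") are illustrative assumptions.

def load_footprints(csv_file):
    """Parse a dataset mapping food names to kg CO2-eq per kg of product."""
    footprints = {}
    for row in csv.DictReader(csv_file):
        footprints[row["food"].strip().lower()] = float(row["co2_kg_per_kg"])
    return footprints

def lookup_impact(label, footprints):
    """Return the footprint for a detected label, or None if the dataset lacks it."""
    return footprints.get(label.strip().lower())

if __name__ == "__main__":
    # Tiny inline dataset instead of the real FAO export.
    sample_csv = io.StringIO("food,co2_kg_per_kg\nbeef,60.0\nbanana,0.7\n")
    data = load_footprints(sample_csv)
    print(lookup_impact("Banana", data))  # 0.7
```

Keeping the dataset behind a simple load/lookup interface like this is what makes it "swappable": a different credible source can be dropped in without touching the recognition or AR layers.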
Challenges we ran into
Our biggest challenge was our initial idea for this hackathon: an investment app aimed at teaching young working adults how to invest and which types of investments to engage in. We used the TD Da Vinci API to pull mock user data and AR as a storytelling tool. Through benchmark analysis and feedback from bank sponsors such as BMO and TD, we realized on day two, after countless hours of product thinking and design, that there were too many variables to build a concrete, feasible product in 36 hours. We decided to pivot to an issue that felt current and real to us. As usability-oriented thinkers, we again aimed to solve a real-world problem, but this time by crafting a solution for social good. While working on our second project, EcoVis, the challenges we came across were cleaning datasets and training accurate computer vision with AutoML. The difficulty with AutoML was that our bounding boxes were inaccurate given the amount of training we had done; time constraints left us no capacity for additional training, so we switched to augmented images.
Accomplishments that we're proud of
• Utilizing real datasets
• Creating a use for emerging technologies
• Pivoting and creating a presentable solution in the final eight hours of the hackathon (rip sleep)
What we learned
• Research with users and mentors is essential for validating ideas when crafting a product
• Clean and relevant datasets can be hard to find
• It’s difficult to tackle topics without prior knowledge
What's next for EcoVis - Environmental Impact Awareness
In terms of scalability, EcoVis can move forward into scanning food materials, containers, and shipping packaging and providing information about their environmental impact. This would further strengthen user awareness of every aspect of food production that has an environmental footprint.