About the project
Inspiration
Pokemon Go was a major inspiration for us. The game incentivizes players to go outside and exercise with cute monsters and a collection/leveling system. We wanted to gamify tackling litter and waste management with a similar map-based approach. This would greatly benefit the community through a fun activity that reduces litter on city streets.
What it does
Like Pokemon Go, the app lets users view themselves on their local map. Each player starts off with a little garbage monster of their own. As players walk around their city, whenever they come across trash, they can take a picture of it with the game’s camera. The game recognizes the trash in the picture and awards points to the player. With every point collected, the player’s monster grows and can eventually evolve.
How we built it
For our front end, we primarily used Next.js and TypeScript, building numerous React components for the application’s pages and UI features. A React webcam library handles the app’s camera, and we integrated the Google Maps API so players can see their location. Our backend is written in JavaScript and built on AWS: DynamoDB stores information about our users, their monsters, and their points, while Amazon Rekognition analyzes the pictures users take and measures the amount of trash they’ve collected.
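The scoring step described above can be sketched as a pure function over Rekognition's `DetectLabels` response shape (`{ Labels: [{ Name, Confidence }] }`). The label list and confidence threshold here are illustrative assumptions, not the exact values the app uses:

```typescript
// Minimal sketch of turning Rekognition labels into points.
// RekognitionLabel mirrors the relevant fields of DetectLabels' response.
interface RekognitionLabel {
  Name: string;
  Confidence: number; // 0-100, as Rekognition reports it
}

// Hypothetical set of label names we treat as trash.
const TRASH_LABELS = new Set(["Trash", "Garbage", "Litter", "Plastic Bottle", "Can"]);

// Award one point per trash label detected above a confidence threshold.
function scoreTrashLabels(labels: RekognitionLabel[], minConfidence = 80): number {
  return labels.filter(
    (l) => l.Confidence >= minConfidence && TRASH_LABELS.has(l.Name)
  ).length;
}
```

The real backend would call `DetectLabels` first and feed its `Labels` array into a function like this before writing the new point total to DynamoDB.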
Challenges we ran into
We ran into a few challenges while developing Litter Critters. The first came when deciding on a method for recognizing trash in our images. A well-trained model called TrashAI was available, but although it was deployed on a website, it exposed no API. We attempted to upload our images through the website automatically, but couldn’t within the time constraint, so we settled on Amazon Rekognition. Secondly, we struggled to implement a schema-based database with DynamoDB: when things fail in the cloud and observability isn’t set up, bugs are hard to troubleshoot and fix. Once that was working, we had trouble getting the Google Maps API to play nicely with the Next.js app router, and since the app router is such new technology, much of the troubleshooting had to be done with methods we devised ourselves rather than pulling a fix from a video or a Google search.
Accomplishments that we're proud of
Our UI and image recognition were two of our greatest achievements with Litter Critters. We opted for a simple, user-friendly UI that puts the user directly into the game when the app is launched. It guides the user to three intuitive options (profile, camera, and settings) using familiar buttons and layouts. Amazon Rekognition powers our image recognition through the backend: when a picture is taken, it is sent to the backend as a base64 string, analyzed by Rekognition for trash, and the result is returned to the front end so the user’s points can be tallied and added toward evolving their Litter Critter. We are also proud of the awesome sprites and logo.
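One detail of the base64 flow above: the webcam capture arrives as a data URL (e.g. `data:image/jpeg;base64,...`), while Rekognition's `Image.Bytes` parameter expects raw bytes, so the prefix has to be stripped and the payload decoded. A minimal sketch of that step (the function name is ours):

```typescript
import { Buffer } from "node:buffer";

// Strip the "data:image/jpeg;base64," prefix (if present) and decode
// the remaining base64 payload into raw bytes for Rekognition.
function dataUrlToBytes(dataUrl: string): Uint8Array {
  const base64 = dataUrl.split(",")[1] ?? dataUrl;
  return Buffer.from(base64, "base64");
}
```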
What we learned
Throughout the development of Litter Critters, we gained more experience with React, Next.js, Auth0, and TypeScript. Integrating a machine learning model to analyze the trash took several hours of testing different models before we ultimately settled on Amazon Rekognition. Since some of us already had AWS accounts, we decided it would be easiest to adopt other Amazon services as well, such as DynamoDB, and we gained more experience with those specific tools. The webcam library and the Google Maps API integration were some of the more interesting tools to work into the application.
What's next for Litter Critters
Next up: more in-depth trash recognition, better user authentication, and working map markers. If the TrashAI model were integrated, we could achieve stronger trash image recognition and also identify the type of trash. With trash types, we could build a more individualized profile screen that details the kinds of trash each user picks up, along with an appropriate data visualization. For example, if someone picks up a lot of recyclable bottles, their Litter Critter could evolve into a recyclable-bottle-style critter. Lastly, one feature we floated was an Echo3D integration that would calculate the volume of trash a user has collected and project a scaling 3D model onto the ground in AR, showing the user the impact of their trash collecting. The more trash they collect, the larger the model scales, and the greater the impact Litter Critters could have on its users, Philly, and the environment at large.
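The type-based evolution idea above could be sketched as a simple rule: evolve the critter into the style of the trash category the player has collected most, once a threshold is reached. The category names and threshold here are invented for illustration:

```typescript
// Hypothetical counts of trash collected per category, keyed by category name.
type TrashCounts = Record<string, number>;

// Return the dominant trash category if its count meets the threshold,
// otherwise null (no type-based evolution yet).
function pickEvolution(counts: TrashCounts, threshold = 10): string | null {
  let best: string | null = null;
  let bestCount = 0;
  for (const [category, count] of Object.entries(counts)) {
    if (count > bestCount) {
      best = category;
      bestCount = count;
    }
  }
  return bestCount >= threshold ? best : null;
}
```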
Built With
- dynamodb
- google-maps
- javascript
- next.js
- react
- rekognition
- tailwind
