Inspiration

As college students, we spend a lot of time traveling through our local areas, and in urban areas litter is hard to ignore and threatens our environment. Traffic cameras struck us as an out-of-the-box solution, so we set out to create a product that dynamically gathers data from INRIX's APIs to flag excessive amounts of trash.

What it does

Utilizing INRIX’s extensive traffic data and Camera APIs, our project takes a bounding box specified by the user, finds the camera feeds within that region, and gathers images from each of them. The images are passed to AWS Bedrock, where we query the Llama 3.2 generative AI model to determine whether trash is present; if it is, the model also identifies the type and amount of trash.
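As a rough illustration of this step, here is a minimal Python sketch of sending a single camera frame to Llama 3.2 through Bedrock's Converse API. The model ID, prompt, and JSON reply format are assumptions for the example, not our exact production values.

```python
import json
import boto3

# Sketch only: the model ID, prompt, and reply parsing below are
# illustrative assumptions, not our exact production configuration.
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

MODEL_ID = "us.meta.llama3-2-11b-instruct-v1:0"  # assumed Llama 3.2 vision model ID

PROMPT = (
    "Is there trash in this image? Reply with JSON only: "
    '{"trash_present": bool, "items": [{"type": str, "count": int}]}'
)

def analyze_camera_image(image_bytes: bytes) -> dict:
    """Send one traffic-camera frame to Llama 3.2 on Bedrock and parse the reply."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{
            "role": "user",
            "content": [
                {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
                {"text": PROMPT},
            ],
        }],
    )
    reply = response["output"]["message"]["content"][0]["text"]
    return json.loads(reply)  # assumes the model follows the JSON instruction
```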

How we built it

  • Overall Tech Stack: React.js, Node.js, Express.js, Flask, AWS Services (Bedrock, EC2), Google Maps API, INRIX Traffic Cameras in Box API, Figma.
  • Our front-end was designed in Figma and implemented with React.js. It also integrates the Google Maps API to display the map for a given city (in our case, Seattle).
  • We utilized INRIX APIs to access a given set of cameras and their feeds within a surrounding “box,” formed by providing two corner points of latitude and longitude (see the first sketch after this list). The Google Maps API dynamically provides these bounds based on the user's current map view.
  • Our backend was implemented with two layers: Express.js and Flask.
  • Express.js holds most of our API endpoints; we used it to call INRIX’s Token API and Traffic Cameras in Box API, handling the responses, whether JSON or XML, with JavaScript libraries.
  • Express.js then forwards requests to the AI-related code in our Flask backend.
  • Our Flask backend calls AWS Bedrock, where we query Llama to detect the presence of trash in a given camera image (see the second sketch after this list).
  • This data is then represented in our front-end as pins on the map, where each pin changes color depending on how much trash is observed. Each pin also lists the different types of trash detected and the number of pieces of each kind.
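To make the camera-lookup flow concrete, here is a hypothetical Python sketch of the token-then-cameras sequence. Our actual implementation lives in Express.js, and the endpoint URLs, parameter names, and response shapes below are placeholder assumptions rather than INRIX's documented API.

```python
import requests

# Placeholder endpoints: these URLs and parameter names are assumptions
# for illustration, not INRIX's documented API.
INRIX_TOKEN_URL = "https://api.example-inrix.com/auth/v1/appToken"    # assumed
INRIX_CAMERAS_URL = "https://api.example-inrix.com/cameras/v1/inBox"  # assumed

def get_cameras_in_box(app_id: str, key: str, corner1: tuple, corner2: tuple) -> list:
    """Fetch an access token, then list the cameras inside the two-corner box."""
    token_resp = requests.get(
        INRIX_TOKEN_URL, params={"appId": app_id, "hashToken": key}, timeout=10
    )
    token = token_resp.json()["result"]["token"]  # assumed response shape

    box = "{},{}|{},{}".format(*corner1, *corner2)  # "lat1,lon1|lat2,lon2"
    cams_resp = requests.get(
        INRIX_CAMERAS_URL,
        params={"box": box},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    return cams_resp.json()["result"]["cameras"]  # assumed response shape
```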
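And here is a minimal sketch of the Flask layer that Express.js calls, reusing the analyze_camera_image helper from the earlier Bedrock sketch. The route name, module name, and payload shape are assumptions for illustration.

```python
import requests
from flask import Flask, jsonify, request

from trash_detector import analyze_camera_image  # the Bedrock helper sketched above

app = Flask(__name__)

# Hypothetical route and payload shape: Express.js hands over a camera image
# URL, Flask downloads the frame, runs the Bedrock query, and returns the result.
@app.post("/analyze")
def analyze():
    image_url = request.get_json()["imageUrl"]
    image_bytes = requests.get(image_url, timeout=10).content
    return jsonify(analyze_camera_image(image_bytes))
```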

Challenges we ran into

  • Learning AWS — Our group had little to no experience with AWS services, so we were thrown into the deep end attempting to decipher all of the different services on the platform.
  • Figuring out the best dataset — From INRIX’s APIs to public datasets, we had to pinpoint which dataset would be most relevant for our project. Through careful deliberation, we identified the one that best aligned with our objective of detecting litter in cities.

Accomplishments that we're proud of

  • We are incredibly proud of how quickly we familiarized ourselves with AWS and its services. By diving into workshops and utilizing AWS's comprehensive documentation, we effectively integrated AWS's services into our project, and we're excited about the skills we've developed along the way.
  • This was our first time working with the Google Maps API on the front-end. Since it is an integral part of our product, getting the API functioning properly alongside our other technologies is an accomplishment we are proud of.
  • We are particularly proud of our back-end. We took an unconventional two-layer approach with Express and Flask, seamlessly integrating INRIX camera data retrieval in Express.js with image analysis through AWS Bedrock in Flask.

What we learned

  • We gained hands-on experience with AWS services like Bedrock and EC2 and successfully optimized them for real-time image analysis.
  • We honed our ability to engineer prompts for AI models and designed a flexible, efficient backend, which empowered us to tackle complex challenges with confidence.
  • This experience taught us how to integrate many diverse tools effectively to create a final product.

What's next for Littercator

While Littercator provides general information about trash observed around the target area, it does not yet offer a more interactive experience. We plan to let users drill down into individual locations to examine the trash there, and to expand coverage to more cities, including but not limited to Los Angeles, New York, Chicago, and Houston. In the future, we will implement a feature that opens the camera feed for an individual location and uses Amazon Rekognition to locate and better classify trash in the camera view, along the lines of the sketch below.
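Here is a hypothetical Python sketch of that planned Rekognition step. The detect_labels call is real boto3, but the label set and confidence threshold are our assumptions.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

# Assumed set of trash-like labels; the real filter would need tuning.
TRASH_LABELS = {"Trash", "Garbage", "Litter", "Plastic Bag", "Bottle", "Can"}

def locate_trash(image_bytes: bytes, min_confidence: float = 70.0) -> list:
    """Return bounding boxes for trash-like labels in a camera frame."""
    resp = rekognition.detect_labels(
        Image={"Bytes": image_bytes}, MinConfidence=min_confidence
    )
    boxes = []
    for label in resp["Labels"]:
        if label["Name"] in TRASH_LABELS:
            for inst in label.get("Instances", []):  # instances carry bounding boxes
                boxes.append({"label": label["Name"], "box": inst["BoundingBox"]})
    return boxes
```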
