Inspiration

The inspiration for SwacchhAI comes from the fact that people in rural areas often know very little about waste segregation. As a result, they tend to dump all their waste in a single place, which can lead to disease and other problems.

What it does

SwacchhAI is a waste segregation system that classifies waste from real-time data. It helps users segregate their waste properly without needing another person's help. The system classifies waste directly from a camera feed, so users don't need to take and upload pictures. It provides real-time, accurate, and time-saving output, identifies a wide range of waste classes, reports the real-time coordinates of detected waste, and offers multi-lingual support. The admin-side application keeps a record of users, shows the data they are feeding in, and can contact users if suspicious, dangerous, or hazardous waste is found.

How we built it

The project was built with a modular architecture. The process involves several steps:

  • Image Upload: An initial image is provided for processing.
  • Process Image: A Streamlit/Flask UI processes the image.
  • View Prediction: The prediction results are displayed to the user.
  • Live Webcam Feed: The webcam feed is activated to capture video input.
  • Real-Time Object Detection: An ML model detects objects in real time.
  • Waste Classification Logic: The system classifies waste as recyclable, non-recyclable, or hazardous.
  • Prediction Output: The system displays the prediction output with labels, categories, and accuracy.

The model deployment ecosystem includes PyTorch for model development, ONNX as a standard for model interchange, and support for both model formats. It also uses edge computing and cloud inference for remote model execution.

Challenges we ran into

No specific challenges were documented for this project.
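The waste classification logic in the pipeline above can be sketched as follows. This is a minimal, hedged sketch: the class names, category mapping, and score values are illustrative assumptions, not the project's actual label set, and in the real pipeline the per-class scores would come from the PyTorch/ONNX model running on each webcam frame.

```python
# Minimal sketch of the recyclable / non-recyclable / hazardous
# classification step. CLASSES and CATEGORY are hypothetical
# placeholders, not the project's real label set.

CLASSES = ["cardboard", "glass", "metal", "paper", "plastic_bag", "battery"]

# Hypothetical mapping from a predicted label to the three output categories.
CATEGORY = {
    "cardboard": "recyclable",
    "glass": "recyclable",
    "metal": "recyclable",
    "paper": "recyclable",
    "plastic_bag": "non-recyclable",
    "battery": "hazardous",
}

def predict(scores):
    """Turn raw per-class model scores into (label, category, confidence)."""
    idx = max(range(len(scores)), key=scores.__getitem__)   # argmax over scores
    label = CLASSES[idx]
    confidence = scores[idx] / sum(scores)                  # crude normalization
    return label, CATEGORY.get(label, "non-recyclable"), confidence

# Example: a frame the model scores highest for "battery".
label, category, conf = predict([0.1, 0.2, 0.1, 0.1, 0.1, 2.4])
print(label, category)  # battery hazardous
```

In the full application this step would run once per captured frame, with the label, category, and confidence drawn onto the live video feed.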
Accomplishments that we're proud of

The project's features and potential impact suggest several points of pride:

  • Creating a waste classification tool that provides real-time, accurate, and time-saving output.
  • Developing a system that helps people in rural areas who have little knowledge of waste segregation.
  • Building an admin-side application that keeps records of users and identifies hazardous waste.
  • Creating a solution with potential industrial applications, such as smart bins and factory lines.

What we learned

The project's description suggests the team gained experience in:

  • Building a real-time waste classification tool.
  • Implementing a modular architecture for an application.
  • Working with a model deployment ecosystem that includes PyTorch, ONNX, and dual model format support.
  • Integrating edge computing and cloud inference into a solution.

What's next for SwacchhAI: Live Waste Segregation

The future scope for SwacchhAI includes several planned developments:

  • Real-time FPS Counter: A real-time FPS (frames per second) counter will be integrated into the solution to visualize and monitor GPU/CPU performance.
  • Feedback System: An interactive, multi-lingual feedback system will collect user feedback and continuously improve the model's accuracy.
  • Dockerized Deployment: The full application stack will be containerized with Docker, allowing deployment on platforms like Huggingface Spaces and Streamlit Cloud for easy, seamless access.
  • Cloud Integration: Streaming model output will be connected to cloud data-analytics platforms to enable better accessibility and collaboration.
  • Input Heatmaps/Overlay: Dynamic, real-time heatmaps or overlays will visualize model accuracy, improving interpretability and debugging.
  • Model Output: Visualizers and explanations will enrich the model output with heatmaps, confidence scores, and multi-class probabilities, providing richer output on object images.
  • Performance Monitoring: Comprehensive performance monitoring will combine W&B with real-time FPS, GPU/CPU, and memory-utilization stats, automatically logging performance metrics.
  • Feedback Loop: A feedback loop will allow iterative model retraining based on user feedback and data.

Built With

  • cloud-inference
  • cpu
  • docker
  • dual-model-format-support
  • edge-computing
  • gpu
  • onnx
  • pytorch
  • streamlit/flask
  • ui
  • w&b