Inspiration

Industrial-scale deforestation remains a pressing global issue, threatening the habitats of countless species and accelerating climate change. Inspired by Dr. Seuss's The Lorax, who spoke for the trees, our team set out to create a real-time acoustic detection system that could help identify illegal logging events in remote wilderness areas and empower local communities, environmental agencies, and conservationists to respond swiftly.

What it does

Lorax - Acoustic Anti-Logging System continuously monitors ambient sound levels in a forest using a sound sensor. If the noise surpasses a set threshold—potentially indicating chainsaws, heavy machinery, or high levels of human activity—the system automatically captures a brief audio clip, transforms it into a spectrogram, and runs it through a machine learning model trained to distinguish the acoustic signatures of illegal logging from other forest noises. Upon detection, Lorax triggers a real-time notification on our dashboard, providing both the suspected logging event's label and the associated audio visualizations for quick verification.

How we built it

Hardware Setup

We employed a Phidget sound sensor to monitor sound pressure level (SPL) data continuously. The sensor provides a rapid stream of decibel readings, letting us detect when ambient noise exceeds our threshold for suspicious activity (e.g., 57 dB). When the threshold is crossed, a separate high-fidelity microphone is triggered to record a 5-second audio clip. We chose a sensitive microphone to capture detailed acoustic signatures, improving accuracy when identifying specific sounds such as chainsaws.
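The trigger logic can be sketched without the hardware attached. In the real system, the Phidget SPL-change callback would feed each decibel reading into a check like the one below; `record_clip` here is a hypothetical stand-in for starting the microphone, and the cooldown value is illustrative.

```python
SPL_THRESHOLD_DB = 57.0   # suspicious-noise threshold from our testing
COOLDOWN_S = 5.0          # don't re-trigger while a clip is still recording

class SplTrigger:
    """Fires a recording callback when SPL crosses the threshold,
    at most once per cooldown window."""

    def __init__(self, threshold_db=SPL_THRESHOLD_DB, cooldown_s=COOLDOWN_S):
        self.threshold_db = threshold_db
        self.cooldown_s = cooldown_s
        self.last_trigger = -float("inf")

    def check_spl(self, spl_db, now, record_clip):
        # Trigger only on loud readings, and only if the last clip
        # finished recording (cooldown elapsed).
        if spl_db >= self.threshold_db and now - self.last_trigger >= self.cooldown_s:
            self.last_trigger = now
            record_clip()
            return True
        return False
```

A Phidget SPL-change handler would then simply call `trigger.check_spl(dB, time.monotonic(), record_clip)` on every reading.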

Data Processing Pipeline

Threshold Triggering: As soon as the SPL crosses the defined threshold, we automatically record a short audio clip via the microphone.

Spectrogram Conversion: We use librosa to transform the raw audio data into a log-mel spectrogram, which is then visualized with matplotlib. This time-frequency representation is crucial for both training and inference with our CNN.

Machine Learning Model

We trained a convolutional neural network (CNN) on a labeled dataset of forest audio clips, featuring both benign sounds (wind, birds, rain) and illicit activities such as chainsaw noise. The CNN processes the spectrogram and outputs a probability distribution across possible classes. If the model assigns a high probability to “chainsaw,” the system flags the audio event as suspicious.
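The writeup doesn't fix a framework, so here is a minimal PyTorch sketch of such a classifier; the layer sizes, class list, and decision threshold are illustrative rather than our trained configuration.

```python
import torch
import torch.nn as nn

CLASSES = ["chainsaw", "wind", "birds", "rain"]  # illustrative label set

class SpectrogramCNN(nn.Module):
    """Tiny CNN over a 1 x n_mels x T log-mel spectrogram -> class probabilities.
    (For training, you would return raw logits and use CrossEntropyLoss;
    softmax here is for the inference sketch.)"""

    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1 so any clip length works
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.softmax(self.classifier(z), dim=1)

def is_suspicious(probs, threshold=0.8):
    """Flag the clip if the 'chainsaw' probability is high."""
    return probs[CLASSES.index("chainsaw")].item() >= threshold

model = SpectrogramCNN()
probs = model(torch.randn(1, 1, 128, 216))[0]  # ~5 s of frames at hop 512
```

The adaptive pooling means the same model accepts spectrograms of any duration, which is convenient when clip lengths vary slightly.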

Web Application

To tie everything together, we built a Flask-based web app. The interface:

  • Displays a live chart of SPL values from the Phidget sound sensor.
  • Shows the current classification results (e.g., “Area is CALM” vs. “Suspicious Activity Detected”).
  • Renders the most recent spectrogram images for visual inspection.
  • Allows users to start/stop monitoring or observe the environment in real time from a browser, providing an interactive dashboard for quick verification and decision-making.

Challenges we ran into

Data Collection: Sourcing high-quality, real-world chainsaw audio samples alongside benign forest noises proved challenging. We had to carefully curate diverse training data for robust performance.

Threshold Tuning: Determining the optimal dB threshold for "suspected activity" required iterative testing to ensure we neither missed potential illegal events nor flooded the system with false positives.

Remote & Real-Time Constraints: Designing a system that could operate reliably in remote areas, potentially with limited connectivity, called for efficient data processing and minimal reliance on external compute resources.

Accomplishments that we're proud of

End-to-End Deployment: We integrated hardware, signal processing, machine learning, and a user-friendly interface into a cohesive system capable of automatically identifying suspicious logging sounds in real time.

Accurate Classification: Despite data constraints, our model demonstrated solid accuracy in distinguishing chainsaw activity from various background forest sounds.

Conservation Impact: Above all, we’re proud to contribute a practical tool that can be leveraged by conservation groups and local authorities to protect forests and wildlife habitats.

What we learned

Importance of Domain-Specific Data: Training an accurate model for chainsaw detection required carefully selected audio samples from real-world or closely simulated forest environments.

Rapid Iteration & Testing: Integrating hardware with an AI-driven pipeline taught us the value of iterative testing—especially threshold adjustments—to refine detection accuracy.

Real-Time System Design: Managing asynchronous streams (sound sensor vs. microphone vs. web dashboard) underscored the necessity of well-structured, event-driven architectures.
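That lesson can be illustrated with a minimal asyncio sketch: sensor readings go onto a queue, and an independent consumer reacts to threshold crossings without blocking the producer (the names and readings below are illustrative).

```python
import asyncio

THRESHOLD_DB = 57.0

async def sensor(queue, readings):
    # Stands in for the live SPL stream from the sound sensor.
    for spl in readings:
        await queue.put(spl)
    await queue.put(None)  # sentinel: stream ended

async def detector(queue, alerts):
    # Consumes readings independently of the producer's pace.
    while (spl := await queue.get()) is not None:
        if spl >= THRESHOLD_DB:
            alerts.append(spl)

async def main(readings):
    queue = asyncio.Queue()
    alerts = []
    await asyncio.gather(sensor(queue, readings), detector(queue, alerts))
    return alerts
```

Decoupling producer and consumer through a queue is what keeps a slow step (recording, inference, dashboard updates) from dropping sensor readings.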

What's next for Lorax - Acoustic Anti-Logging System

Edge Implementation: We aim to deploy the ML model on edge devices like Raspberry Pi, making Lorax even more self-sufficient in areas with unreliable internet.

Extended Sound Library: Beyond chainsaws, we plan to classify other illicit activities (e.g., gunshots, vehicles in restricted areas) and track wildlife presence for broader conservation insights.

Satellite/UAV Integration: Linking the acoustic alerts to drone or satellite-based imaging could offer visual confirmation of illegal activities, enhancing the system’s reliability and response capabilities.
