Inspiration

Modern battlefields face an emerging threat: small drones used for reconnaissance, payload delivery, and swarm attacks. Traditional radar and visual systems struggle with low-altitude targets, especially at night or in adverse weather. We built AALIS (Acoustic Autonomous Lightweight Interception System) to provide 24/7 drone identification through passive acoustic detection, giving warfighters ears that keep working when traditional sensors fail.

What it does

AALIS identifies drone models from their acoustic signatures in ~240ms, providing actionable intelligence for threat assessment and response:

  • Drone model classification with confidence scoring
  • Threat assessment metadata: payload capacity, rotor count, threat level (surveillance/payload delivery)
  • SNR-aware adaptive thresholding for reliability in noisy operational environments
  • Extensible threat library updated in the field via web interface
  • Passive detection requiring no active emissions (RF-silent operation)

The system works where visual and radar systems can't: darkness, fog, dense terrain, and RF-denied environments.

How we built it

Frontend (React): Lightweight web interface for field operators—captures audio via tactical microphones or system audio, displays threat assessments with actionable metadata.

Backend (Go): Battle-hardened processing pipeline:

  • Audio preprocessing with automatic gain control for varying distances
  • Dual-mode feature extraction (PANNS deep learning + hand-crafted fallback)
  • KNN classifier with adaptive thresholding for noisy environments
  • Comprehensive threat metadata (payload capacity, threat level, risk category)
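To make the KNN step concrete, here is a minimal Python sketch (the production pipeline is Go; the prototype library, feature vectors, and threshold values below are hypothetical placeholders, not the shipped model):

```python
import math

# Hypothetical prototype library: label -> list of stored feature vectors.
PROTOTYPES = {
    "mavic":   [[0.20, 0.80, 0.10], [0.25, 0.75, 0.12]],
    "matrice": [[0.90, 0.10, 0.50], [0.85, 0.15, 0.48]],
}

def classify(features, k=3, max_distance=1.0):
    """Return (label, confidence), or (None, 0.0) when no prototype is close enough."""
    scored = []
    for label, protos in PROTOTYPES.items():
        for p in protos:
            scored.append((math.dist(features, p), label))  # Euclidean distance
    scored.sort(key=lambda t: t[0])
    neighbors = scored[:k]
    # Reject the detection outright if even the nearest prototype is too far away.
    if neighbors[0][0] > max_distance:
        return None, 0.0
    votes = {}
    for _, label in neighbors:
        votes[label] = votes.get(label, 0) + 1
    best = max(votes, key=votes.get)
    return best, votes[best] / k
```

The `max_distance` rejection is where an adaptive threshold can plug in: a noise-aware system can tighten or relax it based on measured SNR rather than hard-coding it.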

ML Pipeline (Python): Rapid retraining capability for new threat signatures encountered in theater.

Key Innovation: Z-score feature normalization ensures reliable discrimination even between similar drone models, critical when distinguishing friendly vs. hostile platforms.

Challenges we ran into

Environmental Noise: Battlefields are loud. We implemented SNR-aware adaptive thresholding that dynamically adjusts confidence requirements based on ambient noise levels, preventing false positives during artillery fire or vehicle operations.
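One way to sketch SNR-aware thresholding in Python (the function names, linear ramp, and constants here are illustrative assumptions, not the actual implementation):

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels from RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)

def confidence_threshold(snr, clean_snr=30.0, base=0.60, max_threshold=0.90):
    """Raise the required classifier confidence as SNR degrades.

    At clean_snr dB or better, the base threshold applies; as SNR drops
    toward 0 dB the threshold ramps linearly up to max_threshold, so a
    noisy scene demands a much more confident match before alerting.
    """
    if snr >= clean_snr:
        return base
    frac = max(snr, 0.0) / clean_snr  # 0.0 (very noisy) .. 1.0 (clean)
    return max_threshold - frac * (max_threshold - base)

def accept(confidence, snr):
    """A detection fires only if confidence clears the SNR-adjusted bar."""
    return confidence >= confidence_threshold(snr)
```

With these numbers, an 80%-confidence match is accepted in a quiet scene (threshold 0.60) but suppressed at 0 dB SNR (threshold 0.90), which is exactly the false-positive behavior you want during artillery fire.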

Feature Discrimination: Initial prototypes couldn't distinguish between similar models. One acoustic feature dominated the classifier, making a Mavic look identical to a Matrice. Z-score standardization across all 19 acoustic features solved this, enabling reliable threat identification.
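The fix amounts to a per-column z-score pass over the feature matrix, sketched below in stdlib Python (illustrative; the real pipeline standardizes 19 acoustic features):

```python
import math

def zscore_columns(rows):
    """Standardize each feature column to zero mean and unit variance.

    This stops a single large-magnitude feature (e.g. raw spectral energy)
    from dominating Euclidean distances in the downstream KNN classifier.
    """
    n_features = len(rows[0])
    means, stds = [], []
    for j in range(n_features):
        col = [r[j] for r in rows]
        mu = sum(col) / len(col)
        var = sum((x - mu) ** 2 for x in col) / len(col)
        means.append(mu)
        stds.append(math.sqrt(var) or 1.0)  # guard against constant columns
    return [[(r[j] - means[j]) / stds[j] for j in range(n_features)]
            for r in rows]
```

After this pass every feature contributes on the same scale, so two acoustically similar airframes separate on the features that actually differ rather than the one with the biggest raw numbers.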

Real-time Performance: Operators need immediate intelligence. We optimized the pipeline to sub-250ms inference while maintaining accuracy—fast enough for integration with counter-UAS weapons systems.
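Holding a latency budget like this is easiest to enforce with a small timing harness around the inference call. A hedged sketch (the budget constant and helper names are ours, not the project's API):

```python
import time

LATENCY_BUDGET_MS = 250.0  # assumed end-to-end inference budget

def timed_call(fn, *args, **kwargs):
    """Run fn and report wall-clock latency in milliseconds."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, (time.perf_counter() - t0) * 1000.0

def within_budget(elapsed_ms, budget_ms=LATENCY_BUDGET_MS):
    return elapsed_ms <= budget_ms
```

Wrapping each pipeline stage in `timed_call` during profiling shows which stage eats the budget, which is how sub-250ms targets get verified rather than assumed.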

Field Updateability: New drone models appear constantly. We built a web-based prototype upload system allowing operators to update the threat library in theater without technical expertise.

Accomplishments that we're proud of

  • Sub-250ms inference suitable for integration with kinetic/electronic countermeasures
  • Passive detection with zero RF emissions—nothing for adversary RF sensors to detect
  • Noise-resilient classification maintaining accuracy in operational environments
  • Field-updateable threat library via simple web interface
  • Comprehensive threat metadata for command decision-making
  • Production-ready architecture deployable on tactical edge hardware

What we learned

Acoustic signatures are stable identifiers: While visual appearance can be modified and RF signatures changed, the fundamental acoustic properties of rotor systems remain consistent—making acoustic classification highly resistant to adversary countermeasures.

Adaptive systems for adaptive threats: Hard-coded thresholds fail in real-world operations. Our SNR-aware system maintains reliability across diverse environments from urban to desert operations.

Feature engineering reveals vulnerabilities: Analyzing spectral kurtosis and harmonic structure taught us that different motor/rotor configurations create distinct acoustic "fingerprints"—knowledge applicable to future threat analysis.
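As a toy illustration of the "peakiness" idea, here is a kurtosis measure over a magnitude spectrum (a simplification: spectral kurtosis is often defined per frequency bin across time frames, whereas this treats one spectrum as a distribution):

```python
import math

def spectral_kurtosis(magnitudes):
    """Kurtosis of a magnitude spectrum.

    Spectra dominated by a few sharp rotor harmonics score high;
    flat, broadband noise scores low. A perfectly flat spectrum has
    zero variance, which we report as 0.0.
    """
    n = len(magnitudes)
    mu = sum(magnitudes) / n
    var = sum((m - mu) ** 2 for m in magnitudes) / n
    if var == 0:
        return 0.0
    return sum((m - mu) ** 4 for m in magnitudes) / (n * var ** 2)
```

Features in this family are what let the classifier tell a two-blade, high-RPM rotor from a larger multi-rotor configuration even when both are just "drone noise" to the ear.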

Operational simplicity matters: The best technology is useless if operators can't use it under stress. We prioritized intuitive interfaces and automated processing over complex configuration.

What's next for AALIS

  • Integration with counter-UAS systems for automated threat response
  • Multi-drone detection for swarm identification
  • Directional acoustic arrays for bearing estimation
  • Edge deployment on ruggedized tactical hardware
  • Expanded threat library with adversary drone models

Try it yourself

The complete source code and documentation are available on GitHub: https://github.com/IAMAMZ/aalice-drone-detection-knn-backend

Quick Start

Prerequisites: Go 1.21+, Node 18+, FFmpeg

# Clone the repository
git clone https://github.com/IAMAMZ/aalice-drone-detection-knn-backend.git
cd aalice-drone-detection-knn-backend

# Setup model
cd server
cp drone/prototypes.example.json drone/prototypes.json

# Start backend (terminal 1)
go run . serve -proto http -p 5000

# Start frontend (terminal 2)
cd client
npm install
npm start

Visit http://localhost:3000 and grant microphone permissions to test drone classification.

Optional: For enhanced accuracy with PANNS embeddings:

cd ml
pip install -r requirements-panns.txt
python embedding_service.py

Full documentation is available in the repository, including the training pipeline, API reference, and deployment guides.


Built With

Languages:

  • Go
  • JavaScript
  • Python

Frameworks & Libraries:

  • React
  • PANNS (Pretrained Audio Neural Networks)
  • librosa
  • scikit-learn

Tools:

  • FFmpeg
  • WebSocket
  • Web Audio API

ML Techniques:

  • K-Nearest Neighbors with adaptive thresholding
  • Deep learning embeddings (2048-dimensional)
  • Acoustic feature engineering (spectral, temporal, harmonic)