EchoElephant Denoiser — Hackathon Journey

Inspiration

I came into this hackathon as a first-time hacker and freshman, honestly not expecting much of myself and as anxious as could be. I didn't have a team, and on multiple occasions I thought I did, but then something would happen: someone would leave, join another team, or the team would fill up. At one point I seriously considered dropping out because it felt like I was already behind everyone else and I didn't think I'd make it on my own.

But I stayed.

I decided to just see how far I could get. And I'm glad I did. I've learned so much through this entire event, not only about coding, but about myself. I learned that no matter how scary or difficult a situation may seem, there's no harm in trying. Even if I fail, I should still be proud that I tried. I also learned a lot about coding. This was, believe it or not, my first time working on the frontend of a project. And man, did I enjoy it. Having the freedom and creativity to make my app look however I wanted was so exciting and freeing. Whether I win something or not, I am still proud of myself.

The elephant challenge was very interesting to me. I'm someone who absolutely loves animals and wildlife, so getting the opportunity to build something that could potentially help them while also building my programming skills seemed like a chance I couldn't just pass up.

So, I created EchoElephant, an elephant call isolator. The goal of this project was to isolate elephant vocalizations from real-world noisy environments so that researchers can analyze the recordings more effectively.


What I Built

I built a full-stack machine learning web application that:

  • Takes an uploaded audio file
  • Converts it into a spectrogram representation
  • Uses a trained U-Net model to predict a noise-reduction mask
  • Applies signal processing techniques to isolate meaningful audio features
  • Reconstructs a cleaned audio file for playback

Mathematically, the core idea can be expressed as:

\[ \hat{M}(f, t) = f_\theta(X(f, t)) \]

where:

  • \( X(f, t) \) is the noisy magnitude spectrogram
  • \( f_\theta \) is the neural network (U-Net)
  • \( \hat{M}(f, t) \) is the predicted denoising mask

The final cleaned spectrogram is computed as:

\[ X_{\text{clean}}(f, t) = X(f, t) \cdot \hat{M}(f, t) \]
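As a toy illustration of that element-wise masking product (the shapes and random values here are made up for the example, not the model's actual ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a noisy magnitude spectrogram |X(f, t)|
# and a predicted mask M-hat with values in [0, 1)
X = rng.random((257, 100))
M_hat = rng.random((257, 100))

# Element-wise product suppresses time-frequency bins the mask marks as noise
X_clean = X * M_hat

assert X_clean.shape == X.shape
```

Because the mask values stay in [0, 1], each bin of the cleaned spectrogram can only be attenuated, never amplified.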


How I Built It

Backend (FastAPI + PyTorch)

I built a Python backend using FastAPI, which handles:

  • File uploads
  • Audio preprocessing (librosa, NumPy)
  • Model inference using a PyTorch U-Net
  • Reconstruction of the final audio output

Audio Pipeline

  1. Load audio file
  2. Convert waveform → spectrogram
  3. Apply log scaling for stability
  4. Run inference on the spectrogram
  5. Apply post-processing (smoothing, clipping, filtering)
  6. Convert back to waveform

Frontend (React)

The frontend is a simple but functional React app:

  • Upload audio files
  • Trigger backend processing
  • Display and play cleaned audio output

Challenges I Faced

This project pushed me more than I expected.

1. Doing Everything Alone

Not having a team meant I had to:

  • Design the ML pipeline
  • Build the backend API
  • Build the frontend UI
  • Debug deployment and feature issues
  • Create the presentation, demo video, and elevator pitch, all on my own

There were moments where everything broke at once and I had no one to ask. But I kept pushing through until I got a project I was proud of.


2. Being a First Time Hacker

Not having any prior experience made it that much more difficult:

  • I was lost at the beginning, not knowing where anything was or even how to check in
  • I felt anxious about all the work I had to do alone in a short period of time
  • I had less experience than everyone around me
  • I didn't feel like I had someone to talk to or ask questions

3. Audio Processing Instability

The model pipeline was sensitive:

  • Spectrogram shape mismatches caused crashes
  • Some runs produced empty audio outputs
  • The system occasionally hung during inference

I had to debug step-by-step through:

  • tensor shapes
  • masking logic
  • the reconstruction pipeline
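A hypothetical example of the kind of shape guard that helped while debugging: each stage asserts the shapes it expects, so a mismatch fails loudly at the stage that caused it instead of crashing somewhere downstream.

```python
import numpy as np

def apply_mask(spec: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Fail fast with a readable message instead of a cryptic crash later
    assert spec.ndim == 2, f"expected a (freq, time) spectrogram, got ndim={spec.ndim}"
    assert mask.shape == spec.shape, f"mask {mask.shape} != spectrogram {spec.shape}"
    return spec * mask
```

Sprinkling checks like this through the pipeline turned "empty audio output" mysteries into immediate, pinpointed errors.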

4. Environment Issues

Even running the backend was a challenge:

  • missing dependencies (torch, soundfile)
  • virtual environment confusion
  • uvicorn import errors

5. Adding More Made It Worse

As I got deeper into the project, adding new features kept breaking existing ones:

  • I'd have a good prototype down, then I'd add another feature and, boom, nothing worked anymore
  • I had to go back and debug the same code over and over
  • Overcomplicating things forced me back to simpler models

I spent a surprising amount of time just getting the system to run consistently.


What I Learned

This project taught me more than any class so far:

  • How full-stack ML systems actually work in practice
  • How fragile real-world pipelines can be
  • How many different and fun things you can do with the UI
  • How to debug across frontend, backend, and model layers simultaneously
  • How much of a lifesaver GitHub is
  • It's better to be proud of yourself for trying and failing than to never try at all
  • Most importantly: how to keep going even when you're alone and everything breaks

Final Reflection

I didn’t finish this project as perfectly as I imagined when I started.

But I finished it all by myself.

And somewhere between debugging Python errors at 2 AM and fixing React hook crashes, I learned that building real systems isn't about everything working smoothly; it's about staying in the fight long enough to make something real.

Even if it’s imperfect.

Even if you’re alone.

Even if it almost doesn’t work.


Closing Thought

If elephants can communicate across miles of noise and chaos, maybe this project is just my first attempt at listening to myself properly.

Built With

  • Python (FastAPI, PyTorch, librosa, NumPy)
  • React