## Inspiration

We wanted a music tool that feels as effortless as humming a tune. The gap between “idea in your head” and “a song you can play” is huge, especially for non‑producers. Hum AI closes that gap.

## What it does

Hum AI records a short hum, analyzes its pitch, tempo, and mood, and generates layered accompaniment around it. In a DAW‑style studio you can mix, mute/solo, pan, add effects, and export. You can also add beats and instruments with natural‑language prompts and an AI agent.
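
To make the analysis step concrete, here is a minimal sketch of one way to estimate a hum's pitch in the browser: plain time‑domain autocorrelation over the recorded samples. The function name and thresholds are illustrative, not our exact code.

```typescript
// Sketch: estimate the dominant pitch of a recorded hum buffer via
// time-domain autocorrelation. Names and thresholds are illustrative.
function estimatePitchHz(samples: Float32Array, sampleRate: number): number | null {
  const size = samples.length;

  // Skip near-silent input to avoid spurious detections.
  let energy = 0;
  for (let i = 0; i < size; i++) energy += samples[i] * samples[i];
  if (Math.sqrt(energy / size) < 0.01) return null;

  // Search lags covering roughly 80 Hz..1000 Hz, a typical humming range.
  const minLag = Math.floor(sampleRate / 1000);
  const maxLag = Math.floor(sampleRate / 80);
  let bestLag = -1;
  let bestCorr = 0;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < size; i++) corr += samples[i] * samples[i + lag];
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  return bestLag > 0 ? sampleRate / bestLag : null;
}
```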

## How we built it

Next.js + React UI, Zustand for state, Tone.js for audio rendering, WaveSurfer for waveforms. Gemini handles hum analysis and MIDI prompts; ElevenLabs handles beat and vocal generation. On top of that we built a mixer engine, an effects chain, and an AI agent tool layer.
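
As a sketch of how the pieces connect, here is a simplified version of what the mixer state layer can look like: a Zustand store whose actions drive Tone.js Channel nodes. The store shape and action names are simplified stand‑ins for the real ones.

```typescript
import { create } from "zustand";
import * as Tone from "tone";

interface MixerState {
  channels: Record<string, Tone.Channel>;
  addTrack: (id: string) => void;
  setPan: (id: string, pan: number) => void; // -1 (left) .. 1 (right)
  toggleMute: (id: string) => void;
  toggleSolo: (id: string) => void;
}

export const useMixer = create<MixerState>((set, get) => ({
  channels: {},
  addTrack: (id) =>
    set((s) => ({
      // Each track gets its own Channel (volume/pan/mute/solo) routed to output.
      channels: { ...s.channels, [id]: new Tone.Channel().toDestination() },
    })),
  setPan: (id, pan) => {
    get().channels[id].pan.value = pan;
  },
  toggleMute: (id) => {
    const ch = get().channels[id];
    ch.mute = !ch.mute;
  },
  toggleSolo: (id) => {
    const ch = get().channels[id];
    ch.solo = !ch.solo; // soloing one channel mutes the non-soloed ones
  },
}));
```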

## Challenges we ran into

Keeping generated audio layers in sync without noticeable latency, making AI‑generated parts feel musically coherent, and playing mixed audio in the browser without glitches.
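
The pattern that helps with sync in Tone.js, shown here as a minimal sketch with illustrative names: decode every buffer before playback, sync all players to the shared Transport, and start slightly in the future so the audio thread has headroom.

```typescript
import * as Tone from "tone";

async function playLayersInSync(urls: string[]): Promise<void> {
  const players = urls.map((url) => new Tone.Player(url).toDestination());

  await Tone.loaded(); // wait until every buffer is fully decoded

  // Sync each player to the Transport so all layers share one clock,
  // starting together at transport time 0.
  players.forEach((p) => p.sync().start(0));

  await Tone.start();           // resume the AudioContext (needs a user gesture)
  Tone.Transport.start("+0.1"); // schedule 100 ms ahead to avoid startup glitches
}
```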

## Accomplishments that we're proud of

A seamless “hum → song” flow, real mixing controls, live waveforms, and an AI agent that can manipulate tracks with simple commands.
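
To show what “simple commands” means mechanically, here is an illustrative sketch of an agent tool layer: each mixer action becomes a named tool the model can invoke with JSON arguments. It builds on the hypothetical useMixer store from the sketch above; the tool names and dispatch are made up for the example.

```typescript
import { useMixer } from "./mixer"; // the store from the earlier sketch (hypothetical path)

type ToolHandler = (args: Record<string, unknown>) => string;

// Registry of mixer actions exposed to the model as callable tools.
const tools: Record<string, ToolHandler> = {
  set_pan: ({ trackId, pan }) => {
    useMixer.getState().setPan(String(trackId), Number(pan));
    return `panned ${trackId} to ${pan}`;
  },
  toggle_mute: ({ trackId }) => {
    useMixer.getState().toggleMute(String(trackId));
    return `toggled mute on ${trackId}`;
  },
};

// When the model responds with a tool call such as
// { name: "set_pan", args: { trackId: "drums", pan: -0.3 } },
// route it through the registry and return the result as feedback.
function dispatchToolCall(name: string, args: Record<string, unknown>): string {
  const handler = tools[name];
  if (!handler) return `unknown tool: ${name}`;
  return handler(args);
}
```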

## What we learned

Great UX beats complexity; fast feedback matters. Audio in the browser is powerful but needs careful scheduling. AI outputs need guardrails and smart defaults.
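
As a tiny example of the guardrails point: a model‑suggested parameter should never reach the mixer raw. A sketch, with illustrative ranges:

```typescript
// Clamp a model-suggested value into a safe range, falling back to a
// sensible default when the model returns something non-numeric.
function clamp(value: number, min: number, max: number, fallback: number): number {
  if (!Number.isFinite(value)) return fallback;
  return Math.min(max, Math.max(min, value));
}

const safePan = (aiPan: unknown) => clamp(Number(aiPan), -1, 1, 0);       // default: center
const safeTempo = (aiBpm: unknown) => clamp(Number(aiBpm), 40, 220, 120); // keep BPM musical
```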

## What's next for Hum AI

Better timing control for vocals, richer style control, stem exports, collaboration, and more “one‑click” creative presets.
