HushMap: Your Favorite AI Room Monitor and Librarian
About the project!
HushMap is a real‑time campus noise and occupancy monitoring system that combines IoT microphones, AI voice assistance, and computer vision on a live interactive map. It answers one of college students’ biggest and most common questions: Where can I study quietly right now?
Inspiration
- Constant frustration with study areas: they were too loud to work in, and walking across campus only to find a noisy or full space wasted too much time.
- Existing tools showed schedules for study rooms, but gave nothing about actual ambient conditions and noise levels. We wanted real‑time data.
What HushMap does
- Live noise map – interactive campus map showing current decibel levels in common study locations.
- Historical trends – 24‑hour playback and per‑location noise charts.
- TerpAI voice assistant – ask “Where’s the quietest spot to study?” and get a spoken answer backed by live sensor data.
- Hardware integration – M5GO nodes continuously sample audio; when noise exceeds a threshold, the device can audibly remind people to keep quiet.
- Occupancy estimation – YOLOv8 computer vision analyzes webcam snapshots to estimate how packed a room is and how many seats are free.
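Once YOLOv8 has produced a person count, the seat-availability estimate reduces to a small helper. A minimal sketch of that idea in plain Python (the capacities, thresholds, and `estimate_occupancy` name are illustrative assumptions, not our exact implementation):

```python
# Sketch: turn a YOLOv8 person count into the occupancy summary shown on the map.
# Capacities and status thresholds here are illustrative assumptions.

def estimate_occupancy(person_count: int, capacity: int) -> dict:
    """Return free seats and a coarse 'how packed' label for a room."""
    free = max(capacity - person_count, 0)
    ratio = min(person_count / capacity, 1.0) if capacity else 1.0
    if ratio < 0.5:
        status = "open"
    elif ratio < 0.85:
        status = "filling up"
    else:
        status = "packed"
    return {"free_seats": free, "occupancy": round(ratio, 2), "status": status}

print(estimate_occupancy(12, 40))  # a room that is roughly one-third full
```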
How we built it
System architecture
| Component | Technology | Role |
|---|---|---|
| Frontend PWA | SvelteKit, Vite, Bun | Interactive map, charts, voice‑assistant UI |
| Backend API | FastAPI (Python) | REST + WebSocket endpoints, sensor ingestion |
| Database | MongoDB | Store noise readings, location metadata |
| AI Voice Pipeline | ElevenLabs (TTS), TerpAI (LLM), faster‑whisper (STT) | Conversational query understanding and spoken responses |
| Computer Vision | YOLOv8, OpenCV | Occupancy inference from camera frames |
| IoT Nodes | M5GO (MicroPython) | I2S microphone sampling, HTTP data push |
Data flow
- M5GO devices capture ambient audio, compute the average decibel level over a short window, and POST it to /api/sensors every few seconds.
- Backend stores readings in MongoDB and pushes updates over WebSockets to connected frontends.
- Frontend map updates noise levels in real time, drawing heatmap overlays.
- Voice‑assistant queries are streamed via WebSocket:
- Audio chunks → Whisper transcription → TerpAI context injection (current dB stats from DB) → ElevenLabs TTS response.
- Vision endpoint (/api/vision/room-status) accepts an image, runs YOLOv8, and returns a person count and estimated seat availability.
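The per-window decibel computation on the sensor nodes boils down to an RMS-to-dB conversion over a buffer of samples. A rough sketch of that step in plain Python (the full-scale reference and window handling are assumptions; the actual firmware runs MicroPython on the M5GO):

```python
import math

def window_db(samples: list, ref: float = 32768.0) -> float:
    """Average level of one audio window in dB relative to full scale (dBFS).

    `samples` are signed 16-bit PCM values from the I2S microphone;
    `ref` is the full-scale reference (an assumption, not M5GO-specific).
    """
    if not samples:
        return -float("inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return -float("inf")
    return 20 * math.log10(rms / ref)

# A full-scale square wave sits at 0 dBFS; quieter signals come out negative.
print(window_db([32768, -32768] * 512))
```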
Challenges we ran into
- Hardware reliability – Getting I2S microphone blocks working reliably on M5GO devices, handling network dropouts, and keeping power consumption low. We initially tried to use a Seeed Studio XIAO ESP32-S3 Sense as a camera, but the hardware was not working properly, so we switched to running YOLOv8 on webcam snapshots instead.
- Accessibility – Making sure the different visual modes rendered correctly and that the site remained usable through its various accessibility features.
- Real‑time synchronization – WebSocket fan‑out with concurrent sensor streams required careful handling of connection state and backpressure.
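The fan-out problem from the last bullet can be handled with a bounded per-client queue that drops the oldest reading when a slow consumer falls behind. A simplified asyncio sketch of that idea (the queue size and class names are assumptions, not our exact backend code):

```python
import asyncio

class Broadcaster:
    """Fan out sensor readings to many consumers with drop-oldest backpressure."""

    def __init__(self, maxsize: int = 8):
        self.maxsize = maxsize
        self.queues = set()

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue(maxsize=self.maxsize)
        self.queues.add(q)
        return q

    def unsubscribe(self, q: asyncio.Queue) -> None:
        self.queues.discard(q)

    def publish(self, reading: dict) -> None:
        for q in self.queues:
            if q.full():               # slow client: evict the stalest reading
                q.get_nowait()
            q.put_nowait(reading)

async def demo() -> list:
    hub = Broadcaster(maxsize=2)
    q = hub.subscribe()
    for db in (41.0, 55.5, 62.3):      # third publish evicts the first reading
        hub.publish({"db": db})
    return [await q.get(), await q.get()]

print(asyncio.run(demo()))
```

Each WebSocket handler would own one queue, forwarding readings to its client and unsubscribing on disconnect, so one stalled connection never blocks the rest.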
Accomplishments that we're proud of
- Fully functional map + data visualization – Heatmap and trend charts update live with no need for manual refresh.
- TerpAI and ElevenLabs integration – The assistant answers questions like “Is McKeldin quiet?” using actual sensor history, and it both listens and talks back to you.
- End‑to‑end IoT pipeline – From microphone sampling to a live map, we built the entire project in under 36 hours!
What we learned
- Hardware – We learned that reliable microphone sampling on M5GO devices requires careful power, Wi‑Fi, and buffer management.
- Analysis of network processes – We analyzed the data flow between API calls to ensure a seamless hand‑off between multiple AI and LLM engines.
- Friendship and the power of White Monsters – Half of us pulled all‑nighters, but we got the project done, so it was great! Go friendship! And go White Monster Energy Drink Zero Ultra!
What's next for HushMap
- Combined occupancy analysis – Merge noise levels with YOLOv8 person counts to predict both quietness and seat availability, with map dots expanding and contracting depending on how packed a study area is.
- Mobile push notifications – Alert users when their favorite study spot drops below a chosen noise threshold.
- More campus‑wide sensor coverage – Deploy additional M5GO nodes in high‑traffic libraries, lounges, and across campus.

