Inspiration

One of our teammates was diagnosed with autism at the age of two. He has spent his life navigating a world that wasn't built for how his brain actually works — a world where attention, focus, and sensory regulation operate by rules different from the default.

For years, he developed personal strategies. Apps for focus. Reminders. Meeting notes. Workarounds for the moments his attention drifted. None of them understood what was actually happening inside his head. They just reacted after the fact.

We wanted to build something that didn't just react — something that listened. Something that could meet his mind on its own terms, in real time, and adapt the world around him before he had to ask.

That's Pepper.

And once we started building it, we realized: this isn't just for him. It's for everyone whose brain doesn't fit the default. People with ADHD. People recovering from depression who need gentle nudges back to focus. Patients in early-stage Alzheimer's who deserve to feel anchored to the moments they're forgetting. Students drowning in lectures their attention can't hold. Engineers who lose hours to context-switching they didn't choose.

Pepper is a brain-computer interface that listens to the people the world has stopped listening to.

What it does

Pepper is a wearable EEG headset paired with six AI services that read your cognitive state in real time and autonomously adapt your digital environment to match.

The headset captures brain wave activity (alpha, beta, theta) and heart rate. Six AI services interpret that data and take action:

  • Cognitive State Agent — Classifies your mental state every second: focused, calm, stressed, or drowsy. (A minimal classification sketch follows this list.)
  • Environment Agent — Switches your Spotify playlist as your cognitive state changes.
  • Wellness Agent — Detects when stress crosses a threshold. Vibrates haptic motors on your temples. Speaks a calm, empathetic coaching message. Triggers a guided breathing animation.
  • Guardian Agent — A 2-minute cooldown layer so haptic and voice interventions can't fire too frequently and become more disruptive than the stress itself.
  • Meeting Agent — Records meetings while tracking your attention timeline. When your mind wanders, it logs the timestamp. At the end, it generates a summary AND a personalized "What You Missed" report that catches you up on the exact content from the moments your brain wasn't there.
  • Coach Agent — At the end of a session, generates a narrative productivity report: peak focus periods, stress triggers, and personalized recommendations for tomorrow.
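For a concrete sense of how the first agent works, here is a minimal sketch of threshold-based classification over the three band powers and heart rate. Pepper currently uses generic band-power thresholds (see the roadmap below); the specific cutoff values and the function name here are illustrative assumptions, not the production logic.

```python
# Illustrative sketch of threshold-based cognitive state classification.
# All numeric thresholds are hypothetical placeholders.

def classify_state(alpha: float, beta: float, theta: float, heart_rate: float) -> str:
    """Map one second of EEG band powers plus heart rate to a cognitive state."""
    total = alpha + beta + theta
    if total == 0:
        return "calm"  # no usable signal; fall back to a neutral state
    # Relative band powers are more robust than absolute microvolt values.
    alpha_r, beta_r, theta_r = alpha / total, beta / total, theta / total

    if theta_r > 0.5:                     # theta-dominant: drowsiness
        return "drowsy"
    if beta_r > 0.5 and heart_rate > 95:  # high beta plus elevated HR: stress
        return "stressed"
    if beta_r > 0.4:                      # beta-dominant: engaged focus
        return "focused"
    return "calm"                         # alpha-dominant resting state

print(classify_state(alpha=12.0, beta=30.0, theta=8.0, heart_rate=72))  # focused
```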

The headset itself is built from scratch: AD620 instrumentation amplifiers, Ag/AgCl forehead electrodes, an ESP32 microcontroller, a MAX30102 pulse sensor, vibration motors, and a WS2812B LED strip that reflects your real-time brain state in color.

Use cases beyond focus

  • Autism support — Real-time sensory and cognitive feedback. Helps the wearer understand their own patterns without judgment, and helps loved ones understand them too.
  • Depression treatment adjuncts — Many patients describe an inability to recognize their own state until it's too late. Pepper makes invisible patterns visible.
  • Early Alzheimer's care — The "What You Missed" feature isn't just for distracted moments — it's for forgotten ones. Imagine a tool that quietly catches a patient up after a conversation slipped away from them.
  • Studying & exam prep — Pepper protects your focus, detects when fatigue is setting in, and generates a session report showing exactly when and why your attention dropped.
  • Knowledge work — Engineers, writers, and analysts lose hours to context-switching. Pepper guards against it actively.

How we built it

Hardware:

  • DIY EEG analog front-end built around the AD620 instrumentation amplifier (~1000× gain)
  • Custom passive filter chain (high-pass at 0.48 Hz, low-pass at 34 Hz) to isolate the EEG band and reject 60 Hz hum — the bandpass is done entirely in hardware (a worked cutoff calculation follows this list)
  • Voltage bias network to center the signal at 1.65 V for the ESP32 ADC
  • ESP32 WROOM-32 sampling at 256 Hz over GPIO36, streaming 1-second batches of raw voltage data over WiFi WebSocket
  • MAX30102 pulse sensor on the I²C bus
  • WS2812B addressable LED strip and 2N3904-driven coin vibration motors for haptic and visual feedback
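A quick note on where those filter cutoffs come from: each stage is a first-order RC section, so the cutoff follows f_c = 1 / (2πRC). The component pairings below are one plausible combination that reproduces our measured cutoffs; the actual values on our board may differ.

```python
import math

def rc_cutoff(r_ohms: float, c_farads: float) -> float:
    """First-order RC filter cutoff frequency: f_c = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Hypothetical component pairings that land on our cutoffs:
print(rc_cutoff(330e3, 1e-6))  # high-pass: ~0.48 Hz (330 kΩ, 1 µF)
print(rc_cutoff(4.7e3, 1e-6))  # low-pass:  ~33.9 Hz (4.7 kΩ, 1 µF)
```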

Backend:

  • FastAPI WebSocket server receives raw EEG samples at 256 Hz
  • NumPy FFT (numpy.fft.rfft) extracts alpha (8–13 Hz), beta (13–30 Hz), and theta (4–8 Hz) band powers from a single EEG channel (sketched after this list)
  • Six AI services orchestrated through Gemini 2.5 Flash. One of them — the meeting summarizer — runs on Google's Agent Development Kit (ADK) with InMemorySessionService for stateful session handling. The other five are prompt-driven services using google-genai directly
  • Spotify Web API for real-time playlist control
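Here is a condensed sketch of the band-power pipeline named above: a FastAPI WebSocket endpoint accepts a one-second batch of 256 raw samples and extracts the three band powers with numpy.fft.rfft. The route name, JSON framing, and DC-offset handling are simplifying assumptions; the real server also fans the results out to the six agents.

```python
import json
import numpy as np
from fastapi import FastAPI, WebSocket

app = FastAPI()
FS = 256  # sampling rate (Hz): one batch = one second of EEG

def band_powers(samples: np.ndarray) -> dict[str, float]:
    """Sum spectral power over each EEG band for one 1-second window."""
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))  # drop DC offset
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)

    def power(lo: float, hi: float) -> float:
        mask = (freqs >= lo) & (freqs < hi)
        return float(np.sum(spectrum[mask] ** 2))

    return {"theta": power(4, 8), "alpha": power(8, 13), "beta": power(13, 30)}

@app.websocket("/eeg")  # hypothetical route name
async def eeg_stream(ws: WebSocket):
    await ws.accept()
    while True:
        batch = json.loads(await ws.receive_text())  # 256 raw voltages
        powers = band_powers(np.asarray(batch, dtype=float))
        await ws.send_json(powers)  # downstream agents consume this
```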

Frontend:

  • Next.js + React + ECharts dashboard with vanilla CSS styling
  • Real-time three-band oscilloscope (alpha/beta/theta) traced from a single EEG electrode pair
  • Cognitive state badge, heart rate display, scrolling agent reasoning log
  • Meeting recording via the browser MediaRecorder API
  • Stress intervention with breathing animation and SpeechSynthesis voice coaching

Challenges we ran into

Signal noise. EEG signals are microvolts. Everything is noise — power lines, your own muscle movements, the WiFi radio on the same board. We spent hours tuning the filter chain and bias network before a clean alpha wave finally appeared when our wearer closed their eyes.

Hardware reliability. Our first transistor array (NTE2321) didn't behave as expected, costing us hours of debugging before we swapped to discrete 2N3904s. Our first AD620 module overheated due to a perfboard short. We learned to test components in isolation before integrating.

Multi-agent coordination. Multiple AI services acting on the same signal can easily spiral — switching playlists every two seconds, vibrating motors right after a stress event has already passed. Building the Guardian as a deterministic cooldown layer was essential to keep the system from becoming more distracting than the problem it was meant to solve.
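Because the Guardian is the piece that keeps the system sane, it is deliberately boring: a deterministic per-intervention cooldown with no AI in the loop. A minimal sketch, assuming the 2-minute window described above (class and method names are illustrative):

```python
import time

COOLDOWN_SECONDS = 120  # 2-minute cooldown, tracked per intervention type

class Guardian:
    """Deterministic gate: each intervention fires at most once per cooldown."""

    def __init__(self) -> None:
        self._last_fired: dict[str, float] = {}  # intervention -> timestamp

    def allow(self, intervention: str) -> bool:
        now = time.monotonic()
        last = self._last_fired.get(intervention)
        if last is not None and now - last < COOLDOWN_SECONDS:
            return False  # suppress: still inside the cooldown window
        self._last_fired[intervention] = now
        return True

guardian = Guardian()
print(guardian.allow("haptic"))  # True: first haptic intervention fires
print(guardian.allow("haptic"))  # False: suppressed inside the 2-minute window
print(guardian.allow("voice"))   # True: cooldowns are independent per type
```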

Personal stakes. This project meant something to one of us in a way no hackathon project ever has. That made every bug feel heavier and every win feel lighter.

Accomplishments we're proud of

  • A working DIY EEG that actually picks up alpha waves when you close your eyes — built from individual components on a breadboard during a 36-hour hackathon
  • A meeting recorder that doesn't just transcribe what was said, but personalizes the summary based on when your attention drifted
  • A wearable that combines hardware, signal processing, multi-agent AI, real-time WebSocket streaming, and a clinical-grade dashboard into a single coherent system
  • Most of all: a teammate seeing his own brain on a screen for the first time and watching an AI respond to him in real time

What we learned

  • Brain wave amplitudes are smaller than we ever appreciated, and proper grounding matters more than gain
  • Multi-agent systems need a Guardian — without one, helpfulness becomes harm
  • Hardware projects at hackathons are won at the soldering iron at 3 AM
  • The most powerful AI applications aren't the ones that do the most. They're the ones that listen.

What's next for Pepper

  • OS-level Do Not Disturb integration — Auto-toggle macOS / Windows DND when the wearer enters a focused state
  • Slack status integration — Update Slack presence with the user's cognitive state so colleagues respect their focus
  • A "flow" state classifier — Detect when the wearer crosses from focused into flow and protect that state aggressively
  • More EEG channels — A second AD620 channel for spatial resolution, opening the door to attention/distraction discrimination beyond what a single channel can detect
  • Personalized models — Right now Pepper uses generic band-power thresholds. The next step is per-user calibration that learns each person's individual cognitive baseline
  • Clinical pilots — We want to put Pepper into the hands of researchers studying autism, ADHD, and early-stage cognitive decline
  • Apple Vision Pro integration — A spatial UI for the dashboard, so the wearer's cognitive state is visualized in their environment, not on a separate screen
  • Open-source hardware — The full design, with its ~$60 BOM, should be available to anyone who wants to build one themselves

A final word

Tony Stark inspired a generation of engineers. But everyone builds the suit. Everyone builds JARVIS as a voice assistant or a robot arm.

Nobody builds Pepper. The companion who knew Tony better than he knew himself. The one connected to him, not just to his tools.

We built Pepper. And we built her for the people who need her most.
