Inspiration
Schizophrenia affects roughly 1 in 300 people worldwide, yet when a hallucination episode strikes, patients are essentially on their own: no real-time tool helps in the moment, and nothing detects what is happening neurologically and responds before a caregiver can even be called. We started with a simple question: the information a patient needs during a hallucination is just what is physically in front of them right now, so why can't a system tell them that automatically, the instant their brain signals something is wrong? That question became NeurASSURE.
What it does
NeurASSURE is a real-time hallucination detection and grounding system designed to support patients during active episodes.
The patient wears an Emotiv EPOC X EEG headset during daily life. Our backend continuously analyzes stress, relaxation, engagement, focus, and raw spectral activity. When the system detects a pattern associated with hallucination onset, it activates a grounding sequence.
1. A camera captures the patient's real surroundings.
2. YOLOv8 identifies visible objects.
3. Gemini 2.5 Flash generates a warm, simple narration describing the actual environment.
4. ElevenLabs converts the narration to audio.
The patient hears a calm message such as: "You are in your living room. There is a couch, a lamp, and sunlight coming through the window. You are safe."
A live dashboard built with Next.js displays EEG readings, band power graphs, the camera feed with detected objects, the narration text, and a timestamped event log. All updates stream in real time through a WebSocket connection.
How we built it
The backend is a fully asynchronous FastAPI application that manages the entire pipeline. We connect to the Emotiv EPOC X through the Emotiv Cortex 2 WebSocket API, which provides continuous performance metrics and raw EEG band power data. We created a weighted scoring model that combines both sources into a hallucination likelihood score between zero and one. The model incorporates stress, relaxation, engagement, excitement, focus, temporal gamma activity, and the frontal beta to alpha ratio.
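To make the scoring idea concrete, here is a minimal sketch of a weighted model like the one described above. The metric names, weights, and the way the beta/alpha ratio is squashed are all illustrative assumptions, not the team's calibrated values:

```python
# Hypothetical sketch of a weighted hallucination-likelihood score.
# Metric names and weights are illustrative, not the calibrated model.

def beta_alpha_ratio(beta: float, alpha: float) -> float:
    """Frontal beta/alpha ratio, guarded against division by zero."""
    return beta / alpha if alpha > 1e-6 else 0.0

def hallucination_score(metrics: dict) -> float:
    """Combine performance metrics and spectral features into a 0-1 score.

    Expects each metric already normalized to [0, 1]; the beta/alpha
    ratio is capped before weighting so it cannot dominate the score.
    """
    weights = {                 # illustrative weights only
        "stress": 0.25,
        "relaxation": -0.15,    # relaxation pulls the score down
        "engagement": 0.10,
        "excitement": 0.10,
        "focus": -0.10,
        "gamma_temporal": 0.20, # temporal gamma activity
    }
    score = sum(w * metrics.get(k, 0.0) for k, w in weights.items())
    ratio = beta_alpha_ratio(metrics.get("beta_frontal", 0.0),
                             metrics.get("alpha_frontal", 0.0))
    score += 0.20 * min(ratio / 3.0, 1.0)   # cap the ratio contribution
    return max(0.0, min(1.0, score))        # clamp to [0, 1]
```

The key design point, as in the real model, is that no single metric can trigger an alert on its own; the score only rises when several signals move together.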
When the score crosses a threshold, the system launches the grounding pipeline. OpenCV captures a frame, YOLOv8 Nano performs object detection, and a safety check confirms that the scene is appropriate for narration. The frame and detected objects are then sent to Gemini 2.5 Flash, which produces a supportive, present-tense description of the environment. ElevenLabs converts the narration to speech, and the audio is streamed back to the frontend.
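The sequencing of those stages can be sketched as a small async orchestrator. The step functions below are dummy stand-ins for the real OpenCV, YOLOv8, Gemini, and ElevenLabs calls; only the ordering and the safety gate are what the sketch is meant to show:

```python
# Sketch of the grounding-pipeline sequencing: capture -> detect ->
# safety check -> narration -> speech. Step functions are injected so
# the real OpenCV/YOLOv8/Gemini/ElevenLabs calls can be stubbed out.
import asyncio

async def run_grounding_pipeline(capture, detect, is_safe, narrate, speak):
    frame = await capture()
    objects = await detect(frame)
    if not is_safe(frame, objects):     # skip narration for unsafe scenes
        return None
    text = await narrate(frame, objects)
    audio = await speak(text)
    return {"objects": objects, "narration": text, "audio": audio}

async def demo():
    # Dummy stand-ins for each stage (asyncio.sleep(0, result=...) just
    # returns its result, simulating an awaitable API call).
    capture = lambda: asyncio.sleep(0, result="frame")
    detect = lambda f: asyncio.sleep(0, result=["couch", "lamp"])
    narrate = lambda f, o: asyncio.sleep(
        0, result=f"You can see a {o[0]} and a {o[1]}. You are safe.")
    speak = lambda t: asyncio.sleep(0, result=b"audio-bytes")
    return await run_grounding_pipeline(
        capture, detect, lambda f, o: bool(o), narrate, speak)

result = asyncio.run(demo())
```

Injecting the stages also makes each one easy to swap or test in isolation, which matters when individual steps have very different latencies.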
The dashboard is built with Next.js 14 and uses Recharts for EEG visualization. All communication between backend and frontend occurs through a WebSocket channel for real-time updates. We also created a Demo Mode that simulates escalating EEG metrics so the full system can be demonstrated without requiring an actual hallucination event.
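A Demo Mode like the one described can be as simple as a generator that ramps synthetic metrics from a calm baseline toward a simulated episode. The metric names and linear ramp here are illustrative assumptions:

```python
# Hedged sketch of a Demo Mode source: yields EEG-style metric dicts
# that escalate from a calm baseline to a simulated episode. The
# metric names and linear ramp shape are illustrative.
def demo_metrics(steps: int = 10):
    """Yield `steps` metric dicts with linearly escalating values."""
    for i in range(steps):
        t = i / max(steps - 1, 1)            # 0.0 -> 1.0 over the run
        yield {
            "stress":         0.1 + 0.8 * t, # climbs from calm to high
            "relaxation":     0.9 - 0.8 * t, # mirrors the stress ramp
            "gamma_temporal": 0.1 + 0.7 * t,
        }
```

Feeding these frames through the same scoring and WebSocket path as live data means the demo exercises the entire system, not a separate code path.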
Challenges we ran into
Getting the Emotiv Cortex API to authenticate and stream reliably was our first major hurdle. The WebSocket handshake, token flow, and session management required careful sequencing with no margin for error. Calibrating the hallucination-detection threshold was genuinely hard: EEG signals are noisy, personal, and context-dependent, so we had to build a rolling-average smoothing system and run live tests on team members in different cognitive states to find values that were sensitive without being trigger-happy. Structuring the Gemini agent so it reasons in a clinically appropriate way, never mentioning mental illness, never alarming the patient, and always ending on a grounding anchor, required many iterations of prompt engineering. Latency was also a constant battle: we needed the full pipeline from detection to spoken audio to complete in under five seconds to be clinically meaningful.
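The smoothing idea above can be sketched as a rolling mean plus hysteresis (separate enter and exit thresholds), so one noisy spike cannot flip the alert state. The window size and thresholds here are illustrative, not the calibrated values:

```python
from collections import deque

# Sketch of rolling-average smoothing with hysteresis: the alert state
# only turns on when the windowed mean crosses the `enter` threshold,
# and only turns off when it falls back below `exit`. Window size and
# thresholds are illustrative, not calibrated values.
class SmoothedDetector:
    def __init__(self, window: int = 10, enter: float = 0.7, exit: float = 0.5):
        self.scores = deque(maxlen=window)  # rolling window of scores
        self.enter, self.exit = enter, exit
        self.active = False

    def update(self, score: float) -> bool:
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        if not self.active and avg >= self.enter:
            self.active = True              # sustained rise: episode detected
        elif self.active and avg <= self.exit:
            self.active = False             # sustained fall: episode cleared
        return self.active
```

The gap between the two thresholds is what prevents the trigger-happy behavior: a score hovering near a single cutoff would otherwise toggle the alert on every sample.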
Accomplishments that we're proud of
We successfully built a complete end-to-end neuro AI system within a hackathon timeframe. The project integrates EEG hardware, custom signal processing, computer vision, multimodal AI, text-to-speech, real-time streaming, and a live dashboard. All components work together smoothly.
We are especially proud of the narration quality. Instead of producing a generic list of objects, we engineered prompts that generate warm, supportive, and context-aware language. This tone is essential for a patient who may be frightened or confused.
We also feel confident in our hallucination scoring model. It combines both AI-derived performance metrics and raw spectral features, with smoothing and state management on top. It represents a meaningful attempt to detect hallucination onset rather than a simple threshold on a single metric.
What we learned
Working with real EEG hardware taught us how different the theory is from actual neurotechnology. We learned how sensitive the Emotiv EPOC X is to electrode placement, hydration, and movement, and how easily noise can distort band power readings. Understanding the difference between performance metrics and raw spectral data helped us design a scoring model that reflects real cognitive signals instead of random spikes.
We also learned how challenging real-time EEG streaming can be. The Cortex API sends data at a very high rate, and even a small blocking operation can break the stream. This forced us to rethink our architecture and build a pipeline that treats EEG input as the highest priority signal.
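One pattern for treating EEG input as the highest-priority signal, sketched under the assumption of an asyncio backend like ours, is a bounded queue where the ingest side drops the oldest sample instead of ever blocking when consumers fall behind:

```python
import asyncio

# Sketch of an "EEG first" ingest pattern: samples go into a bounded
# queue, and when downstream consumers fall behind, the oldest sample
# is dropped rather than stalling the stream from the headset.
# The queue size is illustrative.
async def ingest(samples, queue: asyncio.Queue):
    for sample in samples:
        if queue.full():
            queue.get_nowait()      # drop oldest rather than block
        queue.put_nowait(sample)
        await asyncio.sleep(0)      # yield control to consumers

async def demo():
    queue = asyncio.Queue(maxsize=3)
    await ingest(range(10), queue)  # consumer deliberately absent here
    return [queue.get_nowait() for _ in range(queue.qsize())]

remaining = asyncio.run(demo())     # -> [7, 8, 9]: only newest survive
```

Dropping stale samples is safe here because the scoring model only cares about the most recent window of activity, while blocking the ingest loop would corrupt the whole stream.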
Integrating computer vision, multimodal AI, and text-to-speech in the same loop taught us how to coordinate components with very different latencies. We learned how to keep the system responsive even when individual steps take several seconds.
Most importantly, we learned how to build responsibly in a mental health context. Tone, timing, and safety checks matter just as much as technical accuracy. That mindset shaped our narration design, our state machine, and our approach to false positives.
What's next for NeurASSURE
Our next priority is clinical validation. We want to collaborate with neurologists and psychiatrists who can evaluate our detection model using real labeled EEG data and help us refine the thresholds for responsible deployment.
We also plan to personalize the system for each patient by learning their baseline EEG patterns and adjusting sensitivity over time. Additional features include customizable narration styles and real-time caregiver alerts.
Our long-term vision is a lightweight wearable device with integrated EEG sensors and a forward-facing camera. The goal is a system that can accompany the patient throughout daily life and provide support at the exact moments when it is needed most.