Inspiration

Computers speak fast; we don’t. Human→machine communication bandwidth is tiny—and for people who can’t use hands or voice, it can be near zero. NeuroRelay explores a different path: use the eyes to choose, then let a local reasoning agent do the heavy lifting—all offline for privacy and reliability.

What is SSVEP?

Steady-State Visual Evoked Potentials appear when you look at a flickering stimulus (e.g., 10 Hz). The visual cortex echoes that rhythm (and harmonics). If each on-screen tile flickers at a different frequency, an EEG decoder can infer which one you’re looking at.
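
For intuition, here is a toy example (synthetic data, made-up amplitudes and noise levels, not real recordings): a 3 s window containing a 10 Hz response plus its harmonic shows a clear spectral peak at the attended frequency, which is enough to pick a tile.

```python
import numpy as np

fs = 250                              # EEG sampling rate (Hz); value is illustrative
t = np.arange(0, 3.0, 1 / fs)         # one 3 s analysis window

# Toy occipital trace: response at the 10 Hz flicker plus its 2nd harmonic, buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.4 * np.sin(2 * np.pi * 20 * t) + 2.0 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

tiles = [8.57, 10.0, 12.0, 15.0]      # one flicker frequency per tile
scores = [spectrum[np.argmin(np.abs(freqs - f))] for f in tiles]
print("gazed tile frequency:", tiles[int(np.argmax(scores))])   # prints 10.0 at this SNR
```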


What it does

A clean 2×2 UI shows four large choices: HELP / READ / PLAN / MESSAGE.
The user fixates on a tile; we run the SSVEP decoder to pick the winning tile, show neurofeedback (confidence + dwell ring), and then trigger a local gpt-oss agent to act inside a sandbox:

  • READ → short summary tailored for offline TTS.
  • PLAN → 5–8-step action plan or extracted TODOs.
  • DEADLINES → due-date extraction + optional .ics file.
  • MESSAGE / HELP → concise caregiver message or a high-contrast on-screen overlay.

Every action is audited to JSONL (BrainBus), and output files are written to workspace/out/.
The system runs entirely offline; if LM Studio is present, the agent uses a local gpt-oss-20b/120b model; otherwise it falls back to deterministic heuristics.
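
The field names below are illustrative rather than the exact BrainBus schema; the idea is simply that every committed choice and every agent action becomes one appended JSONL record, so a full session can be audited or replayed.

```python
import json, time
from pathlib import Path

LOG = Path("workspace/out/brainbus.jsonl")        # log location is illustrative

def log_event(kind: str, **fields) -> None:
    """Append one audit record per line (JSONL) so a whole session can be replayed later."""
    record = {"ts": time.time(), "kind": kind, **fields}
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# A committed gaze decision, then the agent action it triggered:
log_event("decision", choice="READ", confidence=0.87, dwell_s=1.2)
log_event("agent_action", tool="summarize", output_file="workspace/out/summary.txt")
```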


How we built it

  • Stimuli & UI (PySide6/Qt): 2×2 grid, frame-locked flicker with sinusoidal contrast (see the flicker sketch after this list).

    • Frequencies (60 Hz displays): 8.57, 10, 12, 15 Hz, i.e. the refresh rate divided by 7, 6, 5, and 4 frames so every cycle stays frame-locked (a 50 Hz set is auto-selected if needed).
    • Comfort controls: intensity slider, pause, fullscreen; accessibility-first typography.
    • Neurofeedback: confidence bar + circular dwell indicator (default 1.2 s).
  • Signal pipeline (SSVEP):

    • Window: ~3.0 s rolling; prediction rate: 4 Hz.
    • Preprocessing: band-pass 5–40 Hz, optional notch 50/60 Hz; target channels O1, Oz, O2.
    • Decoder: CCA against sine/cosine references at f and 2f; softmax over z-scored scores → confidence; stability + dwell to commit (see the decoder sketch after this list).
  • Live / Replay / Sim:

    • Live: LSL (pylsl) inlet with thread-safe ring buffer.
    • Replay: EDF/CSV reader for deterministic demos.
    • Simulation: keyboard control or synthetic LSL streamer (sine + noise) that mirrors the live path.
  • Local agent (offline):

    • BrainBus JSON from UI → agent subprocess.
    • Tools: summarize, todos/plan, deadlines (+ICS), compose message/overlay.
    • Sandboxed I/O: reads workspace/in/, writes workspace/out/.
    • LM Studio (OpenAI-compatible /v1 endpoint on localhost) with gpt-oss if available; otherwise heuristic fallbacks (see the agent sketch after this list).
    • Non-destructive by design (drafts only, never sends email).
  • Observability & UX polish: badges for Offline / Local LLM / Live status, last output shortcut, log tail viewer, “take a visual break” hint after multiple commits.
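
Three short sketches expand on the bullets above. First, the flicker: a minimal, simplified version of the frame-locked sinusoidal contrast (not our actual PySide6 widget code); each tile follows a sine of its own frequency evaluated at exact frame times, and the intensity argument is what the comfort slider scales.

```python
import math

REFRESH_HZ = 60.0
FREQS = {"HELP": 8.57, "READ": 10.0, "PLAN": 12.0, "MESSAGE": 15.0}  # 60/7, 60/6, 60/5, 60/4

def tile_luminance(freq_hz: float, frame_index: int, intensity: float = 1.0) -> float:
    """Sinusoidal contrast in [0, 1], evaluated at exact frame times so flicker stays frame-locked."""
    t = frame_index / REFRESH_HZ
    return 0.5 + 0.5 * intensity * math.sin(2 * math.pi * freq_hz * t)

# Each vsync, the UI repaints every tile with its current luminance
# (lowering intensity via the comfort slider flattens the flicker).
for frame in range(4):
    print(frame, {name: round(tile_luminance(f, frame), 3) for name, f in FREQS.items()})
```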

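Second, the decoder: a condensed sketch of the band-pass + CCA scoring, assuming scipy and scikit-learn; the real pipeline adds the optional notch, regularization, and the stability/dwell commit logic on top.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250                                 # sampling rate (Hz); illustrative
FREQS = [8.57, 10.0, 12.0, 15.0]

def references(f, n_samples, fs=FS, harmonics=2):
    """Sine/cosine reference bank at f and 2f for CCA."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * f * h * t), np.cos(2 * np.pi * f * h * t)]
    return np.column_stack(cols)

def decode(eeg):
    """eeg: samples x channels array (e.g. O1, Oz, O2) covering the ~3 s window."""
    b, a = butter(4, [5, 40], btype="band", fs=FS)   # band-pass 5-40 Hz
    x = filtfilt(b, a, eeg, axis=0)
    scores = []
    for f in FREQS:
        ref = references(f, len(x))
        cca = CCA(n_components=1).fit(x, ref)
        u, v = cca.transform(x, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    z = (np.array(scores) - np.mean(scores)) / (np.std(scores) + 1e-9)
    conf = np.exp(z) / np.exp(z).sum()               # softmax -> per-tile confidence
    return FREQS[int(np.argmax(conf))], float(conf.max())
```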

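Third, the agent call: sketched against the OpenAI-compatible /v1 endpoint LM Studio serves locally. The port, model id, prompt, and single summarize tool here are placeholders; the point is the graceful heuristic fallback when no model is loaded.

```python
import requests

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"   # port is an assumption (LM Studio default)

def summarize(text: str) -> str:
    """Ask the local gpt-oss model for a TTS-friendly summary; degrade to a heuristic offline."""
    try:
        r = requests.post(LMSTUDIO_URL, json={
            "model": "gpt-oss-20b",                           # placeholder id for whichever build is loaded
            "messages": [{"role": "user", "content": f"Summarize briefly for text-to-speech:\n{text}"}],
            "temperature": 0.2,
        }, timeout=30)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]
    except Exception:
        # Deterministic fallback: first few sentences, no model needed.
        sentences = [s.strip() for s in text.replace("\n", " ").split(". ") if s.strip()]
        return ". ".join(sentences[:3]).rstrip(".") + "."
```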
Challenges we ran into

  • Hardware timing: without guaranteed access to the EEG cap before the deadline, we needed a credible, reproducible demo path, so we built a synthetic LSL stream that runs through the same decoder and timing as the live path (a minimal version is sketched after this list).
  • Visual comfort vs. SNR: balancing flicker intensity and duty cycle to reduce fatigue while preserving SSVEP power.
  • Numerical robustness: filter padding, notch design vs. Nyquist, and regularized CCA to avoid ill-conditioned covariances.
  • End-to-end latency: keeping the post-window processing under ~100 ms so total stimulus→action remains ~3.5–4.0 s.
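
That synthetic stream is deliberately simple. A minimal version looks like the sketch below (channel count, amplitudes, and noise level are illustrative): it pushes sine + noise samples through pylsl so the live inlet, ring buffer, and decoder run exactly as they would with a real cap.

```python
import time
import numpy as np
from pylsl import StreamInfo, StreamOutlet

FS, TARGET_HZ, CHANNELS = 250, 10.0, 3            # pretend the user is staring at the 10 Hz tile

info = StreamInfo(name="SimEEG", type="EEG", channel_count=CHANNELS,
                  nominal_srate=FS, channel_format="float32", source_id="neurorelay-sim")
outlet = StreamOutlet(info)

n = 0
while True:
    t = n / FS
    ssvep = np.sin(2 * np.pi * TARGET_HZ * t) + 0.4 * np.sin(2 * np.pi * 2 * TARGET_HZ * t)
    sample = ssvep + 2.0 * np.random.randn(CHANNELS)   # same waveform on each "occipital" channel + noise
    outlet.push_sample(sample.astype(np.float32).tolist())
    n += 1
    time.sleep(1.0 / FS)   # crude pacing; a real streamer would push chunks and correct drift
```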

Accomplishments that we're proud of

  • A fully offline, auditable brain-to-agent loop that turns gaze into real work on local documents.
  • Reproducible demo modes (live, replay, sim) using the same decoder pipeline.
  • Accessibility-first UI: high-contrast HELP/MESSAGE overlays, offline TTS, and large targets.
  • Clean JSONL telemetry and a safe sandbox so judges can verify exactly what happens.

What we learned

  • Discrete BCI + local reasoning is a powerful combo: four reliable choices are enough to unlock meaningful workflows.
  • Designing for auditability and safety (JSON logs, drafts-only, sandbox paths) makes offline AI more trustworthy.
  • Small UX touches (dwell ring, stability checks, comfort controls) dramatically improve perceived control.

What's next for NeuroRelay: Brain-to-Agent

  • Light personalization: FBCCA / TRCA and quick per-user channel weighting.
  • Hybrid confirmation: SSVEP select + P300 confirm for double-safe commits.
  • More options (6–8) with adaptive layouts and duty cycles.
  • Assistive ecosystems: AR/VR stimuli, wheelchair/robot triggers, multilingual TTS.
  • Clinical-style packaging: calibration flow, metrics overlay, and operator reports.
