Inspiration
Severe aphasia and traumatic brain injury can leave patients unable to communicate or even understand language. In the early stages of recovery, speech-language pathology (SLP) intervention is critical, but therapy resources are often limited and difficult to personalize for patients with severe motor or cognitive deficits.
SLP research shows that aphasia-friendly materials, which use simple sentences, images, bright visuals, and reduced cognitive load, help re-engage patients with language. However, creating personalized content for each patient is time-consuming and often inaccessible when patients cannot easily communicate preferences.
We asked a simple question: What if the brain itself could tell us what language it understands best?
FABLE was inspired by the idea of using brain-computer interfaces (BCIs) to detect semantic processing directly from neural signals and dynamically build language experiences tailored to the patient’s cognitive state.
What it does
FABLE is a BCI-driven adaptive storybook for patients with severe aphasia.
Using EEG, the system measures neural markers of semantic processing, engagement, and cognitive load while presenting simple story prompts with two visual choices. Signals such as the N400, spectral power features, and cross-frequency coupling help estimate which option the brain processes more meaningfully.
The system then selects the more semantically engaging option and expands the story by presenting related concepts generated through LLM-based semantic trees and word embeddings.
Over multiple steps, FABLE constructs a short, aphasia-friendly narrative tailored to the user’s neural responses.
Key features include:
- EEG-based detection of semantic engagement
- Adaptive branching story generation
- Visual, aphasia-friendly interface
- Dynamic difficulty adjustment based on cognitive state
The result is a personalized, brain-guided language experience that encourages semantic processing even when patients cannot verbally respond.
How we built it
We built FABLE by combining BCI signal processing, semantic AI, and adaptive storytelling.
Hardware
- 8-channel EEG headset (g.tec Unicorn)
- Electrodes placed at frontal, central, and parietal sites (Fz, FC5, FC6, Cz, CP3, CP4, Pz, Oz)
Signal Processing
- EEG preprocessing and filtering
- Extraction of neural features linked to semantic processing:
  - N400-related activity
  - Band power features (theta, alpha)
  - Cross-frequency coupling
  - Engagement and fatigue metrics
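The band power and N400-window features above could be sketched roughly as follows. This is an illustrative example rather than our production pipeline; the sampling rate, band edges, and N400 window are assumptions for the sketch.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz

def band_power(epoch, fs=FS, band=(4.0, 8.0)):
    """Mean power spectral density of one channel within a frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def n400_window_mean(epoch, fs=FS, window=(0.3, 0.5)):
    """Mean amplitude in the 300-500 ms post-stimulus window (N400 proxy)."""
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return epoch[start:stop].mean()

def extract_features(epoch, fs=FS):
    """Theta power, alpha power, and an N400-window amplitude for one epoch."""
    return np.array([
        band_power(epoch, fs, (4.0, 8.0)),   # theta
        band_power(epoch, fs, (8.0, 13.0)),  # alpha
        n400_window_mean(epoch, fs),
    ])
```

In the real pipeline these per-channel features are concatenated across electrodes, alongside cross-frequency coupling and engagement metrics.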
Semantic Engine
- LLMs and word embeddings generate a semantic tree
- Each node produces two candidate concepts or images
- The story branches according to neural engagement signals
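A toy sketch of the branching idea, with made-up three-dimensional "embeddings" standing in for real word vectors; in FABLE the candidate concepts come from an LLM and the similarities from a real embedding model.

```python
import numpy as np

# Toy concept vectors; stand-ins for real word embeddings.
EMBEDDINGS = {
    "dog":  np.array([0.9, 0.1, 0.0]),
    "cat":  np.array([0.8, 0.2, 0.1]),
    "ball": np.array([0.2, 0.9, 0.1]),
    "park": np.array([0.1, 0.3, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand_node(concept, k=2):
    """Return the k concepts most similar to the current node:
    the candidate branches shown to the user."""
    scores = {c: cosine(EMBEDDINGS[concept], v)
              for c, v in EMBEDDINGS.items() if c != concept}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def choose_branch(candidates, engagement_scores):
    """Pick the branch whose neural engagement score is highest."""
    return max(zip(candidates, engagement_scores), key=lambda p: p[1])[0]
```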
Interface
A simple visual interface presents:

- short sentences
- two image choices
- auditory narration

The system iteratively builds a 5–7 step personalized story.
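The iterative loop that grows the story might look roughly like this sketch, where `mock_engagement` stands in for the real EEG pipeline and `TOY_TREE` is a hypothetical expansion table (in practice an LLM proposes the two concepts at each node).

```python
def mock_engagement(option):
    """Deterministic stand-in for the EEG pipeline's semantic-engagement
    score; in FABLE this comes from N400 and band-power features."""
    return (sum(ord(c) for c in option) % 97) / 97.0

def build_story(start, expand, steps=6):
    """Grow a story: at each step, present two candidate concepts and
    keep the one with the higher (here, simulated) engagement score."""
    story = [start]
    node = start
    for _ in range(steps - 1):
        a, b = expand(node)
        node = a if mock_engagement(a) >= mock_engagement(b) else b
        story.append(node)
    return story

# Hypothetical expansion table standing in for LLM-generated candidates.
TOY_TREE = {
    "dog": ("ball", "park"),
    "ball": ("throw", "bounce"),
    "park": ("tree", "bench"),
    "throw": ("catch", "run"),
    "tree": ("bird", "shade"),
}

def expand(node):
    """Look up two related concepts, with a fallback for unknown nodes."""
    return TOY_TREE.get(node, ("walk", "rest"))
```

The real system also regenerates the on-screen sentence and narration at each step, which this sketch omits.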
Challenges we ran into
One of our biggest challenges was hardware reliability with EEG acquisition. We attempted to use two different g.tec Unicorn EEG headsets, but both devices experienced channel failures where certain electrodes stopped producing usable signals. Since EEG signal quality is critical for detecting semantic features like N400 activity and spectral patterns, these hardware issues made stable data collection difficult and slowed down development.
Another challenge involved the semantic tree generation used to build the adaptive story. While LLMs and word embeddings allowed us to dynamically generate related concepts, some generated options occasionally drifted out of context with the developing story. This sometimes produced choices that were semantically unrelated or confusing for the narrative flow.
With more development time, we would refine the semantic filtering and constraint system to ensure tighter contextual consistency between story branches while still maintaining creative variability.
Despite these challenges, we were able to prototype the core pipeline connecting EEG signals, semantic analysis, and adaptive storytelling.
Accomplishments that we're proud of
We’re proud that we built a complete end-to-end framework for a semantic BCI system within the time constraints of a hackathon. Rather than focusing on isolated components, we implemented the full pipeline, from the user interface to neural signal analysis and machine learning.
We developed a working app interface and a custom data collection GUI that presents aphasia-friendly prompts with images and narrated sentences. We also built a story generator powered by a semantic tree, allowing narratives to branch based on concept relationships. Using this system, we generated 50+ unique stories and collected neural data across many semantic decision points.
Finally, we implemented an EEG signal processing and feature extraction pipeline, and built a machine learning model trained on an open-source dataset with a similar task and EEG structure. Using the same features, our model achieved 63.8% classification accuracy, demonstrating that our approach can meaningfully capture semantic engagement signals.
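As a rough illustration of the classification step, here is a minimal logistic-regression sketch in plain NumPy. This is not our actual model or feature set, just the shape of the idea: features in, binary semantic-engagement label out.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Minimal logistic-regression trainer (batch gradient descent),
    a stand-in for the classifier fit on EEG engagement features."""
    X = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

def predict(w, X):
    """Threshold the sigmoid output at 0.5 to get binary labels."""
    X = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```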
What we learned
We learned how to translate semantic neuroscience concepts into a working BCI pipeline. Specifically, we gained experience extracting EEG features relevant to language processing, including band power, event-related activity in the N400 time window, and cross-frequency interactions.
We also learned how important clean signal acquisition and preprocessing are for EEG-based ML models, especially when working with limited channels and noisy hardware. On the AI side, we learned how to structure semantic trees using embeddings and LLMs to maintain narrative coherence while generating adaptive content.
Finally, we learned how to integrate EEG signal processing, machine learning, and a real-time application interface into a single system that can dynamically adapt content based on neural responses.
What's next for FABLE
Our long-term vision is to turn FABLE into a general semantic BCI platform, not just a single therapy application. Because the hardware and core signal-processing pipeline remain constant, the system is horizontally scalable: new applications can be built by modifying the software layer while keeping the same EEG interface.
One major direction is language learning and education. FABLE’s semantic engagement detection could identify when learners truly understand new vocabulary or concepts, allowing educational platforms to dynamically adjust lessons, examples, and explanations based on neural comprehension signals rather than quizzes alone.
The framework could also extend to other semantic-driven applications, such as:
- Cognitive rehabilitation beyond aphasia, including stroke, traumatic brain injury, and early-stage neurodegenerative conditions
- Adaptive reading and learning tools that adjust difficulty based on semantic comprehension
- Assistive communication systems that infer intended meaning when patients cannot speak or type
- Personalized knowledge exploration, where educational content branches according to what concepts the brain processes most strongly
- Attention and cognitive fatigue monitoring to optimize therapy or learning sessions
Because semantic understanding underlies communication, learning, and decision-making, FABLE’s paradigm could serve as a general interface for measuring and strengthening conceptual understanding directly from brain activity.
Ultimately, we aim to build a platform where semantic engagement becomes a measurable signal that can drive adaptive experiences across healthcare, education, and accessibility technologies.
Built With
- custom-gui
- erp-core-n400
- feature-extraction
- live-eeg-recording
- ml
- python
- signal-processing