Inspiration

We were inspired by the challenge of making emotional learning accessible for kids with disabilities, particularly those with autism or sensory processing differences. Existing tools often rely solely on visual cues without context or use generic voices that lack emotional connection. We wanted to create something more: a tool that combines cutting-edge AI with research-backed pedagogy. The idea of a parent recording their voice while on vacation so their child can still hear bedtime stories sparked our vision: what if we could create personalized, multi-sensory stories that actively teach emotional intelligence?

What it does

StoryLume is an AI-powered interactive storybook platform that helps children with disabilities learn to recognize and understand emotions through personalized, multi-sensory storytelling. Key features:

  • AI Story Generation: Parents input simple prompts, and our system (built on the Gemini API) generates age-appropriate stories rich with emotional context and social situations
  • Voice Cloning: Using ElevenLabs' Instant Voice Cloning (IVC) technology, the stories are narrated in a loved one's voice, providing comfort and familiarity even when they're away
  • Emotion-Based Color Display: Real-time sentiment analysis creates dynamic, soft-transition color lighting that corresponds to the emotional beats of the story (calm blues for sadness, warm yellows for joy)
  • Interactive Learning: The system doesn't just show colors; it explicitly labels emotions and provides context ("The lamp turned blue because the wolf is sad. When you feel sad, you can take deep breaths.")
  • CRUD Story Management: Parents can curate a library of stories tailored to their child
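As a sketch of the explicit-labeling idea above, a small helper could turn a detected emotion into a label-plus-coaching message. The type names and coping phrases here are illustrative, not our production copy:

```typescript
// Hypothetical shape of an emotion cue; the real schema may differ.
interface EmotionCue {
  character: string; // who feels it, e.g. "the wolf"
  emotion: "joy" | "sadness" | "fear" | "anger";
  color: string;     // display name of the lamp color, e.g. "blue"
}

// Coping hints keyed by emotion (illustrative wording only).
const copingHints: Record<EmotionCue["emotion"], string> = {
  joy: "you can share it with someone you love",
  sadness: "you can take deep breaths",
  fear: "you can hold a favorite toy",
  anger: "you can count slowly to ten",
};

// Build the explicit label shown alongside the color change, pairing
// the "why" of the color with an age-appropriate coping suggestion.
function labelEmotion(cue: EmotionCue): string {
  return (
    `The lamp turned ${cue.color} because ${cue.character} is feeling ${cue.emotion}. ` +
    `When you feel ${cue.emotion}, ${copingHints[cue.emotion]}.`
  );
}
```

Keeping the copy in one lookup table also makes it easy for parents or therapists to adjust the wording per child.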

How we built it

Frontend:

  • React-based UI with three main interfaces: story library (parents), voice sampling (parents), and immersive player (children)
  • Smooth color transitions using CSS animations to avoid jarring sensory experiences
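The soft-transition approach can be sketched as follows: rather than swapping colors instantly, the player applies the new background color together with a long CSS transition, letting the browser ease between emotional beats. The duration and easing values below are illustrative, not the shipped ones:

```typescript
// Return an inline style object that eases to the new color instead of
// snapping to it; slow ease-in-out avoids the abrupt changes that can be
// jarring for children with sensory sensitivities.
function calmTransitionStyle(color: string, seconds = 2): Record<string, string> {
  return {
    backgroundColor: color,
    transition: `background-color ${seconds}s ease-in-out`,
  };
}
```

In a React component this would simply be spread into the `style` prop, e.g. `<div style={calmTransitionStyle("#7fa8d9")}>…</div>`.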

Backend & AI Pipeline:

  • Gemini API for story generation with custom prompts emphasizing emotional vocabulary and social story structure
  • ElevenLabs API for voice cloning and high-quality text-to-speech generation
  • Emotion-to-color mapping based on research into color psychology for neurodivergent children
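The emotion-to-color mapping can be sketched as a simple lookup with a neutral fallback. The hex values here are placeholders, not our research-derived palette:

```typescript
type Emotion = "joy" | "sadness" | "fear" | "anger" | "calm";

// Placeholder palette of soft, low-saturation hues; the actual mapping
// is grounded in color-psychology research for neurodivergent children.
const emotionPalette: Record<Emotion, string> = {
  joy: "#f5d76e",     // warm yellow
  sadness: "#7fa8d9", // calm blue
  fear: "#b39ddb",    // muted violet
  anger: "#e8a87c",   // softened orange rather than a harsh red
  calm: "#a8d5ba",    // gentle green
};

// Resolve a sentiment label to a display color, defaulting to calm when
// the analyzer returns something outside the known emotion set.
function colorForEmotion(label: string): string {
  return emotionPalette[label as Emotion] ?? emotionPalette.calm;
}
```

Defaulting to a calm color (rather than throwing or going dark) keeps the sensory experience stable when the sentiment analysis is uncertain.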

Challenges we ran into

  • Balancing Research & Feasibility: Our research showed that color cues alone don't produce statistically significant gains in emotional learning; we had to find a more holistic approach that combines color with explicit labeling and interactive prompts
  • Full-Stack Integration: Coordinating real-time color changes with audio playback while maintaining smooth UX across multiple APIs was technically challenging
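One way to coordinate color with narration, sketched under the assumption that each generated story carries timestamped emotion cues: on every `timeupdate` event from the audio element, look up the cue active at the current playback time and apply its color.

```typescript
// Hypothetical timed cue; assumes cues are sorted by startSec.
interface TimedCue {
  startSec: number; // when this emotional beat begins in the narration
  emotion: string;
}

// Return the cue active at a given playback time, or null before the
// first cue has started.
function activeCue(cues: TimedCue[], timeSec: number): TimedCue | null {
  let current: TimedCue | null = null;
  for (const cue of cues) {
    if (cue.startSec <= timeSec) current = cue;
    else break; // later cues haven't started yet
  }
  return current;
}
```

In the player this would be wired to the narration audio, e.g. `audio.addEventListener("timeupdate", () => setEmotion(activeCue(cues, audio.currentTime)))`, so color stays in lockstep with playback even after seeking.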

Accomplishments that we're proud of

  • Research-Driven Design: We didn't just build something cool; we built something effective, grounded in the literature on emotional learning for neurodivergent children
  • Multi-Sensory Integration: Successfully combining audio, visual, and text cues in a way that's pedagogically sound
  • Voice Cloning Implementation: Creating a seamless workflow for parents to record and clone their voices

What we learned

  • The importance of understanding your users deeply; reading research papers made our product 10x better than if we'd just built based on assumptions
  • Accessibility isn't one-size-fits-all; we learned about sensory sensitivities, processing differences, and the need for adjustability
  • Voice AI has come incredibly far; the emotional quality of ElevenLabs' cloning is genuinely moving

What's next for StoryLume

  • User Testing: Partner with special education teachers and occupational therapists to validate effectiveness with real children
  • Expanded Emotion Library: Include more nuanced emotions and culturally diverse emotional expressions
  • Coping Strategy Integration: After showing an emotion, provide age-appropriate coping mechanisms ("When you're scared, you can...")
