Fable – Hackathon Submission
Inspiration
For over 30 years, both of my parents have been dedicated educators. My mother, in particular, has spent much of her career fighting to make learning more accessible for all students — from students with learning disabilities to those learning English as a second language — and has earned multiple grants from the state of Ohio for her work using music as a tool to break down those educational barriers.
Growing up, I watched her dedicate herself to making sure no student was left behind, no matter their challenges. She showed me firsthand how learning isn’t one-size-fits-all—some students struggle with reading, others with comprehension, and sometimes, all it takes is the right sensory experience to unlock a whole new way of understanding.
Her work has deeply shaped my values. I believe technology should be used to make education more inclusive, more engaging, and more human. That’s exactly what inspired us to build Fable—a reading experience that adapts to the reader, not the other way around. By integrating AI-generated ambience, music, and accessibility features, we’re bringing stories to life while ensuring that reading is an experience that everyone can enjoy, no matter their background or ability.
What it does
Fable enhances the reading experience with:
- AI-generated ambience & music that adapts to the mood of the text.
- Customizable UI for font styles, background colors, and text formatting.
- Hand gesture scrolling for a seamless, touch-free reading experience.
- Dyslexia-friendly rewording with a simple double-click to improve readability.
How we built it
- FastAPI – Powered the backend for real-time text processing and AI-driven rewording.
- AudioCraft – Generated dynamic ambience and sound effects, balancing quality with efficiency.
- LangChain & Gemini Flash 2.0 – Handled AI-driven readability improvements and text customization.
- OpenCV – Implemented intuitive scrolling using webcam-based motion detection.
- React Frontend – Built a highly customizable reading interface with accessibility in mind, bundled with Vite.
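As a rough illustration of how the rewording feature fits together, here is a minimal sketch of the backend flow: build a dyslexia-friendly prompt for the double-clicked passage, then hand it to the model. The prompt wording, function names, and the stubbed `call_gemini` are our assumptions for illustration — the real system wires this through LangChain and Gemini Flash 2.0 behind a FastAPI endpoint.

```python
# Hypothetical prompt template; the production app uses LangChain + Gemini Flash 2.0.
REWORD_PROMPT = (
    "Rewrite the passage below for a dyslexic reader: use short sentences, "
    "common words, and active voice. Preserve the meaning.\n\n"
    "Passage:\n{passage}"
)


def build_reword_prompt(passage: str) -> str:
    """Fill the template with the passage the reader double-clicked."""
    return REWORD_PROMPT.format(passage=passage.strip())


def call_gemini(prompt: str) -> str:
    """Placeholder for the actual LangChain/Gemini call (an assumption here)."""
    raise NotImplementedError("wire up the LangChain Gemini integration here")


def reword(passage: str, llm=call_gemini) -> str:
    """End-to-end rewording: build the prompt, then ask the model."""
    return llm(build_reword_prompt(passage))
```

In the app, a double-click would POST the selected paragraph to a FastAPI route that calls something like `reword` and returns the simplified text to the React frontend.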
Challenges we ran into
- Tuning AI-generated music to match a story's tone naturally. We didn't expect to need actual music theory to tune the AI (luckily, one of our teammates was a band kid and clutched).
- Balancing interactivity and minimalism to avoid distractions while providing the best experience.
- Implementing gesture-based navigation in a way that feels intuitive.
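The gesture challenge above mostly comes down to turning noisy hand positions into steady scroll input. Here is a minimal sketch of one approach, assuming normalized fingertip y-coordinates (0 at the top of the frame, 1 at the bottom) coming from an OpenCV/MediaPipe-style tracking loop; the deadzone and smoothing constants are illustrative guesses, not our tuned values.

```python
from typing import Optional


class ScrollController:
    """Map noisy fingertip y-positions to scroll deltas with a deadzone + smoothing."""

    def __init__(self, deadzone: float = 0.02, smoothing: float = 0.5,
                 speed: float = 800.0) -> None:
        self.deadzone = deadzone    # ignore jitter smaller than this (normalized units)
        self.smoothing = smoothing  # exponential smoothing factor in [0, 1]
        self.speed = speed          # pixels of scroll per unit of hand movement
        self._last_y: Optional[float] = None
        self._velocity = 0.0

    def update(self, y: float) -> int:
        """Feed one fingertip y sample; return pixels to scroll this frame."""
        if self._last_y is None:
            self._last_y = y  # first sample only primes the state
            return 0
        delta = y - self._last_y
        self._last_y = y
        if abs(delta) < self.deadzone:
            delta = 0.0  # suppress jitter while the hand is roughly still
        # Exponentially smooth the velocity so scrolling feels continuous.
        self._velocity = (self.smoothing * self._velocity
                          + (1 - self.smoothing) * delta * self.speed)
        return round(self._velocity)
```

Each webcam frame, the hand-tracking loop would call `update()` with the index fingertip's y-coordinate and scroll the reader view by the returned amount.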
Accomplishments that we're proud of
- Successfully created a working MVP around midnight and actually got to sleep 😎.
- Built AI-powered rewording that significantly improves readability (and actually benchmarked it; we didn't just assume it was better).
- Churned out a final design that was both minimal and immersive.
What we learned
- How to integrate multiple multimodal AI systems (text + sound + vision) into a seamless experience.
- How to develop new, more accessible and inclusive modes of interaction with media and AI.
- How to use LLMs as very flexible text processors and enhancers.
What's next for Fable
- Expanding to immersive visualization experiences as well (we have it built, we just couldn't finish it in time 😭).
- Adding text-to-speech integration for an audiobook hybrid experience, with an option to export the audio.
- Personalizing the UX for different accessibility needs (e.g., color blindness, ADHD, ...).
- Using AI for even more useful things: summarizing text, explaining graphs, and drawing character connections and relationships.
- Crowdsourcing sound effects.
Built With
- audiocraft
- css
- fastapi
- gemini
- langchain
- mediapipe
- motion
- opencv
- react
- typescript
- vite