About Once Upon AI
Inspiration
As a parent, I've read countless bedtime stories, but I always noticed how my kids' eyes would light up whenever I'd improvise and make them the hero. "What if YOU found the magic door?" "What if YOU could fly to the moon?" Those personalized moments created the most engagement and the sweetest dreams.
When I discovered the latest AI video generation capabilities, I realized we could finally create something that felt impossible just a year ago: Pixar-quality animated stories where every child is the star, personalized nightly, at a price point that works for every family.
What it does
Once Upon AI is a mobile app that generates personalized animated bedtime stories. Parents upload a photo of their child during setup, then each evening fill out a quick Mad Libs-style form. Within moments, they receive a fully animated 2-minute video featuring their child as the protagonist, complete with professional narration and stunning visuals that shift styles daily (watercolor Monday, anime Tuesday, Pixar Wednesday...).
The magic is in the balance: every family gets the same core story each night - creating shared experiences kids can discuss at school - but the personalization makes each version unique to that child.
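The nightly flow above can be sketched as a shared template with named insertion points that each family fills in differently. This is a minimal illustration, not the app's actual data model; the slot names and story text are hypothetical:

```typescript
// Minimal sketch of Mad Libs-style personalization: one shared
// template per night, with named slots filled per child.
// Slot names and template text here are illustrative only.
type MadLibsAnswers = Record<string, string>;

// Every family gets the same core story; only the slot values differ.
const tonightsTemplate =
  "{hero} found a {adjective} door behind the {place} and stepped through.";

function personalize(template: string, answers: MadLibsAnswers): string {
  return template.replace(/\{(\w+)\}/g, (match, slot) =>
    slot in answers ? answers[slot] : match // leave unknown slots untouched
  );
}

const story = personalize(tonightsTemplate, {
  hero: "Maya",
  adjective: "glowing",
  place: "bookshelf",
});
console.log(story);
// "Maya found a glowing door behind the bookshelf and stepped through."
```

Because the template is shared, kids can compare notes at school about "the glowing door story" even though each child heard their own name in it.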
How we built it
Technical Stack:
- Mobile Development: Built entirely with Bolt.new using Expo for rapid cross-platform deployment
- AI Pipeline:
  - GPT-4 for story template creation and personalization
  - ElevenLabs for professional voice narration
  - Luma and Kling for state-of-the-art video generation
  - Flux for character consistency and image creation
- Infrastructure: RenderNetworks for distributed video processing at scale
- Monetization: RevenueCat for subscription management
Architecture Decisions:
- Pre-generated story templates with insertion points rather than fully dynamic generation, ensuring quality and safety
- Modular video segments assembled based on Mad Libs choices, reducing generation time
- Advance rendering: we generate tomorrow's base assets tonight, personalizing only at request time
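The modular-segment decision above can be sketched as a lookup from Mad Libs choices into a catalog of clips rendered in advance, assembled into a playlist at request time. The segment IDs, slots, and fallback rule are assumptions for illustration:

```typescript
// Sketch of modular video assembly: base segments are pre-rendered
// overnight, and the child's Mad Libs choices select which clips to
// stitch together. Catalog contents and IDs are hypothetical.
interface Segment {
  id: string;
  slot: "opening" | "adventure" | "ending";
}

// Pre-rendered catalog (generated "tonight" for tomorrow's story).
const catalog: Record<string, Segment> = {
  "opening/forest": { id: "opening/forest", slot: "opening" },
  "adventure/dragon": { id: "adventure/dragon", slot: "adventure" },
  "adventure/moon": { id: "adventure/moon", slot: "adventure" },
  "ending/home": { id: "ending/home", slot: "ending" },
};

// Assemble a playlist from the choices, falling back to a default
// when a choice has no matching pre-rendered segment.
function assemble(choices: { quest: string }): string[] {
  const adventure =
    catalog[`adventure/${choices.quest}`] ?? catalog["adventure/moon"];
  return ["opening/forest", adventure.id, "ending/home"];
}

const playlist = assemble({ quest: "dragon" });
console.log(playlist.join(", "));
// "opening/forest, adventure/dragon, ending/home"
```

Only the final per-child compositing happens at request time, which is what keeps generation fast.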
Challenges we ran into
1. Character Consistency: Maintaining the child's likeness across different shots and styles was our biggest technical hurdle. We solved this by:
- Creating a character embedding from the initial photo using Flux
- Fine-tuning consistency across Luma and Kling outputs
- Limiting certain angles/movements in our shot lists
2. Content Safety: With children's content, safety is paramount. We implemented:
- Open text input for maximum creativity
- GPT-4 as a smart content moderator that checks responses in real-time
- Gentle redirects when inappropriate content is detected ("How about something more magical?")
- All stories are reviewed by a child development consultant
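The "gentle redirect" flow above could look something like this in outline. In production the check is a GPT-4 call, which can't be reproduced here, so a toy keyword check stands in just to show the control flow; the blocklist is purely illustrative:

```typescript
// Sketch of the gentle-redirect moderation step. The real check is a
// GPT-4 moderation call; this toy keyword classifier is a stand-in so
// the accept/redirect logic is visible. Blocklist is illustrative.
const blocked = ["scary monster attack", "blood"];

function looksInappropriate(input: string): boolean {
  const lower = input.toLowerCase();
  return blocked.some((term) => lower.includes(term));
}

function moderate(input: string): { accepted: boolean; message: string } {
  if (looksInappropriate(input)) {
    // Redirect instead of rejecting outright, keeping the tone warm.
    return { accepted: false, message: "How about something more magical?" };
  }
  return { accepted: true, message: input };
}

console.log(moderate("a friendly dragon who bakes cookies"));
```

The key design choice is that a failed check never produces an error; the child just gets a friendly nudge toward a different answer.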
3. Rendering at Scale: Generating unique videos for potentially thousands of users nightly required:
- RenderNetworks' distributed processing pipeline
- Smart caching of common elements
- Progressive quality (SD preview while HD renders)
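Smart caching of common elements might be keyed on everything that is identical across families (template, style, segment) while excluding anything child-specific, so shared base renders are reused and personalization is layered on afterwards. The key scheme below is an assumption, not the production design:

```typescript
import { createHash } from "crypto";

// Sketch of cache keys for shared base assets: the key covers only
// what is the same for every family, never the child's identity.
// The hashing scheme and key length are hypothetical.
function baseAssetKey(templateId: string, style: string, segment: string): string {
  return createHash("sha256")
    .update(`${templateId}:${style}:${segment}`)
    .digest("hex")
    .slice(0, 16); // shortened for readability
}

// Two families requesting the same night's story share a cache entry...
const a = baseAssetKey("t-night-1", "watercolor", "opening");
const b = baseAssetKey("t-night-1", "watercolor", "opening");
// ...while a different style correctly misses the cache.
const c = baseAssetKey("t-night-1", "anime", "opening");
console.log(a === b, a === c); // true false
```

With daily style rotation, each night's base assets are rendered once per style rather than once per child, which is what makes nightly generation for thousands of users tractable.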
4. The Uncanny Valley: Early tests were "creepy" - too realistic but not quite right. Rather than chasing photorealism, we pivoted to more stylized, animation-focused output that feels magical.
Accomplishments that we're proud of
- Built a complete mobile app in days using Bolt.new, from concept to working prototype
- Achieved character consistency across multiple AI video platforms - something many said was impossible
- Created a sustainable business model with RevenueCat integration from day one
- Maintained sub-3-minute generation time for personalized videos through clever pipeline optimization
- Implemented smart content moderation that preserves creativity while keeping stories child-appropriate
What we learned
The intersection of AI and parenting requires extreme thoughtfulness. Parents want magic, not machinery. Our biggest learning was that the technology should be invisible - parents care about their child's delight, not our render pipeline.
We also discovered that constraints breed creativity. Limiting ourselves to 2-minute stories, 10 Mad Libs prompts, and pre-structured narratives actually made the experience better than unlimited options would have.
Building with Bolt.new dramatically accelerated our timeline - what would have taken weeks took days, letting us focus on the AI pipeline rather than boilerplate code.
What's next for Once Upon AI
- Voice Cloning: Parents can narrate stories in their own voice using ElevenLabs' voice cloning
- Sibling Stories: Multiple children can star together in the same adventure
- Educational Themes: Partner with educators to weave learning into adventures
- Print-on-Demand: Transform favorite digital stories into physical keepsake books
- Community Features: Safe sharing of stories with family members and story-time playdates
- Adaptive Storytelling: Stories that evolve based on child's age and interests over time
Once Upon AI proves that AI can enhance our most human moments - not replace them, but make them more magical than ever before.
Built With
- amazon-web-services
- bolt.new
- elevenlabs
- expo.io
- flux
- gpt-4
- kling
- luma
- node.js
- postgresql
- react-native
- rendernetworks
- revenuecat