Inspiration

We wanted to build an AI-powered storytelling experience where writing a narrative automatically turns into a visual storybook. The goal was to bridge creative writing and generative AI so users can see their imagination come alive through illustrations generated from their own text. The inspiration came from modern AI storytelling tools and the idea of making story creation feel more interactive, immersive, and visual.

What it does

Quill & Canvas lets users write a story in a polished React-based interface and then generate matching illustrations for it using AI. Once a user submits their text, the system sends it to a backend service, enriches it with contextual understanding, and generates images for key scenes. These images are displayed in a storybook-style UI, turning plain text into a visual narrative experience, and the finished book can be downloaded as an e-book PDF.
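In miniature, that flow can be sketched as below. Everything here is a hypothetical stand-in: the real system routes through FastAPI, Backboard.io, and the Leonardo API rather than these stubs, and all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class StoryPage:
    """One illustrated page of the generated storybook (hypothetical shape)."""
    text: str
    image_url: str = ""

def generate_storybook(story: str, illustrate) -> list[StoryPage]:
    """Split a submitted story into scenes and attach one illustration each.

    `illustrate` stands in for the backend step that enriches a scene with
    context and requests an image from the generation model.
    """
    # Treat each blank-line-separated paragraph as one scene.
    scenes = [p.strip() for p in story.split("\n\n") if p.strip()]
    return [StoryPage(text=s, image_url=illustrate(s)) for s in scenes]
```

With a stub illustrator, `generate_storybook("A fox found a grape.\n\nA crow watched.", lambda s: "img://" + s[:5])` yields two pages, each pairing a scene with an image reference.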

How we built it

We built the application using a full-stack architecture. The frontend is developed in React, where the UI handles story input, state management, and rendering of generated images. The backend is built with FastAPI, which receives story input from the frontend, processes it, and coordinates communication with AI services. We introduced a context layer using Backboard.io to maintain narrative consistency and enrich prompts before sending them to the image generation model. For visual generation, we integrated the Leonardo API, which produces high-quality illustrations based on the processed prompts.
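To give a feel for what the context layer contributes, here is a hedged sketch of prompt enrichment. The function name, context shape, and prompt format are our assumptions for illustration only, not Backboard.io's actual API.

```python
def enrich_prompt(scene: str, context: dict) -> str:
    """Prepend story-wide style and character notes to a scene so every
    image request carries consistent context (illustrative sketch)."""
    notes = [f"Art style: {context.get('style', 'storybook illustration')}"]
    # Only inject descriptions for characters who actually appear in the scene.
    for name, description in context.get("characters", {}).items():
        if name.lower() in scene.lower():
            notes.append(f"{name}: {description}")
    return "; ".join(notes) + ". Scene: " + scene
```

The design choice this illustrates: the image model never sees a bare scene in isolation, so recurring characters and the book's visual style stay stable across generations.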

Challenges we ran into

One of the main challenges was transitioning from a static HTML-based UI to a fully React-driven architecture while preserving the original Quill & Canvas design. Removing inline JavaScript and replacing it with React state management required a full rethink of how the interface behaved. Another challenge was ensuring smooth communication between the frontend, backend, and AI services while keeping latency low. We also had to carefully manage prompt quality and context consistency so that generated images matched the story narrative.

Accomplishments that we're proud of

We successfully built a complete end-to-end AI storytelling system that transforms written stories into illustrated storybooks. We were able to integrate multiple systems including React, FastAPI, Backboard, and Leonardo AI into a cohesive workflow. The UI retains a polished, immersive storytelling experience while being fully dynamic and data-driven. We also established a scalable architecture that can support future expansion into multi-page books and richer media generation.

What we learned

Through this project, we learned how important architecture design is when working with multiple AI services. We gained experience in structuring a full-stack system where the frontend, backend, and AI components each play a distinct role. We also learned how context management significantly improves AI output quality and consistency. On the frontend side, we deepened our understanding of React state management and how to replace imperative DOM logic with declarative UI patterns.

What's next for Quill & Canvas

Next, we plan to expand Quill & Canvas into a fully featured AI storybook platform. This includes supporting multi-page story generation where each scene is automatically split out and illustrated. We also aim to enhance the context system so characters and visual styles remain consistent across entire books. Future improvements may include richer export options beyond the current PDF download, sharing capabilities, and animated or audio-enhanced storytelling for a more immersive experience.
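One possible shape for the planned scene splitting, sketched under the assumption that grouping a fixed number of sentences per page is a reasonable first heuristic (the function and parameter names are hypothetical):

```python
import re

def split_into_pages(story: str, sentences_per_page: int = 3) -> list[str]:
    """Chunk a story into page-sized scene texts, one illustration each.

    Splits on sentence-ending punctuation, then groups consecutive
    sentences into pages of `sentences_per_page`.
    """
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", story.strip())
                 if s.strip()]
    return [" ".join(sentences[i:i + sentences_per_page])
            for i in range(0, len(sentences), sentences_per_page)]
```

A smarter splitter could segment on scene changes rather than sentence counts, but even this simple heuristic would let each page get its own enriched prompt and illustration.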

Built With

- React
- FastAPI
- Backboard.io
- Leonardo AI
