Low child literacy rates are unToddleable! That’s why we need to ToddleTale on the statistics — and the solution. Get more bang for your book with ToddleTale, and Toddle your way to success!


Stimulating toys are crucial to the development of toddlers, who lay down 80% of their neural pathways by the age of 3. Foundations for reading, writing, and communication, as well as for lateral thinking and imaginative creation, are especially important in today’s highly connected and rapidly changing world.

Around 47 million children are still out of [pre-primary] school — a number that has held constant since 2014.


According to these statistics, 47 million children aren’t receiving the education needed to stimulate proper brain development! This stifles their potential, preventing many from reaching higher education and improving their socioeconomic status. That’s what inspired us to build ToddleTale, an educational tool that sparks imagination.

What it does

ToddleTale is a website that allows children (the Rookies of life) to create a personalized picture book just by telling a story out loud. While the child speaks into the microphone, ToddleTale automatically generates pictures and exciting animations as the story unfolds. Not only does this help children develop reading, writing, and pronunciation skills, it also fosters a creative imagination. To supplement the user’s own stories, we’ve included sample stories that can be played back with audio.

How we built it

We used Svelte, a JavaScript component framework, to build our frontend client, and Julia, a high-level scientific computing language, to build our backend natural language processing API.


The Svelte client displays the list of currently available books and lets users create a new one. When the user creates a new book, they are prompted for a title and are then free to conjure any story they can imagine. As they speak, a speech-to-text algorithm runs on the client side to preserve privacy (the audio never leaves the browser), and the resulting text is sent to the Julia API for processing. When a result comes back, we draw the images and animations onto the book interface and let the user continue developing the storyline.
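
As a rough sketch, the client-side capture loop could look like the following (the function names and the `/api/process` endpoint are illustrative assumptions, not the project's actual code). The browser's Web Speech API does the transcription locally, so only text ever reaches the server:

```javascript
// Pull the final (non-interim) transcript pieces out of a recognition
// event's results list. Accepts any array-like of results, where each
// result has an `isFinal` flag and an indexed alternative with `.transcript`.
function collectFinalTranscript(results) {
  let text = '';
  for (let i = 0; i < results.length; i++) {
    if (results[i].isFinal) text += results[i][0].transcript;
  }
  return text.trim();
}

// Browser-only wiring; guarded so the module also loads outside a browser.
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = true;      // keep listening as the story unfolds
  recognition.interimResults = true;  // surface words as they are spoken

  recognition.onresult = async (event) => {
    const sentence = collectFinalTranscript(event.results);
    if (!sentence) return;
    // Only the text leaves the client -- the audio never does.
    const response = await fetch('/api/process', {  // endpoint name assumed
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: sentence }),
    });
    const { images, animation } = await response.json();
    // drawScene(images, animation);  // hypothetical canvas-drawing step
  };

  recognition.start();
}
```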


The API receives text from the client and runs a dependency-parsing algorithm to build a dependency tree, which is then traversed to find the sentence’s subject, object, scenery, and verb. The verb determines which animation to use, and Bing Images is scraped for pictures of the subject, object, and scene. We also use a memory system that recognizes name assignments like “Penny is a penguin,” so we know to use the same image of a penguin any time “Penny” appears as the subject or object.
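
The name-memory idea can be sketched in a few lines (shown here in illustrative JavaScript for brevity, though the actual backend is Julia, and the real system works from the dependency tree rather than a regex):

```javascript
// Remember "X is a Y" assignments so later mentions of X reuse the same image.
const memory = new Map();  // character name -> noun it was assigned to

// Record assignments of the form "Penny is a penguin" / "Otto is an otter".
function remember(sentence) {
  const match = sentence.match(/^(\w+) is an? (\w+)/i);
  if (match) memory.set(match[1].toLowerCase(), match[2].toLowerCase());
}

// When a subject/object is a known name, swap in the remembered noun so the
// same image is fetched every time the character appears.
function resolve(word) {
  return memory.get(word.toLowerCase()) || word;
}

remember('Penny is a penguin');
resolve('Penny');  // -> 'penguin'
resolve('igloo');  // -> 'igloo' (no assignment recorded)
```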

Challenges we ran into

  • A bug with context in Svelte
  • Styling a Svelte component
  • High memory usage when using word embeddings in Julia (solved by limiting the vocabulary to that of a child or teen)
  • High latency when scraping images (solved by caching results)
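
The caching fix for the scraping latency amounts to a simple memoized lookup, sketched here in illustrative JavaScript (the actual backend is Julia; `scrape` stands in for a hypothetical function that hits Bing Images):

```javascript
// Cache scraped image URLs so repeated queries skip the slow scrape entirely.
const imageCache = new Map();  // query string -> image URL

async function fetchImage(query, scrape) {
  if (imageCache.has(query)) return imageCache.get(query);  // fast path
  const url = await scrape(query);  // slow path: only on first lookup
  imageCache.set(query, url);
  return url;
}
```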

Accomplishments that we’re proud of

  • Highly adaptable presentation with creative animations and interactions

What we learned

  • Svelte components
  • HTML Canvas >:)
  • NLP in Julia

Domain.com Domains:

  • LiteratureLetElonMuskGetTo.Space

Built With

  • julia
  • svelte
  • web-speech-api