We know that reading and writing are an important part of a kid's education, but they often end up feeling like a chore, so we wanted to make the process more interactive. Our team focused on this problem to help children keep moving forward in their development, especially during these COVID times.
So we are pushing for a Make-Your-Own-Storybook Adventure: a fill-in-the-blanks game with visuals that fuels a kid's creativity through their own choices. This is modeled after AI Dungeons and Dragons.
What it does
We take contextual data and offer the user three word choices to create their own adventure, with pictures painted on a web app as they play. Additional prompts and story lines are then generated based on the user's previous choices.
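The core game loop can be sketched roughly as follows; `play_round` and the sample words are illustrative stand-ins, not our actual implementation:

```python
def play_round(story, choices, pick):
    """One round of the make-your-own-storybook loop.

    story:   list of words in the story so far (the context)
    choices: the three candidate next words offered to the child
    pick:    index of the word the child chose
    Returns the updated story; the next round's three choices would
    then be generated by the model from this new context.
    """
    return story + [choices[pick]]

# Each choice feeds back into the context for future prompts.
story = ["the", "brave", "knight", "found", "a"]
story = play_round(story, ["dragon", "castle", "map"], pick=0)
# story now ends with "dragon"
```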
How we built it
We built it using natural-language-processing neural networks in the form of LSTMs (long short-term memory), recurrent networks that have feedback as well as feed-forward layers.
Additionally, we have two dense layers: one with a ReLU activation over an embedded linear function, and another with a tanh activation over a corresponding linear function.
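As a rough illustration of the mechanics (not our actual TensorFlow code), here is a toy scalar version of one LSTM step plus the two activation heads described above; all weight names and values are made up:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # One LSTM time step (scalar toy version): each gate combines the
    # current input x with the previous hidden state h_prev (feedback).
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])   # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])   # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])   # output gate
    c_tilde = math.tanh(w["wc"] * x + w["uc"] * h_prev + w["bc"])
    c = f * c_prev + i * c_tilde   # cell state carries long-term memory
    h = o * math.tanh(c)           # hidden state is this step's output
    return h, c

def relu_head(h, weight, bias):
    # Dense layer: ReLU activation on top of a linear function.
    return max(0.0, weight * h + bias)

def tanh_head(h, weight, bias):
    # Dense layer: tanh activation on top of a linear function.
    return math.tanh(weight * h + bias)
```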
Finally, with this stack we were able to output 3-4 predicted scenarios based on the user's input words or sentences, which we treated as input features/context.
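Selecting those 3-4 predictions boils down to taking the most likely candidates from the model's output distribution. A minimal sketch, assuming the network's softmax output has been collected into a word-to-probability dict (the words and numbers here are invented):

```python
def top_predictions(probs, k=3):
    """Return the k most likely next words from a model's softmax output.

    probs: dict mapping candidate word -> probability (hypothetical
    output of the trained network for the current story context).
    """
    return [w for w, _ in sorted(probs.items(), key=lambda kv: -kv[1])[:k]]

# Example: offer the player three choices for the next word.
probs = {"dragon": 0.41, "castle": 0.27, "forest": 0.18, "spoon": 0.14}
choices = top_predictions(probs, k=3)
# choices == ["dragon", "castle", "forest"]
```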
Challenges we ran into
Training on large storybook text files took a long time while yielding only minimal improvements in accuracy.
Hosting on Google Cloud with the Cloud Functions API made it difficult to store and train models in a way that works reliably and keeps improving over time.
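One pattern relevant to serving models from Cloud Functions is caching the loaded model in module-level scope, since globals survive across "warm" invocations. A minimal sketch; `load_model` here is a hypothetical stand-in for restoring trained weights from storage, not our actual code:

```python
# Loaded once per function instance, outside the request handler.
_MODEL = None

def load_model():
    # Hypothetical placeholder: the real service would download and
    # deserialize trained weights from a storage bucket here.
    return {"name": "story-lstm", "loaded": True}

def get_model():
    global _MODEL
    if _MODEL is None:        # cold start: pay the load cost once
        _MODEL = load_model()
    return _MODEL             # warm start: reuse the cached model

def handle_request(context_words):
    model = get_model()
    # ... run inference with `model` on context_words ...
    return {"model": model["name"], "context": context_words}
```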
Accomplishments that we're proud of
We are proud that we were able to generate cohesive sentences and phrases from several different storybook datasets, through careful experimentation with model parameters as well as algorithmic parsing and design of the machine-learning networks.
What we learned
We learned that it is truly difficult to generate sentences that are intriguing and storybook-like, and that planning is needed when working with different GPUs and TensorFlow cores.
What's next for Interactive Stories
We would like to be able to train on more datasets in a more organized and open-source manner, and to support multiple formats including audio.