What if your spoken words could be transformed into dynamic visual worlds in real-time?
What it does
Using voice input, machine learning, and neural-network image generation backed by a dataset of more than 600,000 animated images, we built a hack that forges imaginary worlds you can play around with.
How we built it
Our hack was built in less than a day, so it may be rough around some edge cases. We built it using:
- React
- Google's neural network
- Canvas API
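Roughly, the pieces fit together as a pipeline: a speech transcript gets parsed into scene objects, which are then drawn on a canvas. The sketch below shows that flow under our own assumptions; the function names, sprite list, and layout values are illustrative, not the project's actual code.

```javascript
// Minimal sketch of the voice-to-canvas pipeline (all names illustrative).
// In the browser, the transcript would come from the Web Speech API, e.g.:
//   const rec = new webkitSpeechRecognition();
//   rec.onresult = (e) => render(ctx, parseScene(e.results[0][0].transcript));

// Turn a spoken phrase into a list of scene objects to draw.
function parseScene(transcript) {
  const KNOWN_SPRITES = ["castle", "dragon", "tree", "cloud"]; // hypothetical sprite set
  return transcript
    .toLowerCase()
    .split(/\s+/)
    .filter((word) => KNOWN_SPRITES.includes(word))
    .map((name, i) => ({ name, x: 100 + i * 150, y: 200, scale: 1 }));
}

// Draw each scene object as a placeholder square on a 2D canvas context.
function render(ctx, scene) {
  for (const obj of scene) {
    const size = 64 * obj.scale;
    ctx.fillRect(obj.x, obj.y - size, size, size);
  }
}
```

In a React app, `render` would be called from an effect that holds a ref to the `<canvas>` element, redrawing whenever a new transcript arrives.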
Challenges we ran into
There were a LOT of challenges in making this possible in such a short time. We had trouble drawing illustrations on the canvas, using voice to draw animations in real-time, reacting to different input adjectives like "near" or "far" to adjust relative image sizes, and getting it all to work together.
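One of those challenges, reacting to adjectives like "near" or "far", can be handled with a simple keyword-to-multiplier lookup. This is a sketch with made-up multiplier values; the project's actual mapping isn't documented here.

```javascript
// Map relative-distance adjectives to scale multipliers (values illustrative).
const SIZE_MODIFIERS = { near: 1.5, far: 0.5, huge: 2.0, tiny: 0.25 };

// Scan the spoken phrase for a known adjective and scale the base size.
function applySizeAdjectives(transcript, baseSize) {
  for (const word of transcript.toLowerCase().split(/\s+/)) {
    if (word in SIZE_MODIFIERS) return baseSize * SIZE_MODIFIERS[word];
  }
  return baseSize; // no adjective found: draw at the default size
}
```

So "a far away mountain" with a 64px base sprite would draw at 32px, while "a castle" stays at 64px.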
Accomplishments that we're proud of
Building this in time, the 8-bit-themed website aesthetic, and having an actual working hack to present!
What we learned
The Canvas API is a pain to work with; how did people ever build complex games with it?
What's next for Fantasy.io
Getting that Victory Royale
Music credits:
Wataboi, bellesemijoiasloja, Caffeine Creek Band, Zen_Man