Ideation and Initial Concept Sketches
Today we often focus on digitizing our world. There are wearables to record our biometrics, virtual reality to augment or replace the space right in front of us, and social media to broadcast our thoughts to people far beyond our locality. We sought to reverse this trend and leverage the digital world to bring new experiences to the physical one.
What it does
Jaberwocky plays Pictionary. Piggybacking on the Amazon Echo's speech-to-text and internet capabilities, Jaberwocky interacts with the user by picking a popular word, finding an image of it, and drawing it. The player then guesses what is being drawn until correct, at which point they are congratulated and offered another round.
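The round described above can be sketched as a simple loop. This is an illustrative Python sketch, not the actual Jaberwocky code: the word bank, `get_guess`, and `draw_word` are hypothetical stand-ins for the Alexa skill's speech-to-text callback and the plotter trigger.

```python
import random

# Hypothetical word bank; the real project picks "popular words" online.
WORD_BANK = ["cat", "house", "tree", "boat"]

def play_round(get_guess, draw_word):
    """One round of Pictionary: pick a word, start drawing it, and
    accept guesses until the player says the right word."""
    word = random.choice(WORD_BANK)
    draw_word(word)                          # kick off the drawing machine
    while get_guess().strip().lower() != word:
        pass                                 # keep listening for guesses
    return word                              # correct! congratulate, offer another round
```

In the real system `get_guess` would come from the Echo's speech recognition and `draw_word` would dispatch G-code to the plotter.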
Where it's at
Currently we preload a bank of pictures we pulled off the internet, vectorized, and converted to G-code for our machine.
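The vector-to-G-code step can be sketched as follows. This is a minimal illustration, assuming each vectorized picture is a list of (x, y) polylines in millimetres; the `M3`/`M5` pen-servo commands are an assumption about the firmware dialect, not necessarily what Jaberwocky uses.

```python
def paths_to_gcode(paths, feed_rate=1000):
    """Convert vectorized image paths (lists of (x, y) points, in mm)
    into simple pen-plotter G-code: travel with the pen up (G0),
    draw with the pen down (G1)."""
    lines = ["G21", "G90"]                   # millimetres, absolute positioning
    for path in paths:
        x0, y0 = path[0]
        lines.append("M5")                   # pen up (assumed servo command)
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")
        lines.append("M3")                   # pen down
        for x, y in path[1:]:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed_rate}")
    lines.append("M5")                       # finish with the pen up
    return lines
```

Each preloaded picture then reduces to a flat list of moves the Arduino can step through.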
How we built it
Jaberwocky is made from laser-cut acrylic, an Arduino, an Adafruit motor shield, two stepper motors, a micro servo, a splash of Delrin, rubber bands, and some VHB tape.
Challenges we ran into
Most of this project was very new to us. None of us had much Arduino or coding experience. Our members come from the mechanical and robotics fields, dominated by such gems as MATLAB and LabVIEW, so building the processing end of the game was a journey.
Accomplishments that we're proud of
We all made massive strides in our skills and created a compelling play experience in the process.
What we learned
How to use an Arduino, Alexa, serial port communication, 5-pin steppers (which are evil), and so much more.
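The serial-port side of the lessons above can be sketched with a send-and-wait handshake. This is an illustrative sketch only: `send` and `receive` stand in for a serial library's write/readline, and the `ok` acknowledgement mirrors common hobby-CNC firmware, an assumption about our setup.

```python
def stream_gcode(lines, send, receive):
    """Stream G-code lines to the Arduino one at a time, waiting for
    an 'ok' acknowledgement after each line so the host never
    overruns the microcontroller's tiny serial buffer."""
    sent = 0
    for line in lines:
        send((line.strip() + "\n").encode("ascii"))
        while receive().strip() != b"ok":    # block until the move is accepted
            pass
        sent += 1
    return sent
```

With a real port, `send`/`receive` would wrap something like pyserial's `Serial.write` and `Serial.readline`.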
What's next for Project Jaberwocky
There is a lot of debugging, product integration, usability work, and aesthetic refinement still to do.