Inspiration

We were inspired by the interactive cooking experience presented by Walmart, and we wondered: why not make the process of drawing art just as interactive and creative, with some help from artificial intelligence?

What it does

Waveart first detects the user's mood (joyful, surprised, sorrowful, or angry). The game then asks what sort of scenery the user would like to create, e.g. choosing between city and nature. Every picture generated is determined by the user's mood. At the end, the user is presented with a customized picture created just by waving a hand.

How we built it

We use Google's computer vision to detect the user's mood. The mood factor is then fed into the GIPHY API to find a matching picture. Meanwhile, the user selects which picture to display on the screen just by waving their arms, tracked with TensorFlow.
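The mood-to-picture step can be sketched as follows. The mood-to-search-term mapping and the function name `giphy_search_url` are our own illustrations, not the project's actual code; the endpoint and query parameters (`api_key`, `q`, `limit`) follow GIPHY's public search API.

```python
from urllib.parse import urlencode

# Illustrative mapping from detected mood to a scenery search term.
MOOD_TERMS = {
    "joy": "sunny", "sorrow": "rainy",
    "anger": "stormy", "surprise": "aurora",
}

def giphy_search_url(mood, scene, api_key="YOUR_KEY"):
    """Combine the detected mood and the chosen scene ('city' or
    'nature') into a GIPHY search request URL."""
    query = f"{MOOD_TERMS.get(mood, '')} {scene}".strip()
    params = urlencode({"api_key": api_key, "q": query, "limit": 5})
    return f"https://api.giphy.com/v1/gifs/search?{params}"
```

The returned URL can then be fetched to get a list of candidate GIFs for the user to pick from by waving.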

Challenges we ran into

Writing the algorithm that determines which direction the arm is waving with PoseNet; finding the right image based on the mood factor.
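The core of the wave-direction problem can be sketched like this: PoseNet reports keypoint positions (including the wrist) each frame, so the net horizontal displacement over a short window tells you the direction. The function name and threshold below are illustrative, not the project's actual code.

```python
def detect_wave_direction(wrist_xs, threshold=40):
    """Classify a wave from recent wrist x-positions (pixels,
    increasing left-to-right). Returns 'left', 'right', or None
    when the net movement is below the threshold."""
    if len(wrist_xs) < 2:
        return None
    displacement = wrist_xs[-1] - wrist_xs[0]
    if displacement > threshold:
        return "right"
    if displacement < -threshold:
        return "left"
    return None
```

In practice, the last handful of frames of wrist positions would be fed in on every tick, with the threshold tuned to ignore jitter.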

Accomplishments that we're proud of

We wrote a decent algorithm that determines whether your arm is waving left or right, so the user can select a picture just by waving a hand. We figured out how to combine multiple mood factors from the Google computer vision API to find the best picture from the GIPHY API. The drawing also works properly!
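Combining the mood factors can be sketched as below. Google's Vision face detection reports joy, sorrow, anger, and surprise as likelihood strings (`VERY_UNLIKELY` through `VERY_LIKELY`); the numeric weights and the simplified `{mood: likelihood}` input shape are our own assumptions for illustration.

```python
# Illustrative weights for the Vision API's likelihood enum values.
LIKELIHOOD_SCORE = {
    "UNKNOWN": 0, "VERY_UNLIKELY": 0, "UNLIKELY": 1,
    "POSSIBLE": 2, "LIKELY": 3, "VERY_LIKELY": 4,
}

def dominant_mood(face):
    """face: dict mapping mood name to a likelihood string, e.g.
    {'joy': 'VERY_LIKELY', 'sorrow': 'UNLIKELY', ...}.
    Returns the mood with the highest likelihood score."""
    return max(face, key=lambda mood: LIKELIHOOD_SCORE[face[mood]])
```

The winning mood is what gets turned into the GIPHY search term.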

What we learned

AI technology is evolving at a rapid rate, and we need tons of caffeine just to try to keep up with it.

What's next for Waveart

Add speech-to-text to our drawing tool so that the user can choose what to draw, or name an element to draw. Implement hand swiping instead of arm waving, since it is much easier for the user.

Built With

  • computer-vision
  • hand-motion
  • speech-to-text
  • tensorflow