Inspiration

We were inspired by the Iron Man movies, where Tony Stark interacts with Jarvis, which seemingly records his every movement and matches gestures to actions.

What it does

Virtual Artist reads hand landmarks from the user's webcam and draws on a virtual canvas with varying pen size and color.

How we built it

We built this project by researching which libraries could track our hand movements through a webcam. We found OpenCV and MediaPipe, and from there designed a program that reads hand "landmarks" (i.e., the locations of several key points on the hand) and draws them onto a virtual canvas.
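The drawing geometry described above can be sketched as below. This assumes MediaPipe Hands' convention of landmark coordinates normalized to [0, 1] (landmark 8 is the index fingertip, landmark 4 the thumb tip); the function names, the pinch-distance-to-pen-size mapping, and its 0.4 span constant are our own illustration, not the project's actual code.

```python
import math

def to_pixels(norm_x, norm_y, width, height):
    """Map a MediaPipe-style normalized landmark (0..1) to canvas pixel coordinates."""
    return int(norm_x * width), int(norm_y * height)

def pen_size(thumb, index, min_size=2, max_size=30):
    """Derive pen thickness from the thumb-index pinch distance.

    `thumb` and `index` are (x, y) normalized coordinates; a wider pinch
    gives a thicker stroke, clamped to [min_size, max_size]. The 0.4
    "full pinch span" is an illustrative assumption.
    """
    dist = math.hypot(index[0] - thumb[0], index[1] - thumb[1])
    size = min_size + dist * (max_size - min_size) / 0.4
    return max(min_size, min(max_size, int(size)))
```

In the actual loop, each webcam frame would be read with OpenCV's `cv2.VideoCapture`, passed to MediaPipe's `Hands.process`, and a line segment drawn (e.g., with `cv2.line`) between the fingertip pixels of successive frames, with the thickness computed as above.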

Challenges we ran into

One of the main challenges we ran into was Python syntax and usability, as most of our members are new to Python. In addition, most of us had never experimented with AI or webcam motion recognition. Another problem (with our original plan of building an ASL interpreter) was the lack of open-source data: despite researching companies like SignAll, we were unable to find open datasets that would let us train an AI to recognize and interpret sign language on top of the canvas tool.

Accomplishments that we're proud of

We are proud that we built a canvas that draws hand movements using geometry, and that we learned Python, OpenCV, and MediaPipe largely on the spot to get the project done. For most of us, this is our first or second major hackathon, and our first project written in Python, so shipping a working product is an accomplishment we are really proud of.

What we learned

We learned the value of staying focused and on task, but also of not forgetting why we are doing what we are doing. At times we found ourselves solving problems that weren't pivotal to our product's success; from this experience we learned an important lesson about task prioritization and time management.

What's next for Virtual Artist

Next for Virtual Artist is training an AI on ASL datasets to build a platform teachers can use in schools to communicate more effectively with students who are deaf or hard of hearing. Despite our efforts, Virtual Artist can also be polished further to be more user-interactive and "clean" overall. As a group, we intend to deepen our knowledge of Python, OpenCV, and MediaPipe and build more projects with these tools.

Built With

Python, OpenCV, MediaPipe
