Inspiration
I am a computer vision and machine learning enthusiast. I was browsing machine learning pages on Instagram when I came across a model made by Devashi Jain, a third-year IIT student. It captured the movement of a person in a video playing on the computer. So I thought to myself: what if I could make one that I can control in real time, using a webcam?
What it does
Running the program from the terminal opens your webcam and a GUI window. If you are in the webcam's frame, it estimates your pose, marks your joints, and makes the character in the GUI mirror your movements.
How we built it
I built it in Python. The modules I used for this project were OpenCV, MediaPipe, and Tkinter.
Challenges we ran into
Being a beginner in the field of data science, this was really difficult for me. I did lots and lots of research first: how OpenCV works, and how to create a graphical user interface using Tkinter and work with MediaPipe. With little experience in the field I had to read a great deal, and I ran into so many errors that at times I felt like giving up.
Accomplishments that we're proud of
When I finally got the output working, it was really exciting. Having no mentor and no teammates, building my very first model all on my own gave me a lot of joy and excitement. I am really proud of my work.
What we learned
I gained a good working knowledge of the Python module OpenCV and first-hand experience building a graphical user interface with Tkinter. I also learned to have patience and believe in myself.
What's next for Control-The-Character
This can be improved further, and a lot of amazing things can be done with it. I would love to hear ideas from anyone on how to improve my model.
