Inspiration

One of our teammates had a strong interest in computer vision, and another wanted to help people with disabilities get through their day-to-day lives more easily. Combining these two interests, we settled on this project.

What it does

It uses your video camera to identify your hands with computer vision and performs an action based on a limited set of gestures.
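The core of this behavior is a mapping from recognized gesture labels to actions. The sketch below is a hypothetical illustration, not the project's actual code: the gesture names happen to match MediaPipe's canned gesture categories, but the action names are made up for the example.

```python
# Hypothetical gesture-to-action table. The gesture labels mirror
# MediaPipe's canned gesture categories; the actions are illustrative.
ACTIONS = {
    "Thumb_Up": "scroll_up",
    "Thumb_Down": "scroll_down",
    "Open_Palm": "pause",
    "Closed_Fist": "click",
}

def handle_gesture(gesture_name):
    """Map a recognized gesture label to an action name, or None if unmapped."""
    return ACTIONS.get(gesture_name)
```

Keeping the mapping in one dictionary makes it easy to add or rebind gestures without touching the recognition code.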

How we built it

Using Google's MediaPipe Tasks API, we implemented computer vision that allowed us to capture user input from hand gestures.
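Since the Built With list mentions multi-threading, here is a minimal stdlib sketch of the kind of producer/consumer split such a pipeline implies: one thread reads camera frames, another runs recognition on them. The frame source and recognizer below are stand-ins, not the project's actual MediaPipe calls.

```python
import queue
import threading

def capture_frames(frame_source, frames):
    """Producer: read frames from the camera and enqueue them."""
    for frame in frame_source:
        frames.put(frame)
    frames.put(None)  # sentinel: signal that no more frames are coming

def recognize_gestures(frames, recognize, results):
    """Consumer: run gesture recognition on each queued frame."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(recognize(frame))

# Stand-ins for the real camera feed and MediaPipe recognizer.
fake_frames = ["frame0", "frame1", "frame2"]
fake_recognize = lambda f: f.upper()

frames = queue.Queue()
results = []
producer = threading.Thread(target=capture_frames, args=(fake_frames, frames))
consumer = threading.Thread(
    target=recognize_gestures, args=(frames, fake_recognize, results)
)
producer.start()
consumer.start()
producer.join()
consumer.join()
print(results)  # ['FRAME0', 'FRAME1', 'FRAME2']
```

The queue decouples capture speed from recognition speed, so a slow model doesn't block the camera thread.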

Challenges we ran into

Due to the lack of documentation, we had to scour the internet for any usable example code. We also tried ChatGPT, but the incorrect code it produced didn't help us understand the API's format.

Accomplishments that we're proud of

We were able to implement all of this despite the lack of documentation, using an API we had zero experience with.

What we learned

We learned a new API that uses computer vision to detect hands and hand gestures, as well as how to work as a group on a project of this size.

What's next for HandDetection

We plan to convert the Windows cursor into a Windows pen input, so the system can act as a virtual tablet and take advantage of the virtual keyboard.

Built With

  • google-mediapipe
  • google-task
  • hopes-and-dreams
  • multi-threading
  • python