Inspiration

My interest in ML/AI and in how we can use statistics and linear algebra to make predictions and let a computer see things.

What it does

This project tracks hand landmarks as x, y, z coordinates. It also detects sign language signs and hand gestures through the camera, displaying on screen what you're doing along with a confidence percentage.
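
A minimal sketch of this landmark-and-gesture loop using MediaPipe's hands solution. The `landmarks_to_features` helper and the demo loop are illustrative assumptions about how the pieces fit together, not the project's exact code:

```python
def landmarks_to_features(hand_landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a single feature vector
    that a gesture classifier can consume."""
    return [coord for point in hand_landmarks for coord in point]

def run_camera_demo():
    # Requires: pip install mediapipe opencv-python (imports kept inside
    # the function so the helper above stays usable without them).
    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            points = results.multi_hand_landmarks[0].landmark
            features = landmarks_to_features([(p.x, p.y, p.z) for p in points])
            # `features` would be fed to the sign/gesture classifier here,
            # and the predicted label + confidence drawn onto the frame.
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
```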

How we built it

I built it using the MediaPipe API, TensorFlow, and OpenCV, along with the libraries they provide. I also used a program called LabelImg to mark which sign language sign I'm performing in each image and assign that image a label for the training data.
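
LabelImg saves each annotation as a Pascal VOC XML file next to the image. A small sketch of reading those labels back out for training (the exact tag layout matches LabelImg's default VOC output; the function name is mine):

```python
import xml.etree.ElementTree as ET

def parse_labelimg_xml(xml_text):
    """Parse a LabelImg (Pascal VOC) annotation into a list of
    (label, (xmin, ymin, xmax, ymax)) pairs."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        label = obj.find("name").text
        bb = obj.find("bndbox")
        box = tuple(int(bb.find(k).text) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, box))
    return boxes
```

Running this over every XML file in the dataset folder yields the class names and bounding boxes needed to build the training set.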

Challenges we ran into

By far the hardest thing I ran into was taking a prebuilt model provided by MediaPipe and retraining it with my own photos to get more data and better accuracy. I hit a lot of errors along the way that took hours to fix.
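
The general retraining pattern here is transfer learning: freeze a pretrained backbone and train only a new classification head on your own photos. A hedged Keras sketch of that idea (MobileNetV2 stands in for the pretrained model purely as an illustration, and `make_label_map` is a hypothetical helper, not this project's actual code):

```python
def make_label_map(class_names):
    """Map sign class names (e.g. ['hello', 'thanks']) to integer labels."""
    return {name: idx for idx, name in enumerate(sorted(class_names))}

def build_sign_classifier(num_classes, input_shape=(224, 224, 3)):
    # Requires: pip install tensorflow (import kept inside the function).
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet"
    )
    base.trainable = False  # freeze the pretrained backbone
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

With the backbone frozen, `model.fit` only updates the small dense head, so even a modest set of labeled photos can improve accuracy without retraining the whole network.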

Accomplishments that we're proud of

I improved my knowledge of machine learning and AI, and did research on a very niche topic; it's hard to find information on the bugs in your code when you can't figure them out yourself.

What we learned

What I learned is that the people who made the actual algorithms behind all of this are absolute geniuses. Props to them.

What's next for Sign Language Detection

What's next for this project is to retrain the model with my own data and get it working with my camera. Once that works, I'll add more data to it. I'd also like to extend the model to object detection and more.

Built With

MediaPipe, TensorFlow, OpenCV, LabelImg
