Inspiration

We were inspired by the global COVID-19 pandemic and the way it has reshaped how we live our lives. In keeping with this change, we thought of ideas that would change the way restaurants operate while ensuring safety, efficiency, and practicality. Thus, Touchless was born: an efficient, touch-free way of placing orders.

What it does

The site uses machine learning to locate any hands in frame and determine which gesture is being held up: 1 finger, 2 fingers, 3 fingers, 4 fingers, 5 fingers, thumbs up, thumbs down, or a closed fist. It uses this information to navigate a menu, letting you pick the category, item, and side you want and then add it to your order list.
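
As a minimal sketch of the gesture-to-navigation idea described above: the gesture labels and the specific action assigned to each gesture here are assumptions for illustration, not the project's actual code.

```typescript
// Illustrative only: gesture names and the gesture-to-action mapping are
// assumptions, not the project's real identifiers.
type Gesture =
  | "one" | "two" | "three" | "four" | "five"
  | "thumbs_up" | "thumbs_down" | "fist";

type MenuAction =
  | { kind: "select"; index: number } // 1-5 fingers pick a menu option
  | { kind: "confirm" }               // thumbs up adds the item
  | { kind: "back" }                  // thumbs down goes back a level
  | { kind: "clear" };                // closed fist resets the selection

function gestureToAction(g: Gesture): MenuAction {
  switch (g) {
    case "one":   return { kind: "select", index: 0 };
    case "two":   return { kind: "select", index: 1 };
    case "three": return { kind: "select", index: 2 };
    case "four":  return { kind: "select", index: 3 };
    case "five":  return { kind: "select", index: 4 };
    case "thumbs_up":   return { kind: "confirm" };
    case "thumbs_down": return { kind: "back" };
    case "fist":        return { kind: "clear" };
  }
}
```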

How we built it

First, we took over 1,000 images of our hands making various gestures and manually labelled them as training data. Next, we used RoboFlow to train a TensorFlow object detection model for over 30,000 iterations so it could detect where hands were held up and which gesture they were making. We then added this model to a website built with React, which feeds a live camera stream into the model and uses its predictions to navigate the menu on the right.
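
Below is a rough sketch of how a React component can stream the webcam into a video element and poll a detector for gestures. The component and prop names are hypothetical, and the `detect` callback stands in for the RoboFlow-trained TensorFlow model, whose real API may differ.

```tsx
import { useEffect, useRef } from "react";

type Props = {
  // Stand-in for the trained model; this signature is an assumption.
  detect: (video: HTMLVideoElement) => Promise<string | null>;
  onGesture: (gesture: string) => void;
};

// Streams the webcam into a <video> element, polls the detector on each
// animation frame, and reports any detected gesture to the parent menu.
export function CameraFeed({ detect, onGesture }: Props) {
  const videoRef = useRef<HTMLVideoElement>(null);

  useEffect(() => {
    let cancelled = false;

    async function start() {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      const video = videoRef.current;
      if (!video) return;
      video.srcObject = stream;
      await video.play();

      const loop = async () => {
        if (cancelled) return;
        const gesture = await detect(video);
        if (gesture) onGesture(gesture);
        requestAnimationFrame(loop);
      };
      loop();
    }

    start();
    return () => { cancelled = true; };
  }, [detect, onGesture]);

  return <video ref={videoRef} muted playsInline />;
}
```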

Challenges we ran into

Getting the AI to properly detect hands was very difficult. It often would not detect them at all, detected the wrong gesture, or detected random things in the background as hands. We had to train it for multiple hours for it to be somewhat accurate, and even now it is finicky. We think this is due to not having enough variety in the training data, since we were the only ones providing images, as well as to variable lighting conditions and backgrounds. It was also fairly difficult to connect the camera feed on the website to the model.

Accomplishments that we're proud of

We are proud of making a website where you can navigate through a menu and ‘order’ something without much difficulty, and without ever touching a mouse or any other input device. We are also proud of building a reasonably accurate machine learning model that could be helpful in real-life scenarios. Finally, we hosted our website on Firebase using a domain we got through domain.com.

What we learned

We learned how to use TensorFlow and RoboFlow for image detection and how to use the resulting model in a hosted website. We also got more familiar with nested JSON structures, which we used to store the menu, and with navigating through their layers.
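
As a small sketch of the nested-JSON idea: here is one way such a menu could be stored and walked layer by layer. The node shape, menu contents, and the `navigate` helper are assumptions for illustration rather than the project's actual structure.

```typescript
// Illustrative shape only: the real menu keys and items are assumptions.
type MenuNode = {
  name: string;
  children?: MenuNode[]; // categories, items, and sides nest as layers
};

const menu: MenuNode = {
  name: "Menu",
  children: [
    {
      name: "Burgers",
      children: [
        { name: "Cheeseburger", children: [{ name: "Fries" }, { name: "Salad" }] },
        { name: "Veggie Burger" },
      ],
    },
    { name: "Drinks", children: [{ name: "Soda" }, { name: "Water" }] },
  ],
};

// A path of child indices (e.g. chosen with 1-5 finger gestures)
// walks down the nested structure one layer at a time.
function navigate(root: MenuNode, path: number[]): MenuNode | undefined {
  return path.reduce<MenuNode | undefined>(
    (node, index) => node?.children?.[index],
    root,
  );
}

console.log(navigate(menu, [0, 0])?.name); // "Cheeseburger"
```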

What's next for Touchless

Next, we would like to improve the model's accuracy by providing more data and more varied data sets, and by letting it train for longer. We would also like to expand the model to detect whether or not people are wearing masks. Finally, we want to turn it into a more accessible app or website that lets businesses easily create and customize their own menu and add it to the site.

Built With

React, TensorFlow, RoboFlow, Firebase, domain.com