Inspiration

The extensive use of touchscreens for human-computer interaction has made them one of the biggest sources of touch-mediated disease transmission; in an airport, a touchscreen has been estimated to be about 30 times riskier for pathogen transfer than anything else. While you may be able to bypass the local touchscreen kiosks with the new check-in experience American Airlines has rolled out, if you fly internationally or transfer with another airline, you will eventually end up in front of a kiosk or a service desk. The same approach has potential in other parts of the flying experience: this technology could replace the need for fliers to touch the in-flight entertainment (IFE) screen. If we want the airline industry to return to healthy passenger levels, smart initiatives like these could go a long way.

What it does

Using a facial detection algorithm, our software recognizes when a customer has been facing the kiosk for more than 3 seconds, after which the user is greeted with a tutorial screen. We also use face detection to retrieve the customer's Passenger Name Record. On the tutorial screen, the user is first asked to hold their hand in the center of the camera's view; every gesture starts from this position. The user swipes left to proceed and right to go back. After the brief tutorial, the user is greeted with a welcome message and asked to select a language, which they do by swiping up or down. With these simple, intuitive gestures, the customer can complete the entire check-in process without touching the screen.
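
As a rough illustration of the dwell-time trigger, here is a minimal sketch assuming OpenCV's bundled Haar cascade face detector and a default webcam; the cascade choice and the way the 3-second timer is tracked are assumptions for the example, not necessarily what runs on the kiosk.

```python
# Hypothetical sketch: show the tutorial once a face has been detected
# continuously for 3 seconds, using OpenCV's bundled Haar cascade.
import time
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
face_since = None          # timestamp of the start of the current face streak
DWELL_SECONDS = 3

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        face_since = face_since or time.time()
        if time.time() - face_since >= DWELL_SECONDS:
            print("Face held for 3 seconds -> show tutorial screen")
            break
    else:
        face_since = None  # streak broken, reset the timer

cap.release()
```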

How we built it

Our gesture-driven contactless check-in kiosk is built on OpenCV. We first apply K-means segmentation to the video feed with K=2, which separates each frame into two clusters by color. We then use contour detection to find the outline of the customer's hand. By tracking the position of the tallest finger across frames, we determine the direction of each swipe. Through a socket connection to our frontend, we emit a message that tells the app how to navigate.
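
Below is an illustrative sketch of that pipeline rather than our exact code: K-means (K=2) color segmentation, contour detection, and swipe direction from the horizontal motion of the topmost contour point, with the result emitted over a socket. The python-socketio client, the server address, the "gesture" event name, and the thresholds are assumptions for the example.

```python
# Sketch of the gesture pipeline: K-means segmentation -> contours -> fingertip
# tracking -> socket message. Event name and server URL are assumed.
import cv2
import numpy as np
import socketio

sio = socketio.Client()
sio.connect("http://localhost:5000")   # assumed address of the frontend's socket server

def segment_hand(frame, k=2):
    """Cluster pixels into k color groups and return a binary mask of one cluster."""
    pixels = frame.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
    # Which cluster is the hand depends on the scene; cluster 1 is assumed here.
    return (labels.reshape(frame.shape[:2]) == 1).astype(np.uint8) * 255

def fingertip(mask):
    """Topmost point of the largest contour, i.e. the tallest finger."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    return tuple(hand[hand[:, :, 1].argmin()][0])   # point with the smallest y

cap = cv2.VideoCapture(0)
prev_x = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tip = fingertip(segment_hand(frame))
    if tip and prev_x is not None:
        dx = tip[0] - prev_x
        if abs(dx) > 40:                             # crude per-frame swipe threshold
            sio.emit("gesture", "left" if dx < 0 else "right")
    prev_x = tip[0] if tip else None
```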

Challenges we ran into

The two biggest challenges were getting the OpenCV video feed to reliably detect fingers and implementing Socket.io as the communication layer between the feed and the frontend. The video feed was especially problematic because it required very specific lighting and a careful setup; at one point we had two people holding up a blanket to block the sun while standing on chairs to cover the windows.

Accomplishments that we're proud of

Having such tough challenges made overcoming them all the more rewarding. We were ecstatic when we found the perfect camera angle that let us detect the fingertips. Just as we were close to giving up, our eureka moment came when we finally found an implementation that worked.

What we learned

We learned that teamwork is everything; no one person can do it all. We also learned that taking breaks saved us: whether it was a one-hour nap or a game of pool, stepping away was a great way to let our minds reset. We learned about OpenCV, connecting a frontend to a backend, and UI/UX design, and, most importantly, we learned how to problem-solve.

What's next for Swiper Go Swiping

Currently, our hand detection algorithm is not very robust: changes in the background can easily derail the prediction. A lot of research has been done on hand pose detection; one model that caught our attention is SRHandNet, which is capable of real-time 2D detection of 21 hand joint positions. Additionally, the facial recognition component could be developed further to retrieve the Passenger Name Record. With this information, we could build far more sophisticated and robust gestures and greatly improve the user experience.
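
As a sketch of where this could go, suppose a hand-pose model (such as SRHandNet) returns 21 (x, y) keypoints per frame; a swipe could then be classified from the index-fingertip trajectory instead of a raw contour. The keypoint index, input format, and thresholds below are assumptions for illustration, not the model's actual output API.

```python
# Hypothetical swipe classifier over per-frame hand keypoints.
import numpy as np

INDEX_TIP = 8   # index fingertip position in the common 21-keypoint hand layout (assumed)

def classify_swipe(trajectory, threshold=60):
    """trajectory: list of (21, 2) arrays of keypoints, one per frame."""
    tips = np.array([frame[INDEX_TIP] for frame in trajectory])
    dx, dy = tips[-1] - tips[0]
    if abs(dx) < threshold and abs(dy) < threshold:
        return None                      # hand did not move far enough to count
    if abs(dx) > abs(dy):
        return "left" if dx < 0 else "right"
    return "up" if dy < 0 else "down"    # image y grows downward
```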

How to run our project

  1. Clone our repository: https://github.com/jshih08/tamuhack2021
  2. To run the backend, install dependencies with "pip3 install -r requirements.txt", then run "python3 server.py".
     * Our gesture detection algorithm works best when only the hand is in the view frame and the background color contrasts with your skin.
  3. To run the frontend, cd into the frontend directory, install dependencies with "yarn install", and start it with "yarn start".
    

Built With

OpenCV, Python, Socket.io, Yarn