Inspiration

After trying out Synaptics' touchscreen development kit, we were inspired to make use of the rich data it provides. We noticed that it was fairly easy to tell the difference between various objects, including our respective palms.

What it does

The Handprint system uses the shape of your palm to identify you. It can tell the difference between non-hand objects and the hands of each user registered in its system.

How we built it

On the hardware side, we used Synaptics' Development Kit to get more "raw" touchscreen sensor data than just the touch coordinates made available on other devices. On the software side, we implemented a convolutional neural network classifier in Python using Google's TensorFlow library to classify each frame.
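
For reference, here is a minimal sketch of the kind of per-frame classifier we mean, written with TensorFlow's Keras API. The frame dimensions, layer sizes, and class count are illustrative placeholders, not the exact values from our model.

    # Minimal sketch of a per-frame CNN classifier in TensorFlow/Keras.
    import tensorflow as tf

    FRAME_H, FRAME_W = 64, 36  # illustrative sensor resolution, not Synaptics' actual values

    def build_model(num_classes=3):
        # Example classes: Ehsan's hand, Evan's hand, non-hand object.
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(FRAME_H, FRAME_W, 1)),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])

    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])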

Challenges we ran into

Our main challenge was finding an easy way for the user to generate enough training data to train the neural network. The approach we settled on grabs frames from the sensor as the user moves their hand around the screen.
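
Conceptually, the capture step looks like the sketch below. read_frame() is a hypothetical stand-in for however the dev kit exposes raw sensor frames, and the timing parameters are arbitrary.

    # Sketch of the capture loop: record labelled frames while the user
    # moves their hand around the screen.
    import time
    import numpy as np

    def read_frame():
        # Placeholder: return one raw capacitance image from the sensor.
        raise NotImplementedError("replace with the dev kit's frame-read call")

    def collect_samples(label, seconds=30, fps=10):
        frames, labels = [], []
        end = time.time() + seconds
        while time.time() < end:
            frames.append(read_frame())
            labels.append(label)
            time.sleep(1.0 / fps)
        return np.stack(frames), np.array(labels)

    # e.g. hands, labels = collect_samples(label=0)  # 0 = this user's hand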

Accomplishments that we're proud of

We trained our proof-of-concept model to consistently identify Ehsan's hand while rejecting Evan's hand and other objects.

Try it

If you have the Synaptics Development Kit touchscreen set up, you can clone our project at link, run server.py, and open localhost:8080/demo.html to view the touchscreen input.

What we learned

The combination of deep convolutional neural nets and high-dimensional touchscreen data offers great promise for authentication systems.

What's next for Handprint

In the real world, there is not just one wrong hand for the system to reject -- it has to reject a vast set of hands it has never seen before. To reach that level of robustness, we need a way to automatically generate a large set of randomized "wrong hands" for training.
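
As a rough illustration of what that could look like, the sketch below randomly rescales and mirrors recorded hand frames to produce hand-like negatives. This is an idea, not something we have built, and the specific transforms are placeholders.

    # One possible (unimplemented) way to synthesize "wrong hands": randomly
    # rescale and mirror recorded hand frames so the classifier sees plausible
    # hand shapes that no longer match the enrolled user. A fuller version
    # might add elastic warps or per-finger distortions.
    import numpy as np
    import tensorflow as tf

    def random_wrong_hand(frame, rng=np.random.default_rng()):
        # frame: 2-D capacitance image of a real hand (H x W).
        h, w = frame.shape
        img = tf.convert_to_tensor(frame[..., np.newaxis], dtype=tf.float32)
        scale = rng.uniform(0.8, 1.2)                      # mimic larger/smaller hands
        img = tf.image.resize(img, (int(h * scale), int(w * scale)))
        img = tf.image.resize_with_crop_or_pad(img, h, w)  # back to original size
        if rng.random() < 0.5:                             # mirror: left vs. right hand
            img = tf.image.flip_left_right(img)
        return tf.squeeze(img, axis=-1).numpy()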
