H-Auth, short for Handwriting Authentication, was born from the need for a secure authentication scheme that cannot be broken by a brute-force program. Two-factor authentication partly solves this, but we wanted something stronger, so we base our encryption on a user's specific handwriting style.
What it does
The program learns how a user writes their letters through machine learning on the vectors taken in from the Synaptics touchpad. It can then encrypt any file into a zip; when the user wants to unzip the file, they receive a captcha phrase on their phone, which they must write on the touchpad to authenticate. If the handwriting is not consistent with the user's, the file will not unzip.
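The unlock flow can be sketched roughly as follows. This is a minimal illustration, not our actual implementation: the function names (`generate_captcha`, `unlock`) are our own, and the handwriting check is stubbed out as a `verify` callback standing in for the Azure model.

```python
import io
import random
import string
import zipfile

def generate_captcha(length=6):
    # Random phrase the user must handwrite to prove their identity.
    return "".join(random.choices(string.ascii_lowercase, k=length))

def unlock(zip_bytes, captcha, strokes, verify):
    # `verify(captcha, strokes)` stands in for the handwriting model: it
    # returns True only when the strokes spell the captcha phrase in the
    # enrolled user's style. On failure, the archive stays sealed.
    if not verify(captcha, strokes):
        return None
    archive = zipfile.ZipFile(io.BytesIO(zip_bytes))
    return {name: archive.read(name) for name in archive.namelist()}
```

The key point is that decryption is gated on the verifier's decision, so a correct phrase in the wrong handwriting still fails.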
How we built it
We used Python as our backend language. We took data frames from the Synaptics touchpad and transformed them into vectors, which we normalized by size and location on the pad. We then fed the data into a neural network on the Azure Machine Learning platform to distinguish handwriting styles. When a user writes their captcha phrase, the data is sent to Azure, which determines whether the handwriting is the user's.
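The size-and-location normalization step can be sketched like this: center each stroke at the origin so position on the pad does not matter, then rescale so writing size does not matter either. The function name and the exact scaling rule are our own illustration, not the project's exact code.

```python
import numpy as np

def normalize_strokes(points):
    # `points` is an (n, 2) array of (x, y) touchpad samples for one letter.
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)        # remove location: center at the origin
    scale = np.abs(pts).max()      # largest extent from the center
    if scale > 0:
        pts /= scale               # remove size: fit into [-1, 1]
    return pts
```

After this step, the same letter written small in one corner or large in the middle of the pad produces comparable vectors for the network.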
Challenges we ran into
We ran into problems where some letters were hard to distinguish, because they were written in very similar ways across data sets. To solve this, we gave those letters a lower probability of being chosen for the captcha phrase, based on the model's confusion matrix. Even if they were chosen, they would not skew the reading too much.
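The confusion-weighted selection described above could look like the following sketch, under the assumption that each letter has a measured confusion rate; the function names and the simple `1 - rate` weighting are our own illustration.

```python
import random

def captcha_weights(confusion_rates):
    # `confusion_rates` maps each letter to how often the model confuses
    # it with another writer's version (0 = never, 1 = always). Letters
    # confused more often get proportionally less weight.
    return {ch: 1.0 - rate for ch, rate in confusion_rates.items()}

def sample_captcha(confusion_rates, length=6, rng=random):
    # Draw captcha letters with probability proportional to their weight,
    # so ambiguous letters appear rarely but are not excluded outright.
    weights = captcha_weights(confusion_rates)
    letters = list(weights)
    return "".join(
        rng.choices(letters, weights=[weights[c] for c in letters], k=length)
    )
```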
Accomplishments that we're proud of
We managed to create a fully functioning program while applying some of the things we learned in the machine learning course we are taking this semester.
What we learned
We learned that an effective neural network needs a lot of data points, so we spent the weekend handwriting over 3,000 letters as training data.
What's next for H-Auth
We hope to make our handwriting recognition more robust, much like how signature recognition works. Of course this will take many more data points.