We were looking for a low-cost, secure cash box for our home and business. Most low-cost lockers are key-driven, and high-end, high-security cash boxes (such as fingerprint-secured ones) were too expensive and out of our reach. So we started working with free APIs that could offer cutting-edge biometric security, and began building our Biometric Cash Box.

What it does

Biometric Cashbox provides a digital lock using voice recognition and face recognition. A user obtains a cash box and registers his credentials and voice with Knurld, and his face impression with Microsoft's Oxford AI APIs.

When he wants to unlock the cash box, all he has to do is press a button, speak the three phrases Knurld asks of him, and smile for a photograph. Once his face is verified by Oxford AI and his voice is verified by Knurld, the lock opens automatically.

Forging a face biometric is relatively simple: an impostor can obtain a false verification just by presenting a static photograph of the actual user. However, since the voice must be produced in real time (the phrases are randomized), the system offers a very high level of security through inherent liveness detection.

How we built it

We first worked with Knurld's APIs and understood the workflow of the entire process. We played around with raw data in the interactive API tool on Knurld's website. We had to develop two distinct sets of APIs; the first runs on the PC, for setting up and syncing the cash box. So we built our C#-based SDK for the Knurld API first, creating individual methods for making the different HTTP calls and getting and formatting the responses. Because the process is quite long and complex, using only async methods would not give the user enough sense of the estimated time left for a process to complete.

So we first manually measured the average time needed for every operation (audio analysis, verification, enrollment) and its correlation with the current PC's bandwidth, and incorporated that into the SDK. When the user initiates a process, he gets feedback about the current state as well as the estimated time left in that state.
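The estimate described above can be sketched as a small pure function. The baseline durations, the reference bandwidth, and the linear scaling are illustrative assumptions, not the numbers actually measured for the SDK:

```javascript
// Average duration (ms) assumed for each step at a reference bandwidth.
const BASELINE_MS = {
  audioAnalysis: 4000,
  enrollment: 9000,
  verification: 6000,
};
const REFERENCE_KBPS = 1000; // bandwidth at which the baselines were taken

// Scale the step's baseline by the ratio of reference to current bandwidth,
// then subtract the time already spent in that step.
function estimateTimeLeftMs(step, elapsedMs, currentKbps) {
  const scaled = BASELINE_MS[step] * (REFERENCE_KBPS / currentKbps);
  return Math.max(0, Math.round(scaled - elapsedMs));
}
```

A slower connection stretches the estimate, a faster one shrinks it, and the estimate never goes negative once a step overruns.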

Once the C# SDK was ready, we imported the DLL and built a Windows Forms solution to implement voice recognition. One of the problems with voice recognition is that when you use the native Win32 APIs for audio recording, the app never knows the right audio level for a good analysis. So we created a method to obtain the system audio level, and built a feedback-based system that fixes the audio level automatically to get the right volume and channel settings for Knurld's services. Several mail exchanges with Knurld's engineers helped us understand their internal process, and we incorporated that into our app. So the challenge was not only to create methods for calling the endpoints, but also to maximize the accuracy and efficiency of the voice recognition process.
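The feedback loop can be sketched like this. The target level, tolerance, and proportional step are assumptions of ours; reading the real mixer level is platform-specific (we used Win32 calls on the PC), so here the measurement is abstracted behind a callback:

```javascript
const TARGET_LEVEL = 0.6; // normalized 0..1 level assumed to analyze well
const TOLERANCE = 0.05;

// Repeatedly measure the recorded level at the current gain and apply a
// proportional correction until it is within tolerance of the target.
function autoLevel(measureLevel, initialGain, maxIters = 20) {
  let gain = initialGain;
  for (let i = 0; i < maxIters; i++) {
    const level = measureLevel(gain);
    const error = TARGET_LEVEL - level;
    if (Math.abs(error) <= TOLERANCE) return gain;
    gain = Math.min(1, Math.max(0, gain + 0.5 * error)); // proportional step
  }
  return gain; // best effort if it never converged
}
```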

We wanted a dual-biometric system. Fingerprints are generally the standard biometric trait for current lockers and cash boxes, but we wanted something to pair with voice, and we realized the face is a great biometric. Microsoft's Oxford face AI was a good pick out of the several cloud-based face services we tried. However, that service has its own user management, just as Knurld does, which would mean maintaining the same account in multiple places. We wanted to avoid that. So we created a workflow in which users keep their account only with Knurld and simply use the face verification service: a simple flat-file data store links the Knurld and Oxford services at the code level, instead of registering the same account with multiple service providers.

The next big challenge was creating URLs for the data (audio or face images). Since Knurld's main processes (enrollment and verification) require an audio URL and don't support multipart data, we had to integrate the Dropbox API as a storage service.

Once this process was working in the PC app, we had to get the same workflow running on our IoT device (Intel Edison).

We wrote a complete Node.js solution covering the face API integration, Knurld's voice API integration, and the Dropbox integration.

The microphone on a PC and the one used in the IoT device differ in quality, so we had to write core audio processing on the IoT device as well to get the audio right. We used an RGB LCD that changes color from red to green in steps, just like a progress bar, to give the user visual feedback about the process.
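The red-to-green LCD feedback can be sketched as a simple linear interpolation over the progress fraction; the exact step curve used on the device may differ:

```javascript
// Map a progress fraction (0..1) to an RGB color: full red at the start,
// full green at the end, blending through the middle.
function progressColor(fraction) {
  const f = Math.min(1, Math.max(0, fraction)); // clamp to [0, 1]
  return {
    r: Math.round(255 * (1 - f)),
    g: Math.round(255 * f),
    b: 0,
  };
}
```

On the Edison, each state change in the workflow would advance `fraction` and push the resulting color to the RGB LCD.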

Finally, we wrote an MQTT-based handshake protocol so the PC and IoT devices could communicate.
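A handshake like this can be modeled as a small state machine, kept pure so it is easy to test; the message types and fields below are our own assumptions, and in the real app these messages would travel over an MQTT broker (e.g. via the `mqtt` npm package):

```javascript
const HandshakeStates = { IDLE: 'idle', WAITING_ACK: 'waiting_ack', PAIRED: 'paired' };

// Feed the current state and an incoming message; get the next state and
// the reply to publish (or null if nothing should be sent).
function handshakeStep(state, msg) {
  switch (state) {
    case HandshakeStates.IDLE:
      if (msg.type === 'hello') {
        return {
          state: HandshakeStates.WAITING_ACK,
          reply: { type: 'hello_ack', deviceId: msg.deviceId },
        };
      }
      break;
    case HandshakeStates.WAITING_ACK:
      if (msg.type === 'ack_confirm') {
        return { state: HandshakeStates.PAIRED, reply: null };
      }
      break;
  }
  return { state, reply: null }; // ignore unexpected messages
}
```

Keeping the protocol logic separate from the transport means the same function runs unchanged on both the PC and the Edison sides.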

Challenges we ran into

1) Integrating Knurld's API was extremely challenging due to its complex process flow: a voice sample has to pass through several APIs, exchanging data between them, to complete both verification and enrollment.
2) It was immensely difficult to design a hybrid biometric that utilizes two different sets of APIs from different service providers.
3) Because this is an IoT app, and data exchange through REST APIs is a high-latency operation, it was difficult to keep the user informed of the current state of the process and give him appropriate instructions.

Accomplishments that we're proud of

We are proud of building a multi-biometric system purely at the device level. We are also proud of building our own APIs to link cloud storage with two different sets of biometric operations, and of achieving hardware-level session sharing among peripherals (mic and webcam).

What we learned

It was a great learning experience indeed. We had to play with tens of npm libraries to get the modules working, but finally settled on restler, an npm library for calling REST APIs. We learned that instead of using individual libraries for different services, we can use one single REST API module and build our own SDK for any cloud-based solution. The Dropbox integration was also one of the cool things to learn: since most cloud-based analysis services accept URL-based data submission, integrating Dropbox can serve a large number of use cases. Abstracting the API calls and then integrating them at the workflow level was awesome as well. The IoT biometric work was a significant step forward, as it required not only integrating different services but also working with raw signals.

What's next for Biometric Cash Box

We would really like to evolve this product and take it to market.
