Smart-home systems such as Amazon Alexa have revolutionized domestic life. However, as with any new technology, there are gaps in the services they provide. We believe the lack of security and personalization in voice recognition systems is a missed opportunity. Keyper seeks to change that.

What it does

Keyper is a system that adds a new layer of security to voice recognition technologies. When prompted by a voice command, Keyper sends a trigger to the Nvidia Jetson TX1 to take a picture of the user and run the image through Microsoft's Cognitive Services APIs. This lets the system apply facial recognition to identify the user. If the user has the proper clearance, the safe door opens and a personalized, user-friendly comment is spoken.

How we built it

To build Keyper, we used the Nvidia Jetson TX1 as the board that runs our recognition code. We started by building an Alexa skill to trigger the Jetson's photo capture. After the Alexa skill, we also built an iOS app so we could interface with the Jetson's server quickly. On board the Jetson, we run a Python server that accepts POST requests. Because we had difficulties getting Alexa to send requests to the Jetson, the iOS app became the main interface and trigger.
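The Jetson-side server can be sketched with nothing but the standard library. This is a minimal illustration, not our exact code: the `/unlock`-style endpoint name and the `handle_unlock` helper are hypothetical stand-ins for the capture-and-recognize pipeline.

```python
# Minimal sketch of a Python server on the Jetson that accepts POST
# requests (assumption: handle_unlock is a hypothetical stand-in for
# the real camera-capture / recognition trigger).
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def handle_unlock(payload):
    # The real system would trigger the camera capture and the Face API
    # pipeline here; this stub just echoes an acknowledgement.
    return {"status": "capture_triggered",
            "source": payload.get("source", "unknown")}

class KeyperHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_unlock(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet during the demo

def run(port=8080):
    # Listen on all interfaces so the iOS app can reach the Jetson.
    HTTPServer(("0.0.0.0", port), KeyperHandler).serve_forever()
```

The iOS app then only needs to issue a plain HTTP POST to the Jetson's address on the local network.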

When the Jetson receives a POST request, we capture five images at different exposures to identify the person. We first run the five images through the Microsoft Face API to confirm that they actually contain faces. We then run the images through the Face API again, against our preset image database, to determine who is in the picture and with what confidence. Alongside identification, we also send the image to the Microsoft Emotion API so we can personalize the response when unlocking the door: depending on the detected emotion, the user hears a different greeting. After the computer vision analysis is done, we create a personalized response and return it in the reply to the iOS app's POST request, and the app speaks it out loud to the user.
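The decision logic after the API calls can be sketched as follows. This is illustrative only: the `0.6` confidence threshold and the response templates are assumptions, and the `(name, confidence)` tuples stand in for parsed Face API identification results.

```python
# Sketch of the post-API decision logic (assumptions: the threshold and
# greeting templates are illustrative; inputs stand in for parsed
# Microsoft Face / Emotion API results).

CONFIDENCE_THRESHOLD = 0.6  # hypothetical cutoff for accepting a match

RESPONSES = {
    "happiness": "Welcome back, {name}! Great to see you smiling.",
    "sadness": "Welcome back, {name}. Hope your day gets better.",
    "neutral": "Welcome back, {name}.",
}

def best_identification(api_results):
    """Pick the highest-confidence identification across the five exposures.

    api_results: list of (name, confidence) tuples, or None for frames
    in which the Face API found no face. Returns (name, confidence), or
    None if nobody clears the threshold.
    """
    candidates = [r for r in api_results if r is not None]
    if not candidates:
        return None
    name, conf = max(candidates, key=lambda r: r[1])
    return (name, conf) if conf >= CONFIDENCE_THRESHOLD else None

def personalized_response(identification, emotion):
    """Build the spoken reply from the identity and detected emotion."""
    if identification is None:
        return "Sorry, I don't recognize you."
    name, _ = identification
    template = RESPONSES.get(emotion, RESPONSES["neutral"])
    return template.format(name=name)
```

Taking the best match across all five exposures is what makes the multi-exposure capture useful: a frame that is too dark or too bright simply loses to a better-exposed one.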

If the person in the image matches someone on our whitelist, we open the safe box and light it up green. To open the safe box, we pair the Nvidia Jetson with an Arduino, because the Jetson cannot supply enough power to drive the servos.
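The Jetson-to-Arduino handoff reduces to sending a tiny command over serial. The sketch below assumes a hypothetical one-byte protocol (`b"O"` opens, `b"C"` stays closed); on the Jetson, `port` would be a pyserial `serial.Serial` instance, but any object with a `write()` method works.

```python
# Sketch of the Jetson -> Arduino handoff (assumption: a one-byte
# serial protocol, b"O" to open the safe and b"C" to keep it closed;
# the Arduino drives the servos because the Jetson cannot source
# enough current for them).

OPEN_CMD = b"O"
CLOSE_CMD = b"C"

def send_command(port, command):
    """Write a one-byte command to the Arduino over a serial-like port.

    `port` is anything with a write() method, e.g. a pyserial
    serial.Serial("/dev/ttyACM0", 9600) instance on the Jetson.
    """
    if command not in (OPEN_CMD, CLOSE_CMD):
        raise ValueError(f"unknown command: {command!r}")
    port.write(command)

def unlock_safe(port, authorized):
    """Open the safe only for whitelisted users; return the byte sent."""
    command = OPEN_CMD if authorized else CLOSE_CMD
    send_command(port, command)
    return command
```

On the Arduino side, a matching sketch would read one byte in `loop()` and sweep the servos (and switch the LED to green) when it sees `'O'`.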

Challenges we ran into

We had some difficulties setting up the server running on the Jetson. There were a lot of issues with MIT's network blocking computer-to-computer connections, but we were able to work around them.

Accomplishments that we're proud of

We are most proud that Keyper merges a wide variety of technologies to achieve one common goal. One of our most memorable MakeMIT moments was bringing Nvidia and Amazon Alexa mentors together to discuss the pros and cons of integrating the two products. Technology should not be created in a vacuum, and we believe Keyper does a great job of showing that the greatest successes come from merging different products.

What we learned

All of us came into the makeathon with a wide range of skills. With three of us stronger on the software development side, it was intriguing to learn how to program GPIO pins and the Arduino. In addition, we learned how to set up a server, create Amazon Alexa skills, and design iOS applications.

What's next for Keyper

In the future, we would like to add more personalization to Keyper's comments as well as broaden the technology's range of skills.
