Facial recognition is a biometric solution that measures unique characteristics of a person's face. Applications available today include flight check-in, tagging friends and family members in photos, and "tailored" advertising.
Retailers are using facial recognition to collect data about customers as they shop in stores, according to an IT company with insight into the space. The data collected include visitor counts along with age, ethnicity, and gender; the goal is to understand foot traffic better and serve more relevant offers to those customers.
Facial-recognition software meant to weed out travelers with fake passports will be rolled out to all international airports in the U.S. as part of a plan to crack down on identity fraud among visitors from countries with visa waiver agreements, according to Customs and Border Protection.
Companies such as FaceFirst are creating a personalized planet through facial recognition technology. Their solution can detect and deter threats in real time, transform team performance, and strengthen customer relationships.
But all these solutions are very expensive and need a big team to implement. AWS DeepLens makes it easy and convenient to run deep learning algorithms on the device.
What it does
We have implemented two use cases for a face identification solution with DeepLens. The first use case is Customer Identification: customers' profile pictures stored in a system (e.g., a library, grocery store, or bank) are matched against DeepLens face detection results in real time as a customer enters a branch. Upon identification by the DeepLens algorithm, an API call fetches the customer's account details and surfaces the information in a desktop web application. Here is a demo for customer identification.
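The matching step above can be sketched as follows. This is a minimal illustration, not our production code: it assumes each profile picture has already been reduced to a 128-dimensional face encoding (the representation dlib produces), and compares a detected encoding against the stored ones by Euclidean distance, using dlib's commonly recommended cutoff of about 0.6. The function and profile names are hypothetical.

```python
import numpy as np

def match_customer(detected_encoding, profile_encodings, threshold=0.6):
    """Return the profile id whose stored 128-d face encoding is closest
    to the detected encoding, or None if nothing is within the threshold.
    A distance of ~0.6 is dlib's usual cutoff for "same person"."""
    best_id, best_dist = None, threshold
    for profile_id, enc in profile_encodings.items():
        dist = np.linalg.norm(np.asarray(enc) - np.asarray(detected_encoding))
        if dist < best_dist:
            best_id, best_dist = profile_id, dist
    return best_id
```

In the real pipeline the detected encoding would come from the DeepLens camera frame and the profile encodings from the back-end database; a successful match triggers the API call that fetches the customer's account details.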
The second use case is Amber Alert. We developed a mobile application that takes a picture of a missing (or wanted) person and uploads it to a storage back end (AWS S3). As soon as the image is uploaded, DeepLens incorporates the new picture into its face identification algorithm and looks for the person in the real-time video stream. When the missing person is detected, DeepLens sends a notification via email or phone, whichever is subscribed, to notify the authorities. Here is a demo for the Amber Alert use case.
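A minimal sketch of the S3-triggered side of this flow, under the assumption that an AWS Lambda is subscribed to ObjectCreated events on the upload bucket. The event parsing follows the standard S3 notification shape; the handler body only shows where the real work (computing the new face encoding and forwarding it to the DeepLens lambda) would go, and the bucket/key names in the usage are illustrative.

```python
def extract_upload(event):
    """Parse an S3 ObjectCreated notification and return the (bucket, key)
    of the newly uploaded missing-person image."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return bucket, key

def handler(event, context=None):
    bucket, key = extract_upload(event)
    # Hypothetical next steps (not shown): download the image from S3,
    # compute its 128-d face encoding, and publish the encoding to the
    # IoT topic the DeepLens face-identification lambda listens on.
    return {"bucket": bucket, "key": key}
```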
How I built it
The face recognition model is built using dlib's state-of-the-art face detection libraries and has an accuracy of 99.38% on the Labeled Faces in the Wild benchmark. A Greengrass lambda function runs the face detection algorithm on the DeepLens device and publishes an event to an IoT topic as soon as it identifies a face. Another lambda function listens to this topic and fetches the customer's info via an API call to the back end's database. After the data are retrieved, this lambda publishes an event to SNS with the customer's information as the payload. A Node.js web server (running on EC2) accepts the POST call from SNS and broadcasts a message via WebSocket to all clients (web or mobile) registered with it. This architecture updates the client application in real time as soon as a new customer steps into the branch.
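The device-side half of this pipeline can be sketched as below. The payload fields, topic name, and camera id are assumptions for illustration; the actual publish call on the device would go through the Greengrass Core SDK, shown here only as a comment so the payload logic stays runnable off-device.

```python
import json
import time

def build_detection_event(person_id, confidence, camera_id="deeplens-lobby"):
    """Assemble the JSON payload the on-device Greengrass lambda would
    publish to the IoT topic when a face is identified.
    Field names here are illustrative, not a fixed schema."""
    return json.dumps({
        "personId": person_id,
        "confidence": round(confidence, 3),
        "cameraId": camera_id,
        "timestamp": int(time.time()),
    })

# On the DeepLens, this payload would be published with the Greengrass SDK:
#   client = greengrasssdk.client("iot-data")
#   client.publish(topic="oneeye/detections", payload=build_detection_event(...))
# The cloud-side lambda subscribed to that topic then looks up the customer
# and forwards the result to SNS, which POSTs to the Node.js WebSocket server.
```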
Challenges I ran into
The first challenge was setting up the OpenCV libraries on our DeepLens and testing them. Early on, we had some issues debugging the Greengrass lambda function code and redeploying it without using the AWS console. Another challenge was getting the project working end to end across all its components (ML code, data storage, APIs, mobile application, web application, etc.) in the very short amount of time we had, since all of us work full-time.
Accomplishments that I'm proud of
OneEye identifies faces with very high accuracy and dynamically adds new faces to the database. It was an exciting experience to build and run a complex deep learning model on a device, and we are proud to present a fully functioning solution to a very complicated problem.
What I learned
Working with DeepLens is both fun and educational. I gained a lot of knowledge and experience with Greengrass and AWS IoT by implementing these use cases. I also learned that technology is growing so fast that projects and hackathons like this help us keep up to date.
What's next for OneEye
Customer identification and missing-person alerts are existing real-world scenarios that urgently need modernization. This DeepLens project is a start toward disrupting the current ways of handling them. The next step for OneEye is finding sponsors to make it available for these use cases.