More and more often, both first responders and the people they are trying to help are being hurt in situations that are largely preventable. Stigma against police officers is on the rise, and many now view them as adversaries rather than as upholders of their conventional 'protect and respect' vision. Whatever your opinion on the matter, one fact is indisputable: this mutual distrust is a direct cause of violence on both sides.

The inspiration for this system is the preservation of life and the reduction of violence. The recent rise of police cameras, squad car networks, and standardization across the force makes an ideal platform for deploying this type of technology. Malpractice and injustice fuel much of the stigma, so the prospect that our algorithm could prevent some legitimate cases, and create an environment where people with malicious intent become accountable, was also extremely appealing. This, of course, on top of protecting our first responders on the front line.

What it does

Our app allows a first responder to, in real time, view points of information about a subject based on facial recognition data and make educated decisions in the field through:

- Positional data in the form of real-time AR (subject tracking and tagging).

- Information defined in a custom database about the subject (e.g. past crimes, public safety concerns, mental health concerns, and standard identification information).

The app also keeps scalability in mind: any client can register faces and tag them in the user database via the app's 'capture' function.
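As a rough illustration of how the 'capture' registration flow could work, the sketch below stores a face encoding (in the real app, a 128-dimensional vector produced by face_recognition.face_encodings) against a name and its database notes. All function and field names here are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch of a face database with a 'capture'-style appender.
# In the real app the encoding would come from face_recognition;
# here we use a placeholder 128-dimensional list.

face_db = {}  # name -> {"encoding": [...], "notes": [...]}

def register_face(name, encoding, notes=None):
    """Append a captured face encoding and its metadata to the database."""
    face_db[name] = {"encoding": list(encoding), "notes": list(notes or [])}

def lookup(name):
    """Return the stored record for a tagged subject, if any."""
    return face_db.get(name)

register_face("subject_a", [0.1] * 128, notes=["public safety concern"])
record = lookup("subject_a")
```

The dictionary keyed by name keeps lookups constant-time as the database grows, which matters when every video frame may trigger a lookup.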

How we built it

Our team of four (Derek Xu, Kevin Wang, Daniel Ye, and Rodolfo Funes) built the system at the 2018 Hack The Hammer Hackathon in Hamilton. We used Python 3 along with several computer vision and deep learning modules (face_recognition, pygame, cv2, etc.) to perform real-time facial recognition and surface usable information on a UI, ready for the user to analyze.

This was all accompanied by our website made with HTML5 and CSS3.

Challenges we ran into

Throughout the Hackathon, we ran into various challenges. Thankfully, we were able to work around or solve most of them.

First of all, after coming late to the hardware lab and being unable to check out a webcam, we soon discovered that the module we had planned to use for the webcam feed did not support the pixel format of our alternative camera (a Nexus 5X running DroidCamX over IP). We considered converting each frame into an RGB colorspace or a numpy array, but realized the computational cost would be far too great and would hinder the usability of our system. After consulting with some reps from HumanCode, we were exposed to the idea of mounting a RAM disk through the Ubuntu CLI and decided to go with that. Through rigorous testing and algorithm modifications, we finally achieved frame-by-frame analysis of the video feed.
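The workaround above can be sketched as follows: the camera app writes frames into a RAM-backed directory (e.g. a tmpfs mount such as /mnt/ramdisk on Ubuntu, a path we assume for illustration), and the analyzer repeatedly grabs the newest file instead of opening the camera device directly. The demo below stands in a temporary directory for the RAM disk and uses placeholder bytes rather than real JPEG data.

```python
# Sketch: poll a RAM-disk directory for the most recently written frame.
import os
import tempfile

def newest_frame(frame_dir):
    """Return the path of the most recently written frame file, or None."""
    frames = [os.path.join(frame_dir, f) for f in os.listdir(frame_dir)]
    frames = [f for f in frames if os.path.isfile(f)]
    return max(frames, key=os.path.getmtime) if frames else None

# Demonstrate with a temporary directory standing in for the ramdisk.
with tempfile.TemporaryDirectory() as ramdisk:
    for i, name in enumerate(["frame_001.jpg", "frame_002.jpg"]):
        path = os.path.join(ramdisk, name)
        with open(path, "wb") as f:
            f.write(b"\xff\xd8")   # placeholder bytes, not a real JPEG
        os.utime(path, (i, i))     # force distinct modification times
    latest = os.path.basename(newest_frame(ramdisk))
```

Because the directory lives in RAM, each poll avoids disk I/O, which is what made per-frame analysis viable in our setup.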

Second of all, we realized that the deep-learned face recognition may have learned from parameters other than facial features. As the Google machine learning workshop hinted, it is challenging during training to isolate specific variables (such as lighting and background). As a result, we saw abnormal face recognition results; applying what we learned from the workshop, we varied the conditions of the sample face pictures fed to the model while repeatedly testing our comparison tolerances (from 0.6 down to 0.105) to find a threshold with the best consistent accuracy. We eventually settled on an optimal configuration: at a tolerance of 0.105, a candidate fails if more than one False appears in a face feature comparison array of over 30 samples.
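A hedged sketch of that acceptance rule: a candidate face is compared against roughly 30 stored sample encodings at a strict tolerance (0.105), and the match fails if more than one comparison comes back False. In the real pipeline the per-sample distances would come from face_recognition.face_distance; the function names and example distances below are illustrative.

```python
# Sketch of the tolerance/threshold rule described above.

def compare_faces(distances, tolerance=0.105):
    """Turn per-sample face distances into a boolean comparison array."""
    return [d <= tolerance for d in distances]

def accept(distances, tolerance=0.105, max_false=1):
    """Accept the match unless more than `max_false` comparisons fail."""
    results = compare_faces(distances, tolerance)
    return results.count(False) <= max_false

good = [0.08] * 29 + [0.20]       # one outlier among 30 samples
bad = [0.08] * 27 + [0.20] * 3    # three failing samples
```

Lowering the tolerance from the library default of 0.6 to 0.105 makes each individual comparison much stricter, while allowing one False keeps a single noisy sample from rejecting a true match.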

Accomplishments that we're proud of

We're definitely proud of the consistent accuracy the program eventually achieved after constant adjustment. After surveying almost everyone at the venue, twice, we had only one false positive. This was thanks to a majority-positive system that rules out single-frame false positives and determines the most likely match before displaying information.
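The majority-positive idea can be sketched as a vote over a sliding window of recent per-frame matches: an identity is only displayed once it wins a strict majority of the window, so a single-frame misfire never surfaces. The window size and names below are assumptions for illustration.

```python
# Sketch of majority voting over recent frame-level match results.
from collections import Counter, deque

WINDOW = 7  # frames considered per decision (illustrative)

def best_match(recent_matches):
    """Return the identity matched in a strict majority of frames, else None."""
    counts = Counter(m for m in recent_matches if m is not None)
    if not counts:
        return None
    name, votes = counts.most_common(1)[0]
    return name if votes > len(recent_matches) / 2 else None

frames = deque(maxlen=WINDOW)
for match in ["alice", None, "alice", "bob", "alice", "alice", "alice"]:
    frames.append(match)  # one per-frame recognition result
```

Here "alice" wins 5 of 7 frames and would be displayed, while the single "bob" frame is suppressed as a likely false positive.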

In addition to this, we are proud of the user experience we built around the API. We made the database scalable through the addition of a database appender. Beyond our normal AR tracking, we also implemented a non-volatile 'sighted' list, which allows the user to work through a queue of sighted persons of interest after they leave the frame.

We are also proud of how far the system's performance came over the course of the project. More efficient code, methods, and placement of operations made usable real-time performance possible.

What we learned

This was our first project to span multiple APIs, forcing us to work around their limitations and make them cooperate to solve our problem. Each API offered an invaluable contribution but became difficult when interacting with the others. We investigated a plethora of solutions, including colorspace conversion, colour matrices, and new methods of array manipulation. We also learned that manipulating resources accessible in your runtime environment can help when all else fails, as in the case where a RAM disk was needed.

What's next for

Our aspiration is to push the program into real-world data situations in parallel with a development journey to further improve its functionality. In a commonly used environment, this type of technology can be critical to eliminating legitimate risks to first responders, and we feel it can eventually expand to other fields and data with time, scale, and relevance (e.g. GPS first responder sighting, a central database, etc.). We've already open-sourced the code on GitHub under an MIT license, and we would be delighted if you helped us grow this for the better.

Built With

Python 3 (face_recognition, cv2, pygame), HTML5, CSS3
