Inspiration

Every single day on campus, I have to fumble through my cards to find and swipe my ID when entering buildings. I have to stand in lines and deal with congestion as hordes of students try to scan their cards to get inside. It'd be nice if I could just walk in normally while still getting the protection this system provides.

What it does

My solution is a smart camera I developed that can identify individuals through live footage. It uses the back camera of my phone to scan for faces, and once it recognizes mine, it "checks" me in, noting the date and time. This could also be a great enterprise tool, since the camera could let employees clock in automatically. I built in a push notification system that sends information to employees' phones as the camera notices them entering or leaving the building ("Welcome, Adam. I'm booting up your desktop now" or "Leaving already? Don't forget there's a meeting at 5 pm").
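
As a rough sketch of how the notification piece could work, here's a minimal local-notification example using Apple's UserNotifications framework. The function name and message strings are illustrative placeholders, and a production version would push remotely via APNs rather than locally:

```swift
import Foundation
import UserNotifications

// Hypothetical helper: fire a notification when the camera recognizes a
// known person entering or leaving. (Assumes notification permission was
// already granted via requestAuthorization.)
func notifyCheckIn(name: String, entering: Bool) {
    let content = UNMutableNotificationContent()
    content.title = entering ? "Welcome, \(name)." : "Leaving already?"
    content.body = entering
        ? "I'm booting up your desktop now."
        : "Don't forget there's a meeting at 5 pm."

    // A nil trigger delivers the notification immediately.
    let request = UNNotificationRequest(identifier: UUID().uuidString,
                                        content: content,
                                        trigger: nil)
    UNUserNotificationCenter.current().add(request)
}
```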

Challenges I ran into

The biggest challenge was getting the model to accurately identify my face; I spent so much time training it that I wasn't able to implement as many features as I wanted, unfortunately. Different lighting conditions greatly affected the software's accuracy, while increasing the number of pictures I trained the model on seemed to have little effect.
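
For context, here is a minimal sketch of the kind of training run involved, assuming a Create ML image classifier (the usual route to an on-device Core ML model; the paths are placeholders). Create ML can also apply augmentations such as exposure shifts during training, which is one way to attack the lighting problem:

```swift
import CreateML
import Foundation

// Sketch of a Create ML training run (macOS playground). Paths are
// placeholders; the real data set was hundreds of labeled photos.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingImages")

// Each labeled subfolder ("adam", "other") becomes a class.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Export a .mlmodel file the iOS app can bundle and query through Vision.
try classifier.write(to: URL(fileURLWithPath: "/path/to/Zuum.mlmodel"))
```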

How I built it

Using my iPhone as a prototype camera, I trained a machine learning model on hundreds of pictures of myself and used it to power an app I built. The app uses Apple's Vision and Core ML APIs to perform the image recognition entirely on-device.
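
In rough outline, the recognition step looks like the sketch below. `Zuum` stands in for the Xcode-generated class of the trained Core ML model, and the confidence threshold is illustrative:

```swift
import CoreVideo
import Foundation
import Vision

// Rough outline of on-device recognition. "Zuum" is a placeholder for the
// Xcode-generated class of the trained Core ML model.
func classify(frame: CVPixelBuffer) throws {
    let model = try VNCoreMLModel(for: Zuum().model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first,
              top.confidence > 0.9 else { return }  // threshold is illustrative
        // Recognized: log the check-in with a timestamp.
        print("Checked in \(top.identifier) at \(Date())")
    }

    // Vision scales and converts the camera frame to fit the model's input.
    try VNImageRequestHandler(cvPixelBuffer: frame, options: [:]).perform([request])
}
```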

Accomplishments that I'm proud of

I'm very proud that I was able to complete the app all on my own! I only slept two hours total this week, but being in this environment among other coders is exhilarating; I'm proud I persevered and finished it. A lot of obstacles arose during development, and I'm proud of myself for finding solutions every step of the way.

What I learned

I learned a lot about how training an ML model works, as well as the benefits and limitations of using an iPhone app as the platform for image recognition software.

What's next for Zuum

I'm going to continue working on Zuum, but with a different approach. This idea will take longer, but it more fully uses the scope and power that visual recognition and machine learning offer us programmers. I'm developing Zuum into a more advanced facial recognition system: apply a similar, more powerful model to a live camera feed, crop out pictures of each individual's face using saliency software (a sketch of the cropping step follows below), and then run reverse Google image searches on that face.

I believe that with Python scripts I could then find and scrape the social media accounts associated with that face, gathering data such as name, birthday, age, and location. For example, I could scrape all of an individual's Instagram photos and run image analysis on each of them to surface things like hobbies or frequented locations. Sentiment analysis of tweets could give insight into someone's personality. And once a name is gathered, background checks could reveal a person's criminal history.

All of this data could be instantly assembled into a comprehensive profile that follows a person around wherever the cameras are. It's quite a dystopian idea, but the number of use cases, ranging from retail shopping to school safety, seems staggering. It's a long-term project I'm pursuing.
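
Here's what just the face-cropping step might look like, with Vision's face detector standing in for the saliency approach mentioned above; everything downstream of this (the search and scraping) is still hypothetical:

```swift
import CoreImage
import Vision

// Sketch: detect faces in a frame and return one cropped image per face.
// Uses Vision's face detector in place of the saliency approach above.
func cropFaces(in image: CIImage) throws -> [CIImage] {
    let request = VNDetectFaceRectanglesRequest()
    try VNImageRequestHandler(ciImage: image, options: [:]).perform([request])

    let width = Int(image.extent.width)
    let height = Int(image.extent.height)
    return (request.results ?? []).map { face in
        // Bounding boxes come back normalized (0-1); convert to pixels.
        let box = VNImageRectForNormalizedRect(face.boundingBox, width, height)
        return image.cropped(to: box)
    }
}
```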

Built With

Core ML, Vision, Swift