It's an infamous phenomenon of modern society - camera galleries filled with near-identical group shots and self-portraits. Between burst-mode shutters and the general tendency to take multiple photos of a gathered group out of fear that any one image may be cursed with a blink, a misdirected gaze, or perhaps even an ill-conceived countenance, our team saw a potential tool to save people some time and offer new ways of thinking about how they use their cameras.

What it does

This app takes either a series of image URLs or a Facebook album ID, and runs the images through Azure's Face cognitive service to determine the strength of each smile and general photo quality. The app then returns the same series of images, sorted from best to worst according to Microsoft's measures of blurriness, happiness, and size of smile.
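The sorting step can be sketched roughly as follows. This is an illustrative reconstruction, not the app's actual code: `scorePhoto` and `sortPhotos` are hypothetical names, and the simple "smile minus blur" weighting is an assumption. The attribute shapes match what Azure Face detection returns when `returnFaceAttributes=smile,blur` is requested (each face carries `faceAttributes.smile` and `faceAttributes.blur.value`, both in [0, 1]).

```javascript
// Average a list of numbers (used for multi-face photos).
const avg = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Hypothetical quality score: average smile, penalized by average blur.
// Higher is better. Returns 0 for photos where no faces were detected.
function scorePhoto(faces) {
  if (!faces.length) return 0;
  const smile = avg(faces.map((f) => f.faceAttributes.smile));
  const blur = avg(faces.map((f) => f.faceAttributes.blur.value));
  return smile - blur;
}

// Sort photos from "best" to "worst": highest score first.
function sortPhotos(photos) {
  return [...photos].sort((a, b) => scorePhoto(b.faces) - scorePhoto(a.faces));
}
```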

How we built it

We built the app on a Node.js server and immediately set about learning how to prepare data for Azure's cognitive services. The server uses Express to deploy the app quickly, and we leaned on Postman throughout to troubleshoot API calls. We then hosted the server on Google Cloud Platform to deploy the dynamic site, which uses Facebook's Graph API to collect a user's images once an album ID is entered. The front end itself takes its design from Materialize.
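The album-collection step might look roughly like this sketch, assuming Node 18+ (for the global `fetch`). The function names and the `v12.0` Graph API version are illustrative choices; the `/photos` edge with a `fields=images` query is the real Graph API shape for reading an album's photo URLs.

```javascript
// Build the Graph API URL for an album's photos, requesting image URLs.
function buildAlbumPhotosUrl(albumId, accessToken) {
  const url = new URL(`https://graph.facebook.com/v12.0/${albumId}/photos`);
  url.searchParams.set("fields", "images");
  url.searchParams.set("access_token", accessToken);
  return url.toString();
}

// Fetch the album and keep one rendition of each photo. The `images`
// field is a list of renditions; taking images[0] assumes the largest
// rendition is listed first.
async function fetchAlbumImageUrls(albumId, accessToken) {
  const res = await fetch(buildAlbumPhotosUrl(albumId, accessToken));
  if (!res.ok) throw new Error(`Graph API error: ${res.status}`);
  const { data } = await res.json();
  return data.map((photo) => photo.images[0].source);
}
```

The resulting list of public image URLs is exactly what Azure's Face service needs as input.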

Challenges we ran into

One of our main sources of troubleshooting was Azure's pickiness about image URLs: for the cognitive services to accept an image, it must be a URL of an image already hosted on the public internet. We spent a while thinking about how to work around this, since Google Photos URLs were not reliably returning data from the Azure service, and ultimately switched to Facebook albums. Additionally, we never quite figured out which features correlate best with picture quality, and instead arbitrarily chose blurriness and happiness as stand-ins.
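A cheap local precheck can catch the most obvious violations of the "publicly hosted URL" constraint before a request ever reaches Azure. This is a hypothetical helper, not the app's real code, and it is deliberately conservative: it only rejects clearly unusable URLs (unparseable strings, non-HTTP schemes, localhost); the real test remains whether Azure itself can fetch the URL.

```javascript
// Reject URLs Azure definitely cannot fetch: not absolute, not HTTP(S),
// or pointing at the local machine. (Hypothetical helper for illustration.)
function isLikelyUsableImageUrl(raw) {
  let url;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable absolute URL
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  if (url.hostname === "localhost" || url.hostname === "127.0.0.1") return false;
  return true;
}
```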

Accomplishments that we're proud of

Getting the album to display user information was amazing, and connecting the pipes between our server infrastructure and Microsoft's cognitive service was extremely rewarding. We were also proud of using Facebook's API to compare photos in bulk.

What we learned

We learned how to handle tricky AJAX calls, and how to send even trickier request headers to retain information across calls. We also learned about the variety of web hosting platforms available, and took our first foray into the world of computer vision!
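The trickiest header in this project is Azure's authentication header. A sketch of the detect call, again assuming Node 18+ `fetch`: the endpoint placeholder and function names are hypothetical, but `Ocp-Apim-Subscription-Key` is the real Azure Cognitive Services auth header, and `POST {endpoint}/face/v1.0/detect` with a JSON `{ url }` body is the real Face detection request shape.

```javascript
// Placeholder endpoint; the real one is region- or resource-specific.
const AZURE_ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com";

// Assemble the Face detect request for one image URL.
function buildDetectRequest(imageUrl, subscriptionKey) {
  return {
    url: `${AZURE_ENDPOINT}/face/v1.0/detect?returnFaceAttributes=smile,blur`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": subscriptionKey, // the easy-to-forget header
      },
      body: JSON.stringify({ url: imageUrl }),
    },
  };
}

// Send the request; resolves to an array of detected faces.
async function detectFaces(imageUrl, subscriptionKey) {
  const { url, options } = buildDetectRequest(imageUrl, subscriptionKey);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Face API error: ${res.status}`);
  return res.json();
}
```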

What's next for FotoFinder

Integration with Google Photos, custom ML models for image quality, and an open-source tool with a public API so that other teams can simply build on the idea.
