In the show How I Met Your Mother, there's a running joke that Barney Stinson can't take a bad picture, and at its core our app does just that. If you've ever tried to take a group photo or capture the perfect shot, you know how frustrating and unsuccessful it can be. Our algorithm takes a series of frames and selects the one in which the whole group looks most photogenic, based on whether everyone is present, facing the camera, not blinking, smiling, and other facial features.
What it does
Takes a short video or a sequence of images and returns the frame in which all the participants look most photogenic, using Principal Component Analysis (PCA) and regression. While we trained the model to score photogenicity, the core algorithm is general enough to be trained on other qualities, such as professionalism or happiness. It can also be a great tool for people with impaired vision or Parkinson's who want to take the perfect photo.
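The frame-selection step can be sketched as follows. The aggregation rule (a frame is only as good as its worst face) and the score values here are illustrative choices, not the exact logic we shipped:

```python
import numpy as np

def pick_best_frame(per_face_scores):
    """Return the index of the most photogenic frame.

    per_face_scores: one list per frame, holding a regression score for
    every face detected in that frame. We aggregate with min because the
    whole group should look good, then take the best frame overall.
    """
    frame_scores = [float(np.min(s)) if len(s) else float("-inf")
                    for s in per_face_scores]
    return int(np.argmax(frame_scores))

# Toy example: frame 1 is the only one where every face scores well.
best = pick_best_frame([[0.9, 0.2], [0.7, 0.8], [0.6, 0.3]])
```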
How we built it
We used a mobile face detection API to extract quantitative facial features. After running Principal Component Analysis (PCA) on these features and on derived feature ratios, we trained a regression model on photo data scraped from the web. We then embedded this regression model in a mobile app and implemented additional Android functionality, including a custom camera.
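A minimal sketch of that training pipeline, assuming NumPy, with toy random data standing in for the scraped photos and the real feature set:

```python
import numpy as np

def fit_pca(X, n_components):
    """Center the feature matrix and return (mean, top components)."""
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal components as rows of vt.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def fit_regression(X, y, mean, components):
    """Least-squares fit of scores y on the PCA projection of X."""
    Z = (X - mean) @ components.T              # project onto components
    Z1 = np.hstack([Z, np.ones((len(Z), 1))])  # append intercept column
    w, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    return w

# Toy data: 6 faces x 4 facial features, with photogenicity scores y.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
y = rng.uniform(size=6)
mean, comps = fit_pca(X, 2)
w = fit_regression(X, y, mean, comps)
```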
Challenges we ran into
Splitting a video into a high-frame-rate image sequence proved difficult, if not impossible, within the Android framework, so we built our own camera implementation that saves frames directly from the incoming byte stream. We also initially trained a neural network on the PCA components as our regression model, but that required a server connection to process each photo; we settled instead on PCA with a less computation-intensive regression model so all the calculations could run on the device. Finally, because the face detection API only exists in the Android framework and we wanted the training data to be consistent, all of our training data had to be processed through Android as well.
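The reason the lighter model fits on the device: once the PCA mean, components, and regression weights are exported from training, scoring a photo reduces to a couple of small matrix operations, with no network round-trip. A sketch, with all names and values illustrative:

```python
import numpy as np

def score_photo(features, mean, components, weights):
    """Score one photo's facial-feature vector on the device.

    `mean`, `components`, and `weights` would be exported from the
    training step and bundled with the app.
    """
    z = components @ (features - mean)         # project onto PCA components
    return float(np.append(z, 1.0) @ weights)  # linear model with intercept

# Toy values: 4 raw features, 2 components, weights [w1, w2, intercept].
s = score_photo(np.array([2.0, 3.0, 0.0, 0.0]),
                np.zeros(4),
                np.eye(2, 4),
                np.array([1.0, 0.0, 0.5]))
```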
Accomplishments that we're proud of
We were proud to combine our interests in machine learning and mobile app development. We ran into a lot of problems, but we still built a fully functional minimum viable product.
What we learned
We learned how to apply machine learning to a real-world problem, along with many specifics of the Android framework.
What's next for PhotoOpt
Our current camera implementation is rather slow and not fully optimized; we expect to make it much smoother in the future. We are also looking to improve the machine learning algorithm, although its performance already seems quite reasonable.