When taking photographs with friends or family, someone very often blinks or makes an awkward face. No one wants to be the odd one out, so the best option is to leave no one behind. That is the purpose of our project "Smile": merging faces into a composite image where everyone looks good!
What it does
Smile takes several photos and merges them to ensure that everyone looks good in the result. It uses the Google Cloud Vision API to analyze sentiment and find each person's happiest photo, then uses OpenCV to merge the photos into a superior final image, with the back end deployed via Firebase.
How we built it
We primarily used Google Cloud Vision and OpenCV to analyze the photos. Google Cloud Vision returns indicators for joy (a positive quality), anger (a negative quality), and head orientation (facing forward is better). After feeding a series of frames comprising a live photo into Cloud Vision, our algorithm identifies the best frame for each person in the photo (these frames need not be the same!). Then, we morph each individual's best shot into a combined photo. This process is implemented in Python/OpenCV via Delaunay triangulation and parameterized blending. The finished photo is then returned to the user's iPhone via our app.
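The exact weighting our backend uses isn't reproduced here, but the per-frame scoring described above can be sketched as follows. This is a hypothetical illustration: the likelihood strings mirror Cloud Vision's `Likelihood` enum for `joy_likelihood`/`anger_likelihood`, and the pan-angle penalty weight is an assumption, not our production value.

```python
# Map Cloud Vision's Likelihood enum names to numeric scores
# (the real API returns these for joy_likelihood, anger_likelihood, etc.).
LIKELIHOOD_SCORE = {
    "UNKNOWN": 0, "VERY_UNLIKELY": 1, "UNLIKELY": 2,
    "POSSIBLE": 3, "LIKELY": 4, "VERY_LIKELY": 5,
}

def frame_score(joy, anger, pan_angle):
    """Higher is better: reward joy, penalize anger and head turn.
    The 30-degrees-per-point penalty is an illustrative assumption."""
    return (LIKELIHOOD_SCORE[joy]
            - LIKELIHOOD_SCORE[anger]
            - abs(pan_angle) / 30.0)

def best_frame(per_frame_annotations):
    """per_frame_annotations: one (joy, anger, pan_angle) tuple per frame
    for a single person; returns the index of that person's best frame."""
    scores = [frame_score(j, a, p) for j, a, p in per_frame_annotations]
    return max(range(len(scores)), key=scores.__getitem__)
```

Running `best_frame` once per detected person is what allows different people's best frames to differ, as noted above.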
For the iOS app we're demonstrating, the user is asked to select a live photo. The app then separates the ~3s live photo into 10 individual frames, sends them to our back end (described above) via our API and Google Cloud Storage, and displays the resulting super-photo back to the user.
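The app itself does the frame splitting on device, but the sampling logic is language-agnostic: pick 10 evenly spaced frames from however many the live photo contains. A minimal sketch (the function name and the fallback for short clips are our own choices here, not the app's exact code):

```python
def sample_frame_indices(total_frames, n_samples=10):
    """Return n_samples evenly spaced frame indices from a clip.

    If the clip has fewer frames than requested, just use them all.
    """
    if total_frames <= n_samples:
        return list(range(total_frames))
    step = total_frames / n_samples
    return [int(i * step) for i in range(n_samples)]
```

Each selected frame is then uploaded for scoring; sampling evenly keeps the candidates spread across the whole ~3s clip instead of clustering at the start.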
Challenges we ran into
While we had challenges designing the front end and getting the Google Firebase/Cloud Storage integration working, our primary challenge was OpenCV itself.
Going into this, our team had no OpenCV experience, just a vision of what an exciting project would look like, so learning about facial blending was a huge challenge. Our initial idea was simply to find the facial landmarks, then crop, resize, and paste the best face onto the same picture, but the result looked noticeably unrealistic. With some research and careful thought, we found that facial morphing in OpenCV requires a proper one-to-one mapping between many facial features for the blend to work.
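Once that one-to-one landmark mapping exists, the "parameterized blending" step reduces to two linear interpolations: one over matched landmark positions (which drives the triangle-by-triangle warp) and one over pixel intensities of the warped images. A NumPy-only sketch of those two steps (the surrounding triangulation and affine warps, done in OpenCV in our pipeline, are omitted):

```python
import numpy as np

def blend_points(pts_a, pts_b, alpha):
    """Interpolate matched landmark positions: the 1:1 feature mapping.
    alpha=0 gives image A's geometry, alpha=1 gives image B's."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    return (1.0 - alpha) * pts_a + alpha * pts_b

def cross_dissolve(img_a, img_b, alpha):
    """Blend pixel intensities of two images already warped
    to the same intermediate geometry."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    return ((1.0 - alpha) * a + alpha * b).astype(np.uint8)
```

Without the landmark correspondence, the cross-dissolve alone produces the ghosting we saw with our naive crop-and-paste approach; warping both faces to the blended geometry first is what makes the result look realistic.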
Accomplishments that we're proud of
We're proud of learning so much about OpenCV in such a short timespan, and of hacking together a very cool, functional project in a mere 36 hours (including time spent sleeping)!
What we learned
We learned a lot about facial morphing and OpenCV, proper ways to manipulate images, and how to use Google's Firebase and Vision APIs. Coming in with no experience, we learned far more about OpenCV than we expected.
What's next for Smile
An Android version would be ideal. Further testing is always useful, so we can also expand our test suite to cover more situations.