I was compelled to undertake a project on my own for the first time in my hackathoning career: one that covers my interests in web applications and image processing and would be something "do-able" within the competition.
What it does
Umoji is a web app that takes an image as input and, using facial recognition, maps emoji onto the faces in the image to match their emotions and facial expressions.
How I built it
I used the Google Cloud Vision API as the backbone for all the ML and visual recognition, and Flask to serve up a simple Bootstrap-based HTML front end.
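The core server-side step is deciding which emotion each detected face shows. The sketch below is a hypothetical, offline illustration of that step: the `FaceAnnotation` stand-in and likelihood strings mirror the field names the Cloud Vision face-detection API actually returns (`joy_likelihood`, `sorrow_likelihood`, etc.), but everything here is a local mock and no network call is made.

```python
# Hypothetical sketch of the emotion-picking step performed on each face.
# FaceAnnotation is a local stand-in for the Vision API's face annotation;
# no Google Cloud credentials or network access are needed to run this.
from dataclasses import dataclass

# Vision reports each emotion as a likelihood bucket, not a probability.
LIKELIHOOD_RANK = {
    "UNKNOWN": 0, "VERY_UNLIKELY": 1, "UNLIKELY": 2,
    "POSSIBLE": 3, "LIKELY": 4, "VERY_LIKELY": 5,
}

@dataclass
class FaceAnnotation:  # stand-in for the real vision.FaceAnnotation
    joy_likelihood: str
    sorrow_likelihood: str
    anger_likelihood: str
    surprise_likelihood: str

def dominant_emotion(face: FaceAnnotation) -> str:
    """Return the emotion whose likelihood bucket ranks highest."""
    scores = {
        "joy": face.joy_likelihood,
        "sorrow": face.sorrow_likelihood,
        "anger": face.anger_likelihood,
        "surprise": face.surprise_likelihood,
    }
    return max(scores, key=lambda emotion: LIKELIHOOD_RANK[scores[emotion]])

face = FaceAnnotation("VERY_LIKELY", "VERY_UNLIKELY",
                      "VERY_UNLIKELY", "POSSIBLE")
print(dominant_emotion(face))  # joy
```

In the real app this function would run over each entry in the API response's face annotations, and the face's bounding box would determine where the emoji is pasted.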
Challenges I ran into
Creating an extensive list of emoji to map to the different levels of emotion predicted by the ML model, plus web deployment and networking problems.
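A mapping like the one described above can be sketched as a lookup keyed on (emotion, likelihood level). The specific emoji choices below are illustrative assumptions, not the ones Umoji ships with:

```python
# Hedged sketch of an (emotion, likelihood) -> emoji lookup table.
# The emoji assignments are illustrative, not Umoji's actual list.
EMOJI_MAP = {
    ("joy", "VERY_LIKELY"): "\U0001F602",       # face with tears of joy
    ("joy", "LIKELY"): "\U0001F600",            # grinning face
    ("joy", "POSSIBLE"): "\U0001F642",          # slightly smiling face
    ("sorrow", "VERY_LIKELY"): "\U0001F62D",    # loudly crying face
    ("sorrow", "LIKELY"): "\U0001F622",         # crying face
    ("anger", "VERY_LIKELY"): "\U0001F621",     # pouting face
    ("anger", "LIKELY"): "\U0001F620",          # angry face
    ("surprise", "VERY_LIKELY"): "\U0001F631",  # screaming in fear
    ("surprise", "LIKELY"): "\U0001F632",       # astonished face
}

def emoji_for(emotion: str, likelihood: str) -> str:
    # Fall back to a neutral face when no specific mapping exists.
    return EMOJI_MAP.get((emotion, likelihood), "\U0001F610")

print(emoji_for("joy", "VERY_LIKELY"))  # 😂
```

Keeping the table flat like this makes it easy to extend: adding a new emotion level is one new key, with the neutral face catching anything unmapped.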
Accomplishments that I'm proud of
The fact that I was able to hit all the checkboxes for what I set out to do, without overshooting with stretch features or getting too caught up in extending the main features beyond the original scope.
What I learned
How to work with Google's Cloud Vision API, image processing, and rapid live deployment.
What's next for Umoji
More emoji, better UI/UX, and social media integration for sharing.