Inspiration

Filters have become a popular tool for quickly adding a new perspective to photos. Giving users access to a filttr on Twitter would add a new dimension to the utility of the platform.

What it does

Using facial recognition technology together with advanced, cutting-edge emojis, filttr tracks faces and emotions in a picture and overlays matching emojis onto each face.

How we built it

We split the project into several sections, divvying up responsibilities so that each person learned a different portion of the overall code. Using Node.js as the baseline, we created several JavaScript files, each handling a different task. For example, one file takes in the image received by the bot and runs it through the Google Vision API, obtaining the values needed to build a full working face map with key coordinates and emotion values (sketched below).
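As a rough sketch of that face-mapping step, here is what it looks like with the official @google-cloud/vision client (the function and file names are illustrative, not our exact code):

```javascript
// Sketch: detect faces and pull out landmark coordinates and
// emotion likelihoods. Assumes GOOGLE_APPLICATION_CREDENTIALS
// points at a valid service-account key.
const vision = require('@google-cloud/vision');

async function mapFaces(imagePath) {
  const client = new vision.ImageAnnotatorClient();
  const [result] = await client.faceDetection(imagePath);

  return (result.faceAnnotations || []).map((face) => ({
    // Landmark positions (eyes, nose tip, mouth, ears, etc.)
    landmarks: Object.fromEntries(
      face.landmarks.map((l) => [l.type, l.position])
    ),
    // Head rotation reported by the API, in degrees
    rollAngle: face.rollAngle,
    // Emotion likelihoods: 'VERY_UNLIKELY' ... 'VERY_LIKELY'
    joy: face.joyLikelihood,
    sorrow: face.sorrowLikelihood,
    anger: face.angerLikelihood,
    surprise: face.surpriseLikelihood,
  }));
}

mapFaces('photo.jpg').then((faces) => console.log(faces));
```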

Challenges we ran into

With our lack of experience in Node.js and scripting languages as a whole, we ran into a great number of issues structuring our code, which made writing our image-modifying script difficult. The modules we selected for image editing also proved difficult to bend to our purpose. On top of that, we had to switch face-recognition APIs eight hours in because Microsoft did not give us keys for their Face and Emotion APIs. The Twitter API itself was fairly easy to implement, but integrating it was a struggle given our limited knowledge of Node.js.
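For illustration, the Twitter side of the bot boils down to something like the sketch below, here using the twit npm package (the package and the handler name are assumptions for the example, not necessarily what we shipped):

```javascript
// Sketch: reply to a mention with the filtered image.
// Assumes Twitter API keys live in environment variables.
const Twit = require('twit');
const fs = require('fs');

const T = new Twit({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  access_token: process.env.TWITTER_ACCESS_TOKEN,
  access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET,
});

async function replyWithImage(tweetId, user, imagePath) {
  // Upload the edited image, then attach it to a reply tweet.
  const b64 = fs.readFileSync(imagePath, { encoding: 'base64' });
  const upload = await T.post('media/upload', { media_data: b64 });
  await T.post('statuses/update', {
    status: `@${user} here's your filttr!`,
    in_reply_to_status_id: tweetId,
    media_ids: [upload.data.media_id_string],
  });
}
```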

Accomplishments that we're proud of

Learning a completely new programming language. All of our members were inexperienced in Node.js and JavaScript, and spending 24 hours learning enough to create a working application is something we are proud of. Combining and meshing different modules and learning how to use the Google Vision API is also an accomplishment worth mentioning: compounded by our inexperience, getting all the modules to work together and extracting values from the API was not an easy task. Since we did not have access to the algorithms that other face-tracking and overlay software uses, we had to work out the optimal positioning ourselves from the values the Google Vision API provides. With a large amount of googling and some relearned trigonometry and geometry, we arrived at a rough idea of the best spot for an overlay based on the values we can obtain from the API; a sketch of that math follows below.
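A minimal sketch of that positioning math, assuming eye-landmark positions returned by the Vision API (the heuristic constants are illustrative):

```javascript
// Sketch: place an emoji over a face using eye landmarks.
// `leftEye` and `rightEye` are {x, y} positions from the Vision API.
function emojiPlacement(leftEye, rightEye) {
  const dx = rightEye.x - leftEye.x;
  const dy = rightEye.y - leftEye.y;

  // Inter-eye distance sets the scale of the face.
  const eyeDist = Math.hypot(dx, dy);

  // Head roll: angle of the line running through both eyes.
  const rollDeg = (Math.atan2(dy, dx) * 180) / Math.PI;

  // Rough heuristic: emoji about 2.5x the eye distance wide,
  // centered on the midpoint between the eyes.
  const size = eyeDist * 2.5;
  return {
    x: (leftEye.x + rightEye.x) / 2 - size / 2,
    y: (leftEye.y + rightEye.y) / 2 - size / 2,
    size,
    rollDeg,
  };
}
```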

What we learned

As mentioned previously, much of our early time and effort went into figuring out JavaScript syntax and how Node.js works. While both seem to be extremely useful tools, they took time to learn. Working with the Google Vision API was also an eye-opening experience: we initially hoped to use Microsoft Azure, but found in the end that the Vision API was much richer in information and gave us many more values to work with. Finding the correct way to meld the API with our code was certainly enriching, and coordinating values to find the optimal positioning of the overlay was an interesting problem to solve. Most of us had forgotten the trigonometry we learned in high school and had to reteach ourselves how to derive an optimal overlay position from those coordinates. While filttr is still in rough shape, we believe it works well for the time we spent on it.
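Once a placement is computed, the remaining step is compositing the emoji onto the photo. A minimal sketch, here using the jimp module (the module we actually wrestled with may differ):

```javascript
// Sketch: rotate, resize, and composite an emoji PNG onto a photo.
const Jimp = require('jimp');

async function applyFilttr(photoPath, emojiPath, placement, outPath) {
  const [photo, emoji] = await Promise.all([
    Jimp.read(photoPath),
    Jimp.read(emojiPath),
  ]);

  emoji
    .resize(Math.round(placement.size), Jimp.AUTO) // scale to the face
    .rotate(-placement.rollDeg);                   // match the head roll

  // Overlay at the computed top-left corner and save the result.
  photo.composite(emoji, Math.round(placement.x), Math.round(placement.y));
  await photo.writeAsync(outPath);
}
```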

What's next for filttr

Because the Google Vision API provides many more information values (such as hairline and ear positions), it opens up a whole new world of different filttrs. Once we figure out the math to find a best fit for newer filttrs, we can add many new features such as animal ears and hats. We hope to build on what we have and learn even more about JavaScript and Node.js.

Built With

Node.js, JavaScript, Google Vision API, Twitter API
