This project was inspired by my interest in deep learning and in frameworks like Keras.
The Android application takes in a video stream; frame by frame is passed to a model that detects the user's emotion and sends a GitHub repository based on it.
I worked only on the backend, which involved training a CNN model to recognize emotions from images of faces. Using OpenCV, frames are read one by one from the video stream to confirm an emotion, and a GitHub repository filed under that emotion is sent.
The CNN model did not always decipher the emotion accurately, but it should improve with more training.
I am very interested in learning more about machine learning and AI, and it was nice to apply what I learned in building my first model.
We plan to change the model's functionality slightly to hopefully improve accuracy, and also to develop the UI.