Helps you improve oral presentations by checking that your face stays turned toward the audience.

What it does

It tracks your face, eyes, and head position to work out where you are looking. Whenever your face is not turned toward the audience, the app shows a red border. At the end of the presentation it generates statistics about your "face gestures" so you can improve your oral presentations.
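One way to turn the tracked head pose into that red-border signal is to convert the estimated rotation into yaw/pitch angles and threshold them. A minimal sketch of that idea; the function names and the 25-degree limit are our assumptions, not the app's actual values:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

struct EulerAngles { double yaw, pitch, roll; };  // radians

// ZYX Euler decomposition of a rotation matrix (e.g. the matrix that
// cv::Rodrigues produces from a solvePnP rotation vector).
EulerAngles eulerFromRotation(const Mat3& R) {
    EulerAngles e{};
    e.pitch = std::atan2(R[2][1], R[2][2]);
    e.yaw   = std::atan2(-R[2][0],
                         std::sqrt(R[2][1] * R[2][1] + R[2][2] * R[2][2]));
    e.roll  = std::atan2(R[1][0], R[0][0]);
    return e;
}

// Hypothetical rule for the red border: the speaker counts as facing the
// audience while head yaw and pitch stay within `limitDeg` of straight
// ahead. The default limit is an assumption, not a tuned value.
bool lookingAtAudience(const EulerAngles& e, double limitDeg = 25.0) {
    const double limit = limitDeg * std::acos(-1.0) / 180.0;
    return std::fabs(e.yaw) < limit && std::fabs(e.pitch) < limit;
}
```

With this shape, the UI only needs the boolean per frame: draw the border while `lookingAtAudience` is false.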

How we built it

We used OpenCV together with dlib to track faces, JSON files to store the statistics, Node.js for the REST API, and AngularJS for the web client.
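The statistics side can be as simple as per-frame counters serialized to JSON for the Node.js API to store. A sketch of that accumulator; the field names and flat layout are assumptions, not the project's actual schema:

```cpp
#include <sstream>
#include <string>

// Per-session counters for the "face gestures" report.
struct GazeStats {
    long framesFacing = 0;
    long framesAway = 0;

    // Call once per processed video frame.
    void record(bool facingAudience) {
        (facingAudience ? framesFacing : framesAway)++;
    }

    // Fraction of the talk spent facing the audience.
    double facingRatio() const {
        long total = framesFacing + framesAway;
        return total == 0 ? 0.0 : static_cast<double>(framesFacing) / total;
    }

    // Serialize to the JSON blob the REST API would persist.
    std::string toJson() const {
        std::ostringstream out;
        out << "{\"framesFacing\":" << framesFacing
            << ",\"framesAway\":" << framesAway
            << ",\"facingRatio\":" << facingRatio() << "}";
        return out.str();
    }
};
```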

Challenges we ran into

Tracking the face in 3D was hard for us: we needed six 2D-3D point correspondences to estimate the head's position and orientation.
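Those six correspondences are typically 2D landmark positions (from dlib's 68-point shape predictor) matched against a generic 3D face model and handed to cv::solvePnP. A sketch of one common choice of points; the coordinates are approximate generic-model values, not measured from our setup:

```cpp
#include <array>

// Pairs a dlib 68-landmark index with a 3D coordinate in a generic face
// model (arbitrary units, nose tip at the origin). Six such 2D-3D
// correspondences are enough for solvePnP to recover head rotation and
// translation.
struct ModelPoint {
    int landmarkIndex;   // index into dlib's 68-point shape
    double x, y, z;      // position in the generic 3D face model
};

constexpr std::array<ModelPoint, 6> kFaceModel{{
    {30,    0.0,    0.0,    0.0},   // nose tip
    { 8,    0.0, -330.0,  -65.0},   // chin
    {36, -225.0,  170.0, -135.0},   // left eye, left corner
    {45,  225.0,  170.0, -135.0},   // right eye, right corner
    {48, -150.0, -150.0, -125.0},   // mouth, left corner
    {54,  150.0, -150.0, -125.0},   // mouth, right corner
}};
```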

Accomplishments that we're proud of

We had never used face tracking before, and this was our first project developed in C++.

What we learned

How OpenCV works and how to track a face in 3D.

What's next for Slide Coacher

Add more detections, such as hands. For example, keeping your hands in your pockets is not appropriate during a presentation.
