I was competing at a biztech case competition two months ago with a group of four other students. When we were presenting our product in front of senior executives, we had a hard time communicating with each other, since all we could do was send each other awkward eye signals and not-so-subtle hand gestures. We ended up running out of time in the first round of judging (miraculously, we still made it to the finals).

I wanted to change the way we do presentations. While giving a live presentation, you tend to ignore your surroundings and forget how you come across to others: your tone, volume, pace, eye contact, level of jargon, hand and arm movements, position on the stage, and more. When you are co-presenting with a colleague, it is also very hard to tell them to "hurry up" or "speak louder" without making a scene in front of the audience. And for people in leadership positions, the way they carry themselves in public speaking engagements is often heavily scrutinized.

The name ticktocktalk was inspired by the metronome, a device musicians use to correct and adjust their performance. Metronomes have a distinct "tick tock" sound, and we include a small animation that mimics a metronome to remind the speaker to stay at an optimal pace. We hope ticktocktalk becomes a similarly valuable tool for anyone trying to improve their presentation and speaking skills.

What it does

Our product, ticktocktalk, is a progressive web application that gives you real-time feedback on your performance during presentations. To use it, open the site in your mobile browser (Chrome recommended), begin a new presentation with a time limit, and set the phone in front of you while you present. Co-presenters and audience members can visit the same site and, given the access code, provide real-time feedback on your presentation skills. Libraries in the back end receive the video stream and analyze your position, facial expressions, and more. You can also wear a Myo armband to receive physical feedback (e.g. 2 fast vibrations = you're speaking too quickly).
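The pacing cue can be boiled down to a simple words-per-minute check. Here is a minimal sketch of that idea; the thresholds (130–170 wpm) and the vibration encoding are illustrative assumptions, not our actual tuned values:

```javascript
// Decide whether the speaker needs a pacing cue.
// Thresholds and vibration counts here are hypothetical examples.
function paceFeedback(wordCount, elapsedSeconds) {
  const wpm = (wordCount / elapsedSeconds) * 60;
  if (wpm > 170) return { cue: 'too_fast', vibrations: 2 }; // 2 fast buzzes
  if (wpm < 130) return { cue: 'too_slow', vibrations: 1 }; // 1 long buzz
  return { cue: 'ok', vibrations: 0 };
}

// Example: 90 words in 30 seconds is 180 wpm, so the speaker is too fast.
console.log(paceFeedback(90, 30)); // → { cue: 'too_fast', vibrations: 2 }
```

In the real app the `vibrations` value would be translated into a Myo vibration command; the decision logic itself stays this simple.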

How we built it

We built the front end in React and used stdlib to provide several functional endpoints for communication with the back end. We used AWS DynamoDB for data persistence and the Myo armband SDK for the haptic feedback.

Challenges we ran into

It was difficult to use the Amazon Kinesis service from our Node.js application, since it was a newer technology with a fairly small community. The documentation was full of highly technical jargon, and we could not find any tutorials that fit our use case. Kinesis has a steep learning curve and assumes substantial experience with the AWS ecosystem.

Accomplishments that we're proud of

Building a polished application.

What we learned

How easy it is to develop full-stack web applications with stdlib: it took us less than 10 minutes to get started. Given its convenience, we will probably reach for it at future hackathons as well.
