Everybody gives presentations; not everybody gives them well. With Butterfly, we wanted to make it easy for anyone to deliver an awesome presentation.

What it does

First, Butterfly records a user's presentation. Then, using Watson's Speech-to-Text service, it transcribes the presentation to text and sends the text to Keen IO. Once Keen IO calls back, Butterfly queries Keen's analysis API and presents the results visually, suggesting ways the user can improve future presentations.
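The "send the text to Keen IO" step boils down to turning each transcript segment into an analytics event. The sketch below shows one plausible shape for that payload; the field names, the filler-word list, and the `transcript_to_event` helper are illustrative assumptions, not Butterfly's actual schema.

```python
import re

# Hypothetical filler-word list; the real analysis could use a richer one.
FILLERS = {"um", "uh", "like", "basically"}

def transcript_to_event(transcript, duration_seconds):
    """Summarize one transcript segment as a Keen-style event dict.

    Extracts a few simple presentation metrics: total words, filler
    words, and pacing in words per minute.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    filler_count = sum(1 for w in words if w in FILLERS)
    return {
        "transcript": transcript,
        "word_count": len(words),
        "filler_count": filler_count,
        # Words per minute is a common pacing metric for speeches.
        "words_per_minute": round(len(words) / (duration_seconds / 60), 1),
    }
```

An event built this way could then be shipped with Keen's Python client, e.g. `keen.add_event("speech_segments", event)`, and later aggregated through Keen's analysis queries (counts, averages) to drive the visual feedback.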

How we built it

We forked IBM's speech-to-text app off of GitHub, added our interface with Keen IO's API, then designed and coded our web interface.

Challenges we ran into

Jumping into the code base we forked was difficult. We also hit a few hiccups with the Keen API: halfway through the hackathon we realized that Keen IO processes data in near-real-time rather than real time, a delay that propagated to our users. We worked around this by streaming the speech to Keen as it arrived rather than sending it all at the end of the speech.
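The workaround above amounts to pushing each interim transcript to analytics the moment it arrives, instead of batching one big payload after the speech ends. A minimal sketch, where `stream_segments` and its arguments are hypothetical names (`send` stands in for whatever call ships one event, such as a Keen client's event writer):

```python
def stream_segments(interim_results, send):
    """Ship each interim transcript as it arrives.

    interim_results: iterable of (transcript, duration_seconds) pairs,
        as a speech-to-text service might emit them mid-speech.
    send: callable that delivers one event dict to the analytics backend.
    Returns the number of events sent.
    """
    sent = 0
    for seq, (transcript, duration) in enumerate(interim_results):
        # Sending per-segment means the near-real-time processing delay
        # overlaps with the rest of the speech instead of stacking up
        # after it ends.
        send({"seq": seq, "transcript": transcript, "duration_seconds": duration})
        sent += 1
    return sent
```

The design trade-off: per-segment sends cost more requests, but by the time the speaker finishes, most of the analysis backlog has already been absorbed.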

Accomplishments that we're proud of

We're proud that Butterfly will help people significantly improve their presentation skills. We're also excited that the interface ended up as clean and user-friendly as it did.

What we learned

We learned that by taking advantage of existing APIs, we could build a fairly complex application in under 24 hours.

What's next for Butterfly

We plan to analyze additional speech patterns, such as inflection, to better help users. We also plan to let users link to YouTube videos, or upload videos of their own, so Butterfly can analyze their gestures and other movements.
