Inspiration

One of our members has experience with speech and debate, and he noticed that a huge problem for speakers is using filler words, which distract the listener's attention. People who give educational speeches often have to spend a lot of time overcoming these speaking challenges, so we thought we could create an application that helps with public speaking in general.

What it does

To maximize its applicability, we built an iOS app and also brought the idea to Amazon Echo. The Echo was an appealing platform because millions of people in the United States are buying these devices every month. On the Echo, you first give your speech, then activate our program by saying "start speech monitor"; Alexa prompts you to be more specific about what you need help with, and then promptly reports statistics on how many filler and swear words you said. The iOS app works a little differently: it captures the user's speech and, once they are done, gives them statistics on how well they did in terms of how many filler words they used.
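
As a rough illustration of the statistics idea, the sketch below counts filler words in a finished transcript. The word list and function names are placeholders we made up for this example, not the app's actual code.

```python
# Minimal sketch of the filler-word statistics idea (illustrative only;
# the word list and names are placeholders, not the app's real code).
from collections import Counter

FILLER_WORDS = {"um", "uh", "like", "basically", "literally", "actually"}

def filler_stats(transcript: str) -> dict:
    """Count how often each filler word appears in a transcript."""
    words = (w.strip(".,!?") for w in transcript.lower().split())
    counts = Counter(words)
    return {w: counts[w] for w in FILLER_WORDS if counts[w] > 0}

if __name__ == "__main__":
    speech = "So um today I want to, uh, talk about like public speaking"
    print(filler_stats(speech))  # -> {'um': 1, 'uh': 1, 'like': 1}
```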

How we built it

For the Amazon Echo public speaking helper, we used flask-ask, a Python library for writing Alexa skills, to build the different skills for Alexa. Since we were familiar with Python, this worked out really well for us. We also used a service called ngrok to expose our locally hosted skill to the internet so that we could test it wirelessly with the Echo device. We built the iOS app in Xcode using Swift, and we used the built-in language processing to transcribe the words the user says and then detect which of those words are filler words in context.
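
A minimal flask-ask skill along these lines might look like the sketch below. The intent name, responses, and filler-word count are hypothetical stand-ins rather than our exact skill definition, and ngrok would simply be pointed at the Flask port (e.g. `ngrok http 5000`).

```python
# Minimal flask-ask sketch of the Alexa skill (intent name and wording
# are hypothetical stand-ins, not our exact skill definition).
from flask import Flask
from flask_ask import Ask, question, statement

app = Flask(__name__)
ask = Ask(app, "/")

@ask.launch  # fires when the user opens the skill ("start speech monitor")
def launch():
    return question("What would you like help with: filler words or swear words?")

@ask.intent("FillerWordStatsIntent")  # hypothetical intent name
def filler_word_stats():
    count = 7  # in the real skill this comes from analyzing the recorded speech
    return statement("You used {} filler words in your speech.".format(count))

if __name__ == "__main__":
    # Expose this server to the Echo for testing, e.g. with: ngrok http 5000
    app.run(port=5000)
```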

Challenges we ran into

We ran into many syntax errors, and the natural language processing was hard to work with at first. For the syntax errors, we had to Google them and reason through the logic behind most of them, because we weren't as familiar with Swift as we were with Python and Java. For the natural language processing, we tried several options, including the Google speech API and the one built into Xcode. Some were very accurate, but in the loud setting we were in, these processors were hard to work with no matter how good they were; we just had to be patient, and it worked out. Another major problem was getting the phone to vibrate when the user uttered a filler word, since sometimes it only triggered for a few of the words. We eventually worked through this once we realized it came down to the speed of the language processing, especially when we spoke quickly. Beyond that, the remaining issues were small ones, such as naming files and making sure variable names matched.

Accomplishments that we're proud of

We're proud that we were able to work with so many APIs and finish something we never would have thought we could. We're also proud that we programmed our own Alexa skills for the Amazon Echo to meet our needs, and that we got the iOS app working so that users now have two platforms to choose from in case they don't have an Echo at home.

What we learned

We learned a lot about being patient and looking at things with a fresh mindset when they aren't going our way. Many of the errors that frustrated us were small syntax or indentation mistakes, and coming back with fresh eyes let us solve them quickly. We also learned to make use of the mentors around us: they were there to guide us, and when we ran into problems they pointed us in the right direction. Finally, we learned how to work with several different language processing APIs, which should prove useful as this technology keeps advancing.

What's next for iSpeech

iSpeech has a lot in store for the future. We will not only expand the set of filler words the Alexa skill recognizes, but also build an Apple Watch app on the same software as the iPhone app, with similar filler-word recognition. This will allow for more on-the-go assistance, letting people use our app even more quickly from their wrist. We also plan to add a graph showing the trend in your filler-word usage as you practice over time.

Built With

flask-ask, ngrok, Python, Swift, Xcode
