We The Speakers

Inspiration

Hi! My name is Shreeniket, and I am a 14-year-old who loves programming and public speaking. I decided to combine these two passions into a single iOS application, and here is the result! We The Speakers was submitted to the DubHacks hackathon :).

The Newsprint Track

I submitted to the Newsprint track because I believe this project has a lot of potential to help the public and promote social good in general. Furthermore, through better speaking skills, individuals can develop more meaningful connections with other people, which was one of the guidelines of the track. This app also touches on education, public policy, communication, and productivity, which are all additional aspects of the track.

What it does

We The Speakers has four major components. First, it has a tips page where users can learn about public speaking and improve their skills. Second, there is a teleprompter feature with a scrolling text view that lets users record themselves speaking; the recorded video is then saved to the user's camera roll. Third, a natural language processor analyzes a speech the user writes and classifies how it reads emotionally. Finally, I use the Echo AR SDK to render a virtual microphone in augmented reality.
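The teleprompter's scrolling view can be sketched in UIKit. This is a minimal, assumed implementation (the names `TeleprompterView`, `startScrolling`, and the scroll speed are my own illustration, not the app's actual code): a `UITextView` whose content is nudged upward by a timer at a steady rate.

```swift
import UIKit

// Hypothetical teleprompter sketch: a text view that auto-scrolls its
// content upward at a configurable speed, stopping at the bottom.
final class TeleprompterView: UITextView {
    private var timer: Timer?

    func startScrolling(pointsPerSecond: CGFloat = 30) {
        timer?.invalidate()
        // Tick ~60 times per second, moving a fraction of the speed each tick.
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / 60.0, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            let step = pointsPerSecond / 60.0
            let maxOffset = max(0, self.contentSize.height - self.bounds.height)
            let next = min(self.contentOffset.y + step, maxOffset)
            self.setContentOffset(CGPoint(x: 0, y: next), animated: false)
            if next >= maxOffset { self.stopScrolling() }  // reached the end
        }
    }

    func stopScrolling() {
        timer?.invalidate()
        timer = nil
    }
}
```

Pairing a view like this with an `AVCaptureSession` front-camera recording would give the record-while-reading behavior described above.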

How I built it

I used Swift for the iOS app, Echo AR for the augmented reality, and Ruby for expanding the tips on the main page.

Challenges I ran into

The biggest challenge I ran into was analyzing the user's inputted speech. It was difficult to get the ML model into my application and to properly classify a speech as positive or negative in emotion. It was also difficult to build the side menu navigation, as I had to make it a collapsible subview in my project, which is a fairly advanced procedure.
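For context on the positive/negative classification step, here is one possible sketch using Apple's built-in `NaturalLanguage` framework rather than a custom Core ML model (the project's actual model may differ; `NLTagger` with the `.sentimentScore` scheme returns a score from -1.0, most negative, to 1.0, most positive):

```swift
import NaturalLanguage

// Hypothetical sentiment helper: scores the first paragraph of a speech.
// Negative scores lean negative in emotion, positive scores lean positive.
func sentimentScore(for speech: String) -> Double {
    let tagger = NLTagger(tagSchemes: [.sentimentScore])
    tagger.string = speech
    let (tag, _) = tagger.tag(at: speech.startIndex,
                              unit: .paragraph,
                              scheme: .sentimentScore)
    // The score arrives as a string raw value; fall back to 0 (neutral).
    return Double(tag?.rawValue ?? "0") ?? 0
}

let score = sentimentScore(for: "I am thrilled to speak with you all today!")
print(score > 0 ? "positive" : "negative or neutral")
```

A custom Core ML text classifier trained with `CreateML` would follow the same shape: feed in the text, read back a label or score.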

Accomplishments that I'm proud of

I am very proud of getting the speech analysis working so well. It accurately predicted the sentiment every time I tested it, which is an amazing accomplishment for me (I had never built a machine learning feature by myself before). I am also proud of my Echo AR implementation and how well it works! In addition, the teleprompter came out amazing, so I am pretty much proud of the entire project.

What I learned

I learned how to implement machine learning, as well as proper UI design for an app. I also learned how to make a side menu navigation, which turned out very well in the end! I am grateful for DubHacks, since as a high schooler I do not get many opportunities to express my interests.

What's next for We The Speakers

I plan to release this app on the App Store and keep implementing more features. One idea is facial emotion recognition for the audience, which would give the speaker insight into the effectiveness of their presentation.

Built With

Swift, Echo AR, Ruby
