Why we built it#

We noticed a lack of interactive webcasting tools in education, so we set out to leverage ML to create a smarter webcasting service.

What it does#

Amnis is a web application that uses Google's Cloud Natural Language API to process audio from live webcasts and generate topic tags for each video. Users can search these tags to find live videos on subjects they are interested in. Each stream also has a live comment box where viewers can ask questions or upvote questions they find relevant; the webcaster can then view the top questions and answer them.
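The question upvoting could work along these lines. This is a minimal sketch of the idea, not the actual Amnis code; the class and method names are our own illustration:

```python
import heapq


class QuestionBox:
    """Hypothetical in-memory model of the live comment box:
    viewers post questions and upvote them, and the webcaster
    pulls the top-voted questions to answer."""

    def __init__(self):
        self.votes = {}  # question text -> upvote count

    def ask(self, question):
        # Register a new question with zero votes (no-op if it exists).
        self.votes.setdefault(question, 0)

    def upvote(self, question):
        self.votes[question] = self.votes.get(question, 0) + 1

    def top(self, n=3):
        # The n highest-voted questions, most-voted first.
        return heapq.nlargest(n, self.votes, key=self.votes.get)
```

In a real deployment the vote counts would live in the database rather than in memory, so they survive restarts and can be shared across server processes.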

How it's built#

The application is built using Google's Speech-to-Text API together with its Natural Language API. The two services are stitched together in Python, and data is stored in a MongoDB database. The front-end elements are built with HTML, CSS, and JavaScript.
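The tagging step of that pipeline might look like the sketch below. The helper name, thresholds, and the dict shape are our own illustration: `entities` is assumed to be a list shaped like the Natural Language API's entity-analysis output (each entity has a `name` and a `salience` score), which in the real pipeline would be computed from a transcript produced by the Speech-to-Text API.

```python
def tags_from_entities(entities, min_salience=0.05, max_tags=5):
    """Turn entity-analysis results into searchable video tags.

    `entities` is a list of dicts mimicking the shape of the
    Natural Language API's analyze-entities response, e.g.
    {"name": "neural network", "salience": 0.42}.
    """
    # Most salient entities first.
    ranked = sorted(entities, key=lambda e: e["salience"], reverse=True)
    tags = []
    for entity in ranked:
        if entity["salience"] < min_salience or len(tags) >= max_tags:
            break
        tag = entity["name"].lower()
        if tag not in tags:  # skip duplicate tags
            tags.append(tag)
    return tags
```

Filtering on salience keeps incidental mentions (e.g. a passing reference to "chalk") out of the tag list, so searches only surface streams where a topic is actually central.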


Challenges#

None of us had experience with front-end development, so we had to teach ourselves HTML, CSS, and JavaScript. The Google APIs were well documented but involve complicated classes, and it took us a while to understand what each one does. The most difficult part was connecting the back-end components to the front end: none of us had full-stack engineering experience, so it took a lot of fiddling to figure out how to get it done.
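One common way to connect a Python back end to browser JavaScript is a small JSON endpoint the front end can `fetch` from. A bare-bones WSGI sketch of that idea (the route, names, and hard-coded data are illustrative only; a real version would query MongoDB):

```python
import json
from urllib.parse import parse_qs

# Placeholder data standing in for a MongoDB lookup.
TAGS = {"42": ["calculus", "derivative"]}


def app(environ, start_response):
    """Minimal WSGI endpoint: the browser's JavaScript requests
    /tags?video=<id> and receives that video's tags as JSON."""
    query = parse_qs(environ.get("QUERY_STRING", ""))
    video_id = query.get("video", [""])[0]
    body = json.dumps({"tags": TAGS.get(video_id, [])}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
```

On the front end, the page would call something like `fetch("/tags?video=42").then(r => r.json())` and render the returned tags, which keeps the back end and front end decoupled behind one small JSON contract.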

Proud moments#

We were able to create a working prototype with no outside help other than documentation and starter code.

What we learned#

We learned how to work with APIs and how to stitch together many complicated components of an application into one cohesive unit.

What's next#

We would love to build Amnis into a platform for all university students to use in their studies. As students ourselves, we understand the difficulties our peers face, and we want to make lectures a more accessible experience.
