How it started

It started from a simple idea: how could we make online lectures better?

What it does

Visual Lecture is a system that transcribes a lecture's audio in real time and fetches visual aids to further enhance the experience. This not only improves online lectures, it also lets hearing-impaired students follow along in online or in-person classes through our web interface, so they don't need an interpreter to have the same experience as other students.

What it's built on

Visual Lecture is built from three main components. The first is a Java applet that transcribes audio to text using IBM's Speech-to-Text API. Once the audio is transcribed, the text is sent to our server hosted on Heroku. The server runs on the Node.js runtime, built mainly with the Express framework and the Socket.io module. When the server receives the text, it feeds it to IBM's AlchemyAPI to extract the key concepts of the lecture. Those concepts are then used to search for visual aids through Flickr's API, and the results are pushed back to the client's browser, which receives both the transcribed audio and the visual aids in real time.
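To make that flow concrete, here is a minimal Node.js sketch of the server side. Only Express, Socket.io, AlchemyAPI, and Flickr's `flickr.photos.search` method come from the description above; the event names (`transcript`, `caption`, `visual-aid`), the AlchemyAPI endpoint and parameters, and the environment variable names are assumptions for illustration, not the actual implementation.

```javascript
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

app.use(express.static('public')); // serves the web interface

io.on('connection', (socket) => {
  // 'transcript' is a hypothetical event name for text arriving from the applet
  socket.on('transcript', async (text) => {
    // push the caption to every connected viewer immediately
    io.emit('caption', text);

    try {
      const keywords = await extractKeywords(text);
      for (const keyword of keywords) {
        const imageUrl = await searchFlickr(keyword);
        if (imageUrl) io.emit('visual-aid', { keyword, imageUrl });
      }
    } catch (err) {
      console.error('visual aid lookup failed', err);
    }
  });
});

async function extractKeywords(text) {
  // Keyword extraction via AlchemyAPI -- endpoint and parameters are assumed here
  const res = await fetch(
    'https://gateway-a.watsonplatform.net/calls/text/TextGetRankedKeywords',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams({
        apikey: process.env.ALCHEMY_API_KEY, // assumed env var
        text,
        outputMode: 'json',
        maxRetrieve: '3',
      }),
    }
  );
  const data = await res.json();
  return (data.keywords || []).map((k) => k.text);
}

async function searchFlickr(keyword) {
  // flickr.photos.search is Flickr's standard REST search method
  const params = new URLSearchParams({
    method: 'flickr.photos.search',
    api_key: process.env.FLICKR_API_KEY, // assumed env var
    text: keyword,
    per_page: '1',
    format: 'json',
    nojsoncallback: '1',
  });
  const res = await fetch(`https://api.flickr.com/services/rest/?${params}`);
  const data = await res.json();
  const photo = data.photos && data.photos.photo[0];
  if (!photo) return null;
  // build a direct image URL from the returned fields (Flickr's standard URL scheme)
  return `https://live.staticflickr.com/${photo.server}/${photo.id}_${photo.secret}.jpg`;
}

server.listen(process.env.PORT || 3000);
```

Socket.io is what makes the experience feel live: the server can push both captions and images to every connected browser the moment they are available, rather than waiting for clients to poll.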

Challenges we faced

The main challenge we faced was integrating all of the APIs into one cohesive system. We had to learn to complete a large task under a tight time constraint. One of the biggest hurdles was getting the audio transcribed quickly and accurately.
