Realtime Augmented Learning Environment

Oops! Technical difficulties in our main video - check out our (actual) demo here! https://youtu.be/-AOdw0kwrO8

Inspiration

With countless distractions, unstable Wi-Fi, and muffled voices, online classes pose real challenges to focus and concentration. It’s easy to “zone out” and get put on the spot when asked a question, or to tune back in mere minutes later to find that the topic has changed drastically. RALE helps students overcome these issues by providing a clean, organized interface for better learning.

What it does

RALE is a lightweight, highly functional application that students can run side by side with Zoom lectures, without distracting from or taking away the main content.

RALE presents three key features:

  • Continuously processes the lecture’s audio stream to identify current discussion topics in real time
  • Automatically records and lists every question the instructor asks during class, so students can catch up at their own pace
  • Lets instructors send students additional resources such as links, pictures, equations, and more during class using customized voice commands, powered by Wolfram Alpha

How we built it

It all begins with a regular Zoom meeting. For students, it will be no different from a normal session.

Behind the scenes, RALE is hard at work. The audio and video from the stream are fed via RTMP into an NGINX streaming server running on a Google Compute Engine instance. From there, the audio is extracted in chunks and sent to Google Cloud Speech-to-Text for transcription. The text is then stored in Firebase Realtime Database. When the database updates, natural language processing in Python begins: topic modelling with LDA (latent Dirichlet allocation) via Gensim and NLTK, question extraction via NLTK, and supplementary information fetching with Wolfram Alpha. The output is then sent back to the database, where it is consumed by an easy-to-use web application built with React, Redux, Bootstrap, and various visualization libraries, including D3.js. The sketches below give a rough, simplified picture of each processing step.
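
First, transcription. Here is a minimal Python sketch of sending one audio chunk to Google Cloud Speech-to-Text and pushing the result to the Realtime Database. The 16 kHz LINEAR16 encoding, the `transcripts` path, and the credential/database names are illustrative assumptions, not RALE's actual configuration:

```python
import firebase_admin
from firebase_admin import credentials, db
from google.cloud import speech

# Hypothetical credential file and database URL, for illustration only.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://rale-demo.firebaseio.com"})

client = speech.SpeechClient()

def transcribe_chunk(audio_bytes: bytes) -> None:
    """Transcribe one audio chunk and push the text to the Realtime Database."""
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,  # assumed sample rate of the extracted audio
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        # The best transcription alternative comes first.
        db.reference("transcripts").push({"text": result.alternatives[0].transcript})
```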
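
Next, topic modelling. When new transcript text lands in the database, LDA runs over recent text windows to surface the current discussion topics. A minimal sketch with Gensim and NLTK follows; the window-based input, stopword filtering, and topic count are illustrative choices, not RALE's exact parameters:

```python
from gensim import corpora, models
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# One-time setup: nltk.download("punkt"); nltk.download("stopwords")
STOP = set(stopwords.words("english"))

def extract_topics(windows: list[str], num_topics: int = 3) -> list[str]:
    """Run LDA over recent transcript windows and return the top topic words."""
    docs = [
        [w for w in word_tokenize(text.lower()) if w.isalpha() and w not in STOP]
        for text in windows
    ]
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)
    # show_topic returns (word, probability) pairs for each topic.
    return [
        word
        for topic_id in range(num_topics)
        for word, _ in lda.show_topic(topic_id, topn=3)
    ]
```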
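
Question extraction can be as simple as splitting the transcript into sentences and keeping the interrogative ones. This sketch uses NLTK's sentence tokenizer plus a leading-word heuristic, since speech-to-text output does not always include question marks; the heuristic word list is an illustrative choice, not RALE's exact rule:

```python
from nltk.tokenize import sent_tokenize

# Words that often open a spoken question (illustrative, not exhaustive).
QUESTION_OPENERS = {"who", "what", "when", "where", "why", "which", "how",
                    "can", "do", "does", "is", "are"}

def extract_questions(transcript: str) -> list[str]:
    """Return sentences that look like questions asked by the instructor."""
    questions = []
    for sent in sent_tokenize(transcript):
        words = sent.split()
        first = words[0].lower() if words else ""
        if sent.endswith("?") or first in QUESTION_OPENERS:
            questions.append(sent)
    return questions
```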
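
Finally, the voice-command feature maps a spoken trigger phrase to a Wolfram Alpha query. Here is a minimal sketch using the `wolframalpha` Python client; the "rale lookup" trigger phrase and the App ID placeholder are hypothetical:

```python
import wolframalpha

client = wolframalpha.Client("YOUR-APP-ID")  # Wolfram Alpha developer App ID

def fetch_resource(transcript: str, trigger: str = "rale lookup") -> str | None:
    """If the instructor says the trigger phrase, query Wolfram Alpha with
    whatever follows it and return the top plaintext result."""
    text = transcript.lower()
    if trigger not in text:
        return None
    query = text.split(trigger, 1)[1].strip()
    result = client.query(query)
    # Assumes the query returned at least one result pod.
    return next(result.results).text
```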

The user simply opens RALE in a web browser next to Zoom to see key topics from their lecture, a running list of questions, and extra information right at their fingertips!

Check out our system architecture diagram for a clearer picture of how the different components interact.

Challenges we ran into

  • Understanding, applying, and interpreting NLP packages.
  • Integrating several services together to produce a smooth and cohesive user experience.

Accomplishments that we're proud of

We stuck to our goal, worked together as a great team, and the project actually works!

What we learned

What's next for RALE
