Inspiration

Our goal was to improve teacher-student interactions and deepen understanding of course material through LLM-augmented learning: interactive chats and quizzes grounded in retrieval-augmented generation (RAG) over lecture video content.

What it does

The project lets students upload and play lecture videos while giving them access to ML-generated quizzes and an AI chatbot that is grounded in each video at the moment it is uploaded.

How we built it

Our stack pairs a React/Next.js front end (chosen for familiarity and ease of use) with a Python backend built on popular LLM-development libraries such as LangChain and the Hugging Face API. We developed a machine-learning pipeline that converts each uploaded MP4, extracts features, and produces a structured GPT response that our web app renders as an interactive form.
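The retrieval step of the pipeline can be sketched as follows. This is a minimal, illustrative version, not our production code: the function names (`chunk_transcript`, `retrieve`, `build_prompt`) are hypothetical, a toy word-overlap scorer stands in for the embedding-based similarity search LangChain provides, and a hard-coded string stands in for the transcript extracted from the uploaded MP4.

```python
# Toy sketch of the RAG retrieval step. Assumptions (not from our
# actual codebase): the transcript is already extracted from the MP4,
# and word overlap stands in for embedding similarity.

def chunk_transcript(transcript: str, chunk_size: int = 40) -> list[str]:
    """Split a transcript into fixed-size word windows."""
    words = transcript.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q_words & set(c.lower().split())),
                  reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    joined = "\n---\n".join(context)
    return f"Answer using only this lecture context:\n{joined}\n\nQ: {question}"

if __name__ == "__main__":
    transcript = ("Backpropagation computes gradients of the loss with "
                  "respect to each weight using the chain rule. "
                  "Gradient descent then updates the weights in the "
                  "direction that reduces the loss.")
    chunks = chunk_transcript(transcript, chunk_size=15)
    top = retrieve("how does gradient descent update the weights", chunks)
    print(build_prompt("How does gradient descent update the weights?", top))
```

In the real pipeline the retrieved chunks would come from a vector store built at upload time, so chat and quiz generation both query the same index.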

Challenges we ran into

We ran into hosting issues: certain AWS permissions for GPU-intensive EC2 instances required internal review before approval, forcing us to run the model features on localhost.

Accomplishments that we're proud of

  • We're proud that our landing page lived up to our initial design vision
  • We're pleased with our main page, which displays every component we set out to build
  • Finally, the fully developed machine-learning pipeline demonstrates the application's potential for further development

What we learned

  • We learned that treating DevOps requirements such as server hosting as a key priority is just as important as having well-running local code.

What's next for LearnLens

We're planning to add multi-modal analysis capabilities to the quiz form, as well as library features that let students retain and revisit their videos.
