Inspiration

LectureMate was inspired by the growing need for accessible, on-demand academic support. Students frequently struggle to get timely help with complex topics, particularly during peak periods like exams or project deadlines when professors may be less available. By providing instant, expert answers based on course content and lectures, LectureMate bridges this gap, empowering students to learn confidently and stay on track even during the busiest parts of the semester.

What it does

LectureMate offers instant, reliable answers to student questions by drawing on a professor’s lectures, research, and course materials. It serves as a 24/7 academic companion, providing students with quick, accurate responses to help them grasp complex concepts, review class content, and clarify doubts anytime they need support.

How we built it

We collected a dataset comprising video lecture transcripts, readings, and course materials from the class C685 Advanced NLP (Spring 2024). During preprocessing we tokenized the text and padded all sequences to a uniform length, making them suitable for batch processing during training. After preparing the data, we fine-tuned the LLaMA 3.2-3B model on this processed dataset, using the Parameter-Efficient Fine-Tuning (PEFT) technique Low-Rank Adaptation (LoRA) to efficiently adapt the model to our specialized dataset, optimizing it for academic Q&A while minimizing computational demands. This setup lets LectureMate deliver precise, contextually relevant responses across a broad range of topics, providing reliable support whenever students need assistance.
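The two ideas above — padding sequences for batching, and LoRA's low-rank update — can be sketched in a few lines. This is a minimal illustration, not our actual pipeline: the real project used a LLaMA tokenizer and the Hugging Face PEFT library, and the function names and dimensions here are made up for the example.

```python
def pad_batch(sequences, pad_id=0):
    """Pad token-ID sequences to the length of the longest one,
    so the batch forms a rectangular tensor for training."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_id] * (max_len - len(s)) for s in sequences]

def lora_trainable_params(d_in, d_out, rank):
    """Trainable parameters in a rank-r LoRA update (B @ A) for a
    d_out x d_in weight matrix, versus fine-tuning the full matrix.
    A is r x d_in and B is d_out x r, so LoRA trains r*(d_in + d_out)."""
    full = d_out * d_in
    lora = rank * (d_in + d_out)
    return lora, full

# Ragged token-ID sequences become a uniform-length batch.
batch = pad_batch([[5, 9, 2], [7], [3, 1]])
# -> [[5, 9, 2], [7, 0, 0], [3, 1, 0]]

# For a hypothetical 4096x4096 attention weight at rank 8:
lora, full = lora_trainable_params(4096, 4096, rank=8)
# 65,536 LoRA parameters vs 16,777,216 full parameters (~0.4%)
```

The parameter count is why LoRA made fine-tuning feasible for us: only the small adapter matrices are updated, while the base model weights stay frozen.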

Challenges we ran into

The available materials for the class were limited, which posed a challenge in fully training the model to achieve optimal accuracy and depth. Consequently, we had to work within these constraints to fine-tune the model effectively. We also faced difficulty running inference in a local environment.

Accomplishments that we're proud of

We successfully fine-tuned a large language model on limited, specialized academic data to create an assistant that delivers accurate, context-aware support for students. Despite data constraints, we optimized the model’s performance so that it reliably addresses academic questions and eases the workload for both students and instructors. By developing a seamless, intuitive user interface, we made academic support readily accessible, allowing students to obtain real-time responses anytime they need assistance.

What we learned

Through this project, we gained experience fine-tuning large language models like LLaMA, working effectively with limited data and resources, and optimizing performance with techniques such as Low-Rank Adaptation (LoRA), which adapts the model with minimal computational demands, and hyperparameter tuning to achieve the best accuracy possible within our constraints. Additionally, we learned how to balance the trade-offs between model performance and resource limits, ultimately building a system that is both accurate and efficient.

What's next for LectureMate

We aim to extend LectureMate to all professors and classes, introduce multilingual support, and add a feature that gives professors insight into the most frequently asked questions. Additionally, we plan to incorporate feedback features to continually improve the platform.
