💡 Inspiration

We realized that passive learning is broken. Students today drown in hundreds of pages of PDFs, textbooks, and lecture notes, yet retain only a small fraction of what they read. They don't need another "summarizer" tool: summaries create the feeling of having learned without doing the work.

We were inspired by the Socratic Method and the scientific concept of Active Recall. We wanted to build a tool that doesn't just give you answers, but forces you to think, acting as a personal tutor that knows your reading material inside and out.

📚 What it does

Lecture Loop transforms static documents into an interactive, intelligent study partner using Google AI Studio.

Instead of just asking "What does this text say?", users upload entire textbooks, research papers, or lecture slides and enter "Tutor Mode."

  • The Socratic Tutor: The AI refuses to simply summarize. Instead, it guides the user with hints, follow-up questions, and quizzes to ensure deep understanding.
  • Knowledge Gap Detection: It identifies exactly which concepts the user is struggling with and points them back to the specific section of the text.
  • Contextual Mastery: Because it holds the entire document in memory, it connects concepts from Chapter 1 with Chapter 10, creating a holistic learning experience.

⚙️ How we built it

We built this entirely within Google AI Studio to leverage the native multimodal capabilities of Gemini 1.5 Pro.

  • The Engine: We chose Gemini 1.5 Pro specifically for its 2-million-token context window. LLMs with smaller context windows (8k or 32k tokens) effectively "forget" the beginning of a textbook by the time they reach the end. Gemini holds the entire document in active memory, enabling reasoning across the full text.
  • System Instructions: The core "magic" is a carefully tuned System Instruction that assigns a specific persona: “You are a strict but helpful Professor. You prioritize deep understanding over quick answers.”
  • Prompt Engineering: We utilized Few-Shot Prompting to teach the model how to differentiate between a "Summary Request" (which we block) and a "Study Session" (which we encourage).
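The scaffolding above can be sketched in Python. This is an illustrative reconstruction, not our exact production prompts: the persona text is the one quoted above, while the few-shot turns and the `build_chat_history` helper are hypothetical stand-ins for how the pieces fit together.

```python
# Illustrative prompt scaffolding for Tutor Mode (paraphrased, not the
# exact production prompts).

SYSTEM_INSTRUCTION = (
    "You are a strict but helpful Professor. You prioritize deep "
    "understanding over quick answers. Never summarize the document "
    "outright; instead, respond with hints, follow-up questions, and "
    "short quizzes grounded in the uploaded text."
)

# Few-shot turns that teach the model to block "Summary Requests"
# but encourage "Study Sessions".
FEW_SHOT_HISTORY = [
    {"role": "user", "parts": ["Summarize Chapter 3 for me."]},
    {"role": "model", "parts": [
        "Let's work through it instead. What does Chapter 3 say is the "
        "main trade-off of the approach it introduces?"
    ]},
    {"role": "user", "parts": ["Quiz me on Chapter 3."]},
    {"role": "model", "parts": [
        "Great. Question 1: In your own words, why does the author "
        "reject the naive solution described at the start of the chapter?"
    ]},
]

def build_chat_history(document_text: str) -> list[dict]:
    """Prepend the full uploaded document so every turn stays grounded
    in the text, then append the few-shot teaching examples."""
    return [{"role": "user", "parts": [document_text]}] + FEW_SHOT_HISTORY
```

These dicts follow the role/parts content shape the Gemini chat API expects, with the system instruction supplied separately when the model is configured.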

🚧 Challenges we faced

Our biggest challenge was overcoming the "Helpful Assistant" bias. Modern LLMs are trained to be helpful and give answers immediately. We had to fight this behavior so the AI would withhold the direct answer and instead respond with a guiding question. Tuning the System Instructions to strike the right balance between being "annoying" and being "educational" took many iterations of prompt testing.

🧠 What we learned

We learned that Long Context > RAG for education. Retrieval Augmented Generation (RAG) often chops textbooks into small chunks, losing the "narrative arc" of a course. By feeding the whole document into Gemini's context window, the AI understood the progression of the subject matter, allowing for much higher quality pedagogy.
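A back-of-envelope calculation shows why the whole-document approach is even feasible. The figures below are rough assumptions (about 4 characters per token and about 3,000 characters per dense page; real tokenizers and page layouts vary), not measured values:

```python
# Does a full textbook fit in a 2M-token context window?
# Assumptions (rough heuristics, not measured values):
CHARS_PER_TOKEN = 4            # common approximation for English text
CHARS_PER_PAGE = 3000          # dense academic page, rough estimate
CONTEXT_WINDOW = 2_000_000     # Gemini 1.5 Pro
TYPICAL_SMALL_WINDOW = 32_000  # a typical "standard" LLM limit

def pages_that_fit(window_tokens: int) -> int:
    """Approximate number of textbook pages a context window can hold."""
    return (window_tokens * CHARS_PER_TOKEN) // CHARS_PER_PAGE

print(pages_that_fit(TYPICAL_SMALL_WINDOW))  # ~42 pages: one chapter at best
print(pages_that_fit(CONTEXT_WINDOW))        # ~2666 pages: several textbooks
```

Under these assumptions, a 32k-token model sees one chapter at a time and must rely on retrieval, while the 2M-token window holds the entire book, chapter progression included.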

Built With

  • Google AI Studio
  • Gemini 1.5 Pro
