About the Project: Tutilo
Tutilo was inspired by a problem we experienced firsthand as university students. During exam periods, we relied heavily on AI tools to study, but quickly realized they were optimized for summarization rather than learning. They could explain content, but they could not tell us whether we truly understood it, track our progress, or adapt to our weaknesses, especially in large classes where access to lecturers is limited.
From this, we learned that effective learning requires active engagement, feedback, and evaluation, not just answers. This insight shaped Tutilo into an AI study companion focused on mastery. Tutilo grounds explanations in a student’s actual course materials, evaluates understanding through adaptive questions, and visualizes progress across concepts.
We built Tutilo using modern AI models from state-of-the-art providers, combining retrieval over course content, structured prompting, and lightweight assessment logic. Concept mastery is modeled as a graph, where understanding improves as students correctly reason through related topics. Progress can be represented as: Mastery Score = Concepts Understood / Total Concepts.
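As a rough illustration of the graph-based mastery idea, here is a minimal sketch in Python. The class name, prerequisite structure, and gating rule are assumptions for illustration, not Tutilo's actual implementation:

```python
class ConceptGraph:
    """Hypothetical sketch: concepts with prerequisites, plus a mastery score."""

    def __init__(self):
        self.prereqs = {}       # concept -> set of prerequisite concepts
        self.understood = set() # concepts the student has demonstrated

    def add_concept(self, concept, prereqs=()):
        self.prereqs[concept] = set(prereqs)

    def mark_understood(self, concept):
        # A concept only counts once its prerequisites are understood,
        # mirroring the idea that mastery builds through related topics.
        if self.prereqs.get(concept, set()) <= self.understood:
            self.understood.add(concept)
            return True
        return False

    def mastery_score(self):
        # Mastery Score = Concepts Understood / Total Concepts
        if not self.prereqs:
            return 0.0
        return len(self.understood) / len(self.prereqs)


g = ConceptGraph()
g.add_concept("limits")
g.add_concept("derivatives", prereqs=["limits"])
g.add_concept("integrals", prereqs=["derivatives"])

g.mark_understood("derivatives")  # rejected: "limits" not yet understood
g.mark_understood("limits")
g.mark_understood("derivatives")  # accepted now that the prerequisite holds
print(round(g.mastery_score(), 2))  # 2 of 3 concepts understood
```

In a real system the `mark_understood` gate would be driven by the adaptive-question evaluation rather than a direct call.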
One major challenge was avoiding hallucinated explanations and shallow learning. We addressed this by strictly grounding responses in uploaded materials and prioritizing reasoning-based evaluation over free-form answers. Another challenge was designing feedback that feels helpful rather than punitive, which required multiple iterations with student testers.
Tutilo is still evolving, but it represents our belief that AI should not just give answers, but help students genuinely understand and grow.