Inspiration

Seeing students in Indian colleges spend large amounts of time studying topics that, taken together with the rest of the syllabus, had little effect on their grades or their later understanding made me realize that what was needed was not yet another productivity tool to help students study, but a comprehensive analysis of the relationship between what is taught and what is tested.

What it does

The Curriculum Blind Spot Detector examines college courses by comparing the emphasis each topic receives in the syllabus against trends in past examinations and the dependency relationships between topics. The Detector surfaces:

  • Topics that receive heavy emphasis in the syllabus but are rarely evaluated
  • Topics that appear frequently on examinations but carry relatively little weight in the syllabus
  • Prerequisite knowledge areas that are under-emphasised and may therefore contribute to later academic difficulties for students

The Detector produces an audit report detailing its findings so that teachers, course developers, and students can see how educational effort is allocated and where alignment breaks down.
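As a minimal sketch of the three detection categories above, the following function labels one topic from its syllabus and exam emphasis. The weight scale, thresholds, and function name are illustrative assumptions, not the Detector's actual values:

```python
# Hypothetical classification rule for one topic. Weights are assumed to be
# normalised frequencies in [0, 1]; `high`/`low` thresholds are made up.

def classify_topic(syllabus_weight, exam_weight, is_prerequisite=False,
                   high=0.15, low=0.05):
    """Label one topic by comparing its syllabus vs. exam emphasis."""
    if syllabus_weight >= high and exam_weight <= low:
        return "over-taught"    # heavy in the syllabus, rarely examined
    if exam_weight >= high and syllabus_weight <= low:
        return "under-taught"   # heavily examined, thin in the syllabus
    if is_prerequisite and syllabus_weight <= low:
        return "blind spot"     # under-emphasised prerequisite topic
    return "aligned"

print(classify_topic(0.20, 0.02))                        # over-taught
print(classify_topic(0.03, 0.18))                        # under-taught
print(classify_topic(0.02, 0.04, is_prerequisite=True))  # blind spot
```

A real pipeline would derive the two weights from text analysis; this only shows how the three report categories can fall out of a simple comparison.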

How we built it

The project website gives academics an easy way to find gaps between what is taught in the classroom and what is tested. It takes three inputs: the syllabus, final exam papers from previous years, and a topic dependency map. NLP and frequency analysis over these inputs assign each topic a "weight" in each source; mismatches between the three weights are then presented to end users as an interpretable audit of over-taught topics, under-taught topics, and high-risk "blind spots".
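The frequency-analysis step can be sketched as follows. The tokenisation, the sample texts, and the exam-minus-syllabus gap signal are simplified stand-ins for the actual NLP pipeline, shown only to illustrate how per-source topic weights yield mismatch signals:

```python
from collections import Counter
import re

def topic_weights(text, topics):
    """Normalised frequency of each topic keyword in one document.
    Single-word keyword matching is a simplification of the NLP step."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in topics)
    total = sum(counts.values()) or 1
    return {t: counts[t] / total for t in topics}

# Toy inputs standing in for a real syllabus and past exam papers.
syllabus = "Recursion recursion trees graphs recursion sorting"
exam = "Graphs graphs sorting graphs hashing"
topics = {"recursion", "graphs", "sorting", "hashing"}

sw = topic_weights(syllabus, topics)
ew = topic_weights(exam, topics)

# Mismatch signal per topic: positive means examined more than taught,
# negative means taught more than examined.
gaps = {t: round(ew[t] - sw[t], 2) for t in topics}
```

In this toy run, "recursion" dominates the syllabus but never appears on the exam (a negative gap), while "graphs" shows the opposite pattern, which is exactly the kind of signal the audit report surfaces.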

Because the project was built within hackathon time constraints, its scope is intentionally narrow, limited to a single subject, for clarity and transparency.

Challenges we ran into

Standardizing and structuring curriculum data was a significant challenge. Syllabi and exams vary enormously across institutions in both format and terminology, so normalization and validation were long and laborious. There is also a risk of over-claiming predictive accuracy: the product is intended as a diagnostic tool, not a measurement of educational quality.
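The kind of normalization involved can be sketched as below. The synonym map and cleaning rules are made-up examples of the institution-specific terminology we had to reconcile, not the project's actual mapping:

```python
import re

# Hypothetical synonym map: different institutions name the same topic
# differently, so variants are folded into one canonical label.
SYNONYMS = {
    "oops": "object-oriented programming",
    "oop": "object-oriented programming",
    "dbms": "database management systems",
}

def normalise_topic(raw):
    """Lowercase, strip unit numbering and punctuation, then map synonyms."""
    cleaned = re.sub(r"^unit\s*[-:\d]+\s*", "", raw.strip().lower())
    cleaned = re.sub(r"[^\w\s-]", "", cleaned).strip()
    return SYNONYMS.get(cleaned, cleaned)

print(normalise_topic("Unit 3: OOPs"))  # object-oriented programming
print(normalise_topic("DBMS"))          # database management systems
```

Even this simple rule set needed per-institution tuning, which is where most of the manual validation effort went.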

Accomplishments that we're proud of

  • Designed and built an end-to-end audit pipeline that takes real syllabus and exam data and analyzes how the pieces of a curriculum relate to one another.
  • Shifted the audit's focus from individual student usage data to system-level inefficiencies that many of today's student-oriented AI toolkits overlook.
  • Designed the pipeline to give users clear, data-driven explanations for each identified blind spot.
  • Built a functional prototype in a short amount of time without requiring or using any proprietary datasets or “black-box” models.

What we learned

From our experience working with and observing Indian college students, we noticed a pattern: students invested large amounts of time in topics they believed would be critical to their grades and eventual careers, only to discover after examinations that these investments had little bearing on their actual grade point average (GPA) or career development. This gap between what the syllabus emphasized and what was actually evaluated convinced us that the analysis should be done at the system level rather than through another student-focused productivity application.

This project validated for us that effective AI applications do not require complicated models or massive datasets: articulating the problem clearly, making assumptions explicit, and explaining the rationale behind a solution are far more valuable than reaching for a "black box" model. We also recognized the value of shifting focus from individual student behaviour to system-level problems that degrade the learning experience of tens of thousands of students at once.

What's next for Curriculum Blind Spot Detector

Future work includes extending the analysis to other disciplines and universities, adding benchmarks from industry and professional examinations, and automating topic normalization across syllabus formats. With collaboration from other institutions, the tool could evolve into a continuous evaluation of curriculum health that supports evidence-based academic planning.
