Switching to remote learning this year has taken a toll on teachers and students. As Teaching Assistants (TAs) ourselves, we experienced first-hand the difficulties educators face when assessing their students. Most teachers resort to simple multiple-choice question (MCQ) tests that are easy to grade.

However, MCQs are a narrow mode of assessment and suboptimal for ensuring retention and complete understanding of the concepts being taught. In our experience, the best way to assess is through long-form, open-ended questions that encourage students to deeply analyse and apply what they know.

Yet grading such long-form content is a time-consuming hassle for educators. That's where Sparkquiz comes in.

What it does

We use state-of-the-art Machine Learning and Natural Language Processing (NLP) techniques to assess students more accurately and rigorously through open-ended questions in the form of a quiz. Sparkquiz also gives teachers the ability to set MCQ quizzes.
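To illustrate the core idea of automated grading, here is a minimal sketch that compares a student's answer against a reference answer. This toy scorer uses simple token-overlap (Jaccard) similarity; our actual system uses PyTorch-based NLP models, and the function names here are illustrative only.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase an answer and extract its word tokens as a set."""
    return set(re.findall(r"[a-z]+", text.lower()))

def grade_answer(student: str, reference: str) -> float:
    """Return a crude similarity score in [0, 1] between the two answers.

    A stand-in for the real model: Jaccard similarity of word sets.
    """
    s, r = tokenize(student), tokenize(reference)
    if not s or not r:
        return 0.0
    return len(s & r) / len(s | r)

score = grade_answer(
    "Photosynthesis converts light energy into chemical energy.",
    "Plants use photosynthesis to convert light energy into chemical energy.",
)
```

A real grader replaces the word-set comparison with learned sentence representations, but the interface (student answer in, score out) stays the same.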

Our application puts automated graders – an active area of research – into practice, streamlining the grading process for teachers.

How we built it

We started off by drawing a potential user flow on a piece of paper and identified the different modules/components of the final system. Development began with the frontend – we used the JAMstack to build the different pages, views, and user-facing features.

For the backend, we spun up a Python server, exposed through an ngrok tunnel, that housed the Machine Learning models written in PyTorch. We had to experiment with several plumbing designs to deliver students' answers from the browser to the server quickly.
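The plumbing boils down to a small HTTP endpoint that accepts an answer and returns a score. Below is a sketch using only the Python standard library, assuming a JSON POST body of the form `{"answer": ..., "reference": ...}`; the real server loads PyTorch models, while here a placeholder scorer stands in, and ngrok would simply tunnel the local port to a public URL for the frontend.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def score(answer: str, reference: str) -> float:
    """Placeholder for the PyTorch model's grading function."""
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / len(a | r) if a | r else 0.0

class GradeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON payload sent by the quiz frontend.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"score": score(payload["answer"], payload["reference"])})
        # Reply with the grade as JSON.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

# To serve locally: HTTPServer(("", 8000), GradeHandler).serve_forever()
# then expose it publicly with: ngrok http 8000
```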

Additionally, we had to set up a Firebase Realtime Database connection from the frontend to store user credentials, quiz metadata, test scores, and more.

We then brought the three separate pieces together in a final integration step.

Challenges we ran into

Initially set on tackling Health, we found ourselves unable to immediately relate to the problems in the field. A large chunk of potential development time was consumed by constant back-and-forth idea-bouncing that, ultimately, didn't yield much.

Midway through, our decision to switch to the Education track meant that we had to quickly implement and test the application modules before putting them together.

As with other teams, communication took a big hit owing to COVID, as we weren't in close proximity to help each other with bugs or obstacles along the way. We had to triage team members' problems over video chat, which was inefficient and time-consuming.

Accomplishments that we're proud of

Taking all the individual moving parts (from different repos), integrating them into one final product, and testing the whole system took a while, but was worth the effort. Converting the raw application from bare components into a well-designed user interface was another win for us.

What we learned

Coming up with an actionable problem statement took quite a while, even after going through the issues and prompts listed under the "Education" section of the Hacker Guide. There were times when we felt like quitting the hackathon, but staying up until we landed on an idea was rewarding. We learned to stick around despite the setbacks in hopes of finishing what we started – a lesson we will definitely apply at future hackathons.

What's next for Sparkquiz

  • Competitive mode for students (with a class-wise leaderboard)
  • Making answer evaluation real-time for on-the-fly corrections
  • Introducing new assessment modes for teachers
  • Experimenting with better NLP models for grading