While working on a previous project, two friends and I realized that one of the bottlenecks in math education is the inability to grade short-answer problems in a reasonable time frame. This is simply a consequence of being human: things take time. Instructors may have anywhere from 30 to 150 students, and at anything more than one question per student, a short turnaround becomes an unreasonable request.
What it does
∂Credit provides a way to compute partial credit automatically based on reduction steps. Even if the student makes a minor mistake midway through, each reduction is checked iteratively against the previous step(s), so students are not penalized for carrying the error forward.
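A minimal sketch of this carry-through grading idea, assuming sympy is used for equivalence checking (the function name `grade_steps` is hypothetical, not the project's actual code):

```python
import sympy as sp

def grade_steps(steps):
    """Award credit per transition: a step counts as correct if it is
    algebraically equivalent to the student's own PREVIOUS step, so a
    single early mistake does not cascade into lost credit later."""
    exprs = [sp.sympify(s) for s in steps]
    credit = 0
    for prev, cur in zip(exprs, exprs[1:]):
        # Equivalent iff the difference simplifies to zero.
        if sp.simplify(prev - cur) == 0:
            credit += 1
    return credit / (len(exprs) - 1)
```

Here a student who writes `2*x + 2 - 2` → `2*x + 1` loses credit for that one bad transition, but every later step built on `2*x + 1` is judged on its own terms.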
How I built it
First, I created a quick-and-dirty setup using sympy to generate algebra questions systematically from given inputs; these formed the basis of the question bank. After that, it was a matter of applying the reduction algorithm to each relevant question.
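A sketch of what such a generator could look like, assuming simple linear "solve for x" questions (`make_linear_question` is an illustrative name, not the project's actual code):

```python
import random

import sympy as sp

def make_linear_question(rng=random):
    """Generate a 'solve for x' question of the form a*x + b = c,
    constructed backwards from an integer answer so the solution
    is always clean."""
    x = sp.symbols('x')
    a = rng.randint(2, 9)
    b = rng.randint(-10, 10)
    sol = rng.randint(-10, 10)
    lhs = a * x + b
    rhs = lhs.subs(x, sol)  # pick c so that x = sol is the answer
    return sp.Eq(lhs, rhs), sol
```

Building the question from a chosen answer (rather than solving a random equation) guarantees every generated problem has a single tidy integer solution.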
Challenges I ran into
For whatever reason, I could not get changes to the database to persist; even after talking with the Google Cloud sponsor for over two hours, he could not figure out what I was doing wrong. In the end, I decided to forgo the database in favor of a horrendously hacky solution.
Accomplishments that I'm proud of
Implementing the mako template rendering system without its C extensions (writing those would have taken too much time), and how the design of the proof of concept turned out, even if the register/login sections are useless without a working database.
https://bigredhacks2019fall.appspot.com/ is the main site. The register and login links are nonfunctional because I could not get a database to persist; logging in with any email/password combination will take you to the set of questions, but for a direct link, just start from https://bigredhacks2019fall.appspot.com/questions
What I learned
Sometimes, when faced with no other options, bad solutions are viable ones. Specifically: pickling arbitrary Python objects into binary blobs, base64-encoding the result and decoding it to a Unicode string, then reversing the process, for every question. (The intended approach was a simple database lookup by known question ID, but the moment the database refused to persist, I came up with this workaround.)
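A minimal sketch of that hack using only the standard library (the function names are illustrative):

```python
import base64
import pickle

def to_blob(obj):
    """Pickle an object, then base64-encode and decode to a plain str
    that can ride along in a URL or form field instead of a DB row."""
    return base64.b64encode(pickle.dumps(obj)).decode("ascii")

def from_blob(blob):
    """Reverse the process: str -> base64 bytes -> unpickled object."""
    return pickle.loads(base64.b64decode(blob.encode("ascii")))
```

The round trip is lossless for any picklable object, which is why it could stand in for a database lookup, though unpickling untrusted input is famously unsafe, so this really is a hackathon-only move.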
What's next for ∂Credit
A system for better parsing; detecting the errors a student made and showing them where they went wrong; LaTeX integration; and an app/device tied into an ongoing earlier project of mine and some friends, which would allow this system to be used on handwritten work as well.