Inspiration
“Why do I have to go to school?” — a common complaint among elementary and college students alike. Most of us are told that school teaches us marketable skills; but when was the last time you hand-computed an integral, or were forced to memorize a direct quote from a book? This computational, methodical brand of education has children simply regurgitating information while limiting their ability to retain concepts in the long term. Picture this: you have a test in two days and your professor has just released the study guide. It reads like a dictionary, with topic after topic to memorize. You do fine on the test, but six weeks later, at the final, you have to sit down and memorize the same topics all over again because you have already forgotten them.

This educational structure isn’t conducive to learning the why and how behind academic topics. Instead, it prioritizes rote memorization at the expense of a holistic understanding of a given problem. Students learn to plug and chug algorithms without learning why they are performing the operation in the first place, which erodes both overall learning and natural curiosity about the subject matter. Educational disparities between regional school systems also prevent students from receiving contextualized lesson plans that promote deep understanding. With Feynman, we wanted to create a study tool that helps students review material conceptually, find the gaps in their knowledge, and ensure that they understand the concepts instead of just memorizing quantitative answers.
What it does
Feynman quizzes a student using prompts. The student responds qualitatively to each prompt through a text-box submission. Once they submit, Feynman determines whether or not the response sufficiently addresses the prompt, assigning a rank from 1 to 10, where 9–10 reflects a sufficient response. If the response is sufficient, the user can move on. If it is not, the student has three options: resubmit another response, take a hint, or display the correct answer. If the student takes a hint, Feynman provides a list of topics the student may want to review in order to answer the question sufficiently. If the student displays the correct answer, Feynman provides a detailed answer to the question, along with a specific explanation of why the student’s response was not sufficient.
How we built it
We integrated the OpenAI ChatGPT API, combined with prompt engineering, to evaluate the correctness of a student’s response to a given prompt. We applied further prompt engineering to the API calls behind the “Hints” and “View Correct Answer” tools.
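A minimal sketch of how such a grading call might be structured. The function names and prompt wording here are our illustrative assumptions rather than Feynman's actual implementation; only the 1–10 scale with 9–10 counting as sufficient comes from the project description, and `grade_response` assumes an `OPENAI_API_KEY` is configured for the `openai` Python SDK.

```python
import re

# Hypothetical grading instruction; the real Feynman prompt is not published.
GRADER_SYSTEM_PROMPT = (
    "You are a strict grader. Rate the student's conceptual answer to the "
    "question on a scale from 1 to 10, where 9-10 means the answer fully "
    "and correctly addresses the question. Reply with the number first."
)

def build_grading_messages(question: str, answer: str) -> list[dict]:
    """Assemble the chat messages sent to the model for grading."""
    return [
        {"role": "system", "content": GRADER_SYSTEM_PROMPT},
        {"role": "user", "content": f"Question: {question}\nStudent answer: {answer}"},
    ]

def parse_score(model_reply: str) -> int:
    """Pull the first integer out of the model's reply; 0 if none is found."""
    match = re.search(r"\d+", model_reply)
    return int(match.group()) if match else 0

def is_sufficient(score: int) -> bool:
    """Per the rubric above, a rank of 9-10 counts as sufficient."""
    return score >= 9

def grade_response(question: str, answer: str) -> int:
    """Send the grading prompt to the OpenAI chat API and return the score."""
    from openai import OpenAI  # imported lazily so the helpers above run offline
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_grading_messages(question, answer),
    )
    return parse_score(reply.choices[0].message.content)
```

Parsing a numeric score out of free-form model text is the fragile part; constraining the reply format in the system prompt, as above, is one common way to keep it reliable.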
Challenges we ran into
We struggled with the prompt engineering needed to keep the model’s evaluations accurate. We wanted the first response to grade only the numerical accuracy of the input. We also had to structure our queries so that the responses identified the key subjects the student should review before explaining why the answer was wrong.
Accomplishments that we're proud of
- Flexibility of the program — it is very easy to adjust the prompt of the question
- Integrating the API through natural English — a very cool application of “code 3.0” (prompt engineering)
What's next for Feynman
We want to expand into multiple subjects and integrate the feedback with a database of lesson plans, so that study topics are tailored to the specific class the student is in. The program could also function as an automated grader for professors, so exams can rely on conceptual understanding rather than formulaic answers. With an automated grader, written responses become as efficient to assess as numeric values.