Inspiration
We are a team of three: two rising juniors and a graduate student who also works as a teaching assistant (TA). We've all experienced the frustrations of the academic grading system from both sides of the table. The TA on our team often works long hours correcting and grading assignments, while as students we have endured long waits for grades and rarely received personalized feedback.
The situation worsened when teaching assistants across the University of California went on strike, bringing academic activities to a near halt. We realized this was a systemic issue that needed an innovative solution benefiting both students and TAs. That inspired us to build AutoMark, a tool designed to streamline the grading process and enhance the learning experience.
What it does
AutoMark revolutionizes grading in academia. It uses OCR and AI language models to automatically grade students' assignments and provide personalized feedback. It significantly reduces the workload for TAs while cutting the time students wait for their grades. AutoMark also analyzes each student's performance over time and generates detailed study session guides to help them understand and improve on their weak areas.
Challenges we ran into
Our main challenge was handling calculus problems: parsing a student's solution and grading it step by step so that partial credit could be awarded. The model sometimes struggled to accurately interpret and grade individual steps, requiring us to iterate on and improve our training process repeatedly.
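The core of step-by-step grading is matching a student's steps against a weighted rubric. The sketch below is a simplified, hypothetical illustration (the interfaces and function names are ours, not AutoMark's actual code): it uses naive whitespace-insensitive string matching, whereas a real grader would need an LLM or a computer algebra system to judge semantic equivalence of steps.

```typescript
// Hypothetical sketch of per-step partial-credit grading.
// A rubric step "matches" here if the student's expression, after
// whitespace normalization, equals the expected form exactly. A real
// system would use an LLM or a CAS to check semantic equivalence.

interface RubricStep {
  expected: string; // canonical form of the step, e.g. "f'(x) = 2x"
  weight: number;   // points awarded if the step is present
}

function normalize(expr: string): string {
  return expr.replace(/\s+/g, "").toLowerCase();
}

function gradeSteps(rubric: RubricStep[], studentSteps: string[]): number {
  const seen = new Set(studentSteps.map(normalize));
  return rubric.reduce(
    (score, step) =>
      score + (seen.has(normalize(step.expected)) ? step.weight : 0),
    0,
  );
}
```

This shape makes partial credit natural: a student who reaches the derivative but botches the evaluation still earns the weight attached to the steps they got right.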
Accomplishments that we're proud of
Our greatest accomplishment is building an MVP that can significantly alleviate TAs' workload and enhance students' learning experience. We've built a tool that can process mathematical problems, grade them, provide personalized feedback, and even generate study session guides. AutoMark addresses a real-world problem, and we're incredibly proud of that.
What we learned
Our journey with AutoMark taught us the value of breaking a complex problem down into manageable tasks. We learned that the most effective solutions are often those that simplify complicated systems. On the technical front, we gained a deeper understanding of how OCR tools and AI language models can be applied to practical problems.
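The OCR-to-LLM handoff can be sketched as prompt construction: the problem statement and the OCR transcription of the student's work are assembled into a grading instruction for the model. This is a hedged illustration under our own naming assumptions (`GradingRequest`, `buildGradingPrompt` are hypothetical, not AutoMark's actual API), and the prompt wording is only one plausible choice.

```typescript
// Hypothetical sketch of handing OCR output to a language model for
// grading. Field and function names are illustrative only.

interface GradingRequest {
  problem: string;   // the assignment question
  ocrText: string;   // the student's work as transcribed by OCR
  maxPoints: number; // total points available
}

function buildGradingPrompt(req: GradingRequest): string {
  return [
    "You are a teaching assistant grading a calculus assignment.",
    `Problem: ${req.problem}`,
    "Student's work (OCR transcription; may contain recognition errors):",
    req.ocrText,
    `Award between 0 and ${req.maxPoints} points, grading each step for`,
    "partial credit, and end with personalized feedback on weak areas.",
  ].join("\n");
}
```

The resulting string would then be sent to a chat model (e.g. via the OpenAI API, orchestrated with LangChain); warning the model that the transcription may contain OCR errors helps it avoid penalizing recognition noise as student mistakes.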
What's next for AutoMark
Feedback Analysis: Incorporating more advanced AI algorithms that analyze the feedback from students, understanding where they struggle and which topics are most challenging. This information could be used to update and refine the course materials and teaching methods.
Teacher Feedback: Using feedback and performance data to provide insights into teachers' effectiveness. This could help identify areas where teachers excel and where they might need further training or support.
Learning Objective Analytics: Using data to assess whether specific learning objectives are being met and where adjustments need to be made. This could also help in curriculum development, allowing for more targeted and efficient learning pathways.
Expansion of Data Points and Use Cases: AutoMark's use cases are vast and could be extended to a wide variety of learning environments, including primary and secondary education, higher education, professional training, and lifelong learning. By continuing to refine and expand the system, AutoMark could become an invaluable tool for educators and learners alike.
Partnerships and Collaborations: Forming partnerships with educational institutions, e-learning platforms, and edtech companies. These collaborations could help improve and validate AutoMark's technologies while also broadening its reach.
Built With
- langchain
- next.js
- openai
- react
- supabase
- typescript