Inspiration

LambdaChecker addresses the challenge of real-time, on-demand code evaluation at scale, which is needed in every coding class and in practical exams or lab assessments. It is a useful tool for teaching programming in normal times and a key enabler for online teaching during the COVID-19 outbreak.

Compared to existing web platforms (such as HackerRank, LeetCode, Codility), LambdaChecker goes beyond simple test-based grading, which is useful only for testing the functionality of a coding challenge. For example, teachers would benefit from code inspection that validates proper object-oriented design, and companies would benefit from in-depth profiling of students based on thousands of hours of coding.

What it does

This tool aims to offer a complete evaluation of each coding challenge, integrating deep semantic analysis of the code (to assess aspects of correctness such as algorithmic complexity, coding style, and originality), all done in real time and at full scale, so that it can be used in exams and labs where students often submit their solutions at the same moment (e.g., at the end of an exam).

How we built it

The tool is fully cloud-based, built on AWS, and relies on AWS Lambda for the actual code execution. We use a full stack of frameworks and technologies to achieve this, detailed in the System Architecture.
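As a rough illustration of this execution model (a hypothetical handler sketch, not LambdaChecker's actual code), a grading Lambda could receive a submission and its test cases in the event payload, run each test in a subprocess with a hard time limit, and return the pass/fail results:

```python
# Minimal sketch of a grading Lambda. The handler name, event fields, and
# per-test timeout are assumptions for illustration only.
import json
import subprocess
import tempfile

def lambda_handler(event, context):
    # The submission source and its test cases arrive in the event payload.
    source = event["source_code"]
    tests = event.get("tests", [])

    # Write the submission to the Lambda's writable temp storage.
    with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name

    results = []
    for test in tests:
        # Run each test case in a subprocess with a hard time limit,
        # so one slow submission cannot stall the whole evaluation.
        try:
            proc = subprocess.run(
                ["python3", path],
                input=test["stdin"],
                capture_output=True,
                text=True,
                timeout=5,
            )
            results.append(proc.stdout.strip() == test["expected"].strip())
        except subprocess.TimeoutExpired:
            results.append(False)

    return {"statusCode": 200, "body": json.dumps({"passed": results})}
```

Because each submission is handled by an independent Lambda invocation, a burst of simultaneous submissions at the end of an exam simply fans out across parallel executions.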

Challenges we ran into

The main challenge of the project is achieving in-depth analysis of the code, beyond merely testing its functionality. Learning to program correctly matters more than the output the code produces, but assessing code quality requires advanced techniques that involve understanding the structure of the program, how it is written, and its algorithmic complexity.

Accomplishments that we're proud of

So far, we have made great progress in both directions:

  1. building a real-time, fully scalable system for evaluating coding challenges
  2. developing a prototype for semantically parsing code in order to understand it in depth and perform the necessary assessments (a minimal sketch follows below)
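
To give a flavor of what this structural inspection can look like, here is a minimal Python sketch built on the standard ast module; the specific checks shown (presence of a class definition, maximum loop nesting as a rough complexity proxy) are illustrative assumptions, not our prototype's actual rules.

```python
# Illustrative structural checks over a submission's syntax tree.
import ast

def max_loop_depth(node, depth=0):
    """Return the deepest nesting of for/while loops under `node`."""
    deepest = depth
    for child in ast.iter_child_nodes(node):
        child_depth = depth + 1 if isinstance(child, (ast.For, ast.While)) else depth
        deepest = max(deepest, max_loop_depth(child, child_depth))
    return deepest

def inspect_submission(source: str) -> dict:
    tree = ast.parse(source)
    classes = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
    return {
        "defines_class": bool(classes),          # object-oriented design check
        "max_loop_depth": max_loop_depth(tree),  # hint at algorithmic complexity
    }

# Example: a submission defining a class with one singly nested loop.
print(inspect_submission(
    "class Stack:\n"
    "    def push(self, x):\n"
    "        for i in range(3):\n"
    "            pass\n"
))
```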

What we learned

The main takeaway is that current advances in Natural Language Processing and Formal Language Theory can tackle this innovative challenge of better assessing coding tasks for students and young people learning to code. We are now confident that LambdaChecker will enable trainers and professors to grade coding challenges in a fully automated way, without the need for additional manual evaluation of certain coding aspects.

What's next for LambdaChecker

The next big step is transforming this intelligent code evaluator into a Virtual Room for Coding Labs, offering our students not only the possibility of working on a given task but also of improving their skills in the long run, by carefully picking the right challenges to solve in the areas where they need improvement.

With features like skill-based profiling and tracking of students' progress, we will offer a whole new remote coding experience, one that adapts to their needs and carefully analyses their progress as they become better programmers.
