Inspiration
LeetCode has become the default way to prepare for software engineering interviews, but it misses one of the most important parts of real engineering work: reading existing code and understanding how it actually behaves. We have also seen many companies begin shifting toward code review and practical debugging interviews for exactly that reason. We built Lector to train that missing skill directly.
What it does
Lector is an interactive platform for learning how to read, review, and fix code.
It has two challenge tracks:
- In the security track, users inspect vulnerable code, exploit the flaw in a live sandboxed application, capture the flag, and then patch the vulnerability so the same exploit no longer works.
- In the code review track, users analyze buggy or risky code, identify what is wrong, and improve it so the behavior is correct and the implementation is safer and more robust.

In both tracks, the platform provides guided hints, grading, and feedback based on what the user has tried so far. Most importantly, Lector does not grade on surface-level pattern matching: it verifies solutions by actually running code and checking that the intended exploit or bug has truly been eliminated without breaking the rest of the application.
That execution-based feedback is the core of the project. We wanted solving a challenge to feel closer to real engineering work than solving a toy problem.
How we built it
We built the frontend with React, TypeScript, Tailwind CSS, and Vite, and the backend with Python, FastAPI, and Uvicorn. We used MongoDB for persistence, Monaco Editor for the in-browser coding experience, and embedded browser views so users could interact directly with vulnerable apps during security challenges.
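As a rough illustration, a patch submission flows through a FastAPI endpoint along these lines. The endpoint path, model fields, and stub grader here are illustrative assumptions, not our exact schema:

```python
# Minimal sketch of a submission endpoint (names are illustrative).
from dataclasses import dataclass

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PatchSubmission(BaseModel):
    challenge_id: str  # which challenge is being attempted
    patched_code: str  # full contents of the user's fixed file

@dataclass
class GradeResult:
    passed: bool
    feedback: str

async def grade_submission(challenge_id: str, patched_code: str) -> GradeResult:
    # Placeholder: the real grader rebuilds the challenge container,
    # applies the patch, and re-runs the exploit or test suite.
    return GradeResult(passed=False, feedback="grading not implemented in this sketch")

@app.post("/api/submissions")
async def submit_patch(submission: PatchSubmission) -> dict:
    result = await grade_submission(submission.challenge_id, submission.patched_code)
    return {"passed": result.passed, "feedback": result.feedback}
```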
On the backend, we built grading and validation flows that check whether a user’s submission actually works in practice:
- For security challenges, we launch sandboxed containers for vulnerable apps, let the user attack them, then apply the user’s patch, restart the app, and verify that the original exploit no longer succeeds.
- For code review challenges, we validate fixes by rerunning the relevant checks and ensuring the user's changes resolve the issue without introducing regressions. This lets Lector grade behavior and consequences, not just syntax or keywords; a minimal sketch of both graders follows this list.
- We also designed this verification pipeline so it can support AI-assisted coding workflows as well as human learners. The same grader that checks a user’s patch can be used to verify whether an AI-generated fix actually solves the problem.
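To make that concrete, here is a minimal sketch of the two graders, assuming the challenge app is already running and the exploit and health checks are plain scripts. The helper names are hypothetical, and the container orchestration that actually applies the patch and restarts the app is sketched in the next section:

```python
# Illustrative sketch of the two grading checks, not our exact pipeline.
import subprocess

def run(cmd: list[str]) -> bool:
    """Run a command and report whether it exited successfully."""
    return subprocess.run(cmd, capture_output=True).returncode == 0

def grade_security_patch(exploit_script: str, health_script: str) -> bool:
    # Pass only if the original exploit now FAILS against the patched
    # app, while a basic health check still succeeds (nothing broke).
    exploit_still_works = run(["python", exploit_script])
    app_still_healthy = run(["python", health_script])
    return (not exploit_still_works) and app_still_healthy

def grade_review_fix(test_dir: str) -> bool:
    # Pass only if the full suite, regression tests included, passes
    # against the user's revised code.
    return run(["python", "-m", "pytest", test_dir, "-q"])
```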
Challenges we ran into
A major challenge was making the grading meaningful. We did not want to accept solutions based only on string matching or shallow heuristics, so we had to build backend validation that checks whether a user's changes truly fix the problem. That meant spinning up and hosting a container for each security challenge when it is started, restarting the app with the user's supplied fix, and running automated tests against the patch to make sure the exploit is gone and the app still functions.
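A condensed sketch of that per-challenge lifecycle using the Docker SDK for Python (docker-py) is below; the image name, file paths, and exploit command are illustrative assumptions:

```python
# Sketch of the per-challenge container lifecycle (simplified).
import io
import tarfile

import docker

client = docker.from_env()

def start_challenge(image: str):
    # One sandboxed container per active security challenge.
    return client.containers.run(image, detach=True, ports={"8000/tcp": None})

def _as_tar(filename: str, data: bytes) -> bytes:
    # put_archive expects a tar stream, so wrap the single patched file.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=filename)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def apply_patch_and_verify(container, patched_code: bytes,
                           app_dir: str, filename: str,
                           exploit_cmd: list[str]) -> bool:
    # Copy the user's fix into the container, restart the app, then
    # re-run the original exploit: the patch passes only if it fails.
    container.put_archive(app_dir, _as_tar(filename, patched_code))
    container.restart()
    exit_code, _ = container.exec_run(exploit_cmd)
    return exit_code != 0
```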
Accomplishments that we're proud of
We are proud that Lector fills a genuine gap. Code review is a huge part of software engineering, yet there is currently no good place to practice it deliberately. Lector feels like a place where users can be exposed to new concepts and receive guided feedback toward the right answer.
What we learned
We refined our ability to work as a team and to break a large application into smaller tasks that could be assigned and distributed among us. Piecing those parts back together and merging them was also a new skill for many of us. We also gained hands-on experience with new tools, including Codex and frontend build tooling like Vite.
What's next for Lector
Next, we want to expand the challenge library across both tracks, improve the quality and depth of new challenges, and build stronger progress tracking for learners. We would also like to add new features, such as letting users upload and create their own problems and automating challenge generation with AI. Longer term, we see Lector as a platform for interview prep, security training, and deliberate practice in code review. It could even become an interview platform like HackerRank, where companies interview candidates through Lector's challenges.