Inspiration
At first, a leaderboard sounds simple. Just sort players by score and show the results, right?
But when we started thinking about how real platforms work, things got interesting. Imagine a chess website with millions of players. Every time a match finishes, two ratings change. Thousands of players might check their rank at the same time. Suddenly the “simple leaderboard” turns into a real systems problem.
That idea stuck with us. What happens when reads and writes hit the system constantly? How do you update rankings instantly without slowing everything down?
We wanted to explore that challenge. Instead of building a basic CRUD leaderboard, we tried to design something that could actually survive heavy traffic.
What it does
Our project is a dynamic leaderboard service that supports fast ranking updates and detailed statistics.
Users can add or remove players with ratings, view the top 10 leaderboard, and explore information about the overall ranking distribution. That includes things like mean rating, median, quartiles, standard deviation, and percentile ranks.
We also built an audit log so every change to the leaderboard can be tracked over time. On top of that, the system can run stress tests and report performance metrics for different API endpoints.
In short, it is a leaderboard that not only shows rankings but also gives insight into how the whole system behaves.
How we built it
The backend is built with FastAPI and PostgreSQL. FastAPI handles the API layer, while PostgreSQL stores the audit log and persistent data.
For the ranking system, we implemented a custom skip list. This lets us insert, delete, and query ranks quickly without having to sort the entire dataset every time. Each level in the skip list keeps track of spans, which makes it possible to calculate player rank and percentiles efficiently.
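The core idea can be sketched like this: a simplified, Redis-style rank-aware skip list where each forward pointer records how many nodes it skips. This is an illustrative sketch, not our exact implementation.

```python
import random

class Node:
    __slots__ = ("score", "forward", "span")
    def __init__(self, score, level):
        self.score = score
        self.forward = [None] * level  # next node at each level
        self.span = [0] * level        # how many nodes each forward link skips

class SkipList:
    """Simplified rank-aware skip list, in the style of Redis's zskiplist."""
    MAX_LEVEL = 16
    P = 0.5

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)
        self.level = 1
        self.length = 0

    def _random_level(self):
        lvl = 1
        while lvl < self.MAX_LEVEL and random.random() < self.P:
            lvl += 1
        return lvl

    def insert(self, score):
        update = [self.head] * self.MAX_LEVEL
        rank = [0] * self.MAX_LEVEL      # rank of update[i] at each level
        x = self.head
        for i in range(self.level - 1, -1, -1):
            rank[i] = rank[i + 1] if i < self.level - 1 else 0
            while x.forward[i] and x.forward[i].score < score:
                rank[i] += x.span[i]
                x = x.forward[i]
            update[i] = x
        lvl = self._random_level()
        if lvl > self.level:
            for i in range(self.level, lvl):
                rank[i] = 0
                update[i] = self.head
                update[i].span[i] = self.length
            self.level = lvl
        node = Node(score, lvl)
        for i in range(lvl):
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node
            node.span[i] = update[i].span[i] - (rank[0] - rank[i])
            update[i].span[i] = (rank[0] - rank[i]) + 1
        for i in range(lvl, self.level):
            update[i].span[i] += 1   # untouched levels now skip one more node
        self.length += 1

    def rank(self, score):
        """1-based rank by ascending score; None if the score is absent."""
        x, r = self.head, 0
        for i in range(self.level - 1, -1, -1):
            while x.forward[i] and x.forward[i].score <= score:
                r += x.span[i]
                x = x.forward[i]
        return r if x is not self.head and x.score == score else None
```

Because spans are summed along the search path, `rank` costs the same O(log n) as a lookup, and a percentile is just `rank / length`.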
To compute statistics, we used a couple of neat algorithms. Welford’s algorithm updates the mean and standard deviation incrementally as each new score arrives, so there is no need to recompute everything from scratch. For percentile values we used Quickselect, which finds the k-th smallest element without sorting the full dataset.
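Both fit in a few lines. Here is a minimal sketch of the two ideas (names and structure are illustrative, not our exact code):

```python
import random

class RunningStats:
    """Welford's online algorithm: O(1) mean/variance update per score."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 0 else 0.0

def quickselect(values, k):
    """k-th smallest element (0-based), expected O(n), no full sort."""
    pivot = random.choice(values)
    lows = [v for v in values if v < pivot]
    pivots = [v for v in values if v == pivot]
    if k < len(lows):
        return quickselect(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    highs = [v for v in values if v > pivot]
    return quickselect(highs, k - len(lows) - len(pivots))
```

The median and quartiles are just `quickselect` calls at the right indices, so a full sort never happens.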
The leaderboard itself lives in memory for speed. PostgreSQL acts as the durable source of truth through an audit log. When the server starts, the log is replayed to rebuild the in-memory state.
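The replay step is conceptually simple. Here is a toy version assuming add/remove events; the real log and event schema differ:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditEvent:
    op: str                        # "add" or "remove"
    player: str
    rating: Optional[int] = None   # only meaningful for "add"

def rebuild_state(events):
    """Replay the audit log in order to reconstruct the in-memory board."""
    board = {}
    for e in events:
        if e.op == "add":
            board[e.player] = e.rating
        elif e.op == "remove":
            board.pop(e.player, None)
    return board
```

Because every mutation is logged, the in-memory structure can be treated as a disposable cache: lose it, restart, replay.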
On the frontend we used React with Vite, TypeScript, TailwindCSS, and shadcn/ui to create a clean interface.
Deployment was straightforward but fun. The backend runs in Docker on an AWS EC2 instance, and the frontend is hosted on Vercel.
Challenges we ran into
The hardest part was dealing with reads and writes happening at the same time.
Every write changes the order of the leaderboard. Every read might need sorted data or statistics. If everything goes through the same database query, performance drops fast.
We spent a lot of time thinking about concurrency. Reader-writer locks let many read operations run in parallel while still keeping writes safe.
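Python's threading module has no built-in reader-writer lock, so a minimal reader-preferring version looks roughly like this (a sketch of the pattern, not our exact code):

```python
import threading

class ReaderWriterLock:
    """Minimal reader-preferring RW lock: many readers OR one writer."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:          # wait out an active writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # last reader out wakes writers

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Rank queries and stats take the read side; score updates take the write side, so reads never block each other.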
Another challenge was statistics. Calculating averages or quartiles by scanning the entire table every time would not scale. That pushed us to use incremental algorithms and more efficient selection methods.
And of course, doing all of this in 24 hours meant plenty of quick debugging sessions and last-minute fixes.
Accomplishments that we're proud of
First, we actually built a working system in a single day. That alone felt great.
We are especially proud of the custom skip list and how it supports fast rank queries. It turned what could have been a slow database operation into something much faster.
Another highlight was integrating load testing directly into the API. Being able to trigger a stress test and see the results right away made it easier to understand how the system behaves under pressure.
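We used Locust for the heavy lifting, but the core idea of a self-reporting stress test can be sketched with the standard library alone (illustrative only; `call` stands in for an HTTP request to one of our endpoints):

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def stress_test(call, requests=200, concurrency=8):
    """Fire `call` many times in parallel and report latency metrics."""
    def timed(_):
        start = time.perf_counter()
        call()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed, range(requests)))
    return {
        "requests": requests,
        "mean_ms": statistics.mean(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * (requests - 1))] * 1000,
    }
```

Exposing a report like this per endpoint made regressions visible immediately after each change.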
Finally, getting the whole stack deployed and running live on AWS and Vercel was a big win.
What we learned
This project taught us a lot about system design and performance.
We learned how small algorithm choices can make a big difference. Something like Welford’s algorithm might look simple, but it saves huge amounts of work when the data keeps changing.
We also gained experience dealing with concurrency and balancing in-memory speed with persistent storage.
And maybe the biggest lesson: problems that look easy at first often hide deeper challenges once you start thinking about scale.
What's next for Enigma
There are several directions we would like to explore next.
Long term, the goal is simple: turn this into a flexible leaderboard system that could power games, esports platforms, or any competitive application.
Built With
- amazon-web-services
- compose
- css
- docker
- ec2
- fastapi
- locust
- postgresql
- python
- react
- shadcn/ui
- sqlalchemy
- tailwind
- typescript
- vercel
- vite