Inspiration
We were inspired by the engineering challenges faced by massive-scale e-commerce platforms like Amazon and Shopify. We wanted to move beyond simple CRUD applications and build a system that could handle the messy reality of high-concurrency environments. Our goal was to simulate a production-grade ecosystem where user actions—like clicks and purchases—trigger complex, asynchronous cascades of events for real-time analytics, fraud detection, and personalization, all without slowing down the user experience.
What it does
Mercury is a comprehensive, event-driven e-commerce platform that operates in real-time.
- Real-time Analytics: tracks user behavior instantly, aggregating "Trending Products" via a rolling-window algorithm.
- Fraud Detection: every transaction passes through a specialized Risk Service that calculates a fraud probability score based on velocity and transaction amount before approval.
- Resilient Processing: handles high throughput gracefully; if a background worker crashes, no data is lost, because events stay queued and are re-processed by the next available worker.
- Personalization: generates dynamic product recommendations based on user activity streams.
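To make the Risk Service idea concrete, here is a minimal sketch of a velocity-plus-amount score. The weights, intercept, and function name are illustrative assumptions, not the actual model Mercury ships:

```typescript
// Illustrative logistic risk score combining transaction amount and
// velocity (transactions by this user in the last hour).
// All coefficients below are made-up values for demonstration only.
function fraudScore(amountUsd: number, txLastHour: number): number {
  // Higher amounts and higher velocity push the probability toward 1.
  const z = -4 + 0.002 * amountUsd + 0.8 * txLastHour;
  return 1 / (1 + Math.exp(-z)); // sigmoid: probability in (0, 1)
}
```

A payment would then be approved only if the score stays under some tuned threshold (e.g. 0.8).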
How we built it
We architected Mercury as a Turborepo monorepo to manage a complex distributed system efficiently.
- Frontend: built with Next.js and TypeScript for a type-safe, server-rendered UI.
- Backend: a constellation of Node.js microservices (Catalog, Events, Payment, Risk, Worker).
- Data Layer: Prisma with PostgreSQL for persistence and Redis Streams for our high-throughput event bus.
- Infrastructure: the entire stack is containerized with Docker, using an internal bridge network to isolate the microservices while exposing only the API Gateway and Web App to the public.
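The event-bus flow above can be sketched without a live Redis server. The class below is an in-memory stand-in that mirrors the semantics of the Redis Streams commands the workers use (XADD, XREADGROUP, XACK, XPENDING); the class and method names are illustrative, not Mercury's actual code:

```typescript
// In-memory stand-in for a Redis Stream with one consumer group,
// showing how un-acked events survive a worker crash.
type StreamEvent = { id: string; payload: Record<string, string> };

class InMemoryStream {
  private entries: StreamEvent[] = [];
  private pending = new Map<string, StreamEvent>(); // delivered, not yet acked
  private cursor = 0;
  private seq = 0;

  // Mirrors XADD: append an event and return its generated ID.
  add(payload: Record<string, string>): string {
    const id = `${Date.now()}-${this.seq++}`;
    this.entries.push({ id, payload });
    return id;
  }

  // Mirrors XREADGROUP: deliver the next unread event and track it
  // as pending until the consumer acknowledges it.
  readGroup(): StreamEvent | undefined {
    const event = this.entries[this.cursor];
    if (!event) return undefined;
    this.cursor++;
    this.pending.set(event.id, event);
    return event;
  }

  // Mirrors XACK: mark an event as fully processed.
  ack(id: string): boolean {
    return this.pending.delete(id);
  }

  // Mirrors XPENDING: events a crashed worker never acked,
  // available for re-delivery to the next consumer.
  pendingCount(): number {
    return this.pending.size;
  }
}
```

If a worker dies between `readGroup` and `ack`, the event stays in the pending set, which is exactly the property that gives the real system its at-least-once guarantee.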
Challenges we ran into
- Distributed Consistency: keeping data in sync across the primary database and our Redis cache was difficult. We had to implement reliable acknowledgment patterns (XACK) in our Redis consumer groups to ensure at-least-once delivery.
- Production Hardening: early versions of our search and payment webhooks were fragile. We had to painstakingly implement circuit breakers, idempotency checks (to prevent double charges), and strict timeouts to prevent cascading failures under high load.
- Testing Complexity: debugging race conditions in an asynchronous environment proved challenging. We solved this by building a custom smoke-test suite that validates the health of every service in the Docker mesh before traffic is allowed in.
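The idempotency check mentioned above boils down to caching a result per idempotency key. A minimal sketch, assuming an in-memory store and a stubbed charge (the real handler would persist keys and call the payment provider):

```typescript
// Illustrative idempotency guard for a payment webhook.
// The Map and the result shape are stand-ins for the real service's storage.
type ChargeResult = { status: "charged" | "duplicate"; amountUsd: number };

const processed = new Map<string, ChargeResult>(); // idempotency key -> result

function handleWebhook(idempotencyKey: string, amountUsd: number): ChargeResult {
  // A retried or duplicated webhook replays the stored result
  // instead of charging the customer a second time.
  const prior = processed.get(idempotencyKey);
  if (prior) return { ...prior, status: "duplicate" };

  const result: ChargeResult = { status: "charged", amountUsd };
  processed.set(idempotencyKey, result);
  return result;
}
```

Payment providers such as Stripe send the same idempotency key on retries, so the second delivery is answered from the cache rather than re-executed.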
Accomplishments that we're proud of
- The "Trending" Algorithm: a time-based rolling window built on Redis Sorted Sets (`ZSET`) that lets us query the top trending items in O(log N) time, regardless of database load.
- Safe Mode Reliability: we built a fallback "Safe Mode" for our search service that automatically degrades gracefully if the primary search index becomes unresponsive, ensuring the site never goes down for users.
- Zero-Config Deployment: despite having 7+ microservices, the entire stack starts up with a single `docker compose up` command, with all networking, database migrations, and seed data handled automatically.
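The rolling-window idea can be sketched in plain TypeScript. In Redis this maps to ZADD with the view timestamp as the score, ZREMRANGEBYSCORE to trim events that fall out of the window, and a reverse range query for the leaderboard; the class below is an illustrative in-memory equivalent, not the production code:

```typescript
// In-memory sketch of a rolling-window "trending products" counter.
class TrendingWindow {
  private views = new Map<string, number[]>(); // productId -> view timestamps (ms)

  constructor(private windowMs: number) {}

  // Record one product view at time `now` (epoch ms).
  recordView(productId: string, now: number): void {
    const times = this.views.get(productId) ?? [];
    times.push(now);
    this.views.set(productId, times);
  }

  // Top-N products by view count inside the rolling window ending at `now`.
  top(n: number, now: number): string[] {
    const cutoff = now - this.windowMs;
    return [...this.views.entries()]
      .map(([id, times]) => [id, times.filter((t) => t >= cutoff).length] as const)
      .filter(([, count]) => count > 0)
      .sort((a, b) => b[1] - a[1])
      .slice(0, n)
      .map(([id]) => id);
  }
}
```

The sorted-set version avoids the linear scan here: Redis keeps members ordered by score, which is where the O(log N) insert and range-query cost comes from.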
What we learned
- The Cost of Microservices: while microservices offer great scalability, they introduce significant complexity in debugging and inter-service communication.
- Type Safety Is Critical: sharing Zod schemas between our frontend and backend packages saved us countless hours of debugging by catching data-shape mismatches at compile time rather than at runtime.
- Infrastructure as Code: defining our entire production environment in code allowed us to iterate rapidly without fear of "it works on my machine" bugs.
What's next for Mercury
- Kubernetes Migration: moving from Docker Compose to a Kubernetes cluster to enable auto-scaling for individual high-load services like the Event collector.
- AI-Driven Search: replacing our basic keyword search with vector embeddings to enable semantic search.
- Advanced Fraud Models: upgrading our logistic regression risk model to a more robust random forest classifier for better accuracy.
Built With
- bun
- css
- docker
- elasticsearch
- framer
- grafana
- husky
- lucide
- motion
- next.js
- nextauth
- node.js
- postgresql
- prisma
- prometheus
- radix
- react
- redis
- stripe
- tailwind
- turborepo
- typescript
- ui
- vitest