Inspiration

The computers we use every day almost always run well below their full potential, with processors sitting at five to ten percent capacity while we browse the internet, stream videos, or simply leave our machines switched on. At the same time, we were increasingly aware of the enormous financial and environmental cost companies face when running artificial intelligence on massive, purpose-built data centers that consume extraordinary amounts of electricity and require constant expansion to meet growing demand. The contrast was hard to ignore: billions of personal computers around the world waste the vast majority of their computing potential every single day, while businesses spend fortunes to build and maintain new infrastructure capable of doing that exact same work. It struck us as one of the most overlooked mismatches in modern technology, and we became convinced that the solution was not to build more data centers but to finally connect the computing power that already exists with the companies that desperately need it. This became the foundation of Corex.

What it does

Corex is a distributed compute marketplace that lets companies run their AI workloads across a network of idle personal computers instead of in expensive cloud data centers. Companies submit their AI jobs through the Corex dashboard, and the work gets distributed across available workers in real time. Everyday people contribute their idle CPU power through a lightweight background app, earn money for every job they complete, and get matched to jobs automatically based on their preferences and availability. Every result is verified for accuracy before it is delivered back to the company. The whole system runs without any new servers, any new data centers, or any new electricity: just computing power that already exists finally being put to use.
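The matching step described above can be sketched in a few lines. This is a minimal illustration, not Corex's actual data model: the `Worker` fields and the `match_workers` helper are names we made up for the example.

```python
from dataclasses import dataclass


@dataclass
class Worker:
    id: str
    online: bool
    max_cpu_percent: int   # how much idle CPU the owner is willing to share
    job_types: set[str]    # job categories the owner has opted into


def match_workers(workers: list[Worker], job_type: str, cpu_needed: int) -> list[Worker]:
    """Return workers that are online, opted into this job type, and
    willing to give at least the CPU share the chunk needs."""
    return [
        w for w in workers
        if w.online and job_type in w.job_types and w.max_cpu_percent >= cpu_needed
    ]


# Illustrative pool: only w1 is online, opted in, and has enough spare CPU.
pool = [
    Worker("w1", True, 50, {"embedding"}),
    Worker("w2", True, 20, {"embedding"}),
    Worker("w3", False, 80, {"embedding"}),
]
eligible = match_workers(pool, "embedding", cpu_needed=30)
```

In practice preferences and availability would live in the worker pool state rather than a static list, but the filter itself stays this simple.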

How we built it

We built Corex as a full-stack application with three main pieces working together. The orchestrator is a Python and FastAPI backend that acts as the brain of the platform. It receives jobs from companies, breaks them into smaller chunks, distributes those chunks to available workers by firing webhooks through the asynchronous httpx HTTP client, verifies results using a honeypot system, and handles billing calculations. Workers run a lightweight Python and FastAPI client on their own machines that receives those webhooks, executes the jobs, and sends results back to the orchestrator. The frontend is built with React, Vite, and TypeScript with Tailwind CSS, and consists of two separate dashboards: one for companies to submit jobs and monitor progress in real time, and one for workers to see incoming jobs, track their earnings, and monitor their CPU activity. Firebase handles authentication and stores all persistent data, including worker accounts and company accounts. Redis paired with the rq job queue library handles the live worker pool state and chunk tracking. We used Server-Sent Events through the EventSource API to push real-time updates from the orchestrator to both dashboards simultaneously, and the regional heatmap on the company dashboard is built with D3.js rendered over a custom SVG world map to show which regions are actively processing jobs.
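The chunk-and-dispatch step can be sketched roughly as follows. The chunk size, the `Chunk` shape, and the injected `send` callable are our own illustrative choices; in the real orchestrator, `send` would wrap an `httpx.AsyncClient.post` call that fires the webhook at the worker's endpoint.

```python
import asyncio
from dataclasses import dataclass
from typing import Awaitable, Callable


@dataclass
class Chunk:
    job_id: str
    index: int
    payload: list[str]


def split_job(job_id: str, items: list[str], chunk_size: int) -> list[Chunk]:
    """Break a job's input items into fixed-size chunks for distribution."""
    return [
        Chunk(job_id, i // chunk_size, items[i:i + chunk_size])
        for i in range(0, len(items), chunk_size)
    ]


async def dispatch(
    chunks: list[Chunk],
    workers: list[str],
    send: Callable[[str, Chunk], Awaitable[None]],
) -> None:
    """Fire one webhook per chunk, round-robining over available workers.
    `send` is injected so the sketch stays self-contained; the real
    orchestrator would pass a thin wrapper around httpx.AsyncClient.post."""
    await asyncio.gather(
        *(send(workers[c.index % len(workers)], c) for c in chunks)
    )


# Example: 5 items with chunk size 2 yields 3 chunks over 2 workers.
sent = []

async def fake_send(url: str, chunk: Chunk) -> None:
    sent.append((url, chunk.index))

chunks = split_job("job-1", ["a", "b", "c", "d", "e"], chunk_size=2)
asyncio.run(dispatch(chunks, ["http://w1", "http://w2"], fake_send))
```

Concurrent dispatch with `asyncio.gather` is what lets one orchestrator keep many workers busy without blocking on any single webhook.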

Challenges we ran into

The hardest problem we faced was verification: how do you trust that a random person's laptop actually ran the job correctly and did not fake the result? We solved this with a honeypot system that injects known test chunks with correct answers into every job. Workers do not know which chunks are tests, so if they try to fake results they fail the honeypot and get flagged immediately. Another challenge was working with Replit for the first time: integrating the frontend files Replit scaffolded with our own backend code was harder than expected, as the project structure and conventions did not always align cleanly with what we had built. We were also running two separate backend servers simultaneously, the orchestrator and the worker client, which introduced coordination complexity and made debugging significantly harder, since issues could originate from either server at any point in the pipeline.
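In sketch form, the honeypot idea looks like this. The chunk shapes, the sample questions, and the helper names (`inject_honeypots`, `check_results`) are illustrative, not our production format.

```python
import random


def inject_honeypots(chunks: list[dict], honeypots: dict[str, str]) -> list[dict]:
    """Mix known-answer test chunks into a job's real chunks at random
    positions, so workers cannot tell honeypots from real work."""
    mixed = chunks + [{"input": q, "honeypot": True} for q in honeypots]
    random.shuffle(mixed)
    return mixed


def check_results(results: dict[str, str], honeypots: dict[str, str]) -> bool:
    """A worker passes only if every honeypot it processed matches the
    known answer; otherwise it is flagged and its chunks reassigned."""
    return all(results.get(q) == a for q, a in honeypots.items())


# Illustrative honeypot questions with answers the orchestrator already knows.
HONEYPOTS = {"2+2": "4", "3*3": "9"}

job = inject_honeypots([{"input": "real work", "honeypot": False}], HONEYPOTS)
honest_pass = check_results({"2+2": "4", "3*3": "9"}, HONEYPOTS)
cheater_pass = check_results({"2+2": "5", "3*3": "9"}, HONEYPOTS)
```

The key property is that a cheater cannot selectively answer honestly, because nothing in the chunk reveals which entries are honeypots.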

Accomplishments that we're proud of

We're proud that our system works end to end: a job submitted on one laptop is successfully picked up, processed, and verified by a separate machine on the network, with no mocking or hardcoded responses. Our live demo shows the full pipeline in real time across both the company and worker dashboards. We're also proud of the honeypot verification system, which solves the trust problem without any cryptography: unknown verification chunks are injected into each job, and workers that return incorrect results are flagged and replaced automatically. Building a functioning two-sided marketplace with an orchestration layer and a real-time UI within a hackathon timeline was more ambitious than we expected when we started, and we're glad it came together.

What we learned

While we all come from technical backgrounds, building a product is one thing; coming up with a go-to-market strategy is a completely different kind of challenge. It was also genuinely exciting to take concepts we had studied, such as peer-to-peer networking, distributed systems, and OS-level resource management, and actually put them to use in something real. That bridge between coursework and a working product felt meaningful. We also learned a lot about working as a team under pressure. Some of us traveled to get here, we were in a new environment, and everything was moving fast; we had to adapt quickly, communicate constantly, and trust the people next to us. The biggest takeaway was the value of doing research first and having a strong idea before writing any code.

What's next for Corex

The immediate next step is expanding beyond text embeddings to support image classification, audio transcription, and custom model uploads; the foundation is already built, and those job types just need their execution layers completed. On the worker side, we want to add GPU support so contributors with gaming laptops or workstations can take on more demanding jobs at higher pay rates. We are also planning a formal university partnership program to seed the worker network on campuses, where students have laptops running all day during lectures and study sessions. Longer term, the vision is to make Corex the default alternative to cloud AI compute for any company running batch workloads that currently cost a fortune. The data center should be the last resort, not the default. Every idle CPU is an opportunity, and Corex is how we finally put them to work.
