Inspiration

Compute Broker was created as an alternative to expensive cloud compute services. We wanted to build a system where users can freely exchange compute power without having to find workers or workloads themselves. It is a peer-to-peer computing service that handles peer discovery and connection automatically.

What it does

Compute Broker is a web service and Python library that lets users create a workload locally and distribute it to other users. Users can listen for incoming workloads and can request a pool of workers to send workloads to. The service itself stores client connection information and which clients are looking for work.

The web service aims to be as stateless as possible. Once created, a user or worker group persists until it is explicitly deleted. Resources can be accessed using a signed and encrypted JWT that is returned upon creation and can later be obtained using the owner's credentials. By using JWTs, we ensure that a user can only access and modify resources they own, without querying the database every time.
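
To illustrate the idea in Python with PyJWT (the real service does this in Go and additionally encrypts the token; the claim names here are made up):

import time
import jwt  # PyJWT: pip install pyjwt

SECRET = "server-side-signing-key"  # hypothetical key, held only by the server

# On resource creation, the server issues a token encoding the owner's id
# and an expiry time.
token = jwt.encode({"uid": 42, "exp": int(time.time()) + 3600},
                   SECRET, algorithm="HS256")

# On later requests, verifying the signature recovers the owner's id
# without a database lookup; decode() raises if the token is expired
# or has been tampered with.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["uid"])  # -> 42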

We weren't able to complete the client implementation in the allotted time, but the ability to send and receive Python objects and functions, and to detect updates to data, is there, along with a client library for the API requests. The client will be easy to use and feel like writing normal Python: it manipulates regular Python objects wrapped with our framework, then synchronizes and distributes the data to the worker systems for processing.
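
To give a feel for the intended workflow, here's a rough sketch of what finished client code might look like (the computebroker package and its wrap, session, and map names are illustrative, not a final API):

import computebroker as cb                # hypothetical package name

data = cb.wrap(list(range(1_000_000)))    # wrap a regular Python object
session = cb.session(workers=2)           # request a pool of two workers

# Distribute a plain function over the wrapped data; the framework
# splits the work, ships it to the workers, and recombines the results.
squares = session.map(lambda x: x * x, data)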

If you want to try out some API features, here are a few curl requests you can run on your system:

#CREATING A USER
curl --location --request POST 'vps295572.vps.ovh.ca/client' \
--header 'Content-Type: application/json' \
--data-raw '{
    "email" : "",
    "password" : "",
    "pubKey" : "sample",
    "address" : "sample"
}'

#LOGGING IN, THIS WILL RETURN YOU YOUR API TOKEN
curl --location --request POST 'vps295572.vps.ovh.ca/login' \
--header 'Content-Type: application/json' \
--data-raw '{
    "email" : "",
    "password" : ""
}'

#SIGNALING YOU ARE READY FOR A JOB
curl --location --request POST '192.99.55.160:8080/client/signal/1' \
--header 'Authorization: Bearer {API TOKEN HERE}'

#REQUESTING A SESSION WITH 2 WORKERS
curl --location --request POST 'vps295572.vps.ovh.ca/session' \
--header 'Authorization: Bearer {API TOKEN HERE}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "workers" : 2
}'

Those are the basics; there are a few more minor features as well.
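
The client library essentially wraps requests like these. For reference, here's a minimal sketch of the same flow in Python with requests (the response key holding the token is an assumption):

import requests

BASE = "http://vps295572.vps.ovh.ca"

# Log in and grab the API token.
resp = requests.post(f"{BASE}/login",
                     json={"email": "you@example.com", "password": "hunter2"})
token = resp.json()["token"]  # assuming the token comes back under this key
headers = {"Authorization": f"Bearer {token}"}

# Signal that we're ready for one job, then request a session with two workers.
requests.post(f"{BASE}/client/signal/1", headers=headers)
requests.post(f"{BASE}/session", headers=headers, json={"workers": 2})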

View the full API and client library documentation here.

How we built it

The web service is written in Go, using the Gorilla/Mux library for request routing and SQLite as our database. All endpoints are an action on either a user or a session resource. A session describes a group of users doing work for another user. It is stored in the database as a relation between two user ids and a session id unique to that session, which lets us retrieve all user data associated with a session from the session id alone. The users table stores how many jobs a user is looking for, and the user can change that number with a simple POST request and their API key. Using this, we can quickly assemble a fixed-size pool of workers.

We use a signed and encrypted JSON Web Token (JWT) as the API key. The JWT encodes the user's id and a time to live, so we can avoid unnecessary database queries to validate the user or fetch user ids for other operations. All stored passwords are hashed using SHA-256, and we've made efforts to prevent SQL injection and other attacks.
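
A rough sketch of the schema as described, using Python's sqlite3 for illustration (the table layout matches the description above, but the column names are our guesses, not the actual schema):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- jobs_wanted drives the fixed-size worker pool selection.
    CREATE TABLE users (
        id          INTEGER PRIMARY KEY,
        email       TEXT UNIQUE,
        password    TEXT,            -- SHA-256 hash, not plaintext
        jobs_wanted INTEGER DEFAULT 0
    );
    -- Each row relates a worker to the user whose workload it runs,
    -- so all users in a session can be fetched by session_id alone.
    CREATE TABLE sessions (
        session_id INTEGER,
        owner_id   INTEGER REFERENCES users(id),
        worker_id  INTEGER REFERENCES users(id)
    );
""")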

The client uses Python metaclasses to detect changes within wrapped classes and send updates to the worker systems accordingly. Results are sent back and recombined into a single data set.
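
A minimal sketch of the metaclass technique, simplified from the real wrappers (class and attribute names are illustrative):

class ChangeTracking(type):
    """Metaclass that intercepts attribute writes so each instance
    records which fields changed since the last synchronization."""
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        plain_setattr = cls.__setattr__

        def tracked_setattr(self, key, value):
            plain_setattr(self, key, value)
            # Write straight into __dict__ to avoid re-triggering ourselves.
            self.__dict__.setdefault("_dirty", set()).add(key)

        cls.__setattr__ = tracked_setattr
        return cls

class Vector(metaclass=ChangeTracking):
    def __init__(self, x, y):
        self.x = x
        self.y = y

v = Vector(1, 2)
v._dirty.clear()   # ignore the writes made during __init__
v.x = 10
print(v._dirty)    # {'x'} -- only changed fields need re-sending to workers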

Challenges we ran into

We ran into some difficulty working with the Go-Jose library: its coding style was unfamiliar to us, and it was hard to find documentation on how to pass signing and encryption keys for the algorithms we used. On the client side we ran into time constraints and ambiguity about how the client should function and interact with the workers. Additionally, it was surprisingly difficult to send Python object information over a TCP socket.
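
For anyone hitting the same wall: a TCP socket is a byte stream with no message boundaries, so pickled objects need explicit framing. Here's a sketch of one common approach, length-prefixed pickles (not necessarily our exact wire format):

import pickle
import socket
import struct

def send_obj(sock: socket.socket, obj) -> None:
    payload = pickle.dumps(obj)
    # A 4-byte big-endian length prefix tells the receiver where the message ends.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data

def recv_obj(sock: socket.socket):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return pickle.loads(recv_exact(sock, length))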

Accomplishments that we're proud of

The server does a pretty good job of avoiding data copies. It has solid error handling as well, responding with an appropriate status code for each error, and a given request to the API behaves the same way regardless of server state. We were shooting for a fully RESTful API, and we feel this came pretty close. On the client side we created some powerful wrappers around arbitrary Python objects; it's a shame we didn't have time to use them.

What we learned

It's important to plan and coordinate a project before beginning; we ran into many problems communicating between the client and the workers on the client side. It's also important to test frequently: we spent a lot of time debugging the server because of a small mistake we didn't catch early on.

What's next for Compute Broker

We are going to finish the client and possibly establish a reward/bidding system for completing workloads.

Built With

Go, Python, SQLite, Gorilla/Mux, Go-Jose, JWT
