Earlier this week, following the devastation of Hurricane Florence, my newsfeed surged with friends offering their excess food and water to displaced community members. Through technology, the world had grown smaller. Resources had been shared.

Our team had a question: what if we could redistribute something else just as valuable? Something just as critical in both our everyday lives and in moments of crisis: server space. Everything else we depend on, from emergency services apps to messaging systems, takes server performance as a given. But the reality is that during storms, data centers go down all the time. The problem is worse in remote parts of the world, where redirecting requests to a regional data center isn't an option. When a child is stranded in a natural disaster, a few minutes of navigation can mean the difference between a miracle and a tragedy. Those are the moments when we have to be able to trust our technology. We weren't willing to leave that to chance, so Nimbus was born.

What it does

Nimbus iOS harnesses the processing power of idle mobile phones to serve compute tasks. Imagine charging your phone, enabling Nimbus, and letting your locked phone act as the server for a schoolchild in Indonesia during typhoon season. Where other distributed computation engines have failed, Nimbus excels. Rather than treating every node as equally suited to a compute task, our scheduler weighs a range of factors before assigning a task to the best node: CPU capacity, and how long the user intends to stay idle (how long they'll be asleep, how long they'll be at an offline Facebook event). Users could earn small payments for each compute task, or Nimbus could come bundled into a larger app, like Facebook.
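A minimal sketch of how that kind of node scoring could work. The field names, weights, and scoring formula here are illustrative assumptions, not Nimbus's actual scheduler:

```javascript
// Toy node-scoring sketch: higher score = better candidate for a task.
// Field names and weights are illustrative assumptions, not the real algorithm.
function scoreNode(node, task) {
  // A node that can't finish before the user returns is disqualified outright.
  if (node.expectedIdleMinutes < task.estimatedMinutes) return -Infinity;
  const cpuScore = node.cpuGhz * node.cores;                               // raw compute capacity
  const idleHeadroom = node.expectedIdleMinutes / task.estimatedMinutes;   // slack before the user is back
  const batteryPenalty = node.onCharger ? 0 : 10;                          // prefer phones on the charger
  return cpuScore * 2 + idleHeadroom - batteryPenalty;
}

function pickBestNode(nodes, task) {
  return nodes.reduce((best, n) =>
    scoreNode(n, task) > scoreNode(best, task) ? n : best);
}
```

In practice the scheduler would feed in whatever idle-time signals it has (alarm time, calendar events) as `expectedIdleMinutes`.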

Nimbus Desktop, which we've proof-of-concepted in the Desktop branch of our GitHub repo, uses a central server to assign tasks to each computer node via Docker containers provisioned with Vagrant. We haven't completed this platform option, but it serves another important product case: enterprise clients. We did the math for you: a medium-sized company running 22,000 EC2 instances could, by running Nimbus Desktop on its idle computers for 14 hours a day, save roughly $6 million a year in AWS fees. In this setting, the number of possible attack vectors is minimized because all requests originate from within the organization. This is the future of computing because it's far more efficient and environmentally friendly than running centralized servers alone. Data centers have an increasingly detrimental effect on global warming; Iceland is already feeling the strain. Nimbus Desktop offers a scalable and efficient future. We don't have a resource problem. We have a distribution one.
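The savings estimate is back-of-envelope and can be reproduced like this. The hourly rate is an assumed on-demand price for a small EC2 instance, not a quoted AWS figure:

```javascript
// Back-of-envelope AWS savings estimate. The hourly rate is an assumed
// on-demand price for a small EC2 instance, not an official AWS quote.
const instances = 22000;      // concurrent EC2 instances replaced
const idleHoursPerDay = 14;   // company machines sit idle overnight
const hourlyRate = 0.053;     // assumed $/instance-hour

const instanceHoursPerYear = instances * idleHoursPerDay * 365;
const annualSavings = instanceHoursPerYear * hourlyRate;
console.log(`~$${(annualSavings / 1e6).toFixed(1)}M / year`); // roughly $6M
```

The exact number depends on instance type and region, but any plausible rate lands in the millions per year.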

How we built it

The client-facing web app is built with React and Node.js. The backend is built with Node.js. The iOS app is built with React Native, Express, and Node.js. The Desktop script is built on Docker and Vagrant.

Challenges we ran into

npm was consistently finicky when we integrated Node.js with React Native and built all of that in Xcode with Metro Bundler. We also had to switch the scheduler-node interaction from a push model to a pull model to guarantee certain security and downtime-minimization properties. We didn't have time to complete Nimbus Desktop, to save stepwise compute progress in a hashed database for large multi-hour computes (which would let us reassign a compute to the next-best node after a disruption and optimize memory usage), or to get to the web compute version (diagrammed in the photo carousel), which would let nodes act as true load balancers for more complex hosting.
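The pull model means each node initiates every request, so nodes never accept inbound connections (shrinking the attack surface), and a node that goes offline simply stops polling instead of silently dropping pushed tasks. A minimal sketch, with hypothetical endpoint names and payload shapes:

```javascript
// Pull-model sketch: the node asks the scheduler for work instead of the
// scheduler pushing it. Endpoints and payload shapes are hypothetical.
async function pollForWork(schedulerUrl, nodeId, runTask) {
  const res = await fetch(`${schedulerUrl}/tasks/next?node=${nodeId}`);
  if (res.status === 204) return; // no work available right now
  const task = await res.json();
  const result = await runTask(task);
  // Report the result back; the scheduler marks the task complete.
  await fetch(`${schedulerUrl}/tasks/${task.id}/result`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ nodeId, result }),
  });
}

function startPolling(schedulerUrl, nodeId, runTask, intervalMs = 5000) {
  return setInterval(
    () => pollForWork(schedulerUrl, nodeId, runTask).catch(console.error),
    intervalMs);
}
```

If a node dies mid-task, the scheduler can notice the missing result after a timeout and hand the task to the next-best node.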

Accomplishments that we're proud of

Ideating Nimbus Desktop happened in the middle of the night. That was pretty cool.

What we learned

Asking too many questions leads to way better product decisions.

What's next for Nimbus

In addition to the incomplete items in the challenges section, we ultimately want the scheduler to predict disruptions ahead of time using machine learning on time-series data.
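As a toy illustration of the idea, the scheduler could smooth each node's recent heartbeat latencies and flag a node as likely to disrupt when the smoothed value drifts past a threshold. A real version would use a proper time-series model; the smoothing factor and threshold here are arbitrary assumptions:

```javascript
// Toy disruption predictor: exponential smoothing over heartbeat latencies.
// Alpha and threshold are arbitrary assumptions, not tuned values.
function makeDisruptionPredictor(alpha = 0.3, thresholdMs = 500) {
  let smoothed = null;
  return {
    observe(latencyMs) {
      smoothed = smoothed === null
        ? latencyMs
        : alpha * latencyMs + (1 - alpha) * smoothed;
      return smoothed;
    },
    likelyToDisrupt() {
      return smoothed !== null && smoothed > thresholdMs;
    },
  };
}
```

When a node looks likely to disrupt, the scheduler could proactively checkpoint its task and line up the next-best node.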
