Inspiration

Wearable technologies, particularly in VR and AR, continue to advance, yet they remain constrained when running complex AI tasks such as real-time vision processing, spatial understanding, and contextual inference. Existing approaches often depend heavily on cloud infrastructure, which introduces latency, raises privacy concerns, and requires constant connectivity.
This led to a key idea: instead of relying on remote servers, why not harness the combined computational power of personal devices like phones, tablets, and laptops?
iSpine grew out of this concept: a system that transforms everyday consumer hardware into a decentralized edge computing network tailored for wearable devices.

What it does

iSpine enables personal devices such as smartphones, tablets, and laptops to function collectively as a distributed compute system for AI workloads.
Rather than offloading data to the cloud, wearable devices can process demanding tasks like object detection, gesture recognition, and scene understanding locally by leveraging nearby devices.

Core features include:

  • Distributed computation: Multiple devices are unified into a single logical AI processing unit
  • Low-latency performance: Computation happens close to the user, enabling real-time responsiveness
  • Privacy preservation: Sensitive data remains on local devices instead of being transmitted externally
  • AI model execution: Supports lightweight deep learning and computer vision tasks
  • Wearable compatibility: Designed specifically for integration with AR/VR and similar devices

How we built it

The system was designed by combining principles from distributed systems and edge AI optimization.

Cluster orchestration layer:
Devices automatically discover each other over local networks (Wi-Fi or peer-to-peer) and form a compute cluster. A scheduling engine divides AI workloads into smaller segments and assigns them based on device capabilities such as processing power, thermal state, and battery level.
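A minimal sketch of the capability-based split described above. The `Device` fields and the scoring formula here are illustrative stand-ins, not the exact heuristics the scheduler uses:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    compute: float   # relative compute capability (e.g. a benchmark score)
    battery: float   # state of charge, 0.0-1.0
    thermal: float   # thermal load, 0.0 (cool) to 1.0 (throttling)

def capability_score(d: Device) -> float:
    # Penalize low battery and high thermal load so hot or nearly-drained
    # devices receive smaller shares of the workload.
    return d.compute * d.battery * (1.0 - d.thermal)

def assign_segments(devices: list[Device], total_segments: int) -> dict[str, int]:
    """Split a workload into per-device segment counts, proportional to score."""
    scores = {d.name: capability_score(d) for d in devices}
    total = sum(scores.values())
    shares = {name: round(total_segments * s / total) for name, s in scores.items()}
    # Fix rounding drift so every segment is assigned exactly once.
    drift = total_segments - sum(shares.values())
    busiest = max(shares, key=shares.get)
    shares[busiest] += drift
    return shares
```

In practice the scores would be refreshed continuously, since battery and thermal state change while a job runs.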

Model optimization:
AI models were quantized and optimized to run efficiently on mobile hardware without requiring dedicated GPUs.
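The core idea behind that quantization step, shown here as a toy symmetric int8 scheme in pure Python (real pipelines would use a framework's quantization tooling rather than hand-rolled code like this):

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor int8 quantization: w is approximated by q * scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    # Recover approximate float weights from the int8 representation.
    return [v * scale for v in q]
```

Storing weights as int8 instead of float32 cuts model size roughly 4x, which is what makes inference feasible on mobile hardware without dedicated GPUs.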

Communication protocol:
A lightweight protocol was implemented to ensure fast and efficient data exchange between devices while minimizing latency and bandwidth usage.
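One common shape for such a protocol is length-prefixed framing over a stream socket; this sketch (JSON payloads are an assumption for readability, a binary serializer would be leaner) shows the framing idea:

```python
import json
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian payload length

def encode(msg: dict) -> bytes:
    """Frame a message as [length][payload] for transmission over a stream."""
    payload = json.dumps(msg, separators=(",", ":")).encode()
    return HEADER.pack(len(payload)) + payload

def decode(buf: bytes) -> tuple[dict, bytes]:
    # Returns the first complete message plus any trailing bytes,
    # so a receive buffer can be parsed incrementally.
    (length,) = HEADER.unpack_from(buf)
    start = HEADER.size
    msg = json.loads(buf[start:start + length])
    return msg, buf[start + length:]
```

The length prefix lets the receiver know exactly how many bytes to wait for, avoiding delimiter scanning and keeping per-message overhead to four bytes.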

Wearable interface:
A simple API allows wearable devices to offload computational tasks seamlessly, mimicking the behavior of local function calls.
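A sketch of what "mimicking a local function call" can look like from the caller's side. The `Cluster` class and `offload` decorator here are hypothetical names, not the actual iSpine API:

```python
import functools

class Cluster:
    """Stand-in for the cluster client; a real implementation would
    serialize the call and hand it to the scheduling engine."""
    def submit(self, fn, *args, **kwargs):
        # Fallback: execute locally when no peers are available.
        return fn(*args, **kwargs)

cluster = Cluster()

def offload(fn):
    """Make a remote-capable function look like an ordinary local call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return cluster.submit(fn, *args, **kwargs)
    return wrapper

@offload
def detect_objects(frame):
    # Placeholder inference; the real task would run a vision model.
    return ["person"] if frame else []
```

From the wearable's perspective, `detect_objects(frame)` is just a function call; whether it ran on-device or on a nearby laptop is the cluster's decision.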

Challenges we ran into

Developing a distributed compute system using consumer devices presented several difficulties.

Heterogeneous hardware:
Devices vary significantly in performance, making efficient load balancing complex.

Latency versus distribution:
Distributing tasks across devices introduces communication overhead that can erase, or even reverse, the gains from parallelism.
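The trade-off can be made concrete with a simple first-order model (an idealized sketch that ignores stragglers and uneven splits): distribution only pays off while the overhead stays below the compute time it saves.

```python
def distributed_latency(compute_ms: float, n_devices: int, overhead_ms: float) -> float:
    # Ideal parallel split of the work, plus per-task communication overhead.
    return compute_ms / n_devices + overhead_ms

# Break-even condition: compute/n + overhead < compute
# i.e. overhead < compute * (1 - 1/n)
```

For a 100 ms task on 4 devices, 10 ms of overhead yields 35 ms total, a clear win; at 80 ms of overhead the distributed version is slower than running locally.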

Thermal and battery limits:
Mobile devices throttle under sustained workloads, requiring adaptive scheduling strategies.

Network instability:
Local connections can be unreliable, necessitating fault tolerance and dynamic task reassignment.
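The reassignment logic can be sketched as a simple failover loop (a deliberately minimal version; the real scheduler would retry asynchronously and track per-device health):

```python
import time

def run_with_reassignment(task, devices, timeout_s=1.0):
    """Try each device in turn; reassign the task if one fails or runs long."""
    for device in devices:
        start = time.monotonic()
        try:
            result = device(task)
        except Exception:
            continue  # device dropped off the network; try the next one
        if time.monotonic() - start <= timeout_s:
            return result
    raise RuntimeError("no device completed the task")
```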

Model compatibility:
Not all AI models are suitable for mobile environments, requiring additional optimization efforts.

Accomplishments that we're proud of

  • Built a functional prototype of a distributed AI cluster using consumer devices
  • Achieved real-time inference for wearable applications without relying on cloud services
  • Developed a system where devices can dynamically join or leave the network
  • Ensured all processing remains local, preserving user privacy
  • Demonstrated that everyday hardware can support meaningful edge AI workloads

What we learned

  • Distributed computing can extend beyond traditional data centers and exist within personal devices
  • Edge computing emphasizes not just speed, but also privacy, reliability, and independence
  • Efficient optimization is more critical than raw computational power in constrained environments
  • Designing for wearables requires strict attention to latency and user experience
  • Coordinating multiple devices is often more challenging than performing the computations themselves
