Inspiration
In today’s world, critical data isn’t always on a fast, reliable network. Media studios, rural labs, mobile clinics, racetrack ↔ factory setups, and disaster zones all face unstable, high-latency, or lossy connections.
Existing solutions like FTP, HTTP, or even traditional download managers either fail to resume gracefully, can’t prioritize urgent data, or struggle with integrity verification. We wanted a fast, resilient, secure, and intelligent file mover that adapts to network conditions while ensuring end-to-end integrity.
This is where SFTPX was born — inspired by aria2’s multi-source segmented downloads, QUIC’s low-latency multiplexed transport, and RaptorQ-style FEC redundancy.
What it does
SFTPX is a high-performance, adaptive file transfer protocol and system that:
- Splits files into segments and symbols for parallel transfer.
- Uses multiple prioritized streams for urgent, high-priority, normal, and background data.
- Implements FEC (Forward Error Correction) and selective retransmissions for resilience over unstable links.
- Provides real-time telemetry on throughput, latency, packet loss, and transfer progress.
- Supports resume, verification, and integrity checks (per-chunk SHA256 + Merkle tree).
- Allows multi-source transfers — stripes segments across multiple mirrors or peers.
- Operates securely using QUIC/TLS, optionally combined with a hardware TEE or secure element for key protection.
In short, it turns unreliable networks into smart, self-healing highways for critical files.
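As a minimal sketch of the simple-XOR-parity idea: one parity symbol per group of segments lets the receiver rebuild any one lost segment in that group without waiting for a retransmission. The function names here are illustrative, not SFTPX's actual API.

```rust
// One XOR parity symbol per segment group: any single lost segment in the
// group can be rebuilt from the survivors plus the parity.
// (`xor_parity` and `recover` are illustrative names, not the real API.)

/// XOR all segments together to produce the parity symbol.
fn xor_parity(segments: &[Vec<u8>]) -> Vec<u8> {
    let len = segments.iter().map(|s| s.len()).max().unwrap_or(0);
    let mut parity = vec![0u8; len];
    for seg in segments {
        for (p, b) in parity.iter_mut().zip(seg) {
            *p ^= b;
        }
    }
    parity
}

/// Rebuild a single missing segment from the survivors plus the parity.
fn recover(survivors: &[Vec<u8>], parity: &[u8]) -> Vec<u8> {
    let mut lost = parity.to_vec();
    for seg in survivors {
        for (l, b) in lost.iter_mut().zip(seg) {
            *l ^= b;
        }
    }
    lost
}

fn main() {
    let segments = vec![b"chunk-0".to_vec(), b"chunk-1".to_vec(), b"chunk-2".to_vec()];
    let parity = xor_parity(&segments);

    // Pretend segment 1 was lost in transit:
    let survivors = vec![segments[0].clone(), segments[2].clone()];
    let rebuilt = recover(&survivors, &parity);
    assert_eq!(rebuilt, segments[1]);
    println!("recovered: {}", String::from_utf8_lossy(&rebuilt)); // prints "recovered: chunk-1"
}
```

A single parity symbol only covers one loss per group; tolerating burstier loss is exactly why the design points at RaptorQ-style codes for the real thing.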
How we plan to build it
- Language: Rust (`quinn`)
- Segmenting: 512 KB chunks + simple XOR parity for FEC
- Control: JSON-RPC over QUIC control stream
- Storage: Local manifests + bitmap persistence for resume
- Scheduler: Priority-aware, multi-source segment allocation
- Network simulation: `tc netem` to emulate loss, jitter, and variable RTT
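The priority-aware part of the scheduler can be sketched with a std `BinaryHeap`; the `Job` struct and the four-class numbering below are assumptions for illustration, mirroring the urgent/high/normal/background streams described earlier.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Toy priority-aware segment scheduler: lower class number = more urgent,
// and within a class, lower segment index goes first. (Illustrative only.)
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Job {
    class: u8, // 0 = urgent, 1 = high, 2 = normal, 3 = background
    segment: u32,
}

fn main() {
    let mut queue = BinaryHeap::new();
    // Reverse turns the max-heap into a min-heap, so the most urgent job
    // surfaces first regardless of insertion order.
    queue.push(Reverse(Job { class: 3, segment: 7 })); // background bulk
    queue.push(Reverse(Job { class: 0, segment: 2 })); // urgent small file
    queue.push(Reverse(Job { class: 2, segment: 5 })); // normal

    while let Some(Reverse(job)) = queue.pop() {
        println!("sending class {} segment {}", job.class, job.segment);
    }
    // Pops class 0 first, class 3 last: urgent data preempts bulk.
}
```

In the real system each pop would be gated by per-class flow control so background transfers keep some bandwidth instead of starving entirely.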
We plan to build a daemon, a CLI client, and a simple status UI, demonstrating segmented transfers, parallelism, resume, and FEC-based error recovery under lossy conditions.
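The bitmap side of resume is simple enough to sketch in std-only Rust: one bit per 512 KB chunk, persisted next to the manifest, so a restarted transfer only re-requests the gaps. The `Bitmap` type below is a toy; flushing it to disk is omitted.

```rust
// Toy resume bitmap: one bit per chunk. In the real system this would be
// persisted alongside the manifest and reloaded on restart.
struct Bitmap {
    bits: Vec<u8>,
    chunks: usize,
}

impl Bitmap {
    fn new(chunks: usize) -> Self {
        Bitmap { bits: vec![0; (chunks + 7) / 8], chunks }
    }

    fn mark_received(&mut self, i: usize) {
        self.bits[i / 8] |= 1 << (i % 8);
    }

    fn is_received(&self, i: usize) -> bool {
        self.bits[i / 8] & (1 << (i % 8)) != 0
    }

    /// Chunks still to request after a crash or restart.
    fn missing(&self) -> Vec<usize> {
        (0..self.chunks).filter(|&i| !self.is_received(i)).collect()
    }
}

fn main() {
    let mut bm = Bitmap::new(10);
    for i in [0, 1, 2, 5, 9] {
        bm.mark_received(i);
    }
    // After "resume", only the gaps are re-requested:
    println!("missing: {:?}", bm.missing()); // prints "missing: [3, 4, 6, 7, 8]"
}
```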
Challenges we expect
- Unstable links: Real-world connections drop or reorder packets, requiring careful design for resume and FEC.
- Low-latency priorities: Urgent small files must preempt bulk transfers without stalling other streams.
- Chunking vs. memory: Large files split into small segments could blow up RAM usage, so chunk size needs careful balancing.
- FEC integration: Making parity streams adapt to changing packet loss will be tricky.
- Cross-platform consistency: Keeping manifests, bitmaps, and Merkle trees consistent across heterogeneous nodes (ARM, x86, Raspberry Pi, desktop).
- Security vs. performance: Protecting keys and signing manifests on resource-constrained devices while maintaining throughput.
What we learned
- Real-world network conditions require adaptive scheduling, not just parallelism.
- FEC + selective ARQ dramatically improves resilience in lossy or high-latency networks.
- Manifest + bitmap + Merkle tree is a simple, reliable strategy for integrity verification and resume.
- Multipath QUIC streams allow priority-based preemption without head-of-line blocking.
- Rapid prototyping on Raspberry Pi / VMs helps validate protocol logic before moving to secure hardware.
- Security (TEE or secure element) can be decoupled from heavy data processing, making it feasible even for constrained devices.
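To make the Merkle-tree lesson concrete, here is a dependency-free sketch. The real design hashes each chunk with SHA-256; std's `DefaultHasher` stands in only so the example runs without external crates, and swapping in a real digest would change only the `leaf`/`node` helpers.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Per-chunk hash + Merkle root, as stored in the manifest. DefaultHasher is
// a stand-in for SHA-256 so this sketch needs no external dependencies.
fn leaf(chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    chunk.hash(&mut h);
    h.finish()
}

fn node(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

/// Fold per-chunk hashes up to a single Merkle root.
fn merkle_root(mut level: Vec<u64>) -> u64 {
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|p| if p.len() == 2 { node(p[0], p[1]) } else { p[0] })
            .collect();
    }
    level[0]
}

fn main() {
    let chunks: [&[u8]; 4] = [b"aa", b"bb", b"cc", b"dd"];
    let leaves: Vec<u64> = chunks.iter().map(|c| leaf(c)).collect();
    let root = merkle_root(leaves.clone());

    // A single corrupted chunk changes its leaf, and therefore the root:
    let mut tampered = leaves;
    tampered[2] = leaf(b"cX");
    assert_ne!(merkle_root(tampered), root);
    println!("root verified; tampering detected");
}
```

Keeping the root in the signed manifest means a receiver can verify each chunk as it arrives and pinpoint exactly which chunks to re-fetch, which is what makes resume and integrity checking share one data structure.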
Built With
- forward-error-correction
- multipath-routing
- quic-protocol (rfc9000)
- raptorq-code (rfc6330)
- raspberry-pi-pico-2w
- rfc9001
- rust