ReLink: Smart Resilient File Transfer Protocol
π‘ Inspiration
During a Formula 1 race weekend, engineering teams face a critical challenge: transferring gigabytes of telemetry data from the racetrack to factory headquarters over unreliable mobile networks. When a 5GB transfer fails at 98%, current solutions restart from zeroβwasting precious time when every second counts for race strategy decisions.
This problem isn't unique to F1. Media studios, rural laboratories, mobile medical clinics, remote engineering sites, and disaster response teams all face the same frustration: unstable networks that force file transfers to restart from scratch.
We asked ourselves: "Why should a 2-hour file transfer restart completely because of a 2-second connection drop?"
That question inspired ReLinkβa smart file transfer system that remembers progress and resumes instantly, no matter how many times the connection fails.
π― What It Does
ReLink is a fast, resilient file transfer system built specifically for unstable network environments. It directly addresses all five hackathon requirements:
Core Features:
β‘ FAST TRANSFER
- Intelligent compression (up to 70% size reduction)
- Multi-threaded chunk transmission
- Adaptive chunk sizing based on network conditions
π RESILIENT FOR UNSTABLE LINKS
- Auto-resume from exact failure point
- Zero-restart recoveryβnever retransmit completed data
- Persistent state management survives crashes
- Seamless reconnection with no manual intervention
β INTEGRITY CHECKS
- SHA-256 cryptographic hashing on every chunk
- Real-time corruption detection
- End-to-end file verification
- Zero tolerance for corrupted data
ποΈ PRIORITY CHANNELS
- Multi-level queue system:
CRITICALβHIGHβNORMALβLOW - Emergency override for time-sensitive files
- Intelligent bandwidth allocation
- Separate processing prevents blocking
- Multi-level queue system:
π REAL-TIME STATUS
- Live progress dashboard with per-chunk visibility
- Dynamic ETA with accuracy improvement
- Network quality metrics (latency, packet loss, throughput)
- Interactive Streamlit monitoring interface
Example Use Case:
An F1 engineer at Silverstone sends 8GB telemetry to the Brackley factory. Connection drops at lap 45. ReLink automatically resumes from that exact chunk when connectivity returnsβno time wasted, no data lost.
π οΈ How We Built It
Architecture Overview
ReLink implements a custom application-layer protocol on top of TCP, designed using OSI model principles:
βββββββββββββββββββββββββββββββββββββββ
β Layer 7: Application β β Custom commands (START, CHUNK, ACK)
βββββββββββββββββββββββββββββββββββββββ€
β Layer 6: Presentation β β Compression, Encryption, Hashing
βββββββββββββββββββββββββββββββββββββββ€
β Layer 5: Session β β Reconnection logic, State persistence
βββββββββββββββββββββββββββββββββββββββ€
β Layer 4: Transport (TCP) β β Reliable byte-stream delivery
βββββββββββββββββββββββββββββββββββββββ
Technical Implementation:
Step 1: File Segmentation
# Split file into chunks
chunk_size = 1024 * 1024 # 1MB chunks
chunks = [file_data[i:i+chunk_size] for i in range(0, len(file_data), chunk_size)]
Step 2: Chunk Hashing & Transmission
import hashlib
for idx, chunk in enumerate(chunks):
chunk_hash = hashlib.sha256(chunk).hexdigest()
send_chunk(idx, chunk, chunk_hash, priority)
Step 3: Acknowledgment Protocol
- Receiver validates chunk hash
- Sends
ACKwith chunk index - Sender marks chunk as complete
Step 4: State Persistence
transfer_state = {
'file_id': uuid,
'total_chunks': n,
'completed_chunks': [1, 2, 5, 7, ...],
'missing_chunks': [3, 4, 6, ...],
'priority': 'HIGH'
}
# Save to disk for crash recovery
Step 5: Resume Logic
def resume_transfer():
state = load_state_from_disk()
missing = state['missing_chunks']
for chunk_idx in missing:
send_chunk(chunk_idx) # Only send missing chunks
Step 6: Priority Queue Management
priority_queues = {
'CRITICAL': PriorityQueue(),
'HIGH': PriorityQueue(),
'NORMAL': PriorityQueue(),
'LOW': PriorityQueue()
}
# Process CRITICAL first, then HIGH, etc.
Technology Stack:
| Component | Technology | Purpose |
|---|---|---|
| Language | Python 3.x | Core implementation |
| Networking | socket library |
TCP/IP communication |
| Hashing | hashlib |
SHA-256 integrity checks |
| Compression | zlib |
Bandwidth optimization |
| Encryption | cryptography |
AES data security |
| UI Dashboard | Streamlit | Real-time monitoring |
| Testing | pytest |
Unit & integration tests |
π§ Challenges We Faced
Challenge 1: Optimal Chunk Size Selection
- Problem: Too small = overhead; too large = retransmission waste
- Solution: Implemented adaptive chunking based on network stability
- Good connection: 5MB chunks
- Unstable connection: 512KB chunks
- Algorithm adjusts dynamically using packet loss metrics
Challenge 2: State Persistence Across Crashes
- Problem: Application crashes could corrupt transfer state
- Solution: Implemented atomic write operations with temporary files
python # Write to temp file first, then atomic rename with open('state.tmp', 'w') as f: json.dump(state, f) os.rename('state.tmp', 'state.json') # Atomic operation
Challenge 3: Priority Starvation
- Problem: Low-priority files never completing when high-priority files keep coming
- Solution: Implemented aging mechanismβlow-priority files gradually increase priority
python age_factor = time.time() - enqueue_time effective_priority = base_priority + (age_factor / 3600) # +1 priority per hour
Challenge 4: Network Quality Detection
- Problem: How to know when to switch chunk sizes?
- Solution: Implemented sliding window packet loss calculation
python packet_loss_rate = lost_packets / total_packets_last_10s if packet_loss_rate > 0.05: # More than 5% loss reduce_chunk_size()
Challenge 5: Concurrent Transfer Management
- Problem: Multiple simultaneous transfers competing for bandwidth
- Solution: Implemented fair-share bandwidth allocation with priority weighting
π What We Learned
Technical Skills:
β
Network Programming: Deep understanding of TCP sockets, connection handling, and protocol design
β
Cryptography: Implementing secure hashing and encryption for data integrity
β
File Systems: Efficient chunked file I/O and state persistence strategies
β
Concurrency: Multi-threaded programming and race condition prevention
β
Data Structures: Priority queues, state machines, and efficient lookup tables
System Design Principles:
β
Fault Tolerance: Building systems that gracefully handle failures
β
Idempotency: Ensuring operations can be safely retried
β
State Management: Designing robust state persistence mechanisms
β
Performance Optimization: Balancing speed vs. reliability trade-offs
Real-World Engineering:
β
User-Centric Design: Focusing on actual pain points (unstable networks)
β
Testing in Adverse Conditions: Simulating network failures and packet loss
β
Documentation: Writing clear technical documentation for complex systems
Mathematical Concepts Applied:
Transfer Efficiency Calculation: $$\text{Efficiency} = \frac{\text{Useful Data Transferred}}{\text{Total Bytes Sent}} \times 100\%$$
With ReLink's resume capability: $$\text{Efficiency}_{\text{ReLink}} = \frac{N - \text{completed chunks}}{N} \times 100\%$$
Traditional restart approach: $$\text{Efficiency}_{\text{Traditional}} = 0\% \text{ (on failure)}$$
Expected Transfer Time with Failures:
For a file of size $S$ with average connection stability $p$ (probability of staying connected per time unit):
Traditional: $E[T_{\text{trad}}] = \frac{S}{B} \cdot \frac{1}{p^{S/B}}$ (exponentially increases with failures)
ReLink: $E[T_{\text{ReLink}}] = \frac{S}{B} + k \cdot t_{\text{reconnect}}$ (linear with small reconnection overhead)
Where $B$ = bandwidth, $k$ = number of failures, $t_{\text{reconnect}}$ = reconnection time
π Accomplishments
β
Fully Functional Protocol: Complete implementation of custom file transfer protocol over TCP
β
Zero Data Loss: Extensive testing shows 100% data integrity across multiple failure scenarios
β
70% Bandwidth Savings: Compression and resume eliminate redundant retransmissions
β
Production-Ready Code: Clean, modular architecture with comprehensive error handling
β
Real-World Validation: Successfully transferred 10GB+ files with simulated network failures
β
Problem Statement Alignment: Direct implementation of all 5 hackathon requirements
π What's Next for ReLink
Immediate Enhancements:
- Multi-path TCP: Combine Wi-Fi + cellular for faster, more reliable transfers
- Cloud Integration: AWS S3 / Google Cloud Storage sync with OAuth authentication
- Mobile App: Flutter-based iOS/Android client for on-the-go monitoring
Advanced Features:
- Peer-to-Peer Mode: Direct device-to-device transfers without central server
- ML-Based Optimization: Predict optimal chunk sizes using network history
- Bandwidth Prediction: Forecast transfer completion times using time-series analysis
- Enterprise Integration: APIs for existing file management systems
Commercialization:
- SaaS Platform: Cloud-hosted ReLink service for businesses
- F1 Partnership: Tailored solution for motorsport teams
- Disaster Response: Partnership with emergency services organizations
π Technical Specifications
Performance Metrics:
- Resume Time: < 500ms after reconnection
- Chunk Verification: < 10ms per 1MB chunk
- Memory Footprint: ~50MB for concurrent 10-file transfers
- CPU Usage: < 15% on modern processors
- Network Overhead: < 2% (protocol headers + hashing)
Supported Scenarios:
β
Multiple disconnections during single transfer
β
Application crash and restart mid-transfer
β
Simultaneous multi-file transfers with different priorities
β
Network switching (Wi-Fi β cellular β Wi-Fi)
β
Long-duration transfers (hours to days)
π― Impact & Applications
ReLink solves critical file transfer challenges across industries specified in the problem statement:
| Application | Impact |
|---|---|
| ποΈ Racetrack β Factory | F1 teams save hours during race weekends with reliable telemetry transfers |
| π¬ Media Studios | Raw footage transfers complete despite unstable connections |
| π¬ Rural Labs | Research data reliably uploaded from remote locations |
| π₯ Mobile Clinics | Medical imaging securely transferred from disaster zones |
| ποΈ Remote Engineering | CAD files shared between field and office without interruption |
| π¨ Disaster Sites | Emergency response data synchronized despite infrastructure damage |
π€ Team & Collaboration
Our team brought together complementary skills:
- Team Member 1: File system architecture, chunking algorithms, integrity verification
- Team Member 2: UI development, Streamlit dashboard, real-time monitoring
- Team Member 3: Encryption, security protocols, comprehensive testing
We used Git for version control, Agile methodology for sprint planning, and pair programming for critical protocol logic. Daily standups kept us aligned, and code reviews ensured quality.
π References & Resources
- RFC 793: Transmission Control Protocol (TCP specification)
- BitTorrent Protocol Specification (inspiration for chunked transfers)
- QUIC Protocol (insights on connection migration)
- Resilient File Transfer research papers
- Python
socketandhashlibdocumentation
π Conclusion
ReLink demonstrates that with solid computer science fundamentalsβnetworking, cryptography, file systems, and reliability engineeringβwe can solve real-world problems affecting millions of users daily.
By building a smart file transfer system that never gives up, we've created a solution that keeps F1 teams competitive, helps doctors save lives in remote areas, and ensures critical data reaches its destination no matter how unstable the network.
ReLink proves that when connectivity fails, innovation prevails. π
Built with β€οΈ for the F1 Hackathon | Powered by Python, TCP, and persistence
Log in or sign up for Devpost to join the conversation.