Melodic Airways — Transforming Flight Routes into Music
Inspiration
Have you ever wondered what a flight from New York to Tokyo would sound like? What if the curvature of Earth, the distance traveled, and the direction of your journey could be transformed into a beautiful musical composition?
The inspiration for Melodic Airways came from a simple yet profound question: Can we make data tangible through sound?
We live in a world drowning in abstract data—numbers, coordinates, graphs—but our brains are wired for stories, emotions, and sensory experiences. Geography textbooks show us maps, but they don't help us feel the vastness of an ocean crossing or the complexity of a multi-stop journey. That's where data sonification comes in.
We wanted to create something that would:
- Make geography education more engaging and memorable through multi-sensory learning
- Give musicians and artists a new source of creative inspiration
- Demonstrate practical AI/ML applications in an accessible, beautiful way
- Bridge the gap between cold data and human emotion
The result? A platform that transforms 67,000+ real flight routes into unique musical compositions, immersive VR experiences, and AI-generated soundscapes.
What it does
Melodic Airways is a full-stack web application that converts aviation data into immersive musical and visual experiences. Here's what makes it special:
🎵 Core Features
1. Intelligent Music Generation
- Transforms any flight route into a unique musical composition
- Uses real OpenFlights data (3,000+ airports, 67,000+ routes)
- 6 musical scales automatically selected based on route characteristics
- Dynamic tempo (70-140 BPM) based on flight distance
- Three-track harmony: melody, harmony, and bass
- Downloadable MIDI files for further editing
2. AI-Powered Genre Composer
- PyTorch neural networks generate genre-specific music
- 8 AI genres: Classical, Jazz, Electronic, Ambient, Rock, World, Cinematic, Lofi
- 70-105 second compositions with proper musical structure
- AI recommends genres based on route characteristics
- Genre blending with adjustable ratios
3. Immersive VR/AR Experiences
- 3D flight path visualization with animated aircraft
- 4 unique visual styles (Default, Neon, Aurora, Cosmic)
- Synchronized spatial audio with flight animation
- Multi-platform export (WebXR, Oculus, HTC Vive, Unity, ARKit, ARCore)
- Every experience is unique with randomized elements
4. Personal Travel Logs
- Create multi-waypoint journeys with timestamps
- Convert entire travel logs into musical stories
- Tag-based organization and filtering
- Share publicly or keep private
- Transform your real travels into compositions
5. Vector Embeddings & Analytics
- 7 types of embeddings for similarity search
- Automatic recommendation system
- Find similar routes and compositions
- DuckDB-powered analytics with CSV export
- Real-time performance monitoring
6. Real-time Collaboration
- WebSocket-based collaborative editing
- Share compositions with the community
- Like, comment, and remix features
- Public gallery of user creations
How we built it
Architecture Overview
We built a modern, scalable full-stack application with clear separation of concerns:
Frontend (React + TypeScript)
- Framework: React 18.3 with TypeScript for type safety
- Styling: Tailwind CSS + shadcn/ui for beautiful, accessible components
- 3D Graphics: Three.js with @react-three/fiber for VR experiences
- State Management: React Query for server state, React Router for navigation
- Maps: Mapbox GL for interactive route visualization
- Build Tool: Vite for lightning-fast development
Backend (Python + FastAPI)
- Framework: FastAPI for high-performance async API
- AI/ML: PyTorch for neural network music generation
- Music: Mido library for MIDI file creation
- Graph Algorithms: NetworkX for route pathfinding (Dijkstra's algorithm)
- Authentication: JWT tokens with python-jose
- Validation: Pydantic schemas for all data
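The NetworkX pathfinding mentioned above can be sketched as follows. The airport codes and distances here are illustrative, not values from the actual dataset:

```python
import networkx as nx

# Airports as nodes, routes as distance-weighted edges;
# Dijkstra's algorithm finds the shortest path between two airports.
G = nx.Graph()
G.add_edge("JFK", "LHR", weight=5540)
G.add_edge("LHR", "HND", weight=9570)
G.add_edge("JFK", "HND", weight=10870)

# Direct JFK -> HND (10,870 km) beats the stopover route (15,110 km)
path = nx.dijkstra_path(G, "JFK", "HND", weight="weight")
```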
Databases & Caching
- Primary Database: MariaDB with async SQLAlchemy
- Caching Layer: Redis Cloud (30MB plan) with 30-minute TTL
- Analytics: DuckDB for columnar storage and vector embeddings
- Real-time: Redis Pub/Sub for WebSocket collaboration
Key Technical Decisions
1. Music Generation Algorithm
```python
# Latitude → scale degree (which note to play)
note_index = int((latitude + 90) / 180 * len(scale)) % len(scale)

# Longitude → octave shift (pitch variation)
octave_shift = int((longitude + 180) / 360 * 2) - 1

# Progress → velocity (volume increases during the flight)
velocity = 60 + int(progress * 40)  # 60 → 100

# Distance → duration (longer routes = longer compositions)
duration = min(30, max(10, distance / 500))  # 10–30 seconds
```
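Putting these mappings together, a minimal melody generator might look like this. The scale, the straight-line path sampling, and the helper name are illustrative only; the real system walks the great-circle route and selects a scale per route:

```python
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # one octave of C major, MIDI note numbers

def note_for(lat, lon, progress, scale=C_MAJOR):
    """Map one point on the flight path to a (MIDI note, velocity) pair."""
    idx = int((lat + 90) / 180 * len(scale)) % len(scale)
    octave_shift = int((lon + 180) / 360 * 2) - 1  # -1, 0, or +1 octave
    velocity = 60 + int(progress * 40)             # 60 -> 100 over the flight
    return scale[idx] + 12 * octave_shift, velocity

# Sample 8 points between New York and Tokyo (approximate coordinates)
start, end = (40.64, -73.78), (35.55, 139.78)
melody = [
    note_for(start[0] + t * (end[0] - start[0]),
             start[1] + t * (end[1] - start[1]),
             t)
    for t in (i / 7 for i in range(8))
]
```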
2. AI Genre Composer Architecture
- GenreEmbeddingModel: 3-layer feedforward network (10→64→32 dimensions)
- MusicPatternGenerator: 2-layer LSTM (32→128→12 chromatic notes)
- Trained on route characteristics to generate genre-appropriate patterns
- Real-time inference with CPU optimization
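A hypothetical PyTorch sketch of this two-model pipeline. The layer sizes follow the description above; everything else, including feeding the repeated embedding as the LSTM input sequence, is an assumption:

```python
import torch
import torch.nn as nn

class GenreEmbeddingModel(nn.Module):
    """Feedforward encoder: 10 route features -> 64 -> 32-dim embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 32))

    def forward(self, x):
        return self.net(x)

class MusicPatternGenerator(nn.Module):
    """2-layer LSTM over 32-dim inputs -> logits over 12 chromatic notes."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(32, 128, num_layers=2, batch_first=True)
        self.head = nn.Linear(128, 12)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)

embed = GenreEmbeddingModel()
generate = MusicPatternGenerator()
features = torch.randn(1, 10)                        # one route, 10 features
seq = embed(features).unsqueeze(1).repeat(1, 16, 1)  # 16 timesteps of the embedding
logits = generate(seq)                               # per-step note logits
```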
3. VR Experience Generation
- Calculate great circle route with 200 3D points
- Randomly select visual style (4 options) for uniqueness
- Synchronize music playback with animation timeline
- Variable animation speed (0.8x - 1.2x) for variety
- Export to multiple platforms (Unity, WebXR, native VR)
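The great-circle sampling in step one can be sketched with spherical linear interpolation on the unit sphere; the function name and coordinates below are illustrative:

```python
import math

def great_circle_points(lat1, lon1, lat2, lon2, n=200):
    """Interpolate n points along the great circle between two coordinates."""
    def to_xyz(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo),
                math.cos(la) * math.sin(lo),
                math.sin(la))

    def to_latlon(x, y, z):
        return (math.degrees(math.asin(z)), math.degrees(math.atan2(y, x)))

    a, b = to_xyz(lat1, lon1), to_xyz(lat2, lon2)
    # Central angle between the two endpoints, clamped for float safety
    omega = math.acos(max(-1.0, min(1.0, sum(p * q for p, q in zip(a, b)))))
    points = []
    for i in range(n):
        t = i / (n - 1)
        if omega == 0:
            points.append((lat1, lon1))
            continue
        s1 = math.sin((1 - t) * omega) / math.sin(omega)
        s2 = math.sin(t * omega) / math.sin(omega)
        x, y, z = (s1 * p + s2 * q for p, q in zip(a, b))
        points.append(to_latlon(x, y, z))
    return points

path = great_circle_points(40.64, -73.78, 35.55, 139.78)  # roughly JFK -> HND
```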
4. Vector Embeddings System
- 7 embedding types with dimensions ranging from 32D to 128D
- Automatic syncing to DuckDB on every user interaction
- Cosine similarity for recommendation engine
- CSV export for external analysis
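The cosine similarity behind the recommendation engine reduces to a few lines; a minimal pure-Python sketch (the production system presumably computes this over the vectors stored in DuckDB):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 means
    identical direction, 0.0 means unrelated (orthogonal)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```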
5. Performance Optimization
- Redis caching reduces database load by 80%
- Async database operations for concurrent requests
- Rate limiting (1000 req/min) to prevent abuse
- Optimized SQL queries with proper indexing
- DuckDB columnar storage for fast analytics
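The caching strategy follows the cache-aside pattern with a TTL. A minimal in-process sketch of the idea (the class name is hypothetical; the production system uses Redis Cloud with a 30-minute TTL):

```python
import time

class TTLCache:
    """In-process sketch of a TTL cache: entries expire after ttl_seconds,
    so a miss forces a fresh read from the database."""
    def __init__(self, ttl_seconds=1800):  # 1800 s = 30 minutes
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```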
Challenges we ran into
1. Music Theory Meets Code
Challenge: Translating geographic coordinates into musically pleasing compositions wasn't straightforward. Early versions sounded random and chaotic.
Solution: We studied music theory extensively and implemented intelligent scale selection based on route characteristics. We added three-track harmony (melody, harmony, bass) and dynamic velocity changes to create more natural-sounding compositions.
2. PyTorch Model Training
Challenge: Training neural networks to generate genre-specific music required careful feature engineering and hyperparameter tuning.
Solution: We created a custom embedding model that captures route characteristics (distance, direction, lat/lon ranges) and feeds them into an LSTM for pattern generation. After multiple iterations, we achieved genre-appropriate outputs with 70-105 second compositions.
3. Real-time VR Synchronization
Challenge: Synchronizing 3D animations with AI-generated music in real-time was complex, especially with variable animation speeds.
Solution: We implemented a timeline-based system where music playback and animation progress are tightly coupled. The system calculates exact timing for each frame and adjusts playback speed accordingly.
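One way to realize that coupling is to derive the playback rate from the two durations and precompute a music timestamp for every animation frame. This is a hypothetical helper illustrating the idea, not the actual implementation:

```python
def sync_timeline(total_frames, fps, music_duration_s):
    """Pick the playback rate that makes the animation and the music end
    together, then map each animation frame to a music timestamp (seconds)."""
    anim_duration = total_frames / fps
    rate = music_duration_s / anim_duration  # music seconds per animation second
    return [round(f / fps * rate, 3) for f in range(total_frames)]

# 240 frames at 24 fps against a 10-second composition
timestamps = sync_timeline(240, 24, 10.0)
```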
4. Database Performance at Scale
Challenge: With 67,000+ routes and growing user data, database queries were becoming slow.
Solution: We implemented a three-tier storage strategy:
- Redis Cloud for hot data (30-minute TTL)
- MariaDB for persistent data with proper indexing
- DuckDB for analytics and vector embeddings
This reduced average response time from 800ms to under 100ms.
5. Cross-Platform VR Export
Challenge: Different VR platforms (Oculus, Unity, WebXR) require different data formats and coordinate systems.
Solution: We built a flexible export system that transforms our internal 3D data into platform-specific formats. Each export includes proper coordinate transformations, asset references, and platform-specific optimizations.
6. Vector Embedding Synchronization
Challenge: Keeping 7 different embedding types synchronized across multiple databases was error-prone.
Solution: We created a unified VectorSyncHelper service that automatically syncs embeddings to DuckDB on every user interaction. This ensures consistency and enables powerful analytics.
7. Windows vs Linux Compatibility
Challenge: Development team used both Windows and Linux, causing script compatibility issues.
Solution: We created parallel setup scripts (setup.bat for Windows, setup.sh for Linux/Mac) that handle platform-specific commands while maintaining identical functionality.
Accomplishments that we're proud of
🎯 Technical Achievements
- 67,000+ Routes Transformed: Processed the entire OpenFlights dataset and made every route playable as music
- AI Music Generation: Built working PyTorch models that generate genre-specific compositions
- Sub-100ms Response Times: Achieved through intelligent caching and optimization
- 7 Embedding Types: Comprehensive analytics system with automatic synchronization
- Multi-Platform VR: Export to 6+ platforms (WebXR, Unity, Oculus, ARKit, ARCore)
- Zero Downtime Deployment: Docker-based deployment with health checks
🎨 Creative Achievements
- Unique Every Time: Every VR experience has randomized visuals and animation speeds
- Musically Coherent: Compositions actually sound good, not just random notes
- Educational Value: Makes geography and graph theory tangible through sound
- Accessible Design: Beautiful UI with shadcn/ui components and Tailwind CSS
📊 Scale Achievements
- 3,000+ Airports: Complete global coverage
- Real-time Collaboration: WebSocket-based multi-user editing
- 30-Minute Caching: Optimized for Redis Cloud's 30MB free tier
- 1000 Req/Min: Rate limiting handles high traffic
🔒 Security Achievements
- Zero Credentials Exposed: All sensitive data in environment variables
- JWT Authentication: Secure token-based auth
- Input Validation: Pydantic schemas prevent injection attacks
- CORS Protection: Configurable origin whitelist
What we learned
Technical Lessons
Data Sonification is Hard: Converting data to music requires a deep understanding of both domains. We learned that music theory is just as important as coding skill.
AI/ML in Production: Deploying PyTorch models in a web application taught us about model optimization, inference speed, and CPU vs GPU tradeoffs.
Caching Strategy Matters: Redis Cloud's 30MB limit forced us to be strategic about what we cache. We learned to prioritize hot data and use appropriate TTLs.
Async Python is Powerful: FastAPI's async capabilities allowed us to handle concurrent requests efficiently, but required careful attention to database connection pooling.
Vector Embeddings are Versatile: We discovered that vector embeddings aren't just for NLP—they're perfect for recommendation systems, similarity search, and analytics.
Design Lessons
Simplicity Wins: Early versions had too many options. We learned to hide complexity behind intelligent defaults while still allowing customization.
Visual Feedback is Critical: Users need to see progress during music generation. We added real-time updates and progress indicators.
Mobile Matters: Many users access the site on phones. We made the entire experience responsive and touch-friendly.
Project Management Lessons
Documentation is Investment: Comprehensive README and API docs saved countless hours of support time.
Setup Scripts are Essential: Automated setup scripts (setup.bat, setup.sh) made onboarding new developers trivial.
Version Control Discipline: Clear commit messages and a branch strategy prevented merge conflicts.
What's next for Melodic Airways — Transforming Flight Routes into Music
Short-term (Next 3 Months)
- Mobile Apps: Native iOS and Android apps with offline music generation
- More AI Genres: Expand from 8 to 20+ genres (Hip-Hop, Country, Reggae, etc.)
- Collaborative Playlists: Users can create and share playlists of route compositions
- Advanced Music Export: Support for more formats (MP3, WAV, FL Studio, Ableton)
- Social Features: Follow users, like compositions, comment system
Medium-term (6-12 Months)
- Real Flight Data Integration: Use live flight tracking APIs for real-time compositions
- Custom Instruments: Let users choose MIDI instruments beyond piano
- Music Visualization: Real-time waveform and spectrogram displays
- API Marketplace: Allow developers to build on our platform
- Educational Partnerships: Work with schools to integrate into curriculum
Long-term (1-2 Years)
- Machine Learning Improvements: Train on user feedback to improve compositions
- Virtual Concerts: Live-streamed VR concerts of flight route music
- Physical Installations: Airport installations that play music based on departing flights
- Spotify Integration: Export compositions directly to Spotify playlists
- Enterprise Features: Custom branding for airlines and travel companies
Research Directions
- Other Data Types: Apply sonification to weather patterns, stock markets, sports data
- Therapeutic Applications: Explore use in music therapy and meditation
- Accessibility: Audio descriptions for visually impaired users
- Cultural Preservation: Document traditional flight routes through music
Try It Yourself
Live Demo: [Coming Soon]
GitHub: https://github.com/aviralSri23455/Melodic-Airways-Transforming-Flight
API Docs: http://localhost:8000/docs (after setup)
Quick Start
```shell
# Clone the repository
git clone https://github.com/aviralSri23455/Melodic-Airways-Transforming-Flight.git
cd Melodic-Airways-Transforming-Flight

# Backend setup (Windows)
cd backend\bat
setup.bat

# Frontend setup
cd ../..
npm install
npm run dev
```
Visit http://localhost:5173 and start creating music from flight routes!
Built With
- Frontend: React, TypeScript, Tailwind CSS, Three.js, Vite
- Backend: Python, FastAPI, PyTorch, NetworkX, Mido
- Databases: MariaDB, Redis Cloud, DuckDB
- APIs: OpenFlights, Mapbox GL
- Deployment: Docker, Docker Compose
Team
Built with passion by developers who believe data can be beautiful, educational, and inspiring.
- Aviral Srivastava (team leader)
- Shani Pratap Singh
- Karina Bavishi
- Mytri Mohan
License
MIT License - Feel free to use, modify, and build upon this project!
"Every flight tells a story. We just help you hear it." 🎵✈️