🌪️ Inspiration

Meteorologists spend hours interpreting complex weather charts, yet public communication remains manual and time-consuming. We envisioned a world where AI could instantly translate raw atmospheric data into engaging video narratives, democratizing access to weather insights while freeing experts to focus on research. Inspired by broadcast automation and climate tech innovation, NimbusNews bridges the gap between data and storytelling.

🛠️ What it does

NimbusNews transforms weather charts (like 500 hPa vorticity maps) into TV-ready video reports:

  1. Analyzes charts using Llama 3.2 Vision AI to detect patterns (e.g., storm trajectories)
  2. Generates plain-language summaries with Llama 3.1
  3. Converts text to lifelike audio via Deepgram AI
  4. Syncs narration to an AI anchor using a modified open-source Wav2Lip
  5. Stores the finished video in AWS S3, where it can be viewed from the Streamlit dashboard
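
A minimal sketch of how the stages chain together (the function names here are illustrative placeholders, not our actual module API):

```python
# Hypothetical sketch of the NimbusNews pipeline: each stage is passed in
# as a callable so the orchestration stays decoupled from the providers.

def run_pipeline(chart_path, analyze, summarize, synthesize, lip_sync):
    """Chain the four stages: chart -> analysis -> script -> audio -> video."""
    analysis = analyze(chart_path)   # e.g. Llama 3.2 Vision chart analysis
    script = summarize(analysis)     # e.g. Llama 3.1 plain-language summary
    audio = synthesize(script)       # e.g. Deepgram Aura text-to-speech
    return lip_sync(audio)           # e.g. modified Wav2Lip anchor video
```

In the real system each callable wraps a provider API call; the linear hand-off is the whole design.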

⚙️ How we built it

  • AI Core: Llama 3.2 Vision (Cloudflare WorkersAI | AWS Bedrock) for chart analysis
  • NLP Pipeline: Fine-tuned Llama 3.1 70B for weather-specific summarization
  • Audio/Video: Deepgram Aura + Wav2Lip fork with Python 3.10 support
  • Infra: AWS S3 (storage), AWS Bedrock, Cloudflare CDN & Cloudflare WorkersAI
  • Frontend: Streamlit dashboard with real-time S3 integration
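
The dashboard's S3 integration boils down to filtering a `list_objects_v2` response for rendered videos and streaming them with presigned URLs; a hedged sketch (bucket name and key layout are placeholders):

```python
def list_videos(response, prefix="videos/"):
    """Filter a boto3 list_objects_v2 response down to rendered .mp4 keys."""
    return [obj["Key"] for obj in response.get("Contents", [])
            if obj["Key"].startswith(prefix) and obj["Key"].endswith(".mp4")]

def render_dashboard():
    """Streamlit entry point; launch with `streamlit run dashboard.py`."""
    import boto3           # imported here so list_videos stays dependency-free
    import streamlit as st

    bucket = "nimbusnews-renders"  # placeholder bucket name
    s3 = boto3.client("s3")
    st.title("NimbusNews reports")
    for key in list_videos(s3.list_objects_v2(Bucket=bucket, Prefix="videos/")):
        url = s3.generate_presigned_url(
            "get_object", Params={"Bucket": bucket, "Key": key})
        st.video(url)  # stream the rendered report in the browser
```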

🔥 Challenges we ran into

  • Wav2Lip Resurrection: Rewrote roughly 20% of its codebase to support modern dependencies and batch processing. We wrapped Wav2Lip in its own Flask server, run separately, and hit the Flask API from the weatherman codebase. We also modified the code to accept MPS acceleration, since Apple Silicon was the only viable GPU available to us; we tried resources like EC2, but the free tier offered no compatible instance.
  • Cloud Juggling: Debugging API latency between Cloudflare WorkersAI ↔ AWS Bedrock required custom retry logic. We also give users the option to switch between the two powerhouses.
  • Lip-Sync Hell: Aligning phonemes from Deepgram’s audio to Wav2Lip’s frames demanded frame-accurate timestamping.
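
The retry logic amounts to exponential backoff on the primary provider before falling back to the other; a simplified sketch (the zero-arg callables stand in for the actual WorkersAI/Bedrock requests, and the parameters are illustrative):

```python
import time

def call_with_retry(primary, fallback, attempts=3, base_delay=0.5):
    """Retry the primary provider with exponential backoff, then fall back.

    `primary`/`fallback` are zero-arg callables wrapping e.g. a Cloudflare
    WorkersAI or AWS Bedrock request (names are placeholders).
    """
    for i in range(attempts):
        try:
            return primary()
        except Exception:
            time.sleep(base_delay * 2 ** i)  # back off: 0.5s, 1s, 2s, ...
    return fallback()
```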

🏆 Accomplishments we're proud of

  • First-to-Market: Built the only weather-specific AI video pipeline using open-source models.
  • Real-World Impact: Processed NOAA’s 500 hPa charts 150x faster than manual analysis.
  • Seamless Fusion: Bridged 4 distinct AI systems (vision, NLP, TTS, lip-sync) into a single workflow.
  • Custom Agentic System: Taking LangChain as our inspiration, we built an agentic architecture tailored to our use case.
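
A stripped-down sketch of the agent loop's shape (the planner protocol and tool names are illustrative, not our actual interface):

```python
# Hypothetical LangChain-style agent loop: a planner inspects the state
# and names the next tool to run, until it decides the job is done.

def run_agent(planner, tools, state):
    """Repeatedly ask the planner for the next tool and apply it to state."""
    while True:
        action = planner(state)     # e.g. an LLM choosing the next step
        if action == "done":
            return state
        state = tools[action](state)  # dispatch to the named tool
```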

📚 What we learned

  • Legacy Code Modernization: How to salvage deprecated repos (Wav2Lip) with dependency patching and partial rewrites.
  • Tech Stack Mastery: Leveraging Llama Vision AI for spatial analysis, Deepgram’s prosody control for narration, and AWS’s auto-scaling for cost efficiency.
  • Teamwork at Scale: Coordinating AI/cloud workflows across AWS and Cloudflare and streamlining the code for such a flow.

🚀 What's next for NimbusNews

Beyond Weather: Expanding to automate all data-driven news:

  1. Aggregation Engine: Scrape/parse sources (AP News, SEC filings, arXiv papers).
  2. Topic Segmentation: Cluster articles into categories (finance, tech, health) using NLP and machine learning.
  3. Multi-Anchor Studio: Generate region-specific reporters using StyleGAN + Wav2Lip.
  4. Live Newsroom: Combine video segments into 24/7 streams with ad breaks.

Endgame: Become the "Canva for Newsrooms", letting anyone turn datasets into broadcast-quality stories.
