Inspiration
Faced with an information-overload problem where traditional news consumption feels stale and time-consuming, we were inspired by the alternative "brainrot" content style that makes news digestible and engaging, bringing the doomscrolling format to current events. Watching creators spend hours manually producing short-form news reels, we saw an opportunity to automate the entire pipeline: from fetching top headlines to generating polished, shareable video content. We wanted to bridge the gap between staying informed and the addictive, scrollable format that younger consumers are familiar with.
What it does
Brainrot News Reels uses an automated content generation pipeline that transforms news articles into engaging short-form video reels. The system:
- Fetches breaking news from NewsAPI, automatically extracting full article content
- Generates viral-style scripts using AI (Claude via OpenRouter) that transform dry news into attention-grabbing narratives with hooks
- Creates natural-sounding narration using ElevenLabs text-to-speech with word-level timestamps
- Composites professional videos by combining background footage, AI-generated audio, and synchronized captions using FFmpeg
- Delivers a TikTok-style feed through a React Native mobile app where users can scroll through an endless stream of news reels
The entire pipeline runs automatically, requiring zero manual intervention. Users get a personalized feed of news content in the format they actually want to consume.
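The stages above chain into one automated run. A minimal sketch of that shape in Python, where every function name is a hypothetical stand-in for the real service (the actual calls go to OpenRouter, ElevenLabs, and FFmpeg):

```python
# Hypothetical sketch of the article -> script -> audio -> video chain.
from dataclasses import dataclass


@dataclass
class Reel:
    headline: str
    script: str = ""
    audio: bytes = b""
    video_path: str = ""


def generate_script(headline: str) -> str:
    # Placeholder for the Claude-via-OpenRouter call.
    return f"You won't BELIEVE this: {headline}"


def synthesize_audio(script: str) -> bytes:
    # Placeholder for the ElevenLabs TTS call.
    return script.encode("utf-8")


def composite_video(reel: Reel) -> str:
    # Placeholder for the FFmpeg compositing step.
    return f"/reels/{reel.headline[:20].replace(' ', '_')}.mp4"


def run_pipeline(headline: str) -> Reel:
    """Run every stage in order with no manual intervention."""
    reel = Reel(headline=headline)
    reel.script = generate_script(reel.headline)
    reel.audio = synthesize_audio(reel.script)
    reel.video_path = composite_video(reel)
    return reel
```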
How we built it
We architected a full-stack system with clear separation of concerns:
Backend (FastAPI + PostgreSQL)
- Built a modular service architecture with dedicated services for each pipeline stage: NewsFetcher, ScriptGenerator, AudioGenerator, and VideoCompositor
- Used SQLAlchemy ORM for database management, storing articles, reels, captions, and user data
- Integrated multiple APIs: NewsAPI for content, OpenRouter (Claude) for script generation, ElevenLabs for TTS, and AWS S3 for media storage
- Implemented automatic article fetching with deduplication and content extraction using trafilatura for LLM-friendly text to minimize token usage while maximizing model comprehension
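The deduplication step can be sketched as keying articles by a normalized URL hash; this is a stdlib-only illustration (the real service additionally runs `trafilatura.extract()` on the fetched HTML to get LLM-friendly text, omitted here), and the field names are assumptions:

```python
# Sketch of article deduplication by normalized URL hash.
import hashlib


def article_key(url: str) -> str:
    # Normalize trivially different URLs (trailing slash, case) before hashing.
    return hashlib.sha256(url.rstrip("/").lower().encode()).hexdigest()


def dedupe(articles: list[dict], seen: set[str]) -> list[dict]:
    """Keep only articles whose URL hash has not been stored yet."""
    fresh = []
    for art in articles:
        key = article_key(art["url"])
        if key not in seen:
            seen.add(key)
            fresh.append(art)
    return fresh
```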
Video Processing Pipeline
- Used FFmpeg to composite videos, combining background videos, narration audio, and burned-in captions
- Generated SRT subtitle files from word-level timestamps provided by ElevenLabs
- Implemented automatic video trimming to match audio length using FFmpeg's -shortest flag
Frontend (React Native/Expo)
- Built a vertical scrolling feed using React Native's FlatList with optimized rendering for smooth performance
- Implemented TikTok-style pagination with snap-to-interval scrolling
- Created a clean, dark-themed UI optimized for video consumption
Infrastructure
- PostgreSQL for persistent data storage with automatic schema initialization
- AWS S3 for scalable media storage (background videos, audio files, final reels)
- FastAPI's background tasks for asynchronous video processing
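The fire-and-forget pattern behind those background tasks can be sketched with a stdlib stand-in: the request handler enqueues the heavy video job and returns immediately. All names here are illustrative; in the real backend the submit line is FastAPI's `background_tasks.add_task(...)`:

```python
# Stdlib stand-in for FastAPI's BackgroundTasks pattern.
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
results: list[str] = []


def process_reel(article_id: int) -> str:
    # Stand-in for the script -> audio -> video composition stages.
    result = f"reel-{article_id}.mp4"
    results.append(result)
    return result


def create_reel_endpoint(article_id: int) -> dict:
    # The handler schedules the job and responds without waiting for it.
    executor.submit(process_reel, article_id)
    return {"status": "processing", "article_id": article_id}
```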
Challenges we ran into
- Word-level timestamp synchronization: Aligning ElevenLabs character timestamps with readable caption groups required reconstructing words from character arrays and grouping by character limits while preserving timing.
- FFmpeg video compositing: Mapping video/audio streams, trimming background to audio length, and handling UTF-8 subtitles to avoid rendering issues.
- Async pipeline design: Status-based workflow (script_generated → audio_generated → ready) to handle partial failures and resume processing.
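The timestamp-synchronization fix can be sketched as follows: ElevenLabs returns per-character timing, so we rebuild words from the character arrays and then group words into short caption chunks while keeping each chunk's start and end times. The parallel-array input shape is an assumption for illustration:

```python
# Rebuild words from per-character timestamps, then group into captions.
def group_captions(chars, starts, ends, max_chars=20):
    """chars/starts/ends are parallel per-character arrays."""
    words, word, w_start = [], "", None
    for ch, s, e in zip(chars, starts, ends):
        if ch == " ":
            if word:  # a space closes the current word
                words.append((word, w_start, prev_end))
                word, w_start = "", None
        else:
            if not word:
                w_start = s  # first character's start time
            word += ch
        prev_end = e
    if word:
        words.append((word, w_start, prev_end))

    # Group words into caption chunks bounded by a character limit,
    # preserving the first word's start and the last word's end.
    captions, chunk, c_start = [], [], None
    for w, s, e in words:
        if chunk and len(" ".join(chunk)) + 1 + len(w) > max_chars:
            captions.append((" ".join(chunk), c_start, prev_cap_end))
            chunk, c_start = [], None
        if not chunk:
            c_start = s
        chunk.append(w)
        prev_cap_end = e
    if chunk:
        captions.append((" ".join(chunk), c_start, prev_cap_end))
    return captions
```

Each resulting `(text, start, end)` tuple maps directly to one SRT cue for FFmpeg's subtitle burn-in.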
Accomplishments that we're proud of
- End-to-end automation: Fully automated pipeline from article to video with zero manual steps.
- Smart article management: Auto-fetching additional content when the pool is low, with deduplication to keep content flowing.
- Precise caption sync: Word-level timestamps from ElevenLabs alignments for frame-accurate captions.
- Cross-platform app: Native-feeling React Native app on iOS, Android, and web with smooth scrolling.
What we learned
We learned to build resilient API integrations with retry logic and fallbacks, handle FFmpeg video processing (stream mapping, subtitle burning, encoding), and design database schemas with status enums for multi-stage async pipelines. We optimized React Native video feeds with viewability detection and memory management, implemented multiple content extraction strategies with fallbacks, and refined LLM prompts to generate engaging, viral-style content with proper constraints. These patterns apply broadly to automated content generation systems.
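The status-enum pattern mentioned above can be sketched like this: each reel records how far it got, so a crashed job resumes at the next stage instead of starting over. The stage names mirror the ones in our workflow; the helper is illustrative:

```python
# Status-enum pattern for a resumable multi-stage pipeline.
from enum import Enum


class ReelStatus(str, Enum):
    PENDING = "pending"
    SCRIPT_GENERATED = "script_generated"
    AUDIO_GENERATED = "audio_generated"
    READY = "ready"


# Enum members iterate in definition order, which is the pipeline order.
ORDER = list(ReelStatus)


def next_stage(status: ReelStatus) -> ReelStatus:
    """Return the stage to run next; READY maps to itself."""
    i = ORDER.index(status)
    return ORDER[min(i + 1, len(ORDER) - 1)]
```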
What's next for The Brainrot Times
Right now the data relies on multiple endpoints from different services; a streamlined approach through GraphQL could be explored. Additionally, we would like to integrate social features like commenting and sharing, as well as a liking function that feeds a recommender system so we can adjust content quality per user. As a nice-to-have, the project's long-term maintenance would greatly benefit from unit and integration tests.
Built With
- elevenlabs
- ffmpeg
- openrouter
- postgresql
- python
- s3
- typescript