Inspiration

Like many of us, we found ourselves doom-scrolling Reddit, hooked on threads so wild, insightful, or downright hilarious they deserved more than just upvotes—they deserved a stage. Meanwhile, short-form content (especially the AI-narrated kind) was exploding… but let’s be real: most tools cost money, time, or a minor video-editing degree.

So we asked ourselves:
What if creating viral, high-quality Reddit videos were as simple as pasting a link?

That one “what if” became ReddiToks.


What it does

ReddiToks takes any Reddit thread and transforms it into a fast, funny, visually engaging short-form video—automatically. We’re talking:

  • Natural AI voiceovers
  • Talking avatars with synced lip movement
  • Auto-generated captions
  • Reddit-style UI visuals
  • No manual editing required

It’s like giving every thread a production team… without needing an actual team.


How we built it

  1. Thread Selection
    We tap into Reddit’s API, filter posts based on upvotes, engagement, and tone, and surface only the ones that feel like a story.

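The selection filter boils down to a few lines. A minimal sketch (the score threshold, comment floor, and ranking heuristic below are illustrative placeholders, not our exact tuning):

```typescript
// Shape of a post as returned by Reddit's listing API (fields trimmed).
interface RedditPost {
  title: string;
  score: number;        // net upvotes
  num_comments: number;
  selftext: string;     // the post body
}

// Keep posts that look like a "story": well-upvoted, actively discussed,
// and long enough to narrate. Thresholds here are example values.
function selectStoryPosts(
  posts: RedditPost[],
  minScore = 500,
  minComments = 50,
): RedditPost[] {
  return posts
    .filter(p => p.score >= minScore && p.num_comments >= minComments)
    .filter(p => p.selftext.length > 200) // needs enough body to tell
    .sort((a, b) => b.score * b.num_comments - a.score * a.num_comments);
}
```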
  2. Summarization
    A lightweight LLM condenses long threads into punchy scripts—preserving tone, sarcasm, and Reddit’s signature drama.

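Most of the "tone-preserving" work lives in the prompt. A rough sketch of the idea (the wording is a stand-in, and any chat-completion API slots in behind it):

```typescript
// Build a summarization prompt that tells the model to keep the Reddit
// flavor instead of flattening it into a neutral summary.
function buildScriptPrompt(threadText: string, maxWords = 120): string {
  return [
    "Condense this Reddit thread into a short voiceover script.",
    `Keep it under ${maxWords} words.`,
    "Preserve the original tone: sarcasm, humor, and drama stay in.",
    "Write it as spoken lines, not as a summary.",
    "",
    threadText,
  ].join("\n");
}
```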
  3. Voiceover & Avatar Animation
    Scripts go into a TTS engine, and voices are lip-synced to animated avatars using visemes. No uncanny valley here—we tuned it till it felt human(ish).

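The lip-sync step is essentially a timeline transform: phoneme timings from the TTS engine become viseme frames for the avatar, nudged slightly behind the audio. The viseme names and the offset below are illustrative; the real mapping depends on the TTS engine's phoneme set:

```typescript
interface PhonemeEvent { phoneme: string; startMs: number }
interface VisemeEvent  { viseme: string;  startMs: number }

// Example phoneme → mouth-shape mapping (a real table is much larger).
const PHONEME_TO_VISEME: Record<string, string> = {
  AA: "open", EE: "smile", OO: "round", M: "closed", F: "teeth",
};

// Delaying visemes a few tens of milliseconds behind the audio reads as
// more natural on screen; unknown phonemes fall back to a rest pose.
function toVisemeTrack(events: PhonemeEvent[], delayMs = 60): VisemeEvent[] {
  return events.map(e => ({
    viseme: PHONEME_TO_VISEME[e.phoneme] ?? "rest",
    startMs: e.startMs + delayMs,
  }));
}
```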
  4. Video Composition
    FFmpeg handles the video stitching: captions, avatars, Reddit screenshots, backgrounds—all auto-rendered frame-by-frame.

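Conceptually, the stitch step is one FFmpeg invocation: background in, avatar overlaid, captions burned in. A sketch of how the argument list could be assembled (paths and filter values are placeholders):

```typescript
// Build FFmpeg args: overlay the avatar bottom-center on the background,
// then burn in subtitles, encoding with x264 tuned for speed.
function buildFfmpegArgs(bg: string, avatar: string, subs: string, out: string): string[] {
  return [
    "-i", bg,        // background clip
    "-i", avatar,    // rendered avatar video
    "-filter_complex",
    `[0:v][1:v]overlay=(W-w)/2:H-h[vid];[vid]subtitles=${subs}[outv]`,
    "-map", "[outv]",
    "-map", "0:a?",           // keep background audio if present
    "-c:v", "libx264",
    "-preset", "veryfast",    // speed over file size for short clips
    out,
  ];
}
```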
  5. Frontend UI
    Built in Next.js 15, styled in a neon green-and-black Apple Liquid Glass aesthetic. Smooth, premium, slightly futuristic.

  6. Hosting & Performance
    Running on Vercel, using Redis caching and serverless rendering for scale. Snappy, even under pressure.
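The caching layer keys each finished render by a hash of the thread and template, so repeat requests skip the pipeline entirely. A sketch of the key derivation (the Redis get/set around the render call is omitted, and the key format is our own convention):

```typescript
import { createHash } from "node:crypto";

// Derive a stable cache key for a rendered video: same thread + same
// template always hashes to the same key, so a cache hit means we can
// serve the stored video instead of re-rendering.
function renderCacheKey(threadId: string, template: string): string {
  const digest = createHash("sha256")
    .update(`${threadId}:${template}`)
    .digest("hex");
  return `render:${digest.slice(0, 16)}`;
}
```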


Challenges we ran into

  • Dry Summaries
    Our first AI scripts felt like Wikipedia with a microphone. We had to fine-tune the LLM to keep the Reddit flavor intact—especially the humor and drama.

  • Lip Sync Woes
    Talking avatars looked… off. Subtle fixes like delaying visemes by milliseconds made a huge visual improvement.

  • Slow Rendering
    Early renders took 30+ seconds per video. We optimized FFmpeg pipelines and added chunked caching—now it runs under 10 seconds.

  • UI Identity Crisis
    Our first designs felt generic. After a few iterations, we landed on a sleek, glassy interface that finally looked as good as the product felt.


Accomplishments that we're proud of

  • Built a fully automated Reddit-to-video pipeline—no manual steps.
  • Achieved average video renders in under 10 seconds.
  • Shipped a UI that doesn’t scream “last-minute hackathon” (even if it kind of was).
  • Real feedback from creators: “Wait… this is actually usable.”
  • Made something that made people laugh, think, share—and come back for more.

What we learned

This project stretched us technically and creatively. Key takeaways:

  • Working with Reddit’s API and parsing chaotic thread data.
  • Prompt engineering for tone-aware AI summarization.
  • Building a clean TTS → viseme → avatar animation pipeline.
  • Designing a modern, liquid-glass UI with Next.js 15 + Tailwind.
  • FFmpeg ninja tricks to shave seconds off render time.
  • Finding that balance between automation and authentic storytelling.

What's next for ReddiToks

  • Multi-language Support – Because the internet isn’t just English.
  • Custom Avatars – Different characters, voices, and vibes.
  • Themed Backgrounds – Fit every niche, from tech bros to true crime fans.
  • Template Marketplace – Let creators remix styles like plug-and-play.
  • Mobile Generator – Create on the go, straight from your phone.

Reddit threads were already storytelling gold—we just gave them a stage, a voice, and a spotlight.
