Inspiration

We wanted to democratize video editing. Professional editing software (like Premiere Pro or DaVinci Resolve) has a steep learning curve, while simple mobile apps lack power. We imagined a middle ground: What if you could just tell an AI what you want your video to look like?

Our inspiration came from the idea of a "Junior Editor" - an AI assistant that handles the tedious parts of editing (cutting silence, organizing clips, basic transitions) so creators can focus on storytelling.

What it does

Talk to Your Editor: You can chat with the AI to request changes (e.g., "Cut the boring parts," "Add a fade transition," "Make this punchier").

Smart Analysis: Upload a video, and LiveEdit analyzes the content, identifying key events and scenes and even suggesting edits based on pacing and mood.

Multi-Clip Orchestration: Upload multiple raw clips, give a simple instruction like "Create a 30-second hype reel," and the AI orders, trims, and stitches them together for you.

Generative Creation: Need a B-roll shot? Use the "Creative" mode to generate video assets from scratch using Google's Gemini models.

How we built it

Frontend: Built with React 19 and TypeScript for a snappy, modern UI. We used Vite for fast builds and a custom dark-mode aesthetic that feels like professional software.

Backend: A Python Flask API handles the logic. We integrated Google Gemini 1.5/2.0 as the "brains" - parsing user intent and generating FFmpeg commands.

Video Engine: The heavy lifting is done by FFmpeg, orchestrated by Celery workers and a Redis message broker. This ensures the web UI never freezes while rendering 4K video.

Storage & Database: We use PostgreSQL (Neon) for critical data and an object storage strategy for large media files (handled locally for this demo).
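The Gemini-to-FFmpeg handoff can be sketched roughly like this (function and field names are our illustration, not the actual LiveEdit source): the model returns a structured edit plan, and the backend translates it into an FFmpeg argument list.

```python
# Illustrative sketch (not the real LiveEdit code): translating a
# structured edit plan, as an LLM might return it, into FFmpeg arguments.
def plan_to_ffmpeg_args(plan: dict) -> list[str]:
    """Build an FFmpeg argument list from a simple trim/fade edit plan."""
    args = ["ffmpeg", "-y", "-i", plan["input"]]
    filters = []
    if "trim" in plan:  # e.g. {"start": 5.0, "end": 12.5}, in seconds
        t = plan["trim"]
        args += ["-ss", str(t["start"]), "-to", str(t["end"])]
    if plan.get("fade_out"):  # seconds of fade at the end of the clip
        filters.append(f"fade=t=out:d={plan['fade_out']}")
    if filters:
        args += ["-vf", ",".join(filters)]
    args.append(plan["output"])
    return args

args = plan_to_ffmpeg_args({
    "input": "raw.mp4", "output": "cut.mp4",
    "trim": {"start": 5.0, "end": 12.5}, "fade_out": 1.0,
})
# args == ["ffmpeg", "-y", "-i", "raw.mp4", "-ss", "5.0", "-to", "12.5",
#          "-vf", "fade=t=out:d=1.0", "cut.mp4"]
```

Keeping this step as a pure "plan in, argument list out" function makes it easy to validate or log the exact command before a worker ever runs it.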

Challenges we ran into

The "Context Window" of Video: Getting an LLM to understand time was hard. We had to build a system that probes video duration and maps AI "suggestions" (e.g., "cut at 00:05") to precise timestamps for FFmpeg.

Async Processing Hell: Originally, long renders would time out the HTTP requests. We had to implement a robust job queue system (Celery) with polling endpoints to handle renders that take minutes or longer.

Prompt Engineering: Teaching the AI to output valid JSON for edit plans was tricky. We spent hours refining prompts to ensure it wouldn't hallucinate timestamps that didn't exist or produce syntax errors that would break the parser.
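The duration-probing idea can be sketched as follows (function names are our own): ask ffprobe for the real length of the file, then clamp or drop AI-suggested cut points so FFmpeg never receives a timestamp past the end of the clip.

```python
# Illustrative sketch of grounding AI timestamps in real video metadata.
import subprocess

def probe_duration(path: str) -> float:
    """Ask ffprobe for the container duration in seconds."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(out.strip())

def ground_cuts(cuts: list[float], duration: float) -> list[float]:
    """Keep only in-range cut points, sorted and de-duplicated."""
    return sorted({round(c, 3) for c in cuts if 0.0 < c < duration})

# Suppose the LLM suggested cuts at 5s, 62s, and -1s for a 30-second clip:
print(ground_cuts([5.0, 62.0, -1.0], duration=30.0))  # [5.0]
```

Filtering the model's suggestions against probed metadata is what keeps a hallucinated "cut at 00:62" from crashing the render.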

Accomplishments that we're proud of

Seamless Multi-Clip Editing: We successfully built a pipeline where you can dump in three random videos, ask for a "montage," and get a watchable, stitched-together result with audio mixing.

The "Live" Feel: The name "LiveEdit" isn't just for show. We optimized the feedback loop so that chatting with the AI feels responsive, even when complex processing is happening in the background.

Production-Ready Architecture: Moving from a simple script to a proper architecture (Worker + Broker + API) was a steep learning curve, but it makes the app scalable.
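The multi-clip stitch can be expressed with FFmpeg's concat filter; here is a rough sketch (our own illustration, not the project's exact pipeline) that builds the command for any number of clips:

```python
# Sketch of stitching N clips with FFmpeg's concat filter: one input per
# clip, then a filter_complex that concatenates video and audio in order.
def build_concat_cmd(clips: list[str], output: str) -> list[str]:
    cmd = ["ffmpeg", "-y"]
    for clip in clips:
        cmd += ["-i", clip]
    streams = "".join(f"[{i}:v][{i}:a]" for i in range(len(clips)))
    filt = f"{streams}concat=n={len(clips)}:v=1:a=1[v][a]"
    cmd += ["-filter_complex", filt, "-map", "[v]", "-map", "[a]", output]
    return cmd

cmd = build_concat_cmd(["a.mp4", "b.mp4", "c.mp4"], "montage.mp4")
# filter_complex: [0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]
```

The concat filter re-encodes, which sidesteps the codec-mismatch problems that the faster concat demuxer hits when the raw clips don't share a format.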

What we learned

FFmpeg is Magic (and Pain): We learned more about codec compatibility, audio ducking, and filter complexes than we ever planned to.

AI Needs Structure: LLMs are creative, but for code-like tasks such as video editing, they need very strict constraints and structured outputs (JSON schemas) to be useful tools.

User Experience Matters: A backend that works is useless if the user doesn't know what it's doing. Adding status messages ("Analyzing...", "Rendering...") was crucial for the UX.
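The "AI needs structure" lesson boils down to never trusting the model's reply blindly. A minimal sketch (schema and field names are illustrative, not LiveEdit's actual schema): parse the reply as JSON and reject anything that doesn't match the expected edit-plan shape.

```python
# Minimal sketch of validating an LLM's edit plan before execution.
import json

ALLOWED_ACTIONS = {"trim", "fade", "concat"}  # illustrative whitelist

def parse_edit_plan(raw: str) -> dict:
    """Parse and strictly validate a model reply; raise on anything odd."""
    plan = json.loads(raw)  # raises ValueError on non-JSON replies
    if not isinstance(plan.get("actions"), list):
        raise ValueError("edit plan must contain an 'actions' list")
    for action in plan["actions"]:
        if action.get("type") not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action type: {action.get('type')}")
    return plan

plan = parse_edit_plan('{"actions": [{"type": "trim", "start": 0, "end": 5}]}')
```

Rejecting early and re-prompting on failure is cheaper than letting a malformed plan reach FFmpeg.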

What's next for Live Edit

Real-time Preview: Implementing the WebCodecs API to preview edits in the browser before the server renders the final file.

Collaborative Editing: Adding WebSockets to allow multiple users to edit the same project timeline simultaneously.

Mobile Companion: A lightweight mobile app to capture footage and sync it directly to your LiveEdit project bin.

Built With

  • celery
  • ffmpeg
  • flask
  • google-gemini-api (1.5 flash & 2.0)
  • imagen-3
  • paystack
  • postgresql (neon)
  • python
  • react-19
  • redis (upstash)
  • sql
  • typescript
  • vite