Inspiration
Early-stage founders have incredible ideas but almost no resources. They can’t afford expensive video studios, agencies, or $10k production budgets. Meanwhile, top companies like Apple spent over $775 million on display advertising in 2023, including $21.4 million on a single ad for the Vision Pro that reached 1.7 billion impressions in one week. Most new founders could never dream of that level of production power. We built Glimpse to close that gap — giving small creators and early startups a cinematic storytelling experience minus the cost.
What it does
Glimpse turns one sentence about your product into a polished 12-second cinematic video designed for landing pages, social media, and pitch decks. Our system takes care of concepting, shot design, transitions, and visual direction, giving early-stage founders a product video without needing a team, budget, or editing skills.
How we built it
We built Glimpse with a Next.js 15 frontend and a Python backend powered by OpenAI agents. The frontend showcases the product through a clean landing page built with TypeScript, Tailwind, Radix UI, and Framer Motion; it currently displays the workflow, examples, and an email capture form. Behind the scenes, we developed a Python-based agent pipeline where two GPT-5.1 agents collaborate: a Creative Director that transforms a product one-liner into a cinematic vision, and a Scriptwriter that converts that vision into a structured, video-ready prompt. A separate Sora integration script then generates the actual 12-second video using Sora-2-Pro, OpenAI's most advanced video generation model.
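A minimal sketch of how such a two-agent hand-off can be wired. The prompt wording, function names, and the injected `call_agent` stub are our illustration here, not the exact production code:

```python
from dataclasses import dataclass

# System prompts for each role in the pipeline (illustrative wording).
DIRECTOR_PROMPT = (
    "You are a Creative Director. Turn the product one-liner below into a "
    "cinematic vision: mood, color palette, and a three-shot arc for a "
    "12-second video.\n\nProduct: {one_liner}"
)
SCRIPT_PROMPT = (
    "You are a Scriptwriter. Convert this cinematic vision into a single "
    "structured, video-ready prompt for a text-to-video model, with exact "
    "shot timings summing to 12 seconds.\n\nVision: {vision}"
)

@dataclass
class PipelineResult:
    vision: str        # Creative Director output
    video_prompt: str  # Scriptwriter output, ready for Sora


def build_director_prompt(one_liner: str) -> str:
    return DIRECTOR_PROMPT.format(one_liner=one_liner)


def build_script_prompt(vision: str) -> str:
    return SCRIPT_PROMPT.format(vision=vision)


def run_pipeline(one_liner: str, call_agent) -> PipelineResult:
    """Chain the two agents: the director's output feeds the scriptwriter.

    `call_agent` is whatever function sends a prompt to the model (e.g. an
    OpenAI client call); it is injected here so the flow stays testable.
    """
    vision = call_agent(build_director_prompt(one_liner))
    video_prompt = call_agent(build_script_prompt(vision))
    return PipelineResult(vision=vision, video_prompt=video_prompt)
```

The resulting `video_prompt` would then be handed to the separate Sora generation script.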
The frontend doesn’t yet accept user input, because we focused on presenting a polished product experience first, similar to how many YC startups launch with a strong landing page before opening their tool to the public. After analyzing the Fall ’25 and Summer ’25 YC-funded batches, we noticed a clear pattern: early-stage teams often prioritize positioning, storytelling, and waitlist collection before enabling hands-on access. We followed that approach to build trust, validate interest, and prepare for a smooth integration of our backend pipeline in the next iteration.
Challenges we ran into
Our biggest challenge was teaching AI to think like a creative director. Video generation models are powerful, but they don't inherently understand storytelling, pacing, or brand emotion. We had to design a multi-agent system where our Creative Director agent could break down a single product sentence into compelling visual concepts, then coordinate with our Scriptwriter agent to ensure each shot served the narrative.
Beyond the creative and technical complexity of coordinating multiple AI agents, we ran headfirst into the harsh realities of building on third-party AI infrastructure. This weekend, OpenAI's API had a documented issue in which batch jobs were stuck in a "finalizing" state indefinitely, completely halting our video generation pipeline. Failed generations also cost us real money: we pay for every video attempt, and OpenAI doesn't issue refunds for failed outputs.
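One mitigation is a hard timeout when polling batch status, so a job stuck in "finalizing" fails fast instead of hanging the pipeline (and tempting costly blind retries). A sketch under our own assumptions, with `check_status` standing in for the real API status call:

```python
import time


class BatchStuckError(RuntimeError):
    """Raised when a batch never reaches a terminal state."""


def wait_for_batch(check_status, poll_interval=10.0, timeout=600.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll `check_status()` until it returns a terminal state.

    `check_status` stands in for the real API call (e.g. retrieving a batch
    and reading its `status` field). We give up after `timeout` seconds so a
    job stuck in a non-terminal state like "finalizing" raises instead of
    blocking forever. `clock` and `sleep` are injectable for testing.
    """
    deadline = clock() + timeout
    while True:
        status = check_status()
        if status in ("completed", "failed", "expired", "cancelled"):
            return status
        if clock() >= deadline:
            raise BatchStuckError(f"batch stuck in {status!r} after {timeout}s")
        sleep(poll_interval)
```

On `BatchStuckError` the pipeline can alert us and stop, rather than silently re-submitting paid generation attempts.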
We also faced early coordination challenges around version control and team workflow. Not everyone on the team had experience with Git, which initially slowed us down. Fortunately, a few members stepped up to teach the rest, and once we got synchronized, our development velocity increased dramatically.
Accomplishments that we're proud of
Despite building Glimpse in just 60 hours, we achieved several milestones that normally take early-stage teams weeks to pull off. We built a full backend pipeline that converts one product sentence into a cinematic prompt and a Sora-generated video — orchestrated by our multi-agent system (MAS). We’re proud that our Creative Director and Scriptwriter agents collaborate reliably, producing consistent, high-quality shot lists and visual directions without human intervention.
We’re also proud of how polished our landing page turned out. It clearly explains what Glimpse does, shows examples, and mirrors the kind of early launch strategy we’ve seen from real YC companies. Even though users can’t input text yet, we built the entire backend behind the scenes so we can validate interest before opening it up. The largest accomplishment, all things considered, was integrating frontend, backend, design, and video production into a single, seamless experience. We're proud that Glimpse already feels like the beginning of a genuine product rather than just a hackathon prototype.
What we learned
We learned how to build a full-stack web application: a TypeScript/React frontend for users to interact with, and a Python backend that orchestrates AI tools such as Sora and GPT-5.1 agents to produce the advertisements users actually want. We also learned a lot about teamwork, coordinating our workflow efficiently across the three-day event.
What's next for Glimpse
What's next for Glimpse is expanding the product and giving users and founders more control. We're testing next-generation models like Google's Veo and Kling to unlock new visual styles and storytelling possibilities, and we plan to experiment with the reasoning effort parameter on our GPT-5.1 agents to see whether extended reasoning produces better prompts and ideas. We also plan to introduce a human-in-the-loop (HITL) option, letting users collaborate directly with our Creative Director and Scriptwriter agents to tweak concepts, refine shots, and iterate until the video captures their vision. Whether you want instant results or hands-on creative control, Glimpse will meet you where you are and scale as your product grows.
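A sketch of how the reasoning-effort experiment could be set up. This assumes the `reasoning={"effort": ...}` request parameter that OpenAI's reasoning-capable models accept; the model name and helper are illustrative, and the returned dict would be passed to the client (e.g. `client.responses.create(**build_request(...))`):

```python
# Effort levels we would sweep when A/B-testing prompt quality.
EFFORT_LEVELS = ("low", "medium", "high")


def build_request(prompt: str, effort: str) -> dict:
    """Build request kwargs for one run at a given reasoning effort.

    Assumption: the model accepts a `reasoning` object with an `effort`
    field, as in OpenAI's Responses API for reasoning models.
    """
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "gpt-5.1",
        "input": prompt,
        "reasoning": {"effort": effort},
    }
```

Running the same product one-liner through each level and comparing the resulting shot lists side by side would tell us whether extra reasoning is worth the added latency and cost.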
Built With
- css
- git
- gpt-5.1
- javascript
- next.js
- openai
- python
- react
- sora-2-pro
- tailwind
- typescript