Inspiration
The idea for VideoForm AI was born out of a simple but relatable frustration: I don’t enjoy typing long responses in traditional forms — and when I asked my friends, I realized they felt the same way. We all found filling out forms to be tedious and impersonal.
That made me wonder: What if forms could feel more like a conversation?
What if, instead of typing, people could simply speak or select, and feel like they were interacting with a real person?
That’s how VideoForm AI started — with the vision to transform boring, static forms into engaging audio- and video-powered experiences that feel natural, human, and personal.
What it does
VideoForm AI lets users:
- Create interactive forms with intro videos, custom questions, and outro videos
- Add questions that accept audio responses, multiple-choice answers, or text inputs
- Choose an AI presenter replica (via Tavus) to deliver questions in a lifelike, conversational style
- Collect responses and provide analytics + AI-generated insights to help users understand patterns, sentiment, and engagement
- Transcribe audio responses using ElevenLabs for accurate, AI-powered transcription
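The mix of question types above could be modeled with a discriminated union. This is an illustrative sketch only; the field names and validation rules are assumptions, not VideoForm AI's actual schema.

```typescript
// Hypothetical shape of a form question; names are illustrative,
// not the app's real data model.
type Question =
  | { id: string; kind: "audio"; prompt: string; maxSeconds: number }
  | { id: string; kind: "choice"; prompt: string; options: string[] }
  | { id: string; kind: "text"; prompt: string; placeholder?: string };

// Basic checks a form builder might run before publishing.
function validateQuestion(q: Question): string[] {
  const errors: string[] = [];
  if (!q.prompt.trim()) errors.push(`${q.id}: prompt is empty`);
  if (q.kind === "choice" && q.options.length < 2)
    errors.push(`${q.id}: needs at least two options`);
  if (q.kind === "audio" && q.maxSeconds <= 0)
    errors.push(`${q.id}: maxSeconds must be positive`);
  return errors;
}
```

A union like this lets the response-collection UI switch on `kind` and render a recorder, option buttons, or a text box accordingly.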
How we built it
We started as a team of two. First, we refined our idea with the help of ChatGPT, using it to shape and polish the concept. We created a prompt for Bolt that allowed us to quickly prototype and structure the app.
We initially built the app using Next.js on Bolt, connecting all services (Bolt, Tavus, etc.) through my teammate’s Builder Pack token. This helped us rapidly bring our vision to life.
As the project evolved, we migrated the entire codebase to Vite + React for better performance, flexibility, and smoother deployment on Netlify. We integrated Supabase for backend services like authentication, data storage, and analytics.
Finally, we used Cursor to implement small remaining functions, clean up the codebase, and polish the project for submission.
We also integrated ElevenLabs to handle transcription of audio responses for analysis and insights.
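To give a feel for the transcription-to-storage step, here is a small sketch of preparing a transcribed answer before saving it. The table and column names are assumptions for illustration, not the app's real Supabase schema.

```typescript
// Sketch: normalize a transcribed audio answer into a row shape.
// Field and table names are hypothetical.
interface RawAnswer {
  formId: string;
  questionId: string;
  transcript: string; // text returned by the transcription service
}

function toResponseRow(a: RawAnswer) {
  const transcript = a.transcript.trim();
  return {
    form_id: a.formId,
    question_id: a.questionId,
    transcript,
    // A simple word count feeds later analytics/insights.
    word_count: transcript === "" ? 0 : transcript.split(/\s+/).length,
    created_at: new Date().toISOString(),
  };
}

// With supabase-js the row could then be stored like:
//   await supabase.from("responses").insert(toResponseRow(answer));
```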
Challenges we ran into
- Nice UI with limited prompt tokens: Designing a clean and engaging UI while working within the constraints of AI prompt token limits was tricky. We had to carefully structure prompts, reuse components smartly, and rely on minimalistic, high-impact design choices to deliver a polished experience.
- Finishing the full code with limited tokens: Building a complete and functional app while staying under token and platform size limits challenged us to write concise, clean, and efficient code. We had to think creatively about architecture and eliminate unnecessary complexity.
- Migration pains: Moving the entire codebase from Next.js to Vite + React took time and effort. We had to rethink parts of the structure, adapt components, and ensure everything worked smoothly with our new stack.
- Tavus custom audio: We occasionally ran into errors while generating custom audio, which required troubleshooting and creative workarounds to keep the flow of the form experience intact.
- Creative constraints made us better: The limitations we faced — whether in tokens, tools, or time — pushed us to be more inventive. We found new ways to prototype faster, optimize our code, and deliver a polished product without over-engineering. These constraints became an unexpected advantage that sharpened our focus and improved our final result.
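One generic workaround for intermittent failures like the Tavus custom-audio errors is retry logic around the flaky call. The sketch below is a general-purpose helper, not Tavus-specific code from the project.

```typescript
// Retry an async operation a few times with a growing delay.
// A generic sketch of the "troubleshooting and workarounds" approach.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt (delay grows linearly here).
      if (i < attempts - 1)
        await new Promise((r) => setTimeout(r, delayMs * (i + 1)));
    }
  }
  throw lastError;
}
```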
Accomplishments that we're proud of
- Built a full SaaS product with very low budget and vibe-driven coding
- Used Bolt + AI-generated prompts (via ChatGPT) to rapidly prototype the app
- Successfully migrated and finished the app despite technical and platform challenges
- Delivered flexible forms that support audio, multiple-choice, and text inputs
- Integrated AI video presenters to make forms feel more human
- Added AI-powered audio transcription for deeper insights
- Designed a smooth UI/UX with analytics and AI insights
What we learned
- How to refine an idea and structure using ChatGPT + AI prompts
- How to migrate and complete a large project using Vite + React
- How to integrate AI services like Tavus and ElevenLabs
- How to set up and use Supabase for auth, storage, and analytics
- The value of Bolt for rapid prototyping
What's next for VideoForm AI - Interactive Video Questionnaires
Our vision is to evolve the platform into a powerful tool that gives users flexibility, control, and deeper insights.
Select Pre-Built Videos for Forms
Let users choose from a library of intro/outro videos to save time and reduce friction when creating forms.
Subscription Options
Introduce subscription tiers to unlock premium features like custom branding, analytics exports, and higher usage limits (currently, the app isn’t fully open).
Custom Audio Uploads by Users
Allow users to upload their own audio files for intros, outros, or questions for maximum creative control.
ElevenLabs Custom Audio Integration
Add ElevenLabs to generate realistic AI voiceovers and improve transcription accuracy for user questions or form sections.
Tavus Conversational API Integration
Expand support for Tavus’s conversational API so users can create interactive conversations with branching logic. Data will flow through webhooks and be analyzed by AI for sentiment, intent, and patterns.
Custom Branding
Allow users to fully brand their forms with their logos, colors, fonts, and themes to match their identity.
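The planned branching logic could be as simple as mapping each answer to the next question. This is a minimal sketch under assumed names and structure, not the project's design.

```typescript
// Hypothetical branching question: each chosen option points at the
// next question id; "defaultNext" handles unmapped answers.
interface BranchingQuestion {
  id: string;
  prompt: string;
  next: Record<string, string>;
  defaultNext?: string;
}

function nextQuestionId(
  q: BranchingQuestion,
  answer: string,
): string | undefined {
  return q.next[answer] ?? q.defaultNext;
}
```

Answer data arriving via webhooks could be run through the same mapping server-side to drive the conversation and feed the AI analysis.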
Disclaimer: I used AI to refine my thoughts.
Built With
- bolt
- elevenlabs
- gemini
- react
- supabase
- tavus
- typescript
- vite
