Inspiration
I have been "vibe coding" (using AI to help write code) for a while, but I had never attempted to build a full, polished project in a single day. On top of that, whenever I code, I struggle to find music that perfectly matches my current mood. Building DreamStream was hitting two birds with one stone: I got to "vibe code" a complete application from scratch, and the result is an app that generates the perfect vibe for future coding sessions. It was pure creative chaos and really fun.
What it does
A simple web app where the user types in a specific "vibe" or scenario (e.g., "Driving through Tokyo at 2 AM while raining"), and the AI generates a custom looping Lo-Fi beat, a matching AI-generated background video/image, and a fake "radio host" intro. The initial idea was a "Radio Vibe" station, but it later evolved into DreamStream.
How I built it
I leaned heavily on the best AI tools available to make the experience feel "magical" and instant.
- OpenAI API: acts as the "brain," converting user moods into descriptive tags and scripts.
- ElevenLabs: generates the "DJ" voiceover (text-to-speech).
- Replicate: generates the atmospheric background images.
- Pixabay: sources the actual music tracks via CDN links, using a search query derived from the vibe.
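The flow through those four services could be sketched roughly like this. This is a minimal illustration, not DreamStream's actual code: every function name and response shape here is an assumption, with stubs standing in for the real SDK calls.

```typescript
// Hypothetical pipeline: vibe text -> tags/script -> voice, image, music.
// All helpers are stubs; real code would call each provider's API.

interface Station {
  tags: string[];       // descriptive tags derived from the vibe
  script: string;       // radio-host intro script
  voiceoverUrl: string; // TTS audio (ElevenLabs in the real app)
  imageUrl: string;     // background image (Replicate in the real app)
  trackUrl: string;     // music track (Pixabay CDN in the real app)
}

// Stub: an LLM call would turn the vibe into tags plus a DJ script.
async function describeVibe(vibe: string): Promise<{ tags: string[]; script: string }> {
  return { tags: vibe.toLowerCase().split(/\s+/), script: `You're tuned in to ${vibe}...` };
}

// Stubs for the three media services.
async function synthesizeVoice(script: string): Promise<string> { return "https://cdn.example/voice.mp3"; }
async function generateImage(tags: string[]): Promise<string> { return "https://cdn.example/bg.png"; }
async function findTrack(tags: string[]): Promise<string> { return "https://cdn.example/lofi.mp3"; }

export async function buildStation(vibe: string): Promise<Station> {
  const { tags, script } = await describeVibe(vibe);
  const voiceoverUrl = await synthesizeVoice(script);
  const imageUrl = await generateImage(tags);
  const trackUrl = await findTrack(tags);
  return { tags, script, voiceoverUrl, imageUrl, trackUrl };
}
```

The key design point is that only the first step (the LLM) is a hard dependency; everything downstream consumes its tags and script.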
I built this using Antigravity (with ChatGPT for planning). After trying everything from Cursor to kiro.dev, I found Antigravity to be a refreshing change. While the underlying tech often feels similar across platforms, the UX here was superior. It has that distinct, well-thought-out polish you expect from a Google product.
Challenges I ran into
The biggest hurdle was Integration Hell. I initially tried free models, but the latency was too high (responses often took more than 10 seconds) and the quality just didn't match the vibe. Moving to premium models solved the speed issue but introduced a new pain point: inconsistency. Every AI SDK and model seems to have a completely different JSON structure for inputs and outputs, so trying out different models meant constantly rewriting the "plumbing" code to handle different syntax and response formats. It was a tedious process of trial and error to get them all to speak the same language. Another issue was dynamic music generation on the fly: I initially assumed ElevenLabs was advanced enough to do this, but it wasn't, so in the end I stayed with query-based music fetching.
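One common way to tame that kind of plumbing churn is a thin adapter layer: pick one internal shape, then write one small translator per provider, so swapping models only means swapping an adapter. A sketch, where the non-OpenAI response shapes are invented purely for illustration:

```typescript
// One normalized internal shape for "a model generated some text".
interface Completion { text: string; model: string; }

type Adapter = (raw: any) => Completion;

const adapters: Record<string, Adapter> = {
  // Loosely modeled on OpenAI's chat completions response shape.
  openai: (raw) => ({
    text: raw.choices[0].message.content,
    model: raw.model,
  }),
  // Invented shapes standing in for two other hypothetical providers.
  providerB: (raw) => ({ text: raw.output.text, model: raw.meta.model }),
  providerC: (raw) => ({ text: raw.data[0], model: raw.engine }),
};

export function normalize(provider: string, raw: unknown): Completion {
  const adapt = adapters[provider];
  if (!adapt) throw new Error(`No adapter for provider: ${provider}`);
  return adapt(raw);
}
```

With this in place, the rest of the app only ever sees `Completion`, and trying a new model touches exactly one adapter instead of every call site.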
Accomplishments that I'm proud of
- I actually "Vibe Coded" it: I didn't just write code; I orchestrated a team of AI agents to build a full-stack Next.js application in under 24 hours. It was a chaotic experiment in AI-assisted development that actually resulted in a deployed, working product.
- Beating the Latency: I managed to chain three distinct AI models (OpenAI, Replicate, and ElevenLabs) together and optimized the flow so the "station" creates itself in seconds, not minutes.
- The "Vibe" Check: The biggest win is that it actually works. When you type "Sad rainy Tokyo," the combination of the rain-streaked visual, the melancholic Lo-Fi, and the deep voiceover genuinely makes you feel something. I captured the exact feeling I set out to find.
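The latency win above mostly comes down to one idea: once the LLM has produced the tags and script, the voice, image, and music requests are independent of each other, so they can run concurrently instead of back-to-back. A small self-contained sketch with simulated delays (the names and timings are made up):

```typescript
// Simulated provider calls with fixed delays, to show why running the
// independent steps concurrently cuts total wait time.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fakeCall(name: string, ms: number): Promise<string> {
  await delay(ms);
  return name;
}

// Awaiting each call in turn: total time is the SUM of the delays.
export async function sequential(): Promise<string[]> {
  const voice = await fakeCall("voice", 300);
  const image = await fakeCall("image", 300);
  const track = await fakeCall("track", 300);
  return [voice, image, track]; // roughly 900 ms total
}

// Promise.all starts all three at once: total time is the SLOWEST call.
export async function concurrent(): Promise<string[]> {
  return Promise.all([
    fakeCall("voice", 300),
    fakeCall("image", 300),
    fakeCall("track", 300),
  ]); // roughly 300 ms total
}
```

With three real API calls in the hundreds-of-milliseconds-to-seconds range each, this is the difference between a station that appears instantly and one that makes the user wait.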
What I learned
Apart from the technical grind, the real takeaway was emotional. Seeing the exponential growth of AI in coding was both wonderful and slightly scary. It’s a wake-up call: rather than worrying about AI taking my job, I realized I need to adapt and start wielding these tools to my advantage immediately.
What's next for DreamStream
I want to make the platform fully immersive and interactive. Future updates will replace static images with generated video loops and introduce a gamified UI, allowing users to perform actions and manipulate the environment to match their mood.
Built With
- antigravity
- elevenlabs
- nextjs
- openai
- replicate
