DreamWeaver takes any spoken or written prompt — whether it’s “a city floating in the clouds with glass bridges” or “my grandmother’s garden at sunset” — and transforms it into:
A detailed narrative with plot, characters, and dialogue.
Generated imagery matching your dream’s style.
Ambient audio that reflects the scene’s mood.
Interactive exploration mode, where you can “walk through” your imagined world.
All powered by gpt-oss-20B for offline creativity, with optional expansion to gpt-oss-120B for ultra-detailed builds.
How we built it
Languages: Python, JavaScript
Frameworks: PyTorch, Gradio for local UI, Three.js for 3D exploration
Model: gpt-oss-20B (local) for text reasoning, plus a fine-tuned LoRA adapter for dream storytelling (wired up in the sketch after this list)
Image Gen: Stable Diffusion local inference (DreamBooth for personalization)
Audio Gen: Riffusion (offline) for soundscapes
Database: SQLite for storing past dreams and assets
Hardware: Tested on RTX 4090 PC and Jetson Orin Nano
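
To make the stack concrete, here is a minimal sketch of how the text and image pieces could plug together, assuming standard transformers/peft/diffusers APIs. The checkpoint names and the LoRA adapter path are placeholders, not the project's actual assets, and the Riffusion and SQLite steps are elided:

```python
# Minimal wiring sketch: local text generation (gpt-oss-20B + LoRA) feeding
# local image generation (Stable Diffusion). Paths and model IDs below are
# placeholders for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from diffusers import StableDiffusionPipeline

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
base = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
# Dream-storytelling LoRA adapter (hypothetical local path).
story_model = PeftModel.from_pretrained(base, "./adapters/dream-storytelling")

def generate_story(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(story_model.device)
    out = story_model.generate(
        **inputs, max_new_tokens=1024, do_sample=True, temperature=0.9
    )
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Image: local Stable Diffusion inference.
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def render_dream(prompt: str):
    story = generate_story(f"Write a short dream narrative about: {prompt}")
    image = sd(prompt, num_inference_steps=30).images[0]
    return story, image  # audio (Riffusion) and SQLite persistence elided
```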
Challenges we ran into
Aligning multimodal outputs: making text, images, and audio match in tone and theme (one plausible alignment approach is sketched after this list).
Running multiple AI pipelines offline without incurring heavy latency.
Building a UI that feels magical without overwhelming first-time users.
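
On the alignment challenge, the write-up doesn't spell out the mechanism, but one plausible sketch is to have the language model emit a structured "mood brief" that seeds every downstream prompt. The JSON keys and helper names below are illustrative assumptions:

```python
# Sketch: extract a shared mood brief once, then reuse it so the image and
# audio prompts stay on-theme with the narrative. Schema is hypothetical.
import json

MOOD_PROMPT = (
    "Summarize the dream below as JSON with keys "
    '"mood", "color_palette", and "soundscape". Dream: {dream}'
)

def extract_mood(dream: str, generate) -> dict:
    # `generate` is any text-completion callable, e.g. generate_story above.
    raw = generate(MOOD_PROMPT.format(dream=dream))
    return json.loads(raw[raw.index("{"): raw.rindex("}") + 1])

def themed_prompts(dream: str, mood: dict) -> tuple[str, str]:
    image_prompt = f"{dream}, {mood['mood']} atmosphere, palette: {mood['color_palette']}"
    audio_prompt = f"{mood['soundscape']}, {mood['mood']} ambience"
    return image_prompt, audio_prompt
```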
Accomplishments we’re proud of
Created a single-click “dream render” pipeline that turns an idea into a rich multimedia experience in under 90 seconds.
Achieved fully offline operation for text, image, and audio generation.
Made a replay system: users can revisit and expand old dreams (a minimal storage sketch follows this list).
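
The replay system implies persistent storage of prompts and generated assets. A minimal sketch of the SQLite side, assuming a single "dreams" table (the layout is an assumption, not the project's documented schema):

```python
# Sketch of a replay store: each render is saved once, then reloaded later
# so the user can expand it. Table layout is hypothetical.
import sqlite3

def init_db(path: str = "dreams.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS dreams (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               prompt TEXT NOT NULL,
               story TEXT,
               image_path TEXT,
               audio_path TEXT,
               created_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return conn

def save_dream(conn, prompt, story, image_path, audio_path):
    conn.execute(
        "INSERT INTO dreams (prompt, story, image_path, audio_path) "
        "VALUES (?, ?, ?, ?)",
        (prompt, story, image_path, audio_path),
    )
    conn.commit()

def load_dream(conn, dream_id):
    # Reload a past dream so it can be expanded with a new prompt.
    return conn.execute(
        "SELECT * FROM dreams WHERE id = ?", (dream_id,)
    ).fetchone()
```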
What we learned
GPT-OSS’s reasoning can be harnessed not just for language, but for coordinating multi-sensory creativity.
Fine-tuning is key for narrative consistency in imagined worlds.
Hardware optimization can make even large models feel real-time in an offline setting (one such technique is sketched below).
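
As an example of the kind of optimization meant here, 4-bit quantization via bitsandbytes lets a ~20B-parameter model fit on a single consumer GPU such as the RTX 4090. Whether DreamWeaver used this exact technique is an assumption (gpt-oss also ships with its own MXFP4-quantized weights):

```python
# Illustrative only: generic 4-bit quantized load with bitsandbytes.
# Not confirmed as DreamWeaver's actual optimization path.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",  # placeholder checkpoint name
    quantization_config=quant,
    device_map="auto",
)
```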
What’s next for DreamWeaver
VR/AR integration so users can step directly into their dreams.
Shared dream archives where communities can merge and remix worlds.
Adding touch/haptic feedback for full sensory immersion.