## Inspiration
I binge the r/NoSleep subreddit for those “read at your own risk” text horrors.
That vibe + jump-scares + modern LLM super-powers = Don’t Scream.
## What it does
You open a chat.
A glitchy AI named ZERO bursts in, begging you to stay quiet—“The Static” is hunting anyone who makes noise inside electronics.
ZERO tries to help, but every question you answer feeds the monster.
### Highlights
- Branching, unscripted dialogue (Gemini agent)
- Real-time voice lines (custom ElevenLabs voice, slowed + echoed)
- Screen glitches & surprise audio jumps triggered by tool calls
- A doomed ending every play-through 😈
## How I built it in 6 hours
| Layer | Tech | Notes |
|---|---|---|
| Rapid prototyping | Bolt | “Vibe-coding” meant I wrote prompts, not boilerplate. |
| Narrative engine | Gemini Pro | Produces JSON commands: `say`, `playSound`, `glitchScreen`. |
| Voice & SFX | ElevenLabs Voice API | Custom erratic voice; params tweaked on the fly. |
| Front-end | React + Tailwind (auto-generated by Bolt) | Bolt scaffolded the UI & state. |
| Deployment | dontscream.xyz | One-click deploy. |
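As an illustration of the narrative-engine layer, the JSON commands above could be parsed with something like this minimal sketch. Only the command names (`say`, `playSound`, `glitchScreen`) come from the table; the payload fields and fallback behavior are assumptions:

```typescript
// Hypothetical command shapes for ZERO's tool calls.
// Field names (text, sound, durationMs) are illustrative assumptions.
type Command =
  | { action: "say"; text: string }
  | { action: "playSound"; sound: string }
  | { action: "glitchScreen"; durationMs: number };

// Parse the model's reply into a list of commands. Gemini is prompted to
// return a JSON array; if parsing fails, treat the raw text as dialogue.
function parseCommands(raw: string): Command[] {
  try {
    const parsed = JSON.parse(raw);
    return Array.isArray(parsed) ? (parsed as Command[]) : [parsed as Command];
  } catch {
    return [{ action: "say", text: raw }];
  }
}
```

The fallback matters in practice: an LLM will occasionally break format mid-scare, and degrading to plain dialogue is less jarring than a visible error.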
## Challenges
- Prompt drift – keeping ZERO scary without letting long conversations drift into incoherence.
- Pacing jump-scares inside a text+audio loop without killing suspense.
- Token budget (~3–4M tokens during dev); Bolt’s hot-reload made it fast to prune wasteful prompts.
## What I learned
- Treat an LLM like a Dungeon Master—give it tools, not scripts.
- ElevenLabs’ per-utterance parameters are a cheat-code for horror SFX.
- Bolt’s rapid “vibe coding” slashes prototype time; a playable build in six hours is no longer a myth.
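To illustrate the per-utterance parameters mentioned above, here is a hedged sketch of building an ElevenLabs text-to-speech request whose voice settings shift with in-game tension. The `voice_settings` fields (`stability`, `similarity_boost`, `style`) exist in the ElevenLabs text-to-speech API, but the voice ID placeholder, model choice, and tuning curve are illustrative; the slow/echo effect was applied separately on the client:

```typescript
// Build a request payload for POST /v1/text-to-speech/{voice_id}.
// `panic` in [0, 1] drives the voice toward instability as tension rises.
function buildTtsRequest(text: string, panic: number) {
  return {
    url: "https://api.elevenlabs.io/v1/text-to-speech/VOICE_ID", // placeholder id
    body: {
      text,
      model_id: "eleven_multilingual_v2",
      voice_settings: {
        stability: Math.max(0, 0.5 - panic * 0.4), // less stable when scared
        similarity_boost: 0.8,
        style: Math.min(1, panic), // more exaggerated delivery at high panic
      },
    },
  };
}
```

Tuning these per utterance, rather than fixing them once per voice, is what lets a single custom voice swing between whispered warnings and full glitch-panic.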
## What’s next
- Branching endings & (slim) survival paths
- Multiplayer “shared haunting” using WebRTC
- Local inference to cut latency and token cost
- Extra personas (friendly AI, malevolent TV, possessed microwave…)
## Built With
- bolt
- chatgpt
- claude
- css3
- elevenlabs
- gemini
- javascript
- netlify
- react
- tailwind
- typescript