✨ Inspiration
I asked myself a curious question:
"What if AI could feel the drama in your voice, like a real friend, and spin it into a theatrical masterpiece?"
In a world full of AI assistants that feel robotic and transactional, I wanted to flip the narrative: instead of answering, this bot would empathize. Instead of chatting, it would create. DramaBot was born out of a desire to combine emotional AI, theatre, and human vulnerability, all in one voice-powered experience.

🧠 What It Does
DramaBot is a voice-first AI companion that listens like a best friend and creates like a playwright. Here's what it does:
🎙️ Takes your voice input — joy, sadness, chaos, heartbreak... bring it on!
💓 Detects emotion intensity using an LLM-based emotional reasoning engine and dynamic keyword weighting.
🤖 Responds empathetically with a voice that reflects emotional tone.
🎬 Switches to 'Story Mode' when it hears you ask for a "script", "scene", or "play".
📝 Writes full theatrical scripts, complete with:
Act/Scene structure
Stage directions
Dialogues infused with the user's emotional context
📥 Allows you to download the script as a .txt file
📊 Shows a live "Drama Juice" meter (a visualization of the emotional score)
💬 Supports continuous voice-based conversation
🔨 How I Built It
Frontend (React + Tailwind + Framer Motion):
VoiceInteraction.jsx handles audio recording with Web APIs.
Uses AudioContext + MediaRecorder for voice streaming.
Displays emotional score via animated visual components like the Drama Juice Meter.
Integrates a memory-friendly Conversation log and PDF export of scripts.
Backend (FastAPI + Python):
/voice/interact: accepts audio and transcribes it via OpenAI Whisper (a sketch of the full flow follows this list).
Emotion is scored via a weighted keyword sum:
Emotion Score = Σ (w_i · e_i), where w_i is a keyword's emotional weight and e_i is its context-inferred intensity.
The LLM generates either:
🎭 Empathetic friend responses
🎭 Theatrical script output, depending on the mode
SQLite session-based memory stores the full chat context for smarter script generation.
Text-to-speech (gTTS) renders AI voice responses.
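To make that flow concrete, here is a minimal sketch of how these pieces could fit together in one FastAPI endpoint. It is an assumption-laden illustration, not DramaBot's actual code: I call the OpenAI Python SDK directly for Whisper and the chat reply (the tech stack below lists GPT-4 via the Groq API), use gTTS for speech, and invent the prompts and cue words; session memory and emotion scoring are left out for brevity.

```python
# Minimal sketch of a /voice/interact-style endpoint, assuming the OpenAI Python
# SDK for transcription + chat and gTTS for speech. Prompts, cue words, and model
# names are illustrative, not DramaBot's actual code.
import tempfile

from fastapi import FastAPI, UploadFile
from fastapi.responses import FileResponse
from gtts import gTTS
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

STORY_CUES = ("script", "scene", "play")  # voice cues that flip Story Mode on


def wants_story_mode(transcript: str) -> bool:
    """Very simple transcript parsing for story-mode cues."""
    text = transcript.lower()
    return any(cue in text for cue in STORY_CUES)


@app.post("/voice/interact")
async def voice_interact(audio: UploadFile):
    # 1. Save the uploaded audio and transcribe it with Whisper.
    with tempfile.NamedTemporaryFile(suffix=".webm", delete=False) as f:
        f.write(await audio.read())
        audio_path = f.name
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        ).text

    # 2. Decide between empathetic chat and theatrical Story Mode.
    mode = "story" if wants_story_mode(transcript) else "chat"
    system_prompt = (
        "You are a warm, supportive human friend. Respond with empathy, not advice."
        if mode == "chat"
        else "You are a playwright. Turn the user's words into a short scene with "
             "act/scene structure, stage directions, and emotionally charged dialogue."
    )

    # 3. Generate the reply (friend response or script) with the LLM.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": transcript},
        ],
    ).choices[0].message.content

    # 4. Render the reply as speech with gTTS and return the audio file.
    with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as out:
        speech_path = out.name
    gTTS(reply).save(speech_path)
    return FileResponse(speech_path, media_type="audio/mpeg")
```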
🌱 What I Learned
Designing voice-first UX needs more than just transcription; emotional flow matters.
LLM prompting is an art. I had to finely tune tone and persona (a real, supportive human friend, not an assistant).
Emotion detection from text is nuanced; I had to build a hybrid model (a small sketch follows this section) using:
Keyword weighting
Contextual cues
LLM-based summarization
Real-time audio UX across browsers is tricky (WebGL + microphone access hurdles).
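To make the hybrid scoring idea concrete, here is a small, self-contained sketch of the weighted sum described earlier, Emotion Score = Σ (w_i · e_i). The keyword weights, the intensity heuristic, and the sample sentences are made-up stand-ins; the real engine leans on LLM-based reasoning for the context-inferred part rather than a regex heuristic.

```python
# Illustrative hybrid emotion scoring: weighted keywords times a context-inferred
# intensity. All weights, heuristics, and example sentences are hypothetical.
import re

# w_i: each keyword's emotional weight (hypothetical values).
KEYWORD_WEIGHTS = {
    "ghosted": 0.9,
    "heartbreak": 0.8,
    "amazing": 0.7,
    "alone": 0.6,
    "tired": 0.4,
}


def infer_intensity(keyword: str, text: str) -> float:
    """e_i: a crude stand-in for context-inferred intensity.

    A real system would ask the LLM to rate intensity from the surrounding
    context; here we just boost repeated or emphasized mentions.
    """
    occurrences = len(re.findall(rf"\b{re.escape(keyword)}\b", text.lower()))
    emphasized = keyword.upper() in text or "!" in text
    return min(1.0, 0.5 * occurrences + (0.3 if emphasized else 0.0))


def emotion_score(text: str) -> float:
    """Emotion Score = sum(w_i * e_i) over keywords present in the text."""
    return sum(
        weight * infer_intensity(keyword, text)
        for keyword, weight in KEYWORD_WEIGHTS.items()
        if keyword in text.lower()
    )


print(round(emotion_score("I just got ghosted. Again."), 2))  # 0.45
print(round(emotion_score("Today was AMAZING!"), 2))          # 0.56
```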
⚔️ Challenges I Faced
WebGL 2 bugs blocked microphone visualization on some browsers.
GitHub secret scanning flagged temporary audio keys, so I had to rewrite the push history to scrub them securely.
Switching between modes (chat vs. story) was hard; I had to implement voice cue detection via transcript parsing.
Making AI feel like a human is one of the hardest UX challenges, from tone and pacing to emotional reactivity.
🏁 Tech Stack
Frontend: React, Tailwind, Framer Motion
Backend: FastAPI, Python, SQLite
AI / LLM: OpenAI GPT-4 (via Groq API)
Speech: Whisper (transcription), gTTS
Hosting: Railway (full-stack deployment)
📌 Try Saying...
"I just got ghosted. Again."
"Let’s write a scene about heartbreak at a bus stop."
"Today was AMAZING!"
DramaBot will listen, reflect, and, if you ask, write a whole dramatic script from it.
🧵 Final Thoughts
DramaBot isn't just a project; it's a creative companion. In a world where AI usually answers with logic, I wanted to bring it closer to feeling. And from there... create art.
