KawaAI - AI-Powered Live2D Virtual Streaming Platform
🎯 Inspiration
We wanted to create a streaming platform that combines the charm of Live2D anime characters with the power of AI, enabling streamers to host engaging, interactive content with virtual companions that respond to viewers in real-time.
🚀 What it does
KawaAI is a live streaming platform featuring AI-powered Live2D characters that:
- Interactive Virtual Hosts: Live2D characters overlaid on streams with smooth animations, breathing, eye blinking, and physics simulation
- AI-Powered Responses: Characters automatically detect questions in chat and respond contextually using AI, staying in character
- YouTube Integration: Stream YouTube videos alongside your Live2D character host
- Real-Time Chat: Built on LiveKit for low-latency, real-time communication
- Character Personalities: Choose from six unique characters, each with a distinct personality and expression set:
  - Haru - Cheerful & Energetic
  - Mao - Cool & Confident
  - Hiyori - Sweet & Gentle
  - Natori - Elegant & Sophisticated
  - Mark - Laid-back Cool Guy
  - Wanko - Playful & Fun
- Mouse Tracking: Characters' eyes and head follow your cursor for immersive interaction
- Expression System: Click on characters to trigger random facial expressions or emotions
- Stream Replays: Rewatch past streams with the original character and chat history
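The mouse-tracking feature above boils down to mapping the cursor position into the normalized drag coordinates that the Cubism sample models consume. A minimal sketch (the `setDragging` call in the usage comment follows the SDK sample's `LAppModel`; the listener wiring shown is illustrative, not our exact component code):

```typescript
// Map a cursor position to the [-1, 1] "drag" coordinates expected by the
// Cubism sample's LAppModel.setDragging(x, y): the canvas center is (0, 0),
// the right edge is x = +1, and the top edge is y = +1.
interface CanvasRect { left: number; top: number; width: number; height: number }

function cursorToDrag(clientX: number, clientY: number, rect: CanvasRect): { x: number; y: number } {
  const x = ((clientX - rect.left) / rect.width) * 2 - 1;
  const y = -(((clientY - rect.top) / rect.height) * 2 - 1); // flip: screen y grows downward
  // Clamp so the eyes don't over-rotate when the cursor leaves the canvas.
  return { x: Math.max(-1, Math.min(1, x)), y: Math.max(-1, Math.min(1, y)) };
}

// Usage inside a React component (hypothetical `model` handle):
// canvas.addEventListener('pointermove', (e) => {
//   const { x, y } = cursorToDrag(e.clientX, e.clientY, canvas.getBoundingClientRect());
//   model.setDragging(x, y); // drives ParamAngleX/Y and ParamEyeBallX/Y
// });
```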
🛠️ How we built it
Frontend (React + TypeScript)
- Live2D Cubism SDK for Web 5 - Rendering animated 2D characters
- React 19 with TypeScript for type-safe component architecture
- Vite for fast development and optimized builds
- Tailwind CSS 4 for modern, responsive UI design
- LiveKit Components React for real-time video and data channels
- Supabase for authentication and database
- React Router for multi-page navigation
Backend (Python Flask)
- Flask with async support via Hypercorn
- LiveKit Python SDK for room management and token generation
- JanitorAI API for intelligent character responses (streaming SSE)
- Supabase Python Client for database operations
- CORS-enabled for cross-origin requests
Architecture Highlights
- Custom WebGL renderer integration for Live2D models
- Singleton framework manager to prevent multiple initializations
- Character-specific positioning system for optimal framing
- Server-Sent Events (SSE) parsing for streaming AI responses
- Real-time message detection with pattern matching
- Automatic AI agent dispatch system
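The SSE-parsing highlight can be sketched as an incremental parser: network chunks may split an event anywhere, so the unfinished tail is carried over between reads. This is a simplified sketch assuming the common `data:` line framing with blank-line event separators; JanitorAI's exact wire format (field names, `[DONE]` sentinels) is not shown here:

```typescript
// Incremental Server-Sent-Events parser. Each call consumes one network
// chunk plus whatever was left over from the previous call, emits the
// `data:` payloads of all complete events, and returns the unfinished
// tail so the caller can feed it back in with the next chunk.
function feedSSE(buffered: string, chunk: string): { events: string[]; rest: string } {
  const text = buffered + chunk;
  const parts = text.split("\n\n");      // events are separated by a blank line
  const rest = parts.pop() ?? "";        // last part may be an incomplete event
  const events: string[] = [];
  for (const part of parts) {
    for (const line of part.split("\n")) {
      if (line.startsWith("data:")) events.push(line.slice(5).trim());
    }
  }
  return { events, rest };
}
```

In the app, a loop over a `fetch` response's `ReadableStream` would decode each chunk and pass it through `feedSSE`, appending the emitted payloads to the character's in-progress reply.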
🎨 Key Technical Achievements
Live2D Integration
We successfully integrated the complex Live2D Cubism SDK into a React application, which required:
- Converting the SDK's vanilla TypeScript demo into React components
- Managing WebGL contexts across component lifecycles
- Implementing proper cleanup to prevent memory leaks
- Creating a singleton framework manager for multiple character instances
- Dynamic character positioning based on model dimensions
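The singleton framework manager mentioned above can be sketched as follows. The real Cubism calls (`CubismFramework.startUp()` / `initialize()` / `dispose()`) are left as comments and replaced by a counter so the sketch is self-contained; the class and method names are illustrative:

```typescript
// Singleton guard around Cubism framework start-up: every character
// component calls ensureStarted() on mount, but the framework itself
// must only ever be initialized once per page.
class Live2DFrameworkManager {
  private static instance: Live2DFrameworkManager | null = null;
  private started = false;
  startUps = 0; // visible for this sketch only; real code has no counter

  static get(): Live2DFrameworkManager {
    return (this.instance ??= new Live2DFrameworkManager());
  }

  ensureStarted(): void {
    if (this.started) return; // second, third, ... component: no-op
    // Real code: CubismFramework.startUp(option); CubismFramework.initialize();
    this.startUps += 1;
    this.started = true;
  }

  // Called only on full page teardown, never on a single component's unmount,
  // so sibling characters keep a live framework.
  dispose(): void {
    // Real code: CubismFramework.dispose();
    this.started = false;
  }
}
```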
AI Response System
- Detects questions in chat using pattern matching (?, what, how, why, etc.)
- Streams AI responses in real-time using SSE parsing
- Maintains character personality throughout conversations
- Prevents duplicate responses with message tracking
- Gracefully handles API errors and timeouts
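The question detection above can be sketched as a small heuristic: a trailing `?` or an interrogative opener triggers a response. The exact word list and rules in the app may differ; this is an illustrative version:

```typescript
// Heuristic question detector: fires on a trailing "?" or on messages
// that open with a common interrogative word. Word list is illustrative.
const INTERROGATIVES = /^(what|how|why|who|where|when|which|can|could|do|does|is|are)\b/i;

function isQuestion(message: string): boolean {
  const text = message.trim();
  if (text.length === 0) return false;
  if (text.endsWith("?")) return true;
  return INTERROGATIVES.test(text);
}
```

Only messages passing this check are forwarded to the AI, which keeps API usage down and stops the character from replying to every line of chat.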
Character Personalization
Each character has:
- Unique positioning adjustments (some models are taller/shorter)
- Individual animation sets and expressions
- Personality-driven AI responses
- Mouse-tracking for realistic eye movement
- Click interactions for expression changes
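The per-character positioning adjustments might look something like the sketch below: derive a scale and vertical offset from the model's logical dimensions so taller rigs are scaled to fit while keeping headroom for the face. The fit rule and the `headroom` parameter are assumptions for illustration, not the app's exact math:

```typescript
// Compute a scale and vertical offset to frame a model in the canvas.
// Model dimensions come from the model's .model3.json canvas info;
// models wider than the viewport are scaled down to fit.
interface ModelSize { width: number; height: number } // Live2D logical units

function frameModel(model: ModelSize, canvasAspect: number, headroom = 0.1): { scale: number; offsetY: number } {
  const modelAspect = model.width / model.height;
  // Fit by the limiting dimension, then shift down slightly for headroom.
  const scale = modelAspect > canvasAspect ? canvasAspect / modelAspect : 1;
  const offsetY = -headroom * scale;
  return { scale, offsetY };
}
```

Each character then only needs a small per-model tweak (e.g. a fixed extra offset for half-body rigs) on top of this default framing.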
🏗️ Challenges we ran into
- Live2D SDK Complexity: The Cubism SDK wasn't designed for React, requiring extensive adaptation of the vanilla TypeScript implementation
- WebGL Context Management: Multiple character instances caused framework re-initialization errors - solved with a singleton pattern
- Streaming API Parsing: JanitorAI returns Server-Sent Events, not standard JSON - required custom SSE parser
- Character Positioning: Each model has different dimensions and anchor points - created dynamic positioning system
- TypeScript Configuration: The SDK required disabling strict module syntax to work with Vite
- Motion Playback: Had to properly initialize eye blink and lip sync parameter IDs for motions to work
- Multi-Component Rendering: Managing 7+ simultaneous Live2D previews without crashes
🏆 Accomplishments that we're proud of
- ✅ Successfully integrated Live2D Cubism SDK into a modern React application
- ✅ Built a working AI chat system that responds contextually and in-character
- ✅ Created a smooth, Netflix-like streaming UI with real-time features
- ✅ Implemented 6 fully functional characters with unique personalities
- ✅ Achieved 60 FPS rendering with transparent backgrounds
- ✅ Built a robust character selection system with live previews
- ✅ Integrated YouTube playback seamlessly with overlay characters
- ✅ Implemented stream replay functionality with full chat history
- ✅ Created dynamic user labeling (You, User 1, User 2, etc.)
📚 What we learned
- Deep understanding of WebGL rendering pipelines and context management
- Server-Sent Events (SSE) parsing for streaming AI responses
- Live2D Cubism SDK architecture and model structure
- React component lifecycle optimization for resource-intensive rendering
- Real-time communication patterns with LiveKit
- Character-based AI prompt engineering
- Handling multiple simultaneous WebGL instances
- Database schema design for flexible content storage
🔮 What's next for KawaAI
Near-term Features
- Lip Sync: Sync character mouth movements with TTS audio output
- Voice Integration: Add text-to-speech for character voices
- Advanced Expressions: Trigger specific expressions based on chat sentiment
- Custom Characters: Allow users to upload their own Live2D models
- Animation Events: Trigger special animations based on chat interactions
- Mobile Support: Touch controls for character interaction
- Monetization: Tips, subscriptions, and virtual gifts for creators
Long-term Vision
- Multi-character Streams: Support multiple characters interacting simultaneously
- Voice Chat Integration: Let characters respond with voice, not just text
- Custom Personality Training: Fine-tune AI responses per character
- Stream Analytics: View metrics, popular clips, and engagement stats
- Social Features: Follow favorite characters, create watch parties
- SDK for Creators: Easy integration for content creators to add their own Live2D models
🎨 Built With
Frontend:
- React 19
- TypeScript
- Vite
- Tailwind CSS 4
- Live2D Cubism SDK for Web 5.r.4
- LiveKit Components React
- Supabase JS Client
- React Router v7
Backend:
- Python 3.12
- Flask with Hypercorn (async)
- LiveKit Python SDK
- JanitorAI API
- Fish Audio API
- Supabase Python Client
- HTTPX for async HTTP requests
Infrastructure:
- Supabase (Auth, Database, Real-time)
- LiveKit (WebRTC, Real-time Communication)
- JanitorAI (LLM API)
