Inspiration

We asked: What if your favorite anime characters could hang out in the same chat room and actually talk to each other? Inspired by the distinct voices of characters like Saitama, Light Yagami, and Sasuke Uchiha, we built a space where AI doesn't just reply; it participates.

What it does

  • Real-time multiplayer chat where users and AI anime personas converse together.
  • Users create rooms, pick characters, and watch bots jump in based on context, mentions, and moral dilemmas.
  • “Good vs Evil” debate mode where aligned bots argue before synthesizing an answer.
  • Name-mention detection (partial names like “Sasuke” trigger replies).
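The partial-name matching above can be sketched in a few lines. This is an illustrative version, not the project's actual detector; the function name and the 3-character threshold mirror the description.

```python
import re

def mentioned_bots(message: str, bot_names: list[str], min_len: int = 3) -> list[str]:
    """Return bots whose name, or any name part of at least min_len
    characters, appears as a word in the message."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    hits = []
    for name in bot_names:
        # Split "Sasuke Uchiha" into ["sasuke", "uchiha"] so partial
        # mentions like "Sasuke" still trigger a reply.
        parts = [p for p in name.lower().split() if len(p) >= min_len]
        if any(p in words for p in parts):
            hits.append(name)
    return hits

# A partial mention like "Sasuke" matches "Sasuke Uchiha".
print(mentioned_bots("I think Sasuke would disagree", ["Sasuke Uchiha", "Saitama"]))
# → ['Sasuke Uchiha']
```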

How we built it

Frontend

  • React + Vite, Tailwind, React Router
  • WebSockets for live, bidirectional updates

Backend

  • FastAPI + Uvicorn; WebSocket hub for multi-user rooms
  • LLM integration (OpenAI/compatible) for character-specific outputs

AI Architecture

  • Persona prompts in bot_personas.py
  • Orchestrator with should_bot_respond(), moral-dilemma detection, cooldowns/limits
  • Lightweight memory for user facts
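The gating logic behind should_bot_respond() could look something like the sketch below. The cooldown value, streak cap, dilemma keywords, and the bot/room field names are all illustrative assumptions, not the actual orchestrator internals.

```python
# Illustrative gating sketch for when a bot is allowed to speak.
import time

COOLDOWN_SECONDS = 20   # per-bot cooldown (assumed value)
MAX_BOT_STREAK = 2      # consecutive bot messages before bots go quiet

DILEMMA_HINTS = ("should i", "is it wrong", "right or wrong", "moral")

def is_moral_dilemma(message: str) -> bool:
    text = message.lower()
    return any(hint in text for hint in DILEMMA_HINTS)

def should_bot_respond(bot, message: str, room, now=None) -> bool:
    now = now or time.time()
    if now - room.last_reply_at.get(bot.name, 0) < COOLDOWN_SECONDS:
        return False                      # still cooling down
    if room.consecutive_bot_messages >= MAX_BOT_STREAK:
        return False                      # don't let bots spam each other
    if bot.name.lower() in message.lower():
        return True                       # direct mention always wins
    return is_moral_dilemma(message)      # otherwise only join on dilemmas
```

The key design choice is that the checks that *suppress* a reply (cooldowns, streak caps) run before the checks that *trigger* one, so autonomy stays bounded.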

High-level flow

React (Room/Chat) ⇄ WebSocket/REST ⇄ FastAPI (ConnMgr, RoomState, Orchestrator) ⇄ LLM

Challenges we ran into

  1. “Wrong Bot” bug: Saitama always appeared. Fix: Send authoritative room_state on connect; frontend hydrates from it.
  2. Python env issues (ModuleNotFoundError: uvicorn). Fix: Virtualenv + pinned requirements.
  3. CORS mismatches (ports 5173 vs 5174). Fix: Allow both localhost/127.0.0.1 ports.
  4. Name mention detection too strict. Fix: Split names; match parts ≥3 chars.
  5. LLM output inconsistency. Fix: Robust fallbacks: JSON→content→raw→default.
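The JSON→content→raw→default fallback chain from item 5 can be sketched as follows; the function name and default string are illustrative.

```python
# Layered fallback for inconsistent LLM output (sketch; names are illustrative).
import json

DEFAULT_REPLY = "..."  # placeholder; a real default would be in-character

def extract_reply(raw: str) -> str:
    # 1. Ideal case: the model returned a JSON object with a "content" field.
    try:
        data = json.loads(raw)
        if isinstance(data, dict) and data.get("content"):
            return str(data["content"]).strip()
    except (json.JSONDecodeError, TypeError):
        pass
    # 2. Fallback: any non-empty raw text is better than nothing.
    if raw and raw.strip():
        return raw.strip()
    # 3. Last resort: a safe default so the bot never sends an empty message.
    return DEFAULT_REPLY
```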

Accomplishments that we’re proud of

  • Bots that feel “in-character” and join organically instead of spamming.
  • Smooth, truly real-time multi-user rooms with synchronized bot add/remove.
  • Moral-dilemma debate mode that yields richer, balanced answers.
  • Full CRT effect using CSS only (no JS, minimal perf cost).
  • Guardrails (cooldowns, message caps, context checks) for stable autonomy.

What we learned

  • Prompt engineering is iterative: tight prompts keep voice without rigidity.
  • WebSocket state must be backend-authoritative; hydrate clients on connect.
  • Autonomy needs constraints: knowing when not to speak matters.
  • CSS can carry aesthetics (scanlines, flicker, vignette) with negligible overhead.

What’s next for AIRA (AI Room Arena)

  • Character voice messages with TTS/voice cloning
  • Reactive sprite animations and emotions
  • Battle/Debate mode with user scoring and ladders
  • Custom character creator for user-defined personas
  • Mobile optimization for small screens and touch gestures

Built With

  • asgi
  • css3
  • fastapi
  • git
  • html5
  • httpx
  • javascript-(es6+)
  • jinja
  • json
  • npm
  • openai-api-(llm)
  • pip
  • postcss
  • pydantic
  • pytest
  • python-3.13
  • react-router-dom
  • react.js
  • rest-api
  • tailwind-css
  • tenacity
  • uvicorn
  • venv
  • vite
  • websockets