Inspiration
Studying is usually lonely, repetitive, and easy to put off. Our team wanted to make studying feel like a multiplayer game night: something you actually look forward to. We were inspired by real-time party games like skribbl.io and Jackbox, and asked: what if studying felt like that, but was personalized to your actual lecture material? That became Study Royale, a competitive study arena powered by AI.
What it does
Study Royale transforms any uploaded lecture material (PDFs, notes, slides) into a custom, real-time multiplayer study game.
- A host uploads lecture content to start a session
- AI generates class-specific free-response questions
- Players join with a game code and compete in timed rounds
- Players solve on paper, then upload a photo of their answer
- AI extracts text + grades it instantly
- Scores update live with a leaderboard and chat

It turns studying into competitive practice: faster, more engaging, and more effective.
How we built it
Study Royale was implemented using a modular full-stack architecture designed for real-time interaction and AI-driven content generation:
Backend
- Python FastAPI for REST endpoints (game creation/joining, session start, uploads, submissions)
- Pydantic models for strict request/response validation and structured game objects
- In-memory game engine for managing: lobby state, players, rounds, active problem sets, chat messages, and scoring
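The in-memory engine described above can be sketched roughly as follows. All names here (`Game`, `Player`, `GAMES`, `create_game`, `join_game`) are illustrative assumptions, not the actual implementation: the key idea is a single server-side dict mapping game codes to mutable session state.

```python
# Minimal sketch of an in-memory game engine: one server-side dict
# of game code -> Game holds lobby state, players, rounds, chat, and scores.
from dataclasses import dataclass, field

@dataclass
class Player:
    name: str
    score: int = 0

@dataclass
class Game:
    code: str
    players: dict = field(default_factory=dict)   # player_id -> Player
    rounds: list = field(default_factory=list)    # normalized round objects
    chat: list = field(default_factory=list)      # chat messages
    current_round: int = -1                       # -1 = still in lobby
    started: bool = False

GAMES: dict = {}  # game code -> Game (server-side source of truth)

def create_game(code: str) -> Game:
    GAMES[code] = Game(code=code)
    return GAMES[code]

def join_game(code: str, player_id: str, name: str) -> Player:
    game = GAMES[code]
    if game.started:
        raise ValueError("game already started")
    game.players[player_id] = Player(name=name)
    return game.players[player_id]
```

Because the whole session lives in one process, FastAPI endpoints and Socket.IO handlers can mutate the same objects without a database round-trip (at the cost of durability, which is why persistent storage is on the roadmap below).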
Real-time communication
- Socket.IO for multiplayer state synchronization (player events, game start, round transitions, live updates)
AI pipelines
- Google Gemini API used for OCR and text extraction:
  - converting uploaded lecture documents into structured text
  - converting handwritten submissions into machine-readable answers
- OpenNote API used for:
  - generating practice problems aligned with extracted lecture content
  - supporting grading workflows (where applicable)
Frontend
- React + Vite for a low-latency interactive client
- File upload handling for study material and answer submissions
- Live UI updates coordinated with polling + Socket.IO events
Challenges we ran into
Data normalization between AI outputs and the game engine: AI-generated problems required transformation into consistent round objects (problem statements, metadata, identifiers, and grading fields).
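A normalization layer like the one described might look like this sketch (field names such as `statement` and `reference_answer` are assumptions about the round-object schema, not the real one):

```python
# Sketch: coerce one raw AI-generated problem (whose keys vary between
# generations) into the consistent round object the game engine expects.
import uuid

def normalize_problem(raw: dict) -> dict:
    return {
        "id": raw.get("id") or str(uuid.uuid4()),            # stable identifier
        "statement": (raw.get("question") or raw.get("statement") or "").strip(),
        "reference_answer": raw.get("answer", ""),           # grading field
        "max_points": int(raw.get("points", 10)),            # default weight
    }
```

Forcing every problem through one function like this means the game engine never has to special-case whichever key names the model happened to emit.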
Multiplayer state coordination: Ensuring round advancement, current problem state, and score updates stayed synchronized across multiple clients required careful server-side authority and consistent event sequencing.
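The server-authoritative pattern can be illustrated with a small controller (names are hypothetical): only the server advances rounds, and every broadcast carries a monotonically increasing sequence number so clients can discard stale or out-of-order events.

```python
# Sketch: server-side authority over round state. Clients render what
# the server broadcasts and ignore any event with seq <= last seen.

class RoundController:
    def __init__(self, problems: list):
        self.problems = problems
        self.index = -1   # no round active yet
        self.seq = 0      # monotonically increasing event number

    def advance(self):
        """Move to the next round; returns the event to broadcast, or None when done."""
        if self.index + 1 >= len(self.problems):
            return None
        self.index += 1
        self.seq += 1
        return {
            "event": "round_start",
            "seq": self.seq,
            "round": self.index,
            "problem": self.problems[self.index],
        }
```

In this design clients never compute "next round" themselves, which is the consistent-event-sequencing discipline the paragraph above refers to.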
OCR reliability: Handwriting introduces variance in OCR accuracy; grading needed to handle incomplete or noisy OCR output and still provide meaningful evaluation.
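One way to stay robust under noisy OCR, sketched here with stdlib fuzzy matching rather than the project's actual AI-based grader, is to normalize away punctuation and case before comparing, then award credit on a similarity scale instead of exact equality (the thresholds below are illustrative):

```python
# Sketch: noise-tolerant grading. Normalizes OCR output and the
# reference answer, then scores by string similarity with thresholds
# for full and partial credit (threshold values are assumptions).
import difflib
import re

def grade(ocr_text: str, reference: str, max_points: int = 10,
          full_credit: float = 0.85, partial: float = 0.6) -> int:
    def norm(s: str) -> str:
        return re.sub(r"[^a-z0-9]+", " ", s.lower()).strip()

    ratio = difflib.SequenceMatcher(None, norm(ocr_text), norm(reference)).ratio()
    if ratio >= full_credit:
        return max_points
    if ratio >= partial:
        return max_points // 2   # partial credit for near misses
    return 0
```

The point is that spacing, stray symbols, and case differences introduced by OCR no longer cost the player points.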
API quota limits: Gemini rate limits required prompt optimization, controlled request frequency, and reattempt handling to maintain generation reliability.
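The reattempt handling mentioned above is typically exponential backoff with jitter; here is a generic sketch (the helper name and parameters are illustrative, not the project's actual code):

```python
# Sketch: retry a flaky API call with exponential backoff plus jitter,
# so bursts of requests back off instead of hammering the quota.
import random
import time

def with_retries(call, max_attempts: int = 4, base_delay: float = 1.0,
                 sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # 1s, 2s, 4s, ... plus a little jitter to de-synchronize clients
            sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

Passing `sleep` in makes the backoff testable without real waiting; in production the default `time.sleep` applies.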
Accomplishments that we're proud of
Delivered a functioning real-time multiplayer gameplay loop with lobby → session start → multi-round flow.
Successfully integrated an AI pipeline that generates course-specific problems from uploaded lecture content.
Implemented answer submission via image upload and produced AI-based evaluation and scoring outputs.
Built a complete end-to-end system: upload → extract → generate → play → submit → grade → leaderboard.
Maintained modular separation between core components (game engine, AI generation, grading pipeline, and transport layer).
What we learned
Multiplayer applications benefit from strict server-authoritative state control to avoid client desynchronization.
AI is most effective when treated as an external service producing structured artifacts, requiring strong schema enforcement and normalization layers.
OCR-based grading is less about perfect extraction and more about designing evaluation logic that remains reliable under noisy inputs.
Successful hackathon engineering depends heavily on modular architecture and parallel team development with stable APIs/contracts between components.
What's next for Study Royale
Implement persistent storage (e.g., Redis/PostgreSQL) for durable game sessions, reusable study packs, and analytics.
Expand grading robustness with partial credit, rubric-based evaluation, confidence thresholds, and error-tolerant matching.
Support additional question formats (multi-step derivations, diagrams, coding questions, structured proofs).
Add authentication and user profiles to support saved sessions, progress tracking, and personalized difficulty scaling.
Deploy to a public cloud environment and improve system scalability for concurrent games (multi-instance backend + shared state layer).
And more game modes!
Built With
- fastapi
- feynman
- gemini-api
- javascript
- opennoteapi
- python
- react
- socket.io
- vite