Secondary screen UI demo (the first "Try it out" link): https://youtu.be/a1FuEibcnd8
Flame Finder is an asymmetric ritual for two roles: a lone camper in VR (Roblox) and a companion on a secondary screen who helps reveal the meaning behind each beat. Across three tasks it takes aim at loneliness and self-acceptance: finding worth in imperfection, naming something difficult honestly, then choosing to share warmth so someone else isn't left alone in the dark.
Inspiration
We wanted to build a multiplayer story game that pairs a Roblox VR experience with a companion screen, giving the "lone camper" a path to self-discovery (self-acceptance, vulnerability, shared warmth) through a series of quests, with reflections at the end that relight their inner flame.
What it does
Roles
- VR camper (Roblox): moves through a dark campsite (physical triggers, snapping objects, carrying a torch) while short overhead lines reinforce the emotional beats.
- Secondary player (Next.js UI): advances the narrative by completing reflections tied to each task once Roblox has triggered that task on the shared backend.

Shared rules
- Every task moves waiting → triggered → completed via a FastAPI /flow API, so both sides stay in sync.
- Complete only works after trigger, which keeps cause-and-effect obvious across VR + UI demos.
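As a sketch of those shared rules (the names here are illustrative, not the shipped code), each task is a tiny state machine where complete is refused until trigger has happened:

```python
from enum import Enum


class TaskState(str, Enum):
    WAITING = "waiting"
    TRIGGERED = "triggered"
    COMPLETED = "completed"


class Task:
    """One story beat shared by the VR camper and the secondary player."""

    def __init__(self) -> None:
        self.state = TaskState.WAITING

    def trigger(self) -> None:
        # Roblox fires this when the physical interaction happens in VR.
        if self.state is TaskState.WAITING:
            self.state = TaskState.TRIGGERED

    def complete(self) -> bool:
        # The secondary UI can only complete a task VR has already triggered,
        # which keeps cause-and-effect obvious across both screens.
        if self.state is not TaskState.TRIGGERED:
            return False
        self.state = TaskState.COMPLETED
        return True
```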
How we built it
- Backend (backend/main.py): FastAPI with in-memory flow state per task (waiting / triggered / completed), a /flow read endpoint, /flow/reset, and per-task trigger + complete endpoints (task 2's complete accepts a choice body).
- Secondary UI (frontend/): Next.js + React + Tailwind, calling the API for triggers and completions as appropriate for each beat.
- Roblox: server scripts bind VR triggers (hands vs. torch-by-tag), poll /flow for transitions, drive highlights, the spirit beat, torch placement, and task-3 effects, and push concise status to a replicated UI channel where helpful.
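For a sense of the backend shape, here is a minimal FastAPI sketch. The exact route paths are assumptions, and the optional choice body is generalized to every task for brevity (the shipped backend/main.py only reads a choice on task 2 and may name things differently):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory flow state, one entry per story task.
TASKS = {1: "waiting", 2: "waiting", 3: "waiting"}


class Choice(BaseModel):
    choice: str  # the reflection choice carried in the request body


@app.get("/flow")
def get_flow():
    """Both clients poll this to stay in sync."""
    return TASKS


@app.post("/flow/reset")
def reset_flow():
    for task_id in TASKS:
        TASKS[task_id] = "waiting"
    return TASKS


@app.post("/flow/{task_id}/trigger")
def trigger(task_id: int):
    # Roblox calls this when the physical interaction fires.
    if task_id not in TASKS:
        raise HTTPException(404, "unknown task")
    if TASKS[task_id] == "waiting":
        TASKS[task_id] = "triggered"
    return {"task": task_id, "state": TASKS[task_id]}


@app.post("/flow/{task_id}/complete")
def complete(task_id: int, body: Choice | None = None):
    # The secondary UI calls this; complete only works after trigger.
    if task_id not in TASKS:
        raise HTTPException(404, "unknown task")
    if TASKS[task_id] != "triggered":
        raise HTTPException(409, "task not triggered yet")
    TASKS[task_id] = "completed"
    return {
        "task": task_id,
        "state": TASKS[task_id],
        "choice": body.choice if body else None,
    }
```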
Challenges we ran into
- Aligning physical interactions (debounced touches, snap completion, tagged torch collisions) with the API semantics (trigger before complete) so demos didn't race or deadlock.
- Keeping billboard copy readable in-headset: short lines beat paragraphs when labels wrap in VR.
- Making cross-platform state understandable during judging (a single shared flow vs. scattered booleans).
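The debounce half of that first challenge is a small pattern. The shipped version lives in Roblox (Luau) server scripts; here is the same idea sketched in Python:

```python
import time


class Debounce:
    """Ignore repeat firings of a physical trigger within a short window."""

    def __init__(self, window_seconds: float = 1.0) -> None:
        self.window = window_seconds
        self.last_fired = 0.0

    def should_fire(self) -> bool:
        now = time.monotonic()
        if now - self.last_fired < self.window:
            return False  # still inside the debounce window; drop this touch
        self.last_fired = now
        return True
```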
Accomplishments that we're proud of
Getting the Roblox VR experience and a separate web UI talking to each other, so game tasks complete in sync across both.
What we learned
- A small finite-state flow per task beats ad hoc flags when two clients share one story.
- VR UX: constrain copy length and pacing; poll intervals affect how "instant" reactions feel (see the polling sketch below).
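On the poll-interval point, this is a sketch of the client-side loop; the base URL and interval are illustrative, not the shipped values:

```python
import time

import requests

BASE_URL = "http://localhost:8000"  # assumption: wherever the FastAPI app runs
POLL_SECONDS = 0.5  # shorter intervals feel more "instant" but send more requests


def watch_flow() -> None:
    last = None
    while True:
        state = requests.get(f"{BASE_URL}/flow", timeout=5).json()
        if state != last:
            print("flow changed:", state)  # drive highlights/UI updates here
            last = state
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    watch_flow()
```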
What's next for Flame Finder
Richer secondary prompts per choice, stronger tutorialization for judges, and optional session/run IDs if we scale beyond single-demo instances. More environmental storytelling and polish passes on torch and flame readability.

Try it out
- Secondary screen UI demo: https://youtu.be/a1FuEibcnd8
- Roblox experience: PLACE LINK
- Backend base URL: public URL
One-line pitch
An asymmetric VR + tablet ritual where imperfection, honesty, and shared warmth advance together, because nobody finishes the fire alone.
Built With
- antigravity
- cursor
- fastapi
- metaquest2
- nextjs
- roblox
- robloxstudio


