Inspiration
We wanted play to spill out of apps and back into the real world. Everyone already photographs objects around them; we asked what would happen if those snapshots started games instead of ending in a camera roll. Lootr treats the environment as raw material: scan something, get a tiny arcade moment tuned to what you pointed at, then leave that moment on the map for someone else to stumble into. The spark was simple: make ordinary objects feel like power-ups.
How we built it
We ship a React Native (Expo) client that captures an image, runs lightweight object understanding (with a sensible fallback path when detection is uncertain), and makes one structured LLM call (Groq) per scan. The model does not invent arbitrary rules; it selects from a fixed family of mini-game modes (for example dodge, catch, timing) and returns parameters our engines already understand. That keeps gameplay predictable for a demo while still feeling personal per object.
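The "select from a fixed family, return parameters we understand" rule can be sketched as boundary validation. The mode list, field names, and ranges below are illustrative assumptions, not the real Lootr schema; in the app this role is played by a zod schema, but the idea is the same: a model response either parses into something an engine accepts, or it is rejected before it can touch gameplay.

```javascript
// Validate a structured LLM response before it reaches a game engine.
// GAME_MODES and the speed range are illustrative assumptions.
const GAME_MODES = ["dodge", "catch", "timing"];

function validateScanResponse(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return null; // malformed JSON never reaches the arcade layer
  }
  if (!GAME_MODES.includes(parsed.mode)) return null;
  const speed = Number(parsed.speed);
  if (!Number.isFinite(speed) || speed < 0.5 || speed > 3) return null;
  // Only fields the engines already understand survive validation.
  return { mode: parsed.mode, speed };
}
```

Returning `null` rather than throwing keeps the caller's fallback path simple: a bad response is just another reason to launch the default mode.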
A Node.js + Express backend persists sessions and map placements using Neon (Postgres). Discovery lives on ArcGIS map views, so drops have a real geographic story. After a run, players can leave quick feedback; we tie that signal into evaluation so a thumbs-up or thumbs-down actually closes the loop on quality over time.
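To make the "persist map placements" step concrete, here is a hypothetical sketch of how a drop could be shaped into a parameterized Postgres insert. The table and column names are assumptions for illustration, not the real Lootr schema; the point is that coordinate validation and parameterized values sit at the boundary, so bad input never reaches the database as raw SQL.

```javascript
// Build a parameterized INSERT for a map drop. Table/column names are
// illustrative assumptions; a driver like node-postgres would accept
// this { text, values } shape directly.
function buildDropInsert(drop) {
  if (typeof drop.lat !== "number" || drop.lat < -90 || drop.lat > 90) {
    throw new RangeError("invalid latitude");
  }
  if (typeof drop.lng !== "number" || drop.lng < -180 || drop.lng > 180) {
    throw new RangeError("invalid longitude");
  }
  return {
    text: "INSERT INTO drops (session_id, mode, lat, lng) VALUES ($1, $2, $3, $4) RETURNING id",
    values: [drop.sessionId, drop.mode, drop.lat, drop.lng],
  };
}
```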
For a crisp mental model of the latency budget we cared about during integration, the end-to-end pipeline time decomposes roughly as:
$$ T_{\text{e2e}} \approx T_{\text{capture}} + T_{\text{detection}} + T_{\text{LLM}} + T_{\text{hydrate}} $$
where $T_{\text{hydrate}}$ is the time to launch the chosen engine with validated JSON. Cutting duplicate LLM calls was a deliberate product rule: one model response per scan keeps the interaction snappy and easier to reason about.
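The decomposition above is just a sum of stage timings, which makes it easy to instrument. A minimal sketch, where the 2000 ms budget is an illustrative number rather than a measured Lootr requirement:

```javascript
// Sum per-stage timings (in ms) per the T_e2e decomposition and compare
// against a budget. Stage names mirror the formula; the budget is an
// illustrative assumption.
const STAGES = ["capture", "detection", "llm", "hydrate"];

function e2eLatency(timingsMs) {
  return STAGES.reduce((total, stage) => total + (timingsMs[stage] ?? 0), 0);
}

function withinBudget(timingsMs, budgetMs = 2000) {
  return e2eLatency(timingsMs) <= budgetMs;
}
```

A helper like this pays off in integration testing: each stage reports its own duration, and one assertion catches whichever stage regressed.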
What we learned
Shipping a playful loop forced us to separate “creative AI” from “correct gameplay.” Constraining outputs to a schema and a fixed game library turned out to be a feature: fewer surprise failures, faster iteration on UI and feel. We also learned how much map UX matters; placing a drop is emotionally different from publishing a score. Finally, threading database, maps, and model APIs taught us to invest early in validation at boundaries so bad JSON never crashes the arcade layer.
Challenges we faced
- Keeping one scan end-to-end reliable: cameras, networking, and models all fail independently; we leaned on validation, fallbacks for game type, and ruthless scope control (no auth rabbit holes, no multiplayer complexity) so the demo path stayed solid.
- Making AI feel instant: structured outputs and a single round trip helped, but we still had to tune timeouts, loading states, and error copy so failure modes felt honest instead of broken.
- Maps plus gameplay: balancing performance, permissions, and a clear mental model of “my drop vs the world” took iteration; geography adds real edge cases compared to a purely local toy demo.
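The "fallback for game type" rule from the first challenge can be sketched as a small pure function: if detection confidence is low, or the model's pick falls outside the fixed library, launch a safe default instead of failing. The threshold and default mode here are assumptions for illustration.

```javascript
// Pick a game mode with a safe fallback. The confidence threshold and
// default mode are illustrative assumptions, not Lootr's tuned values.
const KNOWN_MODES = new Set(["dodge", "catch", "timing"]);
const DEFAULT_MODE = "timing";
const MIN_CONFIDENCE = 0.4;

function pickGameMode(modelMode, detectionConfidence) {
  // Uncertain detection: ignore the model's pick entirely.
  if (detectionConfidence < MIN_CONFIDENCE) return DEFAULT_MODE;
  // Confident detection: honor the pick only if an engine exists for it.
  return KNOWN_MODES.has(modelMode) ? modelMode : DEFAULT_MODE;
}
```

Keeping this decision in one pure function is what makes the demo path easy to trust: every failure upstream (camera, network, model) collapses to the same, playable default.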
Built With
- axios
- esri
- expo.io
- express.js
- groq
- javascript
- jest
- neon
- node.js
- postgresql
- react-native
- zod
- zustand