Inspiration
We are four international students at Arizona State University, each of us thousands of miles from home, navigating a country where a single missed insurance form can spiral into thousands of dollars of debt. We come from different corners of the world and different fields of engineering, but we all shared the same quiet fear when we first arrived: what happens if something goes wrong and I have no idea what I am covered for?

That fear is not unique to us. Millions of people in underserved and underrepresented communities across the United States face the same wall every single day: insurance documents written in dense legal language, benefit structures that take years to understand, and systems that seem designed for people who already know how they work.

We built InsureScope because we believe protection should never be a privilege. And we built it the way we wished something had existed when we first arrived and felt completely alone in a system we did not understand.
What it does
InsureScope turns your device camera into an insurance advisor. Point it at any room and the app instantly identifies your belongings using real-time AI object detection, then overlays color-coded coverage information directly on the live camera feed:
- **Green**: the item is covered by your current policy
- **Yellow**: it is partially covered, below your deductible or hitting a sub-limit
- **Red**: it is completely unprotected
A dashboard underneath aggregates the total value of everything in the room, breaks down what is at risk, and gives you a clear picture of your coverage gaps in plain language. You can also add items manually, switch between renters, homeowners, and condo policy types, and get AI-powered natural language explanations of exactly why something is or is not covered. If you want to go deeper, an optional Gemini-powered assistant lets you ask questions about your coverage in plain English.
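The green/yellow/red logic above can be sketched as a small pure function. This is a hypothetical reconstruction for illustration, not InsureScope's actual rule engine; the field names (`deductible`, `subLimit`) and decision order are our assumptions.

```typescript
// Hypothetical sketch of the color-coded coverage check described above.
// Field names and thresholds are illustrative, not the app's real engine.

type CoverageStatus = "green" | "yellow" | "red";

interface CoverageRule {
  covered: boolean;   // does the policy cover this category at all?
  deductible: number; // policy deductible in dollars
  subLimit?: number;  // optional per-category payout cap
}

function coverageStatus(itemValue: number, rule: CoverageRule): CoverageStatus {
  if (!rule.covered) return "red";                   // completely unprotected
  if (itemValue <= rule.deductible) return "yellow"; // below the deductible
  if (rule.subLimit !== undefined && itemValue > rule.subLimit) {
    return "yellow";                                 // value exceeds the sub-limit
  }
  return "green";                                    // fully covered
}
```

A dashboard total is then just a sum over every detected item's value, grouped by status.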
How we built it
InsureScope is built around one core belief: understanding your coverage should feel as immediate and visual as looking around the room you are standing in.

The pipeline starts with your device camera. We used MediaPipe Tasks Vision for fast, lightweight object detection running directly in the browser, with ONNX Runtime Web as an additional inference layer. Every detected object gets mapped to a local coverage rule engine powered by a JSON policy database, and the result is a color-coded canvas overlay rendered on the live camera feed in real time.

The frontend is React 19 with TypeScript 5, built with Vite 6 and styled using Tailwind CSS v4. We used Framer Motion for animations and Lucide React for iconography. Biome handles linting and formatting. For AI-powered natural language coverage explanations, we integrated the Gemini API directly via fetch, surfaced through an optional "Ask about coverage" button that only appears when an API key is configured.
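The step where a detected object is mapped to the local rule engine might look roughly like this. The policy database shape, category names, and dollar values here are invented for illustration; the real JSON schema is not shown in this write-up.

```typescript
// Illustrative sketch: map a detected object label to a coverage rule
// pulled from a JSON policy database. Shape and values are hypothetical.

type PolicyType = "renters" | "homeowners" | "condo";

interface Rule {
  category: string;
  subLimit: number | null; // null means no per-category cap
}

// A tiny stand-in for the JSON policy database.
const policyDb: Record<PolicyType, Record<string, Rule>> = {
  renters: {
    laptop: { category: "electronics", subLimit: 1500 },
    couch: { category: "furniture", subLimit: null },
  },
  homeowners: {
    laptop: { category: "electronics", subLimit: 2500 },
    couch: { category: "furniture", subLimit: null },
  },
  condo: {
    laptop: { category: "electronics", subLimit: 2000 },
    couch: { category: "furniture", subLimit: null },
  },
};

// Look up the rule for a detected label; undefined means "unknown object".
function ruleFor(policy: PolicyType, label: string): Rule | undefined {
  return policyDb[policy][label.toLowerCase()];
}
```

Keeping this lookup local is what lets the overlay stay real-time: no network round trip sits between detection and the colored box on screen.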
Challenges we ran into
The hardest technical challenge was getting real-time object detection to run smoothly in the browser. We worked with both MediaPipe Tasks Vision and ONNX Runtime Web, and keeping the canvas overlay perfectly in sync with the live video stream without flickering or dropped frames took far more iteration than we expected.

Modeling the actual complexity of insurance coverage was also genuinely difficult. Every policy type covers things differently, applies different sub-limits and deductibles, and uses different depreciation logic across categories. Building a rule engine flexible enough to handle renters, homeowners, and condo policies cleanly took real thought.

We migrated the entire codebase from JavaScript to TypeScript mid-hackathon, following a strict dependency order from leaf utilities up through hooks, context, and components. It cost us time upfront but made the later stages of development significantly more stable.

And underneath all of it was the harder, quieter challenge: building something for people whose relationship with insurance is defined by confusion and distrust, and making sure every design decision reflected that responsibility honestly.
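To give a feel for the category-specific depreciation logic mentioned above, here is a minimal straight-line sketch. The per-category rates, the default rate, and the zero floor are all assumptions we made for illustration; real policies use far more nuanced tables.

```typescript
// Hypothetical straight-line depreciation with per-category annual rates.
// All rates here are invented for illustration.

const annualDepreciationRate: Record<string, number> = {
  electronics: 0.2, // loses 20% of original value per year
  furniture: 0.1,
  jewelry: 0.0,     // often valued at replacement cost
};

function depreciatedValue(
  originalValue: number,
  category: string,
  ageYears: number
): number {
  const rate = annualDepreciationRate[category] ?? 0.15; // assumed default rate
  const remaining = 1 - rate * ageYears;
  return Math.max(originalValue * remaining, 0); // value never goes negative
}
```

Even this toy version shows why the rule engine took real thought: the same laptop produces a different at-risk number under each policy type and each year of ownership.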
Accomplishments that we're proud of
We shipped a fully working AR insurance visualizer in one weekend. Point a camera at a room and within seconds you see exactly what is and is not covered, with real dollar estimates attached to each gap. That moment when it first worked properly is one we will not forget.

We are proud of the technical depth underneath what looks like a simple UI. Two object detection runtimes, a full TypeScript migration, a local rule engine covering three policy types across 80 object classes, end-to-end tests, and a Gemini integration, all shipped in a single hackathon weekend by four people who had never built together before.

We are proud that the optional AI layer is truly optional. InsureScope works completely without the Gemini API. The button simply does not appear if no key is configured, which means the core experience never depends on a cloud call to function.

And we are proud that four people from completely different engineering backgrounds, across different continents, built something coherent and complete together under a clock that never stopped.
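The "truly optional" AI layer comes down to a simple gate. This sketch uses a hypothetical config shape; the actual wiring in InsureScope is not shown here.

```typescript
// Sketch of the optional-AI gate: the "Ask about coverage" button only
// renders when a non-empty key is configured. Config shape is hypothetical.

interface AppConfig {
  geminiApiKey?: string;
}

// The core app never depends on this returning true; the AI layer is
// purely additive.
function showAskCoverageButton(config: AppConfig): boolean {
  return (
    typeof config.geminiApiKey === "string" &&
    config.geminiApiKey.trim().length > 0
  );
}
```

In the React tree this would be an ordinary conditional render, so with no key configured the cloud path is never even reachable.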
What we learned
We learned that in-browser machine learning is genuinely ready for real applications. MediaPipe and ONNX Runtime Web gave us fast, capable inference without any server, and the gap between what runs in a browser and what runs in the cloud has closed more than we realized.

We learned what it really means to build for trust. Insurance is a space where people have been burned, confused, and ignored for decades. Every interaction in InsureScope had to earn the user's confidence, not assume it.

We learned how to build as a team across radically different engineering mindsets. Systems thinking, physical constraints, and software architecture do not speak the same language naturally. Learning to translate between those perspectives, and seeing how each one made the product sharper, was one of the most valuable things that came out of this weekend.

We also learned that Biome is a genuinely great alternative to ESLint for fast, opinionated TypeScript projects. And that strict TypeScript from the start is not overhead. It is a gift to your future self at 2am when something breaks.
What's next for InsureScope
The AR layer is just the beginning. We want to move from detection to prediction, using historical claim data to surface not just what is uncovered today but what is most likely to cost someone in the next year based on where they live and what they own. We want to expand the policy engine beyond the three base types and explore integrations with real insurance provider data so the coverage rules reflect what a user actually has, not just a category estimate. We want to add multilingual support, because the communities who are most underinsured are often the ones least served by English-only tools. We want to bring InsureScope to mobile as a native app, where the camera experience is more natural and walking through your home doing a coverage audit becomes something anyone can do in five minutes.

The long-term vision is a world where discovering you were unprotected is something that happens before a crisis, not because of one. We built the foundation this weekend. We intend to spend much longer making it matter.
Built With
- biome
- bun
- computer-vision
- css
- github
- google-gemini-api
- html
- javascript
- machine-learning
- mediapipe
- onnxruntime-web
- playwright
- react
- real-time-object-detection
- typescript
- vite
- vitest