Inspiration
Physical security is a $120+ billion global market, projected to reach over $200 billion by the end of the decade. Video surveillance systems alone account for more than 50% of the market. And yet, most commercial security camera failures are caused not by bad equipment but by poor placement.
The numbers are staggering. In 2024, the U.S. recorded 779,542 burglary incidents — roughly one every 51 seconds. Victims of burglaries lose an estimated $3.4 billion in personal property each year. Meanwhile, homes without a security system are 300% more likely to be burglarized.
So cameras work as deterrents — when placed correctly. But they usually aren't. These mistakes are typically not discovered until after an incident, when pulling footage reveals a blind spot that's been there since day one. Professional security consultation to fix this costs real money. Commercial installations range from $1,500 for a small business to $50,000+ for large campus deployments.
What it does
Sentinel is an AI-powered camera placement optimizer for physical spaces.
- Upload a USDZ scan (geometry — rooms, walls, doors, windows) and an FBX mesh (texture)
- Sentinel parses the scene into entry points, obstructions, and threat-weighted zones
- K2 Think V2 streams its reasoning as it places cameras under a budget, maximizing coverage of threat-weighted zones
- Visualize the result across five views in a 3D digital twin: importance heatmap, camera frustums, point cloud, threat paths, and per-camera POVs with metadata
- Adjust the budget slider and watch K2 re-reason about what you gain or lose
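The placement step above can be sketched as a greedy budgeted-coverage pass. This is a minimal illustration, not Sentinel's actual algorithm; the candidate positions, zone names, and costs are all made up for the example.

```python
def greedy_placement(candidates, zone_weights, budget):
    """Greedily pick camera positions that add the most threat-weighted
    coverage per dollar until the budget runs out.

    candidates   -- dict: position id -> (cost, set of zone ids it covers)
    zone_weights -- dict: zone id -> threat weight
    budget       -- total spend allowed
    """
    chosen, covered, remaining = [], set(), budget
    while True:
        best, best_gain = None, 0.0
        for pos, (cost, zones) in candidates.items():
            if pos in chosen or cost > remaining:
                continue
            # Marginal threat-weighted coverage gained per unit cost
            gain = sum(zone_weights[z] for z in zones - covered) / cost
            if gain > best_gain:
                best, best_gain = pos, gain
        if best is None:          # nothing affordable adds coverage
            break
        cost, zones = candidates[best]
        chosen.append(best)
        covered |= zones
        remaining -= cost
    return chosen, covered

# Hypothetical scene: two doorways and a hallway, $300 budget
cams = {"corner_ne": (150, {"door_a", "hall"}),
        "corner_sw": (150, {"door_b"}),
        "ceiling":   (400, {"door_a", "door_b", "hall"})}
weights = {"door_a": 5.0, "door_b": 5.0, "hall": 2.0}
print(greedy_placement(cams, weights, budget=300))
```

Here the ceiling camera would cover everything but exceeds the budget, so the greedy pass buys the two corner cameras instead; lowering the slider to $150 would drop `corner_sw` and leave `door_b` as a blind spot, which is exactly the tradeoff K2 re-reasons about.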
How we built it
- 3D rendering: @react-three/fiber + drei + three.js for the digital twin, frustum cones, point clouds, and FBX texturing
- AI layer: K2 Think V2 with streaming responses — separate prompts for placement, budget tradeoffs, and lighting analysis, all piped through a custom useK2Stream hook into a live reasoning panel
- Backend: FastAPI service that ingests USDZ/FBX, derives geometry analytics (coverage %, blind spots, lighting risk windows), and orchestrates K2 calls
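The backend's geometry analytics can be reduced to a pure function over the parsed scene. A toy sketch of the coverage-percent and blind-spot computation, with illustrative zone and camera names that do not come from the real codebase:

```python
def coverage_analytics(zones, camera_views):
    """Derive the analytics the backend reports: coverage percent and
    blind spots (zones no camera sees).

    zones        -- list of zone ids parsed from the scan
    camera_views -- dict: camera id -> set of zone ids in its frustum
    """
    covered = set().union(*camera_views.values()) if camera_views else set()
    blind = sorted(set(zones) - covered)
    pct = 100.0 * (len(set(zones)) - len(blind)) / len(zones)
    return {"coverage_pct": round(pct, 1), "blind_spots": blind}

print(coverage_analytics(
    ["lobby", "stairwell", "loading_dock", "server_room"],
    {"cam_1": {"lobby", "stairwell"}, "cam_2": {"lobby"}}))
# {'coverage_pct': 50.0, 'blind_spots': ['loading_dock', 'server_room']}
```

In the real service this result would be serialized by the FastAPI endpoint and fed into the K2 prompts alongside the raw geometry.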
Challenges we ran into
- Parsing USDZ in the browser and extracting clean room/wall/entry-point graphs from raw 3D scans was a struggle
- 3D coordinate frame mismatches between USDZ (Apple), FBX (Autodesk), and Three.js led to every visualization breaking at least once
- Rendering performance with 100k+ point clouds, camera frustums, and heatmap overlays stacked together; we had to aggressively memoize and split render passes
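The coordinate-frame mismatch boils down to up-axis conventions: Three.js (and USDZ by default) are Y-up, while many FBX exports arrive Z-up. The fix is a fixed rotation applied on import; a minimal sketch, assuming a Z-up source:

```python
def z_up_to_y_up(p):
    """Rotate a point -90 degrees about the x-axis, converting a Z-up
    frame (common in FBX exports) into the Y-up frame Three.js uses:
    (x, y, z) -> (x, z, -y)."""
    x, y, z = p
    return (x, z, -y)

# A point at height 5 along the Z-up frame's up-axis ends up at
# height 5 along +y in the Y-up frame.
print(z_up_to_y_up((2.0, 3.0, 5.0)))   # (2.0, 5.0, -3.0)
```

Applying this once at ingest (rather than per-visualization) is what keeps all five views sharing one source of truth.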
Accomplishments that we're proud of
- Five distinct, useful 3D visualizations that all share one source of truth
- A budget slider that triggers a full re-reasoning pass over what changes with more or fewer cameras
- Real visualizations built from genuine scans of Caltech buildings
What we learned
- Streaming reasoning > batch outputs
What's next for Sentinel
- Live capture: skip USDZ upload, walk the space with an iPhone LiDAR
- Camera model database: pick from real SKUs with real specs and real prices
- Multi-floor and outdoor scenes
- Compliance overlays: HIPAA/GDPR/PCI privacy-zone enforcement (auto-blur sightlines into bathrooms, neighbor windows, etc.)
- Live feed integration: once cameras are installed, Sentinel keeps watching and re-reasons when the space changes

