Inspiration
Every year, equipment failures cause thousands of workplace injuries and cost industries over $50 billion in unplanned downtime. We saw that field inspections are still done on paper clipboards with zero institutional memory. If an inspector spots corrosion on a pipe today, the next inspector three weeks later has no idea — they start from scratch. By the time someone notices the pattern, the pipe has already burst. We wanted to build something that gives inspections a brain — not just seeing damage, but remembering it and tracking how it evolves.
What it does
FieldSense is an AI-powered field inspection platform. You snap a photo of any equipment — pipes, machinery, structural components — and GPT-4 Vision instantly analyzes it, detecting damage like corrosion, cracks, wear, and safety hazards. Each inspection gets a severity rating, detailed findings, and actionable recommendations. But the real power is memory. Every inspection is stored in Supermemory, building a living profile for each piece of equipment. When you inspect something again, FieldSense pulls its full history and compares — "this corrosion was minor two inspections ago, now it's critical." It catches degradation patterns that humans miss. For hands-free fieldwork, ElevenLabs generates voice narrations of inspection reports so workers wearing gloves or climbing equipment can listen instead of reading.
How we built it
React and Tailwind CSS on the frontend, with a clean field-first design built for mobile and outdoor use. Node.js and Express on the backend, handling three core integrations:
- OpenAI GPT-4 Vision for image analysis and report generation
- Supermemory for storing inspections, building equipment profiles, and enabling semantic search across inspection history
- ElevenLabs for text-to-speech voice reports
There's no traditional database: Supermemory handles all persistent memory and retrieval, acting as the contextual knowledge graph for every piece of equipment.
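The backend flow can be sketched as three chained steps. This is a simplified sketch, not the shipped code: the function names are illustrative, and the external API calls are stubbed out in comments.

```javascript
// Sketch of the FieldSense inspection pipeline. Each step corresponds to
// one external service; the network calls are stubbed for illustration.

// Step 1: GPT-4 Vision analyzes the photo and returns structured findings.
async function analyzePhoto(imageBase64, equipmentId) {
  // In production this would call the OpenAI API with a vision prompt.
  // Hardcoded stub so the flow below is runnable:
  return { severity: "medium", findings: ["surface corrosion"], recommendations: ["monitor"] };
}

// Step 2: Supermemory stores the report, tagged to this equipment's profile.
async function storeInspection(equipmentId, report) {
  // In production: add the report to Supermemory with equipment-scoped tags.
}

// Step 3: ElevenLabs turns the written summary into a voice narration.
async function narrateReport(summaryText) {
  // In production: call the ElevenLabs text-to-speech API.
}

// The Express route handler chains the three steps.
async function inspect(imageBase64, equipmentId) {
  const report = await analyzePhoto(imageBase64, equipmentId);
  await storeInspection(equipmentId, report);
  const summary = `Severity ${report.severity}: ${report.findings.join("; ")}`;
  await narrateReport(summary);
  return { report, summary };
}
```

The key design choice is that nothing here writes to a database: step 2 is the only persistence, so Supermemory is the system of record.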
Challenges we ran into
Our original hackathon project (a VR memory assistant built on the Quest 3) hit a wall, and we had to pivot to FieldSense with limited time remaining. Getting OpenAI Vision to return consistently structured inspection data required careful prompt engineering. Designing the Supermemory containerTag strategy so that each equipment ID maintains its own memory space while still allowing cross-equipment pattern detection took iteration. Making the UI look polished and professional under time pressure was also a real challenge.
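The containerTag strategy we landed on can be sketched as follows. The tag names are our illustrative convention, not necessarily the exact strings we shipped: each memory gets one tag scoping it to a single piece of equipment and one shared tag, so per-equipment history and cross-equipment pattern queries are both a single tag filter.

```javascript
// Tags attached to every stored inspection memory (illustrative naming).
function containerTagsFor(equipmentId) {
  return [
    `equipment:${equipmentId}`, // isolated memory space for this equipment
    "fleet:all",                // shared space for cross-equipment search
  ];
}

// Choosing the filter tag for a query:
// per-equipment history uses the specific tag, fleet-wide patterns use the shared one.
function tagForScope(scope, equipmentId) {
  return scope === "equipment" ? `equipment:${equipmentId}` : "fleet:all";
}
```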
Accomplishments that we're proud of
The historical comparison feature is the highlight — watching the AI compare a new inspection against stored memories and say "condition has worsened since the last inspection" feels genuinely useful, not just a demo trick. The voice reports work seamlessly and are a real accessibility win. The equipment profiles that Supermemory auto-generates from inspection data are surprisingly rich. And honestly, pivoting from a completely different project and shipping something polished in a fraction of the original timeline felt pretty great.
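The core of the comparison can be sketched as simple trend logic over stored severity ratings. The severity scale and message wording here are illustrative; in the app, the AI does the comparison against retrieved memories rather than a lookup table.

```javascript
// Sketch: classify the trend from an ordered list of past severity ratings.
const SEVERITY_ORDER = ["none", "low", "medium", "high", "critical"];

function degradationTrend(severityHistory) {
  if (severityHistory.length < 2) return "no prior inspections to compare";
  const prev = SEVERITY_ORDER.indexOf(severityHistory[severityHistory.length - 2]);
  const curr = SEVERITY_ORDER.indexOf(severityHistory[severityHistory.length - 1]);
  if (curr > prev) return "condition has worsened since the last inspection";
  if (curr < prev) return "condition has improved since the last inspection";
  return "condition is unchanged since the last inspection";
}
```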
What we learned
Context engineering is the real differentiator. Any app can call GPT-4 Vision — what makes FieldSense useful is the persistent memory layer. We learned how Supermemory's semantic search and user profiles can turn a simple image analyzer into an intelligent system that improves over time. We also learned that a focused, well-executed MVP beats an ambitious half-finished project every time.
What's next for FieldSense
We want to add real-time camera integration so inspectors can point their phone and get live analysis without uploading photos. Fleet-wide analytics dashboards showing degradation trends across all equipment would help managers prioritize maintenance. We also plan to integrate with existing work order systems (ServiceNow, SAP) to automatically generate maintenance tickets from inspection findings. And longer term, we want to deploy on AR glasses like Meta Ray-Bans so inspectors get real-time damage overlays while walking a job site, with the full inspection history accessible by voice.
Built With
- elevenlabs-api
- express.js
- node.js
- openai-gpt-4-vision-api
- react
- supermemory-api
- tailwind-css
- vite