Inspiration
Our initial focus was on women’s safety. As a team of women, we began by discussing physical security; the conversation moved to stalking, and then to the realization of how often stalking now begins online.
We recognized that social media has normalized a high frequency of sharing — locations, routines, background details in photos, captions, and engagement patterns. While each of these signals might seem harmless on its own, aggregated over time they create a detailed behavioral map.
Our primary concern became this cumulative exposure, and that is the core problem we set out to address.
What It Does
We are building a B2B Digital Exposure Intelligence layer that integrates directly into platforms like Instagram.
Every user generates a digital trail — posts, captions, tagged locations, comments, and engagement behavior. Our system analyzes these signals within the platform ecosystem to identify exposure patterns that may increase vulnerability.
The System:
- Detects cumulative exposure signals
- Generates a dynamic risk score based on historical and new posts
- Forecasts vulnerability trends
- Provides real-time posting guidance through an AI companion
- Activates protective guidance and an evidence vault when high-threat patterns are detected
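To make the cumulative scoring idea concrete, here is a minimal sketch of how per-post signals could roll up into a running risk score. All signal names, weights, and the decay factor are illustrative assumptions, not our production scoring logic:

```python
from dataclasses import dataclass, field

# Illustrative weights only: signal names and values are assumptions.
EXPOSURE_WEIGHTS = {
    "tagged_location": 0.35,
    "routine_time_slot": 0.25,        # posting at the same time repeatedly
    "identifiable_background": 0.25,
    "personal_caption_detail": 0.15,
}

@dataclass
class ExposureProfile:
    """Accumulates per-post signals into a running cumulative risk score."""
    history: list = field(default_factory=list)

    def add_post(self, signals: set) -> float:
        post_score = sum(EXPOSURE_WEIGHTS.get(s, 0.0) for s in signals)
        self.history.append(post_score)
        # Exponentially weighted cumulative score: recent posts count more,
        # but older exposure never fully disappears.
        score, weight = 0.0, 1.0
        for s in reversed(self.history):
            score += weight * s
            weight *= 0.8
        return min(score, 1.0)  # clamp to [0, 1]

profile = ExposureProfile()
profile.add_post({"tagged_location"})
risk = profile.add_post({"tagged_location", "routine_time_slot"})
```

The key property this captures is that repeated low-grade signals compound: a single geotag is minor, but a pattern of geotags in routine time slots pushes the score upward.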
Because it is embedded infrastructure sold to platforms (e.g., Meta), user data never leaves the ecosystem. We do not create a new data-sharing surface.
How We Built It
We initially considered building a standalone consumer app. That approach created privacy and security contradictions: we would be asking users to centralize even more sensitive data in a new system.
We pivoted to a B2B service model to align with responsible AI and data minimization principles.
Stack & Tools:
- Figma Make and Lovable for rapid prototyping
- GitHub Copilot for model scaffolding
- Claude and ChatGPT for structured research and threat modeling
- A Hugging Face dataset (~31,000 image-caption pairs) as a base reference dataset
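Because the image-caption dataset is static, one way to approximate the behavioral history our model needs is to simulate posting timelines on top of it. A minimal sketch of that idea (all field names, probabilities, and time parameters are illustrative assumptions):

```python
import random
from datetime import datetime, timedelta

def simulate_history(n_posts: int, seed: int = 0) -> list:
    """Attach simulated timestamps and exposure signals to posts,
    producing a per-user posting history. Parameters are assumptions."""
    rng = random.Random(seed)
    t = datetime(2024, 1, 1, 18, 0)  # arbitrary start time
    history = []
    for _ in range(n_posts):
        history.append({
            "timestamp": t.isoformat(),
            "tagged_location": rng.random() < 0.4,  # ~40% of posts geotagged
            "routine_slot": t.hour in (8, 18),      # commute-hour posting
        })
        # Next post 1-3 days later, biased toward the same time of day,
        # so routines emerge in the simulated timeline.
        t += timedelta(days=rng.randint(1, 3), hours=rng.choice([-1, 0, 0, 1]))
    return history

posts = simulate_history(30)
geo_rate = sum(p["tagged_location"] for p in posts) / len(posts)
```

Seeding the generator keeps the simulated histories reproducible across training runs.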
Challenges We Ran Into
- Model Framing: Exposure risk is contextual and cumulative. There are no standardized labels for “digital vulnerability,” so we had to define our own scoring logic.
- Dataset Limitations: Public datasets do not model behavioral history over time, which our system requires.
- Prompt Iteration: Achieving consistent and accurate results from AI tools required extensive prompt refinement.
- Prototype Constraints: Our prototyping tools did not support real-time collaboration, which slowed parallel work.
- UX Sensitivity: Integrating this into the user’s current flow without disruption required careful design.

Accomplishments
- 78% Model Accuracy: Despite the absence of standardized exposure-risk labels and the need to simulate longitudinal posting behavior, we built and evaluated a probabilistic model that reached 78% accuracy within the hackathon timeframe.
- Seamless Flow Integration: We designed the intervention layer to sit directly within the posting journey, without forcing users into a separate dashboard or audit tool. Risk feedback appears contextually at the moment of posting, minimizing friction and preserving platform-native behavior.
- Built in 48 Hours: Within a two-day hackathon, we reframed the problem, pivoted from a consumer app to a B2B infrastructure model, structured a custom dataset approach, developed a working prototype, and demonstrated risk scoring with live flow integration.

What We Learned
- Risk modeling for behavioral exposure requires custom labeling logic.
- Longitudinal data simulation is critical for meaningful vulnerability forecasting.
- AI-assisted prototyping dramatically accelerates iteration, but prompt precision determines output quality.
- Sensitive safety interventions must balance
Built With
- claude
- copilot
- figma
- lovable