Inspiration
EchoShield was born out of a deep frustration with how often psychological abuse—especially emotional manipulation and gaslighting—goes unnoticed or unvalidated. We wanted to build something that could listen between the lines, giving survivors a tool that not only detects harm but understands their lived experience. Our goal was to combine technology, psychology, and empathy into something quietly powerful.
What it does
EchoShield analyzes conversations through two lenses: a victim-focused lens that detects trauma, confusion, or psychological harm, and an offender-focused lens that flags manipulative behaviors aligned with DSM indicators. The system connects these insights to surface early warning signs, risk levels, and escalation patterns—helping users make sense of what they’re experiencing before it's too late.
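To make the dual-lens idea concrete, here is a minimal sketch of the kind of result the analysis produces (the field names and values below are hypothetical, chosen just to illustrate the structure, not taken from our production system):

```javascript
// Hypothetical shape of a dual-lens analysis result (illustrative only).
const exampleResult = {
  victimLens: {
    // signals of trauma, confusion, or psychological harm in the speaker's language
    signals: [
      { type: "self-doubt", excerpt: "maybe I'm just overreacting", confidence: 0.82 },
    ],
  },
  offenderLens: {
    // manipulative behaviors flagged against DSM-informed indicators
    flags: [
      { type: "reality-denial", excerpt: "that never happened", confidence: 0.77 },
    ],
  },
  riskLevel: "elevated",      // e.g. "low" | "elevated" | "high"
  escalationTrend: "rising",  // direction of risk across recent messages
};
```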
How we built it
We used a combination of natural language processing (NLP), behavioral tagging, and a risk forecasting engine informed by trauma research and DSM criteria. Our backend models evaluate tone, linguistic patterns, and word choice, while the frontend offers a clean, accessible interface for users to input or review flagged interactions. Each behavior is categorized in real time with explainable feedback and optional safety resources.
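As a simplified illustration of the behavioral tagging step, the sketch below checks a message against a couple of example patterns and attaches a plain-language explanation to each match (the patterns, tags, and names are placeholders; our actual models go well beyond keyword matching):

```javascript
// Simplified sketch of pattern-based behavioral tagging with explainable
// feedback. Patterns and tags here are illustrative placeholders.
const BEHAVIOR_PATTERNS = [
  {
    tag: "gaslighting",
    pattern: /you're imagining things|that never happened/i,
    explanation: "Denies the other person's memory or perception of events.",
  },
  {
    tag: "blame-shifting",
    pattern: /you made me|it's your fault/i,
    explanation: "Reassigns responsibility for harmful behavior to the target.",
  },
];

function tagMessage(text) {
  return BEHAVIOR_PATTERNS
    .filter(({ pattern }) => pattern.test(text))
    .map(({ tag, explanation }) => ({ tag, explanation }));
}

// tagMessage("You're imagining things. That never happened.")
//   -> [{ tag: "gaslighting", explanation: "Denies the other person's memory..." }]
```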
Challenges we ran into
One of the biggest challenges was designing an algorithm that could interpret emotionally nuanced language without overstepping. Balancing accuracy with ethical transparency was critical. Another challenge was developing the dual-lens logic—ensuring both the survivor’s voice and behavioral red flags were weighted appropriately. Testing with sensitivity and care was also key.
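One way to picture the weighting problem: if each lens yields a normalized score in [0, 1], the combined risk can be a weighted average, and choosing weights so that neither lens drowns out the other was part of the challenge. A toy sketch with placeholder weights:

```javascript
// Toy sketch of combining the two lenses into one risk score.
// The weights are illustrative placeholders, not our tuned values.
function combinedRisk(victimScore, offenderScore, wVictim = 0.5, wOffender = 0.5) {
  return wVictim * victimScore + wOffender * offenderScore;
}

combinedRisk(0.7, 0.4); // -> 0.55
```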
Accomplishments that we're proud of
We're proud of creating a system that doesn't just analyze text: it listens to it. EchoShield gives language to people who have been made to doubt their own voice. We're also proud that our engine offers both clarity and compassion; it's not just data-driven, it's trauma-aware.
What we learned
We learned how powerful context is in emotional language processing—and how difficult it is to define “abuse” in a technical framework without minimizing personal experience. We also learned how essential ethical guardrails are when designing AI for safety-related applications.
What's next for EchoShield
We're working on expanding EchoShield into real-time integrations, like browser plugins and mobile chat monitoring, to support users in the moment. We also plan to partner with mental health professionals and advocacy groups to improve our data models and develop customizable risk profiles for more nuanced detection.
Built With
- api
- backend
- figma
- frontend
- github
- javascript
- wix
