SentimentSense: The Social Forensic Engine

Inspiration

Have you ever received a "we need to talk" text and immediately started spiraling? Have you ever stared at a three-paragraph work email, felt the brain-fog of executive dysfunction set in, and just... closed the tab?

We've all been there. In today's digital-first world, we're often separated by a "Social Ambiguity Gap." Neurotypical communication is built on subtext, "hints," and social "vibes"—intangible rules that are often inaccessible to neurodivergent individuals. For those with ADHD or Autism, these vague messages aren't just confusing; they trigger Rejection Sensitive Dysphoria (RSD) and social burnout. We were inspired by the Double Empathy Problem: the theory that social breakdowns are a mutual translation gap, not a defect. We envisioned a world where you could decode social subtext as easily as you translate a foreign language. What if you had a "Psychological Prosthetic" that turned social anxiety into an objective forensic science?

What it does

SentimentSense turns your communication apps into a personal social forensic team.

  • Social Forensics & Literal Decoder: Our "De-fluff" engine strips away metaphors, idioms, and social buffering to reveal the raw transactional intent. It highlights "Evidence Markers"—like passive-aggressive punctuation—so you aren't left guessing.
  • RSD Shield (Social Simulator): Before you hit send, you can "A/B test" your social life. Our agent roleplays the sender to predict their likely mood and reply to your draft, providing the emotional safety to communicate authentically.
  • Spoon Theory Energy Management: The app adapts to you. By setting your current "Spoons" (Energy Level), the AI tailors its advice. Low Spoons? We give you firm, "masking-optional" scripts. High Spoons? We help you layer the social niceties back in.
  • ADHD Executive Function Support: Received an overwhelming wall-of-text? One click deconstructs it into a prioritized, actionable checklist with deadlines, bypassing the initial decision paralysis.

How we built it

Our system is a high-performance assistant orchestrated to provide instant social certainty.

  • Core High-Reasoning Engine: We use Google Gemini 2.5 Flash as our central brain. Its advanced Theory of Mind (ToM) capabilities allow it to perform complex social simulations and analyze visual context in screenshots (like read receipts and time gaps).
  • Hybrid Semantic RAG: To power our "Forensic" insights, we built a custom Retrieval-Augmented Generation system. We used NumPy and Google’s text-embedding-004 model to build a semantic matching pass. Instead of brittle keyword triggers, our system matches the "vibe" of a message against a curated library of neurodivergent social patterns.
  • Cloud Infrastructure & Caching: For an assistive tool to be useful, it must be fast. We deployed our FastAPI backend on Vultr High-Performance Cloud Compute to ensure sub-500ms response times. We integrated Vultr Valkey (Redis-compatible) to handle session state and rapid context caching, ensuring the "prosthetic" feels like a natural extension of the user's mind.
  • Accessibility-First UI: The app is built with Vanilla JS and Tailwind CSS, featuring a sensory-friendly Glassmorphism design and native OpenDyslexic font support.
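The semantic pass above can be sketched roughly as follows. This is a minimal illustration, not our production code: the pattern labels are stand-ins, and the 3-dimensional toy vectors take the place of the 768-dimensional embeddings that text-embedding-004 actually returns.

```python
import numpy as np

def cosine_sim(query: np.ndarray, patterns: np.ndarray) -> np.ndarray:
    """Cosine similarity between one message vector and a matrix of pattern vectors."""
    query = query / np.linalg.norm(query)
    patterns = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)
    return patterns @ query

# Toy stand-ins for embeddings of entries in the social-pattern library.
pattern_labels = ["passive-aggressive sign-off", "urgent but vague request"]
pattern_vecs = np.array([[0.9, 0.1, 0.0],
                         [0.1, 0.8, 0.2]])

# Toy stand-in for the embedding of the incoming message.
message_vec = np.array([0.85, 0.15, 0.05])

scores = cosine_sim(message_vec, pattern_vecs)
best = int(np.argmax(scores))
print(pattern_labels[best], round(float(scores[best]), 3))
```

Because the match is done in embedding space, a message never has to contain the literal trigger phrase; anything that lands near a library pattern semantically will still be surfaced as an Evidence Marker.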

Challenges we ran into

  • Vultr Infrastructure Side-Quests: Our biggest hurdle was the initial environment setup. Opening the right ports and getting the containers of our Dockerized multi-agent system to talk to each other on the Vultr cloud took significant trial and error.
  • The "Fluff" Filter: Prompt engineering the "Literal Decoder" was a balancing act. We had to carefully craft our instructions to ensure the AI could reliably strip "allistic fluff" without losing the core transactional intent or sounding accidentally rude.
  • RAG Brittleness: Moving from simple keyword matches (which had too many false positives) to a semantic-weighted system required us to rethink how we calculate "Social Signal Strength" using cosine similarity and severity-based boosts.
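One way to picture the severity-weighted scoring described above: combine the semantic similarity with a small keyword bonus and a per-pattern severity boost, then flag only patterns that clear a threshold. The specific weights, boost factor, and threshold here are our illustrative assumptions, not the exact production formula.

```python
def signal_strength(semantic_sim: float, keyword_hit: bool, severity: float) -> float:
    """Blend cosine similarity, an exact keyword match, and a severity boost.

    semantic_sim: cosine similarity in [0, 1] from the embedding pass.
    keyword_hit:  whether a literal trigger phrase was also present.
    severity:     per-pattern weight in [0, 1] (how distressing the pattern is).
    All coefficients below are illustrative assumptions.
    """
    score = semantic_sim
    if keyword_hit:
        score += 0.15              # small bonus only; keywords no longer hard-trigger
    score *= 1.0 + 0.5 * severity  # boost high-severity patterns
    return min(score, 1.0)

THRESHOLD = 0.6  # flag only patterns above this Social Signal Strength

# Strong semantic match on a high-severity pattern ("we need to talk"):
strong = signal_strength(semantic_sim=0.82, keyword_hit=True, severity=0.9)
# Keyword coincidence with a weak semantic match on a mild pattern:
weak = signal_strength(semantic_sim=0.30, keyword_hit=True, severity=0.1)

print(strong > THRESHOLD, weak > THRESHOLD)  # the weak match is no longer a false positive
```

Treating keywords as a bonus rather than a trigger is what removed the false positives: a literal phrase match alone can no longer push a semantically weak candidate over the threshold.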

Accomplishments that we're proud of

  • No one cried. (And we successfully decoded "fine." without a panic attack).
  • Performance-as-Accessibility: Achieving a sub-500ms "forensic loop" on Vultr. In a social crisis, latency is a barrier; our infrastructure makes the tool feel instantaneous.
  • The Evidence-Based UI: We didn't just build a chatbot; we built a system that shows its work. Highlighting the specific "triggers" that caused an interpretation provides the user with total agency and peace of mind.

What we learned

  • Infrastructure is Accessibility: If an assistive tool is slow, it’s useless in a live social situation. The speed of the cloud is directly tied to the user's emotional regulation.
  • Specialized Models Win: Hybrid RAG (keywords + semantic) is far more robust than a single LLM pass. Using specialized models for reasoning (Gemini) and speed (NumPy/Valkey) allowed us to build a more reliable system.

What's next for SentimentSense

  • Private Inference: We want to move our inference to Vultr Cloud GPUs, so that sensitive social messages are processed entirely on infrastructure we control, never sent to a third-party API, for full data sovereignty.
  • Proactive "Social Ping": We envision a browser extension that monitors your messages and pings you—for example, "I noticed this email contains an open-ended trap. Would you like me to clarify the intent for you?"
