Inspiration
Scammers and impersonators often succeed by creating urgency and confusion, which makes it hard to think clearly in the moment.
So we built MyJellyBean: bite-sized clarity for high-pressure messages—a fast “second set of eyes” that highlights red flags and suggests calmer, non‑escalatory next steps.
What it does
MyJellyBean is a full-stack message safety analyzer for suspicious DMs, texts, and emails. You paste a message (optionally add platform/context + risk toggles) and it returns:
- A risk score (0–100) and risk category (scam/fraud, impersonation, harassment/abuse, coercion/manipulation, privacy risk, meetup escalation risk, etc.)
- The top signals/red flags that drove the score
- A “Do this now” checklist with non-escalatory actions (verify, preserve evidence, report, block, don’t share OTP)
- A safer reply draft to avoid oversharing
- A structured report summary that’s easy to copy into platform reporting workflows
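The response fields above can be modeled roughly as a TypeScript shape. This is a sketch only: the field names, category strings, and band thresholds are illustrative assumptions, not the app's exact schema.

```typescript
// Illustrative shape of an analysis result (names are our assumptions).
type RiskCategory =
  | "scam_fraud"
  | "impersonation"
  | "harassment_abuse"
  | "coercion_manipulation"
  | "privacy_risk"
  | "meetup_escalation";

interface AnalysisResult {
  riskScore: number;     // 0–100
  category: RiskCategory;
  signals: string[];     // top red flags that drove the score
  checklist: string[];   // non-escalatory "Do this now" actions
  saferReply: string;    // reply draft that avoids oversharing
  reportSummary: string; // copy-paste summary for platform reporting
}

// Hypothetical banding of the 0–100 score for display (thresholds are illustrative).
export function riskBand(score: number): "low" | "medium" | "high" {
  if (score >= 70) return "high";
  if (score >= 40) return "medium";
  return "low";
}
```

Keeping the score numeric while displaying a coarse band makes the UI legible at a glance without hiding the underlying detail.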
How we built it
- Frontend: React + Vite + Tailwind for a fast, clean UI
- Backend: Express (Node.js) API endpoint that accepts the message + context
- AI layer: Google Gemini via `@google/genai`, prompted to return strict JSON
- Safety guardrails: We explicitly avoid retaliation/doxxing guidance and emphasize de-escalation, verification, and reporting
- Deployment: Hosted on Vercel, with `GEMINI_API_KEY` stored as a server-side environment variable
Challenges we ran into
- Reliability: Getting consistent structured outputs required strict JSON prompting and robust fallback behavior when outputs don’t parse.
- Safety UX: Writing “helpful but non-escalatory” advice is tricky; we tuned the wording to avoid confrontation while still being actionable.
- Deployment details: Ensuring secrets stayed server-side and were correctly configured as environment variables in production.
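The fallback behavior for unparseable outputs can be sketched as a small pure function. The names (`parseAnalysis`, the fallback values) are illustrative, not the app's actual code; it assumes the model sometimes wraps its JSON in markdown fences.

```typescript
// Sketch of defensive parsing for model output (names are illustrative).
interface ParsedAnalysis {
  riskScore: number;
  signals: string[];
}

const FALLBACK: ParsedAnalysis = {
  riskScore: 50,
  signals: ["Model output could not be parsed; treat the message with caution."],
};

export function parseAnalysis(raw: string): ParsedAnalysis {
  // Models sometimes wrap JSON in markdown code fences; strip them first.
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```$/, "")
    .trim();
  try {
    const data = JSON.parse(cleaned);
    if (typeof data.riskScore !== "number" || !Array.isArray(data.signals)) {
      return FALLBACK;
    }
    // Clamp the score into the documented 0–100 range.
    return {
      riskScore: Math.min(100, Math.max(0, data.riskScore)),
      signals: data.signals.map(String),
    };
  } catch {
    return FALLBACK;
  }
}
```

Returning a conservative mid-range fallback rather than throwing keeps the UI responsive even when the model output is malformed.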
What we learned
- Building around an LLM is as much about constraints and validation as it is about model quality.
- Small UX choices (clear risk bands, concise checklists, copy-to-clipboard) make the tool feel more usable in stressful moments.
- “Human safety” features need explicit guardrails so the product helps users respond safely, not just detect risk.
Built With
- TypeScript
- React
- Vite
- Tailwind CSS
- Node.js
- Express
- Google Gemini API (`@google/genai`)
- Vercel
