Inspiration

I recently clicked a fake package-delivery link from a text message. The realization that I almost fell for it taught me how convincing modern scams are. I wanted to build a fast, smart second opinion for anyone unsure about a message.

What it does

ScamShield instantly analyzes suspicious texts, emails, and job offers. It provides a definitive verdict (Scam, Suspicious, or Safe), isolates the specific red flags it found, and tells you exactly what to do next.
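
Concretely, a single analysis comes back as a small structured result. A sketch of what that might look like, with field names that are illustrative guesses rather than ScamShield's actual schema:

```python
# Illustrative analysis result; the field names are our assumption,
# matching the three things the UI surfaces: verdict, red flags, next step.
example_verdict = {
    "verdict": "Scam",  # one of "Scam", "Suspicious", "Safe"
    "red_flags": [
        "Creates urgency ('your package will be returned today')",
        "Link domain does not match the claimed courier",
    ],
    "recommended_action": "Do not click the link; delete the message.",
}
print(example_verdict["verdict"])  # → Scam
```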

How we built it

The backend is a Python Flask API that calls Groq's Llama 3.3 70B model; the frontend is vanilla HTML/JS. An engineered zero-shot prompt forces the LLM to output rigid JSON, so the UI can render structured, actionable alerts. Google Antigravity assisted with full-stack pair programming.
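
The core of the backend can be sketched as a single call to Groq's chat-completions endpoint with a JSON-only system prompt. This is a minimal illustration, not ScamShield's actual code: the prompt wording, key names, and `analyze_message` helper are our assumptions.

```python
import json

# Zero-shot system prompt that pins the model to a fixed JSON shape.
# The key names ("verdict", "red_flags", "recommended_action") are assumed.
SYSTEM_PROMPT = (
    "You are a scam-detection engine, not a chatbot. "
    "Analyze the user's message and respond with ONLY a JSON object "
    "containing exactly these keys: "
    '"verdict" (one of "Scam", "Suspicious", "Safe"), '
    '"red_flags" (array of strings), '
    '"recommended_action" (string). No prose, no markdown.'
)

def analyze_message(text: str) -> dict:
    """Send one suspicious message to Groq and parse the structured reply.

    Requires the `groq` package and a GROQ_API_KEY environment variable.
    """
    from groq import Groq  # imported lazily so this module loads without it
    client = Groq()
    resp = client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # Groq's hosted Llama 3.3 70B
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},  # ask Groq for strict JSON
        temperature=0,  # deterministic output helps keep the JSON parseable
    )
    return json.loads(resp.choices[0].message.content)
```

A Flask route would then just call `analyze_message` on the posted text and return the dict as the JSON response body.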

Challenges we ran into

Responses from standard LLM hosting were too slow, so we pivoted to Groq's LPU inference for near-instant analysis (under two seconds). We also had to engineer the prompt heavily to suppress conversational filler and hallucination and force the model to return strict, parseable JSON.
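
A defensive pattern for the strict-JSON problem is to extract and validate the object rather than trusting the raw reply verbatim. A minimal sketch, assuming the same three result keys as above (our guess, not the project's schema):

```python
import json
import re

REQUIRED_KEYS = {"verdict", "red_flags", "recommended_action"}  # assumed schema

def extract_json(raw: str) -> dict:
    """Pull the first JSON object out of an LLM reply, tolerating
    markdown fences or stray prose wrapped around it."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model output missing keys: {missing}")
    return data
```

Even with `response_format` set, a validation layer like this turns a malformed reply into a clean server-side error instead of a broken UI.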

Accomplishments that we're proud of

Getting the analysis down to under two seconds! We're also proud that the app is 100% ephemeral: user messages are analyzed in memory and never stored, preserving privacy.

What we learned

We learned advanced prompt-engineering constraints: how to force an LLM to behave as a strict data-extraction engine rather than a chatbot. We also gained experience building a Flask API and connecting it to a real frontend.

What's next for ScamShield

A Chrome extension so users can highlight and analyze suspicious text right in their browser, plus multi-language support to protect non-English speakers.
