Inspiration
As international students in Canada, my team and I frequently had to block our TD credit and access cards during our first year because we couldn't distinguish scam emails from legitimate communications. We found that the core problem with phishing fraud isn't a lack of security, but a lack of immediate verification; when users are in doubt, their instinct is to click a link within the email to check its validity, which is exactly how these attacks succeed.
Rather than trying to block every fraudulent email—an impossible task given how quickly new scams emerge—we developed TD Threat Denied. This tool provides customers with a zero-friction verification method directly within their inbox at the moment of suspicion. It eliminates the need for manual searching or scanning, allowing users to instantly confirm if an email was actually sent by TD before they take a high-risk action.
What it does
Users forward any suspicious email to verify@drivetimemedia.ca. The system automatically analyzes it and replies with a clear verdict — FRAUD, LEGITIMATE, or UNDER REVIEW — within 30 seconds.
The analysis is handled by 6 AI agents:
Sender Check Agent — evaluates the sender domain, display name, and reply-to address for spoofing indicators
URL Forensics Agent — inspects every embedded link for lookalike domains, redirect chains, and suspicious TLDs
Content AI Agent — detects urgency language, impersonation cues, and social engineering patterns in the body text
Template Match Agent — compares the email layout and structure against known legitimate TD Bank templates
Campaign Match Agent — matches the email against known active phishing campaigns targeting TD customers
Managing AI Agent — ingests reports from all 5 specialist agents and renders the final verdict with a confidence score and plain-language summary
High confidence verdicts trigger an automatic reply. Low confidence cases escalate to a human analyst dashboard where a TD analyst can review the AI's reasoning, override the verdict, and send the reply manually.
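The aggregation step above can be sketched as follows. This is an illustrative simplification, not our production code: the report fields, weights, and the 0.8 confidence threshold are assumptions chosen for the example.

```python
# Hypothetical sketch of the managing agent's final aggregation.
# Field names, weights, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentReport:
    agent: str    # e.g. "sender_check", "url_forensics"
    risk: float   # 0.0 (clean) .. 1.0 (fraudulent)
    weight: float # how heavily this signal counts

AUTO_REPLY_THRESHOLD = 0.8  # confidence needed to skip human review

def render_verdict(reports: list[AgentReport]) -> dict:
    total_weight = sum(r.weight for r in reports)
    risk = sum(r.risk * r.weight for r in reports) / total_weight
    # Confidence is high when the agents agree (risk near 0 or near 1).
    confidence = abs(risk - 0.5) * 2
    if confidence < AUTO_REPLY_THRESHOLD:
        verdict = "UNDER REVIEW"  # escalate to the analyst dashboard
    elif risk >= 0.5:
        verdict = "FRAUD"
    else:
        verdict = "LEGITIMATE"
    return {"verdict": verdict, "risk": round(risk, 2),
            "confidence": round(confidence, 2)}
```

Any verdict below the confidence threshold lands in the analyst queue rather than triggering an automatic reply.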
How we built it
As a team of four, we started by mapping the full user journey and debating the right interaction model — browser extension, web form, or email. We kept landing on email: users are already in their inbox when they receive a suspicious message, so the verification should happen there too. That decision shaped the entire architecture.
We split into two workstreams: one focused on the AI pipeline, the other on infrastructure. The AI side iterated through several prompt designs before settling on the 6-agent structure — five specialist agents each responsible for one forensic dimension, feeding into a managing agent that renders the final verdict. The infrastructure side set up SendGrid Inbound Parse with a custom MX record, ngrok for local webhook exposure, and a FastAPI backend to receive and process submissions asynchronously.
We then reconvened to wire the two sides together, add the SQLite submission store, build the analyst review dashboard, and integrate the Gmail SMTP reply system.

STACK
Languages: Python, JavaScript, HTML/CSS
Frameworks and Libraries:
- FastAPI — backend API and webhook handling
- SQLAlchemy — ORM and database management
- Anthropic Python SDK — Claude API calls for all 6 agents
- SendGrid Inbound Parse — inbound email webhook
- smtplib / Gmail SMTP — outbound reply emails
- Next.js — frontend (analyst dashboard)
Platforms:
- Anthropic Claude — AI analysis engine for the agents
- SendGrid — email receiving infrastructure
- ngrok — local tunnel for webhook exposure during development
Tools:
- SQLite — submission and verdict storage
- Python venv — dependency isolation
- GitHub — version control
Challenges we ran into
Email pipeline setup: Receiving inbound emails required configuring SendGrid Inbound Parse with a custom MX DNS record, a live public URL via ngrok, and correct webhook routing to our FastAPI backend. Each layer had to work in sequence, and debugging required tracing failures across DNS, SendGrid, and the backend with limited visibility. Parsing forwarded emails added another layer — Gmail embeds the original sender inside the body rather than the headers, requiring a custom parser to extract the correct sender and skip the forwarder's own address.
Multi-agent AI pipeline: Getting consistent, structured JSON output from 6 agents required careful prompt engineering for each specialist role. The managing agent needed to correctly weigh conflicting signals across reports and always produce a verdict that matched the summary shown to the user — which required the scoring layer to override the AI summary whenever the two diverged.
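The verdict/summary reconciliation described above can be sketched as a small deterministic post-processing step. The JSON field names and canned summaries here are assumptions for illustration.

```python
# Illustrative sketch of forcing the user-facing summary to agree with
# the scored verdict when the model's own output diverges.
# Field names and summary strings are assumptions.
import json

SUMMARY_BY_VERDICT = {
    "FRAUD": "This email was NOT sent by TD. Do not click any links.",
    "LEGITIMATE": "This email appears to be a genuine TD communication.",
    "UNDER REVIEW": "A TD analyst is reviewing this email; hold off on any action.",
}

def reconcile(agent_json: str, scored_verdict: str) -> dict:
    """Parse the managing agent's JSON and override it on disagreement."""
    try:
        report = json.loads(agent_json)
    except json.JSONDecodeError:
        report = {}
    if report.get("verdict") != scored_verdict:
        # The deterministic scoring layer wins over the model's narrative,
        # so the verdict shown to the user always matches the summary.
        report["verdict"] = scored_verdict
        report["summary"] = SUMMARY_BY_VERDICT[scored_verdict]
    return report
```

This is what guarantees the user never sees a "LEGITIMATE" summary attached to a "FRAUD" verdict, whatever the model wrote.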
AI use: More than 70% of the code was AI-generated.
Built With
- anthropic
- claude
- fastapi
- github
- gmail-smtp
- html/css
- javascript
- next.js
- ngrok
- python
- sendgrid
- smtplib
- sqlalchemy
- sqlite