Second - bunq Hackathon 7.0 submission
Project name
Second - the bank that helps you think twice.
Tagline
Banks built fraud detection for criminals. We built financial self-protection for humans.
Second is not a blocker. Second is not a chatbot. Second is not another budgeting app.
It is a personal financial intervention layer that speaks to you at the moment you are most vulnerable.
Inspiration
The Netherlands ranks first in the European Economic Area for digital payment fraud volume, losses, and rate (EBA 2025). Dutch consumers lost roughly EUR 1.75 billion to scams in 2024; bunq itself paid out over EUR 10 million to victims in a single year (Threatmark 2025). What struck us most was a finding from NVB 2025: scammers now actively coach their victims to ignore bank warnings. Static friction is dead.
But fraud is only half the story. The same neurological pattern — urgency overruling reasoning — drives impulse buys, dead subscriptions, and regret-purchases of every kind. We wanted to build something that sits at that exact moment, before the money leaves, and asks: are you sure?
What it does
Second is a live, voice-first, vision-aware guardian built into bunq. It intercepts payments at the moment they're about to happen and opens a real conversation with the user — in the voice they actually listen to.
- Five personas (Sibling, Coach, Mentor, Accountant, Therapist) share one safety backbone but speak in distinct registers, hot-swappable mid-conversation.
- Three outcomes are always one tap away: Pay now, Hold 24h (funds parked in a "Second Vault" sub-account), or Divert to goal.
- 18 scam archetypes covering NL, EN, and DE — including 2026 trends like AI-fabricated supplier invoices and deepfake-voice helpdesk fraud. When a match fires, the action bar reweights to Verify-first and a dispute evidence packet is drafted automatically.
- Three live modalities: voice in (AWS Transcribe Streaming), voice out (AWS Polly Neural), and image (camera + Claude vision for receipts, invoices, and suspicious WhatsApp messages).
- Real bunq sandbox integration: every Hold, Divert, or Pay action is reflected immediately on the bunq Sandbox Android emulator paired to the same user.
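The scam-triggered reweighting of the action bar can be sketched roughly as follows. This is an illustrative assumption, not Second's actual code: action names, weights, and the confidence scale are all hypothetical.

```javascript
// Illustrative sketch: each action carries a weight, and a scam match with
// confidence in [0, 1] promotes "Verify first" while demoting instant payment.
const baseActions = [
  { id: "pay_now", label: "Pay now", weight: 1.0 },
  { id: "hold_24h", label: "Hold 24h", weight: 0.6 },
  { id: "divert_to_goal", label: "Divert to goal", weight: 0.5 },
  { id: "verify_first", label: "Verify first", weight: 0.2 },
];

function reweight(actions, scamConfidence) {
  return actions
    .map((a) => {
      if (a.id === "pay_now") return { ...a, weight: a.weight * (1 - scamConfidence) };
      if (a.id === "verify_first") return { ...a, weight: a.weight + scamConfidence };
      return a;
    })
    .sort((x, y) => y.weight - x.weight); // highest weight renders first
}
```

With no scam signal the ranking is unchanged; a confident match puts verification on top without ever removing the user's ability to pay.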
How we built it
- Frontend: Next.js 16 (App Router) + React 19 + Tailwind 3 + Framer Motion, deployed to Vercel. Installs as a PWA via a service worker and `manifest.webmanifest`.
- AI reasoning: Claude Opus 4.7 via the Anthropic SDK or AWS Bedrock (toggled with one env var). The model is wired with twelve purpose-built tools (`fetch_balance`, `fetch_recent_tx`, `fetch_user_goal`, `check_scam_patterns`, `create_payment_hold`, `create_earmark_to_goal`, `draft_dispute_package`, `execute_payment`, `list_cards`, `freeze_card`, `detect_anomalies`, `alert_emergency_contact`) that hit the real bunq sandbox API.
- Voice: browser microphone capture downsampled to 16 kHz mono PCM → AWS Transcribe Streaming (voice in); AWS Polly Neural, one voice ID per persona, streamed as MP3 (voice out). Barge-in is implemented so the user can interrupt mid-sentence.
- Vision: `/demo/scan` captures a camera frame or file upload, sends it to Claude vision, and receives structured JSON (vendor, amount, IBAN, line items, scam markers). Three prompt presets — Suspicious message, Receipt, Invoice — tune the extraction schema.
- bunq integration: full RSA + signed handshake (installation → device-server → session-server), `X-Bunq-Client-Signature` (PKCS1v15-SHA256) on every call. A one-shot `npm run bunq:bootstrap` script creates the Vault and Goal sub-accounts and funds the account via Sugar Daddy. Webhooks feed a real-time SSE stream to the UI.
- MCP layer: all twelve tools are also exposed as a JSON-RPC endpoint at `/api/mcp` so any Anthropic-aligned agent can drive Second programmatically.
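The voice-in preprocessing step (downsampling browser audio to 16 kHz mono PCM for Transcribe) can be sketched as a naive decimation. This assumes 48 kHz Float32 mic input, which is typical for browser capture but not confirmed for Second; a real pipeline would also low-pass filter before decimating to avoid aliasing.

```javascript
// Sketch: convert Float32 samples (typically 48 kHz from the browser) to
// 16 kHz 16-bit mono PCM, the format AWS Transcribe Streaming expects.
// Naive decimation: keep every ratio-th sample, no anti-aliasing filter.
function toPcm16k(float32Samples, inputRate = 48000, targetRate = 16000) {
  const ratio = inputRate / targetRate; // 3 for 48 kHz -> 16 kHz
  const outLength = Math.floor(float32Samples.length / ratio);
  const out = new Int16Array(outLength);
  for (let i = 0; i < outLength; i++) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, float32Samples[Math.floor(i * ratio)]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}
```

One second of 48 kHz input yields 16,000 PCM samples, i.e. 32,000 bytes per second on the Transcribe WebSocket.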
Challenges we ran into
- bunq's RSA handshake required precise PKCS1v15-SHA256 signing on every mutating call — getting the `X-Bunq-Client-Signature` header right, especially across the installation/device-server/session-server sequence, took significant debugging.
- Streaming voice with barge-in meant coordinating three async pipelines (mic recording → Transcribe WebSocket → UI transcript) while also cancelling TTS playback the moment the user starts speaking again — race conditions were common.
- Scam detection latency: running the pattern detector server-side on every conversation turn without blocking the streamed Claude response required careful sequencing of tool calls within the SSE loop.
- Cross-modality handoff: getting the Claude vision output from `/demo/scan` to load seamlessly as context into the intervention conversation overlay required a clean shared state contract between the scan route and the conversation engine.
Accomplishments that we're proud of
- A fully end-to-end multimodal prototype — voice in, voice out, and camera — all wired live, not mocked.
- The persona system: five voices, one safety backbone, hot-swappable mid-conversation. The messenger-effect in behavioural finance is real, and we built for it.
- Real bunq sandbox integration with a one-command bootstrap (`npm run bunq:bootstrap`) that any judge can run in under two minutes.
- `npm run verify` — a production-readiness probe that exercises every endpoint with deterministic input and prints PASS / FAIL / SKIP. Confidence on demo day.
- `npm run record` — a Playwright-driven end-to-end recording script that captures the full demo flow automatically, including the Polly voice output.
- The MCP endpoint: Second's tools are accessible to any Anthropic-aligned agent, making the surface extensible far beyond the hackathon prototype.
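The PASS / FAIL / SKIP probe idea reduces to a simple pattern. The sketch below uses hypothetical helper names and check shapes; it is not the actual `npm run verify` implementation.

```javascript
// Sketch of a readiness probe: run each named check against deterministic
// input and collect PASS (resolved), FAIL (threw), or SKIP (disabled).
async function probe(checks) {
  const results = [];
  for (const { name, run, skip } of checks) {
    if (skip) {
      results.push({ name, status: "SKIP" });
      continue;
    }
    try {
      await run();
      results.push({ name, status: "PASS" });
    } catch {
      results.push({ name, status: "FAIL" });
    }
  }
  return results;
}
```

Running checks sequentially keeps the report deterministic, which matters when the probe is your pre-demo confidence signal.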
What we learned
- The messenger matters as much as the message. Behavioural finance literature on the messenger effect informed the entire persona architecture — a warning from "your Sibling" lands differently than one from "your Bank."
- Streaming + tool use is powerful but needs careful orchestration. Claude's tool-call loop inside an SSE stream required deliberate handling of partial chunks, tool result injection, and turn limits.
- Multimodal inputs dramatically expand the intervention surface. Being able to point a camera at a suspicious WhatsApp screenshot and get structured scam-marker extraction in under two seconds changes what "fraud prevention" can mean in practice.
- bunq's sandbox is genuinely useful — having real sub-accounts, real signed API calls, and the Android emulator mirror made the demo concrete in a way that mocks never could.
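The tool-call orchestration lesson can be illustrated with a stripped-down, turn-limited loop. The model and tools below are mocks under our own naming; the real engine additionally handles partial SSE chunks and streaming, which this sketch omits.

```javascript
// Sketch of a turn-limited tool loop: call the model, and if it requests a
// tool, execute it and inject the result back as context for the next turn.
async function runToolLoop(callModel, tools, maxTurns = 5) {
  const messages = [];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await callModel(messages);
    if (reply.type === "text") return reply.text; // final answer, stop looping
    const result = await tools[reply.name](reply.input); // run the requested tool
    messages.push({ role: "tool", name: reply.name, content: result });
  }
  throw new Error("turn limit reached without a final answer");
}
```

The explicit `maxTurns` bound is the key safety valve: without it, a model that keeps requesting tools can spin the SSE stream forever.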
What's next for Second
- Production bunq hold primitive: the current Vault is an internal transfer. bunq could expose a native hold that temporarily freezes funds in place; in that sense, Second is also a feature proposal.
- Proactive anomaly alerts: the z-score anomaly detector already runs; surfacing it as a push notification before a user even opens a payment flow is the next step.
- Expanded scam pattern library: the current 18 patterns are a seed. A feedback loop from confirmed scam reports — anonymised and aggregated — would let the library grow continuously.
- Voice-first onboarding: letting users set their persona, guard level, and savings goal entirely by voice, without touching the screen.
- Shared vaults: a "Second for two" mode where a trusted contact (partner, parent) is looped into high-risk payment decisions, with explicit consent.
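For reference, the z-score check behind the proactive anomaly alerts reduces to a few lines. This is an illustrative sketch of the standard technique, not the production detector, and the threshold of 3 is an assumed default.

```javascript
// Sketch: flag a candidate payment amount whose z-score against recent
// transaction history exceeds a threshold (here, 3 standard deviations).
function zScore(amounts, candidate) {
  const mean = amounts.reduce((a, b) => a + b, 0) / amounts.length;
  const variance = amounts.reduce((a, b) => a + (b - mean) ** 2, 0) / amounts.length;
  const std = Math.sqrt(variance);
  return std === 0 ? 0 : (candidate - mean) / std; // flat history: nothing to flag
}

function isAnomalous(amounts, candidate, threshold = 3) {
  return Math.abs(zScore(amounts, candidate)) > threshold;
}
```

A EUR 1,000 payment against a history of EUR 9–12 transactions scores far past any sane threshold, which is exactly the moment a push notification should fire before the payment flow even opens.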
Built With
- amazon-web-services
- bunq
- claude
- javascript
