bunq Voice Assistant — Hackathon 7
NOTE: We did not have time to complete the demo, so the submitted demo link is a placeholder. If this is not acceptable, we understand if you choose not to look at our work.
A voice-first banking assistant built for visually impaired users, powered by an LLM with real-time bunq API access via the Model Context Protocol (MCP).
The Problem
Current mobile banking apps rely heavily on visual navigation. Screen readers like VoiceOver and TalkBack provide basic accessibility, but they are fundamentally limited:
- Navigation is linear and slow, requiring sequential interaction with UI elements
- Dynamic interfaces lead to inconsistent or incorrect screen reader interpretations
- Critical actions like payments require multiple steps with no structured overview
- Error recovery is difficult, increasing the risk of unintended transactions
The Solution
A voice-first LLM assistant that lets users interact with their bunq account through natural speech. Users press a button, state their request, and receive a spoken response — no visual navigation required.
Architecture
```
Voice Input (Whisper / Browser)
        │
        ▼
Streamlit UI / Terminal Client
        │ POST /api/assistant
        ▼
Assistant Backend (FastAPI)
        │
        ├── LLM Tool Loop (Ollama / Qwen3)
        │
        └── MCP Client
              │
              ▼
      Banking MCP Server
              │
              ▼
      bunq API (Sandbox)
```
Components
| Component | Location | Description |
|---|---|---|
| Voice UI | `app/app.py` | Streamlit mobile UI with chat and voice recording |
| Terminal Client | `services/assistant-backend/main.py` | Pure voice client for visually impaired users |
| Shared Voice | `shared/voice/voice.py` | VAD, Whisper, gTTS — shared across UI and terminal |
| Assistant Backend | `services/assistant-backend/src/` | FastAPI + LLM tool loop |
| MCP Client | `services/assistant-backend/src/mcp_client/` | Connects backend to MCP server |
| Banking MCP Server | `services/banking-mcp-server/src/` | Exposes banking tools via MCP |
| bunq API Client | `services/banking-mcp-server/src/client/` | Authenticated bunq API wrapper |
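The LLM tool loop can be pictured with a small sketch: the model either answers directly or emits a tool call, the backend executes it (via the MCP client in our case), and the result is fed back in until the model produces a final spoken answer. The registry, `run_tool_loop`, and `fake_llm` below are illustrative stand-ins, not the actual backend code:

```python
import json

# Hypothetical tool registry standing in for the MCP client's tool calls.
TOOLS = {
    "get_balance": lambda args: {"balance": "1234.56 EUR", "iban": "NL00BUNQ0000000000"},
}

def run_tool_loop(llm, user_message, max_steps=5):
    """Feed the user message to the LLM; execute any tool call it emits,
    append the result, and loop until the LLM returns plain text."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = llm(messages)
        if "tool_call" not in reply:
            return reply["content"]  # final answer to speak aloud
        call = reply["tool_call"]
        result = TOOLS[call["name"]](call.get("arguments", {}))
        messages.append({"role": "assistant", "content": "", "tool_call": call})
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Sorry, I could not complete that request."

# Stub LLM: requests the balance tool once, then phrases the answer.
def fake_llm(messages):
    if messages[-1]["role"] == "user":
        return {"tool_call": {"name": "get_balance", "arguments": {}}}
    result = json.loads(messages[-1]["content"])
    return {"content": f"Your balance is {result['balance']}."}
```

With the stub, `run_tool_loop(fake_llm, "what is my balance")` walks one tool round-trip and returns the spoken sentence.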
Features
Banking Tools (via MCP)
- Account Overview — balance, IBAN, account type
- Transactions — recent payment history
- Make Payment — send money via IBAN, email, or phone number (with confirmation)
- Support Info — contact details routed by issue type
- Session History — persistent conversation memory with starred instructions
- Credit Risk — loan eligibility prediction via logistic regression model
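The confirmation step of Make Payment can be sketched as a two-call flow: the first call only records the intent and returns a summary to read aloud, and money moves only after an explicit confirmation. The names here (`request_payment`, `confirm_payment`, `PENDING`) are illustrative, not the real MCP tool code:

```python
# Pending payments keyed by session, awaiting explicit confirmation.
PENDING = {}

def request_payment(session_id, amount_eur, recipient):
    """Record the intent and return a spoken summary; nothing is sent yet."""
    PENDING[session_id] = {"amount": amount_eur, "recipient": recipient}
    return (f"You are about to send {amount_eur:.2f} EUR to {recipient}. "
            "Say 'confirm' to proceed.")

def confirm_payment(session_id, user_said_confirm):
    """Execute or cancel the pending payment for this session."""
    payment = PENDING.pop(session_id, None)
    if payment is None:
        return "There is no pending payment."
    if not user_said_confirm:
        return "Payment cancelled."
    # Here the real server would call the bunq sandbox API to create the payment.
    return f"Sent {payment['amount']:.2f} EUR to {payment['recipient']}."
```

Popping the pending entry on every confirmation attempt means a stale or repeated "confirm" can never trigger a second transfer.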
Accessibility
- Voice Activity Detection (VAD) — automatic speech detection, no button holding required
- Whisper speech-to-text — OpenAI Whisper large-v3-turbo
- gTTS text-to-speech — responses read aloud
- `IS_VISUALLY_IMPAIRED` mode — system mic + speaker instead of browser UI
- Text fallback — always available, bypasses Whisper entirely
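As a rough illustration of what VAD does, here is a minimal energy-threshold version; the actual implementation in `shared/voice/voice.py` may well use a dedicated library instead:

```python
def is_speech(frame, threshold=0.02):
    """Classify one audio frame (a list of samples in [-1, 1]) as speech
    by its RMS energy. A real VAD is more robust than a fixed threshold."""
    rms = (sum(s * s for s in frame) / len(frame)) ** 0.5
    return rms > threshold

def trim_silence(frames, threshold=0.02):
    """Keep only the span from the first to the last speech frame,
    so the user never has to hold a button to mark start and end."""
    flags = [is_speech(f, threshold) for f in frames]
    if not any(flags):
        return []
    start = flags.index(True)
    end = len(flags) - flags[::-1].index(True)
    return frames[start:end]
```

Only the trimmed span would then be handed to Whisper for transcription.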
Setup
Prerequisites
- Python 3 with `venv` and `pip`
- Ollama (e.g. installed via Homebrew)
- A bunq sandbox API key
Environment Variables
Create `.env` in `services/assistant-backend/`:

```
ASSISTANT_API_URL=http://localhost:8000/api/assistant
DEMO_USER_ID=demo_user_001
DEMO_SESSION_ID=demo_session_001
IS_VISUALLY_IMPAIRED=false
```
Create `.env` in `services/banking-mcp-server/src/`:

```
BUNQ_API_KEY=your_sandbox_key
BUNQ_SANDBOX=true
```
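On the backend side, these variables can be read with a small helper like the sketch below (the real code may use python-dotenv or pydantic-settings instead of raw `os.getenv`):

```python
import os

def load_settings():
    """Read the assistant-backend settings from the environment,
    falling back to the defaults shown in the .env example above."""
    return {
        "api_url": os.getenv("ASSISTANT_API_URL",
                             "http://localhost:8000/api/assistant"),
        "user_id": os.getenv("DEMO_USER_ID", "demo_user_001"),
        "session_id": os.getenv("DEMO_SESSION_ID", "demo_session_001"),
        # Env vars are strings, so the boolean flag needs explicit parsing.
        "visually_impaired": os.getenv("IS_VISUALLY_IMPAIRED", "false").lower() == "true",
    }
```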
Install Dependencies
```bash
# Create and activate venv
python -m venv .venv
source .venv/bin/activate

# Install backend dependencies
pip install -r services/assistant-backend/requirements.txt

# Install MCP server dependencies
pip install -r services/banking-mcp-server/requirements.txt

# Install app dependencies
pip install -r app/requirements.txt
```
Pull Ollama Model
```bash
brew services start ollama
ollama pull qwen3:4b
```
Running
Terminal 1 — Assistant Backend:
```bash
cd services/assistant-backend/src
uvicorn main:app --reload --port 8000
```
Terminal 2 — Streamlit UI:
```bash
cd app
streamlit run app.py --server.fileWatcherType none
```
Terminal 3 — Terminal Voice Client (visually impaired users):
```bash
cd services/assistant-backend
python main.py
```
Testing
Health Check
```bash
curl http://localhost:8000/health
# {"status": "ok", "mcp_connected": true}
```
Text Chat Test
```bash
curl -X POST http://localhost:8000/api/assistant \
  -H "Content-Type: application/json" \
  -d '{"message": "what is my balance", "user_id": "demo_user_001", "session_id": "test_001"}'
```
MCP Smoke Test
```bash
cd services/assistant-backend
python -m src.mcp_client.client
```
Demo Script
- "What is my balance?" → account overview
- "Show my recent transactions" → transaction history
- "Send 1 euro to sugardaddy@bunq.com" → payment confirmation flow
- "Always confirm before sending money" → starred session instruction
- "Am I eligible for a 10,000 euro loan?" → credit risk prediction
- "I need help" → support info
Team
| Name | Role |
|---|---|
| Umair | MCP server, assistant backend, API development |
| Anant | Credit risk model (logistic regression + classification) |