Inspiration
It started with a crumpled receipt on a kitchen table.
Most personal finance apps expect you to do all the work: enter purchases, pick categories, and stay consistent. In reality, nobody keeps that up. That friction is where most tools fail.
We asked a simpler question: what if using your phone camera was enough? Just point and go. Vision models have become surprisingly good at understanding real-world images: blurry receipts, product labels, prices in different formats. But that capability is only useful if it connects to something real. With bunq’s API, we could actually act on that information: log expenses, update balances, and make it meaningful.
The second idea came from impulse buying. When you’re about to buy something, you rarely have the right context:
- What’s your actual balance right now?
- Have you already bought something similar recently?
- What does this purchase mean long-term?
We wanted that information to appear instantly, triggered by simply pointing your camera.
At its core, Lenz is about making spending effortless, aware, and guilt-free, not something you avoid because it feels like work.
What We Built
Lenz is a Progressive Web App with two main features:
Receipt AI
Take a photo or upload any receipt. The system extracts:
- Merchant
- Total amount
- Currency
- Date
- Category
- Individual items
You can log it with one click into your bunq sandbox account and view it in a searchable history with a live spending-by-category chart.
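The spending-by-category chart is a straightforward aggregation over logged receipts. A minimal sketch of the idea (function and field names here are illustrative, not the actual `app.py` code):

```python
from collections import defaultdict

def spend_by_category(receipts):
    """Roll up logged receipts into per-category totals for the chart.
    Each receipt is a dict with at least a 'category' and a 'total'."""
    totals = defaultdict(float)
    for r in receipts:
        totals[r.get("category") or "uncategorized"] += float(r["total"])
    return dict(totals)

receipts = [
    {"merchant": "Albert Heijn", "total": 23.40, "category": "groceries"},
    {"merchant": "MediaMarkt", "total": 249.00, "category": "electronics"},
    {"merchant": "Jumbo", "total": 11.10, "category": "groceries"},
]
print(spend_by_category(receipts))
```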
Lenz AI
Point your camera at any product. The system:
- Identifies the item
- Estimates its price
- Fetches your live bunq balance
Then it calculates:
- Affordability status — the math is never delegated to the AI; the verdict is computed deterministically, and only the suggestion phrased around it comes from LLM output:
$$ \text{status} = \begin{cases} \texttt{impossible} & \text{if } B < P \\ \texttt{tight} & \text{if } B - P < 0.1 \cdot B \\ \texttt{comfortable} & \text{otherwise} \end{cases} $$
where $B$ is the live bunq balance and $P$ is the item price.
- Hours of work
- S&P 500 opportunity cost
- Duplicate purchase warning — semantic similarity check against your last 30–90 days of receipts (window depends on category: 90 days for electronics, 30 days for everything else)
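The deterministic checks above fit in a few lines of Python. A sketch under stated assumptions (function names and the flat hourly-wage parameter are ours for illustration, not the actual `app.py` code):

```python
from datetime import date, timedelta

def affordability_status(balance, price):
    """Deterministic verdict; the LLM never does this arithmetic."""
    if balance < price:
        return "impossible"
    if balance - price < 0.1 * balance:  # purchase leaves under 10% of balance
        return "tight"
    return "comfortable"

def hours_of_work(price, hourly_wage):
    """How many hours of work the item costs at a given wage."""
    return price / hourly_wage

def duplicate_window(category, today):
    """Start of the look-back window for the duplicate-purchase check:
    90 days for electronics, 30 days for everything else."""
    days = 90 if category == "electronics" else 30
    return today - timedelta(days=days)

print(affordability_status(260.0, 249.0))  # leaves 11, under 10% of 260 -> 'tight'
```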
The full banking dashboard (accounts, transactions, send money, request money, bunq.me links) is wired to the bunq sandbox API via HMAC-SHA256 signed requests.
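As a rough illustration of the signing mechanics (the exact string-to-sign and header layout in `bunq_client.py` will differ; this only shows stdlib HMAC-SHA256 over a request):

```python
import hashlib
import hmac

def sign_request(secret, method, path, body):
    """Compute an HMAC-SHA256 signature over a request.
    The string-to-sign here is a simplification for illustration;
    the real client signs per bunq's header conventions."""
    message = f"{method} {path}\n{body}".encode()
    return hmac.new(secret.encode(), message, hashlib.sha256).hexdigest()

sig = sign_request("sandbox-secret", "POST", "/v1/payment", '{"amount": "9.99"}')
print(sig)  # 64 hex chars, attached as a request header
```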
How We Built It
The stack was chosen for zero friction from idea to demo:
- AI layer: Anthropic Python SDK. `claude-opus-4-5` for receipts (accuracy), `claude-haiku-4-5` for live camera (speed). Images are base64-encoded in the browser and sent directly to Claude's vision endpoint with no pre-processing.
- Banking layer: `bunq_client.py`, a hand-rolled bunq REST client implementing the full 3-step authentication flow (installation → device-server → session-server) with RSA-2048 key generation and HMAC-signed request headers. Each registered user gets their own provisioned sandbox account automatically.
- Backend: Flask. All routes, AI calls, and bunq integration live in one file (`app.py`). No microservices, no queues.
- Frontend: Vanilla JS/CSS as a single-page app inside `index.html`. No build step, no framework: `python app.py` and it's live.
- Database: SQLite with a thin custom ORM (`database.py`). Zero ops, auto-creates on first run.
- Mobile: PWA with a service worker and web manifest. The camera feature uses `MediaDevices.getUserMedia()`; installable to home screen, no app store.
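For context, a receipt-parsing request to the Messages API is shaped roughly like this (a sketch, not the actual `app.py` code; the prompt and `max_tokens` value are placeholders):

```python
import base64

def vision_request_body(image_bytes, prompt, model="claude-opus-4-5"):
    """Build a Messages-API request body: the browser base64-encodes
    the photo and the backend forwards it with the extraction prompt."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/jpeg",
                            "data": base64.b64encode(image_bytes).decode()}},
                {"type": "text", "text": prompt},
            ],
        }],
    }

# With the SDK this becomes: anthropic.Anthropic().messages.create(**body)
```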
Development happened in tight loops: write a prompt, test against a real receipt photo, observe the structured JSON output, tighten the prompt. The affordability logic was deliberately kept off the AI. We learned early that asking Claude "can this person afford €249?" produces inconsistent results. The overview of the logic is ours; the implementation is Claude's.
Challenges
1. bunq's authentication protocol is genuinely complex.
The 3-step flow (installation token → device registration → session token) with RSA key pairs and HMAC-SHA256 per-request signing took significant debugging. The error messages from the API are minimal, so a wrong header order or missing X-Bunq-Geolocation silently fails. Building bunq_client.py from scratch against the sandbox was the single biggest time investment.
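The three steps, sketched with an injected `post` helper so it runs offline (endpoint paths follow bunq's public docs; response unwrapping and the per-request signing are simplified away here):

```python
def open_session(post, public_key_pem, api_key):
    """bunq's 3-step auth flow. `post(path, body, headers) -> dict`
    is injected so this sketch stays testable without the sandbox."""
    # Step 1 — installation: register our RSA public key,
    # receive an installation token.
    inst = post("/v1/installation",
                {"client_public_key": public_key_pem}, {})
    auth = {"X-Bunq-Client-Authentication": inst["token"]}
    # Step 2 — device-server: bind this device (via the API key)
    # to the installation.
    post("/v1/device-server",
         {"description": "Lenz PWA", "secret": api_key}, auth)
    # Step 3 — session-server: trade the API key for a session token,
    # which then authenticates every subsequent signed request.
    sess = post("/v1/session-server", {"secret": api_key}, auth)
    return sess["token"]
```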
2. Prompt engineering for structured output is harder than it looks.
Claude's vision models are expressive by default: they want to explain things. Getting consistent, machine-parseable JSON out of a receipt image required careful prompt design: explicit field names, a fixed category enum, instructions to return null for missing fields rather than guessing, and a fallback JSON shape for non-product images in Lenz AI. We iterated through roughly a dozen prompt versions.
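The parsing side can be defended in code as well. A sketch of the kind of normalization we mean (the category enum and field names here are illustrative, not the actual prompt schema):

```python
import json
import re

CATEGORIES = {"groceries", "electronics", "dining", "transport", "other"}

def parse_receipt_reply(text):
    """Pull the first JSON object out of a model reply and normalize it:
    unknown categories fall back to 'other', missing fields become None."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None  # fallback path for non-product / unparseable replies
    data = json.loads(match.group())
    if data.get("category") not in CATEGORIES:
        data["category"] = "other"
    for field in ("merchant", "total", "currency", "date"):
        data.setdefault(field, None)
    return data
```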
3. Camera UX on mobile Safari.
Getting the live camera preview to work as a PWA on iOS required specific video element attributes (playsinline, autoplay, muted) and careful permission handling. HTTPS is required for getUserMedia on mobile, which meant ngrok became a required part of the demo setup.
4. Speed vs. accuracy tradeoff. Receipt parsing with Opus takes 2–4 seconds, acceptable for a deliberate "scan this receipt" flow. Live camera scanning with Haiku needs to feel instant. We tuned the Haiku prompt to return a minimal JSON payload and added a loading skeleton to the UI so the wait feels intentional rather than broken.
5. Real-world camera stability and object detection.
In practice, users don’t hold objects perfectly still. Slight hand movement, poor lighting, or objects partially out of frame can all reduce detection accuracy. We designed around this by encouraging stable framing in the UI and keeping responses fast, so users don’t have to hold still for long.
What We Learned
- Multimodal models are genuinely ready for real-world document parsing. A blurry, rotated, partially torn supermarket receipt is not a problem for Claude Opus.
- The bunq API is powerful but assumes you've read the full authentication spec. The sandbox environment is a great place to break things.
- Separating AI responsibilities from financial logic is not just good engineering; it's a prerequisite for a trustworthy financial product. AI decides what the thing is. Math decides what it costs you.
- A PWA with a service worker closes the gap between a web demo and a mobile-native experience more than we expected.
