BunqSplit
Snap the receipt. Describe the dinner. Get paid.
Why we built it
Splitting a bill is a coordination problem with emotional shrapnel. Someone always does the math. Someone always quietly eats the rounding error. It is small per dinner and corrosive across years. The person who picks up the calculator is doing unpaid emotional labor, every time, and nobody thanks them.
Bunq already solved the hard part: moving money between humans without ceremony. The split itself is the last unsolved step, and it sits exactly one good transcription away from being trivial.
What it does
You photograph the receipt. You describe the table in plain English: "I had the carbonara, Mimi got the wine, the pizza was for everyone." BunqSplit transcribes, parses, and attributes. You land on a draggable card to correct anything we got wrong. One tap fires bunq payment requests.
End to end: under a minute.
Architecture
Frontend. Next.js 16, TypeScript, Tailwind, Framer Motion. A single /chat route, mobile-first, designed to survive one-handed use at a loud table.
Backend. FastAPI. One endpoint, /api/receipt/parse, owns the upload, transcription, and structured allocation pipeline.
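A minimal sketch of that endpoint's shape, with the two model passes (described next) stubbed out; the Receipt type and helper names here are ours, not the shipped code:

```python
from dataclasses import dataclass
from fastapi import FastAPI, File, Form, HTTPException, UploadFile

app = FastAPI()

@dataclass
class Receipt:
    items: list[dict]  # e.g. [{"name": "carbonara", "price_cents": 1450}, ...]
    total_cents: int   # the printed total, as read off the receipt

def transcribe_receipt(image_bytes: bytes) -> Receipt:
    """Pass 1: vision to text. Stub; the real version is a model call."""
    raise NotImplementedError

def allocate_items(receipt: Receipt, context: str) -> dict:
    """Pass 2: free-form context to structured assignment. Stub."""
    raise NotImplementedError

@app.post("/api/receipt/parse")
async def parse_receipt(
    image: UploadFile = File(...),  # the receipt photo
    context: str = Form(...),       # "I had the carbonara, Mimi got the wine..."
):
    receipt = transcribe_receipt(await image.read())
    # Arithmetic gate: refuse to propagate a bad read downstream.
    if sum(i["price_cents"] for i in receipt.items) != receipt.total_cents:
        raise HTTPException(status_code=422,
                            detail="Line items do not sum to the printed total")
    return allocate_items(receipt, context)
```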
Inference. Two Claude Opus 4.7 passes, decoupled by intent:
- Vision to text with an arithmetic gate. Transcribe the receipt, then verify line items sum to the printed total. If the gate fails, we surface a confidence error instead of propagating a bad read downstream.
- Free-form context to structured assignment. Map the user's natural-language allocation onto a strict JSON schema with explicit 'self' and 'unassigned' sentinels. Nothing reaches a person without a deliberate edge.
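A sketch of what that schema could look like as Pydantic models; the field names are assumptions, the sentinels are the load-bearing part:

```python
from enum import Enum
from pydantic import BaseModel

class Sentinel(str, Enum):
    SELF = "self"              # the person holding the phone
    UNASSIGNED = "unassigned"  # the model could not place the item

class Assignment(BaseModel):
    item: str             # line item as transcribed, e.g. "carbonara"
    price_cents: int
    owners: list[str]     # names from the description, or a Sentinel value
    shared: bool = False  # "the pizza was for everyone"

class Split(BaseModel):
    assignments: list[Assignment]
```

For the table description above, the carbonara comes back with owners=["self"], the wine with owners=["Mimi"], and the pizza with shared=True; anything the model cannot place lands on "unassigned" rather than on a person.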
Payments. bunq API for actual money movement, only after the user signs off on the split.
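For scale, firing one request through the bunq Python SDK is a handful of lines; this follows the SDK's sandbox examples, and the key, amount, and alias are illustrative:

```python
from bunq.sdk.context.api_context import ApiContext, ApiEnvironmentType
from bunq.sdk.context.bunq_context import BunqContext
from bunq.sdk.model.generated import endpoint
from bunq.sdk.model.generated.object_ import Amount, Pointer

# One-time device registration against the sandbox; the context is reusable.
api_context = ApiContext.create(ApiEnvironmentType.SANDBOX, "YOUR_API_KEY", "BunqSplit")
BunqContext.load_api_context(api_context)

# One request per person, fired only after the user confirms the split.
endpoint.RequestInquiry.create(
    amount_inquired=Amount("14.50", "EUR"),
    counterparty_alias=Pointer("EMAIL", "mimi@example.com"),
    description="BunqSplit: your share of dinner",
    allow_bunqme=True,
)
```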
Hard parts
Receipts are adversarial inputs. Thermal print fades non-uniformly. Items wrap across lines. Tax shows up in three different positions depending on the country. A hallucinated line item is not merely inaccurate, it is expensive. The arithmetic gate is what made the pipeline trustworthy, not the prompt.
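Concretely, the gate is a few lines; the structured failure (names assumed) is what lets the pipeline fail loudly instead of guessing:

```python
from dataclasses import dataclass

@dataclass
class GateFailure:
    delta_cents: int  # how far the transcription is from the printed total
    message: str

def arithmetic_gate(line_item_cents: list[int], printed_total_cents: int,
                    tolerance_cents: int = 0) -> GateFailure | None:
    """Return None if transcribed items sum to the printed total.

    Integer cents avoid float drift. A nonzero tolerance can forgive a
    rounding line the transcription legitimately missed; the default is
    strict, so any mismatch becomes a surfaced confidence error rather
    than a silent misallocation.
    """
    delta = abs(sum(line_item_cents) - printed_total_cents)
    if delta <= tolerance_cents:
        return None
    return GateFailure(delta_cents=delta,
                       message=f"Transcription off by {delta} cents, retake the photo")
```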
The harder problem turned out to be UX, not ML. The model will be wrong sometimes. The interface had to make correction cheaper than the mistake. Drag-and-drop on a card beats re-prompting the model every time the user reads "wine" where they ordered "water."
What we are proud of
- A genuinely fast happy path. Photo to assigned split to payment requests, with nothing in between but one confirming tap.
- The pipeline fails loudly. No silent misallocation.
What we learned
The interesting part of an LLM application is not the model. It is the contract around it. Strict output schemas, explicit failure sentinels, and a UI that assumes the model is occasionally wrong. That is the difference between a demo and a tool you reach for on a Friday.
What's next
- Group memory. Recurring friends become one-tap allocations.
- Tip and tax. Region-aware proportional distribution (see the sketch after this list).
- Travel mode. Multi-currency with conversion at receipt time.
- First-class integration. This belongs inside bunq, not adjacent to it.
- Beyond restaurants. Groceries, trips, shared subscriptions. Anywhere a receipt and a group chat overlap.
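On the tip-and-tax item above: splitting an added amount proportionally without anyone quietly eating the rounding error is a largest-remainder problem in integer cents. One way it could work, as a sketch rather than shipped code:

```python
def distribute_proportionally(amount_cents: int, shares_cents: list[int]) -> list[int]:
    """Split amount_cents across people in proportion to shares_cents.

    Pure integer math: floor each share, then hand the leftover cents to
    the largest remainders, so the pieces always sum exactly to the total.
    """
    total = sum(shares_cents)
    floored = [amount_cents * s // total for s in shares_cents]
    remainders = [amount_cents * s % total for s in shares_cents]
    leftover = amount_cents - sum(floored)
    for i in sorted(range(len(shares_cents)), key=remainders.__getitem__,
                    reverse=True)[:leftover]:
        floored[i] += 1
    return floored
```

For a 100-cent tip over subtotals of 1450, 900, and 2200 cents, this returns [32, 20, 48]: exactly 100, with the two leftover cents going to the largest remainders.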
Built With
- amazon-web-services
- claude
- fastapi
- next.js
- python
- react
- tailwind
- typescript