Inspiration
We've all been there: dinner is over, the mood is great, and then someone has to pull out a calculator and figure out who owes what. Splitting bills with friends was taking way too long, pulling us out of the moment when we should just be enjoying ourselves. We wanted to fix that: send a voice message, snap a photo of the receipt, and let the app handle everything instantly through Bunq.
What it does
MeditaSplit is a multimodal AI bill-splitting agent integrated with Bunq. You describe a shared expense by voice or upload a receipt photo, and the app figures out who owes what, matches your contacts, and sends Bunq payment requests, all within seconds and with a human confirmation step before any money moves.
How we built it
We started by building two separate agents: one for voice understanding (speech-to-text via Groq Whisper + Claude NLU) and one for receipt parsing (Claude Vision). Once both were working independently, we combined them into a unified multimodal input layer. On top of that, we built a central reasoning agent using Claude's tool-use capabilities, responsible for interpreting intent, resolving ambiguities, assigning costs, and orchestrating the Bunq API calls. The backend runs on FastAPI, with the Bunq SDK handling payment requests.
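To make the central reasoning agent concrete, here is a minimal sketch of what an agentic tool-use loop with the Anthropic SDK can look like. The tool names (`match_contact`, `create_bunq_request`), the model id, and the handlers are illustrative assumptions, not our exact implementation.

```typescript
// Minimal agentic tool-use loop sketch (tool names and model id are placeholders).
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Tools the reasoning agent may call; the real app wires these to contacts + Bunq.
const tools: Anthropic.Tool[] = [
  {
    name: "match_contact",
    description: "Resolve a spoken name to a known contact.",
    input_schema: {
      type: "object",
      properties: { name: { type: "string" } },
      required: ["name"],
    },
  },
  {
    name: "create_bunq_request",
    description: "Send a Bunq payment request after user confirmation.",
    input_schema: {
      type: "object",
      properties: {
        contact_id: { type: "string" },
        amount_eur: { type: "number" },
        description: { type: "string" },
      },
      required: ["contact_id", "amount_eur"],
    },
  },
];

async function runAgent(userText: string) {
  const messages: Anthropic.MessageParam[] = [{ role: "user", content: userText }];

  // Loop: let the model call tools until it produces a final text answer.
  while (true) {
    const response = await client.messages.create({
      model: "claude-sonnet-4-5", // placeholder id; swap in the Claude Sonnet version you use
      max_tokens: 1024,
      tools,
      messages,
    });

    if (response.stop_reason !== "tool_use") {
      return response.content; // final answer, e.g. a confirmation summary
    }

    // Execute each requested tool and feed the results back to the model.
    const results: Anthropic.ToolResultBlockParam[] = [];
    for (const block of response.content) {
      if (block.type === "tool_use") {
        const output = await executeTool(block.name, block.input); // app-specific
        results.push({
          type: "tool_result",
          tool_use_id: block.id,
          content: JSON.stringify(output),
        });
      }
    }
    messages.push({ role: "assistant", content: response.content });
    messages.push({ role: "user", content: results });
  }
}

// Placeholder: the real handlers hit the contact store and the Bunq API.
async function executeTool(name: string, input: unknown): Promise<unknown> {
  return { ok: true, name, input };
}
```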
Challenges we ran into
The Bunq API had a steeper learning curve than expected, particularly around the initial ApiContext setup and the lack of server-side filtering on payments, which forced us to build a client-side caching and filtering layer. On the AI side, handling complex or ambiguous voice queries reliably (multiple contacts with the same name, vague time references like "yesterday") required careful prompt engineering and confidence thresholds. Throughout all of this, the clock was always ticking.
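For the ambiguity handling, here is a rough sketch of the kind of confidence-thresholded contact matching we mean, using Fuse.js; the contact data, threshold values, and function names are made up for illustration.

```typescript
// Sketch of contact resolution with confidence thresholds (all values illustrative).
import Fuse from "fuse.js";

interface Contact {
  id: string;
  name: string;
}

const contacts: Contact[] = [
  { id: "1", name: "Anna de Vries" },
  { id: "2", name: "Anna Jansen" },
  { id: "3", name: "Tom Bakker" },
];

const fuse = new Fuse(contacts, { keys: ["name"], includeScore: true });

// Fuse scores range from 0 (exact match) to 1; lower means higher confidence.
const CONFIDENT = 0.2; // accept automatically
const AMBIGUOUS = 0.5; // ask the user to pick

type Resolution =
  | { kind: "match"; contact: Contact }
  | { kind: "ambiguous"; candidates: Contact[] }
  | { kind: "none" };

function resolveContact(spokenName: string): Resolution {
  const results = fuse.search(spokenName);
  if (results.length === 0) return { kind: "none" };

  const confident = results.filter((r) => (r.score ?? 1) <= CONFIDENT);
  if (confident.length === 1) return { kind: "match", contact: confident[0].item };

  // Several close matches (e.g. two Annas) or only weak ones: hand back to the user.
  const candidates = results
    .filter((r) => (r.score ?? 1) <= AMBIGUOUS)
    .map((r) => r.item);
  return candidates.length > 0 ? { kind: "ambiguous", candidates } : { kind: "none" };
}

console.log(resolveContact("Anna")); // likely ambiguous: both Annas score closely
```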
Accomplishments that we're proud of
We're proud of shipping a fully working multimodal pipeline, voice and images in, Bunq payment requests out, within 24 hours. The central reasoning agent handles edge cases gracefully and always asks for confirmation before acting, which makes the experience feel trustworthy rather than just impressive and prevents mistakes before they happen, such as when no matching contact or expense is found.
What we learned
Building with multiple AI models and external APIs under time pressure taught us to ruthlessly prioritize and build modularly. We learned that a clear state machine (Perceive → Reason → Confirm → Act → Report) is worth the upfront design time: it saved us hours of debugging later. We also learned that the human-in-the-loop confirmation step isn't just a safety net; it's what makes users actually trust an AI that touches their money.
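As a rough illustration of that state machine, here is a minimal typed version of the Perceive → Reason → Confirm → Act → Report flow; the state and event names are ours here, not the exact identifiers from the app.

```typescript
// Illustrative sketch of the agent's state machine.
type State = "perceive" | "reason" | "confirm" | "act" | "report";

type Event =
  | { type: "INPUT_PARSED" }   // voice or receipt understood
  | { type: "PLAN_READY" }     // splits assigned, contacts resolved
  | { type: "USER_CONFIRMED" } // human-in-the-loop approval
  | { type: "USER_REJECTED" }  // back to reasoning with corrections
  | { type: "REQUESTS_SENT" }; // Bunq payment requests created

function next(state: State, event: Event): State {
  switch (state) {
    case "perceive":
      return event.type === "INPUT_PARSED" ? "reason" : state;
    case "reason":
      return event.type === "PLAN_READY" ? "confirm" : state;
    case "confirm":
      if (event.type === "USER_CONFIRMED") return "act";
      if (event.type === "USER_REJECTED") return "reason";
      return state;
    case "act":
      return event.type === "REQUESTS_SENT" ? "report" : state;
    case "report":
      return state; // terminal for a single split
  }
}
```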
What's next for MeditaSplit
Expanding to handle multi-currency trips, deeper Bunq integration (recurring splits, group balances over time), and a smoother onboarding flow for non-Bunq users via bunq.me links. We also want to explore proactive suggestions: MeditaSplit noticing a group dinner payment and asking "want to split this?" before you even think to ask.
Built With
- Framework: Next.js 16 (App Router), React 19, TypeScript 5
- AI: Claude Sonnet 4.6 (Anthropic SDK 0.39), agentic tool-use loop + vision for receipts
- Banking: Bunq sandbox API, full RSA key pair, device registration, session lifecycle, request inquiry for payments
- Fuzzy matching: Fuse.js 7, contact resolution with confidence scoring
- Voice: Web Speech API (en-US), browser-native, no external service
- Styling: Tailwind CSS 4, Framer Motion (animations)
- Persistence: localStorage (per-group chat history), server-side JSON store (groups + messages), 3s polling for sync
- Deployment: Node.js 20+, runs locally via npm
