How we built it

Bloom was built as a mobile-first kid economy engine around one core journey: a parent posts a gig, a kid completes it with before and after photos, AI verifies the result, and the reward lands instantly in the kid's bunq wallet.

On the frontend, we used Next.js 16, React 19, TypeScript, and Tailwind CSS to create a simple camera-first experience that works well on a phone-sized viewport. On the backend, we used FastAPI with typed models and a clear separation between routes, services, repositories, and integration clients.

We designed the backend so that all sensitive logic stays server-side. The frontend never talks directly to bunq or Claude. Instead, the backend owns:

  • gig state transitions
  • image validation and upload handling
  • AI verification
  • payment execution
  • balance retrieval
  • fallback and error handling
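The shape of that server-owned flow can be sketched in a few lines. This is a hypothetical illustration, not our exact code: the function and field names (`complete_gig`, `verify_photos`, `execute_payment`) are placeholders for the real services behind our routes.

```python
# Hypothetical sketch of the server-owned gig flow: the frontend submits
# photos to one endpoint, and the backend orchestrates verification and
# payment internally. It never exposes bunq or Claude to the client.
from dataclasses import dataclass

@dataclass
class Verdict:
    approved: bool
    reason: str

def verify_photos(before: bytes, after: bytes) -> Verdict:
    # Placeholder for the AI verification call (Claude or LM Studio).
    return Verdict(approved=True, reason="Room is visibly tidier")

def execute_payment(gig: dict) -> str:
    # Placeholder for the bunq client (fake or sandbox).
    return "paid"

def complete_gig(gig: dict, before: bytes, after: bytes) -> dict:
    if gig["status"] != "claimed":
        raise ValueError("Gig is not in a submittable state")
    verdict = verify_photos(before, after)
    if not verdict.approved:
        gig["status"] = "rejected"
        return {"verdict": verdict, "payment": None}
    gig["status"] = "paid"
    return {"verdict": verdict, "payment": execute_payment(gig)}
```

Because the browser only ever sees the final result of `complete_gig`, there is no client-side path that could skip verification or trigger a payment directly.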

To keep development fast and demo-safe, we built Bloom with adapter-based integrations. That gave us local-friendly modes (LM Studio for AI, a fake bunq client for payments) while preserving clean paths to Anthropic Claude and the real bunq sandbox behind the same interfaces. We also used local JSON state for MVP persistence, so the flow survives refreshes without the overhead of a production database.
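The adapter idea is simple: fake and real clients share one interface, so the rest of the backend never branches on which one is active. A minimal sketch, with illustrative class names rather than our exact ones:

```python
# Adapter sketch: FakeBunqClient and SandboxBunqClient satisfy the same
# PaymentClient protocol, so callers are mode-agnostic.
from typing import Protocol

class PaymentClient(Protocol):
    def pay(self, amount_cents: int, to_iban: str) -> str: ...

class FakeBunqClient:
    """Local/demo mode: records payments in memory, always succeeds."""
    def __init__(self) -> None:
        self.payments: list[tuple[int, str]] = []

    def pay(self, amount_cents: int, to_iban: str) -> str:
        self.payments.append((amount_cents, to_iban))
        return f"fake-payment-{len(self.payments)}"

class SandboxBunqClient:
    """Real mode: would call the bunq sandbox API here."""
    def pay(self, amount_cents: int, to_iban: str) -> str:
        raise NotImplementedError("wire up the bunq sandbox client")

def reward_kid(client: PaymentClient, amount_cents: int, iban: str) -> str:
    # Business logic depends only on the protocol, never on a concrete class.
    return client.pay(amount_cents, iban)
```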

Challenges we ran into

One of the biggest challenges was making the core loop feel reliable, not just technically functional. Photo verification sounds simple until you need it to produce a clear, kid-friendly verdict instead of vague AI output. We had to normalize the verification contract so the result was structured, specific, and usable in the UI.
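Normalizing the contract meant forcing the model's free-form output into a fixed schema the UI can render directly. In the backend we use Pydantic models for this; the sketch below uses a stdlib dataclass to stay self-contained, and the field names are illustrative:

```python
# Sketch of a normalized verification contract: raw model output is
# coerced into a fixed, kid-friendly schema, rejecting on any gap.
import json
from dataclasses import dataclass

@dataclass
class VerificationResult:
    approved: bool
    confidence: float  # clamped to 0.0 - 1.0
    kid_message: str   # short, kid-friendly explanation for the UI

def parse_verdict(raw_model_output: str) -> VerificationResult:
    """Parse the AI response into the contract; raises on missing fields."""
    data = json.loads(raw_model_output)
    return VerificationResult(
        approved=bool(data["approved"]),
        confidence=max(0.0, min(1.0, float(data["confidence"]))),
        kid_message=str(data["kid_message"]).strip(),
    )
```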

Another challenge was payments. In a demo, nothing breaks trust faster than paying twice or showing a payment succeeded when it did not. That pushed us to treat idempotency, retry safety, and explicit fallback states as product requirements, not backend details.
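The core of that requirement fits in a few lines: each gig can trigger at most one payment, and retries return the original result instead of paying again. This is a demo-grade sketch with hypothetical names; a production version would persist the mapping rather than hold it in memory:

```python
# Idempotency sketch: the gig id acts as the idempotency key, so a retry
# after a timeout or refresh can never produce a second payment.
_completed_payments: dict[str, str] = {}

def pay_once(gig_id: str, execute_payment) -> str:
    """Execute the payment for a gig at most once; retries are no-ops."""
    if gig_id in _completed_payments:
        return _completed_payments[gig_id]
    payment_id = execute_payment()  # may raise; nothing is recorded on failure
    _completed_payments[gig_id] = payment_id
    return payment_id
```

Recording the payment id only after the call succeeds also gives us the explicit fallback state: a failed attempt leaves the gig payable, never half-paid.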

We also had to balance real integrations with demo safety. Real bunq sandbox and hosted AI are valuable, but they introduce latency, configuration overhead, and failure modes. Bloom needed to work both as a realistic prototype and as something we could confidently demonstrate under hackathon pressure.

Finally, mobile camera UX turned out to be a core product challenge, not a polish task. Taking before and after photos has to feel natural, fast, and understandable for a kid, or the whole experience becomes friction instead of motivation.

Accomplishments that we're proud of

We're proud that Bloom is not just a concept deck. We built a working vertical slice of the actual learning loop:

  • parent creates a gig
  • kid claims it
  • kid submits before and after photos
  • backend verifies the improvement
  • payment status is returned
  • the kid sees the earned reward and updated balance state

We're also proud of the architecture. The product is simple from the user's perspective, but under the hood it is built with clean boundaries that let us switch between fake and real providers without rewriting the app. That made it possible to move quickly without painting ourselves into a corner.
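Concretely, switching providers stays a one-line configuration concern: the app asks a factory for its clients and never branches on mode anywhere else. The environment variable name and classes below are hypothetical:

```python
# Provider-switching sketch: a single factory reads the mode once, so
# fake and real integrations are interchangeable without app changes.
import os

class FakePayments:
    mode = "fake"

class SandboxPayments:
    mode = "sandbox"

def make_payment_client():
    # Default to the demo-safe fake client unless sandbox is requested.
    if os.environ.get("BLOOM_PAYMENTS", "fake") == "sandbox":
        return SandboxPayments()
    return FakePayments()
```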

Another accomplishment is that we kept the product grounded in the original learning goal. Bloom is not just about moving money. It connects effort, proof, reward, and balance in one flow, which makes earning feel tangible for kids and trustworthy for parents.

What we learned

We learned that the most important part of this product is not the payment itself. It is the moment where the child understands: "I did something, it was recognized, and now I earned money." That means explanation, feedback, and clarity matter as much as integration depth.

We also learned that demo-safe architecture matters early. Adapters, fallback modes, and explicit state transitions were not overengineering. They were what allowed us to keep building while mixing local AI, fake payments, and real sandbox integrations.
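The explicit state transitions mentioned above amount to one small table that every status change must pass through, so illegal jumps fail loudly instead of silently corrupting a gig. The state names here are illustrative:

```python
# State-machine sketch: all gig status changes go through one table,
# so e.g. open -> paid can never happen by accident.
ALLOWED_TRANSITIONS = {
    "open": {"claimed"},
    "claimed": {"submitted"},
    "submitted": {"paid", "rejected"},
    "rejected": {"claimed"},  # kid may retry after a rejection
    "paid": set(),            # terminal state
}

def transition(current: str, target: str) -> str:
    """Return the new state, or raise if the transition is not allowed."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target
```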

A third lesson was that mobile UX has to lead the design. This flow only feels right when the camera experience is first-class. If the app behaves like a desktop form with image uploads, it loses a lot of its magic.

What's next for Bloom

The next step is to harden the real-money learning loop: complete the move from local simulation to real Claude verification and real bunq sandbox payments, while preserving safe fallback behavior.

After that, we want to improve the product experience around the core flow:

  • make camera capture feel even more native on mobile
  • improve progress states during verification and payment
  • strengthen idempotency and retry handling so one gig can never be paid twice
  • make failure states more recoverable and easier to understand during demos

Once the core loop is fully hardened, we want to expand Bloom into a broader learning product with features like richer parent insights, stronger progress feedback for kids, and future extensions such as Bloom reports and Snap & Learn, but only after the earning journey is truly reliable.

Built With

  • ai
  • anthropic-claude-api
  • bunq-api
  • cryptography
  • eslint
  • fastapi
  • lm-studio
  • local-state-persistence
  • lucide-icons
  • mobile-first-web-app
  • monorepo-architecture
  • next.js
  • pydantic
  • python
  • radix-ui
  • react
  • requests
  • sandbox-payments
  • tailwind-css
  • typescript
  • uvicorn