Inspiration

We were inspired by the idea of creating a financial digital twin, a system that doesn’t just show a user’s past transactions but helps them simulate the impact of future decisions. Current dashboards are static and reactive, leaving users unsure about affordability and risk. Our goal was to build something proactive, explainable, and scalable by combining microservices with AI-driven agents.

What it does

The project, Affordability Twin, lets users ask a Gemini-powered chatbot affordability questions such as “If I buy a $500 phone, what’s my risk?” Behind the scenes, a Risk Agent queries real-time balances and transactions through the MCP layer, computes affordability scores with Gemini AI, and collaborates with the chatbot over the A2A protocol. The chatbot then explains the result in natural language, giving users clear insight before they make financial decisions.
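The flow above can be sketched in a few lines. This is a minimal illustration with hypothetical names, a mocked balance, and a placeholder scoring rule; in the real system the balance comes from the Bank of Anthos APIs via MCP and the scoring is delegated to Gemini through the Risk Agent:

```python
# Hypothetical sketch of the Affordability Twin flow; names, thresholds,
# and the mocked balance are illustrative only.

def fetch_balance(account_id: str) -> float:
    """Stand-in for the MCP call that reads the user's real-time balance."""
    return 1200.0  # mocked balance for illustration


def affordability_risk(account_id: str, purchase: float) -> str:
    """Score a purchase as low/medium/high risk from the remaining balance."""
    balance = fetch_balance(account_id)
    remaining = balance - purchase
    if remaining < 0:
        return "high"    # purchase exceeds available funds
    if remaining < 0.2 * balance:
        return "medium"  # leaves less than 20% of the balance as a buffer
    return "low"


print(affordability_risk("user-123", 500.0))  # → low (700 left of 1200)
```

The chatbot would then wrap a score like `"low"` in a natural-language explanation rather than showing the raw label.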

How we built it

We deployed the Bank of Anthos microservices on Google Kubernetes Engine (GKE) as the foundation. On top of this, we added:

  • A Gemini-powered Risk Agent to compute affordability risk.
  • A Gemini-powered Chatbot Agent for user interaction.
  • MCP (Model Context Protocol) to connect agents with balance and transaction APIs.
  • A2A protocol to coordinate communication between agents.
  • ADK (Agent Development Kit) to simplify building the agents.
  • kubectl-ai to manage scaling and deployments using natural language.
  • Gemini CLI for debugging, log tracing, and workflow analysis.

This setup allowed us to demonstrate a fully functional agentic system, integrated with microservices, all deployed on GKE.
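To make the agent-to-agent coordination concrete, here is a toy sketch of the kind of message exchange the Chatbot Agent and Risk Agent perform. The field names and scoring logic are illustrative assumptions, not the actual A2A wire format:

```python
# Toy sketch of a chatbot <-> risk-agent exchange. Message fields and the
# scoring rule are hypothetical, not the real A2A protocol schema.
import json


def chatbot_request(question: str, amount: float) -> dict:
    """Chatbot Agent packages the user's question as a task for the Risk Agent."""
    return {"role": "chatbot", "task": "affordability_check",
            "params": {"amount": amount, "question": question}}


def risk_agent_handle(msg: dict) -> dict:
    """Risk Agent replies with a score the chatbot can explain in prose."""
    amount = msg["params"]["amount"]
    score = "low" if amount < 1000 else "high"  # placeholder scoring rule
    return {"role": "risk_agent", "result": {"risk": score, "amount": amount}}


reply = risk_agent_handle(chatbot_request("Can I afford a $500 phone?", 500.0))
print(json.dumps(reply))
```

In the deployed system, ADK handles much of this plumbing and the Risk Agent's scoring step calls Gemini with context fetched over MCP instead of a hard-coded rule.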

Challenges we ran into

We faced delays during container creation on GKE, which we solved by tuning resource requests and limits in the deployment configurations. Debugging multi-agent workflows was another challenge, since interactions between the Chatbot Agent and the Risk Agent could get complex; Gemini CLI's workflow tracing was critical here. We also had to design the MCP schema carefully so the agents had just enough access to the microservices without unnecessary overhead. Finally, we had to balance real-time performance with clear, interpretable responses: not just making predictions, but making them understandable.
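The resource tuning looked roughly like the following deployment fragment. The values and the `risk-agent` name are illustrative, not our exact configuration; appropriate numbers depend on cluster capacity and workload:

```yaml
# Illustrative requests/limits for an agent deployment on GKE.
# Setting explicit requests helps the scheduler place pods promptly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: risk-agent
spec:
  replicas: 2
  selector:
    matchLabels:
      app: risk-agent
  template:
    metadata:
      labels:
        app: risk-agent
    spec:
      containers:
        - name: risk-agent
          image: risk-agent:latest
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```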

Accomplishments that we're proud of

We are proud that we successfully combined cloud-native microservices and modern agent frameworks into a cohesive, working solution. Deploying everything on GKE and ensuring agents could communicate seamlessly with microservices through MCP was a major technical milestone. Another accomplishment was demonstrating the Affordability Twin workflow live, where a chatbot powered by Gemini could compute and explain affordability risks in real time. Leveraging advanced tools like kubectl-ai and Gemini CLI to make the system easier to manage and debug was a further highlight.

What we learned

We learned the importance of designing agents and microservices to complement each other rather than work in isolation. MCP proved to be a powerful tool for bridging legacy APIs with modern AI agents, and the A2A protocol taught us how to orchestrate multi-agent workflows effectively. We also discovered how kubectl-ai can reduce operational overhead in Kubernetes, making it accessible even for complex deployments. Most importantly, we learned that for real-world adoption, AI outputs must not only be accurate but also interpretable and actionable for users making financial decisions.
