Inspiration

We set out to solve financial decision paralysis. Most banking apps show where money went, not where it should go. People see balances and transactions but get little guidance on what to do next—save, invest, or pay down debt—and often freeze. We wanted to empower people to reach their financial goals without needing a finance degree or a paid advisor. The idea was to build an AI that can run “what‑if” scenarios, respect safety rules, and explain tradeoffs in plain language so users can act with confidence instead of second‑guessing.


Figma design

We started with a Figma prototype to shape the initial design of the app. The prototype shows how the AI assistant would sit inside an existing bank interface: a familiar layout (accounts, activity, goals, budget) with a chat-based advisor that guides users and completes actions when needed. The design guided our front-end structure—tabbed views, account cards, and an embedded advisor panel—so the AI feels like part of the bank experience rather than a separate tool.


What it does

  • Counterfactual simulations with before/after scenarios — Users can ask “what if I save $500?” or “what if I invest $300?” and get a clear before/after view: projected account balances, impact on each goal, and effects on budgets and liquidity, so they can compare options without moving money.
  • 4-agent AI system (Budgeting, Investment, Guardrail, Validation) — Each decision is analyzed by specialized agents: Budgeting (spending and surplus), Investment (goals and growth), Guardrail (user rules and min balances), and Validation (synthesis and contradictions). The system combines their outputs into a single recommendation with reasoning.
  • Smart goal prioritization with feasibility scoring — When users say “prioritize my most realistic goal,” we rank goals with a deterministic 7-factor feasibility score (progress, time left, required monthly contribution, spending, liquidity, risk, etc.) and set a priority goal; suggestions and reallocations can then focus on that goal.
  • Portfolio intelligence with asset allocation — We support detailed investment accounts (taxable, Roth IRA, 401k) with allocation (stocks/bonds/cash) and use that for simulations and recommendations so users see how actions affect both balances and risk.
  • Natural language chat interface — Users can type things like “Should I save $500 or invest it?”, “Prioritize my most realistic goal,” “Stabilize my finances for the next month,” or “Increase savings without lowering my lifestyle.” Intent is parsed and routed to simulations, goal logic, stabilization, or lifestyle-aware savings.
  • Intelligent guardrails that protect and explain — User-defined rules (e.g. “never let checking go below $1,000”) are enforced by a dedicated Guardrail agent and by the simulation layer; when an action would break a rule, the system blocks or warns and explains why.
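
The simulation-plus-guardrail behavior described above can be sketched as a pure function. This is a minimal illustration, not the project's actual code: the `Account`, `Guardrail`, and `SimulationResult` shapes and the `simulateSave` function are hypothetical stand-ins for the real type system and engine.

```typescript
// Hypothetical sketch of a deterministic "save" simulation with a guardrail check.
interface Account { id: string; name: string; balance: number }
interface Guardrail { accountId: string; minBalance: number }

interface SimulationResult {
  before: Account[];
  after: Account[];
  violations: string[];
}

// Pure function: the same inputs always produce the same before/after view,
// and no money actually moves.
function simulateSave(
  accounts: Account[],
  guardrails: Guardrail[],
  fromId: string,
  toId: string,
  amount: number,
): SimulationResult {
  const after = accounts.map((a) => {
    if (a.id === fromId) return { ...a, balance: a.balance - amount };
    if (a.id === toId) return { ...a, balance: a.balance + amount };
    return a;
  });
  // Collect every user rule the hypothetical transfer would break.
  const violations = guardrails
    .filter((g) => {
      const acct = after.find((a) => a.id === g.accountId);
      return acct !== undefined && acct.balance < g.minBalance;
    })
    .map((g) => `Account ${g.accountId} would fall below its $${g.minBalance} minimum`);
  return { before: accounts, after, violations };
}
```

Because the function is side-effect free, the same call can power both the before/after comparison view and the "block or warn" guardrail behavior.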

How we built it

We built it as a layered pipeline: type system → simulation engine → multi-agent AI → API and chat. The foundation is a production-grade TypeScript type system (user profile, accounts, goals, guardrails, actions, simulation results) so every step is type-safe. The simulation engine is pure, deterministic functions: save, invest, spend, and compare_options, with goal and budget impact and guardrail checks. On top of that we added a multi-agent AI layer with LangChain and OpenAI (or mocks for demos): Budgeting, Investment, Guardrail, and Validation agents, with Zod schemas for structured outputs. The stack is TypeScript end-to-end, with LangChain for agents and Zod for validation; the architecture is deliberately hybrid—deterministic math for simulations and feasibility, and LLMs for natural language and nuanced reasoning.
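
The structured-output step in this pipeline can be illustrated with a small sketch. The project uses Zod schemas for this; here a hand-rolled runtime check stands in so the example has no external dependencies, and the `AgentOutput` fields, `parseAgentOutput`, and `findContradictions` are invented for illustration.

```typescript
// Illustrative agent output shape; field names are assumptions, not the real schema.
interface AgentOutput {
  agent: "budgeting" | "investment" | "guardrail" | "validation";
  recommendation: string;
  approve: boolean;
  reasoning: string;
}

// Stand-in for a Zod parse: reject malformed LLM output before it reaches the app.
function parseAgentOutput(raw: unknown): AgentOutput {
  const o = raw as Record<string, unknown>;
  const agents = ["budgeting", "investment", "guardrail", "validation"];
  if (
    typeof o !== "object" || o === null ||
    !agents.includes(o.agent as string) ||
    typeof o.recommendation !== "string" ||
    typeof o.approve !== "boolean" ||
    typeof o.reasoning !== "string"
  ) {
    throw new Error("Malformed agent output");
  }
  return o as unknown as AgentOutput;
}

// Validation-agent-style synthesis: flag contradictions between specialists.
function findContradictions(outputs: AgentOutput[]): string[] {
  const approvers = outputs.filter((o) => o.approve).map((o) => o.agent);
  const blockers = outputs.filter((o) => !o.approve).map((o) => o.agent);
  return approvers.length > 0 && blockers.length > 0
    ? [`Conflicting advice: ${approvers.join(", ")} approve but ${blockers.join(", ")} object`]
    : [];
}
```

Enforcing one shared shape is what lets the Validation agent compare and merge outputs from the other three agents mechanically.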


Challenges we ran into

  • Multi-agent coordination — Keeping four agents aligned and avoiding conflicting advice required a clear pipeline (specialized agents first, then a Validation agent that synthesizes and checks for contradictions) and consistent schemas so outputs could be compared and merged.
  • Unpredictable LLM output — LLMs sometimes misclassified intents (e.g. “prioritize my most realistic goal” as generic advice). We added keyword overrides for high-stakes intents and used Zod to enforce structure so malformed responses don’t break the app.
  • Determinism vs. intelligence — We wanted reproducible simulations and feasibility scores (deterministic) but also natural language and flexible advice (LLM). We kept simulations and scoring fully deterministic and used the LLM for intent, explanation, and recommendation text only.
  • Feasibility ranking — Turning “most realistic goal” into a single rank required a transparent, multi-factor score (progress, time, required contribution, spending, liquidity, risk, etc.) and rules for tie-breaking so the chosen goal is defensible and explainable.
  • Working guardrails — Making guardrails both enforced and explainable meant a dedicated Guardrail agent plus validation in the simulation layer, so violations are caught in code and also described in natural language.
  • Demo data — We needed realistic but understandable data: multiple personas (e.g. Sarah, Marcus, Elena), varied goals, and edge cases so we could demo simulations, guardrails, and feasibility without exposing real user data.
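
The keyword-override fix for misclassified intents can be sketched as a thin layer in front of the LLM. The intent names and patterns below are illustrative, not the project's actual routing table.

```typescript
// Hypothetical sketch: high-stakes phrases are matched deterministically
// before any LLM call, so they can never be misrouted.
type Intent = "prioritize_goal" | "simulate" | "stabilize" | "general_advice";

const OVERRIDES: Array<[RegExp, Intent]> = [
  [/prioriti[sz]e .*goal/i, "prioritize_goal"],
  [/what if i (save|invest|spend)/i, "simulate"],
  [/stabili[sz]e/i, "stabilize"],
];

function routeIntent(message: string, llmGuess: Intent): Intent {
  for (const [pattern, intent] of OVERRIDES) {
    if (pattern.test(message)) return intent; // deterministic override wins
  }
  return llmGuess; // otherwise trust the LLM's classification
}
```

The override list stays small on purpose: only intents where a misroute is costly bypass the model.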
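
The feasibility-ranking challenge can likewise be illustrated with a deterministic score. The real system uses seven factors; this sketch models only three (progress, affordability, time pressure) with invented weights, and the `Goal` shape is an assumption.

```typescript
// Hypothetical feasibility score: pure arithmetic, so rankings are
// reproducible and each factor can be explained to the user.
interface Goal {
  name: string;
  target: number;     // total amount needed
  saved: number;      // amount saved so far
  monthsLeft: number; // time remaining
}

function feasibilityScore(goal: Goal, monthlySurplus: number): number {
  const progress = goal.saved / goal.target; // 0..1: how far along the goal is
  const required = (goal.target - goal.saved) / Math.max(goal.monthsLeft, 1);
  const affordability = Math.min(monthlySurplus / Math.max(required, 1), 1); // 0..1
  const timePressure = Math.min(goal.monthsLeft / 12, 1); // more runway scores higher
  // Invented weights; the point is a transparent, deterministic combination.
  return 0.4 * progress + 0.4 * affordability + 0.2 * timePressure;
}

function mostRealisticGoal(goals: Goal[], monthlySurplus: number): Goal {
  return [...goals].sort(
    (a, b) => feasibilityScore(b, monthlySurplus) - feasibilityScore(a, monthlySurplus),
  )[0];
}
```

Keeping this in plain code rather than an LLM call is what makes "prioritize my most realistic goal" defensible: the same inputs always pick the same goal, and each factor can be shown to the user.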

Accomplishments that we're proud of

We’re proud of:

  • A production-grade type system: dozens of interfaces covering profile, accounts, goals, actions, simulations, and agent outputs.
  • A deterministic simulation engine with tests and clear before/after semantics.
  • A multi-agent AI system: four specialized agents plus an orchestrator and validation.
  • A feasibility algorithm that ranks goals and drives prioritization.
  • Asset allocation support across account types.
  • Intent parsing and overrides, so chat reliably triggers simulations, prioritization, stabilization, and lifestyle-aware savings.
  • Comprehensive testing (simulation tests, agent tests, and chat flows), so we can iterate without regressions.


What we learned

We took away 10 technical, product, and process lessons—from “type safety is a superpower” (it caught bugs early and made refactors safe) and “Zod + LLMs = reliable structure” to “guardrails need both code and narrative” and “hackathons reward focus.” We also learned that users need both the numbers (simulations, feasibility) and the story (why this goal, why this action), and that embedding the AI inside a familiar bank-style UI (Figma → implementation) made the product feel concrete and usable.


What's next for Clarity Finance

Next steps we’d pursue: bank integration (read-only connections to real accounts so simulations use live data), improved UI (clearer before/after visuals, goal timelines, and one-tap “apply” for safe actions), advanced analytics (trends, projections, and scenario comparison over time), financial education (short explainers tied to recommendations and guardrails), and smart notifications (e.g. “You’re on track for your priority goal” or “This transfer would break your min balance rule”). We’d also explore deeper portfolio analytics and optional human-in-the-loop for large or irreversible actions.

Built With

typescript, langchain, openai, zod, figma
