Inspiration
The idea for HARMONY came from a simple observation: most financial mistakes don’t happen because people lack tools, but because decisions are made under impulse, stress, or incomplete context. Payments are fast, but judgment is fragile. We wanted to explore whether an AI agent could protect a user’s future intent before money moves, rather than reacting after the damage is done.
What it does
HARMONY is a user-owned AI financial enforcement agent that sits between intent and execution. When a user speaks or types an action—like paying a credit card bill—HARMONY simulates all future commitments, checks user-defined non-negotiables, and decides whether to allow, suggest alternatives, or stage the action for confirmation. Nothing executes blindly.
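The allow / suggest-alternative / stage-for-confirmation flow described above could be sketched roughly as follows. This is a minimal illustration under assumed names and thresholds (`Intent`, `non_negotiable_floor`, the dollar amounts), not HARMONY's actual code:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A user-stated action, e.g. paying a credit card bill."""
    action: str
    amount: float

def simulate_commitments(balance: float, intent: Intent,
                         upcoming: list[float]) -> float:
    """Project the balance after this intent and all known future commitments."""
    return balance - intent.amount - sum(upcoming)

def decide(balance: float, intent: Intent, upcoming: list[float],
           non_negotiable_floor: float) -> str:
    """Return 'allow', 'stage', or 'suggest' -- nothing executes blindly."""
    projected = simulate_commitments(balance, intent, upcoming)
    if projected >= non_negotiable_floor:
        return "allow"    # future commitments and non-negotiables stay safe
    if projected >= 0:
        return "stage"    # feasible, but held for explicit user confirmation
    return "suggest"      # would break commitments; propose an alternative
```

For example, with a $1,000 balance, $300 of upcoming commitments, and a $400 user-defined floor, a $200 payment is allowed, a $500 payment is staged for confirmation, and a $900 payment triggers an alternative suggestion.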
How we built it
We built a deterministic simulation engine for future commitments, a veto/decision layer to enforce rules, and a voice + text interface for interaction. The system logs every decision, keeps actions pending until confirmation, and adapts confidence based on user behavior.
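The logging, pending-until-confirmation, and confidence-adaptation behavior could look something like the sketch below. All names and the confidence update rule are illustrative assumptions:

```python
import time

class DecisionLog:
    """Append-only decision log; staged actions stay pending until confirmed."""

    def __init__(self) -> None:
        self.entries: list[dict] = []   # every decision, for auditability
        self.pending: dict[str, str] = {}  # action_id -> action awaiting consent
        self.confidence = 0.5           # assumed starting point; adapts over time

    def record(self, action_id: str, action: str, verdict: str) -> None:
        """Log a decision; 'stage' verdicts are held for explicit confirmation."""
        self.entries.append({"id": action_id, "action": action,
                             "verdict": verdict, "ts": time.time()})
        if verdict == "stage":
            self.pending[action_id] = action

    def confirm(self, action_id: str):
        """User confirms a staged action; confidence nudges up slightly."""
        action = self.pending.pop(action_id, None)
        if action is not None:
            self.confidence = min(1.0, self.confidence + 0.05)
        return action
```

The design choice here mirrors the description above: execution is never implicit, since a staged action only leaves `pending` through an explicit `confirm` call, and every verdict lands in the audit trail regardless of outcome.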
Challenges we ran into
Balancing authority and user control was the hardest challenge. We had to design refusal logic that felt protective, not restrictive, while keeping everything explainable and auditable.
Accomplishments that we’re proud of
- A working intent-to-decision pipeline
- Real-time future simulation before execution
- Explicit consent-based enforcement, not automation
What we learned
Financial AI needs boundaries. Trust comes from transparency, reversibility, and respecting user intent—not from intelligence alone.
What’s next for HARMONY
Next, we plan to deepen behavioral learning, improve simulation accuracy, and explore agent-to-agent financial coordination between trusted users under enforced constraints.