SmartSettle: A fiduciary multi-agent system for bill negotiation. Using local SLMs and a "Manager-Worker" architecture, it executes settlements under a secure, user-defined authorization manifest.
## The Project Story

## 💡 Inspiration

In the "Agentic Era," AI should do more than just talk; it should act. However, a massive "Trust Gap" exists: users are rightfully hesitant to give AI access to their financial lives. We built SmartSettle Auth to close that gap. We wanted to create a fiduciary agent that uses Manager-Worker orchestration to handle high-stakes negotiations (like telecom or utility bills) while keeping the user in total control via a verifiable "Authorization Manifest."

## ⚙️ How we built it

The core of SmartSettle Auth is a decentralized orchestration layer.
- Orchestration: We used a Manager-Worker pattern in which a "Manager" agent interprets a user's financial mandate and delegates tasks to specialized sub-agents.
- Local Intelligence: To keep user data private and avoid per-token API costs, we deployed Llama 3.2 and Phi-3.5 locally via Ollama. The "Strategy" and "Negotiation" logic never leaves the user's hardware.
- Authorization Layer: We implemented a Scoped Authorization Manifest (JSON) that acts as a digital contract, defining the "Floor" and "Ceiling" for any negotiation.
- Safety Logic: We applied a simple but effective verification formula to ensure the agent never exceeds its mandate. If B is the current bill and r is the authorized discount rate, the manifest's "Floor" is B·(1 − r) and its "Ceiling" is B, and the agent is hard-coded to reject any settlement S outside that range, i.e., any S where S < B·(1 − r) or S > B.
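The manifest and the Floor/Ceiling check above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the field names (`current_bill`, `max_discount_rate`) and the `within_mandate` helper are assumed for the example, though Pydantic itself is part of the project's stack.

```python
from pydantic import BaseModel


class AuthorizationManifest(BaseModel):
    """Illustrative sketch of a Scoped Authorization Manifest (field names assumed)."""

    current_bill: float       # B: the bill amount under negotiation
    max_discount_rate: float  # r: the user-authorized discount rate, in [0, 1]

    @property
    def floor(self) -> float:
        # Lowest settlement the agent may accept: B * (1 - r)
        return self.current_bill * (1 - self.max_discount_rate)

    @property
    def ceiling(self) -> float:
        # Highest settlement the agent may accept: the bill itself
        return self.current_bill


def within_mandate(manifest: AuthorizationManifest, settlement: float) -> bool:
    """Hard-coded safety check: reject any S outside [B*(1-r), B]."""
    return manifest.floor <= settlement <= manifest.ceiling


manifest = AuthorizationManifest(current_bill=100.0, max_discount_rate=0.2)
print(within_mandate(manifest, 85.0))  # inside the authorized range
print(within_mandate(manifest, 70.0))  # deeper than the authorized discount: rejected
```

Because the check is plain arithmetic rather than model output, it cannot be talked out of its bounds by a persuasive vendor bot.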
## 🚧 Challenges we faced

The primary challenge was "Authorization Drift." In a multi-turn conversation with a vendor's bot, a worker agent might accidentally agree to a "bundle" that saves money but adds a long-term contract, violating the user's intent. We solved this by implementing a Logic Sentinel worker that audits every proposed deal against the original Authorization Manifest before it ever reaches the user for final approval.

## 📖 What we learned

We learned that "Small is Powerful." By using specialized Small Language Models (SLMs) for specific tasks (one for Tamil cultural nuances, one for logical auditing), we achieved higher reliability and lower latency than a single, massive "General" model. We also realized that Human-in-the-Loop (HITL) isn't a limitation; it's a feature that builds the trust AI needs to finally be "Authorized to Act."
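The Logic Sentinel's audit can be illustrated as a rules check that catches exactly the drift described above: a deal that improves the price but smuggles in contract terms the user never authorized. The data shapes and field names here (`ProposedDeal`, `contract_months_added`, `max_contract_months`) are assumptions for the sketch, not the project's real schema.

```python
from dataclasses import dataclass


@dataclass
class Manifest:
    """Simplified stand-in for the Authorization Manifest (fields assumed)."""
    current_bill: float
    max_discount_rate: float
    max_contract_months: int = 0  # user authorized no new lock-in


@dataclass
class ProposedDeal:
    """Hypothetical shape of a vendor counter-offer."""
    settlement: float
    contract_months_added: int  # extra lock-in a "bundle" would introduce


def sentinel_audit(deal: ProposedDeal, manifest: Manifest) -> tuple[bool, str]:
    """Audit a proposed deal against the original manifest before it
    reaches the user for final approval."""
    floor = manifest.current_bill * (1 - manifest.max_discount_rate)
    if not (floor <= deal.settlement <= manifest.current_bill):
        return False, "settlement outside authorized floor/ceiling"
    if deal.contract_months_added > manifest.max_contract_months:
        return False, "authorization drift: adds contract terms the user never approved"
    return True, "within mandate"


manifest = Manifest(current_bill=100.0, max_discount_rate=0.2)
# A cheaper bundle with a 24-month contract is still blocked:
print(sentinel_audit(ProposedDeal(settlement=85.0, contract_months_added=24), manifest))
print(sentinel_audit(ProposedDeal(settlement=85.0, contract_months_added=0), manifest))
```

The key design point is that the Sentinel compares every deal to the *original* manifest, not to whatever the negotiation has drifted toward mid-conversation.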
## Bonus Blog Post: The "Trust Handshake" – Building SmartSettle Auth
When we first sat down to conceptualize SmartSettle Auth, we didn't start with code; we started with a question: “Would you let a robot negotiate your bank account?” For most people, the answer is a hard "No." The "Trust Gap" in Agentic AI isn't about how smart the model is—it’s about how the model handles sensitive credentials.
Our journey led us to the Token Vault architecture. Early in development, we hit a massive technical hurdle: how do we give an agent enough "Authorization" to negotiate with a telecom provider without handing over the "keys to the kingdom"? Traditional API keys were too broad, and hard-coding credentials was a security nightmare.
The breakthrough came when we integrated the Token Vault with our Manager-Worker orchestration. By utilizing the Vault, we were able to implement a "Least Privilege" execution model. Instead of the agent having a persistent login, it requests a scoped, time-bound token only when a negotiation is active. This created what we call the "Trust Handshake"—a verifiable moment where the user sees exactly what the agent is authorized to do before the Vault releases the necessary handshake to the vendor.
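A scoped, time-bound token of the kind described above can be sketched with a signed claims payload: the token names one vendor, one manifest, and an expiry, so it is useless outside that single negotiation. This is a hedged illustration of the "Least Privilege" idea using a plain HMAC, not the Token Vault's actual implementation; the secret, claim names, and helper functions are all assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"vault-demo-secret"  # stands in for the Vault's signing key (illustrative)


def issue_scoped_token(vendor: str, manifest_id: str, ttl_seconds: int = 300) -> str:
    """Mint a token scoped to one vendor and one manifest, expiring
    shortly after the negotiation window closes."""
    claims = {
        "vendor": vendor,
        "manifest": manifest_id,
        "exp": int(time.time()) + ttl_seconds,
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig


def verify_scoped_token(token: str, vendor: str) -> bool:
    """Accept the token only if it is untampered, unexpired, and scoped
    to the vendor presenting it."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["vendor"] == vendor and claims["exp"] > time.time()


token = issue_scoped_token("telecom-co", "manifest-001")
print(verify_scoped_token(token, "telecom-co"))  # right vendor, still valid
print(verify_scoped_token(token, "other-co"))    # scope mismatch: rejected
```

Because nothing persistent is ever handed to the agent, revocation is trivial: once the token expires, the agent holds nothing of value.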
Integrating this was no small feat. We spent nights debugging the asynchronous handoffs between the Logic Sentinel (which verifies the budget) and the Token Vault (which authorizes the action). But the result was worth it: a system where a user in a local village can authorize an AI to settle a bill in their native Tamil, knowing their underlying financial identity is locked behind a cryptographic vault. SmartSettle Auth isn't just about saving money; it’s about proving that with the right vault architecture, AI can finally be a safe, authorized extension of ourselves.
## Built With
- langgraph
- llama
- ollama
- pydantic
- vercel