Inspiration

Financial offer messages are a hidden compliance risk. One vague disclosure - an unspecified APR, a missing fee, or a fuzzy eligibility clause - can expose fintech and commerce teams to serious legal and reputational damage. I was inspired by the real-world challenge of making AI-generated content trustworthy in regulated industries, not just fluent.

What it does

The Compliance-Friendly Offer Helper takes a merchant's offer description and runs it through a two-agent AI pipeline. The Drafter agent generates a structured, customer-facing message. The Reviewer agent then checks both the original input and the draft - flagging missing rates, invented terms, vague eligibility, and unspecified fees. If anything is unclear or hallucinated, the verdict is ⚠️ Needs Review. Only fully compliant, merchant-verified offers pass as ✅ OK. It also blocks prompt injection attacks and nonsensical inputs before they ever reach the model.

How I built it

  • Amazon Nova Lite via the AWS Bedrock Converse API: two sequential agent calls per request
  • Streamlit for the front-end UI, deployed on Streamlit Community Cloud
  • Python for the backend pipeline and security-hardening layer
  • Two-agent architecture (Drafter → Reviewer), with the original offer text passed to both agents so the Reviewer cannot approve hallucinated terms
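The pipeline above can be sketched roughly as two sequential Converse calls. The model id, prompt wording, and helper names here are illustrative assumptions, not the project's actual code:

```python
# Sketch of the Drafter → Reviewer pipeline over the Bedrock Converse API.
# MODEL_ID, both system prompts, and all function names are assumptions.
MODEL_ID = "amazon.nova-lite-v1:0"  # assumed Nova Lite model id

DRAFTER_SYSTEM = "Rewrite the merchant's offer as a clear, customer-facing message."
REVIEWER_SYSTEM = ("You are a strict compliance reviewer. Flag missing rates, "
                   "invented terms, vague eligibility, and unspecified fees.")

def converse(client, system_text, user_text):
    """One Converse call; returns the model's text reply."""
    resp = client.converse(
        modelId=MODEL_ID,
        system=[{"text": system_text}],
        messages=[{"role": "user", "content": [{"text": user_text}]}],
        inferenceConfig={"temperature": 0.2, "maxTokens": 512},
    )
    return resp["output"]["message"]["content"][0]["text"]

def build_review_prompt(original_offer, draft):
    """The Reviewer sees BOTH the merchant's original text and the draft."""
    return (
        f"ORIGINAL OFFER:\n{original_offer}\n\n"
        f"DRAFT MESSAGE:\n{draft}\n\n"
        "Verdict: OK or NEEDS_REVIEW, with reasons."
    )

def run_pipeline(offer_text):
    import boto3  # imported here so the sketch loads without AWS installed
    client = boto3.client("bedrock-runtime")
    draft = converse(client, DRAFTER_SYSTEM, offer_text)
    verdict = converse(client, REVIEWER_SYSTEM,
                       build_review_prompt(offer_text, draft))
    return draft, verdict
```

Feeding the original offer into `build_review_prompt` alongside the draft is the key design choice: the Reviewer judges the merchant's input, not just the polished output.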

Challenges I ran into

The biggest challenge was LLM non-determinism. The Reviewer would sometimes approve vague offers because it was evaluating the drafted message (which looked polished) rather than the original input (which was vague). I solved this by passing the original offer text explicitly to the Reviewer and rewriting the system prompt as a strict, non-negotiable checklist rather than a conversational instruction. Getting consistent NEEDS_REVIEW verdicts on genuinely vague inputs required several iterations.
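The checklist approach can be made concrete with an explicit auto-fail prompt and a fail-closed verdict parser. The prompt wording below is a hypothetical reconstruction, not the project's real system prompt:

```python
# Hypothetical checklist-style Reviewer prompt with explicit auto-fail rules.
REVIEWER_SYSTEM = """You are a compliance reviewer for financial offers.
You receive the merchant's ORIGINAL OFFER and a DRAFT MESSAGE.
Return VERDICT: NEEDS_REVIEW if ANY condition holds:
1. The ORIGINAL OFFER does not state a specific APR or interest rate.
2. The DRAFT MESSAGE contains any rate, fee, or term absent from the ORIGINAL OFFER.
3. Eligibility criteria are vague ("some customers", "may qualify").
4. Any fee is mentioned without an amount.
Otherwise return VERDICT: OK.
Output exactly one line starting with "VERDICT:"."""

def parse_verdict(reply: str) -> str:
    """Fail closed: anything that is not an explicit OK becomes NEEDS_REVIEW."""
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith("VERDICT:"):
            verdict = line.split(":", 1)[1].strip()
            if verdict in ("OK", "NEEDS_REVIEW"):
                return verdict
    return "NEEDS_REVIEW"
```

Parsing defensively matters as much as the prompt: if the model rambles or omits the verdict line, the app defaults to NEEDS_REVIEW rather than silently approving.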

Accomplishments that I'm proud of

  • A working two-agent compliance pipeline built entirely on Amazon Nova and AWS Bedrock
  • Consistent, reliable verdicts: the Reviewer correctly flags vague inputs every time
  • Security hardening that blocks prompt injection attacks with zero Bedrock API calls
  • A clean, intuitive UI that makes compliance checking accessible to non-technical users
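The zero-API-call injection blocking can be sketched as a pre-filter that runs before any Bedrock request. The patterns and the length threshold are illustrative guesses, not the project's actual rules:

```python
import re

# Hypothetical pre-filter: reject obvious prompt-injection phrases and
# nonsense input before spending any Bedrock API calls.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|above) (instructions|prompts)",
    r"you are now",
    r"system prompt",
    r"disregard",
]

def is_safe_offer(text: str) -> bool:
    """Return True only for input that looks like a genuine offer description."""
    stripped = text.strip()
    if len(stripped) < 15:  # too short to describe a real offer
        return False
    lowered = stripped.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

Because this check is plain string matching, hostile or junk input is rejected instantly and for free, and only plausible offers ever reach the Drafter.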

What I learned

Prompt engineering for compliance is fundamentally different from prompt engineering for creativity. Vague, conversational instructions give LLMs room to rationalize incorrect decisions. Strict, checklist-style system prompts with explicit auto-fail conditions produce far more reliable results. I also learned that passing the original user input to the Reviewer - not just the draft - is critical for catching AI hallucination of financial terms.

What's next for Compliance-Friendly Offer Helper

Planned enhancements include multi-jurisdiction rule sets (EU vs. US compliance profiles), audit logging to DynamoDB for regulatory record-keeping, integration with existing offer management systems, and support for additional offer types beyond BNPL and credit cards.

Built With

  • amazon-nova
  • aws-bedrock
  • converse
  • python
  • streamlit