Inspiration

As a Technology Consultant, I have experienced the challenges of proposal writing firsthand: first writing proposals myself, and later leading teams that spent long hours hunting for baseline documents, wrestling with inconsistent formats, and working late into the night. These obstacles led me to envision an AI partner that streamlines the proposal process and turns it from a demanding chore into a strategic asset.

What it does

Proposal Advisory AI Team is a multi-agent system powered by AWS generative AI services, including Amazon Bedrock AgentCore. Each agent has a distinct role: input validation, scope analysis, solution design, timeline planning, quality review, pricing calculation, or proposal assembly. A Streamlit-based frontend provides an intuitive interface for creating, updating, monitoring, and downloading proposals in real time.

How we built it

As detailed in the solution architecture, I built the application backend in Python, integrating the following key AWS services:

  • Strands Agents Framework: Agent orchestration and tool integration
  • Amazon Bedrock: Foundation models (Claude Sonnet, Nova Pro, Nova Premier, Titan Text Embeddings)
  • MCP (Model Context Protocol): AWS Knowledge Server integration
  • Amazon Bedrock AgentCore Runtime: Serverless runtime and deployment platform
  • Amazon Bedrock AgentCore Memory: Session monitoring and event tracking
  • Amazon Bedrock AgentCore Observability: Observability information management
  • Amazon RDS Aurora Serverless: Pricing database with auto-scaling
  • Amazon Bedrock Knowledge Base: Historical proposal data storage
  • Amazon S3: Knowledge base file storage
  • AWS Secrets Manager: Secure credential management

The application frontend, deployed on AWS Elastic Beanstalk, is also written in Python and uses Streamlit as the UI framework.

Throughout design, development, testing, and deployment, I used Amazon Q Developer in agentic mode as my development assistant in the VS Code IDE, which sped up work on both the backend and the frontend. Notably, the frontend application was developed from scratch together with Amazon Q Developer.

Challenges we ran into

Coordinating nine specialized agents presented considerable complexity, requiring meticulous orchestration, robust state management, advanced context sharing among agents, and comprehensive error handling through extensive iterations. The Strands Agents Framework provided the necessary flexibility for implementing a tailored workflow, notably supporting an improvement loop led by the Approach and QA agents to optimize key process outputs. Furthermore, Strands capabilities facilitated enhanced tool access for agents, extending utilization from internally developed resources to MCP (Model Context Protocol) tools available via the AWS Knowledge MCP Server.
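The Approach/QA improvement loop described above can be sketched in plain Python. This is illustrative only, not the Strands Agents API: the two agents are stubbed as functions, and the quality threshold, iteration cap, and scoring logic are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    feedback: list = field(default_factory=list)

def approach_agent(requirements: str, feedback: list) -> Draft:
    # Stub: in the real system, a Strands agent would call a Bedrock model here.
    notes = f" (revised per: {'; '.join(feedback)})" if feedback else ""
    return Draft(text=f"Approach for: {requirements}{notes}", feedback=list(feedback))

def qa_agent(draft: Draft) -> tuple:
    # Stub: score the draft and return improvement advice.
    # Toy scoring: quality rises with each revision round.
    score = 0.5 + 0.2 * len(draft.feedback)
    return score, "tighten the timeline assumptions"

def improvement_loop(requirements: str, threshold: float = 0.8,
                     max_rounds: int = 3) -> Draft:
    """Approach agent drafts, QA agent reviews; repeat until the quality
    gate passes or the iteration cap is reached."""
    feedback: list = []
    draft = approach_agent(requirements, feedback)
    for _ in range(max_rounds):
        score, advice = qa_agent(draft)
        if score >= threshold:
            break
        feedback.append(advice)
        draft = approach_agent(requirements, feedback)
    return draft
```

The key design point is that the loop lives in the orchestrator, not inside either agent: each agent stays stateless and single-purpose, while the workflow decides when quality is good enough to stop.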

Implementing observability proved essential for gaining insight into the reasoning and decision-making processes adopted by the agentic team, which in turn supported prompt and tool refinement. A combination of Langfuse for local testing and AgentCore Observability for testing within the AWS Cloud environment supported this requirement.

An additional challenge involved integrating complementary capabilities for the team, such as leveraging knowledge bases and offering end-to-end status visibility for proposal preparation on the frontend. A strategic decision was made to manage these features as discrete operations overseen by the Proposal Advisory AI Team orchestrator, utilizing AgentCore Runtime for processing.

Delivering status visibility necessitated further innovation, prompting the introduction of asynchronous execution for the proposal advisory team on the frontend alongside the existing synchronous mode. To achieve this, AgentCore Memory was adopted as a shared short-term memory resource, enabling backend storage of proposal generation events and frontend retrieval for user presentation via the proposal status tracker.
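The backend-writes/frontend-polls pattern described above can be sketched with an in-process event store. In the real system this role is played by Amazon Bedrock AgentCore Memory as shared short-term memory; the class, event shape, and stage names below are hypothetical stand-ins.

```python
import time
from collections import defaultdict

class SessionEventStore:
    """In-process stand-in for shared short-term memory of proposal events.
    (AgentCore Memory plays this role in the actual system.)"""

    def __init__(self):
        self._events = defaultdict(list)

    def append(self, session_id: str, stage: str, status: str) -> None:
        # Backend side: each agent records a stage transition as it runs.
        self._events[session_id].append(
            {"ts": time.time(), "stage": stage, "status": status}
        )

    def status(self, session_id: str) -> dict:
        # Frontend side: poll the latest event to render the status tracker.
        events = self._events[session_id]
        return events[-1] if events else {"stage": "pending", "status": "queued"}

store = SessionEventStore()
store.append("prop-001", "validation", "done")
store.append("prop-001", "scope_analysis", "running")
```

Because the store is append-only and keyed by session, the asynchronous generation run and the frontend status tracker never block each other: the frontend simply reads the most recent event whenever the user refreshes.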

Accomplishments that we're proud of

I am proud of building more than just a demo: a scalable, cloud-native, enterprise-grade multi-agent AI system that delivers real business value. Some highlights:

  • End-to-End Proposal Engine: From raw client requirements to a polished, branded proposal, the system automates every step in minutes instead of days.
  • Multi-Agent Orchestration: Successfully coordinated nine specialized AI agents with controlled workflows, validation gates, and iterative improvement loops — a feat that required careful orchestration and technical design.
  • Enterprise-Grade Architecture: Leveraged Amazon Bedrock AgentCore, Strands Agents, Aurora Serverless, and AgentCore Memory to ensure scalability, observability, and secure integration with corporate systems.
  • Observability & Quality Control: Implemented logging, tracing, and metrics with AgentCore Observability and Langfuse, turning what could have been a “black box” into a transparent system.
  • Seamless Frontend Experience: Delivered a modern Streamlit UI with async status tracking, version control, and feedback integration — making advanced AI orchestration accessible to non-technical users.
  • Real-World Impact: Transformed proposal writing from a stressful, inconsistent process into a repeatable, scalable, and client-ready experience — freeing teams to focus on strategy and delivery.

What we learned

This project taught me that building a multi-agent AI system is as much about orchestration and governance as it is about the models themselves. Some key lessons:

  • Workflow Orchestration Matters: Success came from concentrating complexity in the orchestrator, letting agents stay focused on their tasks. This made them reusable and easier to refine.

  • Quality Gates Save Costs: Implementing a Validator Agent upfront proved critical. By filtering incomplete or low-quality inputs, I avoided wasted computing and reduced hallucinations.

  • Iterative Improvement Loops: I learned that generative models alone struggle with self-review cycles. Coding explicit improvement loops (e.g., QA + Approach agents) gave me reliable quality control.

  • Context Sharing is Critical: Narrowing the context each agent received improved both efficiency and accuracy. Too much context led to token waste and distracted outputs.

  • Tools Extend Agent Power: Leveraging Model Context Protocol (MCP) tools, like AWS Knowledge MCP Server, showed me how external knowledge sources can dramatically boost agent performance.

  • Observability is Non-Negotiable: Logging, tracing, and metrics turned a black-box workflow into a transparent system I could test, debug, and continuously improve.

  • Balance AI with Code: Not every task should be handed to LLMs. I learned to critically evaluate when to use AI versus traditional code or tools for efficiency and reliability.
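The "quality gates save costs" lesson above is the kind of check that needs no LLM at all. A minimal sketch of an upfront validation gate follows; the required fields and the minimum-length rule are illustrative assumptions, not the system's actual schema.

```python
def validate_request(request: dict) -> tuple:
    """Upfront quality gate: reject incomplete or low-quality requests
    before any model call is made, saving tokens and reducing
    hallucination risk. Field names here are hypothetical."""
    required = ("client_name", "requirements", "deadline")
    problems = [f"missing field: {f}" for f in required if not request.get(f)]
    # Cheap heuristic: a one-liner cannot be scoped into a proposal.
    if len(request.get("requirements", "")) < 30:
        problems.append("requirements too short to scope a proposal")
    return (not problems, problems)
```

Running this gate in plain code before the Validator Agent (or instead of it for the obvious cases) means the expensive agents only ever see inputs worth processing.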

Ultimately, the biggest takeaway was that multi-agent AI is less about “magic models” and more about disciplined engineering—designing workflows, checkpoints, and observability that make the system trustworthy and production-ready.

What's next for Proposal Advisory AI Team

The journey doesn’t stop here. I see this project as the foundation for a broader transformation of how organizations handle proposals and client engagement. Next steps include:

  • Corporate Knowledge Bases: Build and continuously enrich knowledge bases with historical winning proposals. This will allow the system to learn from proven strategies, reuse high-quality content, and adapt outputs to specific service lines.

  • Smarter Project Planning: Integrate with planning tools and engines to generate company-specific approaches and timelines, ensuring proposals reflect not just generic best practices but the organization’s unique delivery model.

  • Advanced Pricing & CX Integration: Connect with customer experience and sales cycle tools to handle complex pricing scenarios, align proposals with CRM data, and embed proposal management directly into the sales workflow.

  • Multi-Format Output Generation: Extend beyond markdown to generate diverse outputs—Word, PDF, slide decks, or even interactive dashboards—by integrating with specialized AI tools for document and presentation design.

  • Continuous Learning & Optimization: Leverage observability, feedback loops, and user ratings to refine agent performance, reduce costs, and improve accuracy over time.

The vision is clear: a proposal partner that not only drafts documents but becomes a strategic engine for winning business, seamlessly integrated into enterprise workflows.
