Inspiration
Board governance failures cost companies billions — from Enron to WeWork, the pattern is the same: critical risks buried in meeting minutes, conflicts of interest undocumented, action items that quietly disappear between meetings. We asked: what if AI could continuously monitor governance health across every board document, not as a one-time audit, but as an always-on governance analyst that tracks patterns over time?
What it does
Bounded Governance AI is a multi-agent platform that autonomously analyzes board minutes, governance frameworks, and compliance documents. Five specialized AI agents work together:
• Minutes Analyzer extracts decisions, votes, action items, and risks
• Framework Checker compares actual practices against governance policies
• COI Detector identifies conflict-of-interest gaps and undocumented disclosures
• Cross-Document Analyzer finds patterns across multiple meetings — escalating risks, unresolved action items, persistent governance gaps
• Reviewer self-corrects findings, adjusting confidence scores and re-triggering agents on low-confidence items
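The agent flow above can be sketched as a small control loop. This is an illustrative reconstruction, not the team's actual code: each agent here is a plain function standing in for a Gemini call, and the `Finding` fields, `CONFIDENCE_FLOOR`, and retry count are assumed names and values.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    summary: str
    evidence: str          # verbatim quote from a source document
    confidence: float      # 0.0-1.0, assigned by the Reviewer

CONFIDENCE_FLOOR = 0.6     # assumed threshold for re-triggering an agent

def run_pipeline(documents, agents, reviewer, max_retries=1):
    """Sequential agent execution followed by a self-correction reviewer loop."""
    findings = []
    for name, agent in agents.items():
        findings.extend(agent(documents))      # one batched call per agent
    for attempt in range(max_retries + 1):
        findings = reviewer(findings)          # reviewer scores every finding
        weak = [f for f in findings if f.confidence < CONFIDENCE_FLOOR]
        if not weak or attempt == max_retries:
            break
        for f in weak:                         # re-trigger the originating agent
            findings.remove(f)
            findings.extend(agents[f.agent](documents))
    return findings
```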
A Governed Chat lets executives ask questions grounded exclusively in their documents with full evidence citations. Role-based access control ensures each persona — Board Chair, Governance Analyst, Compliance Officer, Ops Manager, Intern — only sees documents and features they're authorized to access. Every action is logged in a full audit trail.
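The persona-based access control can be pictured as a static role-to-document-tag matrix checked on every request. A minimal sketch, assuming made-up role names and document tags; the real matrix lives in the backend and is mirrored by the frontend to hide locked features:

```python
# Hypothetical ACL matrix: role -> set of document tags the role may read.
ACL = {
    "board_chair":        {"minutes", "framework", "compliance", "coi"},
    "governance_analyst": {"minutes", "framework", "compliance"},
    "compliance_officer": {"compliance", "coi"},
    "ops_manager":        {"minutes"},
    "intern":             set(),
}

def can_access(role: str, doc_tag: str) -> bool:
    """Backend-side check; denials would also be written to the audit trail."""
    return doc_tag in ACL.get(role, set())
```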
How we built it
• Frontend: Next.js 14 with Tailwind CSS, shadcn/ui, Recharts for real-time dashboards
• Backend: FastAPI with SQLite for document storage and findings
• AI Engine: Gemini 3 Flash via Google AI Studio API, leveraging the 1M token context window to batch-analyze entire document sets in single API calls — no RAG chunking needed
• Agent Orchestration: custom pipeline with sequential agent execution, a cross-document analysis phase, and a self-correction reviewer loop
• Access Control: document-level ACL matrix enforced on both frontend and backend with 5 governance personas
• Development: Antigravity for rapid skeleton generation, Google AI Studio for prompt tuning, iterative feature branching with 19 commits
Challenges we ran into
Rate limits were brutal. Our initial architecture made 15+ API calls per analysis run (one per document per agent), instantly hitting the free tier limit. We had to fundamentally rethink our approach — batching all documents into single API calls per agent, reducing 15+ calls down to 5-6. This actually became a strength: it forced us to leverage Gemini's massive context window the way it was designed to be used.
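The batching fix boils down to building one prompt per agent that carries every document at once. A hedged sketch with illustrative function and delimiter names (the actual prompt format is not shown in this writeup):

```python
def build_batched_prompt(agent_instructions: str, documents: dict) -> str:
    """Concatenate all documents into one prompt so each agent needs a
    single API call instead of one call per document."""
    parts = [agent_instructions]
    for name, text in documents.items():
        parts.append(f"\n=== DOCUMENT: {name} ===\n{text}")
    return "\n".join(parts)

# 3 documents x 5 agents: 15 calls before batching, 5 after (one per agent).
```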
Cross-document intelligence was harder than expected. Getting an AI to reliably track that a risk mentioned in January's meeting was deferred in February and still unresolved in March requires careful prompt engineering — explicit reasoning steps, pattern type definitions, and severity calibration scales.
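The three ingredients named above can be made concrete with a prompt skeleton. This is an illustrative example, not the team's actual wording; the pattern type names and severity scale are assumptions:

```python
# Hypothetical cross-document agent prompt showing explicit reasoning steps,
# pattern type definitions, and a severity calibration scale.
CROSS_DOC_PROMPT = """\
You will receive board minutes from multiple meetings, each labeled with a date.

Reasoning steps (follow in order):
1. List every risk and action item per meeting, with its verbatim quote.
2. Match items across meetings by topic, not by exact wording.
3. For each match, classify the pattern:
   - ESCALATING_RISK: severity language strengthens over time
   - UNRESOLVED_ACTION: assigned but never reported complete
   - PERSISTENT_GAP: same deficiency noted in two or more meetings
4. Assign severity using this scale:
   - HIGH: regulatory exposure or fiduciary breach
   - MEDIUM: recurring process failure with no accountable owner
   - LOW: single-meeting issue with a documented remediation plan

Only report findings supported by verbatim quotes from the documents.
"""
```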
Balancing autonomy with boundaries. The agents needed to be autonomous enough to find non-obvious patterns, but bounded enough to never fabricate evidence. Every finding must include a verbatim quote from a source document. The system refuses to answer questions outside the uploaded document scope.
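The verbatim-evidence rule implies a mechanical check: a finding is dropped unless its quote appears word-for-word in some source document. A minimal sketch with an assumed helper name and whitespace normalization as the only fuzziness:

```python
import re

def evidence_is_verbatim(quote: str, documents: dict) -> bool:
    """Return True only if the quote appears word-for-word in at least one
    source document (whitespace and case normalized). Illustrative helper."""
    normalized = re.sub(r"\s+", " ", quote).strip().lower()
    for text in documents.values():
        if normalized in re.sub(r"\s+", " ", text).lower():
            return True
    return False
```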
Accomplishments that we're proud of
• 24 findings from 5 documents, including cross-meeting patterns no single-document analysis would catch — like "Repeated Failure to Assign Accountable Ownership" tracked across 3 months of board minutes
• Evidence-grounded chat that produces governance-grade responses with citations to specific documents, pages, sections, and existing analysis findings
• Role-based access that actually works — switch from Board Chair to Intern and watch the entire UI change: documents disappear, features lock, chat denies access
• Full audit trail of every agent action, every human verification, every permission-denied event
• Built in under 48 hours by a team of 5 with a working, demo-ready product
What we learned
• Gemini's 1M context window changes the architecture. Instead of chunking documents and building retrieval pipelines, we could send everything at once. This eliminated an entire class of "lost context" bugs.
• Prompt engineering is the real product. The difference between vague findings and governance-grade analysis came down to mandatory reasoning steps, severity calibration criteria, and cross-document correlation instructions in our agent prompts.
• Agentic AI needs boundaries. Autonomous agents are powerful, but in governance, trust requires constraints — access controls, evidence requirements, human verification workflows, and complete audit trails.
What's next for Bounded Governance AI
• Google Drive integration for automatic document ingestion from shared governance folders
• OAuth authentication replacing the demo persona switcher with real Google SSO
• Governance timeline visualization showing how risks and action items evolve across meetings
• Automated report generation producing board-ready governance health reports exported to PDF
• Real-time monitoring with alerts when newly uploaded documents contain high-severity issues
• Multi-organization support enabling governance consultants to manage multiple client boards
Built With
- docker
- fast-api
- gemini3preview
- next.js
- python
- react
- shadcn/ui
- sqlite
- tailwindcss
- typescript
- uvicorn
- yellowind