Inspiration
We realized that many product issues don’t come from bad ideas, but from hidden compliance and governance risks that only show up late in the process. Features often seem simple during planning, but months later they can create problems when Legal, Security, or Finance gets involved. By then, the product is already built and fixing those issues is costly. This made us want to design a tool that helps product managers think through these risks earlier, using AI to stress-test decisions before they lead to expensive rework. Our goal was to improve decision quality early on, not to slow teams down or just generate more documentation.
What it does
RegulaPM Nexus takes a single product decision and runs it through a multi-step AI pipeline to break it down in a structured way. It generates clear PRD sections, stakeholder feedback from six perspectives (Security, Compliance, Legal, Finance, Engineering, and Support), and compliance and launch checklists. It also creates a visual dependency graph that shows how risks, regulations, stakeholders, and success metrics are connected. Each output is structured and traceable, and individual sections can be regenerated without overwriting user edits. The final result is an export-ready decision packet in Markdown and JSON, built for auditability and long-term tracking rather than presentation slides.
How we built it
We built RegulaPM Nexus as a full-stack web app. The frontend is a Next.js 14 site styled with Tailwind CSS and shadcn/ui, and we kept the design calm and professional so it feels trustworthy. On the backend, we used Next.js API routes with MongoDB to store data, plus cookie-based sessions for login/auth. For the AI layer, we used Google Gemini 2.5 Flash and set it up as a six-stage pipeline instead of one giant prompt: (1) extract key entities, (2) build a deterministic dependency graph, (3) generate PRD sections one at a time, (4) generate stakeholder critiques independently, (5) create compliance/launch checklists, and (6) generate traceability links. Each stage outputs structured JSON that gets passed into the next stage, which makes the results more consistent and easier to audit. We render the dependency graph using React Flow. Since we had limited time, we scoped aggressively and focused on getting the full pipeline working end-to-end rather than adding a bunch of extra features.
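The staged design can be sketched as a chain of small functions, each consuming the previous stage's structured JSON. This is a minimal illustration only: the type names and function bodies below are hypothetical stand-ins for the real Gemini calls and schemas, showing just the first two stages (entity extraction and deterministic graph building).

```typescript
// Hypothetical shapes for the first two pipeline stages.
type Entities = { risks: string[]; regulations: string[] };
type GraphNode = { id: string; kind: "risk" | "regulation" };
type DependencyGraph = { nodes: GraphNode[]; edges: [string, string][] };

// Stage 1: extract key entities from the decision text.
// (Stubbed here; the real stage prompts Gemini and parses its JSON output.)
function extractEntities(decision: string): Entities {
  const risks = decision.includes("payment") ? ["fraud"] : [];
  const regulations = decision.includes("payment") ? ["PCI-DSS"] : [];
  return { risks, regulations };
}

// Stage 2: build a deterministic dependency graph from the entities.
// Because this stage is pure code, the same entities always yield the same graph.
function buildGraph(e: Entities): DependencyGraph {
  const nodes: GraphNode[] = [
    ...e.risks.map((r) => ({ id: r, kind: "risk" as const })),
    ...e.regulations.map((r) => ({ id: r, kind: "regulation" as const })),
  ];
  // Connect every risk to every regulation (real linking logic is richer).
  const edges: [string, string][] = [];
  for (const r of e.risks) for (const g of e.regulations) edges.push([r, g]);
  return { nodes, edges };
}

// Each stage's structured JSON output feeds the next stage.
const entities = extractEntities("Add a payment splitting feature");
const graph = buildGraph(entities);
console.log(JSON.stringify(graph));
```

Keeping the graph-building stage deterministic means only the extraction stage needs validation, which is part of what makes the output auditable.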
Challenges we ran into
One of our biggest challenges was making the AI pipeline reliable within the limited time of a hackathon. We found that using one large prompt led to inconsistent and hard-to-validate results, so we had to break the system into multiple stages with clear schemas between each step. This took more effort upfront, but it made the output more predictable. Another challenge was preventing feedback loops, where regenerating one part of the output could unintentionally change or break other sections. To fix this, we scoped regeneration to individual sections and stakeholder critiques, while preserving user edits by default. Throughout the project, we also had to balance how deep we modeled regulatory details with the need to actually ship a working and trustworthy product before the deadline.
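The scoped-regeneration fix can be sketched as a function that rewrites exactly one section and leaves everything else alone. The names here (`Section`, `userEdited`, `regenerateSection`) are assumptions for illustration, not the project's actual code; `generate` stands in for the real AI call.

```typescript
type Section = { id: string; content: string; userEdited: boolean };

// Regenerate a single section by id. Sections the user has edited are
// preserved unless `force` is set, so regeneration can never silently
// clobber manual work or ripple into other sections.
function regenerateSection(
  sections: Section[],
  id: string,
  generate: (id: string) => string, // stand-in for the AI call
  force = false
): Section[] {
  return sections.map((s) => {
    if (s.id !== id) return s; // other sections are untouched
    if (s.userEdited && !force) return s; // preserve user edits by default
    return { ...s, content: generate(id), userEdited: false };
  });
}
```

Returning a new array instead of mutating in place also makes it easy to offer undo or a diff of what changed.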
Accomplishments that we're proud of
We’re especially proud that our full six-stage AI pipeline works end to end and consistently produces structured, traceable outputs, including PRD sections, stakeholder critiques, compliance checklists, and a dependency graph with over twenty connected nodes. Each part of the output is schema-validated and can be regenerated on its own without breaking or overwriting other sections, which made the system feel solid and reliable. We’re also proud of the UI, which looks and feels like real enterprise software that a compliance or risk team could actually trust, not just a typical hackathon demo. To prove it wasn’t a one-off, we created three realistic demo briefs in fintech, healthcare, and enterprise SaaS to show the system works across different regulated industries. Seeing the dependency graph come together, especially with filters for risks, compliance, stakeholders, and metrics, was one of the most satisfying parts of the project.
What we learned
One major thing we learned is that schema validation is essential when AI output needs to be trustworthy. Free-form generation can look impressive in a demo, but it quickly breaks down when the output needs to be audited or traced back to a specific decision. We also learned that breaking AI generation into smaller, staged steps is harder to design, but it makes the system much more reliable than using one open-ended prompt. On the design side, we learned that UX restraint matters just as much as technical features. A calm, professional interface with clear structure builds more trust than a cluttered UI. Overall, our biggest takeaway was that building something smaller but reliable is more valuable than aiming big and ending up with something unstable, especially in regulated domains.
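Validation at a stage boundary can be as simple as a type guard that rejects malformed JSON before it reaches the next stage. The shape below (`StakeholderCritique` with `persona`, `severity`, `findings`) is an assumed example schema, not the project's real one.

```typescript
// Assumed shape for one stakeholder critique emitted by the AI layer.
interface StakeholderCritique {
  persona: string;
  severity: "low" | "medium" | "high";
  findings: string[];
}

// User-defined type guard: returns true only if the unknown value matches
// the schema, so later stages never consume malformed output.
function isCritique(v: unknown): v is StakeholderCritique {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.persona === "string" &&
    ["low", "medium", "high"].includes(o.severity as string) &&
    Array.isArray(o.findings) &&
    o.findings.every((f) => typeof f === "string")
  );
}
```

In practice a schema library can replace hand-rolled guards, but the principle is the same: fail fast at the boundary and retry the generation, rather than letting a bad field surface in an audit later.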
What's next for RegulaPM
Next for RegulaPM, we want to focus on making the outputs easier to use in real workflows. Our immediate priorities are adding PDF export with proper title pages and appendices, a diff view so teams can clearly see what changed between regenerations, and real-time collaboration so multiple people can review and edit decisions together. After that, we want to support customizable stakeholder personas, letting organizations define their own review perspectives, along with integrations into existing compliance and ticketing tools. Longer term, our goal is to build a governance layer that learns from an organization’s past decisions and regulatory outcomes, allowing the system to surface more relevant risks and insights over time.
Built With
- google-gemini2.5
- kubernetes
- mongodb
- next.js
- react
- tailwind