
Question: Really interesting direction! One thing I'm curious about: how does NovaFlow handle ambiguous or incomplete requirements in PRDs? In real-world scenarios, specs are often messy, and test accuracy heavily depends on interpretation. It would be great to see how your agents deal with uncertainty or conflicting logic. Also, do you have any validation layer to ensure generated tests are actually aligned with business intent and not just syntactically correct? Another potential area to explore could be traceability: mapping each generated test case back to specific requirement lines. This would make debugging, audits, and team collaboration much easier, especially in large systems.

Answer: Hey, that’s a really good question. In real-world scenarios PRDs are often messy or incomplete, and test quality heavily depends on how well those requirements are interpreted.

First, I believe AI is not magic — it’s mathematics (statistics and probability). So if the input data (in this case the PRD) is inaccurate or ambiguous, the output can also be inaccurate. Because of that, NovaFlow follows a human-in-the-loop approach instead of fully blind automation.

For handling ambiguous or incomplete PRDs, NovaFlow first uses the reasoning capability of the Nova Lite model with the system persona set as a Senior Product Manager. Instead of directly generating test cases, the system first focuses on understanding the PRD and produces three outputs:

  1. A summarized version of the PRD
  2. Extracted product features and requirements
  3. A review section for the Product Manager
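As a rough illustration of this first stage, here is a minimal sketch of the kind of request NovaFlow might send to Amazon Bedrock's Converse API with Nova Lite in the Senior Product Manager persona. The prompt wording, helper name, and temperature setting are my assumptions for this sketch, not NovaFlow's actual implementation:

```python
# Hypothetical sketch of the PRD-analysis stage: a Converse API payload with
# Nova Lite acting as a Senior Product Manager. Prompt wording and helper
# name are illustrative assumptions.

ANALYSIS_SYSTEM_PROMPT = (
    "You are a Senior Product Manager. Do NOT generate test cases. "
    "Read the PRD and return three sections: (1) a concise summary, "
    "(2) extracted product features and requirements, and "
    "(3) a review section listing anything that needs the PM's confirmation."
)

def build_prd_analysis_request(prd_text: str) -> dict:
    """Build the payload for bedrock_runtime.converse() (not sent here)."""
    return {
        "modelId": "amazon.nova-lite-v1:0",        # Nova Lite model ID
        "system": [{"text": ANALYSIS_SYSTEM_PROMPT}],
        "messages": [
            {"role": "user", "content": [{"text": f"PRD:\n{prd_text}"}]}
        ],
        "inferenceConfig": {"temperature": 0.2},   # low temp for faithful extraction
    }

request = build_prd_analysis_request("Users can reset passwords via an email link.")
```

The key design point is that the persona prompt explicitly forbids jumping ahead to test generation, which is what keeps this stage focused on understanding rather than output.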

Here the actual Product Manager (who wrote the PRD after discussing with the client) can review, correct, or refine the extracted requirements. This ensures the interpretation is aligned with the real product intent before moving forward.

Regarding uncertainty or conflicting logic, NovaFlow does not directly generate test cases. It first identifies gaps, inconsistencies, or conflicting logic in the PRD and raises them for clarification with the Product Manager. Only after these ambiguities are resolved does the system move forward with test generation.
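The clarification gate described above can be sketched as a simple check: test generation stays blocked while any flagged gap or conflict remains open. The data shape (a list of dicts with `issue` and `resolved` keys) is an assumption for this sketch:

```python
# Illustrative gate for the clarification step: test generation is blocked
# until every ambiguity flagged during PRD analysis has been resolved by the
# Product Manager. The record shape is an assumption, not NovaFlow's schema.

def can_generate_tests(ambiguities: list[dict]) -> bool:
    """Return True only once every flagged gap or conflict is resolved."""
    return all(item["resolved"] for item in ambiguities)

flags = [
    {"issue": "Section 3 conflicts with section 5 on session timeout", "resolved": False},
    {"issue": "No error behaviour defined for expired reset links", "resolved": True},
]

assert can_generate_tests(flags) is False   # PM still has one open question
flags[0]["resolved"] = True
assert can_generate_tests(flags) is True    # safe to move on to test generation
```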

For the validation layer, Phase 2 introduces another persona: a Senior QA Engineer. This agent understands the validated PRD and generates test cases that focus on user journeys, business logic validation, and proper test design strategy — not just syntactically correct tests.
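To make "business intent, not just syntax" concrete, a generated test case from this persona might carry the user journey and the business rule it validates alongside the steps. The field names below are illustrative assumptions, not NovaFlow's actual schema:

```python
from dataclasses import dataclass, field

# Sketch of a business-intent-oriented test case: each case records WHICH
# user journey and business rule it validates, not only the click steps.
# Field names are illustrative assumptions.

@dataclass
class GeneratedTestCase:
    case_id: str
    user_journey: str            # e.g. "password reset via email"
    business_rule: str           # the intent this test validates, in plain language
    steps: list[str] = field(default_factory=list)
    expected_outcome: str = ""

case = GeneratedTestCase(
    case_id="TC-001",
    user_journey="Password reset via email link",
    business_rule="A reset link must expire after a single use",
    steps=["Request reset link", "Use the link once", "Reuse the same link"],
    expected_outcome="Second use is rejected with a clear error message",
)
```

Keeping the business rule on the record is also what would later allow each test to be traced back to a requirement.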

For traceability, I’m still exploring deeper LLM tracing, but currently during execution NovaFlow generates reports that store reasoning traces from Nova Act, pass/fail status, and execution metrics like duration. This provides visibility for Product Managers, QA Engineers, and Developers to understand how and why a test behaved in a certain way.
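A minimal sketch of such a per-test execution record, assuming a simple JSON shape (the real NovaFlow reports may differ), could look like this:

```python
import json

# Minimal sketch of one test's execution report: reasoning trace from the
# browser agent alongside pass/fail status and duration. The schema is an
# assumption for illustration.

def build_execution_report(case_id: str, passed: bool, duration_s: float,
                           reasoning_trace: list[str]) -> str:
    """Serialize one test's execution record as JSON for PMs, QA, and devs."""
    return json.dumps({
        "case_id": case_id,
        "status": "pass" if passed else "fail",
        "duration_seconds": duration_s,
        "reasoning_trace": reasoning_trace,   # step-by-step agent reasoning
    }, indent=2)

report = build_execution_report(
    "TC-001", passed=False, duration_s=12.4,
    reasoning_trace=["Opened login page", "Reset link not found in inbox"],
)
```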

Additionally, I integrated Nova Sonic, which enables bidirectional voice interaction. This lets stakeholders discuss the test execution report conversationally, while the responses remain grounded in the actual execution data.

Overall, the idea behind NovaFlow is not to replace human decision making but to augment the QA workflow with structured AI reasoning while keeping humans in the loop.
