Inspiration

Most AI planning systems I have used depend on the AI both generating and judging plans. This makes failures hard to explain: we never know where a plan may be lacking. PRISM was inspired by the idea that AI should be treated as a subordinate component inside a deterministic system, not as an autonomous decision-maker. The goal was to explore how planning systems could remain safe, explainable, and auditable while still benefiting from AI-generated suggestions.

What it does

PRISM is a constraint-based planning engine where:

- Java is the system authority
- Constraints are deterministic
- Violations are structured data
- AI only proposes or repairs plans

The system evaluates plans using explicit rules. If a plan violates constraints, PRISM generates structured violations and forces bounded replanning until the rules are satisfied or a retry limit is reached. At no point does the AI decide whether a plan is correct.
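The control flow above can be sketched in a few lines of Java. This is a minimal illustration, not PRISM's actual API: the names `Planner`, `ConstraintEngine`, and `Violation`, and the sample constraint, are assumptions made for the sketch.

```java
import java.util.List;

// Sketch of a bounded replanning loop: the Java loop, not the AI, decides
// whether a plan is accepted. All type and method names are illustrative.
public class PlanningLoop {
    interface Planner { List<String> propose(List<Violation> feedback); } // AI boundary
    interface ConstraintEngine { List<Violation> evaluate(List<String> plan); }
    record Violation(String rule, String detail) {}

    static List<String> plan(Planner ai, ConstraintEngine engine, int maxRetries) {
        List<Violation> violations = List.of();
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            List<String> candidate = ai.propose(violations); // AI only proposes
            violations = engine.evaluate(candidate);         // Java judges
            if (violations.isEmpty()) return candidate;      // deterministic accept
        }
        throw new IllegalStateException("No valid plan within retry limit");
    }

    public static void main(String[] args) {
        // Stubbed "AI": first proposal violates a rule, the repaired one passes.
        Planner stub = feedback -> feedback.isEmpty()
                ? List.of("taskA->worker1", "taskA->worker2") // double allocation
                : List.of("taskA->worker1");
        ConstraintEngine engine = p -> p.size() > 1
                ? List.of(new Violation("single-assignment", "taskA allocated twice"))
                : List.of();
        System.out.println(plan(stub, engine, 3)); // prints [taskA->worker1]
    }
}
```

Note that the retry limit is the only exit besides success, so the loop is guaranteed to terminate even if the proposer never converges.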

How we built it

PRISM is built as a backend-first system using Java and Spring Boot. Key components include:

- A ConstraintEngine that evaluates deterministic rules
- Independent constraint rules that return structured violations
- A planning loop that controls validation, replanning, and retry limits
- An isolated AI boundary where Gemini proposes task allocations

For demo reliability, Gemini is stubbed behind a strict interface. The architecture already supports real HTTP-based Gemini integration without changing core logic.
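The isolated AI boundary might look like the following sketch: the planning core only ever sees a narrow interface, so a deterministic demo stub and a real HTTP-backed Gemini client are interchangeable. The names here (`AllocationProposer`, `StubProposer`, record fields) are assumptions for illustration, not PRISM's actual classes.

```java
import java.util.List;

// Illustrative AI boundary: core logic depends only on the interface,
// never on how proposals are produced.
public class AiBoundary {
    record Task(String id) {}
    record Allocation(String taskId, String workerId) {}

    // The only surface the planning core knows about.
    interface AllocationProposer {
        List<Allocation> propose(List<Task> tasks, List<String> priorViolations);
    }

    // Demo-safe stub: deterministic, no network calls.
    static class StubProposer implements AllocationProposer {
        public List<Allocation> propose(List<Task> tasks, List<String> priorViolations) {
            return tasks.stream()
                    .map(t -> new Allocation(t.id(), "worker-1"))
                    .toList();
        }
    }

    // A real Gemini-backed client would implement the same interface,
    // leaving the planning loop untouched:
    // static class GeminiProposer implements AllocationProposer { ... }

    public static void main(String[] args) {
        AllocationProposer proposer = new StubProposer();
        System.out.println(proposer.propose(List.of(new Task("t1")), List.of()));
    }
}
```

Swapping the stub for a live client is then a one-line change at wiring time, which is what keeps the demo reliable without forking the core logic.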

Challenges we ran into

One of the main challenges was designing a system where AI could contribute without becoming the authority. Other challenges included:

- Modeling violations as structured data instead of logs
- Designing a bounded replanning loop to prevent infinite retries
- Keeping the architecture explainable and demo-safe under time constraints
- Balancing real-world AI integration with deterministic behavior for reliability
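"Violations as structured data instead of logs" can be sketched as a typed record that each rule returns, so the replanning loop can act on it programmatically rather than parse a log line. The field names and the sample rule below are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

// Sketch: each constraint rule returns typed violations; an empty list
// means the rule is satisfied. Names are illustrative, not PRISM's API.
public class StructuredViolations {
    record Violation(String ruleId, String taskId, String message) {}

    interface ConstraintRule {
        List<Violation> check(List<String> assignments);
    }

    // Example rule: no task may appear in more than one assignment.
    static final ConstraintRule SINGLE_ASSIGNMENT = assignments -> {
        var seen = new HashSet<String>();
        var out = new ArrayList<Violation>();
        for (String a : assignments) {
            String task = a.split("->")[0];
            if (!seen.add(task)) {
                out.add(new Violation("single-assignment", task,
                        "task assigned more than once"));
            }
        }
        return out;
    };

    public static void main(String[] args) {
        System.out.println(SINGLE_ASSIGNMENT.check(List.of("t1->w1", "t1->w2")));
    }
}
```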

Accomplishments that we're proud of

- Enforcing a clear separation between AI suggestions and system authority
- Designing a deterministic constraint engine with structured violations
- Building a bounded replanning loop fully controlled by Java
- Creating an architecture that is explainable, extensible, and production-minded
- Delivering a clean, reliable demo under tight time constraints

What we learned

This project reinforced several lessons:

- AI should not be responsible for validating its own output
- Deterministic systems benefit from AI when boundaries are explicit
- Explainability and correctness matter more than raw generation power
- Clear architecture decisions simplify both debugging and demos

We also gained practical experience designing systems that treat AI as an advisory tool, not a source of truth.

What's next for PRISM — Constraint-Based Planning Engine with AI Advisory

Future improvements include:

- Enabling full HTTP-based Gemini integration
- Adding a frontend to visualize plan versions and violations
- Introducing additional constraints and planning domains
- Persisting plan history and violations for deeper analysis

The core principle will remain unchanged: Java decides, AI advises.
