Inspiration

Employees often waste time trying to understand company policies. Simple questions about travel, software purchases, data sharing, or vendor approvals can require digging through long handbooks or waiting for a manager’s response. This slows people down and creates risk when employees guess instead of following the correct process.

That frustration became the starting point for CCM AI. We wanted to build a tool that lets employees ask policy questions in plain English and receive clear answers with citations from their company’s own documents. As we explored the problem further, we realized that the same issue appears in vendor contracts. Important terms, hidden fees, renewal clauses, and payment changes may be buried in long documents full of legal jargon that employees do not fully understand before signing.

Both problems have the same root cause: the information exists, but the person who needs it cannot access it quickly or confidently. CCM AI solves this by turning policies and contracts into searchable, explainable compliance guidance.

While CCM AI fits most directly under the Business track at KeanHackU, the problem applies across industries. Schools, hospitals, nonprofits, and companies all rely on policies, contracts, and compliance rules. We built CCM AI to help people make safer, faster decisions before mistakes happen.

What It Does

CCM AI has two main features:

Policy Clarification

Employees can ask questions such as, “Do I need manager approval to onboard a new vendor?” or “Can I expense this software tool?” The AI responds with a structured verdict: approved, needs approval, conditional, escalate, or prohibited. Each answer includes citations from the company’s uploaded policy documents, so users can see exactly where the answer came from.
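
For illustration, a verdict for the vendor question above might come back looking roughly like this (the shape is representative; the exact field names in our schema differ slightly):

```json
{
  "verdict": "needs_approval",
  "summary": "New vendors must be approved by a manager before onboarding.",
  "citations": [
    {"document": "Vendor Management Policy", "section": "3.1"}
  ]
}
```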

The system is also multi-tenant. Each organization uploads its own policies, and employees only receive answers based on their company’s specific rules.

Fraud & Fairness Contract Detection

Users can upload a vendor contract as a PDF or DOCX, or paste the contract text directly. The AI then flags specific risks, such as undefined services, automatic long-term renewals, missing opt-out windows, unusual payment terms, unexpected bank account change requests, and hidden fees. Each finding includes a citation so the user knows exactly which clause to review before signing.

How We Built It

We started with a design session to map out the full system before writing code. The backend is a FastAPI server built around a multi-agent reasoning pipeline.

For policy questions, three agents run in sequence (a simplified sketch follows the list):

  1. The QueryUnderstandingAgent extracts the user’s intent and keywords.
  2. The PolicyRetrievalAgent retrieves the most relevant policy chunks from MongoDB Atlas.
  3. The PolicyReasoningAgent synthesizes the retrieved information into a structured JSON verdict with citations.
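
The orchestration itself is simple. Here is a stripped-down sketch: the agent names match ours, but the `run()` signatures and the stubbed return values are illustrative only.

```python
# Stripped-down sketch of the policy pipeline. The agent names match ours,
# but the run() signatures and the stubbed return values are illustrative only.

class QueryUnderstandingAgent:
    def run(self, question: str) -> dict:
        # Real agent: Gemini extracts the user's intent and keywords.
        return {"intent": "vendor_onboarding", "keywords": ["vendor", "approval"]}

class PolicyRetrievalAgent:
    def run(self, org_id: str, keywords: list[str]) -> list[dict]:
        # Real agent: queries MongoDB Atlas for the organization's matching policy chunks.
        return [{"document": "Vendor Management Policy", "section": "3.1", "text": "..."}]

class PolicyReasoningAgent:
    def run(self, question: str, context: list[dict]) -> dict:
        # Real agent: Gemini synthesizes a structured verdict from the retrieved chunks.
        return {"verdict": "needs_approval", "citations": context}

def answer_policy_question(org_id: str, question: str) -> dict:
    parsed = QueryUnderstandingAgent().run(question)
    chunks = PolicyRetrievalAgent().run(org_id, parsed["keywords"])
    return PolicyReasoningAgent().run(question, chunks)
```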

For contract analysis, we built a dedicated ContractAnalysisAgent. It summarizes the contract, identifies possible fraud signals, flags unfair clauses, and produces an overall risk verdict.
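
A representative result looks like this (again, illustrative field names rather than our exact schema):

```json
{
  "risk_verdict": "high",
  "findings": [
    {
      "type": "auto_renewal",
      "detail": "Contract renews automatically for 24 months unless cancelled 90 days in advance.",
      "citation": "Section 7.2"
    },
    {
      "type": "hidden_fee",
      "detail": "A 'service adjustment fee' is referenced but never defined.",
      "citation": "Exhibit B"
    }
  ]
}
```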

The AI backbone uses the Gemini API. We enforced JSON-formatted responses by setting responseMimeType: application/json, which made the output easier for the frontend to parse and display consistently.
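
In the Python SDK that option is spelled response_mime_type. A minimal sketch of the kind of call involved (model name and prompt are placeholders, and the exact wiring in our code differs):

```python
# Minimal sketch of a Gemini call that forces JSON output
# (google-generativeai SDK; the model name and prompt are placeholders).
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction="You are a compliance assistant. Answer only from the provided policy text.",
    generation_config={"response_mime_type": "application/json"},
)

response = model.generate_content(
    "Question: Do I need manager approval to onboard a new vendor?\n\n"
    "Policy excerpts:\n<retrieved policy chunks go here>"
)
print(response.text)  # a JSON string the frontend can parse directly
```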

Authentication is handled with JWT tokens and bcrypt-hashed passwords. We also added a role system that separates organization admins, who can upload policies, from employees, who can ask questions and analyze documents.
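
A stripped-down sketch of that flow is below; passlib and PyJWT are shown as one common pairing with FastAPI, not necessarily the exact libraries in our code.

```python
# Sketch of the auth pieces: bcrypt-hashed passwords plus a JWT carrying a role claim.
# passlib and PyJWT are shown as one common pairing; the exact libraries we used may differ.
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT
from passlib.context import CryptContext

SECRET_KEY = "change-me"  # loaded from the environment in practice
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def hash_password(plain: str) -> str:
    return pwd_context.hash(plain)

def verify_password(plain: str, hashed: str) -> bool:
    return pwd_context.verify(plain, hashed)

def create_access_token(user_id: str, org_id: str, role: str) -> str:
    # role is "admin" (can upload policies) or "employee" (can ask questions and analyze documents).
    payload = {
        "sub": user_id,
        "org_id": org_id,
        "role": role,
        "exp": datetime.now(timezone.utc) + timedelta(hours=8),
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")
```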

The frontend is built with vanilla HTML, CSS, and JavaScript. We kept it lightweight so it could be deployed quickly without a build step.

Challenges We Ran Into

Our biggest obstacle was API access and rate limits. We kept running into an “it works on my machine” problem: one team member’s requests would succeed, while another’s would fail with quota errors or connection timeouts.

After debugging across different machines and networks, we found two issues. First, we had exhausted the free-tier API quota faster than expected during testing. Second, we had not whitelisted all of our IP addresses in the API configuration. Finding and fixing these issues took several hours.

We also ran into merge conflicts while working on the same files at the same time. This happened most often in the frontend HTML files and backend agent files. We had to resolve those conflicts carefully so we did not overwrite each other’s work during the hackathon.

Accomplishments We're Proud Of

Our proudest technical achievement is the multi-agent pipeline. Instead of relying on one large AI prompt, we separated the workflow into agents with clear responsibilities. One agent understands the question, another retrieves the relevant policy sections, and another produces the final verdict with citations.

This made the system easier to debug and helped us produce answers that were more organized and traceable.

We are also proud of the database design. In past projects, MongoDB caused issues such as connection errors, schema mismatches, and missing data. This time, we scoped every policy, user, and query to an organization from the beginning. That helped the system stay stable throughout the hackathon.
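
Concretely, every read carries an organization filter. A rough sketch, with illustrative collection and field names, and simple keyword search standing in for our actual retrieval logic:

```python
# Every read is scoped to the caller's organization so one tenant never sees
# another tenant's policies. Collection and field names are illustrative, and
# the $text keyword search (which requires a text index) stands in for our retrieval logic.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # an Atlas connection string in production
db = client["ccm_ai"]

def find_policy_chunks(org_id: str, keywords: list[str], limit: int = 5) -> list[dict]:
    query = {"org_id": org_id, "$text": {"$search": " ".join(keywords)}}
    return list(db.policy_chunks.find(query).limit(limit))
```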

What We Learned

The biggest lesson we learned is to build vertically as a team before splitting horizontally. At first, we divided the work into frontend, backend agents, authentication, and database tasks. That felt efficient, but it created integration problems later when our assumptions about API formats did not fully match.

Next time, we would build one feature end-to-end together first. After that foundation works, we would split into separate areas of the project.

We also learned the importance of reading API documentation carefully before implementation. For example, the difference between OpenAI-style system messages and Gemini’s system_instruction format seemed small at first, but it caused real debugging delays. A closer read of the documentation would have saved us time.
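
For anyone hitting the same confusion, the difference looks roughly like this:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")

# OpenAI-style chat APIs take the system prompt as the first message in a list:
openai_style_request = {
    "messages": [
        {"role": "system", "content": "You are a compliance assistant."},
        {"role": "user", "content": "Can I expense this software tool?"},
    ]
}

# Gemini's Python SDK takes it as a constructor argument instead:
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction="You are a compliance assistant.",
)
response = model.generate_content("Can I expense this software tool?")
```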

What's Next for CCM AI

The most important near-term improvement is organization verification. Right now, anyone can register an organization account. In the future, we would require official business documentation, such as an EIN or business registration, before granting admin access. We would also add email verification to make the system more secure.

On the AI side, we would replace our current retrieval approach with a vector database such as Pinecone or Weaviate. Using dense embeddings would make policy retrieval more accurate, especially for long and complex documents where the relevant clause may not use the same words as the user’s question.
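
The core idea, sketched with Gemini’s embedding endpoint and plain cosine similarity standing in for a real vector database:

```python
# Sketch of dense-embedding retrieval: embed the question and each policy chunk,
# then rank chunks by cosine similarity. A real deployment would keep the chunk
# embeddings in a vector database (Pinecone, Weaviate, etc.) rather than re-embedding.
import google.generativeai as genai
import numpy as np

genai.configure(api_key="YOUR_GEMINI_API_KEY")
EMBED_MODEL = "models/text-embedding-004"  # example embedding model

def embed(text: str) -> np.ndarray:
    result = genai.embed_content(model=EMBED_MODEL, content=text)
    return np.array(result["embedding"])

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    q = embed(question)
    scored = []
    for chunk in chunks:
        c = embed(chunk)
        score = float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c)))
        scored.append((score, chunk))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:k]]
```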

We would also add a full audit log. Every policy question and contract analysis would be stored with its verdict, citations, and timestamp. This would give compliance teams a clear record of what was checked and when.
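
Each record would be small; something along these lines, with field names that are planned rather than final:

```python
# One planned audit record per policy question or contract check (field names not final).
from datetime import datetime, timezone

audit_record = {
    "org_id": "org_123",
    "user_id": "user_456",
    "kind": "policy_question",  # or "contract_analysis"
    "question": "Can I expense this software tool?",
    "verdict": "needs_approval",
    "citations": [{"document": "Expense Policy", "section": "2.4"}],
    "created_at": datetime.now(timezone.utc),
}
# db.audit_log.insert_one(audit_record)  # stored alongside the rest of our MongoDB data
```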

Long term, the goal is to integrate CCM AI directly into HR platforms and contract management systems. Compliance checking should not be a separate tool that employees have to remember to use. It should become a seamless part of the workflows they already use.

Built With

FastAPI (Python), Gemini API, MongoDB Atlas, JWT, bcrypt, and vanilla HTML, CSS, and JavaScript