Inspiration
I previously worked in retail environments where product data was managed almost entirely by humans. In one role, only two people were responsible for daily product entry, updates, pricing changes, and quality control across a large catalog. Every day brought the same issues: inconsistent descriptions, pricing errors, missing attributes, and last-minute fixes before products could go live.
The work was repetitive, stressful, and error-prone. When something went wrong, there was no clear audit trail showing what changed, why it changed, or whether it was reviewed. Preparing for this hackathon made me realize the real problem was not a lack of tools, but the absence of an operational layer that could safely assist humans instead of replacing them.
QuantOps Autopilot was inspired by that gap: using AI to handle low-risk operational work automatically, escalate higher-risk decisions to humans, and make every action explainable and auditable.
What it does
QuantOps Autopilot is a compliance-first AI operations platform for retail product data.
It ingests product catalogs, detects quality and compliance issues, redacts sensitive information, and uses Gemini 3 to generate structured FixPlans: machine-readable diffs with confidence scores, risk levels, and supporting evidence. Low-risk fixes are applied automatically, while higher-risk changes are routed to a human review queue.
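The FixPlan and routing idea above can be sketched as follows. This is an illustrative shape only; the field names and the confidence threshold are assumptions, not the project's actual schema.

```typescript
// Illustrative FixPlan: a machine-readable diff with confidence,
// risk level, and supporting evidence. Field names are assumptions.
type RiskLevel = "low" | "medium" | "high";

interface FieldDiff {
  field: string;  // e.g. "description", "price"
  before: string;
  after: string;
}

interface FixPlan {
  productId: string;
  diffs: FieldDiff[];
  confidence: number;  // 0..1, model-reported
  risk: RiskLevel;
  evidence: string[];  // pointers into source data or policy rules
}

// Low-risk, high-confidence plans auto-apply; everything else is
// routed to the human review queue. Threshold is hypothetical.
function route(plan: FixPlan): "auto-apply" | "human-review" {
  return plan.risk === "low" && plan.confidence >= 0.9
    ? "auto-apply"
    : "human-review";
}
```

A structured shape like this is what makes downstream policy gating and audit logging possible: every applied change can be traced back to a specific diff and its evidence.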
The system supports long-running “Marathon” runs across large catalogs, preserves checkpoints, logs every action for auditability, and surfaces measurable ROI such as issues reduced, time saved, and automation safety ratio.
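The checkpointing behind Marathon runs can be sketched roughly like this, with an in-memory store standing in for real persistence; all names are illustrative assumptions.

```typescript
// Sketch: progress is persisted after each batch so a long-running
// catalog run can resume where it stopped. In-memory store used here
// purely for illustration.
interface Checkpoint {
  runId: string;
  nextIndex: number;  // first unprocessed catalog item
  processed: number;  // total items handled so far
}

const checkpointStore = new Map<string, Checkpoint>();

function processCatalog(
  runId: string,
  items: string[],
  handle: (item: string) => void,
  batchSize = 2
): Checkpoint {
  // Resume from the last checkpoint if one exists.
  let cp = checkpointStore.get(runId) ?? { runId, nextIndex: 0, processed: 0 };
  while (cp.nextIndex < items.length) {
    const end = Math.min(cp.nextIndex + batchSize, items.length);
    for (let i = cp.nextIndex; i < end; i++) handle(items[i]);
    cp = { runId, nextIndex: end, processed: cp.processed + (end - cp.nextIndex) };
    checkpointStore.set(runId, cp);  // persist after every batch
  }
  return cp;
}
```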
This is not a chatbot. It is an AI-driven operational workflow.
How we built it
We built QuantOps Autopilot as a web-based application using Google AI Studio and the Gemini 3 API.
The system combines deterministic engineering with AI reasoning:
- Rule-based validation and risk classification
- PII detection and redaction before AI processing
- Gemini 3 Flash for fast catalog triage
- Gemini 3 Pro for higher-risk or ambiguous cases
- Structured JSON outputs instead of free-form text
- Policy gating, human-in-the-loop review, rollback, and audit logging
This architecture ensures that AI actions are explainable, testable, and reversible.
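The "PII detection and redaction before AI processing" step from the list above can be sketched minimally. Real detection would be much broader; these two patterns (emails and phone-like numbers) and the replacement tokens are illustrative assumptions.

```typescript
// Minimal sketch: sensitive values are replaced with tokens before
// any text is sent to the model. Patterns are illustrative only.
const PII_PATTERNS: [RegExp, string][] = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],
  [/\b\d{3}[- ]?\d{3}[- ]?\d{4}\b/g, "[PHONE]"],
];

function redact(text: string): string {
  return PII_PATTERNS.reduce(
    (acc, [pattern, token]) => acc.replace(pattern, token),
    text
  );
}
```

Running redaction deterministically, before the model sees anything, is what lets the rest of the pipeline treat the AI step as safe by construction rather than by trust.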
Challenges we ran into
The biggest challenge was not calling the AI model, but designing guardrails around it.
We had to ensure sensitive data was never exposed, automated changes were limited to safe fields, and high-risk decisions always required human approval. Making long-running processes resumable while maintaining consistency and auditability was also challenging.
Balancing speed, safety, and clarity—both in the system and the UI—required careful design and iteration.
Accomplishments that we're proud of
- Built a real end-to-end AI operations workflow, not a demo chatbot
- Implemented structured, auditable AI outputs with rollback support
- Made human review a first-class part of the system
- Created a working Marathon Agent with checkpointing and resume
- Delivered measurable operational metrics like ROI and compliance rate
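The metrics mentioned in the last point reduce to simple ratios over run statistics. The stat names and formulas below are assumptions sketched for illustration, not the project's exact definitions.

```typescript
// Hypothetical per-run statistics and two of the metrics mentioned
// above, expressed as plain ratios.
interface RunStats {
  issuesFound: number;
  issuesFixed: number;
  autoApplied: number;
  rolledBack: number;
}

// Share of detected issues that were resolved.
function complianceRate(s: RunStats): number {
  return s.issuesFound === 0 ? 1 : s.issuesFixed / s.issuesFound;
}

// Share of automated actions that stood without a rollback.
function automationSafetyRatio(s: RunStats): number {
  return s.autoApplied === 0 ? 1 : (s.autoApplied - s.rolledBack) / s.autoApplied;
}
```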
What we learned
We learned that responsible AI is fundamentally an engineering problem.
The most valuable part of using Gemini 3 was not text generation, but structured reasoning combined with strict policies and validation. AI becomes truly useful when it is constrained, explainable, and integrated into real operational workflows rather than acting as a black box.
What's next for QuantOps Autopilot
Next, we plan to expand connector support for real PIM and commerce systems, improve multilingual content generation, and add richer pricing intelligence using permissioned data sources.
We also plan to refine the human review experience and introduce role-based access controls, making QuantOps Autopilot suitable for real production retail environments.
Built With
- ai
- api
- browser
- gemini
- json
- local
- mermaid
- react
- storage
- studio
- typescript
- vite