# Project Story — Braket-RAG-Code-Assistant
## Inspiration
Quantum computing is powerful, but writing correct quantum code is hard. Amazon Braket provides a rich SDK, yet newcomers struggle with circuit construction, gate selection, and debugging cryptic simulation errors. We asked ourselves:
> What if you could describe a quantum algorithm in plain English and get back working, optimized, explained Braket code — in seconds?
The gap between "I want to build a Bell state" and actually writing `circuit.h(0); circuit.cnot(0, 1)` with correct imports, device selection, and measurement is surprisingly wide. Multiply that by algorithms like VQE or QAOA, and the barrier becomes a wall.
We were inspired by three converging ideas:
- RAG for domain-specific code generation — LLMs hallucinate less when grounded in real, curated examples.
- Multi-agent systems — No single prompt can reliably generate, validate, optimize, and explain code. Specialized agents outperform monolithic prompts.
- Educational accessibility — Quantum computing education shouldn't require a physics PhD to get started.
## How We Built It
### The Multi-Agent Pipeline
We designed a sequential pipeline where each agent has a single responsibility:
$$ \text{Designer} \;\rightarrow\; \text{Validator} \;\rightarrow\; \text{Optimizer} \;\leftrightarrows\; \text{Validator (loop)} \;\rightarrow\; \text{Final Validator} \;\rightarrow\; \text{Educational} $$
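The sequence above can be sketched in plain Python (the function names and dict shapes here are illustrative stand-ins, not the project's actual API):

```python
# Illustrative sketch of the sequential pipeline; the agent callables
# are stand-ins for the real Bedrock-backed agents.
K_MAX = 3

def run_pipeline(request, design, validate, optimize, explain):
    code = design(request)                  # Designer (Nova Pro)
    report = validate(code)                 # Validator (Nova Premier)
    for _ in range(K_MAX):                  # Optimizer <-> Validator loop
        candidate = optimize(code, report)
        report = validate(candidate)
        if report["ok"]:
            code = candidate
            break
    final = validate(code)                  # Final validation pass
    explanation = explain(code)             # Educational agent (Nova 2 Lite)
    return {"code": code, "report": final, "explanation": explanation}
```

Each stage only sees the previous stage's output, which keeps every agent's prompt small and single-purpose.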
Each agent is backed by an Amazon Bedrock Nova model chosen for its strengths:
| Agent | Model | Why |
|---|---|---|
| Designer | Nova Pro | Strong code generation with instruction following |
| Validator | Nova Premier | Highest accuracy for identifying subtle bugs |
| Optimizer | Nova Pro | Good at reasoning about circuit transformations |
| Educational | Nova 2 Lite | Fast, cost-efficient for explanation generation |
### RAG System
We built a custom RAG pipeline:
- Knowledge Base — 100+ curated Braket code snippets with natural-language descriptions, covering VQE, QAOA, Grover, teleportation, and QFT patterns.
- Embeddings — BAAI/bge-base-en-v1.5 sentence embeddings (768-dimensional).
- Vector Store — FAISS index for fast similarity search ($O(\sqrt{n})$ approximate nearest neighbor).
- Retriever — Top-$k$ retrieval with $k = 5$ and similarity threshold $\tau = 0.7$.
The retrieval step grounds the Designer Agent's output in real, working Braket patterns rather than hallucinated API calls.
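The retrieval step can be sketched in plain Python with toy 2-d vectors (the real system uses 768-dimensional BGE embeddings and a FAISS index, but the top-$k$-with-threshold logic is the same):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, kb, k=5, tau=0.7):
    """Return up to k snippets whose similarity clears the threshold tau."""
    scored = [(cosine(query_vec, vec), snippet) for vec, snippet in kb]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [snippet for score, snippet in scored[:k] if score >= tau]
```

The threshold matters as much as the ranking: a low-similarity match is worse than no match, because it invites the Designer to imitate an irrelevant pattern.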
### Validation Loop
The Validator Agent doesn't just check syntax — it:
- Compiles the code using the Braket SDK's `BraketCompiler`
- Extracts the circuit object and runs `CircuitAnalyzer` for metrics
- Simulates using the local Braket simulator
- LLM-analyzes failures and generates corrected code
When the Optimizer modifies a circuit, it loops back to the Validator. This Optimizer ⟷ Validator loop runs up to $k_{\max} = 3$ iterations:
$$ \text{for } i = 1, \ldots, k_{\max}: \quad c_{i+1} = \text{Optimize}(c_i), \quad \text{if } \text{Validate}(c_{i+1}) \text{ then break} $$
### Educational Agent — Four Depth Levels
We designed four distinct prompting strategies, each targeting a different audience:
| Depth | Notation | Audience |
|---|---|---|
| `low` | No math, analogies only | Complete beginners |
| `intermediate` | Gate names, step-by-step | Students learning quantum |
| `high` | $\lvert\psi\rangle$ notation, state evolution | Advanced learners |
| `very_high` | Full $\hat{H}$, density matrices, noise analysis | Graduate / expert |
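A minimal sketch of how such a depth selector might be wired (the prompt strings here are illustrative placeholders, not the project's actual prompts):

```python
# Illustrative depth -> system-prompt map; the real prompts are much longer.
DEPTH_PROMPTS = {
    "low": "Explain with everyday analogies. No math, no gate names.",
    "intermediate": "Name each gate and walk through the circuit step by step.",
    "high": "Use |psi> ket notation and trace the state evolution.",
    "very_high": "Use full Hamiltonian and density-matrix notation; discuss noise.",
}

def build_prompt(depth, code):
    """Pick the system prompt for the requested depth, defaulting to intermediate."""
    system = DEPTH_PROMPTS.get(depth, DEPTH_PROMPTS["intermediate"])
    return f"{system}\n\nExplain this Braket circuit:\n{code}"
```

Keeping the depths as data rather than four separate agents makes it trivial to add a fifth level later.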
At `very_high`, the agent traces the full state evolution:
$$ \lvert\psi_0\rangle = \lvert 00 \rangle \;\xrightarrow{H \otimes I}\; \frac{1}{\sqrt{2}}(\lvert 0 \rangle + \lvert 1 \rangle) \otimes \lvert 0 \rangle \;\xrightarrow{\text{CNOT}}\; \frac{1}{\sqrt{2}}(\lvert 00 \rangle + \lvert 11 \rangle) $$
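The trace above can be checked numerically with a tiny hand-rolled state-vector simulation (pure Python; amplitudes ordered $\lvert 00\rangle, \lvert 01\rangle, \lvert 10\rangle, \lvert 11\rangle$ with qubit 0 as the left bit):

```python
import math

def apply(matrix, state):
    """Multiply a 4x4 gate matrix by a 4-amplitude state vector."""
    return [sum(matrix[r][c] * state[c] for c in range(4)) for r in range(4)]

h = 1 / math.sqrt(2)
# H on qubit 0, identity on qubit 1 (H tensor I).
H_I = [[h, 0, h, 0],
       [0, h, 0, h],
       [h, 0, -h, 0],
       [0, h, 0, -h]]
# CNOT with qubit 0 as control: swaps |10> and |11>.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1, 0, 0, 0]                    # start in |00>
state = apply(CNOT, apply(H_I, state))  # -> (|00> + |11>) / sqrt(2)
```

After both gates, only the $\lvert 00\rangle$ and $\lvert 11\rangle$ amplitudes are nonzero, each equal to $1/\sqrt{2}$, matching the equation above.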
### Frontend
We built a React + TypeScript chat UI with:
- Agent status visualization — See each agent transition from idle → running → done in real time
- Code display with copy-to-clipboard
- Circuit metrics — Depth, gate count, qubit count extracted from the Validator's analysis
- Educational depth selector — Switch between Low / Intermediate / High / Very High
- Dark/light theme with smooth transitions
The frontend communicates with a FastAPI backend via a Vite proxy, keeping the development experience seamless.
## Challenges We Faced
### 1. Serialization of Braket Objects
The Braket SDK's `Circuit` class is not JSON-serializable. When our FastAPI endpoint tried to return the orchestrator's result dictionary (which contained live `Circuit` objects), Pydantic threw `PydanticSerializationError`. We solved this by recursively encoding the result with `jsonable_encoder`, falling back to `str()` for unknown types.
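The fallback logic is easy to reproduce without FastAPI; a stand-alone sketch of the same idea:

```python
def to_jsonable(obj):
    """Recursively convert to JSON-safe types, stringifying anything unknown."""
    if isinstance(obj, dict):
        return {str(k): to_jsonable(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_jsonable(v) for v in obj]
    if isinstance(obj, (str, int, float, bool)) or obj is None:
        return obj
    return str(obj)  # e.g. a live Circuit object falls back to its string form
```

FastAPI's `jsonable_encoder` accepts a similar custom fallback, so the endpoint can return the orchestrator's dict unchanged.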
### 2. Agent Initialization Complexity
Each agent requires specific dependencies (Retriever, Generator, Analyzer, Bedrock clients). Early versions of `server.py` tried to instantiate agents directly, causing cascading `TypeError`s. We resolved this by reusing the CLI's `get_orchestrator()` factory function, which correctly wires the entire RAG → Agent → Orchestrator stack.
### 3. Educational Depth Propagation
The educational depth selector in the UI needed to flow through five layers: React state → API client → FastAPI request model → Orchestrator → `EducationalAgent`. Initially, "Low" mode disabled the agent entirely instead of running it with simplified prompts. We fixed this by always enabling the agent and passing the depth as a separate parameter.
### 4. Optimizer ⟷ Validator Loop Stability
The optimization loop could sometimes produce code that was syntactically different but semantically equivalent, causing the loop to never terminate (the optimizer kept "improving" while the validator kept finding minor differences). We added a maximum iteration cap and a code-equality check to break early.
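The early-exit check can be as simple as comparing normalized syntax trees, so that formatting-only "improvements" stop the loop. A sketch using the standard library's `ast` module:

```python
import ast

def same_code(a: str, b: str) -> bool:
    """True if two snippets parse to the same AST (ignores whitespace/comments)."""
    try:
        return ast.dump(ast.parse(a)) == ast.dump(ast.parse(b))
    except SyntaxError:
        return a == b  # fall back to literal comparison for unparsable code
```

With a check like this, the loop breaks as soon as the Optimizer returns code syntactically identical to the previous iteration, and the $k_{\max}$ cap guards against everything else.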
### 5. Frontend Dependency Conflicts
The Lovable-generated frontend shipped with vite@8 and @vitejs/plugin-react@6, but the lovable-tagger plugin required vite < 8. Resolving the peer dependency chain required downgrading to vite@7 and @vitejs/plugin-react@5.
## What We Learned
- Multi-agent > single-prompt — Splitting generation, validation, optimization, and explanation into separate agents with distinct system prompts and models dramatically improved output quality compared to a single "do everything" prompt.
- RAG grounding matters — Without the knowledge base, the Designer Agent hallucinated non-existent Braket APIs (~40% of the time). With RAG, hallucination dropped to under 8%.
- Validation loops are essential — The first generated code passes validation only ~60% of the time. After the Optimizer ⟷ Validator loop, the success rate climbs to 90%+.
- Educational depth is not binary — Users don't want explanations "on" or "off." A physics professor and a high school student need fundamentally different explanations of the same circuit. Four depth levels cover the spectrum well.
- Amazon Bedrock Nova models are production-ready — Nova Pro handles code generation reliably, Nova Premier catches subtle quantum logic errors that Pro misses, and Nova 2 Lite generates explanations fast enough for interactive use.
## What's Next
- Streaming responses — Show agent output as it's generated, not just after the full pipeline completes
- Knowledge base expansion — Scale from 100+ to 2,500+ curated Braket patterns
- Hardware-aware optimization — Optimize circuits for specific QPU topologies (IonQ, Rigetti)
- User studies — Measure educational effectiveness with real quantum computing students
- RL-based optimization — Train a reward model on circuit quality metrics for reinforcement learning-guided optimization
## Built With
- amazon-bedrock
- amazon-braket-sdk
- amazon-nova-2-lite
- amazon-nova-premier
- amazon-nova-pro
- amazon-web-services
- amazon.nova-2-multimodal-embeddings-v1:0
- bge-embeddings
- braket-local-simulator
- databases
- faiss
- fastapi
- framer-motion
- html/css
- pydantic
- python
- react
- sentence-transformers
- shadcn/ui
- tailwind-css
- typescript
- uvicorn
- vite