Inspiration
Last year I spent two to three months of back-to-back certification prep across AWS, Azure, GCP, and Oracle. Eight or nine exams. I still failed the AWS Solutions Architect Professional by 17 points. Not from lack of study, but from studying the wrong things. I would go deep on something that interested me, burn two hours on a topic worth 2% of the exam, and move on feeling productive but not prepared. I needed something that would keep the focus tight and drill weak spots, not interesting ones.
What it does
SonicCert Coach is a multi-agent AI tutoring system for AWS certification prep. It runs an adaptive quiz session that targets your weakest domains first, adjusts difficulty based on your performance, and tracks your progress across sessions.
One orchestrator agent coordinates three specialists: a tutor that generates questions, an explainer that breaks down concepts on demand, and a hint agent that nudges you without giving away the answer. Two pure Python tools handle grading and memory because those are deterministic problems that do not need a language model. At the end of each session, you get a domain breakdown showing exactly where your gaps are.
How we built it
The system runs on Amazon Bedrock using the Converse API tool use pattern. Nova Pro handles orchestration, since multi-turn planning and tool selection across an entire session demand its stronger reasoning. Nova Lite handles the three specialist agents, where speed and cost efficiency matter more than depth. The grader and memory are pure Python.
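To make the tool use pattern concrete, here is a minimal sketch of how an orchestrator might declare specialist agents as Converse tools and route a returned toolUse block to a handler. The tool names, schemas, and handler signatures are illustrative, not the project's actual code; the dict shapes follow the Bedrock Converse API's toolConfig and toolResult formats.

```python
# Hypothetical tool declarations for two of the specialists.
TOOL_CONFIG = {
    "tools": [
        {"toolSpec": {
            "name": "generate_question",
            "description": "Tutor agent: produce one exam-style question for a domain.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"domain": {"type": "string"},
                               "difficulty": {"type": "string"}},
                "required": ["domain"],
            }},
        }},
        {"toolSpec": {
            "name": "explain_concept",
            "description": "Explainer agent: break down a concept on demand.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"concept": {"type": "string"}},
                "required": ["concept"],
            }},
        }},
    ]
}

def dispatch(tool_use, handlers):
    """Route one toolUse block from a Converse response to its handler,
    and wrap the result in the toolResult shape Converse expects back."""
    name = tool_use["name"]
    result = handlers[name](**tool_use["input"])
    return {"toolResult": {"toolUseId": tool_use["toolUseId"],
                           "content": [{"json": result}]}}
```

The orchestrator would pass `TOOL_CONFIG` as the `toolConfig` argument to `converse()` and feed each `dispatch()` result back as the next user message.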
Exam syllabi for AIF-C01 and AIP-C01 are stored as JSON files that drive adaptive topic selection. Questions are weighted toward your weakest domains (70%) with 30% exploration to avoid tunnel vision. A circuit breaker wraps all Bedrock calls for resilience, and progress is persisted locally across sessions.
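The 70/30 exploit-explore split can be sketched in a few lines. This is a simplified illustration (it targets the single weakest domain rather than weighting proportionally); the function name and the accuracy-map input are assumptions, not the project's real interface.

```python
import random

def pick_domain(accuracy_by_domain, exploit=0.7, rng=random):
    """Pick the next quiz domain: 70% of the time drill the weakest
    domain (lowest accuracy so far), 30% explore uniformly to avoid
    tunnel vision. `accuracy_by_domain` maps domain -> fraction correct."""
    if rng.random() < exploit:
        return min(accuracy_by_domain, key=accuracy_by_domain.get)
    return rng.choice(sorted(accuracy_by_domain))
```

Injecting the `rng` makes the behaviour deterministic under test, which matters when the selection logic sits between two LLM calls.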
128 tests cover the full stack, including live AWS Bedrock connectivity checks.
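The circuit breaker around the Bedrock calls can be sketched roughly as below. This is a minimal illustration of the pattern, not the project's implementation; thresholds and naming are assumptions.

```python
import time

class CircuitBreaker:
    """After `max_failures` consecutive errors, fail fast for
    `reset_after` seconds before allowing one trial call through."""
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: skipping Bedrock call")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result
```

Taking the clock as a parameter is what lets the 128-test suite exercise the open and half-open states without real timeouts.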
Challenges we ran into
The project was originally called SonicCert Coach for a reason. The plan was to use Amazon Nova Sonic for real-time bidirectional audio so you could just talk to the tutor. After a few days of debugging, it became clear that the experimental AWS SDK (aws_sdk_bedrock_runtime) hangs indefinitely on invoke_model_with_bidirectional_stream. The same hang reproduces on official AWS samples. boto3 does not support bidirectional streaming yet. It is a real bug in early-access code, not a configuration issue. Pivoting was the right call: multi-agent orchestration with tool use turned out to be a better fit for structured cert prep than a voice interface would have been.
Accomplishments that we're proud of
The adaptive learning loop works exactly as intended. The orchestrator pulls session history, identifies weak domains, and targets them proportionally to real exam weights without any manual configuration. Watching it zero in on Responsible AI after two wrong answers in that domain is exactly the behavior we wanted.
128 tests pass, including 15 live AWS Bedrock connectivity checks against real Nova Pro and Nova Lite endpoints. The whole system was designed, built, and validated in two days.
What we learned
Experimental SDKs are a real risk in time-constrained projects. We spent a few days on a dead end before pivoting. The lesson: validate your foundational API call in the first hour, not after the architecture is built around it.
On the multi-agent side: the tool use pattern in Bedrock Converse is powerful but requires careful schema design. The orchestrator occasionally returns multiple tool calls in a single turn, which is a real edge case that needs explicit handling.
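Handling that edge case mostly comes down to never assuming one toolUse block per turn. A hedged sketch of the extraction step, assuming the Converse response message shape (a `content` list whose blocks may be `text` or `toolUse`):

```python
def extract_tool_uses(message):
    """Collect every toolUse block from an assistant message's content.
    Converse can return several tool calls in a single turn, so callers
    must iterate over all of them rather than grab the first."""
    return [block["toolUse"] for block in message.get("content", [])
            if "toolUse" in block]
```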
Grading and memory being pure Python was the right call from the start. Deterministic problems do not belong in language models.
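As an illustration of why: the entire grader plus the domain breakdown reduces to ordinary dict bookkeeping. Field names here are hypothetical, not the project's real schema.

```python
from collections import defaultdict

def grade_session(responses):
    """Deterministically grade a session and build the per-domain report.
    `responses` is a list of dicts with 'domain', 'selected', and
    'correct' keys. Returns domain -> (right, total)."""
    breakdown = defaultdict(lambda: [0, 0])
    for r in responses:
        right = r["selected"].strip().upper() == r["correct"].strip().upper()
        breakdown[r["domain"]][0] += int(right)
        breakdown[r["domain"]][1] += 1
    return {domain: tuple(counts) for domain, counts in breakdown.items()}
```

No model call can drift, hallucinate a grade, or cost money here, which is the whole argument.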
What's next for SonicCert Coach
The syllabus system already supports adding any AWS cert by dropping in a JSON file. CLF-C02, SAP-C02, and the new AWS AI Professional (AIP-C01) are the immediate next additions.
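A drop-in syllabus file might look like the sketch below. The field names and validation helper are assumptions about the format, not the project's actual schema; the AIF-C01 domain weights shown are from the published exam guide.

```python
import json

# Hypothetical shape of a syllabus JSON file for one cert.
SAMPLE_SYLLABUS = json.loads("""
{
  "exam": "AIF-C01",
  "domains": [
    {"name": "Fundamentals of AI and ML", "weight": 0.20},
    {"name": "Fundamentals of Generative AI", "weight": 0.24},
    {"name": "Applications of Foundation Models", "weight": 0.28},
    {"name": "Guidelines for Responsible AI", "weight": 0.14},
    {"name": "Security, Compliance, and Governance", "weight": 0.14}
  ]
}
""")

def validate_syllabus(syllabus):
    """Sanity check before a new cert file is dropped in:
    domain weights should sum to 1 (within rounding)."""
    total = sum(d["weight"] for d in syllabus["domains"])
    return abs(total - 1.0) < 1e-6
```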
Longer term: a web interface so the session report renders properly, multi-user support with shared weak-area analytics, and eventually revisiting voice input once the Nova Sonic SDK stabilises.
Built With
- amazon-bedrock
- amazon-nova-lite
- amazon-nova-pro
- amazon-web-services
- aws-iam
- boto3
- python
- terraform