## Inspiration
In today's fast-moving tech landscape, most learners and developers don't struggle with access to information — they struggle with not knowing what they don't know.
Traditional learning platforms tell you what to study. But they rarely diagnose why you're stuck or how your thinking is falling short of industry standards.
That gap between "I studied it" and "I can apply it like a professional" is exactly what Gaplytics is built to close.
## What It Does
Gaplytics is an adaptive AI platform powered by Amazon Bedrock that analyzes a student's reasoning patterns and identifies capability gaps in real time.
Instead of simply answering questions, it uses Socratic guidance: asking targeted follow-up questions, probing assumptions, and nudging the learner toward industry-grade thinking rather than giving away answers.
Key features:
- Gap Detection — identifies where reasoning breaks down
- Personalized Guidance — adaptive follow-ups based on your specific weak points
- Socratic AI Mentor — challenges you to think deeper, not just recall facts
- Serverless Architecture — AWS Lambda + Bedrock for scalable, low-latency inference
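The serverless path in the last bullet can be sketched as a small Lambda handler that assembles a Claude request for Bedrock. This is a hedged illustration in Python (the deployed backend uses TypeScript); the model ID, system prompt wording, and handler shape are assumptions, not the project's actual code:

```python
# Sketch of the serverless inference path: a Lambda handler that builds a
# Claude "messages" payload for Amazon Bedrock. Model ID and prompt wording
# are illustrative assumptions.
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed model choice

def build_claude_request(student_response: str, concept: str) -> dict:
    """Build a Bedrock messages payload asking for a Socratic follow-up."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "system": (
            "You are a Socratic mentor. Probe the student's reasoning on "
            f"'{concept}' with questions; never reveal the answer directly."
        ),
        "messages": [{"role": "user", "content": student_response}],
    }

def handler(event, context):
    """Lambda entry point (the Bedrock call is shown but not executed here)."""
    body = json.loads(event["body"])
    request = build_claude_request(body["response"], body["concept"])
    # In the deployed function this payload would be sent via boto3:
    #   bedrock = boto3.client("bedrock-runtime")
    #   out = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(request))
    return {"statusCode": 200, "body": json.dumps(request)}
```

Keeping the payload builder as a pure function makes it easy to unit-test without touching AWS.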
## How We Built It
| Layer | Technology |
|---|---|
| AI / LLM | Amazon Bedrock (Claude) |
| Backend | AWS Lambda (TypeScript) |
| Orchestration | Bedrock Agents / prompt chaining |
| Frontend | Streamlit / REST API |
| Version Control | GitHub |
The core pipeline works like this:
1. Student submits a response or solution
2. Bedrock analyzes it against an expert reasoning benchmark
3. A capability gap score is computed per concept area
4. The system generates a Socratic follow-up tailored to the weakest gap
5. The loop continues until the reasoning meets the threshold
$$\text{Gap Score} = 1 - \frac{\text{Demonstrated Reasoning Depth}}{\text{Expected Industry Reasoning Depth}}$$
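The formula above, plus the "tailored to the weakest gap" step, can be sketched in a few lines. This is a minimal illustration that assumes reasoning depth is a numeric count of matched expert-benchmark steps; the function names and the depth metric are assumptions:

```python
# Gap Score = 1 - demonstrated / expected, per concept area, clamped to [0, 1].
# Depth values are assumed to be counts of matched expert reasoning steps.

def gap_score(demonstrated_depth: float, expected_depth: float) -> float:
    """Compute the capability gap for one concept area."""
    if expected_depth <= 0:
        raise ValueError("expected depth must be positive")
    return max(0.0, min(1.0, 1.0 - demonstrated_depth / expected_depth))

def weakest_concept(depths: dict[str, tuple[float, float]]) -> str:
    """Return the concept area with the highest gap score,
    i.e. the target for the next Socratic follow-up."""
    return max(depths, key=lambda c: gap_score(*depths[c]))
```

For example, a student who shows 2 of 5 expected reasoning steps on recursion scores a 0.6 gap there, so the next follow-up targets recursion.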
## What We Learned
- Prompt engineering for Socratic dialogue is significantly harder than standard Q&A prompting — the model must guide without revealing
- AWS Lambda cold start latency matters more than expected for conversational AI flows — we optimized using provisioned concurrency
- Structuring capability taxonomies (breaking a skill into measurable reasoning sub-components) is as much a UX problem as an AI problem
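The provisioned-concurrency fix mentioned above can be expressed declaratively. A hedged AWS SAM sketch follows; the resource name, runtime, and concurrency count are illustrative assumptions, not the project's actual template:

```yaml
# Sketch: enabling provisioned concurrency on the analysis Lambda via AWS SAM
# to avoid cold starts in conversational flows. Names and counts are assumed.
Resources:
  GapAnalysisFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: nodejs18.x
      AutoPublishAlias: live   # an alias is required for provisioned concurrency
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 2
```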
## Challenges We Faced
- Avoiding answer leakage — tuning the Socratic prompt so Bedrock probes rather than explains was an iterative challenge
- Defining "capability gap" meaningfully — we had to move beyond keyword matching to genuine reasoning-depth evaluation
- Latency vs. depth tradeoff — deeper multi-turn analysis takes more tokens; we had to balance response quality with real-time UX expectations
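One simple guard against the answer-leakage problem above can be sketched as a post-processing check: if the model's follow-up quotes a verbatim run of the reference answer, fall back to a generic probe. The n-gram heuristic and all names here are illustrative assumptions, not the project's actual safeguard:

```python
# Illustrative answer-leakage guard: reject a Socratic follow-up that quotes
# a 4-token run of the reference answer verbatim. Heuristic is an assumption.
import re

FALLBACK_PROBE = "What assumption is your current approach relying on?"

def leaks_answer(reply: str, reference_answer: str) -> bool:
    """True if the mentor's reply contains a 4-word chunk of the answer."""
    tokens = re.findall(r"\w+", reference_answer.lower())
    normalized_reply = re.sub(r"\W+", " ", reply.lower())
    for i in range(len(tokens) - 3):
        if " ".join(tokens[i:i + 4]) in normalized_reply:
            return True
    return False

def safe_followup(reply: str, reference_answer: str) -> str:
    """Pass the reply through, or substitute a generic probe if it leaks."""
    return FALLBACK_PROBE if leaks_answer(reply, reference_answer) else reply
```

In practice a string check like this only complements prompt tuning, since the model can paraphrase the answer without quoting it.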
## What's Next for Gaplytics
- Expand to support code review and system design gap analysis
- Build a learner dashboard tracking gap closure over time
- Integrate with platforms like LeetCode and GitHub for automatic context ingestion
- Fine-tune domain-specific models for Data Science, Cloud, and Software Engineering tracks
Built with "Love" for the Frostbyte Hackathon 2026
## Built With
- amazonbedrock
- awslambda
- python
- typescript