About the Project
Inspiration
We spend too much time hunting for code — switching tabs, copy-pasting from answers that barely fit our stack, and rewriting examples to match our conventions. We wanted a system that blends documentation, community knowledge, and AI planning to produce minimal, correct, and idiomatic snippets.
What We Built
- A Django + DRF backend with an AI-first Smart Search pipeline:
  - Context7 docs
  - Local DB
  - External providers
  - AI generation (fallback or preferAi)
  - OpenAI planning → curation → code cleaning
- A React + Vite frontend with a consistent card UI (syntax-highlighted previews, source badges), landing-based auth flow, and save-to-backend actions.
- A Model Context Protocol (MCP) server to fetch Context7 docs/snippets and optionally summarize with OpenAI, usable from editors/agents.
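The Smart Search provider chain above (Context7 docs → local DB → external providers → AI generation, with an optional preferAi ordering) can be sketched as a simple priority fallback. Function names and signatures here are illustrative assumptions, not the project's actual code:

```python
from typing import Callable, Optional

# Hypothetical provider signature: takes a query, returns a snippet or None.
Provider = Callable[[str], Optional[str]]

def smart_search(query: str, providers: list[Provider],
                 ai_generate: Provider, prefer_ai: bool = False) -> Optional[str]:
    """Try providers in priority order (Context7 docs, local DB, external),
    falling back to AI generation -- or try AI first when prefer_ai is set."""
    ordered = [ai_generate, *providers] if prefer_ai else [*providers, ai_generate]
    for provider in ordered:
        result = provider(query)
        if result:           # first non-empty hit wins
            return result
    return None
```

The preferAi flag only changes the ordering; the fallback loop itself stays identical, which keeps the two modes easy to reason about.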
How We Built It
- Spec-to-code with Kiro:
  - `.kiro/specs/*` laid out requirements, routes, and flows.
  - `.kiro/steering/*` enforced security/UX/performance guidelines.
  - `.kiro/hooks/*` defined observability.
- Backend uses Python Decouple for environment management (no secrets in code) and DRF Spectacular for API docs.
- Frontend uses React Router, TanStack Query, and our `CompactSnippetCodeHighlighter` for a fast, modern UI.
- MCP server exposes tools/resources (Context7 and summarize) for editor/agent integrations.
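The Python Decouple setup mentioned above typically looks like the sketch below in `settings.py`. The exact variable names (beyond the standard Django ones) are assumptions for illustration:

```python
# settings.py -- illustrative python-decouple usage; values come from a
# .env file or the process environment, never from source control.
from decouple import config, Csv

SECRET_KEY = config("SECRET_KEY")                      # required; fails fast if missing
DEBUG = config("DEBUG", default=False, cast=bool)      # "true"/"false" strings are cast
ALLOWED_HOSTS = config("ALLOWED_HOSTS", default="", cast=Csv())
OPENAI_API_KEY = config("OPENAI_API_KEY", default="")  # hypothetical key name
```

Because every value is declared with an explicit default (or none, to fail fast), local, CI, and production environments can diverge only in their `.env` files, not in code.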
What We Learned
- Orchestrating providers with AI planning/curation significantly improves snippet quality.
- Clean-up is essential: extracting the longest fenced block and normalizing whitespace yields usable code.
- Decouple + settings-driven config eliminates environment mismatch and secret sprawl.
- MCP makes it trivial to reuse the same capabilities in multiple environments (IDE, agents, terminals).
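The clean-up step described above — extract the longest fenced block from an AI/doc response and normalize its whitespace — can be sketched with a small regex helper (a minimal sketch, not the project's exact implementation):

```python
import re
import textwrap

def extract_longest_fenced_block(text: str) -> str:
    """Return the longest ``` fenced code block in `text`, with trailing
    spaces stripped and common indentation removed; fall back to the raw
    text when no fence is present."""
    blocks = re.findall(r"```[^\n]*\n(.*?)```", text, flags=re.DOTALL)
    if not blocks:
        return text.strip()
    longest = max(blocks, key=len)
    lines = [line.rstrip() for line in longest.strip("\n").splitlines()]
    return textwrap.dedent("\n".join(lines))
```

Picking the longest block is a pragmatic heuristic: model answers often include a one-line usage fence alongside the full example, and the full example is almost always the longer one.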
Challenges
- Balancing latency vs. quality when adding AI planning/curation steps.
- Designing a robust returnUrl flow for unauthenticated users (landing → login → deep link).
- Ensuring consistent error shapes and observability across components.
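The returnUrl flow above (landing → login → deep link) hinges on preserving the original path while refusing off-site targets. A minimal server-side sketch, with hypothetical paths and no claim to match the project's frontend routing:

```python
from urllib.parse import urlencode, urlsplit

def login_redirect(target_path: str, login_path: str = "/login") -> str:
    """Send an unauthenticated user to login, carrying the deep link as a
    returnUrl query parameter. Absolute URLs are rejected to avoid an
    open-redirect: only same-site relative paths survive the round trip."""
    if urlsplit(target_path).netloc:  # has a host -> potentially off-site
        target_path = "/"
    return f"{login_path}?{urlencode({'returnUrl': target_path})}"
```

After authentication, the login handler reads `returnUrl` back and redirects to it, completing the landing → login → deep-link loop.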
A Little Math (LaTeX)
We treat results ranking as a blend of sources. Let $s_i$ be scores from Context7, local, and external providers, and let $\alpha_i$ be learned weights. Our combined score is: $$ \text{score} = \sum_i \alpha_i s_i\,,\quad \sum_i \alpha_i = 1,\; \alpha_i \ge 0 $$ AI curation post-processes the top-$k$ to maximize code relevance and cleanliness.
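The convex blend above translates directly into code. This is a faithful rendering of the formula, though the source names and weight values are illustrative:

```python
def blended_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-source scores s_i with learned weights alpha_i,
    enforcing the constraints alpha_i >= 0 and sum(alpha_i) == 1."""
    assert all(w >= 0 for w in weights.values())
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[source] * scores[source] for source in scores)
```

With example scores {context7: 0.9, local: 0.5, external: 0.2} and weights {0.5, 0.3, 0.2}, the blend is 0.45 + 0.15 + 0.04 = 0.64; AI curation then re-ranks only the top-k of these blended results.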
Built With
- Backend: Django 5, Django REST Framework, SimpleJWT, DRF Spectacular, Python Decouple
- Frontend: React 18, Vite, TypeScript, React Router, TanStack Query, Tailwind/Radix
- AI/Docs: OpenAI API, Context7 API
- MCP: `@modelcontextprotocol/sdk`, Node.js, TypeScript
- Database: PostgreSQL (local SQLite option)
Submitter Details
- Submitter Type: Team
- Country of Residence: Zambia
- If resident in Canada: N/A
Project Timeline
- Existing prior to June 24, 2025? No
- Significant Updates During Submission:
- AI planning and curation integrated into Smart Search
- My Snippets UI refresh with code highlighting and save-to-backend
- MCP server for Context7 + summarize tools
- Kiro specs/steering/hooks applied end-to-end
Category
- Software, AI/ML, Developer Tools (choose one per rules)