Inspiration
Every developer has been burned by a bad dependency. You npm install something with 50k stars, ship to production, and six months later discover the sole maintainer ghosted, there's an unpatched CVE, and the last commit was a year ago. Tools like Dependabot and Snyk catch known vulnerabilities only after you've already installed the package. We wanted to shift that evaluation left -- before the dependency enters your codebase -- and make the result comprehensive enough that you'd actually trust it.
What it does
DepScope is a multi-agent system that performs due diligence on any open-source dependency. Paste a GitHub URL or package name and three agents work in parallel:
- Repo Health Analyzer pulls repository signals from GitHub: commit frequency, contributor distribution, issue response time, bus factor, release cadence, and dependency count.
- External Researcher uses You.com to search for known CVEs, community sentiment across Reddit and Hacker News, deprecation notices, and actively maintained alternatives.
- Risk Scorer synthesizes everything into a letter grade (A through F) with a radar chart across five dimensions: maintenance, security, community, documentation, and stability. Findings are ranked by severity (CRITICAL / HIGH / MEDIUM / LOW) with specific, actionable recommendations.
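The grading step above can be sketched as follows. This is a minimal illustration, not DepScope's actual weighting: the unweighted mean and the cutoffs are assumptions.

```javascript
// Hypothetical sketch: collapse the five radar-chart dimension scores (0-100)
// into a letter grade. DepScope's real weights and cutoffs are internal;
// the even average and the 90/80/70/60 thresholds here are assumptions.
const DIMENSIONS = ["maintenance", "security", "community", "documentation", "stability"];

function letterGrade(scores) {
  // Unweighted mean across the five dimensions.
  const avg = DIMENSIONS.reduce((sum, d) => sum + scores[d], 0) / DIMENSIONS.length;
  if (avg >= 90) return "A";
  if (avg >= 80) return "B";
  if (avg >= 70) return "C";
  if (avg >= 60) return "D";
  return "F";
}
```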
When the system detects a critical finding, it calls your phone via Plivo with a spoken briefing and offers to text you the full report.
DepScope also gets smarter over time. It tracks patterns across every analysis in a session, surfacing aggregate insights like "single-maintainer repos score 40% lower on average" and adjusting its risk heuristics based on accumulated data.
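The session-level pattern tracking can be sketched like this. The field names and the specific insight are illustrative assumptions, not DepScope's internal schema.

```javascript
// Hypothetical sketch of the session pattern tracker: accumulate each
// analysis, then surface aggregate insights such as how single-maintainer
// repos score relative to the rest. Field names are assumptions.
const analyses = [];

function recordAnalysis({ pkg, score, maintainers }) {
  analyses.push({ pkg, score, maintainers });
}

function singleMaintainerInsight() {
  const solo = analyses.filter((a) => a.maintainers === 1);
  const multi = analyses.filter((a) => a.maintainers > 1);
  if (!solo.length || !multi.length) return null;
  const mean = (xs) => xs.reduce((s, a) => s + a.score, 0) / xs.length;
  const gapPct = Math.round((1 - mean(solo) / mean(multi)) * 100);
  return `single-maintainer repos score ${gapPct}% lower on average`;
}
```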
How we built it
- Composio orchestrates the three-agent pipeline. Each agent is registered as a Composio tool with structured inputs and outputs. Composio handles the coordination, passing repo health data and research findings into the risk scorer once the first two agents complete.
- You.com Search API powers the external research layer. For each package, we run three targeted queries: CVE/vulnerability search, community sentiment analysis, and alternative library discovery. Results are parsed into structured findings.
- Gemini synthesizes the combined data from all three agents into a risk profile, letter grade, severity-ranked findings, and an opinionated verdict.
- Plivo delivers voice alerts when critical findings are detected. The call includes a spoken briefing of the top finding and a DTMF menu: press 1 to receive the full report via SMS.
- Lovable powers the real-time dashboard. It shows live agent status with progress messages, a radar chart, severity-coded findings, an alternatives comparison table, and aggregated pattern insights.
- Render hosts the backend and serves the dashboard.
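The Plivo voice flow responds to the answer URL with Plivo XML. A hedged sketch of that response, assuming the `/plivo/digits` path and the briefing text (both illustrative):

```javascript
// Hedged sketch of the Plivo answer-URL response. When the alert call
// connects, Plivo fetches this XML: <Speak> reads the top finding aloud and
// <GetDigits> collects the "press 1 for the full report via SMS" choice.
// The briefing text and the /plivo/digits path are illustrative assumptions.
function buildAnswerXml(baseUrl, briefing) {
  return [
    "<Response>",
    `  <GetDigits action="${baseUrl}/plivo/digits" method="POST" numDigits="1" timeout="10">`,
    `    <Speak>${briefing} Press 1 to receive the full report by text message.</Speak>`,
    "  </GetDigits>",
    "  <Speak>No input received. Goodbye.</Speak>",
    "</Response>",
  ].join("\n");
}
```

An Express route would serve this with something like `res.type("application/xml").send(buildAnswerXml(baseUrl, briefing))`.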
The orchestration layer streams agent status updates to the frontend via Server-Sent Events so judges (and users) can watch the agents work in real time.
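The SSE stream boils down to writing correctly framed events over a long-lived response. A minimal sketch, where the route path and the `agent_status` event name are assumptions:

```javascript
// Hedged sketch of the Server-Sent Events status stream. sseFrame() formats
// one event per the SSE wire format (event: / data: lines, blank-line
// terminator). The route and "agent_status" event name are assumptions.
function sseFrame(event, data) {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Express usage (sketch):
// app.get("/analyze/:pkg/events", (req, res) => {
//   res.set({
//     "Content-Type": "text/event-stream",
//     "Cache-Control": "no-cache",
//     Connection: "keep-alive",
//   });
//   res.write(sseFrame("agent_status", { agent: "repo-health", state: "running" }));
//   // ...write more frames as agents progress, then res.end() when done.
// });
```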
Challenges we ran into
- Gemini's free tier shares a single rate-limit pool across all models, so our three-model fallback chain didn't help once the daily quota hit zero. We had to build a full demo-cache layer so the pipeline never dead-ends.
- Composio's SDK documentation was sparse on custom tool creation. We discovered through trial and error that the parameter is inputParams (not inputParameters) and that z.record(z.any()) crashes internally, forcing us to serialize complex objects as JSON strings.
- Plivo voice calls require a publicly reachable answer URL, but our Render deploys took 5+ minutes on the free tier, so we built getBaseUrl() to derive the callback URL from request headers as a fallback.
- US SMS requires 10DLC campaign registration, which killed our "press 1 for the full report" flow until we restructured to prioritize voice delivery.
- Getting all five APIs (GitHub, You.com, Gemini, Composio, Plivo) to work together reliably meant every integration needed its own retry logic, fallback path, and graceful degradation. The happy path was easy; making the system robust against any combination of failures was the real engineering challenge.
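The getBaseUrl() fallback mentioned above could look roughly like this; the exact header precedence is an assumption about our implementation:

```javascript
// Hedged reconstruction of getBaseUrl(): derive the publicly reachable
// callback base from the incoming request's headers when the deployed URL
// isn't known ahead of time. Header precedence here is an assumption;
// Render (like most PaaS hosts) terminates TLS at a proxy, so the original
// scheme arrives in x-forwarded-proto.
function getBaseUrl(req) {
  const proto = req.headers["x-forwarded-proto"] || "https";
  const host = req.headers["x-forwarded-host"] || req.headers["host"];
  return `${proto}://${host}`;
}
```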
Accomplishments that we're proud of
The entire pipeline runs end-to-end: GitHub analysis, You.com research, Gemini synthesis, Composio orchestration, Plivo delivery. Agents 1 and 2 run in parallel via Promise.allSettled, and every step has a fallback (cached data, model rotation, local generation), so the demo never breaks even when an API is down. The frontend streams agent progress in real time via SSE, so you can watch each agent start, work, and complete.
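The parallel-with-fallbacks orchestration can be sketched as below. The function names and fallback shapes are illustrative assumptions; the Promise.allSettled structure is the part taken from our design.

```javascript
// Hedged sketch of the orchestration: Agents 1 and 2 run in parallel via
// Promise.allSettled, each failed agent is replaced by its fallback
// (cached or locally generated data), and the risk scorer then runs on
// whatever survived. Function names are illustrative assumptions.
async function runPipeline(pkg, { repoHealth, externalResearch, riskScorer, fallbacks }) {
  const [health, research] = await Promise.allSettled([
    repoHealth(pkg),
    externalResearch(pkg),
  ]);
  const healthData =
    health.status === "fulfilled" ? health.value : fallbacks.repoHealth(pkg);
  const researchData =
    research.status === "fulfilled" ? research.value : fallbacks.externalResearch(pkg);
  // Agent 3 always runs: the scorer never dead-ends on upstream failure.
  return riskScorer(pkg, healthData, researchData);
}
```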
What we learned
Rate limits are the real boss fight in multi-API systems: we burned through Gemini's entire daily quota during testing and had to redesign around it. Building for demo reliability is fundamentally different from building for correctness: every external call needs a fallback, every fallback needs to produce plausible data, and the UI needs to handle both paths gracefully. We also learned Composio's custom tool API inside and out (and filed mental bug reports along the way).
What's next for DepScope
- CI/CD integration: run DepScope as a GitHub Action that blocks PRs adding risky dependencies
- Watchlist mode: monitor your existing dependencies and alert when maintenance signals degrade
- Lockfile bulk scan: paste a package-lock.json and evaluate your entire dependency tree at once
- Historical tracking: compare a package's risk score over time to catch slow decline
Built With
- composio
- express.js
- gemini
- github-api
- javascript
- lovable
- node.js
- plivo
- render
- you.com