Inspiration
You know that feeling when you find a cool new MCP server, but you hesitate to install it? You’re wondering: Is this thing actually maintained? Does it follow the spec? Is it going to break my setup?
I built MCP Quick Check because I was tired of the guessing game. The Model Context Protocol ecosystem is exploding, which is awesome, but it also feels a bit like the Wild West right now. I wanted a tool that felt like a "health check" or a Carfax report for MCP servers—something that could look under the hood and tell me if a server is a "Good" or "Poor" choice in seconds, without me having to manually dig through JSON configs and GitHub issues.
What it does
MCP Quick Check is a CLI tool that runs a pre-installation validation on any MCP server. You give it a server name (like ai.exa/exa), and it generates a detailed report card based on four key metrics:
Registry Status: Is it active and up-to-date in the official registry?
GitHub Health: Is the repo popular? Is it actively maintained, or is it a ghost town of unclosed issues?
Code Quality: Does it have a README? A license? CI/CD pipelines?
Protocol Compliance: Does it actually adhere to the 2025-11-25 spec? What capabilities (Tools, Resources, Prompts) does it expose?
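To make the report-card idea concrete, here's a rough sketch of how the four metrics could roll up into a single grade. All names and thresholds here are my own illustration, not the tool's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class MetricResult:
    """One of the four checks; names and fields are illustrative."""
    name: str
    score: float              # 0-100 for this metric
    details: dict = field(default_factory=dict)

@dataclass
class ReportCard:
    server: str
    metrics: list  # list of MetricResult

    def overall(self) -> float:
        # Simple unweighted average for illustration; the real tool
        # weights factors (see the scoring engine description below).
        return sum(m.score for m in self.metrics) / len(self.metrics)

    def grade(self) -> str:
        # Hypothetical cutoffs for the "Good"/"Poor" labels.
        s = self.overall()
        if s >= 80:
            return "Good"
        if s >= 50:
            return "Fair"
        return "Poor"
```

A server scoring 90 on registry status, 70 on GitHub health, 85 on code quality, and 95 on compliance would average 85 and grade out as "Good" under these made-up cutoffs.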
How we built it
I built this using Python because of its rich ecosystem for handling API requests and JSON data.
Architecture: The tool is modular, with dedicated validators for the Registry, GitHub, and Compliance logic.
APIs: It hits the registry.modelcontextprotocol.io API to fetch metadata and the GitHub API to analyze repository health.
Async: I used aiohttp so the checks stay fast, even when querying multiple endpoints concurrently.
Scoring Engine: The secret sauce is the weighted scoring algorithm. It doesn't just say "pass/fail"; it calculates a score out of 100 from weighted factors (e.g., a missing license hurts your score more than a missing example).
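A weighted scoring pass over boolean checks might look something like this. The specific weights and factor names below are invented for illustration (the write-up doesn't publish the real ones), but the shape matches the idea: a missing license costs more points than a missing example.

```python
# Illustrative weights -- each factor's weight reflects how much it
# matters. These specific values are assumptions, not the tool's real
# scoring table.
WEIGHTS = {
    "has_license": 20,       # missing license hurts a lot...
    "has_readme": 15,
    "has_ci": 15,
    "recently_updated": 25,
    "has_examples": 5,       # ...a missing example, much less
    "spec_compliant": 20,
}

def score(checks: dict) -> float:
    """Turn a dict of boolean check results into a 0-100 score."""
    total = sum(WEIGHTS.values())
    earned = sum(w for name, w in WEIGHTS.items() if checks.get(name))
    return round(100 * earned / total, 1)
```

With these weights, a server that passes everything except the license check scores 80, while one missing only examples scores 95, so the penalty ordering the post describes falls out naturally.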
Challenges we ran into
The biggest headache was GitHub's permission system.
During testing, I ran into a weird edge case with the ai.smithery/smithery-ai-slack server. My tool could see the repo existed (stars, description), but completely failed to read the README or package.json, causing the quality score to tank.
It turned out to be an issue with Organization SAML enforcement—my GitHub token had permission to see metadata but was blocked from reading contents. Instead of letting the tool crash, I had to engineer "graceful degradation." Now, if the tool hits a permission wall, it catches the error, flags it as a "limited check" (likely due to monorepo or token limits), and calculates a partial score based on what it can see. It was a frustrating bug that turned into a robust feature.
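The graceful-degradation pattern described above can be sketched in a few lines: treat a permission wall as "couldn't see" rather than "failed", then score only the checks that were actually visible. The exception class, function names, and weights here are stand-ins for illustration, not the tool's real code:

```python
class ContentsBlocked(Exception):
    """Stand-in for a 403 'contents blocked' response (e.g. SAML
    enforcement) from the GitHub API."""

def safe_check(fetch):
    # Run one check; a permission wall becomes None ("couldn't see"),
    # not a crash.
    try:
        return fetch()
    except ContentsBlocked:
        return None

def partial_score(results: dict) -> tuple:
    """Score only the checks that could run (None = blocked).

    Returns (score, limited) where `limited` flags that permission
    errors hid some checks. Weights are illustrative.
    """
    weights = {"readme": 25, "license": 25, "ci": 25, "recent_commit": 25}
    visible = {k: v for k, v in results.items() if v is not None}
    if not visible:
        return 0.0, True
    total = sum(weights[k] for k in visible)
    earned = sum(weights[k] for k, v in visible.items() if v)
    limited = len(visible) < len(results)
    return round(100 * earned / total, 1), limited
```

In the Slack-server case, the README and package.json checks would come back as None, and the score would be computed from the remaining visible signals with the "limited check" flag set.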
Accomplishments that I'm proud of
The Robustness: I'm really proud that the tool doesn't just crash when it hits an API limit or a permission error. It adapts and gives the user the best possible info.
The Compliance Checker: Parsing the capabilities isn't just about reading a JSON field; the tool actually hunts through package.json, mcp.json, and even TypeScript source files to find tool definitions that might be hidden in the code.
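That "hunting" step could be sketched roughly like this: parse the JSON manifests for declared tools first, then fall back to a regex over TypeScript source for common MCP SDK idioms like `server.tool("name", ...)`. The regex and the `tools` manifest key are assumptions about common conventions, not the checker's exact heuristics:

```python
import json
import re

# Assumed pattern for tool registrations in TypeScript SDK code,
# e.g. server.tool("fetch_page", ...).
TOOL_CALL_RE = re.compile(r'\.tool\(\s*["\']([\w-]+)["\']')

def find_tools(files: dict) -> list:
    """Collect tool names from {filename: file_contents}."""
    tools = []
    for name, text in files.items():
        if name.endswith(".json"):
            try:
                data = json.loads(text)
            except json.JSONDecodeError:
                continue  # malformed manifest: skip, don't crash
            for t in data.get("tools", []):
                if isinstance(t, dict) and "name" in t:
                    tools.append(t["name"])
        elif name.endswith(".ts"):
            tools.extend(TOOL_CALL_RE.findall(text))
    return sorted(set(tools))
```

The point of the two-pass design is that a server whose manifest is sparse can still get credit for capabilities that only show up in source.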
The Speed: It feels instant. You type the command, and the report is there.
What I learned
I learned way more about the internals of the Model Context Protocol than I expected—specifically how flexible (and sometimes inconsistent) server configurations can be. I also got a crash course in the nuances of GitHub's API scopes versus Organization permissions, which is a lesson I won't forget anytime soon!
What's next for MCP Quick Check
Right now, it's a CLI tool. In the future, I’d love to:
- Build a Web UI so you can paste a server name and get a shareable link to the report.
- Add Security Scanning to look for common vulnerabilities in the server dependencies.
- Integrate it directly into MCP clients/installers as a "verify" step.