Inspiration

We built Vigil AI because modern software teams are shipping code faster than ever, often with AI-assisted development in the loop, but security review has not kept pace. In practice, developers are often forced to choose between static security tools that are fast but noisy, and AI tools that are flexible but hard to trust on their own. We wanted to bridge that gap.

Our goal was to create something that fits directly into the developer workflow: a tool that not only detects vulnerabilities, but also helps explain them, prioritize them, and fix them without forcing engineers to leave their IDE.

What it does

Vigil AI is an IntelliJ plugin that scans Python projects for security vulnerabilities and presents the results in a clear, developer-friendly interface.

It combines deterministic security scanning with AI-based reasoning. The deterministic layer identifies suspicious patterns in the codebase, while the AI layer adds context: it helps validate findings, explain why they matter, suggest remediations, and support follow-up questions through a contextual chat experience.

The plugin also groups findings by severity, shows a project-level security score, supports quick remediation flows, and generates visual reports that can be shared across technical and non-technical teams.
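The writeup does not spell out how the project-level security score is computed, but the idea of aggregating severity-grouped findings into one number can be sketched as follows. The weights and the 0-100 scale here are illustrative assumptions, not Vigil AI's actual model:

```python
# Hypothetical severity weights -- the real scoring model may differ.
SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def security_score(findings: list[dict]) -> int:
    """Map a list of findings to a 0-100 score: fewer and lighter findings yield a higher score."""
    penalty = sum(SEVERITY_WEIGHTS.get(f["severity"], 1) for f in findings)
    return max(0, 100 - penalty)
```

A clean scan scores 100, and each finding subtracts a severity-weighted penalty, floored at 0.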

How we built it

We built Vigil AI as a JetBrains plugin in Kotlin, using IntelliJ Platform APIs for the tool window, inspections, quick fixes, and project integration.

For vulnerability detection, we used Semgrep as the deterministic analysis engine and parsed its output into our internal finding model. On top of that, we built an AI enrichment pipeline that validates findings, generates explanations, proposes fixes, and supports follow-up conversation around each issue.
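Parsing Semgrep output into an internal finding model might look roughly like this. The sketch below reads Semgrep's `--json` output format (`check_id`, `path`, `start.line`, and `extra.severity`/`extra.message` are real fields of that format); the `Finding` dataclass and the severity mapping are illustrative assumptions, since the actual internal model isn't shown here:

```python
import json
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    path: str
    line: int
    severity: str
    message: str

# Semgrep reports severity as INFO / WARNING / ERROR; map to internal levels (mapping is illustrative).
SEVERITY_MAP = {"ERROR": "high", "WARNING": "medium", "INFO": "low"}

def parse_semgrep(raw_json: str) -> list[Finding]:
    """Parse `semgrep --json` output into Finding objects."""
    data = json.loads(raw_json)
    return [
        Finding(
            rule_id=r["check_id"],
            path=r["path"],
            line=r["start"]["line"],
            severity=SEVERITY_MAP.get(r["extra"]["severity"], "low"),
            message=r["extra"]["message"],
        )
        for r in data.get("results", [])
    ]
```

Keeping findings in a small typed model like this makes it straightforward to feed the same objects to the AI enrichment pipeline, the inspections, and the reporting layer.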

We also created an intentionally vulnerable multi-file Flask demo application to test the plugin against realistic security issues such as SQL injection, insecure deserialization, SSRF, command injection, weak cryptography, unsafe templating, path traversal, and hardcoded secrets.
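To give a flavor of the seeded issues, SQL injection via string interpolation looks roughly like the first function below, with the parameterized fix next to it. This is an illustrative pair, not code from the actual demo app:

```python
import sqlite3

def get_user_unsafe(db: sqlite3.Connection, username: str):
    # VULNERABLE: user input interpolated directly into SQL (SQL injection).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

def get_user_safe(db: sqlite3.Connection, username: str):
    # FIX: parameterized query; the driver handles escaping.
    return db.execute("SELECT id, name FROM users WHERE name = ?", (username,)).fetchall()
```

A payload like `' OR '1'='1` dumps every row through the unsafe version while the parameterized version treats it as a literal name, which is exactly the kind of contrast a deterministic rule can flag and an AI layer can explain.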

Finally, we added a reporting layer that turns scan results into visual, shareable outputs for stakeholders.

Challenges we ran into

One of the biggest challenges was balancing reliability and intelligence. We did not want to rely only on AI, because security tools need consistency and traceability. At the same time, purely static scans often lack context and produce noise. Designing a hybrid model that felt useful and trustworthy was a core challenge.

We also ran into several technical issues related to plugin development itself: IntelliJ UI behavior, sandbox/runtime quirks, keeping inspections synchronized with scan results, and making the experience feel smooth inside the IDE rather than bolted on.

Another major challenge was UX. Security information can become overwhelming very quickly, so we had to think carefully about how to present findings, scores, explanations, fixes, reports, and chat in a way that feels clear and actionable.

Accomplishments that we're proud of

We are proud that Vigil AI works as a full end-to-end workflow inside the IDE: scan, inspect, understand, fix, and report.

We are also proud of the hybrid architecture. Instead of choosing between deterministic scanning and AI reasoning, we made them complement each other. This gives the project a stronger foundation and makes the product story much more compelling.

Another accomplishment is the overall user experience. The plugin is not just a scanner: it helps developers understand vulnerabilities in context, provides follow-up guidance, and makes security more approachable.

Finally, we are proud of the demo environment we built, because it allowed us to showcase the plugin on a codebase that feels much closer to a real application than a single toy file.

What we learned

We learned that in security, trust matters as much as intelligence. Developers are much more likely to adopt a tool when they can see where findings come from and why the system reached a conclusion.

We also learned that plugin development is as much about UX design as it is about code. Even a strong backend feels weak if the interface is noisy, confusing, or poorly integrated into the IDE workflow.

Most importantly, we learned that AI is most valuable when it augments a strong deterministic foundation instead of trying to replace it completely.

What's next for Vigil AI

Our next step is to make Vigil AI more robust, broader in coverage, and closer to production readiness.

We want to support more languages beyond Python, integrate additional security engines, improve the remediation workflow, and make severity and confidence modeling more sophisticated. We also want to improve the reporting layer, add stronger validation after fixes, and expand the collaborative side of the product so teams can use Vigil AI not just as a scanner, but as a shared security assistant during development.

Longer term, we see Vigil AI becoming a developer-first security copilot: one that helps teams catch issues early, understand them quickly, and fix them with confidence.
