Argus AI Security Scanner - Project Story

Inspiration

I stumbled across The 2025 AI Security Benchmark Report and couldn't believe what I read: 79% of companies are using AI in production, but only 6% have actual AI security in place. Billions in AI investments, just sitting there vulnerable.

I've been watching everyone rush to deploy ML models without thinking about security. I found the MIT AI Risk Repository and AVID Database - massive collections of documented AI failures. Lightbulb moment: what if I could automatically scan code against these known vulnerabilities?

What it does

Argus scans ML repositories for security vulnerabilities using vector similarity search. Point it at any GitHub repo and it:

  • Flags suspicious code patterns and AI framework usage
  • Compares against 1,600+ known AI risks
  • Generates reports with actionable fixes
  • Works with TensorFlow, PyTorch, Hugging Face, etc.
  • Provides both CLI and web interfaces

Unlike traditional scanners, which lean on generic rules, Argus targets AI-specific vulnerabilities.

How I built it

Tech Stack: TiDB Serverless (vector search), Sentence Transformers (embeddings), Flask (web), Click (CLI)

Core Algorithm: Vector similarity search - embed code patterns and known vulnerabilities as vectors, then flag code that lands close to a documented risk (sketched below)
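
Here's a stripped-down sketch of that matching step. The model name and threshold are illustrative choices, and the real system queries the vulnerability vectors stored in TiDB rather than comparing them in memory:

```python
# Simplified sketch of the matching step. Model name and threshold are
# illustrative; in Argus the vulnerability vectors live in TiDB Serverless.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

known_risks = [
    "Model deserialization with pickle allows arbitrary code execution",
    "Training data pulled from an unverified source (poisoning risk)",
]
code_pattern = "model = pickle.load(open(weights_path, 'rb'))"

risk_vecs = model.encode(known_risks, convert_to_tensor=True)
code_vec = model.encode(code_pattern, convert_to_tensor=True)

# Cosine similarity between the code pattern and every known risk
scores = util.cos_sim(code_vec, risk_vecs)[0]
for risk, score in zip(known_risks, scores):
    if float(score) > 0.4:  # illustrative base threshold
        print(f"{float(score):.2f}  {risk}")
```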

Architecture: Multi-agent system - the Scanner finds patterns, the Analyzer matches them against the vulnerability database, and the Reporter generates the results.
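
Conceptually, the hand-off between the three agents looks like the sketch below. The function names and heuristics are illustrative, and the Analyzer is stubbed where the real tool runs the vector search:

```python
# Stubbed sketch of the Scanner -> Analyzer -> Reporter hand-off.
# Names and heuristics are illustrative, not Argus's actual API.
from pathlib import Path

def scan(repo_path: str) -> list[str]:
    """Scanner: walk the repo and collect suspicious lines from Python files."""
    hits = []
    for path in Path(repo_path).rglob("*.py"):
        for line in path.read_text(errors="ignore").splitlines():
            if any(s in line for s in ("pickle.load", "torch.load", "eval(")):
                hits.append(line.strip())
    return hits

def analyze(patterns: list[str]) -> list[dict]:
    """Analyzer: match patterns against known risks (vector search in the real tool)."""
    return [{"pattern": p, "risk": "unsafe model deserialization", "score": 0.82}
            for p in patterns]

def report(findings: list[dict]) -> str:
    """Reporter: render findings as a plain-text report."""
    return "\n".join(f"[{f['score']:.2f}] {f['risk']}: {f['pattern']}" for f in findings)

print(report(analyze(scan("path/to/repo"))))
```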

Challenges I ran into

Similarity thresholds are finicky. Too low and everything's a vulnerability; too high and you miss real problems. I built dynamic scoring that adjusts the threshold based on severity (sketched below).
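
The fix, roughly: shift the threshold by severity, so critical risks get a wider net and low-severity noise needs a tighter match. The numbers here are made up, but the shape of the idea is this:

```python
# Illustrative sketch of severity-adjusted thresholds (values are made up).
BASE_THRESHOLD = 0.45

SEVERITY_ADJUST = {
    "critical": -0.10,  # cast a wider net for critical risks
    "high": -0.05,
    "medium": 0.0,
    "low": +0.05,       # demand a tighter match to cut low-severity noise
}

def is_match(similarity: float, severity: str) -> bool:
    """Flag a finding only if similarity clears the severity-adjusted bar."""
    return similarity >= BASE_THRESHOLD + SEVERITY_ADJUST.get(severity, 0.0)
```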

Performance sucked initially - 15 minutes to scan the TensorFlow repo. I had to batch-process files and skip non-ML code entirely.
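
Both optimizations are simple in shape: filter out directories that can't contain ML code, then embed files in batches so the model runs once per batch instead of once per file. A sketch, where the skip list and batch size are illustrative rather than Argus's actual configuration:

```python
# Illustrative sketch of the two optimizations; the skip list and
# batch size are made-up values, not Argus's actual configuration.
from pathlib import Path

SKIP_DIRS = {".git", "docs", "tests", "examples"}

def candidate_files(repo_path: str):
    """Yield Python files that aren't in obviously non-ML directories."""
    for path in Path(repo_path).rglob("*.py"):
        if not SKIP_DIRS.intersection(path.parts):
            yield path

def batched(items, size=64):
    """Group items so embeddings take one model call per batch, not per file."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch
```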

Accomplishments that I'm proud of

  • Built an AI vulnerability scanner that actually understands AI risks
  • Made 1,600+ academic risks practically usable
  • Optimized from 15 minutes to under 2 minutes scan time
  • Created both technical and user-friendly interfaces
  • Actually works on real ML repositories

What I learned

AI security is way more complex than I thought - bias, privacy, model poisoning, adversarial attacks, stuff I'd never considered.

Vector search is magical - finding semantically similar patterns without writing explicit rules for everything.

Data quality > algorithms - I spent 60% of my time cleaning data, not building fancy ML.

What's next for Argus

Short-term: GitHub Actions integration, better reporting with code snippets, more ML frameworks

Medium-term: Real-time monitoring for deployed models, custom vulnerability patterns, community contributions

Long-term: Become an industry standard for AI security scanning, contribute to AI regulations, help prevent major AI security incidents

This taught me there's a real gap in AI security tooling. If I can help developers catch vulnerabilities before production, that feels worthwhile.

Built With

TiDB Serverless, Sentence Transformers, Flask, Click, Python