Inspiration
Most developers find bugs after they've already caused failures in production. We wanted to flip that: what if your codebase had a structural "gravity score" that pulled risky code to the surface before it failed? AntyGravity was inspired by the idea that technical debt and code complexity exert a real gravitational pull toward system instability, and that predicting failure should be as natural as running a linter. Built for GitLab's ecosystem, we wanted to make AI-powered code intelligence accessible to any developer with just a repository URL.
What it does
AntyGravity is a predictive bug intelligence dashboard powered by Claude AI. You paste a GitHub or GitLab repository URL and it:
- Fetches your source files directly via the GitHub or GitLab REST API (private repos supported with token auth)
- Runs multi-language static analysis across 10 file types, detecting security vulnerabilities: hardcoded secrets, SQL injection, eval/exec, weak hashes, command injection, insecure pickle, path traversal, and more
- Calculates a per-file Gravity Score based on cyclomatic complexity and structural risk factors
- Generates an overall Security Audit score with critical/warning breakdowns
- Uses Claude AI to generate real, contextual fix suggestions for each detected bug: not generic advice, but actual corrected code
- Tracks scan history so you can see exactly which bugs were resolved and which new ones appeared between scans
- Detects debug statements in both Python (print()) and JavaScript/TypeScript (console.log())
- Shows a language breakdown of your scanned codebase
- Provides a Claude-powered AI assistant that understands your specific scan results and answers questions about your code's risk
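The regex layer of the scanner can be sketched as follows. The rule names and patterns here are illustrative stand-ins, not AntyGravity's actual rule set:

```python
import re

# Illustrative subset of the cross-language security patterns described above;
# the real scanner covers many more rules and languages.
SECURITY_PATTERNS = {
    "hardcoded_secret": re.compile(r"""(?i)(password|secret|api_key)\s*=\s*['"][^'"]+['"]"""),
    "eval_exec": re.compile(r"\b(eval|exec)\s*\("),
    "sql_injection": re.compile(r"""(?i)execute\s*\(\s*["'].*%s"""),
}

def scan_source(source: str) -> list[dict]:
    """Return one finding per matched pattern per line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in SECURITY_PATTERNS.items():
            if pattern.search(line):
                findings.append({"rule": rule, "line": lineno, "snippet": line.strip()})
    return findings
```

Line-by-line matching keeps reported line numbers accurate, at the cost of missing multi-line constructs, which is one reason the Python path gets a second, AST-based pass.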
How we built it
The backend is a Python FastAPI server built around two layers of analysis, plus the services that support them:
- AST-level analysis via Python's built-in ast module for deep structural checks: bare except blocks, mutable default arguments, high cyclomatic complexity, and oversized functions
- Regex-based pattern matching for cross-language security anti-patterns across Python, JS, TS, Java, Go, Ruby, PHP, and C#
- Claude AI integration via the Anthropic SDK for intelligent fix generation and the chatbot assistant; the API key is stored securely in a .env file loaded via python-dotenv
- Persistent scan history backed by a JSON file, so diff tracking survives server restarts
- GitLab and GitHub API integration with proper per-platform authentication handling
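A minimal sketch of the AST layer, assuming findings are reported as plain strings (the real analyzer also computes cyclomatic complexity and function length):

```python
import ast

def analyze(source: str) -> list[str]:
    """Walk the parsed tree and flag two of the structural checks
    described above: bare excepts and mutable default arguments."""
    issues = []
    tree = ast.parse(source)  # raises SyntaxError on invalid code
    for node in ast.walk(tree):
        # `except:` with no exception type swallows everything, even KeyboardInterrupt
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            issues.append(f"bare except at line {node.lineno}")
        # Mutable defaults are shared across calls, a classic Python pitfall
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    issues.append(f"mutable default in {node.name}() at line {node.lineno}")
    return issues
```

Because `ast.parse` raises `SyntaxError` on files it cannot parse, a production scanner would wrap the call and fall back to the regex layer for those files.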
The frontend is a React + Vite application featuring a dark futuristic dashboard built with Tailwind CSS, Framer Motion animations, Recharts for risk visualization, and Lucide React icons.
Challenges we ran into
- Getting accurate static analysis without false positives was harder than expected: regex-based security checks tend to flag safe code, such as a comment mentioning "password."
- Balancing the Gravity Score formula so it produced meaningful gradations across diverse codebases required significant tuning.
- GitHub and GitLab have completely different authentication models (a Bearer token vs. a PRIVATE-TOKEN header), and building unified support for both while handling rate limits gracefully was non-trivial.
- Getting Claude AI to generate contextually accurate fix suggestions, rather than generic responses, required careful prompt engineering with the actual code snippet and surrounding context.
- Designing the scan-diffing system to reliably track resolved vs. new bugs across scans, and persisting that state correctly, was also a key challenge.
Accomplishments that we're proud of
- AI-generated code fixes: Claude analyzes each detected bug and returns an actual corrected code block, not just a description
- The scan diff system: rescanning a repo shows "you fixed 3 critical issues since last time" with specific details, creating a real feedback loop for developers
- True multi-platform support: GitHub and GitLab repos both work correctly, including private repos with token authentication
- Coverage of 10 languages from a single lightweight backend, with no external analysis tools required
- The Security Audit sub-system, which produces an independent security score, critical count, and status badge separate from the Gravity Score
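The scan diff can be sketched by fingerprinting each bug on fields that are stable across commits. The fingerprint choice here (file, rule, snippet, with line numbers deliberately excluded since they shift between commits) is an assumption, not necessarily the project's exact scheme:

```python
def diff_scans(previous: list[dict], current: list[dict]) -> dict:
    """Classify bugs as resolved, new, or persisting between two scans."""
    def key(bug: dict) -> tuple:
        # Line numbers are excluded on purpose: code above a bug moving
        # shouldn't make the bug look "new".
        return (bug["file"], bug["rule"], bug["snippet"])

    prev_keys = {key(b) for b in previous}
    curr_keys = {key(b) for b in current}
    return {
        "resolved": [b for b in previous if key(b) not in curr_keys],
        "new": [b for b in current if key(b) not in prev_keys],
        "persisting": [b for b in current if key(b) in prev_keys],
    }
```

The `resolved` bucket is what powers the "you fixed 3 critical issues since last time" message; counting its critical entries gives the headline number.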
What we learned
- AST-based analysis is far more reliable than regex for Python, but requires careful handling of syntax errors and edge cases.
- Building a score that actually correlates with real risk requires domain thinking, not just counting issues.
- Platform APIs differ more than you'd expect: GitLab and GitHub require completely different handling despite doing the same job.
- Integrating Claude AI showed how much more useful an LLM is when given specific, structured context (the actual bug, the file, the code snippet) compared to vague open-ended prompts.
- FastAPI's Pydantic integration makes building well-typed APIs extremely fast and eliminates an entire class of bugs.
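The "structured context" lesson can be illustrated with a prompt builder that packs the rule, file, line, and snippet into a single request for the model. The field names and wording here are hypothetical:

```python
def build_fix_prompt(bug: dict) -> str:
    """Assemble structured context for a fix request: the rule that fired,
    where it fired, and the exact offending code."""
    return (
        f"You are reviewing the file {bug['file']}.\n"
        f"Static analysis flagged rule '{bug['rule']}' on line {bug['line']}:\n\n"
        f"{bug['snippet']}\n\n"
        "Return a corrected code block that fixes this specific issue. "
        "Do not give general security advice."
    )
```

The resulting string becomes the user message in the Anthropic SDK call; grounding the model in the exact snippet is what pushes it from generic advice toward an actual corrected code block.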
What's next for AntyGravity
- A VS Code extension that shows Gravity Scores inline as you write code
- Webhook support, so the tool automatically rescans on every GitLab push and comments on merge requests with risk scores
- JavaScript AST analysis via a Node.js sidecar, for deeper JS/TS structural checks beyond regex
- SQLite storage replacing the JSON file for scan history, enabling trend graphs over time
- Team dashboards showing risk trends across multiple repositories, with per-developer contribution tracking
- CI/CD pipeline integration, so builds can be blocked when the Gravity Score exceeds a configurable threshold
Built With
- css
- fastapi
- github
- gitlab
- javascript
- python
- react
- tailwind
- vite