Inspiration

Security is often an afterthought in fast-paced software development. We built this project so that developers don't have to be security experts to write safe code. Our goal was to create an "AI Guard" that flags vulnerabilities the moment code is written.

What it does

Sentinel AI is an automated tool that connects to GitLab repositories. It fetches the code, runs a Bandit static-analysis (SAST) scan, and uses AI to translate the technical security findings into simple, actionable advice. It explains why functions like eval() are dangerous and provides exact steps to remove hardcoded passwords.
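To illustrate the "translate findings into advice" step, here is a minimal sketch that renders one Bandit finding as a plain message. The field names match Bandit's `-f json` report format; the sample finding itself (and the exact message layout) is illustrative, not taken from our actual output.

```python
# Hypothetical sketch: turning one Bandit JSON finding into a plain,
# human-readable line. Field names follow Bandit's JSON report format;
# the sample data below is illustrative.

def explain_finding(finding: dict) -> str:
    """Render a Bandit finding as a short, actionable message."""
    return (
        f"[{finding['issue_severity']}] {finding['filename']}:{finding['line_number']} "
        f"({finding['test_id']}) {finding['issue_text']}"
    )

sample = {
    "filename": "vulnerable.py",
    "line_number": 12,
    "test_id": "B307",
    "issue_severity": "MEDIUM",
    "issue_text": "Use of possibly insecure function - consider using safer ast.literal_eval.",
}

print(explain_finding(sample))
```

In the real pipeline, messages like this are passed to the LLM, which expands them into step-by-step remediation advice.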

How we built it

The project is built on Replit. We used Python and the GitLab API to securely fetch repository data. For security scanning, we integrated the Bandit tool, and the AI layer is an LLM-based agent that converts raw security findings into human-readable remediation steps.
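Fetching repository data can be done with GitLab's documented raw-file endpoint (`GET /projects/:id/repository/files/:path/raw`) using only the standard library. This is a sketch, not our exact code; the project ID, file path, and `GITLAB_TOKEN` variable name are illustrative.

```python
# Minimal sketch of fetching a file over the GitLab REST API (v4) with the
# standard library only. The endpoint is GitLab's documented raw-file route;
# PROJECT_ID, FILE_PATH, and the GITLAB_TOKEN env var name are assumptions.
import os
import urllib.parse
import urllib.request

GITLAB = "https://gitlab.com/api/v4"

def raw_file_url(project_id: int, file_path: str, ref: str = "main") -> str:
    # File paths must be URL-encoded, including slashes (safe="").
    encoded = urllib.parse.quote(file_path, safe="")
    return f"{GITLAB}/projects/{project_id}/repository/files/{encoded}/raw?ref={ref}"

def fetch_file(project_id: int, file_path: str, ref: str = "main") -> str:
    req = urllib.request.Request(
        raw_file_url(project_id, file_path, ref),
        headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

print(raw_file_url(12345, "src/vulnerable.py"))
```

The fetched file is then written to a temp directory and handed to Bandit for scanning.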

Challenges we ran into

The biggest challenge was securely handling GitLab authentication and private access tokens. Additionally, aligning the raw scanner output with the AI so that its suggested fixes point at the correct files and line numbers required significant technical fine-tuning.
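One simple pattern we could rely on for the token problem: never hardcode the token, read it from the environment, and fail fast if it is missing. The variable name `GITLAB_TOKEN` is our own convention, not a GitLab requirement.

```python
# Sketch of fail-fast token handling. The GITLAB_TOKEN variable name is an
# assumption (our convention), not something GitLab mandates.
import os

def get_gitlab_token() -> str:
    token = os.environ.get("GITLAB_TOKEN")
    if not token:
        raise RuntimeError(
            "GITLAB_TOKEN is not set; export it instead of hardcoding it."
        )
    return token
```

On Replit this pairs naturally with the built-in Secrets store, which exposes secrets as environment variables.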

Accomplishments that we're proud of

We are proud of building a pipeline that operates with "zero manual effort." The agent successfully identified high-severity risks in vulnerable.py and provided the correct security patches instantly.

What we learned

We learned how to integrate third-party APIs (GitLab), the fundamentals of automated security auditing (SAST), and how AI can drastically simplify the DevSecOps workflow for developers.

What's next for Sentinel AI: Autonomous GitLab Security Auditor

In the future, we plan to implement an Auto-Fix feature where the agent doesn't just suggest fixes but automatically opens merge requests with the patches applied. We also aim to add configuration scanning for Docker and Kubernetes.
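The Auto-Fix step could build on GitLab's documented `POST /projects/:id/merge_requests` endpoint. This is a design sketch for the planned feature, not shipped code; the branch names and helper functions are assumptions.

```python
# Hedged sketch of the planned auto-fix flow: after pushing a patch branch,
# open a merge request via GitLab's merge-requests endpoint. Branch names
# and helpers here are assumptions about a future design, not shipped code.
import json
import urllib.request

def merge_request_payload(fix_branch: str, target: str, title: str) -> dict:
    return {
        "source_branch": fix_branch,   # branch holding the generated patch
        "target_branch": target,
        "title": title,
        "remove_source_branch": True,  # tidy up once the fix is merged
    }

def open_merge_request(project_id: int, token: str, payload: dict) -> None:
    req = urllib.request.Request(
        f"https://gitlab.com/api/v4/projects/{project_id}/merge_requests",
        data=json.dumps(payload).encode(),
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # in practice: check the response status

payload = merge_request_payload(
    "sentinel/fix-B307", "main", "Sentinel AI: replace eval() with ast.literal_eval"
)
```

A human reviewer would still approve the merge request, keeping the agent's changes auditable.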

Built With

  • ai/llm
  • bandit
  • gitlab-api
  • python
  • replit
  • sast
  • security