Inspiration
Evaluating software projects, whether in classrooms, hackathons, or hiring pipelines, is often a manual, time-consuming, and inconsistent process. Reviewers must assess code quality, documentation, and project structure by hand, which leads to slow turnaround and subjective feedback.
We were inspired to solve this inefficiency by building an intelligent system that can automate evaluation, ensure consistency, and provide instant, actionable feedback directly within the developer’s workflow.
What it does
Suchak AI is an AI-powered GitLab agent that automatically evaluates repositories and generates structured feedback.
It:
- Fetches repository data using GitLab APIs
- Analyzes code, documentation, and structure
- Assigns a score out of 100 with a detailed breakdown
- Generates strengths, weaknesses, and actionable suggestions
- Automatically posts the evaluation as a comment on merge requests or commits
This enables real-time, scalable, and consistent project evaluation.
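As a rough sketch of the GitLab side of this flow, using python-gitlab (the merge request IID and environment variable names here are illustrative, not fixed parts of the tool):

```python
import os

import gitlab  # python-gitlab

# Connect with a personal access token (variable names are illustrative).
gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get(os.environ["GITLAB_PROJECT_ID"])

# Fetch the full file tree of the default branch for analysis.
tree = project.repository_tree(ref=project.default_branch, recursive=True, get_all=True)
files = [entry["path"] for entry in tree if entry["type"] == "blob"]

# Post the evaluation as a note on a merge request (IID 1 is a placeholder).
mr = project.mergerequests.get(1)
mr.notes.create({"body": f"**Suchak AI evaluation**\n\nFiles analyzed: {len(files)}"})
```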
How we built it
We built Suchak AI as a modular and extensible system:
- Python for core logic and orchestration
- GitLab REST API for repository access and posting comments
- LLM-based evaluation (OpenAI) for intelligent analysis
- Local evaluation mode using heuristic rules for cost-free execution (sketched after the workflow below)
- Structured prompt engineering for consistent scoring and feedback (see the sketch right after this list)
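A minimal sketch of the LLM-backed mode, assuming the OpenAI Python client (the model choice and prompt wording here are illustrative, not our exact production prompt):

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a strict software project evaluator. "
    "Return ONLY a JSON object with integer fields code_quality, "
    "documentation, structure, completeness (each 0-100), and string "
    "fields strengths, weaknesses, suggestions."
)

def evaluate_with_llm(repo_summary: str) -> dict:
    """Ask the model for a structured evaluation of a repository summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,  # reduce run-to-run randomness in scoring
        response_format={"type": "json_object"},  # force parseable JSON
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": repo_summary},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

Pinning the temperature to 0 and forcing JSON output is what keeps the scoring stable enough to compare across projects.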
The workflow is as follows:
Trigger → Fetch Repository → Analyze → Score → Generate Feedback → Post to GitLab
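The cost-free local mode reduces the Analyze step to simple heuristics over the file tree; a minimal sketch, where the specific rules and thresholds are illustrative rather than our exact ones:

```python
from pathlib import Path

def local_heuristic_scores(repo_root: str) -> dict:
    """Score a checked-out repository with simple presence checks (0-100 each)."""
    root = Path(repo_root)
    files = [p for p in root.rglob("*") if p.is_file()]
    py_files = [p for p in files if p.suffix == ".py"]

    documentation = 100 if (root / "README.md").exists() else 20
    structure = 100 if any(p.name == "__init__.py" for p in files) else 50
    completeness = 100 if any("test" in p.name.lower() for p in py_files) else 40
    # Crude code-quality proxy: penalize very long source files.
    long_files = sum(
        1 for p in py_files
        if len(p.read_text(errors="ignore").splitlines()) > 500
    )
    code_quality = max(40, 100 - 20 * long_files)

    return {
        "code_quality": code_quality,
        "documentation": documentation,
        "structure": structure,
        "completeness": completeness,
    }
```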
We also designed a weighted scoring system to ensure interpretability:
- Code Quality: 40%
- Documentation: 20%
- Structure: 20%
- Completeness: 20%
Final score, with each component scored out of 100:
$$ \text{Score} = 0.4\,C_q + 0.2\,D + 0.2\,S + 0.2\,C $$
where $C_q$ is code quality, $D$ documentation, $S$ structure, and $C$ completeness.
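In code, the weighting is a single weighted sum (a sketch; the dictionary keys are illustrative):

```python
WEIGHTS = {"code_quality": 0.4, "documentation": 0.2, "structure": 0.2, "completeness": 0.2}

def compute_score(components: dict) -> float:
    """Combine per-dimension scores (0-100 each) into the final 0-100 score."""
    return sum(weight * components[name] for name, weight in WEIGHTS.items())

# Example: compute_score({"code_quality": 80, "documentation": 70,
#                         "structure": 90, "completeness": 60}) -> 76.0
```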
Challenges we ran into
- GitLab API integration: Handling authentication, project IDs, and comment posting correctly (see the configuration sketch after this list)
- Designing meaningful evaluation logic: Ensuring the feedback is not generic but actionable
- Balancing cost and performance: Implementing both local and LLM-based evaluation modes
- Ensuring consistency in scoring: Avoiding randomness in AI-generated outputs
- Demo reliability: Making sure the agent runs smoothly in real time during the presentation
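For the authentication piece specifically, we keep credentials out of the code with python-dotenv; a minimal sketch, with illustrative variable names:

```python
import os

from dotenv import load_dotenv

# .env (never committed):
#   GITLAB_TOKEN=glpat-xxxxxxxxxxxx
#   GITLAB_PROJECT_ID=12345678
load_dotenv()  # populate os.environ from the .env file

GITLAB_TOKEN = os.environ["GITLAB_TOKEN"]
GITLAB_PROJECT_ID = os.environ["GITLAB_PROJECT_ID"]
```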
Accomplishments that we're proud of
- Built a fully functional AI agent, not just a chatbot
- Successfully integrated evaluation directly into GitLab workflow
- Designed a dual-mode system (local + AI-powered)
- Created a clear, structured evaluation report with scoring and suggestions
- Achieved a system that is practical, scalable, and demo-ready
What we learned
- How to design and build agent-based AI systems
- Deep understanding of GitLab APIs and workflows
- Importance of prompt engineering for reliable outputs
- How to balance automation with interpretability
- Building systems with a focus on real-world usability
What's next for Suchak AI
- 🔄 Integrate with GitLab CI/CD pipelines for automatic triggering
- 📊 Add leaderboards and analytics dashboards
- 🧠 Improve evaluation using fine-tuned models
- 🛠️ Enable auto-fix suggestions via merge requests
- 🌍 Expand support for multiple programming languages and frameworks
Suchak AI aims to become a standard intelligent evaluation layer in modern software development workflows.
Built With
- api
- argparse
- gitlab
- openai
- powershell
- pytest
- python
- python-dotenv
- rest