Inspiration
Project management in 2026 remains a manual bottleneck. While developers have AI coding assistants, team leads are still stuck triaging GitHub issues and balancing team loads. We were inspired by the concept of Agentic Project Management—where the tool doesn't just track tasks but actively plans and audits them. We wanted to bridge the gap between raw code and strategic execution. The problem is clear: the DevOps pipeline has been automated, but the people management pipeline hasn't. JiraX changes that.
What it does
JiraX is an AI-powered project management engine that performs three core functions:
- AI Repo Analysis: Connects to any GitHub repository to summarize open risks and categorize issues automatically. No manual triage needed.
- Automated Sprint Planner: Uses Gemini 2.5 Flash to analyze team capacity and backlog, generating a balanced sprint plan with creative naming conventions and intelligent task assignments.
- Visual QA Auditor: A multimodal feature where users upload UI screenshots and the AI identifies alignment, contrast, and UX issues, as a 24/7 Senior QA Engineer would.
The result? Developers focus on code. Managers focus on strategy. The AI handles the coordination.
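The repo-analysis step can be sketched as a small triage pipeline. This is a minimal stdlib stand-in, assuming issues arrive as dicts shaped like the GitHub API's responses; the keyword map and function names are illustrative, and in JiraX the categorization itself is done by Gemini rather than keywords.

```python
# Hypothetical sketch of the issue-triage step. In JiraX the category is
# chosen by Gemini 2.5 Flash; this keyword heuristic is a stand-in so the
# shape of the pipeline is visible.

CATEGORY_KEYWORDS = {
    "bug": ("crash", "error", "broken", "fails"),
    "security": ("vulnerability", "cve", "leak", "injection"),
    "feature": ("add", "support", "implement"),
}

def categorize_issue(title: str) -> str:
    """Return a coarse category for a GitHub issue title."""
    lowered = title.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return category
    return "triage"  # no keyword hit: leave for human (or AI) review

def summarize_risks(issues: list[dict]) -> dict[str, int]:
    """Count open issues per category, e.g. for a repo health summary."""
    counts: dict[str, int] = {}
    for issue in issues:
        if issue.get("state") == "open":
            cat = categorize_issue(issue["title"])
            counts[cat] = counts.get(cat, 0) + 1
    return counts
```

Feeding the summary (rather than every raw issue body) to the model is also one way to stay inside tight request quotas.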
How we built it
We chose a modern, secure architecture to ensure high performance and reliability:
- Frontend: React 19 and Vite for a lightning-fast, responsive UI with real-time feedback.
- Backend: A FastAPI (Python) server handling GitHub API integrations and secure communication with Google AI, eliminating CORS issues.
- AI Engine: Powered by Gemini 2.5 Flash, using multimodal capabilities for both text reasoning and image-based audits.
- Validation Layer: Pydantic models validate all AI responses before they reach the frontend, ensuring data integrity and preventing UI crashes.
- Deployment: Frontend on Vercel, backend on Render for scalability and reliability.
The architecture follows a secure-first principle: sensitive API keys and authentication are handled entirely server-side. The frontend never touches raw API credentials.
Challenges we ran into
The primary challenge was managing rate limiting and CORS policy across a decoupled deployment. Early in development, we faced strict quota limits: 100 requests per day and 2 concurrent operations. This forced us to implement prompt compression and batch request handling to maximize utility within those constraints.

A secondary challenge was ensuring multimodal consistency: when the AI receives a screenshot and must return structured QA feedback, alignment between the vision analysis and the JSON schema is critical. We solved this by:

- Sending strict Pydantic schemas to Gemini in system prompts
- Implementing retry logic with schema validation
- Using temperature-controlled reasoning for deterministic outputs
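The retry-with-validation loop can be sketched generically. This is a stdlib sketch: the validator here is a plain JSON shape check standing in for the Pydantic model, and the attempt count is an assumption.

```python
import json

# Generic retry-with-validation loop. In JiraX the validator would be a
# Pydantic model's model_validate_json; this stdlib check is a stand-in.

def validate_sprint_plan(raw: str) -> dict:
    """Raise ValueError unless the payload matches the expected shape."""
    data = json.loads(raw)
    if not isinstance(data.get("sprint_name"), str):
        raise ValueError("missing sprint_name")
    if not isinstance(data.get("assignments"), list):
        raise ValueError("missing assignments")
    return data

def generate_with_retries(generate, validate, max_attempts=3):
    """Call the model, retrying until its output passes schema validation."""
    last_error = None
    for _ in range(max_attempts):
        raw = generate()
        try:
            return validate(raw)
        except ValueError as exc:  # json.JSONDecodeError is a ValueError
            last_error = exc  # could be fed back into the next prompt
    raise RuntimeError(f"no valid response after {max_attempts} attempts: {last_error}")
```

In practice the validation error message can be appended to the retry prompt, which nudges the model toward the schema on the next attempt.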
We also faced deployment parity issues—the development environment behaved differently than production. Moving all sensitive API logic to the backend resolved the Access-Control-Allow-Origin errors seen during the Vercel-to-Render handshake.
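The fix amounts to a small middleware configuration on the FastAPI side; this is a sketch, and the origin URL is a hypothetical placeholder rather than JiraX's real domain.

```python
# Sketch of the CORS fix: explicitly allow the deployed frontend's origin
# on the FastAPI backend so Vercel-to-Render preflight requests succeed.
# The origin below is a placeholder, not the real deployment URL.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://jirax.example.vercel.app"],  # deployed frontend only
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)
```

Listing the exact frontend origin, rather than `"*"`, keeps the backend consistent with the zero-trust posture described above.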
Accomplishments that we're proud of
- Full-Stack Automation: We successfully integrated a pipeline where the AI doesn't just "chat," but actually validates its own responses against Pydantic models to ensure the UI never crashes due to malformed data.
- Multimodal Integration: Seamlessly integrated vision analysis with structured planning. A user can upload a screenshot and receive a prioritized list of CSS fixes in 3 seconds.
- Rate Limit Optimization: Achieved 80% efficiency within API quotas through intelligent prompt compression and batch processing.
- Security-First Architecture: Zero-trust design where the frontend never directly handles API keys or GitHub tokens.
- Real-Time Responsiveness: Implemented streaming responses where sprint plans are generated incrementally, allowing users to see progress without waiting.
What we learned
- Multimodality is the next frontier: Being able to send an image of a "broken" website to an AI and receive a structured list of CSS fixes changed how we think about QA. Text alone is limiting; vision enables context.
- An AI is only as useful as the JSON it provides: Response generation is half the battle. The other half is validation. We learned to never trust AI outputs without schema enforcement.
- Rate limits force creativity: Constraints breed innovation. We developed compression techniques that reduced token usage by 40% without sacrificing accuracy.
- Security cannot be an afterthought: Starting with a secure backend architecture saved us weeks of refactoring. Zero-trust design should be default, not optional.
- Iterative planning beats one-shot planning: A human-in-the-loop sprint planner that asks clarifying questions performs better than a single AI-generated plan.
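A compression pass of the kind described above can be sketched simply: collapse redundant whitespace and truncate overlong bodies before they reach the model. The truncation limit is an illustrative assumption, and actual token savings depend on the input (the 40% figure came from our workload, not from this sketch).

```python
import re

# Hypothetical prompt-compression pass: collapse runs of whitespace and
# truncate long issue bodies before sending them to the model. The limit
# is an illustrative assumption.

def compress_prompt(text: str, max_chars: int = 400) -> str:
    """Shrink a prompt while keeping its leading, most relevant content."""
    collapsed = re.sub(r"\s+", " ", text).strip()  # collapse whitespace runs
    if len(collapsed) > max_chars:
        collapsed = collapsed[:max_chars].rstrip() + " ...[truncated]"
    return collapsed
```

Batching several compressed issues into one request then stretches a daily quota much further than one call per issue.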
What's next for JiraX: AI-Driven Project Management
Our roadmap is ambitious:
- Real-Time Slack/Discord Sync: Notifications when sprint status changes, issues are categorized, or the health score dips below 80.
- Historical Velocity Tracking: Train a lightweight ML model on past sprints to predict future capacity with 90%+ accuracy.
- AI Code Review Integration: Extend vision analysis to code diffs, identifying performance and security issues.
- Team Analytics Dashboard: Burndown charts, velocity trends, and individual contribution metrics.
- Custom LLM Fine-Tuning: Adapt Gemini's reasoning to company-specific jargon and project conventions.
We're also exploring agentic workflows where JiraX can autonomously create GitHub PRs to split oversized tasks or suggest architectural improvements based on repo analysis. The ultimate vision: A project management tool that doesn't ask "What should we do?"—it tells you, backed by data and AI reasoning.
Built with React 19 • FastAPI • Gemini 2.5 Flash • Pydantic • GitHub API
Deployed on Vercel (Frontend) & Render (Backend)