AURA — Autonomous Unified Review Agent
Inspiration:
The inspiration for AURA came from a fundamental problem we've all experienced as developers: the endless cycle of manual code reviews, test writing, and bug fixing.
We noticed that developers spend 30-40% of their time on repetitive QA tasks, and critical bugs often slip through to production, costing companies billions annually. Traditional code analysis tools are reactive—they find problems after they exist. We asked ourselves: What if we could predict and prevent issues before they happen?
AURA was born from the vision of creating a truly autonomous AI agent that doesn't just analyze code—it actively improves it, generates tests intelligently, predicts regressions, and takes automated actions. We wanted to build something that works 24/7, learns from your codebase, and becomes smarter over time.
The name "AURA" reflects our goal: an invisible, ever-present force that surrounds your codebase, protecting and improving it continuously—like an aura of quality assurance.
What it does
AURA is a fully autonomous AI engineering assistant that transforms how developers approach code quality. Here's what it does:
Autonomous Code Review
- Continuously monitors your codebase 24/7
- Automatically detects bugs, security vulnerabilities, and code quality issues
- Classifies issues by type (bug, security, performance, style, best practice) and severity
- Provides real-time feedback with actionable suggestions
Intelligent Test Generation
- Automatically generates comprehensive unit tests and E2E tests
- Uses AI to understand code context and create meaningful test cases
- Supports multiple languages (Python, JavaScript/TypeScript, Java, and more)
- Analyzes test coverage and identifies gaps
- Creates runnable, production-ready test code
Regression Prediction
- Uses machine learning to predict potential regressions before they happen
- Analyzes historical patterns, code complexity, and change frequency
- Provides risk scores and early warnings
- Helps prioritize which code changes need extra attention
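The ML side of this prediction can be sketched with scikit-learn. The features, training data, and model choice below are illustrative assumptions, not AURA's actual predictor:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per changed file.
# Features: [cyclomatic complexity, commits in last 30 days, past bug count]
X = np.array([
    [4, 1, 0], [25, 12, 5], [8, 3, 1],
    [30, 20, 7], [5, 2, 0], [18, 9, 3],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = the change was followed by a regression

model = LogisticRegression().fit(X, y)

def risk_score(complexity: float, churn: float, past_bugs: float) -> float:
    """Estimated probability that a change to this file causes a regression."""
    return float(model.predict_proba([[complexity, churn, past_bugs]])[0, 1])

print(f"low-risk change:  {risk_score(3, 1, 0):.2f}")
print(f"high-risk change: {risk_score(28, 15, 6):.2f}")
```

In a real system the labels would come from linking historical incidents back to the commits that caused them; the probability output doubles as the risk score used for prioritization.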
Unified Dashboard
- Beautiful, intuitive interface showing real-time code health metrics
- Visualizes test coverage trends, quality scores, and issue distribution
- Tracks regression predictions and automated action history
- Provides actionable insights at a glance
How we built it
We built AURA as a modern, scalable full-stack application with a focus on modularity and extensibility.
Architecture Overview
Frontend (React + TypeScript)
↓ HTTP/WebSocket
Backend API (FastAPI)
↓
AURA Core Engine
├── Code Analyzer (AI-powered)
├── Test Generator (AI-powered)
├── Regression Predictor (ML-based)
├── Action Engine (Automated workflows)
└── Learning Agent (Pattern recognition)
↓
Data Layer (PostgreSQL/SQLite)
Backend Development
- FastAPI Framework: Built a high-performance REST API with automatic OpenAPI documentation
- AI Integration: Integrated OpenAI GPT-4o and Anthropic Claude 3.5 Sonnet for code analysis and test generation
- ML Models: Developed custom regression prediction algorithms using scikit-learn
- Database Design: Created comprehensive data models for analyses, issues, tests, predictions, and actions
- Code Parsing: Built AST-based parsers for multiple languages to extract function signatures and code structure
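For Python sources, the AST-based signature extraction step could look like the following sketch, using only the standard-library `ast` module (the output shape is an assumption):

```python
import ast

def extract_signatures(source: str) -> list:
    """Return name/args/line for every function in a Python source string."""
    tree = ast.parse(source)
    sigs = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            sigs.append({
                "name": node.name,
                "args": [a.arg for a in node.args.args],
                "line": node.lineno,
            })
    return sigs

code = "def add(a, b):\n    return a + b\n"
print(extract_signatures(code))
# → [{'name': 'add', 'args': ['a', 'b'], 'line': 1}]
```

Other languages would need their own parsers (e.g. tree-sitter grammars), which is where the extensible-parser design comes in.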
Frontend Development
- React 18 + TypeScript: Built a type-safe, modern UI with React hooks
- Tailwind CSS: Created a beautiful, responsive dark-themed interface
- Recharts: Implemented interactive data visualizations for trends and metrics
- Component Architecture: Developed reusable components (modals, toasts, file viewers)
- Real-time Updates: Integrated WebSocket support for live dashboard updates
Key Technical Decisions
- Multi-AI Provider Support: Allows switching between OpenAI and Anthropic models
- Language-Agnostic Design: Supports multiple programming languages with extensible parsers
- Fallback Mechanisms: Graceful degradation when AI APIs are unavailable
- Batch Processing: Efficient handling of large codebases with batch issue saving
- Error Recovery: Robust error handling to prevent data loss
Challenges we ran into
Building AURA presented several significant challenges:
1. AI API Integration Complexity
- Challenge: Integrating multiple AI providers (OpenAI and Anthropic) with different API structures and rate limits
- Solution: Created a unified abstraction layer that handles provider switching, error handling, and fallback mechanisms
- Learning: Implemented graceful degradation so the system works even without API keys (using templates)
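A minimal sketch of such an abstraction layer, with stub providers standing in for the real OpenAI/Anthropic clients (all names here are hypothetical, not AURA's actual code):

```python
class ProviderError(Exception):
    """Raised by a provider on rate limits, timeouts, or missing API keys."""

def template_review(code: str) -> str:
    # Last-resort fallback: a templated checklist needing no API key.
    return "TEMPLATE: manual review checklist generated"

class ReviewClient:
    def __init__(self, providers):
        self.providers = providers  # ordered callables: code -> review text

    def review(self, code: str) -> str:
        for provider in self.providers:
            try:
                return provider(code)
            except ProviderError:
                continue  # fall through to the next provider
        return template_review(code)  # graceful degradation

# Stub providers simulating one failing and one healthy backend:
def flaky_openai(code):
    raise ProviderError("rate limited")

def working_claude(code):
    return "CLAUDE: looks fine"

client = ReviewClient([flaky_openai, working_claude])
print(client.review("def f(): pass"))  # → CLAUDE: looks fine
```

The ordering of the provider list doubles as the preference order, so switching the primary model is a one-line configuration change.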
2. Test File Matching & Coverage Calculation
- Challenge: Accurately matching generated test files to source files across different naming conventions and directory structures
- Solution: Developed sophisticated pattern matching algorithms that handle various test file naming patterns (test_*.py, *.test.js, *Test.java, etc.)
- Learning: Coverage calculation needed to account for both database-stored tests and file system tests
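One way to implement that matching is a table of per-convention regexes that map a test filename back to its source stem; the pattern set below is an illustrative assumption:

```python
import re
from pathlib import PurePath
from typing import Optional

TEST_PATTERNS = [
    re.compile(r"^test_(?P<stem>.+)\.py$"),      # test_utils.py   -> utils
    re.compile(r"^(?P<stem>.+)\.test\.[jt]s$"),  # utils.test.js   -> utils
    re.compile(r"^(?P<stem>.+)Test\.java$"),     # UtilsTest.java  -> Utils
]

def source_stem(test_path: str) -> Optional[str]:
    """Return the source-file stem a test file covers, or None if unmatched."""
    name = PurePath(test_path).name
    for pat in TEST_PATTERNS:
        m = pat.match(name)
        if m:
            return m.group("stem")
    return None

print(source_stem("tests/test_parser.py"))  # → parser
print(source_stem("src/Button.test.tsx"))   # .tsx not in this table → None
```

A production version would also walk candidate source directories and verify the stem actually resolves to an existing file before counting it toward coverage.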
3. Issue Data Persistence
- Challenge: Issues were being detected but not properly saved to the database, causing the dashboard to show no data

- Solution: Implemented robust error handling in issue saving logic, added fallback to extract issues from analysis_result JSON, and improved data validation
- Learning: Always validate data structures and handle edge cases (None values, enum types, etc.)
4. Real-time Coverage Updates
- Challenge: Coverage percentage wasn't updating immediately after accepting tests
- Solution: Added proper data refresh mechanisms with delays for file system updates, parallel data loading, and improved test-to-source file matching
- Learning: File system operations need time to propagate; async operations need proper sequencing
5. E2E Test Integration
- Challenge: E2E tests weren't being counted in overall coverage and weren't being saved with proper file naming
- Solution: Enhanced test file naming to include "e2e" identifiers, improved CodeAnalysis creation for tests without analysis_id, and updated coverage calculation to include both unit and E2E tests
- Learning: Test type preservation throughout the generation and saving pipeline is critical
6. Frontend-Backend Data Synchronization
- Challenge: Dashboard showing "No data available" even when API was returning data correctly
- Solution: Added comprehensive error logging, API response interceptors, and improved error handling in the frontend
- Learning: Silent failures in API calls need explicit error handling and user feedback
7. Database Query Optimization
- Challenge: Complex queries for issues by type and severity were slow and sometimes returned None values
- Solution: Added proper filtering for None values, normalized data types, and optimized query structure
- Learning: Always filter None values in database queries and normalize data before aggregation
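That normalization step can be illustrated in plain Python: drop None severities and coerce enum-or-string values to one canonical form before counting. The `Severity` enum and row shape are assumptions:

```python
from collections import Counter
from enum import Enum

class Severity(Enum):
    LOW = "low"
    HIGH = "high"

def count_by_severity(rows) -> dict:
    """rows: (issue_id, severity) pairs; severity may be None, str, or Severity."""
    counts = Counter()
    for _issue_id, sev in rows:
        if sev is None:
            continue                # skip unclassified issues instead of crashing
        if isinstance(sev, Severity):
            sev = sev.value         # normalize enum -> plain string
        counts[sev.lower()] += 1    # normalize case before aggregating
    return dict(counts)

rows = [(1, "HIGH"), (2, Severity.LOW), (3, None), (4, "high")]
print(count_by_severity(rows))  # → {'high': 2, 'low': 1}
```

In the actual app the equivalent filtering would live in the SQL/ORM query (e.g. a `WHERE severity IS NOT NULL` clause), but the principle is the same.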
Accomplishments that we're proud of
We're incredibly proud of what we've built in this hackathon:
Innovative Solutions
- Regression Prediction: Built ML-based models that predict potential issues before they occur
- Multi-Language Support: Extensible architecture supporting Python, JavaScript/TypeScript, Java, and more
- Intelligent Test Generation: AI-powered test creation that understands code context and generates meaningful tests
- Automated Actions: System that not only detects issues but takes action to fix them
Beautiful User Experience
- Modern, intuitive dashboard with dark theme
- Real-time visualizations using Recharts
- Responsive design that works on all screen sizes
- Comprehensive file viewer and test preview modals
Technical Excellence
- Clean, modular codebase with proper separation of concerns
- Comprehensive error handling and fallback mechanisms
- RESTful API with automatic OpenAPI documentation
- Type-safe frontend with TypeScript
- Database models with proper relationships and constraints
What we learned
This hackathon was an incredible learning experience:
Technical Learnings
AI Integration Best Practices
- How to handle multiple AI providers with different APIs
- Implementing graceful fallbacks when AI services are unavailable
- Optimizing prompts for code analysis and test generation
- Managing API rate limits and costs
Full-Stack Development
- Building scalable REST APIs with FastAPI
- Creating responsive, modern UIs with React and TypeScript
- Managing state and data flow in complex applications
- Implementing real-time updates and data synchronization
Database Design
- Designing schemas for complex relationships (analyses, issues, tests, predictions)
- Optimizing queries for performance
- Handling data migration and schema evolution
- Extracting data from JSON fields when direct relationships fail
Code Analysis & Parsing
- Working with AST (Abstract Syntax Trees) for code parsing
- Understanding different language conventions and test file patterns
- Building language-agnostic analysis systems
- Matching test files to source files across various naming conventions
Machine Learning Integration
- Building regression prediction models
- Feature engineering for code quality metrics
- Risk scoring and prioritization algorithms
What's next for AURA — Autonomous Unified Review Agent
- IDE Integration — VS Code and IntelliJ plugins for real-time inline suggestions
- Multi-Language Support Expansion — broader coverage beyond Python, JavaScript/TypeScript, and Java
- Team Collaboration Features
- Enhanced AI & ML Models — Fine-tuned models and custom regression prediction
Built With
- javascript
- node.js
- openai
- postgresql
- python
- react


