Inspiration
The inspiration for AI Examiner came from recognizing a critical pain point in educational institutions: the time burden of manual exam grading. Teachers spend countless hours evaluating answer sheets, often struggling to maintain consistency across hundreds of submissions while facing:
- Manual grading fatigue leading to inconsistent evaluation standards
- Delays in providing timely feedback to students
- Difficulty in identifying learning gaps across the student population
- Administrative overhead in organizing and tracking evaluation results
What it does
AI Examiner is an intelligent answer evaluation system that uses Google's Gemini AI to automatically evaluate student answers against model answers, providing detailed feedback, marks, and improvement suggestions.
Key Features:
- AI-powered evaluation using Google Gemini
- PDF document support for model & student answers
- Gemini Vision API for image & handwritten text analysis
- Comprehensive feedback with strengths & improvement areas
- Automatic grading based on percentage scores
- Teacher & student management
- Evaluation history with filtering & search
- Export evaluations to PDF reports
How we built it
Tech Stack:
Frontend:
- React 18.2.0 - UI framework
- React Router DOM - Navigation
- Axios - HTTP requests
- React Icons - UI icons
- CSS3 - Responsive styling
Backend:
- Flask 3.0.0 - Python web framework
- MongoDB 7.0 - NoSQL database
- Google Gemini AI - Answer evaluation & vision analysis
- Google Gemini Vision API - Image & handwritten text recognition
- PyPDF2 3.0.1 - PDF processing
- pdf2image 1.16.3 - PDF to image conversion
- Pillow 9.5.0 - Image processing
Infrastructure:
- Docker & Docker Compose
- vercel
- Railway (deployment)
Architecture:
- React frontend handles file uploads & UI
- Flask REST API processes requests
- PDF2Image converts PDFs to images
- Gemini Vision API analyzes images & extracts text
- Gemini AI evaluates answers
- MongoDB stores evaluations & user data
Challenges we ran into
- AI Consistency→ Solved with structured prompts and validation rules
- PDF Processing→ Implemented robust parsing with fallback mechanisms
- Scalability→ Added async queuing and rate limiting for batch operations
- Large File Uploads→ Implemented chunked uploads with progress tracking
- Real-time Status→ Added WebSocket for live evaluation progress
- Database Performance→ Optimized queries with indexing and caching
- Production Deployment→ Used NixPacks for reproducible Docker builds
Accomplishments that we're proud of
- 95%+ consistency in evaluation scores
- Reduced grading time by 90%
- Handles 100+ concurrent evaluations
- Fully automated workflow (upload → evaluate → report)
- Production-ready deployment on Railway
- Non-technical UI - teachers need no training
- 40% reduction in API token costs
What we learned
- Full-stack web development with Flask and React
- Prompt engineering for reliable AI outputs
- PDF processing and document handling
- Asynchronous processing for scalability
- Docker containerization and cloud deployment
- Database optimization and query tuning
- Real-time feedback systems design
What's next for Ai-examiner
Short-term:
- Custom rubric builder per exam
- Multi-language support
- PDF/Excel export of results
Medium-term:
- Mobile app for teachers
- Plagiarism detection integration
- Peer review module
Long-term:
- LMS platform integrations (Canvas, Blackboard)
- Predictive analytics for student success
- Collaborative grading features
Log in or sign up for Devpost to join the conversation.