Problem Statement: In hackathons, competitions, and classrooms, evaluating many projects or assignments is time-consuming, subjective, and inconsistent. Judges may miss important details, and participants don't always receive constructive feedback.
Solution: Evaluvator is a cloud-powered platform that uses Google Cloud's AI, Natural Language Processing (NLP), and scalable backend services to evaluate submitted projects (code, documents, or presentations). It provides:
- Instant scoring based on predefined rubrics
- AI-generated feedback to help participants improve
- Data visualizations for overall performance trends
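Rubric-based scoring can be illustrated with a small sketch. The criterion names, weights, and 0-10 scale below are assumptions for illustration, not Evaluvator's actual configuration:

```python
# Illustrative rubric: equal weights over the four criteria mentioned
# in the evaluation metrics. Real rubrics would be configurable.
RUBRIC = {
    "clarity": 0.25,
    "creativity": 0.25,
    "impact": 0.25,
    "feasibility": 0.25,
}

def weighted_score(criterion_scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10) into one weighted total."""
    return sum(RUBRIC[name] * score for name, score in criterion_scores.items())

print(weighted_score({"clarity": 8, "creativity": 7, "impact": 9, "feasibility": 6}))  # 7.5
```

Because the weights sum to 1, the total stays on the same 0-10 scale as the individual criteria, which keeps scores comparable across submissions.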
How It Works:
1. Submissions are uploaded to Google Cloud Storage.
2. The backend (Cloud Functions and Firebase) processes the data.
3. NLP and ML models analyze project descriptions, code quality, or pitch content.
4. Evaluation criteria (clarity, creativity, impact, feasibility) are scored.
5. Results and feedback are shown in a clean dashboard.
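The processing step above can be sketched as a single function. In production this logic would run in a Cloud Function triggered by a Cloud Storage upload and call a Vertex AI endpoint for the real analysis; here the model call is stubbed with a simple word-count heuristic so the flow is runnable locally, and all function and field names are illustrative assumptions:

```python
CRITERIA = ("clarity", "creativity", "impact", "feasibility")

def analyze_description(text: str) -> dict[str, float]:
    """Stand-in for the NLP model: score each criterion 0-10.
    A real deployment would call a deployed Vertex AI endpoint here."""
    length_signal = min(len(text.split()) / 50, 1.0)  # crude length proxy
    return {c: round(10 * length_signal, 1) for c in CRITERIA}

def evaluate_submission(description: str) -> dict:
    """Produce the result document the dashboard would display."""
    scores = analyze_description(description)
    return {
        "scores": scores,
        "total": round(sum(scores.values()) / len(scores), 1),
        "feedback": ("Expand the project description with more detail."
                     if len(description.split()) < 50
                     else "Well-detailed submission."),
    }

result = evaluate_submission("A cloud platform that scores hackathon projects.")
```

Keeping the scoring logic in a plain function like this makes it easy to unit-test locally before wiring it to the storage trigger and dashboard.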
Tech Stack:
- Google Cloud Storage
- Firebase for backend and authentication
- Cloud Functions
- Vertex AI for ML model deployment
- BigQuery for analytics
- Flutter (or React) frontend for the UI
Impact: Evaluvator ensures fair, fast, and insightful evaluations, making it well suited for hackathons, classrooms, and even internal company idea challenges.
Built With
- cloud
- platform