Problem Statement: In hackathons, competitions, or classrooms, evaluating multiple projects or assignments is often time-consuming, subjective, and inconsistent. Judges may miss important details, and participants don’t always receive constructive feedback.

Solution: Evaluvator is a cloud-powered platform that uses Google Cloud’s AI, Natural Language Processing (NLP), and scalable backend services to evaluate submitted projects (code, documents, or presentations). It provides:

Instant scoring based on predefined rubrics

AI-generated feedback to help participants improve

Data visualizations for overall performance trends

How It Works:

  1. Submissions are uploaded to Google Cloud Storage.

  2. Backend (using Cloud Functions & Firebase) processes the data.

  3. NLP & ML models analyze project descriptions, code quality, or pitch content.

  4. Evaluation metrics (clarity, creativity, impact, feasibility) are scored.

  5. Results and feedback are shown in a clean dashboard.
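The scoring in step 4 could be sketched as a weighted rubric over the four metrics. This is a minimal illustration only: the equal weights, the 0-10 scale, and the feedback threshold are assumptions for the sketch, not Evaluvator’s actual configuration.

```python
# Sketch of rubric-based scoring (step 4). Metric names come from the
# rubric above; weights, scale, and the weak-metric threshold are
# illustrative assumptions.

RUBRIC_WEIGHTS = {
    "clarity": 0.25,
    "creativity": 0.25,
    "impact": 0.25,
    "feasibility": 0.25,
}

def score_submission(metric_scores: dict) -> dict:
    """Combine per-metric scores (0-10) into a weighted total and
    flag weak metrics as targets for AI-generated feedback."""
    total = sum(RUBRIC_WEIGHTS[m] * metric_scores[m] for m in RUBRIC_WEIGHTS)
    weak = [m for m, s in metric_scores.items() if s < 5]
    return {
        "total": round(total, 2),
        "feedback_targets": weak,  # metrics the feedback model should address
    }

result = score_submission(
    {"clarity": 8, "creativity": 6, "impact": 7, "feasibility": 4}
)
# weighted total: 0.25 * (8 + 6 + 7 + 4) = 6.25; "feasibility" flagged
```

In the real pipeline, the per-metric scores would come from the NLP/ML analysis in step 3 rather than being supplied by hand.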

Tech Stack:

Google Cloud Storage

Firebase for backend & authentication

Cloud Functions

Vertex AI for ML model deployment

BigQuery for analytics

Flutter (or React) frontend for UI
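The performance-trend analytics that BigQuery would compute at scale can be illustrated with a local stand-in. The rows and field names below are invented example data, and the aggregation mimics a simple GROUP BY over evaluation results:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical evaluation rows -- in production these would live in a
# BigQuery table populated by the scoring pipeline.
results = [
    {"team": "alpha", "metric": "clarity", "score": 8},
    {"team": "alpha", "metric": "impact", "score": 6},
    {"team": "beta", "metric": "clarity", "score": 5},
    {"team": "beta", "metric": "impact", "score": 9},
]

def metric_averages(rows):
    """Average score per metric across all submissions -- the kind of
    trend a dashboard chart would visualize."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row["metric"]].append(row["score"])
    return {metric: mean(scores) for metric, scores in buckets.items()}

averages = metric_averages(results)
# clarity averages 6.5, impact averages 7.5 across the two teams
```

The same aggregation in BigQuery would be a one-line `SELECT metric, AVG(score) ... GROUP BY metric` query feeding the dashboard.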

Impact: Evaluvator delivers fair, fast, and insightful evaluations, making it well suited for hackathons, classrooms, and internal company idea challenges.
