Inspiration

During hackathons and project showcases, judges often face the challenge of evaluating numerous projects efficiently and fairly. We noticed that manual evaluation processes can be time-consuming and sometimes subjective. This inspired us to create My Eval Buddy: an AI-powered assistant designed to streamline the evaluation process by providing quick, unbiased, and comprehensive assessments of projects.

What it does

Upload a zip archive or share a git link to have the project evaluated. My Eval Buddy generates insights based on dynamic metric choices, and you can add a code summary or PPT alongside the code for a better-contextualized evaluation.
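A minimal sketch of what that ingestion step could look like (the function name is hypothetical; we assume only the standard library plus a git CLI on the PATH):

```python
import subprocess
import tempfile
import zipfile
from pathlib import Path

def ingest_project(zip_path: str | None = None, git_url: str | None = None) -> Path:
    """Unpack an uploaded zip or shallow-clone a git repo; return the project root."""
    workdir = Path(tempfile.mkdtemp(prefix="evalbuddy_"))
    if zip_path:
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(workdir)
    elif git_url:
        # A shallow clone keeps ingestion fast even for large repositories.
        subprocess.run(["git", "clone", "--depth", "1", git_url, str(workdir)], check=True)
    else:
        raise ValueError("provide either a zip archive or a git URL")
    return workdir
```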

How we built it

Python for both the backend and the front end, with Streamlit for the UI.
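As a rough sketch of how such a Streamlit front end might be wired up (the widget layout and metric names are illustrative assumptions, not the app's actual code):

```python
import streamlit as st

st.title("My Eval Buddy")

# Project input: either an uploaded zip or a git link.
uploaded_zip = st.file_uploader("Upload project zip", type="zip")
git_url = st.text_input("...or paste a git URL")

# Dynamic metric choices drive what the evaluation reports on.
metrics = st.multiselect(
    "Evaluation metrics",
    ["Code quality", "Originality", "Completeness", "Documentation"],
)

# Optional extra context to evaluate alongside the code.
context_file = st.file_uploader("Optional code summary or PPT", type=["md", "txt", "pptx"])

if st.button("Evaluate") and (uploaded_zip or git_url):
    with st.spinner("Reading code and generating insights..."):
        pass  # hand off to the backend evaluation pipeline here
```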

Challenges we ran into

Faster execution: reading the whole codebase, summarizing it, and generating insights takes ~45 seconds. We also had too many tokens to process, had to optimize prompts to prevent hallucinations, and had to wrangle state management.
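One common way to tame the token problem is to chunk source files before summarization; here is a minimal sketch under assumed values (the token budget and the 4-characters-per-token ratio are rough heuristics, not measured numbers, and restricting to `.py` files is an assumption):

```python
from pathlib import Path

MAX_TOKENS_PER_CHUNK = 3000   # assumed per-call budget
CHARS_PER_TOKEN = 4           # rough heuristic for English text and code

def chunk_source_tree(root: Path) -> list[str]:
    """Greedily pack source files into chunks that fit the token budget."""
    limit = MAX_TOKENS_PER_CHUNK * CHARS_PER_TOKEN
    chunks, current = [], ""
    for path in sorted(root.rglob("*.py")):
        text = f"# file: {path}\n{path.read_text(errors='ignore')}\n"
        # Start a new chunk when adding this file would blow the budget.
        if current and len(current) + len(text) > limit:
            chunks.append(current)
            current = ""
        current += text
    if current:
        chunks.append(current)
    return chunks
```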

Accomplishments that we're proud of

Picking up LLM basics, RAG, and prompt engineering; shipping an MVP that was complete enough to be runnable, trading feature richness for feasibility and a workable UX. We also got a head start on learning further.

What we learned

LLM basics, RAG, and prompt engineering.
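As an illustration of the prompt-engineering side, a template along these lines (the exact wording is an assumption) constrains the model to the chosen metrics and a fixed JSON shape, which helps curb hallucinations:

```python
EVAL_PROMPT = """You are a hackathon judge. Evaluate the project strictly on the
metrics listed below, using ONLY the provided code and context. If something
cannot be determined from the input, say "insufficient evidence" instead of guessing.

Metrics: {metrics}

Code and context:
{chunks}

Respond with JSON only: {{"scores": {{<metric>: 1-10}}, "insights": [<string>]}}
"""

def build_prompt(metrics: list[str], chunks: list[str]) -> str:
    # Join the pre-chunked code so the whole prompt stays within the token budget.
    return EVAL_PROMPT.format(metrics=", ".join(metrics), chunks="\n\n".join(chunks))
```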

What's next for My Eval Buddy

Extend the tool to evaluate anything, such as grading assignments, and add an additional corpus that RAG can draw on to improve performance without the overhead of retraining.
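A minimal sketch of that retrieval step (sentence-transformers and the model name are illustrative choices, not a committed design):

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative embedding model; any sentence-level encoder would do.
model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_context(query: str, corpus: list[str], top_k: int = 3) -> list[str]:
    """Return the corpus passages most relevant to the query, to prepend to the prompt."""
    corpus_emb = model.encode(corpus, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, corpus_emb, top_k=top_k)[0]
    return [corpus[hit["corpus_id"]] for hit in hits]
```

Because retrieval happens at query time, growing the corpus improves answers without touching model weights, which is exactly the retraining overhead we want to avoid.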

Built With
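
python, streamlit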
