Inspiration
We realized that judges are doing all this work in addition to their day jobs!
What it does
Summarizes hackathon submissions and optionally scores them against judge-defined criteria - acting as a "shadow judge" that provides an initial filtering pass for busy human judges.
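The scoring step could be sketched as a simple weighted rubric. This is an illustrative sketch only: the criteria names, weights, and example scores below are assumptions, not the project's actual rubric.

```typescript
// Hypothetical sketch of the "shadow judge" scoring step.
// Criteria names, weights, and scores are illustrative assumptions.
const CRITERIA_WEIGHTS: Record<string, number> = {
  originality: 0.4,
  technicalExecution: 0.35,
  presentation: 0.25,
};

// Combine per-criterion scores (0-10) into one weighted total.
function shadowScore(scores: Record<string, number>): number {
  return Object.entries(scores).reduce(
    (total, [name, score]) => total + CRITERIA_WEIGHTS[name] * score,
    0,
  );
}

// Example: a submission scored 8 / 7 / 9 on the three criteria.
const total = shadowScore({
  originality: 8,
  technicalExecution: 7,
  presentation: 9,
});
console.log(total.toFixed(2)); // "7.90"
```

A weighted sum like this makes the "initial filtering" step easy to explain to human judges: each criterion's contribution to the total is visible.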
How we built it
Frontend built with React and Tailwind, deployed on Lovable. Backend: n8n manages the entire agentic MCP workflow, calling OpenAI (speech-to-text) and Anthropic (summarization and scoring), with storage on AWS S3 buckets.
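The flow the n8n workflow orchestrates could be sketched in plain TypeScript roughly as follows. All four step functions are stand-ins for the real OpenAI, Anthropic, and S3 calls (stubbed so the sketch is self-contained); the names and the bucket path are assumptions, not real SDK calls.

```typescript
// Rough sketch of the pipeline the n8n workflow orchestrates.
// The four step functions below are hypothetical stand-ins for the
// real OpenAI, Anthropic, and S3 integrations.

type Judged = { transcript: string; summary: string; score: number };

// Stub: OpenAI speech-to-text on the submitted pitch audio.
async function transcribeAudio(audioUrl: string): Promise<string> {
  return `transcript of ${audioUrl}`;
}

// Stub: Anthropic summarization of the transcript.
async function summarize(transcript: string): Promise<string> {
  return `summary: ${transcript.slice(0, 40)}`;
}

// Stub: Anthropic scoring against judge-defined criteria.
async function scoreSubmission(summary: string): Promise<number> {
  return 7.5;
}

// Stub: persist the judged result to an AWS S3 bucket.
async function uploadToS3(key: string, body: Judged): Promise<void> {
  console.log(`would PUT s3://hacktrack/${key}`);
}

// The end-to-end flow: transcribe -> summarize -> score -> store.
async function judgeSubmission(audioUrl: string): Promise<Judged> {
  const transcript = await transcribeAudio(audioUrl);
  const summary = await summarize(transcript);
  const score = await scoreSubmission(summary);
  const result = { transcript, summary, score };
  await uploadToS3(`${encodeURIComponent(audioUrl)}.json`, result);
  return result;
}
```

Each stub corresponds to one node in the n8n workflow, so the chain `judgeSubmission("pitch-demo.mp3")` mirrors the agentic flow end to end.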
Challenges we ran into
Learning unfamiliar APIs and discovering that our assumptions weren't always right (e.g. Minimax does text-to-speech, but NOT speech-to-text) - learning and iterating on the fly.
We had Auth0 and Minimax in our architecture, but ran out of time to integrate them.
Accomplishments that we're proud of
A minimalist-looking front end ... and the first time the flow worked end to end, it felt like magic!
What we learned
How much can be accomplished by skillfully connecting very powerful modern AI tools.
What's next for HackTrack - the busy Hackathon Judge's Best Friend
Validate by testing against a control panel of human judges. And then: pioneering the JaaS (Judging as a Service) industry!
Built With
- amazon-web-services
- anthropic
- lovable
- n8n
- nextjs
- openai
- s3