About the Project

We kept seeing the same problem: incredibly strong engineers getting filtered out because their GitHub didn’t tell the right story fast enough. Recruiters skim for seconds. The signal is there, but buried. That felt broken.

So we built a system that generates a custom, job-specific GitHub portfolio for every application. You paste a job description, and an agentic AI pipeline analyzes your repositories, ranks the most relevant work, and generates a clean, recruiter-friendly page that explains exactly why you’re a fit.

What makes this different

Instead of a static portfolio, we use agentic AI to:

  • Parse job requirements and extract key skills
  • Evaluate repositories using embeddings, code signals, and activity
  • Generate concise, role-aligned summaries for each project
  • Continuously refine outputs through multi-step reasoning agents
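The chained-agent flow above can be sketched as plain function composition. This is a minimal illustration, not our production code: `call_llm` is a hypothetical stand-in for whatever model client the pipeline uses, and the role-specific prompts are simplified.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: in the real pipeline this would call an LLM API.
    return f"<response to: {prompt[:40]}...>"

def extract_skills(job_description: str) -> str:
    # Agent 1: parse the job description into key skills.
    return call_llm(f"List the key skills in this job description:\n{job_description}")

def summarize_repo(repo_readme: str, skills: str) -> str:
    # Agent 2: write a recruiter-facing summary aligned to those skills.
    return call_llm(f"Summarize this project for a recruiter, emphasizing {skills}:\n{repo_readme}")

def refine(summary: str, skills: str) -> str:
    # Agent 3: tighten the draft and re-check alignment.
    return call_llm(f"Tighten this summary and check alignment with {skills}:\n{summary}")

def run_pipeline(job_description: str, repo_readme: str) -> str:
    skills = extract_skills(job_description)
    draft = summarize_repo(repo_readme, skills)
    return refine(draft, skills)
```

Giving each agent one narrow role is what moved us away from generic, single-prompt output.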

Then we layer in decentralized AI grading on Solana.

Each portfolio is paired with a verifiable scoring process:

  • Models evaluate relevance, quality, and alignment
  • Scores and metadata are anchored on Solana for transparency
  • Anyone can verify that outputs were not manipulated
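The verification step reduces to a commitment scheme: hash the full evaluation record deterministically, anchor only the digest on-chain, and let anyone recompute it later. A minimal sketch (the field names in `record` are illustrative, and the Solana write itself is out of scope here):

```python
import hashlib
import json

def score_commitment(scores: dict) -> str:
    """Deterministically hash an evaluation record so the digest can be
    anchored on Solana and re-verified against the published record."""
    canonical = json.dumps(scores, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

record = {"repo": "user/project", "relevance": 0.91, "quality": 0.84, "model": "v1"}
digest = score_commitment(record)
# A verifier recomputes the digest from the published record; any mismatch
# with the on-chain value means the scores were altered after anchoring.
```

Canonical JSON (sorted keys, fixed separators) matters: two byte-identical inputs are the only way two parties get the same hash.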

This adds something most hiring tools lack: trust. You get not just what is shown, but proof of how it was evaluated.

How we built it

  • GitHub OAuth for data ingestion
  • Embedding pipeline to map job descriptions to repositories
  • Agent orchestration layer to rank, summarize, and explain projects
  • Lightweight frontend for fast, shareable portfolio pages
  • Solana integration to store hashes of evaluation results and scoring proofs
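The embedding pipeline's core operation is scoring each repository against the job description and sorting. The toy sketch below substitutes a bag-of-words vector for the learned embeddings we actually use, purely to show the shape of the ranking step:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real pipeline uses a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_repos(job_description: str, repos: dict[str, str]) -> list[tuple[str, float]]:
    # Score every repo description against the job description, best first.
    jd = embed(job_description)
    scored = [(name, cosine(jd, embed(desc))) for name, desc in repos.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

Swapping `embed` for a real sentence-embedding model leaves the ranking logic unchanged.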

At a high level, we treat each application like an optimization problem:

maximize relevance(repo, job), subject to clarity and trust

Challenges we faced

  • Signal extraction: GitHub data is noisy. We had to balance stars, commits, and semantic relevance without overfitting to any one metric
  • Summary quality: Early outputs sounded generic. We improved this by chaining agents with specific roles instead of one monolithic prompt
  • Latency: Multi-step agent workflows can be slow. We optimized with caching and parallel evaluation
  • Trust layer design: Figuring out what to store on Solana without overloading the system required careful tradeoffs between cost and verifiability
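The latency fix combines two standard pieces: memoize each (repo, job) evaluation so repeat applications skip redundant model calls, and run independent evaluations concurrently. A minimal sketch with a placeholder scorer standing in for the slow agent call:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=1024)
def evaluate_repo(repo_name: str, job_hash: str) -> float:
    # Stand-in for a slow agent evaluation. Keying the cache on
    # (repo, job) means identical requests never re-run the agents.
    return len(repo_name) / 100  # placeholder score

def evaluate_all(repos: list[str], job_hash: str) -> dict[str, float]:
    # Repo evaluations are independent, so they can run in parallel.
    with ThreadPoolExecutor(max_workers=8) as pool:
        scores = pool.map(lambda r: evaluate_repo(r, job_hash), repos)
        return dict(zip(repos, scores))
```

Threads suffice here because the real work is I/O-bound model calls, not CPU-bound computation.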

What we learned

  • Relevance beats completeness. Showing fewer projects, explained better, wins
  • Agentic workflows outperform single-pass generation for structured tasks
  • Trust is a feature. Verifiability changes how people perceive AI output

This started as a simple idea about portfolios, but quickly became something bigger: a system that helps candidates prove they are qualified, not just claim it.
