SkillScan — AI Career Intelligence for Students


🌟 Inspiration

Every semester, thousands of students graduate with degrees, portfolios, and ambition — and still can't get hired.

We watched our peers apply to 80, 100, sometimes 150 jobs. The rejections kept coming. The silence was deafening. No email explaining what went wrong. No feedback. No direction.

What broke us wasn't the rejection rate. It was the loop — apply, get rejected, apply again, get rejected again — with no data, no signal, and no way to improve.

We asked one question that changed everything:

"What if a student could know — before applying — exactly what's stopping them from getting hired?"

That question became SkillScan.

We weren't inspired by another resume tool or a job board. We were inspired by a deeply human problem: talented people being filtered out not because they're unqualified, but because no one ever told them what to fix.


⚙️ What It Does

SkillScan is an AI Career Intelligence System — not a chatbot, not a resume formatter. It answers one surgical question:

"What exactly is standing between you and this job?"

Here's what happens when a student uses SkillScan:

1. **Resume Analysis.** The student uploads their resume and selects a target role. SkillScan's AI engine parses it, extracts structured skills, and benchmarks them against real job market standards.

2. **Gap Score (0–100).** The system produces an objective readiness score — not a vague "good job" or "needs improvement," but a precise number with the reasoning behind it.

3. **Skill Gap Detection.** AI compares the student's profile against the target role and surfaces the exact missing skills — ranked by hiring impact, so students fix the most critical things first, not the easiest ones.

4. **30-Day Roadmap.** A personalized, week-by-week action plan is generated. Not generic advice. A sequenced plan built around the student's specific gap profile.

5. **Job Match Percentage.** Students can paste any job description and instantly see how well they match — before spending time on a cover letter.

6. **AI Mock Interviews.** Role-specific interview questions are generated and adapted dynamically. The student's answers are scored in real time with rubric-based feedback.

7. **Salary Prediction.** Based on current skill level and projected post-plan readiness, students see what they're worth — and what they could be worth.

8. **LinkedIn Optimization.** AI surfaces specific, actionable changes to make the student's profile appear in recruiter searches for their target role.

Resume → Score → Gap → Roadmap → Interview → Apply → Improve ↺

The loop is continuous. Every application outcome feeds back into the system and sharpens the next plan.
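
Of the steps above, the job-match percentage (step 5) is the simplest to illustrate. Here is a minimal sketch in the project's TypeScript stack, assuming skill lists have already been extracted from the resume and the job description; the function name and the unweighted overlap metric are illustrative only, since the real system ranks skills by hiring impact rather than treating them equally:

```typescript
// Hypothetical sketch: a naive job-match percentage from two skill lists.
// Case-insensitive overlap; the production scoring is impact-weighted.
function jobMatch(resumeSkills: string[], jobSkills: string[]): number {
  if (jobSkills.length === 0) return 100; // nothing required, trivially matched
  const have = new Set(resumeSkills.map((s) => s.toLowerCase()));
  const matched = jobSkills.filter((s) => have.has(s.toLowerCase())).length;
  return Math.round((matched / jobSkills.length) * 100);
}

// Example: 3 of 4 required skills present → 75
jobMatch(["Python", "SQL", "Git"], ["python", "sql", "git", "docker"]);
```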


🛠️ How We Built It

We built SkillScan from scratch in a focused sprint, making deliberate technical decisions at every layer.

**Frontend — React + TypeScript + Tailwind CSS.** We chose React with TypeScript for type safety and a component architecture that could scale. Tailwind gave us a clean, responsive UI without fighting CSS specificity wars. Every screen was designed around one principle: clarity over clutter.

**Backend & Database — Supabase.** Supabase handled authentication, real-time data, and storage. Its Postgres foundation let us query skill gap data relationally — connecting resumes, roles, skills, and roadmaps with proper structure, not JSON blobs.

**AI Engine — Gemini AI.** This is where SkillScan's intelligence lives. We didn't use Gemini as a single prompt-response chatbot. We architected it as a multi-role decision engine:

| AI Role | Prompt Design |
| --- | --- |
| Resume Parser | Structured extraction with schema-forced output |
| Gap Ranker | Comparative scoring against role benchmarks |
| Roadmap Generator | Sequenced output constrained by timeline and priority |
| Interview Simulator | Adaptive question generation with rubric-based scoring |
| Salary Predictor | Range estimation from skill-market mapping |
Each role had its own prompt, its own output format, and its own validation layer. We treated the AI like a team of specialists, not a generalist.
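
As an illustration of "its own output format and its own validation layer," here is a minimal sketch of what one specialist's validator could look like. The `GapItem` schema, field names, and function name are hypothetical, not the actual project code:

```typescript
// Hypothetical output schema for the Gap Ranker role.
interface GapItem {
  skill: string;
  impact: number; // 0–1, higher = more critical to the target role
}

// Reject anything that is not a well-formed array of GapItems,
// so malformed model output never reaches the UI.
function validateGapRanking(raw: string): GapItem[] | null {
  try {
    const parsed = JSON.parse(raw);
    if (!Array.isArray(parsed)) return null;
    for (const item of parsed) {
      if (typeof item.skill !== "string") return null;
      if (typeof item.impact !== "number" || item.impact < 0 || item.impact > 1) return null;
    }
    return parsed as GapItem[];
  } catch {
    return null; // not even valid JSON → reject, trigger a retry
  }
}
```

A `null` result signals the caller to retry the model with a stricter prompt rather than rendering garbage.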

**Architecture Decision — Why Not a Chatbot?** Early in development, we debated building a conversational interface. We rejected it deliberately. A chatbot puts the burden of asking the right questions on the student. SkillScan puts that burden on the AI. The student should only have to upload a resume and name a role. Everything else is computed, not conversed.


🧱 Challenges We Ran Into

1. **Making AI output consistent and structured.** The hardest engineering challenge wasn't calling the AI — it was making the AI return reliable, parseable output every time. Natural language responses varied wildly. We solved this by designing strict output schemas and validation layers that rejected malformed responses and retried with refined prompts.
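
The validate-and-retry loop described here can be sketched as follows. `callWithRetry`, the retry policy, and the prompt-tightening suffix are hypothetical stand-ins, with the real Gemini call abstracted behind a function parameter:

```typescript
// Sketch of a validate-and-retry wrapper around a model call.
// callModel stands in for the actual Gemini request; validate returns
// null on malformed output, which triggers a retry with a stricter prompt.
async function callWithRetry<T>(
  callModel: (prompt: string) => Promise<string>,
  validate: (raw: string) => T | null,
  basePrompt: string,
  maxRetries = 2
): Promise<T> {
  let prompt = basePrompt;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const raw = await callModel(prompt);
    const parsed = validate(raw);
    if (parsed !== null) return parsed;
    // Tighten the instruction on each failure.
    prompt = basePrompt + "\nReturn ONLY valid JSON matching the schema. No prose.";
  }
  throw new Error("Model output failed validation after retries");
}
```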

2. **Ranking skills by actual hiring impact.** Not all missing skills are equal. "Missing SQL" and "Missing Kubernetes" affect a Data Scientist's application very differently. Building a ranking model that reflected real hiring priorities — not just keyword frequency — required us to think carefully about how job descriptions signal priority, not just presence.
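
One way to encode "priority, not presence" is to weight a skill by the strongest section of the job description it appears in rather than by how often it appears. The section names and weights below are a simplified heuristic, not the production model:

```typescript
// Hypothetical impact heuristic: a skill listed under "Requirements"
// outranks one under "Nice to have", regardless of mention frequency.
type Section = "requirements" | "nice-to-have";

function impactScore(mentions: { section: Section }[]): number {
  const weights: Record<Section, number> = { requirements: 1.0, "nice-to-have": 0.3 };
  // Priority signal: the strongest section the skill appears in,
  // not the raw count of mentions.
  return Math.max(0, ...mentions.map((m) => weights[m.section]));
}
```

Under this heuristic, "Kubernetes" mentioned twice in a nice-to-have list still ranks below "SQL" mentioned once in the requirements.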

3. **Generating roadmaps that are actually followable.** AI-generated plans tend toward vague optimism. "Learn machine learning in a week" is not a plan. We constrained the roadmap generator with time-boxing, resource specificity, and progressive skill sequencing — forcing the output to be realistic and actionable rather than aspirational.
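
The three constraints named here (time-boxing, resource specificity, progressive sequencing) can be made machine-checkable. A sketch with hypothetical types and thresholds:

```typescript
// Hypothetical schema for one week of a generated 30-day roadmap.
interface RoadmapWeek {
  week: number;        // 1–4 for a 30-day plan
  skill: string;
  hoursBudget: number; // time-boxing: explicit weekly hours
  resources: string[]; // specificity: at least one named resource
}

// Reject plans that are vague, overloaded, or out of sequence.
function isFollowable(plan: RoadmapWeek[]): boolean {
  if (plan.length === 0) return false;
  return plan.every(
    (w, i) =>
      w.week === i + 1 &&                         // progressive sequencing, no gaps
      w.hoursBudget > 0 && w.hoursBudget <= 20 && // realistic weekly time-box
      w.resources.length > 0                      // at least one concrete resource
  );
}
```

A generated plan that fails this check would be rejected and regenerated, the same way malformed JSON is.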

4. **Keeping the UX simple when the system is complex.** SkillScan does a lot. The temptation was to show everything. We had to make hard decisions about what the student sees first, second, and never — because cognitive overload is the enemy of action.

5. **Cold start — no user data yet.** Personalization improves with data. At launch, we had none. We solved this by grounding all AI outputs in structured job market benchmarks rather than historical user behavior, so Day 1 results are still meaningfully accurate.


🏆 Accomplishments That We're Proud Of

We built a real product, not a demo. SkillScan isn't a slideshow of features. It's a working system that takes a real resume and produces a real, actionable career plan in under 60 seconds.

We made AI do specialized work. Instead of one general-purpose prompt, we designed seven distinct AI roles, each with its own prompt engineering, output schema, and validation. This multi-agent architecture is something we're genuinely proud of technically.

We solved the feedback problem. For the first time, a student can upload their resume, name their target role, and get the specific, ranked, actionable feedback that employers never give. That's not a feature. That's a gap we closed.

We kept the user experience honest. We resisted the temptation to show inflated scores or overpromise timelines. The gap score is real. The roadmap is challenging. The interview feedback is direct. Students deserve honesty, not false confidence.

We built it fast — and it works. Every feature in the submission is live. The AI pipeline runs end-to-end. The roadmap generates correctly. The mock interview scores answers. We're proud of the quality we shipped under time pressure.


📚 What We Learned

Prompt engineering is real engineering. We came in thinking AI integration was mostly API calls. We left knowing that the quality of AI output is almost entirely determined by how you design the prompt — the schema, the constraints, the examples, the failure handling. It deserves the same rigor as any other software component.

The problem frame matters more than the solution. We iterated three times on what SkillScan actually is before we wrote a line of code. Each reframe made the product sharper. The final version — an AI decision engine, not a chatbot — only emerged because we kept asking "what does the student actually need?"

Simplicity is harder than complexity. Adding features is easy. Removing them — deciding what a student should not have to think about — was the hardest design challenge we faced. Every time we simplified the UI, the product got more powerful.

Speed compounds. Small, focused builds — one feature fully working before the next one starts — consistently beat trying to build everything in parallel. We shipped faster by doing less at once.

Feedback loops are everything — for students and for us. The core insight of SkillScan — that without feedback you can't improve — turned out to be true for us as builders too. Our own sprint retrospectives, user tests, and prompt iteration cycles were the reason the product works.


🚀 What's Next for SkillScan — AI Career Intelligence for Students

SkillScan works. Now we scale it — and deepen it.

**Phase 1 — Accuracy (Next 30 Days).** Connect to live job posting APIs so gap detection reflects real-time market demand, not static benchmarks. The job market shifts. SkillScan should shift with it.

**Phase 2 — Personalization (Next 60 Days).** Build a learning layer from anonymized outcome data. When students who followed the roadmap get hired, that signal improves the plan for everyone who comes after them.

**Phase 3 — Institutional Partnerships.** Partner with colleges and bootcamps to deploy SkillScan at the cohort level. Career offices can see aggregate gap data across graduating classes and intervene with curriculum — not just individual students.

**Phase 4 — Recruiter Side.** Flip the interface. Let recruiters describe what they're looking for, and match them to students whose roadmap completion brings them to the threshold — not just students who already qualify.

**Phase 5 — Global Gap Map.** Aggregate anonymized skill gap data across cities, roles, and institutions to publish a public Hiring Gap Index — showing policymakers, educators, and employers where the talent-market disconnect is worst.

The long-term vision isn't a career tool. It's a labor market intelligence layer that makes hiring more signal-based and less noise-based — for everyone.


*SkillScan was built because we were tired of watching talented people fail silently.* *The gap isn't talent. The gap isn't effort.* **The gap is information — and we built the system that closes it.**
