## Inspiration

The job search process is often overwhelming, repetitive, and emotionally draining, especially for early-career engineers applying to dozens of roles with little feedback. We wanted to build something that transforms the job search from a passive, manual grind into a structured, feedback-driven system.

Jobawockeez was inspired by the idea that job applications should behave more like a reinforcement loop: analyze → tailor → evaluate → improve. Instead of blindly submitting resumes, the system treats each application as data and continuously optimizes for better alignment with job descriptions.

## What it does

Jobawockeez is an AI-powered resume tailoring and optimization engine.

Given:

- A raw job description
- A structured “master CV”

It:

1. Extracts structured requirements from the JD (skills, responsibilities, seniority).
2. Scores and matches resume content deterministically.
3. Selects the most relevant experience and projects.
4. Generates a tailored resume aligned with the role.
5. (Optionally) critiques and iteratively improves the result.

The system emphasizes explainability: it shows matched and missing skills so users understand alignment gaps rather than relying on black-box AI output.
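As a minimal sketch of that explainability idea, here is what a matched-vs-missing report can look like. All names and fields below are illustrative assumptions, not the actual Jobawockeez API, and the stubbed keyword spotter stands in for the real LLM extraction step:

```python
from dataclasses import dataclass, field

@dataclass
class Requirements:
    """Structured requirements pulled from a job description."""
    skills: list
    responsibilities: list = field(default_factory=list)

def extract_requirements(jd_text: str) -> Requirements:
    # In the real pipeline this is an LLM call returning strict JSON;
    # a naive keyword spot over a small known-skill list stands in here.
    known = ["python", "docker", "kubernetes", "sql"]
    return Requirements(skills=[s for s in known if s in jd_text.lower()])

def match_report(requirements: Requirements, cv_skills: list) -> dict:
    """Report matched vs. missing skills instead of one opaque score."""
    cv = {s.lower() for s in cv_skills}
    return {
        "matched": [s for s in requirements.skills if s in cv],
        "missing": [s for s in requirements.skills if s not in cv],
    }
```

A CV listing Python and Docker, run against a JD asking for Python and Kubernetes, would report `python` as matched and `kubernetes` as missing, which is exactly the alignment gap the user needs to see.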

## How we built it

The architecture is modular and agent-based:

- **JD Extractor (LLM-powered):** converts unstructured job descriptions into a strict JSON schema using prompt engineering and post-processing validation.
- **Matcher (deterministic scoring engine):** implements token-aware matching and weighted scoring to rank experiences and projects based on required skills, keywords, and quantified impact.
- **Resume Writer + Critic:** uses structured inputs to generate Markdown resumes and self-critique them iteratively.
- **CLI + pipeline orchestration:** orchestrates the full flow end-to-end while maintaining clean separation of concerns.
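A hedged sketch of the weighted-scoring idea behind the Matcher. The weights, entry fields, and digit-based "quantified impact" check are assumptions for illustration, not the project's actual values:

```python
def score_entry(entry: dict, required_skills: set,
                skill_weight: float = 2.0, impact_weight: float = 1.0) -> float:
    """Score one CV entry: required-skill overlap plus a bonus for quantified impact."""
    skills = {s.lower() for s in entry.get("skills", [])}
    overlap = len(skills & {s.lower() for s in required_skills})
    # Crude proxy for quantified impact: does the summary contain a number?
    quantified = any(ch.isdigit() for ch in entry.get("summary", ""))
    return skill_weight * overlap + impact_weight * (1 if quantified else 0)

def rank_entries(entries: list, required_skills: set, top_k: int = 3) -> list:
    """Rank entries deterministically; sorted() is stable, so ties keep input order."""
    return sorted(entries, key=lambda e: score_entry(e, required_skills),
                  reverse=True)[:top_k]
```

Because scoring is pure arithmetic over extracted fields, every ranking decision can be traced back to a weight and a matched skill, which is what makes the selection step explainable.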

We prioritized:

- Deterministic matching for transparency
- Strict schema enforcement for reliability
- Fallback extraction to prevent pipeline crashes

## Challenges we ran into

**LLM reliability & JSON formatting.** Models occasionally returned invalid JSON or wrapped it in extra text. We solved this with:

- Strict system prompts
- Post-processing validation
- Graceful fallback extraction
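A minimal sketch of that parse-validate-fallback chain. The schema keys here are assumptions, and the strict prompts do most of the work upstream; this is only the last line of defense:

```python
import json
import re

# Assumed schema keys for an extracted JD; the real schema is richer.
REQUIRED_KEYS = ("skills", "responsibilities", "seniority")

def _empty(key: str):
    return "" if key == "seniority" else []

def parse_jd_json(raw: str) -> dict:
    """Parse an LLM response, tolerating prose wrapped around the JSON."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Fallback: grab the first {...} block embedded in the text.
        found = re.search(r"\{.*\}", raw, re.DOTALL)
        try:
            data = json.loads(found.group(0)) if found else {}
        except json.JSONDecodeError:
            data = {}  # last resort: an empty but schema-shaped result
    if not isinstance(data, dict):
        data = {}  # e.g. the model returned a bare list or string
    # Validation: fill missing keys so downstream stages never crash.
    for key in REQUIRED_KEYS:
        data.setdefault(key, _empty(key))
    return data
```

The key property is that every code path returns a dict with the full key set, so the matcher never sees a `KeyError` no matter how badly the model misbehaves.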

**Substring matching issues.** Naive skill matching (e.g., “go” matching “mongodb”) caused false positives. We implemented token-aware, boundary-aware matching to fix this.
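For illustration, boundary-aware matching can be as small as a lookaround regex; this sketch is an assumption about the approach, not the project's exact code. Note that a plain `\b` boundary would mishandle skills like `c++` and `c#`, since `+` and `#` are non-word characters, which is why the character classes below include them:

```python
import re

def skill_in_text(skill: str, text: str) -> bool:
    """True only when `skill` appears as a whole token, so "go" never matches "mongodb"."""
    # Custom lookarounds instead of \b so punctuation-bearing skills
    # ("c++", "c#") still match as whole tokens.
    pattern = r"(?<![A-Za-z0-9+#])" + re.escape(skill) + r"(?![A-Za-z0-9+#])"
    return re.search(pattern, text, re.IGNORECASE) is not None
```

With this, "go" matches "written in Go" but not "mongodb", and "c" no longer fires on every mention of "c++".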

**Balancing AI and determinism.** Too much AI reduces explainability; too little reduces flexibility. We designed a hybrid system where:

- AI handles semantic extraction
- Deterministic logic handles scoring and selection

**Latency.** LLM calls introduced noticeable delay. We optimized prompts and model choice to reduce response time.

## Accomplishments that we're proud of

- Built a fully modular AI pipeline in under a day.
- Achieved structured JD extraction with schema validation and fallback logic.
- Designed a deterministic, explainable scoring engine.
- Created a system that mirrors how ATS and hiring managers evaluate resumes.
- Turned a traditionally frustrating process into an optimization problem.

## What we learned

- Prompt engineering is less about verbosity and more about constraint.
- Deterministic systems are essential when explainability matters.
- Hybrid AI + rule-based systems outperform pure LLM approaches in structured workflows.
- Designing around failure cases (invalid JSON, missing fields) makes systems production-ready.

## What's next for Jobawockeez

- Add feedback memory to track which tailored resumes perform best.
- Introduce scoring dashboards and alignment heatmaps.
- Add automatic gap recommendations (e.g., “Learn Kubernetes to improve match rate by 12%”).
- Expand to cover cover-letter generation and networking-message optimization.
- Deploy as a web app with persistent user profiles and job tracking.
