Inspiration

The job application process is broken. On average, it takes 30 minutes to apply for a single position: uploading resumes, filling out redundant form fields, and re-entering the same personal information over and over. Multiply that across dozens of applications and you've lost entire days to tedious busywork. Meanwhile, the swipe-based UX pioneered by dating apps has proven incredibly effective at helping people make fast, intuitive decisions on large volumes of options. We asked ourselves: what if applying to a job was as effortless as swiping right?

Jobr was born from the frustration of the modern job hunt and the realization that AI has matured enough to handle the repetitive parts for us. We wanted to build something that turns hours of form-filling into seconds of swiping so job seekers can focus on finding the right opportunity instead of drowning in paperwork.

What it does

Jobr is a swipe-based job application platform that radically simplifies the job search. Users swipe through job listings like a Tinder-style card stack: swipe left to pass, swipe right to apply. When you swipe right, Jobr doesn't just save the job; it automatically fills out and submits the application for you using an AI agent, pulling from the profile information you've already provided.

Here's the full experience:

  • Personalized job discovery: Jobr scrapes real-time listings from Indeed, LinkedIn, ZipRecruiter, and Google Jobs, filtered by your preferred category (one of 13 fields such as Software Engineering, Data Science, and Finance), location, and posting age.
  • Swipe to decide: Each job is presented as a rich card showing the company, title, salary, location, requirements, and description. Swipe right to apply, left to pass.
  • Automatic application: Liked jobs are added to a processing queue. A backend agent picks them up, navigates to the application page, detects every form field, and fills them out using your profile data: name, email, phone, experience, skills, and more.
  • Real-time status tracking: A live overlay shows the status of each queued application (pending → processing → completed), so you always know where things stand.
  • Persistent saved jobs: Your liked jobs are stored in Firestore and persist across sessions, so you never lose track of what you've applied to.
  • Smart deduplication: Jobr tracks which jobs you've already seen and never shows them again, keeping the feed fresh.

How we built it

Jobr is a full-stack application spanning a Flutter frontend, a Python/FastAPI backend, Firebase infrastructure, and an AI agent powered by OpenAI.

Frontend (Flutter/Dart):

  • Cross-platform Flutter app
  • A three-step authentication gate: Firebase email/password sign-in → profile completion → main app access.
  • Real-time queue status updates via Firestore snapshot streams, rendered as an overlay on the swipe screen.
  • User preferences (category, location, job sites, max posting age) drive both the scraper parameters and client-side filtering.

Backend (Python/FastAPI):

  • A FastAPI server with modular routers for scraping, form filling, auto-fill, screen control, and health monitoring.
  • Job scraping via the python-jobspy library, which pulls listings from Indeed, LinkedIn, ZipRecruiter, and Google Jobs simultaneously. Smart deduplication checks Firestore before saving to avoid duplicates.
  • A background queue processor implemented as an asyncio task within FastAPI's lifespan manager; it polls Firestore's queue collection every 5 seconds, picks the oldest pending item, fetches the user's profile and job data, and dispatches the AI agent to fill the application.
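
The queue processor described above can be sketched as follows, using an in-memory list in place of the Firestore `queue` collection; the field names (`status`, `created_at`) and the stubbed form-filler call are illustrative assumptions, not the production code:

```python
import asyncio
from typing import Optional

def pick_oldest_pending(items: list[dict]) -> Optional[dict]:
    """Return the oldest queue item still in 'pending' state, or None."""
    pending = [i for i in items if i["status"] == "pending"]
    return min(pending, key=lambda i: i["created_at"]) if pending else None

async def process_queue_once(queue: list[dict]) -> bool:
    """One polling pass: claim the oldest pending item, run the
    (stubbed) form-filler, then mark it completed.
    Returns True if an item was processed."""
    item = pick_oldest_pending(queue)
    if item is None:
        return False
    item["status"] = "processing"
    # In the real system this fetches the user's profile and job data
    # from Firestore, then dispatches the AI agent to fill the form.
    await asyncio.sleep(0)  # placeholder for the form-filling work
    item["status"] = "completed"
    return True

async def run_processor(queue: list[dict], poll_seconds: float = 5.0):
    """Background task, started from FastAPI's lifespan manager."""
    while True:
        if not await process_queue_once(queue):
            await asyncio.sleep(poll_seconds)
```

Claiming the item by flipping its status to `processing` before doing any work is what lets the frontend's status overlay update as soon as the backend picks a job up.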

AI Form Filler (LangChain + OpenAI):

  • An AI agent built on LangChain's create_react_agent using OpenAI's gpt-4o-mini model at temperature 0 for deterministic form-filling behavior.
  • A Playwright-based field detection engine that launches a headless Chromium browser, navigates to application pages, and identifies all form fields (inputs, textareas, selects, checkboxes) with their bounding boxes and semantic types.
  • Screen control tools (via PyAutoGUI) give the agent the ability to move the mouse, click, type text, press keys, scroll, and interact with dropdowns, effectively controlling the screen like a human would.
  • The agent follows a structured workflow: take a screenshot → identify the next empty field → click its center → clear existing content → type the appropriate value from the user's profile → repeat until the form is complete.
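
The "type the appropriate value" step hinges on mapping each detected field to a profile attribute. A minimal sketch of that mapping, with a keyword table and profile keys that are illustrative assumptions rather than Jobr's actual matching rules:

```python
from typing import Optional

# Keywords that suggest which profile attribute a form field wants.
# This table is a simplified assumption for illustration.
FIELD_KEYWORDS = {
    "name": ["name"],
    "email": ["email", "e-mail"],
    "phone": ["phone", "tel", "mobile"],
    "skills": ["skills"],
}

def match_field_to_profile(field_label: str, profile: dict) -> Optional[str]:
    """Return the profile value whose keywords appear in the field's
    label or name attribute (case-insensitive), or None if no match."""
    label = field_label.lower()
    for key, keywords in FIELD_KEYWORDS.items():
        if any(kw in label for kw in keywords):
            return profile.get(key)
    return None
```

In practice the agent gets both the field's semantic type from the detection engine and its label text, so ambiguous fields can fall back to the LLM's judgment instead of a keyword table.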

Infrastructure (Firebase/Firestore):

  • Firebase Authentication for user management.
  • Cloud Firestore as the primary database, storing user profiles (users collection), scraped job listings (jobs collection), and the application queue (queue collection).
  • The Firebase Admin SDK on the backend and cloud_firestore Flutter package on the frontend, sharing the same Firestore instance for real-time synchronization.
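
To make the shared data model concrete, here is a hedged sketch of what documents in the three collections might look like; every field name below is an assumption inferred from the description, not the actual schema:

```python
# users/{uid} — profile data the form-filler draws from
user_doc = {
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "phone": "+1-555-0100",
    "experience": "5 years",
    "skills": ["Python", "Dart"],
}

# jobs/{job_id} — one scraped listing
job_doc = {
    "title": "Software Engineer",
    "company": "Acme",
    "location": "Remote",
    "salary": "$120k",
    "source": "indeed",
    "apply_url": "https://example.com/apply",
}

# queue/{item_id} — one right-swipe awaiting processing
queue_doc = {
    "user_id": "uid123",
    "job_id": "job456",
    "status": "pending",  # pending -> processing -> completed
}
```

Because both the Flutter client and the Python backend read these same documents, the `status` field doubles as the single source of truth for the live overlay.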

Challenges we ran into

Browser automation at the UI level introduced its own set of difficulties. We had to bridge the gap between detecting fields in a headless Playwright browser and controlling the screen with an MCP agent using PyAutoGUI API tools. Timing and accuracy were critical: on-screen actions had to be synchronized with code behavior at precisely the right moments for the sequence of events to unfold as intended. We also handled edge cases in multi-page, multi-URL applications, where form fields had to be constantly rescanned so the agent wouldn't lose track of the page state. We developed workarounds in response, such as piping each page's new URL to the MCP agent's stdin so it could accurately recognize link updates.

Real-time synchronization between the Flutter frontend and the Python backend through Firestore required careful coordination. The queue system needed to handle edge cases like the app being closed while an application was processing, stale queue items from previous sessions, and status transitions that needed to appear instantly in the UI.

Job scraping at scale without hitting rate limits or getting blocked required a smart cycling approach across multiple search terms and job sites, with deduplication logic to avoid storing duplicate listings when the same job appears on multiple platforms.
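
One way to implement that cross-platform deduplication is to normalize a few stable job attributes into a single key before checking Firestore; the exact normalization rules here are an assumption for illustration:

```python
import hashlib

def dedup_key(company: str, title: str, location: str) -> str:
    """Collapse the same posting scraped from different sites to one key
    by lowercasing, trimming, and hashing its stable attributes."""
    normalized = "|".join(s.strip().lower() for s in (company, title, location))
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]
```

The backend can then use this key as the Firestore document ID, turning "have we seen this job?" into a single document lookup regardless of which site the listing came from.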

Accomplishments that we're proud of

  • The end-to-end pipeline actually works: a single swipe right triggers a chain that spans the Flutter UI, Firestore queue, Python backend, AI agent, browser automation, and form submission. Seeing a job go from a card on screen to a submitted application with zero manual input is incredibly satisfying.
  • The site-agnostic field detection engine: our module can analyze any web page and identify form fields with their types and screen coordinates, without being hardcoded to any specific job site.
  • Scraping from 4 major job platforms: Indeed, LinkedIn, ZipRecruiter, and Google Jobs all feed into a single unified job stream, giving users access to a massive breadth of listings.

What we learned

  • LLM agents need guardrails: giving an AI agent direct screen control tools is powerful but requires careful prompt engineering and safety constraints. We learned to disable dangerous key combinations and add deliberate delays between actions.
  • Firestore's real-time streams are incredibly powerful for building reactive UIs; using Firestore snapshot listeners made the queue status overlay trivially easy to implement compared to polling.
  • Cross-platform Flutter development involves more infrastructure than expected: managing CocoaPods for macOS, Gradle for Android, and platform-specific Firebase configurations taught us a lot about the build toolchain beneath Flutter's "write once, run anywhere" promise.
  • Job scraping is a moving target: sites change their markup, rate-limit aggressively, and serve different content to different user agents. Building resilient scraping with smart deduplication and graceful fallbacks was an education in defensive programming.
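
The guardrails mentioned above can be sketched as a key-combination blocklist plus a mandatory pause between screen actions; the specific blocked combinations and delay are illustrative assumptions:

```python
import time

# Key combinations that could close windows or kill the session.
# The set here is an example, not the production blocklist.
BLOCKED_COMBOS = {("ctrl", "w"), ("alt", "f4"), ("cmd", "q")}

def is_allowed(keys: tuple) -> bool:
    """Reject key combinations on the blocklist (case-insensitive)."""
    return tuple(k.lower() for k in keys) not in BLOCKED_COMBOS

def paced(action, *args, delay: float = 0.3):
    """Run one screen-control action, then pause so the UI can catch up
    before the agent takes its next screenshot."""
    result = action(*args)
    time.sleep(delay)
    return result
```

Wrapping every tool call in a pacing layer like this also makes the agent's behavior easier to debug, since each screenshot reflects a settled UI state.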

What's next for Jobr

  • Resume parsing: automatically extract profile information from an uploaded resume so users don't have to manually fill out their profile.
  • Smarter job matching: use embeddings and semantic similarity to rank jobs based on how well they match a user's skills and experience, not just category and location filters.
  • Application success tracking: detect whether a submitted application received a confirmation, and surface success/failure rates to help users refine their approach.
  • Cover letter generation: have the AI agent generate tailored cover letters for each application using the job description and user profile.
  • Interview prep integration: once a user gets a callback, provide AI-powered mock interview practice based on the specific job requirements.
  • Batch applying: let users queue up multiple right-swipes and process them all in sequence, applying to a dozen jobs while they grab coffee.

Built With

  • Flutter / Dart
  • Python / FastAPI
  • Firebase Authentication
  • Cloud Firestore
  • LangChain
  • OpenAI (gpt-4o-mini)
  • Playwright
  • PyAutoGUI
  • python-jobspy
