What Inspired Us

We're international students. Every one of us on this team has had the same 2am moment — ten browser tabs open, Reddit threads contradicting each other, a YouTube video from 2021 that may or may not still be accurate, and a question that feels impossibly high-stakes: How do I stay in the US after graduation?

We've all turned to AI for help. And AI answered fast. But it answered wrong — not because it lacked knowledge, but because it never asked who we were. It didn't know we were on F1 visas. It didn't ask if our degree was STEM or non-STEM. It didn't check whether we'd already used our OPT. It just listed every immigration pathway that exists and left us to figure out which ones actually applied.

Then we read Oversecured's research on AI security vulnerabilities in wellness apps — how the most popular mental health chatbots have critical security flaws, and how therapy records sell for $1,000+ on the dark web. That research reframed something for us: AI security isn't just about protecting data from attackers. It's about protecting users from unsafe AI output. The same principle applies to immigration advice. A chatbot that tells a student they can apply for H1B without checking their work authorization status isn't leaking data — it's leaking dangerous misinformation into a high-stakes decision.

That's what inspired VisaGuard. Not better answers. Safer answers.

What It Does

VisaGuard AI is a domain-specific guardrail system for immigration decision support. It sits between the user and the AI, ensuring that no response is generated until the system has enough context to answer responsibly.

The core workflow:

  1. Detect — The system identifies whether a query touches immigration-sensitive territory (visa status, work authorization, enrollment requirements, travel restrictions)
  2. Clarify — Instead of answering immediately, VisaGuard asks targeted clarifying questions: visa type, degree level, STEM classification, OPT usage, I-20 expiration
  3. Guard — Responses pass through a guardrail layer that checks for assumption failures, missing context, and potentially harmful generic advice
  4. Output — The system generates structured pathways with specific requirements, timelines, and flagged risks tailored to the user's actual situation

The key insight: the guardrail's job is to prevent the AI from answering when it doesn't have enough information. Most AI failures in high-stakes domains aren't knowledge failures — they're context failures.
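The detect-then-clarify flow above can be sketched as a small gate in front of the LLM. This is an illustrative sketch, not VisaGuard's actual code: the patterns, context keys, and function names are assumptions.

```javascript
// Hypothetical sketch of the Detect → Clarify → Guard pipeline.
// Patterns and field names are illustrative, not the real engine's.
const IMMIGRATION_PATTERNS = [/\bh-?1b\b/i, /\bopt\b/i, /\bcpt\b/i, /\bvisa\b/i, /\bi-20\b/i];

function detect(query) {
  // Step 1: does the query touch immigration-sensitive territory?
  return IMMIGRATION_PATTERNS.some((p) => p.test(query));
}

// The minimum viable context the system must collect before answering.
const REQUIRED_CONTEXT = ["visaType", "stemClassification", "optStatus"];

function missingContext(userContext) {
  return REQUIRED_CONTEXT.filter((k) => userContext[k] == null);
}

function handleQuery(query, userContext) {
  if (!detect(query)) return { action: "answer" }; // not immigration-sensitive
  const missing = missingContext(userContext);
  if (missing.length > 0) {
    // Step 2: refuse to generate until context is collected
    return { action: "clarify", ask: missing };
  }
  return { action: "guardedAnswer" }; // Steps 3–4: guard, then structured output
}
```

The point of the sketch: the LLM is never invoked on a sensitive query until `missingContext` comes back empty.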

How We Built It

The system has two layers, inspired by hybrid security architectures used in financial fraud detection:

Layer 1 — Rule-Based Detection (Local, Instant)

A deterministic pattern-matching engine that classifies incoming queries by risk level and domain. Immigration-sensitive keywords and phrase patterns trigger the clarification flow before any LLM is invoked. This layer also validates the AI's draft response against a checklist of required elements — if a user asks about H1B eligibility and the response doesn't mention employer sponsorship requirements, the guardrail catches it.
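The draft-validation step described above can be sketched as a topic-keyed checklist. The topics, checklist items, and function names here are illustrative assumptions, not the shipped rule set.

```javascript
// Illustrative checklist validator (topics and required phrases are assumptions).
// Each topic maps to elements a draft response must mention to pass the guardrail.
const RESPONSE_CHECKLISTS = {
  h1b: ["employer sponsorship", "lottery", "cap"],
  opt: ["i-20", "unemployment", "ead"],
};

function topicOf(query) {
  if (/h-?1b/i.test(query)) return "h1b";
  if (/\bopt\b/i.test(query)) return "opt";
  return null;
}

// Returns the checklist items the draft fails to mention; empty array = passes.
function validateDraft(query, draft) {
  const topic = topicOf(query);
  if (!topic) return [];
  const lower = draft.toLowerCase();
  return RESPONSE_CHECKLISTS[topic].filter((item) => !lower.includes(item));
}
```

A draft that answers an H1B question without mentioning employer sponsorship comes back with that item flagged, and the system regenerates or annotates the response.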

We model the risk classification as a simple scoring function:

$$R(q) = \sum_{i=1}^{n} w_i \cdot p_i(q)$$

where $p_i(q)$ is a binary pattern match for the $i$-th risk indicator in query $q$, and $w_i$ is the severity weight. Queries exceeding a threshold $\tau$ trigger the full clarification flow.
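A minimal implementation of that scoring function might look like the following. The specific indicators, weights, and threshold value are placeholders for illustration.

```javascript
// Sketch of the weighted risk score R(q) = Σ w_i · p_i(q).
// Indicators and weights are illustrative placeholders.
const RISK_INDICATORS = [
  { pattern: /out of status|overstay/i, weight: 3 },
  { pattern: /h-?1b|\bopt\b|\bcpt\b/i, weight: 2 },
  { pattern: /visa|i-20|sevis/i,       weight: 1 },
];

const TAU = 1; // threshold τ

function riskScore(query) {
  // p_i(q) is a binary match (0 or 1), scaled by severity weight w_i
  return RISK_INDICATORS.reduce(
    (sum, { pattern, weight }) => sum + (pattern.test(query) ? weight : 0),
    0
  );
}

function needsClarification(query) {
  return riskScore(query) > TAU; // exceeding τ triggers the clarification flow
}
```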

Layer 2 — Contextual Analysis (LLM-Powered)

Once Layer 1 collects sufficient context through clarifying questions, the LLM generates a response constrained by the guardrail framework. The system prompt encodes domain-specific rules — USCIS regulations, SEVP policies, OPT/CPT requirements — and the response is structured into pathways rather than free text.
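One way to enforce "pathways rather than free text" is to validate the LLM's output against a pathway schema before showing it to the user. The field names and the example content below are illustrative, not VisaGuard's actual schema.

```javascript
// Illustrative pathway schema check (field names are assumptions).
// Layer 2 requests structured pathways from the LLM, then validates the shape.
const REQUIRED_FIELDS = ["name", "requirements", "timeline", "risks"];

function validatePathways(response) {
  if (!Array.isArray(response.pathways) || response.pathways.length === 0) return false;
  return response.pathways.every((p) => REQUIRED_FIELDS.every((f) => f in p));
}

// Example of a well-formed response for a STEM graduate on F-1.
const example = {
  pathways: [
    {
      name: "STEM OPT Extension",
      requirements: ["STEM-designated degree", "E-Verify employer", "Form I-983 training plan"],
      timeline: "Apply up to 90 days before the current OPT EAD expires",
      risks: ["Unemployment-day limits apply across OPT periods"],
    },
  ],
};
```

If validation fails, the system regenerates rather than passing unstructured free text through to the user.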

The comparative evaluation against generic AI uses five metrics scored on a normalized scale:

$$S_{\text{total}} = \frac{1}{5}\sum_{k=1}^{5} s_k$$

where the $s_k$ are the normalized scores for the five metrics: context, clarification, safety, structure, and reliability.
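The aggregate score is a plain average of the five metrics. A small sketch, with illustrative scores:

```javascript
// S_total: unweighted mean of the five normalized (0–1) metric scores.
const METRICS = ["context", "clarification", "safety", "structure", "reliability"];

function totalScore(scores) {
  return METRICS.reduce((sum, k) => sum + scores[k], 0) / METRICS.length;
}
```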

Tech Stack:

  • React frontend with a conversational interface
  • Claude API (Sonnet) for the LLM layer
  • Local JavaScript rule engine for Layer 1 detection
  • Domain-specific system prompts encoding USCIS/SEVP policy guardrails

Challenges We Faced

The clarification-fatigue tradeoff. Asking too many questions before answering creates friction. Asking too few means the guardrail isn't doing its job. We iterated on finding the minimum viable context — the smallest set of questions that eliminates the most dangerous assumption failures. For immigration queries, we found that three questions (visa type, STEM/non-STEM, OPT status) eliminate roughly 80% of incorrect generic responses.

Guardrails that help vs. guardrails that block. Research on mental health chatbots showed us that overly aggressive guardrails can themselves cause harm — users reported that being rejected by safety guardrails during moments of vulnerability was the closest they came to a harmful experience. We applied this lesson: VisaGuard's guardrails don't refuse to help. They slow down and ask questions. The user never feels blocked — they feel guided.

Defining "wrong" in immigration advice. Unlike medical advice where clinical protocols define right and wrong, immigration advice exists in gray zones. An H1B answer isn't wrong — it's wrong for a specific user who doesn't meet the requirements. The guardrail had to be designed around contextual correctness, not absolute correctness. This pushed us toward the clarification-first architecture rather than a simple content filter.

What We Learned

The biggest lesson: AI security in high-stakes domains isn't about building a better model. It's about building a better process around the model. The same LLM that gives dangerous generic immigration advice gives excellent tailored advice — once it has context. The guardrail doesn't make the AI smarter. It makes the AI ask before it answers.

This applies far beyond immigration. Any domain where the same question from different users requires fundamentally different answers (healthcare, legal, financial, academic advising) needs this pattern. Generic AI gives generic answers. Guardrailed AI gives the right answer for this specific person.

That's what VisaGuard does. Not better answers. The right ones.
