Inspiration
The modern web was built for one kind of visitor: humans. Every login flow, CAPTCHA, and rate limiter assumes the entity on the other side is a person. That assumption collapsed the moment autonomous AI agents started navigating the web at scale. Existing solutions are one-sided: CAPTCHAs try to block all AI, including the legitimate agents developers deploy with full authorization, and there is no primitive to say "this bot is supposed to be here." Meanwhile, as AI vision models have improved dramatically, traditional CAPTCHAs (distorted text, grid image selection) have become trivially solvable by any sufficiently capable model.
We asked: what would identity infrastructure look like if it were built from scratch, knowing that both humans and AI agents are real, valid visitors, just fundamentally different ones? Aurora is our answer: two cryptographically backed authentication paths, one for each kind of visitor, underpinned by adversarial machine learning, behavioral biometrics, and on-chain reputation.
What it does
Aurora is a dual identity layer with two verification paths.
HumanLock is the human path. It serves adversarial image challenges that remain recognizable to people but mislead AI vision models, and it combines the answer with behavioral signals such as mouse movement, click timing, tremor score, and path linearity.
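Two of those behavioral signals can be sketched in a few lines of Python, assuming mouse samples arrive as (x, y) points. The function names and heuristics here are illustrative, not the production scoring code:

```python
import math

def path_linearity(points):
    """Ratio of straight-line distance to total path length (1.0 = perfectly
    straight). Scripted cursors tend toward near-perfect lines; human paths wander."""
    if len(points) < 2:
        return 1.0
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    return direct / total if total > 0 else 1.0

def tremor_score(points):
    """Mean absolute heading change between consecutive segments.
    Human hands produce small continuous jitter; synthetic paths are smooth."""
    angles = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (x1, y1) != (x0, y0):
            angles.append(math.atan2(y1 - y0, x1 - x0))
    if len(angles) < 2:
        return 0.0
    diffs = []
    for a, b in zip(angles, angles[1:]):
        d = abs(b - a)
        diffs.append(min(d, 2 * math.pi - d))  # wrap heading difference
    return sum(diffs) / len(diffs)
```

A perfectly straight path scores linearity 1.0 and tremor 0.0; a zig-zagging human-like path scores lower linearity and higher tremor, and the verifier can threshold on both.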
AgentPass is the AI agent path. It gives agents a machine-native challenge built from obfuscated math, browser proof-of-work, response timing, and model fingerprinting. If the agent passes, Aurora issues a signed agent token, and when Solana credentials are configured it also writes a verification memo to Solana devnet.
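The signed agent token can be sketched as a compact HMAC-signed, JWT-style credential. This is an illustrative stand-in, assuming a shared signing secret; the claim names and TTL below are made up for the example:

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"  # illustrative only; a real deployment loads this from config

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_agent_token(fingerprint, confidence, ttl=300):
    """Issue a compact JWT-style token of the form payload.signature."""
    payload = b64url(json.dumps({
        "sub": "agent",
        "fp": fingerprint,     # estimated model fingerprint
        "conf": confidence,    # verifier confidence score
        "exp": int(time.time()) + ttl,
    }, separators=(",", ":")).encode())
    sig = b64url(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"

def verify_agent_token(token):
    """Return the claims if the signature matches and the token is unexpired."""
    payload, sig = token.rsplit(".", 1)
    expected = b64url(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    return claims if claims["exp"] > time.time() else None
```

The site then accepts the token as proof that this particular agent was verified, without re-running the challenge on every request.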
Together, these systems let a website do something more useful than a normal CAPTCHA: block unwanted bots while still allowing trusted AI agents.
How we built it
We built the main app with Next.js, React, TypeScript, and Tailwind. The frontend has separate experiences for HumanLock, AgentPass, and a judge-friendly live demo page with stable element IDs so browser agents can interact with it.
For HumanLock, we built a Python FastAPI service using PyTorch and torchvision. It loads an ImageNet-pretrained ResNet18, takes ordinary images (cats, dogs, bananas, clocks, cars, guitars), and applies an FGSM-style adversarial perturbation: the image still looks recognizable to a human, but the model is pushed toward the wrong label. The Next.js API requests those generated challenges, and the verifier combines the image answer with behavioral signals before signing a JWT-style human token.
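The FGSM idea itself can be shown without the PyTorch stack: perturb the input by epsilon in the direction of the sign of the loss gradient. The toy below attacks a two-feature logistic classifier instead of ResNet18, so an analytic gradient stands in for autograd; all the numbers are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Logistic classifier p = sigmoid(w . x + b)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, w, b, eps):
    """One FGSM step: x_adv = clip(x + eps * sign(d loss / d x)).
    For cross-entropy loss the input gradient is (p - y) * w."""
    p = predict(x, w, b)
    x_adv = []
    for wi, xi in zip(w, x):
        g = (p - y) * wi
        step = eps * (1 if g > 0 else -1 if g < 0 else 0)
        x_adv.append(min(1.0, max(0.0, xi + step)))  # keep pixel range [0, 1]
    return x_adv

# A confidently classified input is pushed across the decision boundary
w, b = [2.0, -1.5], 0.0
x = [0.9, 0.1]                      # predict(x, w, b) > 0.5: class 1
x_adv = fgsm(x, 1.0, w, b, eps=0.5)  # predict(x_adv, w, b) < 0.5: flipped
```

The real service does the same thing per-pixel on a ResNet18, keeping epsilon small enough that the picture still reads as a cat to a human.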
For AgentPass, the backend generates fresh math challenges using K2 Think V2, with an OpenAI fallback and local fallback challenges for reliability. Each challenge is intentionally obfuscated with unusual formatting, symbols, and multilingual number wording. The browser also runs a SHA-256 proof-of-work loop inside a Web Worker, and the server uses the response latency plus proof-of-work timing to estimate a model fingerprint. If the answer is correct, Aurora signs an agent token and can write a Solana devnet memo containing the challenge ID, model fingerprint, confidence score, and timestamp.
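The proof-of-work exchange is easy to sketch in Python (our actual client loop runs in a browser Web Worker in JavaScript; the difficulty and nonce encoding below are illustrative). The client grinds nonces, the server checks one hash:

```python
import hashlib

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find the smallest nonce such that SHA-256(challenge || nonce) starts
    with `difficulty_bits` zero bits. Mirrors the browser-side worker loop."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap server-side check: a single hash compared against the target."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: solving costs roughly 2^difficulty_bits hashes, verifying costs one, and the solve time feeds the fingerprint estimate alongside response latency.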
We also built Browser Use demo scripts to show both sides live: one AI agent tries HumanLock and should get blocked, while another goes through AgentPass and gets verified.
Challenges we ran into
The hardest part was making the project feel real instead of just looking like a security demo. HumanLock needed more than “show an image and ask for a label,” so we added behavioral checks like mouse path shape, tremor, and timing. AgentPass also needed more than a normal math question, so we added challenge obfuscation, proof-of-work, token signing, model fingerprinting, and Solana reputation logging.
Another challenge was reliability. The adversarial image service depends on a Python ML stack, while the web app depends on live API routes and browser behavior. We added fallbacks, retries, and a separate demo page so the project could still be judged clearly even when one external piece was slow.
Accomplishments that we're proud of
We are proud that Aurora is not just another CAPTCHA clone. It treats AI agents as a new kind of web user: sometimes unwanted, sometimes useful, but always needing identity.
We built both sides of that idea in 36 hours: adversarial human verification, agent verification, signed tokens, model fingerprinting, proof-of-work, Browser Use demos, and Solana devnet reputation records. The result is a working prototype of what an identity layer for the agentic web could look like.
What we learned
We learned a lot about how fragile “human vs bot” detection becomes once AI agents can see, click, read, and reason inside web pages. A single challenge is not enough anymore. The stronger approach is layered: visual robustness, behavioral signals, cryptographic checks, signed tokens, and reputation.
We also learned that adversarial ML is powerful but tricky to productize. It is one thing to generate an image that fools a model, and another thing to turn that into a usable web experience that still feels simple for a real person.
What's next for Aurora
Next, we want to make Aurora easier for other developers to drop into their own sites. That means turning HumanLock and AgentPass into embeddable widgets, improving the scoring model, adding stronger server-side proof-of-work verification, and expanding the reputation layer beyond demo memos.
Longer term, we want Aurora to become a trust layer for websites that expect both people and AI agents. Humans should not have to fight impossible CAPTCHAs, and good agents should not have to pretend to be human.
Built With
- browser-use
- fastapi
- hmac
- jwt
- k2-think-v2
- next.js
- openai-api
- playwright
- python
- pytorch
- react
- resnet18
- sha-256-proof-of-work
- solana-devnet
- solana-web3.js
- tailwind-css
- torchvision
- typescript
- web-workers