Inspiration
I’m the kind of person who builds projects to make small parts of my life easier: scripts to scrape price drops, simple bots to pull workout data, outreach email generators. Every time I try to incorporate a site with no public API, I have to scrap the idea; most of these sites hide behind scrape blockers, and working around them becomes a nightmare. Docket grew out of that very issue. If companies won’t ship the endpoints I need, I’ll make a tool that creates them on demand: tell Docket what type of endpoint you want and which company’s site to target, and get an API route back.
What it does
Docket accepts natural-language queries like “Make me a Trader Joe’s API that gets their newest products.” A Claude-powered desktop agent automatically opens the site, navigates to the target section, and snapshots the HTML. The raw markup is then transformed into clean JSON (name, price, image, URL). Finally, Docket registers a new Flask route, /whatsnew, and serves that JSON; the Swagger docs update instantly so you can try the endpoint right away.
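To make the pipeline concrete, here is a minimal sketch of the JSON shape a generated endpoint might serve. The field names (name, price, image, URL) come from the transform step above, but the sample data and the `to_response` envelope are illustrative assumptions, not Docket's actual output:

```python
import json

# Hypothetical sample of what the transform step might extract from a
# "What's New" page. The records are fake; only the field names follow
# the schema described above.
sample_products = [
    {"name": "Pumpkin Spice Granola", "price": "$3.99",
     "image": "https://example.com/granola.jpg",
     "url": "https://example.com/products/1"},
]

def to_response(products):
    """Wrap extracted products in a simple envelope a generated route could return."""
    return json.dumps({"count": len(products), "items": products})

print(to_response(sample_products))
```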
How I built it
Backend: Flask 3.0, Docker containers.
Scraping: Claude Computer Use agent + pyautogui for human-style clicks and copy-paste.
Transformation: a second LLM call turns the HTML into a schema-driven JSON file in /temp.
Dynamic routing: I compile and hot-load a Flask blueprint for each new endpoint, then refresh the Flasgger docs concurrently.
Frontend: Next.js 14 + Tailwind + shadcn/ui, clean and modern.
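The dynamic-routing step can be sketched as a blueprint factory. This is a hedged, minimal version: the `make_endpoint` name is hypothetical, the payload is hard-coded instead of read from /temp, and Docket's real hot-swap logic is more involved:

```python
# Minimal sketch of hot-loading a generated endpoint as a Flask blueprint.
# make_endpoint() is a hypothetical factory; real Docket builds the payload
# from the scraped JSON saved to /temp.
from flask import Flask, Blueprint, jsonify

app = Flask(__name__)

def make_endpoint(route_name, payload):
    """Build a blueprint that serves `payload` at /<route_name>."""
    bp = Blueprint(route_name, __name__)

    @bp.route(f"/{route_name}")
    def serve():
        return jsonify(payload)

    return bp

# Register a freshly generated endpoint at runtime.
app.register_blueprint(make_endpoint("whatsnew", {"items": [{"name": "demo"}]}))

client = app.test_client()
print(client.get("/whatsnew").get_json())
```

Each generated route gets its own blueprint name, which is also why re-registering under the same name trips Flask up (see the challenges below).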
Challenges I ran into
This project was my first time working with Anthropic's API suite, and the Computer Use model I was using is still in beta. Using these tools to automate my macOS browser with vision-based clicks easily took the most time to debug. Getting the LLM to produce a valid JSON schema every single run took hours of prompt tweaks and a retry loop. Finally, Flask doesn't natively support unregistering and re-registering blueprints mid-run, so I had to create an entirely new way to hot-swap routes.
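The validate-and-retry loop mentioned above can be sketched like this. The `llm_call` argument stands in for the real Anthropic transform call; here it is faked with canned responses so the loop is runnable, and the `REQUIRED_KEYS` schema check is an assumption based on the fields Docket extracts:

```python
import json

# Fields a product record must contain (assumed from the extraction schema).
REQUIRED_KEYS = {"name", "price", "image", "url"}

def valid(record):
    return isinstance(record, dict) and REQUIRED_KEYS <= record.keys()

def extract_json(llm_call, max_retries=3):
    """Call the model until it returns parseable, schema-conforming JSON."""
    for attempt in range(max_retries):
        raw = llm_call(attempt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry
        if isinstance(data, list) and all(valid(r) for r in data):
            return data
    raise RuntimeError("model never produced valid JSON")

# Fake model: fails once, then returns a conforming record.
responses = [
    "not json",
    '[{"name": "Granola", "price": "$3.99", "image": "i.jpg", "url": "u"}]',
]
print(extract_json(lambda i: responses[i]))
```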
Accomplishments that I'm proud of
Even after learning a completely new API interface, I shipped a demo that boots from zero to live JSON in under a minute. The agent opens the site, navigates to What’s New, extracts products, and auto-documents a fresh /whatsnew endpoint. Every action the Computer Use agent takes is streamed in real time, showing how it works with the webpage. The cherry on top is the Swagger docs, which show everyone how to send requests to the API; anyone on the Berkeley Wi-Fi could test the newly generated endpoint.
What I learned
At an AI hackathon it was only fitting to use Cursor and Claude Code. Over the 25 hours I learned better prompt-engineering habits, tweaking phrasing to keep the generated code concise. After hours of coding and too many files, I realized that keeping the demo architecture simple, one worker and one API at a time, helped isolate bugs. Working with Claude Computer Use showed me how the future will look, with AI agents working alongside us in our local environments.
What's next for Docket
The next step is containerizing each generated API so that every time a user asks for a new endpoint, Docket will spin up a lightweight Docker container that bundles the scraped JSON, the hot-loaded Flask blueprint, and its own Swagger docs. Containers keep routes isolated, let multiple APIs run in parallel without stepping on each other, and open the door to auto-scaling (one per site, one per user, or one per refresh). On top of that, I’m planning an async job queue to replace the global lock, a tiny scheduler that auto-rebuilds stale APIs, and a CLI so you can script the whole flow in CI. My end goal is to make hobby developers' lives a lot easier by providing APIs for any website they want.
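The planned async job queue can be sketched with the standard library: a single worker thread drains build requests instead of callers contending on a global lock. This is a hypothetical sketch, not Docket's implementation; the job names and the "build" step are stand-ins:

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    """Drain build requests one at a time, replacing the global lock."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut down the worker
            break
        # Stand-in for the real build: scrape, transform, hot-load a route.
        results.append(f"built endpoint /{job}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
for name in ["whatsnew", "deals"]:
    jobs.put(name)
jobs.put(None)
t.join()
print(results)
```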
Built With
- claude
- flasgger
- flask
- gunicorn
- nextjs
- pyautogui
- typescript