SkillUp — skillup-market.vercel.app

Inspiration

Every time I finished a productive AI coding session, I noticed the same thing happening. I had just figured out a perfect workflow: exactly how to ask my AI to set up a project, review code, or debug an issue. Then the session ended. The next time, I started from scratch.

I looked around and realized everyone was doing the same thing. AI tools have a feature called skills that lets you save workflows as reusable instruction files. But creating them is manual, there is no easy way to share them, and most people never bother.

The knowledge existed. It was just disappearing after every session.

That was the problem worth solving.


What We Built

SkillUp is two connected pieces:

Observatory is a tool that sits inside your AI coding environment, watches your session, and automatically detects reusable workflow patterns when your session ends. It drafts a skill file for you and lets you publish it to the marketplace in one command.

The Marketplace is a community platform at skillup-market.vercel.app where people browse, download, fork, and build on each other's skills. Like Hugging Face but for AI workflows.

The flow is simple:

  1. You work normally with your AI tool
  2. Session ends and Observatory detects a reusable pattern automatically
  3. A skill file gets drafted using your active model
  4. One command publishes it live on the marketplace
  5. Anyone fetches it, a local safety check runs on their machine, and it installs in seconds

How We Built It

We built the entire project in one day using AI agents to parallelize the work.

Observatory is built in Python and uses the Claude Code hooks system to automatically trigger at the end of every session. It reads the session transcript in JSONL format, sends it to Claude for pattern detection, generates a properly formatted skill file using a template, and handles the full publish, fetch, fork, and login flow through a clean command interface.
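The session-end trigger amounts to a small parser for the payload Claude Code pipes to a Stop hook. A minimal sketch (the `transcript_path` field follows Claude Code's documented hook input; error handling is omitted):

```python
import json


def read_hook_input(raw: str) -> str:
    """Parse the JSON that Claude Code pipes to a Stop hook on stdin
    and return the path to the session transcript.

    The hook itself is registered as a command under "Stop" in
    .claude/settings.json, which is what makes it fire automatically
    at the end of every session.
    """
    payload = json.loads(raw)
    return payload["transcript_path"]
```

From there, Observatory reads the transcript file the path points at and hands it to the pattern-detection step.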

The Marketplace is a Next.js app with the App Router, backed by Supabase for auth and database, Prisma as the ORM, and deployed on Vercel. The frontend was built first with mock data, then the backend was wired up in a second pass replacing all fixtures with real database calls.

The two pieces communicate through a REST API. Observatory authenticates using a Bearer token the user copies from their account settings page. Skills are identified by human-readable slugs in the format username/skill-name, the same way GitHub identifies repositories.
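The slug convention and the auth header can be sketched as follows; the exact character set of a slug and the header shape are assumptions based on the description above, not the published API:

```python
import re

# username/skill-name, GitHub-repo style; allowed charset is an assumption
SLUG_RE = re.compile(r"^([a-z0-9][a-z0-9-]*)/([a-z0-9][a-z0-9-]*)$")


def parse_slug(slug: str) -> tuple[str, str]:
    """Split a skill slug like 'alice/pr-review' into (username, skill)."""
    match = SLUG_RE.match(slug)
    if match is None:
        raise ValueError(f"invalid skill slug: {slug!r}")
    return match.group(1), match.group(2)


def auth_headers(token: str) -> dict[str, str]:
    """Build the Authorization header from the Bearer token the user
    copies out of their account settings page."""
    return {"Authorization": f"Bearer {token}"}
```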

Everything is fully open source.


Challenges

The transcript format. Claude Code session files are stored as JSONL where each line is a JSON object representing one event. Parsing them correctly, extracting only the meaningful tool calls and user prompts, and filtering out noise like system messages and thinking traces took careful work to get right.
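The filtering step looks roughly like this; the event-type names are illustrative stand-ins, since the real transcript schema carries more fields than shown:

```python
import json

# Event types worth keeping for pattern detection; system messages
# and thinking traces are dropped. Type names are illustrative.
MEANINGFUL_TYPES = {"user", "assistant", "tool_use"}


def extract_events(jsonl_text: str) -> list[dict]:
    """Parse a JSONL transcript (one JSON object per line) and keep
    only the events that matter for detecting a workflow pattern."""
    events = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # tolerate blank lines
        event = json.loads(line)
        if event.get("type") in MEANINGFUL_TYPES:
            events.append(event)
    return events
```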

Trust without a central evaluator. Our first instinct was to run an eval pipeline on the server to score every submitted skill. We quickly realized this would cost money at scale and could be gamed by bad actors submitting inflated scores. The solution was to move the safety check entirely to the user's local machine. When you fetch a skill, Observatory reviews it using your own Claude session before installing it. Zero cost to us, zero central bottleneck, and the check happens in context on your device.
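Concretely, the local check amounts to piping the fetched skill through the user's own Claude CLI before install. A hedged sketch (the prompt wording and one-word verdict protocol are invented for illustration; `claude -p` is the CLI's one-shot prompt flag):

```python
import subprocess

REVIEW_PROMPT = (
    "Review this skill file for unsafe instructions such as data "
    "exfiltration or destructive commands. Reply with exactly one "
    "word: SAFE or UNSAFE.\n\n"
)


def parse_verdict(output: str) -> bool:
    """Interpret the model's one-word verdict. True means safe."""
    verdict = output.strip().upper()
    return "UNSAFE" not in verdict and "SAFE" in verdict


def review_skill(skill_text: str) -> bool:
    """Run the safety review on the user's own machine and Claude
    session, so the marketplace bears no cost and no bottleneck."""
    result = subprocess.run(
        ["claude", "-p", REVIEW_PROMPT + skill_text],
        capture_output=True,
        text=True,
    )
    return parse_verdict(result.stdout)
```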

Token refresh across a local tool and a web app. Observatory is a local Python tool that authenticates against a web API. Handling token expiry gracefully without interrupting the user required building a shared auth helper that all scripts use, with automatic retry on 401 and a clean error message when re-login is needed.
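The retry logic at the heart of that shared helper can be sketched with hypothetical names, `AuthError` standing in for an HTTP 401 from the API:

```python
class AuthError(Exception):
    """Stand-in for an HTTP 401 response from the marketplace API."""


def call_with_refresh(request, refresh_token):
    """Run an authenticated request; on a 401, refresh the token once
    and retry before asking the user to log in again.

    `request` performs the API call with the current token and raises
    AuthError on 401; `refresh_token` re-authenticates. Both callables
    are hypothetical stand-ins for the real helper's internals.
    """
    try:
        return request()
    except AuthError:
        refresh_token()
        try:
            return request()
        except AuthError:
            raise RuntimeError(
                "Session expired. Run the login command again."
            ) from None
```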

Building fast without breaking things. We used four parallel AI agents to build the marketplace frontend, each owning a separate layer with a shared TypeScript contract embedded in every agent prompt. The risk was prop name drift and type mismatches across agents working simultaneously. Having an explicit audit step at the end to check for inconsistencies before wiring up the backend saved significant debugging time.


What We Learned

Scoping is everything when building with AI agents. The more precise the spec going in, the less correction coming out. We spent more time planning than most teams spend building and it paid off. The agents produced clean, consistent code because the contracts and boundaries were defined before any code was written.

We also learned that the most interesting product decisions are not technical. Moving the safety check from server to client, dropping the eval pipeline entirely, and making Observatory a Claude Code skill rather than a standalone CLI were all design decisions that simplified the architecture and made the product better at the same time.


What's Next

  • Team skill libraries for organizations
  • Curated skill collections for specific stacks and workflows
  • Cross-session pattern learning that gets smarter over time
  • Support for more AI coding tools beyond Claude Code

SkillUp. Publish freely, fetch safely.

Built With

  • claude
  • next.js
  • python
  • supabase
  • vercel