CLAM: Command Line Assistance Module (FixErrorPls)
The Story
Do you remember your first `git push --force`, `sudo rm -rf`, `chown -R root:root /`? It was a difficult time for everyone. Until our humble superhero, Clammy, opened ✨the shell✨ to everyone.
Clammy's Job
Most tools try to be helpful all the time, but most users don't need the noise, setup, and cognitive load. We asked Clammy to solve a simpler question: what is the smallest help that actually works?
We would like to introduce:
clam
A local command-line assistant that only shows up when something breaks and disappears once it's fixed. If you don't need it, you forget it exists - and that's by design.
Tagline: The first fully free and open source, truly offline, inline command-line assistance tool
Key Features
Inline Suggestions
Press Ctrl+Space and Clammy suggests commands based on what you're typing. But this isn't your shell's tab completion; Clammy understands intent. Typing `git ch` might suggest `git checkout main` or `git cherry-pick` depending on your recent history and repository state. No guessing, no Googling, no context switching.
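As a toy illustration of intent-aware ranking (the function and history format below are invented for this sketch, not Clammy's actual algorithm), even recency-weighted matching already beats plain prefix completion:

```python
def rank_suggestions(prefix: str, candidates: list[str], history: list[str]) -> list[str]:
    """Rank candidate completions, preferring commands seen recently in history."""
    recency = {cmd: i for i, cmd in enumerate(history)}  # later index = more recent
    matches = [c for c in candidates if c.startswith(prefix)]
    return sorted(matches, key=lambda c: recency.get(c, -1), reverse=True)

history = ["git checkout main", "git status", "git checkout main"]
print(rank_suggestions("git ch", ["git cherry-pick", "git checkout main"], history))
# ['git checkout main', 'git cherry-pick']
```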
Fix Error Please (FEP)
Command failed? Instead of copying the error into a search engine, just ask Clammy. The `clam fep` command reads the error output, analyzes what went wrong, and offers a corrected command. One keystroke to understand the problem. One more to fix it.
For example:
$ git comit -m "update"
git: 'comit' is not a git command.
$ clam fep
--> Suggested: git commit -m "update"
--> Explanation: Typo in 'commit'
Clammy Protects You
Before you execute something catastrophic, Clammy taps you on the shoulder.
Known dangerous patterns (`rm -rf /`, fork bombs, recursive permission changes) are caught immediately without any network call. This layer is fast and works offline.
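A minimal sketch of how such an offline check might look (the rule list and `check_command` helper are illustrative, not Clammy's real rule set):

```python
import re

# Illustrative rules only; the real list would be far longer.
DANGEROUS_PATTERNS = [
    (re.compile(r"\brm\s+-[rRf]+\s+/\s*$"), "recursive delete of the filesystem root"),
    (re.compile(r":\(\)\s*\{\s*:\|:\s*&\s*\}\s*;\s*:"), "classic fork bomb"),
    (re.compile(r"\bchown\s+-R\b.*\s+/\s*$"), "recursive ownership change from /"),
]

def check_command(cmd: str) -> str | None:
    """Return a warning for known-dangerous commands, or None if no rule matches."""
    for pattern, reason in DANGEROUS_PATTERNS:
        if pattern.search(cmd):
            return f"Blocked locally: {reason}"
    return None  # not obviously dangerous; may still be sent to the LLM layer

print(check_command("rm -rf /"))               # Blocked locally: recursive delete ...
print(check_command("rm -rf ./node_modules"))  # None
```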
For commands that aren't obviously dangerous but might have unintended consequences, Clammy sends a sanitized version to an LLM for analysis. It explains why a command might be risky in plain English: "This will recursively delete all files in your home directory, including hidden configuration files."
You always have the final say. Clammy warns; you decide.
Context Awareness
Clammy doesn't guess; it reads. Before suggesting anything, it pulls context from:
- The `--help` output of relevant commands
- Your current working directory and visible files
- Recent command history (sanitized)
- Environment variables (filtered for privacy)
This means suggestions are technically accurate, not hallucinated from training data.
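A sketch of what that collection step could look like in the Python layer (the function name, truncation limits, and environment allow-list are assumptions for illustration):

```python
import os
import subprocess
from pathlib import Path

SAFE_ENV_KEYS = {"SHELL", "TERM", "LANG"}  # privacy filter: allow-list, never the full env

def gather_context(command: str, history: list[str]) -> dict:
    """Collect the local signals that go into the prompt."""
    tool = command.split()[0] if command.strip() else ""
    help_text = ""
    if tool:
        try:
            # --help output of the command being typed, truncated to keep the prompt small
            result = subprocess.run([tool, "--help"], capture_output=True, text=True, timeout=2)
            help_text = result.stdout[:2000]
        except (FileNotFoundError, subprocess.TimeoutExpired):
            pass
    return {
        "cwd": os.getcwd(),
        "visible_files": sorted(p.name for p in Path(".").iterdir())[:50],
        "recent_history": history[-20:],  # already sanitized upstream
        "env": {k: v for k, v in os.environ.items() if k in SAFE_ENV_KEYS},
        "help_output": help_text,
    }
```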
How Clammy Works
Under the hood, CLAM is modular, shell-agnostic, and built for speed.
Tech Stack
| Component | Technology |
|---|---|
| Core Logic | Shell scripts for hooks and installation |
| LLM Orchestration | Python-based pipeline for prompt construction and response streaming |
| Intelligence Layer | OpenAI, Groq, Anthropic APIs + Ollama for fully local inference |
| Terminal UI | Custom ANSI-based rendering engine for floating suggestion boxes |
| Caching | Local SQLite cache for frequent command patterns |
The Pipeline
When you invoke Clammy, here's what happens:
1. Capture: we grab `$LAST_COMMAND`, `$EXIT_CODE`, and the last N lines of terminal output
2. Sanitize: environment variables and command history are scrubbed for secrets (API keys, passwords, tokens)
3. Contextualize: we inject the current directory, visible files, and relevant `--help` output into the prompt
4. Infer: the request goes to your configured LLM provider (or Ollama locally)
5. Stream: the response streams directly to your cursor position, character by character
The whole round trip typically completes in under 500ms for cached patterns, or 1-2 seconds for novel queries.
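The first two stages are the easiest to show in code. A rough sketch, assuming the shell hook exports `CLAM_*` environment variables (those names are invented here; the real hook interface may differ):

```python
import os
import re

SECRET = re.compile(r"(?i)(bearer\s+\S+|(?:api[_-]?key|token|password)=\S+)")

def capture() -> dict:
    """Stage 1: read what the shell hook recorded before `clam` was invoked."""
    return {
        "command": os.environ.get("CLAM_LAST_COMMAND", ""),
        "exit_code": os.environ.get("CLAM_EXIT_CODE", "0"),
        "output": os.environ.get("CLAM_LAST_OUTPUT", "")[-2000:],  # tail only
    }

def sanitize(payload: dict) -> dict:
    """Stage 2: redact secret-looking substrings before building the prompt."""
    return {key: SECRET.sub("[REDACTED]", value) for key, value in payload.items()}

print(sanitize(capture()))
```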
The Hard Parts
Speed Matters
Clammy learned early: if you take 5 seconds to respond, nobody waits. Users expect autocomplete to feel instant; anything slower than a blink feels broken.
We solved this with a multi-tier caching strategy:
- L1 Cache: In-memory cache for the current session
- L2 Cache: SQLite-backed persistent cache for frequent patterns
- Speculative Prefetch: For common command prefixes, we pre-warm the cache in the background
Result: frequent commands get suggestions in under 100ms.
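A stripped-down version of the two persistent tiers might look like this (the class name, schema, and `~/.clam/cache.db` path are assumptions for the sketch):

```python
import os
import sqlite3

class SuggestionCache:
    """L1: in-memory dict for the session. L2: SQLite for persistence across sessions."""

    def __init__(self, path: str = "~/.clam/cache.db"):
        path = os.path.expanduser(path)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        self.l1: dict[str, str] = {}
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, suggestion TEXT)")

    def get(self, key: str) -> str | None:
        if key in self.l1:                 # L1 hit: no I/O at all
            return self.l1[key]
        row = self.db.execute("SELECT suggestion FROM cache WHERE key = ?", (key,)).fetchone()
        if row:
            self.l1[key] = row[0]          # promote to L1 for the rest of the session
            return row[0]
        return None                        # miss: fall through to the LLM

    def put(self, key: str, suggestion: str) -> None:
        self.l1[key] = suggestion
        self.db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)", (key, suggestion))
        self.db.commit()
```

In this scheme, speculative prefetch amounts to calling `put` from a background worker for likely command prefixes before the user finishes typing.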
Terminals Are Chaos
We wanted a clean, floating suggestion box, like VS Code's autocomplete, but in a terminal. The terminal wanted chaos.
Every terminal emulator handles ANSI escape codes slightly differently. Cursor positioning, color rendering, box-drawing characters. All of it varies. And the moment someone resizes their window mid-suggestion, everything shifts.
We built a custom rendering engine that:
- Detects terminal dimensions dynamically
- Handles resize events gracefully
- Falls back to simpler rendering on terminals with limited support
- Uses pure ANSI sequences (no ncurses dependency)
It took longer than the LLM integration. Worth it.
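To make the ANSI approach concrete, here is a toy renderer in the same spirit (the real engine also handles color fallbacks and partial redraws; this sketch only shows dynamic sizing, clamping, and resize handling):

```python
import shutil
import signal

def draw_box(lines: list[str], row: int, col: int) -> None:
    """Render a floating suggestion box with raw ANSI sequences (no ncurses)."""
    width = max(len(line) for line in lines)
    cols = shutil.get_terminal_size().columns   # detect dimensions dynamically
    col = max(1, min(col, cols - width - 4))    # clamp so the box never overflows
    print("\x1b[s", end="")                     # save the user's cursor position
    print(f"\x1b[{row};{col}H┌" + "─" * (width + 2) + "┐")
    for i, line in enumerate(lines, start=1):
        print(f"\x1b[{row + i};{col}H│ {line.ljust(width)} │")
    print(f"\x1b[{row + len(lines) + 1};{col}H└" + "─" * (width + 2) + "┘")
    print("\x1b[u", end="", flush=True)         # restore the cursor for typing

suggestions = ['git commit -m "update"', "git cherry-pick"]
# Redraw on window resize so the box tracks the new dimensions (Unix only).
signal.signal(signal.SIGWINCH, lambda *_: draw_box(suggestions, 2, 4))
draw_box(suggestions, 2, 4)
```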
Secrets Stay Secret
Your command history is a liability. It's full of things that should never leave your machine:
- API keys passed as arguments (`curl -H "Authorization: Bearer sk-..."`)
- Passwords in one-liners
- Private file paths and hostnames
Before any data touches an external API, Clammy runs it through a sanitization layer that:
- Regex-matches common secret patterns (API keys, tokens, passwords)
- Redacts environment variable values
- Strips private paths and replaces them with placeholders
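A cut-down sketch of that layer (the patterns below are examples, not the full redaction table):

```python
import os
import re

REDACTIONS = [
    (re.compile(r"\bsk-[A-Za-z0-9_-]{16,}\b"), "[API_KEY]"),             # OpenAI-style keys
    (re.compile(r"(?i)\b(password|passwd|token|secret)=\S+"), r"\1=[REDACTED]"),
    (re.compile(r"Bearer\s+[^\s\"']+"), "Bearer [REDACTED]"),
    (re.compile(re.escape(os.path.expanduser("~"))), "~"),               # collapse private paths
]

def sanitize(text: str) -> str:
    """Scrub secret-looking substrings before anything touches an external API."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(sanitize('curl -H "Authorization: Bearer sk-live-abc123"'))
# curl -H "Authorization: Bearer [REDACTED]"
```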
If you use Ollama, nothing ever leaves your machine at all.
What Makes Us Proud
The Zero-Tab Workflow
The dream was simple: fix a complex git rebase gone wrong without ever opening a browser. No Stack Overflow. No copying error messages. Just ask Clammy, get the fix, apply it, move on.
We made it real. During testing, we tracked how often users left the terminal to debug errors. With Clammy, that number dropped by over 80%.
True Offline Mode
Privacy shouldn't be a premium feature. With Ollama integration, Clammy runs 100% locally. No API calls, no telemetry, no data leaving your machine. You can use it on air-gapped systems, on planes, or anywhere you don't trust the network.
Local models are getting good enough that for most autocomplete and error-fixing tasks, you don't notice the difference.
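For instance, pointing the inference step at Ollama's local HTTP API takes only a few lines (the model name and prompt wording here are placeholders):

```python
import json
import urllib.request

def local_fix(error_output: str, model: str = "llama3") -> str:
    """Ask a local Ollama model for a fix; nothing leaves the machine."""
    body = json.dumps({
        "model": model,
        "prompt": f"Suggest a corrected shell command for this error:\n{error_output}",
        "stream": False,  # one JSON reply; set True for token-by-token streaming
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(local_fix("git: 'comit' is not a git command."))
```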
Smart Safety That Understands Context
Clammy doesn't just pattern-match `rm -rf`. It understands context.
- `rm -rf ./node_modules` → Probably fine, common cleanup
- `rm -rf ~/` → Definitely not fine, let's talk
- `rm -rf /tmp/build-*` → Looks intentional, but let's confirm
The LLM layer can distinguish between a routine cleanup task and a catastrophic mistake, even when the syntax looks similar.
Try It Out
Clammy was born at ICHack '26