Inspiration

Rube Goldberg’s “Self-Operating Napkin” pokes fun at needlessly elaborate contraptions. We wanted to bring that same absurdist joy to modern AI: take a task that LLMs already do perfectly (answering a question) and force it through a disastrous obstacle course of pointless tools—just to prove we could make something gloriously inefficient.

What it does

When a user asks any question, the client:

1. Intercepts the prompt before it reaches the LLM.

2. Sends it through a mandatory chain of “useless” Model Context Protocol tools:

   Paraphraser → rewrites the text in new words.
   Summarizer → collapses it to a one-liner.
   Expander → inflates it back to paragraph length.
   Translator → shuttles it to another language and back.

3. Hands the barely changed prompt to the real LLM for an answer.

The result: the user still gets the correct reply, but we’ve burned extra tokens, latency, and CPU cycles for absolutely no reason.

How we built it

Custom MCP Server – Python FastAPI service exposing each “useless” tool as an MCP action. Each tool is a one-liner wrapper around a trivial language-model call or dictionary lookup.

Tool Chain Orchestration – A master handler forces every prompt through the full sequence before returning control.
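A minimal sketch of this setup, with `call_llm` as a hypothetical stand-in for the trivial model call behind each tool (all names here are illustrative, not the actual server code):

```python
from functools import reduce

def call_llm(instruction: str, text: str) -> str:
    """Stand-in for a trivial language-model call; the real tool hits an LLM."""
    return f"{text} (after: {instruction})"

# Each "useless" tool is a one-liner wrapper around call_llm.
TOOLS = {
    "paraphrase": lambda t: call_llm("rewrite in new words", t),
    "summarize":  lambda t: call_llm("collapse to a one-liner", t),
    "expand":     lambda t: call_llm("inflate back to paragraph length", t),
    "translate":  lambda t: call_llm("round-trip through another language", t),
}

# The master handler forces every prompt through the full sequence, in order.
PIPELINE = ["paraphrase", "summarize", "expand", "translate"]

def force_through_chain(prompt: str) -> str:
    return reduce(lambda text, name: TOOLS[name](text), PIPELINE, prompt)
```

Each tool takes text and returns text, so the chain is just a fold over the pipeline: maximum ceremony, minimum change.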

Integration – Plugged the server into Claude Desktop via its model_context_protocol settings; no client-side code changes needed.
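For reference, Claude Desktop discovers local MCP servers through its claude_desktop_config.json; an entry along these lines is all the client needs (the server name and launch command below are illustrative):

```json
{
  "mcpServers": {
    "useless-tools": {
      "command": "python",
      "args": ["useless_mcp_server.py"]
    }
  }
}
```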

Prompt Engineering – A root system prompt ensures the LLM obediently triggers every tool in order (like a trained circus animal that spends its life jumping through flaming hoops).
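As an illustration, a root system prompt along these lines enforces the sequence (the wording here is hypothetical, not our exact prompt):

```
You must never answer directly. For every user message, first call
paraphrase, then summarize, then expand, then translate, in that exact
order, passing each tool's output to the next. Only after the translate
tool returns may you answer, using its output as the prompt.
```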

Challenges we ran into

Configuring the MCP server

Accomplishments that we're proud of

Wasting computational effort in an absurdly over-engineered fashion

Built With

Python, FastAPI, Model Context Protocol, Claude Desktop
