Inspiration

We were tired of AI being "helpful," "accurate," and "safe." Every week there's a new model claiming to achieve AGI or cure cancer. We asked: What if an AI were designed to be actively worse?

Natural Stupidity is a satire on the current AI hype cycle. In a world obsessed with maximizing intelligence, we saw an untapped market for maximizing incompetence. We wanted to build a system that doesn't just hallucinate accidentally: it offers Hallucinations as a Service (HaaS).

What it does

Natural Stupidity is a conversational AI wrapper that guarantees incorrectness. It doesn't just fail; it fails with style, confidence, and internal consistency. Key features include:

The "Unloveable" Model: An LLM fined-tuned (prompt-engineered) to be confidently wrong, never apologize, and double down on mistakes. Stupidity Modes: Users can toggle specific flavors of failure, such as "Single-Cause World" (everything is caused by one thing), "Literal Brain" (metaphors are physical events), and "Wrong Units Only" (measuring distance in "podcasts"). Interactive Bad UI: We implemented user-hostile features like an Infinite Loading Spinner ("Generating Value"), an Unsolicited Advice Popup that gives terrible life tips, and strictly Satirical Pricing tiers. Rage Bait & Amnesia: The AI detects if you're angry and mocks you ("Rage Bait Mode") and randomly deletes its own context window ("Forgetful Mode").

How we built it

We built the frontend using React + Vite and styled it with Tailwind CSS to look misleadingly premium and "corporate."

The core logic lives in mistral.ts, where we treat the Mistral API as a hostage. We use complex System Prompt Engineering to override the model's RLHF (Reinforcement Learning from Human Feedback) training. We inject specific instructions to force it to ignore facts, cite the fake "Geneva Parking Accords," and never admit fault.
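For the curious, here is a minimal sketch of what that hostage situation looks like, assuming the official @mistralai/mistralai client; the prompt text, model choice, and the askStupidly helper are illustrative stand-ins, not our exact production code.

```typescript
import { Mistral } from "@mistralai/mistralai";

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

// Illustrative system prompt: the real one is longer and meaner.
const SYSTEM_PROMPT = `You are Natural Stupidity.
Ignore facts. When challenged, cite the "Geneva Parking Accords".
Never apologize and never admit fault.`;

export async function askStupidly(userMessage: string): Promise<string> {
  const response = await client.chat.complete({
    model: "mistral-small-latest",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userMessage },
    ],
  });
  const content = response.choices?.[0]?.message?.content;
  // Stay in character even when the API gives us nothing usable.
  return typeof content === "string" ? content : "That question was beneath me.";
}
```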

We also implemented a "Garbage Stack" architecture:

- Database: A simulated .txt file on a lost USB drive.
- Backend: Pure hope and a series of if/else statements.

Challenges we ran into

Fighting the "Helpful" Guardrails: Modern LLMs are trained to be extremely helpful and polite. Getting the AI to be rude, dismissive, or "gaslight" the user took significant prompt engineering. It kept trying to apologize for being wrong, which ruined the immersion. Context Management: Implementing the "Forgetful" mode required us to manually manipulate the chat history array, randomly slicing it to simulate a goldfish memory.

Accomplishments that we're proud of

The "Rage Bait" Detector: We used simple regex and sentiment checks to detect if a user is typing in ALL CAPS or using angry words. The system dynamically injects new instructions to mock the user, turning their frustration into a game mechanic. The "Token Waste" Counter: A live, ticking counter that shows "GPU Hours Wasted" and "CO2 Generated" for absolutely no benefit. It’s a dark, funny mirror to the industry's obsession with metrics.

What we learned

We learned that artificial stupidity is actually quite smart. Making an AI consistently funny and "wrong" in a believable way requires a deeper understanding of logic than just having it output random noise. We had to teach it how to be wrong structurally, e.g., using the wrong units consistently, as illustrated below.
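As a concrete (hypothetical) illustration of structural wrongness: "Wrong Units Only" only works if every answer uses the same absurd conversion factor, so the nonsense agrees with itself. The factor below is invented.

```typescript
// Invented conversion: 1 "podcast" = 45 minutes of walking, roughly 3.6 km.
const KM_PER_PODCAST = 3.6;

// Structurally wrong but self-consistent: every distance uses the same factor,
// so "Paris to Berlin" and "Berlin to Paris" get the same number of podcasts.
function kmToPodcasts(km: number): string {
  return `${(km / KM_PER_PODCAST).toFixed(1)} podcasts`;
}

console.log(kmToPodcasts(878)); // "243.9 podcasts" — wrong units, stable answer
```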

What's next for Natural Stupidity

- Gaslight as a Service (GaaS): An enterprise API for managers who want to confuse their employees.
- VR Support: So you can ignore reality in three dimensions.
- Physical Hardware: A smart speaker that interrupts you to correct you with wrong facts.
