Inspiration
Most tools focus on telling people what to do.
But in real life, the harder problem is understanding what happens if you don’t do anything.
We noticed that people often ignore bills, deadlines, and notices not out of laziness, but because consequences are unclear, delayed, or overwhelming. Fear comes from uncertainty, not from facts. When outcomes are vague, inaction feels safer than making the wrong move.
What Happens If I Don’t? was inspired by this gap — there was no system designed to explain inaction itself. Everything optimized for action. Nothing explained silence.
What it does
What Happens If I Don’t? is a consequence-clarity engine.
Users describe something they are avoiding in plain language, such as:
“I haven’t paid my hospital bill.”
The system responds with:
- A neutral summary of the situation
- A realistic timeline of consequences
- Clear point-of-no-return markers
- The cost of continued inaction
- The cheapest exit path available right now
- A calm severity indicator and explanation of why escalation happens
The goal is not to push users into action, but to replace uncertainty with clarity so they can decide calmly.
How we built it
The project is built as a modern web application using Next.js, TypeScript, and Tailwind CSS for a clean, accessible UI.
On the backend, we designed a structured reasoning pipeline powered by large language models:
- Groq (serving Llama 3) acts as the primary reasoning engine, generating structured consequence timelines and severity analysis.
- Perplexity is optionally used to ground responses in realistic ranges and contextual facts when applicable.
- All outputs are validated against a strict JSON schema to ensure consistency and explainability.
The frontend visualizes this structured output using animated timelines, severity meters, and comparison views, intentionally avoiding notifications, reminders, or fear-based messaging.
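The structured output described above can be sketched in TypeScript. This is a minimal illustration, not the project's actual schema: the field names (`summary`, `timeline`, `cheapestExit`, etc.) are assumptions, and a real pipeline would likely use a schema library such as zod rather than a hand-rolled check.

```typescript
// Illustrative shape of the model's structured output.
// Field names are assumptions, not the project's real schema.
interface ConsequenceReport {
  summary: string;                        // neutral description of the situation
  timeline: {
    day: number;                          // days from now
    event: string;                        // what happens at that point
    likelihood: "likely" | "possible" | "rare";
  }[];
  pointOfNoReturn: string | null;         // marker after which options narrow
  inactionCost: string;                   // cost of continued inaction
  cheapestExit: string;                   // lowest-effort resolution available now
  severity: number;                       // calm severity indicator, 1 (low) to 5 (high)
}

// Minimal runtime guard applied before rendering; rejects any LLM output
// that does not match the expected shape.
function isConsequenceReport(value: unknown): value is ConsequenceReport {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.summary === "string" &&
    Array.isArray(v.timeline) &&
    typeof v.inactionCost === "string" &&
    typeof v.cheapestExit === "string" &&
    typeof v.severity === "number" &&
    v.severity >= 1 &&
    v.severity <= 5
  );
}
```

Gating the UI on a check like this is what lets the frontend render timelines and severity meters without defensive null-handling everywhere: malformed model output is rejected at the boundary instead of leaking into components.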
Challenges we ran into
One of the biggest challenges was tone control.
It was easy for an AI system to sound alarming, judgmental, or overly cautious when describing consequences. We had to carefully design prompts and constraints to:
- Avoid shaming language
- Separate likely, possible, and rare outcomes
- Clearly label assumptions and uncertainty
Another challenge was keeping responses realistic without crossing into legal or medical advice. We addressed this by framing the system as informational and educational, not prescriptive.
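The tone constraints above can be encoded directly into the system prompt. The snippet below is a hypothetical sketch of that approach; the wording of the rules and the `buildSystemPrompt` helper are illustrative, not the project's actual prompts.

```typescript
// Hypothetical tone constraints baked into every request.
// The rule wording is illustrative, not the project's real prompt text.
const toneConstraints: string[] = [
  "Use neutral, non-judgmental language; never shame the user.",
  "Label every outcome as likely, possible, or rare.",
  "State assumptions explicitly and flag any uncertainty.",
  "Stay informational and educational; do not give legal or medical advice.",
];

// Assemble a system prompt that carries the constraints alongside the
// user's scenario, so every model call is bound by the same rules.
function buildSystemPrompt(scenario: string): string {
  return [
    "You explain the realistic consequences of inaction, calmly and clearly.",
    "Follow these rules:",
    ...toneConstraints.map((rule, i) => `${i + 1}. ${rule}`),
    `Scenario: ${scenario}`,
  ].join("\n");
}
```

Centralizing the constraints in one place like this makes tone a reviewable artifact: changing how the system speaks means editing a list of rules, not hunting through scattered prompt strings.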
Accomplishments that we're proud of
- Creating a new interaction pattern focused on inaction, not action
- Designing an AI system that explains consequences without fear or pressure
- Building a clean, production-ready interface within hackathon constraints
- Maintaining ethical AI principles while handling sensitive real-world scenarios
What we learned
We learned that clarity reduces anxiety more effectively than motivation.
When people understand the system behind consequences — fees, deadlines, escalation rules — they feel less overwhelmed and more capable of making decisions. We also learned that AI systems handling sensitive topics must be intentionally restrained; less guidance can be more responsible.
What's next for What Happens If I Don’t?
Future improvements include:
- Jurisdiction-aware timelines and regional assumptions
- Document uploads (bills, notices) for more precise analysis
- Side-by-side “Act Now vs Do Nothing” comparisons
- Accessibility improvements and multilingual support
- Privacy-first local processing modes
The long-term vision is to make consequence clarity a public utility — something people can rely on when uncertainty leads to paralysis.
Built With
- framer-motion
- groq-api-(llama-3)
- json-based-reasoning-pipeline
- next.js
- perplexity-api
- rest-api-routes
- tailwind-css
- typescript
- vercel