Inspiration and What it does

Humans make thousands of decisions every day, yet we rarely introspect them systematically. We have Fitbits for our steps and Mint for our finances — but when it comes to how we think, we operate almost entirely blind.

We were inspired by what we call the “Introspection Gap”: the disconnect between our emotional state and our logical reasoning at the moment a decision is made. Most poor decisions aren’t caused by a lack of intelligence; they happen because unacknowledged cognitive biases (like the Sunk Cost Fallacy or Anchoring) combine with high-valence emotional states and quietly distort judgment.

CogniClear was built to close this gap.

It isn’t a journal. It isn’t advice. CogniClear is a Decision Flight Recorder — a system that forces you to externalize your internal state, surfaces hidden cognitive biases, and simulates plausible futures before you commit to a choice.

Instead of telling you what to do, CogniClear helps you understand how your decisions are being made.

How we built it

The Brain (Gemini 3) The core of CogniClear is powered exclusively by gemini-3-pro-preview. We chose Gemini 3 because true introspection requires high-order reasoning, not just text generation. Lighter models tended to agree with the user or provide surface-level insights.

We designed strict structured JSON schemas and system prompts that force the model to behave like a rigorous cognitive scientist — extracting assumptions, emotional drivers, biases, and causal relationships instead of offering generic feedback.
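As a flavor of that prompt design, here is a minimal, illustrative sketch (not our exact production prompt) of a system instruction that pins the model to an analyst role and forbids generic feedback:

```typescript
// Illustrative system prompt: the constant name and exact wording are
// simplified stand-ins for the production version.
const ANALYST_SYSTEM_PROMPT = `
You are a rigorous cognitive scientist, not a coach.
For the decision text provided, you must:
1. Extract every stated assumption verbatim.
2. Identify emotional drivers and rate their valence from -1 to 1.
3. Name any cognitive biases present (e.g. Sunk Cost Fallacy, Anchoring),
   and for each bias quote the exact phrase that evidences it.
4. Map causal relationships between assumptions and the preferred option.
Never give advice. Never validate the user. Respond only with JSON
matching the provided response schema.
`.trim();
```

The key moves are role pinning (“cognitive scientist, not a coach”), an explicit task decomposition, and hard prohibitions that keep the model from drifting back into assistant behavior.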

The Frontend CogniClear is built with React and TypeScript for robustness and type safety, and styled using Tailwind CSS to create a calm, dark-mode, focus-inducing interface that encourages reflection rather than distraction.

Visualization We implemented the Semantic Decision Map using Recharts. Decisions are plotted in a 2D space using:

  1. Emotional Valence

  2. Clarity Score

This allows users to visually observe patterns in how emotions and reasoning quality interact across decisions — something a timeline or journal cannot reveal.
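The projection itself is simple. A hedged sketch, with illustrative field names (the real `Decision` type has more fields):

```typescript
// Hypothetical shape of a stored decision; field names are illustrative.
interface Decision {
  id: string;
  title: string;
  emotionalValence: number; // -1 (negative) .. 1 (positive), from Gemini
  clarityScore: number;     // 0 .. 100, from Gemini
}

// Project decisions into the 2D space the map plots:
// x = emotional valence, y = clarity.
function toMapPoints(decisions: Decision[]) {
  return decisions.map(d => ({
    x: d.emotionalValence,
    y: d.clarityScore,
    label: d.title,
  }));
}
```

These points feed Recharts’ `ScatterChart`/`Scatter` components, so a cluster of low-clarity, high-valence points becomes immediately visible as a region of the chart.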

Persistence We implemented a flexible DataService layer, currently backed by LocalStorage to simulate a MongoDB-style document architecture. This allows full CRUD operations and historical tracking without backend overhead during the hackathon, while remaining backend-ready for production.
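A minimal sketch of that layer, under the assumption of document-style CRUD over a pluggable key-value store (an in-memory `Map` stands in for LocalStorage here so the sketch is self-contained; names are illustrative):

```typescript
// Pluggable storage: window.localStorage satisfies this same interface.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

// In-memory stand-in for LocalStorage, used here for the sketch.
class MemoryStore implements KVStore {
  private data = new Map<string, string>();
  getItem(k: string) { return this.data.get(k) ?? null; }
  setItem(k: string, v: string) { this.data.set(k, v); }
  removeItem(k: string) { this.data.delete(k); }
}

// Document-style CRUD over whatever store is injected.
class DataService<T extends { id: string }> {
  constructor(private collection: string, private store: KVStore) {}
  private key(id: string) { return `${this.collection}:${id}`; }
  create(doc: T): void { this.store.setItem(this.key(doc.id), JSON.stringify(doc)); }
  read(id: string): T | null {
    const raw = this.store.getItem(this.key(id));
    return raw ? (JSON.parse(raw) as T) : null;
  }
  update(doc: T): void { this.create(doc); } // upsert, like a Mongo replaceOne
  delete(id: string): void { this.store.removeItem(this.key(id)); }
}
```

Because the store is injected, swapping `MemoryStore` for `window.localStorage` gives the browser version, and a future MongoDB-backed implementation only has to touch this one layer.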

Challenges we ran into

1. Hallucination vs. Reasoning Early versions using weaker models simply validated the user’s thinking. To fix this, we engineered Gemini 3 to play Devil’s Advocate, requiring it to cite explicit textual evidence from the user’s input when identifying biases or flawed reasoning.

2. Structured Output Reliability Getting an LLM to consistently return deeply nested JSON (biases, simulations, mitigation paths) was difficult. We leveraged Gemini’s responseSchema enforcement to guarantee strict typing and predictable outputs.

3. The “Future Simulator” Designing a counterfactual experience that felt like a simulation, not a chatbot, required careful instruction tuning. The model had to act as the environment — reasoning about constraints, assumptions, and consequences — rather than as an assistant giving advice.
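The schema enforcement from challenge 2 can be sketched as follows. This is a hedged, simplified example in the OpenAPI-style subset the Gemini API accepts for response schemas; the field names are illustrative, not our exact production schema:

```typescript
// Simplified response schema. Note the evidenceQuote field: it is what
// forces the Devil's-Advocate behavior to stay grounded in the user's
// own words rather than hallucinated biases.
const analysisSchema = {
  type: "OBJECT",
  properties: {
    clarityScore: { type: "NUMBER" },
    emotionalValence: { type: "NUMBER" },
    biases: {
      type: "ARRAY",
      items: {
        type: "OBJECT",
        properties: {
          name: { type: "STRING" },
          evidenceQuote: { type: "STRING" }, // verbatim text from the user
        },
        required: ["name", "evidenceQuote"],
      },
    },
  },
  required: ["clarityScore", "emotionalValence", "biases"],
};
```

The schema is supplied in the request config alongside a JSON response MIME type, so every response parses into the same typed shape the frontend expects.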

Accomplishments that we're proud of

One of our hardest challenges was quantifying something as abstract as “Clarity” without turning it into a fake or misleading metric.

In CogniClear, when we task Gemini 3 with calculating the Clarity Score, we are not asking it to judge intelligence or correctness. We are asking it to evaluate the user’s decision against well-established mental models from decision science and game theory — the same frameworks used by disciplined thinkers, strategists, and analysts.

Specifically, the model looks for evidence of:

1. Structured Option Thinking (MECE) The AI checks whether the options considered are mutually exclusive and collectively exhaustive (MECE), rather than falling into binary traps. Decisions that explore multiple, non-overlapping alternatives score higher than those framed as “do it vs. don’t do it.”

2. Second-Order Thinking (“And then what?”) The model evaluates whether the reasoning goes beyond immediate outcomes. Decisions that consider long-term consequences, downstream effects, and trade-offs score higher than those driven purely by short-term gratification.

3. First-Principles Reasoning CogniClear assesses whether the user is reasoning from fundamental facts or relying on social proof and analogy. Justifications like “everyone else is doing this” reduce clarity, while reasoning grounded in core constraints and realities increases it.

4. Probabilistic Thinking The AI analyzes language for how uncertainty is handled. Absolutist language (“always,” “never,” “definitely”) lowers the score, while probabilistic framing (“likely,” “risk of,” “there’s a chance”) raises it — reflecting a more realistic mental model of the future.
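To make signal 4 concrete, here is a toy illustration of the kind of linguistic evidence involved. In CogniClear the actual scoring is done by Gemini 3’s reasoning over the full text, not by keyword matching; the function and word lists below are purely illustrative:

```typescript
// Toy heuristic for signal #4 only; the real signal comes from the model.
const ABSOLUTIST = ["always", "never", "definitely", "guaranteed"];
const PROBABILISTIC = ["likely", "probably", "risk of", "there's a chance", "might"];

// Positive result = probabilistic framing dominates; negative = absolutist.
function uncertaintySignal(text: string): number {
  const t = text.toLowerCase();
  const count = (words: string[]) =>
    words.reduce((n, w) => n + (t.includes(w) ? 1 : 0), 0);
  return count(PROBABILISTIC) - count(ABSOLUTIST);
}
```

A sentence like “this will definitely work, it always does” pulls the signal negative, while “it’s likely to help, though there’s a chance it won’t” pulls it positive; the model weighs this alongside the other three signals rather than in isolation.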

What the Clarity Score Actually Represents

When Gemini 3 generates the Clarity Score, it is effectively asking:

“Does this decision exhibit the patterns of a disciplined thinker, or the patterns of an impulsive, emotionally driven reaction?”

This allows us to transform qualitative thought processes into structured, trackable signals, so users can observe how their decision-making style evolves over time, without pretending the number is objective truth. The score isn’t a judgment. It’s a mirror.

What we learned

Gemini 3 is a Reasoner Gemini-3-pro consistently detected subtle linguistic cues — hesitation, rationalization, emotional leakage — that smaller models completely missed.

The Power of Externalization During testing, we found that simply writing down the Options Considered often changed our own decisions before the AI even responded.

Bias Is Universal Even while building CogniClear, we caught ourselves falling into Confirmation Bias — hoping the AI would validate our design and code choices.

What's next for CogniClear

Real Backend Integration Connecting the DataService layer to MongoDB Atlas for secure, cross-device synchronization.

Voice Introspection Allowing users to speak freely (“rant mode”) and using the model to structure disorganized speech into coherent decision trees.

Collaborative Decision Mode A “War Room” feature where teams can input decisions collectively and detect group-level biases such as Groupthink or Authority Bias.

"CogniClear doesn’t make decisions for you. It helps you finally see how you make them."

Built With

gemini-3-pro-preview, React, TypeScript, Tailwind CSS, Recharts