Inspiration
While brainstorming for a recent project, I caught myself opening an AI chatbot before I'd even attempted to think through the problem myself. At first I thought it was just a confidence issue — that I didn't trust my own thinking anymore. But then I started researching and found out this isn't just a personal quirk. It's a documented phenomenon. Studies show that heavy AI use is measurably reducing critical thinking ability, especially in students aged 17–25. The term researchers use is cognitive interoception — your internal sense of how capable and independent your own thinking is. And it turns out, when you outsource your thinking repeatedly, that sense quietly atrophies. You don't notice it going. That invisibility is the whole problem. There's no dashboard, no signal, no mirror. We decided to build one.
What it does
Grounded passively monitors how much of your thinking you're actually doing yourself every time you interact with an AI tool. Every query is classified in real time as either supplementing (you wrote first, you're asking for feedback, you're using AI to sharpen your own thinking) or supplanting (you handed the whole task over before attempting it yourself). This classification happens silently in the background using behavioural signals — time to submit, query structure, message length, and self-declaration.
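The behavioural-signal classification described above can be sketched as a simple heuristic. The signal names, thresholds, and priority order here are illustrative assumptions, not the values Grounded actually ships with:

```typescript
// Hypothetical sketch of the supplementing-vs-supplanting classifier.
// All thresholds are illustrative, not the tuned production values.

type Signals = {
  secondsBeforeSubmit: number; // time spent before sending the query
  queryLength: number;         // characters typed by the user
  includesOwnDraft: boolean;   // did the user paste their own attempt first?
  selfDeclared?: "supplementing" | "supplanting"; // optional self-report
};

type Label = "supplementing" | "supplanting";

function classifyQuery(s: Signals): Label {
  // An explicit self-declaration wins when present.
  if (s.selfDeclared) return s.selfDeclared;
  // Evidence of prior effort points to supplementing.
  if (s.includesOwnDraft) return "supplementing";
  // Very fast, very short queries suggest the task was handed over wholesale.
  if (s.secondsBeforeSubmit < 30 && s.queryLength < 80) return "supplanting";
  return "supplementing";
}
```

In practice a heuristic like this would be a client-side backstop for the clearest cases, with the model handling the ambiguous middle of the spectrum.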
Your Cognitive Health Score (0–100) updates live based on this pattern. It drops when you supplant, rises when you supplement or complete a Reclaim Workout. The dashboard shows your score as a breathing ring, your session log as a live feed, and your 30-day trend as a chart. The Reclaim tab gives you timed, AI-free writing challenges designed to rebuild the specific cognitive domains that have gone quiet. The Profile tab shows your baseline score from Day 1 frozen alongside your current score — so you can see the gap, or the recovery.
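The score mechanics could look something like this in code. The delta values are assumptions for illustration; only the 0–100 clamping and the direction of each change come from the description above:

```typescript
// Illustrative Cognitive Health Score update, clamped to 0-100.
// Delta magnitudes are assumed, not the prototype's real tuning.
type ScoreEvent = "supplant" | "supplement" | "reclaimWorkout";

function updateScore(score: number, event: ScoreEvent): number {
  const delta = { supplant: -3, supplement: 1, reclaimWorkout: 5 }[event];
  return Math.min(100, Math.max(0, score + delta));
}
```

Clamping matters at the edges: a long supplanting streak bottoms out at 0 rather than going negative, so recovery always starts from a visible floor.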
Grounded doesn't compare you to anyone else. It only ever compares you to yourself.
How we built it
We started with pen and paper, deliberately. If we're asking students to think before they prompt, the least we could do was think before we built. All our early ideation was handwritten: initial brainstorming, concept maps, user journey sketches, the supplanting vs supplementing framework.
The prototype is built in Figma Make — a React-based interactive prototype embedded directly in Figma Slides for the submission. The AI tracking interface is powered by Claude Sonnet, which responds to the user helpfully while silently classifying each query as supplementing or supplanting. The full conversation history is passed on every call so Claude maintains context across the session — no repasting needed. Classification is reinforced client-side for the clearest cases, ensuring the score always reflects what actually happened. The cognitive score state is managed via React context and persists across all four tabs — Sense, Track, Reclaim, and Profile.
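Maintaining context by sending the full history on every call can be sketched as a pair of pure helpers. The `{ role, content }` message shape matches the Anthropic Messages API; the helper names themselves are hypothetical:

```typescript
// Sketch of accumulating the full conversation history for each Claude call.
// The whole array would be passed as `messages` in the API request.
type Msg = { role: "user" | "assistant"; content: string };

function withNewTurn(history: Msg[], userText: string): Msg[] {
  // Send the entire prior history plus the new user turn on every call,
  // so the model keeps session context without repasting.
  return [...history, { role: "user", content: userText }];
}

function recordReply(history: Msg[], assistantText: string): Msg[] {
  return [...history, { role: "assistant", content: assistantText }];
}
```

Keeping the helpers pure makes the history easy to hold in React context alongside the score, so every tab reads the same session state.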
Challenges we ran into
The first and hardest challenge was choosing the right sensory experience. It had to be something that matters not just as a 2026 trend but decades from now, something that deserved attention before it was too late. Cognitive interoception fit because the erosion is already happening silently, the research is clear, and no tool has pointed a mirror at it yet. Picking something that felt both urgent and timeless took more iteration than any line of code.
The second challenge was the onboarding and baseline design. How do you take an accurate, honest snapshot of someone's current cognitive health — not self-reported, not gameable, but a true picture of how their mind works right now? We had to design a two-phase baseline test that could measure unaided thinking separately from AI-assisted thinking, score it across meaningful dimensions, and do it in a way that felt fair to someone with dyslexia, anxiety, or just a bad morning. Getting the methodology right — what we measure, how we score it, what counts as a valid baseline — was the most intellectually demanding part of the project.
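One way the two-phase baseline could combine into a single score, purely as a sketch: the dimension names, equal dimension weighting, and the 70/30 phase split are all assumptions for illustration, not the methodology we settled on:

```typescript
// Hypothetical two-phase baseline: an unaided phase and an AI-assisted
// phase, each scored 0-100 across a few dimensions, then blended.
type PhaseScores = { clarity: number; structure: number; originality: number };

function phaseScore(d: PhaseScores): number {
  // Equal weights for illustration only.
  return (d.clarity + d.structure + d.originality) / 3;
}

function baselineScore(unaided: PhaseScores, assisted: PhaseScores): number {
  // Weight unaided thinking more heavily, since that is what Grounded
  // is ultimately trying to protect.
  return Math.round(0.7 * phaseScore(unaided) + 0.3 * phaseScore(assisted));
}
```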
The third challenge was drawing a precise, defensible line between supplementing and supplanting. These aren't binary states — they exist on a spectrum. A student who writes three paragraphs and then asks Claude to tighten one sentence is in a very different place than one who pastes an assignment title and copies the output. Designing a classification system that could distinguish between those two cases consistently, and explaining that distinction clearly enough that users trust it, required a lot of iteration.
Finally, building all of this while keeping up with ongoing college assignments was its own real constraint. Every design decision had a deadline breathing down its neck. That pressure made us prioritise ruthlessly — and probably made the product sharper for it.
Responsive design in Figma Make also took more iteration than expected — getting the sidebar, chat area, and score HUD to behave correctly across mobile, tablet, and desktop required careful use of Tailwind responsive prefixes throughout.
Accomplishments that we're proud of
Shipping something real under pressure
The thing we're most proud of is completing this project within the time constraint while staying genuinely true to the sensory experience we chose. We didn't cut corners on the concept to make the build easier. Every feature — the baseline test, the classification system, the Reclaim tab — exists because the idea demanded it, not because it was the fastest thing to build.
We practiced what we built
The ideation was handwritten. The thinking came first. If we're asking students to think before they prompt, the least we could do was think before we built.
What we learned
Speculative design is as urgent as any other discipline right now
This project taught us to think about the future seriously — not as a thought experiment but as a design responsibility. The problems Grounded addresses aren't hypothetical. They're already happening. Speculative design isn't a nice-to-have; it's how you get ahead of harm before it's irreversible.
Figma Make is more powerful than it looks
We pushed Figma Make further than we expected — connecting it to a live API, handling real-time classification, managing state across tabs. It struggled sometimes. So did we. But it worked.
Teamwork is a design tool
Dividing the work by strength — research, visual design, prototype engineering, narrative — meant each part of the project got someone's full attention. That division is what made the whole thing possible in the time we had.
Honest design is harder than good design
Making Grounded feel like a coach rather than a surveillance system required deliberate choices at every step — the warm palette, the encouraging copy, the "only compared against yourself" framing. Getting the tone right was as hard as getting the technology right.
What's next for Grounded
More cognitive domains
The current prototype tracks written communication. The full vision covers mathematical reasoning, source critique, and argument construction — each with their own baseline test and Reclaim challenges.
A wider audience
We built for students 13–21, but the problem doesn't stop at graduation. Postgrads, PhD researchers, and knowledge workers face the same dependency patterns with higher stakes. Grounded needs to grow with them.
Better signal, better hardware
Behavioural signals only tell part of the story. Integrating with emerging devices — EEG wearables, attention tracking, biometric inputs — would make the cognitive health score significantly more accurate and harder to game.
Built into the tools, not alongside them
Longer term, Grounded should sit inside the AI tools students already use — not as a separate app they have to remember to open, but as a passive layer that watches how they work and reflects it back. The dashboard only matters if the data is real. That requires integration, not isolation.
Built With
- claudeapi
- figmamake
- procreate
- vercel
- vscode
