Inspiration
The project began with a simple but uncomfortable observation: AI is becoming so convenient that many of us skip the thinking step entirely. The brain works like a muscle, and without regular exercise it won't develop to its full potential.
We paste emails, ask for rewrites, request summaries, and rely on AI for tasks we used to reason through ourselves. That raised deeper questions:
- What cognitive tasks are we offloading?
- How much thinking are we delegating?
- At what point does convenience become cognitive dependency?
What it does
ChatTime analyzes a user’s prompt history and classifies each prompt into five cognitive categories drawn from Bloom’s taxonomy:
- Repetitive
- Information retrieval
- Problem solving
- Critical thinking
- Creativity
It then groups these into AI Automation Tasks vs Human Cognitive Tasks, computes a Cognitive Offloading Index (COI), and generates a clear report showing:
- How much thinking the user did
- How much thinking the AI assistant did
- Whether their usage pattern reflects healthy, moderate, high, or critical reliance on AI
- A pie-chart visualization of cognitive vs automated work
- An insightful summary with recommendations
In short: ChatTime is a mirror for your AI habits.
How we built it
We started by grounding the system in educational psychology, especially Bloom’s taxonomy and research on cognitive offloading.
From there, we built the pipeline:
- Prompt Parser — cleans and segments user prompt history
- LLM-based Classification Engine — assigns each prompt to one or more cognitive categories
- Metric Calculation Engine — computes category percentages and the COI using a weighted formula
- Visualization Layer — generates the AI-vs-Human cognitive pie chart
- Insight Generator — produces a short, human-readable usage summary
ChatTime accepts pasted prompt histories, processes them through this pipeline, and outputs a full AI Dependency Report.
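The Metric Calculation Engine's weighted COI can be sketched roughly as follows. The weights for creativity (1.0) and repetitive (0.2) are the ones mentioned in this write-up; the remaining weights and the exact formula are illustrative assumptions, not the production code:

```python
# Hypothetical sketch of the Metric Calculation Engine.
# Only the creativity (1.0) and repetitive (0.2) weights come from the
# write-up; the other weights are illustrative assumptions.
WEIGHTS = {
    "repetitive": 0.2,
    "information_retrieval": 0.4,
    "problem_solving": 0.7,
    "critical_thinking": 0.9,
    "creativity": 1.0,
}

def cognitive_offloading_index(category_counts):
    """Return a 0-1 COI: higher means more low-effort (offloaded) prompts."""
    total = sum(category_counts.values())
    if total == 0:
        return 0.0
    # Average cognitive effort per prompt, weighted by category.
    effort = sum(WEIGHTS[c] * n for c, n in category_counts.items()) / total
    # Offloading is the complement of effort: mostly repetitive prompts -> high COI.
    return round(1.0 - effort, 3)

# 6 repetitive, 2 creative, 2 problem-solving prompts
print(cognitive_offloading_index({"repetitive": 6, "creativity": 2, "problem_solving": 2}))  # -> 0.54
```

The same per-category percentages feed the pie chart and the insight summary, so one pass over the classified prompts supports the whole report.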
Challenges we ran into
- Defining cognitive categories: Human thinking is messy. Mapping prompts to Bloom-based categories required careful interpretation and iteration.
- Weighting the COI: Choosing weights that reflect cognitive effort (e.g., creativity = 1.0, repetitive = 0.2) required balancing academic grounding with practical usability.
- Ambiguous prompts: Many prompts span multiple cognitive levels (“design a scalable architecture” touches creativity, critical thinking, and problem solving).
- Integrating the feedback system with Vertex AI on Google Cloud: Our biggest technical challenge was getting the feedback system to work reliably with Vertex AI. We eventually got the backend pipeline functioning, but wiring that feedback layer into the frontend within hackathon time constraints proved much harder, and the final front-end connection remained incomplete.
Accomplishments that we're proud of
- Creating a quantifiable way to measure AI cognitive offloading (something rarely discussed but deeply important)
- Designing a COI metric that is academically grounded yet intuitive for everyday users
- Building a system that transforms raw prompt histories into meaningful insights and visualizations
- Crafting a future-ready vision: a plugin that sits inside ChatGPT (or other commercially available AI tools) and analyzes sessions with one click
What we learned
- AI usage isn’t just about productivity; it shapes how we think
- People often underestimate how much cognitive work they outsource until they see it visualized
- Bloom’s taxonomy is surprisingly powerful when adapted to modern AI interactions
- Designing metrics around human cognition requires both technical precision and psychological nuance
What's next for ChatTime
- Browser/ChatGPT plugin that automatically analyzes sessions without manual copy-paste
- A fully integrated feedback system in the frontend so users can receive live, session-specific guidance
- Feedback improvements using A/B testing to evaluate which recommendation styles are most effective for changing user behavior
- Cosine similarity–based feedback matching to compare current prompt patterns against known cognitive usage profiles and generate more accurate, personalized feedback
- Longitudinal tracking so users can see how their AI reliance evolves over weeks or months
- Personalized recommendations to help users rebalance cognitive habits
- Team/organization dashboards to understand collective AI usage patterns
- Privacy-preserving on-device analysis so prompt histories never leave the user’s machine
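The cosine-similarity feedback matching could look something like the sketch below. The reference profiles, their names, and the five-dimensional category vectors are hypothetical; the point is only the matching mechanism of comparing a user's category-frequency vector against known usage profiles:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length category-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical reference profiles over the five categories, in the order
# (repetitive, retrieval, problem solving, critical thinking, creativity).
PROFILES = {
    "heavy offloader": [0.70, 0.20, 0.05, 0.03, 0.02],
    "balanced user":   [0.20, 0.20, 0.20, 0.20, 0.20],
    "deep thinker":    [0.05, 0.10, 0.25, 0.30, 0.30],
}

def closest_profile(user_vector):
    """Return the profile name whose vector best matches the user's."""
    return max(PROFILES, key=lambda name: cosine_similarity(user_vector, PROFILES[name]))

print(closest_profile([0.60, 0.25, 0.10, 0.03, 0.02]))  # -> heavy offloader
```

Each profile could then carry its own pre-written recommendation style, which is where the A/B testing of feedback styles would plug in.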
ChatTime’s long-term goal
Help people use AI as a tool rather than a crutch and keep human cognition at the center of the loop.
Built With
- css
- flask
- google-cloud
- groq
- html
- javascript
- llm
- openai
- python
- streamlit
- vertex