Inspiration
I was inspired to build INNERLOOP after observing a recurring pattern: people often realize that they are stuck, worried, or over-thinking the same things again and again, yet they rarely get feedback about why these thoughts occur or what they can do about them. Most conversational AI systems give generic responses that are comforting or encouraging, but they can unintentionally reinforce unproductive thought patterns instead of breaking them. I wanted to explore whether an AI system could move beyond conversation and instead provide clarity about the underlying emotional or cognitive pattern itself.
More specifically, the project grew out of the absence of AI systems that can explain what kind of mental state a person is in, rather than simply responding to what they say.
What it does
When a user provides input through text or voice, INNERLOOP analyzes it to identify the dominant emotional state, detect whether the user may be caught in a cognitive loop such as rumination, self-doubt, stress build-up, or indecision, and apply a single intervention strategy to help break the cycle.
How we built it
INNERLOOP was built during the Gemini 3 Hackathon using the Gemini 3 API via Google AI Studio. The system relies on carefully designed system instructions and a strict response schema to ensure consistency, precision, and non‑chatty outputs.
Gemini 3 is used to:
- Infer dominant emotional states from unstructured input
- Detect cognitive loops using explicit classification logic
- Select a single appropriate intervention strategy
- Generate structured, deterministic responses
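To make the "strict response schema" idea concrete, here is a minimal TypeScript sketch of what such a schema and a validation guard might look like. The field names, loop categories, and `parseResponse` function are illustrative assumptions, not INNERLOOP's actual schema.

```typescript
// Hypothetical response schema: field names and loop categories are
// assumptions for illustration, not the project's real contract.
type LoopType = "rumination" | "self-doubt" | "stress-buildup" | "indecision" | "none";

interface InnerloopResponse {
  dominantEmotion: string; // e.g. "anxiety"
  loopDetected: LoopType;  // exactly one classification
  justification: string;   // why the loop was inferred
  intervention: string;    // a single strategy, never a list
}

// Validate raw model output against the schema before it reaches the UI;
// anything malformed is rejected rather than shown to the user.
function parseResponse(raw: string): InnerloopResponse | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // not valid JSON at all
  }
  const obj = data as Record<string, unknown>;
  const loops = ["rumination", "self-doubt", "stress-buildup", "indecision", "none"];
  if (
    typeof obj.dominantEmotion === "string" &&
    typeof obj.loopDetected === "string" &&
    loops.includes(obj.loopDetected) &&
    typeof obj.justification === "string" &&
    typeof obj.intervention === "string"
  ) {
    return obj as unknown as InnerloopResponse;
  }
  return null; // missing or mistyped fields
}
```

Rejecting anything outside the schema is one way to keep the model's output analytical and non-chatty: free-form empathetic prose simply fails validation.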
The frontend was implemented in React and TypeScript and supports both text and voice input; voice recordings are fed through the same reasoning pipeline as text.
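One way to keep both modalities on a single pipeline is to normalize them to plain text before the model ever sees them. The sketch below is an assumption about how that routing might look; the `UserInput` type and `toPromptText` helper are hypothetical names, and how INNERLOOP actually turns recordings into text is not specified in this write-up.

```typescript
// Hypothetical modality wrapper: both input kinds collapse to the same
// text that enters the reasoning pipeline.
type UserInput =
  | { kind: "text"; content: string }
  | { kind: "voice"; transcript: string };

// Normalize either modality to prompt text, so the downstream emotional
// reasoning logic never branches on where the input came from.
function toPromptText(input: UserInput): string {
  return (input.kind === "text" ? input.content : input.transcript).trim();
}
```

Keeping the pipeline modality-agnostic is what lets voice input reuse the exact prompt design and schema enforcement built for text.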
Challenges we ran into
The main challenge was preventing the model from behaving like a conversational assistant: it kept defaulting to empathetic, chatty language. Achieving concise, analytical responses required careful prompt design and schema enforcement. Balancing emotional nuance with clarity was another key challenge.
Accomplishments that we're proud of
- Successfully reframed a conversational language model into a structured emotional reasoning system
- Achieved consistent detection of cognitive loops with clear justifications
- Built a system that applies one deliberate intervention instead of multiple vague suggestions
- Integrated voice input without compromising reasoning quality
- Created a system that feels analytical and intentional rather than chatty or performative
What we learned
- Emotional clarity is more valuable than emotional reassurance
- Structure and constraints significantly improve AI reliability
- Users benefit from knowing what kind of mental state they are in, not just how to feel better
- Gemini 3 performs exceptionally well when used as a reasoning engine rather than a conversational agent
What's next for INNERLOOP
Possible next steps include expanding the set of loop categories, tracking repeated patterns over time, and applying the system in domain-specific settings such as academic stress or professional decision-making.
Built With
- audioapi
- gemini3
- react
- tailwindcss
- typescript