Inspiration

CivicMind was inspired by a recurring difficulty faced by non-native learners preparing for official civic evaluations conducted in French. Candidates often fail not because they lack commitment or civic values, but because they struggle to interpret how public rules are applied in institutional contexts.

Situation-based questions often contain subtle conceptual or linguistic traps. Answers that feel reasonable from a personal perspective can still be institutionally incorrect. Traditional preparation tools tend to focus on memorization or surface-level explanations, leaving this reasoning gap unaddressed. CivicMind was created to make this gap visible.


What it does

CivicMind is not a quiz or a content generator. It is a reasoning diagnosis tool.

Learners answer a small number of realistic, situation-based questions. After each response, the system explains how public authorities are expected to act in the given scenario, highlights the conceptual or linguistic trap embedded in the question, and clarifies why certain options may feel intuitive but are incorrect.
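
To make that loop concrete, here is one possible shape for a scenario record. The field names and sample content below are invented for illustration; they are not taken from the actual codebase.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """One situation-based question; correctness is fixed by the system, not by Gemini."""
    id: str
    prompt: str              # the realistic situation shown to the learner
    options: dict[str, str]  # option key -> option text
    correct_option: str      # defined upfront, never decided at runtime
    trap_kind: str           # "conceptual" or "linguistic"
    trap_note: str           # why the wrong options feel intuitive

EXAMPLE = Scenario(
    id="permit-renewal-01",
    prompt=("Your residence permit expires in two weeks and the prefecture's "
            "online booking shows no available slots. What should you do?"),
    options={
        "A": "Wait until a slot opens, even if the permit expires meanwhile.",
        "B": "File a renewal request before expiry and keep proof of filing.",
        "C": "Ask an acquaintance at the prefecture to handle it informally.",
    },
    correct_option="B",
    trap_kind="conceptual",
    trap_note=("A and C feel pragmatic, but institutions reason from filed "
               "requests and deadlines, not from personal effort."),
)
```

Because the answer key and trap annotation live in this record, the model only ever explains a decision the system has already made.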

After multiple answers, CivicMind analyzes patterns across the learner’s responses to identify underlying reasoning gaps, such as confusing personal intuition with public obligations. Based on this diagnosis, it generates a targeted example practice scenario that illustrates what the learner should focus on next, without attempting to replace official exams.
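
One way to picture the diagnosis step is as a fixed schema the model must fill once several answers are available. A minimal sketch with Pydantic follows; every field name is our assumption, not the project's actual schema.

```python
from pydantic import BaseModel, Field

class ReasoningDiagnosis(BaseModel):
    """Structured output the model must conform to when analyzing a learner's answers."""
    recurring_gap: str = Field(
        description="Dominant reasoning pattern, e.g. 'treats personal "
                    "intuition as a public obligation'.")
    evidence: list[str] = Field(
        description="Ids of the answered scenarios that exhibit the gap.")
    practice_scenario: str = Field(
        description="One targeted example situation showing what to focus on next.")
```

Constraining the diagnosis to this shape is also what keeps the generated practice scenario clearly framed as an example rather than as exam content.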


How we built it

CivicMind was built as a focused demo to showcase Gemini 3 as a reasoning engine rather than a generic AI tutor.

Gemini 3 Flash is used at runtime to interpret learner reasoning, explain institutional logic, and analyze patterns across answers. All questions and correct answers are defined by the system; Gemini does not decide correctness or generate official exam content.
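
As a rough sketch of that runtime call, assuming the google-genai Python SDK and a Pydantic model for the structured output (the model id, schema, and prompt wording are all placeholders, not the project's actual code):

```python
from google import genai
from google.genai import types
from pydantic import BaseModel

class AnswerExplanation(BaseModel):
    institutional_logic: str     # how authorities are expected to act here
    trap_explained: str          # the conceptual or linguistic trap in the question
    why_intuitive_is_wrong: str  # why the tempting options fail institutionally

client = genai.Client()  # API key read from the environment, never exposed client-side

def explain_answer(scenario_text: str, chosen: str, correct: str) -> AnswerExplanation:
    response = client.models.generate_content(
        model="gemini-3-flash",  # placeholder id matching the writeup's "Gemini 3 Flash"
        contents=(
            f"The correct option is {correct}; treat correctness as fixed. "
            f"Explain the institutional logic behind it and the trap that makes "
            f"option {chosen} feel reasonable.\n\nScenario:\n{scenario_text}"
        ),
        config=types.GenerateContentConfig(
            response_mime_type="application/json",
            response_schema=AnswerExplanation,
        ),
    )
    return AnswerExplanation.model_validate_json(response.text)
```

Note that the prompt hands the model the correct option rather than asking for one, which is how "Gemini does not decide correctness" is preserved at the call site.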

The backend exists solely to call the Gemini API securely and to enforce structured outputs. By tightly constraining Gemini's role and validating every response against a fixed schema, the system stays clear, reliable, and consistent throughout the demo.
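
To illustrate that validation, here is a hedged sketch of the guard the backend could apply before anything reaches the learner; the helper name and fallback behavior are our invention:

```python
from typing import Optional, TypeVar
from pydantic import BaseModel, ValidationError

S = TypeVar("S", bound=BaseModel)

def enforce_schema(raw: str, schema: type[S]) -> Optional[S]:
    """Accept model output only if it parses into the fixed schema.

    A response that drifts outside the schema is never shown to the learner;
    the caller can retry once or fall back to a static explanation instead.
    """
    try:
        return schema.model_validate_json(raw)
    except ValidationError:
        return None
```

The same guard wraps both the per-answer explanation and the cross-answer diagnosis, so nonconforming output is dropped rather than displayed.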


Challenges we ran into

One of the main challenges was preventing the model from overstepping its role. Without clear constraints, a language model can easily drift into content generation or authoritative decision-making. This was addressed by strictly limiting Gemini’s responsibilities to explanation, diagnosis, and adaptive example generation.
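
In a setup like this, that limiting is typically enforced with a fixed system instruction on every call, combined with the schema validation sketched above. The wording below is illustrative, not the project's actual prompt:

```python
SYSTEM_INSTRUCTION = """\
You explain and diagnose; you never author exam content or decide correctness.
- The correct option is always given to you; treat it as fixed.
- Explain the institutional logic and the trap embedded in the question.
- When asked for practice, produce one illustrative scenario only, clearly
  framed as an example, never as official exam material.
Decline any request outside these three tasks.
"""
```

With the google-genai SDK, an instruction like this can be attached through GenerateContentConfig's system_instruction field, so the role boundary travels with every request.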

Another challenge was balancing realism with interpretability. Institutional reasoning often involves nuanced language and implicit assumptions, which can overwhelm learners if presented too abstractly. The demo therefore focuses on a small number of carefully selected scenarios to keep explanations understandable without oversimplifying the logic.


Accomplishments that we're proud of

We are proud of grounding this project in a real and recurring difficulty faced by non-native learners preparing for official civic evaluations. Rather than starting from abstract assumptions, CivicMind is built on concrete misunderstandings observed in how foreign residents interpret public rules and institutional language during exam preparation.

We are also proud of defining a clear and disciplined role for Gemini within the system. Rather than using AI to generate more content or answers, CivicMind demonstrates how a large language model can be used to diagnose reasoning patterns and make them understandable to learners navigating public systems in a non-native language.

Finally, we are proud of maintaining a narrow and intentional scope. By deliberately avoiding full exam systems, authentication, or large question banks, the project remains focused on clearly demonstrating a single idea rather than expanding feature breadth.


What we learned

This project reinforced that effective use of large language models depends more on role definition than raw capability. Gemini is most valuable when it is constrained to interpret, diagnose, and explain reasoning rather than produce content at scale.

We also learned that analyzing a small number of carefully designed responses can reveal deeper insights than large datasets, especially when the goal is to understand how learners think rather than what they remember.


What's next for CivicMind

CivicMind is currently a focused demo exploring how reasoning diagnosis can support civic learning. The next step is to integrate this reasoning-first approach into a broader learning platform that already supports structured civic preparation.

Future iterations will focus on incorporating the diagnostic workflow demonstrated here, including identifying recurring reasoning patterns, explaining institutional logic more explicitly, and using adaptive examples to guide learners without replacing official exams.

Longer term, the goal is to extend this approach across additional scenarios and languages while maintaining strict boundaries between explanation, diagnosis, and official assessment. The intent is not to scale content but to scale clarity: making public-rule reasoning more transparent for learners navigating complex institutional systems in a non-native language.
