Inspiration

We were inspired by the increasing complexity and opacity of AI systems, especially those that collect and process sensitive personal data. As non-technical users are asked to trust algorithms they don’t understand, we wanted to build a tool that could bridge that gap — not just for tech-savvy users, but for everyone. With growing concern around facial recognition, emotion analysis, and algorithmic decision-making, we created cl.AIrity to offer clarity, accountability, and ethical guidance in plain, understandable terms.

What it does

cl.AIrity is a responsible AI explainer agent that helps individuals and organizations understand what AI systems actually do, especially when those systems are described in vague, technical, or misleading language. At a time when workplaces are rapidly adopting AI, from automated hiring platforms and productivity monitoring tools to customer-facing chatbots, cl.AIrity gives non-technical users the ability to make sense of how these systems work and what they do with personal data.

Users submit a description of an AI service, such as a line from a product website or an internal tech proposal, and cl.AIrity returns a clear, bullet-point explanation in plain language. It also provides a 1–5 star "Transparency & Privacy Safety Score" that reflects how responsibly the system communicates its purpose, data usage, and limitations.

By equipping employees, team leads, HR professionals, and IT decision-makers with accessible explanations, cl.AIrity ensures that AI is adopted not just efficiently but responsibly. It gives organizations a practical tool for risk awareness, ethical review, and informed technology adoption, helping workplaces evolve with clarity, not confusion.

How we built it

We built cl.AIrity using Flask and Python to create a simple, intuitive web interface that connects users to the AI agent. The backend is powered by the Letta agentic AI platform, where we defined CLAIRE’s persona, scoring rubric, and structured interaction flow.

Key components:

• Flask served the front end and routed user inputs to the agent (sketched below)
• Python handled data processing and agent integration
• Letta handled structured agent behavior through detailed system instructions and UX rules
• Custom logic ensured a consistent tone, bullet-pointed breakdowns, and transparency scoring
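As a rough illustration, here is a minimal sketch of the request flow, assuming the Letta agent is reachable over HTTP. The endpoint URL, the JSON shapes, and the inline template are placeholders rather than our production code.

```python
# Minimal sketch of the Flask front end, assuming a Letta agent is
# reachable over HTTP. AGENT_URL and the request/response JSON shape
# are placeholders; they depend on how the Letta server is deployed.
import requests
from flask import Flask, render_template_string, request

app = Flask(__name__)
AGENT_URL = "http://localhost:8283/agent"  # placeholder endpoint

PAGE = """
<form method="post">
  <textarea name="description" rows="4" cols="60"
            placeholder="Paste an AI service description..."></textarea>
  <button type="submit">Explain</button>
</form>
{% if explanation %}<pre>{{ explanation }}</pre>{% endif %}
"""

@app.route("/", methods=["GET", "POST"])
def index():
    explanation = None
    if request.method == "POST":
        description = request.form.get("description", "").strip()
        if description:
            # Forward the description to the agent and surface its
            # bullet-point explanation and transparency score.
            resp = requests.post(AGENT_URL, json={"message": description}, timeout=60)
            explanation = resp.json().get("reply", "No response from agent.")
    return render_template_string(PAGE, explanation=explanation)

if __name__ == "__main__":
    app.run(debug=True)
```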

Challenges we ran into

Balancing tone: We wanted cl.AIrity to sound formal and trustworthy, but not so academic that the explanations became confusing again. It took time to land on a tone that was both respectful and plainspoken.

Transparency scoring: Defining a star-based system that could meaningfully reflect data ethics based on minimal input required careful thought and user testing.
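During testing it helped to think of the rubric as a checklist that maps to stars. The sketch below is illustrative only; the criterion names are our own shorthand, and in practice the rubric lives in the agent's system instructions rather than in code.

```python
# Illustrative encoding of the 1-5 star rubric; criterion names are
# shorthand for questions the agent weighs, not our exact wording.
CRITERIA = {
    "states_purpose": "Does the description say what the system is for?",
    "names_data_collected": "Does it say what personal data is collected?",
    "explains_data_use": "Does it say how that data is used or shared?",
    "discloses_limitations": "Does it mention limitations or failure modes?",
    "offers_user_control": "Does it mention consent, opt-out, or deletion?",
}

def star_score(checks: dict) -> int:
    """Map satisfied criteria to stars, with a floor of one star."""
    return max(1, sum(bool(checks.get(name)) for name in CRITERIA))

# Example: a description that states its purpose and names the data it
# collects, but nothing else, earns 2 of 5 stars.
print(star_score({"states_purpose": True, "names_data_collected": True}))  # 2
```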

Handling vague inputs: Many real-world descriptions of AI are purposefully ambiguous. Teaching the agent to request clarification in a helpful way without overstepping required finesse.
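For flavor, here is the kind of rule we ended up writing; this paraphrases the idea rather than quoting CLAIRE's system instructions verbatim.

```python
# Paraphrased example of a clarification rule from the agent's
# instructions (not the exact wording we shipped).
CLARIFICATION_RULE = """
If the description is too vague to assess (no data types, purpose, or
audience named), ask ONE focused follow-up question before scoring.
Never guess at unstated data practices; state plainly that the
description omits them, and let that omission lower the score.
"""
```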

Frontend development: I don't have a background in frontend development, so building a working UI with Flask was a major learning curve. It took extra time to connect all the parts smoothly and present the agent in a user-friendly way.

Accomplishments that we're proud of

Built a fully working, user-facing agent that speaks with clarity and empathy.

Created a repeatable conversation flow for formal yet accessible Responsible AI discussions.

Developed a transparency and data responsibility rubric that can be adapted for different use cases, from biometric services to AI chatbots.

What we learned

Being responsible with AI isn't just about how a system is built; it's about how it's communicated. One of the most overlooked forms of harm is asking people to trust or interact with AI systems they don't understand, not because they're incapable, but because the explanations are buried in technical jargon or legal language. Through cl.AIrity, I realized that clear, simple communication is a form of accountability. If we're building tools that affect people's lives, we have a responsibility to speak in a way that respects their right to know what data they're giving away and how it's being used. Removing that gatekeeping isn't just good design; it's good ethics.

What's next for cl.AIrity

• Expand to support real-time document uploads (e.g., terms of service, privacy policies) for instant breakdowns and scores.
• Collaborate with responsible AI researchers and educators to make cl.AIrity a trusted public resource for AI literacy.

Built With

flask, letta, python