Inspiration

I was inspired by the growing need for transparency and trust in AI-generated content. As large language models become more integrated into everyday workflows, it's crucial to highlight potential risks, biases, and hallucinations in their outputs, especially in high-risk fields like law, medicine, and finance. Here, I've envisioned a tool that doesn't just flag issues reactively but integrates seamlessly into existing user experiences, like ChatGPT, to provide real-time insights and help people make more informed decisions.

What it does

TrustAI is a Chrome extension that analyzes AI-generated responses in real time on platforms like ChatGPT. It uses lightweight heuristic and tone analysis to assign a hallucination risk level (Low, Medium, or High), a sentiment classification, and a confidence score. Results are displayed inline as visually distinct, color-coded badges that expand to provide further explanation. By blending seamlessly into the chat interface, TrustAI helps users quickly gauge the reliability and tone of AI content without leaving the page.
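To make the risk-level idea concrete, here is a minimal sketch of the kind of keyword heuristic such an analysis could use. The phrase lists, weights, and thresholds below are invented for illustration; they are not TrustAI's actual heuristic.js logic.

```javascript
// Illustrative keyword-based hallucination-risk heuristic.
// Phrase lists and thresholds are hypothetical examples.
const HEDGING_PHRASES = ["might", "possibly", "i believe", "as far as i know"];
const OVERCLAIM_PHRASES = ["definitely", "always", "guaranteed", "100%"];

function countMatches(text, phrases) {
  const lower = text.toLowerCase();
  return phrases.filter((p) => lower.includes(p)).length;
}

function assessRisk(text) {
  const hedges = countMatches(text, HEDGING_PHRASES);
  const overclaims = countMatches(text, OVERCLAIM_PHRASES);
  const score = hedges + 2 * overclaims; // weight overclaiming more heavily
  const risk = score >= 4 ? "High" : score >= 2 ? "Medium" : "Low";
  // Confidence shrinks as risk signals accumulate, floored at 0.2.
  const confidence = Math.max(0.2, 1 - 0.15 * score);
  return { risk, confidence };
}
```

A real heuristic would combine more signals (specificity of claims, citations, numbers), but the badge UI only needs this small `{risk, confidence}` shape to render.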

How we built it

I built TrustAI as a Chrome extension using JavaScript, HTML, and CSS. At its core, an injected content script scans for AI-generated responses, runs heuristic and tone-based analysis locally, and overlays the results directly onto the conversation. The logic is split into modular scripts for risk evaluation (heuristic.js), tone classification (toneclassifier.js), and final scoring (scoring.js). The inline badge UI is styled with CSS for clarity and minimal distraction. Finally, a polling loop ensures new responses are evaluated in real time as they appear.
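The polling loop can be sketched roughly as follows. The selector, function names, and interval are hypothetical stand-ins, not TrustAI's actual code; the core idea is to collect response nodes on each tick, skip the ones already scored, and badge the rest.

```javascript
// Illustrative polling loop: each tick, find response nodes not yet
// analyzed, analyze them, and attach a badge. Names are hypothetical.
const seen = new WeakSet();

function collectUnscored(nodes, seenSet) {
  // Pure helper: keep only nodes that have not been analyzed yet.
  return nodes.filter((n) => !seenSet.has(n));
}

function pollOnce(queryAll, analyze, attachBadge, seenSet) {
  for (const node of collectUnscored(queryAll(), seenSet)) {
    seenSet.add(node);
    attachBadge(node, analyze(node.textContent));
  }
}

// In the content script this would run on an interval, e.g.:
// setInterval(() => pollOnce(
//   () => [...document.querySelectorAll('[data-message-author-role="assistant"]')],
//   analyzeText, renderBadge, seen), 1500);
```

Keeping the scan step pure (plain functions over a node list) makes the loop easy to unit-test without a browser; a MutationObserver could replace the interval, at the cost of more coupling to ChatGPT's DOM events.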

Challenges we ran into

Browser environment limitations: Chrome extensions have strict security models that prevent modern ES module usage and dynamic imports in content scripts, so I had to refactor parts of my codebase to use simpler injection strategies.

DOM structure changes: ChatGPT's interface evolves regularly, so my content script had to be robust against small structural changes in the markup.
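One common pattern for surviving markup changes is a prioritized list of fallback selectors. The selectors below are examples only and are not guaranteed to match ChatGPT's current DOM; the point is the fallback mechanism itself.

```javascript
// Illustrative fallback-selector pattern for coping with DOM changes.
// The selectors are hypothetical examples, tried in priority order.
const RESPONSE_SELECTORS = [
  '[data-message-author-role="assistant"]', // attribute-based hook
  ".markdown.prose",                        // class-based fallback
  ".agent-turn",                            // last-resort fallback
];

function firstMatching(selectors, queryAll) {
  // Return the nodes from the first selector that matches anything.
  for (const sel of selectors) {
    const nodes = queryAll(sel);
    if (nodes.length > 0) return { selector: sel, nodes };
  }
  return { selector: null, nodes: [] };
}
```

Passing `queryAll` in as a function (rather than calling `document.querySelectorAll` directly) keeps the fallback logic testable outside the browser.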

Accomplishments that we're proud of

I'm especially proud of how seamlessly TrustAI integrates into the ChatGPT experience. It doesn't interrupt the conversation flow and provides immediate, actionable feedback about potential risks and sentiment. I'm also proud of designing TrustAI to run fully locally, with no APIs and no data sharing, to respect privacy and maximize accessibility. Finally, the project helped me understand how to bring product thinking to AI tools, focusing not just on technical feasibility but also on real user needs and usability.

What we learned

This project taught me how to adapt quickly in a fast-changing AI and browser extension landscape. I deepened my understanding of Chrome extension architecture, DOM manipulation, and real-world UX challenges. Most importantly, I learned that even small, well-designed interventions can make AI-powered products much more transparent and trustworthy for end users.

What's next for TrustAI

Next, I want to integrate more advanced local inference using lightweight models served in the browser (e.g., via transformers.js, ONNX, or WebGPU) to improve accuracy, since the current heuristics are still a bit inconsistent at times. I'm also planning to expand support to other AI-powered chat interfaces like Claude or Bard. Finally, I'd love to develop a user-facing dashboard where people can review trends in AI output quality over time, fostering a broader conversation about trust and transparency in generative AI.

Built With

chrome, css, html, javascript
