About the Project
Inspiration
In an era where AI-generated content is everywhere, I started noticing a growing problem: people are no longer questioning what they read. Content is consumed quickly, shared instantly, and rarely verified. While AI tools make information more accessible, they also blur the boundary between human-authored and machine-generated content.
What inspired this project was a simple question:
What if we could see trust, instead of guessing it?
We already use tools like lenses to examine the world more closely — but when it comes to information, we are still reading blindly. This project began as an attempt to design a “lens” for understanding content.
What I Built
I built TrustLens, an AI-powered tool that analyzes text or URLs and visualizes their trustworthiness through an interactive interface.
Instead of giving a binary answer, TrustLens evaluates content across multiple dimensions:
- AI authorship likelihood
- Source credibility
- Factual accuracy
- Logical consistency
- Emotional objectivity
These dimensions are combined into an overall trust score:
Trust Score = (w₁A + w₂C + w₃F + w₄L + w₅E) / (w₁ + w₂ + w₃ + w₄ + w₅)
Where:
- A: AI authorship score
- C: credibility
- F: factual accuracy
- L: logic
- E: emotional neutrality
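The weighted aggregation above can be sketched in code. This is a minimal illustration, assuming scores normalized to the 0–1 range; the weight values and field names here are placeholders, not TrustLens's actual configuration:

```typescript
// Dimension scores, each assumed to be normalized to [0, 1].
type DimensionScores = {
  aiAuthorship: number;        // A
  credibility: number;         // C
  factualAccuracy: number;     // F
  logic: number;               // L
  emotionalNeutrality: number; // E
};

// Illustrative weights (w1..w5) — placeholder values, not the real ones.
const WEIGHTS: Record<keyof DimensionScores, number> = {
  aiAuthorship: 1,
  credibility: 2,
  factualAccuracy: 2,
  logic: 1,
  emotionalNeutrality: 1,
};

// Weighted average of the dimension scores, matching the formula above.
function trustScore(scores: DimensionScores): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const key of Object.keys(WEIGHTS) as (keyof DimensionScores)[]) {
    weighted += WEIGHTS[key] * scores[key];
    totalWeight += WEIGHTS[key];
  }
  return weighted / totalWeight;
}
```

Because the result is a weighted average of values in [0, 1], the overall score stays in [0, 1] regardless of how the weights are tuned.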
The system presents the result as a “trust label,” along with:
- Sentence-level AI probability highlighting
- Claim and citation verification
- Logical issue detection
- Emotional tone analysis
- Optional human-style rewriting
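A trust label of this kind can be derived by thresholding the overall score. The cutoffs and label strings below are illustrative assumptions, not TrustLens's actual values:

```typescript
// Hypothetical mapping from a 0–1 trust score to a display label.
// The thresholds are placeholders chosen only to show the idea.
function trustLabel(score: number): string {
  if (score >= 0.8) return "Trustworthy";
  if (score >= 0.5) return "Mixed signals";
  return "Low trust";
}
```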
How I Built It
TrustLens was built using a combination of modern web technologies and AI models:
- Frontend: React + Next.js
- Backend: API-based architecture with OpenAI models
- Interaction design: real-time feedback with debounce and progressive rendering
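A debounce helper of the kind used for real-time feedback looks roughly like this — a generic sketch, not TrustLens's exact implementation:

```typescript
// Delay invoking `fn` until `delayMs` ms have passed without a new call,
// so analysis only runs once the user pauses typing.
function debounce<T extends (...args: any[]) => void>(fn: T, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```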
The analysis pipeline includes:
- Parsing input (text or URL)
- Segmenting content into sentences and paragraphs
- Running multi-dimensional evaluations using AI models
- Aggregating scores and generating explanations
- Rendering interactive visual feedback
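The segmentation and evaluation steps above can be sketched as follows. The sentence splitter is a naive illustration, and the per-sentence scoring is stubbed out where the real system would call an AI model:

```typescript
type SentenceResult = { text: string; aiProbability: number };

// Naive sentence segmentation for illustration only — splits on
// terminal punctuation followed by whitespace.
function segment(content: string): string[] {
  return content.split(/(?<=[.!?])\s+/).filter(s => s.length > 0);
}

// Pipeline sketch: segment, then score each sentence. The constant
// probability here is a stub standing in for a model evaluation.
async function analyze(input: string): Promise<SentenceResult[]> {
  const sentences = segment(input);
  return sentences.map(text => ({ text, aiProbability: 0.5 }));
}
```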
A key design decision was to move beyond static results and create an experiential interface. Low-trust content is not only labeled but also visually transformed through effects like blur, jitter, and pixelation, allowing users to feel instability rather than just read it.
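One way to drive such effects is to map the trust score directly to CSS filter values. The parameters below are assumptions chosen only to show the approach, not the actual effect tuning:

```typescript
// Lower trust → stronger blur; fully trusted content renders cleanly.
// A hypothetical sketch of score-driven visual distortion.
function distortionStyle(score: number): { filter: string } {
  const blurPx = Math.round((1 - score) * 4);
  return { filter: blurPx > 0 ? `blur(${blurPx}px)` : "none" };
}
```

In a React component, the returned object can be spread into the `style` prop so the distortion updates as new scores stream in.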
Challenges
One of the biggest challenges was balancing technical complexity with user understanding.
Ambiguity of AI detection
AI-generated content is not always clearly distinguishable. Instead of forcing a binary output, I designed a probabilistic and explainable system.
Designing for multi-dimensional trust
Trust is not a single metric. Combining multiple evaluations into a coherent and readable system required careful structuring.
Avoiding information overload
Providing rich analysis without overwhelming users required layering information and designing progressive disclosure.
Translating abstraction into interaction
Concepts like “trust” and “uncertainty” are abstract. I explored how visual instability and distortion could communicate these ideas more intuitively.
What I Learned
Through building TrustLens, I realized that designing AI tools is not just about generating results, but about shaping how people interpret information.
I learned the importance of:
- Designing for explainability, not just accuracy
- Moving beyond binary outputs to nuanced systems
- Using interaction as a way to communicate meaning
- Bridging technical systems with human perception
More importantly, this project shifted my focus from simply building tools to designing systems that encourage critical thinking and awareness.
Reflection
TrustLens is not just a tool for detecting AI — it is a system for rethinking how we engage with information.
In a world where content is abundant but trust is unclear,
we don’t just need more information — we need better ways to understand it.
Built With
- javascript
- next.js
- openai
- react
- typescript