💡 Inspiration

In an era where Generative AI can create hyper-realistic images and write convincing articles in seconds, the line between reality and fabrication has blurred. Misinformation spreads 6× faster than the truth on social media. We asked ourselves: "How can we empower the average user to instantly verify what they see online?"

We were inspired to build CyberAI Inspector as a digital guardian—a tool that doesn't just tell you something is "fake," but explains why, using a transparent, multi-modal approach to trust scoring.

💻 What it does

CyberAI Inspector is a centralized verification hub that analyzes digital content across three dimensions:

- Image Forensics: Detects deepfake artifacts and metadata inconsistencies.
- Text Verify: Checks for misinformation patterns, clickbait sentiment, and factual inaccuracies.
- URL Scanner: Validates domain reputation, SSL security, and phishing heuristics.

It aggregates these insights into a single Trust Score ($T_s$), giving users immediate confidence in the content they consume.
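The aggregation into a single Trust Score could look like the minimal sketch below. The function name `trust_score` and the specific weights are illustrative assumptions, not the project's actual values:

```python
def trust_score(image_score: float, text_score: float, url_score: float,
                weights: tuple = (0.4, 0.35, 0.25)) -> float:
    """Combine the three per-dimension scores (each in [0, 1]) into one
    0-100 Trust Score. The weight tuple here is purely illustrative."""
    w_img, w_txt, w_url = weights
    combined = w_img * image_score + w_txt * text_score + w_url * url_score
    # Normalize by the total weight so any weight tuple maps back to [0, 1]
    combined /= (w_img + w_txt + w_url)
    return round(combined * 100, 1)
```

A weighted average keeps each dimension's contribution transparent, which matches the goal of explaining *why* content scored the way it did.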

⚙️ How we built it

We built a robust full-stack application separating complex processing from a reactive UI:

- Backend: We used Python with FastAPI for high-performance, asynchronous processing.

- Text: We utilized NLP libraries like TextBlob and NLTK. We implemented a custom algorithm that calculates a trust score based on sentiment polarity ($P$) and subjectivity ($S$):

  $$T_{text} = \alpha(1 - S) + \beta(1 - |P|) + \gamma F_{fact}$$

  (where $F_{fact}$ represents factual accuracy checks against known misinformation patterns).
- Images: We used OpenCV and PIL to perform Error Level Analysis (ELA) and extract hidden EXIF metadata to spot manipulation.
- URL: We integrated python-whois and SSL checks to verify domain age and certificate validity.
- Frontend: Built with React and TypeScript via Vite for a blazing-fast user experience. We used Tailwind CSS to create a modern, dark-themed "Cyberpunk" aesthetic that fits the security theme.

🧠 Challenges we faced

- The "Truth" Paradox: Defining what makes text "trustworthy" is mathematically difficult. Sarcasm and satire often triggered false positives. We had to fine-tune our weights ($\alpha$, $\beta$, $\gamma$) to balance objective analysis with linguistic nuance.
- Dependency Hell: Integrating heavy AI libraries like torch and transformers alongside standard web frameworks caused significant environment conflicts, which we solved by carefully pinning versions and using virtual environments.
- Cross-Origin Communication: Connecting our local development environment to a production API required careful CORS configuration to ensure secure data transfer between our React frontend and FastAPI backend.

📚 What we learned

- Multi-modal AI is key: Analyzing a news article requires checking the text, the image, and the source URL. Isolating one vector often missed the full picture.
- User Experience Matters: A complex backend needs a simple output. We learned how to distill complex JSON analysis into a simple 0-100 visual Trust Gauge.
- FastAPI capabilities: We gained a deep appreciation for FastAPI's ability to handle concurrent analysis requests without blocking the main thread, essential for a real-time tool.

🚀 What's next for CyberAI Inspector

We plan to introduce Browser Extension support to automatically score content as you browse Twitter or Facebook, and integrate Audio Analysis to detect AI-generated voice clones.
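The weight fine-tuning described in the challenges can be approached as a brute-force grid search over a small labeled sample set. This is a hedged sketch under assumed conventions (labels, the 0.5 decision threshold, and the function name `tune_weights` are all made up for illustration):

```python
from itertools import product

def tune_weights(samples, step=0.1):
    """Search for (alpha, beta, gamma) summing to 1 that best separates
    trustworthy from untrustworthy text samples.

    samples: list of ((S, P, F_fact), label) pairs, where label is 1 for
    trustworthy text and 0 for untrustworthy text.
    Returns the best weight triple and its classification accuracy.
    """
    def score(s, p, f, a, b, g):
        return a * (1 - s) + b * (1 - abs(p)) + g * f

    grid = [round(i * step, 2) for i in range(int(round(1 / step)) + 1)]
    best, best_acc = (1.0, 0.0, 0.0), -1.0
    for a, b in product(grid, repeat=2):
        g = round(1.0 - a - b, 2)
        if g < 0:
            continue  # weights must sum to 1 and stay non-negative
        # Classify as trustworthy when the score clears 0.5
        preds = [1 if score(s, p, f, a, b, g) > 0.5 else 0
                 for (s, p, f), _ in samples]
        acc = sum(int(pred == lab)
                  for pred, (_, lab) in zip(preds, samples)) / len(samples)
        if acc > best_acc:
            best, best_acc = (a, b, g), acc
    return best, best_acc
```

An exhaustive search like this is only viable because the weight space is tiny; it also makes it easy to inspect how sarcasm-heavy samples pull the subjectivity weight down.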
