Inspiration
We built AI Internet Detective because suspicious content rarely arrives in a clean or trustworthy format. It usually spreads as copied text, screenshots, low-context forwards, scanned PDFs, short clips, or social media links with missing provenance.
Most AI tools handle this badly in one of two ways:
- they produce a shallow summary with no evidence trail
- they generate overconfident verdicts even when the underlying media cannot actually be verified
That makes them weak for real fact-checking workflows, public-interest investigation, or trustworthy misinformation review. We wanted to build something closer to an investigation desk than a chatbot: something that shows claims, evidence, sources, limitations, and clear next steps.
What it does
AI Internet Detective transforms messy online content into a structured investigation report that a human can actually inspect.
For every case, the system can surface:
- summary
- extracted claims
- claim-level assessments
- confidence scores
- evidence bullets
- source URLs
- named entities
- language and risk labels
- reasoning
- limitations
- recommended next checks
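Concretely, the report fields above can be modeled as a typed structure. The sketch below is our illustration only; the interface and field names are assumptions, not the actual backend schema.

```typescript
// Hypothetical shape for the report fields listed above.
// All names here are illustrative; the real schema may differ.
interface ClaimAssessment {
  claim: string;
  verdict: "supported" | "refuted" | "cannot-analyze";
  confidence: number;   // confidence score, 0..1
  evidence: string[];   // evidence bullets
  sources: string[];    // source URLs
}

interface InvestigationReport {
  summary: string;
  claims: ClaimAssessment[];
  entities: string[];     // named entities
  language: string;
  riskLabels: string[];
  reasoning: string;
  limitations: string[];
  nextChecks: string[];   // recommended next checks
}

// Minimal made-up example instance.
const report: InvestigationReport = {
  summary: "Viral post claims a city banned bicycles.",
  claims: [{
    claim: "The city banned bicycles in 2024.",
    verdict: "refuted",
    confidence: 0.85,
    evidence: ["Official records show no such ordinance."],
    sources: ["https://example.com/city-records"],
  }],
  entities: ["example city"],
  language: "en",
  riskLabels: ["misinformation"],
  reasoning: "No primary source supports the claim.",
  limitations: ["Could not verify the original poster."],
  nextChecks: ["Search the municipal ordinance database."],
};
```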
Most importantly, the product is designed to be honest about uncertainty. If a video URL is inaccessible or does not expose enough retrievable information, the system returns "Cannot analyze" rather than pretending to know more than it does.
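That guardrail can be sketched as a pre-verdict check: before any judgment is generated, confirm that usable content was actually retrieved. The function and field names below are our own illustration, not the backend's actual code.

```typescript
// Illustrative guardrail: refuse to judge media we could not actually read.
interface RetrievedMedia {
  accessible: boolean;    // did the fetch succeed?
  extractedText: string;  // text / transcript / metadata we managed to pull
}

function verdictOrRefusal(media: RetrievedMedia): string {
  const hasContent = media.extractedText.trim().length > 0;
  if (!media.accessible || !hasContent) {
    // Honest fallback: no fabricated confidence when nothing is verifiable.
    return "Cannot analyze";
  }
  return "proceed-to-verification";
}
```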
How we built it
We built AI Internet Detective as a full-stack application with:
- Angular 19, TypeScript, and TailwindCSS on the frontend
- Spring Boot 3 and Java 21 on the backend
- MongoDB for storing investigations
- ASI-1 as the core intelligence layer
The end-to-end workflow looks like this:
- The user submits one of six input types: text, article URL, image, PDF, video URL, or uploaded video.
- The backend normalizes the content and extracts readable text where needed.
- ASI-1 runs a first pass to extract summary, claims, entities, language, and risk labels.
- ASI-1 runs a second pass to verify claims, attach evidence points, return source URLs, and generate the final report.
- The investigation is stored in MongoDB and displayed in the results page and archive.
- The user can ask follow-up questions on the saved report using ASI-1 again.
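The steps above can be sketched as a small pipeline. The injected functions (`firstPass`, `secondPass`, `store`) are placeholders for the real ASI-1 calls and MongoDB writes in the Spring Boot backend, passed in here so the sketch runs without network access.

```typescript
// Two-pass pipeline sketch; all names are illustrative stand-ins
// for the actual backend calls.
interface FirstPass {
  summary: string;
  claims: string[];
  entities: string[];
  language: string;
  riskLabels: string[];
}
interface FinalReport extends FirstPass {
  verdicts: Record<string, string>;
  sources: string[];
}

async function investigate(
  rawContent: string,
  firstPass: (text: string) => Promise<FirstPass>,
  secondPass: (signals: FirstPass) => Promise<FinalReport>,
  store: (report: FinalReport) => Promise<void>,
): Promise<FinalReport> {
  const normalized = rawContent.trim();         // step 2: normalize input
  const signals = await firstPass(normalized);  // step 3: extract signals
  const report = await secondPass(signals);     // step 4: verify + evidence
  await store(report);                          // step 5: persist to archive
  return report;
}
```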
ASI-1 is not a decorative integration. It is the core of the workflow.
We used ASI-1 in three meaningful ways:
- structured signal extraction
- claim verification and reasoning
- grounded follow-up Q&A on stored reports
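The third use, grounded follow-up Q&A, amounts to sending the stored report along with every question so answers stay anchored to the investigation rather than free-form generation. The prompt wording below is our sketch, not the production prompt.

```typescript
// Illustrative prompt assembly for grounded follow-up Q&A.
function buildFollowUpPrompt(reportJson: string, question: string): string {
  return [
    "You are answering questions about a saved investigation report.",
    "Use only facts from the report; if the answer is not in it, say so.",
    "REPORT:\n" + reportJson,
    "QUESTION: " + question,
  ].join("\n\n");
}
```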
Technical details:
- ASI-1 calls are made from the Spring Boot backend
- responses are requested as structured JSON
- web search is enabled for broader verification context
- image workflows use multimodal prompting
- video workflows include explicit guardrails for inaccessible media

Challenges we ran into
- Designing structured output robustly enough for multiple media types
- Handling incomplete or malformed model output safely
- Avoiding misleading confidence when media was inaccessible
- Making the frontend demo feel polished and explainable under hackathon time pressure
- Keeping the product useful for both live demos and stored investigations
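Handling incomplete or malformed model output in practice means never feeding a raw completion straight into a JSON parser. A minimal defensive parser along these lines (our sketch, not the backend's actual code) strips surrounding prose or markdown fences and fails soft:

```typescript
// Defensive parsing for model output: extract the widest {...} span,
// then attempt a parse, returning null instead of throwing on failure.
function parseModelJson(raw: string): Record<string, unknown> | null {
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]) as Record<string, unknown>;
  } catch {
    return null; // malformed JSON: caller falls back to a safe error state
  }
}
```

A `null` result feeds the same honest-failure path as inaccessible media, so a garbled completion never becomes a confident verdict.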
Accomplishments that we're proud of
- We built a product that feels like an investigation tool rather than a chatbot wrapper.
- We added claim-level evidence and source URLs instead of stopping at a generic verdict.
- We built honest media guardrails so inaccessible videos are not mislabeled as fake.
- We created a demo-ready UI that shows the AI workflow clearly while keeping the final report inspectable.
- We made the report reusable by storing investigations and enabling follow-up questions.

What we learned
- Trustworthy AI products need explicit uncertainty handling, not just better prompts.
- Structured multi-step AI workflows are much more useful than single-pass completions.
- A strong demo is not only about accuracy; it is also about clarity, inspectability, and user trust.
- Misinformation tools need to work with messy real-world formats, not only ideal text inputs.
What's next for AI Internet Detective
- Add stronger video frame analysis beyond metadata-driven caution
- Add direct image forensics and reverse-search support
- Add user accounts and private case histories
- Add mobile-friendly rumor sharing flows
- Add deployment and team-based review workflows for moderation or newsroom use cases
Built With
- angular
- asi:one
- java
- mongodb
- springboot
- typescript