Relevant Links for Reference

  1. https://yyccbb.github.io/review-agent-frontend/
  2. https://github.com/RaUzenL/tiktok-techjam-2025
  3. https://youtu.be/1UhwGVpVqnY

Inspiration

Location-based platforms like Google Maps and Yelp are essential for choosing where to eat, visit, or shop. However, we often encountered spammy, irrelevant, or promotional reviews that made it hard to trust what we read. This frustration inspired us to build a system that automatically filters out low-quality content and surfaces trustworthy insights — ensuring users get real value from reviews.


What it does

Our system is an AI-powered moderation agent that ingests Google Review JSON records and classifies them as relevant or not based on three core policies:

  • No Advertisement: Flags promotional content or links.
  • No Irrelevant Content: Ensures reviews talk about the actual place.
  • No Rant Without Visit: Detects complaints that lack visit evidence.

It combines regex filters, textual heuristics, and LLMs (Gemma 3 12B-IT / Qwen3 8B) to make decisions, returning each judgment with a confidence score and an explanation. Users can paste reviews into our web app and get real-time assessments.
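As a sketch, a per-review verdict of this shape could be modeled as follows. The field names here are illustrative, not the project's exact schema (the real backend uses Pydantic models; this uses a stdlib dataclass to stay self-contained):

```python
from dataclasses import dataclass, asdict


@dataclass
class ModerationVerdict:
    policy: str        # e.g. "advertisement", "irrelevant", "rant_without_visit", or "none"
    relevant: bool     # overall keep/flag decision
    confidence: float  # 0.0 - 1.0, surfaced to the user alongside the verdict
    explanation: str   # human-readable reason shown in the web app


# Example verdict for a review containing a promotional link.
verdict = ModerationVerdict(
    policy="advertisement",
    relevant=False,
    confidence=0.93,
    explanation="Review contains a promotional link.",
)
print(asdict(verdict))
```

The dict form is what a JSON API response for one review might look like.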


How we built it

  • Agentic Workflow: Built with LangGraph to orchestrate the multi-layer moderation pipeline.
  • Backend: FastAPI server deployed on Hugging Face Spaces with Docker.
  • Frontend: Built with React + Vite, deployed via GitHub Pages.
  • LLM Inference: Integrated the Hugging Face Inference API (via the `huggingface_hub` InferenceClient) to run the models.
  • Filtering Logic: Combined regex, textual visit heuristics, and LLM evaluations.
  • CI/CD: Automated deployment using GitHub Actions.

Challenges we ran into

  • Designing rules that didn't over-filter casual but genuine reviews.
  • Handling edge cases where LLM responses varied or contradicted heuristics.
  • Managing API rate limits and latency from Hugging Face inference endpoints.
  • Balancing clarity and explainability in the output without overwhelming the user.
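For the contradiction problem, one simple reconciliation rule (the threshold and tie-break here are illustrative, not our exact values) is to let the LLM override a heuristic flag only when it is confident:

```python
from typing import Optional


def reconcile(
    heuristic_flag: Optional[str],
    llm_verdict: str,
    llm_confidence: float,
    threshold: float = 0.7,
) -> str:
    """Resolve disagreement between a deterministic heuristic and the LLM."""
    if heuristic_flag is None:
        return llm_verdict          # heuristic has no opinion
    if llm_verdict == heuristic_flag:
        return heuristic_flag       # both agree
    # They disagree: trust the LLM only when it is sufficiently confident.
    return llm_verdict if llm_confidence >= threshold else heuristic_flag


print(reconcile("advertisement", "relevant", 0.55))  # advertisement
```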

Accomplishments that we're proud of

  • Built a fully working moderation pipeline that combines rule-based and LLM-based systems.
  • Successfully deployed the backend and frontend for live demo usage.
  • Achieved explainable moderation: every verdict ships with a confidence score and a plain-language reason.
  • Integrated two different open-weight LLMs to handle edge-case judgments.

What we learned

  • How to combine deterministic and probabilistic NLP techniques in a real-world pipeline.
  • The power and limitations of open-weight LLMs when used in moderation tasks.
  • Importance of user trust and transparency in automated AI systems.
  • How to use LangGraph effectively for agent orchestration.

What's next for TikTok TechJam 2025 – by team TikTok_Offer_Please

  • Expand to support multi-language review moderation.
  • Add model fine-tuning or prompt optimization for better consistency.
  • Integrate with live APIs (e.g., Yelp, Google Places) for real-time moderation.
  • Offer an API service that third-party platforms can use for review filtering.
  • Explore user feedback loops to improve the system over time.

Built With

  • dateutil
  • docker
  • fastapi
  • gemma3-12b-it
  • githubactions
  • githubpages
  • googlemapsreviewsapi
  • huggingface-hub.inferenceclient
  • huggingfacespaces
  • javascript
  • langgraph
  • pydantic
  • python
  • qwen3-8b
  • react
  • regex
  • typescript
  • vite
  • vue3