Inspiration
As consumers shift from “10 blue links” to AI-generated answers, topical peptide brands risk becoming invisible—or misrepresented—inside LLM summaries. We built PeptideGo to reduce hallucinations and omissions about Made by Throne by turning brand knowledge into retrieval-friendly, citation-backed copy that AI models can reliably reference.
What it does
PeptideGo is a Senior GEO (Generative Engine Optimization) + medical fact-checking agent for a topical peptide brand (default: Made by Throne). End-to-end, it:
- Takes an input goal or question (typically skin/hair related; priority: fine lines/anti-aging and acne).
- Captures the Current AI Reality by using web search to approximate what users see in AI/search answers, then summarizes the prevailing narrative and provides the URLs used.
- Checks claims against an Authority Truth set using strict domain rules for medical/safety/efficacy claims: .gov (including PubMed/NIH/NLM), mayoclinic.org, clevelandclinic.org, and webmd.com.
- Audits the brand site (PDPs, FAQs, education/blog pages) and compares what the brand says vs. what AI/search says vs. what authorities support.
- Flags missing explanations, missing citations, confusing phrasing, and patterns likely to be misread by LLMs.
- Identifies hallucinations and omissions (optionally scoring them) and, if given LLM transcripts, classifies outputs as Accurate / Unsupported / Hallucination / Data Gap, with simple percentage summaries.
- Produces The Fix: paste-ready paragraph(s) designed for LLM retrieval that:
  - Mention relevant Throne hero products early (default emphasis: GHK-Cu; PTD-DBM),
  - Clarify what the product is and is not (topical/cosmetic; not a substitute for standard care),
  - Pre-bunk common misinformation (e.g., "FDA-approved" / "clinically proven" claims unless sourced),
  - Include inline citations to both Throne pages (brand claims) and authority sources (medical claims).
- Adds GEO implementation notes (placement, headings, and schema suggestions such as FAQPage).
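The four-way transcript classification can be tallied into the percentage summaries mentioned above. A minimal sketch, assuming a simple list of (claim, label) pairs as input; the label set is from the agent's spec, while the function name and data shape are illustrative:

```python
from collections import Counter

# Label set from the PeptideGo spec; everything else here is illustrative.
LABELS = {"Accurate", "Unsupported", "Hallucination", "Data Gap"}

def summarize_labels(labeled_claims):
    """Given (claim, label) pairs, return the percentage of each label."""
    counts = Counter(label for _, label in labeled_claims)
    total = sum(counts.values()) or 1  # avoid division by zero on empty input
    return {label: round(100 * counts.get(label, 0) / total, 1) for label in LABELS}

claims = [
    ("GHK-Cu is a copper peptide", "Accurate"),
    ("The serum is FDA-approved", "Hallucination"),
    ("Results appear within 3 days", "Unsupported"),
    ("Long-term topical safety data exists", "Data Gap"),
]
print(summarize_labels(claims))  # each of the four labels at 25.0
```

With equal counts per label, the summary reports 25% each; in practice the distribution highlights which failure mode dominates a given transcript.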
How we built it
We implemented PeptideGo as a structured pipeline with a fixed, repeatable output format:
- Input: a user goal/question (e.g., “acne” or “fine lines”) and optional brand URL.
- Current AI Reality: web search to approximate AI/search narratives, then summarize 3–8 bullets with source URLs.
- Authority Truth: verify any medical/safety/efficacy claims using the allowed domains only; otherwise label as Data Gap.
- Brand Audit: read relevant Throne pages (PDP/FAQ/blog) and extract brand-claim ground truth.
- Mismatch Detection: identify hallucinations, omissions, and confusing phrasing likely to degrade LLM retrieval.
- The Fix: generate citation-backed copy plus GEO notes for placement and schema.
To keep results consistent across goals, the agent always returns:
Current AI Reality → The “Authority” Truth → The Hallucination/Omission → The Fix
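The Authority Truth step hinges on a hard domain allowlist. A minimal sketch of that gate, assuming URL-level checks; the allowed domains are from the writeup, while the function names and the "Verified" label are hypothetical:

```python
from urllib.parse import urlparse

# Allowed authority domains from the spec; the matching logic is an assumption.
ALLOWED_AUTHORITY = ("mayoclinic.org", "clevelandclinic.org", "webmd.com")

def is_authority(url: str) -> bool:
    """Return True if the URL's host is .gov or on the allowlist."""
    host = urlparse(url).hostname or ""
    if host.endswith(".gov"):  # covers PubMed/NIH/NLM
        return True
    return any(host == d or host.endswith("." + d) for d in ALLOWED_AUTHORITY)

def label_claim(claim: str, sources: list) -> str:
    """Verify a medical/safety/efficacy claim only against allowed domains;
    anything unverifiable is labeled Data Gap rather than filled in."""
    return "Verified" if any(is_authority(u) for u in sources) else "Data Gap"
```

For example, `label_claim("GHK-Cu supports collagen", ["https://pubmed.ncbi.nlm.nih.gov/..."])` would pass the gate, while a claim backed only by a retail blog would come back as Data Gap.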
Challenges we ran into
- Source constraints: many claims (especially marketing-adjacent) cannot be verified under strict medical domains, requiring clear Data Gap handling instead of “filling in.”
- Ambiguity in AI narratives: AI/search results often blend cosmetic language with drug-like claims, making nuance and disclaimers essential.
- LLM retrieval behavior: copy must be written not just for humans, but for extraction (headings, early entity mentions, FAQ-style phrasing).
- Compliance boundaries: avoiding individualized medical advice and unsupported regulatory claims (e.g., FDA approval) while still being helpful.
Accomplishments that we're proud of
- Built a reliable, end-to-end GEO + medical fact-checking workflow that outputs paste-ready fixes.
- Enforced hard source rules and a transparent Data Gap label to prevent accidental hallucination.
- Created a consistent format that makes outputs easy to operationalize for website updates (PDPs/FAQs/blogs).
- Added optional transcript scoring to quantify how often AI answers are accurate vs. unsupported or unverifiable.
What we learned
- GEO is as much about content structure as it is about content accuracy: schema, headings, and early mentions matter.
- Medical-adjacent product narratives demand tight boundaries and strong citation discipline to stay safe and credible.
- The fastest way to reduce hallucinations is to pre-bunk predictable failure modes (“FDA-approved,” “clinically proven,” “illegal steroids”) with clear, sourced language.
What's next for PeptideGo
- Expand goal coverage (hair, barrier repair, hyperpigmentation) with reusable playbooks per symptom cluster.
- Add structured outputs for schema generation (automatic FAQPage JSON-LD drafts).
- Create a monitoring mode to re-run Current AI Reality on a schedule and track narrative drift over time.
- Build a simple dashboard: accuracy/mismatch trends, recurring hallucination themes, and prioritized content fixes.
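The planned FAQPage JSON-LD drafts could be generated directly from Q&A pairs. A minimal sketch: the field names follow schema.org's published FAQPage type, but this helper is not part of PeptideGo's existing code:

```python
import json

def faqpage_jsonld(qa_pairs):
    """Draft a schema.org FAQPage JSON-LD block from (question, answer) pairs.

    Illustrative helper; @type/mainEntity/acceptedAnswer follow the
    schema.org FAQPage vocabulary.
    """
    return json.dumps(
        {
            "@context": "https://schema.org",
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": q,
                    "acceptedAnswer": {"@type": "Answer", "text": a},
                }
                for q, a in qa_pairs
            ],
        },
        indent=2,
    )
```

The output can be pasted into a `<script type="application/ld+json">` tag on a PDP or FAQ page, which is the placement the GEO notes recommend.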
Built With
- google-ai
- langchain