Inspiration

6 degree's came from a simple frustration: fashion brands are being judged and discovered inside AI systems, but most teams still rely on qualitative literature review with no quantitative backing beyond "trust me."

6 degree's provides that quantitative backing, focused specifically on how brands show up in AI conversations.
Coming from a fashion-model perspective, I’ve seen how quickly aesthetics, cultural signals, and consumer language shift. Traditional dashboards miss that movement. We wanted to build a system that captures how brands are actually being described, compared, and recommended by AI in real time—and turn that into decisions teams can execute.
What it does

6 degree's is an interactive AI GEO (generative engine optimization) intelligence platform for fashion brands.
Instead of static charts, it generates a live semantic universe where:
- Nodes represent your brand, competitors, aesthetics, and user-intent concepts.
- Links show how often those concepts co-occur across AI/search responses.
- Weak or missing edges expose uncaptured market intent.
- Competitor-dominance signals show where rivals are winning specific clusters.
- An evidence explorer ties every relationship to source-level proof.
- Share of Model Strength provides a quantitative score across model ecosystems.
- A built-in AI advisor chats with users and gives strategic, data-grounded operating guidance.

In short: it turns AI-era brand visibility into something measurable, explainable, and actionable.
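To make the graph concepts above concrete, here is a minimal sketch of what such a semantic-graph data model could look like. All type and field names are illustrative assumptions, not the actual 6 degree's schema; the "uncaptured intent" check simply looks for intent nodes with no sufficiently strong edge to the brand.

```typescript
// Hypothetical data model for the semantic universe: nodes are concepts,
// links are weighted co-occurrences backed by evidence IDs.
type NodeKind = "brand" | "competitor" | "aesthetic" | "intent";

interface GraphNode {
  id: string;
  label: string;
  kind: NodeKind;
}

interface GraphLink {
  source: string;          // node id
  target: string;          // node id
  cooccurrences: number;   // how often the two concepts appear together
  evidenceIds: string[];   // source-level proof backing this edge
}

interface SemanticGraph {
  nodes: GraphNode[];
  links: GraphLink[];
}

// A weak or missing edge between the brand and an intent node is an
// "uncaptured intent" signal: the market talks about the intent, but
// rarely in connection with the brand.
function uncapturedIntents(
  g: SemanticGraph,
  brandId: string,
  minWeight = 2,
): GraphNode[] {
  const connected = new Set(
    g.links
      .filter(
        (l) =>
          (l.source === brandId || l.target === brandId) &&
          l.cooccurrences >= minWeight,
      )
      .flatMap((l) => [l.source, l.target]),
  );
  return g.nodes.filter((n) => n.kind === "intent" && !connected.has(n.id));
}
```

Representing gaps as *missing* edges (rather than a separate metric) keeps the detection logic trivial: anything an AI system discusses that never co-occurs with the brand surfaces automatically.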
How we built it

We built 6 degree's as a full-stack Next.js application with a custom data pipeline and interactive graph interface:
- Frontend: Next.js + React + TypeScript + Tailwind CSS
- Core UX: custom force-directed semantic graph in SVG with click/zoom/pan interactions
- Backend: Next.js API routes orchestrating external data + synthesis
- Data sources: Tavily + Exa for web/search grounding
- Synthesis layer: OpenAI to structure evidence into graph entities/links/discourse/visual cues
- Normalization/hardening: strict payload sanitation on server and client to handle inconsistent model outputs
- Visual correlation: real image extraction from sources with moodboard fallback
- Scoring pipeline: model-strength computation with weighted signal logic
- Advisor layer: floating AI chat that consumes all in-app generated data and returns strategic recommendations in natural, executive language

The architecture was designed around one principle: every insight should be traceable back to evidence.
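The "normalization/hardening" layer mentioned above can be sketched as follows: model output is treated as untrusted, unknown-shaped JSON and coerced into a safe payload before it ever reaches the graph renderer. Field names and defaults here are assumptions for illustration, not the real 6 degree's code.

```typescript
// Illustrative payload sanitation: drop malformed entries, coerce bad
// weights to a safe default, and never trust the model's output shape.
interface SafeLink {
  source: string;
  target: string;
  weight: number;
}

function sanitizeLinks(raw: unknown): SafeLink[] {
  if (!Array.isArray(raw)) return [];
  const out: SafeLink[] = [];
  for (const item of raw) {
    if (typeof item !== "object" || item === null) continue;
    const { source, target, weight } = item as Record<string, unknown>;
    // Required string fields: skip the entry entirely if missing.
    if (typeof source !== "string" || typeof target !== "string") continue;
    // Optional numeric field: coerce to a sane default instead of dropping.
    const w =
      typeof weight === "number" && Number.isFinite(weight) ? weight : 1;
    out.push({ source, target, weight: Math.max(0, w) });
  }
  return out;
}
```

The design choice is deliberate: missing required fields invalidate an entry, while a malformed optional field only degrades it, so one bad model response cannot blank out the whole graph.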
Challenges we ran into

We hit multiple non-trivial challenges:
- Inconsistent model output schemas causing UI fallback behavior and generic graphs
- Type-safety issues in strict TypeScript builds under production constraints
- React state/render pitfalls around refs, effects, and animation loops
- Data-quality problems like publisher names appearing as competitor nodes
- False confidence metrics when fallback evidence IDs made links look "backed" when they weren't
- Sparse model mentions leading to flat/zero model-share scoring despite available signals
- Image reliability for real brand/product visuals across noisy source pages
- UX clarity around gap interpretation and competitor-dominance context
- Chat tone quality: balancing structured usefulness with natural, human-sounding responses

Each issue forced us to improve resilience, not just patch symptoms.
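The false-confidence problem above (fallback evidence IDs making links look "backed") suggests a simple guard: a link only counts as evidence-backed if every non-fallback evidence ID resolves to a known source, and at least one such ID exists. The prefix convention and function name below are hypothetical, shown only to illustrate the fix.

```typescript
// Guard against false confidence: synthetic fallback IDs must never make
// a link appear evidence-backed. "fallback-" is an assumed ID convention.
const FALLBACK_PREFIX = "fallback-";

function isBacked(evidenceIds: string[], knownSources: Set<string>): boolean {
  // Ignore synthetic fallback IDs entirely.
  const real = evidenceIds.filter((id) => !id.startsWith(FALLBACK_PREFIX));
  // Backed = at least one real ID, and every real ID resolves to a source.
  return real.length > 0 && real.every((id) => knownSources.has(id));
}
```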
Accomplishments that we're proud of

We're proud that 6 degree's moved beyond "dashboard theater" into decision intelligence:
- Built a truly exploratory semantic map with evidence-level drilldown
- Enforced brand-only competitor extraction (not publishers or media outlets)
- Guaranteed uncaptured-intent detection so teams always see where they're losing ground
- Delivered competitor-dominance proof tied to real query/source logs
- Created a quantitative Share of Model Strength pipeline that reflects observed signals
- Added real/fallback visual-correlation workflows to preserve insight continuity
- Embedded a natural-language AI advisor powered by the same internal data graph
- Hardened the full stack to compile and run reliably under strict lint/build checks

Most importantly, we made AI visibility legible to business stakeholders.
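One way a "Share of Model Strength" style score could work is as a weighted share of brand mentions across model ecosystems, normalized so sparse ecosystems don't flatten the result. The ecosystem names, weights, and formula below are assumptions for illustration, not the actual 6 degree's scoring logic.

```typescript
// Hypothetical weighted-signal scoring: per-ecosystem mention share,
// weighted by assumed ecosystem importance, rescaled to 0-100.
interface ModelSignal {
  model: string;         // ecosystem identifier (assumed labels below)
  brandMentions: number; // mentions of the brand in sampled responses
  totalMentions: number; // all brand mentions in the same samples
}

// Assumed ecosystem weights; unknown ecosystems get a small default weight.
const MODEL_WEIGHTS: Record<string, number> = {
  gpt: 0.5,
  claude: 0.3,
  gemini: 0.2,
};

function shareOfModelStrength(signals: ModelSignal[]): number {
  let score = 0;
  let weightSum = 0;
  for (const s of signals) {
    if (s.totalMentions === 0) continue; // skip, don't zero out sparse data
    const w = MODEL_WEIGHTS[s.model] ?? 0.1;
    score += w * (s.brandMentions / s.totalMentions);
    weightSum += w;
  }
  // Normalize by observed weight so missing ecosystems don't drag the score.
  return weightSum === 0 ? 0 : (score / weightSum) * 100;
}
```

Normalizing by the weight actually observed (rather than the full weight table) is one plausible answer to the sparse-mentions challenge noted earlier: ecosystems with no data are excluded instead of contributing zeros.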
What we learned

We learned that AI-facing brand strategy needs three things at once:
- Structure: a clear ontology of brand, intent, aesthetic, and competitor relationships
- Evidence: traceability to source-level observations
- Actionability: specific recommendations tied to measurable gaps

We also learned that language models are great at synthesis, but production systems need strong normalization and guardrails around them. And from a UX standpoint: users don't just need metrics; they need narrative context, proof, and confidence in what to do next.
What's next for 6 degree's

Next steps are focused on scale, precision, and workflow adoption:
- Brand memory layer: track semantic movement over time, not just snapshots
- Team workspaces: saved analyses, annotations, and collaborative decision logs
- Competitive watch mode: alerts when competitors gain ground in key intent clusters
- Campaign loop integration: connect recommendations to execution and outcome tracking
- Model-level diagnostics: deeper transparency by ecosystem and query archetype
- Vertical expansion: beyond fashion into beauty, luxury, and lifestyle categories
- Enterprise readiness: role permissions, auditability, and richer export/reporting
- Advisory intelligence upgrades: scenario planning ("if we shift narrative X, what happens?")

The goal is to become the operating layer for AI-era brand strategy: where cultural positioning, market evidence, and tactical execution finally connect.
Built With
- cursor