Inspiration and Context
43% of college students have used chatgpt or similar ai tools, and of those, 53% used it to write essays. 74% of college faculty say students are using ai to write essays or papers. 84% of faculty agree that ai use is reducing students' critical thinking, originality, and deep engagement with course material. and there is no doubt these numbers will keep climbing.
but the problem isn't that students use ai. they're going to use it regardless. the problem is they use it blindly: they paste in their thoughts, submit whatever comes back, and never learn what got lost in translation. an mit study found that students who used chatgpt to write showed measurably lower brain activity during the task compared to those who wrote on their own. when ai generates or heavily revises student work, their personal voice gets diluted and replaced by generic language from training data. eventually they forget what their own writing even sounds like.
cadence doesn't fight that. it gives students something they've never had: a mirror. here's your sentence rhythm. here's your formality score. here's the phrases only you use. now when ai writes for you, you can see exactly where it sounds like you and where it doesn't. that's not more ai, that's ai literacy.
that's cadence. your writing voice, preserved.
What it does
cadence models your writing voice (sentence cadence, lexical variance, structural tendencies, tonal balance) and uses that model to generate content that actually sounds like you, not ai slop.
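to make those dimensions concrete, here's a minimal sketch of the kind of stylometric features a voice profile might capture. the function name and the exact metrics are illustrative assumptions, not cadence's actual profile schema:

```python
import re
import statistics

def voice_fingerprint(text: str) -> dict:
    """Illustrative stylometric features -- not the real cadence profile."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    lengths = [len(s.split()) for s in sentences]
    return {
        # sentence cadence: average length and how much it swings
        "mean_sentence_len": statistics.mean(lengths),
        "sentence_len_stdev": statistics.pstdev(lengths),
        # lexical variance: unique words / total words (type-token ratio)
        "type_token_ratio": len(set(words)) / len(words),
        # structural tendency: how often sentences open with a conjunction
        "conj_openers": sum(
            s.lower().startswith(("but ", "and ", "so ")) for s in sentences
        ) / len(sentences),
    }

fp = voice_fingerprint(
    "but the problem isn't that students use ai. "
    "they're going to use it regardless. "
    "the problem is they use it blindly."
)
```

a real profile would add tonal balance (formal vs. casual markers) and per-user signature phrases on top of surface statistics like these.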
three ways to build your voice fingerprint:
- upload past writing (essays, papers, creative work) → voice analyst builds a structured style profile
- upload communication samples (emails, messages) → communication-specific profile tuned for tone shifts and brevity
- do a live 5-minute voice interview with an ai agent (elevenlabs convai) → personality, speech patterns, reasoning style, vocabulary
two studios:
- writing studio: enter a prompt, watch the full pipeline live. writer agent drafts in your voice, then two browser use agents simultaneously hit zerogpt and originality.ai, scrape per-sentence flagging, and loop revisions until the draft passes at ≤10% ai detection. every step streams in real-time — agent thinking, word-by-word drafting, detection scores, flagged sentence highlights, revision diffs.
- communication studio: browser use agent reads your gmail inbox, drafts replies matching your communication voice, you review and send — all through actual browser sessions.
the write-detect-revise loop runs up to 5 iterations with progressive humanization: ai starter replacement, comma splice injection, tricolon breaking, oxford comma drops, and noise injection (cyrillic homoglyphs, zero-width spaces, intentional typos) that scales intensity each round.
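the noise-injection idea can be sketched in a few lines. the homoglyph map, the per-round rate, and the cap are all illustrative assumptions, not the real humanization engine:

```python
import random

# illustrative homoglyph map: latin letters -> visually similar cyrillic
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}
ZERO_WIDTH = "\u200b"  # zero-width space

def inject_noise(text: str, round_num: int, seed: int = 0) -> str:
    """Swap a few latin letters for cyrillic look-alikes and sprinkle
    zero-width spaces; intensity scales with the revision round."""
    rng = random.Random(seed)
    rate = min(0.02 * round_num, 0.10)  # assumed: 2% per round, capped at 10%
    out = []
    for ch in text:
        if ch in HOMOGLYPHS and rng.random() < rate:
            out.append(HOMOGLYPHS[ch])
        else:
            out.append(ch)
        if ch == " " and rng.random() < rate:
            out.append(ZERO_WIDTH)
    return "".join(out)

noisy = inject_noise("cadence preserves your voice across every draft", round_num=5)
```

the trick is that the noise is invisible to a human reader but changes the byte sequence a detector tokenizes; stripping the zero-width characters and mapping homoglyphs back recovers the original text exactly.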
also runs as a 5-agent fetch.ai pipeline on asi:one: an orchestrator, a profile digester, a writer, and two parallel detector agents, all coordinated through agentverse mailbox relay. the main constructor agent that powers the pipeline is available as @cadence on agentverse.
How it was built
Architecture
| Layer | Stack |
|---|---|
| Frontend | React 18 + Vite + Framer Motion + Three.js (React Three Fiber) |
| Backend | FastAPI + WebSockets |
| Database | Supabase (Postgres + Auth + Storage) |
| LLM | Anthropic Claude Sonnet — voice fingerprinting + writing + revision |
| Voice | ElevenLabs ConvAI — adaptive voice interview |
| Browser automation | Browser Use Cloud — parallel AI detection (ZeroGPT, Originality.ai) + Gmail automation |
| Multi-agent | Fetch.ai uAgents — 5 agents on Agentverse with mailbox relay, ASI:One Chat Protocol |
| PDF processing | PyMuPDF + jsPDF |
- frontend: react 18 + vite, framer motion animations, three.js gpu particle system (1M particles) for landing page, react-three-fiber dither wave shader for dashboard, supabase auth, elevenlabs react sdk for voice interview
- backend: fastapi + websockets for real-time pipeline streaming, anthropic sdk (claude sonnet) for voice analysis + writing, browser use cloud sdk for ai detection + gmail automation, pymupdf for pdf extraction
- fetch.ai: 5 uagents on agentverse with mailbox relay, orchestrator handles asi:one chat protocol, digester parses voice profiles, writer drafts with claude, two detector agents run browser use sessions in parallel
- database: supabase (postgres + auth + storage) for user profiles, documents, pipeline sessions, voice interview transcripts
- apis: anthropic (claude) → voice fingerprinting + writing, elevenlabs (convai) → voice interviews, browser use cloud → zerogpt + originality.ai detection + gmail, fetch.ai agentverse → multi-agent coordination
pipeline streams everything over websocket in real-time: agent reasoning, word-by-word drafting, detection scores per iteration, flagged sentence highlights, and revision diffs.
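one way to frame that stream is a single json envelope per websocket message. the field names below are hypothetical, not cadence's actual wire format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PipelineEvent:
    """Hypothetical event envelope -- field names are illustrative."""
    kind: str       # e.g. "agent_thinking", "draft_token", "detection_score"
    iteration: int  # which write-detect-revise round produced it
    payload: dict   # event-specific data

def to_frame(event: PipelineEvent) -> str:
    # one json frame per websocket message keeps the client parser trivial
    return json.dumps(asdict(event))

frame = to_frame(
    PipelineEvent("detection_score", 2, {"zerogpt": 14.0, "originality": 9.5})
)
```

with a uniform envelope like this, the frontend can switch on `kind` to route word-by-word tokens, scores, and diffs to the right ui component.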
Challenges
- getting the write-detect-revise loop to actually converge was brutal. ai detectors flag different sentences for different reasons, and what passes on zerogpt gets caught by originality.ai and vice versa. tuning the humanization engine to satisfy both simultaneously took endless iteration, figuring out which structural patterns trigger detection and how to rewrite around them without losing the user's voice.
- deploying 5 agents on agentverse with mailbox relay and getting them to coordinate reliably. message routing between orchestrator → digester → writer → two parallel browser-use detectors, handling async responses that arrive out of order, maintaining session state across agents that each poll their own mailbox independently, and debugging distributed agent communication with no shared memory is a different kind of pain.
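the out-of-order problem reduces to correlating replies by a shared session id, since the agents have no shared memory. a minimal sketch of that pattern (detector names and the threshold match the writeup; the function itself is illustrative, not the actual agentverse handler):

```python
# correlate out-of-order detector replies by session id -- illustrative sketch
sessions: dict[str, dict[str, float]] = {}

def on_detector_reply(session_id: str, detector: str, score: float):
    """Record one detector's score; return a combined verdict once both arrive."""
    scores = sessions.setdefault(session_id, {})
    scores[detector] = score
    if {"zerogpt", "originality"} <= scores.keys():
        done = sessions.pop(session_id)
        # pass only when BOTH detectors are at or under the 10% threshold
        return {"passed": max(done.values()) <= 10.0, "scores": done}
    return None  # still waiting on the other detector

# replies can land in either order
assert on_detector_reply("s1", "originality", 8.0) is None
result = on_detector_reply("s1", "zerogpt", 6.5)
```

in the real deployment the `sessions` map would live in the orchestrator, keyed by whatever correlation id rides along in the mailbox messages.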
Accomplishments
- full write-detect-revise loop that actually works: ai generates in your voice, two browser agents check real detectors in parallel, and the system iterates until the draft passes at ≤10% ai detection. not a gimmick, it converges
- 5-agent fetch.ai pipeline running on asi:one with full chat protocol integration. users can attach a pdf of their voice profile and get back a complete essay through agentverse: the full cadence flow, no webapp needed.
- live voice interview system with elevenlabs convai that adapts in real-time, paired with a three.js particle shader that pulses with the ai's speech. interview → transcript → fingerprint → usable voice profile in 5 minutes.
Lessons Learned
- ai detectors don't agree with each other. zerogpt and originality.ai flag completely different sentences for completely different reasons. building a system that satisfies both simultaneously taught me more about how detection actually works than any paper I read.
What's next for Cadence
- optimize browser use agent prompts to navigate detector sites faster. right now each detection run takes longer than it needs to because the agent over-explores the dom. tighter instructions, fewer wasted clicks, faster convergence.
- add more detectors to the loop beyond zerogpt and originality.ai. the more detectors the draft passes simultaneously, the more robust the output. gptzero, turnitin, copyleaks.
- port the communication studio to agentverse so the full gmail inbox reading + reply drafting pipeline runs through asi:one the same way the writing pipeline already does
- monetize everything. perhaps a freemium model: limited pipeline runs free, a paid tier for unlimited generations, more detectors, and priority browser use sessions.
Built With
- agentverse
- anthropic-claude-sonnet
- asi:one
- browser-use
- elevenlabs-convai
- fastapi
- fetch.ai-uagents
- python
- supabase
