Inspiration
As generative AI becomes harder to distinguish from human work, trust on the internet is breaking. We realized that detection is a losing game. Most AI detectors are opaque and produce false positives, which can punish authentic writers and students.
So we changed the question from “Is this output AI?” to “Can we verify the human creation process?”
Humans do not create in a straight line. We pause, revise, delete, and rewrite. Mindprint is built around that truth: your creative fingerprint is in the process, not just the final text.
What it does
Mindprint is a proof-of-human writing environment that verifies provenance through behavioral telemetry and cryptographic proof.
- Behavioral Telemetry: During writing, Mindprint captures process signals such as keystrokes, pauses, paste actions, and text operations.
- Live Humanity Signal: A local validation engine scores behavioral patterns to classify sessions as VERIFIED_HUMAN, SUSPICIOUS, LOW_EFFORT, or INSUFFICIENT_DATA.
- Ghost Replay: Verification includes deterministic replay of the writing process, so reviewers can see how the text was produced.
- Verifiable Certificates: Completed sessions generate signed, shareable certificates with transparency-log linkage.
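The four-way classification above can be sketched as a small scoring function. This is a minimal illustration, not Mindprint's actual engine: every threshold and field name here is a hypothetical assumption.

```typescript
// Illustrative sketch of the session classifier. Thresholds and
// SessionStats fields are invented for this example.
type Verdict = "VERIFIED_HUMAN" | "SUSPICIOUS" | "LOW_EFFORT" | "INSUFFICIENT_DATA";

interface SessionStats {
  keystrokes: number;    // total key events captured
  pasteChars: number;    // characters inserted via paste
  totalChars: number;    // final document length
  revisions: number;     // delete/rewrite operations observed
  activeSeconds: number; // time spent actively writing
}

function classify(s: SessionStats): Verdict {
  // Too little signal to make any claim -- prefer abstaining.
  if (s.keystrokes < 200 || s.activeSeconds < 60) return "INSUFFICIENT_DATA";
  // Mostly pasted content suggests the text was produced elsewhere.
  if (s.totalChars > 0 && s.pasteChars / s.totalChars > 0.5) return "SUSPICIOUS";
  // Long sessions with almost no revision look unnaturally linear.
  if (s.revisions < 3) return "LOW_EFFORT";
  return "VERIFIED_HUMAN";
}
```

The key design point mirrors the writeup: the function abstains (INSUFFICIENT_DATA) before it accuses.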
How we built it
- Core Stack: Next.js 16 + TypeScript.
- Editor Engine: Tiptap with custom telemetry instrumentation.
- Data Layer: Drizzle ORM on Postgres for telemetry sessions, event batches, and certificate records.
- Security Model: Signed telemetry session tokens, sequence-validated ingestion, signed certificate proofs, and hash-chained certificate logs.
- Visualization: Framer Motion and custom chart/replay components for typing velocity and process playback.
- Design System: Tailwind CSS + Magic UI primitives for the writing and verification experience.
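The hash-chained certificate log mentioned in the security model can be illustrated with a few lines of Node.js. This is a generic sketch of the technique (each entry commits to the previous entry's hash, so tampering with history invalidates every later hash); the entry shape is an assumption, not Mindprint's schema.

```typescript
import { createHash } from "node:crypto";

// Generic hash-chain sketch; the LogEntry fields are illustrative.
interface LogEntry { certId: string; prevHash: string; hash: string; }

const GENESIS = "0".repeat(64); // sentinel hash for the first entry

function appendEntry(log: LogEntry[], certId: string): LogEntry[] {
  const prevHash = log.length ? log[log.length - 1].hash : GENESIS;
  const hash = createHash("sha256").update(prevHash + certId).digest("hex");
  return [...log, { certId, prevHash, hash }];
}

function verifyChain(log: LogEntry[]): boolean {
  return log.every((entry, i) => {
    const prev = i === 0 ? GENESIS : log[i - 1].hash;
    const expected = createHash("sha256").update(prev + entry.certId).digest("hex");
    return entry.prevHash === prev && entry.hash === expected;
  });
}
```

Rewriting any historical entry breaks `verifyChain` for the whole suffix, which is what makes the transparency log tamper-evident.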
Challenges we ran into
- Telemetry overhead: We needed collection to stay lightweight enough to avoid interrupting typing flow. We solved this with batching, throttled validation, and strict event shape controls.
- Signal quality: Distinguishing legitimate edits from suspicious patterns required careful scoring thresholds and confidence calibration.
- Trust boundaries: We tightened verification paths to only trust persisted, signed certificates (including OG image generation), rather than URL-provided payloads.
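The batching and sequence-validated ingestion described above can be sketched as a small client-side buffer: events accumulate until a fixed batch size, and each flushed batch carries a monotonically increasing sequence number so the server can reject out-of-order or replayed batches. All names and the batch size are illustrative assumptions.

```typescript
// Hypothetical sketch of batched telemetry with sequence numbers.
interface TelemetryEvent { t: number; kind: "key" | "paste" | "delete"; }

class EventBatcher {
  private buffer: TelemetryEvent[] = [];
  private seq = 0;
  // In a real client this would POST to an ingestion endpoint;
  // here we just record flushed batches for inspection.
  readonly sent: { seq: number; events: TelemetryEvent[] }[] = [];

  constructor(private readonly batchSize = 50) {}

  push(event: TelemetryEvent): void {
    this.buffer.push(event);
    if (this.buffer.length >= this.batchSize) this.flush();
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    this.sent.push({ seq: this.seq++, events: this.buffer });
    this.buffer = [];
  }
}
```

Keeping per-event work to a single array push is what keeps collection cheap enough not to interrupt typing; the network cost is amortized over each batch.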
Accomplishments that we're proud of
- Ghost Replay experience: It makes provenance tangible and understandable.
- Conservative classification philosophy: The system prefers INSUFFICIENT_DATA over making weak claims.
- End-to-end verification model: Session integrity, certificate signing, and transparency-log checks work together as one trust pipeline.
What we learned
- Human writing is behaviorally noisy, and that noise is useful.
- Provenance is more robust than output-based detection.
- Writers value tools that prove effort without exposing unnecessary private content.
How we used Gemini
We use Google Gemini 3 Flash Preview (gemini-3-flash-preview) for session analysis.
Our local telemetry engine handles deterministic validation and risk scoring. Gemini 3 adds qualitative interpretation by analyzing aggregated session events and returning structured insights:
- Cognitive Effort
- Human Likelihood
- Detected behavioral events (for example, pause or bulk-paste patterns)
- Narrative analysis summary
We chose Gemini 3 Flash Preview for fast response times and reliable structured output in the /api/analyze workflow.
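Because the /api/analyze route depends on the model returning well-formed structured output, a small validator on the response is a natural guard. The insight shape and field names below are illustrative assumptions, not the actual API contract.

```typescript
// Hypothetical shape of the structured insights returned by the model;
// field names are invented for this sketch.
interface SessionInsights {
  cognitiveEffort: number;  // 0..1
  humanLikelihood: number;  // 0..1
  detectedEvents: string[]; // e.g. ["long-pause", "bulk-paste"]
  summary: string;          // narrative analysis
}

// Validate raw model output before it reaches the UI; return null
// rather than propagating a malformed payload.
function parseInsights(raw: string): SessionInsights | null {
  try {
    const o = JSON.parse(raw);
    if (
      typeof o.cognitiveEffort === "number" &&
      typeof o.humanLikelihood === "number" &&
      Array.isArray(o.detectedEvents) &&
      typeof o.summary === "string"
    ) {
      return o as SessionInsights;
    }
  } catch {
    // fall through to null on unparseable input
  }
  return null;
}
```

Treating the model response as untrusted input mirrors the project's broader trust-boundary stance: only validated data moves forward in the pipeline.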
What's next for Mindprint
- LMS integrations for education workflows.
- First-class integrations for writing platforms.
- Expanded verification UX for reviewers and compliance teams.
- Ongoing tuning of behavioral scoring thresholds with transparent model parameters.
Built With
- antigravity
- drizzle
- gemini
- nextjs
- postgresql
- supabase
- tailwind
