Inspiration
SF started with a problem we heard directly in the classroom: teachers are often asked to judge whether a student's writing is authentic using only the final essay. As AI writing tools become more common, that judgment has turned into uncertainty, guesswork, and stress for educators, and we wanted to build something fairer. Instead of asking teachers to guess from the final product alone, we asked: what if they could see the writing process itself? That idea became SF.
What it does
SF is a writing platform for students and teachers that captures the process behind a written assignment. Students write inside a rich-text editor, and SF records key events like typing, deleting, pasting, tab switching, formatting changes, and draft snapshots. Those actions are stored in a custom .sf format with timestamps. When a student submits, teachers can review the final work and replay the timeline of how it was created. SF also provides submission analytics such as word count, paste count, tab-away count, total time, and event activity to give educators more context.
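The .sf format itself is custom to SF, so as an illustration only, here is a sketch of what a timestamped event log like the one described above might look like; the event shape and the one-JSON-object-per-line encoding are assumptions, not the real spec.

```typescript
// Hypothetical event record: a timestamp plus an event type such as
// "insert", "delete", "paste", "tab-away", "format", or "snapshot".
interface SfEvent {
  t: number;              // milliseconds since the session started
  type: string;           // kind of editor event
  [k: string]: unknown;   // event-specific payload (text, position, etc.)
}

// Encode a session as one JSON event per line (a guessed encoding).
function encodeSf(events: SfEvent[]): string {
  return events.map((e) => JSON.stringify(e)).join("\n");
}

// Decode a .sf-style string back into an ordered event list.
function decodeSf(data: string): SfEvent[] {
  return data
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line) as SfEvent);
}
```

An append-only, line-oriented log like this keeps writes cheap during a live session and lets a replay viewer stream events in order.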
How we built it
We built SF as a full-stack TypeScript application.
- Frontend: Next.js and React for the student and teacher workflows
- Editor: rich-text editor with custom event capture
- Backend: Next.js route handlers for assignments, submissions, auth, and seed data
- Database: Prisma + SQLite
- Playback engine: custom replay logic to reconstruct the document over time
- AI-assisted analysis: Snowflake Cortex
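To show how the backend pieces above fit together, here is a sketch of a submissions route handler in the Next.js App Router style; the route shape, payload fields, and validation helper are all hypothetical, not SF's actual API.

```typescript
// Assumed payload for a student submission: which assignment it belongs
// to, plus the captured writing-session events.
interface SubmissionPayload {
  assignmentId: string;
  events: unknown[];
}

// Type guard validating an untrusted request body.
function isValidSubmission(body: unknown): body is SubmissionPayload {
  const b = body as Partial<SubmissionPayload> | null;
  return (
    !!b && typeof b.assignmentId === "string" && Array.isArray(b.events)
  );
}

// Next.js App Router handler sketch (e.g. app/api/submissions/route.ts).
export async function POST(req: Request): Promise<Response> {
  const body = await req.json().catch(() => null);
  if (!isValidSubmission(body)) {
    return Response.json({ error: "invalid submission" }, { status: 400 });
  }
  // In SF this would persist through Prisma; omitted in this sketch.
  return Response.json({ ok: true, eventCount: body.events.length });
}
```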
Challenges we ran into
One of our biggest challenges was replay fidelity. Capturing events is straightforward, but accurately rebuilding a rich-text document over time is much harder. We also had to balance performance, timeline accuracy, and ethical interpretation. A high paste count or frequent tab switching does not automatically mean misconduct, so we had to design SF to support context and transparency rather than overconfident conclusions.
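The replay problem above can be sketched as a reducer over the event log. This is a minimal version assuming a simplified plain-text model (the real SF replay also has to handle rich-text formatting); the event shapes are assumptions.

```typescript
// Simplified replay events: inserts/pastes carry text and a position,
// deletes carry a position and length.
type ReplayEvent =
  | { t: number; type: "insert"; pos: number; text: string }
  | { t: number; type: "paste"; pos: number; text: string }
  | { t: number; type: "delete"; pos: number; len: number };

// Rebuild the document as it existed at time `upTo` (ms since session
// start) by applying events in order and stopping at the cutoff.
function replay(events: ReplayEvent[], upTo: number): string {
  let doc = "";
  for (const e of events) {
    if (e.t > upTo) break;
    if (e.type === "delete") {
      doc = doc.slice(0, e.pos) + doc.slice(e.pos + e.len);
    } else {
      doc = doc.slice(0, e.pos) + e.text + doc.slice(e.pos);
    }
  }
  return doc;
}
```

Scrubbing the timeline is then just re-running the reducer with a different cutoff, which is simple but O(n) per seek; snapshots (which SF also records) make seeking cheap by letting replay start from the nearest snapshot instead of from the beginning.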
Accomplishments that we're proud of
- Built a custom .sf writing-session format
- Created a full student-to-teacher workflow
- Developed a replay viewer for writing timelines
- Added teacher-facing analytics for a more informed review
- Integrated AI-assisted analysis without replacing teacher judgment
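The teacher-facing analytics listed above amount to a single pass over the captured events. This sketch computes the metrics named earlier (paste count, tab-away count, total time, event activity); the event shape and field names are assumptions.

```typescript
// Minimal captured-event shape assumed for this sketch.
interface CapturedEvent {
  t: number;    // ms since session start
  type: string; // e.g. "insert", "paste", "tab-away"
}

interface SubmissionStats {
  pasteCount: number;
  tabAwayCount: number;
  totalTimeMs: number;
  eventCount: number;
}

// Summarize a session's events into the review metrics shown to teachers.
function summarize(events: CapturedEvent[]): SubmissionStats {
  const last = events.length ? events[events.length - 1].t : 0;
  return {
    pasteCount: events.filter((e) => e.type === "paste").length,
    tabAwayCount: events.filter((e) => e.type === "tab-away").length,
    totalTimeMs: last,
    eventCount: events.length,
  };
}
```

Keeping these as raw counts rather than a verdict matches the design goal above: the numbers give context, and the teacher interprets them.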
What we learned
We learned that process-based evidence is far more useful than final-output-only review. We also learned that educational integrity tools must be explainable, careful, and grounded in human oversight. From a technical side, we learned how much small data-modeling decisions affect replay reliability and performance.
What's next for SF
Next, we want to improve teacher analytics, make replay insights more explainable, add rubric-based feedback tools, and strengthen the platform for real classroom deployment. Our long-term goal is to make SF a platform that helps educators better understand writing, effort, and growth. We also want to extend our platform to explore coding as a future assignment type.