Katha - A Story for Everyone
Inspiration
Katha was inspired by a deeply personal experience during COVID-19.
Our family was physically apart, and we wanted to celebrate my grandmother’s 80th birthday in a meaningful way. Since we couldn’t gather, we decided to create a "life documentary" for her—something that would capture her journey through the eyes of the people who loved her.
But the process was unexpectedly difficult.
Memories were scattered across different family members: photos, videos, voice notes, and stories that only existed in someone’s mind. Coordinating contributions was chaotic, organizing everything into a timeline took hours, and turning it into a meaningful narrative required immense manual effort.
What should have been a beautiful experience became fragmented and stressful.
That’s when we realized: People don’t lack memories; they lack a way to bring them together. Katha was created to solve this.
What it does
Katha is a collaborative storytelling platform that transforms scattered memories into a structured, immersive life story. It moves beyond passive archiving to active Narrative Synthesis.
A single user, the Curator, initiates the project and sets the timeline boundaries. They invite others (Contributors) through a private shared link to a temporary workspace. The distributed community can then contribute diverse fragments:
- Photos and Videos
- Voice Notes and Recordings
- Textual Stories and Prompt Descriptions
Katha’s powerful AI Agent layer dynamically connects these inputs, providing context and automating the "Katha Metamorphosis" from fragments to order:
1. Automated Context Search: The AI searches location data (e.g., historical maps, Google Earth context) and general historical events based on the mandatory timestamp (month/year) that each contribution must include.
2. Character Recognition Bar: The dashboard features a dynamic UI bar of face bubbles. The AI auto-detects people across media (e.g., Ravi (Subject), Anjali (Sister)) and suggests grouping related stories.
3. Generative Media Reconstruction: When a contribution is text-only (a description of a childhood event without a photo), the AI generates historically accurate visual context, shown as a chrysalis icon in its "cooking" state, to prevent narrative gaps.
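The agent steps above amount to an enrichment pipeline over contributions. The sketch below is illustrative only: the `Contribution` shape, field names, and placeholder lookups are our assumptions, not Katha's actual implementation, and the real system would call out to maps/history/generation services where the comments indicate.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Contribution:
    kind: str                 # "photo", "video", "voice", or "text"
    timestamp: str            # mandatory month/year, e.g. "06/1952"
    location: Optional[str] = None
    text: str = ""
    context: list = field(default_factory=list)

def enrich(contribution: Contribution) -> Contribution:
    """Attach contextual hints based on the mandatory timestamp and location."""
    # Step 1: automated context search (placeholder for a real maps/history lookup)
    contribution.context.append(f"events around {contribution.timestamp}")
    if contribution.location:
        contribution.context.append(f"historical map of {contribution.location}")
    # Step 3: flag text-only fragments for generative media reconstruction
    if contribution.kind == "text":
        contribution.context.append("needs generated visual (chrysalis state)")
    return contribution
```

Step 2 (face detection across media) is omitted here since it depends on a vision model; the point of the sketch is that every contribution carries a timestamp that drives the context search.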
A standout feature is Live Audience Participation:
During a presentation to a large audience, a QR Code Engagement Portal appears. Audience members scan it, access a temporary interaction screen on their phones, and have 5 minutes to submit photos or answer curated AI questions (e.g., "How did it feel based on this known event?"). The AI instantly moderates, filters, and integrates these inputs into the vertical storytelling stream without breaking the emotional flow.
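The live flow described above is essentially a timed submission window with a moderation gate. A minimal sketch, assuming a hypothetical `EngagementPortal` class; the moderation rule here is a stand-in for the AI filter:

```python
import time

WINDOW_SECONDS = 5 * 60  # the 5-minute audience window

class EngagementPortal:
    def __init__(self, window: float = WINDOW_SECONDS):
        self.opened_at = time.monotonic()
        self.window = window
        self.accepted = []

    def is_open(self) -> bool:
        return time.monotonic() - self.opened_at < self.window

    def submit(self, text: str) -> bool:
        """Accept a submission only while the window is open and it passes moderation."""
        if not self.is_open():
            return False
        if not self._moderate(text):
            return False
        self.accepted.append(text)
        return True

    @staticmethod
    def _moderate(text: str) -> bool:
        # Placeholder for AI moderation: reject empty or overlong input
        return 0 < len(text.strip()) <= 500
```

Accepted items would then be handed to the synthesis layer to be slotted chronologically into the vertical stream.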
The final result is not just storage; it's an experience: a vertically scrollable, presentation-ready story that can be shared with individuals or large audiences. Security is prioritized: the workspace is temporary; after the Curator finalizes the story, they download the completed narrative, and all data is deleted forever.
How we built it
Katha was designed as a hybrid system where distributed human memory is augmented and synthesized by powerful AI context and generative tools.
- Design & UX (Prototype in Figma): We sketched a low-fidelity wireframe by hand, iterated to a mid-fidelity version, gathered feedback, and then built a high-fidelity prototype in Figma that we are very proud of. We established a professional, human-focused design system (a Sage Green/Earth palette) to ensure the product felt trustworthy and evoked community and wisdom.
Challenges we ran into
- The Unstructured Data Problem: Synthesizing memories across vastly different formats (text prompts, low-res vintage photos, audio recordings) into one seamless "Biopic" narrative flow.
- Maintaining Emotional Authenticity: Balancing automated AI context/generation with the unique, human voice of the contributors. The AI must assist, never replace.
- Distributed Moderation vs. Security: Ensuring crowd-sourced contributions were appropriate and ethical without making the verification process slow or overly manual.
- The Technical Challenge of Live QR Integration: Visualizing how to pause a presentation, integrate diverse audience input in 5 minutes, and have the AI synthesize it chronologically live.
Accomplishments that we're proud of
- Designed a New Storytelling Format centered around distributed memory and chronological synthesis.
- Built a system where multiple people can co-create a single life story seamlessly, overcoming physical distance.
- Introduced Live QR-based Audience Participation, transforming presentations from passive viewing into active co-creation.
Built With
- figma