Garden of Kin: Project Framework


Inspiration

Garden of Kin began with a simple, shared memory: the post-feast lull of Chinese New Year's Eve, when families slip from polite chatter into raw honesty. We imagined a family fractured by loss, gathered for the first New Year with the mother's seat at the table empty, and a power outage that forces them into an old drinking game. The game, "Guang Sanyuan" (逛三园, "touring the three gardens"), becomes a Trojan horse for truth: each "garden" (botanical, zoo, family) strips away another layer of pretense until only love remains. We wanted to explore how dialects (Taiwanese, Northeastern, and Hong Kong-accented Mandarin) could be both weapons and bridges, and how AI might capture the messy beauty of reunion.


What It Does

It's a 10-minute AIGC short that uses generative tools to craft a light comedy with surprising emotional depth. Three generations play a word-association game by candlelight; each round unearths decades of resentment, guilt, and unspoken affection. The film moves from laugh-out-loud absurdity (naming zoo animals) to profound silence (naming family members who've drifted). By the time the lights flicker back on, the empty bowls have been refilled—not with food, but with renewed connection. It's proof that AI cinema can serve story, not just spectacle.


How We Built It

We wrote a tightly structured script where every line doubles as a game move and an emotional beat. Tapnow translated our screenplay into dynamic storyboards, mapping each candlelit pause and defensive glance. With KlingAi, we generated the visual foundation: warm chiaroscuro compositions that treat darkness as both veil and spotlight. For animation and voice, we orchestrated HailuoAi, ViduAi, and GagaAi in parallel—each tool handled different characters' dialects and gestures, allowing us to fine-tune the soft Taiwanese lilt against blunt Northeastern jabs. Suno composed the minimalist score: a single guzheng string that trembles with every familial reveal. Finally, CapCut fused these layers, adding film grain and subtle sound design to ground the synthetic imagery in tactile reality—treating AI as a cinematographer, not a magic wand.

Technical Stack:
Tapnow → KlingAi → HailuoAi / ViduAi / GagaAi → Suno → CapCut
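In practice the stack behaved like a linear pipeline with one fan-out stage (three animation tools working in parallel on different characters). A minimal sketch of that orchestration, with hypothetical stage functions standing in for each tool's manual export/import hand-off (none of these are real tool APIs):

```python
from typing import Dict, List

# Hypothetical stage functions: each stands in for one tool's
# export -> import step. The real hand-offs were driven by hand.
def storyboard(script: str) -> str:           # Tapnow: script -> boards
    return f"boards({script})"

def keyframes(boards: str) -> str:            # KlingAi: boards -> key visuals
    return f"frames({boards})"

def animate(frames: str, tools: List[str]) -> Dict[str, str]:
    # Fan-out: each animation tool handles different characters/dialects.
    return {t: f"{t}:anim({frames})" for t in tools}

def score(script: str) -> str:                # Suno: script -> music
    return f"music({script})"

def edit(anim: Dict[str, str], music: str) -> str:  # CapCut: final cut
    layers = " + ".join(sorted(anim.values()))
    return f"cut({layers} | {music})"

def run_pipeline(script: str) -> str:
    boards = storyboard(script)
    frames = keyframes(boards)
    anim = animate(frames, ["HailuoAi", "ViduAi", "GagaAi"])
    music = score(script)
    return edit(anim, music)

print(run_pipeline("garden_of_kin_script.txt"))
```

The point of the sketch is the shape, not the calls: one serial spine with a single parallel stage, which is also where most of the continuity problems described below entered.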


Challenges We Ran Into

Emotional fidelity: Teaching AI to render subtlety—a trembling lip, a glance away—required hundreds of prompt iterations. The candlelight was particularly stubborn; early renders looked like CGI fire, not memory.

Cultural performance integrity: Capturing the nuance of Pingtan opera and authentic regional dialects proved far more complex than algorithmic adjustment. The rhythmic storytelling of Pingtan, the oral posture of elderly Northeastern speakers, and the code-switched humor of Hong Kong-influenced Putonghua carried cultural DNA that no synthesis model could replicate. We had to recruit professional Pingtan performers and local dialect consultants, then synchronize their recordings with AI-generated visuals—a workflow that introduced new layers of timing mismatches and lip-sync complexity.
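The timing mismatches came down to repetitive bookkeeping: each recorded line had one duration, and the generated shot it had to land in had another. A toy sketch of the calculation we kept doing by hand, with illustrative durations and tolerance (not values from the production):

```python
def stretch_factor(audio_s: float, shot_s: float, max_drift: float = 0.15) -> float:
    """Ratio needed to time-stretch a recorded line so it fills its shot.

    Returns 1.0 when the mismatch is within tolerance, since a small
    drift reads better on screen than audible tempo artifacts.
    """
    if audio_s <= 0 or shot_s <= 0:
        raise ValueError("durations must be positive")
    ratio = shot_s / audio_s
    return 1.0 if abs(ratio - 1.0) <= max_drift else ratio

# Example: a 2.0 s recorded Pingtan line must fit a 2.6 s generated shot.
print(stretch_factor(2.0, 2.6))  # 1.3: stretch the audio, or re-cut the shot
```

Anything far from 1.0 was a signal to regenerate the shot rather than distort the performance, since the performers' rhythm was the whole point of recording them.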

Continuity: Characters morphed between scenes. We solved this by creating a "visual bible" of seed images, but it remained a constant battle.
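The "visual bible" was essentially a per-character lookup table: every shot prompt for a character started from the same seed image and identity description, so regenerations drifted less. A minimal sketch of the idea, with entirely hypothetical character entries and request fields (no real generator's API is assumed):

```python
# Hypothetical visual bible: one locked identity record per character.
VISUAL_BIBLE = {
    "grandmother": {
        "seed_image": "seeds/grandmother_v3.png",
        "identity": "elderly woman, silver bun, red knit cardigan",
        "negative": "changed hairstyle, modern clothing, extra fingers",
    },
    "eldest_son": {
        "seed_image": "seeds/eldest_son_v2.png",
        "identity": "middle-aged man, wire glasses, grey sweater vest",
        "negative": "beard, bright daylight",
    },
}

def build_shot_prompt(character: str, action: str) -> dict:
    """Compose a generation request that pins identity per character."""
    entry = VISUAL_BIBLE[character]
    return {
        "init_image": entry["seed_image"],
        "prompt": f"{entry['identity']}, {action}, candlelit dining room",
        "negative_prompt": entry["negative"],
    }

req = build_shot_prompt("grandmother", "raising a cup, hiding a smile")
print(req["prompt"])
```

Centralizing identity this way meant a continuity fix (say, the cardigan's color) was a one-line change instead of a hunt through dozens of ad-hoc prompts.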


Accomplishments We're Proud Of

We created an AIGC film that feels human. Festival programmers have called it "the first AI short that made me cry"—not because it's perfect, but because its imperfections serve the story. The dialect work is unprecedented: we've mapped a linguistic geography of Greater China that feels lived-in, not generated. We also cracked a workflow for emotional continuity in AI cinema, sharing our prompts and seed strategies in an open-source repository. Most importantly, we proved that a ten-minute film about a drinking game can hold the weight of grief and joy simultaneously.


What We Learned

AI is a co-writer, not a ghostwriter. The best moments came from letting the model surprise us—an unexpected gesture, a pause it invented—then building on that.

Constraints are creative fuel. The single location, candlelight, and game structure gave us narrative guardrails that AI desperately needs.

Cultural specificity scales. We worried the game mechanics would be opaque to Western audiences, but the emotional logic is universal. The more specific we got with dialect and ritual, the more resonant the story became.


What's Next for Garden of Kin

We're developing it into an anthology series—each episode a different Chinese family ritual that becomes a truth-telling device (Mahjong, tea ceremony, karaoke). We're also building a real-time "Gardens" game engine where viewers can input their own categories and watch AI-generated vignettes unfold, creating an interactive documentary about global family dynamics.

Future roadmap:

  • Phase 1: Anthology series (2025)
  • Phase 2: Interactive engine (2026)
  • Phase 3: Feature film via open-production model (2027)

The destination remains the same: using emerging tech to tell timeless stories about going home.

Built With

  • capcut
  • gagaai
  • hailuoai
  • klingai
  • suno
  • tapnow
  • viduai