Inspiration

Late one night in November 2025, I found old screenshots of my 90s Tamagotchi on my phone. I smiled at the memory, then realized something profound: I cared more about that pixelated pet than I do about my own mental health. Why? Because Tamagotchi was fun. Simple. Gave instant feedback. Made me feel needed. Meanwhile, every journaling app I've tried lasts three days max. They feel like homework. They judge me with blank pages. They offer nothing in return.

What if mental wellness could feel like raising a Tamagotchi? The next morning, I saw Kiroween's theme: Resurrection. Perfect. Tamagotchi died in the early 2000s, but its soul, the joy of nurturing something small and meaningful, never should have.

The breakthrough came while reading Japanese haiku: some emotions can't be captured in direct language. When you're anxious, you don't think "I feel anxious"; you think in images, weight, shadows. That's when it clicked: WordGotchi wouldn't just store emotions. It would transform them into art and poetry.

What it does

WordGotchi is an AI pet that eats your emotions and turns them into beauty.

Core Experience:

- Feed your feelings: Type any emotion (frustration, joy, confusion, sadness)
- Interactive eating: Words scatter across the screen; you click each one to feed your pet
- Watch it grow: Your pet evolves based on how many words it consumes
- Receive gifts: After eating, your pet generates:

  - Abstract art (via Stable Diffusion): sadness becomes blue depths, joy becomes golden bursts
  - Original poetry (via Claude): metaphorical reflections of feelings you couldn't express

The Magic:

- Every word you feed is analyzed by Claude for 7 emotions (joy, sadness, anger, fear, surprise, disgust, trust)
- Your pet's personality shifts based on your emotional patterns
- Combos reward fast clicking with special animations and particle effects
- All data stays in your browser, completely private
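As a rough sketch of how per-word emotion scores could drive a shifting personality, the pet might keep a running average over Claude's 7-axis analysis. The type names, the `alpha` weight, and the exponential-moving-average rule below are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical 7-axis emotion vector matching the categories Claude returns.
type EmotionScores = {
  joy: number; sadness: number; anger: number; fear: number;
  surprise: number; disgust: number; trust: number;
};

const EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "disgust", "trust"] as const;

// Blend the latest word's analysis into the pet's running personality profile.
// alpha controls how quickly the pet's mood shifts toward recent feedings.
function updatePersonality(
  current: EmotionScores,
  latest: EmotionScores,
  alpha = 0.2
): EmotionScores {
  const next = { ...current };
  for (const e of EMOTIONS) {
    next[e] = (1 - alpha) * current[e] + alpha * latest[e];
  }
  return next;
}
```

With a small `alpha`, one sad entry nudges the pet's mood rather than flipping it, which matches the "personality shifts based on your emotional patterns" behavior described above.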

How we built it

Tech Stack:

- Frontend: React, TypeScript
- Canvas Animation: Konva.js for the character, Framer Motion for UI
- Backend: Python, FastAPI
- AI Integration:
  - Claude API (emotion analysis, poetry generation)
  - Gemini Imagen API (abstract art generation)
- Storage: Browser localStorage (hackathon MVP)
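For the localStorage MVP, persistence can be sketched as plain JSON round-tripping. The storage key, the `PetState` shape, and the injected `KVStore` interface are hypothetical; in the browser, `window.localStorage` satisfies the interface directly:

```typescript
// Minimal Storage-like interface so the logic also runs outside the browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Illustrative pet state; the real state likely carries more fields.
interface PetState {
  wordsEaten: number;
  stage: number;
}

const STORAGE_KEY = "wordgotchi:pet";

function savePet(store: KVStore, pet: PetState): void {
  store.setItem(STORAGE_KEY, JSON.stringify(pet));
}

function loadPet(store: KVStore): PetState {
  const raw = store.getItem(STORAGE_KEY);
  // Fresh pet if nothing is saved yet, or if the saved data is corrupt.
  if (raw === null) return { wordsEaten: 0, stage: 0 };
  try {
    return JSON.parse(raw) as PetState;
  } catch {
    return { wordsEaten: 0, stage: 0 };
  }
}
```

Keeping everything behind `KVStore` also makes the "all data stays in your browser" promise easy to verify: no network call ever touches the pet state.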

Development Timeline:

- Days 1-2: Canvas setup, basic character animation
- Days 3-4: Claude integration, emotion analysis engine
- Days 5-6: Art/poetry generation, visual effects
- Day 7: Interactive word-clicking system (the "aha" moment)
- Day 8: Evolution system
- Days 9-10: Polish, demo video, documentation

Challenges we ran into

  1. Making "eating words" feel magical
     Early prototypes were boring: words just disappeared, with no satisfaction. Solution: a particle physics system. Words dissolve into light particles that flow into the character with trailing effects, plus "nom" sounds and screen flashes on consumption. Suddenly it felt alive.
  2. Passive watching is boring
     Users initially just submitted text and waited. During testing, I caught myself impatiently clicking the scattered words even though they weren't interactive yet. Solution: I rebuilt the entire feeding system overnight. Now YOU click words to feed your pet, with combos for speed, and each word flies toward the character trailing particles. That transformed passive observation into addictive gameplay.
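The particle pull from the first challenge can be sketched as a per-frame update that accelerates each particle toward the pet and fades it out. The constants and field names below are illustrative, not the actual Konva code:

```typescript
interface Particle {
  x: number; y: number;   // position in canvas pixels
  vx: number; vy: number; // velocity in px/s
  life: number;           // 1 = fully visible, 0 = removed
}

// Advance one particle by dt seconds, pulling it toward a target point
// (e.g. the pet's mouth) and fading it out as it travels.
function stepParticle(p: Particle, target: { x: number; y: number }, dt: number): Particle {
  const dx = target.x - p.x;
  const dy = target.y - p.y;
  const dist = Math.hypot(dx, dy) || 1; // avoid division by zero at the target
  const pull = 600; // assumed acceleration toward the pet (px/s^2)
  const vx = p.vx + (dx / dist) * pull * dt;
  const vy = p.vy + (dy / dist) * pull * dt;
  return {
    x: p.x + vx * dt,
    y: p.y + vy * dt,
    vx, vy,
    life: Math.max(0, p.life - dt * 1.5), // fade out over roughly 0.7s
  };
}
```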
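And the combo mechanic from the rebuilt feeding system reduces to a small timing check: clicks close enough together chain into a streak, otherwise the streak resets. The 800 ms window is an assumed value:

```typescript
const COMBO_WINDOW_MS = 800; // assumption: max gap between chained clicks

interface ComboState {
  streak: number;
  lastClickAt: number; // ms timestamp; -Infinity before the first click
}

// Register a word click at time `now` (ms) and return the new combo state.
function registerClick(state: ComboState, now: number): ComboState {
  const chained = now - state.lastClickAt <= COMBO_WINDOW_MS;
  return { streak: chained ? state.streak + 1 : 1, lastClickAt: now };
}
```

The UI layer can then key special animations and particle bursts off thresholds of `streak`, which is what makes fast clicking feel rewarding.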

Accomplishments that we're proud of

  1. 100% of the frontend code generated by Kiro in 10 days
     What would've taken 3 months solo took 10 days with Kiro:

     - Emotion engine: 2 weeks → 2 days
     - Art generation pipeline: 3 weeks → 2 days
     - Animation system: 2 weeks → 1 day

Kiro didn't just speed up development; it let me focus on the emotional experience instead of boilerplate.

  2. The interactive feeding system
     Turning passive watching into active clicking transformed the entire experience. Test users went from "that's cool" to "this is addictive" overnight.
  3. Focusing on what cannot be expressed in words
     By stepping back from the current hype around chatbots, we realized that the feelings language fails to capture are exactly what matter most.

What we learned

  1. Agency transforms experience
     The difference between "watch words disappear" and "click words to feed" seems small. But giving users control turned passive consumption into active joy. People don't want to watch; they want to participate.

  2. Cut features are a success, not a failure
     I killed voice input, social sharing, and 3 evolution stages. Each cut made WordGotchi better by maintaining focus. Knowing what NOT to build is as important as knowing what to build.

What's next for WordGotchi

1. Cross-Platform Mobile App
   We plan to release WordGotchi as a mobile application for both iOS and Android, letting users carry their AI companions in their pockets and interact with them anytime, anywhere.

2. Enhanced Evolutionary Diversity
   We aim to expand the logic behind the "feeding" mechanic. Future updates will introduce a wider variety of visual forms and personalities that adapt dynamically to the specific semantics and sentiment of the words the user feeds the pet.

Built With

React, TypeScript, Konva.js, Framer Motion, Python, FastAPI, Claude API, Gemini Imagen API
