Njiapanda — Hadithi: When Stories Are The Guide They Need
What inspired this
This started closer to home than a hackathon.
Njiapanda was built on International Women's Day 2026. But it did not start there.
It started with people I love — watching financial abuse play out quietly over years. Watching emotional abuse be explained away as personality. Watching physical abuse be survived in silence because there was nowhere obvious to go and no clear first step.
At one point, the first step was my home. Someone close to me needed a few weeks of space — somewhere safe to think, to plan, to breathe — before they could find their way out. I gave them that.
That is not a system. That is one person helping one person.
But it taught me something: the most important thing in that moment was not a hotline number or a legal resource. It was a quiet, trusted path to safety — and someone who knew how to walk it with them.
Njiapanda is an attempt to make that path findable. For everyone. Not just the people who happen to know someone who happens to have a spare room.
The problem we chose to solve
Most GBV support tools are built for people who already know they need help. They assume a survivor who can name what is happening, who has decided to act, who is ready to call a number or fill in a form.
Those tools are important. But they miss the largest group of all.
Abuse does not announce itself. It settles in slowly, quietly, until it starts to feel like your normal. It is that gut feeling you keep pushing down. The nervousness you cannot explain. The discomfort you have learned to move around.
Financial control. Emotional degradation. Isolation from friends and family. Phone monitoring. Millions of people in Kenya experience these as normal relationship problems — because no one has ever named them as anything else.
No hotline reaches a person at that moment. No shelter. No form.
Named, it becomes real. Unnamed, it stays.
We chose the Creative Storyteller track because stories are how we name things we do not yet have words for.
How we built it — and what actually happened
The honest version of this build is not a clean straight line. It is a series of real constraints navigated in real time.
The original plan was a full Google Cloud stack — Vertex AI for Gemini, Imagen 3 for illustrations, Cloud Run for the backend agents, the whole thing hosted on GCP.
What actually happened:
Cloud Run required billing. I did not have GCP credits. Imagen 3 image generation kept failing — the API returned errors, the content moderation blocked prompts that were emotionally descriptive but entirely non-violent, and when it did work, I had no credits to sustain it.
So I made pragmatic decisions, one at a time.
Cloud Run → Firebase Hosting. Firebase Hosting has a permanent free tier. No billing account required. No credit card. The entire React frontend is deployed at njiapanda-v2.web.app — a live Google Cloud URL, provable in the Firebase console.
Vertex AI → Google AI Studio. The AI Studio API key calls the same Gemini models. The inference is identical. The authentication is simpler. For a hackathon on a zero budget, this was the right call.
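To make the swap concrete, here is a minimal sketch assuming the official @google/generative-ai SDK and an environment variable named GEMINI_API_KEY. The variable name and the prompt are illustrative, not taken from the repo:

```typescript
// Minimal sketch of the Vertex-to-AI-Studio swap: same Gemini model family,
// authenticated with a plain API key instead of a GCP service account.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

// Streamed generation, so story blocks can render progressively.
const result = await model.generateContentStream(
  "Tell a short story about Zawadi in Mathare.",
);
for await (const chunk of result.stream) {
  process.stdout.write(chunk.text());
}
```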
Imagen 3 → Picsum placeholder images. When image generation kept failing and credits ran out, I kept the [IMAGE:] marker pipeline intact and swapped the image source to seeded Picsum placeholders. The image source is the only thing that would change in a production deployment with proper credits.
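For illustration, here is one way that swap can work, changing only the URL an [IMAGE:] marker resolves to. The helper name and dimensions are assumptions; Picsum's seeded endpoint is real and returns the same image for the same seed, so illustrations stay stable across re-renders:

```typescript
// Illustrative helper: map an inline [IMAGE: ...] marker's prompt to a
// deterministic placeholder URL. Same seed -> same image, every render.
function placeholderUrl(imagePrompt: string, width = 800, height = 450): string {
  const seed = encodeURIComponent(imagePrompt.slice(0, 40));
  return `https://picsum.photos/seed/${seed}/${width}/${height}`;
}

// placeholderUrl("Zawadi counting coins by lamplight")
//   -> "https://picsum.photos/seed/Zawadi%20counting%20coins%20by%20lamplight/800/450"
```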
None of these were compromises on the core idea. The interleaved multimodal stream works. The story generates. The illustrations appear inline. The community loop saves and surfaces stories. Everything that makes Hadithi what it is — that all works.
The infrastructure underneath it is just honest about what a solo builder with no cloud credits could ship in a hackathon.
The technical stack — as actually built
| Layer | Technology | Purpose |
|---|---|---|
| AI storytelling | Gemini 2.0 Flash via Google AI Studio | Story generation with [IMAGE:] markers embedded inline |
| Streaming | Interleaved SSE output | Text, images, and audio arrive as a single progressive stream (parsing sketched after this table) |
| Narration | Gemini TTS via response_modalities: ["AUDIO"] | Each paragraph narrated using Gemini's native audio output |
| Hosting | Firebase Hosting on Google Cloud | Frontend live at njiapanda-v2.web.app |
| Backend | Supabase Edge Functions | Story streaming, moderation, and storage |
| Frontend | React + Framer Motion | Story blocks fade in progressively as they arrive |
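The repo's actual parser is not reproduced in this write-up, but splitting the accumulated streamed text on the [IMAGE:] markers named in the table might look roughly like this sketch. The type names and regex are illustrative:

```typescript
// Sketch: split a progressively streamed story into interleaved blocks
// wherever an [IMAGE: ...] marker appears in the text.
type StoryBlock =
  | { kind: "text"; content: string }
  | { kind: "image"; prompt: string };

const MARKER = /\[IMAGE:\s*([^\]]+)\]/g;

function parseBlocks(streamedText: string): StoryBlock[] {
  const blocks: StoryBlock[] = [];
  let last = 0;
  for (const match of streamedText.matchAll(MARKER)) {
    const before = streamedText.slice(last, match.index).trim();
    if (before) blocks.push({ kind: "text", content: before });
    blocks.push({ kind: "image", prompt: match[1].trim() });
    last = match.index! + match[0].length;
  }
  const tail = streamedText.slice(last).trim();
  if (tail) blocks.push({ kind: "text", content: tail });
  return blocks;
}
```

On each SSE chunk, the frontend can re-run this over the text accumulated so far and fade in any newly completed blocks, which is roughly what the progressive Framer Motion reveal in the table would consume.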
What we learned
1. The fictional frame is protective by design. The distance created by fiction is exactly what makes honesty possible. You are not being asked to confess. You are just reading. But if the story resonates, that recognition is yours.
2. Interleaved output changes the emotional experience. A text-only story is easy to read past. When an illustration fades in at the emotional peak of a paragraph and a voice begins reading as the text settles, the story becomes harder to dismiss. The multimodal stream does not just tell the story — it makes it felt.
3. Specificity creates recognition. Generic stories about unnamed women in unnamed cities do not create recognition. Stories about Zawadi in Mathare, or Amina in Mombasa, with specific financial pressures and family dynamics — those do. We spent significant time on the system prompt to get Kenyan cultural specificity right (an illustrative fragment follows this list).
4. Ship honest, not perfect. When image generation failed and credits ran out, the temptation was to stop. The better decision was to keep the structure intact and swap the image source. The judges can see what the feature is designed to do. The placeholder is honest about what a zero-budget solo build could deliver.
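The real system prompt lives in the repo and is not reproduced here, but a fragment in the spirit of lesson 3 might look like the sketch below. Every line is illustrative, assembled from details already mentioned above (the [IMAGE:] markers, the named characters, the non-violent visual register), not copied from the actual prompt:

```typescript
// Illustrative only: the kind of cultural specificity lesson 3 describes.
const SYSTEM_PROMPT = `
You write short fictional stories set in Kenya.
- Use specific places (Mathare, Mombasa) and specific names (Zawadi, Amina).
- Ground conflict in concrete pressures: money, family expectations, isolation.
- Never depict graphic violence; show the quiet patterns a reader might recognise.
- Insert [IMAGE: <scene description>] markers at emotional turning points.
`;
```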
Challenges we faced
GCP billing and credits were the central challenge. The original architecture required Cloud Run and Imagen 3 — both need billing enabled. Without credits, both were blocked. The pivot to Firebase Hosting and AI Studio kept the project moving without compromising the core feature.
Imagen 3 content moderation repeatedly blocked emotionally descriptive prompts that were non-violent. Even abstract descriptions of distress triggered rejections. The style prefix fix — "watercolour illustration, gentle, abstract, soft light —" — helped when the API was accessible, but without credits it was not a path we could rely on for the submission.
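For illustration, the prefix fix amounts to prepending that string to every scene prompt before it reaches the image API. The helper below is a sketch, not the repo's code; only the prefix string itself is quoted from above:

```typescript
// Illustrative: frame emotionally descriptive scenes as abstract art
// direction rather than literal depictions, to pass content moderation.
const STYLE_PREFIX = "watercolour illustration, gentle, abstract, soft light —";

function toImagePrompt(scene: string): string {
  return `${STYLE_PREFIX} ${scene}`;
}

// toImagePrompt("a woman pausing at a doorway, deciding")
//   -> "watercolour illustration, gentle, abstract, soft light — a woman pausing at a doorway, deciding"
```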
Swahili quality was a deliberate decision to get right rather than ship fast. AI-generated Swahili that has not been reviewed by native speakers feels robotic and tonally wrong — which in a crisis support context is harmful, not just imperfect. We shipped English-only with a clear note in the UI: "Kiswahili coming soon — being reviewed by native speakers." This is the responsible choice and worth naming as such.
Supabase edge function secrets do not inherit from local .env files. Every variable must be set explicitly via the CLI. Silent failures during early testing cost time to diagnose.
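In practice that means two things, sketched below: set each secret explicitly with `supabase secrets set`, and read it inside the Deno runtime with `Deno.env.get`. The variable name here is illustrative:

```typescript
// Inside a Supabase Edge Function (Deno runtime). Secrets are read from the
// environment, but only after being set explicitly, e.g.:
//   supabase secrets set GEMINI_API_KEY=your-key
// A local .env file is NOT inherited by deployed functions.
const apiKey = Deno.env.get("GEMINI_API_KEY"); // variable name is illustrative
if (!apiKey) {
  // Fail loudly instead of silently; silent failures are where the lost debugging time went.
  throw new Error("GEMINI_API_KEY is not set. Run `supabase secrets set`.");
}
```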
What we believe
Recognition comes before action — awareness is the first intervention.
AI assists. Humans decide. Always.
Anonymity is not a limitation. It is the design.
Ship honest. Name the constraints. Keep the core intact.
Open source because every country that needs this should be able to build it.
What's next
Hadithi needs Imagen 3 properly integrated when credits allow. Kiswahili needs native speaker review before it goes live. The community story library needs conductor partners to moderate and grow it.
The model that works in Nairobi should be deployable in Kampala, Dar es Salaam, Kigali, and Johannesburg — without starting from scratch.
The repository is MIT-licensed and public at github.com/nashthecoder/njiapanda-support-kenya.
The network grows one story, one conductor, one connection at a time.