🧠 Inspiration
We’re a father–daughter team — Matt (in Barbados) and Katie (in the UK) — building this together over the past few weeks and submitting it on Father’s Day!
This project started as a way for Katie to learn more about AI and large language models, with Matt mentoring on backend and architecture.
But it quickly turned into something bigger: a tool to make Barbados Parliament videos searchable, transparent, and easier for people to engage with — especially those who don’t have the time to scrub through hours of footage.
We wanted to answer a simple question:
"What did they really say?"
And make that answer accessible to anyone in the country.
🔍 What It Does
YuhHearDem takes long, unstructured YouTube recordings from the Barbados Parliament and turns them into a conversational knowledge assistant.
You can ask questions like:
🏥 “What did the Minister of Health say about the sugar tax?”
And YuhHearDem will:
- Search a knowledge graph of topics and entities
- Find the exact moment in a video where it was said
- Provide timestamped links to go straight to the source
- Suggest follow-up questions using graph-based context
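One concrete piece of the flow above, the timestamped deep link, is easy to illustrate. YouTube's watch URLs accept a `t` query parameter giving the playback offset in seconds, so a matched transcript segment can link straight to the moment it was spoken. The function name and the video id below are illustrative placeholders, not YuhHearDem's actual code:

```python
def timestamped_link(video_id: str, seconds: float) -> str:
    """Build a YouTube deep link that starts playback at the matched moment.

    YouTube's watch URLs accept a `t` parameter with the offset in whole
    seconds, e.g. `&t=877s` jumps to 14m37s into the video.
    """
    return f"https://www.youtube.com/watch?v={video_id}&t={int(seconds)}s"

# A hypothetical graph hit starting 877.4 seconds into a sitting:
link = timestamped_link("abc123XYZ", 877.4)
```

The offset is truncated to whole seconds because that is the granularity the `t` parameter supports.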
It's chat-first civic intelligence powered by LLMs and big data.
🛠️ How We Built It
- We ingest YouTube transcripts from parliamentary sessions
- Clean and align them using Gemini Flash
- Extract entities, topics, and relationships into a knowledge graph
- Store graph data and vector embeddings in MongoDB Atlas
- Run hybrid GraphRAG search at query time (graph + vector)
- Serve responses through a chat UI powered by Google ADK
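The hybrid GraphRAG step can be sketched as fusing two ranked result lists, one from graph traversal and one from vector similarity. In practice MongoDB Atlas would produce these lists (e.g. via `$vectorSearch` and graph queries); the sketch below assumes they already exist and merges them with reciprocal rank fusion, a common fusion choice but not necessarily the project's exact method. The segment ids are hypothetical:

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k: int = 60):
    """Fuse several ranked result lists into one.

    Each list is an iterable of ids, best first. A result's fused score is
    the sum of 1 / (k + rank) over every list it appears in, so segments
    that rank well in both the graph and the vector search float to the top.
    """
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

graph_hits = ["seg_42", "seg_07", "seg_19"]   # graph traversal, best first
vector_hits = ["seg_07", "seg_88", "seg_42"]  # embedding similarity, best first
fused = reciprocal_rank_fusion([graph_hits, vector_hits])
```

Here `seg_07` wins because it appears near the top of both lists, which is exactly the behaviour a hybrid search wants: answers supported by both the structured graph and semantic similarity.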
Matt focused on the backend, knowledge representation, and retrieval pipeline.
Katie designed and built the frontend UX — focusing on accessibility and making the interaction feel intuitive and natural.
🧗 Challenges We Ran Into
- Transcript cleanup: YouTube captions are messy — full of stutters, cutoffs, and misalignment.
- Graph consistency: Making sure every node was connected, searchable, and meaningful.
- Async tooling: Getting Gemini, ADK, MongoDB, and the frontend to talk reliably in real time.
- Timezones & time limits: Juggling day jobs, time differences, and still getting this live!
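To give a flavour of the transcript-cleanup problem, a lightweight pre-pass can strip the worst caption noise before an LLM sees it. The patterns below are illustrative only, not the project's actual Gemini Flash pipeline:

```python
import re

def precleaned(caption: str) -> str:
    """A tiny pre-pass over raw YouTube captions (illustrative, not the
    project's actual cleanup): drop bracketed cues like [Music], collapse
    immediate word stutters, and squeeze whitespace."""
    text = re.sub(r"\[[^\]]*\]", " ", caption)  # [Music], [Applause], ...
    # "the the the" -> "the"; \1 is a backreference to the repeated word
    text = re.sub(r"\b(\w+)( \1\b)+", r"\1", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", text).strip()

precleaned("[Music] the the Minister Minister of of Health said said")
```

Real caption misalignment (timestamps drifting from the audio) is harder and is the part that genuinely needs an LLM; regexes only catch the surface noise.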
🏅 Accomplishments That We're Proud Of
- Built an end-to-end pipeline from video to knowledge graph to conversational AI
- Created a system that works at scale and can be reused for other civic datasets
- Learned new tools (Gemini, MongoDB Atlas, ADK) and made them work together
- Watched Katie go from frontend learner to shipping a production-grade AI interface
And of course — submitting this as a family project on Father's Day ❤️
📚 What We Learned
- How to design structured knowledge graphs from messy real-world transcripts
- How to build LLM-powered tools that are actually grounded in data
- How to make chat interfaces that feel like real tools, not tech demos
- That big data + a small team can drive real civic change
🚀 What’s Next for YuhHearDem?
- Expand to cover the Prime Minister’s Office channel and statutory legislation
- Let users track politicians’ stances across sessions and topics
- Add summarization, alerts, and a public mobile version
- Work with journalists and civic groups to make this even more impactful
This is just the beginning.
