Inspiration
In today’s digital landscape, misinformation spreads faster than ever, fueled by biased headlines, viral posts, and fragmented narratives. We wanted to build a tool that empowers users to cut through the noise and access the truth. Inspired by Nabu, the Mesopotamian god of wisdom and writing, Naboo became our answer: a reliable guide in an unreliable world.
What it does
Naboo aggregates information from trusted news outlets, social media platforms, and AI language models, then distills it into concise, unbiased summaries. Whether you’re researching a trending topic or evaluating a controversial issue, Naboo gives you multiple perspectives—so you can make informed decisions with clarity and confidence.
How we built it
Core
- FastAPI 0.115 serves both the JSON API and a minimal HTML developer console.
- Uvicorn acts as the ASGI server, with watchdog for hot reloads during development.
- Python 3.11, with dependencies managed via requirements.txt.
- Google Agent Development Kit (ADK) powers the architecture, using Agent and Tool abstractions for orchestration.
- The OpenAI Responses API handles query enrichment and reframing, vision-based OCR and theme detection, web-search summarization, and high-level news overviews.
- PRAW 7.7.1 ingests Reddit content, with heuristics to exclude noise.
- Requests + ElementTree parse and filter RSS feeds from various news outlets.
- dotenv for environment management, with logging and traceback for debugging and monitoring.

Front End & Dev Console
- A vanilla HTML/CSS/JS developer panel is generated directly from main.py with inline scripts, ideal for local testing and fast iteration.
- A native iOS Swift client (see ios/OCRClient.swift) communicates with the backend using URLSession to send queries and upload images for OCR/theme detection.

Data Flow & Tools
Each agent is modular, allowing for tool fallback and rerouting:
browser_search/credible_search.py Orchestrates OpenAI-enhanced web search and narrative generation based on user queries.
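The query-enrichment step might look something like this sketch; the prompt wording and function names are our assumptions, and the API call follows the standard openai Python SDK Responses shape:

```python
# Hypothetical sketch of query enrichment; not the project's actual code.
def build_enrichment_prompt(query: str) -> str:
    """Reframe a raw user query into a neutral, search-friendly form."""
    return (
        "Rewrite the following question as a neutral web-search query, "
        f"stripping loaded or leading language: {query!r}"
    )

def enrich_query(query: str) -> str:
    """Send the reframing prompt to an OpenAI model and return its answer."""
    from openai import OpenAI  # lazy import; requires OPENAI_API_KEY to be set
    client = OpenAI()
    response = client.responses.create(
        model="gpt-4o-mini",
        input=build_enrichment_prompt(query),
    )
    return response.output_text.strip()
```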
reddit_research/reddit_agent.py Pulls relevant Reddit threads and uses OpenAI to summarize key discussion points. Fallback templates are used when the LLM fails.
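The noise heuristics could be sketched like this; the thresholds and helper names are our assumptions, not the actual filters:

```python
# Illustrative noise filter of the kind reddit_agent.py applies;
# thresholds are assumptions, not the project's actual values.
def is_signal(title: str, score: int, num_comments: int) -> bool:
    """Heuristic: keep upvoted, discussed, non-trivial posts."""
    if score < 10 or num_comments < 5:
        return False
    if len(title.split()) < 4:  # drop one-word or meme-style titles
        return False
    return True

def fetch_threads(subreddit_name: str, limit: int = 20) -> list[str]:
    """Pull candidate threads via PRAW and apply the heuristic filter."""
    import praw  # lazy import so the filter above stays dependency-free
    reddit = praw.Reddit(
        client_id="...", client_secret="...", user_agent="naboo/0.1"
    )
    return [
        post.title
        for post in reddit.subreddit(subreddit_name).hot(limit=limit)
        if is_signal(post.title, post.score, post.num_comments)
    ]
```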
news_perspectives/news_agent.py Aggregates multiple politically diverse RSS feeds, deduplicates articles, and summarizes them with OpenAI. Falls back gracefully if parsing fails.
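The ElementTree parsing and deduplication path reduces to something like this sketch, with illustrative helper names:

```python
# Sketch of RSS parsing + dedup; helper names are illustrative.
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list[dict]:
    """Extract title/link pairs from a standard RSS 2.0 feed body."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": (item.findtext("title") or "").strip(),
            "link": (item.findtext("link") or "").strip(),
        }
        for item in root.iter("item")
    ]

def dedupe(articles: list[dict]) -> list[dict]:
    """Drop repeat stories, matched by normalized title across feeds."""
    seen: set[str] = set()
    unique = []
    for art in articles:
        key = art["title"].lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(art)
    return unique
```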
vision/ocr_tool.py Encodes images to base64 and uses OpenAI vision models for:
- Text extraction (OCR)
- Theme classification (e.g., identifying protest signs, headlines, etc.)
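The encoding step boils down to something like this sketch; the data-URL format is what OpenAI vision inputs accept, though the helper name is our assumption:

```python
# Sketch of the base64 image-encoding step; helper name is illustrative.
import base64

def encode_image(image_bytes: bytes, mime: str = "image/png") -> str:
    """Base64-encode raw image bytes into a data URL for a vision model."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"
```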
Challenges we ran into
Complex Agent Architecture
Building a modular multi-agent system using the Google ADK was more involved than expected. Coordinating tool routing, managing fallbacks, and designing clear boundaries between agents (Reddit, News, Web Search, Overview) required trial and error, especially to avoid duplicated or contradictory outputs.

Noisy, Redundant, or Low-Quality Data
Scraping Reddit and parsing RSS feeds often surfaced irrelevant or low-signal content. We had to design custom filters, like subreddit whitelists, language checks, and political-spectrum balancing, to ensure each agent returned valuable, non-biased perspectives.

Balancing Bias for a Complete Picture
Presenting multiple viewpoints without overwhelming or misleading the user was a key design challenge. We didn’t want to "flatten" everything into one neutral narrative, but we also didn’t want to promote echo chambers. The News Agent in particular required careful tuning to ensure ideological diversity without toxicity.

Integrating FastAPI with SwiftUI
Bridging a Python FastAPI backend with an iOS frontend had its pain points. Handling JSON responses, image uploads, and async behavior from Swift required custom handling, especially for file uploads for OCR and streaming response formats.
Accomplishments that we're proud of
First iOS App + First Time Using Swift
We successfully built and tested a working SwiftUI client that communicates with our Python backend, despite having no prior experience with iOS development.

First Project Using Google’s Agent Development Kit (ADK)
We dove deep into autonomous agent systems and implemented a full pipeline using ADK, including router agents, nested tools, fallbacks, and LLM supervision logic.

Multi-Source Integration
We brought together Reddit, news RSS feeds, web search, vision OCR, and LLMs into a single app, handling coordination, filtering, and summarization in real time.

Built a Real-Time Truth Engine
Naboo isn’t just a demo: it performs live research and outputs explainable summaries. We’re especially proud of how it balances multiple sources and distills them into a cohesive narrative.

No-Framework Frontend with Dev Console
We built a lightweight, fully functional dev/testing console using only HTML, CSS, and JS, served straight from FastAPI. This sped up debugging and allowed rapid iteration without bloated frameworks.
What we learned
How to Build with Autonomous Agents
This was our first time using Google’s Agent Development Kit (ADK), and we gained hands-on experience designing a modular, multi-agent system that routes tasks, handles fallbacks, and collaborates across tools to solve complex problems.

Prompt Engineering & Instruction Design Matters
We learned how small changes in prompts and tool instructions can drastically affect the quality, tone, and accuracy of LLM outputs, especially when juggling multiple agents and input sources.

Misinformation Is a UX Problem Too
It’s not enough to just collect facts: we discovered that presenting truth clearly and neutrally is a design challenge in itself. We had to think deeply about tone, layout, transparency, and trust-building.

How to Balance Bias in Aggregated Content
When pulling from news and social media, bias is inevitable. We learned how to strategically source across the political spectrum and design our agents to show contrasting perspectives without forcing false equivalence.

Backend ↔ Frontend Integration
We learned how to bridge a Python backend (FastAPI) with an iOS client (SwiftUI), including handling JSON requests, image uploads, and asynchronous queries across platforms.

Real-Time Systems Need Smart Defaults
LLM latency, API rate limits, and noisy input taught us the value of fallbacks, caching, and graceful degradation. A system is only as useful as its worst-case behavior, and we learned to plan for that.
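The graceful-degradation pattern we kept reaching for can be sketched like this; the template text and function names are ours, not the project's:

```python
# Sketch of an LLM call wrapped with a static fallback template;
# names and template wording are illustrative assumptions.
import logging

def summarize_with_fallback(
    text: str,
    llm_summarize,
    template: str = "Summary unavailable; {n} sources were collected.",
) -> str:
    """Try the LLM; fall back to a static template on any failure."""
    try:
        return llm_summarize(text)
    except Exception:
        logging.exception("LLM summarization failed; using fallback template")
        return template.format(n=text.count("\n") + 1)
```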
What's next for Naboo
Launch a Public Beta on iOS
We plan to polish the UI and deploy Naboo to TestFlight, opening it up to early users for feedback and iteration.

Add Fact-Checking & Source Transparency
We’ll integrate APIs like PolitiFact or Media Bias/Fact Check to tag sources with credibility and bias ratings, giving users more context at a glance.

Personalization & Topic Tracking
Users will be able to follow topics they care about and receive ongoing, multi-perspective updates as stories evolve, curated by Naboo’s agents.

Live Misinformation Alerts
We’re exploring ways for Naboo to notify users in real time when a viral topic is gaining traction, along with credible summaries to counter disinformation.

Expand Agent Capabilities
We plan to:
- Add X/Twitter thread ingestion
- Build a "fact vs. opinion" classifier agent using fine-tuned models

Partnerships & Impact
We envision Naboo as a tool for civic tech, media literacy, and education. We're exploring partnerships with nonprofits, universities, and journalists to scale our impact.
Built With
- dotenv
- fastapi
- google-adk
- html/css
- javascript
- json
- openaiapi
- python
- swift
- uvicorn
