🌐 Project Story: A Transparent Multi-Agent Framework for Ethical AI

💡 Inspiration

As AI-generated content becomes increasingly widespread, distinguishing trustworthy information from toxic, biased, or misleading outputs is more urgent than ever. We were inspired by this challenge—and guided by our commitment to ethical AI—to build a system that not only identifies harmful content, but does so in a reproducible and transparent way. The vision: empower users with agency over AI-driven insights.

🧠 What it Does

Emakia is a modular, multi-agent Streamlit app powered by Google’s Agent Development Kit (ADK). It detects and correlates toxicity, misinformation, and bias across diverse user inputs. The architecture is intentionally transparent: each agent's output is visible and traceable, giving users the power to understand how judgments are made. Whether data is uploaded or pulled from BigQuery, users receive actionable, explainable insights through a user-friendly interface.

🛠️ How We Built It

  • Agent Design: Built discrete agents for toxicity, misinformation, and bias, each with task-specific logic.
  • Coordination Agent: Synthesizes agent outputs to surface correlations and higher-order risks.
  • Data Integration: Supports CSV upload and Google BigQuery input for scalable content evaluation.
  • Deployment: Streamlit Cloud deployment with secrets and dependencies managed via packages.txt, secrets.toml, and robust GitHub workflows.
  • Visual Feedback: UI shows how each agent contributes to final results, emphasizing transparency.
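
The agent/coordinator split above can be sketched in a few lines of Python. This is a minimal illustration, not Emakia's actual code: the class names, the 0–1 risk scores, and the keyword heuristic standing in for a real model call are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    agent: str
    score: float      # 0.0 (benign) to 1.0 (high risk) -- illustrative scale
    rationale: str    # human-readable explanation, kept for transparency

class ToxicityAgent:
    """Hypothetical discrete agent; a keyword check stands in for the model."""
    name = "toxicity"

    def analyze(self, text: str) -> AgentResult:
        flagged = any(w in text.lower() for w in ("hate", "stupid"))
        return AgentResult(
            self.name,
            0.9 if flagged else 0.1,
            "flagged keywords" if flagged else "no signals",
        )

class CoordinationAgent:
    """Runs each agent and synthesizes their outputs into one traceable result."""

    def __init__(self, agents):
        self.agents = agents

    def evaluate(self, text: str) -> dict:
        results = [a.analyze(text) for a in self.agents]
        # Keep every per-agent result visible alongside the aggregate,
        # so the UI can show how each agent contributed.
        return {"results": results, "overall_risk": max(r.score for r in results)}
```

Returning the per-agent results next to the aggregate is what makes the judgment traceable: the UI can render each agent's score and rationale rather than a single opaque verdict.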

🚧 Challenges We Ran Into

  • Streamlit Deployment: Managing conflicting system-level dependencies (especially with C++ build tools and pip) required extensive debugging and a custom packages.txt.
  • Nested Git Repos: Refactoring for a clean repo history and modular structure introduced Git headaches that needed careful coordination.
  • Agent Orchestration: Defining how agents should interact—without introducing ambiguity or compounding noise—was a nontrivial design challenge.
  • v0.dev Code Validation: Auto-generated agent code needed consistent testing and refactoring to align with the system’s modular conventions.
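
For the Streamlit Cloud dependency issue above, the fix pattern is a `packages.txt` at the repository root listing apt-level packages (one name per line) that Streamlit Cloud installs before pip resolves Python requirements. The specific package names below are assumptions for illustration, not Emakia's actual file:

```text
build-essential
cmake
```

Secrets follow the same convention: a `.streamlit/secrets.toml` kept out of version control locally, with the same keys entered in the Streamlit Cloud app settings and read in code via `st.secrets`.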

🏆 Accomplishments That We're Proud Of

  • Integrated multiple analysis pipelines while maintaining agent interpretability.
  • Created a reproducible, user-friendly deployment on Streamlit Cloud with secure handling of environment secrets.
  • Built an adaptable framework that can support additional agents with minimal refactoring.
  • Facilitated user-driven exploration of ethical risks using real-world data via BigQuery.

📚 What We Learned

  • Multi-agent transparency demands thoughtful orchestration—users need traceable, well-explained outputs to build trust.
  • Reproducibility isn't just an ML concern; it's essential across deployment, UX, and backend design.
  • BigQuery is incredibly powerful, but securing and validating user queries within agent workflows is its own design surface.
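
One way to treat user-query validation as its own design surface, as the last point suggests, is a read-only gate in front of the BigQuery client. This is a hedged sketch, not Emakia's implementation: the allowlisted table name and the keyword denylist are hypothetical, and a production check would parse the SQL rather than pattern-match it.

```python
import re

# Hypothetical allowlist of fully qualified tables users may query.
ALLOWED_TABLES = {"project.dataset.comments"}

def validate_user_query(sql: str) -> bool:
    """Accept only read-only SELECTs over allowlisted tables."""
    stripped = sql.strip().rstrip(";")
    # Must be a SELECT statement.
    if not re.match(r"(?is)^select\b", stripped):
        return False
    # Reject statements containing DML/DDL keywords anywhere.
    if re.search(r"(?i)\b(insert|update|delete|drop|merge|create|grant)\b", stripped):
        return False
    # Every referenced table must be on the allowlist.
    tables = re.findall(r"(?i)\bfrom\s+`?([\w.-]+)`?", stripped)
    return bool(tables) and all(t in ALLOWED_TABLES for t in tables)
```

Only queries that pass the gate would then be sent through the BigQuery client, ideally with user-supplied values bound as query parameters rather than interpolated into the SQL string.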

🔮 What's Next for A Transparent Multi-Agent Framework for Ethical AI

We're planning to:

  • Add agent attribution visualization so users can trace back decisions even further.
  • Expand support for multilingual and multimodal inputs (e.g., PDF, images, and long-form text).
  • Integrate feedback loops where users can flag false positives/negatives to retrain and improve agents.
  • Package Emakia as a plug-and-play module for ethics-focused enterprise and academic use.

Built With

  • bigquery
  • bigquery-api
  • cursor
  • github
  • github-actions
  • google-adk
  • google-colab
  • langchain
  • numpy
  • packages.txt
  • pandas
  • python
  • streamlit
  • streamlit-cloud
  • v0.dev