🌟 What Inspired Us to Build InsightMesh
In many organizations, customer and user feedback is collected—but rarely understood or acted upon at scale. Teams are often overwhelmed by hundreds or thousands of feedback messages scattered across channels, lacking the time or resources to manually process, categorize, or extract insights from them.
We were inspired by this gap between feedback collection and feedback action.
We imagined a system that could act like a human analyst—reading feedback, categorizing it, deciding how important it is, explaining the reasoning behind decisions, and then storing everything in a central dashboard. The idea of building a multi-agent system powered by AI that mimics this entire pipeline excited us.
We also wanted to experiment with cutting-edge tools like Google Cloud Vertex AI, LLMs like Gemini, and RAG (Retrieval-Augmented Generation)—and see how they could solve real-world problems.
At its core, InsightMesh was born from our desire to:
✅ Automate feedback processing intelligently
🧠 Build a transparent, explainable AI system
🗂️ Bring structure and meaning to unstructured user input
☁️ Learn how to use cloud-scale AI tools to build something impactful
It’s more than just a dashboard—it’s a feedback intelligence system designed to help organizations truly listen to their users, with the power of AI.
🚀 What It Does
InsightMesh is an intelligent feedback processing system that:
- Accepts raw user and customer feedback
- Uses AI to:
  - Categorize feedback (e.g., bug, feature request, sentiment)
  - Prioritize it by urgency and importance
  - Explain its reasoning using LLM-based chain-of-thought
  - Retrieve relevant context using RAG (if enabled)
- Logs structured feedback into BigQuery
- Provides a Streamlit-based dashboard to:
  - Analyze the feedback pipeline step by step
  - Visualize insights with graphs
  - Summarize multiple feedback entries using Gemini Pro
  - Manually inject feedback into the system
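The classify → prioritize → log flow can be sketched as a plain Python pipeline. This is a minimal, hedged illustration: in the real system the classification and prioritization steps call Gemini via Vertex AI, so the keyword rules, category names, and `ProcessedFeedback` type here are hypothetical stand-ins.

```python
from dataclasses import dataclass

# Hypothetical keyword rules standing in for the Gemini-based classifier.
CATEGORY_KEYWORDS = {
    "bug": ["crash", "error", "broken"],
    "feature request": ["add", "would be nice", "wish"],
}

@dataclass
class ProcessedFeedback:
    text: str
    category: str
    priority: str

def classify(text: str) -> str:
    """Assign a category; anything unmatched falls back to 'sentiment'."""
    lowered = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "sentiment"

def prioritize(category: str) -> str:
    """Toy policy: bugs are high priority, everything else low."""
    return "high" if category == "bug" else "low"

def process(text: str) -> ProcessedFeedback:
    """Run one feedback message through the (simplified) pipeline."""
    category = classify(text)
    return ProcessedFeedback(text=text, category=category, priority=prioritize(category))
```

A structured record like `ProcessedFeedback` is what would then be written to BigQuery and surfaced in the dashboard.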
🛠️ How We Built It
- Frontend: Streamlit for interactive dashboards
- Backend: Python multi-agent architecture
- Cloud: Deployed on Google Cloud Run, using:
  - Vertex AI for LLMs (Gemini Pro and embedding models)
  - BigQuery for logging and analytics
  - Matching Engine for RAG context retrieval
- Agents:
  - Classification Agent
  - Prioritization Agent
  - Explainer Agent
  - Summarizer Agent
  - Feedback Logger Agent
  - Feedback Viewer Agent
- RAG: Retrieval over a pre-indexed corpus using Gemini embeddings
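The RAG retrieval step can be sketched with a toy in-memory index. This assumes a simplified bag-of-words "embedding" and brute-force cosine search in place of the Gemini embedding model and Matching Engine index the real system uses; the `embed`, `cosine`, and `retrieve` names are illustrative only.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; the real system uses Gemini embedding models.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Brute-force nearest-neighbor search; Matching Engine does this at scale.
    query_vec = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, embed(doc)), reverse=True)
    return ranked[:k]
```

The retrieved snippets would then be passed to the Explainer or Summarizer agent as extra context for the LLM prompt.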
🧗 Challenges We Ran Into
- Configuring Vertex AI credentials and endpoints in Docker and on Cloud Run
- Building a modular agent architecture with clean step flows
- Debugging the RAG integration with Matching Engine (embedding-index access issues)
- Ensuring BigQuery schema compatibility and data integrity
- Managing secret keys and environment variables securely in cloud deployments
🏆 Accomplishments We're Proud Of
- Built a full-fledged feedback intelligence platform from scratch
- Integrated multiple Google Cloud services (Gemini on Vertex AI, Matching Engine, BigQuery)
- Deployed a production-ready system to Google Cloud Run
- Developed a flexible agent framework for future extensions
- Learned to manage authentication, indexing, and inference across GCP services
📚 What We Learned
- A deep understanding of how to build production apps on Vertex AI and Cloud Run
- Hands-on experience with multi-agent system design
- How to perform embedding-based semantic search with Matching Engine
- Working with BigQuery for real-time feedback logging and analytics
- Best practices for Dockerizing and securing cloud apps
🔮 What's Next for InsightMesh
🔗 Integrate real-time feedback sources (Slack, Gmail, support forms)
🤖 Train custom classification models using Vertex AutoML
🧾 Improve RAG pipeline with custom corpus expansion and fine-tuned chunking
📈 Add user feedback rating and sentiment tracking
🛡️ Add role-based access control and login to the dashboard
🌍 Support multilingual feedback processing using Gemini 1.5 Pro