Inspiration
Network operation centers often see a "sea of green" on their technical dashboards, while a storm of customer complaints builds on social media. The problem is a total disconnect between network metrics (the "what") and customer sentiment (the "why"). We were inspired to build a system that fuses these two worlds, moving from reactive fire-fighting to proactive problem-solving by understanding the true, real-time customer experience across different regions.
How we built it
We built "InsightsFlow" as a complete, end-to-end system with four distinct, communicating components:
The Data Simulator (simulator.py): We first needed a realistic "world" to monitor. We built a Python simulator that serves a live JSON API. It simulates regional network metrics (latency, loss) and, most importantly, uses the Google Gemini API to generate a continuous stream of realistic, context-aware customer feedback, including tweets and support call summaries.
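The metrics half of that simulator can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the region names, metric ranges, and port are assumptions, and the Gemini-backed feedback generation is omitted.

```python
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

REGIONS = ["Dallas", "Austin", "Houston"]  # illustrative region names

def generate_tick():
    """Produce one snapshot of simulated regional network metrics."""
    return {
        region: {
            "latency_ms": round(random.uniform(20, 120), 1),
            "packet_loss_pct": round(random.uniform(0.0, 2.5), 2),
        }
        for region in REGIONS
    }

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves the latest simulated metrics as a live JSON API."""

    def do_GET(self):
        body = json.dumps(generate_tick()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), MetricsHandler).serve_forever()
```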
The AI Agent (agent_listener.py): This is the "brain" of our project. It's a multi-agent Python application that polls the simulator every 5 seconds.
Perception Agent: It takes the text from a tweet or call and uses NVIDIA's Nemotron model (via OpenRouter) to analyze it, extracting sentiment, topic (e.g., "network_signal," "billing," "app_functionality"), and urgency.
Happiness Tracker: This stateful agent calculates a "Happiness Score" for each region, maintaining both short-term and long-term moving averages to spot trends.
Orchestrator Agent: This agent bundles all the data—network metrics, analyzed sentiment, and the current happiness trend—and feeds it back to Nemotron, prompting it to make a high-level, structured JSON decision, such as {"action": "send_alert", "parameters": ...}.
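The Happiness Tracker's two moving averages can be sketched as a small stateful class. The window sizes, score scale, and trend labels below are our illustrative assumptions, not the project's exact values:

```python
class HappinessTracker:
    """Tracks a per-region Happiness Score with short- and long-term averages.

    Window sizes and trend labels are illustrative assumptions.
    """

    def __init__(self, short_window=5, long_window=30):
        self.short_window = short_window
        self.long_window = long_window
        self.history = {}  # region -> recent sentiment scores, newest last

    def update(self, region, sentiment_score):
        scores = self.history.setdefault(region, [])
        scores.append(sentiment_score)
        del scores[:-self.long_window]  # keep only the long-term window

    def averages(self, region):
        scores = self.history.get(region, [])
        if not scores:
            return None, None
        short = scores[-self.short_window:]
        return sum(short) / len(short), sum(scores) / len(scores)

    def trend(self, region):
        """'declining' when the short-term average dips below the long-term one."""
        short, long_term = self.averages(region)
        if short is None:
            return "unknown"
        return "declining" if short < long_term else "stable"
```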
The Reporting Server (reporter_with_storage.py): To bridge our backend agent to our frontend, we built a lightweight Python http.server. The AI Agent POSTs its final report (the raw data + its decision) to this server, which acts as a simple, in-memory data cache.
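A reporting server of this shape is only a few dozen lines with `http.server`. The sketch below assumes our report format and an arbitrary cap on cached reports; it is a simplified stand-in for the real file:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LATEST_REPORTS = []  # in-memory cache; lost on restart, fine for a demo

class ReportHandler(BaseHTTPRequestHandler):
    """POST stores a report from the AI agent; GET returns the cached list."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        LATEST_REPORTS.append(json.loads(self.rfile.read(length)))
        del LATEST_REPORTS[:-100]  # cap memory use (cap size is an assumption)
        self.send_response(204)
        self.end_headers()

    def do_GET(self):
        body = json.dumps(LATEST_REPORTS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8100), ReportHandler).serve_forever()
```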
The Live Dashboard (streamlit_dashboard.py): We used Streamlit to create a real-time operations dashboard. It polls our reporting server every 5 seconds, processes the data with Pandas, and visualizes everything. This includes live-updating time-series charts for network latency and customer happiness, as well as a real-time log of every proactive decision the AI agent makes.
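The Pandas step in the middle of that pipeline looks roughly like the sketch below: flatten each polled report into rows, then pivot into one column per region so the result drops straight into `st.line_chart`. The report field names here are assumptions for illustration:

```python
import pandas as pd

def reports_to_frames(reports):
    """Flatten agent reports (field names assumed) into chart-ready DataFrames."""
    rows = []
    for report in reports:
        for region, metrics in report["metrics"].items():
            rows.append({
                "timestamp": report["timestamp"],
                "region": region,
                "latency_ms": metrics["latency_ms"],
                "happiness": report["happiness"][region],
            })
    df = pd.DataFrame(rows)
    # One column per region, indexed by time: the shape st.line_chart expects.
    latency = df.pivot(index="timestamp", columns="region", values="latency_ms")
    happiness = df.pivot(index="timestamp", columns="region", values="happiness")
    return latency, happiness
```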
Challenges we ran into
Real-Time Synchronization: Our single greatest challenge was getting the data simulator and the AI agent to run in real-time. This required managing multiple asynchronous processes and ensuring the agent could poll, analyze, and make a decision before the next batch of data was generated, all without bottlenecks.
LLM Rate Limiting: We were using the free tier of the Gemini API for simulation, which meant we quickly ran into API rate limits. This forced us to be creative, batching our content generation (e.g., generating tweets every 3 ticks, not every tick) to keep the simulation flowing smoothly.
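The batching trick reduces to a tick counter. A minimal sketch (the split between "cheap" and "expensive" generators is from the write-up; the function name is ours):

```python
TWEET_EVERY_N_TICKS = 3  # generate LLM content every 3rd tick, per the write-up

def plan_tick(tick_number):
    """Decide which generators run on a given simulator tick.

    Local metrics are cheap and run every tick; Gemini-backed tweet
    generation is rate-limited, so it only runs every third tick.
    """
    return {
        "metrics": True,
        "tweets": tick_number % TWEET_EVERY_N_TICKS == 0,
    }
```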
Realistic Scenario Design: It was surprisingly difficult to design meaningful "what-if" scenarios. Crafting the "Dallas Anomaly" (great signal, bad billing sentiment) required careful prompt engineering to test if our agent could really distinguish between a network problem and a customer service problem.
Real-Time Visualization: Finally, getting the Streamlit dashboard to continuously poll our reporting server and update multiple charts in real-time without flickering or slowing down was a small but important hurdle to creating a smooth user experience.
Accomplishments that we're proud of
Successfully Implementing Nemotron: We are incredibly proud of using NVIDIA's Nemotron to power the entire agentic part of our application. We were able to get it to reliably perform complex analysis and, most importantly, return structured JSON for its decisions, which was the key to automating the workflow.
End-to-End Integration: Our biggest accomplishment is seeing all four components work together in harmony. Watching the Gemini simulator create an issue, the Nemotron agent catch it, and the Streamlit dashboard display the resulting alert seconds later was a fantastic moment.
Distilling Data into Insight: The system works. It successfully finds nuanced customer happiness problems—like the "billing" issue in Dallas that had nothing to do with the network—that a human operator would easily miss in a sea of raw data. It effectively distills the massive task of data analysis into small, actionable insights.
What we learned
Agents are the Future: This project showed that an agentic approach is a natural fit for the problem. Breaking it into specialized agents (Perception, State, Orchestration) was the key to building such a complex and powerful system.
JSON is the Key: Forcing an LLM to respond in structured JSON is the "superpower" that connects the model's reasoning to a program's concrete actions.
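In practice, this means defensively extracting and validating the model's JSON before acting on it. A minimal sketch, assuming the `{"action": ..., "parameters": ...}` shape from our orchestrator; the allowed-action set and fallback policy are illustrative:

```python
import json

ALLOWED_ACTIONS = {"send_alert", "monitor", "escalate"}  # illustrative set

def parse_decision(llm_text):
    """Extract and validate a structured decision from raw LLM output.

    Models sometimes wrap JSON in prose or code fences, so we grab the
    outermost {...} span before parsing, and fall back to a safe no-op
    decision on anything malformed or unexpected.
    """
    fallback = {"action": "monitor", "parameters": {}}
    start = llm_text.find("{")
    end = llm_text.rfind("}")
    if start == -1 or end <= start:
        return fallback
    try:
        decision = json.loads(llm_text[start:end + 1])
    except json.JSONDecodeError:
        return fallback
    if decision.get("action") not in ALLOWED_ACTIONS:
        return fallback
    decision.setdefault("parameters", {})
    return decision
```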
Context is Everything: We proved that network metrics alone are a poor indicator of the true customer experience. Without the context from customer sentiment, you're missing the "why."
What's next for InsightsFlow
Deeper Sentiment Analysis: We plan to analyze the tonality and emotion from customer call audio, adding this "emotional score" to our Happiness Calculator for a much deeper signal.
Real-World Integration: We will replace the simulator with live data from the X (Twitter) API and internal tools like ServiceNow or Jira.
Unified Dashboarding: We aim to integrate our "Happiness Score" directly into industry dashboards like Grafana, placing it alongside traditional network metrics.
Automated Actions: We will empower the agent to take real action based on its decisions, like automatically creating a P1 ticket in PagerDuty or sending an alert to a Slack channel.
