Inspiration

We wanted to build an agent that doesn’t just react—it listens, looks at live data, and gets better over time. Voice support is a perfect fit: customers speak, metrics (ticket volume, SLA) change in real time, and the right decision (respond vs escalate) should depend on both. The Self Improving Agents Hack pushed us to wire three sponsor tools into one pipeline and add a real feedback loop.

What it does

Voice Support Agent takes a customer transcript (or voice input), runs it through Modulate for voice/emotion/safety insights, pulls Lightdash metrics (e.g. open tickets, support volume), and sends both to an Airia agent that decides whether to respond in-app or escalate. After each interaction, we record the outcome; the system self-improves by adjusting escalation thresholds from the last several outcomes—no manual tuning.

How we built it

We used a Node/Express backend that orchestrates three APIs: Modulate (voice insights), Lightdash (metric query v2), and Airia (agent invoke via REST). A single pipeline runs on each support event: fetch insights and metrics in parallel, call the Airia agent with that context and current thresholds, then return a suggested action. Outcomes are stored and fed into a small self-improvement module that updates “escalate if open_tickets > X” (and similar rules) from recent success/escalation history. The demo UI is a simple static page that hits the pipeline, records outcomes, and shows live vs mock API status and recent runs.
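The per-event pipeline above might look roughly like this (a sketch, not the actual implementation: the three fetcher functions and their shapes are hypothetical stand-ins for the Modulate, Lightdash, and Airia calls, injected so the flow also runs against mocks):

```javascript
// Hypothetical pipeline: fetch voice insights and Lightdash metrics in
// parallel, then pass both, plus current thresholds, to the agent.
async function runPipeline(transcript, { getInsights, getMetrics, invokeAgent, thresholds }) {
  const [insights, metrics] = await Promise.all([
    getInsights(transcript), // e.g. Modulate voice/emotion/safety analysis
    getMetrics(),            // e.g. Lightdash open-ticket counts
  ]);
  const decision = await invokeAgent({ transcript, insights, metrics, thresholds });
  return { insights, metrics, decision };
}

// Mock wiring of the same flow, as used when API keys aren't set:
const mocks = {
  getInsights: async () => ({ emotion: "frustrated" }),
  getMetrics: async () => ({ open_tickets: 42 }),
  invokeAgent: async (ctx) =>
    ctx.metrics.open_tickets > ctx.thresholds.open_tickets ? "escalate" : "respond",
  thresholds: { open_tickets: 30 },
};
```

Injecting the fetchers is one simple way to get the "runs with mocks when keys aren't set" behavior mentioned below: the pipeline itself never needs to know which backend it is talking to.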

Challenges we ran into

Mapping each sponsor’s API (auth, request shape, base URLs) and fitting them into one coherent flow took iteration. Making the “self-improving” part visible and demo-friendly in a short time was another challenge: we focused on one clear mechanism (threshold updates from outcome history) and exposed it in the UI and state API.

Accomplishments that we're proud of

We integrated all three sponsor tools in one narrative: voice (Modulate) and metrics (Lightdash) feeding an agent (Airia) that acts without a human in the loop, plus a real feedback loop that changes behavior from outcomes. The app runs end-to-end with mocks when keys aren’t set, so the full flow is demoable anywhere.

What we learned

How to design a minimal but convincing self-improvement loop (thresholds + outcome history), and how much clarity matters when demoing three APIs in three minutes—one flow, one story.

What's next for Voice Support Agent

Real voice in/out (e.g. Modulate + TTS), more Lightdash metrics and filters, and richer Airia prompts so the agent can explain its reasoning. We’d also add A/B-style logging to validate that threshold updates actually improve outcomes over time.
