Inspiration

We were inspired by the frustration of seeing headlines that oversimplify or sensationalize complex stories—especially around timely issues like the recent slew of executive orders. Often, a headline sparks curiosity, but digging deeper takes time and effort. We wanted something that could respond to that instinctive moment—“Hey, I saw this headline and I want to know more.” Iris was created to make that kind of deeper engagement effortless. Just ask out loud, and it calls you back with a balanced summary of the broader picture.

What We Learned

Building Iris taught us how to bridge voice interaction with real-time research and summarization. We explored VAPI's voice AI platform, honed our prompt engineering for effective summarization, and tackled the challenge of making complex research sound natural and concise over a phone call.

How We Built It

Voice Input: Users speak their query via phone using VAPI’s inbound call API.

NLU & Research: We parse the transcription and trigger automated research using web search APIs, prioritizing diverse viewpoints.

Summarization: A large language model synthesizes findings into a short, clear summary.

Voice Callback: VAPI initiates an outbound call. Iris delivers the summary and answers follow-ups using the research data in memory.
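The research and summarization steps above can be sketched in the backend's own language. This is a minimal illustration, not Iris's actual code: the `Article` shape, the stance labels, and both function names are assumptions made for the example, and the real system would feed the prompt to the GPT API rather than just build it.

```typescript
// Illustrative sketch of the "diverse viewpoints" research step and the
// phone-friendly summarization prompt. All names here are hypothetical.
interface Article {
  source: string;
  stance: "left" | "center" | "right"; // assumed labeling from research step
  excerpt: string;
}

// Gate summarization on genuinely varied coverage: require at least one
// article from each stance before building the prompt.
function coversDiverseViewpoints(articles: Article[]): boolean {
  const stances = new Set(articles.map((a) => a.stance));
  return (["left", "center", "right"] as const).every((s) => stances.has(s));
}

// Build a summarization prompt tuned for voice delivery: a few short,
// spoken-style sentences that represent each viewpoint fairly.
function buildSummaryPrompt(query: string, articles: Article[]): string {
  const sources = articles
    .map((a) => `- ${a.source} (${a.stance}): ${a.excerpt}`)
    .join("\n");
  return [
    `Summarize news coverage of: "${query}".`,
    "Write 3-4 short sentences suitable for reading aloud on a phone call.",
    "Represent each viewpoint below fairly and note where they disagree:",
    sources,
  ].join("\n");
}
```

The resulting prompt string would then go to the LLM, and the summary text to VAPI's outbound call for spoken delivery.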

Challenges

Voice UX Design: Keeping summaries concise yet informative for phone delivery.

Diversity of Sources: Ensuring our research covered genuinely varied viewpoints.

Time Constraints: We scoped tightly around one query type—news viewpoint analysis—to ship within the hackathon timeframe.

Built With

VAPI (Voice AI Platform)

OpenAI GPT API (LLM summarization)

Apify (crawling & research)

Node.js & TypeScript (backend)

MCP (Model Context Protocol)
