Inspiration

Every team I've ever been part of runs into the same wall. You get everyone on a call, there's energy, ideas are flying — and then someone asks, "Okay so what do we actually do?" And suddenly the call goes quiet.

Not because people don't care. But because nobody has the context. Nobody remembers what was decided last week. The one person who does is talking too fast. Someone's writing notes in a doc nobody will read. A decision gets made, everyone nods, the call ends — and two days later half the team is executing on completely different assumptions.

I've lived that chaos more times than I can count. And the more I looked at the tools out there — Otter, Fireflies, all the rest — the more frustrated I got. They just watch. They transcribe and disappear. They don't help you think, they don't push back on weak ideas, they don't remind you of what your team actually decided last month, and they definitely don't go do anything after the meeting ends.

That frustration is where Lira came from.


What it does

Lira is an AI meeting participant — not a recorder, not a bot that pastes a summary into Slack. An actual participant. It joins the call, listens in real-time, responds when addressed, and contributes to the conversation the same way a well-prepared teammate would.

The thing that makes it different is the layer underneath. Lira doesn't just know generic things — it knows your company. You connect your organization, describe your products, your team, your terminology. You can crawl your website, upload your docs, drop in PDFs and Notion exports. All of that gets embedded, indexed, and pulled in as context when Lira joins your meeting.

So when you're mid-call and you ask "Hey Lira, what did we decide about the pricing model?" — it doesn't guess. It actually knows.

And after the meeting? Lira extracts action items, creates tasks, fires off webhooks, notifies Slack channels. It's not just a note-taker. It's the follow-through that teams are always missing.

We also built an interview feature — Lira can conduct structured interviews, ask follow-up questions based on candidate responses, and produce scored evaluations. No scheduling back-and-forth, no human needed in the room.
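The evaluation side of the interview feature can be pictured as a simple aggregation over per-question scores. This is an illustrative sketch only — the `ScoredAnswer` shape, the criteria names, and the 1–5 scale are made-up stand-ins, not Lira's actual schema:

```typescript
// Assumed shape: each answered question gets per-criterion scores from the model.
interface ScoredAnswer {
  question: string;
  scores: Record<string, number>; // e.g. { clarity: 4, depth: 3 } on a 1-5 scale
}

// Aggregate per-criterion averages into a final evaluation summary.
function evaluateInterview(answers: ScoredAnswer[]) {
  const totals: Record<string, { sum: number; n: number }> = {};
  for (const answer of answers) {
    for (const [criterion, score] of Object.entries(answer.scores)) {
      const entry = (totals[criterion] ??= { sum: 0, n: 0 });
      entry.sum += score;
      entry.n += 1;
    }
  }
  const byCriterion = Object.fromEntries(
    Object.entries(totals).map(([c, { sum, n }]) => [c, sum / n] as [string, number])
  );
  const values = Object.values(byCriterion);
  const overall = values.reduce((a, b) => a + b, 0) / values.length;
  return { byCriterion, overall };
}
```

The interesting part in production is upstream of this — deciding which follow-up question to ask next — but the scored output is what hiring teams actually consume.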


How we built it

The core of Lira is Amazon Nova Sonic — a speech-to-speech model that lets Lira actually talk in meetings, not just respond via text. The frontend is built in React with WebRTC handling the real-time bidirectional audio stream.
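One small but unavoidable piece of that audio path is sample-format conversion: the browser hands you Float32 samples in [-1, 1], while speech models typically consume 16-bit PCM. A minimal sketch of that conversion — the function name is ours, and 16-bit little-endian PCM is an assumption about the model's input format, not a documented claim:

```typescript
// Browser audio (Web Audio / WebRTC) arrives as Float32 samples in [-1, 1];
// speech-to-speech models typically expect 16-bit PCM. Convert one frame.
function floatTo16BitPCM(frame: Float32Array): Int16Array {
  const out = new Int16Array(frame.length);
  for (let i = 0; i < frame.length; i++) {
    // Clamp first: clipped input samples can slightly exceed [-1, 1].
    const s = Math.max(-1, Math.min(1, frame[i]));
    // Negative range is one step wider than positive in two's complement.
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}
```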

On the backend we're running a Fastify API on EC2, with DynamoDB for meeting and connection state, S3 for document storage, and Qdrant as our vector database for semantic search over organization knowledge. Every document a user uploads goes through a RAG pipeline — chunked, embedded with OpenAI's text-embedding-3-small model, and stored in Qdrant. When a meeting starts, the right context gets assembled and injected into Lira's system prompt in a token-budget-aware way, so the model doesn't get overloaded.
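The chunking step at the front of that pipeline is conceptually simple. A naive sliding-window version looks something like this — the default sizes here are illustrative, not our production values:

```typescript
// Naive sliding-window chunker: fixed-size chunks with overlap, so a sentence
// cut at one boundary still appears whole inside the neighboring chunk.
function chunkText(text: string, size = 800, overlap = 100): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk then gets embedded and written to Qdrant with its organization ID as metadata, which is what keeps retrieval scoped to one company's knowledge.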

The task execution engine uses GPT-4o to extract structured action items from meeting transcripts, then routes them — to a project management webhook, a Slack notification, or whatever the team has configured.
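In spirit, the routing step looks like the sketch below. The `ActionItem` and `RouteTarget` shapes are illustrative stand-ins for the real schema, and actual delivery would be an HTTP POST per target — here we only shape the outbound requests:

```typescript
// Assumed shape of one extracted action item (illustrative, not the real schema).
interface ActionItem {
  title: string;
  assignee?: string;
}

// One configured destination: a generic webhook or a Slack channel hook.
interface RouteTarget {
  kind: "webhook" | "slack";
  url: string;
}

// Fan each extracted item out to every configured target, formatting the
// payload per destination. Slack gets a message; webhooks get the raw item.
function routeActionItems(items: ActionItem[], targets: RouteTarget[]) {
  return targets.flatMap((target) =>
    items.map((item) => ({
      url: target.url,
      body:
        target.kind === "slack"
          ? {
              text:
                `New action item: ${item.title}` +
                (item.assignee ? ` (@${item.assignee})` : ""),
            }
          : item, // generic webhooks receive the structured item as-is
    }))
  );
}
```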


Challenges we ran into

Real-time audio at scale was the first wall. Maintaining low-latency bidirectional audio through WebRTC, keeping it stable across different network conditions, and feeding it cleanly into Nova Sonic without artifacts — that required more iteration than almost anything else. Latency in voice AI is brutal — users have almost no tolerance for it. Even 300ms of extra delay makes the whole interaction feel wrong.

Context injection was the second. You can't just dump a company's entire knowledge base into a system prompt. You have to retrieve the right chunks, rank them by relevance to what's actually being discussed, respect token limits, and refresh mid-session when the conversation shifts topics. Getting that pipeline to work reliably — and to fail gracefully when it couldn't — took a lot of work.
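The token-limit part of that reduces to a greedy packing problem: take the ranked chunks, fit the best ones, drop the rest. A hedged sketch — the 4-characters-per-token estimate is a rough English-text heuristic, not what a production tokenizer would report:

```typescript
interface RankedChunk {
  text: string;
  score: number; // relevance to what's currently being discussed; higher is better
}

// Rough token estimate: ~4 characters per token for English prose.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Greedily pack the highest-scoring chunks into the prompt until the budget
// is spent; chunks that don't fit are skipped so smaller ones can still land.
function packContext(chunks: RankedChunk[], budgetTokens: number): string[] {
  const ranked = [...chunks].sort((a, b) => b.score - a.score);
  const selected: string[] = [];
  let used = 0;
  for (const chunk of ranked) {
    const cost = estimateTokens(chunk.text);
    if (used + cost > budgetTokens) continue;
    selected.push(chunk.text);
    used += cost;
  }
  return selected;
}
```

The "refresh mid-session" part is the same routine re-run against a rolling window of the transcript whenever the topic shifts.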

Making it feel natural was the third and honestly the most underrated one. A voice AI in a meeting is only useful if the people in the meeting don't feel like they're talking to a bot. That means appropriate interruption timing, natural-sounding speech, knowing when to contribute versus when to stay quiet. It's a product problem as much as a technical one.

Accomplishments that we're proud of

Getting Lira to feel like a real participant and not a demo toy is the thing we're most proud of. It's one thing to show a voice AI answering scripted questions — it's another to have it join a live, unscripted meeting and hold its own. The organization context system is another accomplishment we stand behind: the full pipeline from document upload to semantic retrieval to token-aware context injection, with zero cross-contamination between organizations, was a hard problem, and we solved it well.

The interview feature surprised us too. It went from a stretch goal to a full working feature — Lira can now conduct structured interviews end-to-end, adapt follow-up questions dynamically based on what a candidate says, and generate scored evaluations. That felt like a real leap from "meeting tool" to "AI teammate."

What we learned

We came in thinking the hard part would be the AI. It wasn't.

The hard part was context. Making an AI that sounds smart in a generic demo is easy. Making one that's genuinely useful inside a specific organization — that knows the company's language, its products, its quirks — that took real architecture. The organization context system went through several rewrites before it felt right.

We also learned: build for the boring use case, not the demo case. It's easy to make Lira look impressive when everything is going right. The work is making it gracefully handle empty knowledge bases, partial organization profiles, and meetings where nobody addresses Lira at all. Real products live in the boring paths.

What's next for Lira

The meeting is just the entry point. The real vision is Lira as a full agentic operator — an AI that doesn't just understand what happened in a meeting, but acts on it end-to-end.

That means autonomous meeting scheduling, email drafting and follow-ups, deeper project management integrations, standup automation, and an onboarding copilot that gets new team members up to speed using the organization's own knowledge base. We also want to build a decision tracker — something that holds teams accountable to what they actually committed to in previous meetings, not what they vaguely remember committing to.

The long-term goal is simple: you shouldn't have to think about what happens after a meeting. Lira handles it.

Built With

  • Languages: TypeScript, JavaScript
  • Frameworks & Libraries: React, Vite, TailwindCSS, Fastify
  • AI / ML: Amazon Nova Sonic (speech-to-speech), Amazon Nova Lite (reasoning), OpenAI GPT-4o (task extraction), OpenAI text-embedding-3-small (vector embeddings)
  • Cloud Services: Amazon Web Services — AWS EC2, AWS Lambda
  • Databases & Search: DynamoDB (meeting/connection state), Qdrant (vector database for semantic search), S3 (document storage)
  • APIs & Integrations: Google OAuth, Slack webhooks, WebRTC, custom webhook engine
  • Infrastructure & DevOps: systemd (service management), Docker (Qdrant container), Ubuntu 22.04, Vercel (frontend deployment), rsync-based deploy