Inspiration: Belfast is the tech capital of Northern Ireland, home to Allstate, Citi, FD Technologies, KX, Liberty IT, and Rapid7. But there's a silent productivity tax on every engineer in every one of those offices: the corporate meeting. Convoluted stakeholder discussions, ambiguous requirements, and poorly defined business logic cost development teams hours of rework, miscommunication, and technical debt every single week. And when those teams are satellite offices executing on decisions made in London or New York, the problem compounds. We looked at the meeting-tool space and noticed something: nobody builds for the engineer who wasn't there but still has to ship what was decided. That's the gap. That's what we built for.
What it does: distill. is a meeting intelligence tool built specifically for software engineers. It listens in real time, cuts through corporate jargon, and instead of producing a summary paragraph, outputs exactly what a developer needs: structured requirements, defined business logic, explicit constraints, edge cases, and clarifying questions wherever the language was too vague to code against. When someone in a meeting says "make it fast" or "keep it secure," distill. flags it live and generates a specific clarification prompt. There's also a grounded Q&A layer: you can ask any past meeting anything and get answers that cite the transcript, not the model's training data. Over time, this becomes a searchable knowledge base of every decision the company has ever made.
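To make the ambiguity flagging concrete, here is a minimal sketch of the kind of structured record a flag like "make it fast" could produce. The field names and example values are illustrative assumptions, not distill.'s actual schema; Pydantic is used because it's in the project's stack.

```python
from pydantic import BaseModel


class AmbiguityFlag(BaseModel):
    quote: str                # the vague phrase lifted from the transcript
    category: str             # e.g. "performance", "security"
    why_vague: str            # why this can't be coded against as stated
    clarifying_question: str  # the prompt surfaced live in the meeting


# Hypothetical flag for the "make it fast" example above.
flag = AmbiguityFlag(
    quote="make it fast",
    category="performance",
    why_vague="No target latency, throughput, or percentile is given.",
    clarifying_question="What response time, at which percentile, under what load?",
)
print(flag.clarifying_question)
```

Because the flag is a typed model rather than free text, the frontend can render it consistently and the Q&A layer can cite the exact transcript quote behind every question.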
How we built it: The frontend is React + Vite + TypeScript. The backend is FastAPI running on Uvicorn with SQLAlchemy for persistence. Audio streams into the backend via WebSocket endpoints. ElevenLabs Realtime STT handles transcription with sub-second latency. The transcript then hits our analysis pipeline, which uses the Azure AI / OpenAI client to run three structured extraction passes: requirements extraction, ambiguity detection, and business logic analysis. Each pass returns schema-validated JSON. Results are served back via API to the frontend for display and downstream use.
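The three-pass analysis described above can be sketched in a few lines. This is a stand-in, not the team's actual code: `call_llm` is a placeholder for the Azure AI / OpenAI chat client, and the prompt wording is invented; only the pass names come from the write-up.

```python
import json

# Prompts for the three extraction passes named above (wording illustrative).
PASSES = {
    "requirements": "Extract every concrete requirement as JSON.",
    "ambiguity": "List phrases too vague to code against, as JSON.",
    "business_logic": "Extract rules, constraints, and edge cases as JSON.",
}


def call_llm(system_prompt: str, transcript: str) -> str:
    # Placeholder for the real Azure AI / OpenAI chat-completion call;
    # the real client returns JSON text shaped by the pass's prompt.
    return json.dumps({"items": []})


def analyze(transcript: str) -> dict:
    """Run each extraction pass over the transcript and collect parsed results."""
    results = {}
    for name, prompt in PASSES.items():
        raw = call_llm(prompt, transcript)
        # In the real pipeline each result is validated against a schema.
        results[name] = json.loads(raw)
    return results


report = analyze("Speaker A: we need it live by Q3, and keep it secure.")
print(sorted(report))  # → ['ambiguity', 'business_logic', 'requirements']
```

Running the passes separately keeps each prompt focused on one job, which is what makes per-pass schema validation possible.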
Challenges we ran into: Getting the structured extraction to be consistent was harder than getting it to work at all. Out of the box, the model extracted requirements reliably about 80% of the time, but the last 20% (edge cases within edge cases, ambiguous pronouns, implicit constraints) required careful prompt engineering and schema constraints to handle without hallucination.
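One way schema constraints can close that last 20% is to make invalid extractions unrepresentable: restrict categorical fields to an enum and require a supporting transcript quote. The model and field names below are an assumed sketch, not the project's actual schema.

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError


class Requirement(BaseModel):
    text: str = Field(min_length=1)
    # Only these kinds are accepted; anything else fails validation.
    kind: Literal["functional", "non_functional", "constraint"]
    # Forcing a non-empty transcript quote discourages hallucinated items.
    source_quote: str = Field(min_length=1)


# A hallucination-style output is rejected instead of silently stored:
try:
    Requirement(text="Cache results", kind="nice_to_have", source_quote="")
except ValidationError as e:
    print(f"rejected: {len(e.errors())} schema violations")
```

Failed validations can then be retried with the error messages fed back to the model, rather than letting malformed requirements reach the database.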
Accomplishments that we're proud of: The pipeline works end to end: live audio in, structured requirements out. The ambiguity detection is genuinely useful; it catches things a human PM would catch on a second read-through.
What's next for distill.: Finishing a stable MVP, not full production. The immediate priority is tightening the core flow end to end: reliable transcription, clean requirement extraction, ambiguity prompts, and usable frontend outputs with fewer failures. Then we'll run pilot testing with a few real users, gather feedback, and only after that move to production hardening and deployment.
The broader impact is direct. When Belfast's engineers are more productive, companies deliver more, grow faster, and, crucially, give talented people a reason to stay. Brain drain isn't just about salaries; it's about whether the environment is worth working in. Belfast 2036 doesn't thrive on good intentions. It thrives because the people building it had the right tools.
Built With
- api
- azure-ai-projects
- azure-cognitive-services-speech
- chrome-extension
- elevenlabs
- fastapi
- javascript
- pydantic
- python
- react
- react-router
- sql
- sqlalchemy
- sqlite
- typescript
- uvicorn
- vite
- websockets