Inspiration

Healthcare conversations are some of the most information-dense and emotionally important interactions people have, yet the usable output after a visit is often incomplete, delayed, or forgotten. Clinicians spend too much time converting spoken dialogue into documentation, while patients often leave with only a partial memory of what was said, prescribed, or recommended.

We built Synth to close that gap. Instead of treating a visit as something that happens once and then has to be manually reconstructed, we wanted to treat it as a structured event that can be captured, organized, and reused immediately. The belief behind the project is simple: the visit itself should become the source of truth.

In practical terms, the problem looked like this:

  • clinicians have to document what was already said
  • patients have to remember instructions from memory
  • follow-up questions happen later, often without context
  • valuable medical detail gets buried in unstructured conversation

We wanted Synth to reduce that friction.

$$ \text{Visit Friction} = \text{Documentation Burden} + \text{Recall Loss} + \text{Follow-up Ambiguity} $$

$$ \text{Our Goal} = \min(\text{Visit Friction}) \quad \text{while maximizing} \quad \text{clarity, trust, and continuity} $$

What it does

Synth is an AI-powered medical visit assistant that transforms doctor-patient conversations into structured clinical outputs and grounded follow-up tools.

At a high level, it does four things:

  • captures or accepts visit conversation input
  • transcribes and structures that conversation
  • generates clinical artifacts such as summaries and SOAP notes
  • enables grounded follow-up through a saved patient-facing share experience

The workflow is designed to feel continuous rather than fragmented:

  1. A clinician records or uploads a visit conversation.
  2. Synth processes the audio and transcript.
  3. It generates a structured summary and SOAP note draft.
  4. The visit is saved as a reusable clinical record.
  5. A secure share link can be created for patient follow-up.
  6. The patient can ask questions based only on that saved visit context.
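Step 5 above, the secure share link, can be sketched as an unguessable token bound to a single visit with an expiry. This is a minimal illustration under assumed names (`ShareLink`, `createShareLink` are not Synth's actual API), not the production implementation:

```typescript
import { randomBytes } from "crypto";

// Illustrative shape: one token maps to exactly one visit and expires,
// so patient access is scoped and time-limited.
interface ShareLink {
  token: string;
  visitId: string;
  expiresAt: Date;
}

function createShareLink(visitId: string, ttlHours = 72): ShareLink {
  return {
    // 24 random bytes -> 32 URL-safe characters, 192 bits of entropy
    token: randomBytes(24).toString("base64url"),
    visitId,
    expiresAt: new Date(Date.now() + ttlHours * 3_600_000),
  };
}
```

The key property is that the token carries no meaning on its own; resolving it server-side is what binds the patient chat to one saved visit.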

What makes Synth different is that it is not just a note generator. It is a visit intelligence layer. The same underlying record powers clinician documentation, patient communication, and future workflow actions.

In one line, the product can be described as:

$$ \text{Synth} = \text{Transcription} + \text{Clinical Structuring} + \text{Grounded AI Follow-up} $$

How we built it

We built Synth as a full-stack application using Next.js 16, React 19, TypeScript, Tailwind CSS, and Radix UI for the product experience. For persistence, we used Prisma ORM with PostgreSQL. For AI generation, we integrated Amazon Bedrock with Amazon Nova, and for speech processing we used AWS Transcribe.

The app is designed around a real end-to-end clinical workflow, not isolated demo screens. That meant connecting the frontend experience, the visit data model, the AI generation layer, and the patient follow-up flow into one coherent system.

```mermaid
flowchart LR
  A[Visit Audio or Transcript] --> B[AWS Transcribe]
  B --> C[Structured Transcript]
  C --> D[Amazon Nova]
  D --> E[Summary + SOAP Draft]
  E --> F[PostgreSQL via Prisma]
  F --> G[Clinician Workspace]
  F --> H[Patient Share Link]
  H --> I[Grounded Follow-up Chat]
```

A representative part of the implementation is the generation step, where Synth produces structured outputs from the transcript in parallel:

```typescript
// Summary and SOAP note generation are independent, so we run them concurrently
const [summary, soapNotes] = await Promise.all([
  generateConversationSummary(transcript),
  generateSoapNotesFromTranscript(transcript),
])
```

We also modeled the clinical data so each saved visit could support documentation, finalization, and patient follow-up from a single source:

```prisma
model Visit {
  id             String    @id @default(cuid())
  patientId      String
  clinicianId    String
  status         String
  chiefComplaint String?
  startedAt      DateTime  @default(now())
  finalizedAt    DateTime?
}
```
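Alongside the visit itself, the patient share experience needs its own record. The sketch below is one plausible shape for that table, shown only to illustrate how a share link could hang off a visit; it is an assumption, not Synth's actual schema:

```prisma
// Illustrative only: one possible ShareLink model, not the real schema
model ShareLink {
  id        String   @id @default(cuid())
  token     String   @unique
  visitId   String
  expiresAt DateTime
  createdAt DateTime @default(now())
}
```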

On the infrastructure side, we prepared Synth to run on AWS using:

  • Amazon ECS Fargate for app hosting
  • Amazon RDS / PostgreSQL for data storage
  • Amazon S3 for uploads and transcription assets
  • AWS Secrets Manager for runtime secrets
  • Amazon CloudWatch for logs and observability
  • Docker and Terraform for packaging and deployment

We also built protected clinician workflows, onboarding, saved visit flows, and a patient-facing chat experience that stays tied to the visit record rather than behaving like a generic chatbot.

Challenges we ran into

One major challenge was grounding. In a healthcare setting, a fluent answer is not enough; it has to be tied to the actual visit context. That meant we had to design the assistant flow so it uses saved clinical data instead of drifting into unsupported completions.
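The grounding discipline described above can be made concrete with a small sketch. This is a minimal illustration of the idea, not Synth's actual implementation; `VisitRecord`, `buildGroundedPrompt`, and the refusal wording are all assumed names:

```typescript
// Sketch: the follow-up assistant only ever sees the saved visit record,
// and is instructed to refuse when the record does not cover the question.
interface VisitRecord {
  id: string;
  summary: string;
  transcript: string;
}

const REFUSAL = "That was not covered in this visit.";

// Build the prompt for a patient follow-up question so every answer
// must come from the saved visit context, never from open-ended completion
function buildGroundedPrompt(visit: VisitRecord, question: string): string {
  return [
    "You are a follow-up assistant for one medical visit.",
    "Answer using ONLY the visit context below.",
    `If the context does not contain the answer, reply exactly: "${REFUSAL}"`,
    "--- VISIT CONTEXT ---",
    visit.summary,
    visit.transcript,
    "--- PATIENT QUESTION ---",
    question,
  ].join("\n");
}
```

The point of the structure is that the model's input is fully determined by the saved record plus the question, which is what keeps the chat from drifting into unsupported completions.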

Another challenge was workflow continuity. Audio processing, transcript handling, AI generation, persistence, and follow-up each have different latency and UX patterns. Making them feel like one product required careful coordination between backend routes, UI states, and data modeling.

We also had to balance hackathon speed with real architecture. It is easy to build a flashy single-screen demo. It is much harder to build something that already resembles a product system with pages, APIs, storage, infrastructure, and multiple user paths.

The core lesson from the hard parts can be summarized as:

$$ \text{Useful Healthcare AI} = \text{Model Quality} \times \text{Data Structure} \times \text{Grounding Discipline} \times \text{Workflow Design} $$

If any one of those terms goes to zero, the experience breaks down quickly.

Accomplishments that we're proud of

We are proud that Synth is an end-to-end clinical product, not just a prompt wrapper. It captures a visit, structures it, generates documentation, saves it, and reuses it for follow-up.

We are also proud that the system is built around trustworthy reuse of visit data. The same record supports multiple surfaces: clinician documentation, saved workflow state, patient follow-up, and future extensibility.

A few specific accomplishments we are especially proud of:

  • a clean transcript-to-summary-to-SOAP workflow
  • a patient-facing grounded follow-up experience
  • a full-stack AWS-based architecture rather than a local-only prototype
  • a data model that supports visits, documentation, share links, appointments, and care-plan-style extensions
  • a UI that feels like a product experience rather than a collection of disconnected demos

In short, we did not just build an AI feature. We built the beginnings of a clinical memory system.

What we learned

The biggest lesson we learned is that in healthcare, structure beats spectacle. A powerful model matters, but what matters more is whether the output fits into a reliable workflow and can be traced back to what actually happened during the visit.

We also learned that AI quality depends heavily on product architecture. Prompting is only one layer. The schema, APIs, saved state, UX sequencing, and retrieval constraints all shape whether the result feels trustworthy.

Some of our clearest takeaways were:

  • unstructured conversation becomes much more valuable once it is normalized into a reusable record
  • good AI UX is as much about system boundaries as it is about model intelligence
  • persistence matters because a visit is not a one-time generation event
  • patient trust improves when the follow-up experience is tied to their actual visit context
  • full-stack thinking is essential when building clinical tools

What's next for Synth

Our next step is to make Synth even stronger as a real clinical workflow product.

In the short term, we want to improve:

  • transcript quality and speaker separation
  • richer visit finalization workflows
  • better evidence visibility in patient follow-up responses
  • stronger clinician review and editing controls before final save

In the medium term, we want to expand Synth into a broader care workflow layer:

  • cross-visit trend summaries
  • medication and follow-up tracking
  • care-plan generation
  • clinician productivity analytics
  • deeper patient summary experiences after the visit

In the longer term, we see Synth becoming a system that helps turn clinical conversations into a durable and queryable memory layer for care.

$$ \text{Long-Term Vision} = \text{Conversation} \rightarrow \text{Clinical Record} \rightarrow \text{Shared Understanding} \rightarrow \text{Better Continuity of Care} $$

That is the real ambition behind Synth: not just helping generate notes, but helping medicine remember more clearly, communicate more accurately, and follow through more effectively.

```typescript
// The idea in one line:
const synth = conversation => structuredCareMemory(conversation)
```

Built With

  • Next.js, React, TypeScript, Tailwind CSS, Radix UI
  • Prisma ORM and PostgreSQL
  • Amazon Bedrock with Amazon Nova, AWS Transcribe
  • Amazon ECS Fargate, Amazon RDS, Amazon S3, AWS Secrets Manager, Amazon CloudWatch
  • Docker and Terraform