LinkCare — Project Story
Inspiration
A family member of someone on our team went through an unexpected diagnosis when everything seemed to be going normally. The sudden change left them overwhelmed, and it wasn't clear where to turn for advice or community. We realized that in that moment, people just need the comfort of others who have lived through a similar experience.
Our goal was to create an online environment where patients can connect with people who truly understand what they're going through. Having a community like this throughout the recovery process is important for emotional support and may help people develop confidence when it comes to making medical decisions.
What It Does
LinkCare connects patients with peers who share the same diagnosis and stage, forming trusted care circles. Within these circles, users can share milestones, ask questions, and support one another throughout their journey.
Additionally, LinkCare offers doctor recommendations based on real patient experiences, highlighting the specialists who are commonly trusted and respected. Instead of relying on anonymous ratings, users see a collection of reviews grounded in personal experience.
Patients can type or speak their condition, and our AI instantly matches them to relevant doctors and a supportive peer network. The platform also adapts its interface tone based on emotional signals, creating a more human-centered experience. LinkCare is also accessible for all, using voice assistance to help those who struggle to read doctor summaries.
How We Built It
We built LinkCare as a full-stack AI application, with each layer designed around a specific user need.
Frontend: We built the interface in Next.js with React, styled with Tailwind CSS, and animated with Framer Motion. The goal was to make every interaction feel calm and welcoming.
Vector Database & RAG Pipeline: The core of LinkCare's doctor recommendation system is a Retrieval-Augmented Generation (RAG) pipeline backed by Actian VectorAI.
Patient experiences are embedded using Google's gemini-embedding-001 model into 3072-dimensional vectors and indexed using HNSW for fast approximate nearest-neighbor search. When a user submits their condition, we retrieve the most semantically similar experiences and rank doctors using a composite score:
composite = 0.45·mean(similarity) + 0.35·mean(outcome) + 0.20·((mean(sentiment) + 1) / 2)
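A minimal sketch of this scoring, assuming each input is the mean score on the scale implied by the formula (similarity and outcome in [0, 1], sentiment in [-1, 1]); the function and argument names are ours, not from the codebase:

```python
def composite_score(similarity: float, outcome: float, sentiment: float) -> float:
    """Blend the three mean signals into one ranking score.

    similarity and outcome are assumed to be in [0, 1];
    sentiment is assumed to be in [-1, 1] and is rescaled to [0, 1].
    """
    return (
        0.45 * similarity
        + 0.35 * outcome
        + 0.20 * ((sentiment + 1) / 2)
    )
```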
This weighting reflects our product priority: semantic relevance first, then clinical outcomes, then patient sentiment.
Multi-Agent Consensus Engine: Rather than relying on a single AI opinion, we built a consensus engine that spins up three specialist personas in parallel:
- Clinical Specialist: focuses on outcome scores, recovery times, and evidence-based treatment efficacy.
- Patient Advocate: centers the human experience, weighing communication style, emotional support, and accessibility.
- Data Scientist: evaluates statistical robustness, sample sizes, and whether scores are driven by noise.
Each persona independently calls the Gemini Agent and returns a structured verdict. The final agreement score blends vote consensus with average confidence:
agreement_score = min(95, floor(avg(confidence) * (v_max / n_personas)))
where v_max is the size of the largest voting bloc and n_personas is the total number of personas (three).
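A sketch of that blend, assuming each persona's confidence is reported on a 0–100 scale and v_max is the size of the largest voting bloc (function and data shapes are illustrative, not from the codebase):

```python
import math
from collections import Counter

def agreement_score(verdicts: list[tuple[str, float]]) -> int:
    """verdicts: one (vote, confidence) pair per persona.

    Blends average confidence with how unanimous the vote was,
    capped at 95 so the UI never shows false certainty.
    """
    votes = [vote for vote, _ in verdicts]
    confidences = [conf for _, conf in verdicts]
    v_max = Counter(votes).most_common(1)[0][1]  # largest voting bloc
    avg_conf = sum(confidences) / len(confidences)
    return min(95, math.floor(avg_conf * (v_max / len(verdicts))))
```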
Voice Input/Output: We deployed OpenAI Whisper on a Modal T4 GPU to handle voice queries. Browser audio recorded via MediaRecorder (WebM format) is re-muxed to WAV using ffmpeg before transcription, which was necessary to handle Chrome's malformed WebM headers. On the output side, we integrated ElevenLabs to read AI-generated recommendation summaries aloud, streaming audio/mpeg directly back to the client using eleven_turbo_v2 for low latency. Together, these make the entire doctor recommendation flow hands-free.
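The re-mux step might look roughly like the sketch below. The ffmpeg flags and the 16 kHz mono output are our assumptions (a common input format for Whisper), not necessarily the exact settings used:

```python
import os
import subprocess
import tempfile

def ffmpeg_remux_cmd(src: str, dst: str) -> list[str]:
    """Build the ffmpeg command: decode the (possibly malformed) WebM
    and re-encode as 16 kHz mono WAV."""
    return ["ffmpeg", "-y", "-i", src, "-ar", "16000", "-ac", "1", dst]

def webm_to_wav(webm_bytes: bytes) -> bytes:
    """Re-mux browser MediaRecorder audio (WebM) to WAV via ffmpeg,
    sidestepping Chrome's malformed WebM headers by fully re-decoding."""
    with tempfile.NamedTemporaryFile(suffix=".webm", delete=False) as f:
        f.write(webm_bytes)
        src = f.name
    dst = src.replace(".webm", ".wav")
    try:
        subprocess.run(ffmpeg_remux_cmd(src, dst), check=True, capture_output=True)
        with open(dst, "rb") as f:
            return f.read()
    finally:
        for path in (src, dst):
            if os.path.exists(path):
                os.remove(path)
```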
REST Bridge: Since Actian VectorAI uses a gRPC interface and our frontend is Node.js, we wrote a FastAPI bridge (06_rest_bridge.py) that wraps the Python SDK and exposes a clean HTTP API for the Next.js app to consume.
Challenges We Ran Into
Throughout the development of our product, we ran into various roadblocks that pushed us to adapt and reassess our approach.
One major pivot was our initial plan to incorporate the PreSage SDK. We originally planned to use it to analyze user emotion through facial expression recognition. However, we found it was incompatible with our stack and pivoted to a voice-based approach built on OpenAI Whisper, which let us explore emotional features in a way that better fit our architecture. Adapting the emotional state index to meaningfully influence the consensus engine output was its own challenge, since we had to think about how emotional context should shift recommendations.
Integrating Actian VectorAI into our backend also required careful configuration and debugging. Working with a vector database was new to us, and getting the Python SDK, Docker container, and REST bridge all communicating correctly took significant iteration.
Accomplishments That We're Proud Of
As a team of four, we're proud that the full RAG pipeline works end-to-end: a user speaks or types their condition, gets a real-time embedding, retrieves semantically matched patient experiences from a vector database, and receives ranked doctor recommendations, all in one seamless flow.
We're also proud of our multi-agent consensus engine. Getting three independent AI specialists to analyze the same data in parallel and synthesize a confidence-weighted recommendation felt like an innovative approach to the problem of medical decision support. It would have been easy just to call one model, but building the full voting-and-divergence system made the product meaningfully more trustworthy.
Finally, we’re especially proud of successfully shipping voice input powered by Whisper. Bringing this feature to life required us to work through backend integration challenges and ensure audio was processed reliably from the browser to our system. In the end, we built a smooth voice experience that makes the platform more accessible, particularly for patients who may feel more comfortable speaking rather than typing about sensitive health concerns. On the output side, streaming ElevenLabs audio back to the client in real time required careful handling of chunked audio responses to avoid buffering delays.
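Chunked streaming like this can be sketched as a simple generator that forwards audio as it arrives, so playback can start before the full file is downloaded (illustrative only, not the exact ElevenLabs response handling):

```python
def stream_chunks(audio_source, chunk_size: int = 4096):
    """Yield fixed-size chunks from a file-like audio source.

    Handing these chunks to a streaming HTTP response lets the client
    begin playback immediately instead of waiting for the whole file.
    """
    while True:
        chunk = audio_source.read(chunk_size)
        if not chunk:
            break
        yield chunk
```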
What We Learned
We learned that building trust into an AI system matters as much as building accuracy. A single model giving a confident recommendation can feel opaque. Three specialists disagreeing, and showing you why, made the recommendation feel honest.
We also learned a lot about the practical realities of working with vector databases. The same goes for the nuances of audio processing, mainly the gap between working on our own machine versus working in a serverless GPU container.
In addition, we learned how much a clear emotional motivation, like building something for a real person going through something hard, keeps a team focused when the technical problems get frustrating.
What's Next for LinkCare
The next step for LinkCare centers on refinement and growth. With the core functionality in place, we will focus on enhancing the user experience: making interactions smoother, improving interface clarity, and creating a more intuitive experience overall. We want users to feel comfortable and supported throughout their journey.
We also plan to expand the LinkCare ecosystem by growing our network of users and verified medical professionals. At the same time, we want to strengthen the platform's social side by enhancing posting features and discoverability.
