Inspiration
As immigrant children, we're aware of the difficulties our parents faced when communicating in English, especially when discussing complicated information like their health. According to national polling, over 26 million people in the United States have limited English proficiency, and about a third of adults with limited English proficiency say they have faced language barriers when seeking healthcare. During doctor's appointments, many of these patients are left with inadequate translations, or no translations at all, and struggle to grasp words and phrases that are too technical for them. This inspired us to build something that bridges that communication gap and lets doctors and patients understand each other easily.
What it does
- Voice-to-Text Transcription: Doctors can report on the patient's condition by speaking in English. The report is saved in the patient's appointment tab, translated into their native language.
- Voice-to-Voice Translation: As the doctor finishes a sentence, their speech is immediately translated into the patient's native language.
- 3D Anatomy Visualization: Lets the patient see exactly which part of the body the doctor is discussing, as the 3D model updates dynamically with the doctor's speech.
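The transcribe-translate-speak flow above can be sketched as a small orchestration step. This is a simplified illustration, not our exact code: `translate` and `speak` stand in for the real Vapi/OpenAI and Gemini calls, and `AppointmentNote` is a hypothetical record type for the appointment tab.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AppointmentNote:
    """One of the doctor's sentences plus its translation, saved to the appointment tab."""
    original: str
    translated: str


def handle_utterance(
    english_text: str,
    target_lang: str,
    translate: Callable[[str, str], str],  # stand-in for the Gemini translation call
    speak: Callable[[str], None],          # stand-in for voice synthesis via Vapi
    notes: List[AppointmentNote],
) -> AppointmentNote:
    # 1. Translate the doctor's sentence into the patient's native language.
    translated = translate(english_text, target_lang)
    # 2. Play the translated audio back to the patient (voice-to-voice).
    speak(translated)
    # 3. Save the bilingual note to the patient's appointment tab.
    note = AppointmentNote(original=english_text, translated=translated)
    notes.append(note)
    return note
```

In the real app, each finished sentence from the speech recognizer would be fed through a function like this, so the appointment tab fills in as the doctor talks.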
How we built it
The project is built with React.js on the frontend and Python on the backend. We used Vapi's API together with OpenAI's models to enable both voice-to-voice interaction and voice-to-text transcription, and the Gemini API to simplify medical language and produce text-to-text translations. We also used BioDigital to embed an interactive 3D human anatomy model.
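As a rough illustration of the Gemini translation step, here is a minimal sketch of the kind of prompt the backend could send. The function name and prompt wording are hypothetical, not our exact implementation:

```python
def build_simplify_and_translate_prompt(report: str, target_language: str) -> str:
    """Ask the model to simplify medical jargon, then translate the result."""
    return (
        "You are helping a doctor communicate with a patient.\n"
        "1. Rewrite the medical report below in plain, non-technical English.\n"
        f"2. Translate that plain-English version into {target_language}.\n"
        "Return only the final translation.\n\n"
        f"Report: {report}"
    )


# With Google's generative AI Python client (requires an API key), the prompt
# would be sent roughly like this:
#   import google.generativeai as genai
#   genai.configure(api_key=...)
#   model = genai.GenerativeModel("gemini-1.5-flash")
#   translation = model.generate_content(
#       build_simplify_and_translate_prompt(report, "Spanish")
#   ).text
```

Combining simplification and translation in one prompt keeps latency low, which matters when the patient is waiting mid-conversation.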
Challenges we ran into
- Correctly implementing each API and understanding its documentation
- Integrating the frontend and backend so they work seamlessly together
- Keeping the UI/UX intuitive for both medical professionals and patients, and accessible to those who are less tech-savvy
Accomplishments that we're proud of
- We built a working prototype that takes voice input, translates it in real time, and presents it as both text and speech in another language
- We implemented a dynamic anatomy visualization that updates in real time based on the medical discussion
What we learned
- How to implement smooth and accurate voice interactions
- How to balance the features we wanted to add against what was most practical for our users
What's next for BridgeMed-AI
- Expanding our translation features to support more dialects, particularly underrepresented ones, to reach more communities
- Encouraging more open and personal communication between patients and their doctors
Built With
- biodigital
- claude
- css
- gemini
- html
- javascript
- openai
- python
- react.js
- vapi