Inspiration
We were inspired to create MediNform because being sick, and navigating the healthcare industry, can be anxiety-inducing. We noticed there is often a divide between physicians and patients, and we wanted to help bridge the information gap.
What it does
MediNform leverages speech-to-text technology to capture live doctor-patient interactions, delivering real-time feedback and relevant information tailored to each consultation. Utilizing Google's Gemini AI, MediNform identifies diseases, injuries, medications, procedures, and treatments, offering patients additional context and support. This functionality is designed to ease the anxiety of patients who may feel overwhelmed by their medical situations.
In addition to providing reassurance, MediNform serves as an accessibility tool for individuals with communication challenges, such as anxiety, hearing loss, or muteness. The platform adjusts its explanations based on the patient's age: younger patients receive simplified, easy-to-understand information, while older patients get more detailed, technical descriptions. This adaptability promotes a better understanding of health issues, fostering a more comfortable and informed experience for all patients.
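The age-based adaptation can be sketched as simple prompt construction. This is an illustrative sketch only; the age thresholds and exact prompt wording here are assumptions, not MediNform's actual prompts.

```python
# Hypothetical age-aware prompt builder; thresholds and phrasing are
# illustrative assumptions, not the project's real prompt templates.
def build_prompt(term: str, age: int) -> str:
    if age < 13:
        style = "in one or two simple sentences a child can understand"
    elif age < 18:
        style = "in plain language suitable for a teenager"
    else:
        style = "with clinically accurate detail for an adult"
    return f"Explain the medical term '{term}' {style}."

print(build_prompt("hypertension", 10))
print(build_prompt("hypertension", 45))
```

The same keyword then yields a child-friendly or a technical explanation depending on who is listening.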
While MediNform is just a demo, our team hopes it can spark innovation in the way we approach medical consultations. We hope it showcases how AI can help patients communicate with healthcare professionals. Being sick is stressful enough, so we want to focus on taking some of that stress away.
How we built it
We used Python to create a speech-to-text buffer that updates live during a conversation. The text output is then scanned for keywords against a Trie containing thousands of medical terms that a patient may not understand. When a keyword is found, MediNform prompts Google Gemini for a description of that term and why it's important. These keywords can be anything from diseases to medications to treatments. We also included a simple doctor's view to simulate a doctor talking to, and taking notes about, a patient.
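The Trie lookup step above can be sketched as follows. This is a minimal sketch with a three-term placeholder list; the real project loads thousands of medical terms, and the class and function names here are illustrative.

```python
# Minimal Trie-based keyword scanner; term list and names are
# illustrative stand-ins for the project's full medical dictionary.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_term = False

class Trie:
    def __init__(self, terms):
        self.root = TrieNode()
        for term in terms:
            self.insert(term.lower())

    def insert(self, term):
        node = self.root
        for ch in term:
            node = node.children.setdefault(ch, TrieNode())
        node.is_term = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_term

def find_keywords(transcript, trie):
    """Return medical terms found in a transcript snippet."""
    words = transcript.lower().replace(",", " ").replace(".", " ").split()
    return [w for w in words if trie.contains(w)]

trie = Trie(["ibuprofen", "hypertension", "mri"])  # placeholder terms
print(find_keywords("Your MRI shows mild hypertension", trie))
# → ['mri', 'hypertension']
```

Each matched term would then be sent to Gemini for a patient-friendly explanation.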
Challenges we ran into
Our biggest challenges came from having to learn many different APIs and libraries in the span of just 24 hours. Other issues emerged when linking our front-end and back-end code due to miscommunication within the team.
Accomplishments that we're proud of
We built a working speech-to-text pipeline that looks up keywords and returns a response quickly, given the time we had to program. We created an interactive GUI that uses threading to keep the application in sync with the real-time conversation/consultation. We are also proud to have finished the project with little to no errors!
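The threading approach we describe can be sketched as a producer/consumer pattern: a background thread pushes transcript updates onto a queue, and the GUI thread drains it on a timer. The names below are illustrative, not our exact code.

```python
# Hypothetical sketch of syncing a GUI with a live transcript thread;
# in the real app the producer is the speech-to-text loop and drain()
# is called from a GUI refresh timer.
import queue
import threading

updates = queue.Queue()

def transcriber(lines):
    # Stand-in for the live speech-to-text loop.
    for line in lines:
        updates.put(line)

def drain():
    # Collect everything queued since the last GUI refresh.
    collected = []
    while not updates.empty():
        collected.append(updates.get())
    return collected

t = threading.Thread(target=transcriber,
                     args=(["Take ibuprofen", "twice daily"],))
t.start()
t.join()
result = drain()
print(result)  # → ['Take ibuprofen', 'twice daily']
```

Using a thread-safe queue keeps the GUI responsive while the transcription runs continuously in the background.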
What we learned
Communication is key. Everyone needs to be on board with the team's ideas, and those ideas need to be properly expressed. Our biggest issues arose from a lack of communication and from miscommunication. In the end, we overcame that adversity and fully integrated the separate parts of our code into a project we are proud to have created.
What's next for MediNform
If there is interest, we would love to continue development toward small-scale testing in hospitals and doctors' offices to see just how much MediNform can improve patient experiences.