Inspiration

The COVID-19 pandemic demonstrated how much we rely on healthcare professionals and other front-line workers. They have been working around the clock to control the pandemic, and we decided it was time to shift some of that burden. It has been incredibly difficult to get a consultation with a doctor or another health professional, whether in person or virtually. Our product allows concerned individuals to assess their symptoms and then provides a medium to connect with a live doctor for further assistance. We were inspired by the format of a walk-in clinic and aim to streamline the process of a concerned patient receiving professional advice. We also wanted to make it as easy and efficient as possible for individuals to self-diagnose and then reach out for medical help if needed, which is why we were spurred to create a smart assistant action.

Talking reaches more people and is less abstruse than programs written for the screen. Natural speech is, for that very reason, natural: it is the predominant method of communication and one everyone is familiar with.

What it does

It begins with the person saying "Hey Google, talk to my personal doctor", which launches the Action on any Android phone or Google Home device. All interaction happens through spoken exchanges between the user and the Google Assistant. The application asks the user what symptoms they're experiencing and, after the user tells the assistant, searches our database for a list of diseases that match those symptoms and reports back to the user, along with a list of treatments for relief. The app then asks whether the user would like to be connected with a live doctor or continue talking to the personal doctor app. If the user responds "Connect me to a live doctor", the assistant asks them to say their phone number. Using the Twilio API, we send the user a text message containing a tel:[PHONE NUMBER] link. Upon tapping the link on their phone, they are placed in a call with a doctor on our network. Our program lets patients assess their symptoms from their own homes using only their voices and then connects them with a live doctor on hand. Even after the global pandemic subsides, our product will remain incredibly useful: picture the services and flexibility of a walk-in clinic from the comfort of your own home, where all you need to do is say "Talk to my personal doctor" to get immediate medical attention.
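
Under the hood, the symptom lookup runs inside Voiceflow against our Google Sheets database, but the matching logic amounts to scoring conditions by how many reported symptoms they share. The sketch below shows that idea in plain Java; the condition and symptom entries are illustrative placeholders, not our actual dataset.

```java
import java.util.*;

// Minimal sketch of the symptom-matching step. The real lookup runs inside
// Voiceflow against a Google Sheets database; the entries below are
// illustrative placeholders, not our actual medical data.
public class SymptomMatcher {

    // condition -> set of associated symptoms (hypothetical sample rows)
    private static final Map<String, Set<String>> CONDITIONS = Map.of(
            "common cold", Set.of("cough", "runny nose", "sore throat"),
            "influenza",   Set.of("fever", "cough", "fatigue", "body aches"),
            "allergies",   Set.of("runny nose", "itchy eyes", "sneezing")
    );

    // Return conditions ranked by how many of the reported symptoms they share.
    public static List<String> match(Collection<String> reportedSymptoms) {
        Map<String, Long> scores = new HashMap<>();
        for (var entry : CONDITIONS.entrySet()) {
            long overlap = reportedSymptoms.stream()
                    .filter(s -> entry.getValue().contains(s.toLowerCase()))
                    .count();
            if (overlap > 0) {
                scores.put(entry.getKey(), overlap);
            }
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        // e.g. prints [influenza, common cold]
        System.out.println(match(List.of("cough", "fever", "fatigue")));
    }
}
```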

We also let patients create a profile on our patient-doctor portal, where they can input their medical information and a user ID. A doctor on our network can then use this portal to diagnose the patient more accurately and provide suitable next steps. Patients also select a security key that allows doctors to confirm they are speaking with the right patient.

How we built it

We designed the Google Home Action using Voiceflow, a platform that allows for fast and flexible creation of smart assistant applications while abstracting away the lower-level logic and components. It also let us interface with Google Sheets, which serves as our database, and with external APIs. It was our team's first time using Voiceflow, and it was an incredibly fun and exciting, albeit occasionally challenging, experience.

Voiceflow helped us map out the workflow of the project end to end, from capturing user input to interfacing with the Google Assistant. It provides the logic required for our application and abstracts away the underlying details, letting us focus on designing and implementing the algorithms. Voiceflow also made it incredibly easy to test and debug our application and was quite intuitive as an almost-no-code platform. Google Actions were unfamiliar to our entire team before this hackathon, and Voiceflow allowed us to start building immediately, turning our idea into a tangible product in a matter of hours.

For the patient-doctor portal, we used Java and NetBeans to create a secure and reliable GUI. We used Git so that multiple people could collaborate on the codebase and complete this half of our hack efficiently.
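
At the heart of the portal is the security-key check described above. The sketch below shows roughly how that verification works; the `PatientPortal` class, the in-memory map, and the SHA-256 hashing choice are illustrative assumptions rather than the portal's exact implementation, which lives inside the NetBeans-built Swing GUI.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the patient-portal security-key check. Class and field names and
// the SHA-256 hashing choice are illustrative assumptions; the real portal is
// a Swing GUI built in NetBeans with its own storage layer.
public class PatientPortal {

    // userId -> hashed security key (in-memory stand-in for the portal's storage)
    private final Map<String, String> keyHashes = new ConcurrentHashMap<>();

    public void registerPatient(String userId, String securityKey) {
        keyHashes.put(userId, hash(securityKey));
    }

    // A doctor confirms they are speaking with the right patient by asking
    // for the security key and checking it against the stored hash.
    public boolean verifyPatient(String userId, String securityKey) {
        String stored = keyHashes.get(userId);
        return stored != null && stored.equals(hash(securityKey));
    }

    private static String hash(String value) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] bytes = digest.digest(value.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(bytes);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        PatientPortal portal = new PatientPortal();
        portal.registerPatient("patient-123", "blue-falcon-42");
        System.out.println(portal.verifyPatient("patient-123", "blue-falcon-42")); // true
        System.out.println(portal.verifyPatient("patient-123", "wrong-key"));      // false
    }
}
```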

Challenges we ran into

The biggest challenge we ran into was facilitating the actual call from the patient to the doctor. Google Home can dial a number directly, but there was no way to access this functionality from Voiceflow due to privacy restrictions. We had to adapt quickly and find a solution, since this was a core part of our product and the main motivation for the hack. We brainstormed and came up with the idea of using Twilio's API to send the user a text message containing a tel: link. That way, they can either tap the link to be connected to a doctor or ask their smart assistant to call the number.
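
A minimal sketch of that workaround using Twilio's Java helper library is shown below. The credentials and phone numbers are placeholders, and in the hack itself the request is fired from Voiceflow rather than from standalone Java code.

```java
import com.twilio.Twilio;
import com.twilio.rest.api.v2010.account.Message;
import com.twilio.type.PhoneNumber;

// Sketch of the Twilio workaround: text the patient a tel: link so tapping it
// (or asking the assistant to dial the number) starts the call to a doctor.
// Credentials and phone numbers below are placeholders.
public class CallLinkSender {

    public static void main(String[] args) {
        // Placeholder Twilio credentials (normally read from environment variables)
        String accountSid = System.getenv("TWILIO_ACCOUNT_SID");
        String authToken  = System.getenv("TWILIO_AUTH_TOKEN");
        Twilio.init(accountSid, authToken);

        String doctorNumber  = "+15551234567";   // hypothetical on-call doctor
        String patientNumber = "+15557654321";   // number the user spoke to the assistant
        String twilioNumber  = "+15550000000";   // our Twilio sender number

        Message message = Message.creator(
                new PhoneNumber(patientNumber),   // to
                new PhoneNumber(twilioNumber),    // from
                "Tap to reach a doctor now: tel:" + doctorNumber)
                .create();

        System.out.println("Sent SMS with SID: " + message.getSid());
    }
}
```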

Another challenge was interfacing the API with Voiceflow and ensuring the pipeline worked consistently, with no errors when fetching from the Sheets database. We made many improvements and additions throughout the process to create the best version of our product we could, including testing different APIs and different approaches to achieving our goal.
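
For reference, the kind of read our pipeline performs against the Sheets database can be reproduced with a plain REST call to the Google Sheets API. The sketch below is illustrative only: the spreadsheet ID, range, and API key are placeholders, and it assumes the sheet is readable with an API key.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of reading rows from the Google Sheets database over REST, similar
// to the request our Voiceflow integration issues. The spreadsheet ID, range,
// and API key are placeholders, and we assume the sheet is readable with an
// API key (i.e. shared publicly).
public class SheetsFetch {

    public static void main(String[] args) throws Exception {
        String spreadsheetId = "SPREADSHEET_ID";          // placeholder
        String range = "Symptoms!A1:C100";                // placeholder sheet range
        String apiKey = System.getenv("SHEETS_API_KEY");  // placeholder credential

        String url = String.format(
                "https://sheets.googleapis.com/v4/spreadsheets/%s/values/%s?key=%s",
                spreadsheetId, range, apiKey);

        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response body is JSON with a "values" array of rows.
        System.out.println(response.body());
    }
}
```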

Accomplishments that we're proud of

We're incredibly proud of creating a multi-faceted hack that uses a newer technology: actions for smart assistants. We wanted to create something distinct and innovative that can solve a real-world problem right now. We collaborated and delegated work effectively, and everyone contributed immensely to the final product. We ran into many roadblocks, but we always persevered and came up with a solution.

What we learned

We learned how to use a new platform, Voiceflow, to develop applications for a completely different medium than we're used to. We learned how to use voice to drive events and actions, and saw the immense potential that exists in this space. We also learned how to work with REST APIs and retrieve data in a usable form, and how to create a Java GUI with a backend and security features that prevent unauthorized access and editing.

We also learned how to effectively test and demo our product, and how to think through problems and implement solutions. Most of all, we learned the pipeline for creating a smart assistant action and the many nuances it presents.

What's next for MyPersonalDoctor

We're hoping to widen our network of doctors and healthcare professionals, and to add a feature that transfers a patient to another doctor in the network if there is no response within 5 seconds. We hope to use natural language processing to capture a more diverse range of user responses and provide more tailored feedback. We are also aiming to support more languages than just English, globalizing our product so that anyone can use it. Finally, we hope to create an app that uses deep learning (a convolutional neural network) to identify signs of more serious diseases, such as cancer and heart failure.
