My roommates are pre-med students and biology majors. We learned that medical students reach a point in their training when they need to practice talking to patients. This is a problem because getting thorough, realistic practice requires them to either find an actual patient or hire an actor, which costs both time and money.

Hence we wanted to help these students solve a real-life problem and cut down on wasted time and money.


The Google Patient Guru turns a Google Home into a "patient" using the Google APIs, Google Actions, and accompanying platforms. It draws on concepts such as finite state machines, finite automata, and machine learning to carry out its functions.

With all these combined, the Google Home takes on the role of an actual "patient" and the user takes on the role of the "doctor". The patient walks in and greets the doctor. Once the doctor introduces themselves, the patient introduces themselves in turn and answers questions about what they are feeling and how it all began. Upon further questioning, the patient describes their symptoms. Finally, the doctor infers the probable disease from the symptoms the patient has described.
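The conversation above can be modeled as a small finite state machine, where each state is a phase of the visit and each input is the kind of question the doctor just asked. The sketch below is illustrative (the state and input names are our own shorthand, not the project's exact identifiers):

```javascript
// Illustrative state machine for the patient's side of the conversation.
// States are phases of the visit; inputs are the doctor's question types.
const transitions = {
  waiting:             { greeting: "introduced" },
  introduced:          { askFeeling: "describingComplaint" },
  describingComplaint: { askOnset: "describingHistory", askSymptoms: "listingSymptoms" },
  describingHistory:   { askSymptoms: "listingSymptoms" },
  listingSymptoms:     { proposeDiagnosis: "done" },
};

function nextState(state, input) {
  // An unrecognized question leaves the patient in the same state,
  // so the doctor can rephrase and try again.
  return (transitions[state] && transitions[state][input]) || state;
}
```

For example, a greeting moves the patient from `waiting` to `introduced`, while asking about symptoms before greeting leaves the machine in `waiting`, which mirrors how the real conversation must progress in order.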

The doctor (user) gets to practice diagnostic skills such as:

  • Greeting the patient and observing proper bedside etiquette.
  • Posing questions to reach the root cause.
  • Asking probing questions to draw out the patient's symptoms.
  • Proposing a probable disease based on the symptoms.
  • Delivering a clear "Doctor's Report".


A lot of effort went into building this product. There were roadblocks and changes, then more roadblocks and changes, and after meandering through all of that we ended up with a product the team is really proud of. We put 36 hours' worth of effort into it, and we could not be more proud of the result.

The Google Patient Guru is a beautiful blend of Google Cloud, Google APIs, JavaScript, Dialogflow, finite state machines, machine learning, and teamwork, brought into existence to help students save time and money. We built it on multiple platforms, including Google Actions, Dialogflow, and the Google Home. The intent functions on Google Cloud acted as the states for the state machine we implemented in our project. The VR part was built with Unity.
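To show what "intent functions acting as states" looks like in practice, here is a minimal sketch of a webhook handler in the style of a Dialogflow fulfillment. The intent names and the patient's reply lines are illustrative stand-ins, not the project's exact ones; Dialogflow's webhook request does carry the matched intent under `queryResult.intent.displayName`, and the reply goes back as `fulfillmentText`:

```javascript
// Hedged sketch: each Dialogflow intent maps to one state of the
// patient's state machine, and each state has a canned patient reply.
// Intent names and reply text are illustrative, not the project's own.
const replies = {
  "greet.patient":     "Hi doctor, I'm Alex. I haven't been feeling well.",
  "ask.feeling":       "I've had a fever and a sore throat since Monday.",
  "ask.symptoms":      "Mostly headaches, chills, and trouble swallowing.",
  "propose.diagnosis": "That matches how I feel. Thank you, doctor.",
};

// Dialogflow POSTs a JSON body; queryResult.intent.displayName names
// the matched intent, and fulfillmentText is spoken back to the user.
function handleWebhook(body) {
  const intent = body.queryResult.intent.displayName;
  const text = replies[intent] || "Sorry doctor, could you rephrase that?";
  return { fulfillmentText: text };
}
```

Because each intent is its own handler, adding a new phase of the visit is just a matter of adding one intent in the Dialogflow console and one entry in the reply table.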


  • Our initial plan depended on the Amazon Echo Dot instead of the Google Home Mini. Amazon Web Services proved difficult to work with, and it didn't end there: we also had to use Amazon Lex to get the state machines and bots running, which wasn't very user friendly and made things even harder. This led to a change of platform, and we switched to the Google Home Mini instead.
  • One more thing that made the project's execution harder was that two of our teammates were not computer science majors and could not help much with software-related tasks, though they contributed in many other ways.
  • After all that, at the very end we ran into a hardware problem. We had decided to implement a VR "patient" alongside our Google medical assistant. However, after completing our implementation of a 3D VR mobile character in Unity, the Oculus Rift was unable to sync with our devices because we did not have a machine with an HDMI port linked directly to the GPU. This left us with an unfinished VR application that we weren't able to test and improve.


  • Our entire group collaborated and built a successful project from scratch within the 36-hour limit, despite facing many setbacks.
  • Every piece of software, framework, and technology we used this weekend was completely new to us and required quick learning and adaptation.

  • We got the Google Patient Guru to function exactly the way we imagined, and even added new features such as a VR assistant to improve the project.
  • We feel that perfecting this project will provide much-needed training for medical students, without them having to search for real patients or hire actors to practice with.


  • This entire process was a great learning experience that not only introduced us to many types of technologies and frameworks but also allowed us to work on our problem-solving skills and ability to adapt.
  • Google Cloud: Our Google Patient Guru was completely constructed via Google Cloud APIs on Dialogflow.
  • Unity: Learned, designed, and implemented a 3D VR character animation that runs on Unity and targets the Oculus Rift.
  • AWS, Amazon Lex: Although we moved away from Lex, we still learned how to create a functional conversational bot using Amazon APIs.


The current prototype works well for our problem statement and goals, but the future is all about expanding this concept into other fields. The end goal is to reduce users' effort, time, and money by providing them with a human-like assistant in the form of a Google Home. Building the project on Google Cloud APIs gives us the added advantage of being able to expand into various web applications in the future. Right now it helps medical students, but the number of problems our Google Patient Guru could eventually solve is endless.
