Inspiration

With the onset of the COVID-19 pandemic in early 2020, the US healthcare infrastructure failed to support those who already lacked quality healthcare before COVID: remote communities and minority groups. According to the CDC, these communities face greater risk from COVID-19 and lack basic healthcare access due to a “lack of transportation, child care difficulties, inability to take time off work, and language/cultural barriers”. As members of the Asian immigrant community, which includes many ESL and low-income individuals, we have seen first-hand how these problems jeopardize the health of our loved ones. With this in mind, we were inspired to create a virtual environment where patients can access quality healthcare from home.

What it does

iDoctor strives to protect those at high risk of COVID-19, such as the elderly, low-income, and chronically ill, while also elevating telehealth. Unlike other health apps that simply let you log symptoms, iDoctor accommodates non-native English speakers and contains the features needed to complete a full online doctor’s visit. For those with chronic illnesses, iDoctor can prove life-saving: it closely monitors symptoms, alerts doctors, and lets patients ask questions immediately through the messages feature, all while fostering technological inclusivity. iDoctor can also support long-term health decisions, such as whether to continue a chemotherapy treatment, by letting users review their side effects over time in the reports feature.

How we built it

We used the browser’s Speech Recognition API (part of the Web Speech API documented by Mozilla) and the Google Cloud Translation API to build a tool that lets users log their symptoms comfortably in their native language. We built the website itself with Django (Python), integrating the features so that a user’s symptoms are automatically pulled into the generated report. On the front end we used HTML, CSS, and JavaScript. As first-time web developers, we’re especially proud of shipping a fully integrated Django site.

Fully implemented web features:

Voice recognition: forms can be filled out entirely by voice, accommodating communities that are less tech-savvy (e.g., elderly users who struggle to type on devices). The Django view that receives the dictated text is sketched below.
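The speech recognition itself runs in the browser, which submits the transcript like any other form field; here is a minimal, hypothetical sketch of the receiving Django view (the model, field, and URL names are assumptions, not our exact code):

```python
# views.py -- hypothetical sketch: the browser's SpeechRecognition interface
# fills the "symptom_text" field with the dictated transcript, and this view
# stores it as a symptom entry for the logged-in patient.
from django.contrib.auth.decorators import login_required
from django.shortcuts import redirect, render

from .models import SymptomEntry  # assumed model with `patient` and `description` fields


@login_required
def log_symptom(request):
    if request.method == "POST":
        transcript = request.POST.get("symptom_text", "").strip()
        if transcript:
            SymptomEntry.objects.create(patient=request.user, description=transcript)
            return redirect("symptom-list")  # assumed URL name
    return render(request, "symptoms/log_symptom.html")
```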

Translation API: a clear, simple way to make symptom reporting easier for non-native English speakers, fostering technological inclusivity in remote communities.
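For illustration, translating a symptom description into English with the google-cloud-translate Python client looks roughly like this (the wrapper function is ours, and it assumes Google Cloud credentials are already configured):

```python
# translate_symptom.py -- minimal sketch using the Google Cloud Translation
# API (v2 client); assumes GOOGLE_APPLICATION_CREDENTIALS points at a key file.
from google.cloud import translate_v2 as translate


def translate_to_english(text: str) -> dict:
    """Translate a symptom description into English, auto-detecting the source language."""
    client = translate.Client()
    result = client.translate(text, target_language="en")
    return {
        "original": text,
        "english": result["translatedText"],
        "detected_language": result.get("detectedSourceLanguage"),
    }


if __name__ == "__main__":
    # Spanish input meaning "I have had a headache and fever since yesterday"
    print(translate_to_english("Tengo dolor de cabeza y fiebre desde ayer"))
```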

Calendar: a fully functioning calendar app built in Django that logs appointments and sends reminders. A stripped-down sketch follows.
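One possible shape for that feature, with illustrative field names and a simple reminder helper (not our exact implementation):

```python
# appointments/models.py -- hypothetical sketch of the appointment model and a
# helper that emails reminders for appointments in the next 24 hours.
from datetime import timedelta

from django.conf import settings
from django.core.mail import send_mail
from django.db import models
from django.utils import timezone


class Appointment(models.Model):
    patient = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    doctor_name = models.CharField(max_length=100)
    scheduled_for = models.DateTimeField()
    reminder_sent = models.BooleanField(default=False)


def send_upcoming_reminders():
    """Email a reminder for every un-reminded appointment within the next 24 hours."""
    now = timezone.now()
    upcoming = Appointment.objects.filter(
        scheduled_for__range=(now, now + timedelta(hours=24)),
        reminder_sent=False,
    )
    for appt in upcoming:
        send_mail(
            subject="Appointment reminder",
            message=f"You have an appointment with {appt.doctor_name} at {appt.scheduled_for:%B %d, %I:%M %p}.",
            from_email=None,  # falls back to DEFAULT_FROM_EMAIL
            recipient_list=[appt.patient.email],
        )
        appt.reminder_sent = True
        appt.save(update_fields=["reminder_sent"])
```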

Challenges we ran into

  • It was our first time using Django, so we had to work with what we learned from tutorial videos during the hackathon.
  • We had limited experience with front-end languages like HTML and CSS, and virtually none with JavaScript.
  • We had very limited experience with APIs, and it was our first time working with Google Cloud; connecting the symptoms and reports features in the back end so that they stayed in sync was tricky.
  • Prototyping and deciding which features to prioritize for our app.
  • Splitting up the work and coordinating as a team proved difficult because everything was done over Zoom and we lived in different time zones (one of our members worked from Singapore).

Accomplishments that we're proud of

Despite being a minimum viable product with only the core features, our web application successfully syncs a user’s symptoms with a generated report. We also managed to use HTML, CSS, and JavaScript, none of which we were familiar with at the start. We believe our app could help hospitals struggling to transition to online patient-physician interactions.
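Conceptually, the sync works because the report page is rendered directly from the same symptom records the patient logs, so new entries appear automatically; a minimal sketch with assumed app, model, and field names:

```python
# reports/views.py -- hypothetical sketch: the report is built from the
# patient's logged symptom entries, so new entries show up in it automatically.
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

from symptoms.models import SymptomEntry  # assumed app and model names


@login_required
def report_view(request):
    entries = SymptomEntry.objects.filter(patient=request.user).order_by("-created_at")  # assumed timestamp field
    return render(request, "reports/report.html", {"entries": entries})
```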

What we learned

We learned the basics of Django, HTML, CSS, JavaScript, and APIs. We also sharpened our skills in brainstorming (throwing out lots of ideas is the way to go), picking up new tools on the fly, and searching for how others have solved similar problems and adapting those solutions to our needs. Our first hackathon, and our first time working together, turned out to be an amazing experience, and we are now more confident in our design, coding, and presentation skills.

What's next for iDoctor

Going forward, we see numerous opportunities to improve our app. First, we would finish building the features we could not get to, such as the ability to upload images of injuries or other visible symptoms for a doctor to examine. Second, we could polish our existing features, for example by adding an analysis of symptoms to the report. Finally, we could use machine learning to assess the progression of a disease or estimate the efficacy of a drug, storing that data to fine-tune the model’s predictions for each user over time.
