Remote and marginalized communities already struggle to access medical care when they need it most. With treatment delays amplified by COVID-19, missing or inaccurate medical information leads to uninformed decisions that can worsen existing illnesses and symptoms or cause unwanted new ones.

We discovered that 1 in 5 Canadians wait over 7 days for a family doctor consultation. Worse, in 2020 the Fraser Institute reported that Canadians faced a median wait of 22.6 weeks to receive medically necessary treatment.

Rather than spending those long waits in pain, Canadians would benefit from tracking their symptoms daily: they would better understand their health condition and have an accurate, up-to-date personal medical record to show their doctor at the next appointment.

How can we solve this problem with technology? Let us introduce you to TRACE.

What it does

TRACE is an application that lets you document any symptoms or side effects you experience day to day and tracks your health condition over time. With TRACE, you have a well-documented record of every issue (i.e., symptom) experienced since your last visit to a medical professional, alongside any prescriptions taken over that period.

The application generates monthly reports from the user's entries, which users can share with doctors and other healthcare professionals to make the next appointment more efficient. TRACE lets doctors visualize all treatments currently in use alongside their side effects, supporting more effective healthcare decisions. By making health journaling quick and easy, TRACE helps users improve treatment adherence and, in turn, their overall health.

How we built it

The prototype design of the application was built in Figma. We implemented image-to-text detection of prescription drugs using Google's Vision API (in Python and JavaScript).
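A minimal sketch of the label-OCR step, using the official google-cloud-vision Python client (credentials are assumed to be set via GOOGLE_APPLICATION_CREDENTIALS). The function and helper names here are illustrative, not from the TRACE codebase:

```python
def detect_label_text(image_bytes: bytes) -> str:
    """Return all text Vision's OCR finds in a photo of a drug label."""
    # Lazy import: the cloud SDK is only needed for the actual API call.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)
    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    # The first annotation holds the full detected text block.
    annotations = response.text_annotations
    return annotations[0].description if annotations else ""


def drug_name_guess(detected: str) -> str:
    """Heuristic (assumption): the drug name is usually the first non-empty line."""
    for line in detected.splitlines():
        if line.strip():
            return line.strip()
    return ""
```

Running `detect_label_text` on a photo of a pill bottle and passing the result through `drug_name_guess` gives a candidate entry for the prescription log.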

Challenges we ran into

As beginner hackers from different educational backgrounds, none of us had ever developed a mobile app. We strove to design a user-friendly UI/UX that fully captured the range of functionality we wanted to offer potential TRACE users.

We also wanted to integrate the Google Vision API (OCR) into the final product, but full integration proved difficult in the time available. Instead, we built it as a standalone working module so we could run test samples on images of common prescription drugs.

Accomplishments that we're proud of

We are proud of bringing a design solution to life in only 36 hours. Though we are not experienced coders, we shared the same excitement and passion for this project and cause. We are grateful for this experience to meet new people, think up new ideas, and learn new skills!

What we learned

We learned how to use the Google Vision and Speech-to-Text APIs on the Google Cloud Platform, which none of us had any prior experience with. Design-wise, we learned a lot about optimizing user experience flows, conducting additional research to ensure our app design was user-friendly. Finally, our market research revealed many gaps in the healthcare industry, which we hope to continue tackling in the future!

What's next for TRACE

At TRACE, we value improving the accessibility and convenience of the application for our users.

We're adding the option for one device to hold multiple accounts so that a family can keep everyone's medical symptom records in one place. This makes TRACE convenient and accessible for households with only one device, and a perfect choice for parents who want all their children's medical updates on hand at the next doctor visit.

We'd also like to design integration with Apple's Health App for increased user accessibility.

We will also build out the mobile platform and integrate the Google Vision API OCR (for convenient logging of prescription drugs by photographing the label) and the Speech-to-Text API (for simple logging of symptoms by voice). This will enable the full range of features designed in our Figma prototype.
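A sketch of how the voice symptom logging could work, using the official google-cloud-speech Python client (ambient credentials assumed). The function names and the shape of the journal record are our illustrative assumptions, not a finalized TRACE schema:

```python
from datetime import date


def transcribe_symptoms(audio_bytes: bytes) -> str:
    """Send a short voice note to Speech-to-Text and return the transcript."""
    # Lazy import: the cloud SDK is only needed for the actual API call.
    from google.cloud import speech

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    # Take the top alternative from each recognized segment.
    return " ".join(r.alternatives[0].transcript for r in response.results)


def journal_entry(transcript: str, day: date) -> dict:
    """Shape a transcript into a dated symptom record (hypothetical format)."""
    return {"date": day.isoformat(), "symptoms": transcript.strip()}
```

A recorded voice note would flow through `transcribe_symptoms` and then `journal_entry` to become one day's row in the user's symptom log.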

Built With

Figma, Google Cloud Vision API, Google Cloud Speech-to-Text API, Python, JavaScript