Inspiration

Simply put, we set out to make the healthcare experience more holistic. Our healthcare systems so often seek to generalize: to classify our qualitative, subjective experiences into black-and-white boxes. But in an era driven by data, these heuristics hurt us more often than they help us. Through advancements in wearable technologies, such as the Fitbits and Apple Watches that inspired our project, we can now gather a vast range of health data with incredible ease; seeing these capabilities, our team resolved to interpret these metrics in more accessible, inclusive, and thoughtful ways.

What it does

When logging into InSync, users first authenticate and connect their Fitbit and MyChart accounts, allowing the application to access their wearable's biometric data and electronic health records, respectively. From the former, we extract a number of raw biometrics, such as blood oxygen saturation (SpO2) and heart rate (bpm), and process each in two ways: first, we graphically display the data itself on the user's dashboard, and second, we pass it into a Gemini-based chatbot that lets the user engage with the data in more meaningful ways. Users' dashboards are updated with new biometrics every half hour, and if any worrying trends appear over time -- such as noticeable increases in a user's average heart rate or drops in their blood oxygen saturation -- the application flags them for the user.
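The trend-flagging step could be sketched roughly as below. This is a minimal illustration, not InSync's actual logic: the thresholds, window size, and function names are all hypothetical placeholders.

```python
from statistics import mean

# Hypothetical thresholds -- illustrative only, not InSync's actual values.
SPO2_FLOOR = 95.0        # percent; flag if recent readings average below this
HR_RISE_FACTOR = 1.15    # flag if recent average HR exceeds baseline by 15%

def flag_trends(heart_rates, spo2_readings, window=48):
    """Compare the most recent `window` half-hourly samples (one day at
    30-minute intervals) to the baseline formed by all earlier samples,
    and return a list of human-readable flags."""
    flags = []
    if len(heart_rates) > window:
        baseline = mean(heart_rates[:-window])
        recent = mean(heart_rates[-window:])
        if recent > baseline * HR_RISE_FACTOR:
            flags.append(f"Average heart rate up: {baseline:.0f} -> {recent:.0f} bpm")
    # For SpO2, a sustained low average is concerning regardless of baseline.
    if spo2_readings and mean(spo2_readings[-window:]) < SPO2_FLOOR:
        flags.append("Blood oxygen saturation trending low")
    return flags
```

A real implementation would likely use more robust statistics (and clinical guidance) than a simple moving-average comparison, but the shape of the check is the same.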

How we built it

Due to the short timescale of this project, the cost barrier for many medical-grade APIs, and our lack of access to real users' biometric data, we instead created sample data (CSV files) for two example users as a proof of concept -- one whose data indicates potential medical concerns and another whose data does not -- to demonstrate InSync's intended functionality. However, we did still write scripts to integrate with the Fitbit API, so this functionality is technically available for InSync, assuming it's given access to a user's wearable for real-time data. We built our backend with Python (Flask), integrating Gemini's API to create both a background processing unit, which takes in the biometric data and checks for anomalies, and a medical chatbot for users to interact with. We built our frontend primarily with React, Tailwind CSS, TypeScript, and JavaScript, using these frameworks to create an accessible user interface.

Challenges we ran into

When dealing with data privacy, we ran into a number of issues; initially, we considered simply removing all personally identifiable information (PII), but realized this slightly defeated the purpose of personalizing the experience to users' unique demographics. We then tried encrypting the data first using TenSEAL, but doing so prevented the model from processing the data at all. Ultimately, we realized that the ideal option for InSync would be to use edge computing techniques to process data locally; however, between time, cost, and public access constraints, we were unable to fully develop this functionality.
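The PII-removal approach we first considered amounts to something like the sketch below; the field names are hypothetical. It shows the trade-off we ran into: dropping direct identifiers is easy, but the coarse demographics that survive are exactly what's needed for personalization, so cutting deeper erodes the product's value.

```python
# Hypothetical field names -- a sketch of the PII-stripping approach we
# first considered, not InSync's actual schema.
PII_FIELDS = {"name", "email", "address", "date_of_birth", "mrn"}

def strip_pii(record):
    """Return a copy of a patient record with direct identifiers removed,
    keeping non-identifying demographics (age, sex, etc.) intact."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}
```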

Accomplishments that we're proud of

For two of us, submitting a project to our first-ever hackathon! We're also proud that this project gave us the chance to practice integrating different technologies: multimodal data collection, third-party authentication, and web development, among others.

What we learned

As mentioned above, this project allowed us to translate our skills to a real-world, open-ended project, and it forced us to adapt when our original ideas didn't pan out the way we'd hoped.

What's next for InSync: Multimodal Analysis for Personalized Healthcare

Moving forward, there are a number of changes that would vastly improve InSync's functionality. For one, we'd fully integrate with the Fitbit and Epic MyChart APIs -- and with other wearable devices -- to allow any user to create an account and process their own data. We'd also like to expand the insights and recommendations offered by swapping the Gemini-based processing unit for a medically trained model such as Med-PaLM, which could offer more specialized insights or more intentionally adapt its responses to account for users' risk factors (genetic, environmental, etc.).
