Inspiration
We wanted to automate and streamline the patient intake process, addressing the fragmented and manual methods common in healthcare. Many clinics still rely on repetitive, error-prone manual forms that waste time for patients and staff alike. Meanwhile, valuable data from patient-owned medical devices (like glucose monitors or wearables) remains isolated. OneChart aims to be a software framework that fixes this, unifying these data sources to make intake faster, more automated, and more efficient.
What it does
OneChart's primary function is to automate and accelerate patient intake by unifying data from various health-related sources, as well as automating intake questions through natural language processing. It provides backend endpoints intended to connect with and ingest real-time data from numerous medical devices, such as insulin trackers, heart rate monitors, and other wearables. This allows a patient's chart to be automatically populated with their latest health metrics before they even see the doctor. In an actual primary care setting, this feature set could be used to automate the retrieval and input of data from devices like scales, blood pressure monitors, and similar equipment.
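As a sketch of what such an ingestion endpoint might look like in our Flask backend (the route and field names here are hypothetical illustrations, not the actual OneChart API):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# In-memory chart store keyed by patient ID (a hypothetical stand-in;
# a real deployment would use persistent storage).
charts = {}

@app.route("/api/device/<patient_id>", methods=["POST"])
def ingest_reading(patient_id):
    """Accept a JSON device reading such as {"metric": "heart_rate", "value": 72}
    and merge it into the patient's chart."""
    reading = request.get_json(force=True)
    chart = charts.setdefault(patient_id, {})
    chart[reading["metric"]] = reading["value"]
    return jsonify(chart)
```

A wearable (or a small bridge script polling its vendor API) would POST its latest reading here, so the chart accumulates current metrics before the visit begins.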
To supplement this automated data stream, OneChart uses an LLM-powered chatbot built on NVIDIA’s Nemotron Nano 9B v2. This chatbot serves as a secondary intake tool, efficiently collecting essential patient information not provided by connected devices—such as chief complaints, symptoms, and medical history—through natural conversation, replacing a traditional form.
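The conversation itself is driven by the model, but the bookkeeping of which intake fields remain to be collected can be sketched without it (the field names are illustrative, not OneChart's actual intake schema):

```python
# Fields the chatbot must collect when no connected device supplies them
# (illustrative list; the real intake schema may differ).
REQUIRED_FIELDS = ["chief_complaint", "symptoms", "medical_history"]

def next_missing_field(collected):
    """Return the next intake field not yet answered, or None when intake is done."""
    for field in REQUIRED_FIELDS:
        if field not in collected:
            return field
    return None

def record_answer(collected, field, answer):
    """Store a patient's answer (in practice, extracted by the LLM from free text)."""
    collected[field] = answer.strip()
    return collected
```

Driving the chat this way keeps the LLM focused: each turn, the model only needs to elicit and extract the one field that is still missing.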
It's important to note that OneChart is not designed to replace existing patient charting systems like MyChart. Reinventing those complex systems would be pointless. Instead, this project is designed to be a simple, lightweight, and flexible component that fits into that larger ecosystem. Its core strength is its ability to be used in combination with these established tools through its API and data endpoints, acting as the crucial link between a patient's personal devices and their official medical record.
Once all information is gathered (both from the endpoints and the chatbot), a comprehensive, unified summary is generated, providing a complete and up-to-the-minute health profile viewable on the care team's devices.
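A minimal sketch of how the two streams might be merged into one readable summary (the format and field names are assumptions for illustration, not OneChart's actual output):

```python
def build_summary(device_data, intake_data):
    """Combine device metrics and chatbot-collected answers into one profile text."""
    lines = ["OneChart patient summary", "-- Device readings --"]
    for metric, value in sorted(device_data.items()):
        lines.append(f"{metric}: {value}")
    lines.append("-- Patient-reported --")
    for field, answer in sorted(intake_data.items()):
        lines.append(f"{field}: {answer}")
    return "\n".join(lines)
```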
How we built it
We built OneChart using HTML, CSS, and JavaScript on the frontend, Python (Flask) on the backend, and NVIDIA’s Nemotron Nano 9B v2 as the basis for our supplemental text-based data collection.
We specifically chose NVIDIA’s Nemotron Nano 9B v2 for our chatbot because of its efficiency. While our task (extracting patient information from text) is important, it doesn't require a massive, 100-billion+ parameter model. Nemotron Nano is a smaller, more specialized model that handles this job well. This approach makes our application more accessible and efficient, as smaller models require less computational power to run. This not only speeds up performance but is also a more environmentally friendly choice than larger, more resource-intensive models. It is worth noting that our project runs on our own home computers: the version demonstrated here is running on a consumer graphics card (an NVIDIA RTX 3060).
Accomplishments that we're proud of
Designed a scalable backend architecture capable of connecting to and unifying data from diverse external health tools.
Successfully integrated NVIDIA’s Nemotron model as a chat system for collecting supplemental patient-reported data.
Created a functional framework that proves the concept of unified, automated patient charting.
Created a functional framework with advanced LLM features capable of running on consumer hardware at lower energy demands.
What we learned
How to design a robust API framework capable of receiving external health data.
How to integrate advanced LLMs (like Nemotron) to complement an automated data pipeline rather than just replace a simple form.
How to run OneChart
A running instance of ollama is required on the host machine. If you are not using the Nemotron model, the model name will need to be changed in llminterface.py; this is a one-line change in the initialization of the ChatSession class.
Make sure all requirements are installed from requirements.txt.
Run the main app with python3 app.py. By default, the app will run on port 5000.
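Assuming an initializer along these lines (the exact signature in llminterface.py may differ, and the default tag below is a placeholder), swapping in another ollama model is a single-line change:

```python
class ChatSession:
    def __init__(self, model="nemotron-model-tag"):
        # Replace the default above with the ollama tag of whichever
        # model you have pulled, e.g. ChatSession(model="llama3").
        self.model = model
```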