Inspiration

Oftentimes, one of the largest bottlenecks in a radiologist's workflow is documenting and communicating findings to the corresponding parties, i.e. the primary care provider (PCP) and the patient. RadStream AI seeks to alleviate this by providing comprehensive data-analysis and report-generation capabilities while maintaining human-in-the-loop involvement.

AI medical imaging models like those provided by HOPPR are excellent at producing preliminary analyses of medical scans, and large language models are great at aggregating and summarizing data. But these systems cannot replace the expertise of a medical professional, which is why RadStream AI seeks to complement existing workflows rather than disrupt them.

What it does

RadStream AI is a full-stack workflow tool for healthcare providers, built on a robust Django and React.js platform.

  1. Contextual Upload: A radiologist or PCP uploads the patient's scan (e.g., from the sample DICOM images) and adds critical context, including Patient History and Reason for Visit (from our mock patient dataset).

  2. AI Analysis (HOPPR): The system calls the HOPPR Python SDK, running the scan against relevant Binary Classification Models (like mc_chestradiography_pneumothorax) and the Vision-Language Model (VLM).

  3. Report Generation: RadStream AI compiles multiple data sources to prepare a comprehensive report: the patient's medical history, relevant notes from previous primary healthcare providers, and the findings from the most recent scan. This ensures the proper context is used when generating findings and suggestions.

  4. Human-in-the-Loop Verification: The draft is presented to the radiologist in a simple verification dashboard, where they can review and approve it; once approved, the report is sent to the PCP and the patient.
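The four steps above can be sketched as a single backend pipeline. This is a minimal, dependency-free sketch: the `Study` dataclass, its field names, and the stubbed model calls are illustrative stand-ins for our actual Django models and HOPPR SDK calls, not the real implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Study:
    # Illustrative shape of a study record, not the real schema.
    patient_history: str
    reason_for_visit: str
    scan_path: str
    findings: list = field(default_factory=list)
    status: str = "uploaded"

def run_classifiers(study: Study) -> None:
    # Stub for the HOPPR binary-classification step
    # (e.g. mc_chestradiography_pneumothorax).
    study.findings.append(
        {"model": "mc_chestradiography_pneumothorax", "positive": False}
    )

def draft_report(study: Study) -> str:
    # Stub for the LLM synthesis step: combine patient context
    # and model findings into a single draft.
    lines = [
        f"Reason for visit: {study.reason_for_visit}",
        f"History: {study.patient_history}",
    ]
    lines += [
        f"- {f['model']}: {'positive' if f['positive'] else 'negative'}"
        for f in study.findings
    ]
    return "\n".join(lines)

def process_study(study: Study) -> str:
    run_classifiers(study)
    draft = draft_report(study)
    # Human-in-the-loop gate: nothing is sent until a radiologist
    # approves the draft.
    study.status = "awaiting_review"
    return draft
```

In the app, the radiologist's approval on the dashboard is what moves a study past the `awaiting_review` gate; the sketch stops there on purpose.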

How we built it

  • Backend: Python, Django, Django REST Framework
  • Database: SQLite3 (using the schema we designed for Studies and Findings)
  • Frontend: React.js, Tailwind CSS
  • Architecture: Our core agentic system was built using Hugging Face's smolagents library, allowing the app to respond autonomously to user input and tailor its data analysis to what the PCP expects and the patient requires.
  • Core AI (HOPPR): We used the HOPPR Python SDK to perform inference for both binary classification and VLM findings.
  • AI (Synthesis): A generative LLM (Llama3.3-70B) synthesizes all inputs into the final report draft.
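The agentic layer follows the usual tool-calling pattern: the model picks a named tool, and the tool's output is fed back into its context. Our build uses smolagents for this; the registry below is a dependency-free illustration of the same pattern, with hypothetical tool names standing in for our real data-lookup and HOPPR wrappers.

```python
# Minimal tool-registry sketch of the agent pattern. Tool names and
# return values are illustrative, not the app's actual tools.
TOOLS = {}

def register(fn):
    """Register a function under its name so the agent can call it."""
    TOOLS[fn.__name__] = fn
    return fn

@register
def fetch_patient_history(patient_id: str) -> str:
    # Stub for the mock patient-dataset lookup.
    return f"history for {patient_id}"

@register
def classify_scan(scan_path: str) -> str:
    # Stub for a HOPPR classification call.
    return "pneumothorax: negative"

def run_agent(plan):
    # 'plan' stands in for the (tool, args) sequence the LLM would
    # emit; each result is accumulated into the shared context.
    context = []
    for name, args in plan:
        context.append(TOOLS[name](*args))
    return context
```

In smolagents the same idea is expressed with the `@tool` decorator and a `CodeAgent`, which plans the tool sequence itself instead of taking a fixed `plan`.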

Challenges we faced

Our main challenge was the unpredictability of the inference models. We suspect that, due to high traffic, certain classification models would respond very slowly, which in turn slowed down our app. We also faced some challenges integrating the different parts of the project, such as the React.js frontend with the Django-based backend and agentic system.
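A standard way to cope with slow or flaky inference endpoints is to wrap each call in a retry with exponential backoff. A minimal sketch (the attempt counts and delays are illustrative, not tuned values from the app):

```python
import random
import time

def call_with_retry(call, attempts=3, base_delay=1.0):
    """Retry a zero-argument inference call on timeout.

    'call' wraps any model request; TimeoutError stands in for
    whatever the client raises when the endpoint is too slow.
    """
    for attempt in range(attempts):
        try:
            return call()
        except TimeoutError:
            if attempt == attempts - 1:
                # Out of retries: surface the failure to the caller.
                raise
            # Exponential backoff with a little jitter so concurrent
            # requests don't all retry at the same instant.
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)
```

The human-in-the-loop design softens the cost of a slow call: the radiologist reviews the draft asynchronously anyway, so a delayed classification only postpones the draft, not the whole workflow.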

What we learned

We learned how to ideate in high-intensity environments and how agentic architectures can be leveraged to enable autonomous workflows that adapt to user needs on the fly. Surprisingly, we also learned quite a bit about radiology, the work done by radiologists, and their expectations for tools that would complement their skill sets.
