OMNI

Motivation

Modern medical diagnostic tools like the ECG and pulse oximetry have saved innumerable infant lives around the world. However, infant mortality in impoverished countries has not been adequately addressed by medical device manufacturers. Respiratory ailments like pneumonia and bronchitis are a leading cause of infant deaths in such countries, and the conventional medical care model, which provides monitoring in primary healthcare centers, is often inaccessible to these communities. The advent of low-cost, open-source biosensing hardware like the OpenBCI, coupled with deep learning models on edge devices like the Raspberry Pi, opens a new avenue for delivering in-situ medical monitoring at a fraction of the cost of traditional diagnostic equipment.

OMNI provides a deep learning algorithm to robustly extract the Heart Rate (HR) and Breathing Rate (BR) from a single-lead ECG. Furthermore, the algorithm runs on a Raspberry Pi, operating on ECG signals streamed in real time from the OpenBCI Ganglion board.
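The OMNI model itself is not reproduced here, but as a rough illustration of what a 1D deep learning model for this task can look like, here is a minimal PyTorch sketch. The layer sizes, the two-value regression head, and the 250 Hz / 10-second input window are assumptions for illustration, not the actual OMNI architecture:

```python
import torch
import torch.nn as nn

class ECG1DNet(nn.Module):
    """Toy 1D CNN mapping a single-lead ECG window to (HR, BR) estimates."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(32, 2)  # two outputs: heart rate, breathing rate

    def forward(self, x):
        # x: (batch, 1, samples)
        z = self.features(x).squeeze(-1)
        return self.head(z)

model = ECG1DNet()
window = torch.randn(4, 1, 2500)  # batch of four 10 s windows at 250 Hz
out = model(window)               # shape: (4, 2)
```

A 1D CNN like this is a natural fit for edge deployment, since convolutions over a fixed-length window keep the parameter count small enough for a Raspberry Pi.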

Proposed Application

Why OMNI?

OMNI is inspired by initiatives like the Glia Project and the Embrace baby warmer, which seek to democratize access to medical hardware. We see OMNI aiding these efforts by showing the potential of 1D deep learning for robust detection of heart rate and breathing rate. The falling cost of single-board computing hardware opens the possibility of delivering effective monitoring to isolated communities that might not have internet connectivity.

Here is the edge implementation of OMNI in action: given noisy (motion-corrupted) 10-second ECG signals, it performs real-time inference for heartbeat and breathing detection.
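Once beats have been detected in a 10-second window, turning them into a rate is simple arithmetic: the heart rate in BPM is 60 divided by the mean interval between consecutive peaks. A minimal sketch (the `fs=250` sampling rate and the peak spacing are assumed values for illustration; breathing rate follows the same formula applied to breath intervals):

```python
import numpy as np

def rate_bpm(peak_indices, fs):
    """Convert detected peak sample indices into a rate in beats/breaths per minute."""
    intervals = np.diff(peak_indices) / fs  # inter-peak intervals in seconds
    return 60.0 / intervals.mean()

# hypothetical detections: one R peak every 200 samples at 250 Hz
# -> 0.8 s between beats -> 75 BPM
peaks = np.arange(0, 2500, 200)
bpm = rate_bpm(peaks, fs=250)  # -> 75.0
```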

Features

  • Well-documented, modular code built on PyTorch.
  • Model visualization using TensorBoard (now native to PyTorch).
  • PyQt-based GUI.
  • First-aid directions in the event of an alarm.
  • An edge implementation of the algorithm on the Raspberry Pi, using ECG acquired from the OpenBCI board.

Dependencies

System configuration

  • Ubuntu 16.04
  • Nvidia 1080Ti - (Required for training the model)

Hardware Requirements (Edge implementation)

[Edge hardware block diagram]

Components needed:

  1. OpenBCI Ganglion kit

    Open-source hardware for research-grade biosignal acquisition

  2. Raspberry Pi 4

  3. Raspberry Pi 4 cooling case

  4. Shirt with conductive textile electrodes [$1 per electrode]

Required Libraries

All of the required libraries are listed in a script in the Git repo.

Issues we ran into

  • Streaming the ECG from the OpenBCI Ganglion to the Raspberry Pi --> solved using the bluepy library.
  • Developing a generalized model for breathing rate. Most public databases do not contain data collected from neonates or infants, and the one database that did have the required data lacked annotations. --> A tuned application of the find_peaks function from the SciPy library was used to generate jittered annotations. Besides solving the annotation issue, the jitter also improved the model's performance on noisy ECG.
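The annotation step above can be sketched on synthetic data. This is a minimal illustration, not the actual tuning used in OMNI: the sampling rate, spike spacing, noise level, and the `height`/`distance` thresholds are all assumed values, and `scipy.signal.find_peaks` is tuned so that only R-peak-like spikes survive before a small random jitter is added to the labels:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                        # assumed sampling rate in Hz
rng = np.random.default_rng(0)

# synthetic "ECG": unit spikes every 200 samples (75 BPM) plus mild noise
ecg = np.zeros(10 * fs)
ecg[100::200] = 1.0
ecg += 0.05 * rng.standard_normal(ecg.size)

# height/distance chosen so only R-peak-like spikes survive the noise
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))

# jitter the detected locations by a few samples to regularize the labels
jittered = peaks + rng.integers(-3, 4, size=peaks.size)
```

Training against slightly jittered labels acts like label-noise augmentation, which is consistent with the observed robustness gain on noisy ECG.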

What's next for OMNI?

Mission Statement

We are looking to usher medical diagnostics, a largely proprietary domain, into the era of open source. We want to explore the possibility of running scalable deep learning models on the edge, which in parallel preserves patient privacy and enables quicker access to medical care.

Challenges

  • Implementing the above algorithm in a hardware-optimized fashion.
  • Large-scale validation of the model in real-world scenarios.

Team Bios

  • Sricharan Vijayarangan is a Project Engineer at HTIC, IITM, India. He is interested in providing scalable solutions at the intersection of AI, healthcare, and electronics.
  • Prithvi Suresh is an intern at HTIC, IITM. He is a machine learning enthusiast working at the intersection of AI and systems.
  • Deepak Vagish is an intern at HTIC, IITM. He has dabbled in various fields of computer science, including cryptography, machine learning, and quantum computing.
  • Vignesh Ravichandran is a Graduate Student at the University of Rhode Island, South Kingstown, Rhode Island, US. He is a BCI/HCI enthusiast working at the intersection of VR/AR and AI.

Credits

A couple of other people helped us in setting up all the elements of the project.
  • Logo - Harshita Suresh
  • Video - Ramachandran Vennimalai, Arvind V Divakar, Jose Miguel Canton, Victor Chung
