Inspiration 😇

The concept was sparked by the notorious COVID-19. Not only has COVID-19 directly killed millions of people globally, but people who suffer from heart disease and used to visit their doctor regularly to have their hearts checked can no longer do so owing to social distancing rules. COVID-19 vaccinations are, of course, growing by the day, but even if the coronavirus fades into a kind of contemporary flu, our way of life has changed tremendously, and that means doing things from a distance. As a result, I saw a need for a way for patients suffering from cardiac issues to accurately self-screen remotely, as well as a way for those patients to connect with their physicians. FLOW arose as a result.

What it does

Flow is available as both a smartphone app and a physical device. The physical device functions as a digital stethoscope, connecting to the app and streaming data over Bluetooth.

The app connects to the stethoscope we made, which streams data over Bluetooth. That data is then examined by our custom-built machine learning model, which reports whether the patient has a regular or irregular heartbeat. Once the app has analyzed the patient's heartbeat, the new recording appears on their profile alongside prior recordings. If the heartbeat is irregular, the app lets the patient share the recording with a cardiologist, who can review it and confer with the patient over a live video call. We think our project creates a full FLOW between physicians and patients, resulting in an exceptional user experience.

How we built it 🤔

Hardware: ⛏ The microphone is an electret MAX4466 and the microcontroller is an ESP32, with the microphone's power connected to the GND and 3V3 pins and its output connected to pin 35. The ESP32 runs on PlatformIO and uses a custom protocol I devised to stream samples over Bluetooth asynchronously, ensuring that sampling occurs at precisely 1 kHz. Any conventional stethoscope head should work, and the mechanical framework is a basic 3D-printed frame held together with screws and an airtight seal of silicone and hot glue. The device is powered by a battery bank or a PC over USB.
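The exact wire format of the custom protocol isn't spelled out above, but as a minimal sketch, here's how the receiving side might decode one Bluetooth packet of samples. The 16-bit unsigned little-endian sample format is an assumption for illustration, not the real protocol:

```python
import struct

SAMPLE_RATE_HZ = 1000  # the ESP32 samples the mic at exactly 1 kHz

def unpack_samples(packet: bytes) -> list[int]:
    """Decode one Bluetooth packet into raw ADC samples.

    Assumes each sample is an unsigned 16-bit little-endian integer
    (the ESP32 ADC produces 12-bit readings, which fit comfortably).
    """
    count = len(packet) // 2
    return list(struct.unpack(f"<{count}H", packet[: count * 2]))

# Example: a 4-sample packet as it might arrive over Bluetooth.
demo = struct.pack("<4H", 2048, 2100, 1990, 2048)
print(unpack_samples(demo))  # [2048, 2100, 1990, 2048]
```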

Frontend: 🌟 We used Flutter for the UI and WebRTC for the video call; the app runs on both iOS and Android. It uses a plugin to scan for nearby Bluetooth devices and automatically pairs with the stethoscope we made by matching the device's name with a regex. It then sends the command to begin streaming data and stores the incoming samples in a vector. When the recording timer expires, it transmits the array of bytes to the backend to be converted to .wav format for the algorithm and for playback.
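As a small illustration of the name-matching step (the device name and pattern below are placeholders, not what our firmware actually advertises):

```python
import re

# Hypothetical pattern -- the real advertised name isn't shown in this
# post, so "FLOW...STETH" is purely illustrative.
STETH_NAME = re.compile(r"^FLOW[-_ ]?STETH", re.IGNORECASE)

def find_stethoscope(scan_results: list[str]) -> str | None:
    """Return the first scanned device name that matches our stethoscope."""
    for name in scan_results:
        if STETH_NAME.match(name):
            return name
    return None

print(find_stethoscope(["JBL Speaker", "flow-steth-01", "Keyboard"]))
# -> "flow-steth-01"
```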

Backend: 💪 The backend was created with Flask, a Python micro web framework. We initially hosted it on a DigitalOcean server but eventually chose to host it locally on a team member's computer and port-forward it, because the DigitalOcean environment had trouble resolving the dependencies required by the ML model.
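As a rough illustration, the core of such a Flask backend looks like the sketch below; the route, response fields, and `classify` stub are made up for the example, not our exact API:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def classify(raw_samples: bytes) -> str:
    # Stub for the sketch: the real backend converts the bytes to .wav
    # and runs the Keras model (see the ML section below).
    return "regular"

@app.route("/analyze", methods=["POST"])
def analyze():
    raw = request.get_data()  # raw sample bytes recorded by the app
    return jsonify({"heartbeat": classify(raw)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```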

Machine Learning: 🤓 We spent an abnormal amount of time reviewing research articles before developing the machine learning model, from basic cardiac physiology to hidden semi-Markov models (HSMMs) that use Bayesian inference and logistic regression for heartbeat segmentation. After researching the variety of approaches used on this kind of problem in the past, we combined the best of each into a hybrid. We began by preprocessing the data: denoising with a bandpass filter, normalizing, and downsampling. Then, to cut superfluous features and speed up training, we represented each audio file as a set of 13 Mel-frequency cepstral coefficients (MFCCs). We attempted to use HSMMs to segment the data into exactly 5 heartbeat cycles, each containing one S1 and one S2 sound, but we were unable to train the HSMM well enough in the little time we had. Instead, we divided the data into 5-second segments of heart-sound recordings and used their MFCCs as the dataset. Then, for the binary classification task of irregular vs. normal heartbeat, we trained a stacked, two-layer, bidirectional LSTM followed by a small dense network with a final sigmoid activation. To push through plateaus where the model stopped learning, we employed learning-rate schedulers, callbacks to checkpoint the model as it trained, early stopping, batch normalization, dropout, and several other techniques.
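To make that concrete, here's a condensed sketch of the pipeline in Python. The bandpass cutoffs, sample rates, layer sizes, and callback settings below are illustrative assumptions, not our exact values:

```python
import numpy as np
import librosa
import tensorflow as tf
from scipy.signal import butter, filtfilt

SR = 2000        # assumed post-downsampling rate (Hz)
SEGMENT_SEC = 5  # the 5-second heart-sound segments described above
N_MFCC = 13      # 13 Mel-frequency cepstral coefficients

def preprocess(path: str) -> np.ndarray:
    """Denoise, normalize, downsample, and extract MFCCs for one clip."""
    y, sr = librosa.load(path, sr=None)
    # Bandpass filter: keep roughly 25-400 Hz, where heart sounds live.
    b, a = butter(4, [25, 400], btype="bandpass", fs=sr)
    y = filtfilt(b, a, y)
    y = y / (np.max(np.abs(y)) + 1e-9)                 # normalize
    y = librosa.resample(y, orig_sr=sr, target_sr=SR)  # downsample
    y = y[: SR * SEGMENT_SEC]          # keep one 5 s segment
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=N_MFCC)
    return mfcc.T                      # (time_steps, 13) for the LSTM

# Stacked two-layer bidirectional LSTM + small dense head, sigmoid output.
inputs = tf.keras.Input(shape=(None, N_MFCC))
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Dense(16, activation="relu")(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=3),
    tf.keras.callbacks.ModelCheckpoint("best.keras", save_best_only=True),
]
# model.fit(X_train, y_train, validation_split=0.2,
#           epochs=100, callbacks=callbacks)
```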

Challenges we ran into 😬

One significant problem was connecting the phone to the microphone in order to transfer the data. We initially intended to use an I2S MEMS microphone and spent over 8 hours attempting to get it to work. We then switched to an electret microphone, which had its own set of issues, such as having to run multithreaded async loops and design our own transmission system, because an I2S-style buffered transmission over Bluetooth would not work. That led to another problem: the simplest protocol I could devise sent an array of raw data points, which we then had to figure out how to turn into a sound file.
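For illustration, here's a minimal sketch of that last step using only Python's standard library, assuming 16-bit signed samples at the device's 1 kHz rate (the actual sample format on the device may differ):

```python
import struct
import wave

def samples_to_wav(samples: list[int], path: str, rate: int = 1000) -> None:
    """Write raw data points as 16-bit mono PCM to a .wav file."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(2)      # 16-bit samples
        wav.setframerate(rate)   # 1 kHz, matching the ESP32 sampling rate
        wav.writeframes(struct.pack(f"<{len(samples)}h", *samples))

# Example: one second of a 100 Hz square wave as a stand-in signal.
demo = [8000 if (i // 5) % 2 else -8000 for i in range(1000)]
samples_to_wav(demo, "demo.wav")
```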

Another issue we encountered was that the app was unable to communicate with the API. It took us a long time to figure out why the API couldn't be deployed on DigitalOcean. We then tried hosting it locally and calling it from the app via local ports, which caused further issues that we were eventually able to resolve. Finally, the machine learning models took a long time to train, and at one point the main LSTM model meant to classify heartbeat sounds got stuck at 52 percent accuracy; it took multiple hours of debugging before it happily trained up to 82 percent accuracy with the help of batch normalization and dropout.

Accomplishments that we're proud of 😃

We are grateful that we were able to contribute to society in a way that benefits everyone: doctors and patients who use the app can connect in a new way, and we developers gained a lot of new skills that we can apply in future projects. Overall, we are happy to have overcome all of the hurdles that were put in front of us, because many of them appeared unfixable and would have permanently broken the app at the time. We're especially proud of how effectively we worked together while living in entirely different time zones and doing practically everything remotely, even though this was largely a hardware project. Technically, we are most proud of the ML model, the ESP32 communication mechanism, and the seamless integration of all of these features into the app and API.

What we learned 📖

Austin: Dhanush taught me how to use Flutter, and Ani taught me how to use ML models. As my first hackathon, it was fascinating to see an entire app come together in 36 hours, complete with login menus, a fully functional app, ML capability, an API, and a device. I also had the opportunity to work with I2S for the first time, and got experience with PlatformIO and FreeRTOS on the ESP32. It was my first time using a FIFO data structure for buffering, which piqued my interest. It was also my first time using Bluetooth with a dedicated app rather than simply typing commands into a terminal, so learning how to make the system fully automatic was fascinating.

Dhanush: Ani taught me a lot about how machine learning models work. APIs had always intimidated me until this hackathon, where I learned a lot about them and finally got the hang of them. I learned how to build authentication with MongoDB, as well as how to make API requests with methods like POST, GET, and PUT. I also dabbled in graphic design while creating the app's logo and a few other assets.

Akash: I learned a lot from my teammates, which is one of the things I love about hackathons. I picked up some unique Flutter skills from Dhanush, such as frosted-glass effects and animations. Ani taught me what a model is, how to train it, and how to build an AI pipeline, and I learned a lot about audio (analog vs. digital) from Austin. I also did some graphic design, which I thought I was horrible at, and was pleasantly surprised when I came up with something that didn't look horrible in the end.

Ani: I learned a lot about how audio processing works, including the much-discussed Fourier transforms and Markov models. Before this hackathon, I hadn't done anything with audio, so it was fascinating to discover how many features you can extract from a basic WAV file and how powerful recurrent neural networks can be.

What's next for Flow

Our ultimate ambition for Flow is to bring it to market, particularly in developing nations, and make a global impact. We believe our technology has the potential to transform healthcare. We have active plans to build on Flow, perhaps converting it into an ultra-compact, 24/7 wearable that would give doctors rich, continuous data on a person's heart. We also intend to expand the app's and API's functionality, improve the machine learning model's accuracy, and detect new cardiac characteristics. From there, we aim to grow this MVP into a full platform, complete with a healthy user base.

Built With

  • api
  • audio
  • audiosegmentation
  • bandpassfilters
  • bidirectionallstm
  • bluetooth
  • c++
  • dart
  • datanormalization
  • denoising
  • digitalocean
  • electronics
  • esp32
  • flask
  • flutter
  • health
  • hsmm
  • jitsi
  • keras
  • librosa
  • logic
  • logisticregression
  • matplotlib
  • mongodb
  • numpy
  • platformio
  • programming
  • python
  • queue
  • react
  • react.js
  • research
  • researchpapers
  • sharedpreferences
  • soldering
  • stethoscope
  • tensorflow
  • usb