Inspiration

Cardiovascular disease (CVD) is the leading cause of death globally, claiming around 17.9 million lives annually. It is prevalent in developed and developing nations alike, and the World Health Organization reports that India alone accounts for one-fifth of these deaths, particularly among younger people. Because essential healthcare services are often absent in rural areas, mortality rates there are now higher than in urban areas. Addressing this issue therefore became a priority for us.

What it does

The project provides an affordable and effective answer to this problem. The method involves collecting audio recordings of heartbeats, from clinical studies or from the general population, and screening them for irregular rhythms. Heartbeat sounds are classified automatically using deep convolutional neural networks (CNNs) and time-frequency heat maps.

The model reached approximately 97.25% test accuracy and 98.35% training accuracy. When evaluated on individual inputs, it classified heartbeat sounds correctly and with notably high confidence.

How we built it

Deep learning has seen remarkable success in audio processing in recent years. It removes much of the need for traditional signal-processing techniques, allowing standard data preparation in place of manually engineered feature sets.

Spectrograms are used in deep-learning audio applications to represent sound visually. The process begins with a WAV file containing raw audio, which is transformed into the corresponding spectrogram. Once we have the spectrogram, we can treat it as image data and apply popular CNN architectures to extract feature maps, which are condensed representations of the spectrogram image.
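As a rough sketch of this wave-to-spectrogram step (a minimal illustration using SciPy, not the project's actual pipeline; the synthetic signal, sample rate, and window parameters are all placeholders for a real heartbeat recording):

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative stand-in for a recorded heartbeat: a 5-second, 2 kHz signal.
# In practice the audio would come from a WAV file,
# e.g. via scipy.io.wavfile.read.
fs = 2000                        # sample rate in Hz (assumed)
t = np.arange(0, 5.0, 1.0 / fs)
audio = np.sin(2 * np.pi * 60 * t) * (1 + 0.5 * np.sin(2 * np.pi * 1.2 * t))

# Short-time Fourier transform: each column is the spectrum of one
# time window, giving a 2-D frequency-vs-time array.
freqs, times, Sxx = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)

# Log-scale the power so quiet components stay visible, as in the
# time-frequency heat maps typically fed to a CNN.
log_spec = 10 * np.log10(Sxx + 1e-10)
print(log_spec.shape)  # (frequency bins, time frames)
```

The resulting 2-D array can be saved as an image or fed directly to a CNN as a single-channel input.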

In this project, we describe an automated method for categorizing heart sounds that uses deep CNNs and time-frequency heat maps. The network's hyperparameters are then tuned for the task.
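To illustrate what a single convolutional layer does to such a heat map, here is a minimal NumPy/SciPy sketch with one fixed 3x3 kernel; a real network learns many such filters during training, and this is not the project's actual architecture:

```python
import numpy as np
from scipy.signal import convolve2d

# Toy spectrogram (frequency bins x time frames); in the real pipeline
# this comes from the wave-to-spectrogram step described above.
rng = np.random.default_rng(0)
spec = rng.random((64, 64))

# One 3x3 filter of a convolutional layer. Here a fixed edge-detecting
# kernel for illustration; training would adjust these weights.
kernel = np.array([[1.0, 0.0, -1.0],
                   [2.0, 0.0, -2.0],
                   [1.0, 0.0, -1.0]])

# Convolution followed by ReLU: the core operation of a CNN layer.
feature_map = np.maximum(convolve2d(spec, kernel, mode="valid"), 0.0)

# 2x2 max pooling condenses the feature map, mirroring how CNNs shrink
# spectrogram images into compact representations.
h, w = feature_map.shape
h2, w2 = h - h % 2, w - w % 2
pooled = feature_map[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).max(axis=(1, 3))
print(pooled.shape)
```

Stacking several such convolution-plus-pooling stages, then a small classifier on top, is the usual shape of the CNNs applied to spectrogram images.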

Challenges we ran into

We ran into the following challenges while building this project:

  • Limited Dataset (Clinical and Non-Clinical Data)
  • Denoising of Dataset
  • Training of CNN based on Clinical Trial Data
  • Designing a robust backend to integrate the model into the application

Accomplishments that we're proud of

We are proud of the model's test and validation accuracy despite the limited dataset. We are also really proud that we were able to complete the model in less than 36 hours.

Healthcare was the most demanding track of Hacklytics '23, and choosing it was a big risk. I am personally really proud of my team for making this possible and achieving what we aimed for.

What we learned

Building Rhythm Insight, we learned a lot of new things. We started as three strangers at the event and finished the project as a team. We faced multiple challenges throughout the 36-hour duration. Some of them were:

  • Training the model with a limited dataset
  • Working as a team and overcoming various personal hurdles
  • Learning about different aspects of CNN and Spectrograms
  • Designing the wireframe of the application without any prior experience
  • Managing hacking time along with multiple workshops
  • How to survive sleepless nights!

What's next for Rhythm Insight

We look forward to the launch of Rhythm Insight. We plan to release the application on the Play Store, followed by the App Store, and we really look forward to feedback from our users.

Some of the sneak peeks into our future builds:

  • Deploying the application GLOBALLY
  • Integrating voice assistants
  • Different modes, time zones, and themes
  • Getting medical certification from Industry Experts (Hospital Tie-Ups)

Built With

  • audiosignalprocessing
  • cnn
  • frequencydomain
  • google-colab
  • hyperparameters
  • melfrequencycepstralcoefficients
  • python
  • spectrogram
  • timedomain
  • timefrequencyheatmaps