Inspiration

This project began with a simple, personal question:
What if monitoring our breathing could be as easy as charging our phones at night?

Some of us have family members who live with asthma or chronic coughs, and we have seen how worrying it can be to hear them struggle to breathe at night. Going to a clinic is not always affordable or convenient, and many people around the world lack access to specialized care or equipment.

We wanted to change that.

BreathWatch was created from the belief that health awareness should not depend on wealth or location. By using something almost everyone already owns, a smartphone, we can help people better understand their own respiratory health in a way that is private, simple, and empowering.

What it does

BreathWatch uses on-device machine learning to analyze nighttime audio recordings for signs of coughs and wheezes. The app processes ambient audio locally, classifying sound segments into cough events, wheeze patterns, and normal breathing.

The system then generates daily summaries that can be used for educational, research, or wellness-tracking purposes. Because all processing happens on the device, no internet connection is required and recordings never leave the phone.
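The daily summary itself is simple bookkeeping over the classifier's per-segment labels. A minimal sketch, assuming a hypothetical list of (timestamp, label) events collected overnight (the event schema is illustrative, not the app's actual data model):

```python
from collections import Counter

def daily_summary(events):
    """Aggregate classified audio segments into per-night counts.

    `events` is a list of (timestamp, label) pairs, where label is one of
    "cough", "wheeze", or "normal" (a hypothetical event schema).
    """
    counts = Counter(label for _, label in events)
    return {"coughs": counts["cough"], "wheezes": counts["wheeze"]}

events = [
    ("2024-05-01T02:14:00", "cough"),
    ("2024-05-01T02:15:30", "cough"),
    ("2024-05-01T03:02:10", "wheeze"),
    ("2024-05-01T03:05:00", "normal"),
]
summary = daily_summary(events)
```

Because only these aggregate counts need to be stored, raw audio can be discarded after classification, which is what makes the privacy guarantee practical.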

How we built it

The frontend was developed in React Native for cross-platform mobile deployment. Audio preprocessing uses a mel-spectrogram and noise-reduction pipeline built on TensorFlow Lite. The model is a CNN-based classifier trained on publicly available respiratory datasets, including the ICBHI Challenge dataset and curated cough datasets.
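The mel-spectrogram front end can be sketched in plain NumPy (in practice librosa or TensorFlow's signal ops would do this; the sample rate, frame size, hop, and filter count below are illustrative defaults, not the app's actual settings):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising edge
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling edge
    return fb

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, window it, and take the magnitude-squared FFT.
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * np.hanning(n_fft)
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)
    spec = np.array(frames)                  # (n_frames, n_fft // 2 + 1)
    mel = spec @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-6)                # log compression

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of 440 Hz
feats = log_mel_spectrogram(sig)             # (n_frames, n_mels) features
```

The resulting time-by-frequency grid is what a CNN classifier consumes, since coughs and wheezes have distinctive spectro-temporal shapes.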

Deployment is handled through TFLite for on-device inference, ensuring full data privacy. Visualization features include local graphs that display cough and wheeze counts over time.
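At runtime, on-device inference amounts to sliding a window over the night's audio and labeling each segment. A minimal sketch of that loop, with a stand-in scoring function where the real app would invoke its TFLite interpreter (the window and hop sizes, labels, and toy scores are assumptions for illustration):

```python
import numpy as np

LABELS = ["normal", "cough", "wheeze"]

def classify_segment(segment):
    # Stand-in for TFLite inference: a real deployment would feed the
    # segment's mel spectrogram through the compiled model here.
    energy = float(np.mean(segment ** 2))
    logits = np.array([1.0 - energy, energy, energy / 2])  # toy scores
    return LABELS[int(np.argmax(logits))]

def scan_night(audio, sr=16000, window_s=1.0, hop_s=0.5):
    """Slide a window over the recording and label each segment."""
    win, hop = int(sr * window_s), int(sr * hop_s)
    labels = []
    for start in range(0, len(audio) - win + 1, hop):
        labels.append(classify_segment(audio[start:start + win]))
    return labels

quiet = np.zeros(16000 * 3)      # 3 s of silence -> all "normal"
labels = scan_night(quiet)
```

The overlapping hop (here 0.5 s on a 1 s window) trades extra compute for fewer missed events at segment boundaries.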

Challenges we ran into

One major challenge was data scarcity. Public respiratory datasets were limited and often imbalanced.
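One common mitigation for that imbalance is inverse-frequency class weighting, so that rare classes contribute as much to the training loss as common ones. A sketch of the weighting arithmetic (the label distribution below is hypothetical):

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so rare classes
    (e.g. wheezes) count as much in the loss as common ones."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Hypothetical distribution: coughs and wheezes are rare in night audio.
labels = ["normal"] * 80 + ["cough"] * 15 + ["wheeze"] * 5
weights = inverse_frequency_weights(labels)
```

These per-class weights can be passed to the loss function during training (e.g. Keras's `class_weight` argument) alongside oversampling or augmentation of the minority classes.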

Privacy constraints required aggressive optimization to run everything efficiently on-device.

We also faced false positives: reliably distinguishing coughs from speech and snoring proved difficult.

Finally, performance limitations meant we had to prune and quantize our model to ensure smooth operation on mid-range phones.
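Quantization replaces 32-bit float weights with 8-bit integers plus a scale and zero point, cutting model size roughly fourfold. TFLite's converter performs this automatically; the toy sketch below shows only the underlying affine arithmetic, with made-up weight values:

```python
import numpy as np

def quantize_int8(w):
    """Affine int8 quantization: floats -> int8 plus (scale, zero_point)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant weights
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate floats; error is bounded by the scale step.
    return (q.astype(np.float32) - zero_point) * scale

w = np.array([-0.9, -0.1, 0.0, 0.4, 0.9], dtype=np.float32)  # toy weights
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
```

Pruning is complementary: it zeroes low-magnitude weights before quantization, so the compressed model both shrinks and runs faster on mid-range hardware.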

Accomplishments that we're proud of

We trained a lightweight model (under 5 MB) that detects coughs and wheezes with strong accuracy.

We successfully deployed real-time inference fully on-device, with no cloud support.

We also designed an intuitive, minimalist React Native interface for overnight monitoring and clear result visualization.

What we learned

We learned how to integrate machine learning audio models into mobile applications and how to balance accuracy, speed, and privacy in edge AI systems.

We also discovered the importance of establishing clear ethical boundaries when working with health-adjacent technology.

What's next for BreathWatch

Next, we plan to expand our dataset to include more diverse age groups and recording environments.

We will add personalized calibration for each user’s baseline breathing pattern and partner with public health researchers to explore long-term respiratory trends.

Finally, we aim to open-source our model to support education and global health research.
