Inspiration
In 1818, John Cheyne and William Stokes were credited with first describing the irregular breathing patterns of dying patients. The nurses and women clinicians who sat beside those patients, documenting every breath by hand, were never named at all.
Two names that deserve to be known: Dr. Mary Ellen Avery and Dr. Ann Woolcock. Dr. Avery proved that missing surfactant causes respiratory distress syndrome in newborns, ultimately saving millions of lives; even so, she had to fight to get her paper published. In the 1980s, Dr. Woolcock built the world's first asthma management guidelines from the ground up. Neither name appears in a textbook most people have read.
This legacy of erasure has a modern consequence: over 30 million Americans have sleep apnea, yet a clinical breathing study can still cost $3,000. The observation work that could catch it has always existed, but has never been accessible. With our project, AIRA, we hope to make breathing observation accessible, honoring the legacy of the women clinicians who were never credited by building technology that listens for everyone.
What it does
AIRA, the Adaptive Inhalation Risk Assessor, uses a Grove sound sensor to continuously monitor breathing rhythm and stream live data to a real-time waveform dashboard. Vultr's cloud infrastructure analyzes the incoming signal, lets the user or their physician review the record, and flags dangerous anomalies such as apnea events and irregular cadence. When a threat is detected, ElevenLabs generates an AI voice alert. Every event is logged with a timestamp, giving patients, clinicians, and families a full record of what happened.
How we built it
We built AIRA across three integrated layers.

On the hardware side, a Grove sound sensor wired to an Arduino Uno samples ambient sound at fixed intervals and averages the readings to filter out noise and isolate the sound of breathing.

On the software side, a Python script using pySerial reads the serial stream in real time and feeds it into a Flask application, which plots the live waveform in the browser. Anomaly detection also runs in this Python layer: if the sound level stays below the breathing threshold for four or more seconds, the script flags an apnea event and posts it to our Vultr cloud instance with a timestamp.

On the cloud side, Vultr hosts a Flask-based backend with two endpoints: one that receives and logs incoming anomaly events, and one that the laptop polls continuously for new alerts. Because the backend lives in the cloud rather than on the local network, authorized family members and physicians can also check the patient's record. When an alert is confirmed, a separate Python script calls the ElevenLabs API to generate a voice notification and plays it through the laptop speakers with the ElevenLabs audio library.
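The smoothing and "quiet for four or more seconds" logic above can be sketched in Python. This is an illustrative reconstruction, not our exact code: the function names, sample period, and threshold values here are assumptions.

```python
from collections import deque

SAMPLE_PERIOD_S = 0.25   # assumed time between averaged sensor readings
BREATH_THRESHOLD = 120   # assumed sensor level that indicates breathing
APNEA_SECONDS = 4.0      # flag when the signal stays quiet this long

def smooth(readings, window=4):
    """Rolling average over the last `window` samples to filter out
    transient noise spikes, mirroring the averaging done on the Arduino."""
    buf, out = deque(maxlen=window), []
    for r in readings:
        buf.append(r)
        out.append(sum(buf) / len(buf))
    return out

def detect_apnea_events(levels, threshold=BREATH_THRESHOLD,
                        period=SAMPLE_PERIOD_S, min_seconds=APNEA_SECONDS):
    """Return start indices of runs where the level stays below
    `threshold` for at least `min_seconds` (one flag per run)."""
    events, quiet_start, flagged = [], None, False
    for i, level in enumerate(levels):
        if level < threshold:
            if quiet_start is None:
                quiet_start, flagged = i, False
            if not flagged and (i - quiet_start + 1) * period >= min_seconds:
                events.append(quiet_start)  # apnea: quiet long enough
                flagged = True
        else:
            quiet_start = None  # breathing resumed, reset the run
    return events
```

In the real pipeline, each index returned by `detect_apnea_events` would be converted to a timestamp and posted to the Vultr backend.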
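The two cloud endpoints could look roughly like this minimal Flask sketch. The route names, in-memory storage, and JSON payload shape are assumptions for illustration; the production service would also need authentication and persistent storage.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
events = []   # full log of anomaly events
unseen = []   # alerts not yet fetched by the polling laptop

@app.route("/events", methods=["POST"])
def log_event():
    """Receive an anomaly event from the monitoring laptop."""
    event = request.get_json(force=True)  # e.g. {"type": "apnea", "timestamp": "..."}
    events.append(event)
    unseen.append(event)
    return jsonify({"logged": len(events)}), 201

@app.route("/alerts", methods=["GET"])
def poll_alerts():
    """Polled continuously by the alert laptop; drains pending alerts."""
    pending, unseen[:] = list(unseen), []
    return jsonify(pending)
```

The alert laptop would poll `/alerts` in a loop and hand any returned events to the ElevenLabs script for voice playback.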
Challenges we ran into
Initially, we tried using a Raspberry Pi connected to WiFi so the whole system could run on it instead of a laptop. However, we ran into multiple issues reading serial data while on UTD WiFi. We ultimately had to pivot to two laptops: one hosting the ElevenLabs AI voice agent and the Vultr connection, and the other running the Arduino code.
With ElevenLabs, we initially struggled to integrate the terminal workflow with the voice output. One of the voices, 'Rachel', wasn't compatible with our application, so we went back to our code and configured it to pick 'Adam' instead.
For Vultr, we encountered several technical issues with the infrastructure. We initially tried a shared-CPU instance running Ubuntu and planned to apply our MLH credits, but we hit multiple errors during setup. We later learned that virtual machines (VMs) are not covered by the credits due to past misuse by other users, so we had to pay out of pocket to keep using the infrastructure.
The biggest issue was networking, since the school Wi-Fi blocked direct communication between devices and forced us to rely on a personal hotspot to keep the system connected. We also had to work around unstable IP addresses in a crowded hackathon environment, which affected communication reliability across components. On the software side, getting a Mac and Windows laptop to communicate smoothly over the same network required careful consideration, and we also had to manually adjust the firewall settings on the cloud server before all parts of the system could connect properly. In addition, tuning the breath threshold required multiple rounds of testing because system behavior varied across environments. Overall, a significant part of our work went into making the system reliable and consistent rather than simply getting it to function.
Accomplishments that we're proud of
One of our biggest accomplishments was building a full-stack project that integrated hardware and software into a single working system. We were especially proud of incorporating AI into our application and connecting our project's theme to historical figures who weren't fully appreciated in their time. Getting our sensor to accurately detect signs of apnea was another major milestone that demonstrated the effectiveness of our design and the functionality of our system.
What we learned
We learned the importance of staying calm and composed when facing challenges. We also gained experience using tools like ElevenLabs and Vultr and combining them effectively to bring our project together. Most importantly, we developed the ability to integrate software, hardware, and AI into a unified system, turning individual components into a fully functional solution.
What's next for AIRA - Adaptive Inhalation Risk Assessor
The next step is refining our sensor calibration so AIRA can reliably distinguish real breathing patterns from background noise in any environment. From there, we want to bring in additional sensors, such as an FSR pressure sensor and an SpO2 sensor, so the system can cross-validate readings and catch a wider range of respiratory events. We also hope to build out a Figma-designed dashboard, hosted through Vultr, with separate screens for the patient's view and the doctor's view.