Inspiration
In the US alone, over 3 million patients suffer from decubitus ulcers (bed sores), which develop when sustained pressure is placed on one part of the body, typically from lying down for a prolonged period; in hospitals, up to 38% of patients are affected [1]. Treating pressure ulcers is estimated to cost 2.5 times as much as preventing them [2]. Our team set out to tackle this problem with R.A.P.I.D., which helps nurses prioritize bedridden patients and check on them more frequently.
What it does
To avoid bed sores, a patient shouldn’t remain stationary for more than two hours [3]. Our sensor module detects whether the patient has made significant movement. If the patient remains still for two hours, a nearby nurse is pinged every 10 minutes until they come and rotate the patient.
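The timing behavior above (two hours of stillness triggers an alert, then a re-ping every ten minutes) can be sketched as a small decision function. This is our illustrative sketch, not the actual firmware; names like `STILL_LIMIT` and `check_alert` are assumptions:

```python
STILL_LIMIT = 2 * 60 * 60   # two hours without movement triggers an alert
PING_INTERVAL = 10 * 60     # re-ping the nurse every ten minutes after that

def check_alert(last_movement_ts, last_ping_ts, now):
    """Return True if the nurse should be pinged at time `now` (seconds)."""
    still_for = now - last_movement_ts
    if still_for < STILL_LIMIT:
        return False  # patient moved recently; no alert needed
    # Patient has been still too long: ping once, then every PING_INTERVAL.
    return last_ping_ts is None or (now - last_ping_ts) >= PING_INTERVAL
```

For example, with the last movement at time 0, the first ping fires at the two-hour mark, and further pings fire only once ten minutes have elapsed since the previous one.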
How we built it
Our design consists of four modules. The first ESP32 (patient sensor) is equipped with a camera and streams video over WiFi to our central controller, a Flask server running on a Raspberry Pi. A second patient sensor, an ESP8266 with a gyro sensor, detects changes in the patient’s angular velocity; 2 seconds of continuous positive readings from the gyro sensor count as “patient movement”.
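The gyro-side rule (2 seconds of continuous positive readings counts as movement) amounts to a debounce over timestamped samples. A minimal sketch, assuming a list of `(timestamp, angular_velocity)` tuples and an illustrative threshold that is not the project's actual value:

```python
def detect_movement(samples, threshold=0.5, hold_time=2.0):
    """Return True if angular velocity stays above `threshold` for at
    least `hold_time` consecutive seconds.

    samples: list of (timestamp_seconds, angular_velocity) tuples,
             in chronological order.
    """
    start = None  # timestamp when the current run of motion began
    for t, w in samples:
        if abs(w) > threshold:
            if start is None:
                start = t
            if t - start >= hold_time:
                return True  # motion sustained long enough
        else:
            start = None  # run broken; reset
    return False
```

Requiring a sustained run rather than a single spike filters out jitter from the gyro, so a brief twitch does not reset the two-hour stillness clock.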
The Pi then uses OpenCV to measure pixel differences between consecutive frames of the video feed, and 5 seconds of continuous movement counts as “patient movement”. By fusing the first ESP32’s camera feed with the gyro readings, the Pi detects movement with more depth and precision than a single camera alone.
To pull the movement data from the Pi, the M5Stack (another ESP32) pings the Pi every 5 minutes and alerts the nurse if the patient requires movement. The data is also visualized per patient room on a dashboard served by the Flask server and built with Bootstrap (HTML/CSS).
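The Pi-side logic behind that polling endpoint can be sketched as a plain function that a Flask route would wrap. This is a hypothetical sketch; the in-memory store, function names, and room keys are our assumptions:

```python
import time

STILL_LIMIT = 2 * 60 * 60  # seconds a patient may stay still before alerting

# Hypothetical in-memory store; the sensor handlers would update this
# whenever fused camera/gyro data indicates patient movement.
last_movement = {}

def record_movement(room, ts=None):
    """Note that movement was just observed in `room`."""
    last_movement[room] = time.time() if ts is None else ts

def room_status(room, now=None):
    """What the Pi would return when the M5Stack polls for a room."""
    now = time.time() if now is None else now
    seen = last_movement.get(room)
    if seen is None:
        return {"room": room, "needs_rotation": False}  # no data yet
    return {"room": room, "needs_rotation": now - seen >= STILL_LIMIT}
```

The M5Stack only needs the boolean in the response; the same per-room dictionary can feed the Bootstrap dashboard view.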
Challenges we ran into
- Unable to run Neural Engine SDK on the Qualcomm HDK 8450
- Got TensorFlow working on the HDK, but were unable to communicate with the Raspberry Pi due to our lack of Android development experience
Accomplishments that we're proud of
- 3 out of 4 team members’ first hackathon!
- Integrating so many devices together simultaneously
- Working with a diversity of tech ranging from low to high level
What we learned
- How to create a WiFi access point from a Raspberry Pi
- How to use OpenCV
- Android Studio development & TensorFlow (ultimately not used)
- How to use ESP32 camera
- How to use M5 Stack
- To push for a minimum viable product rather than the most ideal solution
What's next for R.A.P.I.D
- Using a more refined ML model (e.g., one built with TensorFlow) running on the neural engine to detect a wider range of motion types
- Letting practitioners use long-term modelled data to determine whether a patient’s condition is improving or worsening
References
[1] https://www.ncbi.nlm.nih.gov/books/NBK2650/
[2] https://pubmed.ncbi.nlm.nih.gov/2505808/
[3] https://myhealth.alberta.ca/Health/aftercareinformation/pages/conditions.aspx?hwid=abo6592