People with dementia generally require assistance with their daily activities. The heavy workload this places on caregivers often leads to elevated emotional and physical stress, and in some cases even to higher caregiver morbidity and mortality. On the patient's side, some activities call for individual privacy, and dressing in particular. With this in mind, we designed a dressing assistant: an automatic camera monitor that uses computer vision to guide the patient through dressing without any caregiver present.
What it does
Our dressing assistant detects different dressing errors in detail and triggers voice and text instructions in real time to advise the patient on how to dress correctly. For example, if the collar is not folded correctly, the assistant recognizes this and instructs the patient to fold the collar until it is correct.
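The error-to-instruction step can be sketched as a simple mapping from detected error classes to prompts. This is a minimal, hypothetical sketch: the function name, the instruction wording, and the `(label, confidence)` detection format are our own illustrative assumptions, not the project's actual interface; only the four error classes come from the description above.

```python
# Hypothetical mapping from detected dressing-error classes to prompts.
# The class names ("collar", "belt", "inside-out", "leaving") follow the
# error categories described in this write-up; the wording is illustrative.
INSTRUCTIONS = {
    "collar": "Your collar is folded incorrectly. Please fold it down flat.",
    "belt": "Your belt is not fastened. Please fasten your belt.",
    "inside-out": "Your shirt is inside out. Please turn it right side out.",
    "leaving": "Please stay in front of the camera until you finish dressing.",
}

def prompts_for(detections):
    """Return one instruction per distinct error detected in a frame.

    `detections` is a list of (label, confidence) pairs, as an object
    detector might emit for one frame; unknown labels are ignored and
    duplicate detections of the same error produce a single prompt.
    """
    seen = []
    for label, _conf in detections:
        if label in INSTRUCTIONS and label not in seen:
            seen.append(label)
    return [INSTRUCTIONS[label] for label in seen]
```

In a live loop, the returned strings would be displayed as text and passed to a text-to-speech engine each time the detector reports errors for a frame.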
How we built it
Challenges we ran into
A Naive Model: At first glance, we considered placing symbolic marks on the clothes and having the dressing assistant detect the marks to determine whether the patient had dressed properly. This marker-based approach, however, is not very accurate: the clothes are sometimes crumpled, which makes the marks hard to detect. Moreover, placing multiple marks on every garment the patient owns is inconvenient, and the marks may not survive washing. We therefore needed a more accurate and stable detection method.
The Improved Model: We defined several characteristic error classes (collar, belt, inside-out, and leaving) and trained our computational model to identify them. We implemented the object detection model from DarkFlow, which is built on TensorFlow and OpenCV, and used the Tiny YOLO weights to initialize training. This method turns out to be more accurate and does not require any external labels on the clothes.
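The per-frame decision step can be sketched as a filter over detector output. As an assumption, the sketch below uses the dictionary format DarkFlow's `return_predict` produces (`"label"`, `"confidence"`, `"topleft"`/`"bottomright"` boxes); the function name and the confidence threshold are hypothetical and not tuned on real data.

```python
# Sketch of post-processing DarkFlow-style detections for one frame.
# Input: a list of dicts shaped like DarkFlow's return_predict output,
# e.g. {"label": "collar", "confidence": 0.8,
#       "topleft": {"x": 10, "y": 20}, "bottomright": {"x": 110, "y": 120}}
CONF_THRESHOLD = 0.4  # assumed cutoff, not the project's actual value

def dressing_errors(predictions, threshold=CONF_THRESHOLD):
    """Keep detections above the confidence cutoff and return
    (label, confidence, box) tuples sorted by descending confidence."""
    kept = []
    for p in predictions:
        if p["confidence"] >= threshold:
            box = (p["topleft"]["x"], p["topleft"]["y"],
                   p["bottomright"]["x"], p["bottomright"]["y"])
            kept.append((p["label"], p["confidence"], box))
    return sorted(kept, key=lambda t: -t[1])
```

Filtering by a confidence threshold before triggering instructions keeps low-confidence false positives (e.g. from a crumpled garment) from producing spurious prompts.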
Accomplishments that we're proud of
Our model is able to detect multiple types of dressing errors simultaneously and can monitor the patient in real time.
What we learned
Cutting-edge computer vision techniques and user interface design languages, along with problem-solving and teamwork.
What's next for Dementia Dressing Assistant
We plan to expand our model for larger-scale deployment by generalizing it to detect more kinds of clothing and dressing errors. We also hope to run in-home trials with patients to improve our model.