Problem

Millions of Americans suffer through expensive and inconvenient hand therapy. Patients sacrifice hours of their time attending costly therapy sessions, only to struggle with their daily at-home exercises. Whether because they are bored or simply lack corrective feedback, patients in modern at-home physical therapy struggle to complete their whole regimen accurately, resulting in complications, setbacks, and unnecessary costs for patients and their families.

What We Do

HandE uses your laptop camera to track the motions of your hands. By making therapy interactive, HandE allows the patient to engage with the computer during their prescribed exercises. The app incorporates the exercises into a game, keeping the patient motivated to complete them. HandE also has the potential to provide precise feedback on the patient's motions, track their daily improvement, and report completion data to their doctors and therapists.

Why We Do It

Our team is passionate about using our interests in image processing, machine learning, and computer vision to provide an important medical service to millions of people worldwide, without requiring any external hardware. We strive to change therapy: to truly bring therapy to the patient.

How We Do It

On the technical side, we collected our data by taking around 400 pictures of our own hand motions. We isolated each hand from its background using computer vision (OpenCV), then compressed and encoded the images into small but good-quality bitmaps in Python. We use machine learning algorithms in R, such as K-means clustering, Gaussian mixture models, and ensemble models, to classify each bitmap into one of seven categories (six hand positions, plus one empty frame with no hand). As we bring in more data points, the cluster means become more representative of their classes, so the closer a new point lies to a mean, the more likely it belongs to that cluster. The analysis then calculates and reports which motion is being performed, and the game responds appropriately, as if the user had pressed a key or clicked the mouse.
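
As a rough illustration of the nearest-mean idea behind that classification, here is a minimal Python sketch. Our actual models run in R; the data here, including `train_vectors` and the 30x30 bitmap size, are placeholder stand-ins, not our real dataset:

```python
import numpy as np

# Placeholder training data: each row is a flattened bitmap, and labels
# run 0..6 (six hand positions plus one "no hand" class). In our real
# pipeline, the class means come from K-means over ~400 hand images.
rng = np.random.default_rng(0)
train_vectors = rng.random((400, 30 * 30))   # ~400 pictures, 30x30 bitmaps
train_labels = rng.integers(0, 7, size=400)

# One mean ("cluster center") per class.
means = np.stack([train_vectors[train_labels == k].mean(axis=0)
                  for k in range(7)])

def classify(bitmap_vector):
    """Assign a new flattened bitmap to the class with the nearest mean."""
    distances = np.linalg.norm(means - bitmap_vector, axis=1)
    return int(np.argmin(distances))

new_frame = rng.random(30 * 30)              # stand-in for a live camera frame
print("Predicted motion class:", classify(new_frame))
```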

Challenges we ran into

One of the largest difficulties was isolating the hand. As powerful as OpenCV is, our team only had experience with Haar cascades and facial detection. Isolating a hand required further ingenuity, forcing us to leverage our statistical background to analyze the cropped pixel matrix of each hand image in depth.
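
For reference, one common OpenCV approach to this problem is skin-color thresholding followed by largest-contour cropping. The sketch below is illustrative only; the HSV bounds and file names are assumptions, not the exact statistical analysis we describe above:

```python
import cv2
import numpy as np

frame = cv2.imread("hand.jpg")               # placeholder input image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep pixels in a rough skin-tone range, then clean up noise.
lower, upper = np.array([0, 30, 60]), np.array([20, 150, 255])
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)  # assume hand is largest blob
    x, y, w, h = cv2.boundingRect(hand)
    cropped = frame[y:y + h, x:x + w]          # cropped hand matrix
    cv2.imwrite("hand_cropped.jpg", cropped)
```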

Combining the back-end analytics and machine learning with the front-end data collection and response was also tricky. We ran into library compatibility problems, cross-platform issues, and even trouble passing results from R back to Python.
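
One simple pattern for bridging the two languages is to invoke the R script from Python as a subprocess and parse its printed output. This is a hedged sketch: `classify.R`, `frame_bitmap.csv`, and the JSON contract are hypothetical placeholders, not our exact interface:

```python
import json
import subprocess

# Run the (hypothetical) R classifier via Rscript and read its result
# from stdout, e.g. a line like {"motion": 3}.
result = subprocess.run(
    ["Rscript", "classify.R", "frame_bitmap.csv"],
    capture_output=True, text=True, check=True,
)
prediction = json.loads(result.stdout)
print("R back end predicted motion:", prediction["motion"])
```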

Last but not least, high-resolution images take a lot of memory on a personal computer, and ordinary software cannot process data at that scale. We came up with two solutions. The first was dimensionality reduction with Principal Component Analysis (PCA), which made it difficult to project new data points into the reduced space. We then came up with the idea of compressing each image while maintaining its aspect ratio.
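
A minimal sketch of that aspect-ratio-preserving compression in Python with OpenCV (the 64-pixel target width and file name are illustrative assumptions):

```python
import cv2

def compress_keep_aspect(image, target_width=64):
    """Downscale an image to target_width while preserving its aspect ratio."""
    h, w = image.shape[:2]
    scale = target_width / float(w)
    new_size = (target_width, max(1, int(round(h * scale))))
    return cv2.resize(image, new_size, interpolation=cv2.INTER_AREA)

# Example: shrink a high-resolution frame to a small, memory-friendly bitmap.
frame = cv2.imread("hand.jpg")                        # placeholder input
small = compress_keep_aspect(frame, target_width=64)  # 64 px wide, ratio kept
```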

Built With

opencv, python, r
