Inspiration
Our team wanted to solve a problem in the bio space and produce something simple yet impactful. During our initial research phase, we landed on articles about Parkinson's diagnosis methods. While combing through them for problems we could plausibly solve in 24 hours, we found a simple test physicians use to detect Parkinson's, called the spiral test. The test is performed when a physician suspects a patient may have Parkinson's: the patient is asked to draw a spiral on a piece of paper as best they can, and the physician then inspects the drawing for cues of tremor that could indicate Parkinson's and other tremor-related illnesses. One article stated that there wasn't a computerized system to capture and analyze patient spirals. So we set out to check whether we could use ML + computer vision to detect tremor in patients' spirals.
What it does
A web page that takes user input as a spiral image drawn on a touchscreen laptop (it currently only works in the Chrome browser). The user draws their spiral in the "Drawbox", following the "guide" spiral to the best of their ability. Once finished, they click "Evaluate" to send the image for testing. When the image has run through the ML model, the user is notified of the result: the application reports either "Healthy" or "Detection of Parkinson's". The application also downloads the spiral image figure onto the user's local machine, to use however they wish.
How we built it
Client-Side: JavaScript, HTML5, CSS, Bootstrap
Server-Side: Microsoft Azure Custom Vision service
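From the client's point of view, the server side is a single REST call to the Custom Vision prediction endpoint. Here's a minimal sketch of that call; the project ID, iteration name, and prediction key are placeholders, and the real values come from the "Prediction URL" dialog in the Custom Vision portal:

```javascript
// Send a spiral image (as a Blob) to an Azure Custom Vision classifier.
// PREDICTION_URL and PREDICTION_KEY are placeholders copied from the portal.
const PREDICTION_URL =
  "https://southcentralus.api.cognitive.microsoft.com/customvision/v3.0/" +
  "Prediction/<project-id>/classify/iterations/<iteration-name>/image";
const PREDICTION_KEY = "<prediction-key>";

async function classifySpiral(imageBlob) {
  const response = await fetch(PREDICTION_URL, {
    method: "POST",
    headers: {
      "Prediction-Key": PREDICTION_KEY,
      "Content-Type": "application/octet-stream",
    },
    body: imageBlob,
  });
  const result = await response.json();
  // Each prediction carries a tagName and a probability; the tag names here
  // ("Healthy" / "Detection of Parkinson's") depend on how the project
  // was labeled in the portal. Return the most likely tag.
  return result.predictions.sort((a, b) => b.probability - a.probability)[0];
}
```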
Challenges we ran into
Preprocessing the Parkinson's dataset from Kaggle. The spiral figures were given as .csv files, so we had to use a plotting function in Matlab to turn the coordinates into images. Here we ran into an issue where the finished plot contained an artifact: a stray line between the two ends of the spiral.
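We did the actual preprocessing in Matlab, but the artifact is easy to reproduce in any plotting setup: if the path gets closed, or the CSV rows aren't ordered along the spiral, the renderer draws a chord between the first and last points. A sketch of the idea in JavaScript on a canvas, assuming the CSV has already been parsed into an ordered array of {x, y} points:

```javascript
// Draw spiral coordinates onto a canvas without the end-to-end artifact.
// `points` is assumed to be [{x, y}, ...] parsed from the Kaggle CSV,
// already ordered along the spiral.
function drawSpiral(ctx, points) {
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);
  for (const p of points.slice(1)) {
    ctx.lineTo(p.x, p.y);
  }
  // The bug: calling ctx.closePath() here would add a straight line between
  // the spiral's two ends -- exactly the artifact we saw in our plots.
  ctx.stroke();
}
```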
Defining and refining scope. There were a lot of concurrent projects and ideas being thrown around at the same time; this fragmented the group, and we ran into issues that stopped any one project from moving forward. For example, we were working on Kinect gait sensing, accelerometer testing, and spiral figure testing all at once. The SDKs for the Kinect were easy to understand, but a hardware issue on my laptop prevented the Kinect from functioning properly, and I spent valuable time debugging pointless issues. We could have implemented the accelerometer data, but browsers report their tilt data differently, and some stopped working entirely when we used the accelerometer (see the sketch below). To get a workable demo, we had to focus our efforts on pushing the spiral image test forward.
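For reference, the accelerometer path we abandoned was built on the browser's deviceorientation event. The event itself is standard, but browsers differ in how (and whether) they report the tilt angles, which is what killed it for us. A minimal sketch:

```javascript
// Listen for device tilt via the DeviceOrientation API.
// alpha/beta/gamma are defined by the spec (rotation around the z, x, and y
// axes, in degrees), but browsers report them against different reference
// frames, and some now require an explicit permission request first.
window.addEventListener("deviceorientation", (event) => {
  const { alpha, beta, gamma } = event;
  if (alpha === null) {
    console.warn("This browser fires the event but exposes no orientation data.");
    return;
  }
  console.log(`tilt: alpha=${alpha} beta=${beta} gamma=${gamma}`);
});
```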
Azure ML training. We hit multiple challenges during the model training phase. We didn't know whether to use "hand-drawn" or "computer-drawn" spirals as our input. We also trained the model in one geographical region, but when we tried to access the API resource, it didn't exist there because it lived in another region: the training and prediction resources were separate from each other, which caused confusion in our group. Models also took at least an hour to train, during which time we couldn't tell whether they would work, and the nature of the hackathon left us with very limited training sessions.
Website development. JS, CSS, and HTML5 are never nice to a developer; there is a host of issues you run into when building even a simple web application. For example, getting an HTML5 canvas to smoothly capture a hand-drawn spiral took multiple function calls. Then extracting the data the canvas produced was a pain, because it came back as a blob (binary large object), a type of data we had never worked with before. Essentially it represents the entire canvas figure in binary format, which we then had to parse. And it wasn't purely the binary image data itself: metadata was attached to the file, which made grabbing the image data even more challenging.
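What eventually worked was letting the canvas do the encoding for us. A sketch of the two standard options (variable names are illustrative); the second one is where the "metadata" we fought with shows up, as the data:image/png;base64, header that has to be stripped before you have raw image data:

```javascript
// Option 1: get the drawing as a Blob directly -- fetch() can POST it as-is.
canvas.toBlob((blob) => {
  classifySpiral(blob); // see the Custom Vision sketch above
}, "image/png");

// Option 2: get a data URL. The "metadata" is the header before the comma;
// everything after it is the base64-encoded PNG.
const dataUrl = canvas.toDataURL("image/png");
// e.g. "data:image/png;base64,iVBORw0KGgo..."
const base64Png = dataUrl.split(",")[1];
```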
Accomplishments that we're proud of
Built something in 24 hours that works!!!
In all seriousness though, we're quite proud of what we were able to achieve as such a multidisciplinary team. We managed to take a problem statement from our literature review and define goals that carried us to this spiral-testing application. Some members of the team had never built a web application before, and they learned a great deal about that process.
Our ability to work in a newly formed team and to set and deliver on goals. This was the first time any of us had worked on a project together, and seeing the team work well and deliver on what we set out to do is quite amazing. This is something we're truly proud of.
Our precision and recall metrics are very high considering the small training dataset we had. We also had great mentors at the Microsoft booth helping us choose an ML computer vision platform.
What we learned
How to define, refine, and reduce scope. We all have experience programming and working on projects, so these three things weren't new in themselves. But doing all of them in 24 hours was new to all of us, so quick decisions and the ability to cut scope were crucial to getting our final demo out.
This hackathon turned out to be more a test of our ability to use pre-existing APIs than of actually building things from the ground up. We've all used pre-built APIs before, but not at this scale. Wherever we could, we copied and pasted code, used the simplest models, and reached for existing functions rather than writing our own. That doesn't mean we built nothing; we were piecing different components together to solve the problem at hand. This mindset of relying on the work of others to build something novel is what let us finish our project and demo it during the presentations.
Defining a problem statement from a literature review: reading research articles about different Parkinson's diagnosis methods and turning them into a problem statement to work on. This was new to a couple of members of our team, who had never worked backwards from a research article before.
Those of us who lacked front-end experience gained it. Building and designing UX/UI is valuable to any programmer's career, yet some never get to do it. We all worked on the front end and iterated through multiple solutions as we dogfooded our own product. That dogfooding was crucial to understanding what the end user experiences and how to refine the UX.
What's next for Tremor Vision
Total Healthcare Parkinson's AI management tool.
Feature 1: Let spirals be uploaded from anywhere, so we can store and track the progression of a patient's treatment by cataloging their spirals by date.
Feature 2: Use a dedicated tablet and a pen that can also track drawing pressure at any given point, giving the physician more data points to work with.
Feature 3: Use accelerometer data to measure the frequency of the tremor (see the sketch after this list).
Feature 4: Implement advanced Kinect sensing tools to measure and monitor gait tremors (lower-extremity tremors).
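Feature 3 is the most concrete: Parkinsonian rest tremor typically sits around 4-6 Hz, so even a crude frequency estimate from accelerometer samples would be informative. A sketch of one simple approach, counting zero crossings of the mean-removed signal; the samples array and sample rate are hypothetical inputs from whatever sensor we end up reading:

```javascript
// Estimate tremor frequency (Hz) from a buffer of accelerometer samples
// taken along one axis at a known sample rate.
function estimateTremorFrequency(samples, sampleRate) {
  // Remove the DC component (gravity / device orientation offset).
  const mean = samples.reduce((sum, v) => sum + v, 0) / samples.length;
  const centered = samples.map((v) => v - mean);

  // Count zero crossings; each full oscillation crosses zero twice.
  let crossings = 0;
  for (let i = 1; i < centered.length; i++) {
    if (centered[i - 1] < 0 !== centered[i] < 0) crossings++;
  }
  const durationSeconds = samples.length / sampleRate;
  return crossings / 2 / durationSeconds;
}
```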