Inspiration

Surgery has the potential to be transformed by data: acting on insights in real time could drive better patient outcomes. Yet today, evaluating surgical performance and recognizing its risks is still a subjective process, with performance assessed after the fact. Operator's goal is to bring automated insights to surgery; we want to inform surgeons of their performance and analyze the risk of their actions in real time.

What it does

Using robust classification techniques from machine learning, Operator generates a score learned from two components. The first is a kinematic analysis of the surgeon's gestures, built on publicly available data from the da Vinci surgical robot that includes everything from the (x, y, z) positions of the tooltips in space to their velocities and accelerations. The second is an image-based prediction of surgical complications, described below.

We've built a proof of concept of a service that could ideally run in the background: as a surgeon performs a procedure, we analyze the kinematic data generated in real time to predict the extent to which the maneuver is being performed correctly. We hope to make surgical evaluation in a training setting more objective.

We built an iOS app in which gesturing on a surgical video feed takes a snapshot of a specific region and runs an evaluation and risk analysis on it, returning a score. The second component of the learned score is a prediction from an image classifier, trained with IBM Watson's Visual Recognition API, of whether the current frame of the procedure looks more similar to positive or negative examples of surgical complications.
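To illustrate the image-based component, here is a rough Swift sketch of uploading a snapshot to a visual-recognition service and reading back a class confidence score. The endpoint URL (including any credential and version query parameters), the images_file form field, and the response fields are assumptions based on how Watson's classify endpoint is commonly documented, not code from our app.

```swift
import Foundation

/// Uploads a snapshot (as JPEG data) to an image classifier and reports how
/// strongly it resembles a known complication. The URL is assumed to already
/// carry whatever credential/version query parameters the service requires.
func classifySnapshot(jpegData: Data,
                      classifyURL: URL,
                      completion: @escaping (Double?) -> Void) {
    let boundary = "Boundary-\(UUID().uuidString)"
    var request = URLRequest(url: classifyURL)
    request.httpMethod = "POST"
    request.setValue("multipart/form-data; boundary=\(boundary)",
                     forHTTPHeaderField: "Content-Type")

    // Minimal multipart body with the snapshot as the images_file part.
    var body = Data()
    body.append("--\(boundary)\r\n".data(using: .utf8)!)
    body.append("Content-Disposition: form-data; name=\"images_file\"; filename=\"snapshot.jpg\"\r\n".data(using: .utf8)!)
    body.append("Content-Type: image/jpeg\r\n\r\n".data(using: .utf8)!)
    body.append(jpegData)
    body.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)
    request.httpBody = body

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Pull the first class score out of the (assumed) response shape:
        // { "images": [ { "classifiers": [ { "classes": [ { "score": ... } ] } ] } ] }
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let images = json["images"] as? [[String: Any]],
              let classifiers = images.first?["classifiers"] as? [[String: Any]],
              let classes = classifiers.first?["classes"] as? [[String: Any]],
              let score = classes.first?["score"] as? Double else {
            completion(nil)
            return
        }
        completion(score)
    }.resume()
}
```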

Our kinematic analysis is hosted online as its own API.
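To give a sense of how the app could talk to that service, here is a minimal Swift sketch. The feature names, JSON shapes, and endpoint are hypothetical placeholders for whatever the hosted R classifier actually expects.

```swift
import Foundation

// The kinematic features described above: tooltip position plus its
// velocity and acceleration. Field names are illustrative only.
struct KinematicSample: Codable {
    let position: [Double]      // (x, y, z) of the tooltip
    let velocity: [Double]      // first derivative of position
    let acceleration: [Double]  // second derivative of position
}

struct EvaluationResponse: Codable {
    let score: Double           // higher = closer to a correctly performed maneuver
}

/// Sends a window of kinematic samples to the hosted classifier and returns
/// its evaluation score. The endpoint URL stands in for wherever the
/// R-based SVM service is deployed.
func requestEvaluationScore(for samples: [KinematicSample],
                            endpoint: URL,
                            completion: @escaping (Double?) -> Void) {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(samples)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let response = try? JSONDecoder().decode(EvaluationResponse.self, from: data) else {
            completion(nil)
            return
        }
        completion(response.score)
    }.resume()
}
```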

How we built it

An SVM (support vector machine) classifier was built in R from kinematic data recorded by the da Vinci robot and hosted as its own API. The Watson classifier was trained on a set we gathered ourselves: positive and negative examples of a specific complication, posterior capsular rupture during cataract surgery, so that it can score how similar the current snapshot is to a positive or negative example from training.

Challenges we ran into

The Watson API required an actual image file, and we ran into issues saving the chosen snapshots to a file in a local directory. We also had trouble getting two of our dependencies to work together at once.
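For reference, a helper along these lines handles the file-saving step, assuming the snapshot arrives as a UIKit image; the function name and error handling are ours, not the exact code from the app.

```swift
import UIKit

/// Writes a snapshot image to a temporary file so it can be handed to an
/// image-classification service that expects a file on disk.
func saveSnapshotToTemporaryFile(_ snapshot: UIImage) throws -> URL {
    // Encode the snapshot as JPEG (0.8 keeps files small but legible).
    guard let jpegData = snapshot.jpegData(compressionQuality: 0.8) else {
        throw NSError(domain: "Operator", code: 1,
                      userInfo: [NSLocalizedDescriptionKey: "Could not encode snapshot as JPEG"])
    }
    // Write into the app sandbox's temporary directory under a unique name.
    let fileURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("jpg")
    try jpegData.write(to: fileURL, options: .atomic)
    return fileURL
}
```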

Accomplishments that we're proud of

We’re proud of the fact that we managed to get a working app that actually uses the kinematic data and supplies actionable risk and evaluation scores.

What we learned

We learned how to better integrate analytics meaningfully into a user-friendly product.

What's next for @Operator

Training more image classifiers for other forms of surgical complications, generating risk analyses specific to the surgical tool in use, and refining our algorithm.
