Inspiration
Every day, thousands of neuroscientists around the world work to further our knowledge of the brain. A key part of this is understanding how neurons connect to one another, and for that it is often necessary to observe "dendritic spines", tiny protrusions where one neuron receives input from another. These structures can be hard to see, and a single neuron can have dozens or hundreds of them. Currently, highly intelligent and motivated neuroscientists are forced to labour for hours, repetitively labelling spines on images. Their time could be used much more productively! (And they deserve better work than "clicking the little white bumps".)
What it does
My program is a prototype of a deep learning system that can automate the tedious task of labelling spines in images of neurons captured with two-photon microscopy. Unlike similar systems, mine natively understands both 3D space and time, allowing it to track the same spines over the course of multiple experiments.
How I built it
Using PyTorch as my deep learning library of choice, I built a variant of the ResNeXt architecture (https://arxiv.org/abs/1611.05431), a state-of-the-art architecture usually used for image recognition. I generalized ResNeXt to use 3D rather than 2D information natively and to handle time-series data using Long Short-Term Memory (LSTM) layers.
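To make the idea concrete, here is a minimal sketch (not the project's actual code) of how a ResNeXt-style block can be generalized to 3D and combined with an LSTM over time points. The layer sizes, cardinality, and the `SpineTracker` name are all illustrative assumptions; the defining ResNeXt trick is the grouped convolution, here with 3D kernels.

```python
# Hedged sketch of the approach described above, not the author's exact model.
import torch
import torch.nn as nn

class ResNeXtBlock3D(nn.Module):
    """A ResNeXt-style residual block: the grouped 3x3x3 convolution
    splits the transform into `cardinality` parallel paths."""
    def __init__(self, channels=64, cardinality=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            # groups=cardinality is the defining ResNeXt trick, in 3D
            nn.Conv3d(channels, channels, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.net(x))  # residual connection

class SpineTracker(nn.Module):
    """Encode each 3D volume with the block above, then run an LSTM
    over the per-time-point features to track spines across experiments."""
    def __init__(self, channels=64, hidden=128):
        super().__init__()
        self.stem = nn.Conv3d(1, channels, kernel_size=3, padding=1)
        self.block = ResNeXtBlock3D(channels)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)

    def forward(self, x):                    # x: (batch, time, 1, D, H, W)
        b, t = x.shape[:2]
        feats = self.pool(self.block(self.stem(x.flatten(0, 1))))
        feats = feats.view(b, t, -1)         # (batch, time, channels)
        out, _ = self.lstm(feats)            # (batch, time, hidden)
        return out

model = SpineTracker()
vols = torch.randn(2, 4, 1, 8, 16, 16)       # 2 image series, 4 time points each
out = model(vols)
print(out.shape)                             # torch.Size([2, 4, 128])
```

A per-spine detector head would go on top of the LSTM output; the point here is only the shape of the pipeline, i.e. shared 3D convolutional features fed into a recurrent layer so that each prediction can use the full history of the experiment.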
Challenges I ran into
Given the high cost of acquiring the type of data I use (both the microscopy and the labelling of the resulting images), and given that my network uses its data relatively inefficiently (training on whole series of experiments as single data points), the dataset I have is woefully small. It also has many peculiarities, probably caused by inconsistent tagging of images or bugs in the tagging software. Despite this, I was able to understand the data and integrate it into the system.
Accomplishments that I'm proud of
I am proud that I was able to implement such a relatively novel architecture (I can't find any references in the literature that use my exact approach) in such a short time, even while being distracted by my friends.
What I learned
Data inefficiency is a real problem. Many problems that could be solved with deep learning are held back not by a lack of trying, but by the lack of adequate data, and I think that is the limiting factor in this field too. While I believe my architecture is sound and should be able to solve this problem well, it uses data very inefficiently and so might ultimately turn out to be too data-hungry for practical use in this field.
What's next for Dendritic Spine Detection
I hope to collaborate with other labs to acquire more data. Dendritic spine labelling is a very common practice in many laboratories, and with enough cooperation, it could be possible to collect enough data to make this method viable.