Inspiration
Despite having a microscope and plenty of materials to study many types of live cells, studying live neurons has remained out of reach. This is primarily due to the cost of the samples, microscopes, and other equipment, as well as the licenses required to handle the specimens. However, with a good simulation environment, a lot of work and research on AI tooling for neuroscience can be done without any of that.
What it does
This project is a proof of concept showing that simulation tools can be used to build AI models for neuroscience and microscopy, even without access to the physical equipment.
How we built it
We started by looking at some of the real-world datasets of neurons under fluorescence microscopes. While these datasets are good, they are also very specific to the specimen, the microscope, and the types of fluorescence and lighting used in the study. We then tried to replicate some of the features in the real-world data with simulation techniques such as ray tracing and simplex noise.
Challenges we ran into
Generating the data takes a long time. Our first prototype simulation of a confocal microscope took too long for this project, because every layer along the Z axis needed to be imaged separately.
A model trained primarily on synthetic data still has problems doing accurate segmentation of real-world data. While there's still some work to do here, we think the results are not far off.
Accomplishments that we're proud of
Getting the minimum viable product working in a week was tough. The first few projects we made for this hackathon weren't unique enough, so we took this one on at the last minute to build something we felt was both useful and unique.
What we learned
Performance of the simulation matters a lot. Despite using technologies such as C++ and Embree, the simulation is still too slow for very large-scale runs. We are looking to port the ray tracing to Nvidia RTX hardware in order to address this issue.