We developed an end-to-end computational pipeline for segmenting filaments in budding yeast cells in both 2D and 3D. Our main innovation is a Temporal TinyUNet that leverages image sequences rather than single frames. By incorporating temporal context, the model learns whether a structure persists over time and follows plausible filament dynamics, which lets it distinguish true filaments from transient bright structures that would otherwise produce false positives in frame-by-frame inference.
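The exact TinyUNet architecture isn't described here, but the core idea of feeding temporal context can be sketched as follows: stack a short window of T consecutive frames along the channel axis so the convolutions see persistence across time. The layer sizes, window length, and class names below are illustrative assumptions, not the pipeline's actual implementation.

```python
import torch
import torch.nn as nn

class TemporalTinyUNet(nn.Module):
    """Minimal sketch: a tiny encoder-decoder whose input is a window
    of T consecutive frames stacked as channels, so early convolutions
    can weigh whether a bright structure persists across frames."""
    def __init__(self, t_frames=5, base=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(t_frames, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
        )
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(
            nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU(),
        )
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec = nn.Sequential(
            # skip connection: concat upsampled features with encoder output
            nn.Conv2d(base * 2 + base, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, 1, 1),  # per-pixel filament logit for the window
        )

    def forward(self, x):           # x: (B, T, H, W)
        e = self.enc(x)
        m = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([m, e], dim=1))

frames = torch.randn(2, 5, 64, 64)          # batch of 5-frame windows
logits = TemporalTinyUNet(t_frames=5)(frames)
print(logits.shape)                         # torch.Size([2, 1, 64, 64])
```

Stacking frames as channels is the simplest way to add temporal context; it keeps the network fully convolutional and adds no recurrent state, which matters for the roughly one-second inference times reported below.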
To generate training masks, we used a multi-pronged strategy. First, we built a manual annotation tool for filament labeling. Second, we applied topological data analysis (TDA) as a slower but accurate and tunable method for producing candidate masks that a human can then review. Third, we created synthetic training data by sampling fluorescence distributions and the morphology of cells and filaments from real images, then generating large numbers of simulated cells and masks. To make these synthetic images realistic under low signal-to-noise conditions, we added Poisson shot noise and Gaussian read noise.
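The noise model for the synthetic images can be sketched in a few lines of NumPy. The conversion gain and read-noise level below are illustrative values, not the calibrated constants used in the pipeline.

```python
import numpy as np

def add_camera_noise(clean, photons_per_unit=50.0, read_noise_sd=2.0, rng=None):
    """Corrupt a clean synthetic fluorescence image with Poisson shot
    noise (signal-dependent) and Gaussian read noise (signal-independent).
    `photons_per_unit` and `read_noise_sd` are illustrative, not calibrated."""
    rng = np.random.default_rng() if rng is None else rng
    # Shot noise: convert intensity to an expected photon count, sample Poisson
    photons = rng.poisson(np.clip(clean, 0, None) * photons_per_unit)
    # Read noise: additive Gaussian, expressed back in image-intensity units
    noisy = photons / photons_per_unit + rng.normal(
        0.0, read_noise_sd / photons_per_unit, clean.shape)
    return noisy.astype(np.float32)

clean = np.zeros((64, 64), np.float32)
clean[30:34, 10:54] = 1.0                  # a bright synthetic "filament"
noisy = add_camera_noise(clean, rng=np.random.default_rng(0))
```

Because shot noise scales with intensity while read noise does not, lowering `photons_per_unit` is a simple way to simulate the low signal-to-noise regime the model must handle.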
We evaluated the Temporal TinyUNet against both the classical TDA method and a ridge-detection-based approach in 2D and 3D. Our method produced more accurate segmentations while reducing runtime from minutes to about one second.
In addition, we built an interface that automatically detects clusters of budding yeast cells directly from the original field of view. It uses 2D brightfield segmentation together with fluorescence spatial information to identify relevant regions for analysis. The result is a complete workflow that takes the user from raw microscopy data through automated cropping, 2D or 3D filament segmentation with multiple method options, and quantitative analysis of filament properties at both the single-cell and dataset levels. Finally, the pipeline supports export of crops and masks for 3D visualization in napari, enabling manual inspection and downstream quantification.
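The napari export step amounts to writing the crop and mask volumes as TIFF stacks that napari can open as image and label layers. The helper and file naming below are illustrative assumptions, not the pipeline's actual convention.

```python
import numpy as np
import tifffile

def export_for_napari(crop, mask, stem):
    """Write a 3D crop and its mask as TIFF stacks. napari can open
    both files directly and overlay the mask as a labels layer.
    The `_crop`/`_mask` naming here is illustrative only."""
    tifffile.imwrite(f"{stem}_crop.tif", crop.astype(np.float32))
    tifffile.imwrite(f"{stem}_mask.tif", mask.astype(np.uint8))

crop = np.random.rand(20, 64, 64).astype(np.float32)   # (Z, Y, X) volume
mask = (crop > 0.9).astype(np.uint8)                   # toy binary mask
export_for_napari(crop, mask, "cell001")
```

In napari, the crop can then be added with `viewer.add_image(...)` and the mask with `viewer.add_labels(...)` for 3D inspection.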
Built With
- Python 3.10+, PyTorch, torchvision
- Cellpose, OpenCV, scikit-learn, SciPy, NumPy, pandas
- Gradio web viewer for inspection
- imageio, imageio-ffmpeg, tifffile, Pillow, matplotlib, plotly, tqdm
- uv for environment and dependency management (install with uv sync, run with uv run)
- Core workflow: annotate, train, infer, batch analysis
- Local TIFF microscopy datasets in tifs2d/ and tiffs3d/; outputs in results/, models/, and videos/
- Hardware: NVIDIA GeForce RTX 5090 and Apple M2 Pro with MPS
- AI tools: Gemini 3.1 Pro and GPT-5.4