In a research laboratory in India, a team examined high-resolution, label-free phase-contrast time-lapse microscopy sequences from the LIVECell dataset, documenting neuroblastoma cell differentiation at three distinct spatial locations over three days. The sequences revealed intricate patterns of cell division, migration, and neurite outgrowth, yet one fundamental question persisted: can the future state of cellular growth be reliably predicted from only a short historical window of phase-contrast images?

That limitation inspired a pivotal inquiry: what if artificial intelligence could forecast the next frame of cellular dynamics from just four prior images captured at 4-hour intervals, without fluorescent labeling, genetic modification, or manual tracking? Could a deep learning model not only reconstruct past behavior but also anticipate where and how cells will expand next, while preserving the uncertainty inherent in biological systems?

From this vision emerged CellPredict: a hybrid ConvLSTM-GAN architecture that generates soft growth-probability maps from sparse temporal inputs. The ConvLSTM component learns spatiotemporal dynamics, while adversarial training enforces biological realism, producing subtle, gradient-based predictions that reflect probabilistic growth zones rather than artificial binary boundaries.

This work lays a foundation for AI-driven predictive biology, enabling in silico drug screening, cancer-progression forecasting, and regenerative-therapy design from minimally invasive imaging. The core question guiding future work: if AI can predict cellular fate from 4-hour imaging intervals alone, how rapidly might we accelerate therapeutic discovery without ever touching a pipette?
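To make the architecture concrete, here is a minimal, hypothetical PyTorch sketch of the generator side of such a ConvLSTM-GAN: a single ConvLSTM cell unrolled over the four prior frames, followed by a 1x1 convolution and a sigmoid head that emits a soft growth-probability map. The layer sizes, class names, and single-cell design are illustrative assumptions, not the project's actual implementation (which would also pair this generator with an adversarial discriminator during training).

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Assumed minimal ConvLSTM cell: one conv computes all four gates at once."""
    def __init__(self, in_ch, hidden_ch, kernel=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        # Split the stacked conv output into input, forget, cell, output gates.
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class GrowthGenerator(nn.Module):
    """Hypothetical generator: 4 grayscale frames in, soft probability map out."""
    def __init__(self, hidden_ch=32):
        super().__init__()
        self.cell = ConvLSTMCell(in_ch=1, hidden_ch=hidden_ch)
        self.head = nn.Conv2d(hidden_ch, 1, kernel_size=1)

    def forward(self, frames):  # frames: (batch, T=4, 1, H, W)
        b, t, _, hgt, wid = frames.shape
        h = frames.new_zeros(b, self.cell.hidden_ch, hgt, wid)
        c = torch.zeros_like(h)
        for step in range(t):  # unroll over the four 4-hour-interval frames
            h, c = self.cell(frames[:, step], (h, c))
        # Sigmoid keeps the prediction a gradient-based map in [0, 1],
        # not a hard binary boundary.
        return torch.sigmoid(self.head(h))

gen = GrowthGenerator()
pred = gen(torch.rand(2, 4, 1, 64, 64))
print(pred.shape)  # torch.Size([2, 1, 64, 64])
```

In a full GAN setup, this output map would be passed to a discriminator alongside real next-frame masks, and the generator would be trained on a weighted sum of reconstruction and adversarial losses.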
