Inspiration

Central nervous system (CNS) drugs fail in clinical trials more than any other category — often because of unexpected neurotoxic side effects that appear late, after billions of dollars have been spent. We asked: what if we could warn researchers on day one, before animal studies or human trials?

What we built

We created NeuroTox Watchdog, a hybrid AI–organoid co-processor that predicts whether a drug is likely to cause neurotoxicity. The system combines:

A machine learning model trained on drug structures and known side-effect data.

A biological validation loop using Cortical Labs’ living neural tissue, where drugs are encoded as stimulation patterns and the electrophysiological response is analyzed.

A fusion engine that integrates AI probability scores with organoid stress indices to produce a final safety label: Likely Safe, Caution, or High Risk.
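As a minimal sketch of the fusion step, assuming a weighted blend of the two scores with illustrative weights and thresholds (the names `fuse`, `w_ai`, `w_bio` and the cutoffs 0.33/0.66 are our stand-ins, not the exact values used in the project):

```python
def fuse(ai_prob: float, stress_index: float,
         w_ai: float = 0.6, w_bio: float = 0.4) -> str:
    """Combine an ML toxicity probability (0-1) with a normalized
    organoid stress index (0-1) into one of three safety labels.

    Weights and thresholds here are illustrative placeholders."""
    score = w_ai * ai_prob + w_bio * stress_index
    if score < 0.33:
        return "Likely Safe"
    elif score < 0.66:
        return "Caution"
    return "High Risk"

label = fuse(ai_prob=0.85, stress_index=0.9)  # both modalities agree: risky
```

Keeping the fusion rule this simple makes every final label traceable back to its two inputs, which matters when the downstream decision is about drug safety.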

How we built it

Collected a curated list of compounds (neurotoxic vs. safe) and generated molecular descriptors (ECFP4, physicochemical features).
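The featurization step can be sketched with RDKit as follows (ECFP4 corresponds to the Morgan fingerprint with radius 2; the bit width and the particular physicochemical descriptors are assumptions, not the project's exact feature set):

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors

def featurize(smiles: str, n_bits: int = 2048) -> np.ndarray:
    """Turn a SMILES string into an ECFP4 bit vector plus a few
    physicochemical descriptors. Descriptor choice is illustrative."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    # ECFP4 = Morgan fingerprint with radius 2
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    physchem = [Descriptors.MolWt(mol),   # molecular weight
                Descriptors.MolLogP(mol), # lipophilicity
                Descriptors.TPSA(mol)]    # topological polar surface area
    return np.concatenate([np.array(fp, dtype=float), physchem])

vec = featurize("CCO")  # ethanol: 2048 fingerprint bits + 3 descriptors
```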

Trained a calibrated Random Forest baseline (AUROC ≈ 0.8 on our pilot dataset).
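A minimal sketch of the calibrated baseline, with synthetic data standing in for the real compound features (hyperparameters are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the compound feature matrix and toxicity labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Wrap the forest in CalibratedClassifierCV so predict_proba outputs
# are usable as probabilities in the fusion step
rf = RandomForestClassifier(n_estimators=200, random_state=0)
clf = CalibratedClassifierCV(rf, method="isotonic", cv=3).fit(X_tr, y_tr)

probs = clf.predict_proba(X_te)[:, 1]
auroc = roc_auc_score(y_te, probs)
```

Calibration matters here because the fusion engine consumes raw probabilities, not just class labels; an uncalibrated forest tends to push scores away from 0 and 1.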

Designed stimulation encodings for drugs, ran them on the organic computer, and extracted spike-train features such as firing rate, burstiness, and entropy.
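The spike-train featurization can be sketched like this, using the coefficient of variation of inter-spike intervals as a burstiness proxy and the Shannon entropy of the ISI histogram; the real pipeline's exact definitions may differ:

```python
import numpy as np

def spike_features(spike_times: np.ndarray, duration: float) -> dict:
    """Extract simple features from a single channel's spike times (s)."""
    isi = np.diff(np.sort(spike_times))        # inter-spike intervals
    rate = len(spike_times) / duration         # mean firing rate (Hz)
    # CV of ISIs: ~1 for Poisson-like firing, >1 suggests bursting
    cv = isi.std() / isi.mean() if len(isi) else 0.0
    # Shannon entropy of the ISI distribution (bin count is arbitrary)
    hist, _ = np.histogram(isi, bins=10)
    p = hist / hist.sum() if hist.sum() else hist.astype(float)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"rate_hz": rate, "burstiness_cv": cv, "isi_entropy_bits": entropy}

feats = spike_features(np.array([0.1, 0.15, 0.2, 1.0, 1.05, 2.0]),
                       duration=2.0)
```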

Developed a Streamlit web app that lets users input a SMILES string + spike CSV and see predictions, explanations (via SHAP), and raster plots.

What we learned

How to translate molecules into both vectorized fingerprints for AI and electrode stimulation patterns for living tissue.

That organoid responses show measurable “stress signatures” under toxic compounds, especially changes in synchrony and burst statistics.

How to merge two radically different modalities (machine learning and wetware) into a single, interpretable decision pipeline.

Challenges we faced

Small datasets: only a handful of neurotoxic compounds are well characterized, so we had to carefully balance interpretability with statistical power.

Signal noise: organoid spike trains are inherently noisy, requiring robust feature extraction and normalization against baselines.
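One way to normalize drug-condition features against pre-drug baseline recordings is a robust z-score (median/MAD rather than mean/std, so outlier sweeps don't dominate); the project's exact normalization scheme is not specified, so this is a sketch:

```python
import numpy as np

def normalize_to_baseline(drug_feats: np.ndarray,
                          baseline_feats: np.ndarray) -> np.ndarray:
    """Robust z-score: center on the baseline median, scale by the
    median absolute deviation (MAD) per feature."""
    med = np.median(baseline_feats, axis=0)
    mad = np.median(np.abs(baseline_feats - med), axis=0)
    mad = np.where(mad == 0, 1.0, mad)  # guard against divide-by-zero
    return (drug_feats - med) / mad

# Three baseline sweeps of two features (e.g. firing rate, burstiness)
base = np.array([[1.0, 10.0],
                 [1.2, 11.0],
                 [0.8,  9.0]])
z = normalize_to_baseline(np.array([2.0, 20.0]), base)
```

Values far from zero then flag features that moved well outside the baseline's own variability, regardless of each feature's raw scale.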

Integration: bridging RDKit → ML → electrophysiology → Streamlit in just a hackathon weekend was a major engineering challenge.

Ethics: ensuring safe and responsible use of neural tissue, while communicating clearly that the system does not approach sentience.

Built With

  • python (3.10)
  • rdkit (molecular fingerprints, descriptors)
  • scikit-learn (random forest, calibration, shap explainability)
  • numpy / pandas / scipy (data wrangling, stats)
  • streamlit (web ui for demo)
  • matplotlib / seaborn (plots, raster visualization)
  • cortical labs live organoid computer for electrophysiology