Inspiration
Drug discovery operates under extreme constraints: limited labeled data, high experimental costs, and noisy biological signals. In molecular property prediction tasks like Blood–Brain Barrier Penetration (BBBP), even strong graph neural networks often struggle to generalize reliably, contributing to late-stage drug failures and wasted R&D effort.
Recent advances in dendritic optimization suggest that biologically inspired learning dynamics can improve how neural networks allocate capacity and learn from scarce data. This hackathon project was inspired by a simple but powerful question: can dendrites help graph neural networks learn more effectively in realistic drug discovery settings without changing the model architecture or data?
By applying Perforated AI’s dendritic optimization to a standard GIN model on MoleculeNet BBBP, this project explores whether dendrites can reduce remaining error, improve convergence behavior, and unlock better performance in noisy, data-limited biomedical graphs. The goal is not just higher accuracy, but insights into how dendritic learning could make AI-driven drug screening more reliable, efficient, and accessible on constrained hardware.
What it does
This project applies Perforated AI’s Dendritic Optimization to a Graph Isomorphism Network (GIN) trained on the MoleculeNet BBBP (Blood–Brain Barrier Penetration) dataset to improve molecular property prediction.
Specifically, it:
Trains a baseline GIN model using standard backpropagation to predict whether a molecule can cross the blood–brain barrier.
Trains an identical GIN architecture enhanced with dendritic optimization, keeping the dataset, optimizer, and random seed fixed for a fair comparison.
Automatically tracks learning dynamics, validation performance, parameter growth, and restructuring events using Perforated AI’s training instrumentation.
Produces reproducible metrics and graphs that visualize how dendritic learning alters capacity allocation and convergence behavior over time.
Quantifies the impact of dendrites using validation AUC, test AUC, parameter count, and remaining error reduction.
The result is a side-by-side evaluation showing how dendritic optimization can significantly reduce remaining prediction error in a realistic drug discovery benchmark—without changing the underlying model architecture or data.
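For reference, "remaining error reduction" can be computed directly from the two AUC scores. The helper below is a minimal sketch of the metric as we use it; the numbers in the example are illustrative, not results from this project:

```python
def remaining_error_reduction(baseline_auc: float, dendritic_auc: float) -> float:
    """Fraction of the baseline's remaining error (1 - AUC) that the
    dendritic model eliminates. Positive when dendrites improve on the
    baseline, zero when both models score the same."""
    baseline_error = 1.0 - baseline_auc
    dendritic_error = 1.0 - dendritic_auc
    return (baseline_error - dendritic_error) / baseline_error

# Illustrative numbers only: moving from 0.90 to 0.92 AUC
# removes 20% of the remaining error.
print(remaining_error_reduction(0.90, 0.92))
```

Framing gains this way matters on near-saturated benchmarks: a 2-point AUC bump looks small in absolute terms but can correspond to a large fraction of the error that was left to remove.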
How we built it
We built this project by integrating Perforated AI’s Dendritic Optimization into a Graph Isomorphism Network (GIN) workflow for molecular property prediction, while keeping the experimental setup as controlled and reproducible as possible.
Model & Dataset
We used a Graph Isomorphism Network (GIN) implemented with PyTorch Geometric.
The model was trained on the MoleculeNet BBBP dataset, where molecules are represented as graphs with atom-level node features and bond-level connectivity.
Baseline Training
A baseline GIN model was trained using standard backpropagation.
Fixed hyperparameters (hidden dimension, number of layers, optimizer, learning rate, weight decay, and random seed) ensured consistent comparison.
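A minimal sketch of the kind of frozen run configuration and seeding this refers to. The values and the `RunConfig`/`seed_everything` names are illustrative, not the actual hyperparameters from our runs (those live in the W&B logs), and the real pipeline also seeds torch and numpy:

```python
import random
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class RunConfig:
    # Illustrative values only, not the hyperparameters used in the project.
    hidden_dim: int = 64
    num_layers: int = 5
    lr: float = 1e-3
    weight_decay: float = 5e-4
    seed: int = 42

def seed_everything(seed: int) -> None:
    # Only the stdlib RNG is seeded in this sketch; the real pipeline
    # would also call torch.manual_seed(seed) and numpy's seeding.
    random.seed(seed)

cfg = RunConfig()          # frozen=True prevents accidental mutation mid-run
seed_everything(cfg.seed)
print(asdict(cfg))
```

Freezing the config and seeding before both runs is what makes the baseline-vs-dendritic comparison attributable to dendrites alone.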
Dendritic Integration
We wrapped the same GIN model with Perforated AI’s PAINeuronModuleTracker, enabling dendritic optimization without changing the architecture.
Dendrites dynamically reallocated capacity during training based on validation performance, allowing the network to adapt its internal structure.
All dendritic events (restructuring, parameter growth, learning-rate interactions) were automatically tracked.
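The validation-driven switching idea above can be illustrated with a toy state machine: train normally until validation AUC plateaus, then switch phases and record a restructuring event. This is a hypothetical sketch of the concept only; the real logic lives inside Perforated AI's tracker, and the `DendriticSwitcher` class below is not part of its API:

```python
class DendriticSwitcher:
    """Toy illustration: alternate between a 'neuron' training phase and a
    'dendrite' growth phase whenever validation AUC stops improving for
    `patience` consecutive epochs."""
    def __init__(self, patience: int = 3):
        self.patience = patience
        self.best = float("-inf")
        self.stale = 0
        self.phase = "neuron"
        self.events = []          # (epoch, new_phase) restructuring log

    def step(self, epoch: int, val_auc: float) -> str:
        if val_auc > self.best:
            self.best, self.stale = val_auc, 0
        else:
            self.stale += 1
        if self.stale >= self.patience:
            # Plateau detected: switch phase and record the event.
            self.phase = "dendrite" if self.phase == "neuron" else "neuron"
            self.events.append((epoch, self.phase))
            self.stale = 0
        return self.phase

sw = DendriticSwitcher(patience=2)
for epoch, auc in enumerate([0.70, 0.75, 0.75, 0.74, 0.76, 0.76, 0.75]):
    sw.step(epoch, auc)
print(sw.events)   # → [(3, 'dendrite'), (6, 'neuron')]
```

The event log is what makes the restructuring history inspectable after the fact, mirroring the automatic tracking described above.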
Instrumentation & Tracking
Training metrics were logged using Weights & Biases (W&B).
Perforated AI’s built-in logging generated raw PAI graphs and CSV artifacts required for verification and reproducibility.
Evaluation
We compared baseline and dendritic runs using validation AUC, test AUC, parameter count, and remaining error reduction.
Multiple runs and ablations were performed to understand stability and generalization behavior on a small biomedical dataset.
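As a reference point for the headline metric, ROC AUC has a simple rank-based (Mann–Whitney) formulation: the probability that a random positive molecule is scored above a random negative one, with ties counted as half. The function below is a self-contained reference implementation, not the evaluation code used in the project (which relied on standard library tooling):

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Toy example with two positives and two negatives:
# 3 of the 4 positive/negative pairs are ranked correctly.
print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1]))   # → 0.75
```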
This approach demonstrates how dendritic optimization can be added to an existing GNN pipeline with minimal code changes while delivering measurable performance gains.
Challenges we ran into
Small, Noisy Biomedical Data
The BBBP dataset is small (roughly 2,000 molecules) and noisy, which makes performance improvements hard to achieve and increases the risk of overfitting.
Small gains in AUC are meaningful, but instability can appear when model capacity grows too quickly.
Balancing Dendritic Capacity
Dendritic optimization dynamically increases model capacity, which can improve accuracy but also introduce unnecessary parameters if unconstrained.
We observed that not all dendritic growth regimes generalize well on near-saturated datasets like BBBP.
This required careful tuning of dendritic behavior and validation-based switching.
Ensuring Fair Comparisons
To make results credible, the baseline and dendritic models had to be identical in:
Architecture
Dataset splits
Optimizer and learning rate
Random seed
Even small differences in setup could invalidate conclusions.
Framework Integration Details
Integrating Perforated AI into a PyTorch Geometric workflow required attention to:
Optimizer re-initialization after dendritic restructuring
Proper handling of wrapped vs. unwrapped modules
Avoiding unsupported optimizer arguments during dynamic model updates
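One defensive pattern for the last point is to filter keyword arguments against the optimizer constructor's signature before rebuilding it after a restructuring event, so arguments an optimizer class does not accept are silently dropped instead of raising. `filtered_kwargs` and `FakeOptimizer` below are hypothetical names used for illustration, not part of any library:

```python
import inspect

def filtered_kwargs(cls, **kwargs):
    """Drop keyword arguments that the constructor of `cls` does not accept.
    Illustrates the defensive pattern used when re-creating the optimizer
    after a dendritic restructuring event."""
    params = inspect.signature(cls.__init__).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)      # constructor takes **kwargs: keep everything
    return {k: v for k, v in kwargs.items() if k in params}

# Stand-in for an optimizer class that doesn't accept `momentum`.
class FakeOptimizer:
    def __init__(self, params, lr=0.01, weight_decay=0.0):
        self.lr, self.weight_decay = lr, weight_decay

kw = filtered_kwargs(FakeOptimizer, lr=0.001, weight_decay=5e-4, momentum=0.9)
opt = FakeOptimizer(params=[], **kw)
print(kw)   # momentum dropped: {'lr': 0.001, 'weight_decay': 0.0005}
```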
Reproducibility & Verification
The hackathon requires the raw PAI output graph for verification.
Managing multiple generated artifacts (graphs, CSVs, checkpoints) and selecting the correct final run required discipline and cleanup.
Interpreting Dendritic Behavior
Understanding when dendrites help versus when they simply add capacity was non-trivial.
This led to an important insight: dendritic optimization is most effective when aligned with the dataset’s error regime (accuracy-seeking vs. compression-seeking).
These challenges ultimately strengthened the project by forcing careful experimentation, clearer ablations, and a deeper understanding of how dendritic optimization behaves in real-world drug discovery settings.