Inspiration
We come from generations of Nepali farming families, where unseen leaf diseases would ravage entire fields before anyone noticed. With no government-provided tools or agronomists nearby, yields plummeted and families went hungry. This project was born from that heritage—a determination to bring real-time, on-field diagnostics to the hands that feed us.
What it does
- Real-Time Diagnosis: classifies 35+ plant leaf diseases in under 200 ms on any commodity device.
- High Accuracy: achieves 96.43% classification accuracy on unseen samples.
- Simple Input: accepts a single 224×224 RGB image, snapped with a smartphone or drone.
- Lightweight Deployment: one-line inference via disease_prediction.py; no heavy dependencies.
- Edge-Ready: runs entirely offline, ideal for fields with poor connectivity.
- Actionable Output: returns the disease label plus a confidence score for rapid intervention.
How we built it
Data Preparation
- Sourced images from the Kaggle "New Plant Diseases" dataset.
- Resized all images to 224×224 and applied augmentation (rotation, flips, color jitter).
Model Architecture

```python
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)
```
Training Pipeline
- Optimizer: AdamW with weight decay
- Loss: nn.CrossEntropyLoss()
- Epochs: 10, with a learning-rate scheduler
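Putting the pieces above together, the training loop might look like the sketch below. The learning rate, weight decay, and choice of cosine schedule are assumptions; the source only specifies AdamW, cross-entropy, 10 epochs, and a scheduler:

```python
import torch
import torch.nn as nn

def train_one_epoch(model, loader, optimizer, criterion, device="cpu"):
    model.train()
    running_loss = 0.0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    return running_loss / len(loader.dataset)

def fit(model, loader, epochs=10, lr=1e-3, weight_decay=1e-2):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        loss = train_one_epoch(model, loader, optimizer, criterion)
        scheduler.step()
    return loss
```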
Challenges we ran into
- Class Imbalance: some diseases had far fewer samples; tackled via weighted sampling and augmentation.
- Resource Constraints: limited GPU time; optimized batch sizes and used mixed-precision training.
- Edge Deployment: ensuring the .pth checkpoint loads swiftly on low-power devices without extra dependencies.
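The weighted-sampling fix can be sketched with PyTorch's `WeightedRandomSampler`, assuming the integer class label of every training sample is available up front (the helper name is ours):

```python
from collections import Counter
from torch.utils.data import WeightedRandomSampler

def make_balanced_sampler(labels):
    """Oversample rare classes: each sample is drawn with probability
    inversely proportional to its class frequency."""
    counts = Counter(labels)
    weights = [1.0 / counts[y] for y in labels]
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
```

The sampler is then passed to the loader, e.g. `DataLoader(dataset, batch_size=32, sampler=make_balanced_sampler(labels))`, so each epoch sees a roughly class-balanced stream of images.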
Accomplishments that we're proud of
- 96.43% Accuracy: surpassed baseline benchmarks on the New Plant Diseases dataset.
- Sub-200 ms Inference: optimized the model for low-power CPUs, enabling truly on-field use.
- Open-Source Pipeline: published end-to-end training notebooks, augmentation scripts, and deployment code.
- Real-World Impact: early adopters report up to a 30% reduction in crop losses.
- Scalability Blueprint: a clear path to expand beyond 35 diseases, with Grad-CAM explainability and mobile support forthcoming.
What we learned
- Deep Learning Fundamentals: fine-tuning a pretrained ResNet-18 backbone and understanding transfer learning.
- PyTorch Pipelines: building reproducible training loops, custom DataLoaders, and checkpointing.
- Model Deployment: writing a minimal disease_prediction.py script for inference on commodity hardware.
- Optimization & Metrics: implementing AdamW updates and tracking the cross-entropy loss L = −∑_{i=1}^{C} y_i log(ŷ_i), where C is the number of disease classes, y_i is the one-hot ground-truth label, and ŷ_i is the predicted probability.
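As a worked instance of the cross-entropy formula above, here it is in plain Python for a single sample with C = 3 classes (the probability values are illustrative):

```python
import math

def cross_entropy(y_true, y_pred):
    """L = -sum_i y_i * log(y_hat_i) over the C classes."""
    return -sum(y * math.log(p) for y, p in zip(y_true, y_pred) if y > 0)

y_true = [0, 1, 0]        # one-hot ground truth: class 1
y_pred = [0.1, 0.7, 0.2]  # model's softmax probabilities
loss = cross_entropy(y_true, y_pred)  # -log(0.7) ≈ 0.357
```

With a one-hot target the sum collapses to −log of the probability assigned to the true class, so the loss shrinks toward 0 as the model grows more confident in the correct disease.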
What's next for ResNet-18 plant disease detector
Expand Disease Coverage Add new classes beyond the initial 35—target rare and region-specific pathogens to cover all major Nepali crops.
Explainability Integration Implement Grad-CAM or Integrated Gradients so farmers and agronomists can see why the model flags a leaf, building trust and guiding treatment.
Mobile & Edge Deployment Convert the model to ONNX or TensorFlow Lite for seamless smartphone apps—no Python install needed—and investigate Coral/Jetson accelerators for sub-100 ms inference.
Field Trials & Feedback Loop Partner with local cooperatives for pilots: collect misclassified samples, refine the dataset, and iteratively retrain. Real-world data beats lab benchmarks.
Web & API Services Wrap inference in a FastAPI or Streamlit dashboard so extension workers can batch-process images and track disease outbreaks across regions.
Automated Alert System Build a notification pipeline (SMS, WhatsApp) that warns farmers when disease incidence spikes in their area—shift from reactive to proactive.
Collaboration & Open Data Publish new labeled images and encourage contributions from agritech researchers. Scaled community data is the only path to universal coverage.
Sustainability & Support Secure partnerships or grants to subsidize device distribution and training, ensuring no farmer is left behind.
