💡 Inspiration
Biodiversity conservation is one of the most pressing challenges of our time. Researchers, conservationists, and citizen scientists all depend on identifying species quickly and accurately, yet traditional methods require expert taxonomists and considerable time. At the same time, multispecies datasets are often imbalanced, making it hard to train robust models. We asked ourselves: can deep learning help democratize species recognition across ecosystems?
That's how BioVisionNet was born: a transfer learning framework that uses state-of-the-art deep learning models for multispecies image classification, enabling faster, more reliable biodiversity monitoring.
⚙️ What it does
BioVisionNet is a computer vision framework that:
🖼️ Classifies images across multiple species (plants, animals, insects, etc.)
🤖 Uses transfer learning (ResNet, DenseNet, EfficientNet) to adapt pretrained models to biodiversity datasets
📊 Handles imbalanced datasets with augmentation, class weighting, and sampling strategies
🌐 Provides an interface where users (researchers, students, conservationists) can upload an image and get real-time classification predictions
🔬 Scales for use in ecological monitoring, research, and citizen science apps
🛠️ How we built it
Dataset: Collected multispecies images from open biodiversity datasets (Kaggle, GBIF, iNaturalist).
Preprocessing: Cleaned and augmented the data to handle imbalance and improve generalization.
Modeling:
Implemented transfer learning with EfficientNetB0 (for performance vs. efficiency tradeoff).
Compared with ResNet50 and DenseNet121 for benchmarking.
Added CBAM attention blocks to improve feature extraction on fine-grained species details.
Training: 80:20 split, optimized with Adam and learning rate scheduling.
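The training setup can be sketched like this (the `train` helper and its hyperparameters are illustrative stand-ins, not the exact training script):

```python
# Illustrative training loop: 80:20 random split, Adam, and a plateau
# learning-rate scheduler driven by validation loss.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

def train(model, dataset, epochs=5, lr=1e-3):
    n_train = int(0.8 * len(dataset))
    train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)

    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=2)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        model.train()
        for xb, yb in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader)
        scheduler.step(val_loss)  # reduce LR when validation loss plateaus
    return val_loss
```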
Evaluation: Measured accuracy, precision, recall, F1-score, and confusion matrix to ensure balanced performance across species.
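The evaluation step, sketched with scikit-learn on toy labels (the real run would use the model's test-set predictions):

```python
# Illustrative evaluation: accuracy, macro precision/recall/F1, and a
# confusion matrix, so minority classes are not masked by overall accuracy.
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

y_true = [0, 0, 1, 1, 2, 2]  # toy ground-truth species labels
y_pred = [0, 1, 1, 1, 2, 0]  # toy model predictions

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
cm = confusion_matrix(y_true, y_pred)  # rows = true class, cols = predicted
```

Macro averaging weights every species equally, which is what makes per-class balance visible on an imbalanced dataset.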
Deployment: Built a simple web app with Flask where users can upload an image and receive instant classification results.
😅 Challenges we ran into
📊 Imbalanced datasets (some species had thousands of samples, others only a few dozen).
🦋 Fine-grained differences (many species look visually similar, requiring high-resolution feature extraction).
⏱️ Training time and GPU limitations during experimentation.
🧩 Integrating models into a lightweight web app while keeping predictions fast.
✅ Accomplishments that we're proud of
Built a generalizable transfer learning framework that works across multiple biodiversity datasets.
Achieved ~85% accuracy on held-out test data, with strong F1-scores across minority classes.
Integrated attention mechanisms (CBAM) to highlight species-specific visual cues.
Created a working web demo where anyone can upload an image and test the model.
Designed the project with real-world applicability for conservation tech and citizen science.
📚 What we learned
Transfer learning is a game-changer for limited and imbalanced datasets.
Attention mechanisms like CBAM and SE blocks improve fine-grained classification significantly.
Deployment is just as important as accuracy: users need an easy interface to trust and adopt AI models.
Hackathons push you to go from idea to working prototype in record time!
🔮 What's next for BioVisionNet
🌍 Expand to more species datasets (marine life, endangered species, crop diseases).
📱 Build a mobile app for offline species recognition in the field.
🤝 Partner with NGOs and conservation organizations to put BioVisionNet to work in biodiversity projects.
🔍 Integrate explainable AI to show which visual features influenced each prediction, helping researchers trust the results.