
The PlantVillage dataset, with over 54,000 images spanning 14 plant species and 26 disease types, has been widely used for leaf disease classification, but it is limited in both scale and diversity. To address these limitations, we developed LeafNet, a large-scale dataset designed to support foundation models for leaf disease diagnosis. LeafNet comprises over 186,000 images from 22 crop species, covering 43 fungal diseases, 8 bacterial diseases, 2 mould (oomycete) diseases, 6 viral diseases, and 3 mite-induced diseases, organized into 97 classes. The images were carefully collected and processed to minimize intra-class variation and to ensure clarity by maintaining a consistent imaging distance. Disease symptom descriptions were curated from reputable sources, including UME, NIH, and published studies, providing high-quality annotations to support AI-driven plant pathology research.

Notes

This public release is a subset of LeafNet intended for training, covering approximately 70% of the full dataset.
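
For quick inspection, here is a minimal loading sketch using the Hugging Face datasets library. The split name ("train") and the column names ("image", "label") are assumptions about this repository's layout rather than details stated on this card; since the dataset is gated, you may first need to accept the access conditions and authenticate with huggingface-cli login.

# Minimal sketch: load the public LeafNet split and inspect one example.
# Assumptions (not confirmed by this card): the split is named "train" and
# examples expose "image" (PIL image) and "label" (integer class id) columns.
from datasets import load_dataset

ds = load_dataset("enalis/LeafNet", split="train")
print(ds)                       # row count and column names

sample = ds[0]
print(sample["label"])          # assumed class id among the 97 classes
print(sample["image"].size)     # assumed PIL image of a single leaf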

BibTeX

If you find our work useful in your research, please consider citing: Khang Nguyen Quoc, Lan Le Thi Thu, and Luyl-Da Quach. "A Vision-Language Foundation Model for Leaf Disease Identification." Expert Systems with Applications (2025).

@article{NGUYENQUOC2025130084,
title = {A Vision-Language Foundation Model for Leaf Disease Identification},
journal = {Expert Systems with Applications},
pages = {130084},
year = {2025},
issn = {0957-4174},
doi = {10.1016/j.eswa.2025.130084},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425037005},
author = {Khang {Nguyen Quoc} and Lan Le {Thi Thu} and Luyl-Da Quach},
keywords = {Leaf disease identification, Contrastive learning, Vision-language models, Foundation models, Image-text retrieval, Context-aware learning},
abstract = {Leaf disease identification plays a pivotal role in smart agriculture. However, many existing studies still struggle to integrate image and textual modalities to compensate for each other’s limitations. Furthermore, many of these approaches rely on pretraining with constrained datasets such as ImageNet, which lack domain-specific information. The research proposes SCOLD (Soft-target COntrastive learning for Leaf Disease identification), a context-aware vision-language foundation model tailored to domain-specific tasks in smart agriculture. SCOLD is developed using a diverse corpus of plant leaf images and corresponding symptom descriptions, comprising over 186,000 image-captions pairs aligned with 97 unique concepts. Through task-agnostic pretraining, SCOLD leverages contextual soft targets to mitigate overconfidence in contrastive learning by smoothing labels, thereby improving model generalization and robustness on fine-grained classification tasks. Experimental results demonstrate that SCOLD outperforms existing Vision-language models (VLMs) such as LLaVA 1.5, Qwen-VL 2.5, OpenAI-CLIP-L, BioCLIP, and SigLIP2 across several benchmarks, including zero-shot and few-shot classification, image-text retrieval, and image classification, while maintaining a competitive parameter footprint. Ablation studies further highlight SCOLD’s effectiveness in contrast to its counterparts. The proposed approach significantly advances the agricultural vision-language foundation model, offering strong performance with minimal or no supervised fine-tuning. This work lays a solid groundwork for future research on models trained with long-form and simplified contexts, tasks involving class ambiguity, and multi-modal systems for intelligent plant disease diagnostics. The code for this study is available at https://huggingface.co/enalis/scold.}
}
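
For reference, a hypothetical zero-shot classification sketch is shown below. It assumes the SCOLD checkpoint at https://huggingface.co/enalis/scold can be loaded through a CLIP-compatible interface in transformers; the loading classes and prompt wording are assumptions, so consult the model repository for the supported API.

# Hypothetical zero-shot sketch, assuming a CLIP-compatible checkpoint.
# The model id is real (https://huggingface.co/enalis/scold), but the
# CLIPModel/CLIPProcessor classes and the prompts are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("enalis/scold")
processor = CLIPProcessor.from_pretrained("enalis/scold")

image = Image.open("leaf.jpg")  # any local leaf photo
texts = ["a tomato leaf with early blight", "a healthy tomato leaf"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=-1)  # image-to-text similarity
print(dict(zip(texts, probs[0].tolist())))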