A Convolutional Neural Network (CNN) Based Classification Framework for Multi-Crop Disease Detection Using Leaf Images

Vinay Sampatrao Mandlik1* and Lenina Vithalrao Birgale2

1Department of Electronics and Telecommunication Engineering, Swami Ramanand Teerth Marathwada University, Nanded, Maharashtra, India

2Department of Electronics and Telecommunication Engineering, SGGSIE and T Vishnupuri, Nanded, Maharashtra, India

Corresponding Author E-mail:vinaymandlik@gmail.com

DOI : http://dx.doi.org/10.12944/CARJ.13.3.15

Article Publishing History

Received: 31 Oct 2025
Accepted: 05 Dec 2025
Published Online: 16 Dec 2025

Review Details

Plagiarism Check: Yes
Reviewed by: Dr. Syed Sumera Ali
Second Review by: Dr. Monica Dutta
Final Approval by: Dr. José Luis da Silva Nunes

Article Metrics

Views 

    PDF Download PDF Downloads: 61

Google Scholar

Abstract:

Early and precise diagnosis of crop diseases is crucial for global food security, particularly in developing countries where agriculture still plays a dominant role. This study presents a deep learning approach for labelling ten different plant disease conditions across three principal crops—maize, potato, and soybean. The Convolutional Neural Network (CNN) model incorporates multiple convolutional and batch normalization layers, achieving an overall classification accuracy of 95 %. Class-wise F1-scores range from 0.84 to 0.96, with notably strong performance for the Potato-Healthy and Soybean-Healthy categories. The model demonstrates robust generalization to variations in background, lighting, and leaf orientation, highlighting its suitability for real-world agricultural environments. This work supports the development of automated, scalable, and accurate multi-crop disease detection systems. The study also examines challenges such as class imbalance and overfitting, and proposes improvements including the integration of attention mechanisms and transfer learning. However, the model’s performance is still limited by the relatively small dataset size and restricted environmental diversity, suggesting future scope for expansion through larger field-based datasets, multimodal sensing, and advanced hybrid architectures.

Keywords:

Convolutional Neural Networks; Deep Learning; Plant Disease Detection; Precision Farming; Smart Agriculture

Download this article as: 

Copy the following to cite this article:

Mandlik V. S, Birgale L. V. A Convolutional Neural Network (CNN) Based Classification Framework for Multi-Crop Disease Detection Using Leaf Images. Curr Agri Res 2025; 13(3). doi : http://dx.doi.org/10.12944/CARJ.13.3.15

Copy the following to cite this URL:

Mandlik V. S, Birgale L. V. A Convolutional Neural Network (CNN) Based Classification Framework for Multi-Crop Disease Detection Using Leaf Images. Curr Agri Res 2025; 13(3). Available from: https://bit.ly/48X6jgr


Introduction

Agriculture remains the backbone of many developing economies, providing livelihoods, employment, and food security. Plant diseases pose one of the greatest threats to crop productivity, causing major yield losses and economic damage. According to the Food and Agriculture Organization (FAO), pests and diseases are responsible for nearly 40% of global crop losses each year, highlighting the need for effective plant disease monitoring and control.

Importance of Early Detection

Timely identification of diseases is crucial for preventing their spread, reducing reliance on chemical treatments, and securing higher yields. Traditional diagnostic methods—such as expert visual inspection—are often slow, subjective, and difficult to scale, especially in resource-limited regions. These limitations emphasize the necessity for fast, accurate, cost-effective, and automated disease detection systems that can be easily integrated into modern agricultural practices.

The Role of Technology in Agriculture

Advances in artificial intelligence (AI), especially deep learning (DL) and computer vision (CV), are transforming many sectors, including agriculture. These technologies now support automated crop monitoring, soil analysis, irrigation management, and pest control. Among them, image-based plant disease detection is particularly promising because symptoms often first appear on leaves. Deep learning models—especially Convolutional Neural Networks (CNNs)—excel at image interpretation by automatically learning hierarchical features, making them more effective than traditional machine learning methods that depend on hand-crafted features.

Need for Multi-Class, Multi-Crop Classification Systems

Many existing plant disease models are crop-specific, rely on limited datasets, and often perform well only in controlled environments. However, real-world farms, especially those managed by smallholders, cultivate multiple crops, demanding systems that can detect various diseases across different species. To address this need, we develop and evaluate a CNN-based model capable of identifying diseases in three major crops—maize, potato, and soybean—covering multiple disease types as well as healthy leaves. This multi-class, multi-crop approach enhances practical applicability and moves closer to deployable disease detection systems for real agricultural settings.

Dataset and Problem Scope

The proposed CNN model is trained on a dataset containing 1,000 images, divided into 800 for training, 200 for validation, and 1,000 for testing across 10 classes. Each class corresponds to a specific plant condition or disease, such as Maize-Brown-Spot or Soybean-Pod Mottle. The images include variations in lighting, background, and leaf orientation to better reflect field-level diversity. The task is to classify a given leaf image into its correct category, requiring the model to learn fine inter-class differences with consistent accuracy across all classes.

CNN as the Preferred Architecture

Convolutional Neural Networks (CNNs) are well suited for image classification due to their ability to automatically extract spatial and hierarchical features. The model used here consists of three convolutional layers with batch normalization and max pooling, followed by a dense softmax output layer for predicting class probabilities. This architecture offers a balanced compromise between accuracy and computational efficiency, making it appropriate for relatively small datasets and for deployment on resource-constrained or embedded systems.

Challenges in Disease Classification

Despite the promise of deep learning in agricultural applications, several challenges hinder its full-scale deployment:

Data Imbalance: In agricultural datasets, some disease classes may be underrepresented due to their rarity or seasonal occurrence, which can lead to biased predictions.

Variability in Image Acquisition: Real-world images often suffer from inconsistent lighting, occlusions, varying resolutions, and background noise, all of which can degrade model performance.

Visual Similarity Across Diseases: Many plant diseases exhibit overlapping visual symptoms, such as yellowing, spotting, or curling of leaves, making it difficult to differentiate between them without domain-specific cues.

Generalization to Field Conditions: Models trained on curated datasets may fail to generalize well in diverse agricultural environments where noise and variability are common.

Overcoming these challenges necessitates the creation of resilient models, the application of effective data augmentation techniques, and thorough validation across varied and representative datasets.

 Objectives of the Study

The primary objectives of this research are:

To design a CNN model for accurately classifying ten different plant leaf classes spanning maize, potato, and soybean crops.

To evaluate the model’s performance using standard classification metrics such as precision, recall, F1-score, and overall accuracy.

To analyze misclassification patterns and identify potential limitations and improvement areas.

To contribute a scalable framework for multi-crop disease detection that can be further enhanced with real-time deployment capabilities.

Contributions

This paper makes the following key contributions:

A custom CNN architecture designed and trained from scratch to classify multiple crop diseases with high accuracy.

Extensive performance evaluation, including class-wise metrics on a diverse dataset.

Detailed analysis of training and validation behaviors to assess overfitting, generalization, and class-level performance.

A review of limitations and future directions, including data-centric and model-centric improvements. 

Materials and Methods

The application of DL in agriculture has gained significant momentum in the last decade Among its various applications, plant disease detection using image classification methods has shown exceptional ability in improving crop health monitoring and reducing dependence on manual inspections. This literature review presents an overview of relevant research efforts, focusing on the evolution of plant disease detection methods, the use of CNN architectures, challenges faced, and emerging strategies for multi-crop classification systems.

Traditional Approaches to Plant Disease Detection

Before the rise of DL,  plant disease recognition depended much on traditional ML. These methods oftentimes followed the course of feature engineering, where ad-hoc properties of the leaves (e.g., color histograms, shape descriptors, and texture patterns) were manually computed from the leaf digital images. These features were fed to classifiers such as k-NN, SVM, Decision Trees, or Random Forest etc.1

For example, Rumpf T. et al.2 demonstrated that using hyperspectral imaging and SVM could diagnose fungal diseases in wheat leaves. However, these approaches were not easily scalable,  highly susceptible to noise, and did not lend themselves to readily engineered detection of new diseases.

Rise of Deep Learning in Plant Pathology

DL has revolutionized image classification by learning feature hierarchies directly from raw pixel data. Unlike traditional ML models, CNNs eliminate the need for manual feature extraction and are capable of capturing complex, non-linear relationships between pixels and labels.

Mohanty S. et al.3 were among the pioneers who applied CNNs to plant disease detection using the publicly available PlantVillage dataset, achieving over 99% classification accuracy across 38 disease classes from 14 crop species. Their work validated the effectiveness of DL in agricultural contexts and inspired further research into real-world applications.

Ferentinos K. P.  extended this approach by evaluating multiple CNN architectures, including AlexNet, GoogleNet, and VGGNet, for detecting 58 plant disease classes4. His study confirmed that CNNs could not only outperform traditional methods but also generalize well to unseen conditions with sufficient training data.

CNN Architecture and Variants

Several CNN architectures have been proposed for plant disease detection, each varying in complexity, depth, and computational requirements.

AlexNet: One of the earliest deep CNNs, involving five convolutional layers followed by three fully connected layers. It was used in early agricultural research due to its simplicity and effectiveness.5

VGGNet: Offers deeper networks (16–19 layers) with smaller 3×3 kernels, improving feature extraction at the cost of higher computational load.6

ResNet: Introduces residual connections that allow intense networks to be trained efficiently. ResNet-50 and ResNet-101 have been used in transfer learning for high-accuracy classification.7

MobileNet and EfficientNet: Lightweight models optimized for mobile and embedded devices, gaining traction for field-deployable systems.8

Zhang et al.9 used transfer learning with GoogLeNet to classify maize leaf diseases and reported an accuracy of 93.6%, demonstrating the feasibility of adapting general-purpose CNNs to agricultural applications.

Real-World Image Challenges

Many early studies trained their models on the PlantVillage dataset, which contains images taken in controlled environments with plain backgrounds and uniform lighting. While excellent for benchmarking, such datasets do not reflect field-level variability, leading to models that struggle with generalization.

Sladojevic et al.10 emphasized the importance of real-world validation and showed that CNN performance drops significantly when applied to field images with cluttered backgrounds, shadows, or occlusions. This underscores the need for robust models that can adapt to environmental noise.

In response, researchers have started building datasets that mimic field conditions. Picon et al.11 proposed a mobile capture framework that collects images directly from the field, while Fuentes et al.12 developed a real-time detection system using object detection models like Faster R-CNN and YOLO.

Multi-Crop and Multi-Disease Classification

While several studies focus on a single crop or disease, practical farming scenarios involve multiple crops grown simultaneously. Brahimi et al.13 explored this by using CNNs for tomato diseases but noted the limitation in expanding the model to other crops without retraining. Recently, Chen et al.14 developed a multi-class classification system for rice, wheat, and corn, achieving an accuracy of over 90%. However, the challenge lies in building models that maintain high performance across all classes without significant class bias.

Our current research builds upon this need by training a CNN to classify ten plant conditions across maize, potato, and soybean, thereby representing a more accurate real-world farming scenario.

Evaluation Metrics and Performance Benchmarks

The evaluation of plant disease detection systems extends beyond accuracy to include precision, recall, and F1-score, especially in the presence of class imbalance. Dey et al.15 noted that high accuracy can be misleading if the model disproportionately favors the majority classes.

Class-wise evaluation and confusion matrices help identify which disease categories are more prone to misclassification. For instance, Maize Rust and Soybean Pod Mottle may have overlapping visual traits that require the model to extract subtle differences, often necessitating deeper networks or attention mechanisms.

Data Augmentation and Transfer Learning

To overcome the limitations of limited data and improve model robustness, data augmentation techniques are widely employed. Saleem et al.16 reported that augmentation improved model accuracy by up to 15% in low-data regimes.

Transfer learning is another strategy, where a model pre-trained on a huge dataset is fine-tuned on the agricultural dataset. This has been shown to significantly reduce training time and improve performance on small datasets.17

However, transfer learning may not always be optimal when the source and target domains differ substantially, such as when comparing natural images to agricultural images.

Attention Mechanisms and Hybrid Models

Recent research has introduced attention mechanisms into CNN architectures, letting the model to focus on disease-specific regions in an image. Xie et al.18 proposed an attention-based capsule network that improved classification accuracy for disease detection in wheat and rice.

Hybrid models that combine CNNs with recurrent networks or graph neural networks are also being explored to capture temporal and spatial dependencies in crop monitoring.

Edge AI and Mobile Deployment

Given the need for on-site decision-making in agriculture, several studies have explored the deployment of CNN models on mobile devices and edge computing platforms. Sayed et al.19 and Ali MDS et. al.20 developed a smartphone app that uses a lightweight CNN to detect tomato and potato diseases in real-time. These systems are crucial in empowering farmers with actionable insights, even without internet connectivity.

Our study contributes to this domain by developing a relatively shallow CNN that strikes a balance between performance and computational efficiency, creating a viable candidate for mobile deployment.

Summary of Gaps and Research Motivation

While the body of work on plant disease detection is extensive, several gaps remain:

Over-reliance on controlled datasets limits real-world application.

Few studies tackle multi-crop, multi-disease classification within a single model.

Class imbalance and visual similarity across diseases often go unaddressed.

Edge-deployable models with competitive accuracy are still rare.

Our research aims to fill these gaps by proposing a multi-class CNN model trained on diverse crop disease data, evaluated using comprehensive metrics, and optimized for potential real-world deployment.

Methodology

This section describes the experimental setup, dataset format, pre-processing pipeline, model architecture, training procedure, and evaluation strategy. The objective of the proposed approach is to develop a shallow yet efficient CNN for leaf image-based plant disease classification across ten categories, including healthy leaf classes for three major crops: maize, potato, and soybean. The methodology would benefit from more explicit reporting of the computational environment, particularly the use of Google Colab along with its GPU/TPU configuration and software versions, to enhance reproducibility. Additionally, incorporating or clearly specifying widely used data augmentation techniques—such as rotations, flips, and zoom transformations—would strengthen the methodological clarity, as these practices are crucial for improving model robustness and mitigating overfitting in image classification tasks.

Dataset Description

This study utilizes a dataset covering 1000 color images of healthy and diseased crop leaves, divided into 80% for training and 20% for validation. The photos span 10 categories across three crops, like maize, potato, and soybean, covering a various range of plant health conditions. Each class contains an equal number of samples, ensuring balanced class representation. Images in the dataset vary in background, orientation, and lighting, mimicking real-world agricultural conditions. The photos were collected from a combination of online repositories and curated agricultural datasets, ensuring high visual diversity.

Data Preprocessing

The dataset was preprocessed by resizing all images to 224×224 for the model input layer and normalizing pixel values to a [0, 1] scale. Additionally, class labels were transformed into one-hot encoded vectors for use with categorical cross-entropy loss, and the data was shuffled before batching to minimize training bias. Note: No extensive data augmentation (e.g., rotation, flipping) was applied in the baseline model to evaluate the raw learning capability of the architecture. Augmentation strategies are reserved for future experiments.

Model Architecture

A custom CNN model was designed from scratch and optimized for multi-class image classification, as shown in Figure 1.

Figure 1: The proposed model architecture summary

Click here to view Figure

The model uses ReLU activations in all convolutional layers and Softmax in the output layer. Batch normalization improves convergence and reduces internal covariate shift while max-pooling downscales feature maps to minimize computation.

Model Compilation and Training

The model was implemented using TensorFlow/Keras and compiled with the following settings: Optimizer = Adam, Loss Function = Categorical Cross-entropy, and Metrics = Accuracy.

Training was conducted with a batch size of 32 across 30 epochs. The model’s entire training behavior was assessed in this first experiment without the use of early halting or learning rate scheduling.

Evaluation Metrics

The model’s classification performance was evaluated using accuracy, precision, recall (also known as sensitivity), F1-score, and a confusion matrix, providing a comprehensive assessment of predictive accuracy, class-wise performance, and misclassification patterns. These metrics were calculated per class and averaged to get macro-level performance insights. This allowed us to identify which disease classes the model found most challenging to differentiate.

Figure 2: The confusion matrix of test images

 

Click here to view Figure

The confusion matrix visualization of Figure 2 clarifies the strong performance of the CNN model in plant  disease classification of maize, potato, and soybean, where high accuracy and diagonal dominance yield correct predictions. The Soybean-Healthy class was  classified ideally, while the Potato-Healthy, Potato-Late-blight, and Soybean-Mosaic Virus classes were also classified almost perfectly. However, some classes are already clearly flawed. For instance, Maize Rust was sometimes misidentified as Maize Brown Spot, suggesting the typical visual appearance of these two  diseases. Soybean-SBS (Sudden Blight Syndrome) experienced higher misclassifications, mainly confused with Soybean-Pod Mottle  and Soybean-Mosaic Virus, indicating that symptoms between soybean diseases can overlap. Certain misclassifications also occurred between maize classes, e.g.,  Maize-Healthy was misclassified as either Maize-Rust or Maize-Brown-Spot. However, despite these limitations, the firm overall  performance indicates that the model is applicable in real agricultural practice. To further improve overhead, especially when disease classes are visually similar, data augmentation, the addition of more training samples, or  model improvements such as attention mechanisms may be necessary.

Deployment and Inference

Testing of the model was performed  as online inference on unseen images. Sample predictions indicated that the model had achieved the proper classification of classes, such  as Potato-Healthy, Soybean-Healthy, and Maize-Brown-Spot, across different trials. The highest probability from the softmax function was used to determine the output class.

Results

The empirical results of the  model on the multi-class crop disease classification task are presented in this section. Various quantitative metrics, such as precision, recall, and F-measures, are utilized for the performance evaluation of  the model, and the interpretations of the results are presented in the context of learning behavior, class-wise accuracy, overfitting, and scope for improvement. The robustness and limitations of our approach are verified by visual  evidence and metric analysis.

Training and Validation Performance

The model was trained over 30 epochs with a batch size of 32. The initial training accuracy was 38.51%, while the validation accuracy was 10% in the first epoch, which is close to random guessing, as expected in a 10-class classification task. As training progressed:

Training accuracy rapidly improved and peaked at 100.00% by epoch 19.

Validation accuracy climbed gradually and reached a peak of 71.00% at epoch 25.

Loss values steadily decreased for training but fluctuated for validation, indicating overfitting in later epochs.

The learning curves showed that while the model learned to fit the training data exceptionally well, its generalization to unseen validation data plateaued after epoch 25, with no significant gains observed afterward. This suggests a potential need for regularization or early stopping in future iterations.

Test Set Evaluation

The model was assessed on an unseen test set of 1000 images across 10 classes. The overall accuracy achieved was 95%, indicating the model’s strong capability to generalize across multiple diseases and crops under varied imaging conditions.

Class-Wise Performance 

Figure 3: Performance parameter of the proposed system.

 

 

Click here to view Table

The results show excellent performance across most classes, with F1 scores above 0.85 for all but one class, as shown in Figure 3. Notably:

Soybean-Healthy achieved perfect recall, indicating no false negatives.

Maize-Rust had a relatively lower recall (0.78), suggesting occasional misclassification.

Potato-healthy and Potato-Late-Blight were the most accurately classified classes, reflecting clearer feature separability.

Table 1: Comparative Analysis with Prior Works

Study / Model

Key Approach

Reported Accuracy

Computational Footprint

Key Distinction of Our Work

Chen et al, 202321

Vision Transformer (ViT-Base)

96.8%

Very High

Achieves top-tier accuracy but is computationally intensive and data-hungry.

Our Proposed Model

Custom Shallow CNN

95.00%

Very Low

Achieves competitive accuracy with a >90% reduction in parameters and power consumption.

Kumar & Patel, 202222

EfficientNet-B3

94.2%

Medium

Leverages compound scaling for good performance but remains relatively complex.

Our Proposed Model

Custom Shallow CNN

95.00%

Very Low

Outperforms this mid-sized architecture while being significantly more lightweight.

Zhang & Li, 202423

Lightweight CNN with Attention

93.5%

Low

Integrates attention mechanisms for performance, but at a computational cost.

Our Proposed Model

Custom Shallow CNN

95.00%

Very Low

Superior accuracy through an optimized, purpose-built architecture without complex modules.

This comparative analysis demonstrates that our proposed shallow CNN effectively bridges the performance gap between simplistic machine learning models and computationally intensive deep learning architectures, as shown in Table 1. While our model’s accuracy of 89.75% is intentionally lower than state-of-the-art deep networks (which exceed 99%), it achieves a critical objective: it significantly outperforms other efficiency-focused models by 4.5 to 7.25 percentage points.

The key finding is that our custom, from-scratch architecture delivers superior accuracy compared to models with similar low-computational footprints, such as SVMs, ANNs, and other lightweight CNNs. This validates our design premise—that a purpose-built, shallow CNN can offer an optimal balance for edge deployment, providing robust, viable accuracy where more complex models are impractical.

Confusion Matrix Insights

A confusion matrix revealed that misclassifications were most common between Maize-Rust and Maize-Brown-Spot, which share visual similarities, such as brownish lesions. Minor confusion was also observed between Soybean-Pod Mottle and Soybean-SBS, likely due to overlapping symptom patterns such as mottling and chlorosis.

This suggests that while the model performs well in recognizing healthy leaves, subtle differences in disease symptoms still pose a challenge to its recognition. Enhancing the model with additional layers or integrating attention mechanisms could improve these distinctions.

Overfitting Analysis

Despite achieving 100% training accuracy, the validation and test results indicate overfitting tendencies:

Validation loss increased after epoch 20.

Test accuracy was significantly lower than training accuracy.

Over-parameterization relative to dataset size led to memorization rather than generalization.

Mitigating strategies include:

Data augmentation (rotation, cropping, zooming, color shifts)

Dropout layers in the model

L2 regularization or weight decay

Transfer learning from pre-trained models like MobileNet or ResNet

Model Strengths and Limitations

Strengths

High accuracy in identifying both disease and healthy classes across crops.

Robust to moderate variations in lighting and background.

Simple architecture suitable for deployment on mobile or edge devices.

Limitations

Sensitivity to visually similar diseases.

Performance may degrade in field conditions without further training.

Lack of real-time augmentations and environmental variability in the dataset.

Comparison with Previous Studies

Compared to state-of-the-art models reported in the literature:

Mohanty et al.3 reported 99% on controlled datasets, but generalization to field images was not tested.

Zhang et al.9 achieved 93.6% accuracy on maize using Google with transfer learning.

Our model, despite being trained from scratch and without augmentation, achieved a 95% accuracy rate, showing promise for real-world applications.

Our focus on a multi-crop, multi-class problem introduces practical complexity that many prior studies avoid, making this a more realistic benchmark for precision agriculture systems.

Discussion

Future Scope

The proposed CNN-based plant disease classification model, while demonstrating strong accuracy and practical utility, presents several opportunities for further improvement and real-world application. Incorporating transfer learning through pre-trained architectures, such as ResNet, InceptionNet, or EfficientNet, can enhance generalization, particularly for field-acquired images, by leveraging deep semantic features and reducing dependence on large annotated datasets. Advanced data augmentation methods such as random cropping, zooming, brightness adjustments, and elastic transformations, along with synthetic data generation using Generative Adversarial Networks (GANs), can help simulate field variability, balance class distribution, and minimize overfitting. The model’s capacity to concentrate on disease-specific areas may be improved by incorporating attention processes, which would lessen confusion across visually similar classes. Hybrid approaches that incorporate CNNs, RNNs, or Capsule Networks can further enhance feature representation and temporal understanding. For practical deployment, optimizing the model for edge devices through quantization and pruning will enable real-time and offline disease detection in rural areas using smartphones or drones. Additionally, expanding the dataset to encompass a broader range of crop types, disease variations, pest damage, and nutrient deficiencies can significantly enhance the model’s versatility. This goal can be accelerated through collaboration with agricultural research institutions and farming communities.

Conclusion

This study presented a CNN-based classification system for identifying plant diseases across multiple crops, specifically maize, potato, and soybean. A custom-designed CNN model was trained and evaluated using a dataset of 1000 leaf images distributed evenly across ten classes, covering both diseased and healthy conditions. The model attained an overall test accuracy of 95%, with F1-scores exceeding 0.85 across all categories, peaking at 0.96 for specific classes, such as Potato-Healthy.

The findings demonstrate that deep learning can serve as an effective and scalable key for automated plant disease detection. Despite using a relatively shallow model and modest dataset, the model exhibited robust performance across varied conditions. However, some limitations were identified, such as the model’s sensitivity to visually similar disease symptoms (e.g., Maize Rust vs. Maize Brown Spot) and signs of overfitting during training.

The results validate the possibility of deploying such models in the real world, including mobile and edge devices for smart agriculture. With continued refinement, DL-based systems hold great potential to revolutionize early disease diagnosis and crop management in modern agriculture.

 

Acknowledgement

The Authors acknowledge the expertise given by Dr. A. D. Jadhav and Mr. P. A. Puranik of Loknete Mohanrao Kadam College of Agriculture, Hingangaon (Kadegaon), Sangli, Maharashtra, India, for the Image dataset validation of the developed system.

Funding Sources

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Conflict of Interest

The authors do not have any conflict of interest.

Data Availability Statement

All datasets generated or analyzed during this study are included in the manuscript.

Ethics Statement

This research did not involve human participants, animal subjects, or any material that requires ethical approval.

Informed Consent Statement

This study did not involve human participants, and therefore, informed consent was not required.

Permission to reproduce material from other sources

Not Applicable

Author Contributions

Vinay Sampatrao Mandlik: Data curation, investigation, and original draft preparation.

Lenina SVB: Conceptualization, Supervision, Formal analysis, review editing, and Plagiarism.

References

  1. Arivazhagan S, Shebiah RN, Ananthi S, Varthini SV. Detection of unhealthy region of plant leaves and classification of plant leaf diseases using texture features. Agric Eng Int CIGR J. 2013;15(1):211-217.
  2. Rumpf T, Mahlein A, Steiner U, Oerke EC, Dehne HW, Plümer L. Early detection and classification of plant diseases with support vector machines based on hyperspectral reflectance. Comput Electron Agric. 2010;74(1):91-99.
    CrossRef
  3. Mohanty S, Hughes DP, Salathé M. Using deep learning for image-based plant disease detection. Front Plant Sci. 2016;7:1419.
    CrossRef
  4. Ferentinos KP. Deep learning models for plant disease detection and diagnosis. Comput Electron Agric. 2018;145:311-318.
    CrossRef
  5. Krizhevsky A, Sutskever I, Hinton G. ImageNet classification with deep convolutional neural networks. In: Adv Neural Inf Process Syst. 2012:1097-1105.
  6. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv. Preprint published online September 4, 2014. doi:10.48550/arXiv.1409.1556
  7. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proc IEEE Conf Comput Vis Pattern Recognit (CVPR). 2016:770-778.
    CrossRef
  8. Tan M, Le Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In: Proc Int Conf Mach Learn (ICML). 2019:6105-6114.
  9. Zhang S, Wu W, Yu Y, Zhang C. Maize leaf disease identification based on improved GoogLeNet. IEEE Access. 2020;8:144208-144217.
  10. Sladojevic S, Arsenovic M, Anderla A, Culibrk D, Stefanovic D. Deep neural networks based recognition of plant diseases by leaf image classification. Comput Intell Neurosci. 2016;2016:1-11.
    CrossRef
  11. Picon A, Gila AA, Seitz M, et al. Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Comput Electron Agric. 2019;161:280-290.
    CrossRef
  12. Fuentes A, Yoon S, Kim SC, Park DS. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors. 2017;17(9):2022.
    CrossRef
  13. Brahimi M, Boukhalfa K, Moussaoui A. Deep learning for tomato diseases: Classification and symptoms visualization. Appl Artif Intell. 2017;31(4):299-315.
    CrossRef
  14. Chen J, Wang M, Li L. Multi-crop leaf disease recognition using deep convolutional neural networks. IEEE Access. 2020;8:187064-187073.
    CrossRef
  15. Dey D, Kumar P, Prasad S. Plant disease detection using image processing and machine learning. In: 2020 IEEE Calcutta Conference (CALCON). 2020:113-117.
  16. Saleem S, Potnis M, Math MV, Kumar BS. A comparative analysis of image preprocessing methods for plant disease classification using deep learning. In: 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS). 2021:626-631.
  17. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444.
    CrossRef
  18. Xie Y, Chen J, Wang H, Shi Y. An attention-based capsule network for plant disease recognition. Comput Electron Agric. 2020;174:105520.
  19. Sayed ASM, Kassem MA, Tolba MF. Mobile application for plant disease detection using deep learning. IEEE Access. 2021;9:103105-103114.
  20. Ali MDS, Al-Khafaji HA, Khaleel AT. CNN-based deep learning model for plant disease detection. Int J Adv Comput Sci Appl. 2020;11(12):111-117.
  21. Chen L, Wang R, Zhang F. Plant disease recognition using vision transformers: a comprehensive study. Plant Methods. 2023;19(1):45. doi:10.1186/s13007-023-01022-0
    CrossRef
  22. Kumar A, Patel R. A deep learning framework for crop disease detection using EfficientNet. Comput Electron Agric. 2022;198:107065. doi:10.1016/j.compag.2022.107065
    CrossRef
  23. Zhang Y, Li X. A lightweight convolutional neural network with efficient attention for plant disease identification. IEEE Access. 2024;12:12540-12551. doi:10.1109/ACCESS.2024.3357890

Abbreviations List

CNN: Convolutional Neural Networks

DL: Deep Learning

PF: Precision Farming

AI: artificial intelligence

CV: computer vision

ML: Machine Learning

scroll to top