CONVOLUTIONAL NEURAL NETWORK-BASED APPROACH FOR CLASSIFYING FUSARIUM WILT DISEASE IN CHICKPEAS USING IMAGE ANALYSIS
Ahmad Ali AlZubi
Computer Science Department, Community College, King Saud University, Riyadh, Saudi Arabia
Corresponding author’s email: aalzubi@ksu.edu.sa https://orcid.org/0000-0001-8477-8319
ABSTRACT
Legume crops, particularly chickpeas, are highly nutritious and play a vital role in global food security. However, they are susceptible to various diseases, among which Fusarium wilt, caused by Fusarium oxysporum, leads to significant yield losses. Early detection of Fusarium wilt is essential for effective disease management. Traditional diagnostic methods are often labour-intensive and time-consuming. This study aims to classify Fusarium wilt in chickpeas using Deep Convolutional Neural Networks (DCNN). The dataset consists of 4,339 chickpea plant images obtained from Kaggle. The images are categorized into five classes based on disease severity: highly resistant (HR), resistant (R), moderately resistant (MR), susceptible (S), and highly susceptible (HS). The images were pre-processed, resized, normalized, and augmented to enhance model performance. The classification was performed using a SoftMax classifier. The DCNN was trained using the Adam optimizer and categorical cross-entropy as the loss function, with hyperparameters fine-tuned to optimize performance. The proposed model achieved an overall accuracy of 73.96%, with a training accuracy of 73.16% and a validation accuracy of 77.64% after 100 epochs. Performance metrics revealed the highest precision and recall for the highly susceptible (HS) class, while accuracy was lower for intermediate classes (R and MR). The confusion matrix highlighted areas where the model excelled and where further refinement is needed. The study demonstrates the potential of DCNNs for automated classification of Fusarium wilt in chickpeas, offering a practical tool for disease management. However, the model's limitations in intermediate classes underline the need for further improvements. Future work will focus on enhancing dataset diversity, refining preprocessing techniques, and exploring advanced architectures to improve classification accuracy across all severity levels. These findings contribute to the development of robust, automated solutions for managing plant diseases and supporting sustainable agriculture.
Keywords: Fusarium wilt, Chickpea, Deep Convolutional Neural Network (DCNN), Accuracy
INTRODUCTION
Chickpea (Cicer arietinum L.) is an important legume crop cultivated worldwide, contributing considerably to global food security and nutritional diversity. The chickpea plant tolerates drought, cold, and infertile soils (Bakir et al., 2021; Negussu et al., 2023). Its extensive root system, large number of root nodules, and superior capacity to fix nitrogen help conserve soil and water and promote ecological balance. In addition to being a significant source of plant-based protein, chickpeas are abundant in vitamins, minerals, dietary fibre, amino acids, and healthy unsaturated fatty acids (Fu et al., 2021). Chickpeas grow in semiarid, desert, temperate, and subtropical climates in at least 50 countries (Zhang et al., 2020). The crop originated in western Asia and the Near East; today it is most widely cultivated in the Americas, Asia, Africa, and the Mediterranean region (World Population Review, 2024; Zhang et al., 2024). India continues to lead, with production rising from 11.91 million tons in 2021 to 13.53 million tons in 2022 (FAO, 2023). Australia, the second-largest producer, saw an increase from 876,468 tons to 1.13 million tons, while Turkey's production grew from 475,000 tons to 580,000 tons. Ethiopia and Myanmar also recorded gains. The United States, however, saw production decline from 129,770 tons in 2021 to 101,000 tons in 2022.
Chickpeas are productive due to their adaptability to dry and semi-arid climates. They can grow with limited rainfall and require minimal water. Their ability to fix nitrogen in the soil reduces the need for fertilizers. Chickpeas also have a short growing season, which makes them suitable for crop rotation. Additionally, they are resistant to pests and diseases. Growing demand for plant-based proteins has further boosted their production. These factors together make chickpeas a highly productive crop (Chen et al., 2023).
Fusarium wilt, caused by Fusarium oxysporum f. sp. ciceris, is a damaging disease of chickpea (Erbay and Hayit, 2024). Chickpeas are susceptible to several pathogens, including fungi, bacteria, and viruses, which can reduce yield and quality. Ascochyta blight (Ascochyta rabiei) is a major fungal disease that affects leaves, stems, and pods, especially in wet conditions. Fusarium wilt (Fusarium oxysporum) damages the plant’s vascular system, causing wilting and yellowing in warmer conditions. Root rot caused by Rhizoctonia and Pythium species can lead to root decay in saturated soils. Bacterial diseases such as Xanthomonas blight and viral infections such as chickpea chlorotic dwarf virus also threaten chickpea crops. Effective management is crucial for minimizing damage (Rocha et al., 2023).
Hossain et al. (2019) proposed a method for diagnosing plant diseases using the K-nearest neighbour (KNN) algorithm. The KNN algorithm was used in the study to identify common plant diseases including early blight, bacterial blight, bacterial spot, and leaf spot of numerous plant species (Hossain et al., 2019). Gonçalves et al. (2021) used CNN models along with visual images to predict the severity of soybean rust, wheat tan spot, and coffee leaf miners. Hasan et al. (2020) suggested that combining CNN with LSTM improves image recognition, as CNN alone is less effective for feature extraction and recognition.
Although DCNN models have proven effective in plant disease classification, their application to Fusarium wilt severity in chickpeas remains limited. This study proposes that DCNNs can classify Fusarium wilt severity accurately. In this work, preprocessing techniques such as noise reduction, normalization, and data augmentation are used to improve model accuracy and generalization. Unlike previous methods, this study classifies images into five distinct severity levels. It also explores why the extreme classes (HR, HS) achieve higher accuracy than the intermediate classes (R, MR). Model performance is assessed using accuracy, precision, recall, and F1-score. Finally, the DCNN model is compared with existing approaches to highlight its effectiveness and novelty.
MATERIALS AND METHODS
Dataset: The dataset utilized in this work was collected from the Kaggle database to obtain the required images (healthy and diseased chickpea images) for experimentation (Hayit et al., 2023).
The images in the collection are separated according to their severity levels as follows:
1(HR): 0%–10% of the plant has wilted (Highly Resistant)
3(R): 11%–20% of the plant has wilted (Resistant)
5(MR): 21%–30% of the plant has wilted (Moderately Resistant/Tolerant)
7(S): 31%–50% of the plant has wilted (Susceptible)
9(HS): 51% or more of the plant has wilted (Highly Susceptible).

Figure 1. Categorization of Fusarium Wilt Disease in chickpea plants.
The dataset consists of 4,339 plant images, each categorized by the severity level of Fusarium wilt disease in chickpea plants (Figure 1). Specifically, there are 959 images representing highly resistant cases, 1,177 resistant, 1,133 moderately resistant/tolerant, 558 susceptible, and 512 highly susceptible. This dataset offers a detailed and varied representation of the different severity levels, providing a valuable resource for training and evaluating models to detect Fusarium wilt disease in chickpea plants.
Outlined Methodology: The proposed method proceeds as follows: image preprocessing (scaling, pixel normalization to [0, 1], and noise reduction), augmentation, feature extraction, model creation, and the subsequent phases of training, validation, and testing (Mohta et al., 2022). The flowchart for identifying Fusarium wilt disease in chickpeas is displayed in Figure 2.

Figure 2. Flowchart for Fusarium wilt disease detection in chickpea.
Image Preprocessing: Noise is expected because the images of infected chickpeas were captured at different harvesting sites and under different conditions. Noise can degrade image quality and reduce the model's prediction performance. Before an input image is passed to the CNN model, it therefore undergoes several preprocessing steps, including scaling, normalization, and noise filtering (Gonçalves et al., 2021; Yan et al., 2023). The main objective of these steps is to reduce the impact of noise and thereby improve the model's predictive ability.
The images were resized to 256 × 256 pixels, the resolution found to be most suitable (consistent with the input shape of the first convolutional layer reported below). Pixel values range from 0 to 255, and normalizing them to a 0–1 scale is important because large input values can slow down machine learning (ML) training. Noise filtering is crucial since images can be corrupted by noise from various sources; the type of noise present determines how it should be removed, and filtering techniques are applied accordingly.
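As an illustrative sketch of these steps, assuming OpenCV and a median filter as one plausible denoising choice (the paper does not name a specific filter), the preprocessing might look as follows:

```python
import cv2
import numpy as np

def preprocess_image(path, target_size=(256, 256)):
    """Load an image, suppress noise, resize, and scale pixels to [0, 1]."""
    img = cv2.imread(path)                 # BGR uint8 array
    img = cv2.medianBlur(img, 3)           # median filter to suppress impulse noise
    img = cv2.resize(img, target_size)     # match the network's 256 x 256 input
    return img.astype(np.float32) / 255.0  # normalize 0-255 to 0-1
```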
Augmentation is an important method for enlarging the dataset by making small changes to the images (Ayalew et al., 2022). This helps prevent the model from overfitting the training data and also allows us to test how well the model performs on rotated images. The dataset was split into 80% for training and 20% for testing, with a validation set used during training to monitor the model's performance and fine-tune its settings.
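A hedged sketch of the augmentation and split, using Keras utilities and a hypothetical chickpea_images/ directory with one subfolder per severity class; the specific transforms beyond rotation are assumptions:

```python
import tensorflow as tf

# Hypothetical layout: chickpea_images/<class_name>/*.jpg, one folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chickpea_images", validation_split=0.2, subset="training", seed=42,
    image_size=(256, 256), batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chickpea_images", validation_split=0.2, subset="validation", seed=42,
    image_size=(256, 256), batch_size=32, label_mode="categorical")

# Rotation-style augmentation, applied to the training stream only.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),  # up to +/-10% of a full turn
])
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```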
Feature Extraction Process: The CNN architecture automatically extracts significant features from the collected images, which aids in classifying them into the preset Fusarium wilt categories (1(HR), 3(R), 5(MR), 7(S), and 9(HS)).
The sequential model is structured with distinct layers for feature extraction. The first convolutional layer, labelled conv2d, applies 64 filters to the input, producing an output of shape (32, 256, 256, 64). This is followed by max pooling with a 2×2 pool size, yielding an output of shape (32, 128, 128, 64). The subsequent convolutional layers, conv2d_1 to conv2d_5, sequentially apply filters and max pooling, progressively extracting features while reducing the spatial dimensions. The final layers, dense and dense_1, transition to fully connected layers for classification (Figure 3).

Figure 3. Description of convolution and max pooling layers used in CNN model.
The flattened output (32, 2048) from the previous layers serves as input to the dense layers. The dense layer comprises 64 neurons, and the subsequent dense_1 layer has 5 neurons, corresponding to the number of classes. The model has 2,125,061 parameters in total, all of which are trainable. The architecture follows a systematic approach of feature extraction followed by classification, optimized for disease detection in chickpea crops. The Rectified Linear Unit (ReLU) activation function adds nonlinearity to the model; its computational efficiency relative to alternatives such as swish or tanh motivated its selection for training the DCNN via gradient descent (Ayalew et al., 2022) (Figure 4).

Figure 4. Detection of Fusarium wilt (1(HR), 3(R), 5(MR), 7(S), and 9(HS)) diseases in chickpea.
To provide the nonlinearity required for efficient training, the ReLU activation function outputs zero for negative inputs and passes non-negative inputs through unchanged. To improve the network's stability and reliability, dropout is applied after the convolutional and pooling layers. The final fully connected layer leading to the classifier follows the stacked convolutional and pooling layers, with a dropout rate of 0.5, i.e., the probability of dropping each neuron. As a regularization method, dropout reduces the model's sensitivity to particular network weights: during training, a random selection of activations is set to zero, which mitigates overfitting and improves precision.
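A minimal Keras sketch of this architecture follows. The text specifies the first layer's 64 filters, the six conv/pool blocks, the (32, 2048) flattened shape, the 0.5 dropout, the 64-neuron dense layer, and the five-class output; the kernel sizes, padding, and deeper-layer filter counts are assumptions, chosen here so the flattened output matches the reported 2048 units:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(256, 256, 3)),
    layers.Rescaling(1.0 / 255),             # pixel normalization to [0, 1]
    # Six conv + max-pool blocks. Filter counts after the first block are
    # assumptions, picked so Flatten() yields the reported 2048 units.
    layers.Conv2D(64, 3, padding="same", activation="relu"),   # conv2d
    layers.MaxPooling2D(2),                                    # -> 128x128x64
    layers.Conv2D(64, 3, padding="same", activation="relu"),   # conv2d_1
    layers.MaxPooling2D(2),                                    # -> 64x64x64
    layers.Conv2D(64, 3, padding="same", activation="relu"),   # conv2d_2
    layers.MaxPooling2D(2),                                    # -> 32x32x64
    layers.Conv2D(64, 3, padding="same", activation="relu"),   # conv2d_3
    layers.MaxPooling2D(2),                                    # -> 16x16x64
    layers.Conv2D(128, 3, padding="same", activation="relu"),  # conv2d_4
    layers.MaxPooling2D(2),                                    # -> 8x8x128
    layers.Conv2D(128, 3, padding="same", activation="relu"),  # conv2d_5
    layers.MaxPooling2D(2),                                    # -> 4x4x128
    layers.Dropout(0.5),                     # regularization after conv/pool stack
    layers.Flatten(),                        # 4 * 4 * 128 = 2048 units
    layers.Dense(64, activation="relu"),     # dense
    layers.Dense(5, activation="softmax"),   # dense_1: five severity classes
])
```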
Model training: After features are extracted from the input images by the CNN architecture, an array of labelled training images is used to train the model. The classifier then uses the extracted features to assign each image to a specific category.
Softmax: In the proposed model, a SoftMax classifier is used after the CNN layers to identify chickpea diseases. The SoftMax classifier estimates the likelihood that an input image belongs to a specific class. It produces output values between 0 and 1, and the total of all probabilities adds up to one (Ho and Wookey, 2019). Using SoftMax has several benefits, including fast training and prediction speeds and its straightforward way of defining output probabilities. It easily accepts the output from the last fully connected layer. Ultimately, SoftMax classifies chickpea images into five different categories related to Fusarium wilt.
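For intuition, the softmax operation converts the five raw outputs (logits) of the last dense layer into class probabilities that sum to one. A small NumPy illustration with made-up logits:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: shift, exponentiate, normalize to sum to 1."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.1, 0.3, 0.8, -0.5, -1.2])  # hypothetical raw scores
probs = softmax(logits)
print(probs, probs.sum())  # probabilities for HR, R, MR, S, HS; they sum to 1.0
```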
Hyperparameter Values (Optimization Algorithm, Epochs, Learning Rate, Loss Function, and Batch Size): Hyperparameter values are set outside the algorithm and fixed before training begins. There is no commonly accepted methodology for determining suitable hyperparameters in a specific case, so numerous experiments were performed to find them. The hyperparameters used during training are presented in Table 1. The proposed method is trained with the Adam optimizer, which updates weights via backpropagation of error to reduce the error rate. Owing to its efficacy, Adam is the most widely used optimization technique in deep learning studies. To minimize the loss function, it optimizes parameters and dynamically adjusts the model's weights, maintaining for every parameter an adaptive learning rate based on moving averages of the gradient and the squared gradient. This adaptive scheme improves optimization and supports effective model training.
Table 1. Description of hyperparameters used for the model.
Measurement parameter | Value
Number of convolution layers | Six
Max pooling layers | Six
Activation mechanism | ReLU
Flatten layer output shape | (32, 2048)
Epochs | 100
Batch size | 32
Learning rate | 0.0001
Dropout rate | 0.5
During the backpropagation training stage, the learning rate controls the magnitude of weight updates. The model performed better with a lower learning rate; a learning rate of 0.001 was used to balance training efficiency and model performance. The loss function, or cost function, measures how well the model meets its objectives (Ho and Wookey, 2019). Categorical Cross-Entropy (CCE) was chosen as the loss function; while alternatives such as Mean Squared Error (MSE) and Binary Cross-Entropy (BCE) exist, categorical cross-entropy is recommended for problems with more than two classes. The model was trained with a batch size of 32 images for 100 epochs.
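As a hedged illustration, the compilation and training configuration described here might look as follows in Keras (the prose reports a learning rate of 0.001 while Table 1 lists 0.0001; either value is passed the same way). The model, train_ds, and val_ds objects are those sketched earlier:

```python
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy",  # multi-class loss for five severity levels
    metrics=["accuracy"],
)

# The batch size (32) was fixed when the datasets were built above.
history = model.fit(train_ds, validation_data=val_ds, epochs=100)
```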
To evaluate the model's performance, three common metrics were used: accuracy, precision, and recall. A confusion matrix provides a simple way to show how well the classifier predicts each class.
Accuracy is calculated as the ratio of correctly identified samples to the total number of samples. It is given as

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
Precision gives a sense of how accurately each class is classified by the classifier. Mathematically, precision is represented as

Precision = TP / (TP + FP)
Recall expresses the ability to locate each relevant instance within the dataset. Mathematically, recall is written as

Recall = TP / (TP + FN)
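Under these definitions, the metrics can be computed from integer-encoded true and predicted labels; a small sketch using scikit-learn (y_true and y_pred are assumed to be arrays of class indices 0–4 for the test set):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score)

def evaluate(y_true, y_pred):
    """Print overall accuracy, per-class precision/recall, and the confusion matrix."""
    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred, average=None))  # one value per class
    print("recall   :", recall_score(y_true, y_pred, average=None))
    print(confusion_matrix(y_true, y_pred))
```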
RESULTS
The experiments employed the hyperparameters listed in Table 1, which included the use of categorical cross-entropy as the loss function, Adam optimizer as the optimization function, a batch size of 32, 100 training epochs, a learning rate of 0.001, and a dropout rate of 0.5 applied after dense and pooling layers. These parameters were fine-tuned to balance training time and model performance effectively.

Figure 5. Training and validation accuracy vs. epochs.
Figure 5 illustrates the training and validation accuracy over 100 epochs. The model achieved a validation accuracy of 77.64%, which exceeded the training accuracy of 73.16%. This discrepancy suggests that the model avoided overfitting and maintained a reasonable ability to generalize to unseen data. The training and validation losses, depicted in Figure 6, further support this observation. After 100 epochs, the training loss was reduced to 0.6050, while the validation loss was slightly higher at 0.6269. These values indicate that the model effectively minimized errors on both training and validation datasets without significant overfitting.

Figure 6. Training and validation loss vs. epochs.
The performance of the trained model was also evaluated by calculating confidence scores for each class. Confidence scores indicate how certain the model is about its predictions. Figure 7 displays a collection of actual and predicted images with their associated confidence scores. The results show that the model performed well in classifying Fusarium wilt severity into discrete categories. For example, the model predicted class 1 (HR) with a confidence score of 93.81%, demonstrating robustness in this category. However, the model showed reduced confidence for intermediate classes, such as class 3 (R) and class 5 (MR), with confidence scores of 61.88% and 64.82%, respectively.
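The confidence score reported for each image is, in effect, the maximum softmax probability of the prediction. A hedged sketch of how such scores might be extracted from the trained model:

```python
class_names = ["1(HR)", "3(R)", "5(MR)", "7(S)", "9(HS)"]

probs = model.predict(val_ds)           # (n_images, 5) softmax outputs
pred_idx = probs.argmax(axis=1)         # most likely class per image
confidence = 100.0 * probs.max(axis=1)  # its probability, as a percentage

for i in range(3):  # first three test images, for illustration
    print(f"predicted {class_names[pred_idx[i]]} with {confidence[i]:.2f}% confidence")
```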

Figure 7. Actual and predicted Fusarium wilt classes in chickpeas with their confidence scores.
The confusion matrix, shown in Figure 8, provides additional insight into the model’s performance across the five classes. It shows that the model made 85 correct predictions for class 1(HR) but misclassified 15 instances as class 3(R) and 8 as class 5(MR). Similarly, class 5(MR) had 119 correct predictions but some misclassifications into other classes. Class 9(HS) performed exceptionally well, with 41 correct predictions and only three misclassifications. These results highlight variation in the model’s ability to classify the different severity levels, with the extreme classes (HR and HS) performing better than the intermediate ones.

Figure 8. Confusion matrix for five classes of Fusarium wilt disease in chickpeas.
Table 2 presents the evaluation metrics, including precision, recall, and F1-score, for each class. The model achieved its best performance for class 9 (HS), with a precision of 89.13%, recall of 93.18%, and F1-score of 91.11%. For class 1 (HR), the precision was 73.91%, recall was 78.70%, and the F1-score was 76.23%. In contrast, intermediate classes such as class 3 (R) had lower performance metrics, with a recall of 57.25% and an F1-score of 64.38%. The overall accuracy of the model was 73.96%. The macro average for precision, recall, and F1-score was 75.33%, while the weighted average was slightly lower at 73.50%.
Table 2. Performance metrics of the model for Fusarium wilt severity levels in chickpea.
Class | Precision | Recall | F1-score | Support
1(HR) | 0.7391 | 0.7870 | 0.7623 | 108
3(R) | 0.7353 | 0.5725 | 0.6438 | 131
5(MR) | 0.7000 | 0.8380 | 0.7628 | 142
7(S) | 0.7447 | 0.6364 | 0.6863 | 55
9(HS) | 0.8913 | 0.9318 | 0.9111 | 44
Accuracy | | | 0.7396 | 480
Macro avg | 0.7621 | 0.7532 | 0.7533 | 480
Weighted avg | 0.7411 | 0.7396 | 0.7350 | 480
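Table 2 follows the layout of scikit-learn's classification_report; assuming that utility (or an equivalent computation) was used, the table can be reproduced from the test-set predictions as follows:

```python
from sklearn.metrics import classification_report

target_names = ["1(HR)", "3(R)", "5(MR)", "7(S)", "9(HS)"]
print(classification_report(y_true, y_pred, target_names=target_names, digits=4))
```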
DISCUSSION
The results demonstrate that the proposed DCNN model effectively classifies Fusarium wilt severity in chickpeas. However, performance varied significantly across the five classes. The model performed exceptionally well for extreme severity levels, such as Highly Resistant (HR) and Highly Susceptible (HS). These classes likely had more distinct features, making them easier for the model to differentiate. For instance, class 9 (HS) achieved the highest F1-score of 91.11%, indicating robust predictions. In contrast, intermediate classes like Resistant (R) and Moderately Resistant (MR) were more challenging for the model, as evidenced by their lower precision, recall, and F1-scores.
The variations in classification performance can be attributed to several factors. First, intermediate classes often exhibit overlapping features, which can confuse the model. For example, classes 3 (R) and 5 (MR) may share similar symptoms, such as partial discoloration or moderate leaf wilting, making it harder for the model to distinguish between them. Second, the augmented dataset, while useful for increasing sample size, may not fully capture the variability in real-world conditions, such as lighting, plant age, and environmental factors. Addressing these issues may require further refinement of preprocessing techniques and dataset quality.
The comparison of the proposed study with existing literature underscores the effectiveness of various machine learning models in detecting Fusarium diseases in plants. Hayit et al. (2023) employed a KNN-based approach for classifying Fusarium wilt in chickpeas using color and texture features, achieving an impressive accuracy of 94.5%. However, the reliance on handcrafted features may limit scalability and robustness under varying environmental and field conditions. Similarly, Belay et al. (2022) combined CNN and LSTM for feature extraction, achieving a high accuracy of 92.55% using a dataset of 8,391 images processed with noise filters and augmentation techniques. Milke et al. (2023) proposed a CNN model for coffee wilt detection, achieving accuracies of 98.1% (training) and 97.9% (testing) through careful tuning of hyperparameters and dataset preparation. Kaur et al. (2023) introduced a hybrid CNN-SVM model for Fusarium wilt classification, with accuracies ranging between 87.84% and 98%, indicating robust performance across disease classes.
In comparison, the proposed study used a DCNN model to classify Fusarium wilt severity in chickpeas, achieving an accuracy of 73.96%. While the DCNN model effectively identified highly susceptible cases, it encountered difficulties in accurately classifying intermediate severity levels (resistant and moderately resistant classes). The challenges faced by prior studies, such as reliance on handcrafted features (Hayit et al., 2023) or dataset limitations (e.g., class imbalance or image diversity), motivated the design of this study. By using a DCNN, the current study aimed to explore an end-to-end deep-learning solution for classifying disease severity. However, further improvements, such as enhancing dataset quality and exploring advanced model architectures, are required to improve performance, particularly in challenging classes.
Conclusion: The model achieved an overall accuracy of 73.96%, with strong precision, recall, and F1-scores for the extreme classes (HR and HS), though misclassifications in the intermediate classes indicate areas for improvement. The CNN-based model can serve as a reliable tool for disease detection in chickpeas, but further refinement is needed, particularly in addressing misclassification patterns. Future research will incorporate a broader range of diseases and improve the dataset. Hybrid machine learning techniques such as decision trees, SVM, and random forests will also be explored to enhance model performance.
Acknowledgement: The author thanks King Saud University for funding this work through the Researchers Supporting Project number (RSP2025R395), King Saud University, Riyadh, Saudi Arabia.
Funding Statement: This work was supported by the Researchers Supporting Project number (RSP2025R395), King Saud University, Riyadh, Saudi Arabia.
Author contributions: The author contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
Declaration of conflicts of interests: The author declares that he has no conflict of interest.
Data Availability Statement: Not applicable
Declarations: The author declares that all works are original and this manuscript has not been published in any other journal.
REFERENCES
- Ayalew, A.M., A.O. Salau, B.T. Abeje and B. Enyew (2022). Detection and classification of COVID-19 disease from X-ray images using convolutional neural networks and histogram of oriented gradients. Biomed. Signal Process. Control. 74: 103530. https://doi.org/10.1016/j.bspc.2022.103530
- Belay, A.J., A.O. Salau, M. Ashagrie and M.B. Haile (2022). Development of a chickpea disease detection and classification model using deep learning. Inform. Med. Unlocked. 31: 100970. https://doi.org/10.1016/j.imu.2022.100970
- Bakir, M., D. Sari, H. Sari, M. Waqas and R.M. Atif (2021). Chickpea wild relatives: potential hidden source for the development of climate-resilient chickpea varieties. In: Elsevier eBooks, pp. 269–297. https://doi.org/10.1016/b978-0-12-822137-2.00015-1.
- Chen, W., L.D. Porter and M.J. Wunsch (2023). Diseases of chickpea. In: The American Phytopathological Society eBooks, pp. 57–73. https://doi.org/10.1094/9780890546758.005.
- Erbay, H. and T. Hayit (2024). A vision transformer approach for Fusarium wilt of chickpea classification. Multimedia Tools Appl. https://doi.org/10.1007/s11042-024-20224-9.
- Food and Agriculture Organization of the United Nations (FAO) (2023). The Global Economy of Pulses. FAO, Rome. https://doi.org/10.4060/cc7724en [Accessed 22 November 2024].
- Fu, Y., Z. Li and Y. Liu (2021). Progress of research on chickpea resources and its isoflavones. Preserv. Process. 21(3): 130–135.
- Gonçalves, J.P., F.A. Pinto, D.M. Queiroz, F.M. Villar, J.G. Barbedo and E.M. Del Ponte (2021). Deep learning architectures for semantic segmentation and automatic estimation of the severity of foliar symptoms caused by diseases or pests. Biosyst. Eng. 210: 129–142. https://doi.org/10.1016/j.biosystemseng.2021.08.011
- Hasan, M.J., M.S. Alom, U.F. Dina and M.H. Moon (2020, June). Maize diseases image identification and classification by combining CNN with bi-directional long short-term memory model. In: Proc. 2020 IEEE Region 10 Symposium (TENSYMP), pp. 1804–1807. IEEE. https://doi.org/10.1109/TENSYMP50017.2020.9230796
- Hayit, T., A. Endes and F. Hayit (2023). KNN-based approach for the classification of Fusarium wilt disease in chickpea based on color and texture features. Eur. J. Plant Pathol. 1–17. https://doi.org/10.1007/s10658-023-02791-z
- Ho, Y. and S. Wookey. (2019). The real-world-weight cross-entropy loss function: Modeling the costs of mislabeling. IEEE Access. 8: 4806–4813. https://doi.org/10.1109/ACCESS.2019.2962617
- Hossain, E., M.F. Hossain and M.A. Rahaman (2019, February). A color and texture-based approach for the detection and classification of plant leaf disease using KNN classifier. In: Proc. 2019 Int. Conf. Electr. Comput. Commun. Eng. (ECCE), pp. 1–6. IEEE. https://doi.org/10.1109/ECACE.2019.8679247
- Kaur, A., V. Kukreja, M. Aeri, S. Tanwar and N. Mohd (2023). Nature’s secrets revealed: unraveling Fusarium wilt diseases through CNN and SVM. In: Proc. 4th IEEE Global Conf. Adv. Technol. (GCAT), pp. 1–7. https://doi.org/10.1109/GCAT59970.2023.10353538
- Milke, E.B., M.T. Gebiremariam and A.O. Salau. (2023). Development of a coffee wilt disease identification model using deep learning. Inform. Med. Unlocked. 42: 101344. https://doi.org/10.1016/j.imu.2023.101344
- Mohta, A., I. Gupta, R. Gajjar and M.I. Pate (2022). CNN-based leaf wilting classification using modified RESNET152. In: Lecture Notes in Electrical Engineering, pp. 239–248. https://doi.org/10.1007/978-981-19-6737-5_20
- Negussu, M., E. Karalija, C. Vergata, M. Buti, M. Subašić, S. Pollastri, F. Loreto and F. Martinelli (2023). Drought tolerance mechanisms in chickpea (Cicer arietinum L.) investigated by physiological and transcriptomic analysis. Environ. Exp. Bot. 215: 105488. https://doi.org/10.1016/j.envexpbot.2023.105488
- Rocha, F.S., M. Sharma, A. Tarafdar, W. Chen, D.M.Q. Azevedo, P. Castillo, C.A. Costa and D.R. Chobe. (2023). Diseases of chickpea. In: Handbook of Plant Disease Management, pp. 1–44. https://doi.org/10.1007/978-3-030-35512-8_26-1
- World Population Review. (2024). Chickpea production by country. World Population Review. Retrieved November 27, 2024, from https://worldpopulationreview.com/country-rankings/chickpea-production-by-country
- Yan, K., M.K.C. Shisher and Y. Sun (2023). A transfer learning-based deep convolutional neural network for detection of Fusarium wilt in banana crops. AgriEng. 5(4): 2381–2394. https://doi.org/10.3390/agriengineering5040146
- Zhang, J., W. Chen, Y. Shang, C. Guo, S. Peng and W. Chen (2020). Biogeographic distribution of chickpea rhizobia in the world. In: Molecular Aspects of Plant Beneficial Microbes in Agriculture, pp. 235–239. Academic Press.
- Zhang, J., J. Wang, C. Zhu, R.P. Singh and W. Chen (2024). Chickpea: its origin, distribution, nutrition, benefits, breeding and symbiotic relationship with Mesorhizobium species. Plants. 13(3): 429. https://doi.org/10.3390/plants13030429