Optimizing plant disease classification with hybrid convolutional neural network–recurrent neural network and liquid time-constant network
Supplementary material
Authors
Le, An Thanh
Shakiba, Masoud
Ardekani, Iman
Abdulla, W.H.
Date
2024-10-09
Type
Journal Article
Ngā Upoko Tukutuku (Māori subject headings)
Keyword
tomatoes
plant disease
Region-based Convolutional Neural Network (RCNN)
Convolutional Neural Network (CNN)
pattern recognition systems in agriculture
unsupervised image classification
computer vision in agriculture
Internet of Things (IoT)
Citation
Le, A. T., Shakiba, M., Ardekani, I., & Abdulla, W. (2024). Optimizing plant disease classification with hybrid convolutional neural network–recurrent neural network and liquid time-constant network. Applied Sciences, Special Issue in Multimedia Signal Processing: Theory, Methods, and Applications, 14(19), 9118. https://doi.org/10.3390/app14199118
Abstract
This paper addresses the practical challenge of detecting tomato plant diseases using a hybrid lightweight model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Traditional image classification models demand substantial computational resources, limiting their practicality. This study aimed to develop a model that can be easily implemented on low-cost IoT devices while maintaining high accuracy on real-world images. The methodology leverages a CNN for extracting high-level image features and an RNN for capturing temporal relationships, thereby enhancing model performance. The proposed model incorporates a Closed-form Continuous-time Neural Network, a lightweight variant of liquid time-constant networks, and integrates a Neural Circuit Policy to capture long-term dependencies in image patterns, reducing overfitting. Augmentation techniques such as random rotation and brightness adjustments were applied to the training data to improve generalization. The results demonstrate that the hybrid models outperform their single pre-trained CNN counterparts in both accuracy and computational cost, achieving 97.15% accuracy on the test set with the proposed model, compared to around 94% for state-of-the-art pre-trained models. This study provides evidence of the effectiveness of hybrid CNN-RNN models in improving accuracy without increasing computational cost and highlights the potential of liquid neural networks in such applications.
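To make the abstract's core idea concrete, the sketch below illustrates (in NumPy, as a minimal assumption-laden toy, not the authors' implementation) how a Closed-form Continuous-time (CfC) cell can process a sequence of CNN-style feature vectors. It follows the published closed-form update x(t) = σ(−f·t)·g + (1 − σ(−f·t))·h, where f, g, and h are learned maps of the input and hidden state; the feature dimensions, weight initialization, and omission of biases are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CfCCell:
    """Toy Closed-form Continuous-time cell (biases omitted for brevity)."""

    def __init__(self, input_dim, hidden_dim):
        scale = 1.0 / np.sqrt(input_dim + hidden_dim)
        # One weight matrix per head: f (time constant), g and h (states).
        self.Wf = rng.normal(0.0, scale, (input_dim + hidden_dim, hidden_dim))
        self.Wg = rng.normal(0.0, scale, (input_dim + hidden_dim, hidden_dim))
        self.Wh = rng.normal(0.0, scale, (input_dim + hidden_dim, hidden_dim))

    def step(self, x, h_prev, t=1.0):
        z = np.concatenate([x, h_prev])
        f = z @ self.Wf               # time-constant head
        g = np.tanh(z @ self.Wg)      # target-state head
        h = np.tanh(z @ self.Wh)      # alternative-state head
        gate = sigmoid(-f * t)        # closed-form interpolation gate
        return gate * g + (1.0 - gate) * h

# Feed a sequence of hypothetical CNN feature vectors through the cell,
# as a hybrid CNN-RNN would: the CNN encodes, the recurrent cell aggregates.
features = rng.normal(size=(8, 16))   # 8 feature vectors, 16-dim each
cell = CfCCell(input_dim=16, hidden_dim=32)
state = np.zeros(32)
for x in features:
    state = cell.step(x, state)
print(state.shape)  # (32,)
```

Because the gate lies in (0, 1) and both state heads use tanh, the hidden state stays bounded in [−1, 1], which reflects the stability property that makes liquid-network variants attractive for small devices.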
Publisher
MDPI (Multidisciplinary Digital Publishing Institute)
DOI
https://doi.org/10.3390/app14199118
Copyright holder
© 2024 by the authors. Licensee MDPI, Basel, Switzerland
Copyright notice
CC BY Attribution 4.0 International
