DS_FusionNet: Dynamic Dual-Stream Fusion with Bidirectional Knowledge Distillation for Plant Disease Recognition
DOI: https://doi.org/10.54097/mj2fea78

Keywords: ConvNeXt, Plant disease recognition, Dynamic dual-stream fusion network, Bidirectional knowledge distillation

Abstract
Given the severe challenges confronting the production security of economic crops worldwide, the precise identification and prevention of plant diseases have emerged as a critical issue in artificial intelligence-enabled agricultural technology. To address the technical challenges of plant disease recognition, including small-sample learning, leaf occlusion, illumination variation, and high inter-class similarity, this study proposes a Dynamic Dual-Stream Fusion Network (DS_FusionNet). The network integrates a dual-backbone architecture, deformable dynamic fusion modules, and a bidirectional knowledge distillation strategy, significantly improving recognition accuracy. Experimental results show that DS_FusionNet achieves classification accuracies above 90% using only 10% of the PlantDisease and CIFAR-10 datasets, and maintains 85% accuracy on the more challenging PlantWild dataset, demonstrating strong generalization. This research not only offers new technical insights for fine-grained image classification but also lays a solid foundation for the precise identification and management of agricultural diseases.
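The abstract does not specify how the bidirectional knowledge distillation between the two backbone streams is formulated. A common choice for mutual (two-way) distillation is a symmetric, temperature-scaled KL-divergence term in which each stream's softened predictions supervise the other. The sketch below illustrates that generic formulation only; the function names, temperature value, and equal weighting are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q), summed over classes, averaged over the batch."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def bidirectional_kd_loss(logits_a, logits_b, T=4.0):
    """Symmetric distillation loss: each stream teaches the other.

    The T**2 factor is the usual gradient-scale correction for
    temperature-softened targets (Hinton et al.-style distillation).
    """
    pa, pb = softmax(logits_a, T), softmax(logits_b, T)
    return (T ** 2) * 0.5 * (kl_div(pa, pb) + kl_div(pb, pa))
```

In training, a loss of this form would typically be added to the standard cross-entropy of each stream, so the two backbones converge toward mutually consistent predictions while still fitting the labels.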
License
Copyright (c) 2025 Highlights in Science, Engineering and Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.