Abstract
Convolutional neural networks (CNNs) are a promising tool for solving real-world problems. However, successful CNNs often require a large number of parameters, which leads to significant memory usage and higher computational cost, and may produce undesirable phenomena, notably overfitting. Indeed, many kernels in a CNN are usually redundant and can be eliminated from the network while preserving its performance. In this work, we propose a new optimization model for kernel redundancy reduction in CNNs, named KRR-CNN. It consists of two phases: minimization and optimization. In the first, a dataset is used to train a specific CNN, yielding a learned network with optimal parameters. These parameters are then combined with a decision optimization model to remove kernels that did not contribute to the learning task. The optimization phase is carried out by an evolutionary genetic algorithm. The efficiency of KRR-CNN has been demonstrated by several experiments: the suggested model reduces kernel redundancy while achieving classification performance comparable to state-of-the-art CNNs.
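The pipeline described above — train a CNN, then let a genetic algorithm decide which kernels to keep — can be sketched in miniature as follows. This is a hypothetical illustration, not the paper's implementation: `kernel_scores` stands in for each kernel's measured contribution to the learned task (which in KRR-CNN would come from the trained network), and the fitness function, operators, and hyperparameters are assumptions chosen only to show the shape of the optimization phase.

```python
import random

def fitness(mask, kernel_scores, sparsity_weight=0.1):
    """Reward the contribution of kept kernels, penalize how many are kept."""
    contribution = sum(s for m, s in zip(mask, kernel_scores) if m)
    return contribution - sparsity_weight * sum(mask)

def crossover(a, b):
    """One-point crossover of two binary masks."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(mask, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in mask]

def genetic_prune(kernel_scores, pop_size=20, generations=50, seed=0):
    """Evolve a binary keep/prune mask over a layer's kernels."""
    random.seed(seed)
    n = len(kernel_scores)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: the best half survives unchanged.
        pop.sort(key=lambda m: fitness(m, kernel_scores), reverse=True)
        elite = pop[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=lambda m: fitness(m, kernel_scores))
```

On a toy layer where three kernels contribute strongly and five are nearly redundant, e.g. `genetic_prune([0.9, 0.8, 0.7, 0.05, 0.02, 0.01, 0.03, 0.04])`, the evolved mask keeps the contributing kernels and prunes the rest; in the real model the pruned network would then be fine-tuned and evaluated.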
Cite this article
Hssayni, E.h., Joudar, NE. & Ettaouil, M. KRR-CNN: kernels redundancy reduction in convolutional neural networks. Neural Comput & Applic 34, 2443–2454 (2022). https://doi.org/10.1007/s00521-021-06540-3