
An Improved Data Compression Framework for Wireless Sensor Networks Using Stacked Convolutional Autoencoder (S-CAE)

  • Original Research
SN Computer Science

Abstract

Data compression is crucial in wireless sensor networks (WSNs) because the energy available to sensor nodes is limited; reducing the amount of data received and transmitted extends node lifetime considerably. We introduce a stacked convolutional autoencoder (S-CAE) model for compressing sensor data. The model consists of two parts, an encoder layer and a decoder layer: the encoder layer compresses the sensor data, and the decoder layer decompresses and reconstructs it. Both layers are built from four convolutional restricted Boltzmann machines (RBMs). This work further reduces energy consumption by reducing the model's parameters, which in turn lowers the energy spent on computation and storage. The model's effectiveness is evaluated on the Intel Lab dataset. At a compression ratio of 10, the average temperature reconstruction error is 0.312 °C and the average percentage RMS difference (PRD) is 9.84%; hence, the energy consumed by node communication in WSNs can be reduced by about 92%. Compared with the previous method, the proposed model attains higher compression efficiency and reconstruction precision at the same compression ratio.
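
The full text is paywalled, but the abstract describes the architecture in enough detail for an illustrative sketch. The snippet below is a minimal, hypothetical PyTorch rendering of a four-layer convolutional encoder/decoder pair for windows of scalar sensor readings, together with the percentage RMS difference (PRD) metric quoted above. It is not the authors' model: the paper pretrains its layers as convolutional RBMs, whereas this sketch trains ordinary Conv1d layers end-to-end, and the window length, channel counts, and strides are assumptions chosen only to reproduce a 10:1 compression ratio.

```python
# Minimal sketch (not the authors' exact S-CAE): a stacked 1-D convolutional
# autoencoder for windows of sensor readings, plus the PRD metric.
# Assumptions: PyTorch, a 100-sample window, and layer sizes chosen so the
# latent code holds 10 values, i.e. a 10:1 compression ratio.
import torch
import torch.nn as nn

class SensorSCAE(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Encoder: four stacked convolutional layers; runs on the sensor node.
        # 100 samples -> 20 -> 10 -> 10 -> 10 latent values (one channel).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=5),            nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(8, 4, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(4, 1, kernel_size=3, stride=1, padding=1),
        )
        # Decoder: mirrors the encoder with transposed convolutions; runs at the sink.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(1, 4, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(4, 8, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 8, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=5, stride=5),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def prd(x: torch.Tensor, x_hat: torch.Tensor) -> torch.Tensor:
    """Percentage RMS difference between original and reconstructed windows."""
    return 100.0 * torch.sqrt(torch.sum((x - x_hat) ** 2) / torch.sum(x ** 2))

# Usage: the node would transmit only the 10-value latent code per window.
model = SensorSCAE()
window = torch.randn(1, 1, 100)   # one window of 100 temperature samples
code = model.encoder(window)      # shape (1, 1, 10): 10:1 compression
recon = model.decoder(code)       # shape (1, 1, 100)
print(prd(window, recon))         # reconstruction quality metric
```

In this sketch the energy saving comes from the same mechanism the abstract describes: the node runs only the small encoder and transmits a code one tenth the size of the raw window, while reconstruction happens at the sink.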



Funding

No funding was received for this research.

Author information

Corresponding author

Correspondence to Lithin Kumble.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Advances in Computational Approaches for Image Processing, Wireless Networks, Cloud Applications and Network Security” guest edited by P. Raviraj, Maode Ma and Roopashree H R.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kumble, L., Patil, K.K. An Improved Data Compression Framework for Wireless Sensor Networks Using Stacked Convolutional Autoencoder (S-CAE). SN COMPUT. SCI. 4, 419 (2023). https://doi.org/10.1007/s42979-023-01845-7

