Abstract
6G networks are anticipated to provide a wider range of capabilities than previous generations, accommodating applications beyond existing mobile apps, including virtual and augmented reality, artificial intelligence (AI), and the Internet of Things (IoT). IoT data analytics is complex and requires additional supporting processes to achieve high accuracy. The raw data generated by IoT systems is uncertain and cannot be processed directly because of variations, outliers, missing elements, erroneous and unconditioned data flows, mismatched data types, and undefined data sizes. The massive amounts of data generated by 6G-enabled IoT systems make AI and machine learning (ML) crucial for reconfiguring and improving their performance. Recent research therefore focuses on pre-processing to improve data quality and on learning models for prediction, which increases the computational and time complexity of the overall pipeline. To address this major problem, this research applies Bayes' theorem to predict hypothesis-based data to be forecasted and streamed from one place to another, with lower complexity and shorter processing time. This research uses a hybrid deep learning strategy to improve the streaming efficiency of 6G-enabled massive IoT systems. The proposed EDA and Bayes' theorem approach is implemented in Python, and the results are verified. The results are compared with those of similar methods to evaluate prediction accuracy and streaming efficiency. The overall experiments show that the data streaming process using the EDA-Bayes' theorem approach outperforms the other methods with high efficiency.
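The abstract describes predicting hypothesis-based data with Bayes' theorem before streaming. A minimal sketch of that update rule in Python is shown below; the probabilities and the "sensor reading is valid" hypothesis are illustrative assumptions, not values or code from the paper.

```python
# Bayes' theorem: posterior P(H|D) = P(D|H) * P(H) / P(D).
# Illustrative sketch only; the hypothesis and probabilities are assumed.

def bayes_posterior(prior: float, likelihood: float, evidence: float) -> float:
    """Return P(H|D) given prior P(H), likelihood P(D|H), and evidence P(D)."""
    if evidence == 0:
        raise ValueError("P(D) must be non-zero")
    return likelihood * prior / evidence

# Hypothesis H: "the incoming sensor reading is valid".
prior = 0.9                                # assumed P(H)
likelihood = 0.8                           # assumed P(D | H)
evidence = 0.8 * 0.9 + 0.3 * (1 - 0.9)     # total probability P(D)
posterior = bayes_posterior(prior, likelihood, evidence)
print(f"P(valid | data) = {posterior:.3f}")
```

Readings whose posterior falls below a chosen threshold could then be filtered out before streaming, which is one plausible way such a hypothesis test reduces downstream processing load.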
Data Availability
Data sharing does not apply to this article as no datasets were generated or analyzed during the current study.
Funding
No funds, grants, or other support were received.
Author information
Authors and Affiliations
Contributions
KK, SL, and SS: Conceptualization, Methodology, Investigation, Data Validation, and Writing—Original draft preparation; VG, PP and MB: Visualization, Data Curation, Reviewing and Editing.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no competing interests or financial interests related to this work.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Kaliaperumal, K., Lakshmisridevi, S., Shargunam, S. et al. Hybrid Deep Learning Model for Enhancing the Streaming Efficiency of 6G Enabled Massive IoT Systems. Wireless Pers Commun (2024). https://doi.org/10.1007/s11277-024-11249-2