1 Introduction

Internet of Things (IoT) principles can improve patients' health and welfare by increasing the availability and quality of care while significantly reducing treatment expenses and frequent travel [1]. The Internet of Medical Things (IoMT) is a digital healthcare system that connects patients to medical resources and services [2]. Wireless sensor networks are a more pervasive and easier-to-use enabling technology for health monitoring than current wired systems [3]. Patients can use smart wearable devices and smartphone sensors to gather data about their health status, such as heart rate, glucose level, and blood pressure [4]. Cloud servers analyse and process these data, and cloud computing is the most practical approach for connecting IoT with healthcare [5]. Patient data may be used not only to monitor a patient's present health but also to forecast future medical concerns using cloud big-data storage and machine learning techniques [6]. However, a patient's physical condition changes over time, demanding quick action when monitoring remote patients, and the cloud alone cannot handle such real-time applications or meet quality-of-service (QoS) requirements. A system is therefore needed that can continually and quickly report on the patient's condition [7].

Fog computing was introduced in healthcare applications to bridge the gap between IoT devices and analytics [8]. Fog computing is a distributed computing platform for managing applications and services at the network edge [9]. The probability of error and the delay increase as the volume of data transmitted over the network grows: data packet loss and transmission latency are directly proportional to the amount of data transported by IoT devices to the cloud. The edge or fog paradigm overcomes problems such as latency by placing small servers, known as edge servers, in close proximity to end-user devices [10]. A fog-based IoT system comprises three layers: device, fog, and cloud. Fog computing has been hailed as a promising paradigm for lowering networking-infrastructure and processing energy consumption while offering cloud-like health-monitoring services [11]. The number of fog-based applications is expanding and is expected to outgrow that of pure IoT applications in the near future [12]. IoT technology in healthcare can enhance both the quality and the affordability of medical treatment by automating formerly manual activities [13].

Fog makes storage and processing capabilities more accessible to end-users and can capture, analyse, and store massive amounts of data in real time [14]. Because medical sensors collect data frequently, real-time analysis performance can be enhanced, enabling intelligent data analysis and decision-making based on end-user rules and network resources [15, 16].

The following are the main contributions of this work:

A fog computing layer that uses deep learning sigmoid-based neural network clustering and score-based scheduling to calculate an entropy value for each fog node, thereby improving the quality of service of the fog-based architecture.

The remainder of the manuscript is organized as follows: Sect. 2 reviews the existing literature related to the proposed strategy, Sect. 3 provides a brief overview of the proposed system, Sect. 4 explores the experimental findings, and Sect. 5 concludes the article.

2 Related works

The quality of service is determined by resource allocation and load balancing in cloud/fog computing. Fog-based architectures have been proposed by many researchers for a variety of applications. Table 1 presents an overview of existing Fog literature surveys relevant to our work.

Table 1 Summary of existing techniques in Fog computing

Support for real-time applications is a major reason for the emergence of the fog computing architecture. Several QoS metrics must be considered for the successful development of a fog-based system, including latency, bandwidth, energy-consumption reduction, and cost minimization.

3 Proposed methodology

The three tiers of computing are cloud computing, fog computing, and sensors, which all communicate with one another. The primary purpose of the proposed technique is to present a three-tier architecture for context- and latency-sensitive monitoring systems. In this paper, we propose that fog computing can be used to assist in the monitoring of patients' healthcare data, ensuring that data are gathered and evaluated efficiently. Sensors first collect data from patients; both external and internal data are recorded by these sensors, whose role is to gather and transmit all data to the fog computing layer. Fog computing then uses Deep Learning Sigmoid-based Neural Network Clustering and Score-based Scheduling to obtain an entropy value for each fog node. This layer analyses the data and information collected by the edge devices and functions similarly to a server. In addition, the cloud-computing tier constantly checks the health monitoring system, as shown in Fig. 1.
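As a rough illustration of this three-tier flow, a minimal Python sketch is given below; all class and function names are hypothetical and only sketch the interaction, not the actual implementation.

from dataclasses import dataclass
from typing import List, Dict

@dataclass
class SensorReading:
    patient_id: str
    heart_rate: float       # beats per minute
    glucose: float          # mg/dL
    blood_pressure: float   # systolic, mmHg

def sensor_tier(patients: List[str]) -> List[SensorReading]:
    """Tier 1: wearable sensors collect vital signs from each patient."""
    # In a real deployment these values would come from the device drivers.
    return [SensorReading(p, 72.0, 95.0, 120.0) for p in patients]

def fog_tier(readings: List[SensorReading]) -> List[Dict]:
    """Tier 2: fog nodes analyse and cluster the readings close to the patients."""
    return [{"patient": r.patient_id, "alert": r.heart_rate > 100} for r in readings]

def cloud_tier(analysed: List[Dict]) -> None:
    """Tier 3: the cloud stores the results for long-term monitoring."""
    for record in analysed:
        print("archived:", record)

cloud_tier(fog_tier(sensor_tier(["p1", "p2"])))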

Fig. 1 Proposed methodology to improve quality of service in healthcare system

To resolve jobs in a more qualified manner, or to apply a range of strategies to reach a better result, the neural network must continually learn: when it receives new information from the system, it learns how to respond to the new circumstance. A deep neural network is a type of machine learning model in which the system uses numerous layers of nodes to extract high-level features from input data, converting numerical data into progressively more abstract representations. Convolution, sigmoid-based normalisation, pooling, and a fully connected layer are the proposed DLSNN layers that address the shortcomings of a standard CNN. Figure 2 depicts the deep learning sigmoid neural network clustering topology.

Fig. 2 Architecture of the DLSNN clustering

3.1 Deep learning sigmoid neural network clustering (DLSNNC)

A sigmoid function is a mathematical function with a distinctive "S"-shaped curve, sometimes known as a sigmoid curve. Equation (1) represents the sigmoid function,

$$f(sig) = \frac{1}{{1 + E_{y}^{i} \,}},$$
(1)

where sig is the input and f(sig) is the output. The output of the sigmoid function is used in DLSNN normalisation. Entropy (E) is the measure of randomness used to describe the texture of the input fog-node data. The entropy of the ith data item, \(E_{y}^{i}\), is calculated using Eq. (2):

$$E_{y}^{i} \, = \,\sum\limits_{u = 0}^{m - 1} {\sum\limits_{v = 0}^{m - 1} {P(u,v)( - \log_{2} (P(u,v)))} } \,,$$
(2)

where \(u\) and \(v\) are the indices of the co-occurrence matrix of the enhanced node, \(P(u,v)\) is the component of the co-occurrence matrix at coordinates \((u,v)\), and \(m\) is the dimension of the co-occurrence matrix.
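A minimal sketch of Eqs. (1) and (2), assuming the fog-node data has already been summarised as a co-occurrence matrix P; the matrix is normalised here so that its entries form a probability distribution, an assumption the text does not state explicitly.

import numpy as np

def entropy(P: np.ndarray) -> float:
    """Entropy of an m x m co-occurrence matrix P, as in Eq. (2)."""
    P = P / P.sum()            # assumed: normalise entries to a distribution
    nz = P[P > 0]              # skip zero entries to avoid log2(0)
    return float(np.sum(nz * (-np.log2(nz))))

def sigmoid_score(E: float) -> float:
    """Sigmoid-style score of Eq. (1): f(sig) = 1 / (1 + E)."""
    return 1.0 / (1.0 + E)

P = np.array([[2.0, 1.0], [1.0, 4.0]])   # illustrative co-occurrence matrix
E = entropy(P)
print(E, sigmoid_score(E))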


The weights and biases of the preceding layers in the structure are used by the DLSNNC classifier to reach a conclusion. The model is then updated with Eqs. (3) and (4) for each layer independently.

$$\Delta W_{n} = - \frac{x\lambda }{r}W_{n} - \frac{x}{N_{t}}\frac{\partial C}{\partial W_{n}} + m\Delta W_{n} (t),$$
(3)
$$\Delta B_{n} = - \,\,\frac{x}{n}\frac{\partial C}{{\partial B_{n} }} + m\Delta B_{n} (t),$$
(4)

where \(W_{n}\) denotes the weight, \(B_{n}\) the bias, \(n\) the layer number, \(\lambda\) the regularization parameter, \(x\) the learning rate, \(N_{t}\) the total number of sensor data sets, \(m\) the momentum, \(t\) the updating phase, and \(C\) the cost function.
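A hedged sketch of the layer-wise update of Eqs. (3) and (4) follows; the gradients are assumed to come from back-propagation, and the divisor r in Eq. (3) is not defined in the text, so a batch size is assumed here purely for illustration.

def update_layer(W, B, dC_dW, dC_dB, dW_prev, dB_prev, n,
                 x=0.01, lam=1e-4, m=0.9, N_t=1000, r=32):
    """One layer-wise parameter update following Eqs. (3) and (4).

    W, B             : weights and bias of layer n
    dC_dW, dC_dB     : gradients of the cost C with respect to W and B
    dW_prev, dB_prev : previous updates (the momentum terms)
    x = learning rate, lam = regularisation parameter, m = momentum,
    N_t = total number of sensor data sets, n = layer number,
    r = divisor of the weight-decay term (assumed here to be the batch size).
    """
    dW = -(x * lam / r) * W - (x / N_t) * dC_dW + m * dW_prev   # Eq. (3)
    dB = -(x / n) * dC_dB + m * dB_prev                         # Eq. (4)
    return W + dW, B + dB, dW, dB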

The DLSNN cluster contains the following kinds of layers:

Step 1: Convolutional layer: this layer performs the convolution of the input data with the kernel using Eq. (5):

$$C_{k} = \sum\limits_{n = 0}^{M - 1} {y_{n} \hat{h}_{k - n} } ,$$
(5)

where \(y_{n}\) represents the reproduced segmented data, \(\hat{h}\) represents the filter, \(M\) represents the number of components in \(y\), and \(C_{k}\) is the output vector.
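A small sketch of the convolution in Eq. (5) for a one-dimensional data segment; variable names are illustrative.

import numpy as np

def convolve(y: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Discrete convolution C_k = sum_n y_n * h_(k-n), as in Eq. (5)."""
    M, K = len(y), len(h)
    C = np.zeros(M + K - 1)
    for k in range(len(C)):
        for n in range(M):
            if 0 <= k - n < K:
                C[k] += y[n] * h[k - n]
    return C

y, h = np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.5])
assert np.allclose(convolve(y, h), np.convolve(y, h))   # matches NumPy's full convolution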

Step 2: Sigmoid-based normalization layer: normalisation is the technique of linearly transforming data to fit within a given range. The Z-score normalisation method standardises data by transforming it linearly; the formula is shown in Eq. (6):

$$Z_{norm} = \frac{f - \mu }{\sigma }.$$
(6)

Here, \(Z_{norm}\) is the normalized output, f is the sigmoid function value, \(\mu\) is the mean of the convolutional layer output data, and \(\sigma\) is the standard deviation of the convolutional layer output values. The convolutional layer output is thus normalized via the sigmoid function using Eq. (6). The sigmoid-based normalised output of this layer is passed to the pooling layer, providing it with value-normalised data.
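A sketch of the sigmoid-based normalisation of Eq. (6); here the logistic sigmoid is applied element-wise to the convolution output, which is one plausible reading of the text, and a small constant guards against a zero standard deviation.

import numpy as np

def sigmoid_znorm(conv_out: np.ndarray) -> np.ndarray:
    """Z-score normalisation of Eq. (6) applied to sigmoid-transformed values."""
    f = 1.0 / (1.0 + np.exp(-conv_out))          # sigmoid of the convolution output
    mu, sigma = conv_out.mean(), conv_out.std()  # mean / std of the convolution output
    return (f - mu) / (sigma + 1e-12)            # Z_norm = (f - mu) / sigma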

Step 3: Pooling layer: this layer is also called the down-sampling layer. To save computing effort and minimise overfitting, the pooling stage reduces the number of output neurons from the convolution layer. The max-pooling algorithm keeps only the highest value in each data map, resulting in fewer output neurons. Pooling layers are typically placed after convolution layers to help simplify the information in the convolution layer's output.
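A short sketch of 1-D max-pooling as described above; the window size is an illustrative choice.

import numpy as np

def max_pool(x: np.ndarray, window: int = 2) -> np.ndarray:
    """Keep only the largest value in each non-overlapping window."""
    trimmed = x[: (len(x) // window) * window]    # drop any ragged tail
    return trimmed.reshape(-1, window).max(axis=1)

print(max_pool(np.array([1.0, 3.0, 2.0, 5.0, 4.0])))   # -> [3. 5.]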

Step 4: Fully connected layer: the activation function computes a probability distribution over the classes. The output layer therefore uses the softmax function of Eq. (7) to find the cluster that best fits the preceding layer's output:

$$p_{i} = \frac{e^{y_{i}}}{\sum\nolimits_{j = 1}^{k} e^{y_{j}}},$$
(7)

where \(y_{i}\) represents the output associated with the ith cluster. Here, the DLSNNC is adapted with the sigmoid-function-based normalization to control over-fitting in the layers, and it results in meaningful clustering of the sensor data for the fog-cloud computing layers.
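A sketch of the softmax of Eq. (7), which converts the fully connected layer output into cluster probabilities; the example values are illustrative.

import numpy as np

def softmax(y: np.ndarray) -> np.ndarray:
    """p_i = exp(y_i) / sum_j exp(y_j), as in Eq. (7)."""
    e = np.exp(y - y.max())      # subtract the maximum for numerical stability
    return e / e.sum()

scores = np.array([1.2, 0.3, 2.5])    # output of the fully connected layer
p = softmax(scores)
print(p, "assigned cluster:", int(np.argmax(p)))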

3.2 Score based scheduling algorithm

Our major purpose is to schedule workflow tasks that contain patients' healthcare data. Initially, the task request is produced and separated into numerous task requests so that execution durations may be reduced at a reasonable cost while staying within the user-specified deadline. The score-based workflow task scheduling algorithm selects only those task requests that meet the minimum threshold of the workflow tasks for scheduling. An existing scheduling algorithm is described in [27]. The flow chart in Fig. 3 describes our proposed score-based workflow task scheduling system.

Fig. 3 Flowchart of the proposed SBS algorithm

The steps of the score-based scheduling algorithm are described below; a simplified Python sketch of these steps is given after the list.

Step 1: Submit the workflow task list, which includes patient healthcare information: T = {T1, T2, T3, …, Tn}.

Step 2: Contact the data centre to learn about the available virtual resources: VM = {VM1, VM2, VM3, …, VMn}.

Step 3: Assign a user-defined deadline constraint D in the form of sub-deadlines for various task requests to the whole workflow application.

Step 4: Using the components' minimum sub-scores, determine each VM's score value (SV), where X is the observed value, \(\mu\) the mean of the sample tasks, and \(\sigma\) the standard deviation of the tasks.

Step 5: Repeat steps 6, 7, and 8 while the task list still contains tasks to schedule; otherwise, proceed to the task mapping.

Step 6: Select the lowest-scoring VM from the VM list that meets the task's threshold. The task threshold (p) is determined by the length of the instructions.

Step 7: The job is assigned to the selected VM if it can finish the work within the specified deadline; else, the assignment is sent to the next lowest-scoring VM from the list of resources.

Step 8: Choose the next task from the list. Once all tasks have been scheduled, their mapping to VMs is complete.
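The sketch below follows the listed steps under simplifying assumptions: the score value of a VM is taken to be a z-score of its observed load (SV = (X − μ)/σ, as suggested by Step 4), the task threshold is derived from the instruction length and sub-deadline, and all class and field names are illustrative rather than taken from the original implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    length: float        # instruction length (e.g., MI)
    deadline: float      # user-defined sub-deadline (s)

@dataclass
class VM:
    name: str
    mips: float          # processing capacity
    load: float          # observed load X used for the score value
    queue: List[Task] = field(default_factory=list)

def score_values(vms: List[VM]) -> dict:
    """Step 4: SV = (X - mu) / sigma over the observed VM loads (assumed form)."""
    xs = [v.load for v in vms]
    mu = sum(xs) / len(xs)
    sigma = (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5 or 1.0
    return {v.name: (v.load - mu) / sigma for v in vms}

def schedule(tasks: List[Task], vms: List[VM]) -> dict:
    """Steps 5-8: assign each task to the lowest-scoring VM that meets its threshold and deadline."""
    sv = score_values(vms)
    mapping = {}
    for task in tasks:                                    # Step 5: iterate over the task list
        p = task.length / task.deadline                   # Step 6: threshold from instruction length (assumed form)
        for vm in sorted(vms, key=lambda v: sv[v.name]):  # lowest-scoring VM first
            if vm.mips < p:                               # VM does not meet the task's threshold
                continue
            finish = (sum(t.length for t in vm.queue) + task.length) / vm.mips
            if finish <= task.deadline:                   # Step 7: deadline check
                vm.queue.append(task)
                mapping[task.name] = vm.name
                break
        else:
            mapping[task.name] = None                     # no VM meets the threshold and deadline
    return mapping                                        # Step 8: completed task-to-VM mapping

tasks = [Task("T1", 400, 2.0), Task("T2", 900, 3.0)]
vms = [VM("VM1", 500, 0.2), VM("VM2", 800, 0.6)]
print(schedule(tasks, vms))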

4 Result and discussion

Our proposed DLSNN clustering and score-based scheduling for cloud-IoT applications is implemented in Python using an online cloud healthcare dataset. Different performance measures, such as latency and network usage, are estimated to explore the performance of the proposed work. Finally, the average delay is estimated and compared with the existing FCFS [28], SJF [28], and BMO [29] algorithms to demonstrate the relevance of the proposed approach.

4.1 Latency

There will be data flow between the various tiers of our fog computing solution in health informatics. In many circumstances, the amount of information, and hence the time required, will differ; as a result, the latency varies. Latency is the time between the commencement and the completion of a service and is computed as shown in Eq. (8):

$$L = ST + PT + TQT + IT.$$
(8)

Here, L denotes latency, ST denotes the requested task's start time, PT denotes its processing time, TQT denotes the transmission and queuing time prior to the requested task, and IT is the requested job's initiation time. Table 2 shows the latency comparison between the cloud-only and the combined cloud and fog computing layers for data sizes of 500, 1000, 1500, 2000, 2500, and 3000. In addition, Fig. 4 depicts the corresponding latency comparison graph.

Table 2 Latency comparison of cloud versus fog + cloud

Fig. 4 Latency of cloud compared to the fog + cloud system

4.2 Network usage

The second evaluation metric is network usage (\(N_{usage}\)). As the number of devices on the network grows, so does network usage, resulting in network congestion; as a consequence, an application running on the cloud network performs poorly. By dispersing the load across intermediary fog devices, fog computing helps reduce network congestion. Network usage is calculated using Eq. (9):

$$N_{usage} = \sum\limits_{i = 1}^{N} {L_{i} } \times S_{i} ,$$
(9)

where N is the total number of tasks, \(L_{i}\) is the latency, and \(S_{i}\) is the network size of the ith task. Table 3 reports the network usage (in GB) of the cloud-only and the combined cloud and fog computing layers for data sizes from 500 to 3000, and Fig. 5 shows the corresponding network-usage comparison graph.

Table 3 Network usage of cloud versus fog + cloud

Fig. 5 Network utilization of cloud compared to the fog + cloud system

4.3 Average delay

The average delay is the time elapsed between the starting execution time ST and the ending execution time ET of the requested tasks, as given in Eq. (10),

$$Average_{Delay} = ET - ST.$$
(10)
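For completeness, a small sketch of the three evaluation measures in Eqs. (8)-(10); the example values are arbitrary.

def latency(ST, PT, TQT, IT):
    """Eq. (8): L = ST + PT + TQT + IT."""
    return ST + PT + TQT + IT

def network_usage(latencies, sizes):
    """Eq. (9): N_usage = sum of L_i * S_i over all N tasks."""
    return sum(L * S for L, S in zip(latencies, sizes))

def average_delay(ST, ET):
    """Eq. (10): time elapsed between the start and end of execution."""
    return ET - ST

print(latency(0.1, 0.5, 0.2, 0.05))            # -> 0.85
print(network_usage([0.85, 0.9], [1.2, 2.0]))  # -> 2.82
print(average_delay(0.0, 0.85))                # -> 0.85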

Table 4 compares the average delay of FCFS, SJF, and BMO with that of the proposed technique for data sizes from 500 to 3000. Figure 6 shows that the average delay increases as the average waiting time increases for FCFS, SJF, and BMO, whereas the proposed technique yields a lower average delay.

Table 4 Performance measure of average delay

Fig. 6 Average delay of proposed approach compared to existing techniques

5 Conclusion and future work

We propose a fog-cloud computing technique for health monitoring systems in this paper. The purpose of the study reported here is to improve service quality. In this work, DLSNN clustering and score-based scheduling are used to improve prediction. According to the simulation results, the proposed solution improves quality of service in the cloud/fog computing environment in terms of latency and network consumption. Additionally, the proposed technique outperforms the existing approaches in terms of average delay. Different encryption techniques can be incorporated into the proposed architecture to improve the security of the system.