1 Introduction

Disasters such as earthquakes and floods affect large geographical areas, creating a fragile environment for rescue services to operate in. One of the major aftermaths of such disasters is the lack of communication infrastructure to support applications that could coordinate the rescue teams and disseminate the information and data needed for informed decision making. This calls for an antifragile and resilient application framework that can also provide on-demand computing resources and judicious decision making. In this context, wireless mesh networks (WMNs) have found an interesting alternative application in public safety and disaster recovery (PSDR), as they offer key features such as fault tolerance, broadband support, and interoperability.

Typically, a WMN is made up of radio nodes organized in a mesh topology: a group of unplanned nodes and mesh routers interconnected through wireless links [1]. A wireless service access point is installed at each network user's locale, requiring minimal configuration and infrastructure. In a WMN, each node acts as an autonomous user and forwards data traffic to the next node. The network infrastructure is decentralized: the autonomous nodes send and receive data independently and are not controlled by a centralized controller.

A WMN usually consists of three types of nodes: mesh clients (MC), mesh routers (MR), and mesh gateways (MG). Clients can be either stationary or mobile, and can form a mesh network among themselves by involving the mesh routers. The wireless routers in a WMN relay packets originating from the client nodes to the gateway nodes, which are further connected to a wired backbone network. In this way, large areas can be covered by a low-cost infrastructure. As an intermediary, a mesh access point (MAP) also serves the network; it works on the principle of a radio frequency (RF) access point and transmits packets to other mesh networks or to the wired backbone. It takes a packet from one mesh client network and forwards it either to another mesh client network or to some heterogeneous network via the wired backbone. The MR chooses a suitable gateway to route the received packets. The MG is an entity that integrates two networks, for example connecting the wired network with the wireless mesh network for better support.

Wireless mesh networks also inherit the characteristics of ad hoc networks, such as self-organization, self-configuration, self-healing, scalability, and reliability. Such networks automatically incorporate a new node into the existing structure without requiring any adjustments by a network administrator. This configuration allows a local network to run faster, because a local packet does not need to travel back to a central server. WMNs can provide cost-effective network deployment under the diverse environments of PSDR and better coverage for both stationary and mobile users. Through their on-demand routing algorithms they offer multiple paths from a source to a destination, allowing each node to make an intelligent decision about which path can most effectively forward packets through the network in order to improve overall network performance.

When WMNs are deployed for PSDR purposes, efficient utilization of bandwidth is a critical issue, especially for audio, text, image, or video traffic. Some types of data consist of small packets carrying control information; though small, such numerous packets give rise to a considerable amount of control overhead in wireless networks [2]. Moreover, such increased overhead is not affordable in emergency situations such as 9/11 and Hurricane Katrina. This paper addresses the state-of-the-art aggregation approaches and brings about modifications that not only increase throughput but can also produce optimum results with regard to network performance.

We propose a flexible and efficient aggregation framework for WMNs named the adaptive aggregation based decision model (AADM), supported by OMNeT++ simulation. In AADM, aggregation is performed based on run-time decisions after evaluating dynamic factors such as congestion, routing, reliability, average delay, and energy consumption. AADM exclusively monitors the live statistics of a network channel and decides at run time whether aggregation is needed at all. If it is, AADM decides whether to aggregate on a real-time or non-real-time basis, and then defines the maximum transmission unit (MTU) for aggregation depending on the aforementioned factors. Finally, it decides which type of aggregation is suitable, i.e., node-to-node or end-to-end aggregation. AADM also defines the roles of source and destination in the aggregation applied.

The rest of this paper is organized as follows: related work is elaborated with respect to types of aggregation approaches in Sect. 2. The proposed framework, including the details of the aggregation algorithm for generating hash tables, is described in Sect. 3; results and analysis follow in Sect. 4; and the final section concludes the paper.

2 Related work

Aggregation is a composition technique for building a new object from one or more existing objects that support some or all of the new object's required interfaces [2]. With respect to packet size, there are two types of aggregation: static and dynamic. In static aggregation the maximum transmission unit (MTU) remains fixed, while in dynamic aggregation it may vary according to the input parameters. Dynamic aggregation is similar to static aggregation, except that it uses the characteristics of the local link to determine an appropriate packet size and thereby reduce the chances of network packets being dropped. This packet size is called the aggregation threshold and never exceeds the MTU. The decision of when to aggregate is substantially influenced by two parameters, the maximum queue size and the time delay. In this regard, existing literature can be broadly classified into three categories: frame aggregation, packet aggregation, and link aggregation.
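The static vs. dynamic distinction can be sketched as follows. This is a minimal illustration: scaling the threshold by a link delivery ratio is our own assumed rule, not a formula from the cited schemes.

```python
# Static vs. dynamic aggregation threshold. In the dynamic case the
# target size is derived from local-link characteristics, so poor links
# get a smaller aggregation threshold, reducing the drop probability.
MTU = 2300  # IEEE 802.11 MTU in bytes (per Sect. 3)

def aggregation_threshold(link_delivery_ratio, static=False):
    if static:
        return MTU                      # static: threshold fixed at the MTU
    # dynamic: shrink the target size on lossy links (illustrative rule)
    return min(MTU, int(MTU * link_delivery_ratio))

print(aggregation_threshold(0.6))              # smaller target on a lossy link
print(aggregation_threshold(0.6, static=True)) # always the MTU
```

Either way, the resulting threshold never exceeds the MTU, matching the constraint stated above.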

2.1 Packet aggregation

In packet aggregation, a number of small packets are combined into larger packets. In this approach, distributed aggregators collect smaller packets through various network connections and assemble them into a large packet. The sender adds an aggregation header so that the receiver can de-aggregate the packets correctly, which makes de-aggregation a critical operation. There are two types of packet aggregation: hop-by-hop aggregation and end-to-end aggregation. In the hop-by-hop scheme, packets are aggregated and de-aggregated at each hop until they arrive at the final destination, while in the end-to-end scheme packets are aggregated at the source and de-aggregated only at the destination node.
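As a toy illustration of why the aggregation header matters for correct de-aggregation, the following sketch frames each sub-packet with a 2-byte length prefix. The framing format is our own simplification, not the header of any cited protocol.

```python
import struct

def aggregate(packets):
    # Prefix each sub-packet with its length so the receiver can
    # recover the packet boundaries during de-aggregation.
    return b"".join(struct.pack("!H", len(p)) + p for p in packets)

def de_aggregate(blob):
    packets, i = [], 0
    while i < len(blob):
        (n,) = struct.unpack_from("!H", blob, i)   # read the length prefix
        packets.append(blob[i + 2:i + 2 + n])      # slice out the sub-packet
        i += 2 + n
    return packets

pkts = [b"voice-1", b"voice-22", b"ctrl"]
assert de_aggregate(aggregate(pkts)) == pkts  # round trip preserves packets
```

Without such a header the receiver has no way to recover the original packet boundaries, which is why the text calls de-aggregation a critical operation.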

Bayer et al. [2] proposed a hop-by-hop packet aggregation mechanism for 802.11s WMNs and studied the feasibility of voice over internet protocol (VoIP) in a dual-radio mesh environment. They introduced a novel packet aggregation scheme that avoids the small-packet overhead, reducing MAC layer busy time and enhancing network performance. It aggregates packets at the IP layer and adapts to different network conditions and traffic characteristics. This approach provides a good aggregation ratio without increasing delay.

Kyungtae et al. [3] developed a distributed multi-hop aggregation algorithm for VoMESH that exploits the natural waiting time of packets in the interface queue of a loaded network. They used a combined approach of header compression and packet aggregation that reduces VoIP protocol overhead without introducing signalling overhead. They proposed a zero-length header compression algorithm integrated with packet aggregation, which does not depend on a signalling mechanism to recover the context discrepancy between compressor and de-compressor. The major challenges are VoIP protocol overhead and 802.11 MAC overhead, both of which decrease VoIP performance in mesh networks. They investigated these problems in 802.11-based wireless mesh networks and proposed novel solutions to reduce the overheads. Each of these methods produces considerable improvement in the operation of the mesh with respect to network capacity and quality of service (QoS).

Andreas et al. [4] proposed an adaptive hop-by-hop aggregation scheme that computes the target aggregation size for each hop based on wireless link characteristics. This scheme performs well relative to counterpart schemes, such as the static aggregation scheme, for various performance parameters. The authors focused on the relation between link quality and packet size for packet aggregation in multi-hop WMNs; consequently, the overall aggregation along a path is not constrained by the weakest link, which leads to significant performance improvement. Marcel et al. [5] proposed a novel packet aggregation mechanism that enhances VoIP capacity while maintaining voice quality. It helps reduce MAC layer contention and significantly increases the number of concurrent VoIP flows. It is a hop-by-hop aggregation scheme and inherits all the limitations of such schemes. Their main contribution is a packet aggregation mechanism that adapts itself to network traffic and minimizes delay and overhead without requiring any changes to current MAC layer implementations, thereby increasing VoIP performance and reducing MAC delay.

The above solutions focus on increasing VoIP traffic in multi-hop WMNs. Niculescu et al. [6] proposed several methods to improve voice quality, using a multiple-interface, label-based forwarding architecture along with path diversity and aggregation. They present experimental results from an IEEE 802.11b test bed optimized for voice delivery. They implemented a distributed packet aggregation strategy and made good use of the natural waiting time of the MAC for aggregation purposes. These methods show a modest improvement in the number of simultaneous calls. They focus on two important problems in supporting VoIP over WMNs, i.e., increasing VoIP capacity and maintaining QoS under internal and external interference, evaluate the performance of VoIP over WMNs, and provide various approaches for system optimization.

Raghavendra et al. [7] proposed another method based on IP-level adaptive packet concatenation for WMNs. In this method, the authors decide at runtime whether or not to concatenate packets. The size of aggregated packets is calculated based on route quality, because a good-quality route can potentially carry larger aggregated packets.

In our findings, the aforementioned solutions have the following shortcomings. The solutions of [2, 3] and [6] are static and are inefficient when aggregation is not required. The solutions of [2] and [5] ignore bandwidth, energy consumption, congestion, routing, and other important factors in packet aggregation. The solutions of [3, 5] and [6] cannot be applied in single-hop WMNs. [5] ignores the security of both the data and the network. [5] and [6] emphasize increasing capacity and QoS for VoIP; the latter, in particular, does not specify the routing mechanism of the packets. [7] ignores the influence of link quality on packet size and considers only the end-to-end path quality, through routing metrics such as WCETT. These metrics reflect path characteristics suitable for end-to-end aggregation and thus achieve suboptimal performance: if there is a bottleneck link, such as one characterized by low signal quality, it will force a small packet size for end-to-end aggregation, and the benefit of aggregating will be lost or negligible.

Furthermore, the solutions proposed for VoIP cannot successfully be implemented in a WMN due to the absence of a central controller. Secondly, the main purpose of these solutions is to increase the number of calls, improve voice quality, and minimize overhead. Moreover, many other important parameters, such as security, congestion, routing, reliability, energy consumption, and delay, are ignored. Therefore, these solutions are not suitable for aggregation in WMNs. Some solutions are suggested particularly for wireless sensor networks (WSNs) [8]

Fig. 1

Flow graph for AADM

In AADM, we developed an intelligent algorithm called the aggregation finding possibility algorithm (AFPA), demonstrated below in Algorithm 1, which identifies the possibility of aggregation based on the information provided by the environment. If aggregation is not feasible, it recommends transferring the packets as they are; otherwise, AFPA takes a few more decisions.

In this connection, AFPA determines whether aggregation should be hop-by-hop or end-to-end, the packets of which source and destination should be aggregated, and so on. These decisions make the aggregation more efficient and result in the delivery of more packets while consuming fewer resources. This solution also helps balance load over the network and indirectly reduces network congestion.

Packet aggregation in AADM decreases physical and MAC layer overhead and thereby reduces transmission time. At times aggregation might not be required, and this is decided by AADM. The proposed algorithm takes several parameters as input and decides against a threshold value, THRESH_LIMIT_AGG, which is set to 40 in our scenario (an assumed value that can be adjusted according to the specification of the network). If the sum of the parameters is greater than THRESH_LIMIT_AGG, aggregation is performed; if the sum is less than 40, the original packets are sent without the aggregation process. The algorithm takes input parameters such as congestion, bit error rate, delay, bandwidth, buffer size, and packet loss ratio. If there is high congestion on the channel, the algorithm adaptively discourages aggregation, and vice versa. A higher round trip time (RTT), or an increased degree of packet drop due to a short queue buffer, indicates a higher degree of congestion over the path. At the same time, if there is high routing delay due to heavy routing tables, no aggregation is performed.
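The decision-1 logic above can be sketched as follows. The per-attribute F_L_values in the example are those from the case study in Sect. 3.2; the function layout and dictionary keys are our own shorthand.

```python
THRESH_LIMIT_AGG = 0.40  # 40 % passing criterion for decision-1

def should_aggregate(fl_values):
    # fl_values: F_L_values already assigned to the six attributes
    # (bandwidth, congestion, PLR, BER, delay, buffer size).
    return sum(fl_values.values()) > THRESH_LIMIT_AGG

# Decision-1 values from the case study of Sect. 3.2: sum = 0.53 > 0.40
fl = {"bandwidth": 0.07, "congestion": 0.10, "plr": 0.10,
      "ber": 0.15, "delay": 0.04, "buffer": 0.07}
print(should_aggregate(fl))  # True
```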

If there is high average delay due to a small buffer or queue size, aggregation is disallowed by the algorithm. Furthermore, if the link is not reliable with respect to bit error rate, aggregation is not performed, as a link with a high bit error rate cannot be relied upon and may negatively affect the integrity of the data. Hence, link quality (i.e., in relation to BER) is another significant factor in deciding the viability of aggregation.

Lastly, the number of neighbours may affect the decision, given that an increased number of neighbours performing aggregation would become a bottleneck for the bandwidth capacity. If the bandwidth is reasonably high, the aggregation probability is relatively high; in that case a higher number of neighbours does not negatively affect the system. The higher the packet loss ratio, the smaller the chances of aggregation; it is one of the higher-weight parameters and can aggressively affect the chances of aggregation. For every decision there is a weight scale for the input parameters, that is, varying values are fed to the algorithm, which in turn gives the sum of the parameters for comparison with the THRESH_LIMIT_AGG value. This threshold value helps in deciding whether to perform aggregation.

The adaptive aggregation based decision model considers another threshold value, THRESH_LIMIT_TYPE, set to 50 %, which suggests the type of aggregation to be performed: hop-by-hop or end-to-end. A high sum of input parameter values suggests end-to-end aggregation; otherwise, hop-by-hop aggregation is performed. All these threshold values guide AADM in making such critical decisions.
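A minimal sketch of this type selection (the threshold comes from the text; the function is our own shorthand):

```python
THRESH_LIMIT_TYPE = 0.50  # 50 % passing criterion for decision-2

def aggregation_type(fl_sum):
    # A high weighted sum favours end-to-end aggregation; otherwise
    # hop-by-hop aggregation is performed.
    return "end-to-end" if fl_sum > THRESH_LIMIT_TYPE else "hop-by-hop"

print(aggregation_type(0.53))  # end-to-end
print(aggregation_type(0.35))  # hop-by-hop
```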

Algorithm 1 Aggregation finding possibility algorithm (AFPA)

Varying input values may lead to attaining THRESH_LIMIT_AGG at 40 % and progressing to the further decisions required. In one case, low network congestion might contribute to achieving this threshold; in another, if congestion is high, other parameters such as low average delay, larger buffer size, low packet drop ratio, higher bandwidth, or low bit error rate might be the major contributors, and vice versa. The same threshold is used to measure the possibility of real-time and non-real-time traffic aggregation: real-time traffic is aggregated in every case, while non-real-time traffic depends on a reasonable threshold value. AADM considers another threshold, PACKET_SIZE_THRESHOLD (PST), with levels PST_1, PST_2 and PST_3, which estimates the range of maximum packet size within which the sum of the parameters falls. A larger sum leads to a higher threshold, and thus a higher number of packets per aggregation.

The adaptive aggregation based decision model is fundamentally based on fuzzy logic: the real values of the parameters are passed as input arguments to the algorithm, which transforms all these inputs into values suitable for making decisions. We call such a value an F_L_value, as fuzzy logic is applied. These values correspond to the actual parameters and are further utilized to calculate the ultimate threshold value. Different levels of F_L_values are assigned to different parameters in every decision. In most cases, the relationship between the original input value and the assigned weight is inversely proportional.

For instance, in the first decision, if congestion is very low, i.e., 1–5 %, we assign 0.15–0.20 points, and if congestion is very high, i.e., 20–25 %, we assign 0.01–0.02 points. Similarly, an F_L_value is assigned for each parameter. Here, congestion has a total of 20 points out of which the F_L_values are given; likewise, packet loss ratio and bit error rate have 25 points each, delay 10, bandwidth 10, and buffer size 10. These maximum points, as well as the weight formula, change for every decision.
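The inverse mapping from a raw parameter to its F_L_value can be sketched as below. Only the two congestion breakpoints (very low and very high) come from the text; the intermediate bands, and the choice of the upper end of each point range, are purely illustrative assumptions.

```python
def congestion_fl_value(congestion_pct):
    # Inverse relationship: lower congestion earns a higher F_L_value.
    if congestion_pct <= 5:     # very low congestion (1-5 %): 0.15-0.20 points
        return 0.20
    elif congestion_pct <= 10:  # illustrative intermediate band (assumed)
        return 0.10
    elif congestion_pct < 20:   # illustrative intermediate band (assumed)
        return 0.05
    else:                       # very high congestion (20-25 %): 0.01-0.02 points
        return 0.02

print(congestion_fl_value(5))   # 0.20
print(congestion_fl_value(25))  # 0.02
```

A similar (differently weighted) mapping would exist for each of the six attributes and for each of the four decisions.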

The proposed algorithm incorporates an end-to-end packet aggregation approach for voice packets and a hop-by-hop aggregation approach for other types of data traffic. Voice packets are required to reach the destination in real time, and the aggregation and de-aggregation of packets takes time at each hop when hop-by-hop is selected; hence, the latter approach is discouraged for voice packets. However, for packets bearing diverse destinations, hop-by-hop is a crucial approach and cannot be overlooked. For voice packets having the same source and destination, the end-to-end approach is necessarily adopted. After making this decision, the AFPA algorithm makes the distinction on a packet-by-packet basis and selects a particular queue accordingly. Data packets are treated according to their real-time or non-real-time requirement.

The maximum size of the aggregated packet cannot exceed the MTU, which is determined by each network. If AADM decides to perform aggregation, the level of aggregation is adjusted according to the magnitude of PACKET_SIZE_THRESHOLD, as explained earlier. The MTU for wired networks is 1500 bytes, and for IEEE 802.11 it is 2300 bytes; hence, the aggregated packet size remains below the MTU. Therefore, in AADM, the higher the PACKET_SIZE_THRESHOLD, the higher the aggregation level and the size of the aggregated packet. These thresholds are adjustable parameters and vary according to the specifics of the environment; it would not be wise to assume a single static threshold suitable for every setting.

3.1 Selected attributes for AADM

In aggregation, many attributes play a vital role in defining the efficiency of a mesh network. The representative attributes are: congestion, delay, packet loss ratio (PLR), bit error rate (BER), buffer size, and bandwidth. In the sequel, we perform the following four aggregation decisions on the basis of these six attributes: aggregation/no aggregation; hop-by-hop/end-to-end; real time/non-real time; and packet size. For each decision we allocate weights to each of the above-mentioned attributes. Since the role of each attribute differs in each of the four decisions, their weights change accordingly. The input variables are: (1) bandwidth (b), (2) average delay (d), (3) packet loss ratio (Lq), (4) bit error rate (p), (5) buffer size_BS (q), (6) congestion (C). The constants defined for the proposed algorithm are: (1) THRESH_LIMIT_AGG = 40 %, (2) THRESH_LIMIT_TYPE = 50 %, (3) PST_1 = 25 %, (4) PST_2 = 50 %, (5) PST_3 = 99 %.

Table 1 further presents the notations and inputs for the AFPA algorithm. There are four decisions (D) to take; the maximum weight that can be assigned to each attribute differs per decision, as presented in Table 2. First, the AFPA algorithm takes the inputs and adapts these values according to the specified criterion. The adapted parameter values are then summed and compared with the stipulated thresholds. In this way, AFPA takes useful decisions and dynamically determines the actions to perform on the packets. Multiple actions arise serially as the AFPA algorithm is traversed.

Table 1 Notations for input parameters

For decision-1, the passing criterion is 40 %: if the sum of parameters is above 0.40, aggregation is performed; otherwise it is not. For decision-2, the passing criterion is 50 %: if the sum of the parameters is above 0.50, end-to-end aggregation is executed; otherwise, hop-by-hop aggregation is performed. For decision-3, the passing criterion is again 50 %: if the sum of parameters is above 0.50, both real-time and non-real-time data are aggregated; otherwise, only real-time data traffic is aggregated.

For decision-4, the MTU is specified by identifying the number of packets to be aggregated. The passing criterion is progressive, as the MTU size grows with an increasing sum. If the sum of parameters is up to 0.25, only two packets are aggregated; if it is above 0.25 and less than 0.50, three packets are aggregated; and four packets are aggregated if the sum is above 0.50, up to 0.99.
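The progressive decision-4 mapping can be sketched as follows. The PST values come from the text; the tie-handling at exactly 0.50 is our own choice, as the text leaves that boundary unspecified.

```python
PST_1, PST_2, PST_3 = 0.25, 0.50, 0.99  # packet size thresholds

def packets_per_aggregate(fl_sum):
    if fl_sum <= PST_1:
        return 2   # sum up to 0.25: aggregate two packets
    elif fl_sum < PST_2:
        return 3   # above 0.25 and below 0.50: three packets
    else:
        return 4   # 0.50 up to 0.99: four packets

print(packets_per_aggregate(0.52))  # 4, as in the case study of Sect. 3.2
```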

Table 2 Weights distribution schemes for attributes

3.2 Case study

Consider a network with 2 Mb bandwidth, a 10 % congestion level, 10 % packet loss ratio (PLR), 5 % bit error rate (BER), 100 ms average delay, and a buffer size of 500. The prospect of performing aggregation or not in this network is as follows:

  • bandwidth 2 Mb = 0.07

  • congestion 10 % = 0.10

  • PLR 10 % = 0.10

  • BER 5 % = 0.15

  • delay 100 ms = 0.04

  • buffer size 500 = 0.07

Since the sum 0.53 (53 %) is more than 40 %, aggregation is allowed. The prospects for performing either hop-by-hop or end-to-end aggregation are:

  • bandwidth 2 Mb = 0.07

  • congestion 10 % = 0.05

  • PLR 10 % = 0.10

  • BER 5 % = 0.20

  • delay 100 ms = 0.04

  • buffer size 500 = 0.07

As the sum 0.53 (53 %) exceeds the 50 % threshold, end-to-end aggregation is allowed. The prospects for performing real-time, non-real-time, or both kinds of aggregation are:

  • bandwidth 2 Mb = 0.04

  • congestion 10 % = 0.05

  • PLR 10 % = 0.10

  • BER 5 % = 0.12

  • delay 100 ms = 0.30

  • buffer size 500 = 0.04

Since 0.65 (65 %) is more than 50 %, aggregation can be applied to both real-time and non-real-time data. Finally, the number of packets to be aggregated is determined:

  • bandwidth 2 Mb = 0.04

  • congestion 10 % = 0.04

  • PLR 10 % = 0.14

  • BER 5 % = 0.22

  • delay 100 ms = 0.04

  • buffer size 500 = 0.04

In this case, four packets will be aggregated, as the calculated sum of 0.52 (52 %) is above 0.50. The four hash tables illustrated in Figs. 2, 3, 4 and 5 are based on the six selected parameters; however, the mapping of weights differs for each one. Our algorithm is based on packet-based aggregation rather than frame- or link-based aggregation. A VoIP packet is relatively smaller than packets of other types of data, and the control information in the form of a header has to be sent with each packet. Smaller packets mean that more control information is sent on the channel, which leads to extra overhead; hence VoIP data suffers the most in traditional networks. The scheme is therefore oriented towards finding a solution for VoIP streaming, which leads to the selection of the end-to-end approach.
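The four case-study sums can be checked mechanically; the F_L_values below are copied verbatim from the bullet lists above, while the dictionary layout and labels are our own.

```python
# F_L_values for the six attributes (bandwidth, congestion, PLR, BER,
# delay, buffer size) in each of the four case-study decisions.
decisions = {
    "D1 aggregate?":    [0.07, 0.10, 0.10, 0.15, 0.04, 0.07],
    "D2 type":          [0.07, 0.05, 0.10, 0.20, 0.04, 0.07],
    "D3 traffic class": [0.04, 0.05, 0.10, 0.12, 0.30, 0.04],
    "D4 packet count":  [0.04, 0.04, 0.14, 0.22, 0.04, 0.04],
}
for name, values in decisions.items():
    print(f"{name}: sum = {sum(values):.2f}")
# D1: 0.53 > 0.40 -> aggregate; D3: 0.65 > 0.50 -> both traffic classes;
# D4: 0.52 > 0.50 -> four packets per aggregate.
```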

Fig. 2

Hash table of each attribute for decision-1

Fig. 3

Hash table of each attribute for decision-2

Fig. 4

Hash table of each attribute for decision-3

It is now binding on the ingress node performing aggregation to explore the ultimate, matching destinations of the maximum number of packets passing through it; in other cases, the hop-by-hop approach is the solution. A single packet is not a suitable candidate for aggregation: if the ingress node finds only one or two packets for some destination, it should not perform aggregation, since doing so would cost the network. Thus, this is one more case, besides those covered in the algorithm, where aggregation is not performed. The algorithm takes care of the different input parameters, which are assigned suitable fuzzy logic values. Finally, a computed value is obtained by summing the fuzzy logic values and is then compared with a threshold variable to determine the feasibility of aggregation.

Fig. 5

Hash table of each attribute for decision-4