Background

The pressure to demonstrate that responses to crises are grounded in research evidence has been growing over recent years [1,2,3]. While other domains have been able to make progress in this field, the humanitarian aid domain still faces some challenges [1, 4, 5]. Part of the challenge may be a lack of understanding of the benefits of using evidence to inform decision-making. Research evidence can help decision-makers understand a problem, frame options to respond appropriately, and address implementation considerations for interventions in specific contexts. When used appropriately, evidence can help decision-makers build on the success of others and avoid repeating the failures of others by learning from systematic studies of their impacts and experiences. A significant literature exists that examines the use of research evidence in decision-making, some of which pays particular attention to low- and middle-income countries (LMICs), where most crises occur [6,7,8,9,10,11,12,13,14,15,16]. However, there is a need for a theoretically informed framework outlining the strategies that would leverage facilitators and address the barriers to evidence-informed decision-making in crisis zones in LMICs. This study aims to fill this gap by developing a conceptual framework.

Decision-making is complex, both because it is context dependent and because it is often influenced by the need to act quickly in sometimes less than ideal situations with relatively little access to information. Recognising this complexity, evidence-informed decision-making has been described as an approach that aims to ensure that decisions are influenced by the best available research evidence, while acknowledging the other factors that influence it [17]. These other factors include institutional constraints, interests, ideas such as values, and external factors like the election of a new governing party. In spite of these complexities, strengthening the use of research evidence in decision-making holds the promise of achieving better use of limited humanitarian aid resources.

Crises are no longer contained within one geographical location; they transcend borders, can affect large populations and can disrupt health systems. A crisis situation has several defining characteristics. First, the events that lead up to a crisis are often unexpected. Second, the crisis creates uncertainty about what the future holds. Third, the crisis is seen as a threat to important goals such as security and the sustainability of normal structures. Recent humanitarian crises – be it the Ebola epidemic or the Syrian refugee crisis – have placed considerable stress on health systems that are not fully equipped to deal with them. For all these reasons, it is important to start thinking about how to build effective humanitarian systems that are able to respond to crises. What makes decision-making in crisis situations unique is the high level of stress involved, often in intense and sometimes dangerous situations. Research evidence can help decision-makers respond in a timely manner in such situations.

One area to consider when seeking to strengthen the use of research evidence in crisis zones is what strategies can be used to support evidence-informed decision-making. To date, thinking about such strategies has been mostly confined to the research system, with an emphasis on making evidence more available and accessible to decision-makers and less on formalised processes for facilitating its use [5, 18, 19]. When the focus turns to the humanitarian aid system, the emphasis has been more on establishing a receptive climate for evidence [20]. Less attention has been given to systems beyond the research and humanitarian aid systems. Given how little research exists on the full array of strategies to support evidence use in crisis zones, both within and beyond the research and humanitarian aid systems, our compass question is – what are the strategies that leverage the facilitators and address the barriers to evidence use in crisis zones in LMICs? The strategies to support evidence use in crisis zones can be employed to integrate the use of evidence more systematically within different systems.

Methods

Design

We used a critical interpretive synthesis (CIS) to develop the theoretical framework and answer our compass question – what are the strategies that leverage the facilitators and address the barriers to evidence use in crisis zones in LMICs? CIS, developed by Dixon-Woods et al. [21], uses many conventional systematic review processes but allows for the examination of both quantitative and qualitative empirical and non-empirical literature (e.g. editorials, essays). This approach is particularly appropriate for this study because there is an ill-defined, diverse, yet nascent body of literature on the barriers to and facilitators of strategies to support evidence use in crisis zones in LMICs. Moreover, contrary to conventional systematic reviews, where there is a well-formulated research question at the outset, CIS employs a compass question that allows for a more iterative and responsive process of synthesis as different types of literature open up new themes and relationships among themes [21, 22].

Literature search

The literature search was carried out in phases, guided by our compass question, and included available research literature that aims, through empirical or non-empirical approaches, to contribute to generalisable knowledge (Fig. 1). Initial search terms were developed in consultation with a librarian (Additional file 1). Several sample search strategies were run and the strategies were adjusted iteratively. Small adjustments were made to the search string for each database to ensure that the formatting was optimal for that database. These database searches were complemented with reviews of the websites of relevant non-governmental organisations (e.g. Médecins Sans Frontières) and international agencies (e.g. WHO), and a hand search of reference lists from relevant articles. The searches were executed from February to April 2017, with additional articles added throughout the analysis phase to fill any conceptual gaps. Duplicate articles resulting from the above searches were excluded using the EndNote database.
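
Deduplication of this kind can also be reproduced programmatically. The following is a minimal illustrative sketch, assuming the combined search results have been exported (e.g. from EndNote) to a CSV file with "title" and "doi" columns; the file name and column names are hypothetical and not part of the review's actual workflow.

```python
# Illustrative only: remove duplicate records from exported search results,
# matching first on DOI and then on a normalised title (hypothetical file/columns).
import csv
import re

def normalise(title: str) -> str:
    # Lowercase and strip punctuation/whitespace so trivially different titles match.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

seen_dois, seen_titles, unique_records = set(), set(), []

with open("search_results.csv", newline="", encoding="utf-8") as f:
    for record in csv.DictReader(f):
        doi = (record.get("doi") or "").strip().lower()
        title = normalise(record.get("title") or "")
        if (doi and doi in seen_dois) or (title and title in seen_titles):
            continue  # duplicate of a record already retained
        if doi:
            seen_dois.add(doi)
        if title:
            seen_titles.add(title)
        unique_records.append(record)

print(f"{len(unique_records)} unique records retained")
```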

Fig. 1 QUOROM flow chart of the inclusion/exclusion process

Article selection

For inclusion, the documents had to provide examples of strategies, facilitators and/or barriers to evidence use in crisis zones in LMICs. For the purpose of article selection, we defined research evidence as the output of research that has been conducted in a systematic way and reported in a transparent manner. Our definition of research evidence includes evidence described in both empirical papers (e.g. observational studies, surveys and case studies) and conceptual papers (e.g. theoretical papers). It also includes both primary studies and secondary research (e.g. systematic reviews and other forms of evidence synthesis). We distinguish such research evidence from other types of information, including data, tacit knowledge or ordinary knowledge [23], and stakeholder opinions.

We excluded the following types of articles: (1) focused on translating clinical research into practice; (2) focused on translating health knowledge to citizens (e.g. patients, members of the public); (3) focused on information systems that deal with raw data and not research evidence; and (4) deemed to be fatally flawed (as determined by an adapted version of the criteria proposed by the National Health Service National Electronic Library for Health for the evaluation of qualitative research, which assess the appropriateness of the aims and objectives and of the research design, etc.).

We assessed the relevance of included studies in the synthesis. For the purposes of this interpretive review, we applied a low threshold of relevance to maximise the inclusion and contribution of a wide variety of papers that address the objectives of this synthesis [24]. We did not perform an appraisal of quality because the core objective is the development of a theoretical framework based on insights and interpretation drawn from relevant sources, rather than those that meet particular quality criteria.

A second reviewer (KM) was assigned to a representative sample of articles to ensure intercoder reliability at the two stages of article selection (i.e. titles and abstracts, and full-text documents). Given that this is a mixed-methods synthesis, a Cohen's Kappa statistic measuring inter-rater agreement was calculated with the intent of spurring reflection about the inclusion and exclusion criteria for this study rather than being overly focused on the quantitative estimate [25]. As a result of that reflection, we developed a working dictionary of key terms to be used in the synthesis (e.g. knowledge vs. research evidence). Discrepancies were identified and resolved through discussion.
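
For readers unfamiliar with the statistic, Cohen's kappa compares the observed agreement between two reviewers with the agreement expected by chance alone, via kappa = (p_o − p_e) / (1 − p_e). A minimal sketch of the calculation is shown below; the screening decisions used are hypothetical and are not drawn from this review.

```python
# Illustrative only: compute Cohen's kappa for two reviewers' include/exclude
# decisions on the same records (hypothetical data, not from this review).
from collections import Counter

reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "include"]
reviewer_b = ["include", "exclude", "include", "include", "exclude", "include"]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n  # p_o

counts_a, counts_b = Counter(reviewer_a), Counter(reviewer_b)
labels = set(counts_a) | set(counts_b)
expected = sum((counts_a[label] / n) * (counts_b[label] / n) for label in labels)  # p_e

kappa = (observed - expected) / (1 - expected)
print(f"Cohen's kappa = {kappa:.2f}")  # prints 0.67 for this toy example
```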

Similar to a grounded theory approach, additional articles were purposively sampled from the broader literature providing insight into strategies to support evidence use in other settings but that are equally relevant to crisis zones [26]. The additional articles helped with the interpretive process that led to our conceptual framework.

Data synthesis and analysis

All included papers (n = 27) were read in full and any specific information in the results and discussion sections of the included papers that shed light on the topic area was considered as data. The overarching guide used when developing categories for data synthesis was that the category contributed to answering our compass question. Concepts that were repeated across papers without providing a new insight into the topic area were excluded, as the focus was on uncovering new insights into the strategies to support evidence use, and the facilitators of and barriers to evidence use in crisis zones.

Facilitators and barriers to evidence use were identified if they were referenced in the original text. Strategies were identified for this synthesis in three ways. First, strategies were identified if they were explicitly referenced in the original text. Second, strategies were deduced and extrapolated based on the implications of the identified facilitators and barriers in the literature and the principal investigator’s accumulated understanding of the knowledge translation field. Third, strategies were drawn from the broader literature providing insight into strategies to support evidence use in other settings but that are equally relevant to crisis zones. For example, strategies were drawn from the Lavis et al. [27] framework for assessing country-level strategies to link research to action and the Cochrane Knowledge Translation Strategy framework [27, 28].

An interpretive analytic approach was used to synthesise the results and help develop the conceptual framework. We used a constant comparative method throughout the analysis where emerging data was compared to previously collected data to find similarities and differences [26, 29]. This approach included observations on the concepts used to describe the strategies that leverage the facilitators and address the barriers to evidence use within each system. All data collected were reviewed and detailed notes of the concepts that emerged were included in the analysis.

Results

Included articles

All 27 documents selected were published between 2002 and 2017 (Table 1). The region of focus for all documents was LMICs, with a wide range of countries of focus (e.g. India, Peru, South Africa). Of the 27 documents, 16 focused solely on natural hazards (e.g. tsunami), 5 on man-made hazards (e.g. armed conflict), and 6 on both. The Cohen's Kappa was 0.78 for the initial eligibility screen based on titles and abstracts and 0.87 for the full-text document assessment, both of which are considered excellent inter-rater agreement [56]. Five articles were deemed fatally flawed and were thereby excluded from our results.

Table 1 Characteristics of included studies retrieved in searches and with additional purposive sampling

Four-part structure of the framework

Our analysis of the findings from the literature resulted in a conceptual framework (Fig. 2) that focuses on evidence use in crisis zones examined through the lens of four distinct systems that crisis zones operate within (i.e. political, health, international humanitarian aid and health research). The political system refers to the various actors at the government level tasked with setting laws that pertain to the health, international humanitarian aid and health research systems. For the political system, the two main domains consist of institutional constraints and different actors' interests influencing evidence use, informed through the 3-I framework – a political science framework with three categories of influences on the policy-making process, namely ideas, interests and institutions [57].

Fig. 2 Strategies and the facilitators (+) and barriers (−) to support evidence use in crisis zones

The health system refers to Ministries of Health and health organisations that, when well-functioning, are able to get the right programmes, services and drugs to those who need them. The international humanitarian aid system refers to organisations that are involved in the delivery of humanitarian aid services. Some of the principles of the humanitarian aid system that guide interventions in crisis zones include prioritising the most vulnerable populations and operating with impartiality, independence and neutrality. The health research system refers to the people and organisations engaged in the conduct, synthesis and dissemination of research [58]. For the health, international humanitarian aid and health research systems, the facilitators and barriers were analysed according to arrangements informed through an established health systems taxonomy that includes governance (i.e. who can make what types of decisions to support evidence use), financial (i.e. understanding how funds can be channelled in ways that support evidence use) and delivery (i.e. infrastructure to support evidence use) arrangements [59]. Within each of the four systems, the framework identifies the most actionable strategies that leverage the facilitators and address the barriers to evidence use.
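
To make the four-part structure easier to see at a glance, the sketch below renders the framework as a simple nested data structure. The field names are our own illustrative choices, and the example strategies are abbreviated from the text; the full content is in Fig. 2 and Table 2.

```python
# Illustrative representation of the four-part framework (field names and the
# abbreviated entries are our own; see Fig. 2 and Table 2 for the full content).
framework = {
    "political system": {
        "lens": "3-I framework (ideas, interests, institutions)",
        "domains": ["institutional constraints", "actors' interests"],
        "example_strategy": "stakeholder dialogues informed by evidence briefs",
    },
    "health system": {
        "lens": "health systems taxonomy",
        "domains": ["governance", "financial", "delivery"],
        "example_strategy": "partnerships and shared platforms for evidence sharing",
    },
    "international humanitarian aid system": {
        "lens": "health systems taxonomy",
        "domains": ["governance", "financial", "delivery"],
        "example_strategy": "awareness of and training in evidence websites",
    },
    "health research system": {
        "lens": "health systems taxonomy",
        "domains": ["governance", "financial", "delivery"],
        "example_strategy": "priority-setting with decision-makers; knowledge brokers",
    },
}

# Print one actionable strategy per system.
for system, details in framework.items():
    print(f"{system}: {details['example_strategy']}")
```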

Table 2 outlines, in more detail, the facilitators of and barriers to evidence use in crisis zones in LMICs and the strategies aimed at specific actors within each system to support evidence use. Below, we provide our interpretation about the strategies that leverage the facilitators and address the barriers to support evidence use in decision-making in crisis zones, recognising that many of them are transferable across other applicable systems.

Table 2 Strategies and the facilitators (+) and barriers (−) to support evidence use in crisis zones

Strategies, facilitators and barriers in each section of the framework

Political system

Policy-making about the health, international humanitarian aid and research systems has historically drawn heavily on professional opinion [30, 41, 43, 51, 52]; this reliance on professional opinion is attributed to two main factors. First, decision-makers perceive a lack of existing research evidence to clarify problems, frame options and address implementation considerations. Second, decision-makers need research evidence presented to them alongside other factors that influence their decisions (e.g. stakeholders' opinions and citizens' values). Relying solely on professional opinion comes with potential associated errors [64]. For example, cognitive bias is a type of error in thinking that stems from our inability to be entirely objective, resulting in inaccurate judgement. This is not to say that professional opinions should not be highly valued, but rather that they have to be considered alongside the existing research evidence to minimise associated errors.

There are at least two strategies that policy-makers can draw upon to address the barrier of research evidence not being presented alongside other factors that influence decision-making. First, stakeholder dialogues aim to place relevant evidence alongside professional opinion [65]. This strategy is better suited to a protracted crisis as it requires time to prepare an evidence brief to inform the dialogue and adequate resources to support this type of collective problem-solving (e.g. infrastructure needed to convene the dialogue participants). Policy-makers should consider whether they or another group are better positioned to produce the evidence briefs and conduct the policy dialogues. For example, the Knowledge to Policy (K2P) Center in Beirut produced evidence briefs and conducted policy dialogues over a 6-month period to support evidence use in the country’s response to the Syrian refugee crisis [66, 67]. For a fast-evolving crisis, a rapid evidence service can answer an urgent question with the best available evidence alone or alongside insights from key stakeholders (drawn from key-informant interviews) in a short time-frame [68].

Health system

The barriers to the use of evidence at the health system level deal mostly with key stakeholders’ involvement with the health services element of humanitarian aid delivery. Stakeholder involvement serves two purposes in supporting evidence use in crisis zones [1, 34, 38, 51, 55]. First, it allows for sharing of evidence among the appropriate groups in a system that has adopted a networked approach to delivering health services as part of humanitarian aid. Second, it strengthens “local ownership of research”, which facilitates better uptake of evidence [51]. For example, the Lebanese health system during the Syrian refugee crisis established networks with key stakeholders to collect and share relevant evidence and other types of information to better address the health needs of Syrian refugees [69].

To address challenges with stakeholder involvement, and given the dynamic environment of crises, it is imperative for health system leaders to invest in building partnerships with key stakeholders involved in the delivery of the health services element of humanitarian aid to improve evidence sharing and use [50, 51, 53]. One way to build such partnerships is by leveraging technology to facilitate evidence-informed discussions among stakeholders. For example, a National Emergency Management Network was created after Hurricane Katrina – an emergency management software programme that provides participants with a common platform for sharing relevant information [60, 61].

International humanitarian aid system

Creating new evidence is a costly and time-consuming strategy. A recent estimate found that there are more than 200,000 systematic reviews across all topic areas, although only a small fraction of these reviews are related to humanitarian aid [70]. Undoubtedly, there will always be gaps that need filling in the existing evidence on humanitarian action [33, 52]. However, there is an abundance of existing evidence that is not being used by humanitarian aid workers because of access barriers (e.g. payment required to access evidence, evidence scattered across reports and journals) [31, 34, 35, 39,40,41,42, 45, 51, 55].

Evidence websites do exist and can help to address the barriers related to access to systematic reviews. For example, the Evidence Aid website collates systematic reviews specifically aimed at humanitarian action [32]. However, there is a need to increase awareness among humanitarian aid workers of the existence of such sites and their added value in supporting evidence use in decision-making [1, 39, 40, 51, 54, 55]. Humanitarian aid organisations can host training workshops that can be customised to address decision-makers' evidence needs in crisis zones. Additionally, decision-makers can enrol in online courses designed to help them find and use research evidence to inform their decision-making (e.g. McMaster Health Forum Finding and Using Research Evidence to Inform Decision-Making in Health Systems and Organizations).

Health research system

Supporting the use of healthcare research in decision-making is a complex process that both researchers and decision-makers in crisis zones struggle with [71]. Many authors emphasised that part of the struggle is that existing evidence does not meet decision-makers’ needs (e.g. evidence about interventions does not address implementation considerations) and that the evidence is not presented in a concise manner that can be easily understood by non-technical decision-makers [1, 30, 33,34,35, 38,39,40,41,42,43,44, 46,47,48,49, 51, 52, 54, 55, 72, 73].

The research literature on the best strategies to support the use of research evidence in decision-making suggests that interactive engagement between researchers and decision-makers may be most effective [63]. For example, decision-makers can be engaged in research priority-setting processes to develop specific research questions related to humanitarian action in crisis zones [33, 34, 38,39,40,41, 51, 54, 55, 62, 74]. Another key strategy is to develop and disseminate actionable messages for decision-makers, particularly by research organisations that produce syntheses or systematic reviews, not single studies. Systematic reviews “focus on bodies of research knowledge” that are critical to the development of actionable messages [63]. Knowledge brokers can fill the gap by acting as ‘intermediaries’ between the world of research and decision-making, hel** to turn research findings into actionable messages to support their use in crisis zones [38, 53, 55, 75,76,77].

Discussion

Our theoretical framework can be thought of as a heuristic that can be used to identify (1) the strategies that can be employed to integrate the use of evidence more systematically into decision-making as well as (2) the facilitators and barriers that influence evidence use in decision-making in crisis zones, both individually and in relation to each other (Fig. 2). The different strategies can be undertaken by different actors within each system – political, health, humanitarian aid and research – that have an influence on the use of evidence in crisis zones. The strategies to support evidence use can occur sequentially or simultaneously within or across the four systems. Our conceptual framework offers a window into the continued progress regarding both the conceptual and practical implementation of strategies to support evidence use in decision-making in crisis zones.

Discussion around the use of evidence in humanitarian action has been ongoing since the 1990s, but much of it has centred on filling knowledge gaps by conducting new research in crisis zones. Our review recognises that there are times when the existing research evidence on crisis zones is lacking (e.g. crisis-specific facilitators of and barriers to the implementation of interventions) and rapid operational research is needed. However, strategies are also needed to support the use of the vast pool of high-quality and locally applicable research evidence that already exists, some of which has been collected in freely available online resources such as Evidence Aid.

The focus in the broader literature has been on emphasising the importance of research evidence, even as it acknowledges that research evidence is only one input into decision-making processes [78,79,80,81]. This is especially problematic in the humanitarian aid sector, where professional judgement is known to play a key role in informing decisions [1, 70, 82]. Our review recognises that decisions are not determined by evidence alone, but rather by evidence alongside professional opinion and other inputs to decision-making. This is why, in the political system, we proposed strategies such as stakeholder dialogues that allow the research evidence to be placed alongside the tacit knowledge and real-world views and experiences of front-line staff [83].

The broader literature contains many strategies to support evidence-informed decision-making in other settings that are equally relevant to crisis zones [20, 28, 65, 76, 83,84,85]. For example, in healthcare settings, rapid evidence summaries have emerged as a responsive approach involving the presentation of short summaries of evidence from systematic reviews, making them more useful and easier for decision-makers to digest [86]. Rapid evidence summaries can also be useful in the humanitarian aid sector, given the need for evidence to be presented in a concise manner that can be easily understood by non-technical decision-makers in a short time-frame [30, 31, 34, 36, 38, 39, 41, 43, 44, 47, 51, 52].

Strengths and limitations

The strengths of the study include the use of a critical interpretive synthesis methodology that combined a rigorous traditional systematic review methodology with the benefits of an interpretive approach (e.g. evolving compass question, purposive sampling of a diverse literature). Additionally, a second reviewer was involved in the two phases of article selection and in the inclusion phase, and a Cohen's Kappa statistic was calculated, with a result that indicated excellent inter-rater agreement and spurred reflection about the appropriate inclusion and exclusion of articles. Finally, the synthesis identified the strategies to support evidence use and the facilitators of and barriers to evidence use, within different systems, that can serve as a point of departure for researchers undertaking empirical work that focuses on one or more specific systems.

Within humanitarian aid research, this study is the first to explicitly focus on the four interconnected systems – political, health, international humanitarian aid and health research. Research to date has tended to take a broader, non-system-specific approach to examining evidence use in crisis zones. This makes it challenging to identify which system the strategies to support evidence use are best handled by and, within a system, which actor is best suited to implement the strategies. The systems level analysis explored in this study contributes to alleviating this challenge by focusing on each system specifically and the actors that can exert influence on supporting evidence use within them.

Despite the merits of our approach, a limitation of the study was that, at times, it was difficult to know from the literature which system the strategies to support evidence use in crisis zones are best handled by and, within a system, whether the strategies are focused on policy-makers, health-system leaders, humanitarian aid decision-makers or research producers. In addition, literature stemming from highly insecure contexts was less available as often researchers have difficulty conducting research in such settings. We addressed these limitations by drawing on existing knowledge translation literature to inform our interpretation of those who would be best positioned to support evidence use, and by suggesting strategies that can be applicable in highly insecure contexts (e.g. rapid evidence service).

In addition, despite our best efforts to examine evidence use in crisis zones, we were unable to make assertions on how context influences the application of strategies to support evidence use in crisis zones in different systems. For example, it is considerably easier to convene a stakeholder dialogue to inform policy options within a relatively stable country (e.g. for Syrian refugees in Lebanon) than to attempt to convene a dialogue in the midst of a war zone, outbreak or natural disaster. However, the findings presented in this study serve as a foundation for research that aims to explore the impact of context on strategic outcomes related to evidence use.

Implications for policy and practice

The results of our study may enable different actors in crisis zones to reflect on how they can utilise their professional position to support the use of evidence in decision-making, both in the system within their sphere of at least potential control and in the other systems that may be within their sphere of influence. For example, policy-makers in the political system can engage researchers in the health research system to help facilitate a stakeholder dialogue. We recognise that asking these actors to adopt or adapt established strategies and develop new ones that address all the barriers and leverage all of the facilitators is a big challenge to undertake. Our hope is that our framework and strategies serve as the starting point for incremental change to occur over time with the goal of getting closer to addressing the evidence needs of decision-makers in crisis zones.

Future research

Future studies could apply our theoretical framework in purposively sampled crises, examining specific facilitators of and barriers to research evidence use in decision-making as well as which strategies, if any, are used to leverage the facilitators or address barriers. This would be beneficial in drawing lessons from the framework’s application and in identifying gaps in the framework that need to be addressed. Additionally, future studies could apply the strategies in one or more of the four involved systems to examine whether and how they increase the prospects for evidence use in crisis zones. This could potentially better inform the design of future strategies to support the use of research evidence in such situations and contribute further to our understanding of what types of influence each strategy could be expected to have if successfully implemented in different systems and for different types of crises.

Conclusions

During a humanitarian response, decision-makers tend to rely on their professional judgement to make decisions, as their main goal is the provision of support to people affected by the crisis in often unpredictable situations. Part of the challenge in getting decision-makers to account for research evidence alongside their professional judgement is their uncertainty about whether the existing research evidence can be applied to their unique setting. What is currently missing from the theory is specific strategies to support evidence use in crisis zones that leverage the facilitators and address the barriers to evidence use within different systems (e.g. political, health). This study offers a new conceptual framework that addresses this gap by identifying and helping to explain the strategies that can be employed to integrate the use of evidence more systematically in crisis zones.