Background

Over the past three decades, remarkable progress has been achieved in addressing the leading causes of child deaths, resulting in a global decrease of over 50% in the under-five mortality rate. Worldwide, 62 countries met the Millennium Development Goal 4 target by 2015 [1]. Despite notable achievements at the country level, significant disparities persist across subnational regions. Insights gleaned from “exemplar” countries – those that have surpassed anticipated reductions in under-five mortality relative to countries with similar geography or socioeconomic conditions – may provide salient and actionable lessons. These lessons are critical to tracking existing disparities, optimizing limited resources, and accelerating progress towards the Sustainable Development Goals.

The adoption of standardized implementation science frameworks and robust analytic designs, such as quasi-experiments and mixed methods inquiry, is increasing. These methodologies facilitate a comprehensive understanding of implementation processes and outcomes, encompassing multi-level implementation determinants (barriers and facilitators). They enhance causal inference on country-level actions leading to under-five mortality reduction and complement results from trials that establish the efficacy of evidence-based interventions (EBIs) for preventing childhood mortality under controlled conditions [2]. The work described in this supplement provides evidence and essential guidance for low- and middle-income countries (LMICs) struggling to implement and scale up EBIs to address under-five mortality. The supplement materials unpack how and why the six exemplar countries (Bangladesh, Ethiopia, Nepal, Peru, Rwanda, and Senegal) made such notable progress. This is a critical step toward translating knowledge into practical application in other settings.

It is estimated that full and sustained coverage of EBIs delivered at or around the time of birth can lead to as much as a 72% reduction in neonatal mortality if implemented with high fidelity and quality [3]. Notably, available EBIs targeting the leading causes of under-five mortality have been adopted as routine policy across most LMICs, with numerous global financing mechanisms in place to ensure their accessibility [4]. Despite the integration of EBIs into normative guidelines, and greatly expanded access to EBIs at a national level, many LMICs have experienced slowdowns in the pace of under-five mortality reductions. This stagnation reflects a mix of unaddressed neonatal mortality, subnational disparities in EBI coverage, and insufficient guidance on, and persistent challenges in, translating knowledge into effective EBI delivery [5, 6]. Addressing these subnational disparities and ensuring high-fidelity, high-quality EBI delivery are essential to bend the curve on under-five mortality.

Contextual factors shape how well EBIs are delivered across settings, and therefore their relative impact on mortality. Studying these factors, including how they determine and change the delivery of EBIs, is critical to achieving the desired reductions in under-five mortality [6]. Specifically, applying implementation research tools is imperative to scrutinize the contextual factors that influence the delivery of EBIs. This approach builds on the body of knowledge on evidence-based strategies to strengthen health systems across settings. Ultimately, it seeks to enhance and sustain the delivery of EBIs, maximizing their potential impact on population-level health indices.

A critical analysis of implementation research tools used to assess drivers of EBI implementation

Tools from the emerging field of implementation science – such as frameworks, implementation strategies, and dissemination approaches – can effectively target and bridge gaps in delivering priority EBIs. Furthermore, these tools aid in communicating outcomes to a broader audience [7]. Despite the accessibility of specific frameworks, there is no consensus on how to select the appropriate framework for a particular question, whether to use one framework or combine several, or how to meaningfully integrate data across domains of interest [8]. Importantly, most implementation science frameworks lack validation in LMICs, limiting their reliability and usefulness. For example, a systematic review of applications of the EPIS (Exploration, Preparation, Implementation, Sustainment) framework identified only one study that used this framework in sub-Saharan Africa (South Africa) [9]. Moreover, there is limited application of the Consolidated Framework for Implementation Research (CFIR) – another commonly used implementation science framework – within LMICs [10]. In this supplement, the investigators developed a hybrid framework based on existing frameworks (CFIR, EPIS, and RE-AIM) [11] to describe how EBIs were implemented and how outcomes were achieved, specifically in LMICs. As they note, the existing frameworks alone could not adequately cover the relevant themes that surfaced in investigating EBI implementation. Recognizing “adaptation” as a critical characteristic of the implementation and scale-up of EBIs, EPIS was expanded to EPIAS [12]. EPIAS was tested through application to describe salient contextual factors and capture drivers of success in the six countries with greater-than-expected reductions in under-five mortality. EPIAS efficiently mapped critical contextual determinants of implementation, including the need to i) engage leadership and communities; ii) adapt EBIs and strategies to the context; iii) embed EBIs within existing health delivery systems; and iv) consider socioeconomic disparities. While EPIAS provides a framework for identifying determinants of implementation success in LMIC settings, further inquiry is merited to assess how it applies in other contexts, including countries with poorer performance.

The implementation research methods used to identify exemplar countries – identifying countries that outperformed the average, followed by mixed methods inquiry to identify drivers of this positive deviance – would theoretically apply at a subnational level and could be used by country leadership to identify and spread what works in practice within a country’s borders. However, the field faces a number of constraints in carrying out this exercise. First, there remain substantial gaps in measuring under-five mortality (as well as other population-level health indices) in LMIC settings that lack robust and representative vital statistics systems. Gold-standard population-level surveys are conducted infrequently and are imprecise in estimating mortality rates, particularly at subnational administrative levels. Second, while acceptable model-based approaches exist to estimate under-five mortality at the subnational level, they insufficiently capture important determinants of mortality, are not adjusted for context, and rest on strong assumptions that often do not hold. Given the intention to provide generalizable results to guide replication in other settings, caution is merited in causally attributing mortality improvements to specific contextual determinants or implementation strategies. Nevertheless, conceptually, the implementation research methods applied in exemplar countries could define a model for generating granular evidence to inform policies and adjustments to implementation efforts. To achieve this end, we believe it is pertinent to consolidate frameworks such as EPIAS for LMIC settings, in addition to improving population health measurement at subnational levels, as these estimates will define the sampling used to study contextual drivers of differential performance.
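To make the positive-deviance logic concrete, the sketch below illustrates one way such a screen might be operationalized: rank countries or subnational units by how much their observed decline in under-five mortality exceeds the decline predicted from simple covariates, then select the strongest outperformers for deeper mixed methods inquiry. This is a minimal, assumption-laden sketch – the column names, covariates, and linear model are illustrative, not the analytic approach used in the supplement.

```python
# Minimal sketch (illustrative only): flag potential "positive deviant" units by
# comparing observed declines in under-five mortality (U5MR) with declines
# predicted from simple socioeconomic covariates. Column names, covariates, and
# the linear model are assumptions for illustration, not the supplement's method.
import numpy as np
import pandas as pd

def flag_positive_deviants(df: pd.DataFrame, n_top: int = 6) -> pd.DataFrame:
    """Rank units (countries or districts) by how much their observed U5MR
    decline exceeds the decline predicted from covariates."""
    # Outcome: annualized rate of reduction in U5MR over the study period.
    y = df["u5mr_annual_reduction_pct"].to_numpy(dtype=float)

    # Simple covariates thought to predict mortality decline (illustrative).
    X = df[["gdp_per_capita_growth_pct", "baseline_u5mr"]].to_numpy(dtype=float)
    X = np.column_stack([np.ones(len(X)), X])  # add intercept

    # Ordinary least squares fit; residuals capture performance beyond
    # what the covariates alone would predict.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta

    out = df.assign(predicted_reduction=X @ beta, excess_reduction=residuals)
    # Largest positive residuals = candidates for deeper mixed methods inquiry.
    return out.sort_values("excess_reduction", ascending=False).head(n_top)

# Example usage with fabricated placeholder values (not real data):
# df = pd.DataFrame({
#     "unit": ["A", "B", "C", "D"],
#     "u5mr_annual_reduction_pct": [4.1, 2.3, 5.0, 1.8],
#     "gdp_per_capita_growth_pct": [3.0, 2.5, 2.0, 4.0],
#     "baseline_u5mr": [90, 60, 110, 50],
# })
# print(flag_positive_deviants(df, n_top=2))
```

In practice, any such screen would need the subnational mortality estimates and context adjustments discussed above; the quality of those inputs determines whether the residual ranking identifies genuine positive deviance rather than measurement noise.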

The application of mixed methods has gained relevance in implementation research, particularly by maximizing synergies between quantitative and qualitative methods to measure the impact of EBIs while explaining how, why, and in what context EBIs work. Effective mixed methods inquiry is not simply the application of both quantitative and qualitative research techniques; it requires intentionally and consistently integrating quantitative and qualitative results to increase the relevance and rigor of study evaluations and findings [13]. The mixed methods case studies presented here used quantitative data to identify how exemplar countries made progress in reducing under-five mortality. Further specifying how qualitative and quantitative data were integrated – including the rationale for applying a mixed methods design and how each data type informed the other at every step of the study – facilitates interpretation and provides a model for others to replicate. Explaining how quantitative data informed the qualitative sampling strategy and data collection, and how findings were meaningfully combined and interpreted, is often neglected in mixed methods inquiry, thereby undermining the overall goal of the approach.

Conclusion

The work presented in this supplement is encouraging from several perspectives. It sets the stage for implementation researchers, particularly those in LMICs, to improve evaluations of what works to increase coverage of EBIs and reduce under-five mortality. It demonstrates a harmonized methodology tailored for intricate evaluations spanning multiple countries, yielding systematic results with the potential for greater generalizability and for accelerating mortality reductions in high-need settings. It also helps illuminate policy implementation processes and outcomes, as well as stakeholder dynamics, in LMICs. While we have focused on child mortality, we firmly believe that EPIAS is sufficiently flexible to be applied to other health areas and across implementation strategies. As a framework adapted specifically for LMICs, EPIAS should be tested by other implementation researchers and funders to advance the science of implementation and bolster efforts to achieve the Sustainable Development Goals.