Abstract
Science–policy engagement efforts to accelerate climate action in agricultural systems are key to enable the sector to contribute to climate and food security goals. However, lessons to improve science–policy engagement efforts in this context mostly come from successful efforts and are limited in terms of empirical scope. Moreover, lessons have not been generated systematically from failed science–policy engagement efforts. Such analysis using lessons from failure management can improve or even transform the efficacy of efforts. To address this knowledge gap, we examined challenges and failures faced in science–policy engagement efforts of the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS). We developed an explanatory framework inspired by Cash et al.’s criteria for successful knowledge systems for sustainable development: credibility, salience, and legitimacy, complemented with insights from the wider literature. Using this framework in a survey, we identified factors which explain failure. To effectively manage these factors, we propose a novel approach for researchers working at the science–policy interface to fail intelligently, which involves planning for failure, minimizing risks, effective design, making failures visible, and learning from failures. This approach needs to be complemented by actions at the knowledge system level to create an enabling environment for science–policy interfaces.
1 Introduction
Agriculture and its related activities contribute 23% of global anthropogenic greenhouse gas emissions (IPCC 2019), and significant emissions reductions are needed from the sector to meet the target of limiting global warming to 2 degrees Celsius set out in the Paris Climate Agreement (Wollenberg et al. 2016). Efforts in the lead-up to the 26th Conference of the Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC) to keep warming well below 2 degrees, and to strive toward 1.5 degrees, would require even greater ambition within the sector. At the same time, the sector is the source of livelihoods for those dependent on the over 475 million small farms (Lowder et al. 2014). These small-scale farmers are among the most vulnerable to the impacts of climate change, and actions are needed to enable them to cope with these impacts (Loboguerrero et al. 2018). Actions to mitigate and adapt to climate change are all the more urgent given that the world has seen an increase in hunger since 2014 (FAO 2018). In this context, the Agricultural Research for Development (AR4D) community needs to step up efforts to innovate in the face of climate change, and to inform decision-making to ensure large-scale uptake of innovations (Dinesh et al. 2018; Steiner et al. 2020; Vermeulen et al. 2012b). Science–policy engagement has become a crucial tool for researchers working on agriculture and climate change to inform decision-making and enhance the impact of their work (Dinesh et al. 2018; UNEP 2017).
Research on science–policy engagement in the context of environmental change has identified ways to improve the efficacy of these efforts (Cash et al. 2003; Clark et al. 2016a; Holmes and Clark 2008; Kristjanson et al. 2009). Most of these lessons are drawn from successful case studies, and empirical studies are still emerging (Dunn and Laing 2017; Van Enst et al. 2014). So far, lessons have not been generated systematically from failures (Turnhout et al. 2020; Wyborn et al. 2019), even though learning from failure can be a powerful tool to facilitate innovation; as Thomas Watson said, "the way to succeed is to double your failure rate" (von Stamm 2018). Learning from failure has been found to drive innovation in various contexts (Danner and Coopersmith 2015; Heath 2009; Knott and Posen 2005; von Stamm 2018), including telecommunications (Baumard and Starbuck 2005), information technology (Gupta et al. 2019), policy-making (Dunlop 2017), pharmaceuticals (Khanna et al. 2016), and microfinance (Woolcock 1999). Despite these advances, our understanding of failures and of learning from them remains limited (McGrath 2011), and this is especially true in the case of science–policy engagement for climate action in agriculture. There is an opportunity to address this knowledge gap while applying the lessons generated to improve the efficacy of science–policy engagement efforts and thus accelerate climate action.
Science–policy engagement scholars have identified challenges involved in the engagement process (e.g., Laing and Wallis 2016; Neßhöver et al. 2013; Sarkki et al. 2014; Talwar et al. 2011; Van Enst et al. 2014). However, many of these insights emerge from studying successful case studies, and while successes are recorded and reported, failures often remain undetected or are neglected (McGrath 2011; Rajkotia 2018; Vinck 2017). At the same time, studies in science–policy engagement show that current approaches to informing policy processes are not always delivering sufficient results (Hoppe et al. 2013; Kirchhoff et al. 2013; Strydom et al. 2010; van Kerkhoff and Lebel 2006), and there is a need to shift to fundamentally different approaches. Scholars have noted that failures in science–policy engagement are inevitable (Armitage et al. 2015; Lawton 2007; Wyborn et al. 2019), yet an effort to systematically generate lessons and learn from these failures has not been undertaken. In this context, this paper aims to generate lessons from unsuccessful science–policy engagement efforts and challenges of the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS).
CCAFS is an international research program with a focus on outcome-oriented research (Thornton et al. 2017; Vermeulen et al. 2012a), working with over 700 partner organizations at the local, sub-national, national, regional, and global levels to improve the livelihoods of small-scale farmers in the face of climate change. Outcome delivery is a key criterion to measure performance of projects. CCAFS interprets outcomes as changes in policies and practices of non-research partners (Dinesh et al. 2018; Earl et al. 2001); this includes informing policies and practices of Governments, international organizations, private sector, non-governmental, and farmer organizations. Examples include informing national policy and associated investments in Cambodia through participatory scenarios and provision of climate services to farmers in Senegal (Westermann et al. 2018). The program’s performance in delivering such outcomes is monitored through annual reporting processes. CCAFS’ emphasis on outcome delivery and science–policy engagement as a tool to achieve outcomes makes it a good case to study in the context of the emerging literature on Science–Policy Interface Organizations (SPIORGs) (Sarkki et al. 2019), within the wider literature on boundary organizations (Guston 2001).
CCAFS management is open to learning from its experiences, and past studies have generated lessons from the program’s science–policy engagement efforts (Cramer et al. 2018; Dinesh et al. 2018; Zougmoré et al. 2019), but, also within CCAFS, lessons from unsuccessful efforts and challenges are yet to be studied systematically. Studying failure within organizations is difficult because of psychological and organizational barriers (Cannon and Edmondson 2001). However, as a research program with a mandate for “lesson learning”, CCAFS is open to learning from its failures. In the present paper, we have endeavored to combine insider perspectives from two of the authors associated with the program with outsider perspectives from the other co-authors. The research questions we answer in the context of facilitating change for climate action in agriculture through science–policy engagement are: what challenges and failures can be faced? What strategies can be adopted to overcome these challenges and failures? To answer these questions, we first developed an explanatory framework based on the literature, consisting of factors which could potentially explain failure in science–policy engagement efforts. We then used this framework as the basis to administer a survey to CCAFS’ project leaders and coordinators. The results from this survey were analyzed to identify challenges and failures in the CCAFS context, and an approach has been developed to “fail intelligently.” Thereafter, we also conducted interviews with CCAFS management to validate our findings. Thus, in addition to contributing to the literature on science–policy engagement, failure management, and AR4D, the present paper will also help researchers to develop more effective science–policy engagement strategies which are more resilient to challenges and failures.
2 Explanatory framework
We understand unsuccessful science–policy engagement efforts or failures as instances where the expected outcomes of efforts are not achieved, i.e., where goals are unmet (Kunert 2018; Leoncini 2017). In the context of CCAFS, this means efforts to drive changes in policies and practices of non-research partners are unsuccessful. Failures arise as a result of challenges or "fail factors" which may be faced in the science–policy engagement process, and we consider these challenges or "fail factors" to be independent variables, with "failure of science–policy engagement efforts to achieve expected results" as the dependent variable. While several challenges may be experienced in policy-engagement processes, "fail factors" are differentiated by the direct link that they have to the dependent variable. From the perspective of executing science–policy engagement efforts, the existence of these "fail factors" may be considered an "early warning sign" (Leoncini 2017) that the expected outcome may not be achieved. The explanatory framework (Table 1) is envisaged as a context-specific tool to analyze failures (Edmondson 2011), with the proposed fail factors hypothesized based on the literature on science–policy engagement. Three of the hypothesized fail factors are a reversal of the success factors identified by Cash et al. (2003), wherein the principles of credibility, salience, and legitimacy are considered key success factors in science–policy engagement. We in turn make the assumption that a lack of these success factors could lead efforts to fail. We complemented the Cash principles with additional factors including the role of intermediaries, power dynamics, and institutional capacity, as these feature prominently in the literature on science–policy engagement efforts.
3 Methods
Learning from failures within organizations is difficult, and even learning organizations struggle due to the challenges involved (Cannon and Edmondson 2005). These challenges include technical ones, due to a lack of understanding of processes to learn from failure, as well as social challenges which stem from psychological reactions to failure. In this context, learning from failure, although important, is a challenging endeavor. To overcome technical challenges faced in learning from failure, we drew on the literature on failure management. To address social challenges around learning from failure, we tried to create a safe and open environment for researchers to share challenges and failures that they have faced. This was done in several ways. Firstly, while two of the authors are associated with CCAFS, the other authors are external to the program and ensure greater objectivity. The survey was sent out by the second author, who is an academic and not directly involved in CCAFS, which helped ensure that respondents did not feel at risk of evaluation by the program's management. Findings from the survey were processed anonymously and are presented at an aggregate level. From the responses, it is not possible to deduce the identity of individual respondents, nor is it possible to relate the responses to the performance of individuals. Despite this, there was substantial non-response; this might signal that talking or writing about failure is a delicate matter within CCAFS, as in other contexts.
In order to understand failures faced in CCAFS science–policy engagement efforts, we conducted a literature review and developed an explanatory framework (Table 1). We then used the explanatory framework to design a survey (Annex 1) which was administered to Leaders and Coordinators of CCAFS projects. The objective of this survey was to validate the explanatory factors identified, gain further insights on how these factors affect science–policy engagement efforts, and identify additional explanatory factors. CCAFS had a portfolio of 54 ongoing research projects at the time of this study, and the survey was sent to the Leaders and Coordinators of all these projects, as it was not possible to identify projects with explicit failures since failures are not formally reported. Therefore, we took an open-ended approach, reaching out to all Project Leaders and Coordinators. In addition to the current portfolio, we also contacted Project Leaders and Coordinators of completed projects, to ensure that prior experiences were also captured. The survey was sent to a total of 156 recipients and we received 24 complete responses, which form the basis of our analysis. While the response rate is low compared to the average survey response rate of 52.7% (SD 20.4) in organizational research (Baruch and Holtom 2008), this reflects the difficulty associated with studying failure. Thirteen recipients who attempted to answer the survey did not complete it. This may have been due to uncertainty about issues related to failure (as explained by one respondent) or concern about disclosing these experiences. While we recognize that the sample is too small to be statistically representative, the insights gained are useful for interpretative qualitative research which captures experiences from the CCAFS context.
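The response-rate comparison above can be checked with a quick calculation; the following sketch uses only the figures reported in the text (156 recipients, 24 complete responses, and the Baruch and Holtom 2008 benchmark of 52.7% with SD 20.4), and the variable names are ours.

```python
# Sanity check on the survey response-rate figures reported in the text.
recipients = 156
complete_responses = 24

response_rate = complete_responses / recipients * 100  # in percent

# Benchmark for organizational research surveys (Baruch and Holtom 2008).
benchmark_mean = 52.7  # percent
benchmark_sd = 20.4    # percent

# How far below the benchmark mean this study's rate falls, in SD units.
sds_below_mean = (benchmark_mean - response_rate) / benchmark_sd

print(f"Response rate: {response_rate:.1f}%")            # ~15.4%
print(f"SDs below benchmark mean: {sds_below_mean:.1f}")  # ~1.8
```

In other words, the achieved rate of roughly 15.4% sits close to two standard deviations below the benchmark mean, which is consistent with the paper's point that studying failure depresses participation.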
To address challenges associated with studying failure, the responses were anonymized, enabling respondents to frankly share their challenges and unsuccessful science–policy engagement experiences. The results were analyzed thematically (Guest et al. 2011), and common themes were identified using an inductive approach and are presented in the "Results" section. Thereafter, the Leaders of CCAFS' four flagship research programs (priorities and policies for CSA, climate-smart technologies and practices, low emissions development, and climate services and safety nets), and the program's Director and Head of Global Policy Research, were interviewed using a semi-structured approach (see Annex 2) to gather further insights. Based on these insights, we further refined the explanatory factors consisting of challenges in science–policy engagement efforts, and generated lessons to fail intelligently and improve the efficacy of efforts. It must be noted that the survey respondents and interviewees do not represent research users; while it is important to capture the perspectives of users, in the present study we endeavored to get greater granularity about the issues faced by researchers and gain perspectives on knowledge production.
4 Results
4.1 Knowledge generated is not perceived as credible
Forty-two percent of the respondents associated their challenges with the first type of factor listed in Table 1: demonstrating credibility to partners. For these respondents, the issues varied, ranging from time constraints in building credibility to the complexity and uncertainty of research outputs, which undermine efforts to build credibility. A lack of quantitative data to support engagement efforts, and of the capacity to conduct analyses required by decision-makers, also affected efforts to build credibility (Table 2).
4.2 Knowledge generated is not salient
A majority of the respondents (63%) associated their challenges with research goals, questions, and results not being salient to the needs of decision-makers, the second category of explanatory factors from Table 1. Respondents encountered a number of challenges (Table 3), including lack of sufficient conversations and dialogue with decision-makers, differences in timelines of research and decision-making, retaining decision-makers' attention, misunderstandings with decision-makers, and non-technical factors needed to inform decisions. Science–policy engagement can often be a long process and ensuring that the salience is retained across this process, even when there are changes to other factors, is important. A respondent noted, "Main challenge here is with respect to continuous changes of government staff. When the project starts, the goals are aligned but once people move or leaders change, those goals, all of a sudden, become not very well aligned." This points to the need for adaptive strategies to ensure and retain salience in the engagement process. Such strategies included the setting up of science-policy multi-stakeholder platforms, getting results validated by decision-makers, applying methods which are quicker, being flexible to changes in the decision-making process, and developing a coherent theory of change and network mapping.
4.3 Knowledge generated is not legitimate
Only 22% of the respondents found their challenges to be related to decision-makers not finding the research to be legitimate, the third explanatory factor from our framework (Table 1). In these instances where decision-makers found issues relating to legitimacy, this was due to the complexity of research, theoretical rather than practical orientation, lack of sufficient information (including other views), conflicts of interest, and existing prejudices, for example, on gender roles.
In the climate change context, communicating uncertainty can be a factor which informs the perceived legitimacy of knowledge, and respondents undertook a number of efforts to communicate uncertainty effectively. These included participatory processes to engage stakeholders and make them aware of uncertainties, convening roundtables with decision-makers, developing multiple scenarios, and tailored approaches to supporting decision-making. Overall, a fair and balanced approach where researchers are upfront about the limitations and uncertainties associated with their research was the dominant strategy, and the existence of different communication channels was crucial.
4.4 Engagement process lacked appropriate intermediaries
Lack of appropriate intermediaries was not found to be a problem, as a majority of the respondents (73%) relied on intermediaries including knowledge brokers and boundary organizations in their engagement efforts. These included Non-Governmental Organizations (NGOs), United Nations (UN) agencies, private sector consultancies, national research institutes, and government agencies (Table 4). In addition to institutions, the role of thought leaders and champions was crucial in several instances. These are individuals well connected and respected in decision-making processes, and able to connect researchers to these processes. Referring to one such thought leader, a respondent said, "He seems to have links to everyone. He invited CCAFS to participate in a working group that was going to consolidate efforts on adaptation tracking tools." Of the respondents that did not use intermediaries, only two indicated that using intermediaries could have been valuable.
4.5 Adverse power dynamics
Adverse power dynamics were a key factor affecting the science–policy engagement process, observed by 70% of the respondents, and the role of researchers within these dynamics influences the success or failure of efforts (Table 5). There were differences in how respondents viewed the role of researchers in such power dynamics. While some of the respondents believed that researchers should remain distant and neutral, others believed that researchers should actively engage; as one respondent remarked, "when power dynamics are in play, you play within them. Scientists and science are not outside of political action, we're in the middle of it, if not necessarily central to it." Power dynamics are often not within the control of researchers, and in some instances the science community is not considered to be a political heavyweight; such factors are also taken into account when developing strategies to navigate these dynamics. Many approaches were taken to navigate the power dynamics encountered, including remaining neutral and evidence based, providing quid pro quo support to help advance goals, engaging in political processes, and identifying champions who can help navigate the power dynamics. Overall, researchers need to be extremely cautious while engaging in such power dynamics, engaging proactively but respectfully.
4.6 Lack of institutional capacity
Most of the respondents (78%) found that their impact partners had adequate capacity to absorb research findings. Where capacity gaps existed, these related to sufficient technical staff not being available, capacity gaps to achieve scale with initiatives, and a lack of understanding of technology requirements and funding models for effective implementation.
4.7 Inductively derived fail factors in science–policy engagement
In addition to exploring the hypothesized fail factors of our explanatory framework, we posed an open-ended question on the top three reasons why science–policy engagement efforts failed to achieve expected outcomes, and respondents came up with a number of different reasons which we list as empirical fail factors (Table 6). Where these fail factors add further contextual detail to the hypothesized fail factors in our explanatory framework (Table 1), we have indicated this, while other factors outside the explanatory framework are also listed. Our assumption that lack of salience is a key fail factor is validated, but the survey results show the nuance involved. While in some cases this is because research results do not address the needs of decision-makers, in other cases it is due to a lack of demand for science-based solutions among decision-makers. Similarly, while we hypothesized institutional capacity gaps among partners to be a fail factor, we find that these gaps also extend to CCAFS researchers and manifest in the form of limited capacity for engagement and communications and to form and maintain partnerships. Differences in organizational culture are also a key manifestation, emerging from a lack of capacity among both researchers and partners to adapt to the culture of the other. The main additional fail factor which we identified concerns funding uncertainties, which affected science–policy engagement efforts.
4.8 Fail factors contextualized in examples
The above sections are drawn from experiences of project leaders and coordinators. Through additional interviews with the CCAFS management, we identified concrete examples of failed science–policy engagement efforts, which help contextualize the above results. These examples are summarized in Table 7, together with the fail factors which led to efforts failing. It must be noted that adverse power dynamics and lack of institutional capacity are the two predominant fail factors identified from the CCAFS management’s perspective. This may be because in these examples, CCAFS management has taken a very proactive role through their portfolio management function to ensure that research results are salient, credible, and legitimate, complemented by support to form and develop partnerships. However, adverse power dynamics often affected the outcome, and in other cases the research partners chosen lacked the capacity or skills necessary to realize the outcome.
5 Discussion
The results provide detailed empirical insights into failed science–policy interactions, a hitherto underexposed field of study. Experiences with failure were derived from reports of interviewees and, therefore, might to some extent be idiosyncratic. Nevertheless, they do provide new insights into challenges and failures which go beyond the factors currently identified in the literature. These insights drawn from unsuccessful efforts not only show “what not to do” but also how lessons can be generated systematically and how management can adapt to emerging failures in science–policy engagement efforts. In this section, we discuss the implications of the results for research and practice of science–policy engagement efforts.
5.1 Credibility, salience, and legitimacy
The three principles of enhancing credibility, salience, and legitimacy (Cash et al. 2003) have formed the basis of efforts to improve the efficacy of science–policy engagement efforts. We hypothesized that the absence of these principles could lead efforts to fail. The results show that lack of credibility was not an important fail factor for respondents. While this is an important finding, science–policy engagement is context specific, and the specific contexts within which respondents operate could have influenced this. A previous study on success factors of CCAFS science–policy engagement efforts (Dinesh et al. 2018) found that the credibility of the CGIAR and its researchers was a key success factor. This may point to a broader perceived credibility of the organization and explain why a lack of credibility was not faced by most respondents. From the respondents who did face this issue, we gain lessons which can be useful to strengthen science–policy engagement efforts. These include spending time and effort to build credibility, addressing complexity and uncertainty, and producing case studies and quantitative data which can support engagement efforts.
Lack of salience, on the other hand, was found to be a key fail factor. However, this fail factor arises not only when efforts on the part of researchers and research managers to make outputs salient prove insufficient, but also when there is a lack of demand for salient knowledge. CCAFS has an emphasis on generating evidence salient to the needs of decision-makers (Dinesh et al. 2018; Zougmoré et al. 2019), and this emphasis has enabled the program to deliver successes which have been recorded in the literature (Westermann et al. 2018), but there are areas where this can be further strengthened, for example, by improving dialogue on problem definitions/problem structuring to make results more relevant (Funtowicz and Ravetz 1997; van der Hel 2016), aligning the timelines of research and decision-making, accommodating changes in decision-makers, and communicating and engaging better. Development of salient knowledge needs to start from true interaction with next users (i.e., the immediate next users of research rather than ultimate beneficiaries), as opposed to an approach of retrofitting existing knowledge and tools to needs, as this creates path dependence (Interview-C 2019). In engaging next users, care must be taken to address criticisms of such engagement approaches, including the costs versus benefits and adverse power dynamics (Oliver et al. 2019; Turnhout et al. 2020; Wyborn et al. 2019).
Lack of legitimacy was also not validated as a fail factor by respondents of our survey, and this may also be a context-specific feature of CCAFS, where good practices around ensuring legitimacy have been noted in the literature (Vervoort et al. 2013; Zougmoré et al. 2019). We also considered issues around communicating uncertainty in relation to legitimacy, and found that a number of actions were taken to communicate uncertainty in a fair and balanced manner. However, as noted by Sarkki et al., management of uncertainty is considered important in relation to all three principles (Sarkki et al. 2015), and responses in relation to credibility also show the relationship between communicating uncertainty and the credibility of research outputs and institutions. The relationship between communicating uncertainty and salience has been studied by others (Bromley-Trujillo and Karch 2019), and therefore communicating uncertainty is relevant in relation to all three principles.
5.2 Institutional arrangements and capacity
Appropriate institutional arrangements and capacity are key to ensuring that knowledge leads to changes on the ground (Múnera and van Kerkhoff 2019). In this context, lack of institutional capacity among partner organizations was identified as a fail factor (Table 1). The results show that it is not only absorptive capacity that needs to be enhanced but also the capacity of researchers to do outcome-oriented research and engagement activities. For example, the role of partnerships is quite central to delivering outcomes, and this includes partnerships with boundary organizations, development agencies, government agencies, farmer organizations, etc., and a lack of suitable partnerships or the non-performance of partnerships has caused efforts to fail. In the Honduras example, for instance, developing different and more in-country partnerships could have been effective (Interview-B 2019). This stems from a lack of capacity to develop and manage suitable partnerships. Although CCAFS has an emphasis on partnerships at the programmatic level, failure arises from the lack of the right partnerships in specific contexts. While this is difficult to pre-empt, as the performance of partners may change over time, adaptive management, which enables revisiting partnerships in response to needs, could be an effective strategy. Skills to develop partnerships also need to be fostered, as these tend to be different from research skills. As noted in the Mali case, skills to develop partnerships may have been absent, resulting in efforts not succeeding (Interview-C 2019). Models of partnerships which have been tested in other contexts can also offer inspiration for CCAFS partnership building efforts (Dentoni et al. 2018).
While CCAFS has been successful in leveraging the potential of science–policy engagement to achieve development outcomes (Dinesh et al. 2018; Thornton et al. 2017; Westermann et al. 2018), the degree to which the principles adopted at a programmatic level are operationalized varies; while there are projects which have taken this on board to deliver outcomes, there also remain projects and efforts which do not have effective science–policy engagement and communications strategies in place. CCAFS as a program advocates dedicating a third of research efforts to engagement and communications (Dinesh et al. 2018); however, a key reason for failure was that researchers did not have sufficient time to dedicate to engagement and communications activities, for example, in the cases from Mali and South East Asia. Effective implementation of programmatic priorities, including through resource allocation and capacity building, can help overcome this to a certain extent.
Limited institutional capacity on the part of decision-makers has been identified as a fail factor. CCAFS does make efforts to build capacity, including an emphasis on institutional strengthening (CCAFS 2017); however, capacity gaps still exist among decision-makers. Strengthening efforts to build capacity is needed, but capacity is to some extent the result of the political and knowledge system, and a concerted effort beyond a single program or institution is needed to build the capacity of decision-makers to respond to the challenges of climate change. Research and decision-making are two entirely different cultures, and while science–policy engagement offers a way of addressing these differences, deep cultural differences can cause efforts to fail. For example, the timeframes that the two communities operate to are entirely different (Sarkki et al. 2014) and often impossible to reconcile. The role of knowledge brokers and translators can help bridge these differences, but a fundamental revisiting of organizational cultures is needed if both communities are to be seamlessly integrated in an ongoing science–policy engagement effort. In examples from Kenya and the UNFCCC, although efforts failed to achieve the expected outcomes in the expected timeframe, these outcomes were realized in later years owing to the political and institutional factors involved.
Much emphasis has been put on co-production of knowledge and social learning to engage decision-makers; however, in the contexts in which CCAFS works, high turnover of decision-makers was observed as a key challenge and a cause of failure. This points toward the need for engagement processes to go beyond individuals and to be institutionalized to ensure longevity. However, weak institutional structures may deter implementation of such efforts in some contexts. Moreover, recent work by Turnhout et al. shows that for co-production to be transformative, it needs to address unequal power relations (Turnhout et al. 2020).
5.3 Navigating power dynamics
Power dynamics play an important role in linking knowledge to action (Clark et al. 2016b; Turnhout et al. 2020; van Kerkhoff and Lebel 2006). In science–policy engagement efforts, researchers move outside the knowledge production process to enter the political realm, where power dynamics are crucial and navigating them may determine whether the expected outcome is achieved. Different approaches to engagement may be pursued, with varying implications for the empowerment of the stakeholders involved (van Kerkhoff and Lebel 2006). The power of researchers engaging in decision-making processes varies: researchers participating in a process set by authorities may have power only to define problems, whereas in formal organization-level engagement researchers are more powerful, though still not in a position to challenge the power of decision-makers (van Kerkhoff and Lebel 2006). This relative power that researchers hold in the engagement process can cause efforts to succeed or fail. For example, while working with APEC, an integration approach (van Kerkhoff and Lebel 2006) was adopted to set a shared agenda; however, due to political priorities at play and researchers not being powerful enough players, these efforts failed to realize expected outcomes. This was also true in the cases of informing decisions of the Nigerian Government and of USAID, where changes in governments and subsequent leadership played a key role in defining priorities, and researchers were not powerful enough to challenge this. These power relations can be reversed in co-production processes established by researchers themselves, wherein researchers tend to hold more power, and there is a need to ensure that other stakeholders are empowered (Turnhout et al. 2020; Wyborn et al. 2019).
In addition to the power play between researchers and decision-makers, an additional perspective observed was the role of other competing researchers/research groups. There is often competition among research groups for “their results” to inform decisions and to have the ear of the decision-makers. Such competition can be an external factor which affects the success or failure of engagement efforts. In the CCAFS context, such competition was not only observed from other research institutions but also within the same organization (Interview-B 2019).
5.4 Funding uncertainties
The role of funding organizations and funding commitments in determining the priority accorded to science–policy engagement has been noted (Arnott et al. 2020; Sarkki et al. 2019). This crucial role also emerged during our study: specifically, we found that annual changes to funding make it difficult for researchers to plan and execute multi-year engagement strategies, and have been an important fail factor. While adaptive planning on the part of researchers can help mitigate this to some extent, large-scale changes to funding beyond the control of researchers can be detrimental. This can only be addressed through multi-year commitments and certainty from donors, which maximize the potential to address challenges. Funding uncertainties extend beyond funding for engagement to include funding for implementing science-based decisions: uncertain funding to implement and scale science-based solutions has also been identified as a cause of failure. However, while funding uncertainties have been a fail factor, it is also important to be cognizant that they should not be used as an excuse for other fundamental problems around project design and implementation (Interview-F 2019). Examples are emerging of researchers grouping together to address challenges of scarce resources (Sarkki et al. 2019), and similar models may also benefit CCAFS and other organizations.
6 Failing intelligently at the interface between science and policy
While we have identified the key causes of failure of science–policy engagement efforts in the context of climate action in agriculture, failure is inevitable, as studies in other sectors have shown. Therefore, rather than endeavoring to avoid failures entirely, a conscious effort to fail intelligently is more desirable. Such an approach will enable researchers to improve the efficacy of their science–policy engagement efforts. Intelligent failure arises from thoughtfully planned actions, executed effectively, at a modest scale, in areas where lessons can be generated from such failures (Sitkin 1992). This involves taking cognizance of failures, learning from them, and developing a culture around failing intelligently to improve and innovate (Cannon and Edmondson 2005). In relation to science–policy engagement efforts in the context of climate action in agriculture, we propose the following steps to fail intelligently (Fig. 1). These steps are inspired by Cannon and Edmondson (2005), and aim to apply their generic set of principles to science–policy engagement efforts:
-
1.
Plan for failures: At the design stage, take cognizance of failures which may be experienced in the science–policy engagement process and develop strategies to overcome these. The fail factors identified in the present paper offer researchers insights into potential challenges which may be faced and can enable the development of appropriate mitigation plans.
-
2.
Minimize risks: Where there is a possibility of failure, ensure that risks are minimal in terms of resources expended and time spent in science–policy engagement efforts.
-
3.
Design efforts intelligently for generating lessons, in success or failure: Design science–policy engagement strategies intelligently, so that in the event that these strategies fail, they generate lessons which can enable researchers to navigate similar challenges in the future, for example, in identifying early warning signs of failure (Leoncini 2017).
-
4.
Make failures visible: Record failures carefully and foster a culture where failures are admitted early, and understood to be part of the culture of experimentation and innovation. This can be the most difficult step as it requires a change in organizational culture.
-
5.
Learn from failures: Actively generate lessons from failures to improve the efficacy of science–policy engagement efforts.
It must be noted that the applicability of these steps is context dependent, and in a highly competitive environment, some steps may be easier to implement than others. For example, in the CCAFS case, we noted that failure is a delicate subject overall, and most program participants were not willing to share their experiences in our survey. This means that step 4 would be the most challenging to implement in such a context. However, in most contexts, the right incentives and support from management would be crucial to empower researchers to learn from their failures in science–policy engagement.
7 Conclusions
We provide empirical insights into the challenges and failures faced in science–policy engagement efforts for climate action in the agricultural sector. By analyzing failures rather than successes, we provide a perspective which has until now not been reflected upon in the literature on science–policy engagement. For the literature on failure management, we provide insights from applying its concepts in the science–policy engagement context. Specifically, we have identified fail factors which can be addressed to improve the efficacy of science–policy engagement processes. These include the lack of salience in research results, lack of institutional capacity, adverse power dynamics, and funding uncertainties. Various dimensions of these fail factors and their relationship to the literature have been discussed, enabling future research and practice. Future research can shed light on the context-specific performance of the fail factors as well as identify additional fail factors. Efforts to capture user perspectives on failure of science–policy engagement efforts will also be valuable. However, research efforts should transcend disciplinary boundaries to offer fresh insights to address pressing knowledge needs.
To address the fail factors identified, we propose, first, that capacity-building efforts be undertaken, both within the research community and among decision-makers, to build buy-in for science-based solutions. Priority should be accorded to building the capacity of expert intermediaries and boundary spanners. Second, better matching of demand and supply of knowledge is needed, for example, through the production of synthesis outputs in formats which are useful for decision-makers. Platforms which facilitate matching of demand and supply can also play an important role. Third, to address the power imbalances faced by researchers, efforts need to be taken to strengthen the position of researchers, through their technical expertise and clear communications. However, the knowledge system operates at different scales, and it is necessary to be cognizant of this diversity (Warghade 2015). Principles of research funding, with their heavy emphasis on success, need to be revisited to see failure as possible, acceptable, and even valuable. Finally, an understanding of which factors fall beyond the sphere of influence of any given project is also valuable for those involved in that specific project. Even though external factors cannot be steered, they can still be adapted to; moreover, individual researchers and projects can work actively to extend their sphere of influence to bring factors that start as external within reach, something that may be especially feasible for projects supported over longer periods of time.
Our findings point toward redefining the role of the researcher (Turnhout et al. 2013). A researcher is no longer only a generator of knowledge, but a policy entrepreneur who identifies and accesses windows of opportunity, and as with all forms of entrepreneurship, both successes and failures can be faced on this path. However, as with entrepreneurship, intelligent failure (Edmondson 2011) can enable researchers to learn from failures, generate lessons for the wider community, and apply adaptive management strategies to be successful. How learning from failures is integrated is key, as failures are often unreported; therefore, a shift is needed in our approach to research and research management, one which values failures for their lesson-learning function.
In order to address the challenges of adaptation, mitigation, and food security, it is essential that knowledge-sharing mechanisms are improved within the agricultural sector. This requires wider changes to the knowledge system to make it conducive to science–policy interfaces (Felt et al. 2016). In the absence of such a change, improved efforts of the research community will continue to deliver suboptimal results. Learning from failures can not only help improve practice at the level of the researchers but also address wider issues within the knowledge and political systems.
Data availability
Data used for analysis include survey results and interview transcripts, but these are not made public to protect the identity of respondents. The questionnaire used for the survey and questions used for semi-structured interviews are included in the annexures.
Notes
We only received 23 responses to this question, as opposed to 24 responses to other questions.
References
Armitage D et al (2015) Science–policy processes for transboundary water governance. Ambio 44:353–366. https://doi.org/10.1007/s13280-015-0644-x
Arnott JC, Neuenfeldt RJ, Lemos MC (2020) Co-producing science for sustainability: can funding change knowledge use? Glob Environ Chang 60:101979. https://doi.org/10.1016/j.gloenvcha.2019.101979
Baruch Y, Holtom BC (2008) Survey response rate levels and trends in organizational research. Hum Relat 61:1139–1160. https://doi.org/10.1177/0018726708094863
Baumard P, Starbuck WH (2005) Learning from failures: why it may not happen. Long Range Plann 38:281–298. https://doi.org/10.1016/j.lrp.2005.03.004
Bromley-Trujillo R, Karch A (2019) Salience, scientific uncertainty, and the agenda-setting power of science. Policy Stud J. https://doi.org/10.1111/psj.12373
Cáceres DM, Silvetti F, Díaz S (2016) The rocky path from policy-relevant science to policy implementation—a case study from the South American Chaco. Curr Opin Environ Sustain 19:57–66. https://doi.org/10.1016/j.cosust.2015.12.003
Cannon MD, Edmondson AC (2001) Confronting failure: antecedents and consequences of shared beliefs about failure in organizational work groups. J Organ Behav 22:161–177. https://doi.org/10.1002/job.85
Cannon MD, Edmondson AC (2005) Failing to learn and learning to fail (intelligently): how great organizations put failure to work to innovate and improve. Long Range Plann 38:299–319. https://doi.org/10.1016/j.lrp.2005.04.005
Carlsson B, Jacobsson S (1997) In search of useful public policies — key lessons and issues for policy makers. In: Carlsson B (ed) Technological systems and industrial dynamics. Springer US, Boston, pp 299–315. https://doi.org/10.1007/978-1-4615-6133-0_11
Cash DW et al (2003) Knowledge systems for sustainable development. Proc Natl Acad Sci 100:8086–8091. https://doi.org/10.1073/pnas.1231332100
CCAFS (2017) CCAFS phase II capacity development strategy. CGIAR research program on climate change. Agriculture and Food Security, Wageningen
Clark WC, Tomich TP, van Noordwijk M, Guston D, Catacutan D, Dickson NM, McNie E (2016a) Boundary work for sustainable development: natural resource management at the Consultative Group on International Agricultural Research (CGIAR). Proc Natl Acad Sci 113:4615–4622. https://doi.org/10.1073/pnas.0900231108
Clark WC, van Kerkhoff L, Lebel L, Gallopin GC (2016b) Crafting usable knowledge for sustainable development. Proc Natl Acad Sci 113:4570–4578. https://doi.org/10.1073/pnas.1601266113
Cramer L et al (2018) Lessons on bridging the science–policy divide for climate change action in developing countries. CGIAR Research Program on Climate Change, Agriculture and Food Security, Wageningen
Danner J, Coopersmith M (2015) The other "F" word: how smart leaders, teams, and entrepreneurs put failure to work. John Wiley & Sons
Dentoni D, Bitzer V, Schouten G (2018) Harnessing wicked problems in multi-stakeholder partnerships. Journal of Business Ethics 150:333–356. https://doi.org/10.1007/s10551-018-3858-6
Dilling L, Lemos MC (2011) Creating usable science: opportunities and constraints for climate knowledge use and their implications for science policy. Glob Environ Chang 21:680–689. https://doi.org/10.1016/j.gloenvcha.2010.11.006
Dinesh D et al (2018) Facilitating change for climate-smart agriculture through science–policy engagement. Sustainability 10:2616
Dunlop CA (2017) Policy learning and policy failure: definitions, dimensions and intersections. Policy Polit 45:3–18. https://doi.org/10.1332/030557316X14824871742750
Dunn G, Laing M (2017) Policy-makers perspectives on credibility, relevance and legitimacy (CRELE). Environ Sci Policy 76:146–152. https://doi.org/10.1016/j.envsci.2017.07.005
Earl S, Carden F, Smutylo T (2001) Outcome mapping: building learning and reflection into development programs. IDRC, Ottawa
Edmondson AC (2011) Strategies for learning from failure. Harv Bus Rev 89:48–55
FAO (2018) The state of food security and nutrition in the world 2018: building climate resilience for food security and nutrition. Food and Agriculture Organization of the United Nations, Rome
Felt U, Igelsböck J, Schikowitz A, Völker T (2016) Transdisciplinary sustainability research in practice: between imaginaries of collective experimentation and entrenched academic value orders. Science, Technology, & Human Values 41:732–761. https://doi.org/10.1177/0162243915626989
Funtowicz S, Ravetz J (1997) Environmental problems, post-normal science, and extended peer communities. Études et Recherches sur les Systèmes Agraires et le Développement:169–175
Guest G, MacQueen KM, Namey EE (2011) Applied thematic analysis. Sage
Gupta SK, Gunasekaran A, Antony J, Gupta S, Bag S, Roubaud D (2019) Systematic literature review of project failures: current trends and scope for future research. Computers & Industrial Engineering 127:274–285. https://doi.org/10.1016/j.cie.2018.12.002
Guston DH (2001) Boundary organizations in environmental policy and science: an introduction. Science, Technology, & Human Values 26:399–408. https://doi.org/10.1177/016224390102600401
Heath R (2009) Celebrating failure: the power of taking risks, making mistakes, and thinking big. Career Press, New Jersey
Holmes J, Clark R (2008) Enhancing the use of science in environmental policy-making and regulation. Environ Sci Policy 11:702–711. https://doi.org/10.1016/j.envsci.2008.08.004
Hoppe R, Wesselink A, Cairns R (2013) Lost in the problem: the role of boundary organisations in the governance of climate change. Wiley Interdiscip Rev Clim Change 4:283–300. https://doi.org/10.1002/wcc.225
Interview-B (2019) Interview with CCAFS management team member on failures in science–policy engagement efforts
Interview-C (2019) Interview with CCAFS management team member on failures in science–policy engagement efforts
Interview-F (2019) Interview with CCAFS management team member on failures in science–policy engagement efforts
IPCC (2019) Climate change and land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems. Intergovernmental Panel on Climate Change, Geneva
Janse G (2008) Communication between forest scientists and forest policy-makers in Europe — a survey on both sides of the science/policy interface. Forest Policy Econ 10:183–194. https://doi.org/10.1016/j.forpol.2007.10.001
Keeley J, Scoones I (2014) Understanding environmental policy processes: cases from Africa. Routledge, London
Khanna R, Guler I, Nerkar A (2016) Fail often, fail big, and fail fast? Learning from small failures and R&D performance in the pharmaceutical industry. Acad Manage J 59:436–459. https://doi.org/10.5465/amj.2013.1109
Kirchhoff CJ, Lemos MC, Dessai S (2013) Actionable knowledge for environmental decision making: broadening the usability of climate science. Annu Rev Env Resour 38:393–414. https://doi.org/10.1146/annurev-environ-022112-112828
Knott AM, Posen HE (2005) Is failure good? Strategic Management Journal 26:617–641. https://doi.org/10.1002/smj.470
Kristjanson P et al (2009) Linking international agricultural research knowledge with action for sustainable development. Proc Natl Acad Sci 106:5047–5052. https://doi.org/10.1073/pnas.0807414106
Kunert S (2018) Introduction. In: Kunert S (ed) Strategies in failure management: scientific insights, case studies and tools. Springer International Publishing, Cham, pp 1–6. https://doi.org/10.1007/978-3-319-72757-8_1
Laing M, Wallis PJ (2016) Scientists versus policy-makers: building capacity for productive interactions across boundaries in the urban water sector. Environ Sci Policy 66:23–30. https://doi.org/10.1016/j.envsci.2016.08.001
Lawton JH (2007) Ecology, politics and policy. J Appl Ecol 44:465–474. https://doi.org/10.1111/j.1365-2664.2007.01315.x
Leoncini R (2017) How to learn from failure. Organizational creativity, learning, innovation and the benefit of failure. Rutgers Business Review 2(1):98–104
Loboguerrero AM et al (2018) Feeding the world in a changing climate: an adaptation roadmap for agriculture. Global Commission on Adaptation, Rotterdam and Washington, DC
Lowder SK, Skoet J, Singh S (2014) What do we really know about the number and distribution of farms and family farms worldwide? Vol 14. Food and Agriculture Organization of the United Nations, Rome
McGrath RG (2011) Failing by design. Harv Bus Rev 89:76–83, 137
Múnera C, van Kerkhoff L (2019) Diversifying knowledge governance for climate adaptation in protected areas in Colombia. Environ Sci Policy 94:39–48. https://doi.org/10.1016/j.envsci.2019.01.004
Neßhöver C et al (2013) Improving the science–policy interface of biodiversity research projects. GAIA - Ecological Perspectives for Science and Society 22:99–103. https://doi.org/10.14512/gaia.22.2.8
Oliver K, Kothari A, Mays N (2019) The dark side of coproduction: do the costs outweigh the benefits for health research? Health Research Policy and Systems 17:33. https://doi.org/10.1186/s12961-019-0432-3
Radaelli CM (1995) The role of knowledge in the policy process. J Eur Publ Policy 2:159–183. https://doi.org/10.1080/13501769508406981
Rajkotia Y (2018) Beware of the success cartel: a plea for rational progress in global health. BMJ Glob Health 3:e001197. https://doi.org/10.1136/bmjgh-2018-001197
Sarkki S, Niemelä J, Tinch R, van den Hove S, Watt A, Young J (2014) Balancing credibility, relevance and legitimacy: a critical assessment of trade-offs in science–policy interfaces. Science and public policy 41:194–206. https://doi.org/10.1093/scipol/sct046
Sarkki S et al (2015) Adding ‘iterativity’ to the credibility, relevance, legitimacy: a novel scheme to highlight dynamic aspects of science–policy interfaces. Environ Sci Policy 54:505–512. https://doi.org/10.1016/j.envsci.2015.02.016
Sarkki S et al (2019) Managing science–policy interfaces for impact: interactions within the environmental governance meshwork. Environ Sci Policy. https://doi.org/10.1016/j.envsci.2019.05.011
Sitkin SB (1992) Learning through failure: the strategy of small losses. Research in Organizational Behavior 14:231–266
Spilsbury MJ, Nasi R (2006) The interface of policy research and the policy development process: challenges posed to the forestry community. Forest Policy Econ 8:193–205. https://doi.org/10.1016/j.forpol.2004.09.001
Steiner A et al (2020) Actions to transform food systems under climate change. CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS), Wageningen
Strydom WF, Funke N, Nienaber S, Nortje K, Steyn M (2010) Evidence-based policymaking: a review. S Afr J Sci 106:17–24
Talwar S, Wiek A, Robinson J (2011) User engagement in sustainability research. Science and Public Policy 38:379–390. https://doi.org/10.3152/030234211X12960315267615
Thornton PK, Schuetz T, Förch W, Cramer L, Abreu D, Vermeulen S, Campbell BM (2017) Responding to global change: a theory of change approach to making agricultural research for development outcome-based. Agr Syst 152:145–153. https://doi.org/10.1016/j.agsy.2017.01.005
Turnheim B, Asquith M, Geels FW (2020) Making sustainability transitions research policy-relevant: challenges at the science–policy interface. Environ Innov Soc Trans 34:116–120. https://doi.org/10.1016/j.eist.2019.12.009
Turnhout E, Stuiver M, Klostermann J, Harms B, Leeuwis C (2013) New roles of science in society: different repertoires of knowledge brokering. Science and Public Policy 40:354–365. https://doi.org/10.1093/scipol/scs114
Turnhout E, Metze T, Wyborn C, Klenk N, Louder E (2020) The politics of co-production: participation, power, and transformation. Curr Opin Environ Sustain 42:15–21. https://doi.org/10.1016/j.cosust.2019.11.009
UNEP (2017) Strengthening the science–policy interface: a gap analysis. United Nations Environment Programme, Nairobi
van der Hel S (2016) New science for global sustainability? The institutionalisation of knowledge co-production in Future Earth. Environ Sci Policy 61:165–175. https://doi.org/10.1016/j.envsci.2016.03.012
Van Enst WI, Driessen PPJ, Runhaar HAC (2014) Towards productive science–policy interfaces: a research agenda. JEAPM 16:1450007. https://doi.org/10.1142/s1464333214500070
van Kerkhoff L, Lebel L (2006) Linking knowledge and action for sustainable development. Annu Rev Env Resour 31:445–477. https://doi.org/10.1146/annurev.energy.31.102405.170850
Vermeulen S et al (2012a) Climate change, agriculture and food security: a global partnership to link research and action for low-income agricultural producers and consumers. Curr Opin Environ Sustain 4:128–133. https://doi.org/10.1016/j.cosust.2011.12.004
Vermeulen SJ et al (2012b) Options for support to agriculture and food security under climate change. Environ Sci Policy 15:136–144. https://doi.org/10.1016/j.envsci.2011.09.003
Vervoort JM et al. (2013) Linking multi-actor futures for food systems and environmental governance. Paper presented at the Earth System Governance Conference, Tokyo.
Vinck D (2017) Learning thanks to innovation failure. Critical studies of innovation. https://doi.org/10.4337/9781785367229.00022
von Stamm B (2018) Failure in innovation: is there such a thing? In: Kunert S (ed) Strategies in failure management: scientific insights, case studies and tools. Springer International Publishing, Cham, pp 27–45. https://doi.org/10.1007/978-3-319-72757-8_3
Warghade S (2015) Policy formulation tool use in emerging policy spheres: a developing country perspective. In: Jordan AJ, Turnpenny JR (eds) The tools of policy formulation: actors, capacities, venues and effects. Edward Elgar Publishing, Cheltenham, pp 205–224. https://doi.org/10.4337/9781783477043.00022
Westermann O, Förch W, Thornton P, Körner J, Cramer L, Campbell B (2018) Scaling up agricultural interventions: case studies of climate-smart agriculture. Agr Syst 165:283–293. https://doi.org/10.1016/j.agsy.2018.07.007
Wollenberg E et al (2016) Reducing emissions from agriculture to meet the 2 °C target. Glob Chang Biol 22:3859–3864. https://doi.org/10.1111/gcb.13340
Wolmer W, Keeley J, Leach M, Mehta L, Scoones I, Waldman L (2006) Understanding policy processes: a review of IDS research on the environment. Institute of Development Studies, Brighton
Woolcock MJV (1999) Learning from failures in microfinance. Am J Econ Sociol 58:17–42. https://doi.org/10.1111/j.1536-7150.1999.tb03281.x
Woolthuis RK, Lankhuizen M, Gilsing V (2005) A system failure framework for innovation policy design. Technovation 25:609–619. https://doi.org/10.1016/j.technovation.2003.11.002
Wyborn C, Datta A, Montana J, Ryan M, Leith P, Chaffin B, Miller C, van Kerkhoff L (2019) Co-producing sustainability: reordering the governance of science, policy, and practice. Annu Rev Env Resour 44:319–346. https://doi.org/10.1146/annurev-environ-101718-033103
Young JC et al (2014) Improving the science–policy dialogue to meet the challenges of biodiversity conservation: having conversations rather than talking at one-another. Biodivers Conserv 23:387–404. https://doi.org/10.1007/s10531-013-0607-0
Zougmoré RB et al (2019) Science–policy interfaces for sustainable climate-smart agriculture uptake: lessons learnt from national science-policy dialogue platforms in West Africa. International Journal of Agricultural Sustainability 17:367–382. https://doi.org/10.1080/14735903.2019.1670934
Funding
This work was implemented as part of the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS), which is carried out with support from CGIAR Fund Donors and through bilateral funding agreements. For details, please visit https://ccafs.cgiar.org/donors. The views expressed in this document cannot be taken to reflect the official opinions of these organizations.
Author information
Authors and Affiliations
Contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by D.D. The first draft of the manuscript was written by D.D. and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
D.D. and B.M.C. are employed by the CCAFS program; however, they are researchers and have contributed to this paper for lesson generation from the program. Other authors have no interests to declare.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Annex 1: Questionnaire
Learning from challenges and lack of success is an important aspect of scientific endeavor. This questionnaire aims to capture lessons from challenges faced within science–policy engagement efforts of CCAFS projects. It is not part of formal evaluations; the aim is to capture insights which are not captured through formal reporting, for lesson learning, with a paper summarizing the findings.
All responses are anonymous, but we would like to have interviews with a sample of project leaders, and if you would like to be interviewed, please provide your email address at the end.
-
1.
We are looking for cases where you have encountered challenges or lack of success in science–policy engagement efforts, as we want to learn from these instances. Science policy engagement efforts are interpreted broadly, to include engagement in sub-national, national, and international policy frameworks, as well as policies/strategies/investments of international institutions, private sector, and farmer organizations. In either your current or past science–policy engagement efforts, have you encountered challenges or lack of success? If you answer NO, the survey will end for you.
-
Yes/No
-
If Yes, what were these challenges or lack of success? Please describe and if possible identify the reasons why there were challenges or lack of success:
.........................................................................................................................................
-
2.
The perceived credibility of research outputs (i.e., that research outputs are authoritative and trusted) is considered to be a key success factor in science–policy engagement. In your experience, were the challenges due to difficulty in demonstrating credibility to partners?
-
Yes/No
-
i.
If Yes, please describe the challenges encountered in ensuring credibility and efforts which you undertook to address these:
.....................................................................................
-
3.
Were challenges associated with research goals, questions, and outputs not being aligned/salient with the needs of decision-makers targeted?
-
Yes/No
-
If Yes, please describe the challenges encountered in ensuring salience and efforts which you undertook to address these:
................................................................................................
-
4.
Were challenges encountered because decision-makers did not find the research to be fair, balanced, and representing diverse interests?
-
Yes/No
-
If Yes, what issues did the decision-makers have with the research?
...........................................................................................
-
5.
How did you communicate uncertainty to decision-makers?
....................................................................................................................................
-
6.
Were there intermediaries (e.g., brokers or boundary organizations) involved in your engagement efforts?
-
Yes/No
-
If Yes, who were the intermediaries you engaged with and what role did they play?
................................................................................................
-
If No, would the presence of intermediaries have improved your ability to achieve an outcome?
...........................................................................
-
7. During your engagement process, did you encounter power dynamics at play?
Yes/No
If Yes, how did you navigate these?
......................................................
8. Did your policy partner(s) have adequate institutional capacity to take on board research findings?
Yes/No
If No, what capacity gaps existed? Please describe:
..................................
9. If your engagement efforts failed to realize the expected outcome, what were the top three reasons for this?
....................................................
....................................................
....................................................
10. Is there anything else you would like to share regarding these topics?
11. Would you be willing to be part of a longer (30-minute) interview to discuss these topics further?
Yes/No
If Yes, please enter your email address and we will contact you.
Annex 2: Interview questions—lessons from challenges and failures at the interface of science and policy for climate action in agriculture
Each interview topic was introduced with an open-ended question. Text between brackets was used to explain the question if necessary. Text in bulleted lists was used to ask follow-up questions, but only if needed.
1. How do you understand failure in science–policy engagement efforts? (Based on a review of the literature and a survey of CCAFS projects, we understand failures in science–policy engagement efforts as instances where expected outcomes are not realized)
- How do you understand the relationship between challenges and failures? (We understand challenges and failures to be closely related to each other, with challenges being early warning signs of potential failure of science–policy engagement efforts. In some cases, these challenges or early warning signs can be navigated through adaptive management, but in other cases, they become fail factors, leading to efforts failing)
2. As part of your role in the design of the CCAFS portfolio and ensuring delivery of outcomes:
- How do you address challenges in science–policy engagement, both at the project level and at the program level?
- How do you address failures in science–policy engagement, both at the project level and at the program level?
- Can you provide specific examples?
3. What efforts do you take at the flagship/program level to ensure that the knowledge generated is salient to the needs of decision-makers? Can you provide specific examples?
4. What can be done to further strengthen efforts to enhance salience, for example through capacity building of researchers? Please provide examples where possible.
5. Lack of the right partnerships in specific contexts has been a cause of failure. What efforts do you take to ensure skills in developing and maintaining partnerships? Please provide examples where possible.
6. How do you approach projects with poor engagement and communications efforts? Please provide examples where possible.
7. How do you ensure that funding uncertainties do not disrupt science–policy engagement activities and efforts to realize outcomes? Please provide examples where possible.
8. How does CCAFS (program, flagships, projects) approach partners with limited capacity to absorb and implement findings?
9. How does CCAFS navigate power dynamics in science–policy engagement efforts, including rapid turnover among decision-makers?
10. To improve science–policy engagement efforts in the context of climate action in agriculture, we propose a five-step process to fail intelligently (1. Plan for failures; 2. Minimize risks; 3. Design efforts intelligently; 4. Make failures visible; 5. Learn from failures). To what extent can these steps be implemented in a competitive environment such as the one in which CCAFS operates? How do the steps differ in terms of their complexity to implement? Are there examples of successful execution of each step?
11. Is there anything else that you would like to share?
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Dinesh, D., Hegger, D., Vervoort, J. et al. Learning from failure at the science–policy interface for climate action in agriculture. Mitig Adapt Strateg Glob Change 26, 2 (2021). https://doi.org/10.1007/s11027-021-09940-x