Perspective
Projecting the performance of conservation interventions
Introduction
Reliable evidence of the future performance and efficacy of interventions is a critical component of successful decision-making for environmental management (Ferraro and Pattanayak, 2006, Rissman and Smail, 2015). Examples of such decision-making include achieving global protected area targets (Visconti et al., 2015), designing new national-level payments for ecosystem services programs (Bryan et al., 2014), and controlling invasive species (Firn et al., 2015, Martin et al., 2015). Yet determining the future impacts of conservation interventions is challenged by a range of complex ecological, economic, social and ethical factors, as well as trade-offs between multiple objectives. Increasingly, scholars and practitioners are more systematically collating and synthesizing existing literature on past impacts for use as an evidence base in conservation (Sutherland et al., 2004). But making accurate inferences from such syntheses relies on the quality of the underlying evidence base. Researchers and practitioners are also seeking to improve this evidence by conducting more robust assessments of past policy impacts through retrospective evaluations (Miteva et al., 2012, Pressey et al., 2015, Baylis et al., 2016). These retrospective evaluations typically use principles of causal inference (Box 1), which focus on clarifying the assumptions needed to infer causal relationships from data, and on reducing the bias of impact estimates (Miteva et al., 2012, Meyfroidt, 2015, Pressey et al., 2015). This movement towards enhanced transparency and reduced bias is a response to the historical deficiencies of retrospective policy evaluations in conservation science (Ferraro and Hanauer, 2014, Meyfroidt, 2015, Baylis et al., 2016).
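The bias-reduction logic of causal inference discussed above can be illustrated with a minimal, entirely hypothetical sketch: when a confounder (here, site remoteness) drives both selection into an intervention and the outcome, a naive comparison of treated and untreated sites overstates the impact, while stratifying on the confounder (a simple form of matching) recovers an estimate close to the true effect. All variable names, numbers, and the model form are illustrative assumptions, not from the paper.

```python
import random

random.seed(0)

# Hypothetical site data: each site has a remoteness score (a confounder),
# a protection flag (the intervention), and observed forest cover (the
# outcome). Remote sites are both more likely to be protected and to
# retain forest, so a naive comparison overstates the intervention's
# impact; the true effect here is +0.1.
sites = []
for _ in range(2000):
    remoteness = random.random()
    protected = random.random() < remoteness  # selection bias
    cover = 0.3 * remoteness + 0.1 * protected + random.gauss(0, 0.02)
    sites.append((remoteness, protected, cover))

prot = [c for r, p, c in sites if p]
unprot = [c for r, p, c in sites if not p]

# Naive estimate: difference in mean outcomes, ignoring the confounder.
naive = sum(prot) / len(prot) - sum(unprot) / len(unprot)

# Matched estimate: stratify sites into remoteness bins and compare
# protected with unprotected sites only within the same bin.
bins = {}
for r, p, c in sites:
    bins.setdefault(round(r, 1), {True: [], False: []})[p].append(c)

diffs = [sum(g[True]) / len(g[True]) - sum(g[False]) / len(g[False])
         for g in bins.values() if g[True] and g[False]]
matched = sum(diffs) / len(diffs)

print(f"naive estimate:   {naive:.3f}")    # inflated by confounding
print(f"matched estimate: {matched:.3f}")  # close to the true +0.1
```

Real evaluations use richer designs (propensity scores, difference-in-differences), but the sketch captures why unadjusted comparisons of past outcomes can mislead.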
Yet when used to inform the design of conservation policies and interventions, retrospective evaluations only tell half the story: predictions of expected outcomes are also necessary. While ‘improving future policy and interventions’ is a commonly stated goal of retrospective analyses (Baylis et al., 2016), rigorous analysis of past outcomes alone is insufficient for this purpose. Evidence from past interventions can be highly context-specific (Pfaff and Robalino, 2012), and may not extrapolate to other times and areas (Sinclair et al., 2010, Dobrowski et al., 2011, Cook et al., 2014, Oliver and Roy, 2015). Such extrapolation is traditionally the domain of projection analyses: the use of modelling to project intervention impacts across time and space.
If, in developing projections, analysts ignore the new insights and methods of retrospective evaluations, the advice yielded by these projections will lack scientific credibility. Scientific credibility refers to the plausibility and technical accuracy of the science. Implicit and untested assumptions regarding causality limit the credibility of prospective policy analysis, as associations observed in the past may not hold in the future (Meyfroidt, 2015). Scientific credibility may also be limited if projections rely on potentially biased inferences from limited data (Miteva et al., 2012, Pressey et al., 2015), or treat uncertainty unclearly or interpret potentially biased results poorly. These issues of untested assumptions, limited data, and imperfect use of data are important for successful conservation decision-making: overestimating the benefits of proposed conservation interventions may lead to sub-optimal outcomes, whereas underestimating benefits may result in more effective options being overlooked.
Here, we outline the relevance, benefits, and challenges of integrating the principles of causal inference, and the associated techniques of systematic literature review, expert elicitation, and scenario analysis, into prospective evaluation of conservation interventions. We discuss how these established and emerging techniques can be employed to (1) improve problem definition by clarifying causal assumptions, key variables, and alternative scenarios, and by using appropriate model frameworks, (2) improve model parameterization by identifying potential biases in data and avoiding them where possible, and (3) improve model use and interpretation through analyses of model sensitivity and parameter or model uncertainty. These techniques are designed to encourage conservation scientists to use and interpret imperfect data more effectively, thereby delivering policy and program advice that is more scientifically credible, and, if heeded by decision-makers and acceptable to stakeholders, capable of delivering improved conservation outcomes.
Section snippets
Characterizing key variables in a causal context
A key challenge in creating robust and transparent model projections of conservation interventions is to define the problem. How is the intervention expected to work within the environmental, social, and economic context? To answer this question, models that depict mechanism-based, causal relationships between interventions, processes and variables are developed, ideally explicitly and graphically (Pearl, 2009, Margoluis et al., 2013) (Box 2). Causal relationships between key variables may be
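One way to make such mechanism-based causal relationships explicit, in the spirit of the graphical models cited above (Pearl, 2009), is to encode them as a directed graph. The sketch below is purely illustrative (all node names are hypothetical, not from the paper): it records assumed cause-to-effect links for an enforcement intervention and traverses the graph to list every upstream variable an analyst would need to consider when screening for confounders.

```python
# A minimal sketch of an explicit causal graph for a hypothetical
# intervention. Edges point from cause to effect; "road_access" is an
# assumed confounder affecting both patrol effort and illegal logging.
causal_graph = {
    "enforcement_funding": ["patrol_effort"],
    "patrol_effort": ["illegal_logging"],
    "road_access": ["illegal_logging", "patrol_effort"],
    "illegal_logging": ["forest_cover"],
}

def ancestors(graph, node):
    """Return every variable with a directed path into `node` --
    the set to examine when looking for potential confounders."""
    parents = {child: [] for child in set(sum(graph.values(), [])) | set(graph)}
    for cause, effects in graph.items():
        for effect in effects:
            parents[effect].append(cause)
    seen, stack = set(), list(parents[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents[n])
    return seen

print(sorted(ancestors(causal_graph, "forest_cover")))
```

Writing the graph down in this form forces each assumed mechanism to be stated and challenged before any parameter is estimated.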
Parameterization: using better data
Biases are pervasive in empirical conservation research because this research is often conducted in contexts of strong personal motivations, extremely low rates of study replication, complex systems, and high intrinsic rates of variability (Iftekhar and Pannell, 2015). Causal inference, systematic literature reviews, and robust expert elicitation methods offer ways to identify and mitigate biases in data drawn from a wide variety of sources (Martin et al., 2012b, Cook et al., 2014, Martin et
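The structured expert elicitation mentioned above can be sketched in its simplest form: each expert supplies a lowest-plausible, best, and highest-plausible estimate for a parameter, and the estimates are combined by an equal-weight linear pool. Formal protocols (e.g. Delphi-style iteration with feedback) are considerably more involved; the numbers, parameter, and pooling rule here are illustrative assumptions only.

```python
# Each expert gives (lowest plausible, best estimate, highest plausible)
# values for a parameter, e.g. the survival gain from a hypothetical
# intervention. All figures are invented for illustration.
experts = {
    "expert_A": (0.05, 0.10, 0.20),
    "expert_B": (0.02, 0.08, 0.15),
    "expert_C": (0.10, 0.18, 0.30),
}

def pool(estimates):
    """Equal-weight linear pool: average each bound across experts."""
    lows, bests, highs = zip(*estimates.values())
    n = len(estimates)
    return sum(lows) / n, sum(bests) / n, sum(highs) / n

low, best, high = pool(experts)
print(f"pooled estimate: {low:.3f} / {best:.3f} / {high:.3f}")
```

Retaining the pooled interval, rather than a single best guess, keeps the elicited uncertainty visible downstream in the projection model.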
Interpretation: using data better
Biases may still be unavoidable even with greater attention to experimental design and analysis, systematic review procedures, and rigorous expert elicitation methods. For example, bias is likely in regional or global scale analyses, when data are not necessarily collected for the specific purpose of the evaluation (McKinnon et al., 2015). However, if data shortcomings are made transparent, improvements in model specification and interpretation may be possible. Model and data imperfections can
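One transparent starting point for the sensitivity and uncertainty analyses this section concerns is a one-at-a-time sweep: vary each uncertain parameter across its plausible range while holding the others at baseline, and rank parameters by the swing they induce in the projected outcome. The toy projection model, parameter names, and ranges below are illustrative assumptions, not from the paper.

```python
# One-at-a-time sensitivity sketch for a hypothetical projection model.
def projected_benefit(params):
    """Toy projection: hectares of habitat gained from a budget."""
    return (params["budget"] * params["adoption_rate"]
            * params["ha_per_dollar"] * (1 - params["leakage"]))

baseline = {"budget": 1e6, "adoption_rate": 0.6,
            "ha_per_dollar": 0.002, "leakage": 0.2}

# Plausible low/high bounds for each uncertain parameter,
# e.g. as elicited from experts.
ranges = {"adoption_rate": (0.3, 0.9),
          "ha_per_dollar": (0.001, 0.003),
          "leakage": (0.0, 0.4)}

sensitivity = {}
for name, (lo, hi) in ranges.items():
    outputs = [projected_benefit(dict(baseline, **{name: value}))
               for value in (lo, hi)]
    # Swing in the output when one parameter moves across its range.
    sensitivity[name] = max(outputs) - min(outputs)

for name, swing in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name:>14}: swing of {swing:,.0f} ha")
```

Variance-based methods interrogate interactions that one-at-a-time sweeps miss, but even this simple ranking makes clear which data imperfections matter most for the projection.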
Synthesis and ways forward
To support the development of conservation interventions in complex environmental, social, economic, and ethical contexts, transparent, evidence-based models are critical. More transparent assumptions and more believable causal models engender greater confidence in the predictions of prospective evaluations, and these predictions will be more justifiable in the face of critique. This confidence in the robustness of the science is, of course, only one element contributing to the wider salience,
Acknowledgements
Funding from the Australian Research Council (http://www.arc.gov.au/) is acknowledged, including Centre of Excellence (CE110001014) (EAL, KD, MHH, GI, CM, JR, KAW), Discovery (DP150101300) (EAL, KAW), and Future Fellowship (FT100100413) (KAW) programs. This manuscript is based on discussions from the interdisciplinary workshop “Causal inference in environmental decisions” organized in Brisbane, Australia, June 2015, attended by the authors and funded by the ARC Centre of Excellence for
References (74)
- et al. Supply of carbon sequestration and biodiversity services from Australia's agricultural land under global change. Glob. Environ. Chang. (2014)
- et al. Thinking inside the box: a participatory, computer-assisted approach to scenario discovery. Technol. Forecast. Soc. Chang. (2010)
- et al. Strategic foresight: how planning for the unpredictable can improve environmental decision-making. Trends Ecol. Evol. (2014)
- et al. Discovering plausible energy and economic futures under global change using multidimensional scenario discovery. Environ. Model. Softw. (2013)
- et al. Many objective robust decision making for complex environmental systems undergoing change. Environ. Model. Softw. (2013)
- et al. Dynamic scenario discovery under deep uncertainty: the future of copper. Technol. Forecast. Soc. Chang. (2013)
- et al. Measuring effectiveness, efficiency and equity in an experimental payments for ecosystem services trial. Glob. Environ. Chang. (2014)
- An introduction to sensitivity assessment of simulation models. Environ. Model. Softw. (2015)
- et al. Integrating knowledge on biodiversity and ecosystem services: mind-mapping and Bayesian Network modelling. Ecosyst. Serv. (2016)
- et al. How to avoid a perfunctory sensitivity analysis. Environ. Model. Softw. (2010)