Biological Conservation

Volume 215, November 2017, Pages 142-151

Perspective
Projecting the performance of conservation interventions

https://doi.org/10.1016/j.biocon.2017.08.029

Highlights

  • Conservation policies and programs are hotly debated, with complex, uncertain impacts.

  • To make informed decisions, reliable projections of the likely performance of interventions are required.

  • Robust projections need to focus on model assumptions, bias and uncertainty.

  • Clarifying causal assumptions will lead to better data and better use of data.

Abstract

Successful decision-making for environmental management requires evidence of the performance and efficacy of proposed conservation interventions. Projecting the future impacts of prospective conservation policies and programs is challenging due to a range of complex ecological, economic, social and ethical factors, and in particular the need to extrapolate models to novel contexts. Yet many extrapolation techniques currently employed are limited by unfounded assumptions of causality and a reliance on potentially biased inferences drawn from limited data. We show how these restrictions can be overcome by established and emerging techniques from causal inference, scenario analysis, systematic review, expert elicitation, and global sensitivity analysis. These technical advances provide avenues to untangle cause from correlation, evaluate and transfer models between contexts, characterize uncertainty, and address imperfect data. With more rigorous projections of prospective performance of interventions, scientists can deliver policy and program advice that is more scientifically credible.

Introduction

Reliable evidence of future performance and efficacy of interventions is a critical component of successful decision-making for environmental management (Ferraro and Pattanayak, 2006, Rissman and Smail, 2015). Examples of such decision-making include achieving global protected area targets (Visconti et al., 2015), designing new national-level payments for ecosystem services programs (Bryan et al., 2014), and controlling invasive species (Firn et al., 2015, Martin et al., 2015). Yet determining future impacts of conservation interventions is challenged by a range of complex ecological, economic, social and ethical factors, as well as trade-offs between multiple objectives. Increasingly, scholars and practitioners are more systematically collating and synthesizing existing literature on past impacts for use as an evidence base in conservation (Sutherland et al., 2004). But making accurate inferences from such syntheses depends on the quality of the underlying evidence base. Researchers and practitioners are also seeking to improve the quality of this evidence by conducting more robust assessments of past policy impacts through retrospective evaluations (Miteva et al., 2012, Pressey et al., 2015, Baylis et al., 2016). These retrospective evaluations typically use principles of causal inference (Box 1), which focuses on clarifying the assumptions needed to infer causal relationships from data, and on reducing the bias of impact estimates (Miteva et al., 2012, Meyfroidt, 2015, Pressey et al., 2015). This movement towards enhanced transparency and reduced bias is a response to the historical deficiencies of retrospective policy evaluations in conservation science (Ferraro and Hanauer, 2014, Meyfroidt, 2015, Baylis et al., 2016).
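
As a minimal illustration of this counterfactual logic (our sketch, not an analysis from the cited studies), the following Python snippet contrasts a naive before-after comparison with a difference-in-differences estimate for a hypothetical protected-area intervention; the site numbers, effect sizes, and the parallel-trends assumption are purely illustrative.

```python
# Illustrative sketch only: difference-in-differences estimate of a hypothetical
# protected-area intervention's impact on forest cover, contrasted with a naive
# before-after comparison. All data are simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 500  # hypothetical monitoring sites per group

# Simulated forest cover (%) before and after designation.
# A region-wide decline of 5 points affects all sites; protection
# averts 3 points of that loss at treated sites (the true impact).
before_treated = rng.normal(70, 5, n)
before_control = rng.normal(70, 5, n)
after_treated = before_treated - 5 + 3 + rng.normal(0, 2, n)
after_control = before_control - 5 + rng.normal(0, 2, n)

# Naive before-after estimate at treated sites: confounded by the
# region-wide trend, so it understates the benefit of protection.
naive = (after_treated - before_treated).mean()

# Difference-in-differences: subtracting the change at control sites removes
# the shared trend, assuming trends would otherwise have been parallel.
did = ((after_treated - before_treated).mean()
       - (after_control - before_control).mean())

print(f"Naive before-after estimate: {naive:+.2f} percentage points")
print(f"Difference-in-differences estimate: {did:+.2f} percentage points (true impact = +3)")
```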

Yet when used to inform the design of conservation policies and interventions, retrospective evaluations only tell half the story: predictions of expected outcomes are also necessary. While ‘improving future policy and interventions’ is a commonly stated goal of retrospective analyses (Baylis et al., 2016), rigorous analysis of past outcomes alone is insufficient for this purpose. Evidence from past interventions can be highly context-specific (Pfaff and Robalino, 2012), and may not extrapolate to other times and areas (Sinclair et al., 2010, Dobrowski et al., 2011, Cook et al., 2014, Oliver and Roy, 2015). Such extrapolation is traditionally the domain of projection analyses: the use of modelling to project intervention impacts across time and space.

If, in developing projections, analysts ignore the new insights and methods of retrospective evaluations, the advice yielded by these projections will lack scientific credibility. Scientific credibility refers to the plausibility and technical accuracy of the science. Implicit and untested assumptions regarding causality limit the credibility of prospective policy analysis, as associations observed in the past may not hold in the future (Meyfroidt, 2015). Scientific credibility may also be limited if projections rely on potentially biased inferences from limited data (Miteva et al., 2012, Pressey et al., 2015), treat uncertainty unclearly, or interpret potentially biased results poorly. These issues of untested assumptions, limited data, and imperfect use of these data are important for successful conservation decision-making: overestimating the benefits of proposed conservation interventions may lead to sub-optimal outcomes, whereas underestimating them may result in more effective options being overlooked.

Here, we outline the relevance, benefits, and challenges of integrating the principles of causal inference, and the associated principles of systematic literature review, expert elicitation, and scenario analysis, into prospective evaluation of conservation interventions. We discuss how these established and emerging techniques can be employed to (1) improve problem definition by clarifying causal assumptions, key variables, and alternative scenarios, and by selecting appropriate model frameworks, (2) improve model parameterization by identifying potential biases in data and avoiding them where possible, and (3) improve model use and interpretation through analyses of model sensitivity and of parameter and model uncertainty. These techniques are designed to encourage conservation scientists to use and interpret imperfect data more effectively, thereby delivering policy and program advice that is more scientifically credible and, if heeded by decision-makers and acceptable to stakeholders, capable of delivering improved conservation outcomes.

Section snippets

Characterizing key variables in a causal context

A key challenge in creating robust and transparent model projections of conservation interventions is to define the problem. How is the intervention expected to work within the environmental, social, and economic context? To answer this question, models that depict mechanism-based, causal relationships between interventions, processes and variables are developed, ideally explicitly and graphically (Pearl, 2009, Margoluis et al., 2013) (Box 2). Causal relationships between key variables may be
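
As a minimal sketch of such an explicit, graphical causal model (our illustration; the nodes and edges describe a hypothetical predator-control program and are not taken from the article or from Pearl, 2009), a directed acyclic graph can be encoded and queried for the shared causes that a credible projection must account for:

```python
# Illustrative sketch only: encoding hypothesized cause-effect links for a
# hypothetical invasive-predator control program as a directed acyclic graph,
# so that causal assumptions are explicit and checkable.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("trapping effort", "predator density"),
    ("predator density", "native bird abundance"),
    ("habitat quality", "trapping effort"),        # effort targeted at better habitat
    ("habitat quality", "native bird abundance"),
    ("rainfall", "native bird abundance"),
])
assert nx.is_directed_acyclic_graph(dag)

intervention, outcome = "trapping effort", "native bird abundance"

# Common causes of the intervention and the outcome open 'back-door' paths
# that bias naive associations; they must be measured and adjusted for before
# the intervention's effect can be estimated or projected to new contexts.
common_causes = nx.ancestors(dag, intervention) & nx.ancestors(dag, outcome)
print("Common causes to adjust for:", common_causes)  # -> {'habitat quality'}
```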

Parameterization: using better data

Biases are pervasive in empirical conservation research because this research is often conducted in contexts of strong personal motivations, extremely low rates of study replication, complex systems, and high intrinsic rates of variability (Iftekhar and Pannell, 2015). Causal inference, systematic literature reviews, and robust expert elicitation methods offer ways to identify and mitigate biases in data drawn from a wide variety of sources (Martin et al., 2012b, Cook et al., 2014, Martin et
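
As one hedged illustration of how structured elicitation can temper such biases (our sketch, loosely following four-point elicitation protocols; the experts, values, and 80% target level are hypothetical), individual judgments can be standardized to a common credible level before pooling:

```python
# Illustrative sketch only: standardizing and pooling four-point expert
# estimates (lowest, best guess, highest, confidence) of a parameter, here a
# hypothetical percentage reduction in nest predation from a control program.
import numpy as np

# (low, best, high, confidence_%) from each hypothetical expert
experts = [
    (10, 30, 50, 70),
    (20, 35, 45, 90),
    (10, 25, 60, 60),
]

TARGET = 80.0  # common credible level to which all intervals are rescaled

def rescale(low, best, high, conf, target=TARGET):
    """Linearly extrapolate an expert's interval to the target confidence level."""
    scale = target / conf
    return best - (best - low) * scale, best, best + (high - best) * scale

standardized = np.array([rescale(*e) for e in experts])

# Simple linear pool: average the standardized bounds and best guesses.
pooled_low, pooled_best, pooled_high = standardized.mean(axis=0)
print(f"Pooled estimate: {pooled_best:.1f}% "
      f"(~{TARGET:.0f}% interval: {pooled_low:.1f}-{pooled_high:.1f}%)")
```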

Interpretation: using data better

Biases may still be unavoidable even with greater attention to experimental design and analysis, systematic review procedures, and rigorous expert elicitation methods. For example, bias is likely in regional- or global-scale analyses, where data are not necessarily collected for the specific purpose of the evaluation (McKinnon et al., 2015). However, if data shortcomings are made transparent, improvements in model specification and interpretation may be possible. Model and data imperfections can
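
Global sensitivity analysis is one way to make such imperfections transparent. The sketch below is our illustration, assuming the SALib package is available and using a deliberately simplistic, hypothetical projection model; it shows how variance-based indices reveal which uncertain parameters most influence a projected outcome.

```python
# Illustrative sketch only: variance-based global sensitivity analysis of a
# toy projection model, using the SALib package (assumed installed).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

def projected_benefit(area_protected, compliance, leakage):
    """Hypothetical projected conservation benefit (arbitrary units)."""
    return area_protected * compliance * (1 - leakage)

# Plausible ranges for each uncertain parameter (assumed for illustration).
problem = {
    "num_vars": 3,
    "names": ["area_protected", "compliance", "leakage"],
    "bounds": [[100, 1000], [0.4, 0.9], [0.0, 0.5]],
}

# Sample the parameter space, run the model, and decompose output variance.
X = saltelli.sample(problem, 1024, calc_second_order=False)
Y = np.array([projected_benefit(*row) for row in X])
Si = sobol.analyze(problem, Y, calc_second_order=False)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.2f}, total {st:.2f}")
```

In such an analysis, parameters with large total-order indices are priorities for further data collection or expert elicitation before the projection is used to compare interventions.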

Synthesis and ways forward

To support the development of conservation interventions in complex environmental, social, economic, and ethical contexts, transparent, evidence-based models are critical. More transparent assumptions and more believable causal models engender greater confidence in the predictions of prospective evaluations, and these predictions will be more justifiable in the face of critique. This confidence in the robustness of the science is, of course, only one element contributing to the wider salience,

Acknowledgements

Funding from the Australian Research Council (http://www.arc.gov.au/) is acknowledged, including Centre of Excellence (CE110001014) (EAL, KD, MHH, GI, CM, JR, KAW), Discovery (DP150101300) (EAL, KAW), and Future Fellowship (FT100100413) (KAW) programs. This manuscript is based on discussions from the interdisciplinary workshop “Causal inference in environmental decisions” organized in Brisbane, Australia, June 2015, attended by the authors and funded by the ARC Centre of Excellence for

References (74)

  • W.J. Sutherland et al., The need for evidence-based conservation, Trends Ecol. Evol. (2004)
  • P.F. Addison et al., Practical solutions for making models indispensable in conservation decision-making, Divers. Distrib. (2013)
  • O. Arrhenius, Species and area, J. Ecol. (1921)
  • E. Bareinboim et al., A general algorithm for deciding transportability of experimental results, J. Causal Infer. (2013)
  • E. Bareinboim et al., Transportability from multiple environments with limited experiments: completeness results
  • K. Baylis et al., Mainstreaming impact evaluation in nature conservation, Conserv. Lett. (2016)
  • K.A. Bollen et al., Eight myths about causality and structural equation models
  • T.M. Brooks et al., Habitat loss and extinction in the hotspots of biodiversity, Conserv. Biol. (2002)
  • J. Bull et al., Importance of baseline specification in evaluating conservation interventions and achieving no net loss of biodiversity, Conserv. Biol. (2014)
  • D.W. Cash et al., Knowledge systems for sustainable development, Proc. Natl. Acad. Sci. (2003)
  • W.C. Clark et al., Crafting usable knowledge for sustainable development, Proc. Natl. Acad. Sci. (2016)
  • Guidelines for systematic review and evidence synthesis in environmental management, Environ. Evid. (2013)
  • A. Coreau et al., The rise of research on futures in ecology: rebalancing scenarios and prediction, Ecol. Lett. (2009)
  • P. Desmet et al., Using the species-area relationship to set baseline targets for conservation, Ecol. Soc. (2004)
  • S.Z. Dobrowski et al., Modeling plant ranges over 75 years of climate change in California, USA: temporal transferability and species traits, Ecol. Monogr. (2011)
  • S. Drakare et al., The imprint of the geographical, evolutionary and ecological context on species–area relationships, Ecol. Lett. (2006)
  • P.J. Ferraro et al., Advances in measuring the environmental and social impacts of environmental programs, Annu. Rev. Environ. Resour. (2014)
  • P.J. Ferraro et al., Money for nothing? A call for empirical evaluation of biodiversity conservation investments, PLoS Biol. (2006)
  • P.J. Ferraro et al., Measuring the difference made by conservation initiatives: protected areas and their environmental and social impacts, Philos. Trans. R. Soc. Lond. B (2015)
  • P.J. Ferraro et al., Conditions associated with protected area success in conservation and poverty reduction, Proc. Natl. Acad. Sci. (2011)
  • P.J. Ferraro et al., Estimating the impacts of conservation on ecosystem services and poverty by integrating modeling and evaluation, Proc. Natl. Acad. Sci. (2015)
  • J. Firn et al., Priority threat management of invasive animals to protect biodiversity under climate change, Glob. Chang. Biol. (2015)
  • B. Fisher et al., Moving Rio forward and avoiding 10 more years with little evidence for effective conservation policy, Conserv. Biol. (2014)
  • A. Gelman et al., Data Analysis Using Regression and Multilevel/Hierarchical Models (2006)
  • J.B. Grace et al., Structural equation modeling: building and evaluating causal models
  • N. Haddaway et al., Making literature reviews more reliable through application of lessons from systematic reviews, Conserv. Biol. (2015)
  • M.M. Hanauer et al., Implications of heterogeneous impacts of protected areas on deforestation and poverty, Philos. Trans. R. Soc. Lond. B (2015)