- Scientists are keen to get better data and evidence into the hands of decision-makers and the public in general. However, systematically sorting, assessing, and synthesizing scientific data from reams of journal articles takes time and resources.
- Rapid methods for evidence synthesis promise faster results, but their use can have substantial drawbacks and limitations that ultimately affect the accuracy and validity of findings.
- We applaud the launch of the Conservation Effectiveness series on Mongabay, which spotlights the effects and effectiveness of prominent conservation strategies for a broad readership. However, some of the compromises made to expedite and simplify the synthesis approach have implications for the replicability of the methods and for confidence in the final results.
- This post is a commentary. The views expressed are those of the authors, not necessarily Mongabay.
This commentary is in response to Mongabay’s Conservation Effectiveness series. You can read Mongabay’s response here.
Scientists are keen to get better data and evidence into the hands of decision-makers and the public in general. However, systematically sorting, assessing, and synthesizing scientific data from reams of journal articles takes time and resources. Rapid methods for evidence synthesis promise faster results, but their use can have substantial drawbacks and limitations that ultimately affect the accuracy and validity of findings.
We applaud the launch of the Conservation Effectiveness series on Mongabay, which spotlights the effects and effectiveness of prominent conservation strategies for a broad readership. However, some of the compromises made to expedite and simplify the synthesis approach have implications for the replicability of the methods and for confidence in the final results. Given that the series has the potential to reach new and influential audiences, we highlight several areas for caution and clarity.
Bias
One of the major benefits of systematic reviews and evidence syntheses is their ability to build consensus around which strategies or actions are effective, without motive or manipulation. The alternative is choosing single data points or cherry-picking results to fit a desired narrative. Thus, efforts to avoid bias in determining which studies are included in a synthesis, and how they are assessed, are critical to ensuring that a synthesis truly reflects the existing evidence base.
The recent Conservation Effectiveness series was susceptible to bias in several ways. First, it limited its search to the top 1,000 results from Google Scholar. Reliance on a single database risks excluding relevant studies from a synthesis, and several prominent systematic maps and reviews were indeed missed (e.g., a systematic review by 3ie on decentralized forest management). Second, no clearly stated criteria were provided for which studies were included in each review. Without these criteria, the reader cannot assess how reliably the researchers selected relevant studies, or how an individual reviewer’s biases might have affected what was included.
Transparency
Systematic approaches to evidence synthesis follow extensively tested guidelines for planning, searching, screening, critically appraising, and extracting data from relevant studies. These steps are usually documented in a protocol prepared before starting the review. The protocol sets out the rationale, hypotheses, and detailed methods of the planned synthesis, and is written so that others can understand why and how key steps were conducted.
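To make concrete what such a protocol buys in terms of transparency, here is a minimal, entirely hypothetical sketch in Python. The review topic, criteria, and field names are our own illustration, not the CEE template; the point is simply that pre-registered inclusion criteria turn screening into an auditable, repeatable step rather than an unstated judgment call.

```python
# A minimal, hypothetical sketch (not the CEE protocol format) of pre-registered
# inclusion criteria encoded explicitly, so every screening decision is auditable.

from dataclasses import dataclass

# Illustrative criteria for an imagined review of forest-certification impacts.
INCLUSION_CRITERIA = {
    "population": "tropical forest management units",
    "intervention": "third-party forest certification",
    "comparator": "matched uncertified units or before/after data",
    "outcomes": {"deforestation rate", "household income"},
    "designs": {"randomized", "quasi-experimental", "controlled before-after"},
}

@dataclass
class Study:
    title: str
    design: str
    has_comparator: bool
    reported_outcomes: set

def screen(study: Study) -> tuple[bool, str]:
    """Return an include/exclude decision plus the reason, so the choice can be
    reported alongside the protocol and independently checked."""
    if study.design not in INCLUSION_CRITERIA["designs"]:
        return False, "excluded: ineligible study design"
    if not study.has_comparator:
        return False, "excluded: no comparator"
    if not study.reported_outcomes & INCLUSION_CRITERIA["outcomes"]:
        return False, "excluded: no relevant outcome"
    return True, "included"

# Example: a study with no comparator is excluded, and the reason is recorded.
print(screen(Study("Certification and income, 2014", "quasi-experimental",
                   has_comparator=False, reported_outcomes={"household income"})))
```

Publishing this kind of specification alongside the review is what allows others to see why any given study was in or out.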
The synthesis approach used for the Conservation Effectiveness series lacked transparency. It did not adequately describe the criteria for selecting studies, nor how disagreements between reviewers were resolved. Further, search results from Google Scholar are not repeatable, because its proprietary ranking algorithm changes frequently. The lack of documentation raises questions about the reliability of the approach and prevents other researchers from replicating the methods.
Subjectivity
Research synthesis is useful for translating large bodies of data into broad insights. The visualizations and infographics used by Mongabay for the series are compelling single-page summaries. However, the simple design of these resources required subjective decisions about which studies were “good” or “bad.” For example, the series denotes direction of impact using a “stop light” scheme in which outcomes are coded “green” for good and “red” for bad. In this representation, all “green” articles are treated as being of equal value, when in fact the “positive” outcomes reported in one paper may differ substantially in effect size from those reported in another (and likewise for yellow and red articles).
Subjective treatment of results can also mask important patterns. Counting study outcomes as simply “positive” or “negative” makes sweeping assumptions about the direction and magnitude of impacts. Such “vote counting” ignores the wide array of impacts occurring within and between populations and time frames within a single study. Importantly, in addition to not reflecting varying degrees of impact, the series’ installments do not consider differences in study design and quality, giving equal weight to studies whether poorly designed or rigorously executed.
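A small numerical illustration shows why this matters. The numbers below are invented for the sake of the example, not drawn from the series or from any real studies; they simply show how vote counting can point one way while a synthesis that weights studies by their precision points the other.

```python
# A minimal illustration (hypothetical numbers) of how "vote counting" and an
# inverse-variance weighted summary of the same studies can disagree.

studies = [
    # (effect size, variance of the effect size estimate)
    (+0.10, 0.40),   # three small, imprecise studies with weakly positive results
    (+0.15, 0.50),
    (+0.05, 0.45),
    (-0.60, 0.02),   # one large, well-designed study with a clear negative result
]

# Vote counting: tally the sign of each outcome, ignoring size and precision.
votes = sum(1 if es > 0 else -1 for es, _ in studies)

# Fixed-effect meta-analysis: weight each study by the inverse of its variance,
# so precise (usually larger, better-designed) studies count for more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

print(f"Vote count: {votes:+d} (looks 'mostly positive')")
print(f"Inverse-variance pooled effect: {pooled:+.2f} (clearly negative)")
```

Three small, imprecise studies outvote one large, well-designed one, yet the pooled estimate, which gives precise studies more weight, tells the opposite story.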
False confidence
The overall implication of these design flaws is that the methods are neither replicable nor defensible. Such failures inhibit the capacity of other researchers to verify, repeat or update these syntheses, and thus prevent future assessment of progress in filling knowledge gaps in the evidence base. And because the search for evidence was biased and incomplete, outstanding questions remain about any conclusions drawn regarding the state of the evidence. Subjective assessment of outcomes (or vote counting) is particularly misleading. In the case of certification, for instance, the series’ findings differ substantively (in domain, magnitude and direction) from the findings of the handful of quasi-experimental studies.
Opportunities for improvement
We recognize that rapid approaches appear desirable to both researchers and donors, but they also carry risks and limitations such as those we list above. We implore the series contributors, and readers considering embarking on syntheses of their own, to seek higher standards for honestly assessing conservation effectiveness; we strongly recommend the freely accessible guidelines from the Collaboration for Environmental Evidence. Furthermore, the Eklipse project has provided guidance on a wide range of knowledge synthesis methods (21 so far), together with their limitations, including lower-cost approaches such as expert elicitation and review methods with shortcuts such as rapid evidence assessment.
However, for focused questions that claim to assess the effectiveness of major interventions, as posed in the Conservation Effectiveness series, such approaches may provide misleading findings, especially when they appear to challenge existing and more rigorously conducted systematic reviews.
We cannot stop, or even respond to, every poor or substandard review that is put out in the scientific or popular literature. As a community, we are working toward internationally agreed methodological standards, with rigorous, well-documented evidence synthesis methods embedded in environmental decision-making. From this perspective, clear methods reporting and adherence to standards are crucial. What we need now is for trusted sources like Mongabay and others to work with us and commit to those standards.
The authors are members of the Collaboration for Environmental Evidence and/or the Science for Nature and People Partnership Working Group on Evidence-based Conservation. The views are the authors’ own and do not reflect those of their employers or affiliated institutions.
Madeleine McKinnon, Paul G. Allen Philanthropies, Seattle WA, U.S.A.
Samantha Cheng, Arizona State University, U.S.A.
Lynn Dicks, University of East Anglia, U.K.
Ruth Garside, University of Exeter, U.K.
Andrew Pullin, Bangor University, U.K.
Claudia Romero, University of Florida, U.S.A.