Don’t just believe the headline – understanding research methods matters

12 Jan 2024

At the end of last year, a new publication by a consortium of academics highlighted serious flaws in research by West et al. (2023), refuting some of REDD+’s fiercest critics. As might be expected, this has reignited online debate over REDD+ project baselines and, more surprisingly, raised questions about the role of rebuttals in research – a point we’d like to address.

But first, to put this latest debate in context: West et al. (2023) was a widely reported paper published in Science that was highly critical of the methodologies projects use to calculate baselines (i.e. to determine how much forest would have been lost had the project not existed). Given that the research was published in such a prestigious academic journal, one would expect the results to be reliable. However, having worked in the REDD+ space for years, we know they simply aren’t.

Far from it. West et al. presented only one method of creating control areas to test whether baseline scenarios are accurate; there are many. And now we must ask whether even this single method was applied correctly, because Mitchard et al. say it was not – and they back this up with some convincing evidence:

  1. The Global Forest Change dataset was used inappropriately. A detailed analysis of deforestation detection error revealed that West et al.’s findings significantly downplayed the risk of forest loss within REDD+ regions – resulting in an underestimation of project carbon benefits by nearly 90%.
  2. Control areas were unrealistic. Most projects were compared to control areas with less historic forest loss, and accordingly less future risk of deforestation – leading to the inaccurate conclusion that baselines are widely overstated.
  3. The findings cannot be statistically validated. A series of validation and sensitivity experiments by the rebuttal authors demonstrates that the West et al. model is deeply flawed, with a high rate of failure – producing unsupportable results.
  4. Calculation errors were abundant. West et al. miscalculated project emissions reductions due to two independent calculation errors – resulting in an underestimation of project effectiveness by more than half.
  5. The sample size is too small. West et al. analyzed only 23% of active REDD+ projects, which is not sufficient to support broad conclusions about the REDD+ endeavor.

These findings mirror those of our own analysis, in which we found that, taken together, the missteps in West et al.’s methods make the results patently unreliable.

Without a critical eye on the methods of analysis and calculation, inaccurate results – and the headlines they lead to – do damage. In this case, they risked undermining one of the only scalable approaches available for halting deforestation by shifting market demand away from carbon credits generated by REDD+ projects. Or, as the authors of the latest analysis put it, these inaccuracies risked “cutting off finance for protecting vulnerable tropical forests from destruction at a time when funding needs to grow rapidly.”

While many leading market stakeholders have welcomed Mitchard et al.’s comprehensive rebuttal, others appear to miss the point, dismissing these recent findings on the grounds that “the reviewers at Science thought they did an okay job”. We respectfully disagree.

Peer review does not end with publication. By highlighting the inherent issues in West et al.’s methodology, Mitchard and colleagues demonstrate the impact those issues have on the reliability of the results, and therefore the extent to which they can be trusted. Moreover, the more comprehensive understanding of methodologies that Mitchard et al. offer paves the way for more accurate baseline assessments – which can only benefit the REDD+ endeavor, and therefore the world.

Undoubtedly this debate will, and should, continue. But while it does, we must remember that the data shows REDD+ already works, despite the highly complex and challenging environments where projects are often located. Based on these successes, the model is ready to scale – a process that will be aided by innovation, including new jurisdictional approaches to developing baselines and the development of the Equitable Earth Standard.

So if we take just one thing away from Mitchard et al. and the discussions of recent days, let it be this: the methodologies of REDD+ are robust now, and with further critical analysis we can and will continue to strengthen them.

References

  1. Mitchard, E., et al. (2023) “Serious errors impair an assessment of forest carbon projects: A rebuttal of West et al.” Available at SSRN: https://ssrn.com/abstract=4661873 or http://dx.doi.org/10.2139/ssrn.4661873
  2. West, T.A.P., et al. (2023) “Action needed to make carbon offsets from forest conservation work for climate change mitigation”. Science 381, 873-877.
  3. Pauly, M., Crosse, W. and Tosteson, J. (2023) “A critical review of West et al. (2023)”. Science eLetter.
  4. Greenfield, P. (2023) “Revealed: more than 90% of rainforest carbon offsets by biggest certifier are worthless, analysis shows”. The Guardian. Available at: https://www.theguardian.com/environment/2023/jan/18/revealed-forest-carbon-offsets-biggest-provider-worthless-verra-aoe
  5. Calyx Global (2023) “The REDD controversy: Back in the news”. Insights. Available at: https://calyxglobal.com/blog-post?q=78