At the end of last year, a new publication by a consortium of academics highlighted serious flaws in the research of West et al. (2023), rebutting some of REDD+'s fiercest critics. As might be expected, this has reignited online debate over REDD+ project baselines and, more surprisingly, raised questions about the role of rebuttals in research – a point we'd like to address.
But first, to put this latest debate in context: West et al. (2023) was a widely reported paper published in Science that was highly critical of the methodologies through which projects calculate baselines (i.e. determine how much forest would have been lost if the project didn't exist). Given that the research was published in such a prestigious academic journal, we would have expected the results to be reliable. Having worked within the REDD+ space for years, however, we know they simply aren't.
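To make the baseline concept concrete, here is a minimal, purely illustrative sketch of how a baseline feeds into credit calculations. Every figure below is hypothetical, and the one-line subtraction stands in for accounting rules that are far more involved in practice.

```python
# Illustrative only: how a REDD+ baseline feeds into credit calculations.
# All numbers are hypothetical; real methodologies are far more involved.

BASELINE_DEFORESTATION_HA = 1_000   # ha/yr expected without the project (the baseline)
OBSERVED_DEFORESTATION_HA = 200     # ha/yr actually measured in the project area
CARBON_DENSITY_TCO2_PER_HA = 400    # tCO2e stored per hectare of forest

avoided_ha = BASELINE_DEFORESTATION_HA - OBSERVED_DEFORESTATION_HA
credits_tco2 = avoided_ha * CARBON_DENSITY_TCO2_PER_HA

print(f"Avoided deforestation: {avoided_ha} ha/yr")
print(f"Credits issued: {credits_tco2:,} tCO2e/yr")
# If the baseline is overstated, the credits are overstated by the same
# margin, which is exactly why baseline methodology is the crux of this debate.
```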
In fact, far from it. West et al. presented only one method of creating control areas to test whether baseline scenarios are accurate; there are many. And now we have to ask whether even this single method was applied accurately, because Mitchard et al. argue it was not, and have backed that argument up with convincing evidence.
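For readers unfamiliar with the method at issue, a control-area test compares deforestation inside a project with deforestation in statistically similar unprotected areas, which serve as the counterfactual. The sketch below is our own deliberately simplified illustration of that idea, using invented data and a basic nearest-neighbour match on three hypothetical covariates; it is not West et al.'s pipeline, nor Mitchard et al.'s re-analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented covariates for candidate control plots: [slope (deg), km to road, % forest cover]
controls = rng.uniform([0, 0, 50], [30, 50, 100], size=(500, 3))
control_defor = rng.uniform(0.0, 0.05, size=500)  # invented annual deforestation rates

project = np.array([[12.0, 8.0, 85.0]])  # one hypothetical project plot

# Standardise covariates so no single one dominates the distance metric
mu, sigma = controls.mean(axis=0), controls.std(axis=0)
dist = np.linalg.norm((controls - mu) / sigma - (project - mu) / sigma, axis=1)

# Take the k most similar control plots as the counterfactual
k = 25
matched = np.argsort(dist)[:k]
counterfactual_rate = control_defor[matched].mean()

print(f"Matched-control deforestation rate: {counterfactual_rate:.3%}/yr")
```

The point of the sketch is that the counterfactual is only as good as the matching choices behind it: change the covariates, the matching rule, or the control pool, and the verdict on a project's baseline can flip.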
Mitchard et al.'s findings mirror the results of our own analysis, in which we found that, taken together, the missteps in West et al.'s methods render the results patently unreliable.
Without a critical eye on the methods of analysis and calculation, inaccurate results – and the headlines they generate – do damage. In this case, they risked undermining one of the only scalable approaches available for halting deforestation by shifting market demand away from carbon credits generated by REDD+ projects. Or, as the authors of the latest analysis put it, these inaccuracies risked "cutting off finance for protecting vulnerable tropical forests from destruction at a time when funding needs to grow rapidly."
While many leading market stakeholders have welcomed Mitchard et al.’s comprehensive rebuttal, others appear to miss the point, dismissing these recent findings because “the reviewers at Science thought they did an okay job”. We respectfully disagree.
Peer review does not end with publication. By highlighting the inherent issues in West et al.'s methodology, Mitchard and colleagues demonstrate how those issues undermine the reliability of the results, and therefore the extent to which they can be trusted. Moreover, the more comprehensive understanding of methodologies that Mitchard et al. offer paves the way for more accurate baseline assessments, which can only benefit the REDD+ endeavor and, by extension, the world.
Undoubtedly this debate will, and should, continue. But while it does, we must remember that the data prove REDD+ already works, despite the highly complex and challenging environments in which projects are often located. Based on these successes, the model is ready to scale, aided by innovations such as new jurisdictional approaches to developing baselines and the Equitable Earth Standard.
So if we take just one thing away from Mitchard et al. and the discussions of recent days, let it be this: REDD+ methodologies are robust today, and with further critical analysis we can and will continue to strengthen them.