STUDIES FALL SHORT BY ASSUMING CORRELATIONS WITHOUT COMPLETE ANALYSIS, FERRARO SAYS
To draft environmental laws and regulations, policymakers often rely on studies produced by researchers who examine human impact on nature.
However, the methods behind many such studies are flawed and thus ill serve the makers of environmental policy at local, state, and national levels, according to a new paper in the Proceedings of the National Academy of Sciences.
The central problem with the studies is that they frequently assume that a correlation between a policy and an outcome reflects causation, without sufficiently analyzing all the available data, say the paper’s co-authors, researchers at Johns Hopkins University, Duke University, and the University of California, Davis.
In an interview, co-author Paul Ferraro, a Bloomberg Distinguished Professor at Johns Hopkins University, cites the example of a national park. Some researchers might conclude that the park designation saved the land from degradation. But Ferraro points out that the land was probably chosen for a park because it lacked economic value and therefore would not have been a candidate for development, with or without the park designation.
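The selection problem Ferraro describes can be made concrete with a toy simulation. This sketch is not from the paper, and all numbers are hypothetical: parcels of land vary in economic value, degradation is driven entirely by that value, and parks are assigned to low-value land, so park designation has no causal effect at all.

```python
import random

random.seed(0)

# Hypothetical illustration (not from the paper): land parcels with
# economic value. Degradation risk rises with value, and parks are
# assigned only to low-value land. The true causal effect of park
# designation in this simulation is exactly zero.
parcels = []
for _ in range(10000):
    value = random.random()              # economic value of the parcel
    is_park = value < 0.3                # low-value land becomes a park
    degraded = random.random() < value   # degradation driven by value alone
    parcels.append((is_park, degraded))

def degradation_rate(group):
    return sum(d for _, d in group) / len(group)

parks = [p for p in parcels if p[0]]
others = [p for p in parcels if not p[0]]

# Naive comparison suggests parks "prevent" degradation, even though
# designation had no causal effect: the gap is pure selection bias.
naive_effect = degradation_rate(parks) - degradation_rate(others)
print(f"naive estimated park effect: {naive_effect:.2f}")
```

Because the parks sit on land that was unlikely to be degraded anyway, the naive comparison badly overstates their benefit; a credible design would need to adjust for land value or exploit variation in designation that is independent of it.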
Such failures to verify the foundation on which scientific conclusions are drawn, a common shortcoming across much of the literature on coupled human and natural systems (CHANS), make it difficult to gauge the legitimacy of the studies’ findings and undercut their credibility with policymakers, say the co-authors.
“Policymakers increasingly are demanding empirical evidence about what works to benefit both people and the environment, but scientists who study CHANS are often pure modelers. To bridge this divide, we have to reframe our approach, using observable data in a way that puts the alleged causal links in the models on firmer scientific ground,” adds Ferraro.
One possible solution, the co-authors say, would be to combine approaches from other branches of economics and ecology.
“No single methodological approach, or disciplinary perspective, can provide the evidence that scientists and policymakers need to advance our understanding of how humans and the environment interact,” says co-author James Sanchirico, a professor of environmental science and policy at UC Davis. “Our tools for understanding the impacts of human activities that disturb the environment, as well as human attempts to mitigate or reverse that disturbance, are flawed and biased in a variety of ways. Understanding those flaws and biases, and how combinations of empirical methods can mitigate them, is where scientific attention should be directed.”
“We’ve created new laws and incentive programs to reduce how much humans pollute the Chesapeake Bay, and it’s only natural to ask if those laws and programs have had an impact,” says Ferraro, who is a faculty member at Johns Hopkins’s Carey Business School, Bloomberg School of Public Health, and Whiting School of Engineering. “But in complex systems in which human behavior and the natural environment are tightly linked, answering that question is a lot harder than you might imagine. In these coupled human and natural systems, isolating cause and effect is not as straightforward as studying the effect of a new drug or medical treatment.”
“Simply put: If you’re going to study linkages between human and natural systems, you need to look at the two systems together,” says co-author Martin D. Smith, George M. Woodwell Distinguished Professor of Environmental Economics at Duke’s Nicholas School of the Environment.
To make credible inferences about causal links in complex systems, scientists make two basic assumptions, often implicitly, when analyzing their data: excludability and no interference. In non-technical terms, these assumptions are akin to what lab scientists assume when randomizing treatments across petri dishes that do not interact. Yet these assumptions are rarely satisfied in published studies on CHANS, where multiple confounding variables, both observed and unobserved, create complex feedback across social and environmental dimensions.
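The "no interference" assumption can also be illustrated with a hypothetical sketch (not from the paper). Suppose protecting some fishing patches displaces fishing effort onto the unprotected patches used as a comparison group; the naive contrast then mixes the true benefit of protection with harm that protection itself inflicted on the controls. All quantities below are made up for illustration.

```python
# Hypothetical sketch (not from the paper) of an interference violation:
# effort from protected patches spills over onto the open patches that
# serve as the comparison group.
n_protected, n_open = 30, 70
base_effort = 1.0

# Displaced effort is spread evenly across the open patches.
extra = n_protected * base_effort / n_open

def stock(effort):
    # Toy linear depletion: each unit of effort removes 5 units of stock.
    return 10.0 - 5.0 * effort

# Observed contrast: protected patches vs. open patches that absorbed
# the displaced effort.
contrast_with_spillover = stock(0.0) - stock(base_effort + extra)

# Counterfactual contrast if no effort had been displaced: the true
# effect of protection alone.
contrast_without_spillover = stock(0.0) - stock(base_effort)

print(contrast_with_spillover)     # larger: spillover degraded the controls
print(contrast_without_spillover)  # the true effect of protection alone
```

Because the comparison patches are worse off than they would have been without the policy, the observed contrast overstates the reserve's benefit, which is exactly the kind of interference the authors argue most CHANS studies never check for.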
“Observational data only gets you so far. Modeling only gets you so far. You need to use multiple approaches, with the aim of identifying sources of bias in each approach and then triangulating on credible inferences,” Smith says. “The trouble is, few studies do this.”
In their analysis, the researchers reviewed one of the largest CHANS literatures: studies measuring the impacts of marine protected areas on fisheries. Of nearly 200 studies that aimed to determine whether protected areas helped or harmed fisheries and the communities that depend on them, only one addressed whether the two basic assumptions of excludability and no interference were satisfied.
The researchers’ paper, “Causal Inference in Coupled Human and Natural Systems,” is derived from materials they presented at the National Academy of Sciences’ Arthur M. Sackler Colloquium, “Economics, Environment and Sustainable Development,” in January 2018 in Irvine, California.