Forecasting the Fate of an Ecosystem: The Double-Edged Sword of Predictive Modelling
Let’s get the humblebragging out of the way – this week a paper I wrote was published in the Journal of Applied Ecology. It’s a paper I genuinely enjoyed writing, and it delivers a tangible outcome – a way of forecasting the establishment of invasive species within a region. The applications are obvious: knowing where an invasive species is likely to pop up lets us detect it early and take action quickly.
Yet that very tangibility of the outcome makes this the paper whose consequences I most fear. So as an exorcism of my general nerves (and as a soft disclaimer), I wanted to talk about why forecasting or predicting anything can be such a complicated undertaking for an ecologist.
When people think of an ecologist, what may come to mind is an Attenborough-esque type wading through a bog with a net, or trekking through a forest with a notebook and binoculars. Obviously collecting specimens and data in the field is still an integral part of ecology (not to mention that the field is rapidly becoming more diverse than just old white guys), yet more and more often ecologists will be spending time in front of a computer, breaking down the data they’ve collected using some form of statistical software.
Over the last 20-30 years, advances in technology have made sophisticated statistical analysis more and more accessible to ecologists, including those not well versed in probability and statistical theory. At the same time, a rise in interdisciplinarity – the combination of multiple disciplines working on a shared project – has brought together mathematicians, statisticians and ecologists. With that sharing of knowledge, statistical modelling – and the subsequent forecasting or predictive work derived from it – has come to the fore in ecology.
The concept is relatively straightforward. If we can use a species’ varying abundance or presence over a region and compare it to different environment characteristics as they vary over that region, we can create a statistical model to give us an idea of which of those characteristics have an impact on one or more species. Once we understand that impact, we can use that model to try to predict what will happen in another region with similar characteristics, or in the future as those characteristics change.
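As a toy illustration of that idea (and emphatically not the actual model from the paper), here’s a minimal sketch in Python: the presence or absence of a species at surveyed sites is related to two environmental characteristics with a logistic regression, which can then “predict” the probability of occurrence at unsurveyed sites. Every number, site, and effect size here is invented purely for illustration.

```python
# Minimal species-distribution-model sketch: relate presence/absence at
# surveyed sites to environmental characteristics, then predict occurrence
# probability at new sites. All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_sites = 200

# Two standardised environmental characteristics per surveyed site,
# e.g. summer temperature and lake depth (both hypothetical).
temperature = rng.normal(0.0, 1.0, n_sites)
depth = rng.normal(0.0, 1.0, n_sites)
X = np.column_stack([np.ones(n_sites), temperature, depth])  # with intercept

# Pretend the species favours warm, shallow lakes: presence probability
# increases with temperature and decreases with depth.
true_logit = 1.2 * temperature - 0.8 * depth
presence = (rng.random(n_sites) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Fit a logistic regression by plain gradient descent (no extra libraries).
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - presence) / n_sites

# "Forecast" establishment probability at two unsurveyed sites.
new_sites = np.array([[1.0,  1.5, -0.5],   # warm and shallow
                      [1.0, -1.5,  1.5]])  # cold and deep
probs = 1.0 / (1.0 + np.exp(-new_sites @ w))
print(probs)  # the warm, shallow site should score much higher
```

Real species distribution models are far richer than this – more predictors, spatial structure, detection error – but the core logic of “fit to one set of conditions, predict under another” is exactly what the sketch shows.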
It’s not hard to see the advantages inherent in predicting scenarios based on a model. We’re living in a world where human activity is changing landscapes all the time. If we can predict how species will decline, disperse, or simply die out, we can help them out. If we can figure out where they’ll thrive, we can help protect that area.
Yet any model comes with mountains of uncertainty. How sure are we that one of those environmental characteristics is having that impact? Perhaps it’s another aspect of the environment that is simply correlated with the one we’re measuring. Random events could also change the likelihood of our predictions. And if we’re trying to predict how well a species will take to a similar ecosystem in a new region, do our assumptions about one ecosystem really translate that well to a totally new one?
Often this uncertainty is quantifiable and easy to communicate. We can put a ‘confidence interval’ around a claim, saying that although a vole population will likely grow by 2,000 next year, there’s a 2.5% chance it won’t grow at all. We can say that there’s a 75-80% chance an invasive species will pop up in a lake nearby, and that given such an occurrence there’s a 20-30% chance a local species will die out.
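To make the vole example concrete, here’s a toy sketch (with entirely made-up numbers chosen to mirror the figures above) of how such a statement could be derived: simulate plausible outcomes from a model’s estimated growth distribution, then read the interval and the tail probability straight off the simulations.

```python
# Toy sketch of quantified uncertainty: if a model estimates next year's
# vole population growth at ~2,000 animals with considerable uncertainty,
# we can simulate plausible outcomes and read probabilities off the tails.
# The numbers are invented to mirror the example in the text.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical distribution for next year's growth: centred on +2,000
# voles, with the spread chosen so ~2.5% of outcomes fall at or below zero.
simulated_growth = rng.normal(loc=2000.0, scale=2000.0 / 1.96, size=100_000)

lo, hi = np.percentile(simulated_growth, [2.5, 97.5])
p_no_growth = float(np.mean(simulated_growth <= 0))

print(f"95% interval for growth: {lo:,.0f} to {hi:,.0f} voles")
print(f"chance the population doesn't grow at all: {p_no_growth:.1%}")
```

The “2.5% chance it won’t grow at all” is just the lower tail of that 95% interval – which is exactly why the two numbers travel together in the claim above.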
Despite these quantifications, giving tangible results – or indeed any sort of predictive scenario – is often seen as an invitation for someone to use those results in forming relevant conservation policy. Whether we like it or not, making new knowledge available can necessitate action. If a paper predicts the extinction of a beloved species in a certain region, we are implicitly stating that something needs to be done, whether it was our intention or not. Add that to the fact that policy makers and conservationists are often on the lookout for those definitive predictions, and what we think of as a scientific article can unknowingly become the basis for policy.
This can lead to problems. Poorly constructed or poorly communicated predictions can lead to poor conservation policy. That in turn can lead to distrust in research, wasted funds and, at worst, damage to the species involved. If a re-introduction program selects the wrong area for an endangered species, or managers of invasive species focus too much on one region and let others slip through the net, the work becomes counterproductive. For a truly disastrous result, check out my earlier interview on the subject below.
Most scientific papers are reasonably upfront about the aforementioned uncertainty. Many even caution against using their numbers as a guide for creating policy, with a strong disclaimer indicating where more research is needed. Surely this is enough to exonerate a scientist from any blame if the prediction doesn’t turn out to be correct?
Yet an ecologist can’t afford to be too careful, constantly couching their claims in uncertainty and doubt. It’s the nature of academia that if something isn’t inherently useful, there’s no point publishing it. It’s made worse by the constant obsession with impact factors (IFs) – the metric by which a scientific journal’s influence is measured. Often we’re cajoled into dressing a paper up, perhaps overstating its claims, in order to get into a journal with a higher IF, rather than a more relevant one with a lower IF. It’s a double-edged sword, and one that I have not particularly enjoyed dancing on.
Another issue is scientific language – often loaded with jargon and inaccessible to people not deeply versed in the discipline. It can prevent the nuance of a prediction from getting across. Many journals are encouraging more reader-friendly language these days, or releasing papers alongside ‘plain language’ versions. Yet progress is slow.
I’m fortunate enough that I’ve previously met the people most likely to use the research in the paper I’ve just seen published – I’ve spoken to them before about the research, and I can be open and honest about its strengths and weaknesses. Yet this isn’t the case for everyone. Many ecologists publish work without knowing who will pick up on and run with it. Others simply don’t care, too wrapped up in the pressure to publish.
Looking ahead, I’m hoping scientists continue the current trend towards more accessible science, resulting in a more widespread understanding of how predictive modelling and forecasting can be used. These are important and extremely useful tools in conservation, and a proper understanding of them will be a huge bonus for any policymaker.
Dr. Sam Perrin is a freshwater ecologist who completed his PhD at the Norwegian University of Science and Technology, and who realises he just spent hours writing a piece that undermines his research and honestly isn’t sure how he feels about it. You can read more about his research and the rest of the Ecology for the Masses writers here, see more of his work at Ecology for the Masses here, or follow him on Twitter here.