Let’s Get Meta… The Good Kind
In my last post we talked about using images as data. This time we’ll consider another non-traditional source of data: the results of other investigations. Using results to generate more results? That seems weird… at first. But think about how science progresses. We build on other studies all of the time! Sometimes we use others’ findings as a jumping off point. Other times, studies invite us to see if we can reproduce their findings under new conditions or with respect to our own study site or species of interest.
Modern methods of communication have made collaborating over great distances easier, but there are still scientists all over the world working on similar issues completely independently of one another. They may be working on the same phenomenon, just in a different place, or on a slightly different species. We may want to synthesize their work and see if there really is a widespread effect that is stable across a variety of situations. But how can we combine the results of all of these papers into one mega-finding?
Perhaps we are interested in the effects of heat stress on small mammals (that latest IPCC report has us rightfully spooked), so we look up all of the research articles that have studied this in the past. This is of course much easier today with the literature at our fingertips thanks to the internet; otherwise we’d be hitting the library stacks. New tools for automatically grabbing and filtering text from online papers, along with queryable APIs for databases like Google Scholar, make it easier than ever to search papers and other publication records.
What are we looking for in these papers exactly? Are we really going to read every word? No thanks! First we want to make sure we have papers that are actually about small mammals, or at least ones we think are fairly comparable. This can be done at least semi-automatically with a strategic use of keywords. Then our goal is to find a bunch of estimated effects or other quantities. Maybe they live in a table or graph; perhaps they are spelled out in the results section. In any case, these effects may arise from observing temperature changes and measuring differences in behavior or other physical responses of individuals.
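To make the keyword idea concrete, here’s a toy sketch in Python. The titles and keywords are entirely invented, and a real screening pipeline would be much fussier (stemming, synonyms, abstracts rather than titles), but the basic move is just matching:

```python
# Toy keyword screening. All titles and keywords below are made up
# for illustration -- a real search would pull these from a database.
papers = [
    "Heat stress and foraging behavior in desert rodents",
    "Thermal tolerance of small mammals in a warming climate",
    "Photosynthesis rates in alpine grasses",
]

keywords = {"heat", "thermal", "mammal", "rodent"}

def looks_relevant(title, keywords):
    """Keep a paper if any word in its title contains one of our keywords."""
    words = title.lower().split()
    return any(k in w for w in words for k in keywords)

relevant = [t for t in papers if looks_relevant(t, keywords)]
# The grasses paper gets screened out; the two mammal papers survive.
```

Crude, yes, but even this rough filter spares us from reading every word of every paper.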
Individual studies may process these data to estimate some kind of overall effect (think a slope with respect to the level of temperature change). We will also need the uncertainties associated with any estimates of interest. These uncertainties help us determine how much to weight (or trust) each estimate in our overall super-estimate. If we have a really uncertain estimate, we trust it less, so it gets a lower weight, and it won’t “count” as much towards our final answer as a more certain estimate will.
In theory, we have now turned a bunch of papers into a spreadsheet with variables like species, location, time period, sample size, estimate of the heat stress effect, and standard error of that estimate. Methods for combining all of this information into a new consensus value fall under the category of meta-analysis. These approaches all have their own nuances, but they often boil down to some kind of weighted average.
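Here’s a minimal sketch of that weighted average, in the style of a fixed-effect meta-analysis, using inverse-variance weights. The estimates and standard errors are made up purely for illustration:

```python
# Fixed-effect-style meta-analysis: an inverse-variance weighted average.
# Effect estimates and standard errors below are invented for illustration.
estimates = [0.42, 0.31, 0.58, 0.25]   # per-study heat stress effects
std_errors = [0.10, 0.05, 0.20, 0.08]  # per-study standard errors

weights = [1 / se**2 for se in std_errors]  # less uncertain -> more weight

pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5       # SE of the combined estimate
```

Notice the logic of the weights at work: the study with the smallest standard error pulls the pooled estimate toward itself, and the combined standard error comes out smaller than any single study’s, which is exactly the payoff of pooling.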
When we do a meta-analysis we might find that the individual uncertainties from different studies don’t really reflect the true uncertainty we have about the big heat stress question. When we compare the confidence intervals of the effects we’re assessing, we expect many of them to overlap, since most should contain the “true effect”. If this isn’t the case, we have uncovered what is called “dark uncertainty”, which can only be revealed by looking at the results of many studies.
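One common diagnostic for this excess scatter is Cochran’s Q statistic, which compares how spread out the estimates are to how spread out their own standard errors say they should be. A minimal sketch, with invented numbers:

```python
# Cochran's Q: do the studies scatter more than their stated SEs predict?
# Effect estimates and standard errors are invented for illustration.
estimates = [0.42, 0.31, 0.58, 0.25]
std_errors = [0.10, 0.05, 0.20, 0.08]

weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Weighted squared deviations of each study from the pooled estimate.
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
df = len(estimates) - 1  # under homogeneity, Q ~ chi-squared with k-1 df
```

If Q lands far above its degrees of freedom, the studies disagree more than their reported uncertainties can explain, and that gap is the “dark uncertainty”.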
Sometimes differences across studies are of interest themselves. Maybe we want to understand factors that are associated with this variation we are seeing. Are mammals that live in particular regions less sensitive to heat stress? Instead of going to a bunch of regions ourselves we can stand on the shoulders of others, collecting single estimates across multiple study sites.
Now we effectively want to estimate a slope in a meta-analysis framework. There are different fancy ways of doing this, but you can just picture those uncertainties being used as weights in a regression of some shape or form.
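Here’s a hand-rolled sketch of that idea: a weighted least-squares slope relating each study’s estimate to a study-level covariate. In practice you’d reach for a proper meta-regression routine (the R package `metafor` is a popular choice), and all the numbers below, including the covariate, are invented to show the mechanics:

```python
# Toy meta-regression: regress per-study effect estimates on a
# study-level covariate (say, mean site temperature), using
# inverse-variance weights. All numbers are invented for illustration.
temps = [12.0, 18.0, 25.0, 30.0]       # hypothetical covariate per study
estimates = [0.20, 0.30, 0.45, 0.55]   # per-study effect estimates
std_errors = [0.08, 0.05, 0.10, 0.07]

weights = [1 / se**2 for se in std_errors]
W = sum(weights)

# Weighted means of the covariate and the estimates.
x_bar = sum(w * x for w, x in zip(weights, temps)) / W
y_bar = sum(w * y for w, y in zip(weights, estimates)) / W

# Weighted least-squares slope: how the effect shifts with temperature.
slope = sum(w * (x - x_bar) * (y - y_bar)
            for w, x, y in zip(weights, temps, estimates)) / \
        sum(w * (x - x_bar) ** 2 for w, x in zip(weights, temps))
```

Same recipe as before: each study contributes in proportion to how certain it is, only now the contributions feed a slope instead of a single average.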
Meta-analysis lets us ask bigger questions than we could tackle on our own, no trip to the field necessary!
Have a quantitative term or concept that mystifies you? Want it explained simply? Suggest a topic for next month → @sastoudt