## “Those Things Are Evil”: Prediction Intervals in Mixed Models

Suppose we study salamanders and want to predict body mass based on their body length. We also want to account for different access to food and differing levels of competition at each site we’ve collected our salamanders from. So we fit a linear model with a random effect for site, as we only have samples from a subset of sites. (Want a refresher on random effects? We’ve got you covered.)

This seems simple enough. We’ve now got what’s commonly referred to as a **mixed model**, and we know how to do typical things like compute a confidence interval for the coefficient on body length. That would give us a sense of the relationship between body length and body mass, but let’s pause and remember *what* we are actually confident in. A confidence interval tells us something about a *population parameter*: the value we would get if we could actually compute the quantity we care about from everyone or everything in the population (here, every salamander).

But sometimes we have a different goal in mind. Maybe we don’t actually want to interpret the relationship between body length and body mass for salamanders. Instead, we want to be able to predict the body mass of the next salamander we come across after we’ve measured its body length. To answer this type of question, we’ll need a different type of interval: a prediction interval. We will expect a future observation to fall within this type of interval with a specified probability given what we’ve seen so far.

Intuitively, it seems harder to say something concrete about the next salamander we see than it does to say something about the overall, average relationship between two properties of salamanders. Therefore we’d expect a prediction interval to be wider than a confidence interval.

Can we just rely on software to give us what we need, now that we know what we actually are looking for? Sort of. Any reputable function that lets you predict a value based on a mixed model will give you a prediction interval if you want it (a simple example here), but remember those random effects? Well, they make things a little more complicated.

To create a prediction interval we will have to account for the uncertainty in the random effect part of the model, namely the variance term that defines it. We can account for that extra uncertainty ourselves, and we can do it in a couple of different ways. But first we’ll have to review the bootstrap procedure.

The bootstrap gives us a way to estimate the sampling distribution of something we are interested in. In other words, it tells us how much our estimate is likely to vary just because of the sampling process and its natural variation. Ideally we could take enough different samples from the population to figure this out without any fancy stats work, but ecologists are not traditionally made of money and time. So instead we take one of two approaches: we either resample from our original sample (a non-parametric bootstrap) or generate new data from the estimated data-generating process (a parametric bootstrap), which is what we’ll do here.

You may have used a bootstrap approach before to get a confidence interval for something like the coefficient for body length. But we can also use this approach to get a prediction interval for body mass for a set of body lengths. It’ll look something like this:

1. Generate new data from the fitted model based on our observed data set.
2. Refit the model using this new data.
3. Use the refitted model to predict body mass for a sequence of body lengths.
4. Repeat Steps 1–3 many times.
5. For each body length, use the empirical quantiles of the predicted values across bootstrap iterations to form the prediction interval.

Step 1 hides all manner of sins, including how we decide to deal with the random effects. Say we use the parametric bootstrap and simulate new data. We would use a normal distribution, with the mean equal to the predicted value of body mass given a specified body length, and variance equal to the residual variance. This matches the assumptions of a typical linear regression.

This effectively conditions on the random effects (i.e. they stay fixed at their estimated values). But that implies that the values of the random effects will be the same for the future observation. That seems a bit odd since random effects are, by definition, not supposed to be fixed.

Instead, we can treat the *distribution* of the random effects (rather than the individual values) as fixed, and add an additional term drawn from that parametric distribution as part of Step 1. However, we should be aware that this still assumes that we’ve estimated the variance of the random effects correctly and that the random effects truly come from a normal distribution (as typically assumed for random effects).
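To make the contrast concrete, here are the two simulation recipes for Step 1 side by side, with invented parameter estimates standing in for what a fitted model would provide (a sketch, not a full workflow):

```python
# Two ways to simulate new data in Step 1 of the bootstrap.
# Intercept, slope, and variance values are invented stand-ins for
# estimates from a fitted random-intercept model.
import numpy as np

rng = np.random.default_rng(3)
n_sites, per_site = 8, 12
site = np.repeat(np.arange(n_sites), per_site)
length = rng.uniform(5, 15, size=site.size)

beta0, beta1 = 2.0, 0.8              # "estimated" fixed effects
sigma_site, sigma_resid = 0.5, 1.0   # "estimated" SDs
blups = rng.normal(0, sigma_site, n_sites)  # stand-ins for estimated site effects

fixed_part = beta0 + beta1 * length

# Conditional: random effects held fixed at their estimated values
y_cond = fixed_part + blups[site] + rng.normal(0, sigma_resid, site.size)

# Marginal: redraw the random effects from their estimated distribution,
# so the bootstrap also reflects that a future site effect is itself random
new_effects = rng.normal(0, sigma_site, n_sites)
y_marg = fixed_part + new_effects[site] + rng.normal(0, sigma_resid, site.size)
```

The only difference is whether the site effects are recycled or redrawn, but that one line is exactly what decides how much random-effect uncertainty makes it into the final interval.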

I should also note that this approach could get computationally intensive if Step 2, the model fit, is a slow process. But there are alternatives (plus some more concrete guidance about implementation here).

Choices, choices, choices! In general, it’s always good practice to account for as many elements of uncertainty as possible. We don’t want to have prediction intervals that are too narrow after all. And now that we know what to look out for, maybe prediction intervals for mixed effects models aren’t so evil after all.

*Have a quantitative term or concept that mystifies you? Want it explained simply? Suggest a topic for next month → @sastoudt *

*Title Image Credit: Jerzy Gorecki, Pixabay licence, Image Cropped*