Preliminaries

  1. Open RStudio, create a new R Script or RMarkdown document (whichever you prefer working with) and give it a title for this week.

  2. Load any of the packages you think you might be using today.
    By now you may have started to get into a sort of routine with R, and you might know which functions you like and which you don’t. Because there are so many alternatives for doing the same thing, the packages and functions you use are very much up to you.
    If I’m planning on doing some multi-level modelling, I will tend to load these by default at the top:

    library(tidyverse) # for everything
    library(lme4) # for fitting models
    library(broom.mixed) # for tidying model output

Exercises: Longitudinal Models

Last week, when we introduced multilevel models (or “mixed effects models”, or whatever we’re calling them!), we saw in the lectures a little bit about the idea of having data points from the same participant over time. This kind of data tends to get termed “longitudinal” (the term is mainly used to refer to studies which follow up participants over the course of months or years). The lectures this week have also introduced this idea of ‘change over time’ by looking at some data from Public Health England.

Let’s work our way through an example.

WeightMaintain Data Codebook

The weight maintenance data (WeightMaintain3), a made-up data set based on Lowe et al. (2014, Obesity, 22, 94-100), contains information on overweight participants who completed a 12-week weight loss program, and were then randomly assigned to one of three weight maintenance conditions:

  • None (Control)
  • MR (meal replacements): use MR to replace one meal and snack per day
  • ED (energy density intervention): book and educational materials on purchasing and preparing foods lower in ED (reducing fat content and/or increasing water content of foods)

Weight was assessed on day 1 of maintenance, 12 months post, 24 months post, and 36 months post.

It is available, in .rda format, at https://uoepsy.github.io/data/WeightMaintain3.rda

  • ID: Participant ID
  • Condition: Weight Maintenance Condition (‘None’ = No maintenance program, ‘MR’ = Meal replacement, ‘ED’ = Energy Density intervention)
  • Assessment: Assessment number (0 = Day 1, 1 = 12 months, 2 = 24 months, 3 = 36 months)
  • WeightChange: Difference in weight (lbs) from end of 12-week weight loss program
Question A1

Load the data, and take a look at what is in there. Hopefully it should match the description above.

Hint: load(url("https://uoepsy.github.io/data/WeightMaintain3.rda"))
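If you’re not sure where to begin, a minimal sketch (the WeightMaintain3 object name comes from the .rda file):

load(url("https://uoepsy.github.io/data/WeightMaintain3.rda"))
summary(WeightMaintain3)
head(WeightMaintain3)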

Solution

Question A2

Q: Overall, did the participants maintain their weight loss or did their weights change?

We need to remember that each of our participants has measurements at 4 assessments. We have randomly sampled participants, and then within them have measured multiple observations. So our observations are not independent. We’re not interested in estimating differences between specific participants - our participants are just a random sample of people. But we do want to account for the dependency they introduce in our data. This is why we would want to fit a multilevel model and incorporate participant-level variation into our model structure.

  1. Fit an “intercept-only” model.
  2. Fit a model with weight change predicted by assessment.
  3. Compare the two models (use anova(model1, model2) to conduct a likelihood ratio test).

Things to think about:

  • We cannot compare models that differ in both the fixed and random parts.
  • For now, ignore messages saying boundary (singular) fit: see ?isSingular (that comes in a couple of questions’ time).
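If you want a concrete starting point, here is a hedged sketch of the kind of code these steps describe (the model names are just illustrative):

m.base0 <- lmer(WeightChange ~ 1 + (1 + Assessment | ID), data = WeightMaintain3)
m.base  <- lmer(WeightChange ~ Assessment + (1 + Assessment | ID), data = WeightMaintain3)
anova(m.base0, m.base)  # likelihood ratio test: does Assessment improve on the intercept-only model?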

Solution

Question A3

Q: Did the experimental condition groups differ in overall weight change and rate of weight change (non-maintenance)?

Hint: It helps to break it down. There are two questions here:

  1. do groups differ overall?
  2. do groups differ over time?

We can begin to see that we’re asking two questions about the Condition variable here: “is there an effect of Condition?” and “Is there an interaction between Assessment and Condition?”

Try fitting two more models which incrementally build these levels of complexity, and compare them (perhaps to one another, perhaps to models from the previous question - think about what each comparison is testing!)
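Building on the models from the previous question, a hedged sketch of what this might look like (again, the names are illustrative):

m.int  <- lmer(WeightChange ~ Assessment + Condition + (1 + Assessment | ID), data = WeightMaintain3)
m.full <- lmer(WeightChange ~ Assessment * Condition + (1 + Assessment | ID), data = WeightMaintain3)
anova(m.base, m.int)   # adding Condition: do groups differ overall?
anova(m.int, m.full)   # adding Assessment:Condition: do groups differ in their rate of change?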

Solution

boundary (singular) fit: see ?isSingular

Okay. Let’s talk about those “singular fits” messages we keep getting.
By now, you have hopefully fitted a number of models which incrementally add predictors. Ours are below:

m.base0 <- lmer(WeightChange ~ 1 + (1 + Assessment | ID), data=WeightMaintain3)
m.base <- lmer(WeightChange ~ Assessment + (1 + Assessment | ID), data=WeightMaintain3)
m.int <- lmer(WeightChange ~ Assessment + Condition + (1 + Assessment | ID), data=WeightMaintain3)
m.full <- lmer(WeightChange ~ Assessment * Condition + (1 + Assessment | ID), data=WeightMaintain3)

And many of these models were singular fits, and we just ignored them. We shouldn’t have.

What is the warning message telling us?
The warning is telling us that our model has resulted in a ‘singular fit’. The easiest way to think of this is as an indication that the model is ‘overfitted’ - that there is not enough variation in our data for the model to be estimated properly.

What can we do?
In many cases, perhaps the most intuitive advice would be to remove the most complex part of the random effects structure (i.e. the random slopes). This leads to a simpler model that is not overfitted. In other words, start simplifying from the top (where the most complexity is) to the bottom (where the least complexity is). Additionally, when the variance estimate for a specific random effect term is very low, this indicates that the model is not estimating this parameter to differ much between the levels of your grouping variable. In some experimental designs it might be perfectly acceptable to remove this term, or to simply include it as a fixed effect.

A key point here is that when fitting a mixed model, we should think about how the data are generated. Asking yourself questions such as “do we have good reason to assume subjects might vary over time, or to assume that they will have different starting points (i.e., different intercepts)?” can help you in specifying your random effect structure.

You can read in depth about what this means by reading the help documentation for ?isSingular. For our purposes, a relevant section is copied below:

… intercept-only models, or 2-dimensional random effects such as intercept + slope models, singularity is relatively easy to detect because it leads to random-effect variance estimates of (nearly) zero, or estimates of correlations that are (almost) exactly -1 or 1.
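In practice, you can check this for your own models. A small sketch, assuming m.full is the interaction model fitted above:

isSingular(m.full)  # returns TRUE if the fit is singular
VarCorr(m.full)     # look for variance estimates that are (nearly) zero, or correlations of exactly -1 or 1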

Question A4

Re-read the description of the data, then ask yourself this question:

Do we think participants will vary in:

  1. their starting weight differences (1|ID)?
  2. their weight change over the course of the assessment period (0 + Assessment | ID)?
  3. both (1 + Assessment | ID)?

Hint: What do we think the baseline weight should be? Should it be the same for everyone? If so, might we want to remove the random intercept (which we do by setting it to 0)?
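For example, a model with no random intercept (only random slopes of Assessment) would be written like this; whether that is the right choice here is for you to decide:

lmer(WeightChange ~ Assessment * Condition + (0 + Assessment | ID), data = WeightMaintain3)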

Can you re-fit your models without encountering singular fits?

Solution

Question A5

Make a graph of the model fit and the observed means and standard errors at each time point for each condition.

Try using the effects package (hint, does this help: as.data.frame(effect("Assessment:Condition", model))?)
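If it helps, here is a hedged sketch of one way to combine the model-based fit with the observed means and standard errors (assuming your final model from A4 is called m.full; adjust the name to your own object):

library(effects)
ef <- as.data.frame(effect("Assessment:Condition", m.full))

ggplot(ef, aes(x = Assessment, y = fit, col = Condition)) +
  geom_line() +
  geom_ribbon(aes(ymin = lower, ymax = upper, fill = Condition), alpha = .2, col = NA) +
  stat_summary(data = WeightMaintain3, aes(y = WeightChange),
               fun.data = mean_se, geom = "pointrange") +
  labs(y = "Weight change (lbs)")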

Solution

Question A6

Now let’s move to interpreting the coefficients. Remember, we can get the coefficients using fixef(model). We can also use tidy(model), and, similar to models fitted with lm(), we can pull out the coefficients part of the summary() using:

summary(model)$coefficients

From your model from the previous question which investigates whether conditions differed in their rate of weight change, examine the parameter estimates and interpret them (i.e., what does each parameter represent?)

Solution

Question A7

Can you state how the conditions differed?

Solution

Exercises: Logistic MLM

Don’t forget to look back at other materials!

Back in USMR, we introduced logistic regression in week 10. The lectures followed the example of some singing aliens that either survived or were splatted, and the exercises used some simulated data based on a hypothetical study about inattentional blindness. That content will provide a lot of the groundwork for this week, so we recommend revisiting it if you feel like it might be useful.

lmer() >> glmer()

Remember how we simply used glm() and could specify family = "binomial" in order to fit a logistic regression? Well, it’s much the same thing for multi-level models!

  • Gaussian model:
    • lmer(y ~ x1 + x2 + (1 | g), data = data)
  • Binomial model:
    • glmer(y ~ x1 + x2 + (1 | g), data = data, family = binomial(link='logit'))
    • or just glmer(y ~ x1 + x2 + (1 | g), data = data, family = "binomial")
    • or glmer(y ~ x1 + x2 + (1 | g), data = data, family = binomial)
Binary? Binomial? Huh?

Novel Word Learning: Data Codebook

load(url("https://uoepsy.github.io/msmr/data/nwl.RData"))

In the nwl data set (accessed using the code above), participants with aphasia are separated into two groups based on the general location of their brain lesion: anterior vs. posterior. There is data on the numbers of correct and incorrect responses participants gave in each of a series of experimental blocks. There were 7 learning blocks, immediately followed by a test. Finally, participants also completed a follow-up test.

Data were also collected from healthy controls.
Figure 1 shows the differences between groups in the average proportion of correct responses at each point in time (i.e., each block, test, and follow-up).


Figure 1: Differences between groups in the average proportion of correct responses at each block

  • group: Whether participant is a stroke patient (‘patient’) or a healthy control (‘control’)
  • lesion_location: Location of brain lesion: anterior vs posterior
  • block: Experimental block (1-9). Blocks 1-7 were learning blocks, immediately followed by a test in block 8. Block 9 was a follow-up test at a later point
  • PropCorrect: Proportion of 30 responses in a given block that the participant got correct
  • NumCorrect: Number of responses (out of 30) in a given block that the participant got correct
  • NumError: Number of responses (out of 30) in a given block that the participant got incorrect
  • ID: Participant Identifier
  • Phase: Experimental phase, corresponding to experimental block(s): ‘Learning’, ‘Immediate’, ‘Follow-up’
Question B1

Load the data. Take a look around. Any missing values? Can you think of why?

Solution

Question B2

Our broader research aim today is to compare two groups (those with anterior vs. posterior lesions) with respect to their accuracy of responses over the course of the study.

  1. What is our outcome?

Hint: Think carefully: there might be several variables which either fully or partly express the information we are considering the “outcome” here.

Solution

Question B3

Research Question 1:
Is the learning rate (training blocks) different between these two groups?

Hints:

  • Do we want cbind(num_successes, num_failures)?

  • Make sure we’re running models on only the data we are actually interested in.

  • Think back to what we did in the exercises above using model comparison via likelihood ratio tests (using anova(model1, model2, model3, ...)). We could use this approach for this question, to compare:

    • A model with just the change over the sequence of blocks
    • A model with the change over the sequence of blocks and an overall difference between groups
    • A model with groups differing with respect to their change over the sequence of blocks
  • What about the random effects part?

    1. What are our observations grouped by?
    2. What variables can vary within these groups?
    3. What do you want your model to allow to vary within these groups?
Suggested answers to the hints if you don’t know where to start
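Pulling those hints together, a hedged sketch of one possible approach (model names are illustrative, and the exact choices, including the random effects, are for you to decide):

nwl_learning <- filter(nwl, block < 8, !is.na(lesion_location))

m.lrn      <- glmer(cbind(NumCorrect, NumError) ~ block + (block | ID),
                    data = nwl_learning, family = "binomial")
m.lrn.loc  <- glmer(cbind(NumCorrect, NumError) ~ block + lesion_location + (block | ID),
                    data = nwl_learning, family = "binomial")
m.lrn.full <- glmer(cbind(NumCorrect, NumError) ~ block * lesion_location + (block | ID),
                    data = nwl_learning, family = "binomial")

anova(m.lrn, m.lrn.loc, m.lrn.full)  # likelihood ratio tests of the overall group effect and the group-by-block interaction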

Solution

Question B4

Research Question 2
In the testing phase, does performance on the immediate test differ between lesion location groups, and does their retention from immediate to follow-up test differ?

Let’s try a different approach to this. Instead of fitting various models and comparing them via likelihood ratio tests, just fit the one model which could answer both parts of the question above.

Hints:

  • This might require a bit more data wrangling beforehand. Think about the order of your factor levels (alphabetically speaking, “Follow-up” comes before “Immediate”)!

Solution

Interpreting coefficients in logistic regression
Take some time to remind yourself from USMR of the interpretation of logistic regression coefficients.

The interpretation of the fixed effects of a logistic multi-level model is not very different.
We can obtain the fixed effects from our model by using:

  • fixef(model)
  • summary(model)$coefficients
  • coef(summary(model))
  • tidy(model) from the broom.mixed package
  • (there are probably more ways, but I can’t think of them right now!)

It’s just that for multi-level models, we can model by-cluster random variation around these effects.

Question B5
  1. In family = binomial(link='logit'), what function is used to relate the linear predictors in the model to the expected value of the response variable?
  2. How do we convert this into something more interpretable?
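As a reminder of the kind of conversion involved, a small sketch (here model stands for whichever fitted glmer() model you are looking at):

fixef(model)             # fixed effects on the log-odds (logit) scale
exp(fixef(model))        # exponentiating gives odds ratios
plogis(fixef(model)[1])  # inverse-logit of the intercept gives a probability (for the reference group at the baseline levels of the predictors)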

Solution

Question B6

Make sure you pay attention to trying to interpret each fixed effect from your models.
These can be difficult, especially when it’s logistic, and especially when there are interactions.

  • What is the increase in the odds of answering correctly in the immediate test for someone with a posterior lesion compared to someone with an anterior lesion?

Solution

OPTIONAL: B7

Recreate the visualisation in Figure 2.

Differences between groups in the average proportion of correct responses at each block

Figure 2: Differences between groups in the average proportion of correct responses at each block

Solution

Coding Schemes for Categorical Predictors

Remember that categorical predictors get entered into our model as a series of variables which are 0s and 1s? Previously, the variable in our model for lesion location was coded with 0 representing one level and 1 representing the other. This is known as “treatment coding.”

There are lots of other ways we might encode our categorical predictors. One common approach is to use “effects coding.” In the case where we have a binary predictor, this makes zero the mid-point between the two - i.e., the overall mean (this is a bit like mean-centering a continuous predictor).

If we recall that the intercept is “when all predictors are zero,” and that when we have an interaction Y~A+B+A*B in our model, the individual coefficients for A and B are estimated “when the other variable is zero,” then we can start to understand how these different ways of encoding our categorical predictors can change what we are getting out of our model. (Note, they don’t actually change anything about the model fit, but they change what information we are estimating from the model).

If you want to understand this a little more, take a look here.
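To see the two schemes side by side, you can print the contrast matrices R uses for a two-level factor (these are base R functions, nothing model-specific):

contr.treatment(2)  # treatment coding: reference level = 0, other level = 1
contr.sum(2)        # sum/effects coding: levels coded 1 and -1, so zero is the midpoint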

Question B8

This is the code we used to answer question B4 above, only we have edited it to change lesion location to be fitted with “effects coding.”

nwl_test <- filter(nwl, block > 7, !is.na(lesion_location)) %>%
  mutate(
    Phase = fct_relevel(factor(Phase), "Immediate")
  )

m.recall.loc.effcoding <-
  glmer(cbind(NumCorrect, NumError) ~ Phase * lesion_location + (Phase | ID),
        contrasts = list(lesion_location = "contr.sum"),
        data = nwl_test, family = "binomial")

The interpretation of this is going to get pretty tricky - we have a logistic regression, we have a different coding scheme for our categorical predictor, and we have an interaction... 🤯

Can you work out the interpretation of the fixed effects estimates?

Solution

Hang on.. p-values are back?!

We noted at the end of last week that we don’t have p-values for lmer()1, but you might notice that we do now have them when we’ve fitted a model with glmer()?

The reason is partly just one of convention. There is a standard practice for determining the statistical significance of parameters in the generalised linear model, relying on asymptotic Wald tests in which each estimate is compared to its standard error (giving a z statistic).

Reading: Inference in MLM

In USMR, we fitted various simple linear models using lm(), and we could get out either a table of coefficients (for example, Table 1) or a table of the reduction in sums of squared residuals (for example, Table 2). In both of these cases we had a nice set of p-values to look at (in R, we often find the p-values in a column named something like Pr(>|t|) or Pr(>F)).

A quick reminder: a p-value is the probability of observing results as or more extreme than the data, if the data were really generated by a hypothesised chance process.

Table 1: t-tests for slope coefficients for predictors in a linear model

               Estimate   Std. Error  t value   Pr(>|t|)
(Intercept)    5.3703775  4.3205141   1.242995  0.2238259
outdoor_time   0.5923673  0.1689445   3.506284  0.0014995
social_int     1.8034489  0.2690982   6.701825  0.0000002

Table 2: F-tests for predictors in a linear model

               Df  Sum Sq    Mean Sq    F value   Pr(>F)
outdoor_time   1   1427.938  1427.9375  37.77483  1.1e-06
social_int     1   1697.825  1697.8249  44.91446  2.0e-07
Residuals      29  1096.238  37.8013    NA        NA

We could get p-values for these models because we know what the distribution of the test statistics of interest (e.g. \(F\)-statistics and \(t\)-statistics) will look like if the null hypothesis were true (i.e., we can describe them in terms of \(F\) and \(t\) distributions with specific degrees of freedom; look back to the USMR materials for a formula).

Unfortunately, the same is not true of multilevel models. For the multilevel model, we can think of it as having residuals at multiple levels: we have the random variation of clusters around the fixed effects, and then random variation of observations around the cluster-level effects (more on this in Week 4).
On the rare occasion that you have a perfectly balanced experimental design, ratios of sums of squares for multi-level models follow an \(F\)-distribution in which we know the numerator and denominator degrees of freedom (this means we can work out the degrees of freedom for the \(t\)-tests of our fixed effect parameters). Unfortunately, in the real world, where things are not often perfectly balanced, determining the denominator degrees of freedom becomes unclear.

Last week, we mentioned a couple of approaches that we might take for drawing inferences, finding ways to compute p-values or confidence intervals.2 We’re now going to extend this to include a few more, and discuss the strengths and limitations of the different approaches.

For these examples…

For the following examples, we’re going to return to our dataset of various toys, and we are going to be concerned with whether practice (the hrs_week variable) is associated with changes in reading ages (the R_AGE variable).
To accommodate the clustered nature of the data, we are going to fit a model with both intercepts and slopes varying by toy type.

toys_read <- read_csv("https://uoepsy.github.io/data/toyexample.csv")
full_model <- lmer(R_AGE ~ hrs_week + (1 + hrs_week | toy_type), data = toys_read)
Use a normal approximation (not advisable)

Satterthwaite df approximation

Kenward-Roger df approximation

Likelihood Ratio Test (LRT)

Optional: Parametric Bootstrap LRT

Optional: Parametric Bootstrap Confidence Intervals

Optional: Case-based Bootstrap Confidence Intervals
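As a flavour of what a couple of these look like in code, a hedged sketch (this assumes the lmerTest package is installed; it is one option for the Satterthwaite approximation, not the only one):

library(lmerTest)  # lmerTest::lmer() adds Satterthwaite df and p-values to summary()
full_model_satt <- lmerTest::lmer(R_AGE ~ hrs_week + (1 + hrs_week | toy_type), data = toys_read)
summary(full_model_satt)

# likelihood ratio test: compare against a model without the fixed effect of hrs_week
restricted_model <- lmer(R_AGE ~ 1 + (1 + hrs_week | toy_type), data = toys_read)
anova(restricted_model, full_model)

# parametric bootstrap confidence intervals (can take a while to run)
confint(full_model, method = "boot")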

If you want more information (not required reading for this course), then Julian Faraway has a page here with links to some worked examples, and Ben Bolker has a wealth of information on his GLMM FAQ pages.


  1. We also didn’t get \(R^2\). Check out http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#how-do-i-compute-a-coefficient-of-determination-r2-or-an-analogue-for-glmms↩︎

  2. It’s always important to note that aiming for a p-value or confidence interval to make a dichotomous decision is only one approach to doing statistics. If you continue to develop your learning of statistics after this course (which we hope you will!), then you will find that there are other schools of thought that bring different benefits.↩︎