Quick recap
Last week we learned about two methods of data reduction: Principal Components Analysis (PCA) and Factor Analysis.
In brief, PCA aims to summarise a set of measured variables into a set of uncorrelated (orthogonal) components, which are linear combinations (a weighted average) of the measured variables.
Factor analysis, on the other hand, assumes that the relationships between a set of measured variables can be explained by a number of underlying latent factors.
Last week we focused on Exploratory Factor Analysis (EFA), for which we tend to start with no hypothesis about either the number of latent factors or about the specific relationships between latent factors and measured variables (the factor structure). All variables load onto all factors, and often a transformation method (e.g., rotation) is applied to make the results more easily interpretable.
Introducing CFA
Confirmatory Factor Analysis (CFA) is a more hypothesis-driven form of factor analysis, which requires us to prespecify all aspects of our model: we need to have some a priori sense of how many factors exist, which items are related to which factors, and so on.
CFA is almost always used when developing scales, because it allows us to examine the underlying structure of our measure (e.g., questionnaires). It is also useful when investigating the convergent and discriminant validity of a theoretical construct (for instance, we might expect a measure of anxiety to positively relate to (‘converge’ with) a measure of depression, and to differ (‘discriminate’) from a measure of general happiness).
As shown in Figure 2, when we have clear a priori hypotheses about relationships between measured variables and latent factors, CFA imposes a specific factor structure on the data: we pick and choose the paths (arrows) that we want to estimate, and leave out the ones which our theory suggests are not present. It is important to note, however, that by excluding a specific path, our model is asserting that that specific relationship is 0 (a bit like leaving the predictor z out of the multiple regression model `y ~ w + x`, which assumes that the effect of z on y is 0).
The purpose of CFA can be seen as twofold:
- Firstly, to obtain parameter estimates (i.e., factor loadings, variances and covariances of factors, residual variances of measured variables).
- Secondly, to assess whether the model provides a good fit to the data.
CFA as SEM
CFA is a specific form of a Structural Equation Model (SEM) in which we define one or more factor structures. SEM is going to be the focus of weeks 10 and 11 of this course. In essence, SEM is a framework in which we can test our theoretical models and hypotheses.
You might be tempted to think “isn’t that what we’ve been doing already!?” and you would be right. However, SEM offers a huge amount more flexibility in the questions we can ask. In the multiple regression world, we’re restricted to focusing on one outcome variable, and examining the variance explained in that variable by some predictor variables. In SEM, our theoretical model may have multiple outcome variables, mediating paths (“z affects x which in turn affects y”), latent factors, and so on.
Sometimes the easiest way into thinking about things in the SEM framework is to draw all your variables on a whiteboard, draw any latent constructs you believe they measure, and then connect them all up with arrows according to your theoretical model. Sound familiar? Figure 2 shows a CFA model represented as a SEM diagram!
Introducing lavaan
For the remaining weeks of the course, we’re going to rely heavily on the lavaan (Latent Variable Analysis) package.
This is the main package in R for fitting structural equation models, and there is huge scope for what we can do with it.
The first thing to get to grips with is the various new operators which it allows us to use.
Our old multiple regression formula in R was specified as `y ~ x1 + x2 + x3 + ...`. In lavaan, we continue to fit regressions using the `~` symbol, but we can also specify the construction of latent variables using `=~`, and residual variances & covariances using `~~`.
| formula type | operator | mnemonic |
|---|---|---|
| latent variable definition | `=~` | “is measured by” |
| regression | `~` | “is regressed on” |
| (residual) (co)variance | `~~` | “is correlated with” |
| intercept | `~1` | “intercept” |
(from https://lavaan.ugent.be/tutorial/syntax1.html)
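To see these operators side by side, here is a minimal sketch of a model syntax string (the names F1, y1–y4 and x1 are hypothetical, purely for illustration):

example_model <- "
  # latent variable definition: F1 is measured by y1, y2 and y3
  F1 =~ y1 + y2 + y3
  # regression: y4 is regressed on F1 and x1
  y4 ~ F1 + x1
  # residual covariance between y1 and y2
  y1 ~~ y2
  # intercept of y4
  y4 ~ 1
"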
In practice, fitting models in lavaan tends to be a little different from things like `lm()` and `(g)lmer()`. Instead of including the model formula inside the fit function (e.g., `lm(y ~ x1 + x2, data = df)`), we tend to do it in a step-by-step process. This is because as our models become more complex, our formulas can get pretty long! You can see a multiple regression fitted with lavaan below: we define the model as a character string, and then we pass that to the relevant lavaan function, such as `cfa()` or, in this case, `sem()`.
library(tidyverse)
library(lavaan)
toys_read <- read_csv("https://uoepsy.github.io/data/toyexample.csv")
# the lm() way
mreg_lm <- lm(R_AGE~hrs_week + age, toys_read)
# setting up the model for SEM
mreg_model <- "
#regression
R_AGE ~ 1 + hrs_week + age
"
mreg_sem <- sem(mreg_model, data=toys_read)
Comparing the estimates
These are the coefficients from our `lm()` model:
coefficients(mreg_lm)
## (Intercept) hrs_week age
## 2.1223669 0.6861775 0.1884679
And you can see the estimated parameters are the same for our `sem()` model!
summary(mreg_sem)
## lavaan 0.6-8 ended normally after 22 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 4
##
## Number of observations 132
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|)
## R_AGE ~
## hrs_week 0.686 0.484 1.419 0.156
## age 0.188 0.226 0.832 0.405
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|)
## .R_AGE 2.122 2.575 0.824 0.410
##
## Variances:
## Estimate Std.Err z-value P(>|z|)
## .R_AGE 27.268 3.356 8.124 0.000
SEM diagrams
In SEM diagrams (or ‘path diagrams’), we have different ways to show the covariances, regressions, observed variables and latent variables.
- Observed variables are represented by squares or rectangles. These are the named variables of interest which exist in our dataset - i.e. the ones which we have measured directly.
- Latent variables are represented as ovals/ellipses or circles.
- Covariances are represented by double-headed arrows. In many diagrams these are curved.
- Regressions are shown by single headed arrows (e.g., an arrow from \(x\) to \(y\) for the path \(y \sim x\)).
- Factor loadings are regressions. Recall that specifying a factor structure is simply to say that some measured variables \(y_i\) are each regressed onto some unmeasured factor(s) \(F_i\).
\(y = \lambda \cdot F + u\) looks an awful lot like \(y = \beta \cdot x + \epsilon\) !!
New terminology!
- Exogenous variables are a bit like what we have been describing with words like “independent variable” or “predictor.” In a SEM diagram, they have no paths coming from other variables in the system, but have paths going to other variables.
- Endogenous variables are more like the “outcome”/“dependent”/“response” variables we are used to. They have some path coming from another variable in the system (and may also - but not necessarily - have paths going out from them).
There are a couple of packages which can create visual diagrams of structural equation models: semPlot and tidySEM. The semPlot package contains the function `semPaths()`, which is well established and works “out of the box” but is harder to edit. Alternatively, you can try your hand at tidySEM, a newer package which promises more customisable features for SEM diagrams. (Often, if we want to include a SEM diagram in a report, the raw output from `semPaths()` would not usually meet publication standards, and instead we tend to draw diagrams in programs like PowerPoint!)
Optional: visualising a multiple regression model as SEM
library(semPlot)
semPaths(mreg_sem, whatLabels = "est", intercepts = F)
library(tidySEM)
graph_sem(mreg_sem,
nodes = get_nodes(mreg_sem))
A One factor model
Conduct Problems Dataset 2: Data Codebook
Last week we conducted an exploratory factor analysis of a dataset to try and identify an optimal factor structure for a new measure of conduct (i.e., antisocial behavioural) problems. This week, we’ll conduct some confirmatory factor analyses (CFA) of the same inventory, using some new data collected by the researchers from n=600 adolescents.
The questionnaire items referred to the following 10 behaviours:
- item 1- Stealing
- item 2- Lying
- item 3- Skipping school
- item 4- Vandalism
- item 5- Breaking curfew
- item 6- Threatening others
- item 7- Bullying
- item 8- Spreading malicious rumours
- item 9- Using a weapon
- item 10 - Fighting
The data is available as a .csv at https://uoepsy.github.io/data/conduct_problems_2.csv
Question A1
Read in the new data, and construct a correlation matrix. Maybe create a visualisation of the correlation matrix?
Solution
df <- read.csv("https://uoepsy.github.io/data/conduct_problems_2.csv")
cor(df)
## item1 item2 item3 item4 item5 item6 item7
## item1 1.0000000 0.5254249 0.43498606 0.4831121 0.5644047 0.13450795 0.2806369
## item2 0.5254249 1.0000000 0.50099054 0.5119842 0.6740107 0.15239358 0.3031997
## item3 0.4349861 0.5009905 1.00000000 0.4897339 0.5990709 0.09463966 0.2557407
## item4 0.4831121 0.5119842 0.48973386 1.0000000 0.6199783 0.14847703 0.2492471
## item5 0.5644047 0.6740107 0.59907089 0.6199783 1.0000000 0.11353641 0.3112458
## item6 0.1345080 0.1523936 0.09463966 0.1484770 0.1135364 1.00000000 0.5461757
## item7 0.2806369 0.3031997 0.25574075 0.2492471 0.3112458 0.54617567 1.0000000
## item8 0.2576481 0.2748175 0.19355398 0.2553712 0.2463290 0.59441335 0.8009591
## item9 0.2444337 0.2759875 0.20798523 0.2260156 0.2333758 0.36915878 0.5988582
## item10 0.1783069 0.1618631 0.12924914 0.1433759 0.1394748 0.46316014 0.5316143
## item8 item9 item10
## item1 0.2576481 0.2444337 0.1783069
## item2 0.2748175 0.2759875 0.1618631
## item3 0.1935540 0.2079852 0.1292491
## item4 0.2553712 0.2260156 0.1433759
## item5 0.2463290 0.2333758 0.1394748
## item6 0.5944133 0.3691588 0.4631601
## item7 0.8009591 0.5988582 0.5316143
## item8 1.0000000 0.6169829 0.5557168
## item9 0.6169829 1.0000000 0.3613253
## item10 0.5557168 0.3613253 1.0000000
pheatmap::pheatmap(cor(df))
Question A2
Using lavaan syntax, specify a model in which all 10 items load on one latent variable.
Do not estimate the model yet; simply specify it in a character string, in preparation to fit it with the `cfa()` function.
Hint: Remember that to specify items loading on a latent variable we use the `=~` operator. The latent variable goes on the left hand side, and the list of indicators (i.e., the items used to measure the latent variable) goes on the right hand side, separated by `+`. You can name the latent variable whatever you like.
Solution
model1<-'CP=~item1+item2+item3+item4+item5+item6+item7+item8+item9+item10'
Question A3
We’re going to use the `cfa()` function to fit our model.
Why is it not necessary to refer to ALL of your CFA parameters in your model specification to estimate your model? Which parameters do you not need to include in the model specification?
Solution
Some parameters are estimated or fixed by default when you estimate the model with the `cfa()` function. In this case, the residual factor variances and loadings are missing from the specification because the former are estimated by default and the latter are fixed to 1 by default. Latent factor variances are also estimated by default.
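If you want to check which parameters lavaan will estimate and which it will fix before fitting anything, one option (a sketch, using the model1 string from above) is to expand the syntax into lavaan’s parameter table:

# lavaanify() expands model syntax into the full parameter table;
# auto.var = TRUE mirrors cfa()'s default of adding residual variances.
# In the 'free' column, 0 means fixed and non-zero means estimated.
lavaanify(model1, std.lv = TRUE, auto.var = TRUE)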
Question A4
Estimate your model using the `cfa()` function from the lavaan package. Scale your latent variable by fixing the latent variable variance to 1.
The default scaling/identification constraint imposed when using the `cfa()` function is to fix the loading of the first item of each latent variable to 1. We can override this by setting `std.lv=TRUE`, which will instead scale the latent variables by fixing their variances to 1.
It is helpful to save the results of `cfa()` to a new object so that we can later inspect that object (to look at the model fit and parameter estimates).
Solution
model1.est<-cfa(model1, data=df, std.lv=TRUE)
Model Fit
Same name, different idea
You’ll have heard the term “model fit” many times since September. However, there is a crucial difference in what it means when it is used in the SEM framework.
In things like multiple regression, we have been using “model fit” to be the measure of “how much variance can we explain in y with our set of predictors?”
In SEM, examining “model fit” is more like asking “how well does our model reproduce the characteristics of the data that we observed?” If you think of the characteristics of our data being represented by a covariance matrix, then we might think of “model fit” as being “how well can our model reproduce our observed covariance matrix?”
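For instance, once a lavaan model has been estimated, we can extract the model-implied covariance matrix and compare it to the observed one. Here is a quick sketch using the mreg_sem object from earlier (because that model is just-identified, the two should match; note that lavaan computes covariances with a divisor of N rather than N-1, so expect tiny differences from cov()):

# model-implied covariance matrix of the observed variables
fitted(mreg_sem)$cov
# observed covariance matrix, for comparison
cov(toys_read %>% select(R_AGE, hrs_week, age))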
Degrees of freedom
Degrees of freedom in the SEM framework correspond to the number of knowns (observed covariances/variances) minus the number of unknowns (parameters to be estimated by the model). A model is only able to be estimated if it has at least 0 degrees of freedom (if there are as many knowns as unknowns). A model with 0 degrees of freedom is termed just-identified.
Under- and over-identified models correspond to those with \(<0\) and \(>0\) degrees of freedom respectively.
An example of a just-identified model is the multiple regression model! In multiple regression, everything is allowed to vary with everything else, which means that there is a unique solution for all of the model’s parameters because there are as many paths as there are observed covariances. This means that in the SEM world, a multiple regression model is “just-identified.”
huh?
If I have two predictors and one outcome variable, then there are 6 variances and covariances available. For instance:
cov(toys_read %>% select(R_AGE, hrs_week, age))
## R_AGE hrs_week age
## R_AGE 28.0894665 0.6486782 0.8917495
## hrs_week 0.6486782 0.8972986 0.1749482
## age 0.8917495 0.1749482 4.0946175
The multiple regression model will estimate the two variances of the exogenous variables (the predictors), their covariance, the two paths from each exogenous variable to the endogenous variable (each predictor to the outcome), and the error variance. This makes 6 estimated parameters - which is exactly how many known (co)variances there are.
How many knowns are there?
The number of known covariances in a set of \(k\) observed variables is equal to \(\frac{k \cdot (k+1)}{2}\).
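For example, for the \(k=3\) variables in the regression example above:

# unique variances + covariances among k observed variables
k <- 3
k * (k + 1) / 2   # = 6, matching the 6 unique (co)variances above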
When learning about SEM, visualisations can play a key part. It often helps to draw all our variables (both observed and latent) on the whiteboard, and connect them up according to your theoretical model. You can then count the number of paths (arrows) and determine whether \(\text{number of knowns} \geq \text{number of unknowns}\). We can reduce the number of unknowns by fixing parameters to be specific values.
By constraining some estimated parameter to be some specific value, we free up a degree of freedom! For instance, “the correlation between x1 and x2 is equal to 0.7 (\(r_{x_1x_2} = 0.7\)).” This would turn a previously estimated parameter into a fixed parameter, and gains us the prize of a lovely degree of freedom!
By removing a path altogether, we are constraining it to be zero.
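In lavaan syntax, we fix a parameter to a specific value by pre-multiplying the relevant variable by that value. A minimal sketch (x1 and x2 are hypothetical observed variables):

# fix the covariance between x1 and x2 to 0.7, rather than estimating it
fixed_cov_model <- "x1 ~~ 0.7*x2"
# fixing it to 0 is equivalent to removing the path from the diagram
zero_cov_model <- "x1 ~~ 0*x2"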
For instance, in Figure 3 we can see the model of a latent factor loading on to 4 items. The number of paths to be estimated here is greater than the number of known covariances. However, we can get around this by fixing certain parameters to be specific values. In Figure 4, the latent factor variance is set at 1, and the residual factor loadings are also set to 1.
This has the additional benefit of giving our latent factor some defining features. Because we don’t actually measure the latent variable (it is a hypothetical construct), it doesn’t really have any intrinsic ‘scale.’ When we fix its variance to be 1, we give it a defined scale, creating a reference point that the other paths to/from the variable can be estimated in relation to. A common alternative is to fix the factor loading of the first item to be 1 (see Figure 5).
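In lavaan, these two scaling choices correspond to the following calls (a sketch, assuming a model syntax string mod and a dataset df):

# scale each latent variable by fixing its variance to 1
fit_fixedvar <- cfa(mod, data = df, std.lv = TRUE)
# scale by fixing the first loading of each factor to 1 (lavaan's default)
fit_fixedload <- cfa(mod, data = df)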
We will return to this more in coming weeks, but it is an important notion to remember - “model fit” and “degrees of freedom” have quite different meanings to those you are likely used to.
There are a great many (arguably too many!) different indices of model fit in SEM, and there is lots of controversy over the merits, disadvantages, and proposed cutoffs of each. We strongly encourage you to take a look at the accompanying reading on CFA which is posted on Learn, as this explains some of the more common measures. Additionally, there are many resources online, for instance David Kenny’s page on measuring model fit.
A Two factor model
Question B1
Examine the global fit of your one factor model. Does it fit well? (To obtain the global fit measures, we can use the `summary()` function to inspect our estimated model, setting `fit.measures=TRUE`.)
Rules of thumb: If \(\textrm{RMSEA} < .05\), \(\textrm{SRMR} < .05\), \(\textrm{TLI} > .95\) and \(\textrm{CFI} > .95\) then the model fits well.
Smaller values of RMSEA and SRMR mean better fit while larger values of CFI and TLI mean better fit.
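If you just want these particular indices rather than the full summary output, one option (a sketch) is lavaan’s fitMeasures() function:

# extract only the four fit indices from the rules of thumb above
fitMeasures(model1.est, c("rmsea", "srmr", "tli", "cfi"))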
Solution
summary(model1.est, fit.measures=TRUE)
## lavaan 0.6-8 ended normally after 45 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 20
##
## Number of observations 600
##
## Model Test User Model:
##
## Test statistic 1089.192
## Degrees of freedom 35
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 2836.905
## Degrees of freedom 45
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.622
## Tucker-Lewis Index (TLI) 0.515
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -7722.655
## Loglikelihood unrestricted model (H1) -7178.058
##
## Akaike (AIC) 15485.309
## Bayesian (BIC) 15573.248
## Sample-size adjusted Bayesian (BIC) 15509.753
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.224
## 90 Percent confidence interval - lower 0.213
## 90 Percent confidence interval - upper 0.236
## P-value RMSEA <= 0.05 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.181
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|)
## CP =~
## item1 0.378 0.042 9.036 0.000
## item2 0.429 0.044 9.782 0.000
## item3 0.327 0.042 7.794 0.000
## item4 0.392 0.045 8.689 0.000
## item5 0.410 0.043 9.522 0.000
## item6 0.602 0.037 16.202 0.000
## item7 0.879 0.033 26.448 0.000
## item8 0.859 0.032 26.712 0.000
## item9 0.680 0.038 18.001 0.000
## item10 0.579 0.037 15.557 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|)
## .item1 0.885 0.052 17.038 0.000
## .item2 0.963 0.057 16.984 0.000
## .item3 0.909 0.053 17.114 0.000
## .item4 1.037 0.061 17.061 0.000
## .item5 0.933 0.055 17.004 0.000
## .item6 0.581 0.036 16.213 0.000
## .item7 0.224 0.021 10.721 0.000
## .item8 0.202 0.019 10.379 0.000
## .item9 0.559 0.035 15.843 0.000
## .item10 0.596 0.037 16.324 0.000
## CP 1.000
According to conventionally accepted criteria for RMSEA, SRMR, TLI and CFI, the model fits poorly.
Question B2
Now let’s try a different model.
Specify a CFA model with two correlated latent factors.
Consider items 1 to 5 as indicators of the first latent factor and items 6 to 10 as indicators of the second latent factor.
Specifying models this way requires separating the different (sets of) paths onto new lines.
So for this model you will want something with 3 lines.
You can add comments in as well, which will help!
The first one below is filled in for you:
model2 <- '
# latent factor one "is measured by" items 1 to 5
LV1 =~ item1 + item2 + item3 + item4 + item5
# latent factor two "is measured by" items 6 to 10
...
# latent factor one is correlated with latent factor two
...
'
Strings split over lines making R get stuck?
If you have your cursor on the first line of a multi-line character string, and you press ctrl+enter in order to run it (i.e., send it down to the console), then R will not automatically run the next line. It will give you a little blue `+` in the console, and force you to run it line by line.
If you are seeing the little blue `+`, you can press the escape key to cancel the command.
It might be easier to highlight the entire model and run it all at once.
Solution
We can specify the model as below.
model2<-'
LV1=~item1+item2+item3+item4+item5
LV2=~item6+item7+item8+item9+item10
LV1~~LV2
'
We define two latent variables `LV1` and `LV2` using the `=~` operator. We also introduce a new operator, `~~`, in order to specify the covariance between the two latent variables.
Question B3
Estimate this model using cfa()
.
Scale the latent variables using a reference indicator (rather than fixing the variance).
Does the model fit well?
Solution
`cfa()` uses a reference indicator to scale the latent variables by default, so we only need to specify the name of the object containing the model syntax and the name of the dataset.
model2.est<-cfa(model2, data=df)
summary(model2.est, fit.measures=T)
## lavaan 0.6-8 ended normally after 27 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 21
##
## Number of observations 600
##
## Model Test User Model:
##
## Test statistic 65.765
## Degrees of freedom 34
## P-value (Chi-square) 0.001
##
## Model Test Baseline Model:
##
## Test statistic 2836.905
## Degrees of freedom 45
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.989
## Tucker-Lewis Index (TLI) 0.985
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -7210.941
## Loglikelihood unrestricted model (H1) -7178.058
##
## Akaike (AIC) 14463.881
## Bayesian (BIC) 14556.217
## Sample-size adjusted Bayesian (BIC) 14489.548
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.039
## 90 Percent confidence interval - lower 0.025
## 90 Percent confidence interval - upper 0.054
## P-value RMSEA <= 0.05 0.884
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.035
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|)
## LV1 =~
## item1 1.000
## item2 1.217 0.076 15.965 0.000
## item3 1.012 0.070 14.424 0.000
## item4 1.146 0.077 14.967 0.000
## item5 1.360 0.078 17.412 0.000
## LV2 =~
## item6 1.000
## item7 1.419 0.082 17.331 0.000
## item8 1.437 0.081 17.662 0.000
## item9 1.093 0.077 14.125 0.000
## item10 0.950 0.073 13.073 0.000
##
## Covariances:
## Estimate Std.Err z-value P(>|z|)
## LV1 ~~
## LV2 0.156 0.023 6.856 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|)
## .item1 0.574 0.037 15.398 0.000
## .item2 0.477 0.035 13.778 0.000
## .item3 0.551 0.036 15.262 0.000
## .item4 0.597 0.040 14.867 0.000
## .item5 0.264 0.028 9.537 0.000
## .item6 0.561 0.035 16.193 0.000
## .item7 0.228 0.021 10.767 0.000
## .item8 0.153 0.019 8.122 0.000
## .item9 0.566 0.035 15.978 0.000
## .item10 0.587 0.036 16.351 0.000
## LV1 0.453 0.052 8.701 0.000
## LV2 0.381 0.045 8.396 0.000
This model fits well according to RMSEA, SRMR, CFI and TLI. The \(\chi^2\) value is significant, but we don’t need to worry too much about this because, particularly in large samples, the \(\chi^2\) test has a strong tendency to reject even trivially mis-specified models.
Question B4
Are there any areas of local mis-fit?
We can look for these using the `modindices()` function. This gives us, for each parameter we could add, the expected improvement in \(\chi^2\) if it were added, and the expected parameter change associated with its addition (an estimate of what the parameter estimate would be if it were included in the model).
Solution
modindices(model2.est, sort=T)
## lhs op rhs mi epc sepc.lv sepc.all sepc.nox
## 72 item6 ~~ item10 10.747 0.082 0.082 0.144 0.144
## 25 LV1 =~ item7 8.106 0.119 0.080 0.080 0.080
## 65 item5 ~~ item7 6.720 0.039 0.039 0.160 0.160
## 71 item6 ~~ item9 6.675 -0.065 -0.065 -0.115 -0.115
## 24 LV1 =~ item6 6.100 -0.136 -0.092 -0.094 -0.094
## 33 LV2 =~ item5 5.273 -0.122 -0.075 -0.072 -0.072
## 60 item4 ~~ item7 4.156 -0.039 -0.039 -0.105 -0.105
## 27 LV1 =~ item9 4.086 0.113 0.076 0.075 0.075
## 78 item9 ~~ item10 4.029 -0.051 -0.051 -0.089 -0.089
## 44 item2 ~~ item4 4.021 -0.058 -0.058 -0.109 -0.109
## 26 LV1 =~ item8 3.912 -0.078 -0.053 -0.054 -0.054
## 66 item5 ~~ item8 3.702 -0.027 -0.027 -0.134 -0.134
## 61 item4 ~~ item8 3.126 0.031 0.031 0.103 0.103
## 29 LV2 =~ item1 2.920 0.105 0.065 0.064 0.064
## 55 item3 ~~ item8 2.869 -0.028 -0.028 -0.098 -0.098
## 64 item5 ~~ item6 2.812 -0.034 -0.034 -0.090 -0.090
## 37 item1 ~~ item5 2.750 -0.044 -0.044 -0.114 -0.114
## 70 item6 ~~ item8 2.726 0.034 0.034 0.115 0.115
## 49 item2 ~~ item9 2.563 0.039 0.039 0.074 0.074
## 54 item3 ~~ item7 2.295 0.027 0.027 0.077 0.077
## 73 item7 ~~ item8 1.964 -0.041 -0.041 -0.222 -0.222
## 30 LV2 =~ item2 1.869 0.081 0.050 0.046 0.046
## 74 item7 ~~ item9 1.602 0.028 0.028 0.077 0.077
## 69 item6 ~~ item7 1.395 -0.025 -0.025 -0.069 -0.069
## 28 LV1 =~ item10 1.357 -0.065 -0.044 -0.046 -0.046
## 68 item5 ~~ item10 1.356 -0.024 -0.024 -0.062 -0.062
## 45 item2 ~~ item5 1.352 0.035 0.035 0.099 0.099
## 34 item1 ~~ item2 1.322 0.031 0.031 0.060 0.060
## 52 item3 ~~ item5 1.175 0.029 0.029 0.076 0.076
## 43 item2 ~~ item3 1.120 -0.028 -0.028 -0.056 -0.056
## 42 item1 ~~ item10 0.944 0.025 0.025 0.043 0.043
## 76 item8 ~~ item9 0.731 0.018 0.018 0.062 0.062
## 59 item4 ~~ item6 0.688 0.022 0.022 0.037 0.037
## 67 item5 ~~ item9 0.672 -0.017 -0.017 -0.044 -0.044
## 41 item1 ~~ item9 0.613 0.020 0.020 0.035 0.035
## 36 item1 ~~ item4 0.604 0.022 0.022 0.038 0.038
## 53 item3 ~~ item6 0.541 -0.018 -0.018 -0.033 -0.033
## 35 item1 ~~ item3 0.528 -0.020 -0.020 -0.035 -0.035
## 51 item3 ~~ item4 0.467 0.019 0.019 0.034 0.034
## 56 item3 ~~ item9 0.373 0.015 0.015 0.027 0.027
## 32 LV2 =~ item4 0.341 0.037 0.023 0.021 0.021
## 58 item4 ~~ item5 0.241 0.014 0.014 0.036 0.036
## 31 LV2 =~ item3 0.224 -0.029 -0.018 -0.017 -0.017
## 40 item1 ~~ item8 0.185 0.007 0.007 0.025 0.025
## 38 item1 ~~ item6 0.145 -0.010 -0.010 -0.017 -0.017
## 50 item2 ~~ item10 0.082 -0.007 -0.007 -0.013 -0.013
## 48 item2 ~~ item8 0.077 0.004 0.004 0.017 0.017
## 62 item4 ~~ item9 0.057 0.006 0.006 0.011 0.011
## 47 item2 ~~ item7 0.054 -0.004 -0.004 -0.012 -0.012
## 75 item7 ~~ item10 0.048 -0.005 -0.005 -0.012 -0.012
## 63 item4 ~~ item10 0.031 -0.005 -0.005 -0.008 -0.008
## 57 item3 ~~ item10 0.024 0.004 0.004 0.007 0.007
## 77 item8 ~~ item10 0.011 -0.002 -0.002 -0.007 -0.007
## 46 item2 ~~ item6 0.007 0.002 0.002 0.004 0.004
## 39 item1 ~~ item7 0.006 -0.001 -0.001 -0.004 -0.004
While the modification indices suggest that the model could be improved with the addition of parameters, none of the expected parameter changes are very large. If we included any additional parameters here, it is likely we would be capitalising on chance. We would also have to consider whether we could justify their inclusion on substantive grounds, i.e., we would ask ourselves ‘does the addition of this parameter make sense given our background knowledge of the construct?’
Question B5
Take a look at the parameter estimates, are all of your loadings satisfactory? Which items are the best measures of the underlying latent variables?
Hint: It may help to look at the standardised parameter estimates, which we can do by using `summary(model, standardized = TRUE)`.
Typically we would want standardised loadings to be \(>|.3|\) (there is no consensus on this, sometimes you will see \(>|.4|\) suggested, other times \(>|.6|\)!)
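Another option (a sketch) is lavaan’s standardizedSolution() function, which returns the standardised estimates as a data frame, making it easy to flag any loadings below a chosen threshold:

# list any standardised loadings below |.3| (zero rows means all are fine)
standardizedSolution(model2.est) %>%
  dplyr::filter(op == "=~", abs(est.std) < .3)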
Solution
summary(model2.est, standardized=T)
## lavaan 0.6-8 ended normally after 27 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 21
##
## Number of observations 600
##
## Model Test User Model:
##
## Test statistic 65.765
## Degrees of freedom 34
## P-value (Chi-square) 0.001
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## LV1 =~
## item1 1.000 0.673 0.664
## item2 1.217 0.076 15.965 0.000 0.819 0.765
## item3 1.012 0.070 14.424 0.000 0.681 0.676
## item4 1.146 0.077 14.967 0.000 0.771 0.706
## item5 1.360 0.078 17.412 0.000 0.915 0.872
## LV2 =~
## item6 1.000 0.618 0.636
## item7 1.419 0.082 17.331 0.000 0.877 0.878
## item8 1.437 0.081 17.662 0.000 0.887 0.915
## item9 1.093 0.077 14.125 0.000 0.675 0.668
## item10 0.950 0.073 13.073 0.000 0.587 0.608
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## LV1 ~~
## LV2 0.156 0.023 6.856 0.000 0.375 0.375
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .item1 0.574 0.037 15.398 0.000 0.574 0.559
## .item2 0.477 0.035 13.778 0.000 0.477 0.416
## .item3 0.551 0.036 15.262 0.000 0.551 0.543
## .item4 0.597 0.040 14.867 0.000 0.597 0.501
## .item5 0.264 0.028 9.537 0.000 0.264 0.239
## .item6 0.561 0.035 16.193 0.000 0.561 0.595
## .item7 0.228 0.021 10.767 0.000 0.228 0.229
## .item8 0.153 0.019 8.122 0.000 0.153 0.162
## .item9 0.566 0.035 15.978 0.000 0.566 0.554
## .item10 0.587 0.036 16.351 0.000 0.587 0.630
## LV1 0.453 0.052 8.701 0.000 1.000 1.000
## LV2 0.381 0.045 8.396 0.000 1.000 1.000
We can see that all loadings are statistically significant and all standardised loadings are >|.3|. This is good as it suggests all our items measure the relevant latent variable reasonably well. We can identify the ‘best’ indicators of our latent constructs according to those with the highest standardised loadings. We must, however, be aware that item loadings are model-specific: if we changed the model (e.g., the number of factors or which items we included) the loadings could change and so would our interpretation of the latent variables.
Question B6
Now it’s time to get R to draw some diagrams of our model for us!
Using R, represent the model as a SEM diagram, including the standardised parameter estimates.
You can either use the `semPaths()` function from the semPlot package, or you can try your hand at the newer tidySEM package, which promises more customisable features for SEM diagrams. (Often, if we want to include a SEM diagram in a report, the raw output from `semPaths()` would not usually meet publication standards, and instead we tend to draw diagrams in programs like PowerPoint!)
Solution
In the `semPaths()` function, we can include the unstandardised estimates by setting `what='est'`, and the standardised estimates by setting `what='stand'`.
library(semPlot)
semPaths(model2.est, what='stand')
library(tidySEM)
lay = get_layout(rep(NA, 3), "LV1", rep(NA,2),"LV2", rep(NA,3),
paste0("item",1:10), rows = 2)
graph_sem(model2.est, layout = lay)
Question B7
Write a short paragraph summarising the method and results of the two factor model.
Remember: The main principle behind reporting any analysis is that you should be as transparent as possible (e.g., reporting any model modifications made) and a reader should be able to reproduce your analysis based on your description.
Solution
A two-factor model was tested. Items 1-5 loaded on a ‘non-aggressive conduct problems’ factor and items 6-10 loaded on an ‘aggression’ factor and these factors were allowed to correlate. Scaling and identification were achieved by fixing the loading of item 1 on the non-aggressive conduct problems factor and item 6 on the aggression factor to 1. The model was estimated using maximum likelihood estimation. The model fit well with CFI=.99, TLI=0.99, RMSEA=.04, and SRMR=.04 (Hu & Bentler, 1999). All loadings were statistically significant and >|.3| on the standardised scale. Modification indices and expected parameter changes were inspected but no modifications were made because no expected parameter changes were judged large enough to merit the inclusion of additional parameters given that there was little theoretical rationale for their inclusion. Overall, therefore, a two-factor oblique model was supported for the conduct problems items. The correlation between the factors was r=.38 (p<.001).
## lavaan 0.6-8 ended normally after 27 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 21
##
## Number of observations 600
##
## Model Test User Model:
##
## Test statistic 65.765
## Degrees of freedom 34
## P-value (Chi-square) 0.001
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|)
## LV1 =~
## item1 1.000
## item2 1.217 0.076 15.965 0.000
## item3 1.012 0.070 14.424 0.000
## item4 1.146 0.077 14.967 0.000
## item5 1.360 0.078 17.412 0.000
## LV2 =~
## item6 1.000
## item7 1.419 0.082 17.331 0.000
## item8 1.437 0.081 17.662 0.000
## item9 1.093 0.077 14.125 0.000
## item10 0.950 0.073 13.073 0.000
##
## Covariances:
## Estimate Std.Err z-value P(>|z|)
## LV1 ~~
## LV2 0.156 0.023 6.856 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|)
## .item1 0.574 0.037 15.398 0.000
## .item2 0.477 0.035 13.778 0.000
## .item3 0.551 0.036 15.262 0.000
## .item4 0.597 0.040 14.867 0.000
## .item5 0.264 0.028 9.537 0.000
## .item6 0.561 0.035 16.193 0.000
## .item7 0.228 0.021 10.767 0.000
## .item8 0.153 0.019 8.122 0.000
## .item9 0.566 0.035 15.978 0.000
## .item10 0.587 0.036 16.351 0.000
## LV1 0.453 0.052 8.701 0.000
## LV2 0.381 0.045 8.396 0.000
| parameter | est | se | z | pvalue |
|---|---|---|---|---|
| LV1=~item1 | 1.000 | 0.000 | | |
| LV1=~item2 | 1.217 | 0.076 | 15.965 | <.001 |
| LV1=~item3 | 1.012 | 0.070 | 14.424 | <.001 |
| LV1=~item4 | 1.146 | 0.077 | 14.967 | <.001 |
| LV1=~item5 | 1.360 | 0.078 | 17.412 | <.001 |
| LV2=~item6 | 1.000 | 0.000 | | |
| LV2=~item7 | 1.419 | 0.082 | 17.331 | <.001 |
| LV2=~item8 | 1.437 | 0.081 | 17.662 | <.001 |
| LV2=~item9 | 1.093 | 0.077 | 14.125 | <.001 |
| LV2=~item10 | 0.950 | 0.073 | 13.073 | <.001 |
| LV1~~LV2 | 0.156 | 0.023 | 6.856 | <.001 |
| item1~~item1 | 0.574 | 0.037 | 15.398 | <.001 |
| item2~~item2 | 0.477 | 0.035 | 13.778 | <.001 |
| item3~~item3 | 0.551 | 0.036 | 15.262 | <.001 |
| item4~~item4 | 0.597 | 0.040 | 14.867 | <.001 |
| item5~~item5 | 0.264 | 0.028 | 9.537 | <.001 |
| item6~~item6 | 0.561 | 0.035 | 16.193 | <.001 |
| item7~~item7 | 0.228 | 0.021 | 10.767 | <.001 |
| item8~~item8 | 0.153 | 0.019 | 8.122 | <.001 |
| item9~~item9 | 0.566 | 0.035 | 15.978 | <.001 |
| item10~~item10 | 0.587 | 0.036 | 16.351 | <.001 |
| LV1~~LV1 | 0.453 | 0.052 | 8.701 | <.001 |
| LV2~~LV2 | 0.381 | 0.045 | 8.396 | <.001 |
When you write up a CFA for a report, you should make sure to include the parameter estimates. This can be in the form of a table with the unstandardised loadings and factor covariances, their standard errors, p-values, and corresponding standardised values.
Note: according to APA style, you should include a leading zero when a parameter can take values out of the 0-1 range (e.g., SD=0.99) but you should not include a leading zero when a parameter can only take values in the 0-1 range (e.g., r=.5). This is why we have TLI=0.99 but CFI=.99. The former can technically (though rarely does) take values >1 while the latter only takes values between 0 and 1.
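A convenient starting point for building such a table (a sketch) is lavaan’s parameterEstimates() function, which returns all estimates as a data frame that can then be tidied up and exported:

# unstandardised and standardised estimates together in one data frame
parameterEstimates(model2.est, standardized = TRUE) %>%
  dplyr::select(lhs, op, rhs, est, se, z, pvalue, std.all)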