Principal Component Analysis (PCA)

Relevant packages

  • psych
  • car
  • tidyverse

Thinking about measurement

Take a moment to think about the various constructs that you are often interested in as a researcher. This might be anything from personality traits, to language proficiency, social identity, anxiety etc. How we measure such constructs is a very important consideration for research. The things we’re interested in are very rarely the things we are directly measuring.

Consider how we might assess levels of anxiety or depression. Can we ever directly measure anxiety?[1] More often than not, we measure these things using questionnaire-based methods, to capture the multiple dimensions of the thing we are trying to assess. Twenty questions all measuring different aspects of anxiety are (we hope) going to correlate with one another if they are capturing some commonality (the construct of “anxiety”). But they introduce a problem for us, which is how to deal with 20 variables that represent (in broad terms) the same thing. How can we assess “effects on anxiety”, rather than “effects on anxiety q1”, “effects on anxiety q2”, …, and so on?

This leads us to the idea of reducing the dimensionality of our data. Can we capture a reasonable amount of the information from our 20 questions in a smaller number of variables?

The goal of principal component analysis (PCA) is to find a smaller number of uncorrelated variables which are linear combinations of the original (many) variables and explain most of the variation in the data.
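As a quick illustration of this idea (a sketch on simulated data, using base R's prcomp() rather than the psych functions used in the exercises below): three items driven by a single underlying construct can be summarised almost entirely by their first principal component.

```r
# Hypothetical illustration: three questionnaire items sharing one construct.
set.seed(1)
construct <- rnorm(100)                    # the unobserved construct
items <- data.frame(q1 = construct + rnorm(100, sd = 0.3),
                    q2 = construct + rnorm(100, sd = 0.3),
                    q3 = construct + rnorm(100, sd = 0.3))
pca <- prcomp(items, scale. = TRUE)        # PCA on the correlation matrix
summary(pca)$importance["Proportion of Variance", ]  # PC1 captures most of it
```

Here most of the variability in the three items collapses onto one component, which is exactly the kind of dimension reduction PCA aims for.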

Exercises: Police Performance

Data: Police Performance

The file police_performance.csv (available at https://uoepsy.github.io/data/police_performance.csv) contains data on fifty police officers who were rated in six different categories as part of an HR procedure. The rated skills were:

  • communication skills: commun
  • problem solving: probl_solv
  • logical ability: logical
  • learning ability: learn
  • physical ability: physical
  • appearance: appearance

The data also contains information on each police officer’s arrest rate (proportion of arrests that lead to criminal charges).

We are interested in whether the skills ratings by HR are good predictors of police officer success (as indicated by their arrest rate).

1. Explore

First things first, we should always plot and describe our data. This is always a sensible thing to do - while many of the datasets we give you are nice and clean and organised, the data you get out of questionnaire tools, experiment software etc, are almost always quite a bit messier. It is also very useful to just eyeball the patterns of relationships between variables.

Question A1

Load the job performance data into R and call it job. Check whether or not the data were read correctly into R - do the dimensions correspond to the description of the data above?

Solution

Question A2

Provide descriptive statistics for each variable in the dataset.

Solution

2. Is PCA needed?

There are many reasons we might want to reduce the dimensionality of data:

  • Theory testing
    • What are the number and nature of dimensions that best describe a theoretical construct?
  • Test construction
    • How should I group my items into sub-scales?
    • Which items are the best measures of my constructs?
  • Pragmatic
    • I have multicollinearity issues/too many variables, how can I defensibly combine my variables?

PCA is most often used for the latter - we are less interested in the theory behind our items, we just want a useful way of simplifying lots of variables down to a smaller number.

Recall that we are wanting to see how well the skills ratings predict arrest rate.
We might fit this model:

mod <- lm(arrest_rate ~ commun + probl_solv + logical + learn + physical + appearance, data = job)

However, we might have reason to think that many of these predictors might be quite highly correlated with one another, and so we may be unable to draw accurate inferences. This is borne out in our variance inflation factor (VIF):

library(car)
vif(mod)
    commun probl_solv    logical      learn   physical appearance 
     34.67       1.17       1.23      43.56      34.98      21.78 

As the original variables are highly correlated, it is possible to reduce the dimensionality of the problem under investigation without losing too much information.

On the other hand, if the correlations between the variables under study are weak, a larger number of components is needed in order to explain sufficient variability.
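As an aside (a sketch on simulated data, not part of the exercises): for standardised predictors, the VIFs are just the diagonal of the inverse of the predictors' correlation matrix, which makes the link between high inter-correlations and inflated variances explicit.

```r
# VIFs from the correlation matrix: diag(solve(cor(X))).
set.seed(42)
x1 <- rnorm(50)
x2 <- x1 + rnorm(50, sd = 0.2)   # nearly collinear with x1
x3 <- rnorm(50)                  # unrelated to both
X  <- cbind(x1, x2, x3)
diag(solve(cor(X)))              # large VIFs for x1 and x2, close to 1 for x3
```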

Question A3

Working with only the skills ratings (not the arrest rate - we’ll come back to that right at the end), investigate whether or not the variables are highly correlated and explain whether or not you think PCA might be useful in this case.

Hint: We only have 6 variables here, but if we had many, how might you visualise cor(job)?

Solution

3. Cov vs Cor?

Should we perform PCA on the covariance or the correlation matrix?

This depends on the variances of the variables in the dataset. If the variables have large differences in their variances, then the variables with the largest variances will tend to dominate the first few principal components.
A solution to this is to standardise the variables prior to computing the covariance matrix - i.e., compute the correlation matrix!

# show that the correlation matrix and the covariance matrix of the standardized variables are identical
all.equal(cor(job_skills), cov(scale(job_skills)))
[1] TRUE
Question A4

Look at the variance of the skills ratings in the data set. Do you think that PCA should be carried out on the covariance matrix or the correlation matrix?

Solution

4. Perform PCA

Question A5

Using the principal() function from the psych package, we can perform a PCA of the job performance data. Call the output job_pca.

job_pca <- principal(job_skills, nfactors = ncol(job_skills), covar = ..., rotate = 'none')
job_pca$loadings

Depending on your answer to the previous question, either set covar = TRUE or covar = FALSE within the principal() function.

Warning: whichever you choose, the output of the function will nevertheless be reported in terms of standardized variables, so you will see output where each variable has a standard deviation of 1.

Solution

PCA OUTPUT

We can print the output by just typing the name of our PCA:

job_pca
Principal Components Analysis
Call: principal(r = job_skills, nfactors = ncol(job_skills), rotate = "none", 
    covar = TRUE)
Standardized loadings (pattern matrix) based upon correlation matrix
            PC1   PC2   PC3   PC4   PC5   PC6 h2       u2 com
commun     0.98 -0.12  0.02 -0.06  0.10 -0.05  1  6.7e-16 1.1
probl_solv 0.22  0.81  0.54  0.00  0.00  0.00  1  1.1e-15 1.9
logical    0.33  0.75 -0.58  0.00  0.00  0.00  1  1.1e-15 2.3
learn      0.99 -0.11  0.02 -0.05  0.00  0.11  1  0.0e+00 1.1
physical   0.99 -0.08  0.01 -0.05 -0.11 -0.05  1 -4.4e-16 1.0
appearance 0.98 -0.13  0.02  0.16  0.01  0.00  1  2.2e-16 1.1

                       PC1  PC2  PC3  PC4  PC5  PC6
SS loadings           4.04 1.26 0.63 0.03 0.02 0.02
Proportion Var        0.67 0.21 0.11 0.01 0.00 0.00
Cumulative Var        0.67 0.88 0.99 0.99 1.00 1.00
Proportion Explained  0.67 0.21 0.11 0.01 0.00 0.00
Cumulative Proportion 0.67 0.88 0.99 0.99 1.00 1.00

Mean item complexity =  1.4
Test of the hypothesis that 6 components are sufficient.

The root mean square of the residuals (RMSR) is  0 
 with the empirical chi square  0  with prob <  NA 

Fit based upon off diagonal values = 1

The output is made up of two parts.

First, it shows the loading matrix. In each column of the loading matrix we find how much each of the measured variables contributes to the computed new axis/direction (that is, the principal component). Notice that there are as many principal components as variables.

The second part of the output displays the contribution of each component to the total variance:

  • SS loadings: The sum of the squared loadings. The eigenvalues (see Lecture).
  • Proportion Var: the proportion of the total variance (across all the variables) that the component accounts for.
  • Cumulative Var: cumulative sum of Proportion Var.
  • Proportion Explained: relative amount of variance explained (\(\frac{\text{Proportion Var}}{\text{sum(Proportion Var)}}\)).
  • Cumulative Proportion: cumulative sum of Proportion Explained.

Let’s focus on the row of that output called “Cumulative Var”. This displays the cumulative sum of the variances of each principal component. Taken all together, the six principal components explain all of the total variance in the original data. In other words, the total variance of the principal components (the sum of their variances) is equal to the total variance in the original data (the sum of the variances of the variables).

However, our goal is to reduce the dimensionality of our data, so it is natural to wonder which of the six principal components explain most of the variability, and which components instead do not contribute substantially to the total variance.

To that end, the second row “Proportion Var” displays the proportion of the total variance explained by each component, i.e. the variance of the principal component divided by the total variance.

The last row, as we saw, is the cumulative proportion of explained variance: 0.67, 0.67 + 0.21, 0.67 + 0.21 + 0.11, and so on.

We also notice that the first PC alone explains 67.3% of the total variability, while the first two components together explain almost 90% of the total variability. From the third component onwards, we do not see such a sharp increase in the proportion of explained variance, and the cumulative proportion slowly reaches the total ratio of 1 (or 100%).
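These rows can be reproduced directly from the eigenvalues (the SS loadings row of the output above):

```r
# Reproducing "Proportion Var" and "Cumulative Var" from the eigenvalues
ev <- c(4.035, 1.261, 0.631, 0.035, 0.022, 0.016)  # SS loadings from the output
round(ev / sum(ev), 2)          # Proportion Var
round(cumsum(ev) / sum(ev), 2)  # Cumulative Var
```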

Optional: (some of) the math behind it

5. How many components to keep?

There is no single best method to select the optimal number of components to keep, while discarding the remaining ones (which are then considered as noise components).

The following five approaches are commonly used in the literature:

  • The cumulative proportion of explained variance criterion
  • Kaiser’s rule
  • The scree plot
  • Velicer’s Minimum Average Partial method
  • Parallel analysis

In the next sections we will analyse each of them in turn.

The cumulative proportion of explained variance criterion

This rule suggests keeping as many principal components as are needed in order to explain approximately 80-90% of the total variance.

Question A6

Looking again at the PCA output, how many principal components would you keep if you were following the cumulative proportion of explained variance criterion?

Solution

Kaiser’s rule

According to Kaiser’s rule, we should keep the principal components with variance larger than 1. Standardized variables have variance equal to 1. Because we have 6 variables in the data set, and the total variance is 6, the value 1 represents the average variance in the data:

\[ \frac{1 + 1 + 1 + 1 + 1 + 1}{6} = 1 \]

Hint:

The variances of each PC are shown in the row of the output named SS loadings and also in job_pca$values. The average variance is:

mean(job_pca$values)
[1] 1
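The average is exactly 1 because the eigenvalues of any correlation matrix sum to the number of variables (the trace of the matrix: p ones on the diagonal). A quick check on simulated data:

```r
# Eigenvalues of a correlation matrix sum to the number of variables.
set.seed(7)
X  <- matrix(rnorm(50 * 6), ncol = 6)
ev <- eigen(cor(X))$values
sum(ev)    # the number of variables, 6
mean(ev)   # 1
```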
Question A7

Looking again at the PCA output, how many principal components would you keep if you were following Kaiser’s criterion?

Solution

The scree plot

The scree plot is a graphical criterion which involves plotting the variance for each principal component. This can be easily done by calling plot on the variances, which are stored in job_pca$values:

plot(x = 1:length(job_pca$values), y = job_pca$values, 
     type = 'b', xlab = '', ylab = 'Variance', 
     main = 'Police officers: scree plot', frame.plot = FALSE)

where the argument type = 'b' tells R that the plot should have both points and lines.

A typical scree plot features higher variances for the initial components and quickly drops to small variances where the curve is almost flat. The flat part of the curve represents the noise components, which are not able to capture the main sources of variability in the system.

According to the scree plot criterion, we should keep the principal components up to the point where the “elbow” in the plot occurs. By “elbow” we mean the last component before the curve becomes almost flat.

Alternatively, some people prefer to use the function scree() from the psych package:

scree(job_skills, factors = FALSE)

This also draws a horizontal line at y = 1. So, if you are making a decision about how many PCs to keep by looking at where the plot falls below the y = 1 line, you are essentially following Kaiser’s rule. In fact, Kaiser’s criterion tells you to keep the PCs whose variance (i.e., eigenvalue) is greater than 1.

Question A8

According to the scree plot, how many principal components would you retain?

Solution

Velicer’s Minimum Average Partial method

The Minimum Average Partial (MAP) test computes the partial correlation matrix (removing and adjusting for a component from the correlation matrix), sequentially partialling out each component. At each step, the partial correlations are squared and their average is computed.
At first, the components which are removed will be those that are most representative of the shared variance between 2+ variables, meaning that the “average squared partial correlation” will decrease. At some point in the process, the components being removed will begin to represent variance that is specific to individual variables, meaning that the average squared partial correlation will increase.
The MAP method is to keep the number of components for which the average squared partial correlation is at the minimum.

We can conduct MAP in R using:

VSS(job_skills, plot = FALSE, method = "pc", n = ncol(job_skills))

(be aware there is a lot of other information in this output too! For now just focus on the map column)
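To see what the map column is tracking, here is a rough sketch of the computation MAP performs, on simulated data with one strong shared component (this is an illustration of the idea, not a replacement for VSS()):

```r
# Sketch of Velicer's MAP: partial out k components, average the squared
# partial correlations, and look for the minimum.
set.seed(3)
shared <- rnorm(100)
X <- sapply(1:6, function(i) shared + rnorm(100))  # six correlated variables
R <- cor(X)
e <- eigen(R)
A <- e$vectors %*% diag(sqrt(e$values))            # component loadings
map <- sapply(1:(ncol(X) - 1), function(k) {
  Rstar <- R - tcrossprod(A[, 1:k, drop = FALSE])  # partial out k components
  P <- Rstar / sqrt(tcrossprod(diag(Rstar)))       # partial correlation matrix
  mean(P[lower.tri(P)]^2)                          # average squared partial cor
})
which.min(map)   # number of components at the minimum
```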

Question A9

How many components should we keep according to the MAP method?

Solution

Parallel analysis

Parallel analysis involves simulating lots of datasets of the same dimension but in which the variables are uncorrelated. For each of these simulations, a PCA is conducted on its correlation matrix, and the eigenvalues are extracted. We can then compare our eigenvalues from the PCA on our actual data to the average eigenvalues across these simulations. In theory, for uncorrelated variables, no components should explain more variance than any others, and eigenvalues should be equal to 1. In reality, variables are rarely truly uncorrelated, and so there will be slight variation in the magnitude of eigenvalues simply due to chance. The parallel analysis method suggests keeping those components for which the eigenvalues are greater than those from the simulations.

It can be conducted in R using:

fa.parallel(job_skills, fa="pc", quant=.95)
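The logic can be sketched by hand (using simulated “observed” data with one shared component here, purely for illustration; fa.parallel() does this properly, including the quantile-based comparison):

```r
# Hand-rolled parallel analysis: compare observed eigenvalues to the average
# eigenvalues from many same-sized datasets of uncorrelated variables.
set.seed(5)
shared <- rnorm(100)
obs    <- sapply(1:6, function(i) shared + rnorm(100))
obs_ev <- eigen(cor(obs))$values
sim_ev <- replicate(200, eigen(cor(matrix(rnorm(100 * 6), ncol = 6)))$values)
sum(obs_ev > rowMeans(sim_ev))   # components beating the simulated average
```

Strictly speaking components should be counted consecutively from the first; counting all exceedances as here is a simplification that works when only the leading eigenvalues are large.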
Question A10

How many components should we keep according to parallel analysis?

Solution

Interpretation

Because three out of the five selection criteria introduced above suggest keeping 2 principal components, in what follows we will work with the first two PCs only.

Let’s have a look at the selected principal components:

job_pca$loadings[, 1:2]
             PC1     PC2
commun     0.984 -0.1197
probl_solv 0.223  0.8095
logical    0.329  0.7466
learn      0.987 -0.1097
physical   0.988 -0.0784
appearance 0.979 -0.1253

and at their corresponding proportion of total variance explained:

job_pca$values / sum(job_pca$values)
[1] 0.67253 0.21016 0.10510 0.00577 0.00372 0.00273

We see that the first PC accounts for 67.3% of the total variability. All loadings have roughly the same magnitude apart from those for probl_solv and logical, which are closer to zero. The first component looks like a sort of average of the officers’ performance scores, excluding problem solving and logical ability.

The second principal component, which explains only 21% of the total variance, has two loadings clearly distant from zero: those associated with problem solving and logical ability. It distinguishes police officers with strong logical and problem solving skills but lower scores on the other skills (note the negative signs of the remaining loadings).

We have just seen how to interpret the first components by looking at the magnitude and sign of the coefficients for each measured variable.

For interpretation purposes, it can help to hide very small loadings. This can be done by specifying the cutoff value in the print() function. However, this only works when you pass the loadings for all the PCs:

print(job_pca$loadings, cutoff = 0.3)

Loadings:
           PC1    PC2    PC3    PC4    PC5    PC6   
commun      0.984                                   
probl_solv         0.810  0.543                     
logical     0.329  0.747 -0.578                     
learn       0.987                                   
physical    0.988                                   
appearance  0.979                                   

                 PC1   PC2   PC3   PC4   PC5   PC6
SS loadings    4.035 1.261 0.631 0.035 0.022 0.016
Proportion Var 0.673 0.210 0.105 0.006 0.004 0.003
Cumulative Var 0.673 0.883 0.988 0.994 0.997 1.000


Optional: How well are the units represented in the reduced space?

6. Using PCA scores

Supposing that we decide to reduce our six variables down to two principal components:

job_pca2 <- principal(job_skills, nfactors = 2, covar = TRUE, rotate = 'none')

We can, for each of our observations, get their scores on each of our components.

head(job_pca2$scores)
       PC1    PC2
[1,] -6.10 -1.796
[2,] -4.69  4.164
[3,] -5.18 -0.131
[4,] -4.31 -1.758
[5,] -3.71  1.207
[6,] -3.88 -5.200
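Conceptually, a score is just an observation's (standardised) variables combined using the component weights. A sketch with base R's prcomp() on simulated data (exact scaling conventions differ between functions, so the numbers from principal() will not necessarily match a raw projection):

```r
# Scores are the standardised data projected onto the component directions.
set.seed(9)
X   <- matrix(rnorm(50 * 4), ncol = 4)
pca <- prcomp(X, scale. = TRUE)
sc  <- scale(X) %*% pca$rotation      # project standardised data onto the PCs
all.equal(unname(sc), unname(pca$x))  # identical to the stored scores
```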

In the literature, some authors also suggest looking at the correlation between each principal component and the measured variables:

# First PC
cor(job_pca2$scores[,1], job_skills)
     commun probl_solv logical learn physical appearance
[1,]  0.985      0.214   0.319 0.988    0.989      0.981

The first PC is strongly correlated with all the measured variables except probl_solv and logical. As we mentioned above, all of the other variables contribute strongly to the first PC.

# Second PC
cor(job_pca2$scores[,2], job_skills)
     commun probl_solv logical  learn physical appearance
[1,] -0.163      0.792   0.738 -0.154   -0.122     -0.169

The second PC is strongly correlated with probl_solv and logical, and slightly negatively correlated with the remaining variables. This separates police officers with clear logical and problem solving skills and a low rating on other skills.

Question A11

We have reduced our six variables down to two principal components, and we are now able to use the scores on each component in a subsequent analysis!

Join the two PC scores to the original dataset with the arrest rates in. Then fit a linear model to look at how the arrest rate of police officers is predicted by the two components representing different composites of the skills ratings by HR.

Solution

Plotting PCA scores

We can also visualise the statistical units (police officers) in the reduced space given by the retained principal component scores.

library(tidyverse)

tibble(pc1 = job_pca2$scores[, 1],
       pc2 = job_pca2$scores[, 2]) %>%
  ggplot(aes(x = pc1, y = pc2)) +
  geom_point()

Footnotes

  1. Even if we cut open someone’s brain, it’s unclear what we would be looking for in order to ‘measure’ it. It is unclear whether anxiety even exists as a physical thing, or rather if it is simply the overarching concept we apply to a set of behaviours and feelings.↩︎