
Foundational Concepts

10 Eigenanalysis

Learning Objectives

To use matrix algebra to conduct eigenanalysis.

To begin interpreting eigenvectors and eigenvalues.

Resources

Appendix from Gotelli & Ellison (2004)

Introduction

Eigenanalysis is a method of identifying a set of linear equations that summarize a square matrix.  It yields a set of eigenvalues (λ), each of which has an associated eigenvector (x).  The connection between these terms is expressed in Equation A.16 from Gotelli & Ellison:

Ax = λx

In words, this says that the multiplication of a square matrix A and a vector x yields the same values as the multiplication of a scalar value λ and the vector x.  While this may not sound very helpful, it means that a cloud of data can be rotated, reflected, stretched, or compressed in coordinate space by multiplying the individual data points by the eigenvectors (x).

We’ll see much more about eigenvectors when we discuss ordinations, particularly principal component analysis (PCA).

 

Eigenanalysis

Eigenanalysis is a method of summarizing a square matrix.  Each dimension is represented by an eigenvalue and associated eigenvector.

The dimensions (eigenvalues) are in decreasing order of importance.  This is why we can focus on the first few dimensions and be assured that we are seeing the broad patterns within the data.

If all of the eigenvectors are used, the patterns within the data cloud are perfectly preserved even though the cloud itself may be rotated, reflected, stretched, or compressed.

Spectral Decomposition (eigen())

The eigenvalues (λ) and eigenvectors (x) of a square matrix A can be calculated using the eigen() function:
A <- matrix(data = c(1, 2, 2, 5), nrow = 2)
E <- eigen(x = A); E

 

eigen() decomposition
$values
[1]  5.8284271 0.1715729

$vectors
          [,1]       [,2]
[1,] 0.3826834 -0.9238795
[2,] 0.9238795  0.3826834

 

Note that the object E contains both the eigenvalues and the eigenvectors.  These elements can be extracted for further manipulation:
Evalues <- E$values
Evectors <- E$vectors

The eigenvalues are always in order of decreasing size. The sum of the eigenvalues is equal to the sum of the diagonal values of the original matrix.

Each eigenvector is associated with the eigenvalue in the same relative position – for example, the first eigenvector (i.e., the first column) is associated with the first eigenvalue.

 

Now, let’s verify Equation A.16 (Ax = λx) from Gotelli & Ellison (2004):
A %*% Evectors[,1]  # Why is this matrix multiplied?
Evalues[1] * Evectors[,1] # Why isn’t this?

Note that the left-hand and right-hand sides of this equation give the same values when we focus on the first eigenvalue and its eigenvector.  Verify that the same is true for the second eigenvalue and its eigenvector.
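As a check on the second eigenpair, the two sides of the equation can be compared directly.  A minimal sketch (the matrix and decomposition are redefined here so the code is self-contained; all.equal() allows for floating-point rounding):

```r
# Example matrix and its eigen decomposition, as above
A <- matrix(data = c(1, 2, 2, 5), nrow = 2)
E <- eigen(x = A)

# Left-hand side: A matrix-multiplied by the second eigenvector
lhs <- A %*% E$vectors[, 2]
# Right-hand side: second eigenvalue times the second eigenvector
rhs <- E$values[2] * E$vectors[, 2]

# TRUE if the two sides agree to numerical precision
all.equal(as.numeric(lhs), rhs)  # [1] TRUE
```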

 

The trace of a matrix is equal to the sum of its diagonal values or, equivalently, the sum of its eigenvalues:
sum(diag(A))
sum(Evalues)

 

The determinant of a matrix is equal to the product of its eigenvalues:
prod(Evalues)
The determinant can also be obtained using the det() function:
det(A)
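Because all of the eigenvectors are retained, the original matrix can be reassembled from the eigenvalues and eigenvectors, which illustrates the earlier claim that the patterns in the data are perfectly preserved.  A sketch of this spectral reconstruction, redefining A and E for self-containment:

```r
A <- matrix(data = c(1, 2, 2, 5), nrow = 2)
E <- eigen(x = A)

# Reassemble A as V %*% diag(lambda) %*% V^-1
A.rebuilt <- E$vectors %*% diag(E$values) %*% solve(E$vectors)

all.equal(A, A.rebuilt)  # [1] TRUE
```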

Singular Value Decomposition (svd())

Another way to obtain the eigenvalues and eigenvectors of a matrix is through singular value decomposition using the svd() function:
A.svd <- svd(x = A)
A.svd

 

$d
[1] 5.8284271 0.1715729

$u
           [,1]       [,2]
[1,] -0.3826834 -0.9238795
[2,] -0.9238795  0.3826834

$v
           [,1]       [,2]
[1,] -0.3826834 -0.9238795
[2,] -0.9238795  0.3826834

 

Gotelli & Ellison (2004) use a different symbology than is used in the help file associated with svd().  The symbology in the R formulation, X = UDV’, is defined in the table below.

Matrix | Dimension | Notes
X      | m x n     | The matrix being analyzed
U      | m x n     | Left singular vectors (columns)
D      | n x n     | Diagonal matrix with the singular values of X on its diagonal, in decreasing order
V      | n x n     | Right singular vectors (columns); transposed in the equation

The output consists of the full U and V matrices and the singular values that form the diagonal of D.  However, they are referred to using lower-case letters (u, v, d) as shown in the above output.  The matrices U and V are identical in our simple example (A.svd) because A is symmetric with positive eigenvalues, but this is not always the case.
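The pieces of the decomposition multiply back to the original matrix (X = UDV’).  A quick check, redefining A so the code is self-contained:

```r
A <- matrix(data = c(1, 2, 2, 5), nrow = 2)
A.svd <- svd(x = A)

# U %*% D %*% t(V) recovers the original matrix
A.rebuilt <- A.svd$u %*% diag(A.svd$d) %*% t(A.svd$v)

all.equal(A, A.rebuilt)  # [1] TRUE
```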

 

The eigen() and svd() approaches give broadly similar – but not identical – results:

  • A.svd$d is analogous to E$values.  These are the singular values of the matrix; because our example matrix is symmetric with positive eigenvalues, they equal the eigenvalues here.
  • A.svd$u is analogous to E$vectors.  It is a set of vectors, each of which is associated with the singular value in the same position, and relates to the rows of the original matrix.
  • A.svd$v is another set of vectors.  Each vector is associated with the singular value in the same position, but relates to the columns of the original matrix.  This is used in Correspondence Analysis (CA), though many people find that technique problematic; see that chapter for details.
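These correspondences can be checked directly.  A sketch for the example matrix (the agreement of singular values with eigenvalues, and of the vectors up to sign, holds here because A is symmetric; it is not true for arbitrary matrices):

```r
A <- matrix(data = c(1, 2, 2, 5), nrow = 2)
E <- eigen(x = A)
A.svd <- svd(x = A)

# Singular values equal the eigenvalues for this symmetric matrix
all.equal(A.svd$d, E$values)             # [1] TRUE

# The vectors agree up to sign, so compare absolute values
all.equal(abs(A.svd$u), abs(E$vectors))  # [1] TRUE
```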

Concluding Thoughts

While eigenvalues and eigenvectors may sound abstract, they can be used to rotate, reflect, stretch, or compress data in coordinate space by multiplying the individual data points by the eigenvectors (x).

The fact that the eigenvalues are in descending order of importance is what allows us to use this as a data reduction approach.
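One common way to express this importance is the proportion of the total (the trace) accounted for by each eigenvalue.  A sketch using the example matrix from this chapter:

```r
A <- matrix(data = c(1, 2, 2, 5), nrow = 2)
E <- eigen(x = A)

# Proportion of the trace accounted for by each eigenvalue;
# the first dimension captures ~97% of the total here
round(E$values / sum(E$values), 3)  # [1] 0.971 0.029
```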

We’ll see much more about eigenvectors when we discuss ordinations, particularly principal component analysis (PCA).

References

Gotelli, N.J., and A.M. Ellison. 2004. A Primer of Ecological Statistics. Sinauer Associates, Sunderland, MA.

License


Applied Multivariate Statistics in R Copyright © 2026 by Jonathan Bakker is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.