
# 10 Eigenanalysis

Learning Objectives

To use matrix algebra to conduct eigenanalysis, and to begin interpreting the resulting eigenvectors and eigenvalues.

To continue using R.

Resources

Gotelli & Ellison (2004, appendix)

# Introduction

Eigenanalysis is a method of identifying a set of linear equations that summarize a square matrix. It yields a set of **eigenvalues** (λ), each of which has an associated **eigenvector** (**x**). The connection between these terms is expressed in Equation A.16 from Gotelli & Ellison:

**Ax** = λ**x**

In words, this says that the multiplication of a square matrix **A** and a vector **x** will yield the same values as the multiplication of a scalar value λ and the vector **x**. While this may not sound very helpful, it means that data (**A**) can be rotated, reflected, stretched, or compressed in coordinate-space by multiplying the individual data points by an eigenvector (**x**). We’ll see much more about eigenvectors when we discuss ordinations, particularly principal component analysis (PCA).
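This geometric idea can be illustrated with a minimal sketch (the data matrix `X` below is made up for illustration): rotating a small data cloud by the eigenvectors of its covariance matrix reorients the cloud but preserves its total variance.

```r
# Sketch: rotating a (made-up) data cloud by the eigenvectors of its
# covariance matrix changes the orientation but preserves total variance.
set.seed(42)
X <- matrix(rnorm(20), nrow = 10, ncol = 2)  # 10 points in 2 dimensions
V <- eigen(cov(X))$vectors                   # eigenvectors of the covariance matrix
X.rot <- X %*% V                             # rotate/reflect the data cloud
sum(diag(cov(X)))                            # total variance before rotation
sum(diag(cov(X.rot)))                        # same total variance after rotation
```

The totals match because the eigenvector matrix is orthonormal, so multiplying by it rotates or reflects the cloud without stretching it.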

Eigenanalysis

Eigenanalysis is a method of summarizing a square matrix. Each dimension is represented by an eigenvalue and an associated eigenvector.

The dimensions are in decreasing order of importance; each dimension captures as much of the variation as possible. This is why we can focus on the first few dimensions and be assured that we are seeing the broad patterns within the data.

If all of the eigenvectors are used, the patterns within the data cloud are perfectly preserved even though the cloud itself may be rotated, reflected, stretched, or compressed.

# Spectral Decomposition (`eigen()`)

The eigenvalues (λ) and eigenvectors (**x**) of a square matrix **A** can be calculated using the `eigen()` function:

`A <- matrix(data = 1:4, nrow = 2)`

`E <- eigen(x = A); E`

```
eigen() decomposition
$values
[1]  5.3722813 -0.3722813

$vectors
           [,1]       [,2]
[1,] -0.5657675 -0.9093767
[2,] -0.8245648  0.4159736
```

Note that the eigenvalues and eigenvectors are both reported and can be extracted as necessary for further manipulation:

`Evalues <- E$values`

`Evectors <- E$vectors`

The eigenvalues are always in order of decreasing size. The sum of the eigenvalues is equal to the sum of the diagonal values of the original matrix.

Each eigenvector is associated with the eigenvalue in the same relative position – for example, the first eigenvector (a column) is associated with the first eigenvalue.

Now, let’s verify Equation A.16:

`A %*% Evectors[,1] # Why is this matrix multiplied?`

`Evalues[1] * Evectors[,1] # Why isn't this?`

Gotelli & Ellison also state that (**A** – λ**I**)**x** = **0** (Equation A.17). Can you verify this equation?
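One way to check Equation A.17 numerically is to plug in the first eigenvalue/eigenvector pair and confirm that the result is (numerically) the zero vector. This sketch re-creates the objects defined above so it runs on its own:

```r
# Verify (A - lambda*I)x = 0 for the first eigenvalue/eigenvector pair
A <- matrix(data = 1:4, nrow = 2)
E <- eigen(x = A)
lambda1 <- E$values[1]
x1 <- E$vectors[, 1]
(A - lambda1 * diag(2)) %*% x1  # zero vector, up to rounding error
```

The entries are not exactly zero because of floating-point arithmetic, but they are zero to within machine precision.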

The **trace** of a matrix is equal to the sum of its diagonal values or, equivalently, the sum of its eigenvalues:

`sum(diag(A))`

`sum(Evalues)`

The **determinant** of a matrix is equal to the product of its eigenvalues:

`prod(Evalues)`

It can also be obtained using the `det()` function:

`det(A)`

# Singular Value Decomposition (`svd()`)

Another way to obtain the eigenvalues and eigenvectors of a matrix is through singular value decomposition using the `svd()` function:

`A.svd <- svd(x = A); A.svd`

```
$d
[1] 5.4649857 0.3659662

$u
           [,1]       [,2]
[1,] -0.5760484 -0.8174156
[2,] -0.8174156  0.5760484

$v
           [,1]       [,2]
[1,] -0.4045536  0.9145143
[2,] -0.9145143 -0.4045536
```

Gotelli & Ellison (2004) discuss singular value decomposition but use a different symbology in Equation A.22 than is used in the help file associated with `svd()`. The symbology in the R formulation, **X** = **UDV**', is defined in the table below.

| Matrix | Dimension | Notes |
|--------|-----------|-------|
| **X** | m x n | The matrix being analyzed |
| **U** | m x n | |
| **D** | n x n | Diagonal matrix with singular values of **X** on diagonal |
| **V** | n x n | Note: transposed in equation |
The output consists of the full **U** and **V** matrices and the singular values of **D**. However, they are referred to using lower-case letters as shown in the above output.
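As a check, multiplying the three components back together recovers the original matrix. This sketch re-creates the objects from above; `diag()` builds the diagonal matrix **D** from the vector of singular values:

```r
# Reconstruct A as U %*% D %*% t(V), the R help file's X = U D V'
A <- matrix(data = 1:4, nrow = 2)
A.svd <- svd(x = A)
A.recon <- A.svd$u %*% diag(A.svd$d) %*% t(A.svd$v)
all.equal(A, A.recon)  # TRUE, up to numerical precision
```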

The `eigen()` and `svd()` approaches give broadly similar – but not identical – results:

- `A.svd$d` is analogous to `E$values`. Each element is a singular value, which plays the same role as an eigenvalue.
- `A.svd$u` is analogous to `E$vectors`. It is a set of vectors, each of which is associated with a singular value and relates to the rows of the original matrix.
- `A.svd$v` is another set of vectors. Each vector is associated with a singular value, but relates to the columns of the original matrix. This is used in Correspondence Analysis (CA), though many people find that method problematic. See that chapter for details.
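For a symmetric matrix with non-negative eigenvalues, the two approaches coincide exactly. A sketch (the symmetric matrix `S`, built here with `crossprod()`, is an assumption chosen for illustration):

```r
# For a symmetric matrix with non-negative eigenvalues,
# eigenvalues and singular values are identical.
A <- matrix(data = 1:4, nrow = 2)
S <- crossprod(A)  # t(A) %*% A: symmetric, non-negative eigenvalues
eigen(S)$values    # eigenvalues of S
svd(S)$d           # singular values of S: same values
```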

# Concluding Thoughts

While eigenvalues and eigenvectors may sound abstract, they are used to rotate, reflect, stretch, or compress data in coordinate-space by multiplying the individual data points by an eigenvector (**x**). The fact that the eigenvectors are in descending order of importance is what allows us to use this as a data reduction approach.

We’ll see much more about eigenvectors when we discuss ordinations, particularly principal component analysis (PCA).

# References

Gotelli, N.J., and A.M. Ellison. 2004. *A primer of ecological statistics*. Sinauer Associates, Sunderland, MA.