
We have seen that constructing matrices with specific eigenvalues and eigenvectors allows us to stretch space in desired directions. However, we often want to decompose matrices into their eigenvalues and eigenvectors. Doing so can help us to analyze certain properties of the matrix, much as decomposing an integer into its prime factors can help us understand the behavior of that integer.
Not every matrix can be decomposed into eigenvalues and eigenvectors. In some cases, the decomposition exists, but may involve complex rather than real numbers. Fortunately, in this book, we usually need to decompose only a specific class of matrices that have a simple decomposition. Specifically, every real symmetric matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues:
A = QΛQ^⊤,
where Q is an orthogonal matrix composed of eigenvectors of A, and Λ is a diagonal matrix, with λ_{i,i} being the eigenvalue corresponding to Q_{:,i}.
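As a concrete numerical sketch (the code below is illustrative and assumes NumPy; it is not part of the original development), we can verify this decomposition for a random symmetric matrix using np.linalg.eigh, which is specialized for real symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B + B.T  # symmetrizing makes A real symmetric

# eigh returns real eigenvalues (in ascending order) and an
# orthogonal matrix Q whose columns are unit eigenvectors.
eigenvalues, Q = np.linalg.eigh(A)
Lam = np.diag(eigenvalues)

# Check the reconstruction A = Q Lam Q^T and the orthogonality of Q.
assert np.allclose(A, Q @ Lam @ Q.T)
assert np.allclose(Q.T @ Q, np.eye(3))
```

Note that np.linalg.eigh returns the eigenvalues in ascending order; reversing the entries of Λ and the corresponding columns of Q gives the descending convention described below.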
While any real symmetric matrix A is guaranteed to have an eigendecomposition, the eigendecomposition is not unique. If any two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue, and we could equivalently choose a Q using those eigenvectors instead. By convention, we usually sort the entries of Λ in descending order. Under this convention, the eigendecomposition is unique only if all of the eigenvalues are unique.
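This non-uniqueness is easy to see numerically. In the following sketch (again illustrative, assuming NumPy), a matrix with a repeated eigenvalue admits two different orthogonal matrices Q that both reconstruct it:

```python
import numpy as np

# A has eigenvalue 2 with multiplicity 2, and eigenvalue 1.
A = np.diag([2.0, 2.0, 1.0])
Lam = A.copy()

Q1 = np.eye(3)  # the standard basis vectors are eigenvectors

# Rotating within the span of the first two eigenvectors gives
# another orthonormal basis of that plane, still made of eigenvectors.
theta = 0.3
c, s = np.cos(theta), np.sin(theta)
Q2 = np.array([[c, -s, 0.0],
               [s,  c, 0.0],
               [0.0, 0.0, 1.0]])

# Both choices yield valid eigendecompositions of the same matrix.
assert np.allclose(A, Q1 @ Lam @ Q1.T)
assert np.allclose(A, Q2 @ Lam @ Q2.T)
```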
The eigendecomposition of a matrix tells us many useful facts about the matrix. The matrix is singular if and only if any of the eigenvalues are 0. The eigendecomposition can also be used to optimize quadratic expressions of the form f(x) = x^⊤Ax subject to ||x||_2 = 1. Whenever x is equal to an eigenvector of A, f takes on the value of the corresponding eigenvalue. The maximum value of f within the constraint region is the maximum eigenvalue and its minimum value within the constraint region is the minimum eigenvalue.
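The following sketch (illustrative code, assuming NumPy) checks these claims: f evaluated at a unit eigenvector returns the corresponding eigenvalue, and f at any other unit vector lies between the smallest and largest eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B + B.T  # real symmetric

eigenvalues, Q = np.linalg.eigh(A)  # ascending order

def f(x):
    return x @ A @ x  # the quadratic form x^T A x

# At each unit-norm eigenvector, f equals the matching eigenvalue.
for i in range(4):
    assert np.isclose(f(Q[:, i]), eigenvalues[i])

# Any unit vector yields a value between the extreme eigenvalues.
x = rng.standard_normal(4)
x /= np.linalg.norm(x)
assert eigenvalues[0] <= f(x) + 1e-9
assert f(x) <= eigenvalues[-1] + 1e-9
```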
A matrix whose eigenvalues are all positive is called positive definite. A matrix whose eigenvalues are all positive or zero-valued is called positive semidefinite. Likewise, if all eigenvalues are negative, the matrix is negative definite, and if all eigenvalues are negative or zero-valued, it is negative semidefinite. Positive semidefinite matrices are interesting because they guarantee that ∀x, x^⊤Ax ≥ 0. Positive definite matrices additionally guarantee that x^⊤Ax = 0 ⇒ x = 0.
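These definitions suggest a simple numerical test: compute the eigenvalues of a symmetric matrix and inspect their signs. A sketch using np.linalg.eigvalsh follows (illustrative; the classify helper is our own, not a library function):

```python
import numpy as np

def classify(A, tol=1e-10):
    """Classify a real symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify(np.diag([2.0, 1.0])))    # positive definite
print(classify(np.diag([2.0, 0.0])))    # positive semidefinite
print(classify(np.diag([-1.0, -3.0])))  # negative definite
print(classify(np.diag([1.0, -1.0])))   # indefinite
```

The tolerance guards against floating-point noise: eigenvalues within tol of zero are treated as zero-valued.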
2.8 Singular Value Decomposition
In Sec. 2.7, we saw how to decompose a matrix into eigenvectors and eigenvalues. The singular value decomposition (SVD) provides another way to factorize a matrix, into singular vectors and singular values. The SVD allows us to discover