
Section 6.5 Complex Eigenvalues

Objectives
  1. Learn to find complex eigenvalues and eigenvectors of a matrix.
  2. Learn to recognize a rotation-scaling matrix, and compute by how much the matrix rotates and scales.
  3. Understand the geometry of 2 × 2 and 3 × 3 matrices with a complex eigenvalue.
  4. Recipes: a 2 × 2 matrix with a complex eigenvalue is similar to a rotation-scaling matrix, the eigenvector trick for 2 × 2 matrices.
  5. Pictures: the geometry of matrices with a complex eigenvalue.
  6. Theorems: the rotation-scaling theorem, the block diagonalization theorem.
  7. Vocabulary: rotation-scaling matrix.

In Section 6.4, we saw that an n × n matrix whose characteristic polynomial has n distinct real roots is diagonalizable: it is similar to a diagonal matrix, which is much simpler to analyze. The other possibility is that the characteristic polynomial has complex (non-real) roots; that is the focus of this section. It turns out that such a matrix is similar (in the 2 × 2 case) to a rotation-scaling matrix, which is also relatively easy to understand.

In a certain sense, this entire section is analogous to Section 6.4, with rotation-scaling matrices playing the role of diagonal matrices.

See Appendix A for a review of the complex numbers.

Subsection 6.5.1 Matrices with Complex Eigenvalues

As a consequence of the fundamental theorem of algebra as applied to the characteristic polynomial, we see that:

Every n × n matrix has exactly n complex eigenvalues, counted with multiplicity.

We can compute a corresponding (complex) eigenvector in exactly the same way as before: by row reducing the matrix $A - \lambda I_n$. Now, however, we have to do arithmetic with complex numbers.
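For readers who want to check such computations numerically, here is a minimal NumPy sketch. The matrix of the first example is not reproduced in this extract; we assume it is $A = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}$, which indeed has the eigenvalues $1 \pm i$ quoted below.

```python
import numpy as np

# Assumed matrix of the first example (not reproduced above);
# its eigenvalues are 1 + i and 1 - i, as quoted in the text.
A = np.array([[1.0, -1.0],
              [1.0,  1.0]])

# np.linalg.eig works over the complex numbers automatically.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                    # [1.+1.j 1.-1.j]

# Each column of `eigenvectors` is an eigenvector: check A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

Note that numerical eigenvectors come back normalized, so they will be (complex) scalar multiples of the hand-computed ones.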

If A is a matrix with real entries, then its characteristic polynomial has real coefficients, so this note implies that its complex eigenvalues come in conjugate pairs. In the first example, we notice that

$$1 + i \text{ has an eigenvector } v_1 = \begin{pmatrix} i \\ 1 \end{pmatrix} \qquad 1 - i \text{ has an eigenvector } v_2 = \begin{pmatrix} -i \\ 1 \end{pmatrix}.$$

In the second example,

$$\frac{4+3i}{5} \text{ has an eigenvector } v_1 = \begin{pmatrix} -12 - 9i \\ -9 + 12i \\ 25 \end{pmatrix} \qquad \frac{4-3i}{5} \text{ has an eigenvector } v_2 = \begin{pmatrix} -12 + 9i \\ -9 - 12i \\ 25 \end{pmatrix}.$$

In these cases, an eigenvector for the conjugate eigenvalue is simply the conjugate eigenvector (the eigenvector obtained by conjugating each entry of the first eigenvector). This is always true. Indeed, if $Av = \lambda v$, then

$$A\bar v = \overline{Av} = \overline{\lambda v} = \bar\lambda\,\bar v,$$

which exactly says that $\bar v$ is an eigenvector of $A$ with eigenvalue $\bar\lambda$. (The first equality uses that $A$ has real entries, so $\bar A = A$.)

Let $A$ be a matrix with real entries. If

$$\lambda \text{ is a complex eigenvalue with eigenvector } v, \text{ then } \bar\lambda \text{ is a complex eigenvalue with eigenvector } \bar v.$$

In other words, both eigenvalues and eigenvectors come in conjugate pairs.
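As a quick numerical check of this fact, here is a small NumPy sketch, again using the assumed first-example matrix:

```python
import numpy as np

# Check the conjugate-pair fact on the assumed first-example matrix.
A = np.array([[1.0, -1.0],
              [1.0,  1.0]])
lam = 1 + 1j
v = np.array([1j, 1.0])              # eigenvector for 1 + i, as above

assert np.allclose(A @ v, lam * v)   # A v = lambda v
# Conjugating everything gives an eigenvector for the conjugate eigenvalue.
assert np.allclose(A @ v.conj(), lam.conjugate() * v.conj())
```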

Since it can be tedious to divide by complex numbers while row reducing, it is useful to learn the following trick, which works equally well for matrices with real entries.

Eigenvector Trick for 2 × 2 Matrices

Let A be a 2 × 2 matrix, and let λ be a (real or complex) eigenvalue. Then

$$A - \lambda I_2 = \begin{pmatrix} z & w \\ \star & \star \end{pmatrix} \implies \begin{pmatrix} -w \\ z \end{pmatrix} \text{ is an eigenvector with eigenvalue } \lambda,$$

assuming the first row of $A - \lambda I_2$ is nonzero.

Indeed, since $\lambda$ is an eigenvalue, we know that $A - \lambda I_2$ is not an invertible matrix. It follows that the rows are collinear (otherwise the determinant would be nonzero), so the second row is automatically a (complex) multiple of the first:

$$\begin{pmatrix} z & w \\ \star & \star \end{pmatrix} = \begin{pmatrix} z & w \\ cz & cw \end{pmatrix}.$$

It is obvious that $\binom{-w}{z}$ is in the null space of this matrix, as is $\binom{w}{-z}$, for that matter. Note that we never had to compute the second row of $A - \lambda I_2$, let alone row reduce!
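The trick translates directly into code. Here is a minimal sketch; the helper name eigenvector_trick_2x2 is ours:

```python
import numpy as np

# Read off (z, w) from the first row of A - lambda*I and return (-w, z).
def eigenvector_trick_2x2(A, lam):
    z, w = (A - lam * np.eye(2))[0]   # first row of A - lambda*I
    return np.array([-w, z])

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])
v = eigenvector_trick_2x2(A, 1 + 1j)
print(v)                              # [1.-0.j  0.-1.j], a multiple of (i, 1)
assert np.allclose(A @ v, (1 + 1j) * v)
```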

In this example we found the eigenvectors $\binom{i}{1}$ and $\binom{-i}{1}$ for the eigenvalues $1+i$ and $1-i$, respectively, but in this example we found the eigenvectors $\binom{1}{-i}$ and $\binom{1}{i}$ for the same eigenvalues of the same matrix. These vectors do not look like multiples of each other at first, but since we now have complex numbers at our disposal, we can see that they actually are multiples:

$$-i\begin{pmatrix} i \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ -i \end{pmatrix} \qquad i\begin{pmatrix} -i \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ i \end{pmatrix}.$$

Subsection 6.5.2 Rotation-Scaling Matrices

The most important examples of matrices with complex eigenvalues are rotation-scaling matrices, i.e., scalar multiples of rotation matrices.

Definition

A rotation-scaling matrix is a 2 × 2 matrix of the form

$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix},$$

where a and b are real numbers, not both equal to zero.

The following proposition justifies the name.

Proposition

Let $A = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ be a rotation-scaling matrix. Writing $r = \sqrt{a^2 + b^2} = \sqrt{\det(A)}$, the matrix $A$ is the product of a scaling matrix and a rotation matrix:

$$A = \begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$$

where $\theta$ is the counterclockwise angle from the positive $x$-axis to the vector $\binom{a}{b}$. In other words, $A$ scales by a factor of $r$ and rotates counterclockwise by $\theta$.

Geometrically, a rotation-scaling matrix does exactly what the name says: it rotates and scales (in either order).

The matrix in the second example has second column $\binom{-\sqrt 3}{1}$, which is rotated counterclockwise from the positive $x$-axis by an angle of $5\pi/6$. This rotation angle is not equal to $\tan^{-1}\!\bigl(1/(-\sqrt 3)\bigr) = -\frac{\pi}{6}$. The problem is that arctan always outputs values between $-\pi/2$ and $\pi/2$: it does not account for points in the second or third quadrants. This is why we drew a triangle and used its (positive) edge lengths to compute the angle $\phi$:

[Figure: the vector $\binom{-\sqrt 3}{1}$ in the second quadrant, making angle $\theta$ with the positive $x$-axis; the reference triangle has legs $1$ and $\sqrt 3$ and acute angle $\phi$.]

$$\phi = \tan^{-1}\!\left(\frac{1}{\sqrt 3}\right) = \frac{\pi}{6} \qquad \theta = \pi - \phi = \frac{5\pi}{6}.$$

Alternatively, we could have observed that $\binom{-\sqrt 3}{1}$ lies in the second quadrant, so that the angle $\theta$ in question is

$$\theta = \tan^{-1}\!\left(\frac{1}{-\sqrt 3}\right) + \pi = -\frac{\pi}{6} + \pi = \frac{5\pi}{6}.$$

When finding the rotation angle of a vector $\binom{a}{b}$, do not blindly compute $\tan^{-1}(b/a)$, since this will give the wrong answer when $\binom{a}{b}$ is in the second or third quadrant. Instead, draw a picture.
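In code, the standard way to get the correct angle is the two-argument arctangent, which takes the quadrant into account. A short sketch using Python's math.atan2:

```python
import math

# atan2(b, a) returns the angle of the vector (a, b) in [-pi, pi],
# taking the quadrant into account; atan(b/a) does not.
a, b = -math.sqrt(3), 1.0

print(math.atan(b / a))   # -0.5235... = -pi/6   (wrong quadrant)
print(math.atan2(b, a))   #  2.6179... =  5*pi/6 (correct angle)
```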

Subsection 6.5.3 Geometry of 2 × 2 Matrices with a Complex Eigenvalue

Let $A$ be a $2 \times 2$ matrix with a complex, non-real eigenvalue $\lambda$. Then $A$ also has the eigenvalue $\bar\lambda \neq \lambda$. In particular, $A$ has distinct eigenvalues, so it is diagonalizable using the complex numbers. We often like to think of our matrices as describing transformations of $\mathbb{R}^n$ (as opposed to $\mathbb{C}^n$). Because of this, the following construction is useful. It gives something like a diagonalization, except that all matrices involved have real entries.

Rotation-Scaling Theorem

Let $A$ be a $2 \times 2$ real matrix with a complex (non-real) eigenvalue $\lambda$, and let $v$ be an associated eigenvector. Then $A = CBC^{-1}$ for

$$C = \begin{pmatrix} | & | \\ \operatorname{Re}(v) & \operatorname{Im}(v) \\ | & | \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} \operatorname{Re}(\lambda) & \operatorname{Im}(\lambda) \\ -\operatorname{Im}(\lambda) & \operatorname{Re}(\lambda) \end{pmatrix}.$$

In particular, $A$ is similar to a rotation-scaling matrix that scales by a factor of $|\lambda| = \sqrt{\operatorname{Re}(\lambda)^2 + \operatorname{Im}(\lambda)^2}$.

Here Re and Im denote the real and imaginary parts, respectively:

$$\operatorname{Re}(a + bi) = a \qquad \operatorname{Im}(a + bi) = b$$
$$\operatorname{Re}\begin{pmatrix} x + yi \\ z + wi \end{pmatrix} = \begin{pmatrix} x \\ z \end{pmatrix} \qquad \operatorname{Im}\begin{pmatrix} x + yi \\ z + wi \end{pmatrix} = \begin{pmatrix} y \\ w \end{pmatrix}.$$

The rotation-scaling matrix in question is the matrix

$$B = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \quad\text{with}\quad a = \operatorname{Re}(\lambda), \quad b = -\operatorname{Im}(\lambda).$$

Geometrically, the rotation-scaling theorem says that a 2 × 2 matrix with a complex eigenvalue behaves similarly to a rotation-scaling matrix. See this important note in Section 6.3.

One should regard the rotation-scaling theorem as a close analogue of the diagonalization theorem in Section 6.4, with a rotation-scaling matrix playing the role of a diagonal matrix. Before continuing, we restate the theorem as a recipe:

Recipe: A 2 × 2 matrix with a complex eigenvalue

Let A be a 2 × 2 real matrix.

  1. Compute the characteristic polynomial
    $$f(\lambda) = \lambda^2 - \operatorname{Tr}(A)\,\lambda + \det(A),$$
    then compute its roots using the quadratic formula.
  2. If the eigenvalues are complex, choose one of them, and call it λ .
  3. Find a corresponding (complex) eigenvector v using the trick.
  4. Then $A = CBC^{-1}$ for
    $$C = \begin{pmatrix} | & | \\ \operatorname{Re}(v) & \operatorname{Im}(v) \\ | & | \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} \operatorname{Re}(\lambda) & \operatorname{Im}(\lambda) \\ -\operatorname{Im}(\lambda) & \operatorname{Re}(\lambda) \end{pmatrix}.$$
    The matrix $B$ is a rotation-scaling matrix that scales by a factor of $|\lambda|$; the sketch below runs this recipe numerically.
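The following NumPy sketch carries out the recipe end to end on the assumed first-example matrix and verifies $A = CBC^{-1}$; the variable names are ours.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])

# Steps 1-2: roots of lambda^2 - Tr(A) lambda + det(A); pick one root.
tr, det = np.trace(A), np.linalg.det(A)
lam = (tr + np.sqrt(complex(tr**2 - 4 * det))) / 2   # here 1 + i

# Step 3: eigenvector via the trick (first row of A - lambda*I is (z, w)).
z, w = (A - lam * np.eye(2))[0]
v = np.array([-w, z])

# Step 4: C = (Re v | Im v), B the rotation-scaling matrix.
C = np.column_stack([v.real, v.imag])
B = np.array([[ lam.real, lam.imag],
              [-lam.imag, lam.real]])

assert np.allclose(A, C @ B @ np.linalg.inv(C))
print(abs(lam))   # scaling factor |lambda| = sqrt(2)
```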

We saw in the above examples that the rotation-scaling theorem can be applied in two different ways to any given matrix: one has to choose one of the two conjugate eigenvalues to work with. Replacing $\lambda$ by $\bar\lambda$ has the effect of replacing $v$ by $\bar v$, which just negates all imaginary parts, so we also have $A = C'B'(C')^{-1}$ for

$$C' = \begin{pmatrix} | & | \\ \operatorname{Re}(v) & -\operatorname{Im}(v) \\ | & | \end{pmatrix} \quad\text{and}\quad B' = \begin{pmatrix} \operatorname{Re}(\lambda) & -\operatorname{Im}(\lambda) \\ \operatorname{Im}(\lambda) & \operatorname{Re}(\lambda) \end{pmatrix}.$$

The matrices $B$ and $B'$ are similar to each other. The only difference between them is the direction of rotation, since $\binom{\operatorname{Re}(\lambda)}{-\operatorname{Im}(\lambda)}$ and $\binom{\operatorname{Re}(\lambda)}{\operatorname{Im}(\lambda)}$ are mirror images of each other over the $x$-axis:

[Figure: the vectors $\binom{\operatorname{Re}(\lambda)}{-\operatorname{Im}(\lambda)}$ and $\binom{\operatorname{Re}(\lambda)}{\operatorname{Im}(\lambda)}$, making angles $-\theta$ and $\theta$ with the positive $x$-axis.]

The discussion that follows is closely analogous to the exposition in this subsection in Section 6.4, in which we studied the dynamics of diagonalizable 2 × 2 matrices.

Dynamics of a 2 × 2 Matrix with a Complex Eigenvalue

Let $A$ be a $2 \times 2$ matrix with a complex (non-real) eigenvalue $\lambda$. By the rotation-scaling theorem, the matrix $A$ is similar to a matrix that rotates by some amount and scales by $|\lambda|$. Hence, $A$ rotates around an ellipse and scales by $|\lambda|$. There are three different cases.

$|\lambda| > 1$: when the scaling factor is greater than $1$, vectors tend to get longer, i.e., farther from the origin. In this case, repeatedly multiplying a vector by $A$ makes the vector “spiral out”. For example,

$$A = \frac{1}{\sqrt 2}\begin{pmatrix} \sqrt 3 + 1 & -2 \\ 1 & \sqrt 3 - 1 \end{pmatrix} \qquad \lambda = \frac{\sqrt 3 - i}{\sqrt 2} \qquad |\lambda| = \sqrt 2 > 1$$

gives rise to the following picture:

[Figure: the iterates $v, Av, A^2v, A^3v, \ldots$ spiraling out around an ellipse.]

$|\lambda| = 1$: when the scaling factor is equal to $1$, vectors do not tend to get longer or shorter. In this case, repeatedly multiplying a vector by $A$ simply “rotates around an ellipse”. For example,

$$A = \frac{1}{2}\begin{pmatrix} \sqrt 3 + 1 & -2 \\ 1 & \sqrt 3 - 1 \end{pmatrix} \qquad \lambda = \frac{\sqrt 3 - i}{2} \qquad |\lambda| = 1$$

gives rise to the following picture:

[Figure: the iterates $v, Av, A^2v, A^3v, \ldots$ moving around an ellipse.]

$|\lambda| < 1$: when the scaling factor is less than $1$, vectors tend to get shorter, i.e., closer to the origin. In this case, repeatedly multiplying a vector by $A$ makes the vector “spiral in”. For example,

$$A = \frac{1}{2\sqrt 2}\begin{pmatrix} \sqrt 3 + 1 & -2 \\ 1 & \sqrt 3 - 1 \end{pmatrix} \qquad \lambda = \frac{\sqrt 3 - i}{2\sqrt 2} \qquad |\lambda| = \frac{1}{\sqrt 2} < 1$$

gives rise to the following picture:

[Figure: the iterates $v, Av, A^2v, A^3v, \ldots$ spiraling in toward the origin.]
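The three regimes are easy to observe numerically. The sketch below iterates a vector under the matrix from the three examples above, at the three different scalings:

```python
import numpy as np

# One matrix, three scalings: |lambda| = sqrt(2), 1, 1/sqrt(2).
M = np.array([[np.sqrt(3) + 1, -2.0],
              [1.0, np.sqrt(3) - 1]])
v = np.array([1.0, 0.0])

for scale in (1 / np.sqrt(2), 0.5, 1 / (2 * np.sqrt(2))):
    A = scale * M
    lam = np.linalg.eigvals(A)[0]
    norms = [np.linalg.norm(np.linalg.matrix_power(A, k) @ v)
             for k in range(8)]
    print(f"|lambda| = {abs(lam):.3f}:", np.round(norms, 2))
# Growing norms spiral out, bounded oscillating norms trace an ellipse,
# and shrinking norms spiral in.
```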

Subsection 6.5.4 Block Diagonalization

For matrices larger than 2 × 2, there is a theorem that combines the diagonalization theorem in Section 6.4 and the rotation-scaling theorem. It says essentially that a matrix is similar to a matrix with parts that look like a diagonal matrix, and parts that look like a rotation-scaling matrix.

The block diagonalization theorem is proved in the same way as the diagonalization theorem in Section 6.4 and the rotation-scaling theorem. It is best understood in the case of 3 × 3 matrices.

Block Diagonalization of a 3 × 3 Matrix with a Complex Eigenvalue

Let $A$ be a $3 \times 3$ matrix with a complex (non-real) eigenvalue $\lambda_1$. Then $\bar\lambda_1$ is another eigenvalue, and there is one real eigenvalue $\lambda_2$. Since there are three distinct eigenvalues, each has algebraic and geometric multiplicity one, so the block diagonalization theorem applies to $A$.

Let $v_1$ be a (complex) eigenvector with eigenvalue $\lambda_1$, and let $v_2$ be a (real) eigenvector with eigenvalue $\lambda_2$. Then the block diagonalization theorem says that $A = CBC^{-1}$ for

$$C = \begin{pmatrix} | & | & | \\ \operatorname{Re}(v_1) & \operatorname{Im}(v_1) & v_2 \\ | & | & | \end{pmatrix} \qquad B = \begin{pmatrix} \operatorname{Re}(\lambda_1) & \operatorname{Im}(\lambda_1) & 0 \\ -\operatorname{Im}(\lambda_1) & \operatorname{Re}(\lambda_1) & 0 \\ 0 & 0 & \lambda_2 \end{pmatrix}.$$
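As a numerical check, the sketch below assumes the earlier $3 \times 3$ example matrix is $A = \frac15\begin{pmatrix} 4 & -3 & 0 \\ 3 & 4 & 0 \\ 5 & 10 & 10 \end{pmatrix}$ (that example is not reproduced here, but this matrix is consistent with the eigenvectors quoted above) and verifies the factorization:

```python
import numpy as np

# Assumed 3x3 example matrix; its eigenvalues are (4 + 3i)/5,
# (4 - 3i)/5, and 2, as in the text.
A = np.array([[0.8, -0.6, 0.0],
              [0.6,  0.8, 0.0],
              [1.0,  2.0, 2.0]])

lam1 = (4 + 3j) / 5
v1 = np.array([-12 - 9j, -9 + 12j, 25.0])   # eigenvector quoted in the text
lam2, v2 = 2.0, np.array([0.0, 0.0, 1.0])   # real eigenvalue and eigenvector

C = np.column_stack([v1.real, v1.imag, v2])
B = np.array([[ lam1.real, lam1.imag, 0.0],
              [-lam1.imag, lam1.real, 0.0],
              [ 0.0,       0.0,       lam2]])

assert np.allclose(A @ v1, lam1 * v1)
assert np.allclose(A, C @ B @ np.linalg.inv(C))
```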