Diagonalization of Matrices
Recall that a diagonal matrix is a square n × n matrix with nonzero entries only along the diagonal from the upper left to the lower right (the main diagonal). Diagonal matrices are particularly convenient for eigenvalue problems since the eigenvalues of a diagonal matrix
$$
A = \begin{pmatrix}
a_{11} & 0 & \cdots & 0 \\
0 & a_{22} & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & a_{nn}
\end{pmatrix}
$$
coincide with the diagonal entries $\{a_{ii}\}$, and the eigenvector corresponding to the eigenvalue $a_{ii}$ is just the $i$th coordinate vector.
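This property is easy to confirm numerically. The following sketch (a hypothetical check, not part of the notes) multiplies a diagonal matrix against each coordinate vector in pure Python and verifies that $A e_i = a_{ii} e_i$:

```python
# Check: for a diagonal matrix A, each coordinate vector e_i is an
# eigenvector with eigenvalue a_ii.  (Example values chosen here.)

def mat_vec(A, x):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[4.0, 0.0, 0.0],
     [0.0, 5.0, 0.0],
     [0.0, 0.0, 6.0]]

for i in range(3):
    e_i = [1.0 if j == i else 0.0 for j in range(3)]  # ith coordinate vector
    lam = A[i][i]                                     # the ith diagonal entry
    assert mat_vec(A, e_i) == [lam * c for c in e_i]  # A e_i = a_ii e_i
```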
Example 16.1.
$$
A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}
$$
$$
p_A(\lambda) = \det(A - \lambda I) = \det\begin{pmatrix} 2-\lambda & 0 \\ 0 & 3-\lambda \end{pmatrix} = (2-\lambda)(3-\lambda)
$$
Evidently $p_A(\lambda)$ has roots at $\lambda = 2, 3$. The eigenvectors corresponding to the eigenvalue $\lambda = 2$ are solutions of
$$
(A - 2I)\,\mathbf{x} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad\Rightarrow\quad x_2 = 0
\quad\Rightarrow\quad \mathbf{x} \in \operatorname{span}\begin{pmatrix} 1 \\ 0 \end{pmatrix}
$$
The eigenvectors corresponding to the eigenvalue $\lambda = 3$ are solutions of
$$
(A - 3I)\,\mathbf{x} = \begin{pmatrix} -1 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad\Rightarrow\quad x_1 = 0
\quad\Rightarrow\quad \mathbf{x} \in \operatorname{span}\begin{pmatrix} 0 \\ 1 \end{pmatrix}
$$
This property (that the eigenvalues of a diagonal matrix coincide with its diagonal entries and the eigenvectors correspond to the coordinate vectors) is so useful and important that in practice one often tries to make a change of coordinates just so that this will happen. Unfortunately, this is not always possible; however, if it is possible to make a change of coordinates so that a matrix becomes diagonal, we say that the matrix is diagonalizable. More formally,
Lemma 16.2.
Let $A$ be a real (or complex) n × n matrix, let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be a set of n real (respectively, complex) scalars, and let $v_1, v_2, \ldots, v_n$ be a set of n vectors in $\mathbb{R}^n$ (respectively, $\mathbb{C}^n$). Let $C$ be the n × n matrix formed by using $v_j$ as its $j$th column vector, and let $D$ be the n × n diagonal matrix whose diagonal entries are $\lambda_1, \lambda_2, \ldots, \lambda_n$. Then
$$
AC = CD
$$
if and only if $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$ and each $v_j$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda_j$.
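The identity $AC = CD$ is easy to test directly. Here is a small illustrative check in pure Python (the particular matrix and its eigenpairs are the ones worked out in Example 16.5 below):

```python
# Verify AC = CD for a 2x2 matrix whose eigenpairs are known:
# A has eigenvalues 2 and -1 with eigenvectors (1,0) and (-2,1).

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 6], [0, -1]]   # the matrix
C = [[1, -2], [0, 1]]   # columns are the eigenvectors v1, v2
D = [[2, 0], [0, -1]]   # diagonal matrix of the eigenvalues

assert mat_mul(A, C) == mat_mul(C, D)   # AC = CD, as the lemma asserts
```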
Proof.
$$
AC = A \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix}
= \begin{pmatrix} | & & | \\ Av_1 & \cdots & Av_n \\ | & & | \end{pmatrix}
$$
$$
CD = \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix}
\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{pmatrix}
= \begin{pmatrix} | & & | \\ \lambda_1 v_1 & \cdots & \lambda_n v_n \\ | & & | \end{pmatrix}
$$
and so $AC = CD$ implies
$$
Av_1 = \lambda_1 v_1, \quad \ldots, \quad Av_n = \lambda_n v_n
$$
and vice-versa.
Now suppose $AC = CD$ and the matrix $C$ is invertible. Then we can write $D = C^{-1}AC$. And so we can think of the matrix $C$ as converting $A$ into a diagonal matrix.

Definition 16.3. An n × n matrix $A$ is diagonalizable if there is an invertible n × n matrix $C$ such that $C^{-1}AC$ is a diagonal matrix. The matrix $C$ is said to diagonalize $A$.

Theorem 16.4. An n × n matrix $A$ is diagonalizable if and only if it has n linearly independent eigenvectors.

Proof. The argument here is very simple. Suppose $A$ has n linearly independent eigenvectors. Then the matrix $C$ formed by using these eigenvectors as column vectors will be invertible (since the rank of $C$ will be equal to n). On the other hand, if $A$ is diagonalizable then, by definition, there must be an invertible matrix $C$ such that $D = C^{-1}AC$ is diagonal. But then the preceding lemma says that the column vectors of $C$ must coincide with the eigenvectors of $A$. Since $C$ is invertible, these n column vectors must be linearly independent. Hence, $A$ has n linearly independent eigenvectors.
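For 2 × 2 matrices the invertibility test in the theorem reduces to a determinant check. The sketch below (an illustration, not from the notes) confirms that the eigenvector matrix from a diagonalizable example is invertible, while a defective matrix fails the test:

```python
# Theorem 16.4 for 2x2 matrices: A is diagonalizable exactly when the
# matrix C of eigenvector columns is invertible, i.e. det(C) != 0.

def det2(M):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

C = [[1, -2], [0, 1]]        # eigenvector columns of a diagonalizable matrix
assert det2(C) != 0          # C invertible: eigenvectors are independent

# By contrast, the defective matrix [[1, 1], [0, 1]] has only one
# independent eigenvector, (1, 0); stacking it twice gives a singular C.
C_bad = [[1, 1], [0, 0]]
assert det2(C_bad) == 0      # no invertible eigenvector matrix exists
```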
Example 16.5.
$$
A = \begin{pmatrix} 2 & 6 \\ 0 & -1 \end{pmatrix}
$$
First we'll find the eigenvalues and eigenvectors of $A$.
$$
0 = \det(A - \lambda I) = \det\begin{pmatrix} 2-\lambda & 6 \\ 0 & -1-\lambda \end{pmatrix} = (2-\lambda)(-1-\lambda)
\quad\Rightarrow\quad \lambda = 2, -1
$$
The eigenvectors corresponding to the eigenvalue $\lambda = 2$ are solutions of $(A - 2I)\,\mathbf{x} = 0$, or
$$
\begin{pmatrix} 0 & 6 \\ 0 & -3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad\Rightarrow\quad
\begin{matrix} 6x_2 = 0 \\ -3x_2 = 0 \end{matrix}
\quad\Rightarrow\quad x_2 = 0
\quad\Rightarrow\quad \mathbf{x} = r\begin{pmatrix} 1 \\ 0 \end{pmatrix}
$$
The eigenvectors corresponding to the eigenvalue $\lambda = -1$ are solutions of $(A + I)\,\mathbf{x} = 0$, or
$$
\begin{pmatrix} 3 & 6 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad\Rightarrow\quad
\begin{matrix} 3x_1 + 6x_2 = 0 \\ 0 = 0 \end{matrix}
\quad\Rightarrow\quad x_1 = -2x_2
\quad\Rightarrow\quad \mathbf{x} = r\begin{pmatrix} -2 \\ 1 \end{pmatrix}
$$
So the vectors $v_1 = (1, 0)$ and $v_2 = (-2, 1)$ will be eigenvectors of $A$. We now arrange these two vectors as the column vectors of the matrix $C$:
$$
C = \begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}
$$
In order to compute the diagonalization of $A$ we also need $C^{-1}$. This we compute using the technique of Section 1.5:
$$
\left(\begin{array}{cc|cc} 1 & -2 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{array}\right)
\xrightarrow{R_1 + 2R_2}
\left(\begin{array}{cc|cc} 1 & 0 & 1 & 2 \\ 0 & 1 & 0 & 1 \end{array}\right)
\quad\Rightarrow\quad
C^{-1} = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}
$$
Finally,
$$
D = C^{-1}AC = C^{-1}(AC)
= \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 2 & 6 \\ 0 & -1 \end{pmatrix}
\begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 2 & 2 \\ 0 & -1 \end{pmatrix}
= \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}
$$
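The arithmetic of Example 16.5 can be double-checked in pure Python by carrying out the product $C^{-1}(AC)$ directly:

```python
# Check of Example 16.5: compute C^{-1} A C and confirm it equals the
# diagonal matrix of eigenvalues, diag(2, -1).

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A     = [[2, 6], [0, -1]]
C     = [[1, -2], [0, 1]]    # eigenvector columns v1, v2
C_inv = [[1, 2], [0, 1]]     # found by row reduction in the example

D = mat_mul(C_inv, mat_mul(A, C))
assert D == [[2, 0], [0, -1]]   # the diagonal matrix of eigenvalues
```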