Tuesday, 26 March 2013

Eigenvalues and eigenvectors, part 5


When we introduced eigenvalues and eigenvectors, we asked when a square matrix is similar to a diagonal matrix. In other words, given a square matrix A, does a diagonal matrix D exist such that $A \sim D$ (i.e. there exists an invertible matrix P such that $A = P^{-1}DP$)?

In general, some matrices are not similar to diagonal matrices. For example, consider the matrix

\begin{displaymath}A = \left(\begin{array}{rrr}
2&1\\
0&2\\
\end{array}\right).\end{displaymath}


Assume there exists a diagonal matrix D such that $A = P^{-1}DP$. Then we have

\begin{displaymath}A - \lambda I_n = P^{-1}DP - \lambda I_n = P^{-1}DP - \lambda P^{-1}P = P^{-1}\Big(D - \lambda I_n \Big)P,\end{displaymath}


i.e. $A - \lambda I_n$ is similar to $D - \lambda I_n$, so the two matrices have the same characteristic polynomial. Hence A and D have the same eigenvalues. Since the eigenvalues of D are the numbers on its diagonal, and the only eigenvalue of A is 2, we must have

\begin{displaymath}D = \left(\begin{array}{rrr}
2&0\\
0&2\\
\end{array}\right) = 2I_2.\end{displaymath}


In this case, we must have $A = P^{-1}DP = P^{-1}(2I_2)P = 2I_2$, which is not the case. Therefore, A is not similar to a diagonal matrix.
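This conclusion can also be checked numerically. Here is a minimal sketch (not part of the original post) using numpy: both eigenvalues of A come out equal to 2, and the eigenvector matrix numpy returns is singular, so no invertible P of eigenvectors exists.

```python
import numpy as np

# The defective matrix from the text: only eigenvalue is 2,
# but its eigenspace is one-dimensional.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

print(eigenvalues)                               # [2. 2.]
# The two eigenvector columns are linearly dependent,
# so the eigenvector matrix cannot serve as an invertible P.
print(abs(np.linalg.det(eigenvectors)) < 1e-10)  # True
```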

Definition. A matrix is diagonalizable if it is similar to a diagonal matrix.

Remark. In a previous page, we have seen that the matrix

\begin{displaymath}A = \left(\begin{array}{rrr}
1&2&1\\
6&-1&0\\
-1&-2&-1\\
\end{array}\right)\end{displaymath}


has three different eigenvalues. We also showed that A is diagonalizable. In fact, there is a general result along these lines.

Theorem. Let A be a square matrix of order n. Assume that A has n distinct eigenvalues. Then A is diagonalizable. Moreover, if P is the matrix whose columns $C_1, C_2, \ldots, C_n$ are n eigenvectors of A, one for each eigenvalue, then the matrix $P^{-1}AP$ is a diagonal matrix.
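The theorem is easy to illustrate numerically. The sketch below (my addition, not from the original post) applies it to the 3×3 matrix mentioned in the remark: its three eigenvalues are distinct, so the eigenvector matrix P is invertible and $P^{-1}AP$ comes out diagonal.

```python
import numpy as np

# The matrix from the earlier page, with three distinct eigenvalues.
A = np.array([[ 1.0,  2.0,  1.0],
              [ 6.0, -1.0,  0.0],
              [-1.0, -2.0, -1.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose
# columns are the corresponding eigenvectors.
eigenvalues, P = np.linalg.eig(A)

D = np.linalg.inv(P) @ A @ P
print(np.round(sorted(eigenvalues), 6))   # three distinct eigenvalues
print(np.round(D, 6))                     # diagonal matrix of eigenvalues
```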


Problem: What happens for square matrices of order n with fewer than n distinct eigenvalues?

We have a partial answer to this problem.

Theorem. Let A be a square matrix of order n. In order to find out whether A is diagonalizable, we take the following steps:
1.
Write down the characteristic polynomial

\begin{displaymath}p(\lambda) = \det\Big(A - \lambda I_n\Big).\end{displaymath}

2.
Factorize $p(\lambda) $. In this step, we should be able to get

\begin{displaymath}p(\lambda) = (\lambda - \lambda_1)^{n_1} \cdot (\lambda - \lambda_2)^{n_2}\cdots (\lambda - \lambda_k)^{n_k}\end{displaymath}


where the $\lambda_i$, $i=1,\cdots,k$, may be real or complex. For every i, the power $n_i$ is called the (algebraic) multiplicity of the eigenvalue $\lambda_i$.
3.
For every eigenvalue, find the associated eigenvectors. For example, for the eigenvalue $\lambda_i$, the eigenvectors are given by the linear system

\begin{displaymath}A \cdot X = \lambda_i X \;\;\mbox{or}\;\; \Big(A - \lambda_i I_n \Big) X = {\cal O}.\end{displaymath}


Then solve it. We should find the unknown vector X as a linear combination of vectors, i.e.

\begin{displaymath}X = \alpha_1 C_1 + \alpha_2 C_2 + \cdots + \alpha_{m_i} C_{m_i}\end{displaymath}


where the $\alpha_j$, $j=1,\cdots, m_i$, are arbitrary numbers. The integer $m_i$ is called the geometric multiplicity of $\lambda_i$.
4.
If for every eigenvalue the algebraic multiplicity is equal to the geometric multiplicity, then we have

\begin{displaymath}m_1 + m_2 + \cdots + m_k = n\end{displaymath}


which implies that if we collect the eigenvectors $C_j$ obtained in step 3 for all the eigenvalues, we get exactly n vectors. Set P to be the square matrix of order n whose column vectors are the eigenvectors $C_j$. Then P is invertible and

\begin{displaymath}P^{-1}\cdot A \cdot P\end{displaymath}


is a diagonal matrix D with diagonal entries equal to the eigenvalues of A. The position of each vector $C_j$ in P matches the position of its associated eigenvalue on the diagonal of D. This identity implies that A is similar to D. Therefore, A is diagonalizable.

Remark. If the algebraic multiplicity $n_i$ of the eigenvalue $\lambda_i$ is equal to 1, then automatically $m_i = 1$, since the geometric multiplicity always satisfies $1 \le m_i \le n_i$. In other words, $n_i = m_i$.

5.
If for some eigenvalue the algebraic multiplicity is not equal to the geometric multiplicity, then A is not diagonalizable. 
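The steps above can be sketched in exact arithmetic with sympy (my addition, not from the original post): for each eigenvalue we compare the algebraic multiplicity with the geometric multiplicity (the number of basis eigenvectors of its eigenspace), and the matrix is diagonalizable exactly when they all agree.

```python
import sympy as sp

def diagonalizable(A: sp.Matrix) -> bool:
    # A.eigenvects() yields (eigenvalue, algebraic multiplicity,
    # basis of the eigenspace) for each distinct eigenvalue.
    for eigenvalue, alg_mult, basis in A.eigenvects():
        geom_mult = len(basis)        # dimension of the eigenspace
        if geom_mult != alg_mult:
            return False
    return True

# The 3x3 matrix worked out in the example below: diagonalizable.
A = sp.Matrix([[-1, -1, 1],
               [ 0, -2, 1],
               [ 0,  0, -1]])
print(diagonalizable(A))              # True

# The 2x2 matrix from the start of the page: not diagonalizable.
B = sp.Matrix([[2, 1],
               [0, 2]])
print(diagonalizable(B))              # False
```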


Example. Consider the matrix

\begin{displaymath}A = \left(\begin{array}{rrr}
-1&-1&1\\
0&-2&1\\
0&0&-1\\
\end{array}\right).\end{displaymath}


In order to find out whether A is diagonalizable, let us follow the steps described above.
1.
The characteristic polynomial of A is

\begin{displaymath}p(\lambda) = \left\vert\begin{array}{rrr}
-1-\lambda&-1&1\\
0&-2-\lambda&1\\
0&0&-1-\lambda\\
\end{array}\right\vert = (-1 -\lambda)^2(-2 - \lambda).\end{displaymath}


So -1 is an eigenvalue with multiplicity 2 and -2 with multiplicity 1.
2.
In order to find out whether A is diagonalizable, we only need to concentrate our attention on the eigenvalue -1, since its algebraic multiplicity is greater than 1. Indeed, the eigenvectors associated to -1 are given by the system

\begin{displaymath}\Big(A + I_n \Big) X = \left(\begin{array}{rrr}
0&-1&1\\
0&-1&1\\
0&0&0\\
\end{array}\right) X= {\cal O}.\end{displaymath}


This system reduces to the single equation $-y + z = 0$. Set $x = \alpha$ and $y = \beta$ (so $z = \beta$); then we have

\begin{displaymath}X = \left(\begin{array}{r}
x\\
y\\
z\\
\end{array}\right) = \alpha \left(\begin{array}{r}
1\\
0\\
0\\
\end{array}\right) + \beta \left(\begin{array}{r}
0\\
1\\
1\\
\end{array}\right).\end{displaymath}


So the geometric multiplicity of -1 is 2, the same as its algebraic multiplicity. Therefore, the matrix A is diagonalizable. In order to find the matrix P, we need to find an eigenvector associated to -2. The associated system is

\begin{displaymath}\Big(A + 2 I_n \Big) X = \left(\begin{array}{rrr}
1&-1&1\\
0&0&1\\
0&0&1\\
\end{array}\right) X= {\cal O}\end{displaymath}


which reduces to the system

\begin{displaymath}\left\{\begin{array}{rrr}
x-y&=& 0\\
z&=&0\\
\end{array}\right.\end{displaymath}


Set $x = \alpha$ (so $y = \alpha$ and $z = 0$); then we have

\begin{displaymath}X = \left(\begin{array}{r}
x\\
y\\
z\\
\end{array}\right) = \alpha \left(\begin{array}{r}
1\\
1\\
0\\
\end{array}\right).\end{displaymath}


Set

\begin{displaymath}P = \left(\begin{array}{rrr}
1&0&1\\
0&1&1\\
0&1&0\\
\end{array}\right).\end{displaymath}


Then

\begin{displaymath}P^{-1}AP = \left(\begin{array}{rrr}
-1&0&0\\
0&-1&0\\
0&0&-2\\
\end{array}\right).\end{displaymath}


But if we set

\begin{displaymath}P = \left(\begin{array}{rrr}
1&0&1\\
1&1&0\\
0&1&0\\
\end{array}\right),\end{displaymath}


then

\begin{displaymath}P^{-1}AP = \left(\begin{array}{rrr}
-2&0&0\\
0&-1&0\\
0&0&-1\\
\end{array}\right).\end{displaymath}
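Both choices of P can be verified numerically. The sketch below (my addition, not from the original post) shows that reordering the eigenvector columns of P reorders the eigenvalues on the diagonal of $P^{-1}AP$ in exactly the same way.

```python
import numpy as np

A = np.array([[-1.0, -1.0, 1.0],
              [ 0.0, -2.0, 1.0],
              [ 0.0,  0.0, -1.0]])

# First choice of P: eigenvectors for -1, -1, -2 (in that column order).
P1 = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0],
               [0.0, 1.0, 0.0]])
# Second choice: the eigenvector for -2 moved to the first column.
P2 = np.array([[1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0],
               [0.0, 1.0, 0.0]])

D1 = np.linalg.inv(P1) @ A @ P1
D2 = np.linalg.inv(P2) @ A @ P2
print(np.round(D1))   # diag(-1, -1, -2)
print(np.round(D2))   # diag(-2, -1, -1)
```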


We have seen that if A and B are similar, then $A^n$ can be expressed easily in terms of $B^n$. Indeed, if $A = P^{-1}BP$, then $A^n = P^{-1}B^nP$. In particular, if D is a diagonal matrix, $D^n$ is easy to evaluate: we just raise each diagonal entry to the n-th power. This is one application of diagonalization. In fact, the above procedure may be used to find square roots and cube roots of a matrix. Indeed, consider the matrix above

\begin{displaymath}A = \left(\begin{array}{rrr}
-1&-1&1\\
0&-2&1\\
0&0&-1\\
\end{array}\right).\end{displaymath}


Set

\begin{displaymath}P = \left(\begin{array}{rrr}
1&0&1\\
1&1&0\\
0&1&0\\
\end{array}\right),\end{displaymath}


then

\begin{displaymath}P^{-1}AP = \left(\begin{array}{rrr}
-2&0&0\\
0&-1&0\\
0&0&-1\\
\end{array}\right)=D.\end{displaymath}


Hence A = P D P-1. Set

\begin{displaymath}B = P \left(\begin{array}{rrr}
-2^{1/3}&0&0\\
0&-1&0\\
0&0&-1\\
\end{array}\right) P^{-1},\end{displaymath}


then we have

\begin{displaymath}B^3 = A.\end{displaymath}


In other words, B is a cube root of A.
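Here is a quick numerical check of this construction (my addition, not from the original post): building B from the real cube roots of the diagonal entries of D and conjugating back by P does satisfy $B^3 = A$.

```python
import numpy as np

A = np.array([[-1.0, -1.0, 1.0],
              [ 0.0, -2.0, 1.0],
              [ 0.0,  0.0, -1.0]])
# P from the text, so that P^{-1} A P = D = diag(-2, -1, -1).
P = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0]])
P_inv = np.linalg.inv(P)

# Real cube roots of the diagonal entries of D:
# the cube root of -2 is -(2^(1/3)), those of -1 are -1.
D_cbrt = np.diag([-(2.0 ** (1.0 / 3.0)), -1.0, -1.0])

B = P @ D_cbrt @ P_inv
print(np.allclose(B @ B @ B, A))   # True: B is a cube root of A
```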


https://www.youtube.com/TarunGehlot