\(\newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\P}{\mathcal P} \newcommand{\B}{\mathcal B} \newcommand{\F}{\mathbb F} \newcommand{\E}{\mathcal E} \newcommand{\brac}[1]{\left(#1\right)} \newcommand{\matrixx}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vmatrixx}[1]{\begin{vmatrix}#1\end{vmatrix}} \newcommand{\limn}{\lim_{n\to\infty}} \newcommand{\nul}{\mathop{\mathrm{Nul}}} \newcommand{\col}{\mathop{\mathrm{Col}}} \newcommand{\rank}{\mathop{\mathrm{Rank}}} \newcommand{\dis}{\displaystyle} \newcommand{\spann}{\mathop{\mathrm{span}}} \newcommand{\range}{\mathop{\mathrm{range}}} \newcommand{\inner}[1]{\langle #1 \rangle} \newcommand{\innerr}[1]{\left\langle #1 \right\rangle} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\qed}{\quad \blacksquare} \newcommand{\tr}{\mathop{\mathrm{tr}}} \) Math2121 Tutorial (Spring 12-13): Tutorial Note 8

Thursday, April 18, 2013



Definition. In this tutorial note the characteristic polynomial of a square matrix $A$ is defined and denoted by \[

p_A(x)=\det(A-xI).

\]
Problem 1. $p_A(x)=\det (A-x I) = x^2-5x - 2$.
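The matrix of this problem is not reproduced here, but for any $2\times 2$ matrix the characteristic polynomial can be read off from the trace and the determinant: \[
p_A(x)=\det\matrixx{a-x&b\\c&d-x}=x^2-(a+d)x+(ad-bc)=x^2-(\tr A)\,x+\det A.
\] So the answer above says precisely that $\tr A=5$ and $\det A=-2$.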

Definition. Let $A\in M_{n\times n}(\R)$, a value $\lambda\in \R$ is said to be an eigenvalue of $A$ if there is nonzero $v\in \R^n$ such that $Av=\lambda v$. A nonzero vector $v$ such that $Av=\lambda v$ is said to be an eigenvector with eigenvalue $\lambda$. The vector space $\nul (A-\lambda I)$ is called the eigenspace (of $A$) of eigenvalue $\lambda$.
Remark. By definition eigenvectors are nonzero. Later we will use eigenvectors to form a basis in order to simplify a matrix by means of matrix representation, and the zero vector is of course of no interest for that purpose.

Let's recall the following. Note that in the definition of eigenvector and eigenvalue, $Av=\lambda v$ iff $(A-\lambda I)v=0$.
Fact.  The following are equivalent:
(i) $\lambda$ is an eigenvalue of $A$.
(ii) $(A-\lambda I)x=0$ has nontrivial solution.
(iii) $\nul (A-\lambda I)\neq \{0\}$.
(iv) $A-\lambda I$ is not injective.
(v) $A-\lambda I$ is not invertible.
(vi) $\det(A-\lambda I)=0$.
Part (vi) of the fact above is a useful tool for finding eigenvalues, since (i) $\iff$ (vi) means that $\lambda$ is an eigenvalue of $A$ $\iff$ $p_A(\lambda)=0$.

Problem 2. Since $A$ is triangular, $\det (A-\lambda I)= (a_{11}-\lambda)(a_{22}-\lambda)\cdots (a_{nn}-\lambda)$, so $a_{11},\dots,a_{nn}$ are the only eigenvalues (possibly with repetition, e.g., $a_{11}=a_{22}$).
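As a quick illustration (with a matrix made up for this note): \[
A=\matrixx{2&7\\0&3}\implies p_A(x)=\det\matrixx{2-x&7\\0&3-x}=(2-x)(3-x),
\] so the eigenvalues are exactly the diagonal entries $2$ and $3$.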

Problem 3. Eigenvalues: $\lambda=3,4$.

When $\lambda=3$, we have one eigenvector $\matrixx{1\\0}$.

When $\lambda = 4$, we have one eigenvector $\matrixx{2\\1}$.
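The matrix of this problem is not reproduced here, but any eigenpair can be verified by a direct multiplication; for instance, with the (hypothetical) matrix $A=\matrixx{3&2\\0&4}$, which has exactly the two eigenpairs above: \[
\matrixx{3&2\\0&4}\matrixx{1\\0}=\matrixx{3\\0}=3\matrixx{1\\0},\qquad \matrixx{3&2\\0&4}\matrixx{2\\1}=\matrixx{8\\4}=4\matrixx{2\\1}.
\]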

Definition. Let $A\in M_{n\times n}(\R)$ and let $\lambda$ be a root of $p_A(x)$.
(i) Let $\lambda$ be a root of $p_A$ of multiplicity $k$, then $k$ is said to be the algebraic multiplicity of $\lambda$.
(ii) $\dim \nul (A-\lambda I)$ ($\ge 1$) is said to be the geometric multiplicity of $\lambda$.
Example. Let $A=\matrixx{1&2&3\\0&1&4\\0&0&5}$, then $p_A(x)=(1-x)^2(5-x)$. The algebraic multiplicity of $1$ is two and that of $5$ is one.
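In this example the geometric multiplicity of $1$ is strictly smaller than its algebraic multiplicity: since \[
A-I=\matrixx{0&2&3\\0&0&4\\0&0&4}\to \matrixx{0&2&3\\0&0&4\\0&0&0},
\] we have $\rank(A-I)=2$, so $\dim\nul(A-I)=3-2=1<2$.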

Remark.
Roughly speaking, geometric multiplicity counts the number of linearly independent vectors in an eigenspace. Later we will see that if a matrix has enough linearly independent eigenvectors, then it can be made diagonal under a change of basis.

Problem 4. Since $A$ is upper-triangular, by problem 2, $1,2$ are the only eigenvalues.

Algebraic Multiplicity. Since $p_A(x)=(1-x)^2(2-x)^2$, the algebraic multiplicities of $1$ and $2$ are both $\boxed{2}$.

Geometric Multiplicity of 1. We need to find $\dim \nul (A-I)$; to do this, we count $\rank (A-I)$ by finding the number of pivots. Since \[

A-I=\matrixx{0&2&1&0\\0&0&1&0\\0&0&1&0\\0&0&0&1}\to \matrixx{0&2&1&0\\0&0&1&0\\0&0&0&1\\0&0&0&0},

\] so $\rank (A-I)=3$. By rank-nullity theorem, $\dim \nul(A-I)+\rank(A-I)=4$, so $\dim \nul (A-I)=4-3=\boxed{1}$.
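From the reduced form above, $x_3=0$, $x_4=0$ and $2x_2+x_3=0$ force $x_2=x_3=x_4=0$ with $x_1$ free, so explicitly \[
\nul(A-I)=\spann\left\{\matrixx{1\\0\\0\\0}\right\}.
\]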

Geometric Multiplicity of 2. Since \[

A-2I=\matrixx{-1&2&1&0\\0&-1&1&0\\0&0&0&0\\0&0&0&0},

\] so $\rank(A-2I)=2$, and the geometric multiplicity of $2$ is $4-2=\boxed{2}$.
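Explicitly, from the two nonzero rows of $A-2I$, setting the free variables $x_3=t$ and $x_4=s$ gives $x_2=x_3=t$ and $x_1=2x_2+x_3=3t$, so \[
\nul(A-2I)=\spann\left\{\matrixx{3\\1\\1\\0},\matrixx{0\\0\\0\\1}\right\}.
\]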

Note in general we have:
Multiplicity Theorem. Let $\lambda$ be an eigenvalue of $A\in M_{n\times n}(\R)$, then \[
1\leq \text{Geometric Multiplicity of $\lambda$}\leq \text{Algebraic Multiplicity of $\lambda$}.
\]
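The lower bound is immediate: an eigenvalue $\lambda$ has, by definition, a nonzero eigenvector $v$, so \[
\dim\nul(A-\lambda I)\geq \dim \spann\{v\}=1.
\]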

Problem 5. For the first matrix $\matrixx{1&6&5\\0&2&4\\0&0&3}$, since $1,2,3$ are distinct eigenvalues, we already have three linearly independent eigenvectors, hence it is diagonalizable.

For the second matrix $A:=\matrixx{0&1\\0&0}$, since $A$ is upper-triangular, the only eigenvalue is the diagonal entry, namely, $0$. But $\rank(A-0I)=\rank A=1$, hence \[

\text{GM of $0$}=\dim \nul (A-0I)=2-\rank(A-0I)=2-1=1<2,

\] so there are not enough eigenvectors of $A$ to form a basis of $\R^2$; that is, $A$ is not diagonalizable.

Problem 6. A direct computation gives $p_A(t)=\det(A-tI)=-(t-1)^2(t-4)$, so we have two eigenvalues: $1$ and $4$. AM of $1$ is $\boxed{2}$ and AM of $4$ is $\boxed{1}$. To discuss diagonalizability, we must count the sum of all geometric multiplicities.

GM of 4. Since AM of $4$ is $1$, by the multiplicity theorem, GM of $4$ must also be $\boxed{1}$, i.e., the eigenspace $\nul (A-4I)$ is one-dimensional.

GM of 1. AM of $1$ is two, and we hope that GM of $1$ is also $2$ (then $A$ is diagonalizable). To see this, since \[

A-I =\matrixx{1&1&1\\1&1&1\\1&1&1}\to \matrixx{1&1&1\\0&0&0\\0&0&0},

\] we have $\rank (A-I)=1$, thus $\dim \nul(A-I)=3-\rank(A-I)=3-1=2$, so GM of $1$ is $\boxed{2}$. Hence we have a basis of $\R^3$ consisting of eigenvectors of $A$, i.e., $A$ is diagonalizable.

By solving $(A-4I)x=0$, we have an eigenvector \[\matrixx{1\\1\\1}.\]
By solving $(A-I)x=0$, we have two eigenvectors \[
\matrixx{-1\\1\\0}\quad\text{and}\quad \matrixx{-1\\0\\1},
\] hence if we let $P=\matrixx{1&-1&-1\\1&1&0\\1&0&1}$, the diagonalization is \[
P^{-1}AP=\matrixx{4&0&0\\0&1&0\\0&0&1}.
\]
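Since $A=I+(A-I)=\matrixx{2&1&1\\1&2&1\\1&1&2}$, the diagonalization can be double-checked without inverting $P$, by verifying $AP=PD$ column by column: \[
AP=\matrixx{2&1&1\\1&2&1\\1&1&2}\matrixx{1&-1&-1\\1&1&0\\1&0&1}=\matrixx{4&-1&-1\\4&1&0\\4&0&1}=P\matrixx{4&0&0\\0&1&0\\0&0&1}.
\]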


Problem 7. Suppose on the contrary there is $\epsilon_0>0$ such that for every $t\in (-\epsilon_0,\epsilon_0)\setminus \{0\}$, $A-tI$ is not invertible. Then every such $t$ is an eigenvalue of $A$, so $A$ has infinitely many distinct eigenvalues, and hence infinitely many linearly independent eigenvectors (which is impossible in $\R^n$!), a contradiction.

Problem 8. Let $\lambda_1,\dots,\lambda_\ell$ be distinct eigenvalues of $A$, then there are linearly independent $v_1,\dots,v_\ell$, such that \[

Av_i=\lambda_iv_i.

\] Since $Av_i=\lambda_iv_i$, we have $\spann\{\lambda_1v_1,\dots,\lambda_\ell v_\ell\}=\spann\{Av_1,\dots,Av_\ell\}\subseteq \col A$, hence \[

\dim\big( \spann \{\lambda_1 v_1,\dots,\lambda_\ell v_\ell\}\big) \leq \dim \col A=k.

\] To compute the dimension on the LHS, we consider two cases: If $\lambda_i=0$ for some $i$ (at most one $i$, since the $\lambda_i$'s are distinct), then the remaining $\ell-1$ vectors $\lambda_jv_j$ ($j\neq i$) are linearly independent, so $\ell-1\leq k\iff \ell\leq k+1$. If $\lambda_i\neq 0$ for all $i$, then the $\ell$ vectors $\lambda_iv_i$ are linearly independent, so $\ell\leq k$. Combining the two cases, we have $\ell \leq k+1$. For example, consider the matrix \[

\matrixx{
0\\
&1\\
&&2\\
&&&\ddots\\
&&&&k\\
&&&&&0\\
&&&&&&\ddots\\
&&&&&&&0
},

\] where every off-diagonal entry is zero. This matrix has rank $k$ and the $k+1$ distinct eigenvalues $0,1,\dots,k$. So equality in $\ell\leq k+1$ can occur.
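For instance, with $n=4$ and $k=2$, \[
\matrixx{0&0&0&0\\0&1&0&0\\0&0&2&0\\0&0&0&0}
\] has rank $2$ and the $3=k+1$ distinct eigenvalues $0,1,2$.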
