Problem 1. Done.
Problem 2. Done.
Problem 3. Since \begin{align*}
TE_1&=aE_1+cE_3\\
TE_2&=aE_2+cE_4\\
TE_3&=bE_1+dE_3\\
TE_4&=bE_2+dE_4,
\end{align*} the matrix representation of $T$ w.r.t. $\beta$ is \[
[T]_\beta =\matrixx{
a&0&b&0\\
0&a&0&b\\
c&0&d&0\\
0&c&0&d
}.
\]
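The relations above are consistent with reading $T$ as left multiplication by $\matrixx{a&b\\c&d}$ on $M_{2\times 2}(\R)$, with $E_1,\dots,E_4$ the standard matrix units; this reading is an assumption, not stated in the solution. Under it, a short numerical sketch (with sample values for $a,b,c,d$) assembles $[T]_\beta$ column by column and recovers the matrix above:

```python
import numpy as np

a, b, c, d = 2.0, 3.0, 5.0, 7.0           # sample entries (arbitrary choice)
M = np.array([[a, b], [c, d]])            # assumed: T(A) = M A

# Standard matrix units E1, E2, E3, E4 (assumed ordering of beta)
E = [np.zeros((2, 2)) for _ in range(4)]
E[0][0, 0] = E[1][0, 1] = E[2][1, 0] = E[3][1, 1] = 1.0

# Column j of [T]_beta holds the coordinates of T(E_j) = M @ E_j w.r.t. beta;
# row-major flattening gives exactly the (E1, E2, E3, E4)-coordinates
T_beta = np.column_stack([(M @ Ej).flatten() for Ej in E])

expected = np.array([[a, 0, b, 0],
                     [0, a, 0, b],
                     [c, 0, d, 0],
                     [0, c, 0, d]])
assert np.allclose(T_beta, expected)
```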
Problem 4. Let's recall the following:
Fact (Properties of Matrix Representation).
(i) Let $S:U\to V$ and $T:V\to W$ be linear; let $\alpha$ be a basis of $U$, $\beta$ be a basis of $V$ and $\gamma$ be a basis of $W$; then the matrix representation of $TS:U\to W$ w.r.t. $\alpha$ and $\gamma$ is \[
[TS]_\alpha^\gamma = [T]_\beta^\gamma [S]_\alpha^\beta.
\] (ii) Let $T:U\to V$ be linear and let $\alpha$ be a basis of $U$ and $\beta$ be a basis of $V$, then \[
[T]_\alpha^\beta [v]_\alpha = [Tv]_\beta.
\] These two are almost all the computational formulas that we need to know.
Next, let $V$ be finite dimensional and let $\alpha$ and $\beta$ be two bases of $V$. A transition matrix (also called the change of coordinate matrix) from $\alpha$ to $\beta$ is a matrix $P:\R^n\to \R^n$ ($n=\dim V$) such that \[
P[v]_\alpha = [v]_\beta.
\] Indeed the matrix $[I]_\alpha^\beta$ will do the job, where $I:V\to V$ is the identity map, i.e., $Iv=v$ for each $v\in V$, because $[I]_\alpha^\beta [v]_\alpha = [Iv]_\beta = [v]_\beta$.
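A minimal numerical sketch of the defining property $P[v]_\alpha=[v]_\beta$, using two hypothetical bases of $\R^2$ chosen only for illustration (not the bases from the problem):

```python
import numpy as np

# Hypothetical bases of R^2, written as columns (illustration only)
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # alpha = {(1,0), (1,1)}
B = np.array([[2.0, 0.0], [0.0, 1.0]])   # beta  = {(2,0), (0,1)}

# [I]_alpha^beta = inv(B) @ A maps alpha-coordinates to beta-coordinates
P = np.linalg.inv(B) @ A

v_alpha = np.array([3.0, -1.0])          # some alpha-coordinate vector
v = A @ v_alpha                          # the actual vector v in R^2
# P @ v_alpha are the beta-coordinates of the same v:
assert np.allclose(B @ (P @ v_alpha), v)
```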
Now we solve problem 4. Let $\alpha$ and $\beta$ be the two bases given in the problem, and let $\epsilon$ denote the standard basis of $\R^2$; then by property (i) of the matrix representation, \[
[I]_\alpha^\beta =[I]_\epsilon^\beta [I]_\alpha^\epsilon = \big([I]_\beta^\epsilon\big)^{-1}[I]_\alpha^\epsilon = \matrixx{
7&9\\8&10
}^{-1}\matrixx{
1&3\\2&4
} =\matrixx{
4&3\\-3&-2
}.
\]
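A quick numerical check of this computation:

```python
import numpy as np

B = np.array([[7.0, 9.0], [8.0, 10.0]])   # columns: beta in the standard basis
A = np.array([[1.0, 3.0], [2.0, 4.0]])    # columns: alpha in the standard basis

# [I]_alpha^beta = inv(B) @ A
P = np.linalg.inv(B) @ A
assert np.allclose(P, [[4.0, 3.0], [-3.0, -2.0]])
```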
Similarly, we can compute the transition matrix from the basis \[
\alpha = \left\{\matrixx{
1&0\\0&0},\matrixx{1&1\\0&0},\matrixx{1&1\\0&1},\matrixx{1&1\\1&1}
\right\}
\] to the following basis of $M_{2\times 2}(\R)$ \[
\beta=\left\{\matrixx{1&1\\0&1},\matrixx{1&0\\0&1},\matrixx{0&1\\0&1},\matrixx{1&1\\1&0}\right\}.
\] Answer: $\boxed{\matrixx{1&2&1&-1\\0&-1&0&1\\-1&-1&0&1\\0&0&0&1}}$.
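The boxed answer can be verified numerically by flattening each $2\times 2$ matrix into $\R^4$ (row-wise) and solving for the $\beta$-coordinates of each element of $\alpha$:

```python
import numpy as np

# Each 2x2 matrix is flattened row-wise into a column of length 4
alpha = np.column_stack([np.ravel(m) for m in (
    [[1, 0], [0, 0]], [[1, 1], [0, 0]], [[1, 1], [0, 1]], [[1, 1], [1, 1]])])
beta = np.column_stack([np.ravel(m) for m in (
    [[1, 1], [0, 1]], [[1, 0], [0, 1]], [[0, 1], [0, 1]], [[1, 1], [1, 0]])])

# Transition matrix from alpha to beta: column j = beta-coordinates of alpha_j
P = np.linalg.solve(beta.astype(float), alpha.astype(float))

expected = np.array([[1, 2, 1, -1],
                     [0, -1, 0, 1],
                     [-1, -1, 0, 1],
                     [0, 0, 0, 1]])
assert np.allclose(P, expected)
```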
For problems 5, 6 and 7 we will need the following result (which we have mentioned in the post of tutorial note 6):
Theorem 1. Let $T:V\to W$ be linear and $\dim V=\dim W<\infty$, then the following are equivalent:
(i) $T$ is 1-1.
(ii) $T$ is onto.
(iii) $T$ is invertible.
Proof. (iii) $\Rightarrow$ (i) is trivial. For (i) $\Rightarrow$ (ii), assume $T$ is 1-1; by the generalized Rank-Nullity Theorem, \[
\dim V = \dim \ker T +\dim \range T.
\] where $\range T = \{Tv:v\in V\}$, also denoted by $\mathrm{Im}(T)$. Since $T$ is 1-1, $\ker T=\{0\}$, hence \[
\dim \range T = \dim V = \dim W,
\] since $\range T$ is also a vector subspace of $W$, by problem 4 of tutorial note 5 we have $\range T=W$, hence $T$ is onto.
Finally, for (ii) $\Rightarrow$ (iii), assume $T$ is onto; then by the rank-nullity theorem again, \[\dim V = \dim \ker T + \dim \range T =\dim \ker T + \dim W,\] hence $\dim \ker T =0$, i.e., $\ker T =\{0\}$, so $T$ is injective. Together with the assumption that $T$ is onto, this makes $T$ invertible.
Q.E.D.
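Numerically, theorem 1 says that for a square matrix, trivial kernel and full rank occur together; a minimal illustration with one invertible and one singular $2\times 2$ matrix:

```python
import numpy as np

# For T : R^n -> R^n, rank-nullity gives rank + nullity = n,
# so nullity 0 (1-1) holds exactly when rank = n (onto).
T1 = np.array([[1.0, 2.0], [3.0, 4.0]])   # invertible: rank 2, nullity 0
T2 = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1, nullity 1: neither 1-1 nor onto
for T in (T1, T2):
    rank = np.linalg.matrix_rank(T)
    nullity = 2 - rank
    assert (nullity == 0) == (rank == 2)   # injective <=> surjective
```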
Problem 5. The question asks us to show \[
A\mathop{\mapsto}\limits^{T} (B-I)A
is an onto map from $M_{n\times n}(\R)$ to $M_{n\times n}(\R)$. $T$ is clearly linear, hence by theorem 1 it is enough to show $T$ is 1-1. For this, let $A$ be such that $TA=0$; we argue that $A$ must be zero.
Indeed, $TA=0$ means $A=BA$; multiplying by $B$ repeatedly gives $BA=B^2A$, $B^2A=B^3A$, ..., $B^{k-1}A=B^kA$, hence \[
A=BA=B^2A=B^3A=\cdots = B^kA=0,
\] as desired.
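A small numerical illustration, assuming (as the display above suggests) that the hypothesis of the problem is $B^k=0$ for some $k$: for a nilpotent $B$ the matrix $B-I$ is invertible, with inverse $-(I+B+\cdots+B^{k-1})$, so $A\mapsto(B-I)A$ has trivial kernel.

```python
import numpy as np

B = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])           # strictly upper triangular => B^3 = 0
assert np.allclose(np.linalg.matrix_power(B, 3), 0)

# B - I is invertible, with inverse -(I + B + B^2), since
# (B - I)(-(I + B + B^2)) = I - B^3 = I
I = np.eye(3)
inverse = -(I + B + B @ B)
assert np.allclose((B - I) @ inverse, I)
# Hence (B - I) A = 0 forces A = 0: the map A -> (B - I) A is 1-1, so onto.
```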
Problem 6. The problem is the same as showing the linear map \[
T:\P_n\to \R^{n+1};\quad p\mapsto \matrixx{p(a_1)\\p(a_2)\\ \vdots \\p(a_{n+1})}
\] is onto. Since $\dim \P_n =n+1= \dim \R^{n+1}$, by theorem 1 it is enough to show $T$ is 1-1. For this, let $p\in \P_n$ be such that $Tp=0$, then $p(a_1)=p(a_2)=\cdots=p(a_{n+1})=0$, so the polynomial $p$, of degree at most $n$, has the $n+1$ distinct roots $a_1,\dots,a_{n+1}$; from knowledge in basic algebra, $p$ must be the zero polynomial, i.e., $p=0$, and we are done.
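In matrix terms, w.r.t. the standard bases $T$ is represented by the Vandermonde matrix of the nodes; a quick sketch (with sample distinct nodes) showing it is invertible and solving one interpolation instance:

```python
import numpy as np

a = np.array([-1.0, 0.0, 1.5, 2.0])        # n+1 = 4 distinct nodes (sample values)
V = np.vander(a, increasing=True)          # row i: (1, a_i, a_i^2, a_i^3)
assert np.linalg.matrix_rank(V) == 4       # T is 1-1, hence onto

# Onto in action: find p of degree <= 3 hitting prescribed values
y = np.array([2.0, -1.0, 0.5, 3.0])
coeffs = np.linalg.solve(V, y)             # p(x) = c0 + c1 x + c2 x^2 + c3 x^3
assert np.allclose(np.polyval(coeffs[::-1], a), y)
```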
Problem 7. For $C\in M_{n\times n}(\R)$, the map $T_C:M_{n\times n}(\R)\to \R$ defined by \[
T_C(A)= \mathrm{tr} (CA)
\] is obviously a linear functional (a scalar-valued linear map on a vector space). To show that \[
\{T_C:C\in M_{n\times n}(\R)\}=(M_{n\times n}(\R))^*,
\] it is enough to show the map \[
S:M_{n\times n}(\R)\to (M_{n\times n}(\R))^*;\quad C\mapsto T_C
\] is onto. We can verify that $S$ is linear, and since $\dim M_{n\times n}(\R)=\dim (M_{n\times n}(\R))^*$, by theorem 1 it is enough to show $S$ is 1-1.
For this, let $C$ be a matrix such that $S(C)=T_C=0$, which means that \[
T_C(A)=0,\quad \forall A\in M_{n\times n}(\R).
\] In particular, let's choose $A=C^T$, write $C=\matrixx{c_1&c_2&\cdots&c_n}$, $c_i\in \R^n$, then \[\begin{align*}
0&=T_C(C^T)\\
&=\mathrm{tr}(CC^T) \\
&= \mathrm{tr}(C^TC)\\
&=\mathrm{tr}\matrixx{c_1\cdot c_1&c_1\cdot c_2&\cdots& c_1\cdot c_n\\
c_2\cdot c_1&c_2\cdot c_2&\cdots &c_2\cdot c_n\\
\vdots &\vdots &\ddots &\vdots\\
c_n\cdot c_1&c_n\cdot c_2&\cdots &c_n\cdot c_n
}\\
&=\|c_1\|^2+\|c_2\|^2+\cdots +\|c_n\|^2,
\end{align*}
\] so $c_1=c_2=\cdots=c_n=0$, i.e., $C=0$. Thus we are done.
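The key identity in this computation, $\mathrm{tr}(CC^T)=\sum_{i,j}c_{ij}^2$, is easy to check numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3))

# tr(C C^T) equals the sum of all squared entries of C ...
assert np.isclose(np.trace(C @ C.T), np.sum(C**2))

# ... so T_C(C^T) = tr(C C^T) = 0 forces every entry of C to vanish
Z = np.zeros((3, 3))
assert np.isclose(np.trace(Z @ Z.T), 0.0)
```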
Remark. If a vector space $V$ has a norm $\|\cdot \|$ (for the precise definition, see here), then the pair $(V,\|\cdot \|)$, or simply $V$, is called a normed space. $\R^n$ is a normed space because the function $x\mapsto \sqrt{x_1^2+x_2^2+\cdots +x_n^2}$ defines a norm on $\R^n$. Since $M_{n\times n}(\R)$ is a vector space, it can be viewed as $\R^{n^2}$ in many suitable ways; in particular, the naturally defined function \[
A=[a_{ij}]_{n\times n}\mapsto \|A\|:=\sqrt{\sum_{i=1}^n\sum_{j=1}^n a_{ij}^2}
\] also defines a norm on $M_{n\times n}(\R)$. Interestingly by the computation above, \[
\|A\|=\sqrt{\mathrm{tr}(A^TA)}.
\] This norm is known as the Frobenius norm, which is helpful, for example, in showing that the set of orthogonal matrices $O(n)$ is bounded in $M_n(\R)$.
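A short check of the identity $\|A\|=\sqrt{\mathrm{tr}(A^TA)}$ and of the boundedness claim ($\|Q\|=\sqrt n$ for every orthogonal $Q$, since $Q^TQ=I$), using a sample rotation matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# sqrt(tr(A^T A)) agrees with the Frobenius norm
assert np.isclose(np.sqrt(np.trace(A.T @ A)), np.linalg.norm(A, 'fro'))

# An orthogonal Q satisfies Q^T Q = I, hence ||Q|| = sqrt(tr I) = sqrt(n):
# O(n) lies in the ball of radius sqrt(n)
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(np.linalg.norm(Q, 'fro'), np.sqrt(2))
```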