\(\newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\P}{\mathcal P} \newcommand{\B}{\mathcal B} \newcommand{\F}{\mathbb F} \newcommand{\E}{\mathcal E} \newcommand{\brac}[1]{\left(#1\right)} \newcommand{\matrixx}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vmatrixx}[1]{\begin{vmatrix}#1\end{vmatrix}} \newcommand{\limn}{\lim_{n\to\infty}} \newcommand{\nul}{\mathop{\mathrm{Nul}}} \newcommand{\col}{\mathop{\mathrm{Col}}} \newcommand{\rank}{\mathop{\mathrm{Rank}}} \newcommand{\dis}{\displaystyle} \newcommand{\spann}{\mathop{\mathrm{span}}} \newcommand{\range}{\mathop{\mathrm{range}}} \newcommand{\inner}[1]{\langle #1 \rangle} \newcommand{\innerr}[1]{\left\langle #1 \right\rangle} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\qed}{\quad \blacksquare} \newcommand{\tr}{\mathop{\mathrm{tr}}} \) Math2121 Tutorial (Spring 12-13): Tutorial note 2

Wednesday, February 20, 2013


PDF version: http://ihome.ust.hk/~cclee/document/blog_2121/02.pdf

Problem 1. In the tutorial session we finished part (a) and half of part (b); let's finish the rest here.
(b). For $f,g\in \mathcal R[a,b]$ (the collection of Riemann integrable functions on $[a,b]$), we use the natural definitions, as in $C[a,b]$:\[
(f+g)(x)=f(x)+g(x)
\] and also for every $\alpha\in \R$, \[
(\alpha f)(x)=\alpha f(x).
\] From calculus we know (although we may not have seen the proof) that $f+g$ and $\alpha f$ are Riemann integrable. Therefore $\mathcal R[a,b]$ is closed under addition and scalar multiplication with respect to these $+$ and $\cdot$.

(c). Let $p,q\in \mathcal P_n$ be such that \[

p=\sum_{i=0}^n a_ix^i,\quad q=\sum_{i=0}^n b_ix^i,

\] where $a_i,b_i\in \mathbb R$, $i=0,1,2,\dots,n$. Since $\mathcal P_n\subseteq C[a,b]$, it inherits the natural definitions of $+$ and $\cdot$ from $C[a,b]$. Namely, we define \[

(p+q)(x)=p(x)+q(x) = \sum_{i=0}^n (a_i+b_i)x^i

\] and also for $\alpha\in \R$, \[

(\alpha p)(x)=\alpha p(x)=\sum_{i=0}^n \alpha a_ix^i.

\] It is easy to see that $p+q,\alpha p\in \mathcal P_n$, so $\mathcal P_n$ is closed under these operations; since it sits inside the vector space $C[a,b]$, it indeed forms a vector space (a subspace), without checking the remaining subtle details one by one.
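As a quick sanity check, here is a small sketch in Python (the coefficients, scalar, and sample point are arbitrary choices for illustration) showing that the coefficientwise definitions agree with the pointwise ones:

```python
import numpy as np

# Hypothetical coefficients a_0..a_2 and b_0..b_2 for p, q in P_2
p = np.array([1.0, -2.0, 3.0])   # p(x) = 1 - 2x + 3x^2
q = np.array([4.0, 0.0, -3.0])   # q(x) = 4 - 3x^2
alpha = 2.5
x = 1.7                          # any sample point

# (p+q)(x) via coefficientwise addition vs pointwise addition
# (np.polyval expects the highest-degree coefficient first, hence [::-1])
print(np.isclose(np.polyval((p + q)[::-1], x),
                 np.polyval(p[::-1], x) + np.polyval(q[::-1], x)))   # True

# (alpha*p)(x) via coefficientwise scaling vs pointwise scaling
print(np.isclose(np.polyval((alpha * p)[::-1], x),
                 alpha * np.polyval(p[::-1], x)))                    # True
```

The sums and scalar multiples are again coefficient vectors of length $n+1$, which is exactly the closure claim above.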

(d). $\mathbb R^{m\times n}$ is the collection of real $m\times n$ matrices. Let $A,B\in \mathbb R^{m\times n}$; then there are $a_{ij},b_{ij}\in \mathbb R$ such that\[

A=[a_{ij}]_{m\times n},\quad  B=[b_{ij}]_{m\times n},

\] We define $A+B =[a_{ij}+b_{ij}]_{m\times n}$ and, for $\alpha\in \mathbb R$, $\alpha A=[\alpha a_{ij}]_{m\times n}$. Of course these $+$ and $\cdot$ make $\mathbb R^{m\times n}$ into a vector space.

Problem 2. Done.

Problem 3. Done.

Problem 4. Done.

Problem 5. Done.

Problem 6. WLOG, let's assume \[

\alpha_1<\alpha_2<\cdots<\alpha_n.

\] Otherwise we relabel the $\alpha_i$'s as $\beta_i$'s that are strictly increasing. To show the desired functions are linearly independent in $C(\mathbb R)$ (the collection of continuous functions on $\mathbb R$), suppose $a_1,a_2,\dots,a_n\in \mathbb R$ are such that \begin{equation}
\label{first eq}
a_1e^{\alpha_1t} +a_2e^{\alpha_2t}+\cdots + a_ne^{\alpha_nt}=0,

\end{equation} for all $t\in\R$. Our goal is to show that necessarily $a_1=a_2=\cdots=a_n=0$, i.e., the trivial solution $(0,0,\dots,0)^T\in \R^n$ is the only solution of (\ref{first eq}).

We divide both sides by $e^{\alpha_nt}$ (which dominates all the other $e^{\alpha_it}$'s as $t\to\infty$), then \begin{equation}
\label{take limit}
a_1\left(\frac{e^{\alpha_1}}{e^{\alpha_n}}\right)^t+a_2\left(\frac{e^{\alpha_2}}{e^{\alpha_n}}\right)^t+\cdots + a_{n-1}\brac{\frac{e^{\alpha_{n-1}}}{e^{\alpha_n}}}^t+a_n=0.

\end{equation} Since $\alpha_i<\alpha_n$, we have $0<e^{\alpha_i}<e^{\alpha_n}$, and thus $\displaystyle 0<\frac{e^{\alpha_i}}{e^{\alpha_n}}<1$ for each $i=1,2,\dots,n-1$. Letting $t\to\infty$ in (\ref{take limit}), every term except $a_n$ vanishes, so $a_n=0$, and (\ref{first eq}) becomes \[

 a_1e^{\alpha_1t} +a_2e^{\alpha_2t}+\cdots + a_{n-1}e^{\alpha_{n-1}t}=0.

\] We can repeat the process to conclude $a_{n-1}=a_{n-2}=\cdots =a_2=0$, so finally \[
a_1e^{\alpha_1t}=0,
\] and taking $t=0$ gives $a_1=0$ as well. Hence $a_1=a_2=\cdots=a_n=0$.
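The argument can be sanity-checked numerically (a sketch; the exponents $\alpha_i$ and sample points $t_j$ below are arbitrary illustrative choices): if relation (1) held for all $t$, it would in particular hold at $n$ distinct points $t_1,\dots,t_n$, giving $Ma=0$ for $M=[e^{\alpha_i t_j}]$; a nonzero determinant then forces $a=0$.

```python
import numpy as np

# Hypothetical distinct exponents and sample points
alphas = np.array([-1.0, 0.5, 2.0])
ts = np.array([0.0, 1.0, 2.0])

# M[j, i] = exp(alphas[i] * ts[j]); row j is the relation evaluated at t_j
M = np.exp(np.outer(ts, alphas))

# Nonzero determinant => M a = 0 has only the trivial solution a = 0
print(np.linalg.det(M))
```

With $x_i = e^{\alpha_i}$ this $M$ is a Vandermonde matrix in the $x_i$, which is invertible exactly when the $\alpha_i$ are distinct.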

Problem 7.  This problem is important for students who are taking an (elementary) ODE course. We will take for granted the result that $A\in \R^{n\times n}$ is invertible iff $\det A\neq 0$.

For convenience let's denote \[
D(x)=\vmatrixx{
f_1(x)&f_2(x)&\cdots &f_n(x)\\
f_1'(x)&f_2'(x)&\cdots &f_n'(x)\\
\vdots&\vdots&\ddots&\vdots\\
f_1^{(n-1)}(x)&f_2^{(n-1)}(x)&\cdots &f_n^{(n-1)}(x)
}.
\] Let $a_1,a_2,\dots,a_n\in \R$ be such that \[
 a_1f_1+a_2f_2+\cdots +a_nf_n=0
\] as functions of $x$. Differentiating this identity $n-1$ times yields the system \[
\begin{array}{ccccccccc}
a_1f_1&+&a_2f_2&+&\cdots &+&a_nf_n&=&0,\\
a_1f_1'&+&a_2f_2'&+&\cdots &+&a_nf_n'&=&0,\\
&&&&\vdots &&&&\\
a_1f_1^{(n-1)}&+&a_2f_2^{(n-1)}&+&\cdots& +&a_nf_n^{(n-1)}&=&0,
\end{array}
\] so we have \[
\matrixx{
f_1&\cdots &f_n\\
\vdots&\ddots&\vdots\\
f_1^{(n-1)}&\cdots &f_n^{(n-1)}\\
}\matrixx{
a_1\\ \vdots\\a_n
}=0.
\] But $D(x_0)\neq 0$ for some $x_0$ by hypothesis; evaluating the system at this $x_0$, the coefficient matrix is invertible, hence $a_1=a_2=\cdots =a_n=0$.

To show $\{x,xe^x,x^2e^x\}$ is linearly independent, note that this time \[
D(x)=\vmatrixx{
x&e^xx&e^xx^2\\
1&e^x(x+1)&e^x(x^2+2x)\\
0&e^x(x+2)&e^x(x^2+4x+2)
}.
\] Let's try $x=1$: then $D(1)=e^2\neq 0$, so we are done.
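This determinant evaluation can also be checked symbolically; here is a sketch using SymPy, building the Wronskian matrix by direct differentiation (by hand, $D(x)$ simplifies to $x^3e^{2x}$):

```python
import sympy as sp

x = sp.symbols('x')
fs = [x, x*sp.exp(x), x**2*sp.exp(x)]

# Wronskian matrix: row k holds the k-th derivatives of f_1, f_2, f_3
W = sp.Matrix([[sp.diff(f, x, k) for f in fs] for k in range(3)])
D = sp.simplify(W.det())

print(D)                                            # simplifies to x**3*exp(2*x)
print(sp.simplify(D.subs(x, 1) - sp.exp(2)) == 0)   # True: D(1) = e^2 != 0
```

Since $D(1)\neq 0$, the criterion above gives the linear independence of $\{x,xe^x,x^2e^x\}$.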
