\documentclass{article} \title{Topics In Analysis} \author{Prof. T. W. Gowers} \date{Michaelmas 2004} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{pifont} \usepackage{cancel} \usepackage{hyperref} \usepackage{graphicx} \usepackage{color} \hypersetup{colorlinks=true, linkcolor=red} \theoremstyle{definition} \newtheorem{TATheorem}{Theorem} \newtheorem{TALemma}[TATheorem]{Lemma} \newtheorem{TACorollary}[TATheorem]{Corollary} \newtheorem{TAProposition}[TATheorem]{Proposition} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\dist}{dist} \setlength{\parindent}{0pt} \begin{document} \maketitle \tableofcontents \newpage
\section{Review of Metric Spaces}
\begin{description} \item[Metric Space] - A metric space is a set $X$ with a function $d: X \times X \rightarrow [0, \infty)$ such that \begin{quote} i) $d(x, y) = 0$ iff $x = y \quad \forall x, y \in X$\\ ii) $d(x, y) = d(y, x) \quad \forall x, y \in X$\\ iii) $d(x, z) \leq d(x, y) + d(y, z) \quad \forall x, y, z \in X$ \end{quote} \end{description} Examples: \begin{itemize} \item $\mathbb{R}^n$ with $d(x, y) = \sqrt{\sum_{i=1}^n (x_i - y_i)^2}$ \item Subsets of $\mathbb{R}^n$ with $d(x, y)$ as above \item Any set $X$ where $d(x, y) = \left\{\begin{array}{lcl} 0 & \quad & x=y\\ 1 & & x \neq y \end{array}\right.$, the discrete metric \item $l_p \ (1 \leq p < \infty)$ is the set of all sequences $(x_n)_{n=1}^\infty$ such that $(\sum_{n=1}^\infty |x_n|^p)^\frac{1}{p} < \infty$.
On this set we take $d((x_n), (y_n)) = (\sum_{n=1}^\infty |x_n - y_n|^p)^\frac{1}{p}$ \end{itemize}
A subset $U$ of a metric space $X$ is open if $\forall x \in U \ \exists \ \delta > 0$ such that $\forall y \in X, d(x, y) < \delta \Rightarrow y \in U$.
Notation: $B_\delta(x) = \{y \in X : d(x, y) < \delta\}$. So $U$ is open if $\forall x \in U \ \exists \ \delta$ such that $B_\delta(x) \subset U$.
A subset $F$ of $X$ is closed if $X \setminus F$ is open.\\ Examples: \begin{enumerate} \item $\mathbb{R}$ is open and closed \item $(0, 1)$ is open but not closed \item $[0, 1]$ is closed but not open \item $[0, 1)$ is neither open nor closed \end{enumerate}
Let $X$ be $[0, 1] \subset \mathbb{R}$. Then $(\frac{1}{2}, 1]$ is open in $X$: for example, for $\delta \leq \frac{1}{2}$ we have $B_\delta(1) = \{y \in X : d(1, y) < \delta\} = (1-\delta, 1] \subset (\frac{1}{2}, 1]$, and similarly for the other points of $(\frac{1}{2}, 1]$.
\begin{description} \item[Continuous Function] - Let $X, Y$ be metric spaces. A function $f: X \rightarrow Y$ is continuous at $x \in X$ if $\forall \epsilon > 0 \ \exists \ \delta > 0$ such that $\forall y \in X, \ d(x, y) < \delta \Rightarrow d(f(x), f(y)) < \epsilon$. We say $f$ is continuous if it is continuous at every $x \in X$ \end{description}
\begin{TAProposition} Let $X$ and $Y$ be metric spaces and $f: X \rightarrow Y$. Then $f$ is continuous iff $f^{-1}(U)$ is open for every open set $U \subset Y$ \end{TAProposition}
\begin{proof} $f^{-1}(U) = \{x : f(x) \in U\}$\\ ($\Rightarrow$) Suppose that $f$ is continuous. Let $U$ be an open subset of $Y$. We must show that $f^{-1}(U)$ is open, i.e. that $\forall x \in f^{-1}(U) \ \exists \ \delta > 0$ such that $B_\delta(x) \subset f^{-1}(U)$. Let $x \in f^{-1}(U)$, so $f(x) \in U$. Hence, as $U$ is open, $\exists \ \epsilon > 0$ such that $d(f(x), z) < \epsilon \Rightarrow z \in U$. Since $f$ is continuous, $\exists \ \delta > 0$ such that $d(x, y) < \delta \Rightarrow d(f(x), f(y)) < \epsilon \Rightarrow f(y) \in U \Rightarrow y \in f^{-1}(U)$, so $f^{-1}(U)$ is open.\\ ($\Leftarrow$) Suppose that $f^{-1}(U)$ is open for every open $U \subset Y$.
We want to show that $f$ is continuous, i.e. that for each $x \in X$, $\forall \epsilon > 0 \ \exists \ \delta > 0$ such that $d(x, y) < \delta \Rightarrow d(f(x), f(y)) < \epsilon$; equivalently, given $x$ and $\epsilon > 0$ there is $\delta > 0$ such that $f(B(x, \delta)) \subset B(f(x), \epsilon)$.
Let $U = \{z : d(f(x), z) < \epsilon\} = B(f(x), \epsilon)$. By the hypothesis, $U$ is open, so $f^{-1}(U)$ is open. Since $f(x) \in U$, $x \in f^{-1}(U)$, so $\exists \ \delta > 0$ such that $d(x, y) < \delta \Rightarrow y \in f^{-1}(U) \Rightarrow d(f(x), f(y)) < \epsilon$ \end{proof}
\begin{description} \item[Open Cover] Let $Y$ be a subset of a metric space $X$. Then an open cover of $Y$ is a collection $\{U_\gamma : \gamma \in \Gamma\}$ of open subsets of $X$ such that $Y \subset \bigcup_{\gamma \in \Gamma} U_\gamma$. Note: If you regard $Y$ itself as a metric space (with the metric inherited from $X$) you get an equivalent definition \item[Compact] - A subset $Y$ of a metric space $X$ is compact if for every open cover $\{U_\gamma : \gamma \in \Gamma\}$ of $Y$ you can find a finite subcover, that is, a sequence $\gamma_1, \gamma_2, \ldots, \gamma_n$ such that $Y \subset U_{\gamma_1} \cup U_{\gamma_2} \cup \cdots \cup U_{\gamma_n}$ \item[Sequentially Compact] - $Y$ is sequentially compact if every sequence $y_1, y_2, \ldots$ in $Y$ has a convergent subsequence (with limit in $Y$) \end{description}
We have already seen the Bolzano-Weierstrass Theorem, which is equivalent to the statement that $[0, 1]$ is sequentially compact.
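The Bolzano-Weierstrass theorem is usually proved by repeated bisection: halve the interval, keep a half containing infinitely many terms, and pick one term from each successive half. As an illustration only (this sketch is not part of the notes, and truncating to a finite prefix of the sequence is an assumption made purely for the demonstration), the idea in Python:

```python
# Illustration of the bisection argument behind Bolzano-Weierstrass:
# a sequence in [0, 1] has a subsequence trapped in nested halves.
import math

def bw_subsequence(x, steps):
    lo, hi = 0.0, 1.0
    candidates = list(range(len(x)))   # indices not yet discarded
    chosen = []                        # indices of the subsequence
    for _ in range(steps):
        mid = (lo + hi) / 2
        left = [i for i in candidates if x[i] <= mid]
        right = [i for i in candidates if x[i] > mid]
        if len(left) >= len(right):    # keep the fuller half
            candidates, hi = left, mid
        else:
            candidates, lo = right, mid
        later = [i for i in candidates if not chosen or i > chosen[-1]]
        if not later:
            break
        chosen.append(later[0])        # next term of the subsequence
    return chosen

x = [math.sin(n) ** 2 for n in range(200000)]  # a sequence in [0, 1]
idx = bw_subsequence(x, steps=10)
sub = [x[i] for i in idx]
# after k halvings, all later subsequence terms lie in an interval
# of width 2^(-k), so the tail of `sub` is tightly clustered
```

In the real proof "the fuller half" is replaced by "a half containing infinitely many terms", which is what makes the subsequence genuinely convergent rather than merely clustered.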
We now can also introduce
\begin{TATheorem}[The Heine-Borel Theorem] $[0, 1]$ is compact \end{TATheorem}
\begin{proof} Let $\{U_\gamma : \gamma \in \Gamma\}$ be a collection of open subsets of $\mathbb{R}$ such that $[0, 1] \subset \bigcup_{\gamma \in \Gamma} U_\gamma$. Let $X = \{x \in [0, 1] :$ it's possible to cover $[0, x]$ with finitely many $U_\gamma \}$. We want to show that $1 \in X$. We know $0 \in X$, since there must be some $\gamma$ with $0 \in U_\gamma$. $\therefore X$ is non-empty and bounded above by 1. Let $s = \sup X$, so $s \leq 1$. $\exists$ some $\gamma$ such that $s \in U_\gamma$, and as $U_\gamma$ is open, there must be some $\delta > 0$ such that $[s - \delta, s + \delta] \subset U_\gamma$. Since $s - \delta < s = \sup X$, there is some $x \in X$ with $x > s - \delta$, so we can cover $[0, s - \delta]$ with finitely many sets $U_{\gamma_1} \cup \cdots \cup U_{\gamma_n}$. But then $[0, \min(s + \frac{\delta}{2}, 1)] \subset U_{\gamma_1} \cup \cdots \cup U_{\gamma_n} \cup U_\gamma$, so $\min(s + \frac{\delta}{2}, 1) \in X$. Since $s = \sup X$, this forces $s = 1$, and moreover $1 \in X$, as required \end{proof} \newpage
\subsection{Basic facts about Compact Metric Spaces}
\begin{enumerate} \item A subset of $\mathbb{R}^n$ is compact iff it is closed and bounded \item A continuous function from a compact metric space to $\mathbb{R}$ is bounded and attains its bounds \item A continuous function on a compact set is uniformly continuous \item A metric space is compact iff it is sequentially compact \item Compact subsets of metric spaces are closed \item A continuous image of a compact set is compact \begin{proof} Let $X$ be compact, $f: X \rightarrow Y$ continuous. Let $U_\gamma \ (\gamma \in \Gamma)$ be open sets in $Y$ such that $f(X) \subset \bigcup_{\gamma \in \Gamma} U_\gamma$. Then $X \subset \bigcup_{\gamma \in \Gamma}f^{-1}(U_\gamma)$ (in fact, $X = \bigcup_{\gamma \in \Gamma} f^{-1}(U_\gamma)$) and since $X$ is compact we can find $\gamma_1, \ldots, \gamma_n$ such that $X \subset \bigcup_{i=1}^n f^{-1}(U_{\gamma_i}) \Rightarrow f(X) \subset
\bigcup_{i=1}^n U_{\gamma_i}$ \end{proof} \end{enumerate}
\begin{description} \item[Complete] A metric space $X$ is complete if every Cauchy sequence converges. A Cauchy sequence $(x_n)$ is one that satisfies: \begin{equation*} \forall \epsilon > 0 \ \exists \ N \in \mathbb{N} \text{ such that } \forall p, q \geq N \ d(x_p, x_q) < \epsilon \end{equation*} Note: It must converge in the space, e.g. $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots$ is Cauchy in $(0, 1)$ but does not converge in $(0, 1)$ \end{description} \newpage
\section{Brouwer's Fixed Point Theorem in the Plane}
\begin{TATheorem}[Intermediate Value Theorem - discrete version] \label{TA-DiscreteIVT} Let $F : \{0, 1, \ldots, n\} \rightarrow \mathbb{R}$ be a function such that $F(0) < 0, F(n) > 0$. Then $\exists \ m < n$ such that $F(m) < 0, F(m+1) \geq 0$ \end{TATheorem}
\begin{proof} 1) Let $m$ be maximal such that $F(m) < 0$; this exists since $F(0) < 0$, and $m < n$ since $F(n) > 0$. Then $F(m+1) \geq 0$\\ 2) Let $G(m) = \left\{\begin{array}{lcl} 0 & \quad & F(m) \geq 0\\ 1 & & F(m) < 0 \end{array}\right.$ For each $m < n$ let $h(m) = G(m+1) - G(m)$. Then $-1 = G(n) - G(0) = \sum_{i=0}^{n-1} h(i)$. So $\exists \ m$ such that $h(m) = G(m+1) - G(m) < 0$, which can only happen if $F(m) < 0, F(m+1) \geq 0$ \end{proof}
\begin{TACorollary}[Intermediate Value Theorem] Let $f: [a, b] \rightarrow \mathbb{R}$ be a continuous function such that $f(a) < 0, f(b) > 0$. Then $\exists \ x \in (a, b)$ such that $f(x) = 0$ \end{TACorollary}
\begin{proof} 1) Let $n \in \mathbb{N}$ and define $F : \{0, 1, \ldots, n\} \rightarrow \mathbb{R}$ by $F(m) = f(a + \frac{m}{n}(b-a))$ (discretised version).
By Theorem \ref{TA-DiscreteIVT} we can find $m_n$ such that $F(m_n) < 0, F(m_n+1) \geq 0$. Let $x_n = a + \frac{m_n}{n}(b-a), y_n = a + \frac{m_n+1}{n}(b-a)$. So $\forall n$ we can find $x_n, y_n$ such that $y_n = x_n + \frac{b-a}{n}, f(x_n) < 0, f(y_n) \geq 0$. By Bolzano-Weierstrass we can find $n_1 < n_2 < \ldots$ such that the subsequence $(x_{n_k})_{k=1}^\infty$ converges and hence (since $y_{n_k} - x_{n_k} = \frac{b-a}{n_k} \rightarrow 0$) so does $(y_{n_k})_{k=1}^\infty$, and both to the same limit $x$. Since $f$ is continuous, $f(x_{n_k}) \rightarrow f(x), f(y_{n_k}) \rightarrow f(x)$. Since every $f(x_{n_k}) < 0$, $f(x) \leq 0$, and equally $f(y_{n_k}) \geq 0 \Rightarrow f(x) \geq 0$. Thus $f(x) = 0$\\ 2) Suppose this is not true, i.e. $f$ has no zero. Then define $g : [a, b] \rightarrow \mathbb{R}$ as $g(x) = \left\{\begin{array}{lcl} 1 & \quad & f(x) > 0\\ 0 & & f(x) < 0 \end{array}\right.$ (this covers every case, since $f$ never vanishes). Claim: $g$ is continuous \begin{quote} \begin{proof} $g^{-1}(\{1\}) = \{x : f(x) > 0\} = f^{-1}(\{y : y > 0\})$ which is open, since $\{y : y > 0\}$ is open and $f$ is continuous. Similarly for $g^{-1}(\{0\})$. So $g^{-1}$ of any subset of $\mathbb{R}$ is $\emptyset$, one of these two open sets, or their union, all of which are open, so $g$ is continuous. \end{proof} \end{quote} We also know that $g(a) = 0, g(b) = 1$. Let $n \in \mathbb{N}$ and let $G(m) = g(a + \frac{m}{n}(b-a))$. By Theorem \ref{TA-DiscreteIVT} (applied to $G - \frac{1}{2}$) we can find $m$ such that $G(m) = 0, G(m+1) = 1$, so we get $x_n, y_n$ such that $y_n = x_n + \frac{b-a}{n}, g(x_n) = 0, g(y_n)=1$. Take a convergent subsequence $(x_{n_k}) \rightarrow x$; as before $(y_{n_k}) \rightarrow x$ too. Since $g$ is continuous, $g(x_{n_k}) \rightarrow g(x)$ and $g(y_{n_k}) \rightarrow g(x)$, so $g(x) = 0$ and $g(x) = 1 \ \#$ \end{proof}
\begin{TATheorem} The following are equivalent: \begin{enumerate} \item (Brouwer's Fixed Point Theorem) Let $D$ be the closed unit disc in $\mathbb{R}^2$ and let $f: D \rightarrow D$ be continuous.
Then $\exists \ x \in D$ such that $f(x) = x$ \item There is no continuous function $f: D \rightarrow \delta D$ (the boundary of $D$) such that $f(x) = x \ \forall x$ on the boundary (there is no continuous retraction of $D$ to its boundary) \item Let $f: D \rightarrow \mathbb{R}^2$ be continuous and suppose that $f(x) = x$ for every $x \in \delta D$. Then $D \subset f(D)$ \item Let $\delta D$ be divided into three arcs, $I, J, K$. Let $A, B, C \subset D$ be closed such that $I \subset A, J \subset B, K \subset C, A \cup B \cup C = D$. Then $A \cap B \cap C \neq \emptyset$ \end{enumerate} \begin{figure}[htbp] \centering \includegraphics{Images/TA01.jpg} \caption{Partitions of the disc boundary} \label{Partitions of the disc boundary} \end{figure} \end{TATheorem}
\begin{proof} 1) $\Rightarrow$ 2) If 2) is false, then let $g$ be a continuous retraction from $D$ to $\delta D$. Compose $g$ with a non-trivial rotation and you obtain a continuous map $f: D \rightarrow D$ with no fixed point. So not 2) $\Rightarrow$ not 1), thus 1) $\Rightarrow$ 2). In fact, we can deduce from 1) a stronger consequence: There is no continuous function $g: D \rightarrow \delta D$ such that $g(I) \subset I, g(J) \subset J, g(K) \subset K$. To see this, let $g$ be such a map and this time compose it with a rotation through 180$^\circ$ to obtain once again a continuous function with no fixed point.\\ 2) $\Rightarrow$ 1) Let $f: D \rightarrow D$ be a continuous function with no fixed point. For each $x \in D$ extend the line joining $x$ to $f(x)$ (which is unique since $x \neq f(x)$) in the direction from $f(x)$ to $x$ until it hits the boundary, and call this point $g(x)$. Then $g$ is a continuous retraction (it fixes every boundary point) $\# \therefore$ 2) $\Rightarrow$ 1)\\ 2) $\Rightarrow$ 3) Suppose we have a map $f: D \rightarrow \mathbb{R}^2$ that fixes $\delta D$ but such that some $x \in D$ has no preimage (note $x \notin \delta D$, since $f$ fixes the boundary). Given $y \in D$ draw a line segment from $x$ to $f(y)$ (non-degenerate, since $x \notin f(D)$) and extend it in the $x \rightarrow f(y)$ direction.
The intersection of the resulting half line with $\delta D$ is $g(y)$. This function is a continuous retraction from $D$ to $\delta D$, so $\neg$3) $\Rightarrow \neg$2), i.e. 2) $\Rightarrow$ 3). Conversely, 3) $\Rightarrow$ 2): a continuous retraction $g: D \rightarrow \delta D$ would fix $\delta D$, so 3) would give $D \subset g(D) \subset \delta D \ \#$\\ Strengthened 2) $\Rightarrow$ 4). Suppose we have closed sets $A, B, C$ with the properties of 4) such that $A \cap B \cap C = \emptyset$. Using a homeomorphism $h$ from $D$ to the triangle $T$ in $\mathbb{R}^3$ with vertices $(1, 0, 0), (0, 1, 0), (0, 0, 1)$, chosen so that $h(I), h(J), h(K)$ are the edges of the triangle, we may move our discussion to $T$. \begin{figure}[htbp] \centering \includegraphics{Images/TA02.jpg} \caption{The triangle T} \label{The triangle T} \end{figure} So now we have the triangle $T$ with edges $I, J, K$ and closed sets $A, B, C$ such that $I \subset A, J \subset B, K \subset C, A \cup B \cup C = T, A \cap B \cap C = \emptyset$. Define $f: T \rightarrow \delta T$ by \begin{equation*} x \mapsto \frac{(d(x, A), d(x, B), d(x, C))}{d(x, A) + d(x, B) + d(x, C)} \end{equation*} (here $d(x, A) = \inf_{y \in A} d(x, y)$). It is easy to check that the functions $d(x, A)$ are continuous. Now $x \notin A \Rightarrow d(x, A) > 0$ (since $A$ is closed, and the same for $B$ and $C$). Since $A \cap B \cap C = \emptyset$, at least one of $d(x, A), d(x, B), d(x, C)$ is always positive, so the denominator never vanishes and $f$ is continuous. If $x \in I$ then $d(x, A) = 0$, so $f(x)$ lies on the edge joining $(0, 1, 0)$ and $(0, 0, 1)$; choosing $h$ so that this edge is $I$, we get $f(I) \subset I$. Similarly $f(J) \subset J, f(K) \subset K$. Finally, since $A \cup B \cup C = T$, at least one of $d(x, A), d(x, B), d(x, C)$ is always zero, so $f$ does map $T$ into $\delta T$. This contradicts strengthened 2)\\ 4) $\Rightarrow$ (strengthened) 2). Suppose we have $f: D \rightarrow \delta D$ with $f(I) \subset I, f(J) \subset J, f(K) \subset K$. Let $A = f^{-1}(I), B = f^{-1}(J), C = f^{-1}(K)$. Then these are closed, and $A \cup B \cup C = D$.
Also, $A \cap B \cap C = f^{-1}(I \cap J \cap K) = \emptyset$. Finally $I \subset f^{-1}(I), J \subset f^{-1}(J), K \subset f^{-1}(K)$ by assumption, so $A, B, C$ satisfy the hypotheses of 4), yet $A \cap B \cap C = \emptyset \ \#$ \end{proof}
\begin{TATheorem}[Brouwer's fixed point Theorem - discrete version] \label{TA-BrouwerFixedDiscrete} Let $T$ be a triangle divided into $n$ parts along each edge and triangulated as shown in the diagram \begin{figure}[htbp] \centering \includegraphics{Images/TA03.jpg} \caption{Brouwer's triangle for n=4} \label{Brouwer's triangle for n=4} \end{figure} Suppose that the vertices of the triangle are coloured red, green and blue (in an anti-clockwise direction). Now colour the rest of the triangulation in an arbitrary way, subject to the constraint that along the edges of the main triangle you are not allowed to use the colour of the opposite vertex. Then there must be a little triangle with all its vertices coloured differently. \end{TATheorem}
\begin{proof} Given a little triangle, we assign a number to it as follows. First, orient its edges anticlockwise. Then look at the colours of its vertices and assign numbers to the oriented edges as follows. If the edges go $\left\{\begin{array}{lclcl} \begin{array}{c} R \rightarrow G\\ G \rightarrow B\\ B \rightarrow R \end{array} & \quad & +1 & \qquad &\text{same way round as main triangle}\\ \hline \begin{array}{c} R \rightarrow R\\ G \rightarrow G\\ B \rightarrow B \end{array} & & 0 & &\text{same colour repeated}\\ \hline \begin{array}{c} R \rightarrow B\\ B \rightarrow G\\ G \rightarrow R \end{array} & & -1 & & \text{opposite way round to main triangle} \end{array}\right.$ Add up the values of the three edges to get the value of each triangle. A triangle with all its vertices coloured differently scores $+3$ or $-3$; any triangle with 2 vertices of the same colour scores 0. An example is included in figure \ref{Example of Brouwer's Triangle} \begin{figure}[htbp] \centering \includegraphics{Images/TA04.jpg} \caption{Example of Brouwer's Triangle} \label{Example of Brouwer's Triangle} \end{figure} Now let us consider the sum of the values of all the triangles.
This is a sum over the values of all their directed edges. Each internal edge is counted twice, once in each direction, and so contributes nothing. So we only need to add up the contributions from external edges. Along an edge of the main triangle the contribution is 1. (To prove this for, e.g., the red-green edge, assign a value 0 to $R$ and 1 to $G$, and note that the value of each directed edge along it is the value of the end vertex minus the value of the start vertex. So we have a telescoping sum, which gives $1-0=1$.) Hence the sum over all the triangles is 3. Hence some triangle has positive value $\Rightarrow$ value 3 $\Rightarrow$ vertices of different colours \end{proof}
\begin{TACorollary} Let $T$ be a triangle and let its boundary be partitioned into sets $I, J, K$ as shown in figure \ref{Triangle boundary partition}. \begin{figure}[htbp] \centering \includegraphics{Images/TA05.jpg} \caption{Triangle boundary partition} \label{Triangle boundary partition} \end{figure} Let $A, B, C$ be closed sets with $I \subset A, J \subset B, K \subset C, T \subset A \cup B \cup C$. Then $A \cap B \cap C \neq \emptyset$ \end{TACorollary}
\begin{proof} For each $n$ apply Theorem \ref{TA-BrouwerFixedDiscrete}, colouring a vertex red if it belongs to $A$, green if it belongs to $B$ and blue if it belongs to $C$. (If it belongs to more than one, choose any of these colours, as long as you don't violate the conditions of Theorem \ref{TA-BrouwerFixedDiscrete}.) Theorem \ref{TA-BrouwerFixedDiscrete} then gives us points $x_n \in A, y_n \in B, z_n \in C$ with $(x_n, y_n, z_n)$ the vertices of a small triangle. $T$ is sequentially compact, so we can find $n_1 < n_2 < \cdots$ such that the subsequence $(x_{n_k})$ converges, to $x$ say. Since the side lengths of the small triangles tend to 0, $(y_{n_k})$ and $(z_{n_k})$ tend to the same limit $x$.
Then $x \in A \cap B \cap C$ since $A, B, C$ are closed (for example, $x \in B$ since each $y_{n_k} \in B$ and $y_{n_k} \rightarrow x$) \end{proof} \newpage
\subsection{Winding Numbers}
Given a function $f: [0, 1] \rightarrow \mathbb{R}^2 \setminus \{0\}$ such that $f(0) = f(1)$ we would like to say rigorously what we mean by ``the number of times that $f$ goes round zero''. It's convenient to identify $\mathbb{R}^2$ with $\mathbb{C}$. Given a continuous function $f: [0, 1] \rightarrow \mathbb{C} \setminus \{0\}$, write $r(t) = |f(t)|$; this is continuous (as it's the composition of $f$ and $|\ |$) and non-zero, so $g(t) = \frac{f(t)}{r(t)}$ is a continuous function taking values in the unit circle. The next theorem lets us write $g(t) = e^{i \theta (t)}$ with $\theta$ continuous, and hence $f(t) = r(t) e^{i \theta (t)}$.
\begin{TATheorem} \label{TA-ComplexFunctionDecomposition} Let $\mathbb{T} = \{z : |z|=1\}$. Let $g : [0, 1] \rightarrow \mathbb{T}$ be continuous. Then there is a continuous function $\theta : [0, 1] \rightarrow \mathbb{R}$ such that $g(t) = e^{i \theta (t)} \ \forall t$. Moreover, once $\theta(0)$ is chosen (with $e^{i \theta(0)} = g(0)$), the rest of $\theta$ is uniquely determined. \end{TATheorem}
\begin{proof} Define the following two functions from $\mathbb{T}$ to $\mathbb{R}$. Any $z \in \mathbb{T}$ can be written uniquely as $e^{i \theta}$ with $0 \leq \theta < 2 \pi$; call this $\theta$ $\text{Arg } z$. Also any $z \in \mathbb{T}$ can be written uniquely as $e^{i \theta}$ with $-\pi \leq \theta < \pi$; call this $\arg z$. Note that Arg is continuous except at 1, and arg is continuous except at $-1$. Since $g$ is continuous and $[0, 1]$ is compact, $g$ is uniformly continuous. Hence there exists $\delta > 0$ such that $\forall x, y \ |x-y|< \delta \Rightarrow |g(x) - g(y)| < 1$. In particular, it is not possible for $\{g(x), g(y)\}$ to be $\{-1, 1\}$ (as $|1 - (-1)| = 2$).
Now divide $[0, 1]$ into intervals $I_j = [t_{j-1}, t_j] \ (j = 1, 2, \ldots, n)$ of width $< \delta$, where $0 = t_0 < t_1 < \cdots < t_n = 1$. Now fix $\theta(0)$ to be some $\theta$ with $g(0) = e^{i \theta}$. Let's show that $\theta$ exists and is uniquely determined on $I_1 = [0, t_1]$. By our choice of $\delta$ it is not possible for $g$ to take both the values $\pm 1$ on $[0, t_1]$. If $g$ misses $-1$ then define $\theta(x) = \arg(g(x)) + \theta(0) - \arg(g(0))$ (consistent at $x=0$). This is continuous since $g$ is continuous and $\arg$ is continuous on $g[0, t_1]$. If $\phi(x)$ is another function that works, then $\phi(x) - \theta(x)$ is always a multiple of $2 \pi$, $\phi(0) = \theta(0)$, and $\phi - \theta$ is continuous, so $\phi - \theta$ is identically $0$ (by the IVT). So $\theta$ is uniquely determined on $[0, t_1]$. If $g$ misses $+1$ then do the same proof with Arg. Once $\theta$ is defined up to $t_1$, repeat for $[t_1, t_2], [t_2, t_3], \ldots$ \end{proof}
\begin{description} \item[Winding Number] - If $f: [0, 1] \rightarrow \mathbb{C} \setminus \{0\}$ with $f(0) = f(1)$, and $g, \theta$ are defined as above, then the winding number of $f$ about zero, denoted $w(f, 0)$, is \begin{equation*} w(f, 0) = \frac{\theta(1) - \theta(0)}{2 \pi} \end{equation*} Since $f(0) = f(1)$ we have $e^{i \theta(0)} = e^{i \theta(1)}$, so $w(f, 0)$ is an integer. \end{description}
\begin{TAProposition} If $f_1, f_2: [0, 1] \rightarrow \mathbb{C} \setminus \{0\}, f_1(0) = f_1(1), f_2(0) = f_2(1)$ then $w(f_1 f_2, 0) = w(f_1, 0) + w(f_2, 0)$ \end{TAProposition}
\begin{proof} Let $f_1(t) = r_1(t) e^{i \theta_1(t)}$ with $r_1, \theta_1$ continuous and $f_2(t) = r_2(t) e^{i \theta_2(t)}$ with $r_2, \theta_2$ continuous. Then $f_1(t) f_2(t) = r_1(t) r_2(t) e^{i (\theta_1(t) + \theta_2(t))}$ and $\theta_1(t) + \theta_2(t)$ is continuous.
Hence \begin{align*} w (f_1 f_2, 0) &= \frac{\theta_1(1) + \theta_2(1) - \theta_1(0) - \theta_2(0)}{2 \pi}\\ &= w(f_1, 0) + w(f_2, 0) \end{align*} \end{proof}
\begin{TATheorem}[Topological version of Rouch\'{e}'s Theorem] \label{TA-TopologicalRouchesTheorem} Let $f_1, f_2: [0, 1] \rightarrow \mathbb{C}$ with $f_1(0) = f_1(1), f_2(0) = f_2(1), f_1, f_2$ continuous and $|f_2(t)| < |f_1(t)| \ \forall t$ (note that then $f_1$ and $f_1 + f_2$ never vanish). Then \begin{equation*} w(f_1 + f_2, 0) = w(f_1, 0) \end{equation*} \end{TATheorem}
\begin{proof} By Theorem \ref{TA-ComplexFunctionDecomposition} we can write \begin{align*} f_1(t) &= r(t) e^{i \theta (t)}\\ f_1(t) + f_2(t) &= s(t) e^{i \phi(t)} \end{align*} with $r, s, \theta, \phi$ all continuous. Since $|f_2(t)| < |f_1(t)|$, the quotient $\frac{f_1(t) + f_2(t)}{f_1(t)} = 1 + \frac{f_2(t)}{f_1(t)}$ lies in the open disc of radius 1 about 1, so for each $t$ there is $k \in \mathbb{Z}$ such that $\phi(t) - \theta(t) \in (2 \pi k - \frac{\pi}{2}, 2 \pi k + \frac{\pi}{2})$. So far we have not ruled out the possibility that $k$ depends on $t$. By adding an appropriate multiple of $2 \pi$ to $\phi$ we can ensure that $\phi(0) - \theta(0) \in (-\frac{\pi}{2}, \frac{\pi}{2})$. But $\phi - \theta$ is a continuous function, so by the IVT it can't get to a different interval $(2 \pi k - \frac{\pi}{2}, 2 \pi k + \frac{\pi}{2})$ without taking an excluded value such as $\pm\frac{\pi}{2}$, which is impossible. So $k$ is constant. Therefore $\phi(1) - \theta(1) = \phi(0) - \theta(0)$ (they differ by a multiple of $2 \pi$, since $f_1$ and $f_1 + f_2$ are closed curves, and both lie in an interval of length $\pi$, so this multiple must be zero). But this implies \begin{equation*} w(f_1 + f_2, 0) = \frac{\phi(1) - \phi(0)}{2 \pi} = \frac{\theta(1) - \theta(0)}{2 \pi} = w(f_1, 0) \end{equation*} \end{proof}
\begin{description} \item[Homotopic] - Let $f, g: [0, 1] \rightarrow \mathbb{C} \setminus \{0\}, f(0) = f(1), g(0) = g(1)$ with both functions continuous.
They are homotopic if there is a continuous family of closed curves $f_s : [0, 1] \rightarrow \mathbb{C} \setminus \{0\}$ for $0 \leq s \leq 1$ with $f_0 = f, f_1 = g$. By a continuous family of closed curves we mean that each $f_s$ is continuous with $f_s(0) = f_s(1)$ and that the function $F(s, t) = f_s(t)$ is a continuous function from $[0, 1]^2$ to $\mathbb{C} \setminus \{0\}$ \end{description}
Fact from 1B: A continuous function defined on a compact metric space is uniformly continuous
\begin{TATheorem} \label{TA-HomotopicsSameWinding} Let $f, g: [0, 1] \rightarrow \mathbb{C} \setminus \{0\}$ be homotopic. Then $w(f, 0) = w(g, 0)$ \end{TATheorem}
\begin{proof} Let $f_s(t), 0 \leq s \leq 1$, be a continuous family of closed curves with $f_0 = f, f_1 = g$, and write $F(s, t) = f_s(t)$. Since $F$ is continuous, never zero, and $[0, 1]^2$ is compact, $|F|$ attains its infimum, which is therefore positive. So $\exists \ \epsilon > 0$ such that $|f_s(t)| \geq \epsilon \ \forall t, \forall s$. Also $F$ is uniformly continuous (as $[0, 1]^2$ is compact), so there is some $\delta > 0$ such that $d((s, t), (s', t')) \leq \delta \Rightarrow |f_s(t) - f_{s'}(t')| < \epsilon$. In particular, $|s - s'| < \delta, t \in [0, 1] \Rightarrow |f_s(t) - f_{s'}(t)| < \epsilon$. Now let's define $w(s) = w(f_s, 0)$. Then $w$ takes integer values, so by the IVT it is enough to show that $w$ is locally constant, so let $s \in [0, 1]$. By our construction of $\delta$, if $|s - s'| < \delta$ then $|f_{s'}(t) - f_s(t)| < \epsilon \ \forall t$. But $|f_s(t)| \geq \epsilon \ \forall t$, so $w(f_{s'}, 0) = w(f_s + (f_{s'} - f_s), 0) = w(f_s, 0)$ by Theorem \ref{TA-TopologicalRouchesTheorem}. This shows that $w$ is constant on the interval $(s-\delta, s+\delta)$. Hence $w$ is constant on $[0, 1]$, and in particular $w(f, 0) = w(0) = w(1) = w(g, 0)$, as we wanted \end{proof}
\begin{TATheorem}[The Fundamental Theorem of Algebra] Let $p$ be a non-constant polynomial (with coefficients in $\mathbb{C}$).
Then $p$ has at least one root (and hence, if $\deg p = n$, it has $n$ roots counted with multiplicity, by factorising them out one at a time) \end{TATheorem}
\begin{proof} Let $p(z) = a_n z^n + a_{n-1}z^{n-1} + \cdots + a_1 z + a_0$ with $n \geq 1$. WLOG let $a_n = 1$ (or divide through by $a_n$). If $a_0 = 0$, then $z=0$ is a root, so assume $a_0 \neq 0$. Pick $R > 0$ such that $R^n > |a_{n-1}| R^{n-1} + |a_{n-2}|R^{n-2} + \cdots + |a_0|$ (this is always possible, e.g. take $R > \max\{|a_{n-1}| + |a_{n-2}| + \cdots + |a_0|, 1\}$). Then $|p(z) - z^n| < |z^n|$ whenever $|z| = R$, so it follows from Theorem \ref{TA-TopologicalRouchesTheorem} that if we let $f(t) = R^n e^{2 \pi i n t}, g(t) = p(Re^{2 \pi i t}) - R^n e^{2 \pi i n t}$ then $w(f, 0) = w(f + g, 0) = w(p(Re^{2 \pi i t}), 0)$. But $w(f, 0) = n$, since we've already written $f$ as $R^n e^{i \theta (t)}$ with $\theta(t) = 2 \pi n t$. Define $f_s(t) = p(s Re^{2 \pi i t}) \ (0 \leq s \leq 1)$. Then $f_0(t) = p(0) = a_0$, so $w(f_0, 0) = 0$, while $f_1(t) = p(Re^{2 \pi i t})$, so $w(f_1, 0) = n \geq 1$. The $\{f_s\}$ form a continuous family, so we have contradicted the preceding theorem (\ref{TA-HomotopicsSameWinding}) unless some $f_s$ takes the value 0 for some $t$. That gives $p(s Re^{2 \pi i t}) = 0$ and we have a root \end{proof}
\begin{TACorollary} (Corollary of Theorem \ref{TA-HomotopicsSameWinding}) There is no continuous retraction from $D$ to $\delta D$ \end{TACorollary}
\begin{proof} Suppose $f$ is such a retraction. Define $f_s(t) = f(s e^{2 \pi i t}), 0 \leq s, t \leq 1$. Then the $f_s$ form a continuous family (taking values in $\delta D \subset \mathbb{C} \setminus \{0\}$). $f_1(t) = f(e^{2 \pi i t}) = e^{2 \pi i t}$, so $w(f_1, 0) = 1$; $f_0(t) = f(0)$, so $w(f_0, 0) = 0$, contradicting Theorem \ref{TA-HomotopicsSameWinding} \end{proof}
\begin{TACorollary}[The genuine topological version of Rouch\'{e}'s Theorem] Let $f: D \rightarrow \mathbb{C}$ be a continuous function, and suppose $f(z) \neq 0 \ \forall z \in \delta D$ ($f$ non-zero on the boundary).
Then, writing $g(t) = f(e^{2 \pi i t})$, if $w(g, 0) \neq 0$ there exists $z \in D$ such that $f(z) = 0$ \end{TACorollary}
\begin{proof} For $0 \leq s, t \leq 1$, let $f_s(t) = f(se^{2 \pi i t})$. Then $f_0(t) = f(0)$, so $w(f_0, 0) = 0$, while $w(f_1, 0) = w(g, 0) \neq 0$. By Theorem \ref{TA-HomotopicsSameWinding} this is a contradiction unless some $f_s(t) = f(se^{2 \pi i t})$ is 0, i.e. unless $f$ has a zero in $D$ \end{proof} \newpage
\section{Approximation by Polynomials}
The basic aim is to take a continuous function $f: [a, b] \rightarrow \mathbb{R}$ and approximate it well by polynomials. In this section, we shall mainly concentrate on uniform approximation. That is, we'd like to find polynomials $p$ such that \begin{equation*} \sup_t |f(t) - p(t)| \end{equation*} is small. For convenience we'll take $[a, b]$ to be $[0, 1]$ in general, since anything proved on $[0, 1]$ can be generalised by composing with a linear function.
\begin{TALemma} \label{TA-BinaryRandomVariableProperties} Let $\epsilon_1, \ldots, \epsilon_n$ be independent random variables, each taking the value $1$ with probability $t$ and $0$ with probability $1-t$. Let $X = \epsilon_1 + \cdots + \epsilon_n$. Then $\mathbb{E}[X] = nt, \Var[X] = nt(1-t)$ \end{TALemma}
\begin{proof} $\mathbb{E}[\epsilon_i] = t$ so $\mathbb{E}[X] = nt$ by linearity of expectation. \begin{align*} \Var [\epsilon_i] &= \mathbb{E}[(\epsilon_i - t)^2]\\ &= \mathbb{E}[\epsilon_i^2 - 2t \epsilon_i + t^2]\\ &= \mathbb{E}[(1-2t)\epsilon_i + t^2] \quad \text{as }\epsilon_i^2 = \epsilon_i\\ &=t(1-2t) + t^2\\ &=t-t^2\\ &=t(1-t) \end{align*} Since the $\epsilon_i$ are independent, $\Var[X] = nt(1-t)$ \end{proof}
\begin{TALemma}[Chebyshev's Inequality] Let $X$ be a random variable with mean $\mu$ and variance $\sigma^2$. Then \begin{equation*} \mathbb{P}(|X - \mu| \geq t) \leq \frac{\sigma^2}{t^2} \end{equation*} \end{TALemma}
\begin{proof} Let $\alpha = \mathbb{P}(|X - \mu| \geq t)$.
Then \begin{equation*} \sigma^2 = \mathbb{E}[|X - \mu|^2] \geq t^2 \alpha \end{equation*} since $|X - \mu|^2 \geq t^2$ on an event of probability $\alpha$ and $|X - \mu|^2 \geq 0$ everywhere \end{proof}
\begin{TATheorem}[The Weierstrass Approximation Theorem] Every continuous function $f: [0, 1] \rightarrow \mathbb{R}$ can be uniformly approximated by polynomials. That is \begin{equation*} \forall \epsilon > 0 \ \exists \text{ a polynomial } p \text{ such that } \|f-p\|_\infty \equiv \sup_t|f(t) - p(t)| \leq \epsilon \end{equation*} \end{TATheorem}
[Recall our definition of uniform convergence: $f_n \rightarrow f$ uniformly if \begin{equation*} \forall \epsilon > 0 \ \exists \ N \text{ such that } \forall n \geq N \ \underbrace{\sup_t |f_n(t) - f(t)|}_{= \|f_n - f\|_\infty} < \epsilon \end{equation*} Hence $\|\ \|_\infty$ is known as the Uniform Norm - it gives rise to uniform convergence]
\begin{proof} Given $\epsilon > 0$, we shall show that, for sufficiently large $n$, the polynomial \begin{equation*} p(t) = \sum_{m=0}^n \binom{n}{m} t^m (1-t)^{n-m} f(\tfrac{m}{n}) \end{equation*} satisfies $\|f - p\|_\infty \leq \epsilon$. $p(t)$ has an interpretation: it is $\mathbb{E}(f(\frac{1}{n} X))$ where $X$ is the random variable from Lemma \ref{TA-BinaryRandomVariableProperties}.
Therefore \begin{align*} |p(t) - f(t)| &= |\mathbb{E}(f(\frac{1}{n}X) - f(t))|\\ &\leq \mathbb{E}(|f(\frac{1}{n}X) - f(t)|) \end{align*} Since $f$ is continuous, it is uniformly continuous, so choose $\delta > 0$ such that $\forall s, t \ |s-t| < \delta \Rightarrow |f(s) - f(t)| < \frac{\epsilon}{2}$. Then \begin{align*} \mathbb{E}(|f(\frac{1}{n}X) - f(t)|) &\leq \frac{\epsilon}{2} \mathbb{P}(|\frac{1}{n}X - t| < \delta) + 2 \|f\|_\infty \mathbb{P}(|\frac{1}{n}X - t| \geq \delta)\\ &\leq \frac{\epsilon}{2} + 2 \|f\|_\infty \mathbb{P}(|\frac{1}{n}X - t| \geq \delta) \end{align*} (Here $\|f\|_\infty$ exists since $f$ is continuous $\Rightarrow f$ is bounded.) But \begin{align*} \mathbb{P}(|\frac{1}{n}X - t| \geq \delta) &= \mathbb{P}(|X - nt| \geq \delta n)\\ &\underbrace{\leq}_{\text{by Chebyshev}} \frac{t(1-t)n}{\delta^2 n^2}\\ &\leq \frac{1}{\delta^2 n} \end{align*} using $t(1-t) \leq 1$. Let $n > \frac{4 \|f\|_\infty}{\epsilon \delta^2}$. Then $\frac{2 \|f\|_\infty}{\delta^2 n} < \frac{\epsilon}{2}$, so $|p(t) - f(t)| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$. Since $t$ was arbitrary, $\|p-f\|_\infty \leq \epsilon$ \end{proof} \newpage
\subsection{Chebyshev Polynomials}
We start with the observation that $\cos (n \theta)$ is a polynomial of degree $n$ in $\cos \theta$. Indeed, \begin{align*} \cos n \theta + i \sin n \theta &= e^{i n \theta} = (e^{i \theta})^n = (\cos \theta + i \sin \theta)^n\\ &= \cos^n \theta + i \binom{n}{1} \cos^{n-1}\theta \sin \theta - \binom{n}{2} \cos^{n-2}\theta \sin^2 \theta - i \binom{n}{3} \cos^{n-3} \theta \sin^3 \theta + \cdots \end{align*} Taking real parts and using the fact that $\sin^2 \theta = 1 - \cos^2 \theta$, we have expressed $\cos n \theta$ as a polynomial of degree $n$ in $\cos \theta$, with leading coefficient $\binom{n}{0} + \binom{n}{2} + \binom{n}{4} + \cdots = 2^{n-1}$ (from Numbers and Sets, half of the subsets are even.
Look at $\frac{(1+1)^n + (1-1)^n}{2}$)
\begin{TALemma}[Chebyshev's equal-ripple criterion] Let $f: [a, b] \rightarrow \mathbb{R}$ be a continuous function and let $p$ be a polynomial of degree $\leq n-1$. Let $M = \sup_{t \in [a, b]} |f(t) - p(t)|$. Suppose that we can find $a \leq x_0 < x_1 < \cdots < x_n \leq b$ such that $\forall i \ |f(x_i) - p(x_i)| = M$ and such that the numbers $f(x_i) - p(x_i)$ alternate in sign $(i = 0, 1, \ldots, n)$. Then $\cancel{\exists}$ a polynomial $q$ of degree $\leq n-1$ such that \begin{equation*} \sup_{t \in [a, b]} |f(t) - q(t)| < M \end{equation*} i.e. $p$ is a best approximation to $f$ among polynomials of degree $\leq n-1$ \end{TALemma}
\begin{proof} Let $q$ be a polynomial of degree $\leq n-1$ and suppose that $q$ is a better approximation, meaning that $\sup_{t \in [a, b]} |f(t) - q(t)| < M$. Then if $f(x_i) - p(x_i) = M$ we must have $q(x_i) > p(x_i)$, and if $p(x_j) - f(x_j) = M$ we must have $q(x_j) < p(x_j)$. Hence the numbers $p(x_i) - q(x_i)$ alternate in sign $(i = 0, 1, \ldots, n)$. Hence $p-q$ has a root in every interval $(x_{i-1}, x_i)$ by the Intermediate Value Theorem: that is $n$ roots. It follows, since $\deg(p-q) \leq n-1$, that $p \equiv q$, contradicting the assumption that $q$ approximates strictly better than $p$ \# \end{proof}
Remarks: 1) It can be shown that $p$ is the unique best approximating polynomial of $\deg \leq n-1$ 2) It can also be shown that any best approximating polynomial satisfies the equal-ripple criterion.
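The equal-ripple picture can be checked numerically. The sketch below (not part of the notes) builds $T_n$ from the standard three-term recurrence $T_{n+1}(t) = 2tT_n(t) - T_{n-1}(t)$, which encodes the identity $\cos((n+1)\theta) + \cos((n-1)\theta) = 2\cos\theta\cos n\theta$ and is equivalent to the de Moivre computation above, then verifies the leading coefficient $2^{n-1}$ and the alternating values $\pm 1$ at $x_k = \cos(\frac{n-k}{n}\pi)$:

```python
# Chebyshev polynomials via T_0 = 1, T_1 = t, T_{n+1} = 2t*T_n - T_{n-1}
import math

def chebyshev(n):
    """Coefficients of T_n, lowest degree first."""
    t0, t1 = [1.0], [0.0, 1.0]
    if n == 0:
        return t0
    for _ in range(n - 1):
        nxt = [0.0] + [2 * c for c in t1]   # 2t * T_k ...
        for i, c in enumerate(t0):          # ... minus T_{k-1}
            nxt[i] -= c
        t0, t1 = t1, nxt
    return t1

def evaluate(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

n = 5
T5 = chebyshev(n)
lead = T5[-1]                               # leading coefficient, 2^{n-1}
ident = evaluate(T5, math.cos(0.3)) - math.cos(n * 0.3)   # T_n(cos a) - cos na
# equal ripple: T_n alternates between -1 and +1 at x_k = cos((n-k)pi/n)
ripple = [evaluate(T5, math.cos((n - k) * math.pi / n)) for k in range(n + 1)]
```

The list `ripple` contains the $n+1$ alternating extreme values that, after rescaling by $2^{-(n-1)}$, witness the equal-ripple criterion for the theorem that follows.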
\begin{TATheorem} The polynomial of degree $\leq n-1$ that best approximates $t^n$ on $[-1, 1]$ is $t^n - \frac{1}{2^{n-1}} T_n(t)$, where $T_n(\cos \theta) = \cos n \theta$ \end{TATheorem} \begin{proof} The difference is $\frac{1}{2^{n-1}} T_n(t)$. For $-1 \leq t \leq 1$, $t = \cos \theta$ for some $\theta$, so $|T_n(t)| = |\cos n \theta| \leq 1$. We shall use the equal-ripple criterion with $M = \frac{1}{2^{n-1}}$. For $\theta = \frac{k \pi}{n}$, $\cos (n \theta) = \cos (k \pi) = (-1)^k$. For $0 \leq k \leq n$ let $x_k = \cos (\frac{n-k}{n} \pi)$. Then $-1 = x_0 < x_1 < \cdots < x_n = 1$ and $T_n(x_k) = T_n(\cos (\frac{n-k}{n} \pi)) = \cos ((n-k)\pi) = (-1)^{n-k}$. So the $T_n(x_k)$ alternate between $\pm 1$. Hence the equal-ripple criterion is satisfied. \end{proof} \newpage \subsection{Legendre Polynomials} \begin{TATheorem} There is a sequence of polynomials $p_0, p_1, \ldots,$ such that $p_n$ has degree $n$ and $\int_{-1}^1 p_n(t) p_m(t) \,dt = 0$ whenever $m \neq n$. Moreover, each $p_n$ is unique up to a scalar multiple \end{TATheorem} \begin{proof}\ \\ 1) Suppose that we have found $p_0, p_1, \ldots, p_{n-1}$. We'd like a polynomial $q$ of degree $n$ with $\int_{-1}^1 q(t) p_m(t) \,dt = 0 \ \forall m \in \{0, \ldots, n-1\}$. The polynomials $p_0, \ldots, p_{n-1}$ are orthogonal, hence linearly independent, so (as $\deg p_i = i$) they span the space of polynomials of degree $\leq n-1$. So any degree $n$ polynomial can be written in the form \begin{equation*} (\mu_0 p_0 + \mu_1 p_1 + \cdots + \mu_{n-1}p_{n-1}) (t) + \mu t^n \end{equation*} Up to a scalar multiple, this has the form $\mu_0 p_0(t) + \cdots + \mu_{n-1} p_{n-1}(t) + t^n$ (rewriting the $\mu$s). In practice it is easy to work out $\mu_{n-1}$, then $\mu_{n-2}$ etc.
Then $\int_{-1}^1 p_m(t) q(t)\,dt = \mu_m \int_{-1}^1 (p_m(t))^2\,dt + \int_{-1}^1 p_m(t) t^n \,dt$, since $\int_{-1}^1 p_m(t) p_n(t) \,dt = 0$ whenever $n \neq m$. For this to be 0 we must have \begin{equation*} \mu_m = \frac{-\int_{-1}^1 p_m(t) t^n \,dt}{\int_{-1}^1 (p_m(t))^2\,dt} \end{equation*} With this choice of each $\mu_m$ we get 0, and so existence and uniqueness are proved. 2) Use the Gram-Schmidt orthogonalisation process in the inner product space of polynomials, with \begin{equation*} <p, q> = \int_{-1}^1p(t)q(t)\,dt \end{equation*} The theorem one uses is that if $\underline{v_0}, \underline{v_1}, \underline{v_2}, \ldots$ are linearly independent in an inner product space $V$, then you can find an orthonormal sequence $\underline{w_0}, \underline{w_1}, \underline{w_2}, \ldots$ with $\underline{w_i} \in <\underline{v_0}, \underline{v_1}, \ldots, \underline{v_i}> \ \forall i$ \begin{quote} \begin{proof} Set $\underline{w_0} = \frac{\underline{v_0}}{\|\underline{v_0}\|}$. Given $\underline{w_0}, \ldots, \underline{w_k}$ let \begin{equation*} \underline{w_{k+1}'} = \underline{v_{k+1}} - \sum_{i=0}^k <\underline{v_{k+1}}, \underline{w_i}> \underline{w_i} \end{equation*} Then for $j \leq k$ \begin{align*} <\underline{w_{k+1}'}, \underline{w_j}> &= <\underline{v_{k+1}}, \underline{w_j}> - \sum_{i=0}^k <\underline{v_{k+1}}, \underline{w_i}> \delta_{ij}\\ &= 0 \end{align*} Finally, let $\underline{w_{k+1}} = \frac{\underline{w_{k+1}'}}{\|\underline{w_{k+1}'}\|}$ \end{proof} \end{quote} \end{proof} The $n^\text{th}$ Legendre polynomial is $P_n(x) = \frac{1}{2^n n!}\frac{d^n}{dx^n} (x^2-1)^n$. We shall show that $\int_{-1}^1 P_n(x) P_m(x) \,dx = \left\{\begin{array}{ccc} 0 & \quad & n \neq m\\ \frac{2}{2n+1} & & n=m \end{array}\right.$ \begin{TATheorem} \begin{equation*} \int_{-1}^1 P_n(x) P_m(x) \,dx = \left\{\begin{array}{ccc} 0 & \quad & n \neq m\\ \frac{2}{2n+1} & & n=m \end{array}\right.
\end{equation*} \end{TATheorem} \begin{proof} WLOG assume $m \leq n$. Let's look at \begin{equation*} \int_{-1}^1 \frac{d^{n-k}}{dx^{n-k}} (x^2-1)^n \frac{d^{m+k}}{dx^{m+k}} (x^2 - 1)^m \,dx \end{equation*} Integrating by parts gives \begin{equation*} \Big[\frac{d^{n-k-1}}{dx^{n-k-1}}(x^2-1)^n \frac{d^{m+k}}{dx^{m+k}}(x^2-1)^m\Big]_{-1}^1 - \int_{-1}^1 \frac{d^{n-k-1}}{dx^{n-k-1}} (x^2-1)^n \frac{d^{m+k+1}}{dx^{m+k+1}}(x^2-1)^m \,dx \end{equation*} Here the first term is zero since $\frac{d^{n-k-1}}{dx^{n-k-1}}(x^2-1)^n$ vanishes at $\pm 1$. This is because $(x^2-1)^n = (x-1)^n (x+1)^n$ has $n$-fold roots at $\pm 1$. Hence by induction: \begin{equation*} \int_{-1}^1 \frac{d^n}{dx^n} (x^2-1)^n \frac{d^m}{dx^m} (x^2 - 1)^m \,dx = (-1)^k \int_{-1}^1 \frac{d^{n-k}}{dx^{n-k}} (x^2-1)^n \frac{d^{m+k}}{dx^{m+k}} (x^2 - 1)^m \,dx \end{equation*} for $0 \leq k \leq n$. So set $k=n$. If $m < n$ then $\frac{d^{m+n}}{dx^{m+n}} (x^2-1)^m = 0$ (as $(x^2-1)^m$ is a polynomial of degree $2m < m+n$). If $m=n$ then $\frac{d^{m+n}}{dx^{m+n}} (x^2-1)^m = \frac{d^{2n}}{dx^{2n}} (x^2-1)^n = (2n)!$ so we get \begin{equation*} (-1)^n (2n)! \int_{-1}^1 (x^2-1)^n \,dx \end{equation*} Let $I_n = \int_{-1}^1 (x^2-1)^n \,dx$. Integrating by parts and then writing $x^2 = (x^2-1)+1$, \begin{align*} I_n &= \cancel{[x(x^2-1)^n]_{-1}^1} - \int_{-1}^1 2nx^2 (x^2-1)^{n-1}\,dx\\ &= -2n \int_{-1}^1 \{(x^2-1)+1\} (x^2-1)^{n-1}\,dx\\ &= -2n(I_n + I_{n-1})\\ \Rightarrow I_n &= -\frac{2n}{2n+1}I_{n-1} \end{align*} Also $I_0 = \int_{-1}^1 1\,dx = 2$ so \begin{align*} I_n &= (-1)^n \frac{2n}{2n+1} . \frac{2(n-1)}{2n-1} . \cdots . \frac{2}{3} . 2\\ &= (-1)^n \frac{2^{n+1}(n!) 2^n n!}{(2n+1)!}\\ &= \frac{(-1)^n 2.(2^n n!)^2}{(2n+1)(2n)!} \end{align*} (For the second line, use that the product of the denominators is $(2n+1)(2n-1) \cdots 3 = \frac{(2n+1)!}{2^n n!}$) So $(-1)^n (2n)!
\int_{-1}^1 (x^2-1)^n\,dx = \frac{2.(2^n n!)^2}{2n+1}$ It follows that $\int_{-1}^1 P_n(x)^2\,dx = \frac{2}{2n+1}$ (since we multiply by $(\frac{1}{2^n n!})^2$) \end{proof} \begin{TALemma} The $n^\text{th}$ Legendre polynomial $P_n$ has precisely $n$ simple roots, all in $(-1, 1)$ \end{TALemma} \begin{proof} Let $x_1, \ldots, x_k$ be the points in $[-1,1]$ where $P_n(x)$ changes sign (roots with odd multiplicity). Suppose $k < n$. Then define $Q(x) = \prod_{i=1}^k(x-x_i)$. Then $Q$ also changes sign at $x_1, \ldots, x_k$ and nowhere else, so $P_nQ$ never changes sign. Since $P_nQ$ is not identically zero and doesn't change sign, $\int_{-1}^1 P_n(x) Q(x)\,dx \neq 0$. This is a contradiction since $\deg Q < n$ and $P_n$ is constructed to be orthogonal to all such polynomials (as $\{P_0, P_1, \ldots, P_{n-1}\}$ spans all polynomials of degree $< n$). Hence $k = n$, and since $\deg P_n = n$ these roots are all simple \end{proof} \begin{TATheorem} \label{TA-IntegrationCoefficientsTheorem} Let $x_1 < \cdots < x_n$ be the roots of $P_n$. Then there are constants $A_1, \ldots, A_n$ such that \begin{equation*} \int_{-1}^1 P(t)\,dt = \sum_{i=1}^n A_i P(x_i) \end{equation*} for every polynomial $P$ of degree $\leq 2n-1$ \end{TATheorem} \begin{TALemma} Let $x_1 < \cdots < x_n$ and $A_1, \ldots, A_n$ be as in Theorem \ref{TA-IntegrationCoefficientsTheorem}. Then \begin{quote} i) $A_i > 0 \ \forall i$ ii) $\sum_{i=1}^n A_i = 2$ \end{quote} \end{TALemma} \begin{proof} Use Theorem \ref{TA-IntegrationCoefficientsTheorem} with an appropriately chosen $f$. i) Let $f(x) = \prod_{j \neq i} (x-x_j)^2$. Then by Theorem \ref{TA-IntegrationCoefficientsTheorem} \begin{equation*} \int_{-1}^1 f(x)\,dx = \sum_{j=1}^n A_j f(x_j) = A_i f(x_i) \end{equation*} But $f(x) > 0$ except at the $x_j$ for $j \neq i$, so $A_i = \frac{\int_{-1}^1 f(x)\,dx}{f(x_i)} > 0$ ii) By Theorem \ref{TA-IntegrationCoefficientsTheorem}, $\int_{-1}^1 1 \,dx = 2 = \sum_{i=1}^n A_i$ \end{proof} \begin{TATheorem} \label{TA-IntegrationCoeffientsErrorSmall} Let $f: [-1, 1] \rightarrow \mathbb{R}$ be a continuous (or even just integrable) function and suppose that there is some polynomial $P$ of degree $\leq 2n-1$ such that \begin{equation*} \sup_{t \in [-1, 1]} |f(t) - P(t)| \leq \epsilon \end{equation*} Let $x_1 < \cdots < x_n, A_1, \ldots, A_n$ be as in Theorem \ref{TA-IntegrationCoefficientsTheorem}.
Then \begin{equation*} |\int_{-1}^1 f(t)\,dt - \sum_{i=1}^n A_i f(x_i)| \leq 4 \epsilon \end{equation*} \end{TATheorem} \begin{proof} \begin{align*} \text{LHS} &\leq |\int_{-1}^1 f(t)\,dt - \int_{-1}^1 P(t)\,dt| + |\int_{-1}^1 P(t)\,dt - \sum_{i=1}^n A_i P(x_i)| + |\sum_{i=1}^n A_i P(x_i) - \sum_{i=1}^n A_i f(x_i)|\\ &\leq \int_{-1}^1 |f(t) - P(t)|\,dt + 0 + \sum_{i=1}^n |A_i| |P(x_i) - f(x_i)|\\ &\leq \int_{-1}^1 \epsilon\,dt + (\sum_{i=1}^n |A_i|) \epsilon\\ &= 2\epsilon + 2\epsilon = 4 \epsilon \end{align*} using that $A_i > 0$ and $\sum_{i=1}^n A_i = 2$ by the previous lemma \end{proof} \begin{TACorollary} Let $f: [-1, 1] \rightarrow \mathbb{R}$ be any continuous function and let $\epsilon > 0$. Then, as long as $n$ is sufficiently large and $x_1 < \cdots < x_n, A_1, \ldots, A_n$ are as in Theorem \ref{TA-IntegrationCoefficientsTheorem}, \begin{equation*} |\int_{-1}^1 f(t)\,dt - \sum_{i=1}^n A_i f(x_i)| < \epsilon \end{equation*} \end{TACorollary} \begin{proof} By the Weierstrass Approximation Theorem we can find a polynomial $P$ with $\sup_{t \in [-1, 1]} |f(t) - P(t)| < \frac{\epsilon}{4}$ Then Theorem \ref{TA-IntegrationCoeffientsErrorSmall} gives the result (if $n \geq \frac{\deg P +1}{2}$) \end{proof} \newpage \subsection{Review of Complex Analysis} First, a heuristic for the Fundamental Theorem of Calculus: if $F' = f$ and $0 = x_0 < x_1 < \cdots < x_n = x$ is a fine partition, then \begin{align*} F(x) - F(0) &= \sum_{i=1}^n F(x_i) - F(x_{i-1})\\ &\approx \sum_{i=1}^n \underbrace{(x_i - x_{i-1})}_{dt} f(x_i) \rightarrow \int_0^x f(t)\,dt \end{align*} The same idea works along a path $\phi: [0, 1] \rightarrow \mathbb{C}$: sum the differences $F(\phi(t_i)) - F(\phi(t_{i-1}))$ and use $\phi(t_i) - \phi(t_{i-1}) \approx \phi'(t_i) (t_i - t_{i-1})$. Hence if $f$ is $F'$ then $F(z) - F(w) = \int_P f(z)\,dz$ for any path $P$ from $w$ to $z$.
So if $P$ is a closed path then $\int_Pf(z)\,dz = 0$ \begin{TATheorem}[Cauchy's Theorem] If $D$ is a simply connected domain (a domain is an open connected set; simply connected means there are no 'holes' in the set) and $C$ is a closed curve in $D$ and $f: D \rightarrow \mathbb{C}$ is analytic, then $\int_C f(z)\,dz = 0$ \end{TATheorem} Recall that $f$ is differentiable means $f(z+h) = f(z) + h f'(z) + o(h)$ with $\frac{o(h)}{h} \rightarrow 0$ as $h \rightarrow 0$. Let $z = x + iy, \quad f(z) = u(x, y) + iv(x, y), \quad h = k + il$. Then $f(z+h) = u(x+k, y+l) + iv(x+k, y+l) = u(x, y) + iv(x, y) + k \frac{\partial u}{\partial x} + l\frac{\partial u}{\partial y} + ik \frac{\partial v}{\partial x} + il \frac{\partial v}{\partial y} + o(k, l)$ Now compare this with multiplication by $f'(z)$. As a real-linear map of $(k, l)$, the derivative has matrix $\left(\begin{array}{cc} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y}\\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} \end{array}\right)$, while multiplication by a complex number $a + ib$ acts as $(a+ib)(x+iy) = ax-by +i(ay+bx)$, i.e. \begin{equation*} \left(\begin{array}{c} x\\ y \end{array}\right) \mapsto \left(\begin{array}{c} ax-by\\ ay+bx \end{array}\right) = \left(\begin{array}{cc} a & -b\\ b & a \end{array}\right)\left(\begin{array}{c} x \\ y \end{array}\right) \end{equation*} By comparison between the two cases we get the Cauchy-Riemann equations \begin{equation*} \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} \end{equation*} \newpage \subsection{Cauchy's Integral Formula} Let $D$ be a domain, let $C$ be a circle such that it and its interior lie in $D$. Let $f: D \rightarrow \mathbb{C}$ be analytic. Let $z$ be in the interior of $C$.
Then \begin{equation*} f(z) = \frac{1}{2 \pi i} \int_C \frac{f(w)}{w-z} \,dw \end{equation*} \begin{proof}(Sketch) By homotopy invariance (the integral around a curve is equal to the integral around a homotopically equivalent curve) we can replace $C$ by a very small circle $C'$ about $z$ (as only at $z$ is the integrand (possibly) not analytic). But then \begin{align*} \int_{C'} \frac{f(w)}{w-z}\,dw &\approx \int_{C'} \frac{f(z)}{w-z}\,dw \quad \text{using continuity of }f\\ &= f(z) \int_{C'} \frac{dw}{w-z}\\ &= 2 \pi i f(z) \end{align*} \end{proof} \begin{TATheorem}[Taylor's Theorem] Let $D$ be a domain and let $f: D \rightarrow \mathbb{C}$ be analytic. Let $z_0$ be in $D$ and suppose $r > 0$ is such that $B_r(z_0) \subset D$. Then there are unique coefficients $a_0, a_1, \ldots \in \mathbb{C}$ such that \begin{equation*} f(z) = \sum_{n=0}^\infty a_n(z-z_0)^n \end{equation*} for every $z \in B_r(z_0)$. For any $\rho < r$, $\sum_{n=0}^N a_n (z-z_0)^n \rightarrow f(z)$ uniformly on $B_\rho(z_0)$ \end{TATheorem} \begin{proof} Let $z \in B_r(z_0)$ and let $\rho$ be such that $|z-z_0| < \rho < r$, and let $C$ be the circle of radius $\rho$ about $z_0$. Then by Cauchy's integral formula: \begin{align*} f(z) &= \frac{1}{2 \pi i} \int_C \frac{f(w)}{w-z}\,dw\\ &=\frac{1}{2 \pi i}\int_C \frac{f(w)}{w-z_0 - (z-z_0)}\,dw\\ &= \frac{1}{2 \pi i}\int_C \frac{f(w)}{(w-z_0) (1 - \frac{z-z_0}{w-z_0})}\,dw\\ &= \frac{1}{2 \pi i}\int_C \frac{f(w)}{w-z_0} \sum_{n=0}^\infty (\frac{z-z_0}{w-z_0})^n \,dw\\ &= \sum_{n=0}^\infty a_n (z-z_0)^n \end{align*} where $a_n = \frac{1}{2 \pi i}\int_C \frac{f(w)}{(w-z_0)^{n+1}}\,dw$. Note we have used that the sum converges uniformly on $C$ (since $|\frac{z-z_0}{w-z_0}| = \frac{|z-z_0|}{\rho} < 1$ there) when we interchanged $\sum$ and $\int$.
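The coefficient formula $a_n = \frac{1}{2 \pi i}\int_C \frac{f(w)}{(w-z_0)^{n+1}}\,dw$ can be tested numerically: parametrising $C$ as $w = z_0 + \rho e^{i\theta}$ turns the contour integral into an average over equally spaced points on the circle. A small Python sketch (not part of the notes; the helper name and sample values are illustrative), checking the coefficients of $e^z$ about $z_0 = 0$ against $\frac{1}{n!}$:

```python
import cmath
import math

def taylor_coeff(f, z0, n, rho=1.0, N=256):
    """Approximate a_n = (1/2πi) ∮ f(w)/(w-z0)^(n+1) dw over |w - z0| = rho.
    With w = z0 + rho*e^{iθ}, dw = i(w - z0) dθ cancels one power of (w - z0),
    leaving an average of f(w)/(w - z0)^n over the circle."""
    s = 0.0 + 0.0j
    for k in range(N):
        w = z0 + rho * cmath.exp(2j * math.pi * k / N)
        s += f(w) / (w - z0) ** n
    return s / N

# For f = exp about z0 = 0 the Taylor coefficients are 1/n!.
for n in range(8):
    a_n = taylor_coeff(cmath.exp, 0.0, n)
    assert abs(a_n - 1 / math.factorial(n)) < 1e-12
print("coefficients match 1/n!")
```

The equally weighted average is the trapezoidal rule for a periodic integrand, which converges extremely fast for analytic $f$, so even modest $N$ recovers the coefficients to machine precision.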
To see uniqueness, suppose $f(z) = \sum_{n=0}^\infty a_n (z-z_0)^n = \sum_{n=0}^\infty b_n(z-z_0)^n$ with the coefficients not all equal; take the first $N$ such that $a_N \neq b_N$ and look at \begin{equation*} \lim_{z \rightarrow z_0} \frac{\sum_{n=0}^\infty a_n (z-z_0)^n - \sum_{n=0}^\infty b_n (z-z_0)^n}{(z-z_0)^N} \end{equation*} Since the numerator is identically zero, the limit is 0; but the terms with $n < N$ cancel, so the limit also equals $a_N - b_N \neq 0$ \# The uniform convergence comes from the fact that $\rho$ is less than the radius of convergence \end{proof} \begin{TATheorem}[The Identity Theorem] Let $D$ be a domain and let $f, g: D \rightarrow \mathbb{C}$ be analytic. Let $z_0 \in D$ and let $(z_n)$ be a sequence in $D$ with $z_n \neq z_0$ and $z_n \rightarrow z_0$. Suppose that $f(z_n) = g(z_n)$ for every $n$. Then $f \equiv g$ on $D$ \end{TATheorem} \begin{proof}(Sketch) \begin{figure}[htbp] \centering \includegraphics{Images/TA06.jpg} \caption{Proof of the Identity Theorem} \label{Proof of the Identity Theorem} \end{figure} \textcircled{1} - Using these $z_n$ expand $f, g$ in a circle about $z_0$ and get that $f \equiv g$ in this circle \textcircled{2} - Take a new point, make a new circle, and $f \equiv g$ in this circle as well by the above argument \textcircled{3} - Repeat until the whole domain is covered \end{proof} \begin{TATheorem}[Laurent's Theorem] Let $D$ be a domain that includes the annulus $A = \{z: r < |z-z_0| < R\}$ and let $f: D \rightarrow \mathbb{C}$ be analytic. Then there are unique coefficients $(a_n)_{n \in \mathbb{Z}}$ such that $f(z) = \sum_{n=-\infty}^\infty a_n(z-z_0)^n \ \forall z \in A$ \end{TATheorem} \begin{proof}(Sketch) Let $z \in A$. Let $\alpha, \beta$ be such that $r < \alpha < |z-z_0| < \beta < R$. Let $C_1$ and $C_2$ be circles of radius $\alpha, \beta$ respectively about $z_0$. By Cauchy's Integral formula (connecting the circles by a narrow tube): \begin{equation*} f(z) = \frac{1}{2 \pi i} \int_{C_2} \frac{f(w)}{w-z}\,dw - \frac{1}{2 \pi i}\int_{C_1} \frac{f(w)}{w-z}\,dw \end{equation*} The $1^\text{st}$ term expands just as in Taylor's Theorem.
For the second, write $\frac{1}{w-z}$ as $\frac{1}{w-z_0 - (z-z_0)} = \frac{1}{z-z_0} (\frac{1}{\frac{w-z_0}{z-z_0} - 1})$ and expand. \end{proof} MISSING LECTURE 15 \begin{proof}[Proof of Runge's Theorem] Let $f: \mathbb{C} \cup \{\infty\} \rightarrow \mathbb{C}$ be a rational function with poles $w_1, \ldots, w_k \in \Omega$. By partial fractions, we can write $f$ as a sum $f_1 + \cdots + f_k$ with $f_i \in R(w_i)$ (i.e. $f_i$ is a linear combination of functions of the form $(z-w_i)^m, \ m \in \mathbb{Z}$). By the main lemma, if $w$ is any element of $\Omega$, then $R(w_i) \subset R(w)$ for every $i$. But $\overline{R(w)}$ is a vector space (easy to check) so $f = f_1 + \cdots + f_k \in \overline{R(w)}$, i.e. $f$ can be uniformly approximated by functions of the form $\sum_{m = -N}^N a_m (z-w)^m$. Choose $w$ far enough away that there is a closed disc $D$ containing $K$, itself contained in a larger open disc not containing $w$. Then by an earlier lemma, any function $\sum_{m=-N}^N a_m (z-w)^m$ can be uniformly approximated on $D$, and hence on $K$, by the partial sums of its Taylor expansion, i.e. by polynomials. \end{proof} \begin{TATheorem} There exists a sequence of analytic functions $f_n : D \rightarrow \mathbb{C}$ (where now $D$ is the open unit disc) such that $f_n(z) \rightarrow \left\{\begin{array}{ccc} 0 & \quad & z \neq 0\\ 1 & & z=0 \end{array}\right.$ \end{TATheorem} \begin{proof} For each $n$ let $C_n$ be the circle of radius $\frac{1}{n}$ about 0, and let $K_n = \{0\} \cup \{z : \frac{2}{n} \leq |z| \leq 1, \frac{2 \pi}{n} \leq \arg z \leq 2 \pi\}$. Note that $\bigcup_n K_n = \overline{D}$. The function $g_n(z) = \frac{1}{2 \pi i} \int_{C_n} \frac{dw}{w-z}$ is 1 at 0 and 0 on the rest of $K_n$, by Cauchy's Integral Formula and Cauchy's Theorem. The integral can be approximated by a finite sum of the form $\sum_{i=1}^t \frac{a_i}{w_i - z}$ and since $\dist(C_n, K_n) > 0$ this approximation can be made uniform on $K_n$ (see exercise on sheet 3).
Therefore we have a rational function $f_n$ with poles outside $K_n$ such that $|f_n(0) - 1| < \frac{1}{n}$ and $|f_n(z) - g_n(z)| = |f_n(z)| < \frac{1}{n}$ for every $z \in K_n \setminus \{0\}$. By Runge's Theorem we can find a polynomial $p_n$ such that $|p_n(z) - f_n(z)| < \frac{1}{n}$ for every $z \in K_n$. Then $p_n(z) \rightarrow \left\{\begin{array}{ccc} 1 & \quad & z=0\\ 0 & & 0 < |z| \leq 1 \end{array}\right.$ \end{proof} \newpage \section{Irrationality and Transcendence} \begin{TATheorem} Let $p$ be a monic polynomial with integer coefficients. Then every root of $p$ is either an integer or irrational. \end{TATheorem} \begin{proof} Let $p(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0$ and suppose $p(\frac{r}{s}) = 0$ where $\frac{r}{s}$ is a fraction in its lowest terms (with $s \geq 1$). Then \begin{align*} \frac{r^n}{s^n} + a_{n-1}\frac{r^{n-1}}{s^{n-1}} + \cdots + a_1 \frac{r}{s} + a_0 &= 0\\ \Rightarrow r^n + s a_{n-1} r^{n-1} + \cdots + s^{n-1}a_1 r + s^n a_0 &= 0 \end{align*} whence $s \mid r^n$, since $r^n + s\lambda = 0$ for an integer $\lambda$. But this is possible only if $s=1$ (otherwise, by considering prime factorisations, $s$ and $r$ would have a common factor; alternatively use Bezout's Theorem) \end{proof} (recall Bezout's Theorem: $(a, m) = (b,m) = 1 \Rightarrow (ab, m) = 1$, and $(r, s) = 1 \Rightarrow$ by induction $(r^n, s) = 1$) \newpage \subsection{Continued Fractions: A first look} Take a fraction such as $\frac{50}{37}$. We can write it as $1 + \frac{13}{37}$, which in turn can be written as $1 + \frac{1}{\frac{37}{13}}$, and the expansion can be continued: \begin{align*} \frac{50}{37} &= 1 + \frac{13}{37}\\ &= 1 + \frac{1}{\frac{37}{13}}\\ &= 1 + \frac{1}{2 + \frac{11}{13}}\\ &= 1 + \frac{1}{2 + \frac{1}{\frac{13}{11}}}\\ &= 1 + \frac{1}{2 + \frac{1}{1+\frac{2}{11}}}\\ &= 1+\frac{1}{2+\frac{1}{1+\frac{1}{\frac{11}{2}}}}\\ &= 1+\frac{1}{2+\frac{1}{1+\frac{1}{5+\frac{1}{2}}}} \end{align*} This is the simple continued fraction expansion of $\frac{50}{37}$.
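The expansion can be generated mechanically by repeated division with remainder, which is exactly Euclid's algorithm. A short Python sketch (not part of the notes), computing the terms and rebuilding the fraction from the bottom up:

```python
from fractions import Fraction

def cf_expansion(p, q):
    """Simple continued fraction terms of p/q via repeated division."""
    terms = []
    while q:
        terms.append(p // q)   # integer part
        p, q = q, p % q        # continue with the reciprocal of the remainder part
    return terms

def cf_value(terms):
    """Evaluate the continued fraction from the bottom up."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

terms = cf_expansion(50, 37)
print(terms)                           # [1, 2, 1, 5, 2]
assert cf_value(terms) == Fraction(50, 37)
```

The terms $[1, 2, 1, 5, 2]$ are exactly the integer parts produced at each stage of the expansion above.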
Simple refers to the fact that all the numerators are 1. Sometimes non-simple expansions are neater, e.g. for $\pi$. Compare with Euclid's Algorithm: \begin{align*} 50 &= \textcolor{red}{1}.37 + 13\\ 37 &= \textcolor{red}{2}.13 + 11\\ 13 &= \textcolor{red}{1}.11 + 2\\ 11 &= \textcolor{red}{5}.2 + 1\\ 2 &= \textcolor{red}{2}.1 \end{align*} The numbers in red are the denominators of the continued fraction expansion.\\ Remark: If we start with a fraction $\frac{p}{q}$ then the successive denominators get smaller, so the continued fraction expansion is finite. We can also work out continued-fraction expansions for certain irrational numbers (they always exist, but do not always have any discernible pattern). For example: \begin{align*} \sqrt{2} &= 1 + (\sqrt{2} - 1)\\ &= 1 + \frac{1}{\frac{1}{\sqrt{2} - 1}}\\ &= 1 + \frac{1}{1 + \sqrt{2}} \quad \text{as }\frac{1}{\sqrt{2} - 1} = \sqrt{2} +1\\ &= 1 + \frac{1}{2 + (\sqrt{2} -1)}\\ &= 1 + \frac{1}{2 + \frac{1}{1 + \sqrt{2}}}\\ &= 1 + \frac{1}{2 + \frac{1}{2 + (\sqrt{2} - 1)}}\\ &= \cdots = 1 + \frac{1}{2 + \frac{1}{2 + \frac{1}{2 + \ldots}}} \end{align*} which is the simple continued fraction expansion of $\sqrt{2}$. Since this expansion is infinite (and a rational has a finite expansion), $\sqrt{2}$ is irrational.\\ Alternative (essentially the same) proof: (see figure 7) \begin{figure}[htbp] \centering \includegraphics{Images/TA07.jpg} \caption{Expansion of the square root of 2} \label{Expansion of the square root of 2} \end{figure} If $\sqrt{2} = \frac{p}{q}$, take a $p \times q$ rectangle and repeatedly cut off squares; at some point we get a rectangle the same shape as the original, so the process continues forever. But we only had $pq$ little squares to use up \# \newpage \subsection{The irrationality of $e$ and $\pi$} Define $e$ to be $\frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots$ \begin{TATheorem} $e$ is irrational \end{TATheorem} \begin{proof} Suppose that $e = \frac{p}{q}$ with $p, q$ positive integers. Then $q!e$ must be an integer.
But \begin{align*} q!e &= \frac{q!}{0!} + \frac{q!}{1!} + \cdots + \frac{q!}{q!} + \frac{q!}{(q+1)!} + \frac{q!}{(q+2)!} + \cdots\\ &=M + \frac{1}{q+1} + \frac{1}{(q+1)(q+2)} + \cdots \end{align*} for some integer $M$. Thus \begin{align*} M < q!e &< M + \frac{1}{q+1} + \frac{1}{(q+1)^2} + \frac{1}{(q+1)^3} + \cdots\\ &=M + \frac{1}{q} < M+1 \end{align*} so $q!e \notin \mathbb{N} \#$ \end{proof} \begin{TATheorem} $\pi$ is irrational \end{TATheorem} \begin{proof} Consider the integral $s_n(x) = \frac{1}{2^n n!} \int_0^x (x^2 - t^2)^n \cos t \,dt$. Then \begin{align*} s_0(x) &= \int_0^x \cos t \,dt = \sin x\\ s_1(x) &= \frac{1}{2} \int_0 ^x (x^2 - t^2) \cos t \,dt\\ &= \cancel{\Big[\frac{1}{2} (x^2 - t^2) \sin t\Big]_0^x} - \frac{1}{2} \int_0^x -2t \sin t \,dt\\ &= \int_0^x t \sin t \,dt\\ &= -[t \cos t]_0^x + \int_0^x \cos t \,dt\\ &= \sin x - x \cos x \end{align*} In general \begin{align*} s_n(x) &= \frac{1}{2^n n!} \int_0^x(x^2 - t^2)^n \cos t \,dt\\ &= \frac{1}{2^n n!} \cancel{[(x^2 - t^2)^n \sin t]_0^x} - \frac{1}{2^n n!} \int_0^x -2tn(x^2 - t^2)^{n-1} \sin t\,dt\\ &= \frac{1}{2^{n-1} (n-1)!} \int_0^x t(x^2 - t^2)^{n-1} \sin t \,dt\\ &= \frac{-1}{2^{n-1} (n-1)!} \cancel{[t (x^2 - t^2)^{n-1} \cos t]_0^x} + \frac{1}{2^{n-1} (n-1)!} \int_0^x \{(x^2 - t^2)^{n-1} -2t^2 (n-1)(x^2 - t^2)^{n-2}\}\cos t\,dt\\ &= \frac{1}{2^{n-1}(n-1)!} \int_0^x (x^2 - t^2)^{n-1} \cos t \,dt - \frac{1}{2^{n-2}(n-2)!} \int_0^x t^2(x^2 - t^2)^{n-2}\cos t\,dt\\ &= s_{n-1}(x) - x^2 s_{n-2}(x) + 2(n-1)s_{n-1}(x) \quad \text{(writing } t^2 = x^2 - (x^2 - t^2)\text{)}\\ &= (2n-1)s_{n-1}(x) - x^2 s_{n-2}(x) \end{align*} Hence by induction $s_n(x) = p_n(x) \sin x + q_n(x) \cos x$ for polynomials $p_n$ and $q_n$ with integer coefficients. Hence $s_n(\frac{\pi}{2}) = p_n(\frac{\pi}{2})$ is a polynomial in $\frac{\pi}{2}$. If $\pi$ is rational then we can write $\frac{\pi}{2} = \frac{u}{v}$ for $u, v \in \mathbb{N}$.
$s_n(\frac{\pi}{2})$ is clearly positive (as the integrand is positive on $(0, \frac{\pi}{2})$), and since $p_n$ has integer coefficients and degree $\leq n$ (also by induction on the recurrence), the smallest possible value of the positive number $p_n(\frac{u}{v})$ is $\frac{1}{v^n}$. But also $p_n(\frac{\pi}{2}) = s_n(\frac{\pi}{2}) \leq \frac{1}{2^n n!} . \frac{\pi}{2} . (\frac{\pi^2}{4})^n$. Suppose $n$ is even. Using the estimate $n! \geq n(n-1) \ldots (\frac{n}{2} + 1) \geq (\frac{n}{2})^\frac{n}{2} \geq n^\frac{n}{4}$ (the last step for $n \geq 4$), this gives $s_n(\frac{\pi}{2}) \leq \frac{\pi}{2} (\frac{\pi^2}{8 n^{\frac{1}{4}}})^n$. If $n^{\frac{1}{4}} > 10 \frac{\pi^2 v}{8}$ this is less than $\frac{\pi}{2} (\frac{1}{10v})^n < (\frac{1}{v})^n$ \# (or use $n! \approx (\frac{n}{e})^n$ (Stirling's formula)) \end{proof} \begin{TATheorem}[Liouville's Theorem] Let $\theta$ be an irrational root of the polynomial $a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0$ where the $a_i$ are integers. Then $\exists$ a constant $c > 0$ (depending on $\theta$ and the polynomial) such that $|\theta - \frac{p}{q}| \geq \frac{c}{q^n}$ for every $p, q \in \mathbb{Z}, q \geq 1$ \end{TATheorem} \begin{proof} There are only finitely many rational roots, so there exists $a > 0$ such that $|\theta - \frac{p}{q}| \geq a$ whenever $\frac{p}{q}$ is a rational root of the polynomial. Now let $\frac{p}{q}$ be a rational that is not a root. Since $a_0, a_1, \ldots, a_n$ are integers, $P(\frac{p}{q})$ is a multiple of $\frac{1}{q^n}$, where $P$ is the above polynomial, and as $\frac{p}{q}$ is not a root, $|P(\frac{p}{q})| \geq \frac{1}{q^n}$. Recall that $(x^k - y^k) = (x-y) (x^{k-1} + x^{k-2}y + \cdots + xy^{k-2} + y^{k-1})$ which implies that \begin{align*} |x^k - y^k| &\leq |x-y| (|x|^{k-1} + |x|^{k-2}|y| + \cdots + |x| |y|^{k-2} + |y|^{k-1})\\ &\leq |x-y| k .(\max \{|x|, |y|\})^{k-1} \end{align*} It follows that $|P(x) - P(y)| \leq |x-y| \sum_{k=0}^n k |a_k| (\max \{|x|, |y|\})^{k-1}$. If $|\frac{p}{q}| > |\theta| + 1$ then $|\theta - \frac{p}{q}| \geq 1$ giving $c$, and we are done.
Otherwise, we may deduce that: \begin{equation*} \frac{1}{q^n} \leq |P(\theta) - P(\frac{p}{q})| \leq |\theta - \frac{p}{q}| \underbrace{\sum_{k=0}^n k |a_k| (|\theta| + 1)^{k-1}}_{\text{call this }c_1} \end{equation*} Then $|\theta - \frac{p}{q}| \geq \frac{1}{c_1 q^n}$ as required. Now let $c = \min\{a, 1, \frac{1}{c_1}\}$ and we get $|\theta - \frac{p}{q}| \geq \frac{c}{q^n} \ \forall \frac{p}{q}$ (we used $c=a$ if $\frac{p}{q}$ is a root, $c=1$ if $|\frac{p}{q}| > |\theta| + 1$, $c=\frac{1}{c_1}$ otherwise). \end{proof} Note: in the first two cases the bound is in fact stronger, since $a \geq \frac{a}{q^n}$ and $1 \geq \frac{1}{q^n}$. \begin{TACorollary} There are transcendental numbers. \end{TACorollary} \begin{proof} Let $\theta = \frac{1}{10^{1!}} + \frac{1}{10^{2!}} + \frac{1}{10^{3!}} + \cdots$ Then for appropriate $p$ \begin{equation*} |\theta - \frac{p}{10^{n!}}| < \frac{2}{10^{(n+1)!}} = 2. (\frac{1}{10^{n!}})^{n+1} \end{equation*} (Take $p$ such that $\frac{p}{10^{n!}} = \frac{1}{10^{1!}} + \frac{1}{10^{2!}} + \cdots + \frac{1}{10^{n!}}$) Let $c > 0, m \in \mathbb{N}$.
For large enough $n \in \mathbb{N}$ \begin{equation*} \frac{2}{(10^{n!})^{n+1}} < \frac{c}{(10^{n!})^m} \end{equation*} so $\theta$ is not algebraic of degree $m$ by Liouville's Theorem \end{proof} \newpage \subsection{More on continued fractions} Consider a general expression of the form \begin{equation*} a_0 + \frac{b_0}{a_1 + \frac{b_1}{a_2 + \frac{b_2}{\ldots + \frac{b_{n-2}}{a_{n-1} + \frac{b_{n-1}}{a_n}}}}} \end{equation*} Notice that $a_k + \frac{b_k}{\frac{p}{q}} = \frac{a_k p + b_k q}{p}$. So if we work out this continued fraction from the bottom up, then when we reach $a_k + \frac{b_k}{\ldots}$ we turn the pair $\left(\begin{array}{cc} p & q \end{array}\right)$ into the pair \begin{equation*} \left(\begin{array}{cc} a_k p + b_k q & p \end{array}\right) = \left(\begin{array}{cc} p & q \end{array}\right)\left(\begin{array}{cc} a_k & 1\\ b_k & 0 \end{array}\right) \end{equation*} Hence the whole fraction equals $\frac{p_n}{q_n}$ with: \begin{align*} \left(\begin{array}{cc} p_n & q_n \end{array}\right) &= \left(\begin{array}{cc} a_n & 1 \end{array}\right)\left(\begin{array}{cc} a_{n-1} & 1\\ b_{n-1} & 0 \end{array}\right)\left(\begin{array}{cc} a_{n-2} & 1\\ b_{n-2} & 0 \end{array}\right)\ldots \left(\begin{array}{cc} a_0 & 1\\ b_0 & 0 \end{array}\right)\\ \text{So }\left(\begin{array}{cc} p_{n-1} & q_{n-1} \end{array}\right) &= \left(\begin{array}{cc} a_{n-1} & 1 \end{array}\right)\left(\begin{array}{cc} a_{n-2} & 1\\ b_{n-2} & 0 \end{array}\right)\ldots \left(\begin{array}{cc} a_0 & 1\\ b_0 & 0 \end{array}\right)\\ &= \left(\begin{array}{cc} 1 & 0 \end{array}\right)\left(\begin{array}{cc} a_{n-1} & 1\\ b_{n-1} & 0 \end{array}\right)\left(\begin{array}{cc} a_{n-2} & 1\\ b_{n-2} & 0 \end{array}\right) \ldots \left(\begin{array}{cc} a_0 & 1\\ b_0 & 0 \end{array}\right)\\ \Rightarrow \left(\begin{array}{cc} p_n & q_n\\ p_{n-1} & q_{n-1} \end{array}\right) &= \left(\begin{array}{cc} a_n & 1\\ 1 & 0 \end{array}\right)\left(\begin{array}{cc} a_{n-1} & 1\\
b_{n-1} & 0 \end{array}\right)\left(\begin{array}{cc} a_{n-2} & 1\\ b_{n-2} & 0 \end{array}\right) \ldots\left(\begin{array}{cc} a_0 & 1\\ b_0 & 0 \end{array}\right)\\ &= \left(\begin{array}{cc} a_n & b_{n-1}\\ 1 & 0 \end{array}\right)\left(\begin{array}{cc} a_{n-1} & 1\\ 1 & 0 \end{array}\right)\left(\begin{array}{cc} a_{n-2} & 1\\ b_{n-2} & 0 \end{array}\right)\ldots \left(\begin{array}{cc} a_0 & 1\\ b_0 & 0 \end{array}\right)\\ &= \left(\begin{array}{cc} a_n & b_{n-1}\\ 1 & 0 \end{array}\right)\left(\begin{array}{cc} p_{n-1} & q_{n-1}\\ p_{n-2} & q_{n-2} \end{array}\right) \end{align*} This yields the recurrence relations \begin{align*} p_n &= a_n p_{n-1} + b_{n-1} p_{n-2}\\ q_n &= a_n q_{n-1} + b_{n-1} q_{n-2} \end{align*} Example: Letting $a_i = b_i = 1$ we get the fraction \begin{equation*} 1 + \frac{1}{1 + \frac{1}{1 + \frac{1}{\ldots + \frac{1}{1 + \frac{1}{1}}}}} \end{equation*} (This is the continued fraction expansion of the golden ratio when taken to $\infty$) Then $\frac{p_0}{q_0} = 1$, so let $p_0 = q_0 = 1$. At the next stage $1 + \frac{1}{1} = 2$ so $p_1 = 2, q_1 = 1$. Thereafter \begin{align*} p_n &= p_{n-1} + p_{n-2}\\ q_n &= q_{n-1} + q_{n-2} \end{align*} So $p_n$ and $q_n$ are successive Fibonacci numbers. In this case \begin{equation*} \left(\begin{array}{cc} p_n & q_n\\ p_{n-1} & q_{n-1} \end{array}\right) = \left(\begin{array}{cc} 1 & 1\\ 1 & 0 \end{array}\right)^{n+1} \end{equation*} Declaring $p_n$ to be the $(n+1)^\text{th}$ Fibonacci number $F_{n+1}$ (with the convention $F_0 = F_1 = 1$, so $p_0 = F_1$ and $p_1 = F_2 = 2$) we get \begin{equation*} \left(\begin{array}{cc} F_{n+1} & F_n\\ F_n & F_{n-1} \end{array}\right) = \left(\begin{array}{cc} 1 & 1\\ 1 & 0 \end{array}\right)^{n+1} \end{equation*} So by taking determinants \begin{equation*} F_{n+1} F_{n-1} - F_n^2 = (-1)^{n+1} \end{equation*} e.g.
for $n=6$ \begin{align*} \text{LHS: }8.21 - 13^2 &= -1\\ \text{RHS: }(-1)^7 &= -1 \end{align*} It follows that \begin{equation*} \frac{F_{n+1}}{F_n} - \frac{F_n}{F_{n-1}} = \frac{(-1)^{n+1}}{F_n F_{n-1}} \end{equation*} from which it follows that the two fractions $\frac{F_{n+1}}{F_n}$ and $\frac{F_n}{F_{n-1}}$ are as close as they can be: If $\frac{p}{q} \neq \frac{r}{s}$ then $|\frac{p}{q} - \frac{r}{s}| = |\frac{ps - rq}{qs}| \geq \frac{1}{qs}$ and here we have equality.\\ Our next goal is to discuss the following expansion for $\tan x$ \begin{equation*} \frac{x}{1 - \frac{x^2}{3 - \frac{x^2}{5 - \frac{x^2}{7 - \frac{x^2}{\ldots}}}}} \end{equation*} Start by looking at the truncated fraction: \begin{equation*} \frac{p_n(x)}{q_n(x)} = \frac{x}{1 - \frac{x^2}{3 - \frac{x^2}{\ldots - \frac{x^2}{2n-1}}}} \end{equation*} Let $a_0 = 0, b_0 = x, \left.\begin{array}{ccc} a_i = 2i - 1 & \quad & i> 0\\ b_i = -x^2 & & i > 0 \end{array}\right.$. Then \begin{align*} p_n(x) &= (2n-1)p_{n-1}(x) - x^2 p_{n-2}(x)\\ q_n(x) &= (2n-1)q_{n-1}(x) - x^2 q_{n-2}(x) \end{align*} $\frac{p_0(x)}{q_0(x)} = 0$ so $p_0(x) = 0, q_0(x) = 1$. $\frac{p_1(x)}{q_1(x)} = x$ so $p_1(x) = x, q_1(x) = 1$. Also $\tan x - \frac{p_n(x)}{q_n(x)} = \frac{\sin x q_n(x) - \cos x p_n(x)}{\cos x q_n(x)} = \frac{s_n(x)}{\cos x q_n(x)}$ ($s_n(x)$ as in the proof of irrationality of $\pi$) To check this: Clearly $s_n(x)$ satisfies the same recurrence relation as $\sin x q_n(x) - \cos x p_n(x)$. We also get $s_0(x) = \sin x$, $s_1(x) = \sin x - x \cos x$ as before, so this is true. We know that $s_n(x)$ is very small for $0 \leq x \leq \frac{\pi}{2}$. It remains to show that $q_n(x)$ isn't too small. MISSING LECTURE 20 - 22nd November 2004 \newpage \subsection{Quotient Spaces} Let $(X, \tau)$ be a topological space and let $\sim$ be an equivalence relation on $X$. Then $\frac{X}{\sim}$ denotes the set of all equivalence classes.
The projection map $\pi: X \rightarrow \frac{X}{\sim}$ is defined by $x \mapsto [x]$ where $[x]$ is the equivalence class of $x$. If this is to be continuous for some topology $\sigma$ on $\frac{X}{\sim}$, we need $\pi^{-1}(U) \in \tau$ for every $U \in \sigma$. The quotient topology on $\frac{X}{\sim}$ is $\{U \subset \frac{X}{\sim} : \pi^{-1}(U) \in \tau\}$. This is the finest (=biggest) topology on $\frac{X}{\sim}$ such that $\pi$ is continuous. Note $U$ is a set of equivalence classes, and $\pi^{-1}(U)$ is the union of those equivalence classes.\\ Example: Let $X = \mathbb{R}$ and set $x \sim y$ iff $x-y$ is an integer. $\frac{\mathbb{R}}{\sim}$ is regarded as a circle. The quotient topology is the obvious one (exercise on sheet) e.g. the set $U = \{[x]: 0 < x < \frac{1}{2}\}$ is open since $\pi^{-1}(U) = \bigcup_{n=-\infty}^\infty (n, n+\frac{1}{2})$ which is open in $\mathbb{R}$ \newpage \subsection{Product Spaces} Let $\Gamma$ be a set and for each $\gamma \in \Gamma$ let $X_\gamma$ be a topological space. We want to put a topology on $\prod_{\gamma \in \Gamma} X_\gamma$. If $\Gamma = \{1, \ldots, n\}$ then $\prod_{\gamma \in \Gamma}X_\gamma = X_1 \times X_2 \times \cdots \times X_n$. If $\Gamma = \mathbb{N}$ then it's $X_1 \times X_2 \times \cdots = \{(x_1, x_2, \ldots) | x_i \in X_i\}$. Formally, $\prod_{\gamma \in \Gamma} X_\gamma$ is the set of all functions $\phi : \Gamma \rightarrow \bigcup_{\gamma \in \Gamma} X_\gamma$ such that $\phi(\gamma) \in X_\gamma$ for every $\gamma$. For each $\gamma \in \Gamma$ the projection map $\pi_\gamma : \prod_{\delta \in \Gamma} X_\delta \rightarrow X_\gamma$ is the map that "picks out the $\gamma$-coordinate" i.e.
sends $\phi$ to $\phi(\gamma)$. In the earlier examples: $\pi_j(x_1, \ldots, x_n) = x_j$; $\pi_j(x_1, x_2, \ldots) = x_j$. For these to be continuous, we need $\pi_\gamma^{-1}(U)$ to be open for every $U$ open in $X_\gamma$. But $\pi_\gamma^{-1}(U) = \prod_{\delta \in \Gamma} Y_\delta$ where $Y_\delta = \left\{\begin{array}{ccc} U & \quad & \delta = \gamma\\ X_\delta & & \delta \neq \gamma \end{array}\right.$ But we must also have finite intersections of these, so any basic open set, i.e. one of the form $\prod_{\delta \in \Gamma} Y_\delta$ where $Y_\delta = \left\{\begin{array}{ccc} U_i & \quad & \delta = \gamma_i\\ X_\delta & & \text{otherwise} \end{array}\right.$ where $\{\gamma_1, \gamma_2, \ldots, \gamma_n\}$ is some finite subset of $\Gamma$, must be open. The product topology on $\prod_{\gamma \in \Gamma} X_\gamma$ is defined to be the set of all unions of basic open sets.\\ Example: Let $X = \{0, 1\}^{\mathbb{N}} = \prod_{n=1}^\infty \{0, 1\} = \{(x_1, x_2, \ldots) \mid x_i = 0 \text{ or }1\}$. A basic open set takes the following form: fix finitely many places and let the rest vary (i.e. pick $n_1 < n_2 < \cdots < n_k$ and $\epsilon_1, \epsilon_2, \ldots, \epsilon_k = 0$ or $1$, and let $U = \{(x_1, x_2, \ldots) \mid \forall i \leq k \ x_{n_i} = \epsilon_i\}$. An open set is a union of these). \begin{description} \item[Hausdorff Topological Space] - A topological space $X$ is Hausdorff if for any pair $x \neq y$, $x, y \in X$ there are disjoint open sets $U$ and $V$ such that $x \in U$ and $y \in V$ \end{description} Examples: Any metric space is Hausdorff. If $|X| \geq 2$ then $X$ with the indiscrete topology is not Hausdorff. Let $X = \mathbb{R} \cup \{0'\}$ where we think of $0'$ as a 'false zero'. Given $U \subset X$ we call it open if it has one of the following properties.
i) $0' \notin U$ and $U$ is open in $\mathbb{R}$ ii) $0' \in U$ and $(U \setminus \{0'\}) \cup \{0\}$ is open in $\mathbb{R}$ This isn't Hausdorff since you can't find disjoint open $U$, $V$ with $0 \in U$, $0' \in V$. However, given any $x \neq y$ you can find $U$ open such that $x \in U, y \notin U$ \begin{description} \item[Compact Topological Space] - A topological space is compact if every open cover has a finite subcover \end{description} We have the following theorems \begin{TATheorem} (1) A closed subspace of a compact space is compact (2) A continuous image of a compact space is compact (3) A compact subset of a Hausdorff topological space is closed \end{TATheorem} \begin{proof} (1) Let $X$ be compact and let $F \subset X$ be closed. Let $F \subset \bigcup_{\gamma \in \Gamma} U_\gamma$ with all $U_\gamma$ open in $X$ (so $U_\gamma \cap F$ is open in $F$). Then $X \setminus F$ is open, so $X \setminus F$ together with the $U_\gamma$ forms an open cover of $X$. $X$ is compact, so we can find $\gamma_1, \ldots, \gamma_n$ such that \begin{align*} X &= (X \setminus F) \cup \bigcup_{i=1}^n U_{\gamma_i}\\ \Rightarrow F &\subset \bigcup_{i=1}^n U_{\gamma_i} \end{align*} Thus $F$ is compact\\ (2) Let $X$ be compact and let $f: X \rightarrow Y$ be continuous. Let $f(X) \subset \bigcup_{\gamma \in \Gamma} U_\gamma$ with each $U_\gamma$ open in $Y$. Then $X = \bigcup_{\gamma \in \Gamma}f^{-1}(U_\gamma)$ and the $f^{-1}(U_\gamma)$ are open, since $f$ is continuous. Since $X$ is compact, $\exists \gamma_1, \ldots, \gamma_n$ such that $X = \bigcup_{i=1}^n f^{-1}(U_{\gamma_i})$. But then $f(X) \subset \bigcup_{i=1}^n U_{\gamma_i}$, so $f(X)$ is compact\\ (3) Let $X$ be Hausdorff and $Y \subset X$ be compact. Let $x \in X \setminus Y$. Since $X$ is Hausdorff, for each $y \in Y$ we can find disjoint open $U_y, V_y$ such that $x \in U_y, y \in V_y$. The $V_y$ form an open cover of $Y$.
Since $Y$ is compact, we can find $y_1, \ldots, y_n$ such that $Y \subset \bigcup_{i=1}^n V_{y_i}$. But $\bigcap_{i=1}^n U_{y_i}$ is open and disjoint from $\bigcup_{i=1}^n V_{y_i}$, and hence disjoint from $Y$. Call this set $W_x$. Then $\bigcup_{x \in X \setminus Y} W_x$ is open and disjoint from $Y$, so equal to $X \setminus Y$ (since it contains all $x \in X \setminus Y$). So $X \setminus Y$ is open $\Rightarrow Y$ is closed. \end{proof} \begin{TATheorem} Let $f: X \rightarrow Y$ be a continuous bijection from a compact space $X$ to a Hausdorff space $Y$. Then $f(U)$ is open for every open set $U \subset X$ [$f$ is an open map]. Equivalently, $f^{-1}$ is continuous. \end{TATheorem} Remark: This is useful if you want to show that $X$ and $Y$ are homeomorphic. If the assumptions hold, then you only need to show continuity in one direction. \begin{proof} We can use: \begin{quote} (1) A closed subspace of a compact space is compact (2) A continuous image of a compact space is compact (3) A compact subset of a Hausdorff topological space is closed \end{quote} Let $U \subset X$ be open. Then \begin{align*} X \setminus U \text{ is closed } &\underbrace{\Rightarrow}_{(1)} X \setminus U \text{ is compact}\\ &\underbrace{\Rightarrow}_{(2)} f(X \setminus U) \text{ is compact}\\ &\underbrace{\Rightarrow}_{(3)} f(X \setminus U) \text{ is closed} \end{align*} But $f(X \setminus U) = f(X) \setminus f(U) = Y \setminus f(U)$ as $f$ is a bijection, so $Y \setminus f(U)$ is closed $\Rightarrow f(U)$ is open \end{proof} \begin{TACorollary} Let $(X, \tau)$ be a compact Hausdorff topological space. If $\sigma$ is a strictly cruder (=smaller) topology on $X$ then $(X, \sigma)$ is not Hausdorff, and if $\sigma$ is strictly finer (=bigger) then $(X, \sigma)$ is not compact. \end{TACorollary} \begin{proof} If $\sigma \subset \tau$ and $(X, \sigma)$ is Hausdorff, then the identity function $f: (X, \tau) \rightarrow (X, \sigma)$ is continuous.
($f(x) = x \ \forall x \in X$), so by the theorem it is a homeomorphism, so $\sigma = \tau$. If $\sigma \supset \tau$ then the identity function $f: (X, \sigma) \rightarrow (X, \tau)$ is continuous. If $(X, \sigma)$ is compact, then the theorem again gives that $f$ is a homeomorphism, so $\sigma = \tau$ \end{proof} \begin{description} \item[Regular Topological Space] - A topological space $X$ is regular if for every $x \in X$ and every closed set $F \subset X$ with $x \notin F$ there exist disjoint open sets $U$ and $V$ with $x \in U$ and $F \subset V$ \item[Normal Topological Space] - A topological space $X$ is normal if, for any two disjoint closed sets $F$ and $G$, there are disjoint open sets $U$ and $V$ such that $F \subset U$ and $G \subset V$ \end{description} \begin{TATheorem} Every compact Hausdorff topological space is normal \end{TATheorem} \begin{proof} Let $X$ be a compact Hausdorff space. First, we show that $X$ is regular. Let $x \in X$ and let $F \subset X$ be closed with $x \notin F$. For each $y \in F$ we can find disjoint open sets $U_y, V_y$ with $x \in U_y, y \in V_y$. The $V_y$ cover $F$, and $F$ is closed, so compact, so we can find $y_1, \ldots, y_n$ such that $F \subset V_{y_1} \cup \cdots \cup V_{y_n}$. Let $U = U_{y_1} \cap \cdots \cap U_{y_n}$ and $V = V_{y_1} \cup \cdots \cup V_{y_n}$. Then $U, V$ are open and disjoint, $x \in U$ and $F \subset V$. Thus $X$ is regular.\\ Now let $F, G$ be disjoint closed subsets of $X$. For every $x \in F$ we can find disjoint open sets $U_x, V_x$ with $x \in U_x$ and $G \subset V_x$ (by regularity, which we have just proved). $F$ is compact and the $U_x$ form an open cover, so we can find $x_1, \ldots, x_n$ such that $F \subset U_{x_1} \cup \cdots \cup U_{x_n}$. Let $U = U_{x_1} \cup \cdots \cup U_{x_n}$, $V = V_{x_1} \cap \cdots \cap V_{x_n}$.
Then $U$ and $V$ are open and disjoint, $F \subset U, G \subset V$ \end{proof} \begin{description} \item[Dense Topological Subspace] - Let $X$ be a topological space and let $Y \subset X$. Then $Y$ is dense if for every non-empty open set $U$, $Y \cap U \neq \phi$ (e.g. $\mathbb{Q}$ is dense in $\mathbb{R}$). If $V$ is a non-empty open set in $X$, then $Y$ is dense in $V$ if $Y \cap U \neq \phi$ for every non-empty open subset $U \subset V$. $Y$ is nowhere dense if it isn't dense in any $V$, that is: \begin{quote} $\forall$ non-empty open $V \ \exists$ non-empty open $U \subset V$ such that $Y \cap U = \phi$ \end{quote} \end{description} [e.g. $\mathbb{Q} \cap (0, 1)$ is not dense in $\mathbb{R}$, but nor is it nowhere dense, since it is dense in $(0, 1)$. The Cantor set (take $[0, 1]$, remove the middle third, remove the middle thirds of the remaining intervals, and continue forever) is nowhere dense] \begin{TATheorem}[The Baire-Category Theorem] \label{TA-BaireCategoryTheorem} Let $X$ be a complete metric space and let $U_1, U_2, U_3, \ldots$ be a sequence of dense open subsets of $X$. Then $\bigcap_{n=1}^\infty U_n \neq \phi$ (in fact, $\bigcap_{n=1}^\infty U_n$ is dense.)\\ Alternative version: Let $X$ be a complete metric space, and let $Y_1, Y_2, \ldots$ be nowhere dense. Then $\bigcup_{n=1}^\infty Y_n \neq X$ \end{TATheorem} \begin{proof} First we prove version 1: Let $X$ be a complete metric space and let $U_1, U_2, \ldots$ be dense open subsets of $X$. We shall build sequences $x_1, x_2, \ldots \in X$ and $\delta_1, \delta_2, \ldots > 0$ with the following properties \begin{quote} i) $\overline{B_{\delta_1}(x_1)} \supset \overline{B_{\delta_2}(x_2)} \supset \overline{B_{\delta_3}(x_3)} \supset \ldots$ ii) $\forall n \ \overline{B_{\delta_n}(x_n)} \subset U_n$ iii) $\delta_n \rightarrow 0$ as $n \rightarrow \infty$ \end{quote} To start off, let $x_1 \in U_1$. Since $U_1$ is open we can find $\delta_1 > 0$ such that $\overline{B_{\delta_1}(x_1)} \subset U_1$. Choose $\delta_1$ to be $\leq 1$.
Now suppose we have chosen $x_1, x_2, \ldots, x_k$ and $\delta_1, \delta_2, \ldots, \delta_k > 0$ such that the 3 conditions hold so far, with $\delta_j \leq \frac{1}{j} \ \forall j \leq k$. Since $B_{\delta_k}(x_k)$ is open and $U_{k+1}$ is dense, we can find $x_{k+1} \in B_{\delta_k}(x_k) \cap U_{k+1}$. Also, $U_{k+1}$ and $B_{\delta_k}(x_k)$ are open, so we can find $0 < \delta_{k+1} \leq \frac{1}{k+1}$ such that $\overline{B_{\delta_{k+1}}(x_{k+1})} \subset B_{\delta_k}(x_k) \cap U_{k+1}$, so we have got from $k$ to $k+1$. So by induction we have a sequence with the required properties. The sequence is Cauchy: let $\epsilon > 0$, let $N > \frac{2}{\epsilon}$ and let $p, q \geq N$. Then $x_p \in B_{\delta_p}(x_p) \subset \overline{B_{\delta_N}(x_N)}$ and similarly $x_q \in \overline{B_{\delta_N}(x_N)}$, so $d(x_p, x_q) \leq 2\delta_N \leq \frac{2}{N} < \epsilon$. Since $X$ is complete, the sequence converges; let $x$ be its limit. Let $N \in \mathbb{N}$. For all $n \geq N$, $x_n \in \overline{B_{\delta_N}(x_N)}$ (as we've just seen) and $\overline{B_{\delta_N}(x_N)}$ is closed, so $x \in \overline{B_{\delta_N}(x_N)} \subset U_N$. This is true for all $N$, so $x \in \bigcap_{n=1}^\infty U_n$, which is therefore non-empty. \end{proof} \begin{TALemma} Let $X$ be a topological (metric) space and let $Y \subset X$ be nowhere dense. Then $\overline{Y}$ (the intersection of all closed sets containing $Y$) is nowhere dense and $X \setminus \overline{Y}$ is a dense open set. \end{TALemma} \begin{proof} Let $U$ be a non-empty open set. Then since $Y$ is nowhere dense we can find a non-empty open subset $V \subset U$ such that $Y \cap V = \phi$. But then $X \setminus V$ is a closed set containing $Y$, so $\overline{Y} \subset X \setminus V$, so $\overline{Y} \cap V = \phi$ as well. So $\overline{Y}$ is nowhere dense. In particular, for every non-empty open set $U$, $U \not\subset \overline{Y}$, so $U \cap (X \setminus \overline{Y}) \neq \phi$, so $X \setminus \overline{Y}$ is dense.
It's also open, since $\overline{Y}$ is closed \end{proof} We can now prove the second form of the Baire-Category Theorem (Theorem \ref{TA-BaireCategoryTheorem}) \begin{proof} Let $Y_1, Y_2, \ldots$ be nowhere dense subsets of $X$. Then $\overline{Y_1}, \overline{Y_2}, \ldots$ are closed and nowhere dense. So $X \setminus \overline{Y_1}, X \setminus \overline{Y_2}, \ldots$ are open and dense, so $\bigcap_{n=1}^\infty (X \setminus \overline{Y_n}) \neq \phi$ by version 1 of the Baire-Category theorem. $\Rightarrow X \setminus \bigcup_{n=1}^\infty \overline{Y_n} \neq \phi \Rightarrow \bigcup_{n=1}^\infty \overline{Y_n} \neq X \Rightarrow \bigcup_{n=1}^\infty Y_n \neq X$ \end{proof} Remark: Suppose that $X$ is a complete metric space and $F_1, F_2, \ldots$ are closed sets such that $\bigcup_{n=1}^\infty F_n = X$. Then the $F_n$ cannot all be nowhere dense (by Baire-Category II). So there is some $n$ and some non-empty open set $U$ such that $F_n$ is dense in $U$. But that means $U \subset F_n$ since $F_n$ is closed. (If $\exists \ y \in U \setminus F_n$, then $U \setminus F_n$ is open, non-empty and disjoint from $F_n$, so $F_n$ isn't dense in $U$ \#) i.e. if the $F_n$ are closed and $\bigcup_{n=1}^\infty F_n = X$ then some $F_n$ contains a non-empty open set $U$. This is referred to as Baire Category III and is useful for some applications.\\ Application: Let $f_n: [0, 1] \rightarrow \mathbb{R}$ be continuous functions. Suppose they are pointwise bounded, i.e.
$\forall x \ \exists \ M$ such that $\forall n \ |f_n(x)| \leq M$. For each $M \in \mathbb{N}$ let $F_M = \{x : \forall n \in \mathbb{N}, |f_n(x)| \leq M\}$ \begin{align*} \therefore F_M &= \bigcap_{n=1}^\infty \{x : |f_n(x)| \leq M\}\\ &= \bigcap_{n=1}^\infty f_n^{-1} [-M, M] \text{ is closed} \end{align*} (because inverse images of closed sets under continuous maps are closed, and intersections of closed sets are closed). But $\bigcup_{M=1}^\infty F_M = [0, 1]$ and $[0, 1]$ is complete, so some $F_M$ contains a non-empty open set and hence an open interval $(a, b)$ with $a < b$. On that interval we have $\forall x \ \forall n \ |f_n(x)| \leq M$, i.e. the $f_n$ are uniformly bounded there.\\ Take $f: [0, 1] \rightarrow \mathbb{R}$ continuous and $g(t) = \sum_{k=0}^n \binom{n}{k}t^k (1-t)^{n-k} f(\frac{k}{n})$. Here $\binom{n}{k}t^k (1-t)^{n-k}$ is the probability of getting $k$ heads out of $n$ tosses of a biased coin with probability $t$ of coming up heads. Steps of proof: The variance of the binomial distribution is $nt(1-t)$, so the standard deviation is at most $\frac{1}{2}\sqrt{n}$ (since $t(1-t) \leq \frac{1}{4}$). Chebyshev's inequality says that it is unlikely that the number of heads differs by many standard deviations from the mean $nt$. $g(t) = \mathbb{E}f(\frac{1}{n} X_{n, t})$ where $X_{n, t}$ is the number of heads from $n$ tosses with $\mathbb{P}$(head)$=t$. With high probability $\frac{1}{n} X_{n, t}$ is close to $t$, so $f(\frac{1}{n}X_{n, t})$ is close to $f(t)$ [as $f$ is uniformly continuous], so $\mathbb{E}f(\frac{1}{n}X_{n, t})$ has a contribution of about $f(t)$ from the cases when $\frac{1}{n}X_{n, t} \approx t$, and a contribution from the other cases which is small.\\ Example: Find polynomials that approximate $\frac{1}{z}$ uniformly on $K$, where $K$ is a semi-circle about the origin.
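Before this example, a quick numerical illustration (not from the notes) of the Bernstein polynomial $g(t)$ defined above. For $f(x) = x^2$ one can compute $g(t) = t^2 + \frac{t(1-t)}{n}$ exactly, so the maximum error is $\frac{1}{4n}$; the helper name below is mine.

```python
import math

def bernstein(f, n, t):
    # g(t) = sum_{k=0}^{n} C(n,k) t^k (1-t)^(n-k) f(k/n), as in the notes.
    return sum(math.comb(n, k) * t**k * (1 - t)**(n - k) * f(k / n)
               for k in range(n + 1))

f = lambda x: x * x          # any continuous function on [0, 1] will do
n = 100
grid = [i / 50 for i in range(51)]
err = max(abs(bernstein(f, n, t) - f(t)) for t in grid)
# For f(x) = x^2 the error is at most 1/(4n) = 0.0025 here.
assert err < 0.004
```

The $O(\frac{1}{n})$ error visible here is consistent with the Chebyshev argument sketched above: the approximation converges, but slowly.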
First approximate $\frac{1}{z}$ on $K$ by a function of the form $\sum_{n=-N}^N c_n(z + 2)^n$. Write $z = (z + 2)-2$ \begin{align*} \therefore \frac{1}{z} = \frac{1}{(z + 2)-2} &= \frac{1}{z + 2} (\frac{1}{1 - \frac{2}{z+2}})\text{ (taking out }(z+2)\text{ as }|z + 2| > 2\text{ on }K\text{)}\\ &= \frac{1}{z + 2}[1 + \frac{2}{z+2} + (\frac{2}{z+2})^2 + \cdots] \end{align*} and the series converges uniformly on $K$. So the functions \begin{equation*} R_N(z) = \frac{1}{z+2} (1+\frac{2}{z+2} + \frac{4}{(z+2)^2} + \cdots + \frac{2^N}{(z + 2)^N}) \end{equation*} converge to $\frac{1}{z}$ uniformly on $K$. Further, they have poles only at $-2$, so they are analytic on a disc of radius $\frac{3}{2}$ about $0$. So we can approximate them by polynomials: just take partial sums of their Taylor expansions. To reconstruct the usual $\epsilon$-argument: take $N$ such that $R_N(z)$ is within $\frac{\epsilon}{2}$ of $\frac{1}{z}$ on $K$, then take enough Taylor terms that the Taylor polynomial is within $\frac{\epsilon}{2}$ of $R_N(z)$.\\ Example: $e^2$ is irrational \begin{proof} Suppose $e^2 = \frac{p}{q}$. Then $q e^2 - p = 0$, so $qe - pe^{-1} = 0$, i.e. $(q-p) + \frac{q+p}{1!} + \frac{q-p}{2!} + \frac{q+p}{3!} + \cdots = 0$. Take $N$ even, much larger than $p$ and $q$, and multiply by $N!$. Since $N!(qe - pe^{-1}) = 0$ and the terms up to $\frac{1}{N!}$ contribute an integer, the tail $\frac{q+p}{N+1} + \frac{q-p}{(N+1)(N+2)} + \frac{q+p}{(N+1)(N+2)(N+3)} + \cdots$ must also be an integer. Now $e^2 \in (7, 8)$, so $7q < p < 8q$, hence $p+q \approx 8q$ and $q-p \approx -6q$. The terms of the tail therefore alternate in sign and descend in magnitude, so its value lies between \begin{equation*} \frac{q+p}{N+1} - \frac{p-q}{(N+1)(N+2)} \quad \text{and} \quad \frac{q+p}{N+1} \end{equation*} If $N$ is large enough, both bounds lie strictly between $0$ and $1$, so the tail is not an integer \# \end{proof} \end{document}