News

My basil finally developed roots! I'm currently reading about quantum mechanics, ferrofluids, and language models.

Blog

[08/09/2024:] Motivating Ladder Operators II
[08/08/2024:] A Cool Way to Garden
[08/08/2024:] Life and Basil Limeade
[08/05/2024:] Motivating Ladder Operators
[08/04/2024:] Migrated to AstroJS
[07/31/2024:] The Classical in the Quantum

Notes

Working on notes on quantum mechanics and derivatives, and on uploading my previous course notes to this blog!

Projects

Finally started a projects page! I've recently made some nice upgrades to my post component, so it looks pretty clean! ;)

🌊

I'm considering whether to continue this project using raw WebGL or Three.js.

I'm also researching methods for generating the 3D scenes I want for this project automatically.

In the meantime, I've decided to proceed with some preliminary prototypes of the other interactive parts of this project.

Orange Juice

I like orange juice. :)

Mlog


MATH 206 Notes: Functional Analysis

Fall 2024
By Aathreya Kadambi
Expanding on Lectures by Professor Zworski

I’m taking functional analysis with Professor Zworski!

Story 1: Linear Algebra

Relevant Lectures: Lecture 1

We started out by defining cosets and quotient spaces, and then proving the first isomorphism theorem for vector spaces. Hahaha, this is a homework problem I did for COMPSCI 189 yesterday!

We study the rest of mathematics because linear algebra is too hard. - Taylor, according to Zworski.

Zworski’s Favorite Fact from Linear Algebra (Schur’s Complement Formula). Suppose $\begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22}\end{bmatrix}$ is invertible, so $\begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22}\end{bmatrix}^{-1} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22}\end{bmatrix}$ with $S_{ij} : W_j \rightarrow V_i$. Then $T_{11}$ is invertible if and only if $S_{22}$ is invertible, and in that case $$T_{11}^{-1} = S_{11} - S_{12} S_{22}^{-1}S_{21}.$$
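As a quick numerical sanity check of the formula (my own, not from the lecture; the matrix below is arbitrary), one can invert a random block matrix with NumPy and compare both sides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random invertible 4x4 matrix T; the diagonal shift keeps it (and its
# relevant sub-blocks) safely away from singular.
T = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
S = np.linalg.inv(T)

# Split T and S = T^{-1} into 2x2 blocks.
T11 = T[:2, :2]
S11, S12, S21, S22 = S[:2, :2], S[:2, 2:], S[2:, :2], S[2:, 2:]

# Schur complement formula: T11^{-1} = S11 - S12 S22^{-1} S21.
lhs = np.linalg.inv(T11)
rhs = S11 - S12 @ np.linalg.inv(S22) @ S21
err = np.abs(lhs - rhs).max()
```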

Why is this interesting? Suppose we start with $P : H_1 \rightarrow H_2$, and we can find $R_{-} : H_- \rightarrow H_2$ and $R_+ : H_1 \rightarrow H_+$ such that $\begin{pmatrix} P & R_{-} \\ R_+ & 0 \end{pmatrix} : H_1 \oplus H_- \rightarrow H_2 \oplus H_+$ is invertible, with $\begin{pmatrix} P & R_- \\ R_+ & 0 \end{pmatrix}^{-1} = \begin{pmatrix} E & E_+ \\ E_- & E_{-+}\end{pmatrix} : H_2 \oplus H_+ \rightarrow H_1 \oplus H_-.$ Then $P$ is invertible if and only if $E_{-+}$ is, just as in the theorem, so we have reduced the invertibility of $P$ to something else. This is used in linear algebra for reducing huge systems to much smaller finite ones.

This sort of problem is called a Grushin problem. β€œThe key here is stability under perturbations and so on,” although I have no clue what perturbations he is talking about.

Example. Take $P = \lambda - J$ where $J$ is a Jordan block matrix (it has an off-diagonal of ones).

Remark. There are two types of multiplicity for the eigenvalue of the Jordan block matrix: the geometric and algebraic multiplicities, which here are $1$ and $n$. We can try to find $R_+$ and $R_-$.

We would like to compute the determinant of $\begin{pmatrix} \lambda - J & e_n \\ e_1 & 0 \end{pmatrix}$, where $e_n$ and $e_1$ are standard basis vectors. This matrix is invertible since it has determinant $-1$, so we can reduce its invertibility to another property. We can choose this $R_-$ and $R_+$ because one of them lies in the kernel of $J$ and the other lies in the kernel of $J^+$. In the general case though, it is an art.
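The determinant claim is easy to check numerically (a sketch of my own, not from lecture). For the bordered matrix below one also sees the effective operator $E_{-+}$: with this normalization it comes out to $-\lambda^n$, so it vanishes exactly when $\lambda - J$ is singular.

```python
import numpy as np

n, lam = 4, 1.5

J = np.diag(np.ones(n - 1), k=1)   # Jordan block: ones on the off-diagonal
A = lam * np.eye(n) - J            # P = lambda - J

# Border P with R_- = e_n (as a column) and R_+ = e_1 (as a row).
M = np.zeros((n + 1, n + 1))
M[:n, :n] = A
M[:n, n] = np.eye(n)[:, n - 1]     # column e_n
M[n, :n] = np.eye(n)[0, :]         # row e_1

det_M = np.linalg.det(M)           # -1, independent of lambda

# E_{-+} is the bottom-right entry of the inverse; here it equals -lambda^n,
# so it is nonzero exactly when lambda - J is invertible.
E_mp = np.linalg.inv(M)[n, n]
```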

Story 2: Dimension

Relevant Lectures: 1

The dimension of $V$ being $n$ is the same as saying there exists a corresponding basis of $n$ vectors for $V$. This is for finite dimensional vector spaces; if this is not possible, we say that the dimension is infinite. HΓΆrmander actually has another amusing definition of dimension for vector spaces.

Equivalently, $\dim$ is a functor from the category of vector spaces to $\mathbb{N}$ such that

  • $\dim K^n = n$,
  • $T : V_1 \rightarrow V_2$ being surjective is equivalent to $\dim V_1 \ge \dim V_2$,
  • $T : V_2 \rightarrow V_1$ being injective is equivalent to $\dim V_2 \le \dim V_1$. Zworski kept laughing while writing this.

Example. $\dim (V_1/\ker T) = \dim \text{Im }T$.

We say $\text{Rank }T = \dim \text{Im }T = \text{codim }\ker T$, where $\text{codim } W = \dim V/W$ for $W \subseteq V$.

Theorem. Suppose $W \subseteq V$. Then $\dim W + \text{codim } W = \dim V$.

Proof.

Since $\dim W \le \dim V$ and $\text{codim } W \le \dim V$, we can assume that $\dim W$ and $\text{codim } W$ are finite, because otherwise we just have infinities summing together.

Let $T_1 : K^n \rightarrow W$ and $T_2 : K^m \rightarrow V/W$ be bijections. There exists $\tilde{T}_2 : K^m \rightarrow V$ such that $\pi \circ \tilde{T}_2 = T_2$, where $\pi : V \rightarrow V/W$. The tilde mappings are called β€œlifts” of the non-tilde mappings. Concretely, if $T_2 e_j = x_j + W$ with $e_j$ a basis of $K^m$, set $\tilde{T}_2(\sum a_j e_j) := \sum a_j x_j$.

$$T := T_1 \oplus \tilde{T}_2 : K^{n+m} \ni (a,b) \mapsto T_1 a + \tilde{T}_2 b \in V$$

Claim. TT is an isomorphism.

  • Injectivity: if $T(a,b) = 0$, then $0 = \pi(T_1 a + \tilde{T}_2 b) = T_2 b \Rightarrow b = 0 \Rightarrow T_1 a = 0 \Rightarrow a = 0$.
  • Surjectivity: suppose $x \in V$. Since $T_2$ is surjective, $x = \sum b_j x_j + y$ with $y \in W$. But $T_1$ is surjective onto $W$, so there exists $a$ such that $T_1 a = y$.

Theorem. Take $T : V_1 \rightarrow V_2$. Then $\dim \text{Im }T + \dim \ker T = \dim V_1$, and $\text{codim }\text{Im }T + \text{codim }\ker T = \dim V_2$.

Remark. These are the same in finite dimensions, but in infinite dimensions one might be trivial while the other is useful. Apparently long ago in Zworski’s linear algebra class they proved an equivalent statement, that the row rank is the same as the column rank, via row and column operations. With algebra we can do it much more easily.

If $\dim V_j < \infty$, then we define the index: $$\text{ind}(T) = \dim \ker T - \text{codim }\text{Im }T = \dim V_1 - \dim V_2$$
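For matrices, all of these quantities come out of a single rank computation; here is a small check of my own (the particular matrix is arbitrary) that $\text{ind}(T) = \dim V_1 - \dim V_2$ no matter how degenerate $T$ is:

```python
import numpy as np

# T : K^4 -> K^3 as a 3x4 matrix with a rank deficiency (row3 = row1 + row2).
T = np.array([[1., 0., 2., 0.],
              [0., 1., 0., 0.],
              [1., 1., 2., 0.]])

dim_V1, dim_V2 = T.shape[1], T.shape[0]
rank = np.linalg.matrix_rank(T)

dim_ker = dim_V1 - rank     # rank-nullity
codim_im = dim_V2 - rank    # dimension of the cokernel

ind = dim_ker - codim_im    # = dim V1 - dim V2, independent of the rank
```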

Story 3: Index of a Transformation

Relevant Lectures: Lecture 1, Lecture 2

Suppose $\dim \ker T < \infty$ or $\dim (V_2/\text{Im }T) < \infty$. Then $$\text{ind}(T) = \dim \ker T - \text{codim }\text{Im }T$$

Theorem 1. Suppose that $T_1 : V_1 \rightarrow V_2$ and $T_2 : V_2 \rightarrow V_3$, and suppose that $\dim \ker T_j < \infty$ or $\dim \text{coker}\, T_j < \infty$ (where $\text{coker}\, T = V_2/\text{Im }T$). Then the index of $T_2T_1$ is well defined, and $\text{ind}(T_2T_1) = \text{ind}(T_1) + \text{ind}(T_2)$ follows from the definition of the index.

Theorem 2. If $S : V_1 \rightarrow V_2$ has finite rank and $\text{ind }T$ is defined, then $\text{ind}(T+S)$ is well defined and $\text{ind}(T+S) = \text{ind}(T)$.

$$0 \rightarrow V_0 \xrightarrow{T_0} V_1 \xrightarrow{T_1} \cdots \xrightarrow{T_{N-1}} V_N \rightarrow 0$$ is called a complex if $T_{k+1}T_k = 0$, i.e. $\ker T_{k+1} \supseteq \text{Im }T_k$.

An example of a complex from MATH 53:

Example. $V_0 = V_3 = C^\infty(\mathbb{R}^3, \mathbb{R})$, $V_1 = V_2 = C^\infty(\mathbb{R}^3, \mathbb{R}^3)$, $T_0 = \nabla$, $T_1 = \nabla \times$, $T_2 = \nabla \cdot$. For sophisticated people, this is a special example of the de Rham complex.

A complex is called exact if $\text{Im }T_k = \ker T_{k+1}$.

Example. The complex from the previous example is actually exact! Lol this is related to the question I asked the quantum mechanics professor.

Example (Exact sequences).

  • $N = 1$: exactness means $T_0$ is a bijection.
  • $N = 2$: $0 \rightarrow W \hookrightarrow V \xrightarrow{\pi} V/W \rightarrow 0$.

Theorem. For exact complexes, $$\sum_j \dim V_{2j} = \sum_j \dim V_{2j+1}.$$ If $\dim V_j < \infty$, then $\sum_j (-1)^j \dim V_j = 0$.
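A tiny numerical instance (my own, not from the lecture): the short exact sequence $0 \rightarrow K \rightarrow K^3 \rightarrow K^2 \rightarrow 0$ below has alternating dimension sum $1 - 3 + 2 = 0$.

```python
import numpy as np

# 0 -> K^1 --T0--> K^3 --T1--> K^2 -> 0
T0 = np.array([[1.], [0.], [0.]])      # injective, Im T0 = span{e1}
T1 = np.array([[0., 1., 0.],
               [0., 0., 1.]])          # surjective, ker T1 = span{e1}

is_complex = bool(np.allclose(T1 @ T0, 0.0))   # T1 T0 = 0

# Exactness at the middle: dim Im T0 = dim ker T1 (both are span{e1}).
rank_T0 = np.linalg.matrix_rank(T0)
dim_ker_T1 = T1.shape[1] - np.linalg.matrix_rank(T1)

alt_sum = 1 - 3 + 2
```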

This result will allow us to show Theorem 2 by constructing an appropriate exact complex.

Proof.

Let $R_{k+1} = \ker T_{k+1} = \text{Im }T_k$. Then $\dim V_k = \dim R_k + \dim R_{k+1}$. Now, $$\sum_k \dim V_{2k} = \sum_k (\dim R_{2k} + \dim R_{2k+1}) = \sum_k \dim V_{2k+1}$$

Remark. Another method is to actually shorten the exact sequence by taking quotients and then do stuff.

Story 4: Fredholm Operators

Recall that an operator $T : V_1 \rightarrow V_2$ is Fredholm when $\dim \ker T < \infty$ and $\dim \text{coker } T < \infty$.

$$\text{ind}(T) = \dim \ker T - \dim \text{coker } T$$

Example. $V_1 = V_2 = \{a = a_1a_2a_3\ldots : a_j \in K\}$, the space of sequences.

$$T_n a = a_{n+1}a_{n+2}\ldots \qquad n \in \mathbb{Z},\ a_l = 0 \text{ for } l \le 0$$

To be honest, I’m a bit unsure of what was going on here.

$$\dim \ker T_n = \begin{cases} 0 & n \le 0 \\ n & n > 0 \end{cases}$$
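One way I like to sanity-check this (a finite-dimensional illustration of my own, not from lecture): truncating sequences to length $N$ turns $T_n$, for $n > 0$, into an $(N-n) \times N$ matrix that is surjective with an $n$-dimensional kernel, so its index is $n$:

```python
import numpy as np

N, n = 8, 3

# Truncated left shift T_n : K^N -> K^{N-n}, (a_1,...,a_N) |-> (a_{n+1},...,a_N).
T = np.zeros((N - n, N))
for i in range(N - n):
    T[i, i + n] = 1.0

rank = np.linalg.matrix_rank(T)
dim_ker = N - rank           # the first n coordinates are annihilated
codim_im = (N - n) - rank    # surjective, so the cokernel vanishes

ind = dim_ker - codim_im     # equals n
```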

Last time, we said that $V_1 \subseteq V$ and $x \not\in V_1$ imply there is a hyperplane $V_2$ such that $V_2$ contains $V_1$ and $x$ isn’t in $V_2$.

Theorem. Suppose $W_1 \subseteq V$, $\dim W_1 < \infty$. Then there exists $W_2$ such that $W_1 \cap W_2 = \{0\}$, $W_1 + W_2 = V$, and $\text{codim }W_2 = \dim W_1$.

Proof Sketch.

Take $x \in W_1 \backslash \{0\}$. There is a hyperplane $H_1$ such that $x \not\in H_1$, so that $H_1 + W_1 = V$, and so

$$\dim (W_1 \cap H_1) + \text{codim } H_1 = \text{codim }(W_1 + H_1) + \dim W_1$$

so $\dim(W_1 \cap H_1) = \dim W_1 - 1$. So now repeating this process, we can find:

$$\dim(W_1 \cap H_1 \cap H_2 \cap \cdots \cap H_k) = \dim W_1 - k$$

for $0 \le k \le \dim W_1 = d$, so we take $W_2 = H_1 \cap \cdots \cap H_d$. Now we get:

$$\dim (W_1 \cap W_2) + \text{codim } W_2 = \text{codim }(W_1 + W_2) + \dim W_1$$

and now $\dim (W_1 \cap W_2) = 0$. Also,

$$\text{codim}(H_1 \cap H_2) \le \text{codim}(H_1 \cap H_2) + \text{codim}(H_1 + H_2) = \text{codim}(H_1) + \text{codim}(H_2)$$

so inductively,

$$\text{codim } W_2 \le \sum_{j=1}^d \text{codim } H_j = d$$

and also

$$\text{codim } W_2 \ge \dim W_1 = d,$$

so $\text{codim } W_2 = d$.

Theorem. $T : V_1 \rightarrow V_2$ is a Fredholm operator if and only if there exist $R_- : V_- \rightarrow V_2$ and $R_+ : V_1 \rightarrow V_+$ such that $\begin{pmatrix} T & R_- \\ R_+ & 0 \end{pmatrix}^{-1} = \begin{pmatrix} E & E_+ \\ E_- & E_{-+}\end{pmatrix}$ exists and $E_{-+}$ is a Fredholm operator, in which case $$\text{ind }T = \text{ind }E_{-+}$$

For the first direction, we use the complement theorem above to get a $W_2$ satisfying $\ker T \cap W_2 = 0$, $W_2 + \ker T = V_1$, $\dim \ker T = n_+$. Let $x_1,\dots,x_{n_+}$ be a basis of $\ker T$. There exist $L_j : V_1 \rightarrow K$ such that $L_j(x_i) = \delta_{ij}$: for $y \in V_1$, write $y = \sum a_j x_j + \overline{y}$ with $\overline{y} \in W_2$, and set $L_j(y) = a_j$.

Set $V_+ := K^{n_+}$ and $R_+ : V_1 \rightarrow K^{n_+}$, $y \mapsto (L_1(y),\ldots,L_{n_+}(y))$.

Choose representatives $y_1,\ldots,y_{n_-}$ of a basis of $V_2/\text{Im }T$.

Similarly, we set $V_- := K^{n_-}$ and $R_- : (b_1,\ldots,b_{n_-}) \mapsto \sum_{j=1}^{n_-} b_j y_j$.

We now wish to show that

$$\begin{pmatrix} T & R_{-} \\ R_{+} & 0\end{pmatrix} : V_1 \oplus V_- \rightarrow V_2 \oplus V_+$$

is surjective. So what we have shown (I’ve omitted most of the proof, about $Tu + \sum_{j=1}^{n_-} b_j y_j = 0$ and such) is that if the operator is Fredholm, then we can set up this problem so that it is invertible. We also get that $E_{-+} : K^{n_+} \rightarrow K^{n_-}$ is clearly Fredholm since $n_+$ and $n_-$ are finite.
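The whole construction can be carried out concretely in finite dimensions. Here is a toy example of my own (the particular $T$, $R_\pm$ are my choices, not from the lecture): $T$ has a 2-dimensional kernel and a 1-dimensional cokernel; bordering it gives an invertible matrix, and $E_{-+}$ read off from the inverse has the same index.

```python
import numpy as np

# T : K^4 -> K^3 with ker T = span{e1, e4} (n_+ = 2), coker T = span{f3} (n_- = 1).
T = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 0.]])

R_plus = np.array([[1., 0., 0., 0.],     # reads off the two kernel coordinates
                   [0., 0., 0., 1.]])
R_minus = np.array([[0.], [0.], [1.]])   # hits a complement of Im T

# Bordered operator (T, R_-; R_+, 0) : K^4 + K^1 -> K^3 + K^2.
M = np.block([[T, R_minus],
              [R_plus, np.zeros((2, 1))]])
Minv = np.linalg.inv(M)                  # invertible: the Grushin problem is well posed

# E_{-+} is the lower-right 1x2 block of the inverse, mapping V_+ -> V_-.
E_mp = Minv[4:, 3:]

ind_T = (4 - np.linalg.matrix_rank(T)) - (3 - np.linalg.matrix_rank(T))
rank_E = np.linalg.matrix_rank(E_mp)
ind_E = (2 - rank_E) - (1 - rank_E)      # dim ker - dim coker of E_{-+}
```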

To show the other direction, first we observed that $R_+$ is surjective and $R_-$ is injective, by actually multiplying out $\begin{pmatrix}T & R_- \\ R_+ & 0 \end{pmatrix}\begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix}$ and $\begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix}\begin{pmatrix}T & R_- \\ R_+ & 0 \end{pmatrix}$.

Then, we took $\begin{pmatrix} u \\ u_-\end{pmatrix} \in V_1 \oplus V_-$ and $\begin{pmatrix} v \\ v_+\end{pmatrix} \in V_2 \oplus V_+$, and saw that $Tu = v$, $u_- = 0$ corresponds to $Ev + E_+ v_+ = u$ and $E_- v + E_{-+} v_+ = 0$.

Since $v$ is in the image of $T$, $E_-v$ is in the image of $E_{-+}$. This allows us to make a well defined map from $E_- : \text{Im }T \rightarrow \text{Im }E_{-+}$:

$$E_-^\# : V_2/\text{Im }T \rightarrow V_- / \text{Im }E_{-+}$$

Then by playing around with the above results, we get injectivity. It is also surjective, so $V_2/\text{Im }T$ and $V_-/\text{Im }E_{-+}$ are isomorphic.

Now we can define a map $E_+ : \ker E_{-+} \rightarrow \ker T$. We get that $E_+$ is injective on all of $V_+$, and thus on $\ker E_{-+}$, from the fact that $R_+ E_+$ is the identity. Surjectivity comes from the same two equations in the iff statement.

The main helpful idea was that $Tu = v$, $u_- = 0$ is equivalent to $Ev + E_+ v_+ = u$ and $E_- v + E_{-+} v_+ = 0$.

Theorem. If $\text{ind }T = 0$, then there exists a finite rank $S$ such that $T + S$ is invertible.

We can perturb our operator by a finite rank operator and so on.

The proof is similar to the above.

We can write $V_1 \simeq \ker T \oplus W_1$ and $V_2 \simeq W_2 \oplus \text{Im }T$, so that the matrices of $T$ and $S$ are

$$T = \begin{pmatrix} 0 & 0 \\ 0 & \tilde{T} \end{pmatrix}, \qquad S = \begin{pmatrix} \tilde{S} & 0 \\ 0 & 0 \end{pmatrix}$$

with $\tilde{T} : W_1 \rightarrow \text{Im }T$ and $\tilde{S} : \ker T \rightarrow W_2$ invertible.
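In matrix form this is very concrete; a minimal numerical sketch of my own (the particular numbers are arbitrary): $T = \mathrm{diag}(0, 1, 2)$ has index $0$, and a rank-one $S$ sending $\ker T$ onto the missing direction makes $T + S$ invertible.

```python
import numpy as np

# T with one-dimensional kernel (e1) and one-dimensional cokernel (f1),
# so ind T = 1 - 1 = 0.
T = np.diag([0., 1., 2.])

# Finite rank S: the invertible map S~ : ker T -> W2, here e1 |-> f1.
S = np.zeros((3, 3))
S[0, 0] = 1.0

det_TS = np.linalg.det(T + S)   # T + S = diag(1, 1, 2)
```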

Convex Sets

Now we take $K = \mathbb{R}$.

A set $A \subseteq V$ (over $K = \mathbb{R}$) is convex if for all $x \in A$ and $y \in V$, the set $\{t \in \mathbb{R} : x + ty \in A\}$ is an interval in $\mathbb{R}$.

The reason we define it this way, as opposed to via the canonical definition, is that we want to say that a convex set is linearly open if each such interval is open.

Geometric Hahn-Banach Theorem. If $0 \not\in A$ and $A$ is a convex linearly open set, then there exists a hyperplane $H$ such that $H \cap A = \varnothing$. In more generality, if $V_1 \cap A = \varnothing$ and …

Hahn-Banach Theorem

This one was taught by Zhongkai.

Theorem. Let $V$ be a vector space over $\mathbb{R}$, and let $A \subseteq V$ be convex and linearly open. If $V_1 \subseteq V$ is a linear subspace with $V_1 \cap A = \varnothing$, then there exists a hyperplane $V_2$ such that $V_1 \subseteq V_2$ and $V_2 \cap A = \varnothing$.

Proof.

He first looked at $V = \mathbb{R}^2$ and made the observation that for any $x \in V$, if the half line $\{tx : t \ge 0\}$ intersects $A$, then $\{tx : t \le 0\} \cap A = \varnothing$ (or else $0 \in A$ by convexity).

He did a ton of stuff and then had to use Zorn’s Lemma.

Now we will stop this topic for a while and go back to Hahn Banach with a version in topology or something.

Metrics and Topologies

Defined topological spaces and topological structures. Defined metric spaces as well, and how a metric space induces a topology.

Defined closedness and also what it means for a set to be complete.

Theorem (Baire Category Theorem). If $(E,d)$ is a complete metric space and $\{U_n\}_{n=1}^\infty$ is a countable family of dense open sets, then $\bigcap_{n=1}^\infty U_n$ is dense in $E$.

A set $A \subseteq E$ is called nowhere dense if its closure $\overline{A}$ has no interior. A set is called first category if it is a countable union of nowhere dense sets, and second category if it is not first category.

Baire Category Theorem


Definition. $A \subseteq E$ is called generic if it contains a countable intersection of open dense sets.

Example. $E = C^0([0,1]; \mathbb{R})$ with $d(f,g) = \sup_{x\in [0,1]}|f(x)-g(x)|$.

Take $A$ to be the set of nowhere differentiable functions, as a subset of $E$.

$$F_n = \{f \in E : \exists x_0 \in [0,1] \text{ s.t. } |f(x) - f(x_0)| \le n|x-x_0| \text{ for all } x \in [0,1]\}$$

While discussing this, he did a proof by picture, drawing a zigzag function whose slope on any small interval is either $2n$ or $-2n$.

Tychonov Theorem

Locally convex, balanced, absorbing.

  • $M \subseteq V$ is called balanced if for all $x \in M$ and $|a| \le 1$, $ax \in M$.
  • $M \subseteq V$ is called absorbing if for all $x \in V$ and all $|a|$ sufficiently small, $ax \in M$.

Theorem. A Hausdorff, first countable TVS is locally convex if and only if its topology is given by a countable family of seminorms. In this case, the topology is metrizable.

Locally convex: $N$ is open, balanced, and absorbing, and we set $p_N(x) := \inf\{t > 0 : \frac{x}{t} \in N\}$ (the Minkowski functional).

Seminorm neighborhoods $N_{i,j} = \{x \in V : p_i(x) < \epsilon_j\}$, with $\epsilon_j \rightarrow 0$. Hausdorff implies $\bigcap_{i,j}N_{i,j} = \{0\}$, which is how we can guarantee that $d(x,y) = 0 \Rightarrow p_i(x-y) = 0 \Rightarrow x - y = 0$. There is the classical construction $$d(x,y) = \sum_{n=1}^\infty 2^{-n} \frac{p_n(x-y)}{1+p_n(x-y)}.$$ The triangle inequality is a bit tricky, but we can show it from the regular triangle inequality for each $p_n$.
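To see the construction in action, here is a small numerical sketch of my own, using the seminorms $p_n(f) = \sup_{[-n,n]}|f|$ on continuous functions (approximated on a grid, and with the sum truncated, both my assumptions):

```python
import numpy as np

def seminorm(f, n, pts=2001):
    """p_n(f): sup of |f| over [-n, n], approximated on a grid."""
    x = np.linspace(-n, n, pts)
    return float(np.max(np.abs(f(x))))

def d(f, g, N=20):
    """Truncation of d(f,g) = sum_n 2^{-n} p_n(f-g) / (1 + p_n(f-g))."""
    total = 0.0
    for n in range(1, N + 1):
        p = seminorm(lambda x: f(x) - g(x), n)
        total += 2.0 ** (-n) * p / (1.0 + p)
    return total

d_ff = d(np.sin, np.sin)      # the metric vanishes on the diagonal
d_fh = d(np.sin, np.tanh)
d_via_g = d(np.sin, np.cos) + d(np.cos, np.tanh)   # triangle inequality detour
```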

The TVSs form a superset of the locally convex TVSs, which contain the seminorm spaces, which contain the FrΓ©chet spaces, which contain the Banach spaces, which contain the Hilbert spaces.

Metrizable Locally Convex Topological Vector Spaces

As a consequence of a bunch of theorems, such a topology is defined using a countable family of seminorms $p_j : V \rightarrow [0,\infty)$ with $p_j(x+y) \le p_j(x) + p_j(y)$ and $p_j(ax) = |a|\,p_j(x)$; i.e., $x_n \rightarrow x$ is equivalent to $p_j(x_n - x) \rightarrow 0$ as $n \rightarrow \infty$ for all $j$.

$(V, \{p_j\})$ is called a FrΓ©chet space if and only if it is complete.

Seminormed space (i.e. a topological vector space defined by one seminorm).

The $\|f\|_p$ norm is a seminorm, based on Minkowski’s inequality or something.

For $W \subseteq V$, we would like to define a norm on $V/W$: $$\tilde{p}(x+W) := \inf_{y \in x + W} p(y)$$ $\tilde{p}$ is continuous if and only if $W$ is closed.

There was an issue in the theorem statement; basically we have a seminorm, but we want to show that when we quotient by the points where it vanishes, we get a norm.

Claim. $\tilde{p}$ is a norm on $V/W$ if and only if $W$ is closed.

Now notice that $C(\mathbb{R}^n)$ is not a normed space. Why? Suppose there exists a norm $p$ such that $\forall j\ \exists c_j$ with $p_j(x) \le c_j p(x)$. Just take $f$ with $f(j) = jc_j$ (connected by linear pieces in between); then the condition would give $jc_j \le c_j p(f)$, which is impossible if it has to hold for all $j$.

We have a whole bunch of different Hahn-Banach theorems, all based on the geometric version. Here is one version:

Theorem. Suppose $V$ is a locally convex topological vector space and $W \subseteq V$ is a linear subspace. Then $x \in \overline{W}$ if and only if for all continuous linear $f : V \rightarrow K$ such that $f|_W = 0$, we have $f(x) = 0$.

Theorem (The Runge Approximation Theorem). Suppose $K \subseteq \mathbb{C}$ is compact and $u$ is holomorphic in an open neighborhood of $K$. If $\mathbb{C} \backslash K$ is connected, then for every $\epsilon > 0$ there exists a polynomial $p$ such that $\sup_K |p(x) - u(x)| < \epsilon$.

In an appendix of Rudin’s Real and Complex Analysis, there is a proof that you only need $u$ to be holomorphic on the interior of $K$. Here $\|u\| = \sup_{x \in K} |u(x)|$.

There is a nontrivial fact: every continuous linear functional $f$ from … satisfies $f(u) = \int u(z)\, d\mu(z)$ for a finite complex supported something. (I think he’s saying we can find $\mu$ so that this works for any such functional $f$.) The proof is apparently also in the book by Rudin that he mentioned.

Suppose vv is compactly supported and smooth in C\mathbb{C}.

$$\overline{\partial} = \frac{1}{2}(\partial_x + i\partial_y)$$ For $u \in C^1(U)$: $u \in \mathcal{O}(U) \Leftrightarrow \overline{\partial} u = 0$.

$$\int_{\mathbb{C}} \overline{\partial}v(z) \; dm(z) = 0, \qquad -\frac{1}{\pi}\int_{\mathbb{C}} \overline{\partial} v(z)\, (z - \zeta)^{-1} \; dm(z) = v(\zeta)$$ This is basically proved using Green’s formula to reduce it to Cauchy’s integral formula.

From Hahn-Banach, we need: if $\int \zeta^n \; d\mu(\zeta) = 0$ for all $n$, for a finite measure $\mu$ supported in $K$, then $\int u(\zeta)\, d\mu(\zeta) = 0$. The left hand side of the implication is the condition that $\mu$ vanishes on the space of polynomials.

Eventually, one shows that $z \mapsto \int (\zeta - z)^{-1} d\mu(\zeta)$ is holomorphic on $K^c$.

Next time we’ll solve PDEs using Hahn-Banach theorem :eyes:.

Recollection of Theorems

Geometric Version of Hahn-Banach. Let $V$ be a real vector space and $A \subseteq V$ be a convex linearly open set. If $F$ is an affine subspace with $F \cap A = \varnothing$, then there exists an affine hyperplane $H$ such that $H \cap A = \varnothing$ and $H \supseteq F$.

Theorem. If $V$ is a topological vector space over $K = \mathbb{R}$ or $\mathbb{C}$, $A$ is convex, open, and nonempty, and $F$ is an affine subspace such that $F \cap A = \varnothing$, then there exists a closed affine hyperplane $H$ with $F \subseteq H$ and $H \cap A = \varnothing$.

  • If $K = \mathbb{R}$, apply the general theory to get $H \supseteq F$, $H \cap A = \varnothing$, $\text{codim }H = 1$; all we need is $\overline{H} = H$, which holds since $\overline{H} \neq V$ as $A$ is open and nonempty.
  • If $K = \mathbb{C}$, take $V$ as a real vector space to get some $H$ from the first part, then consider $H_1 = H \cap (iH)$. It is not too hard to show that this works.

Theorem. Suppose $V$ is a locally convex topological vector space, $A \subseteq V$ is a closed convex set, and $x \not\in A$. Then there exists a continuous linear $f : V \rightarrow K$ such that

$$\inf_{y\in A} |f(x) - f(y)| > 0$$

(in particular, the affine hyperplane $H = \{y : f(y) = f(x)\}$ satisfies $H \cap A = \varnothing$).

As a corollary,

Corollary. For a linear subspace $W \subseteq V$: $x \in \overline{W}$ if and only if for all continuous $f : V \rightarrow K$ with $f|_W = 0$, we have $f(x) = 0$.

Theorem. If $W \subseteq V$, $p$ is a seminorm on $V$, and $f : W \rightarrow K$ satisfies $|f(x)| \le p(x)$ on $W$, then there exists $f_1 : V \rightarrow K$ such that $|f_1(x)| \le p(x)$ and $f_1|_W = f$.

Remark. We defined Hilbert spaces and discussed inner products, and in the definition he said $(ax, y) = a(x,y)$. I pointed out that isn’t it nicer to have $(x, ay) = a(x,y)$, to which he asked if I was a physicist. Apparently that is the physics convention and the one he is using is the math convention, and to my surprise, he’s right: even Linear Algebra Done Right uses that convention! It’s funny that my mind has been rewired in this way, to think of inner products as applications of linear functionals via the notation itself.

Theorem (Riesz). Suppose $L : H \rightarrow K$ is linear and continuous. Then there exists a unique $y \in H$ such that $L(x) = (x,y)$ for all $x \in H$.

Remark. By constructing linear functionals, we can actually obtain elements of the Hilbert space. (There is a differential equations student seminar at 2 o’clock on Tuesday.)

Theorem (Application of Hahn-Banach and Riesz). For $\varphi \in C^2(\mathbb{C})$, suppose $\varphi : \mathbb{C} \rightarrow \mathbb{R}$ satisfies $\Delta \varphi(z) = \kappa(z) > 0$ for all $z \in \mathbb{C}$. Then for all $f \in L^2(\mathbb{C}, \kappa^{-1}e^{-\varphi}\; dm(z))$ (where $\|f\|_\psi^2 := \int_{\mathbb{C}} |f(z)|^2 e^{-\psi(z)}\; dm(z)$ with $\psi(z) = \varphi(z) + \log \kappa(z)$), there exists $u$ with $$\boxed{\overline{\partial} u = f}, \qquad \|u\|_\varphi \le \|f\|_{\varphi + \log \kappa}.$$

Remark. In our Hilbert space setting, Cauchy-Schwarz inequality implies the triangle inequality.

Theorem. Suppose $A$ is a closed convex subset of $H$ and $x \not\in A$. Then there exists a unique $y \in A$ such that $\|x-y\| = \inf_{z\in A} \|x-z\|$, and $L(z) := \langle z, x-y\rangle$ satisfies $\Re L(z-y) \le 0$ for $z \in A$. This implies $|L(z) - L(x)| \ge |L(y) - L(x)| = \|x-y\|^2 > 0$ for $z \in A$.
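As a sanity check of the variational inequality (my own example, using the closed unit ball in $\mathbb{R}^2$ as $A$, where the projection has a closed form $y = x/\|x\|$):

```python
import numpy as np

rng = np.random.default_rng(1)

# A = closed unit ball in R^2, x outside it; the closest point is x / ||x||.
x = np.array([3.0, 4.0])
y = x / np.linalg.norm(x)

# Check Re <z - y, x - y> <= 0 over many random z in A.
worst = -np.inf
for _ in range(1000):
    z = rng.standard_normal(2)
    z /= max(1.0, np.linalg.norm(z))        # push z into the unit ball
    worst = max(worst, float(np.dot(z - y, x - y)))

gap = np.linalg.norm(x - y) ** 2            # |L(y) - L(x)| = ||x - y||^2
```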

Remark. For the next theorem, he forgot who showed the 1-dimensional case. The 2-dimensional case was proved in Berkeley, and the $n$-dimensional case was shown by HΓΆrmander.

Remark. β€œOld mathematicians are like old boxers: they can still dish out punishment, but they cannot take any,” he says as he remarks that he doesn’t actually know how to solve three of the problems on the homework.



As a fun fact, it might seem like this website is flat because you're viewing it on a flat screen, but the curvature of this website actually isn't zero. ;-)

Copyright Β© 2024, Aathreya Kadambi

Made with Astrojs, React, and Tailwind.