I’m taking functional analysis with Professor Zworski!
Story 1: Linear Algebra
Relevant Lectures: Lecture 1
We started out by defining cosets and quotient spaces, and then proved the first isomorphism theorem for vector spaces. Hahaha this is a homework problem I did for COMPSCI 189 yesterday!
We study the rest of mathematics because linear algebra is too hard. - Taylor, according to Zworski.
Zworski’s Favorite Fact from Linear Algebra (Schur’s Complement Formula). Suppose $\begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}$ is invertible, so
$$\begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}^{-1} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}, \qquad S_{ij} : W_j \to V_i.$$
Then $T_{11}$ is invertible if and only if $S_{22}$ is invertible. In that case,
$$T_{11}^{-1} = S_{11} - S_{12} S_{22}^{-1} S_{21}.$$
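A quick numerical sanity check of the formula (my own sketch, not from lecture; the block sizes are arbitrary and numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 2x2 block matrix T = [[T11, T12], [T21, T22]]; with random
# entries, T, T11, and S22 are all invertible with probability 1.
T11 = rng.standard_normal((3, 3))
T12 = rng.standard_normal((3, 2))
T21 = rng.standard_normal((2, 3))
T22 = rng.standard_normal((2, 2))
T = np.block([[T11, T12], [T21, T22]])

# S = T^{-1}, partitioned the same way.
S = np.linalg.inv(T)
S11, S12 = S[:3, :3], S[:3, 3:]
S21, S22 = S[3:, :3], S[3:, 3:]

# Schur complement formula: T11^{-1} = S11 - S12 S22^{-1} S21.
lhs = np.linalg.inv(T11)
rhs = S11 - S12 @ np.linalg.inv(S22) @ S21
assert np.allclose(lhs, rhs)
```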
Why is this interesting? Suppose we start with $P : H_1 \to H_2$ and we can find $R_- : H_- \to H_2$ and $R_+ : H_1 \to H_+$ such that
$$\begin{pmatrix} P & R_- \\ R_+ & 0 \end{pmatrix} : H_1 \oplus H_- \to H_2 \oplus H_+$$
is invertible and
$$\begin{pmatrix} P & R_- \\ R_+ & 0 \end{pmatrix}^{-1} = \begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix} : H_2 \oplus H_+ \to H_1 \oplus H_-.$$
Then, by the theorem, $P$ is invertible if and only if $E_{-+}$ is, and in that case $P^{-1} = E - E_+ E_{-+}^{-1} E_-$. We have reduced the invertibility of $P$ to something else. It is used for reducing huge systems in linear algebra to much smaller finite ones.
Such a setup is called a Grushin problem. “The key here is stability under perturbations and so on.” Although I have no clue what perturbations he is talking about.
Example. Take $P = \lambda - J$ where $J$ is a Jordan block matrix (ones on the superdiagonal, zeros elsewhere).
Remark. There are two types of multiplicity for the eigenvalue $\lambda = 0$ of the Jordan block matrix: the geometric and algebraic multiplicities are $1$ and $n$, respectively. We can try to find $R_+$ and $R_-$.
We would like to compute the determinant of
$$\begin{pmatrix} \lambda - J & e_n \\ e_1^* & 0 \end{pmatrix}$$
where $e_n$ and $e_1$ are standard basis vectors. This matrix is invertible since it has determinant $-1$, so we can reduce the invertibility of $\lambda - J$ to another property. We can choose this $R_-$ and $R_+$ because one of them spans the kernel of $J$ and the other spans the kernel of $J^*$. In the general case, though, it is an art.
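Here is my own numerical check of this Grushin problem (numpy assumed; the size $n = 5$ is arbitrary):

```python
import numpy as np

n = 5
J = np.diag(np.ones(n - 1), k=1)      # Jordan block: ones on the superdiagonal
en = np.zeros((n, 1)); en[-1, 0] = 1.0   # spans ker J*
e1 = np.zeros((1, n)); e1[0, 0] = 1.0    # spans ker J

def grushin(lam):
    # The bordered matrix  [[lam - J, e_n], [e_1^*, 0]]  from the notes.
    return np.block([[lam * np.eye(n) - J, en], [e1, np.zeros((1, 1))]])

# The bordered matrix has determinant -1 (for every lam, in fact):
print(round(np.linalg.det(grushin(0.0))))   # -1

# E_{-+}(lam), the bottom-right entry of the inverse, vanishes exactly when
# lam - J fails to be invertible, i.e. only at lam = 0; one can check that
# here E_{-+}(lam) = -lam^n.
assert abs(np.linalg.inv(grushin(0.0))[-1, -1]) < 1e-12
assert abs(np.linalg.inv(grushin(0.5))[-1, -1] + 0.5 ** 5) < 1e-9
```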
Story 2: Dimension
Relevant Lectures: 1
The dimension of $V$ being $n$ is the same as saying there exists a corresponding basis of $n$ vectors for $V$. This is for finite dimensional vector spaces, and if this is not possible, we say that the dimension is infinite. Hörmander actually has another different amusing definition of dimension for vector spaces.
Equivalently, dim is a functor from the category of vector spaces to $\mathbb{N} \cup \{\infty\}$ such that
$\dim K^n = n$,
the existence of a surjective $T : V_1 \to V_2$ is equivalent to $\dim V_1 \ge \dim V_2$,
the existence of an injective $T : V_2 \to V_1$ is equivalent to $\dim V_1 \le \dim V_2$.
Zworski kept laughing while writing this.
Example.dim(V1/kerT)=dimIm T.
We say:
$$\operatorname{Rank} T = \dim \operatorname{Im} T = \operatorname{codim} \ker T$$
where $\operatorname{codim} W = \dim V/W$ for $W \subseteq V$.
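As a sanity check (my own, with numpy and made-up sizes; $T : K^7 \to K^4$ of rank 3):

```python
import numpy as np

rng = np.random.default_rng(1)
# A map K^7 -> K^4 of rank 3 (product of random 4x3 and 3x7 factors).
T = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 7))

rank = np.linalg.matrix_rank(T)      # dim Im T
dim_ker = T.shape[1] - rank          # dim ker T, by rank-nullity
codim_ker = T.shape[1] - dim_ker     # codim ker T = dim V1 - dim ker T

assert rank == 3
assert rank == codim_ker             # Rank T = codim ker T
```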
Theorem. Suppose $W \subseteq V$. Then $\dim W + \operatorname{codim} W = \dim V$.
Proof.
Since dimW≤dimV and codim W≤dimV, we can assume that dimW and codim W are finite, because otherwise we just have infinities summing together.
Let $T_1 : K^n \to W$ and $T_2 : K^m \to V/W$ be bijections. There exists $\widetilde{T}_2 : K^m \to V$ such that $\pi \circ \widetilde{T}_2 = T_2$, where $\pi : V \to V/W$. The tilde mappings are called “lifts” of the non-tilde mappings.
Writing $T_2 e_j = x_j + W$, define $\widetilde{T}_2\left(\sum a_j e_j\right) := \sum a_j x_j$, where the $e_j$ form a basis of $K^m$. Set
$$T := T_1 \oplus \widetilde{T}_2 : K^{n+m} \ni (a, b) \mapsto T_1 a + \widetilde{T}_2 b \in V.$$
Claim. $T$ is an isomorphism.
Injectivity: if $T(a,b) = 0$, then $0 = \pi(T_1 a + \widetilde{T}_2 b) = T_2 b$, so $b = 0$, hence $T_1 a = 0$ and $a = 0$.
Surjectivity: suppose $x \in V$. Since $T_2$ is surjective, $x = \sum b_j x_j + y$ with $y \in W$. But $T_1$ is surjective onto $W$, so there exists $a$ such that $T_1 a = y$.
Theorem. Take T:V1→V2. Then, dimIm T+dimkerT=dimV1, and codim Im T+codim kerT=dimV2.
Remark. These say the same thing in finite dimensions, but in infinite dimensions one might be trivial while the other is useful. Apparently long ago in Zworski’s linear algebra class they did this proof via row and column operations, to prove the equivalent statement that the row rank is the same as the column rank. With algebra we can do it much more easily.
If $\dim V_j < \infty$, then we define the index:
$$\operatorname{ind}(T) = \dim \ker T - \operatorname{codim} \operatorname{Im} T = \dim V_1 - \dim V_2.$$
Story 3: Index of a Transformation
Relevant Lectures: Lecture 1, Lecture 2
Suppose dimkerT<∞ or dim(V2/Im T)<∞. Then
ind(T)=dimkerT−codim Im T
Theorem 1. Suppose that $T_1 : V_1 \to V_2$ and $T_2 : V_2 \to V_3$, and that for each $j$, $\dim \ker T_j < \infty$ or $\dim \operatorname{coker} T_j < \infty$ (where $\operatorname{coker} T = V_2/\operatorname{Im} T$). Then the index of $T_2 T_1$ is well defined, and
$$\operatorname{ind}(T_2 T_1) = \operatorname{ind}(T_1) + \operatorname{ind}(T_2)$$
follows from the definition of the index.
Theorem 2. If S:V1→V2 has finite rank and ind T is defined, then ind of T+S is well defined and ind(T+S)=ind(T).
$$0 \to V_0 \xrightarrow{T_0} V_1 \xrightarrow{T_1} \cdots \xrightarrow{T_{N-1}} V_N \to 0$$
is called a complex if $T_{k+1} T_k = 0$, i.e. $\ker T_{k+1} \supseteq \operatorname{Im} T_k$.
An example of a complex from MATH 53:
Example.V0=V3=C∞(R3,R), V1=V2=C∞(R3,R3), T0=∇, T1=∇×, T2=∇⋅. For sophisticated people, this is a special example of the de Rham complex.
A complex is called exact if Im Tk=kerTk+1.
Example. The complex from the previous example is actually exact! Lol this is related to the question I asked the quantum mechanics professor.
Example (Exact sequences).
$N = 0$: then $T_0$ is a bijection.
$N = 1$: $0 \to W \hookrightarrow V \xrightarrow{\pi} V/W \to 0$.
Theorem. For exact complexes,
$$\sum \dim V_{2j} = \sum \dim V_{2j+1}.$$
If $\dim V_j < \infty$ for all $j$, then $\sum (-1)^j \dim V_j = 0$.
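A toy check of the alternating-sum identity (my own example, not from lecture):

```python
import numpy as np

# An exact complex  0 -> R^1 --A--> R^3 --B--> R^2 -> 0:
# A injective, B surjective, and Im A = ker B.
A = np.array([[1.0], [0.0], [0.0]])          # R^1 -> R^3, image = span{e1}
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])              # R^3 -> R^2, kernel = span{e1}

assert np.allclose(B @ A, 0)                 # complex: B A = 0
assert np.linalg.matrix_rank(A) == 1         # A injective
assert np.linalg.matrix_rank(B) == 2         # B surjective
# rank A = 1 = 3 - rank B = dim ker B, so the complex is exact, and the
# alternating sum of dimensions vanishes: 1 - 3 + 2 = 0.
dims = [1, 3, 2]
assert sum((-1) ** j * d for j, d in enumerate(dims)) == 0
```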
This result will allow us to show Theorem 2 by constructing an appropriate exact complex.
Proof.
Let $R_{k+1} = \ker T_{k+1} = \operatorname{Im} T_k$. Then
$$\dim V_k = \dim R_k + \dim R_{k+1},$$
since $V_k / \ker T_k \cong \operatorname{Im} T_k$. Now,
$$\sum_k \dim V_{2k} = \sum_k \left(\dim R_{2k} + \dim R_{2k+1}\right) = \sum_j \dim R_j = \sum_k \left(\dim R_{2k+1} + \dim R_{2k+2}\right) = \sum_k \dim V_{2k+1}.$$
Remark. Another method is to actually shorten the exact sequence by taking quotients and then do stuff.
Story 4: Fredholm Operators
Recall that an operator T:V1→V2 is Fredholm when dimkerT<∞ and dimcoker T<∞.
ind(T)=dimkerT−dimcoker T
Example. $V_1 = V_2 = \{a = (a_1, a_2, a_3, \ldots) : a_j \in K\}$. For $n \in \mathbb{Z}$,
$$T_n a = (a_{n+1}, a_{n+2}, \ldots), \qquad \text{with the convention } a_l = 0 \text{ for } l \le 0.$$
To be honest I’m a bit unsure on what was going on here.
$$\dim \ker T_n = \begin{cases} 0 & n \le 0 \\ n & n > 0 \end{cases}$$
while $\dim \operatorname{coker} T_n = -n$ for $n \le 0$ and $0$ for $n > 0$, so $\operatorname{ind}(T_n) = n$ in every case.
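Here is my attempt at a finite-dimensional analogue (my own sketch; the honest example lives on infinite sequences, but the same index bookkeeping survives truncation):

```python
import numpy as np

# Finite-dimensional shift analogue: the map K^m -> K^{m-n} that drops the
# first n coordinates, (a_1, ..., a_m) -> (a_{n+1}, ..., a_m).
m, n = 7, 3
T = np.hstack([np.zeros((m - n, n)), np.eye(m - n)])

rank = np.linalg.matrix_rank(T)
dim_ker = m - rank                  # = n
codim_im = (m - n) - rank           # = 0, since T is surjective
assert dim_ker - codim_im == n      # ind T = n = dim V1 - dim V2
```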
Last time, we said that $V_1 \subseteq V$ and $x \notin V_1$ implies there is a hyperplane $V_2$ such that $V_2$ contains $V_1$ and $x$ isn’t in $V_2$.
Theorem. Suppose $W_1 \subseteq V$, $\dim W_1 < \infty$. Then there exists $W_2$ such that $W_1 \cap W_2 = \{0\}$, $W_1 + W_2 = V$, and $\operatorname{codim} W_2 = \dim W_1$.
Proof Sketch.
Take $x \in W_1 \setminus \{0\}$. There is a hyperplane $H_1$ such that $x \notin H_1$, so that $H_1 + W_1 = V$, and so
$$\dim(W_1 \cap H_1) + \operatorname{codim} H_1 = \operatorname{codim}(W_1 + H_1) + \dim W_1,$$
so $\dim(W_1 \cap H_1) = \dim W_1 - 1$. So now repeating this process, we can find:
$$\dim(W_1 \cap H_1 \cap H_2 \cap \cdots \cap H_k) = \dim W_1 - k$$
for $0 \le k \le \dim W_1 = d$, so that $W_2 = H_1 \cap \cdots \cap H_d$. Now we get:
Theorem. $T : V_1 \to V_2$ is a Fredholm operator if and only if there exist $R_- : V_- \to V_2$ and $R_+ : V_1 \to V_+$ (with $\dim V_\pm < \infty$) such that
$$\begin{pmatrix} T & R_- \\ R_+ & 0 \end{pmatrix}^{-1} = \begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix}$$
exists and $E_{-+}$ is a Fredholm operator, in which case $\operatorname{ind} T = \operatorname{ind} E_{-+}$.
For the first direction, we use the previous theorem to get a $W_2$ satisfying $\ker T \cap W_2 = \{0\}$, $W_2 + \ker T = V_1$, $\dim \ker T = n_+$. Let $x_1, \ldots, x_{n_+}$ be a basis of $\ker T$. There exist $L_j : V_1 \to K$ such that $L_j(x_i) = \delta_{ij}$: any $y \in V_1$ can be written $y = \sum a_j x_j + y'$ with $y' \in W_2$, and we set $L_j(y) = a_j$.
Set $V_+ := K^{n_+}$ and $R_+ : V_1 \to K^{n_+}$, $y \mapsto (L_1(y), \ldots, L_{n_+}(y))$.
Choose $y_1, \ldots, y_{n_-}$ whose classes form a basis of $V_2 / \operatorname{Im} T$.
Similarly, set $V_- := K^{n_-}$ and $R_- : (b_1, \ldots, b_{n_-}) \mapsto \sum_{j=1}^{n_-} b_j y_j$.
We now wish to show that
$$\begin{pmatrix} T & R_- \\ R_+ & 0 \end{pmatrix} : V_1 \oplus V_- \to V_2 \oplus V_+$$
is invertible. So what we have shown (I’ve omitted most of the proof about $Tu + \sum_{j=1}^{n_-} b_j y_j = v$ and so on) is that if the operator is Fredholm, then we can set up this matrix so that it is invertible. We also get that $E_{-+} : K^{n_+} \to K^{n_-}$ is clearly Fredholm since $n_+$ and $n_-$ are finite.
To show the other direction, first we observed that $R_+$ is surjective and $R_-$ is injective, by actually multiplying out
$$\begin{pmatrix} T & R_- \\ R_+ & 0 \end{pmatrix} \begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix} \begin{pmatrix} T & R_- \\ R_+ & 0 \end{pmatrix}.$$
Then, we took $(u, u_-) \in V_1 \oplus V_-$ and $(v, v_+) \in V_2 \oplus V_+$ related by this system, and saw that $Tu = v$, $u_- = 0$ corresponds to $Ev + E_+ v_+ = u$ and $E_- v + E_{-+} v_+ = 0$.
Since $v \in \operatorname{Im} T$ implies $E_- v \in \operatorname{Im} E_{-+}$, the map $E_-$ induces a well-defined map
$$E_-^{\#} : V_2 / \operatorname{Im} T \to V_- / \operatorname{Im} E_{-+}.$$
Then by playing around with the above results, we get injectivity. It is also surjective, so $V_- / \operatorname{Im} E_{-+}$ and $V_2 / \operatorname{Im} T$ are isomorphic and in particular have the same dimension.
Now we can define a map $E_+ : \ker E_{-+} \to \ker T$. We get that $E_+$ is injective on all of $V_+$, and thus on $\ker E_{-+}$, from the fact that $R_+ E_+$ is the identity. Surjectivity comes from the same two equations in the iff statement.
The main helpful idea was that $Tu = v$, $u_- = 0$ is equivalent to $Ev + E_+ v_+ = u$ and $E_- v + E_{-+} v_+ = 0$.
Theorem. If ind T=0, then there exists S finite rank such that T+S is invertible.
We can perturb our operator by a finite rank operator and stuff.
Proof similar to above.
We can write $V_1 \simeq \ker T \oplus W_1$ and $V_2 \simeq W_2 \oplus \operatorname{Im} T$, so that the matrices of $T$ and $S$ are
$$T = \begin{pmatrix} 0 & 0 \\ 0 & \widetilde{T} \end{pmatrix}, \qquad S = \begin{pmatrix} \widetilde{S} & 0 \\ 0 & 0 \end{pmatrix},$$
with $\widetilde{T} : W_1 \to \operatorname{Im} T$ and $\widetilde{S} : \ker T \to W_2$ invertible (the latter is possible because $\operatorname{ind} T = 0$ gives $\dim \ker T = \dim W_2$).
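A small numerical instance of this block construction (my own sketch; the $3 \times 3$ example is made up):

```python
import numpy as np

# T has index 0, with 1-dimensional kernel and 1-dimensional cokernel;
# the finite-rank S maps ker T isomorphically onto a complement of Im T.
T = np.diag([0.0, 2.0, 3.0])         # ker T = span{e1}, Im T = span{e2, e3}
S = np.zeros((3, 3))
S[0, 0] = 1.0                        # rank-one map ker T -> span{e1}

assert np.linalg.matrix_rank(T) == 2       # T is singular
assert np.linalg.matrix_rank(S) == 1       # S has finite rank (here, one)
assert abs(np.linalg.det(T + S)) > 0       # T + S is invertible
```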
Convex Sets
Now we take K=R.
A set $A \subseteq V$ (with $K = \mathbb{R}$) is convex if for all $x, y \in V$, the set $\{t \in \mathbb{R} : x + ty \in A\}$ is an interval in $\mathbb{R}$.
The reason we define it weirdly like this, as opposed to the canonical definition, is that we then get to say that a convex set is linearly open if all of those intervals are open.
Geometric Hahn-Banach Theorem. If $0 \notin A$ and $A$ is a convex linearly open set, then there exists a hyperplane disjoint from $A$. In more generality, if $V_1 \cap A = \emptyset$ and …
Hahn-Banach Theorem
This one was taught by Zhongkai.
Theorem. Let $V$ be a vector space over $\mathbb{R}$, $A \subseteq V$ convex and linearly open, and $V_1 \subseteq V$ a linear subspace with $V_1 \cap A = \emptyset$. Then there exists a hyperplane $V_2$ with $V_1 \subseteq V_2$ and $V_2 \cap A = \emptyset$.
Proof.
First we looked at $V = \mathbb{R}^2$ and made the observation that for any $x \in V$, if the half line $\{tx : t \ge 0\}$ intersects $A$, then $\{tx : t \le 0\} \cap A = \emptyset$ (or else $0 \in A$ by convexity).
He did a ton of stuff and then had to use Zorn’s Lemma.
Now we will stop this topic for a while and go back to Hahn Banach with a version in topology or something.
Metrics and Topologies
Defined topological space and topological structure. Defined Metric space as well, and how metric space induces a topology.
Defined closed and also what it means for a set to be complete.
Theorem (Baire Category Theorem). If $(E,d)$ is a complete metric space and $\{U_n\}_{n=1}^\infty$ is a countable family of dense open sets, then $\bigcap_{n=1}^\infty U_n$ is dense in $E$.
A set $A \subseteq E$ is called nowhere dense if its closure $\overline{A}$ has empty interior. A set is called first category if it is a countable union of nowhere dense sets. A set is called second category if it is NOT first category.
Baire Category Theorem
Definition.A⊆E is called generic if it contains the countable intersection of open dense sets.
Take $A$ to be the set of nowhere differentiable functions as a subset of $E$ (here $E = C([0,1])$ with the sup metric).
$$F_n = \{f \in E : \exists x_0 \in [0,1] \text{ s.t. } |f(x) - f(x_0)| \le n|x - x_0| \text{ for all } x\}$$
When discussing something, he did a proof by picture where he drew a zig zaggy function with slope on any interval being either 2n or −2n.
Tychonov Theorem
Locally convex, balanced, absorbing.
$M \subseteq V$ is called balanced if for all $x \in M$ and $|a| \le 1$, $ax \in M$.
$M \subseteq V$ is called absorbing if for all $x \in V$ and all $|a|$ sufficiently small, $ax \in M$.
Theorem. A Hausdorff, first countable TVS is locally convex if and only if the topology is given by a countable family of seminorms. In this case, the topology is metrizable.
Locally convex: there is a neighborhood basis of sets $N$ that are open, balanced, absorbing, and convex. The associated Minkowski functional is
$$p_N(x) := \inf\{t > 0 : x/t \in N\}.$$
Seminorm balls $N_{i,j} = \{x \in V : p_i(x) < \epsilon_j\}$, $\epsilon_j \to 0$. Hausdorff implies $\bigcap_{i,j} N_{i,j} = \{0\}$, which is how we can guarantee that $d(x,y) = 0 \Rightarrow p_i(x-y) = 0$ for all $i$ $\Rightarrow x - y = 0$.
Some classical construction:
$$d(x,y) = \sum_{n=1}^\infty 2^{-n} \frac{p_n(x-y)}{1 + p_n(x-y)}$$
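A quick numerical check of this metric (my own sketch; here the seminorms are just the coordinate absolute values $p_n(x) = |x_n|$ on $\mathbb{R}^5$):

```python
import numpy as np

def d(x, y):
    # Classical metric built from countably many seminorms, truncated to
    # the finitely many seminorms p_n(x) = |x_n| that are nonzero here.
    p = np.abs(x - y)
    return np.sum(2.0 ** -np.arange(1, len(p) + 1) * p / (1 + p))

rng = np.random.default_rng(2)
x, y, z = rng.standard_normal((3, 5))
assert d(x, z) <= d(x, y) + d(y, z) + 1e-12   # triangle inequality
assert d(x, x) == 0 and d(x, y) == d(y, x)    # metric axioms
```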
Triangle inequality is a bit tricky but we can show it from the regular triangle inequality on pn.
The class of TVSs contains the locally convex TVSs, which contain the spaces defined by countably many seminorms, which contain the Fréchet spaces, which contain the Banach spaces, which contain the Hilbert spaces.
As a consequence of a bunch of theorems, a topology is defined using a countable family of seminorms $p_j : V \to [0, \infty)$, $p_j(x+y) \le p_j(x) + p_j(y)$, $p_j(ax) = |a| p_j(x)$; here $x_n \to x$ is equivalent to $p_j(x_n - x) \to 0$ as $n \to \infty$ for all $j$.
$(V, \{p_j\})$ is called a Fréchet space if and only if it is complete.
A seminormed space is a topological vector space whose topology is defined by one seminorm.
The $\|f\|_p$ norm is a seminorm, based on Minkowski’s inequality or something.
For $W \subseteq V$, we would like to define a norm on $V/W$. Set
$$\tilde{p}(x + W) := \inf_{y \in x + W} p(y).$$
There was an issue in the theorem statement; basically we have a seminorm, but we want to show that when we quotient by the points that go to zero, we get a norm.
Claim. $\tilde{p}$ is a norm on $V/W$ if and only if $W$ is closed.
Now notice that $C(\mathbb{R}^n)$ is not a normed space. Why? Suppose there exists a norm $p$ such that for all $j$ there exists $c_j$ with $p_j(x) \le c_j p(x)$. Just take $f$ with $f(j) = j c_j$ (connected by linear pieces in between); then the condition would give $j c_j \le p_j(f) \le c_j p(f)$, i.e. $j \le p(f)$, which is impossible if it has to happen over all $j$.
We have a whole bunch of different Hahn-Banach theorems, all based on the geometric version. Here is one version:
Theorem. Suppose $V$ is a locally convex topological vector space and $W \subseteq V$ is a linear subspace. Then $x \in \overline{W}$ if and only if for all continuous linear $f : V \to K$ such that $f|_W = 0$, we have $f(x) = 0$.
Theorem (The Runge Approximation Theorem). Suppose $K \subseteq \mathbb{C}$ is compact and $u$ is holomorphic in an open neighborhood of $K$. If $\mathbb{C} \setminus K$ is connected, then for every $\epsilon > 0$ there exists a polynomial $p$ such that $\sup_K |p(z) - u(z)| < \epsilon$.
In Rudin’s Real and Complex Analysis, in an appendix, there is a proof that you only need $u$ to be holomorphic on the interior of $K$ (Mergelyan’s theorem). Here $\|u\| = \sup_{z \in K} |u(z)|$.
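To see the theorem in action in the easiest possible case (my own sketch: $K$ is the closed unit disk, so $\mathbb{C} \setminus K$ is connected, $u(z) = 1/(2-z)$ is holomorphic near $K$, and the Taylor polynomials already do the job):

```python
import numpy as np

# The sup over K of a holomorphic function is attained on |z| = 1,
# so sample the unit circle.
theta = np.linspace(0, 2 * np.pi, 400)
z = np.exp(1j * theta)
u = 1.0 / (2.0 - z)

# Taylor polynomials of 1/(2 - z) = sum_k z^k / 2^{k+1}.
for N in [5, 20]:
    p = sum(z ** k / 2.0 ** (k + 1) for k in range(N + 1))
    print(N, np.max(np.abs(p - u)))   # sup-norm error decays like 2^{-N}

p20 = sum(z ** k / 2.0 ** (k + 1) for k in range(21))
assert np.max(np.abs(p20 - u)) < 1e-5
```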
There exists a nontrivial fact, which is that every continuous linear functional $f$ from … can be written as
$$f(u) = \int u(z)\, d\mu(z)$$
for a finite complex measure $\mu$ with compact support. (I think he’s saying we can find $\mu$ so that this works for any such functional $f$.) The proof is also in the book by Rudin that he mentioned, apparently.
Suppose $v$ is compactly supported and smooth in $\mathbb{C}$. With $\bar\partial = \frac{1}{2}(\partial_x + i \partial_y)$, for $u \in C^1(U)$ we have $u \in \mathcal{O}(U) \Leftrightarrow \bar\partial u = 0$. Moreover,
$$\int_{\mathbb{C}} \bar\partial v(z)\, dm(z) = 0, \qquad -\frac{1}{\pi} \int_{\mathbb{C}} \bar\partial v(z)\, (z - \zeta)^{-1}\, dm(z) = v(\zeta).$$
Basically this is proved using Green’s formula to reduce it to Cauchy’s integral formula.
From Hahn-Banach, we need: if $\int \zeta^n\, d\mu(\zeta) = 0$ for all $n \ge 0$, for a finite measure $\mu$ supported in $K$, then $\int u(\zeta)\, d\mu(\zeta) = 0$. The left hand side of the implication is the condition that $\mu$ vanishes on the space of polynomials.
Eventually, one shows that $z \mapsto \int (\zeta - z)^{-1}\, d\mu(\zeta)$ is holomorphic on $K^c$.
Next time we’ll solve PDEs using Hahn-Banach theorem :eyes:.
Recollection of Theorems
Geometric Version of Hahn-Banach. Let $V$ be a real vector space and $A \subseteq V$ be a convex linearly open set. Then if $F$ is an affine subspace with $F \cap A = \emptyset$, there exists an affine hyperplane $H$ such that $H \cap A = \emptyset$ and $H \supseteq F$.
Theorem. If $V$ is a topological vector space over $K = \mathbb{R}$ or $\mathbb{C}$, $A$ is convex, open, and nonempty, and $F$ is an affine subspace such that $F \cap A = \emptyset$, then there exists a closed affine hyperplane $H$ with $F \subseteq H$ and $H \cap A = \emptyset$.
If $K = \mathbb{R}$, apply the geometric theorem to get $H \supseteq F$, $H \cap A = \emptyset$, $\operatorname{codim} H = 1$; all we need is $H = \overline{H}$, which holds since $\overline{H} \ne V$ ($A$ is open, nonempty, and disjoint from $H$, and an affine hyperplane is either closed or dense).
If $K = \mathbb{C}$, take $V$ as a real vector space to get some $H$ from the first part, then consider $H_1 = H \cap (iH)$. Then it is not too hard to show that this works.
Theorem. Let $A \subseteq V$, where $V$ is a locally convex topological vector space, $A$ is a closed convex set, and $x \notin A$. Then there exists $f : V \to K$, continuous and linear, such that
$$\inf_{y \in A} |f(x) - f(y)| > 0$$
(in particular, the affine hyperplane $H = \{y : f(y) = f(x)\}$ satisfies $H \cap A = \emptyset$).
As a corollary,
Corollary. For a linear subspace $W \subseteq V$: $x \in \overline{W}$ $\Leftrightarrow$ for all continuous $f : V \to K$ with $f|_W = 0$, we have $f(x) = 0$.
Theorem. If $W \subseteq V$, $p$ is a seminorm on $V$, and $f : W \to K$ satisfies $|f(x)| \le p(x)$ on $W$, then there exists $f_1 : V \to K$ such that $|f_1(x)| \le p(x)$ and $f_1|_W = f$.
Remark. We defined Hilbert space and discussed inner products, and in the definition he said (ax,y)=a(x,y). I pointed out that isn’t it nicer to have (x,ay)=a(x,y), to which he asked if I was a physicist. Apparently that is the physics notation and this version he is using is the math notation, and to my surprise, he’s right, even Linear Algebra Done Right actually used that convention! It’s funny that my mind has been rewired in this way to think of inner products as applications of linear functionals via the notation itself.
Theorem (Riesz). Suppose L:H→K, linear and continuous, then ∃!y∈H such that L(x)=(x,y).
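A finite-dimensional sanity check of the Riesz representation (my own sketch, on $H = \mathbb{C}^4$, using the math convention $(x, y) = \sum x_j \overline{y_j}$ from the remark above):

```python
import numpy as np

rng = np.random.default_rng(3)
c = rng.standard_normal(4) + 1j * rng.standard_normal(4)
L = lambda x: c @ x                 # a continuous linear functional, L(x) = sum c_j x_j

y = np.conj(c)                      # the (unique) representing vector
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.isclose(L(x), x @ np.conj(y))   # L(x) = (x, y)
```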
Remark. By constructing linear functionals, we can actually obtain elements of the Hilbert space. Differential equations student seminar at 2 o’clock on Tuesday.
Theorem (Application of Hahn-Banach and Riesz). For $\varphi \in C^2(\mathbb{C})$, suppose $\varphi : \mathbb{C} \to \mathbb{R}$ is such that $\Delta \varphi(z) = \kappa(z) > 0$ for all $z \in \mathbb{C}$. Then for all $f \in L^2(\mathbb{C}, \kappa^{-1} e^{-\varphi}\, dm(z))$ (where $\|f\|_\psi^2 := \int_{\mathbb{C}} |f(z)|^2 e^{-\psi(z)}\, dm(z)$ and $\psi(z) = \varphi(z) + \log \kappa(z)$), there exists $u$ with $\bar\partial u = f$ and $\|u\|_\varphi \le \|f\|_{\varphi + \log \kappa}$.
Remark. In our Hilbert space setting, Cauchy-Schwarz inequality implies the triangle inequality.
Theorem. Suppose $A$ is a closed convex subset of $H$ and $x \notin A$. Then there exists a unique $y \in A$ such that $\|x - y\| = \inf_{z \in A} \|x - z\|$, and $L(z) := \langle z, x - y \rangle$ satisfies $\Re L(z - y) \le 0$ for $z \in A$. This implies $|L(z) - L(x)| \ge |L(y) - L(x)| = \|x - y\|^2 > 0$ for $z \in A$.
Remark. For the next theorem, the 1 dimensional case was shown by someone he forgot. The 2 dimensional case was proved in Berkeley, and the $n$ dimensional case was shown by Hörmander.
Remark. “Old mathematicians are like old boxers, they can still dish out punishment but they cannot take any.” he says as he remarks that he doesn’t actually know how to solve three of the problems on the homework.
… (I may have missed some things in here)
Analytic Fredholm Theory
We say that $z \mapsto B(z)$ is meromorphic with a pole of finite rank at $z = z_0$ if there exist finite rank operators $B_1, \ldots, B_J$ and an analytic family of operators $z \mapsto \widetilde{B}(z)$, for $z$ near $z_0$, such that
$$B(z) = \sum_{j=1}^J (z - z_0)^{-j} B_j + \widetilde{B}(z).$$
Theorem. Suppose there is $z_0 \in \Omega$ at which $A(z_0)^{-1} : B_2 \to B_1$ exists. Then $z \mapsto A(z)^{-1}$ is a meromorphic family of operators with poles of finite rank.
Notice $\operatorname{ind}(A(z)) = \operatorname{ind}(A(z_0)) = 0$. Fix $w \in \Omega$, and write $\dim \ker A(w) = n = \dim \operatorname{coker} A(w)$. There exist $R_- : K^n \to B_2$ and $R_+ : B_1 \to K^n$ such that
$$\begin{pmatrix} A(w) & R_- \\ R_+ & 0 \end{pmatrix} : B_1 \oplus K^n \to B_2 \oplus K^n$$
is invertible. There exists a neighborhood $U(w)$ of $w$ on which the Schur complement setup persists, with $E_{-+}^w(z) : K^n \to K^n$. Now $A(z)^{-1}$ exists if and only if $E_{-+}^w(z)^{-1}$ exists, and then
$$A(z)^{-1} = E^w(z) - E_+^w(z)\, E_{-+}^w(z)^{-1}\, E_-^w(z).$$
Strategy. The idea in this lecture is in part about how we can expand things into Laurent series and work with those.
Theorem. Suppose $P : B_1 \to B_2$ with $B_1 \subseteq B_2$, and $P - zI$ is Fredholm for $z \in \Omega$, a connected open set. Assume there exists $z_0 \in \Omega$ such that $(P - z_0 I)^{-1}$ exists. Then $z \mapsto (P - zI)^{-1}$ is meromorphic in $\Omega$ with poles of finite rank. Moreover, $\Pi_z := \frac{1}{2\pi i} \oint (wI - P)^{-1}\, dw$, where the integral is over a positively oriented circle containing only the pole $z$, satisfies:
$$\Pi_z^2 = \Pi_z, \qquad \operatorname{rank} \Pi_z < \infty.$$
Proof.
$$\Pi_z = \frac{1}{2\pi i} \int_{\gamma_1} (w - P)^{-1}\, dw \qquad (\text{writing } w \text{ for } wI),$$
$$\Pi_z^2 = \left(\frac{1}{2\pi i}\right)^2 \int_{\gamma_1} \int_{\gamma_2} (w_1 - P)^{-1} (w_2 - P)^{-1}\, dw_1\, dw_2.$$
The resolvent identity:
$$(w_1 - P)^{-1} - (w_2 - P)^{-1} = (w_1 - P)^{-1}\big((w_2 - P) - (w_1 - P)\big)(w_2 - P)^{-1} = (w_2 - w_1)(w_1 - P)^{-1}(w_2 - P)^{-1},$$
so
$$(w_1 - P)^{-1}(w_2 - P)^{-1} = \frac{1}{w_2 - w_1}\big((w_1 - P)^{-1} - (w_2 - P)^{-1}\big),$$
and hence
$$\Pi_z^2 = \frac{1}{(2\pi i)^2} \int_{\gamma_1} \int_{\gamma_2} (w_1 - P)^{-1} \frac{1}{w_2 - w_1}\, dw_1\, dw_2 - \frac{1}{(2\pi i)^2} \int_{\gamma_1} \int_{\gamma_2} (w_2 - P)^{-1} \frac{1}{w_2 - w_1}\, dw_1\, dw_2.$$
Taking $\gamma_2$ outside $\gamma_1$: in the first term $\int_{\gamma_2} (w_2 - w_1)^{-1}\, dw_2 = 2\pi i$, while the second term vanishes since $\int_{\gamma_1} (w_2 - w_1)^{-1}\, dw_1 = 0$ ($w_2$ lies outside $\gamma_1$). Hence $\Pi_z^2 = \Pi_z$.
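I tried verifying the projection property numerically (my own sketch: $P$ a diagonal matrix, the contour a circle around a single eigenvalue, discretized with the trapezoid rule):

```python
import numpy as np

# P with eigenvalues 1, 4, 5; integrate the resolvent around the eigenvalue 1.
P = np.diag([1.0, 4.0, 5.0])
z0, r, M = 1.0, 1.0, 400                      # circle center, radius, # of nodes
theta = np.linspace(0, 2 * np.pi, M, endpoint=False)
w = z0 + r * np.exp(1j * theta)               # points on the contour
dw = 1j * r * np.exp(1j * theta) * (2 * np.pi / M)   # dw increments

# Pi = (1 / 2 pi i) * contour integral of (w I - P)^{-1} dw
Pi = sum(np.linalg.inv(wk * np.eye(3) - P) * dwk for wk, dwk in zip(w, dw))
Pi = Pi / (2j * np.pi)

assert np.allclose(Pi @ Pi, Pi, atol=1e-8)        # Pi^2 = Pi
assert np.linalg.matrix_rank(Pi.round(8)) == 1    # finite rank (here, 1)
```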
Weak Topology and Weak Star Topology and Other things
$F$, $G$ vector spaces, with a non-degenerate pairing $F \times G \ni (x, y) \mapsto \langle x, y \rangle$.
Fact: if $L : F \to K$ is continuous with respect to $\sigma(F, G)$, the topology on $F$ given by the seminorms $x \mapsto |\langle x, y \rangle|$ for $y \in G$, then there exists $y \in G$ such that $L(x) = \langle x, y \rangle$.
Banach-Alaoglu Theorem
Theorem. There exists a positive finite measure $\mu$ such that
$$\frac{1}{\pi} \int \operatorname{Im} f(x + iy)\, \varphi(x)\, dx \to \int \varphi(x)\, d\mu(x) \quad (y \to 0^+)$$
and $f(z) = \int \frac{1}{x - z}\, d\mu(x)$.