Working on notes on quantum mechanics and derivatives, and on uploading my previous course notes onto this blog!
Projects
Finally started a projects page! I've recently made some nice upgrades to my post component, so it looks pretty clean! ;)
π
I'm deciding whether to continue this project using WebGL or Three.js.
I'm also researching methods for generating the 3D scenes I want for this project automatically.
In the meantime, I've decided to proceed with some preliminary prototypes of the other interactive parts of this project.
Orange Juice
I like orange juice. :)
Mlog
MATH 206 Notes: Functional Analysis
Fall 2024. By Aathreya Kadambi, expanding on lectures by Professor Zworski.
I'm taking functional analysis with Professor Zworski!
Story 1: Linear Algebra
Relevant Lectures: Lecture 1
We started out by defining cosets and quotient spaces, and then proved the first isomorphism theorem for vector spaces. Hahaha, this was a homework problem I did for COMPSCI 189 yesterday!
"We study the rest of mathematics because linear algebra is too hard." - Taylor, according to Zworski.
Zworski's Favorite Fact from Linear Algebra (Schur's Complement Formula). Suppose $\begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}$ is invertible, so

$$\begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}^{-1} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}, \qquad S_{ij} : W_j \to V_i.$$

Then $T_{11}$ is invertible if and only if $S_{22}$ is invertible. In that case,

$$T_{11}^{-1} = S_{11} - S_{12} S_{22}^{-1} S_{21}.$$
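As a quick numerical sanity check of the Schur complement formula (a sketch; the random blocks are assumed generic, hence invertible):

```python
import numpy as np

# Partition a random 5x5 matrix into T11 (3x3), T12, T21, T22 (2x2),
# invert it, and compare T11^{-1} with S11 - S12 S22^{-1} S21.
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 5))
T11 = T[:3, :3]
S = np.linalg.inv(T)
S11, S12, S21, S22 = S[:3, :3], S[:3, 3:], S[3:, :3], S[3:, 3:]

lhs = np.linalg.inv(T11)
rhs = S11 - S12 @ np.linalg.inv(S22) @ S21
assert np.allclose(lhs, rhs)
```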
Why is this interesting? Suppose we start with $P : H_1 \to H_2$ and we can find $R_- : H_- \to H_2$ and $R_+ : H_1 \to H_+$ such that

$$\begin{pmatrix} P & R_- \\ R_+ & 0 \end{pmatrix} : H_1 \oplus H_- \to H_2 \oplus H_+$$

is invertible and

$$\begin{pmatrix} P & R_- \\ R_+ & 0 \end{pmatrix}^{-1} = \begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix} : H_2 \oplus H_+ \to H_1 \oplus H_-.$$

Then $P$ is invertible if and only if $E_{-+}$ is invertible, by the Schur complement formula. We have thus reduced the invertibility of $P$ to that of something else; in linear algebra this is used to reduce huge systems to much smaller finite ones.

This setup is called a Grushin problem. "The key here is stability under perturbations and so on." Although I have no clue what perturbations he is talking about.
Example. Take $P = \lambda - J$ where $J$ is an $n \times n$ Jordan block matrix (has an off-diagonal of ones).

Remark. There are two types of multiplicity for the eigenvalue of the Jordan block matrix: the geometric and algebraic multiplicities, which here are $1$ and $n$ respectively. We can try to find $R_+$ and $R_-$.
We would like to compute the determinant of

$$\begin{pmatrix} \lambda - J & e_n \\ e_1^{\,t} & 0 \end{pmatrix}$$

where $e_1$ and $e_n$ are the first and last standard basis vectors. This matrix is invertible since it has determinant $-1$, so we can reduce the invertibility of $\lambda - J$ to another property. We can choose this $R_-$ and $R_+$ because one of them lies in the kernel of $J$ and the other lies in the kernel of $J^*$. In the general case, though, it is an art.
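A small numerical sketch of this Grushin problem (assuming the bordering $R_- = e_n$, $R_+ = e_1^{\,t}$ described above): the bordered matrix has determinant $-1$ for every $\lambda$, and the Schur-complement entry $E_{-+}$ vanishes exactly when $\lambda - J$ fails to be invertible, i.e. at $\lambda = 0$.

```python
import numpy as np

def grushin_matrix(lam, n):
    J = np.diag(np.ones(n - 1), 1)      # Jordan block: ones on the superdiagonal
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = lam * np.eye(n) - J
    M[n - 1, n] = 1.0                   # R_- = e_n as the last column
    M[n, 0] = 1.0                       # R_+ = e_1^t as the last row
    return M

n = 4
for lam in (0.0, 2.0):
    M = grushin_matrix(lam, n)
    assert np.isclose(np.linalg.det(M), -1.0)   # invertible for every lambda
    E_minus_plus = np.linalg.inv(M)[n, n]       # the scalar E_{-+}
    # lambda - J invertible  <=>  E_{-+} invertible (nonzero)
    assert (abs(lam) > 0) == (abs(E_minus_plus) > 1e-9)
```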
Story 2: Dimension
Relevant Lectures: 1
The dimension of $V$ being $n$ is the same as saying there exists a basis of $n$ vectors for $V$. This is for finite dimensional vector spaces; if no such basis exists, we say that the dimension is infinite. Hörmander actually has another amusing definition of dimension for vector spaces.
Equivalently, $\dim$ is a functor from the category of vector spaces to $\mathbb{N}$ such that
- $\dim K^n = n$,
- there exists a surjective $T : V_1 \to V_2$ if and only if $\dim V_1 \ge \dim V_2$,
- there exists an injective $T : V_2 \to V_1$ if and only if $\dim V_2 \le \dim V_1$.
Zworski kept laughing while writing this.
Example. $\dim(V_1/\ker T) = \dim \operatorname{Im} T$.

We say:

$$\operatorname{Rank} T = \dim \operatorname{Im} T = \operatorname{codim} \ker T$$

where $\operatorname{codim} W = \dim V/W$ for $W \subset V$.
Theorem. Suppose $W \subset V$. Then $\dim W + \operatorname{codim} W = \dim V$.
Proof.
Since $\dim W \le \dim V$ and $\operatorname{codim} W \le \dim V$, we can assume that $\dim W$ and $\operatorname{codim} W$ are finite, because otherwise we just have infinities summing together.

Take bijections $T_1 : K^n \to W$ and $T_2 : K^m \to V/W$. There exists $\tilde T_2 : K^m \to V$ such that $\pi \circ \tilde T_2 = T_2$, where $\pi : V \to V/W$ is the quotient map. The tilde mappings are called "lifts" of the non-tilde mappings. Explicitly, writing $T_2 e_j = x_j + W$ with $e_j$ a basis of $K^m$, set $\tilde T_2\left(\sum a_j e_j\right) := \sum a_j x_j$.

Now suppose $x \in V$. Since $T_2$ is surjective, $x = \sum b_j x_j + y$ with $y \in W$, and since $T_1$ is surjective there exists $a$ such that $T_1 a = y$. Hence the map $K^n \oplus K^m \to V$ built from $T_1$ and $\tilde T_2$ is surjective; a similar check gives injectivity, so $\dim V = n + m = \dim W + \operatorname{codim} W$.
Theorem. Take $T : V_1 \to V_2$. Then $\dim \operatorname{Im} T + \dim \ker T = \dim V_1$, and $\operatorname{codim} \operatorname{Im} T + \operatorname{codim} \ker T = \dim V_2$.
Remark. In infinite dimensions these are both statements about infinite quantities, but one might be trivial while the other is useful. Apparently long ago in Zworski's linear algebra class they proved an equivalent statement, that the row rank equals the column rank, via row and column operations. With algebra we can do it much more easily.
If $\dim V_j < \infty$, then we define the index:

$$\operatorname{ind}(T) = \dim \ker T - \operatorname{codim} \operatorname{Im} T = \dim V_1 - \dim V_2.$$
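A numerical sketch of this formula: in finite dimensions the index only sees the dimensions of the spaces, not the particular map. Here is a deliberately rank-deficient $T : K^7 \to K^4$ where both terms are nonzero.

```python
import numpy as np

# A generic rank-3 map T : K^7 -> K^4, built as a product of random factors.
rng = np.random.default_rng(1)
n, m = 7, 4
T = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))

r = np.linalg.matrix_rank(T)
dim_ker = n - r                   # rank-nullity
codim_im = m - r                  # dim of V2 / Im T
ind = dim_ker - codim_im
assert ind == n - m               # ind(T) = dim V1 - dim V2
```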
Story 3: Index of a Transformation
Relevant Lectures: Lecture 1, Lecture 2
Suppose $\dim \ker T < \infty$ or $\dim(V_2/\operatorname{Im} T) < \infty$. Then

$$\operatorname{ind}(T) = \dim \ker T - \operatorname{codim} \operatorname{Im} T$$

is well defined (as an element of $\mathbb{Z} \cup \{\pm\infty\}$).
Theorem 1. Suppose that $T_1 : V_1 \to V_2$ and $T_2 : V_2 \to V_3$, and that for each $j$, $\dim \ker T_j < \infty$ or $\dim \operatorname{coker} T_j < \infty$ (where $\operatorname{coker} T = V_2/\operatorname{Im} T$). Then the index of $T_2 T_1$ is well defined and

$$\operatorname{ind}(T_2 T_1) = \operatorname{ind}(T_1) + \operatorname{ind}(T_2),$$

which follows from the definition of the index.
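In the finite dimensional case the additivity is just telescoping dimensions, $(n_1 - n_2) + (n_2 - n_3) = n_1 - n_3$, which a quick check illustrates:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, n3 = 6, 4, 5
T1 = rng.standard_normal((n2, n1))    # T1 : K^n1 -> K^n2
T2 = rng.standard_normal((n3, n2))    # T2 : K^n2 -> K^n3

def ind(T):
    m, n = T.shape                    # T : K^n -> K^m
    r = np.linalg.matrix_rank(T)
    return (n - r) - (m - r)          # dim ker - codim Im

assert ind(T2 @ T1) == ind(T1) + ind(T2) == n1 - n3
```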
Theorem 2. If $S : V_1 \to V_2$ has finite rank and $\operatorname{ind} T$ is defined, then $\operatorname{ind}(T+S)$ is well defined and $\operatorname{ind}(T+S) = \operatorname{ind}(T)$.
$$0 \to V_0 \xrightarrow{T_0} V_1 \xrightarrow{T_1} \cdots \xrightarrow{T_{N-1}} V_N \to 0$$

is called a complex if $T_{k+1} T_k = 0$, i.e. $\operatorname{Im} T_k \subset \ker T_{k+1}$.
An example of a complex from MATH 53:
Example. $V_0 = V_3 = C^\infty(\mathbb{R}^3, \mathbb{R})$, $V_1 = V_2 = C^\infty(\mathbb{R}^3, \mathbb{R}^3)$, $T_0 = \nabla$, $T_1 = \nabla \times$, $T_2 = \nabla \cdot$. For sophisticated people, this is a special example of the de Rham complex.
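The complex property can be checked discretely (a sketch): central-difference partial derivatives along different axes commute, so the finite-difference versions of $\nabla \times \nabla f$ and $\nabla \cdot (\nabla \times F)$ vanish up to floating-point roundoff.

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal((16, 16, 16))          # arbitrary scalar field samples

gx, gy, gz = np.gradient(f)                    # T0 = grad
# T1 = curl applied to (gx, gy, gz):
cx = np.gradient(gz, axis=1) - np.gradient(gy, axis=2)
cy = np.gradient(gx, axis=2) - np.gradient(gz, axis=0)
cz = np.gradient(gy, axis=0) - np.gradient(gx, axis=1)
for c in (cx, cy, cz):
    assert np.allclose(c, 0.0, atol=1e-12)     # curl(grad f) = 0

Fx, Fy, Fz = (rng.standard_normal((16, 16, 16)) for _ in range(3))
kx = np.gradient(Fz, axis=1) - np.gradient(Fy, axis=2)
ky = np.gradient(Fx, axis=2) - np.gradient(Fz, axis=0)
kz = np.gradient(Fy, axis=0) - np.gradient(Fx, axis=1)
div = np.gradient(kx, axis=0) + np.gradient(ky, axis=1) + np.gradient(kz, axis=2)
assert np.allclose(div, 0.0, atol=1e-12)       # div(curl F) = 0
```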
A complex is called exact if $\operatorname{Im} T_k = \ker T_{k+1}$.
Example. The complex from the previous example is actually exact! Lol this is related to the question I asked the quantum mechanics professor.
Example (Exact sequences).
- $N = 1$: $0 \to V_0 \xrightarrow{T_0} V_1 \to 0$ is exact if and only if $T_0$ is a bijection.
- $N = 2$: $0 \to W \hookrightarrow V \xrightarrow{\pi} V/W \to 0$.
Theorem. For exact complexes,

$$\sum_j \dim V_{2j} = \sum_j \dim V_{2j+1}.$$

In particular, if $\dim V_j < \infty$ for all $j$, then $\sum_j (-1)^j \dim V_j = 0$.
This result will allow us to show Theorem 2 by constructing an appropriate exact complex.
Proof.
Let $R_{k+1} = \ker T_{k+1} = \operatorname{Im} T_k$ (so $R_0 = 0$ and $R_{N+1} = 0$). Then

$$\dim V_k = \dim R_k + \dim R_{k+1},$$

since $V_k/\ker T_k \cong \operatorname{Im} T_k$. Now,

$$\sum_k \dim V_{2k} = \sum_k \left(\dim R_{2k} + \dim R_{2k+1}\right) = \sum_j \dim R_j = \sum_k \left(\dim R_{2k+1} + \dim R_{2k+2}\right) = \sum_k \dim V_{2k+1}.$$
Remark. Another method is to actually shorten the exact sequence by taking quotients and then do stuff.
Story 4: Fredholm Operators
Recall that an operator $T : V_1 \to V_2$ is Fredholm when $\dim \ker T < \infty$ and $\dim \operatorname{coker} T < \infty$.
Theorem. $T : V_1 \to V_2$ is a Fredholm operator if and only if there exist $R_- : V_- \to V_2$ and $R_+ : V_1 \to V_+$, with $\dim V_\pm < \infty$, such that

$$\begin{pmatrix} T & R_- \\ R_+ & 0 \end{pmatrix}^{-1} = \begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix}$$

exists and $E_{-+}$ is a Fredholm operator, in which case $\operatorname{ind} T = \operatorname{ind} E_{-+}$.

So what we have shown (I've omitted most of the proof, about $Tu + \sum_{j=1}^n b_j y_j = 0$ and stuff) is that if the operator is Fredholm, then we can set up this thing so that it is invertible. We also get that $E_{-+} : K^{n_+} \to K^{n_-}$ is clearly Fredholm since $n_+$ and $n_-$ are finite.
To show the other direction, first we observed that $R_+$ is surjective and $R_-$ is injective, by actually multiplying out

$$\begin{pmatrix} T & R_- \\ R_+ & 0 \end{pmatrix} \begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} E & E_+ \\ E_- & E_{-+} \end{pmatrix} \begin{pmatrix} T & R_- \\ R_+ & 0 \end{pmatrix}.$$
Then, we took $\begin{pmatrix} u \\ u_- \end{pmatrix} \in V_1 \oplus V_-$ and $\begin{pmatrix} v \\ v_+ \end{pmatrix} \in V_2 \oplus V_+$, and saw that $Tu = v$, $u_- = 0$ hold exactly when $Ev + E_+ v_+ = u$ and $E_- v + E_{-+} v_+ = 0$.
Since $v$ is in the image of $T$, $E_- v$ is in the image of $E_{-+}$. This allows us to define a well defined induced map

$$E_-^\# : V_2/\operatorname{Im} T \to V_-/\operatorname{Im} E_{-+}.$$

Then by playing around with the above results, we get injectivity. It is also surjective, so $V_-/\operatorname{Im} E_{-+}$ and $V_2/\operatorname{Im} T$ are isomorphic and in particular have the same dimension.
Now we can define a map $E_+ : \ker E_{-+} \to \ker T$. We get that $E_+$ is injective on all of $V_+$, and thus on $\ker E_{-+}$, from the fact that $R_+ E_+$ is the identity. Surjectivity comes from the same two equations in the iff statement.
The main helpful idea was that $Tu = v$, $u_- = 0$ is equivalent to $Ev + E_+ v_+ = u$ and $E_- v + E_{-+} v_+ = 0$.
Theorem. If $\operatorname{ind} T = 0$, then there exists a finite rank $S$ such that $T + S$ is invertible.
So we can perturb our operator by a finite rank operator and stuff.
Proof similar to above.
We can write $V_1 \simeq \ker T \oplus W_1$ and $V_2 \simeq W_2 \oplus \operatorname{Im} T$, so that in these decompositions

$$T = \begin{pmatrix} 0 & 0 \\ 0 & \tilde T \end{pmatrix}, \qquad S = \begin{pmatrix} \tilde S & 0 \\ 0 & 0 \end{pmatrix}$$

with $\tilde T : W_1 \to \operatorname{Im} T$ and $\tilde S : \ker T \to W_2$ invertible. (Such an $\tilde S$ exists because $\operatorname{ind} T = 0$ gives $\dim \ker T = \dim W_2$.)
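A numerical sketch of this construction for a square matrix (which always has index 0 in finite dimensions): an SVD gives the splittings explicitly, and replacing the zero singular values by $1$ yields a finite rank $S$ with $T + S$ invertible.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 6, 4
# A generic rank-4 matrix, so dim ker T = n - r = 2.
T = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# The last n - r right singular vectors span ker T, and the last n - r left
# singular vectors span a complement W2 of Im T.
U, s, Vt = np.linalg.svd(T)
s_fixed = np.where(s > 1e-10, 0.0, 1.0)        # put 1 on the null directions only
S = U @ np.diag(s_fixed) @ Vt                  # rank S = dim ker T, maps ker T onto W2

assert np.linalg.matrix_rank(S) == n - r       # finite (small) rank
assert np.linalg.matrix_rank(T + S) == n       # T + S is invertible
```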
Convex Sets
Now we take $K = \mathbb{R}$.

A set $A \subset V$ is convex if for all $x, y \in V$, the set $\{t \in \mathbb{R} : x + ty \in A\}$ is an interval in $\mathbb{R}$.

The reason we define it weirdly like this, as opposed to the canonical definition, is that we want to say that a convex set is linearly open if all of these intervals are open.
He did a ton of stuff and then had to use Zorn's Lemma.

Now we will stop this topic for a while and go back to Hahn-Banach, with a version in topology or something.
Metrics and Topologies
He defined topological spaces and topological structures, as well as metric spaces and how a metric induces a topology.
He also defined closedness and what it means for a metric space to be complete.
Theorem (Baire Category Theorem). If $(E, d)$ is a complete metric space and $\{U_n\}_{n=1}^\infty$ is a countable family of dense open sets, then $\bigcap_{n=1}^\infty U_n$ is dense in $E$.
A set $A \subset E$ is called nowhere dense if its closure $\overline{A}$ has empty interior. A set is called first category if it is a countable union of nowhere dense sets. A set is called second category if it is NOT first category.
When discussing something, he did a proof by picture, drawing a zig-zag function whose slope on any interval is either $2^n$ or $-2^n$.
Tychonov Theorem
Locally convex, balanced, absorbing.
$M \subset V$ is called balanced if for all $x \in M$ and $|a| \le 1$, $ax \in M$.

$M \subset V$ is called absorbing if for all $x \in V$ and all $|a|$ sufficiently small, $ax \in M$.
Theorem. A Hausdorff, first countable topological vector space is locally convex if and only if the topology is given by a countable family of seminorms. In this case, the topology is metrizable.

Locally convex here means there is a neighborhood basis of sets $N$ that are open, convex, balanced, and absorbing; the corresponding seminorms are the Minkowski functionals

$$p_N(x) := \inf\{t > 0 : x/t \in N\}.$$
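A small sketch of the Minkowski functional: for $N$ the open unit ball of the $\ell^1$ norm on $\mathbb{R}^2$, $p_N$ recovers $\|x\|_1$. The infimum is approximated crudely by scanning over $t$.

```python
import numpy as np

def p_N(x, membership, ts=np.linspace(1e-3, 8.0, 80001)):
    """Approximate inf{t > 0 : x/t in N} by a grid scan over t."""
    inside = np.array([membership(x / t) for t in ts])
    return ts[inside].min()            # smallest sampled t with x/t in N

l1_ball = lambda y: abs(y[0]) + abs(y[1]) < 1.0   # N = open l^1 unit ball
x = np.array([1.5, -2.0])
approx = p_N(x, l1_ball)
exact = abs(x[0]) + abs(x[1])                      # p_N(x) = ||x||_1 = 3.5
assert abs(approx - exact) < 1e-3
```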
The class of topological vector spaces contains the locally convex spaces, which contain the seminorm spaces, which contain the Fréchet spaces, which contain the Banach spaces, which contain the Hilbert spaces.
As a consequence of a bunch of theorems, such a topology is defined using a countable family of seminorms $p_j : V \to [0, \infty)$, with $p_j(x+y) \le p_j(x) + p_j(y)$ and $p_j(ax) = |a|\, p_j(x)$; i.e., $x_n \to x$ is equivalent to $p_j(x_n - x) \to 0$ as $n \to \infty$ for all $j$.
$(V, \{p_j\})$ is called a Fréchet space if and only if it is complete.
A seminormed space is a topological space defined by a single seminorm.
The $\|f\|_{L^p}$ norm is a seminorm; its triangle inequality is Minkowski's inequality.
Given $W \subset V$, we would like to define a norm on $V/W$. Set

$$\tilde p(x + W) := \inf_{y \in x + W} p(y).$$

The issue in the theorem statement is basically that we have a seminorm, but we want to show that when we quotient out by points that go to zero, we get a norm.

Claim. $\tilde p$ is a norm on $V/W$ if and only if $W$ is closed.
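A small sketch of the quotient seminorm: take $V = \mathbb{R}^2$, $W$ the (closed) $x$-axis, and $p$ the Euclidean norm; then $\tilde p(x + W) = |x_2|$, approximated here by scanning over representatives.

```python
import numpy as np

def p_tilde(x, ss=np.linspace(-100.0, 100.0, 200001)):
    """Approximate inf over the coset x + W, W = x-axis, p = Euclidean norm."""
    ys = np.stack([x[0] + ss, np.full_like(ss, x[1])])   # all sampled representatives
    return np.linalg.norm(ys, axis=0).min()

x = np.array([3.0, -2.0])
approx = p_tilde(x)
assert abs(approx - abs(x[1])) < 1e-3    # p~(x + W) = |x_2| = 2
```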
Now notice that $C(\mathbb{R}^n)$ is not a normed space. Why? Suppose there were a norm $p$ such that for all $j$ there exists $c_j$ with $p_j(x) \le c_j p(x)$ (here the $p_j$ are the sup-seminorms over the balls of radius $j$). Just take $f$ with $f(j) = j c_j$ (connected by linear pieces in between); then the condition would give $j c_j \le c_j p(f)$, i.e. $j \le p(f)$, which is impossible if it has to happen over all $j$.
We have a whole bunch of different Hahn-Banach theorems, all based on the geometric version. Here is one version:
Theorem. Suppose $V$ is a locally convex topological vector space and $W \subset V$ is a linear subspace. Then $x \in \overline{W}$ if and only if for all continuous linear $f : V \to K$ such that $f|_W = 0$, we have $f(x) = 0$.
Theorem (The Runge Approximation Theorem). Suppose $K \subset \mathbb{C}$ is compact and $u$ is holomorphic in an open neighborhood of $K$. If $\mathbb{C} \setminus K$ is connected, then for every $\epsilon > 0$ there exists a polynomial $p$ such that $\sup_K |p(z) - u(z)| < \epsilon$.
In an appendix of Rudin's Real and Complex Analysis, there is a proof that you only need $u$ to be holomorphic on the interior of $K$ (and continuous on $K$). Here $\|u\| = \sup_{z \in K} |u(z)|$.
There exists a nontrivial fact, which is that every continuous linear functional $f$ from ... is of the form

$$f(u) = \int u(z)\, d\mu(z)$$

for a finite complex measure supported on something. (I think he's saying we can find $\mu$ so that this works for any such functional $f$.) The proof is apparently also in the book by Rudin that he mentioned.
For $v$ smooth and compactly supported,

$$\int_{\mathbb{C}} \bar\partial v(z)\, dm(z) = 0, \qquad \frac{1}{\pi} \int_{\mathbb{C}} \bar\partial v(z)\, (\zeta - z)^{-1}\, dm(z) = v(\zeta).$$

Basically proved using Green's formula to reduce it to Cauchy's integral formula.
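A numerical sanity check of these two identities (a sketch: $v(z) = e^{-|z|^2}$ is not compactly supported, but it decays fast enough that truncating to a grid on $[-5,5]^2$ is negligible; here $\bar\partial v(z) = -z\, v(z)$).

```python
import numpy as np

h = 0.025
xs = np.arange(-5, 5, h) + h / 2          # cell centers
X, Y = np.meshgrid(xs, xs)
Z = X + 1j * Y
v = np.exp(-np.abs(Z) ** 2)
dbar_v = -Z * v                           # dbar of exp(-|z|^2)

# First identity: integral of dbar v over C vanishes.
first = np.sum(dbar_v) * h ** 2
assert abs(first) < 1e-6

# Second identity: (1/pi) * int dbar v(z) / (zeta - z) dm(z) = v(zeta).
zeta = 0.3 + 0.3j                         # midway between grid points, no singular hit
lhs = np.sum(dbar_v / (zeta - Z)) * h ** 2 / np.pi
assert abs(lhs - np.exp(-abs(zeta) ** 2)) < 0.05
```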
From Hahn-Banach, we need that $\int \zeta^n \, d\mu(\zeta) = 0$ for all $n$, for every finite measure $\mu$ supported in $K$, implies $\int u(\zeta)\, d\mu(\zeta) = 0$. The left hand side of the implication is the condition that $\mu$ vanishes on the space of polynomials.
Eventually, one uses that $z \mapsto \int (\zeta - z)^{-1}\, d\mu(\zeta)$ is holomorphic on $K^c$.
Next time we'll solve PDEs using the Hahn-Banach theorem :eyes:.
Theorem. If $A \subset V$, $V$ is a locally convex topological vector space, $A$ is a closed convex set, and $x \notin A$, then there exists a continuous linear $f : V \to K$ and $\alpha \in \mathbb{R}$ such that $\operatorname{Re} f(x) > \alpha \ge \operatorname{Re} f(y)$ for all $y \in A$.
Corollary. For $W \subset V$ a linear subspace, $x \in \overline{W}$ if and only if for all continuous $f : V \to K$ with $f|_W = 0$, we have $f(x) = 0$.
Theorem. If $W \subset V$, $p$ is a seminorm on $V$, and $f : W \to K$ satisfies $|f(x)| \le p(x)$, then there exists $f_1 : V \to K$ such that $|f_1(x)| \le p(x)$ and $f_1|_W = f$.
Remark. We defined Hilbert spaces and discussed inner products, and in the definition he said $(ax, y) = a(x, y)$. I pointed out that isn't it nicer to have $(x, ay) = a(x, y)$, to which he asked if I was a physicist. Apparently that is the physics notation and this version he is using is the math notation, and to my surprise, he's right: even Linear Algebra Done Right actually uses that convention! It's funny that my mind has been rewired in this way to think of inner products as applications of linear functionals via the notation itself.
Theorem (Riesz). Suppose $L : H \to K$ is linear and continuous. Then there exists a unique $y \in H$ such that $L(x) = (x, y)$.
Remark. By constructing linear functionals, we can actually obtain elements of the Hilbert space. (Also: differential equations student seminar at 2 o'clock on Tuesday.)
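A sketch of the Riesz theorem in the toy case $H = \mathbb{R}^n$ with the standard inner product: a continuous linear functional $L$ is a row vector, and the Riesz representative $y$ is just its transpose.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
L_row = rng.standard_normal(n)            # the functional L(x) = L_row @ x
y = L_row.copy()                          # the unique Riesz representative

x = rng.standard_normal(n)                # any test vector
assert np.isclose(L_row @ x, x @ y)       # L(x) = (x, y)
```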
Theorem (Application of Heine-Borel and Riesz). For $\varphi \in C^2(\mathbb{C})$, $\varphi : \mathbb{C} \to \mathbb{R}$, suppose $\Delta \varphi(z) = \kappa(z) > 0$ for all $z \in \mathbb{C}$. Then for all $f \in L^2(\mathbb{C}, \kappa^{-1} e^{-\varphi}\, dm(z))$ (where $\|g\|_\psi^2 := \int_{\mathbb{C}} |g(z)|^2 e^{-\psi(z)}\, dm(z)$ and $\psi(z) = \varphi(z) + \log \kappa(z)$), there exists $u$ with $\bar\partial u = f$ and $\|u\|_\varphi \le \|f\|_{\varphi + \log \kappa}$.
Remark. In our Hilbert space setting, Cauchy-Schwarz inequality implies the triangle inequality.
Remark. For the next theorem, he forgot who showed the 1-dimensional case. The 2-dimensional case was proved in Berkeley, and the $n$-dimensional case was shown by Hörmander.
Remark. "Old mathematicians are like old boxers, they can still dish out punishment but they cannot take any," he says as he remarks that he doesn't actually know how to solve three of the problems on the homework.
As a fun fact, it might seem like this website is flat because you're viewing it on a flat screen, but the curvature of this website actually isn't zero. ;-)