P.E.K.K.A.s and Determinants

January 15, 2023
By Aathreya Kadambi

“Is P.E.K.K.A a knight? A samurai? A robot? No one knows!”
- Clash of Clans Loading Screen Hint

What do a P.E.K.K.A. (from the amazing game Clash of Clans) and a determinant (from the amazing world of mathematics) have in common?

No one knows what they are!

Just kidding, there are many people who could say they know what a determinant is. Yet different people might say different things! And maybe it’s just me, but after all of the exterior algebra, wedge product, and tensor math that we did at the end of my linear algebra class, I’ve been utterly unconfident in my grasp of determinants. First of all, what even is a determinant? Is it the combinatorially-inspired formula for matrices? Is it a measure of the “dilation factor” of a linear map? Is it a geometrically inspired measure of $n$-dimensional volume or size? There are so many perspectives on what a determinant really is, and I want to take a deeper look at these perspectives in this post.

That being said, it’s been a while since my last post, so for now, I’ll just give an introduction to this big dilemma, and in future posts I’ll dive in deeper.


Definitions

There are several definitions I have seen for what a determinant means. In order to understand these, there are two structures we must first understand: matrices and linear maps. Secretly (or not so secretly), these two things are actually identical structures in math, in that they behave in exactly the same way. Below is a simple crash course on the fundamental definitions for each structure. I will do my best to introduce the prerequisite definitions, but without significant mathematical maturity, having seen this material before is practically necessary.

We start with a matrix. In plain English, a matrix is an “array” of numbers. These numbers could be real, complex, or numbers mod 2, for example (more generally, they could belong to any field). We can precisely define a matrix as follows:

Definition (Matrix). Consider a collection of values $a_{ij}$ for $1 \le i \le m$ and $1 \le j \le n$. We say that these values form the matrix $A$, which can be denoted as:

$$
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}
$$

Remark. Some authors may conveniently use $A_{ij}$ rather than $a_{ij}$ to make it more clear to which matrix the elements belong.
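For concreteness, here is a small example of my own (not from LADR): a $2$-by-$3$ real matrix, so $m = 2$ and $n = 3$, with, for instance, $a_{11} = 1$ and $a_{23} = 7$:

$$
\begin{bmatrix} 1 & 0 & -2 \\ 3 & 5 & 7 \end{bmatrix}
$$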

Additionally, matrices can be combined via the following operations:

Matrix Addition:

$$
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mn} \end{bmatrix} \iff c_{ij} = a_{ij} + b_{ij} \;\forall\; 1 \le i \le m,\ 1 \le j \le n
$$
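As a quick sanity check with my own numbers, adding two $2$-by-$2$ matrices entry by entry:

$$
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 + 5 & 2 + 6 \\ 3 + 7 & 4 + 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}
$$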

Matrix Multiplication (note that the first matrix is $m$-by-$n$, the second is $n$-by-$p$, and the product is $m$-by-$p$):

$$
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{np} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1p} \\ c_{21} & c_{22} & \cdots & c_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mp} \end{bmatrix} \iff c_{ij} = \sum_{k = 1}^n a_{ik} b_{kj} \;\forall\; 1 \le i \le m,\ 1 \le j \le p
$$

Matrix multiplication may seem a bit weird at first, but the motivation for this rule comes from the relationship between matrices and linear maps; alternatively, some would motivate it as a straightforward extension of the dot product. (The story of how dot products, and inner products in general, relate to the linear-maps perspective on linear algebra is very interesting in my opinion, and is a good idea for a future post.)
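To make the rule concrete, here is a worked example with the same two matrices from the addition example above (again my own numbers, not from LADR):

$$
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}
$$

Each entry of the product is the dot product of a row of the first matrix with a column of the second.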

To understand linear maps, we must first understand maps as defined on vector spaces. It can be difficult to motivate vector spaces without introducing the notion of groups and fields, but for those who have not seen the formal definition of a field, it is a structure with similar properties to the rational or real numbers. A vector space, then, is a collection of “vectors” (related by an addition operation that is commutative, associative, and closed, so that the sum of any two vectors is also a vector in the vector space) together with “scalars,” which are elements of a field and can be multiplied with vectors. Vectors are generally denoted with arrows or bolding, and there are two distributive laws: $a(\textbf{v} + \textbf{u}) = a\textbf{v} + a\textbf{u}$, and $(a+b)\textbf{v} = a\textbf{v} + b\textbf{v}$. A map is simply an object that, for each element $a$ in a vector space $A$, assigns an element $b$ in the vector space $B$ (and assigns only one element of $B$ to each element of $A$). We can denote a map using:

$$
\begin{aligned}
f : A &\rightarrow B \\
a &\mapsto b
\end{aligned}
$$
Now,

Definition (Linear Map). A map $f : A \rightarrow B$ is linear if $f(\textbf{x} + \textbf{y}) = f(\textbf{x}) + f(\textbf{y})$ for all $\textbf{x}, \textbf{y} \in A$ and $f(c\textbf{x}) = cf(\textbf{x})$ for all $\textbf{x} \in A$ and all scalars $c$.
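As a small example of my own, take $A = B = \mathbb{R}^2$ and define $f(x, y) = (2x + y,\ 3y)$. Both conditions can be checked directly:

$$
f\big((x_1, y_1) + (x_2, y_2)\big) = \big(2(x_1 + x_2) + (y_1 + y_2),\ 3(y_1 + y_2)\big) = f(x_1, y_1) + f(x_2, y_2)
$$
$$
f\big(c(x, y)\big) = (2cx + cy,\ 3cy) = c\, f(x, y)
$$

This $f$ is exactly the map “multiply a column vector by the matrix $\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$,” which is a first hint at the correspondence between matrices and linear maps.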

The two structures above may seem extremely different, but in reality, they are exactly the same. That correspondence, however, is the subject of most introductory linear algebra textbooks and courses, so below I will assume that the reader has either seen the material before or has had enough time and mathematical maturity to play with the ideas above.

There are a few definitions of the determinant that I am currently aware of, depending on where you look. Two of them I take from Linear Algebra Done Right by Sheldon Axler (LADR), but in general these definitions are canonical and repeated everywhere. What I have found (at least so far in my math lifetime), however, is that people generally pick the definition that is most convenient for the problem they are trying to solve; and while we discussed how the definitions are equivalent in my linear algebra class, I believe there are fundamental ideas behind this notion of a “determinant” that I have yet to pick up on. And that’s where these blog posts (plural, implying future posts) come in.

First, we have an algebra-y definition of the determinant for linear maps (taken from LADR):

Definition (Determinant for Linear Maps). Suppose $T$ is a linear map from a vector space $V$ over a field $\mathbb{F}$ to itself (more generally, such a map is called a linear operator). If $\mathbb{F} = \mathbb{C}$, then the determinant of $T$, written $\det T$, is the product of the eigenvalues of $T$, with each eigenvalue repeated according to its multiplicity. If $\mathbb{F} = \mathbb{R}$, then instead of the eigenvalues of $T$, we take the eigenvalues of $T_{\mathbb{C}}$ ($T_{\mathbb{C}}$ is the “complexification” of $T$, and is essentially the same as treating $i$ as a scalar).
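As a quick illustration (my own example), consider the operator $T$ on $\mathbb{C}^2$ given by multiplication by the matrix

$$
\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}
$$

Since this matrix is upper triangular, its eigenvalues are the diagonal entries $2$ and $3$, so this definition gives $\det T = 2 \cdot 3 = 6$.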

Next, we have a combinatorics-y definition of the determinant for matrices (taken from LADR):

Definition (Determinant for Matrices). Suppose $A$ is an $n$-by-$n$ matrix

$$
A = \begin{bmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nn} \end{bmatrix}
$$

Then the determinant of $A$, written $\det A$, is defined by

$$
\det A = \sum_{(m_1, \dots, m_n) \in \operatorname{perm} n} \big(\operatorname{sign}(m_1, \dots, m_n)\big)\, A_{m_1, 1} \cdots A_{m_n, n}
$$
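For instance, when $n = 2$ there are only two permutations of $(1, 2)$: the identity $(1, 2)$ with sign $+1$ and the swap $(2, 1)$ with sign $-1$, so the formula reduces to the familiar expression

$$
\det \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = A_{11} A_{22} - A_{21} A_{12}
$$

Applied to the upper-triangular matrix from my example above, this gives $2 \cdot 3 - 0 \cdot 1 = 6$, matching the product of the eigenvalues, as it should if these definitions really describe the same thing.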

And finally, we have a totally weird algebra-y, geometry-y definition:

Definition (Definition of Determinant from Wedge Products). This is honestly so complicated that I will leave this to your imagination for now. Or maybe I will update this post later today.

My Dilemma

So these definitions exist. Maybe you’re thinking “but what’s the big deal?” The problem is that now we have three definitions of the same thing! For the determinant to be well-defined, we’ll need to thoroughly relate these definitions. In doing so, I believe that we’ll be able to see amazing connections between our different theories, just as having various equations for the area of a triangle can help one see connections in Euclidean geometry.

And these connections are what I seek to find in the posts that follow. Email me at aathreyakadambi@gmail.com to discuss these ideas or to share any websites, blog posts, or other media on this topic! I look forward to it.

Problems from Putnam

Before I end this post, I want to discuss what started it: Putnam. Right after my linear algebra course ended, I had begun writing my own linear algebra resource, which I believed covered linear algebra well (and was, in my opinion, obviously the best). Over time, however, I realized the scale of the task, and the project went on hiatus. In the Putnam class at my university, though, I came to realize that I have only a surface-level handle on determinants. For the purposes of Putnam, however, the section on determinants from my professor’s (Alexander Givental) linear algebra book allowed us to solve all of the questions we needed to solve. Here are some of those problems, which are generally solved easily using theory that follows from the definition of the determinant for matrices (to be added).

Below, I compiled some links I found particularly interesting throughout this project.
