P.E.K.K.A.s and Determinants

January 15, 2023

"Is P.E.K.K.A a knight? A samurai? A robot? No one knows!"
- Clash of Clans Loading Screen Hint

What do a P.E.K.K.A. (from the amazing game Clash of Clans) and a determinant (from the amazing world of mathematics) have in common?

No one knows what they are!

Just kidding, there are many people who could say they know what a determinant is. Yet different people might say different things! And maybe it's just me, but after all of the exterior algebra, wedge product, and tensor math that we did at the end of my linear algebra class, I've been utterly unconfident in my grasp of determinants. First of all, what even is a determinant? Is it the combinatorially-inspired formula for matrices? Is it a measure of the "dilation factor" of a linear map? Is it a geometrically inspired measure of $n$-dimensional volume or size? There are so many perspectives on what a determinant really is, and I want to take a deeper look at these perspectives in this post.

That being said, it's been a while since my last post, so for now, I'll just give an introduction to this big dilemma, and in future posts I'll dive in deeper.

Definitions

There are several definitions I have seen for what a determinant means. To understand these, there are two structures we must first understand: matrices and linear maps. Secretly (or not so secretly), these two structures are essentially identical in math, in that they behave in exactly the same way. Below, I made a simple crash course on the fundamental definitions for each structure. I will do my best to introduce the prerequisite definitions, but without significant mathematical maturity, having seen this material before is practically necessary.

We start with a matrix. In plain English, a matrix is an "array" of numbers. These numbers could be real, complex, or numbers mod 2 for example (more generally, they could belong to any field). We can precisely define a matrix as follows:

Definition (Matrix). Consider a collection of values $a_{ij}$ for $1 \le i \le m$ and $1 \le j \le n$. We say that these values form the matrix $A$, which can be denoted as: $$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$
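For concreteness, here is a small example: $$A = \begin{bmatrix} 1 & 0 & -2 \\ 3 & 5 & 7 \end{bmatrix}$$ is a $2$-by-$3$ matrix ($m = 2$ rows, $n = 3$ columns), with entries such as $a_{12} = 0$ and $a_{23} = 7$.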

Remark. Some authors may conveniently use $A_{ij}$ rather than $a_{ij}$ to make it more clear to which matrix the elements belong.

Additionally, matrices are related via the following operations:

Matrix Addition: $$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mn} \end{bmatrix} \Leftrightarrow c_{ij} = a_{ij} + b_{ij}\;\forall\; 1 \le i \le m,\, 1 \le j \le n$$

Matrix Multiplication: $$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{np} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1p} \\ c_{21} & c_{22} & \cdots & c_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mp} \end{bmatrix} \Leftrightarrow c_{ij} = \sum_{k = 1}^n a_{ik} b_{kj} \;\forall\; 1 \le i \le m,\, 1 \le j \le p$$ Note that the product of an $m$-by-$n$ matrix and an $n$-by-$p$ matrix is an $m$-by-$p$ matrix.

Matrix multiplication may seem a bit weird at first. The motivation for this method of multiplication comes from the relationship between matrices and linear maps; some would instead motivate it as a straightforward extension of the dot product. (The story of how dot products, and inner products in general, relate to the linear maps perspective of linear algebra is very interesting in my opinion, and is a good idea for a future post.)
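To make the two operations concrete, here is a minimal Python sketch (my own illustration) that implements the index formulas above directly, using plain lists of lists rather than a matrix library:

```python
def mat_add(A, B):
    """Entrywise sum: c_ij = a_ij + b_ij (A and B must have the same shape)."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def mat_mul(A, B):
    """Row-by-column product: c_ij = sum_k a_ik * b_kj,
    where A is m-by-n and B is n-by-p, giving an m-by-p result."""
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
print(mat_add(A, B))  # [[1, 3], [4, 4]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]  (B swaps the columns of A)
```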

To understand linear maps, we must first understand maps as defined on vector spaces. It can be difficult to motivate vector spaces without introducing the notions of groups and fields, but for those who have not seen the formal definition of a field, it is a structure with similar properties to those of the rational or real numbers: we can add, subtract, multiply, and divide by anything nonzero. A vector space, then, is a collection of "vectors" together with "scalars," which are elements of a field. The vectors are related by an addition operation that is commutative, associative, and closed (so that the sum of any two vectors is also a vector in the vector space), and each vector can be scaled by any scalar. Vectors are generally denoted with arrows or bolding, and there are two distributive laws: $a(\textbf{v} + \textbf{u}) = a\textbf{v} + a\textbf{u}$, and $(a+b)\textbf{v} = a\textbf{v} + b\textbf{v}$.

A map is simply an object that, for each element $a$ in a vector space $A$, assigns an element $b$ in the vector space $B$ (and assigns only one element of $B$ to each element of $A$). We can denote a map using: $$f : A \rightarrow B$$ $$a \mapsto b$$ Now,

Definition (Linear Map). A map $f : A \rightarrow B$ is linear if $f(\textbf{x} + \textbf{y}) = f(\textbf{x}) + f(\textbf{y})$ for all $\textbf{x},\textbf{y} \in A$ and $f(c\textbf{x}) = cf(\textbf{x})$ for all $\textbf{x} \in A$ and all scalars $c$.
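For example, the map $f : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ given by $f(x, y) = (2x + y, 3y)$ is linear (both conditions can be checked by expanding coordinates), while $g : \mathbb{R} \rightarrow \mathbb{R}$ given by $g(x) = x + 1$ is not: $g(x + y) = x + y + 1$, but $g(x) + g(y) = x + y + 2$.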

The two structures above may seem extremely different, but in reality they are essentially the same: once bases are fixed, every linear map between finite-dimensional vector spaces corresponds to exactly one matrix, and vice versa. However, that correspondence is the subject of most introductory linear algebra textbooks and courses, so below I will hope that the reader has seen the material, or has had sufficient time and mathematical maturity to play with the ideas above.

There are a few definitions of the determinant that I am currently aware of, depending on where you look. Two of them I take from Linear Algebra Done Right by Sheldon Axler (LADR), but in general these definitions are canonical and repeated everywhere. What I have found (at least so far in my math lifetime) is that people generally pick the definition that is most convenient for the problem they are trying to solve; while we discussed how the definitions are equivalent in my linear algebra class, I believe there are fundamental ideas behind this notion of a "determinant" that I have yet to pick up on. And that's where these (plural implying future posts) blog posts come in.

First we have an algebra-y definition of the determinant for linear maps (taken from LADR):

Definition (Determinant for Linear Maps). Suppose $T$ is a linear map from a vector space $V$ over a field $\mathbb{F}$ to itself (more generally, such a map is called a linear operator). If $\mathbb{F} = \mathbb{C}$, then the determinant of $T$, denoted $\det T$, is the product of the eigenvalues of $T$, with each eigenvalue repeated according to its multiplicity. If $\mathbb{F} = \mathbb{R}$, then instead of the eigenvalues of $T$, we take the eigenvalues of $T_{\mathbb{C}}$ ($T_{\mathbb{C}}$ is the "complexification" of $T$, and is essentially the same as treating $i$ as a scalar).
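As a quick numerical illustration of this definition (my own check using numpy, not a proof), we can compare the product of a matrix's eigenvalues with the determinant numpy computes; here the matrix stands in for the operator $x \mapsto Ax$. The example is a real matrix with complex eigenvalues, so the complexification remark actually matters:

```python
import numpy as np

# A real matrix with complex eigenvalues (a rotation composed with a scaling),
# chosen so that complexification is genuinely needed to find eigenvalues.
A = np.array([[0.0, -2.0],
              [2.0,  0.0]])

eigenvalues = np.linalg.eigvals(A)   # approximately [2j, -2j]
product = np.prod(eigenvalues)       # (2j)(-2j) = 4, a real number

print(product)             # (4+0j), up to floating-point error
print(np.linalg.det(A))    # 4.0, matching the eigenvalue product
```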

Next we have a combinatorics-y definition of the determinant for matrices (taken from LADR):

Definition (Determinant for Matrices). Suppose $A$ is an $n$-by-$n$ matrix $$A = \begin{bmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nn}\end{bmatrix}$$ Then the determinant of $A$, denoted $\det A$, is defined by $$\det A = \sum_{(m_1,\dots,m_n) \in \text{perm }n} \big(\text{sign}(m_1,\dots,m_n)\big)A_{m_1,1}\cdots A_{m_n,n}$$ where $\text{perm }n$ is the set of all permutations of $(1,\dots,n)$, and $\text{sign}(m_1,\dots,m_n)$ is $1$ for even permutations and $-1$ for odd ones.
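To see the formula in action, here is a minimal Python sketch (my own illustration, not from LADR) that sums over all permutations and computes each sign by counting inversions. This is fine for small $n$, but since the sum has $n!$ terms, it is hopeless computationally for large matrices:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation: +1 if even, -1 if odd, by counting inversions."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Permutation-sum formula for the determinant of a square matrix."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        product = 1
        for j in range(n):
            product *= A[perm[j]][j]   # A_{m_j, j} in the post's notation
        total += sign(perm) * product
    return total

print(det([[1, 2], [3, 4]]))                    # 1*4 - 3*2 = -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24, the product of the diagonal
```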

And finally we have a totally weird algebra-y, geometry-y definition:

Definition (Determinant from Wedge Products). This is honestly so complicated that I will leave it to your imagination for now. Or maybe I will update this post later today.

My Dilemma

So these definitions exist. Maybe you're thinking, "but what's the big deal?" The problem is that we now have three definitions of the same thing! For the determinant to be well-defined, we'll need to thoroughly relate these definitions. In doing so, I believe that we'll be able to see amazing connections between our different theories, just as having various formulas for the area of a triangle can help one see connections in Euclidean geometry.

And these connections are what I seek to find in the posts that follow. Email me at aathreyakadambi@gmail.com to discuss these ideas or to share any websites, blog posts, or other media on this topic! I look forward to it.

Problems from Putnam

Before I end this post, I want to discuss what started it: Putnam. Right after my linear algebra course ended, I had begun writing my own linear algebra resource, which I believed covered linear algebra well (and was, in my opinion, obviously the best). Over time, however, I realized the scale of the task, and I went on hiatus. While in the Putnam class at my university, though, I came to realize that I have only a surface-level handle on determinants. However, for the purposes of Putnam, the section on determinants from my professor's (Alexander Givental's) linear algebra book allowed us to solve all of the questions we needed to solve. Here are some of those problems, which are generally solved easily using theory following the definition of determinant for matrices (to be added).

Below, I have compiled some links that I found particularly interesting throughout this project.