A foray into Algebraic Combinatorics

I’m trying to understand this paper by Alexander Postnikov. This post is mainly a summary of some of the concepts that I did not understand at first, along with some examples.

  • Grassmannian- The Grassmannian G(r,V) of a vector space V is a space that parametrizes all the r-dimensional subspaces of V. For instance, G(1,\Bbb{C}^n) would be \Bbb{P}^{n-1}. Why do we need Grassmannians? Because we want a continuous, and hopefully smooth, way to parametrize the r-dimensional subspaces of V. An example would be the tangent spaces of a real m-manifold M embedded in \Bbb{R}^n. The map \phi which sends x\in M to the tangent space at x is a map \phi:M\to G(m,\Bbb{R}^n). Some interesting things to note here. First, the tangent space of a manifold at any point has dimension equal to the dimension of the manifold. This much we know. Let us assume for easy visualization that the ambient space \Bbb{R}^n in which the manifold and its tangent spaces sit has dimension n bigger than m. What we’re doing here is mapping each x\in M to the point of the Grassmannian corresponding to the tangent space at x. In general this map need not even be surjective. But because as x changes slightly, so does the tangent space at x, we get a feel for why this map should be continuous. (A small computational sketch of how a point of a Grassmannian can be recorded appears after this list.)
  • Plücker coordinates- This is a way to assign six homogeneous coordinates to each line in \Bbb{P}^3. How does one go about doing this, and why is it useful? A brilliant explanation is given on the Wikipedia page for Plücker coordinates. Say we take a line in \Bbb{R}^3. It is uniquely determined by two points on it (say x and y). However, is it uniquely determined by the vector between those two points? No. That vector can be translated and placed anywhere. Hence, we need both the vector between the two points and some indication of where this vector lies with respect to the origin. One such indication is the cross product x\times y of the position vectors of the two points. Its direction gives the plane through the origin containing x and y, and its magnitude, divided by the length of y-x, gives the distance of the line from the origin. Hence, we need six coordinates- three for the vector y-x, and three for x\times y. Will these six coordinates describe the line uniquely? Yes. The direction of x\times y specifies the plane containing the line and the origin, the vector y-x specifies the direction of the line within that plane, and the magnitude of x\times y specifies how far from the origin the line sits. Together these pin down the line (though not the particular points x and y on it).
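To make the Grassmannian a little more concrete, here is a minimal sketch (a toy example of my own, not from Postnikov’s paper) of how a point of G(r,\Bbb{R}^n) can be recorded: as the row space of a full-rank r\times n matrix, with the reduced row echelon form serving as a canonical representative.

```python
from sympy import Matrix

def grassmann_point(rows):
    """Canonical representative (reduced row echelon form) of the row space of `rows`."""
    M = Matrix(rows)
    rref, pivots = M.rref()
    assert len(pivots) == M.rows, "the rows must be linearly independent"
    return rref

# Two different bases of the same 2-dimensional subspace of R^4
# give the same point of G(2, R^4):
A = grassmann_point([[1, 0, 1, 0], [0, 1, 0, 1]])
B = grassmann_point([[1, 1, 1, 1], [1, -1, 1, -1]])
print(A == B)  # True
```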

Now we shall talk a little about the formal definition of Plücker coordinates. In \Bbb{P}^3, let (x_0,x_1,x_2,x_3) and (y_0,y_1,y_2,y_3) be the homogeneous coordinates of two distinct points. Let p_{ij}=\begin{vmatrix} x_i&y_i\\ x_j&y_j\end{vmatrix}.

There are {4\choose 2}=6 ways of selecting two distinct elements from \{0,1,2,3\}. Why do we need i\neq j? Because if i=j, then p_{ij}=0: the two rows of the determinant would be identical. Also, p_{ij}=-p_{ji}, since swapping i and j exchanges the two rows. Hence there are only {4\choose 2}=6 essentially distinct coordinates here. This confirms the assertion that we need just six homogeneous coordinates to specify a line in \Bbb{P}^3.
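Here is a minimal sketch of this definition in code; the points x and y are arbitrary choices of my own. It also checks that the coordinates really belong to the line: picking two other points spanning the same line only rescales all six p_{ij}.

```python
from itertools import combinations

def plucker(x, y):
    """The six Plücker coordinates p_ij (i < j) of the line through x and y in P^3."""
    return {(i, j): x[i]*y[j] - x[j]*y[i] for i, j in combinations(range(4), 2)}

x = (1, 2, 3, 4)
y = (0, 1, 1, 2)
p = plucker(x, y)
print(p)  # {(0, 1): 1, (0, 2): 1, (0, 3): 2, (1, 2): -1, (1, 3): 0, (2, 3): 2}

# Two other points spanning the same line give proportional coordinates,
# so the p_ij are honest homogeneous coordinates of the line itself.
x2 = tuple(a + b for a, b in zip(x, y))   # x + y
y2 = tuple(b - a for a, b in zip(x, y))   # y - x
q = plucker(x2, y2)
print(all(q[k] == 2*p[k] for k in p))     # True: rescaled by det of the change of basis
```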

  • Matroid- A matroid is a structure that generalizes linear independence in vector spaces. More formally, it is a pair (E,I), where E is a finite set and I is a collection of subsets of E, the “independent” sets. The first property is that \emptyset is an independent set. Secondly, if A is independent and A'\subset A, then A' is independent too. This is called the hereditary property. Third, if A and B are independent sets and A contains more elements than B, then there is an element of A not in B that can be added to B to give a larger independent set than B. This is called the exchange property.

The first two properties of linearly independent sets carry over smoothly from our intuition of what linearly independent sets are. The third property seems strange, but on a little thinking becomes clear. Think of the two independent sets \{i\} and \{i+j,i-j\} in \Bbb{R}^2. We can add either of i+j or i-j to \{i\} to create a larger linearly independent set. However, what if the smaller set were contained within the bigger set; i.e. what if the two sets were \{i\} and \{i,j\}? We could still add j to \{i\} to create a bigger linearly independent set. On a little experimentation, you will be able to convince yourself that this is a natural property of linearly independent sets in vector spaces.
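Here is a minimal sketch of that exchange step in code, using the same \Bbb{R}^2 example (with i=(1,0) and j=(0,1)); "independent" here just means linearly independent, i.e. the vector matroid.

```python
from sympy import Matrix

def independent(vectors):
    """True if the given vectors are linearly independent (the empty set counts as independent)."""
    if not vectors:
        return True
    M = Matrix([list(v) for v in vectors])
    return M.rank() == len(vectors)

i, j = (1, 0), (0, 1)
B = [i]                      # the smaller independent set {i}
A = [(1, 1), (1, -1)]        # the larger independent set {i+j, i-j}

# The exchange property promises some element of A that extends B; here either one works.
print([a for a in A if independent(B + [a])])   # [(1, 1), (1, -1)]
```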

Now we discuss some more properties of matroids. A subset that is not independent is called, you guessed correctly, a dependent set. A maximal independent set, one that becomes dependent on the addition of any element outside of it, is called a basis. A minimal dependent set, which becomes independent on the removal of any element, is called a circuit. Does a basis, on addition of an element, become a circuit? I don’t know. But I intend to find out.
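To see bases and circuits concretely, here is a minimal sketch on a toy vector matroid of my own choosing: three vectors in \Bbb{R}^2, with the ground set E=\{0,1,2\} labelling them.

```python
from itertools import combinations
from sympy import Matrix

cols = {0: (1, 0), 1: (0, 1), 2: (1, 1)}   # three vectors in R^2

def independent(S):
    if not S:
        return True
    M = Matrix([list(cols[e]) for e in S])
    return M.rank() == len(S)

subsets = [set(S) for r in range(len(cols) + 1) for S in combinations(cols, r)]
indep = [S for S in subsets if independent(S)]

# Bases: maximal independent sets.  Circuits: minimal dependent sets.
bases = [S for S in indep if not any(S < T for T in indep)]
circuits = [S for S in subsets if S not in indep and all(independent(S - {e}) for e in S)]

print(bases)     # [{0, 1}, {0, 2}, {1, 2}]
print(circuits)  # [{0, 1, 2}]
```

(In this tiny example the only circuit happens to be a basis plus one element; I would not read too much into a three-element example.)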
