Exterior Algebra and Differential Forms I

by ayushkhaitan3437

This is going to be a post about exterior algebra and differential forms. I have studied these concepts multiple times in the past, and feel that I have an idea of what’s going on. However, it would be good to iron out the kinks, of which there are many, once and for all.

For a vector space V, a p-tensor is a multilinear function V^p\to\Bbb{R} (or \Bbb{C}, depending upon the context). For example, a 1-tensor is just a linear functional. The determinant of an n\times n matrix, viewed as a multilinear function of its n columns, is a famous example of an n-tensor; here the vector space V has to be n-dimensional too. The space of p-tensors is denoted \mathfrak{J}^p(V^*). It generalizes V^*, the space of linear functionals on the vector space.

Let \{\phi_1,\dots,\phi_k\} be a basis for V^*. Then the p-tensors \{\phi_{i_1}\otimes\dots\otimes \phi_{i_p}:1\leq i_1,\dots,i_p\leq k\} form a basis for \mathfrak{J}^p (V^*). Consequently, \dim \mathfrak{J}^p (V^*)=k^p. Why is this? Why should every p-tensor be a linear combination of tensor products of 1-tensors?

Vectors here live in V, which we take to be k-dimensional (so that V^* is k-dimensional as well, matching the basis above). A p-tensor eats a p-tuple of these vectors. Does this tuple have to consist of basis vectors, or can it contain any vectors of V? It can contain any vectors whatsoever. The key observation is that, by multilinearity, a p-tensor is completely determined by its values on p-tuples of basis vectors, taken in every possible order and with repetition allowed. So to build a basis for \mathfrak{J}^p(V^*), we want tensors that single out one such tuple at a time: the tensor \phi_{i_1}\otimes\dots\otimes\phi_{i_p} takes the value 1 on the tuple (e_{i_1},\dots,e_{i_p}) and 0 on every other tuple of basis vectors. In other words, it is non-zero only when the indices of the input tuple match its own indices exactly. Any p-tensor T can then be written as a sum of these basis tensors, each multiplied by the appropriate scalar T(e_{i_1},\dots,e_{i_p}). Every possible tuple of basis vectors, and not just each basis vector individually, needs to be assigned a value; that is how we cover all possibilities, and it is why there are k^p basis tensors.
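Here is a minimal numerical sketch of this bookkeeping (my own illustration, not from the original discussion; it assumes V=\Bbb{R}^k with the standard basis and takes p=2): a p-tensor is pinned down by the k^p numbers it assigns to p-tuples of basis vectors, and multilinearity recovers its value on arbitrary vectors.

```python
# Sketch: a 2-tensor on R^k stored as its k^p values on basis tuples,
# evaluated on arbitrary vectors by multilinearity.
import itertools
import numpy as np

k, p = 3, 2                                   # dim V and the order of the tensor
rng = np.random.default_rng(0)

# T is determined by the k^p numbers T(e_i, e_j); store them in a k x k array.
T_on_basis = rng.standard_normal((k, k))

def evaluate(T_on_basis, vectors):
    """T(v, w) = sum_{i,j} v_i * w_j * T(e_i, e_j), by multilinearity."""
    total = 0.0
    for idx in itertools.product(range(k), repeat=p):
        coeff = np.prod([vectors[a][idx[a]] for a in range(p)])
        total += coeff * T_on_basis[idx]
    return total

v, w = rng.standard_normal(k), rng.standard_normal(k)
print(np.isclose(evaluate(T_on_basis, [v, w]), v @ T_on_basis @ w))   # True
```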

Now let us define alternating tensors. An alternating p-tensor is one for which applying a permutation \sigma to the p-tuple it acts on multiplies its value by the sign (-1)^\sigma. In general, the value of a p-tensor on a tuple has no relation to its value on a permutation of that tuple, so this is a special kind of tensor. However, any p-tensor T can be turned into an alternating one: take all permutations of the input p-tuple, evaluate T on each, attach the sign of the permutation, and add everything up (usually with a normalizing factor of 1/p!). In symbols, Alt(T)(v_1,\dots,v_p)=\frac{1}{p!}\sum_{\sigma}(-1)^\sigma T(v_{\sigma(1)},\dots,v_{\sigma(p)}). This recipe is universal for all p-tensors: there is no fixed or “first” configuration of the input. Whatever p-tuple Alt(T) is eventually fed, it is broken into all of its permutations and treated in exactly the same way.
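A small sketch of this alternation recipe (my own code; note that conventions vary, and I include the 1/p! normalization while some authors omit it):

```python
# Alt(T)(v_1,...,v_p) = (1/p!) * sum over permutations sigma of
# sign(sigma) * T(v_sigma(1),...,v_sigma(p)).
import itertools
import math

def sign(perm):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def alternate(T, p):
    """Turn a p-tensor (a multilinear callable on p vectors) into an alternating one."""
    def alt_T(*vectors):
        total = 0.0
        for perm in itertools.permutations(range(p)):
            total += sign(perm) * T(*[vectors[i] for i in perm])
        return total / math.factorial(p)
    return alt_T

# T(v, w) = v[0]*w[1] is not alternating; its alternation is (v[0]*w[1] - v[1]*w[0]) / 2.
T = lambda v, w: v[0] * w[1]
A = alternate(T, 2)
print(A((1, 0), (0, 1)), A((0, 1), (1, 0)))   # 0.5 -0.5: swapping the inputs flips the sign
```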

Now let us think about combining two alternating tensors, a p-tensor T and a q-tensor S. Their tensor product T\otimes S is a (p+q)-tensor, but it is not alternating in general, so we alternate it; the result, suitably normalized, is the wedge product T\wedge S. Why the division by p! and q! in the usual formula? So that we can eliminate needless repetition: since T and S are already alternating, the permutations of the (p+q)-tuple that merely reshuffle vectors within the first p slots or within the last q slots produce the same terms over again, and the division counts each genuinely different term exactly once. The division seemed needless to me at first, but it is just a matter of being consistent about how the permutations of the (p+q)-tuple are represented and counted.
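Here is a rough sketch of the wedge product in code (again my own illustration; conventions differ between books, and I use T\wedge S=\frac{(p+q)!}{p!q!}\mathrm{Alt}(T\otimes S), which is where the division by p! and q! shows up):

```python
# Wedge of an alternating p-tensor T and an alternating q-tensor S:
# (T ^ S)(v_1,...,v_{p+q}) = (1/(p! q!)) * sum over permutations sigma of
# sign(sigma) * T(v_sigma(1),...,v_sigma(p)) * S(v_sigma(p+1),...,v_sigma(p+q)).
import itertools
import math

def sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def wedge(T, p, S, q):
    def TS(*vectors):
        total = 0.0
        for perm in itertools.permutations(range(p + q)):
            shuffled = [vectors[i] for i in perm]
            total += sign(perm) * T(*shuffled[:p]) * S(*shuffled[p:])
        # Dividing by p! q! removes the repeated terms coming from permutations
        # that only reshuffle inputs within the T-block or within the S-block.
        return total / (math.factorial(p) * math.factorial(q))
    return TS

# phi_1 and phi_2 as 1-tensors on R^2; their wedge is the 2x2 determinant.
phi1 = lambda v: v[0]
phi2 = lambda v: v[1]
det2 = wedge(phi1, 1, phi2, 1)
print(det2((2, 3), (4, 5)))   # 2*5 - 3*4 = -2.0
```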

The space of p-tensors that are also alternating is denoted \bigwedge^p(V^*). Its dimension is {n\choose p}, where n=\dim V: a basis is given by the wedge products \phi_{i_1}\wedge\dots\wedge\phi_{i_p} with strictly increasing indices i_1<\dots<i_p. Why do only increasing index tuples appear? Because an alternating tensor is already determined by its values on tuples of basis vectors of V whose indices are in increasing order: any other tuple of distinct basis vectors can be permuted into increasing order at the cost of a sign, and a tuple with a repeated basis vector is sent to zero.
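As a quick sanity check of the count (my own snippet), the basis elements correspond to strictly increasing index tuples, and there are exactly {n\choose p} of those:

```python
# Basis elements of the alternating p-tensors <-> strictly increasing index tuples.
import itertools
import math

n, p = 4, 2
increasing = list(itertools.combinations(range(1, n + 1), p))
print(increasing)                           # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(len(increasing) == math.comb(n, p))   # True: 6 = C(4, 2)
```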

Now we shall talk about p-forms, which are the differential forms of degree p. Let X be a smooth manifold, with or without boundary. A p-form on X is a function \omega that assigns to each point x\in X an alternating p-tensor \omega(x) on the tangent space of X at x; that is, \omega(x)\in \bigwedge^p[T_x(X)^*]. At each point it’s just an alternating p-tensor! It has a basis that is smaller in cardinality than the basis of a general p-tensor, and it can be constructed from wedge products of the basis elements of the dual space. What makes it so intimidating? The layers of new machinery. Think of the basis elements of the differential form as a bunch of honey traps that take care of all the pieces and give you exactly what you want, where the “pieces” are the input vectors split in terms of the basis vectors. All of these operations happen on vectors of the tangent space at the point x.
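To make the honey-trap picture concrete, here is a small example of my own (using the wedge-product convention from above): take V=\Bbb{R}^3 with dual basis \phi_1,\phi_2,\phi_3. The alternating 2-tensor \phi_1\wedge\phi_3 acts on a pair of vectors v,w by (\phi_1\wedge\phi_3)(v,w)=v_1w_3-v_3w_1. It “traps” only the first and third components of its inputs and ignores everything else; a general alternating 2-tensor on this V is a linear combination of the three such traps \phi_1\wedge\phi_2, \phi_1\wedge\phi_3 and \phi_2\wedge\phi_3.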

What are 0-forms? A 0-tensor takes in no vectors at all, so at each point a 0-form is just a number. A 0-form therefore assigns a number to each point of X; in other words, 0-forms are simply real-valued functions on X.

What about 1-forms? At each point, a 1-form takes in one vector from the tangent space and maps it linearly to the base field (here \Bbb{R}). It turns out that many examples of 1-forms can be manufactured from smooth functions. If \phi:X\to \Bbb{R} is a smooth function, where X is the smooth manifold, then the derivative d\phi_x: T_x(X)\to\Bbb{R} is a linear map at each point x. Thus the assignment x\mapsto d\phi_x defines a 1-form d\phi on X.
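For concreteness, here is a small example of my own: take X=\Bbb{R}^2 and \phi(x,y)=x^2y. At a point (x,y), the derivative d\phi_{(x,y)}:\Bbb{R}^2\to\Bbb{R} is the linear map d\phi_{(x,y)}(v)=2xy\,v_1+x^2\,v_2, which one usually abbreviates as d\phi=2xy\,dx+x^2\,dy, where dx and dy are the 1-forms obtained in the same way from the coordinate functions. As (x,y) varies, this assigns an alternating 1-tensor to each tangent space, which is exactly what a 1-form is supposed to do.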

This discussion on differential forms will be continued in the next post.
