cozilikethinking


Month: September, 2016

Presheaves that are not sheaves

This is a post about presheaves that are not sheaves. The two axioms that a sheaf satisfies but a presheaf need not are the “gluability” axiom and the identity axiom.
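To fix notation (this is the standard formulation, e.g. the one in Vakil’s notes): let \{U_i\} be an open cover of an open set U. The identity axiom says that if f,g\in F(U) satisfy f|_{U_i}=g|_{U_i} for all i, then f=g. The gluability axiom says that if we are given sections f_i\in F(U_i) with f_i|_{U_i\cap U_j}=f_j|_{U_i\cap U_j} for all i,j, then there is a section f\in F(U) with f|_{U_i}=f_i for all i.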

Gluability- Let F be the presheaf on \Bbb{R} that assigns to each open set U the set F(U) of bounded continuous functions on U. Sections that agree on the pairwise intersections of open sets sometimes cannot be glued together to give a global section; i.e. they cannot be glued together to give a bounded function defined on the whole topological space. For instance, consider the function f(x)=x and the cover \{(k,k+2)\}_{k\in\Bbb{Z}} of \Bbb{R}. The restriction of f(x)=x to each such bounded interval is bounded; hence f(x)|_{(k,k+2)}\in F((k,k+2)), and these restrictions agree on all overlaps. However, on gluing them together over the whole real line, we get an unbounded function, which is not a section of F(\Bbb{R}).

Another example seems slightly trickier to me, and requires some knowledge of complex analysis. Vakil’s book says that “holomorphic functions that admit a square root” form a presheaf and not a sheaf. Note that we just need to consider functions that admit a square root, and not the square roots themselves. Take the function f(z)=z. It admits a square root on both U=\Bbb{C}\setminus \Bbb{R}^+ and V=\Bbb{C}\setminus \Bbb{R}^-. Both U and V are open sets, and the restrictions of f(z)=z to the overlap U\cap V obviously agree. However, when we glue the two pieces together, we get f(z)=z defined on U\cup V=\Bbb{C}\setminus\{0\}, which is holomorphic, but does not admit a holomorphic square root on the punctured plane.
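Explicitly, square roots on the two slit planes can be written down using branches of the logarithm (a quick sketch of a standard fact): on V=\Bbb{C}\setminus\Bbb{R}^-, take \sqrt{z}=e^{\frac{1}{2}\log z} with \arg z\in(-\pi,\pi), and on U=\Bbb{C}\setminus\Bbb{R}^+, take \sqrt{z}=e^{\frac{1}{2}\log z} with \arg z\in(0,2\pi); both satisfy (\sqrt{z})^2=z. On \Bbb{C}\setminus\{0\}, however, any continuous square root of z would have to change sign after traversing a loop around the origin, a contradiction, so no such square root exists.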


Projective varieties

Today I will try and study projective varieties and their ideals. Although my understanding of these objects has improved over time, there are still a lot of chinks that need to be filled.

Something that has defied complete understanding is: what kinds of polynomials are projective varieties the zeroes of? Do these polynomials have to be homogeneous? The point is that at any point of the variety, each homogeneous component of the polynomial has to vanish independently. Hence, a polynomial cutting out a projective variety need not be homogeneous. In fact, the zero set of a sum of homogeneous polynomials, possibly of different degrees, is the intersection of the zero sets of the individual components, and is therefore in general smaller than each of them. This follows from what is written above.
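The reason each homogeneous component has to vanish separately is the usual rescaling argument (over an infinite field such as \Bbb{C}): if F=F_0+F_1+\dots+F_d with F_e homogeneous of degree e, and F vanishes at a projective point with homogeneous coordinates a, then

F(\lambda a)=\sum_{e} \lambda^e F_e(a)=0 \text{ for every } \lambda\neq 0.

A polynomial in \lambda with infinitely many roots is identically zero, so every coefficient F_e(a) is 0.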

What does the ideal corresponding to a projective variety look like? Why does it have to be generated by homogeneous polynomials? Strictly speaking, it does not have to be presented by homogeneous generators; however, it always *can* be. This is because each of the generators, of which there are finitely many, can be broken down into its homogeneous components, and each such component again lies in the ideal. And it is easy to see that the ideal generated by these homogeneous components is the same as the ideal generated by the original generators.
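Spelled out: suppose I=I(X) for a projective variety X and I=(g_1,\dots,g_m), and write g_j=g_{j,0}+g_{j,1}+\dots+g_{j,d_j} as a sum of homogeneous pieces. Each g_{j,e} vanishes on X by the rescaling argument above, so g_{j,e}\in I, and therefore

I=(g_1,\dots,g_m)\subseteq (g_{j,e}: 1\leq j\leq m,\ 0\leq e\leq d_j)\subseteq I,

so the two ideals coincide and I has a finite homogeneous generating set.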

What does the coordinate ring of a projective variety look like? It is constructed in exactly the same way that the coordinate ring of an affine variety is constructed. The only difference is that the ideal by which we quotient the polynomial ring k[x_1,x_2,\dots,x_n] is homogeneous in this case.

But what about the non-homogeneous polynomials in this coordinate ring? Are they even well defined over a projective variety? This is a question I will get back to and resolve as soon as I can.

The Jacobian of a linear map

This is a small blog post. What is the Jacobian of a linear map? Say I have an n\times n matrix- call it A. Also, I have a linear map L:\Bbb{R}^n\to\Bbb{R}^n which is given by v\to Av. What is the Jacobian of L? This is a question that has confused me before.

Write L=(f_1,f_2,\dots,f_n) in components. The function f_i takes a vector (x_1,x_2,\dots,x_n), and maps it to a_{i1}x_1+a_{i2}x_2+\dots+a_{in}x_n. The partial derivative of this function with respect to x_k is just a_{ik}. Hence, the Jacobian of the linear map L is the matrix A itself.
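In matrix form:

(J_L)_{ik}=\frac{\partial f_i}{\partial x_k}=a_{ik}, \quad\text{so}\quad J_L=A

at every point of \Bbb{R}^n; a linear map is its own best linear approximation.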

Proving that an isomorphism between varieties implies an isomorphism between their coordinate rings, and vice-versa

Let X and Y be two algebraic sets in \Bbb{A}^n. Today I will try and prove that X\cong Y\iff \Bbb{C}[x_1,x_2,\dots,x_n]/I(X)\cong \Bbb{C}[x_1,x_2,\dots,x_n]/I(Y). Not writing a full proof of this before has eaten away at my existence for way too long.

First let us assume that X\cong Y. This means that there exists a polynomial map f=(f_1(x_1,x_2,\dots,x_n),f_2(x_1,x_2,\dots,x_n),\dots,f_n(x_1,x_2,\dots,x_n)) from X to Y, with a polynomial inverse. Hence, for any polynomial that is zero on the image of X in Y (we shall call it f(X)), composing with f gives a polynomial that is zero on the whole of X. Hence, we get a map from I(f(X)) to I(X). In this case, f(X)=Y. Hence, we have a map \phi from I(Y) to I(X). Similarly, as there also exists a polynomial map from Y to X, let the map induced from I(X) to I(Y) be called \psi.

Let us explore the isomorphism between X and Y further. Take (a_1,a_2,\dots,a_n)\in X. Let the isomorphism be f:X\to Y, with inverse f^{-1}:Y\to X. By definition, f\circ f^{-1}=id_Y and f^{-1}\circ f=id_X. In particular, f^{-1}\circ f ((a_1,a_2,\dots,a_n))=(a_1,a_2,\dots,a_n). This is obviously not required to hold at points outside X. Viewing f^{-1}\circ f as a polynomial map on all of \Bbb{A}^n, we can write its ith component as x_i+c_i, where c_i is a polynomial; the fact that f^{-1}\circ f is the identity on X says precisely that each c_i vanishes on X, i.e. that c_i\in I(X). The same argument works for f\circ f^{-1}, whose components differ from the coordinate functions by polynomials d_i\in I(Y).

Now let us explore the map from I(Y) to I(X). As we can gauge from the explanation above, any polynomial h(y_1,y_2,\dots,y_n)\in I(Y), when acted on by \psi\circ\phi, gives us h(y_1+d_1,y_2+d_2,\dots,y_n+d_n), where the d_i’s belong to I(Y). Similarly, any polynomial g(x_1,x_2,\dots,x_n)\in I(X), when acted on by \phi\circ\psi, gives us g(x_1+c_1,x_2+c_2,\dots,x_n+c_n), where the c_i’s belong to I(X). Thus, we don’t really get mutually inverse maps between I(X) and I(Y) themselves. However, composing with f and f^{-1} also acts on all polynomials, not just those in the ideals, and h(y_1+d_1,\dots,y_n+d_n)\equiv h(y_1,\dots,y_n) modulo I(Y) (and similarly modulo I(X)). Hence we clearly get an isomorphism \Bbb{C}[x_1,x_2,\dots,x_n]/I(X)\cong \Bbb{C}[x_1,x_2,\dots,x_n]/I(Y)! Hence, we have proved one side of the assertion.

We shall now try and prove the converse. Let us assume that \Bbb{C}[x_1,x_2,\dots,x_n]/I(X)\cong \Bbb{C}[x_1,x_2,\dots,x_n]/I(Y). Let f:\Bbb{C}[x_1,x_2,\dots,x_n]/I(X)\to \Bbb{C}[x_1,x_2,\dots,x_n]/I(Y) and g:\Bbb{C}[x_1,x_2,\dots,x_n]/I(Y)\to \Bbb{C}[x_1,x_2,\dots,x_n]/I(X) be mutually inverse \Bbb{C}-algebra isomorphisms. It is then clear that, choosing polynomial representatives, (g\circ f)(x_i)=x_i+p_i(x_1,\dots,x_n), where p_i(x_1,\dots,x_n) is a polynomial in I(X). Similarly, (f\circ g)(x_i)=x_i+p'_i(x_1,\dots,x_n), where p'_i(x_1,\dots,x_n) is a polynomial in I(Y). Hence, using the maps f and g, we can construct an isomorphism between X and Y.
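To make that last sentence concrete, here is a sketch of the standard construction (the names G_i and H_i are mine): choose polynomials G_i representing f(x_i+I(X)) and polynomials H_i representing g(x_i+I(Y)). For any h\in I(X) we have h(G_1,\dots,G_n)+I(Y)=f(h+I(X))=0, so h(G_1,\dots,G_n) vanishes on Y; hence the polynomial map G=(G_1,\dots,G_n) sends Y into V(I(X))=X, and similarly H=(H_1,\dots,H_n) sends X into Y. Finally, (g\circ f)(x_i)=x_i+I(X) translates into G_i(H_1,\dots,H_n)\equiv x_i modulo I(X), i.e. G\circ H=id_X, and symmetrically H\circ G=id_Y. Hence X\cong Y.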

Now both sides of the assertion have been proved.

Exterior Algebra and Differential Forms I

This is going to be a post about exterior algebra and differential forms. I have studied these concepts multiple times in the past, and feel that I have an idea of what’s going on. However, it would be good to iron out the kinks, of which there are many, once and for all.

For a vector space V, a p-tensor is a multilinear function from V^p to \Bbb{R} (or maybe \Bbb{C}, depending upon the context). For example, a 1-tensor is a linear functional. The determinant, viewed as a function of the n columns of an n\times n matrix, is a famous example of an n-tensor; here, the vector space V has to be n-dimensional too. The space of p-tensors is called \mathfrak{J}^p(V^*). This generalizes the space of linear functionals on a vector space.

Let \{\phi_1,\dots,\phi_k\} be a basis for V^*. Then the p-tensors \{\phi_{i_1}\otimes\dots\otimes \phi_{i_p}:1\leq i_1,\dots,i_p\leq k\} form a basis for \mathfrak{J}^p (V^*). Consequently, \dim \mathfrak{J}^p (V^*)=k^p. Why is this? Why should every p-tensor be a linear combination of tensor products of 1-tensors?

Here V is a k-dimensional vector space, with basis e_1,\dots,e_k dual to the \phi_i, and a p-tensor eats p-tuples of vectors from V. Does the tuple have to consist of basis vectors? No; a p-tensor acts on arbitrary p-tuples. But by multilinearity, a p-tensor is completely determined by its values on p-tuples of basis vectors, taken in every possible order and with repetitions allowed. So to pin down a p-tensor, we just need to prescribe a value for each tuple (e_{i_1},\dots,e_{i_p}). The basis tensor \phi_{i_1}\otimes\dots\otimes\phi_{i_p} is designed to do exactly this: it evaluates to 1 on the tuple (e_{i_1},\dots,e_{i_p}) and to 0 on every other tuple of basis vectors. Scaling and summing these lets us hit any prescribed assignment of values, which is why the tensor products of 1-tensors span \mathfrak{J}^p(V^*); evaluating on basis tuples shows that they are also linearly independent.
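In a formula: for any p-tensor T,

T=\sum_{1\leq i_1,\dots,i_p\leq k} T(e_{i_1},\dots,e_{i_p})\ \phi_{i_1}\otimes\dots\otimes\phi_{i_p},

since both sides are multilinear and agree on every p-tuple of basis vectors.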

Now let us define an alternating tensor. An alternating p-tensor is one for which applying a permutation \sigma to the p-tuple that the tensor is acting on causes the value to be multiplied by (-1)^\sigma. In general, the value of a p-tensor on a tuple has no relation with its value on a permutation of the tuple; hence, this is a special kind of tensor. Any tensor can be mapped to an alternating tensor: act on all permutations of the p-tuple you’re given, attach the sign of each permutation, and add up the results (with a normalizing factor). But shouldn’t this recipe be universal for all p-tensors? It is. There is no fixed or “first” configuration of the input: any p-tuple is broken up into its permutations in the same way, so the construction is applied uniformly to every p-tensor and every input.
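Concretely, with the usual normalization (some authors omit the 1/p!), the alternation operator is

Alt(T)(v_1,\dots,v_p)=\frac{1}{p!}\sum_{\sigma\in S_p}(-1)^\sigma T(v_{\sigma(1)},\dots,v_{\sigma(p)}),

and Alt(T) is alternating for every p-tensor T, with Alt(T)=T when T is already alternating.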

Now let us think about the product of two alternating forms, say a p-tensor \omega and a q-tensor \eta. Their tensor product \omega\otimes\eta is a (p+q)-tensor, but it is not alternating in general, so we alternate it to define the wedge product. Why the division by p! and q!? So that we can eliminate needless repetition: because \omega and \eta are already alternating, each term in the sum over S_{p+q} is repeated p!\,q! times (once for every way of separately permuting the first p slots and the last q slots), and dividing removes this overcounting. The division seemed needless to me at first, but it is what keeps the bookkeeping of the permutations of the (p+q)-tuple consistent.
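In one common convention, the wedge product is

(\omega\wedge\eta)(v_1,\dots,v_{p+q})=\frac{1}{p!\,q!}\sum_{\sigma\in S_{p+q}}(-1)^\sigma\, \omega(v_{\sigma(1)},\dots,v_{\sigma(p)})\,\eta(v_{\sigma(p+1)},\dots,v_{\sigma(p+q)}),

which, by the remark above, is the same as summing only over the (p,q)-shuffles with coefficient 1. (Other authors use the normalization \omega\wedge\eta=Alt(\omega\otimes\eta) instead, which differs by a constant factor.)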

The space of p-tensors that are also alternating is denoted \bigwedge^p(V^*). Its dimension is, predictably, {k\choose p}. The point is that an alternating p-tensor is determined by its values on p-tuples of distinct basis vectors of V listed in increasing order of index: any other tuple of basis vectors either repeats a vector, in which case the value is 0, or can be permuted into increasing order at the cost of a sign.
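A basis is given by the wedge products of distinct basis covectors with increasing indices:

\{\phi_{i_1}\wedge\dots\wedge\phi_{i_p}: 1\leq i_1<i_2<\dots<i_p\leq k\}, \quad\text{so}\quad \dim \bigwedge^p(V^*)={k\choose p}.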

Now we shall talk about p-forms, which are differential forms of degree p. Let X be a smooth manifold with or without boundary. A p-form on X is a function \omega that assigns to each point x\in X an alternating p-tensor \omega(x) on the tangent space of X at x; \omega(x)\in \bigwedge^p[T_x(X)^*]. At each point, it’s just an alternating p-tensor! The space it lives in has a basis that is smaller in cardinality than the basis of the space of all p-tensors, and it can be constructed from the tensor products of the basis elements of the dual space. What makes it so intimidating? The layers of new machinery. Think of the basis elements of the differential form as a bunch of honey traps that will take care of all the pieces and give you exactly what you want; the “pieces” are the input vectors split in terms of the basis vectors of the tangent space. All these operations are happening on vectors of the tangent space at the point x.
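In local coordinates x_1,\dots,x_n on X, this means a p-form can be written as

\omega=\sum_{1\leq i_1<\dots<i_p\leq n} f_{i_1\dots i_p}\ dx_{i_1}\wedge\dots\wedge dx_{i_p},

where the f_{i_1\dots i_p} are functions on the coordinate patch and dx_1,\dots,dx_n give, at each point x, the basis of T_x(X)^* dual to the coordinate vector fields \partial/\partial x_1,\dots,\partial/\partial x_n.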

What are 0-forms? At each point, they don’t take in any vectors, so \omega(x) is just a number. Hence a 0-form is simply a function on X; it need not be constant, since the number assigned can vary from point to point.

What about 1-forms? At each point, they take in one vector from the tangent space and map it linearly to the base field (here \Bbb{R}). It turns out that many examples of 1-forms can be manufactured from smooth functions. If \phi:X\to \Bbb{R} is a smooth function, where X is the smooth manifold, then d\phi_x: T_x(X)\to\Bbb{R} is a linear map at each point x. Thus the assignment x\to d\phi_x defines a 1-form d\phi on X.
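In local coordinates, this 1-form is the familiar differential:

d\phi=\frac{\partial \phi}{\partial x_1}dx_1+\dots+\frac{\partial \phi}{\partial x_n}dx_n.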

This discussion on differential forms will be continued in the next post.

Sweating out the homology

This is going to be a rather long post on homology. I hope I do manage to understand it. It will ultimately go up in a polished form on my blog. The reason why it is difficult to understand homology and cohomology without typing it all out is that the information given is so little. One has to construe so much from relatively dry language. I think that is the place where writing things out would help tremendously.

If two spaces X and Y are homotopy equivalent, then their homology groups are isomorphic. If the homology groups differ, then the spaces cannot be homotopy equivalent, and in particular cannot be homeomorphic.

A chain map f:C_*\to D_* is a collection of maps f_n:C_n\to D_n, one for each n. Hence, a chain map encodes information about an infinite number of maps between the chain groups. Also, these maps commute with the boundary maps. What does it really mean for a map to commute? It means that the maps \partial and f literally commute: \partial f_n(x)=f_{n-1}(\partial x) for all x\in C_n. Hence the name commutative diagrams. This is honestly the first time that I have thought of this, in spite of having read about commutative diagrams all this bloody while. Maybe typing does have its benefits.

What kinds of maps take cycles to cycles and boundaries to boundaries? Chain maps definitely do, because the commuting structure is present throughout, above and below each degree. Other kinds of maps may have similar properties, but with such a commuting structure we always get a natural induced map between homology groups. It is only because of the commutativity that the map is well defined; i.e. that cycles in Z_n(C_*) go to cycles in Z_n(D_*) and boundaries in B_n(C_*) go to boundaries in B_n(D_*).

We now construct a functor: for each n, there is a functor H_n that sends a chain complex C_* to the group H_n(C_*) and a chain map f: C_*\to D_* to a map H_n (f): H_n (C_*)\to H_n (D_*). Do we have to have an infinite number of functors to be able to capture all of this information? Yes, in the sense that there is a separate functor H_n for each degree n. H_n is known as the nth homology functor.
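On elements, the induced map is the obvious one:

H_n(f)([z])=[f_n(z)] \text{ for a cycle } z\in Z_n(C_*),

and it is well defined precisely because f_n takes cycles to cycles and boundaries to boundaries, as discussed above.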

Now we talk about chain homotopic maps. Two maps f,g: C_*\to D_* are chain homotopic if there exist maps h_n: C_n\to D_{n+1} such that f_n-g_n=h_{n-1}\partial_n+\partial_{n+1}h_n. What does it mean for two maps to be chain homotopic? And why should such h_n‘s exist? In understanding this, this answer came in handy. Essentially, two chain homotopic maps induce the same map between homology groups. Why is that? This is because f(z)-g(z), for z\in Ker\,\partial _n, belongs to B_n (D_*). Hence, f and g induce the same maps between the homology groups.
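Spelling out that last step: if z\in Z_n(C_*), i.e. \partial_n z=0, then

f_n(z)-g_n(z)=h_{n-1}\partial_n z+\partial_{n+1}h_n z=\partial_{n+1}(h_n z)\in B_n(D_*),

so [f_n(z)]=[g_n(z)] in H_n(D_*).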

Just a quick note: the kernels and the cokernels in the snake lemma and other lemmas are all with respect to the boundary maps, and not the chain maps. Why is that? Because the chain maps just act like connecting and commuting linkages. The main action happens with sets related to the boundary maps- like the kernels and the cokernels.

Now I shall study the snake lemma. With this lemma, there is one thing that has always confused me- how is the connecting map well-defined? What if the difference of two choices goes to 0? What does all this mean? OK. First of all, in whichever direction we go in the process of diagram chasing, we might or might not have a well-defined choice. We always have to check for a well-defined choice. Checking in some instances is easier than checking in others. In this case, everything works out because the lower left map is injective and the whole diagram is commutative. Sorry, this is not a completely rigorous or complete explanation, but it has made me understand something that I was at a loss to understand for far too long.

Now we shall talk about the long exact homology sequence. Say you have a short exact sequence of chain complexes 0\to C_*\to D_*\to E_*\to 0. How is the long exact homology sequence induced? The main issue here is the construction of the connecting map H_n(E_*)\to H_{n-1}(C_*). This is done by applying the snake lemma to the usual commutative diagram built from the short exact sequence (diagram omitted).
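The resulting long exact sequence is

\dots\to H_n(C_*)\to H_n(D_*)\to H_n(E_*)\xrightarrow{\partial} H_{n-1}(C_*)\to H_{n-1}(D_*)\to H_{n-1}(E_*)\to\dots

where the unlabelled maps are induced by the chain maps in the short exact sequence and \partial is the connecting map produced by the snake lemma.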

What does it mean for a group to split as a direct sum of subgroups

Today I shall be talking about what it means for a group to split into a direct sum. In other words, if G=\bigoplus\limits_{i\in I} G_i, then what does it mean for the structure of the group?

G is obviously not just the union \bigcup\limits_{i\in I} G_i; in general it is a much bigger set than that. But it does contain the G_i as subgroups. So what? Can we write any group as the direct sum of an arbitrary family of its subgroups? No. Saying that G=\bigoplus\limits_{i\in I} G_i means that we can write any element of the group uniquely as a finite sum of elements of the G_i‘s. This is a much stronger statement.

The uniqueness is what is important here. Existence alone is not special: whenever the subgroups G_i together generate G, every element of the group can be written as a finite sum of elements of the G_i‘s; the direct sum condition demands that this expression be unique.
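For abelian groups written additively (the usual setting for direct sums), this can be packaged as follows:

G=\bigoplus\limits_{i\in I}G_i \iff \text{the } G_i \text{ generate } G \text{ and } G_i\cap\sum\limits_{j\neq i} G_j=\{0\} \text{ for every } i\in I.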

Ramblings on quasi-projective varieties

This blog post is meant to be an exposition on quasi-projective varieties, something that I am having problems understanding. A quasi-projective variety is a locally closed subvariety of a projective space. What does that mean? It means that it is the intersection of a Zariski open set and a Zariski closed set in some projective space. Does this align with what a locally closed set means in general topology? This article would confirm that this is indeed the case.

The Wikipedia article on quasi-projective varieties states that affine space is an open set in projective space. How is this? I can surely understand that an affine space can be embedded in projective space. Then? Oh c’mon. Say the affine space \Bbb{A}^n is embedded as the chart U_i in \Bbb{P}^n. Its complement is the hyperplane z_i=0, which is a Zariski closed set. Hence, being the complement of a closed set, the affine chart is open.
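Explicitly, in homogeneous coordinates [z_0:\dots:z_n],

U_i=\{[z_0:\dots:z_n]\in\Bbb{P}^n: z_i\neq 0\}\cong \Bbb{A}^n, \quad [z_0:\dots:z_n]\mapsto \left(\frac{z_0}{z_i},\dots,\frac{z_{i-1}}{z_i},\frac{z_{i+1}}{z_i},\dots,\frac{z_n}{z_i}\right),

and its complement \Bbb{P}^n\setminus U_i=\{z_i=0\}\cong\Bbb{P}^{n-1} is closed.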

As any affine variety in \Bbb{A}^n can be embedded in \Bbb{P}^n as the intersection of its topological closure in \Bbb{P}^n and the affine chart U_i, we see that any affine variety is a quasi-projective variety.

Notes from my last meeting with Pete

b_{i,i+2}=h^1(\bigwedge^{i+1} M_L(1))

Using \bigwedge^i E\cong\bigwedge^{n-i}E^*\otimes \text{det }E: \quad h^1(\bigwedge^{i+1} M_L(1))=h^1(\bigwedge^{n-i-1}M^{*}_L)=(by Serre duality) h^0(K_C\otimes \bigwedge^{n-i-1}M_L)

0\to M_L\to \Gamma\to L\to 0

0\to M_L\otimes K\to \Gamma\otimes K\to L\otimes K\to 0

Estimate h^0(M_L\otimes K)\approx h^0(\Gamma\otimes K)-h^0(L\otimes K) (we can refine this later)

h^0(\Gamma\otimes K)=(\text{dim}\Gamma)h^0(K)=(n+1)g (Riemann’s result, h^0(K)=g)

h^0(L\otimes K)=(by Riemann-Roch) (d+2g-2)-g+1=d+g-1

h^0(M_L\otimes K)\approx (n+1)g-(d+g-1)

As described in Hirzebruch’s book, from 0\to M_L\to\Gamma\to L\to 0, we get 0\to \bigwedge^2 M_L\to \bigwedge^2 \Gamma\to M_L\otimes L\to 0, and then 0\to \bigwedge^2 M_L\otimes K\to \bigwedge^2 \Gamma\otimes K\to M_L\otimes L\otimes K\to 0

There are two approaches that we can take:

1. Tensor (0\to M_L\otimes K\to\Gamma\otimes K\to L\otimes K\to 0) by L and compute
2. Riemann-Roch for vector bundles: we have d+r(1-g) instead of d+1-g. Here, r is the rank.

0\to A\to B\to C\to 0

\text{deg}(B)=\text{deg} (A)+\text{deg} (C)

\text{deg}(E\otimes L)=\text{deg}(E)+r\cdot\text{deg}(L)

\text{deg}(\Gamma)=0