### Experiments in Learning Math

I’ve been away for some time. I went to India for a month and a half to attend my sister’s wedding. I could only access internet regularly from my mobile phone, and hence couldn’t blog about what I’ve been up to. Now that I’m back in the United States, I intend to make up for that deficit.

During the holidays, at least in the first half, I got some time to reflect on the Math that I know and understand. I suddenly realized that although I do have a passing familiarity with some areas of Math, there are very few things in it that I deeply understand. Most of my mathematical knowledge comprises words that I may have seen somewhere but do not understand, and theorems whose statements I may have crammed but don’t know how to prove. I would stick my neck out and say that this is true for many math students, at least in the initial stages.

I spent a lot of time studying Hartshorne’s Chapter 2 on schemes, and other books including Bredon’s book on Geometry. After a couple of weeks of reading those, I can safely say that I retain only a skeleton of the statements of the theorems. This statement is broadly true for most math that I’ve studied.

I then adopted a slightly different approach. I would take a topic that I had a passing familiarity with, and then try to develop it on my own. I spent a considerable amount of time stating and proving Liouville’s theorem, the fact that a holomorphic function defined on an open set has zero integral around a triangle (Goursat’s theorem), etc. I felt that for the first time in my life, I really understood these fairly elementary theorems. I have been trying this method for about a month. I can safely say that I understand Math much better than I used to. I try to make minimal use of books to learn facts. I first try and develop concepts, definitions, theorems and proofs, and then refer to books to try and see where I went wrong.

This approach has especially come in handy in understanding more abstract forms of Math like Algebraic Geometry and Category Theory. These disciplines are infamous for seemingly arbitrary definitions. In developing these concepts on my own, I could sometimes see the need for said definitions by comparing them with other possible definitions. Dry and unmotivated Math often pushes students away from certain fields, which is a tragedy, as those techniques could have been useful in many fields of research. Trying to develop those concepts independently somewhat attenuates this problem, and gives the prospective mathematician his/her own perspective on things, which is perhaps the most useful thing he/she could have.

More important than understanding Math better, is the fact that now I enjoy Math and other disciplines more. I feel an urge to take an exercise book, develop whatever trivial/obvious concepts I can, and see how far I can go. I don’t yet know whether this is a good way to learn Math… only time will tell. However, this certainly is a much, much more enjoyable way.

### Decisions decisions

Hi! I’ve decided to join Penn State for a PhD in Mathematics this fall. I am excited about this leg of my life. Thanks all.

-Ayush

### Examples- II

The set of irritating examples continues:

1. $V(I\cap J)=V(I.J)=V(I)\cup V(J)$: let $I$ be the ideal generated by the polynomial $x+y$ and $J$ be the ideal generated by the polynomial $x-y$. Then $I\cap J$ consists of the polynomials that are present in both ideals. As $I$ and $J$ are both prime ideals, their intersection is exactly the product of the two ideals. When we take a product of two ideals, its set of common zeroes is the union of the sets of zeroes of the individual ideals. Hence, we get $V(I\cap J)=V(I)\cup V(J)$.

Why is the intersection of two prime ideals equal to their product? It is easy to see that the product of the two ideals would be contained within the intersection. But what if the intersection is bigger than the product?
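For this particular pair of ideals, the question can be settled by hand. The key fact is coprimality of the generators in the UFD $k[x,y]$, not primeness of the ideals: for $I=J=(x)$, both prime, the intersection $(x)$ is strictly bigger than the product $(x^2)$. For principal ideals in a UFD, the intersection is generated by the lcm of the generators:

```latex
% In a UFD, the intersection of principal ideals is generated by the lcm:
\[ (f)\cap(g) = \big(\operatorname{lcm}(f,g)\big). \]
% For the non-associate irreducibles f = x+y and g = x-y (char k != 2):
\begin{align*}
(x+y)\cap(x-y) &= \big(\operatorname{lcm}(x+y,\,x-y)\big)\\
               &= \big((x+y)(x-y)\big) = (x^2-y^2) = I\cdot J.
\end{align*}
```

So here the intersection really is the product, but because $x+y$ and $x-y$ share no irreducible factor, not because $I$ and $J$ are prime.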

2. Explicitly write down a morphism between two varieties, that leads to a morphism between their coordinate rings: Consider the map $t\to (t,t^2)$, which is a morphism between the affine varieties $\Bbb{R}$ and $V(y-x^2)\subset \Bbb{R}^2$. We shall now construct a morphism between the coordinate rings $\Bbb{R}[x,y]/(y-x^2)$ and $\Bbb{R}[x]/(0)=\Bbb{R}[x]$. How do we go about doing that?
Consider any polynomial in $\Bbb{R}[x,y]/(y-x^2)$; say something of the form $x^2+y^2$. We can now replace $x$ by $t$ and $y$ by $t^2$. That is how we get a morphism from $\Bbb{R}[x,y]/(y-x^2)$ to $\Bbb{R}[x]$.

Now we shall start with a morphism between coordinate rings. Consider the morphism $\Bbb{R}[x,y]/(y-x^2) \to \Bbb{R}[x]$. Let $x$ be mapped to $x$ and let $y$ be mapped to $x^2$. We can see why the ideal $(y-x^2)$ goes to $0$. Hence, this map is well-defined. Now we need to construct a morphism between the varieties $V((0))$ and $V(y-x^2)$. We may take $t\to (t,t^2)$. In general, if we have a map $\Bbb{C}[x_1,x_2,\dots,x_m]/I\to \Bbb{C}[x_1,x_2,\dots,x_n]/J$, with the corresponding mappings $x_i\to p_i(x_1,x_2,\dots,x_n)$, then the map between the varieties $V(J)\to V(I)$ is defined as follows: $(a_1,a_2,\dots,a_n)\to (p_1(a_1,a_2,\dots,a_n), p_2(a_1,a_2,\dots,a_n),\dots, p_m(a_1,a_2,\dots,a_n))$. How do we know that the image of the point belongs to $V(I)$? This is not a difficult argument: it follows from the fact that every polynomial in $I$ maps to a polynomial in $J$ (as the mapping between the coordinate rings is well-defined). We shall replicate that argument here.

Let $f\in I$. Then $f(x_1,x_2,\dots,x_m)$ is mapped to $f(p_1(x_1,x_2,\dots,x_n), p_2(x_1,x_2,\dots,x_n),\dots,p_m(x_1,x_2,\dots,x_n))$, which is a polynomial in $J$. This polynomial vanishes at the point $(a_1,a_2,\dots,a_n)$, as that point lies in $V(J)$. Hence, $f(p_1(a_1,a_2,\dots,a_n), p_2(a_1,a_2,\dots,a_n),\dots,p_m(a_1,a_2,\dots,a_n))=0$. This proves that the image of $(a_1,a_2,\dots,a_n)$ is again a point in $V(I)$, and we have defined a map between the varieties $V(J)$ and $V(I)$.
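A quick computational check of this correspondence, using sympy (the polynomials and sample point are my own choices):

```python
from sympy import symbols, expand

x, y, t = symbols('x y t')

# Pullback of the coordinate-ring map x -> t, y -> t^2:
# the generator y - x^2 of the ideal must map to 0 for the map
# R[x,y]/(y - x^2) -> R[t] to be well-defined.
generator = y - x**2
image = expand(generator.subs({x: t, y: t**2}))
print(image)  # 0

# The induced map on varieties sends a point a of V((0)) = R
# to (p1(a), p2(a)) = (a, a^2), which indeed satisfies y - x^2 = 0.
a = 5
point = (a, a**2)
print(generator.subs({x: point[0], y: point[1]}))  # 0
```

The substitution doing all the work here is exactly the "replace $x$ by $t$ and $y$ by $t^2$" step described above.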

Where does the isomorphism figure in this picture? One step at a time, young Padawan.

3. Explicit example of a differential form: $(x+y+z)dx+(x^2+y^2+z^2)dy+(x^3+y^3+z^3)dz$. A differential form is just a bunch of functions multiplied with $dx,dy,dz$.

4. Explicit example of the snake lemma in action: I am going to try and talk a bit about Jack Schmidt’s answer [here](http://math.stackexchange.com/questions/182562/intuition-behind-snake-lemma). It promises to be very illustrative.

### Irritating set of examples- I

I am trying to collect explicit examples for concepts and calculations. My hope is that this website becomes a useful repository of examples for anyone looking for them on the internet.

First some words of wisdom from the master himself, Professor Ravi Vakil: “Finally, if you attempt to read this without working through a significant number of exercises, I will come to your house and pummel you with the EGA until you beg for mercy. As Mark Kisin has said, ‘You can wave your hands all you want, but it still won’t make you fly.’”

1. Can we have two products of the same two objects, say $A$ and $B$, in the same category? This question is much more general than I am making it out to be. Can we have two distinct universal objects of the same kind in a category (although they may be isomorphic, and even via a unique isomorphism)? The only example I can think of, of products of the same two objects being isomorphic but not the same, is the following: $A\times B$ and $B\times A$. These aren’t the same objects, but they’re isomorphic via a unique isomorphism.

2. Groupoid- In the world of categories, a groupoid is a category in which all morphisms between objects are isomorphisms. An example of a groupoid, which is not a group, is the category $\mathfrak{Set}$ with the following restriction: $\text{Hom}(A,B)$ now only consists of isomorphisms, and not just any morphisms. This example, although true, is not very illustrative. This [link](http://mathoverflow.net/questions/1114/whats-a-groupoid-whats-a-good-example-of-a-groupoid) provides a much better demonstration of what is going on. Wikipedia says that the way in which a groupoid is different from a group is that the product of some pairs of elements may not be defined. The Overflow link suggests the same thing. You can’t take any pair of moves that one may make on the current state of the jigsaw puzzle, and just compose them. The most important thing to note here is that the elements of the group do not correspond to objects of the categories. They correspond to morphisms between those objects. This is the most diabolical shift of perspective that one encounters while dealing with categories. Suddenly, morphisms encode much more information than you expect them to.

3. Algebraic Topology example: Consider a category in which points are objects of the category, and the paths between points, up to homotopy, are morphisms. This is a groupoid, as paths between points are invertible: one may consider the same path, just travelled in the opposite direction (the return path should not wrap around a wayward hole, obviously). The automorphism group of a point would be the fundamental group based at that point.

Another category that stems from Algebraic Topology is one in which all objects are topological spaces, and the morphisms are the continuous maps between those spaces. Predictably, the isomorphisms are the homeomorphisms.

4. Subcategory: An example would be one in which objects are sets with cardinality $1$, and morphisms would be the same as those defined in the parent category- $\mathfrak{Set}$.

5. Covariant functor: Consider the forgetful functor from $\mathfrak{Vec}_k$ to $\mathfrak{Set}$, which sends a vector space to its underlying set. The co-domain is bigger than the domain. As this functor is faithful, one could think of it as an embedding (although it is not full).

A topological example is the following: one which sends the topological space $X$, with the choice of a point $x_0$, to the object $\pi(X,x_0)$. How does this functor map morphisms? It just maps paths in $X$ to their images under the same continuous map. How do we know that the image is a path? This is easy to see: the composite is ultimately just a continuous map from $[0,1]$ to the image space, and we are done. Do we have to choose a point in each topological space? Yes. What if we have the following two tuples $(X,x_0), (Y,y_0)$, such that $x_0$ is not mapped to $y_0$? Then there is no morphism between these two objects. In other words, the set of morphisms $\text{Hom}((X,x_0), (Y,y_0))$ consists of only those morphisms which map $x_0$ to $y_0$. An illustrative example is the following: $f_1: t\to (\cos (2\pi t),\sin (2\pi t))$ and $f_2: t\to (\cos (4\pi t),\sin (4\pi t))$, both defined on $[0,1]$. These are two different continuous maps between the same two topological spaces. They both map $0$ to the point $(1,0)$ in $S^1$, but they map a path traversing $[0,1]$ from one endpoint to the other to loops that wind around $S^1$ once and twice respectively.

Side note: Example of two homotopic paths being mapped to homotopic paths under a continuous map. Let $f: [0,1]\to S^1$ be the continuous map under consideration. Consider any path $p$ in $[0,1]$ which starts and ends at $0$. We know that this is homotopic to the constant path at $0$ (one may visualize the homotopy as shrinking this path successively toward $0$). Then this homotopy is mapped by $f$ to a homotopy in $S^1$ that shrinks the image path $f\circ p$ toward the constant path at $f(0)$.

6. Contravariant functors: Mapping a vector space to its dual. This example is pretty self-explanatory.

7. Natural Transformation: A natural transformation is a morphism between functors. Abelianization is a common example of a natural transformation. The two functors, both of which are covariant, are $id$ and $id^{ab}$. The first one maps a group to itself, and the second one maps a group to its abelianization $G^{ab}=G/[G,G]$. The resultant commutative diagram is easy to see too. The data of the natural transformation is just the quotient maps $m:G\to G^{ab}$ and $m':G'\to G'^{ab}$.

The double dual of a vector space is another example of a natural transformation. The dual would have worked too, except for the fact that the dual functor is contravariant. Note: one of the functors, in both these natural transformations, is the identity functor.

8. Equivalence of categories- This is exactly what you think it is. Two categories that are not equivalent are $\mathfrak{Grp}$ and $\mathfrak{Grp^{ab}}$. Too much information is lost while abelianizing the group, which cannot be regained easily.

9. Initial object- The empty set is the initial object in the category $\mathfrak{Set}$. Why not a singleton? Because the map from the initial object to any object also has to be unique, and there are many maps from a singleton to a set with more than one element. Moreover, a singleton does not map to every object: there is no map from a singleton to the empty set. And an initial object should map to all objects.

10. Final object- A singleton will be a good final object in the category $\mathfrak{Set}$.

11. Zero object- The identity element in the category $\mathfrak{Grp}$ would be such an object.

12. Localization through universal property: Consider $\Bbb{Z}$, with the multiplicative subset $\Bbb{Z}-\langle 7\rangle$. The embedding $\iota: \Bbb{Z}\to \Bbb{Q}$ ensures that every non-zero integer goes to an invertible element. Trivially, so does every element of $\Bbb{Z}-\langle 7\rangle$. Hence, there exists a unique map from $(\Bbb{Z}-\langle 7\rangle)^{-1}\Bbb{Z}$ to $\Bbb{Q}$. We can clearly see that this is overkill: many more elements than just those of $\Bbb{Z}-\langle 7\rangle$ are mapped to invertible elements. The point is that there may be a ring $A$ such that only the elements of $\Bbb{Z}-\langle 7\rangle$ are mapped to invertible elements in $A$. In that case too, there will exist a unique map from $(\Bbb{Z}-\langle 7\rangle)^{-1}\Bbb{Z}$ to $A$. Why do we care that such factoring maps exist at all? When we have a morphism $\phi:A\to B$, and we can say that there exists a map $A/S\to B$, where $S$ is a set of relations between elements of $A$, then we’re saying something special about the properties of elements in $B$ (at least the properties of the elements mapped to by $S$).
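To make the "tight" case concrete, here is a small numerical sketch (the target ring $B=\Bbb{Z}/343$ and the sample ranges are my own choices): in $\Bbb{Z}/7^3$, exactly the integers prime to $7$ become units, so the map $\Bbb{Z}\to\Bbb{Z}/343$ factors through $(\Bbb{Z}-\langle 7\rangle)^{-1}\Bbb{Z}$ without the overkill of $\Bbb{Q}$.

```python
from math import gcd

B = 343  # Z/7^3, a sample target ring for the universal property

def is_unit(a, n):
    """a + nZ is invertible in Z/nZ iff gcd(a, n) = 1."""
    return gcd(a, n) == 1

# Every element of the multiplicative set Z - <7> (integers not
# divisible by 7) maps to a unit of Z/343 ...
assert all(is_unit(a, B) for a in range(1, 100) if a % 7 != 0)

# ... while 7 itself does not, so the hypothesis of the universal
# property holds for exactly the set we inverted.
assert not is_unit(7, B)

# pow(a, -1, n) computes the actual inverse (Python 3.8+):
print(pow(3, -1, B) * 3 % B)  # 1
```

A fraction $a/s$ with $7\nmid s$ is then sent to $a\cdot s^{-1} \bmod 343$, which is exactly the factoring map whose existence the universal property promises.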

### Sheaf (Čech) Cohomology: A glimpse

This is a blogpost on Sheaf Cohomology. We shall be following this article.

From the word cohomology, we can guess that we shall be talking about a complex with abelian groups and boundary operators. Let us specify what these abelian groups are.

Given an open cover $\mathcal{U}=(U_i)_{i\in I}$ and a sheaf $\mathcal{F}$, we define the $0^{th}$ cochain group $C^0(\mathcal{U}, \mathcal{F})=\prod_{i\in I}\mathcal{F}(U_i)$. Note that we are not assuming that the sections over the individual $U_i$‘s agree on the intersections. This is simply a tuple in which each coordinate is a section. We are interested in finding out whether we can glue these sections together to get a global section. This is only possible if the sections agree on the intersections of the open sets.

We now define $C^1(\mathcal{U}, \mathcal{F})=\prod_{i,j\in I}\mathcal{F}(U_i\cap U_j)$. Here we are considering the tuple of sections defined on the intersections of two sets. Note that these intersections may not cover the whole of the topological space. Hence, we are no longer interested in gluing sections together to see whether they form a global section.

Similarly, we define $C^2(\mathcal{U}, \mathcal{F})=\prod_{i,j,k\in I}\mathcal{F}(U_i\cap U_j\cap U_k)$.

Now, we come to the boundary maps. $\delta: C^0(\mathcal{U}, \mathcal{F})\to C^1(\mathcal{U}, \mathcal{F})$ is defined in the following way: $\delta(f_i)=(g_{i,j})$, where $g_{i,j}= f_{j|U_i\cap U_j}-f_{i|U_i\cap U_j}$. What we’re doing is that we’re taking a tuple of sections, and mapping it to another tuple; the second tuple is generated by choosing two indices $k,l$, determining the sections defined over $U_k$ and $U_l$, and then calculating $f_{l|U_k\cap U_l}-f_{k|U_k\cap U_l}$. In the image tuple, $f_{l|U_k\cap U_l}-f_{k|U_k\cap U_l}$ would be written at the $k,l$ coordinate.

Now we define the second boundary map. $\delta: C^1(\mathcal{U}, \mathcal{F})\to C^2(\mathcal{U}, \mathcal{F})$ is defined in the following way: $\delta(f_{i,j})=(g_{i,j,k})$, where $g_{i,j,k}= f_{j,k|U_i\cap U_j\cap U_k}-f_{i,k|U_i\cap U_j\cap U_k}+f_{i,j|U_i\cap U_j\cap U_k}$. What does this seemingly arbitrary definition signify? The first thing to notice is that if $(f_{i,j})$ is an image of an element in $C^0(\mathcal{U}, \mathcal{F})$, then $\delta(f_{i,j})=0$. Hence, at the very least, this definition of a boundary map gives us a complex on our hands. Maybe that is all that it signifies. We’re looking for definitions of $C^i(\mathcal{U},\mathcal{F})$ which keep giving us sections over smaller and smaller open sets, and definitions of $\delta$ over these $C^i(\mathcal{U},\mathcal{F})$ which keep on mapping images from $C^{i-1}(\mathcal{U},\mathcal{F})$ to $0$.

Predictably, $H^i(\mathcal{U},\mathcal{F})=Z^i(\mathcal{U},\mathcal{F})/B^i(\mathcal{U},\mathcal{F})$, where $Z^i(\mathcal{U},\mathcal{F})$ is the kernel of $\delta$ acting on $C^i(\mathcal{U},\mathcal{F})$ and $B^i(\mathcal{U},\mathcal{F})$ is the image of $\delta$ acting on $C^{i-1}(\mathcal{U},\mathcal{F})$. Sheaf cohomology measures the extent to which tuples of sections over an open cover fail to be global sections. The longer the non-zero tail of the cohomology complex, the farther the sections of this sheaf lie from gluing together amicably. In other words, the length of the non-zero tail measures how “complex” the topological space and the sheaf on it are. However, there is still hope. By a theorem of Grothendieck, we know that the length of the complex is bounded by the dimension of the (noetherian) topological space.
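As a toy illustration (my own example, not from the article), take the constant sheaf $\Bbb{Q}$ on the circle $S^1$, covered by two arcs $U_0, U_1$ whose intersection has two components. Then $C^0=\Bbb{Q}^2$, $C^1=\Bbb{Q}^2$, $C^2=0$, and the whole computation is linear algebra:

```python
from sympy import Matrix

# Cech complex for the constant sheaf Q on S^1 with a two-arc cover:
# C^0 = Q^2 (one constant per arc), C^1 = Q^2 (one constant per
# component of U0 n U1), and delta(f0, f1) = (f1 - f0, f1 - f0).
delta = Matrix([[-1, 1],
                [-1, 1]])

h0 = delta.cols - delta.rank()   # dim ker delta
h1 = delta.rows - delta.rank()   # dim C^1 - dim im delta (C^2 = 0 here)

print(h0, h1)  # 1 1
```

The answer matches the topology: $H^0(S^1,\Bbb{Q})=H^1(S^1,\Bbb{Q})=\Bbb{Q}$, with the non-trivial $H^1$ detecting the hole in the circle, i.e. the failure of sections on the two overlap components to be adjusted independently.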

### Prüfer Group

This is a short note on the Prüfer group.

Let $p$ be a prime integer. The Prüfer group, written as $\Bbb{Z}(p^\infty)$, is the unique $p$-group in which each element has $p$ different $p$th roots. What does this mean? Take $\Bbb{Z}/5\Bbb{Z}$, written additively, for example. Can we say that for any element $a$ in this group, there are $5$ mutually different elements $x$ satisfying $5x=a$? No. As $5x=\overline{0}$ for every $x$, an element like $\overline{2}$ has no $5$th root at all. What about $\Bbb{Z}/2^2\Bbb{Z}$? Here $p=2$. Does every element have two mutually different square roots? No. For instance, $\overline{1}\in\Bbb{Z}/2^2\Bbb{Z}$ doesn’t, as $2x$ is always even. We start to get the feeling that this condition would only be satisfied in a very special kind of group.

The Prüfer $p$-group may be identified with the subgroup of the circle group $U(1)$, consisting of all the $p^n$-th roots of unity, as $n$ ranges over all non-negative integers. The circle group is the multiplicative group of all complex numbers with absolute value $1$. It is easy to see why this set would be a group. And using the imagery from the circle, it is easy to see why each element would have $p$ different $p$th roots. Say we take an element $a$ of the Prüfer group. Assume that it is a $p^{n}$th root of $1$. Then its $p$ different $p$th roots are $p^{n+1}$th roots of $1$. It is nice to see a geometric realization of this rather strange group that seems to arise naturally from groups of the form $\Bbb{Z}/p^n\Bbb{Z}$.
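The circle picture can be modelled computationally: the root of unity $e^{2\pi i k/p^n}$ corresponds to the fraction $k/p^n$ taken mod $1$, so multiplication becomes addition and extracting a $p$th root becomes dividing by $p$. A small sketch, with $p=3$ and a sample element of my choosing:

```python
from fractions import Fraction

# Model Z(p^infinity) as fractions k/p^n reduced mod 1.
p = 3
a = Fraction(2, 9)   # stands for the 9th root of unity exp(2*pi*i*2/9)

# The p distinct p-th roots of a: (a + j)/p for j = 0, ..., p-1.
roots = [((a + j) / p) % 1 for j in range(p)]

assert len(set(roots)) == p                        # p mutually different roots
assert all((p * r) % 1 == a for r in roots)        # each really is a p-th root
assert all(r.denominator == 27 for r in roots)     # they are 27th roots of 1
print(sorted(roots))
```

This is exactly the phenomenon described above: the roots of a $p^n$th root of unity live one level deeper, among the $p^{n+1}$th roots of unity.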

### Sheafification

This is a blog post on sheafification. I am broadly going to be following Ravi Vakil’s notes on the topic.

Sheafification is the process of taking a presheaf and giving the sheaf that best approximates it, in a sense made precise by a universal property. In a previous blog post, we’ve discussed examples of pre-sheaves that are not sheaves. A classic example of sheafification is the sheafification of the presheaf of holomorphic functions admitting a square root on $\Bbb{C}$ with the classical topology.

Let $\mathcal{F}$ be a presheaf. Then the morphism of presheaves $\mathcal{F}\to\mathcal{F}^{sh}$ is a sheafification of $\mathcal{F}$ if $\mathcal{F}^{sh}$ is a sheaf, and for any presheaf morphism $\mathcal{F}\to \mathcal{G}$, where $\mathcal{G}$ is a sheaf, there exists a unique morphism $\mathcal{F}^{sh}\to \mathcal{G}$ such that the required diagram commutes. What this means is that $\mathcal{F}^{sh}$ is the “smallest” or “simplest” sheaf containing the presheaf $\mathcal{F}$.

Because of the uniqueness of the maps, it is easy to see that the sheafification is unique up to unique isomorphism. This is just another way of saying that all sheafifications are isomorphic, and that there is only one isomorphism (one in each direction) between each pair of sheafifications. Also, sheafification is a functor. This is because if we have a map of presheaves $\phi:\mathcal{F}\to \mathcal{G}$, then this extends to a unique map $\phi':\mathcal{F}^{sh}\to\mathcal{G}^{sh}$. How does this happen? Let $g:\mathcal{G}\to\mathcal{G}^{sh}$ be the sheafification map. Then $g\circ\phi:\mathcal{F}\to\mathcal{G}^{sh}$ is a map from $\mathcal{F}$ to a sheaf. Hence, there exists a unique map from $\mathcal{F}^{sh}$ to $\mathcal{G}^{sh}$, as per the definition of sheafification. Hence, sheafification is a covariant functor from the category of presheaves to the category of sheaves.

We now show that any presheaf of sets (groups, rings, etc) has a sheafification. If the presheaf under consideration is $\mathcal{F}$, then for any open set $U$, define $\mathcal{F}^{sh}(U)$ to be the set of all compatible germs of $\mathcal{F}$ over $U$. What exactly are we doing? Are we just taking the union of all possible germs of that presheaf? How does that make it a sheaf? This is because to each open set, we have now assigned the set of families of germs that are compatible on it. These families can easily be glued, and uniquely too, into compatible families over larger open sets. Moreover, the law of the composition of restrictions holds too. But why is this not true for every presheaf, and just the presheaves of sets? Are germs not defined for a presheaf in general?

A natural map of presheaves $sh: \mathcal{F}\to\mathcal{F}^{sh}$ can be defined in the following way: for any open set $U$, map a section $s\in \mathcal{F}(U)$ to the family of its germs $(s_p)_{p\in U}$, which is a compatible family, and hence an element of $\mathcal{F}^{sh}(U)$. We can see that this is compatible with all the restriction maps to smaller sets. Moreover, $sh$ satisfies the universal property of sheafification. This is because $sh$ can be extended to a unique map between $\mathcal{F}^{sh}$ and $\mathcal{F}^{sh}$: the unique map is namely the identity map.

We now check that the sheafification of a constant presheaf is the corresponding constant sheaf. We recall that the constant presheaf assigns a fixed set $S$ to each open set. Hence, each germ at each point is precisely an element of $S$, and a compatible family of germs over an open set $U$ is precisely a locally constant function $U\to S$. The sheafification is therefore the sheaf of locally constant $S$-valued functions, which is exactly the constant sheaf. The stalk at each point is also just $S$.
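A minimal sketch of the difference, on a two-point discrete space of my own choosing: the constant presheaf has only $|S|$ sections over the whole space, but gluing demands one independent value per connected component, which is what the sheafification supplies.

```python
from itertools import product

# X = {1, 2} with the discrete topology; S = {0, 1}.
# Constant presheaf: F(U) = S for every nonempty U, so F(X) has 2 elements.
# Sheafification: locally constant functions U -> S, i.e. one element
# of S per connected component (here, per point).
S = [0, 1]

presheaf_sections_on_X = S                        # just S itself
sheaf_sections_on_X = list(product(S, repeat=2))  # one value per point

# Gluing demands a section restricting to 0 on {1} and to 1 on {2}
# (the overlap is empty, so any pair is compatible); the constant
# presheaf has no such section, the sheafification does.
assert len(presheaf_sections_on_X) == 2
assert len(sheaf_sections_on_X) == 4
print(sheaf_sections_on_X)
```

The failure of gluing for the constant presheaf is exactly why the sheafification has to enlarge $\mathcal{F}(U)$ on disconnected open sets.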

What is the overall picture that we get here? Why is considering the set of all germs the “best” way of making a sheaf out of a pre-sheaf? I don’t know the exact answer to this question. However, it seems that through the process of sheafification, to each open set, we’re assigning a set that can be easily and uniquely glued. It is possible that algebraic geometers were looking for a way to glue the information encoded in a presheaf easily, and it is that pursuit which led to this seemingly arbitrary method.

### Toric Varieties: An Introduction

This is a blog post on toric varieties. We will be broadly following Christopher Eur’s Senior Thesis for the exposition.

A toric variety is an irreducible variety with a torus as an open dense subset. What does a dense subset of a variety look like? For instance, in $\Bbb{R}$ consider the set of integers. Or any infinite set of points for that matter. The closure of that set, under the Zariski Topology, is clearly the whole real line. Hence, a dense set under the Zariski topology looks nothing like a dense set under the standard topology.

An affine algebraic group $V$ is a variety with a group structure. The group operation is given by $\phi:V\times V\to V$, which is interpreted as a morphism of varieties (remember that the cartesian product of two varieties is a variety). The set of algebraic maps of two algebraic groups $V,W$, denoted as $\text{Hom}(V,W)$ is the set of group homomorphisms between $V$ and $W$ which are also morphisms between varieties. Are there variety morphisms which are not group homomorphisms? Yes. Consider the morphism $f:\Bbb{R}\to \Bbb{R}$ defined as $x\to x+1$.

The most important example for us is $(\Bbb{C}^*)^n\simeq \Bbb{C}^n-V(x_1x_2\dots x_n)$. This is the same as removing all the hyperplanes $x_i=0$ from $\Bbb{C}^n$. Again, this is the same as $V(1-x_1x_2\dots x_n y)\subset \Bbb{C}^{n+1}$, which realizes this open set as an affine variety in a higher dimensional space. The coordinate ring of $(\Bbb{C}^*)^n$ looks like $\Bbb{C}[x_1^{\pm},x_2^{\pm},\dots,x_n^{\pm}]\simeq \Bbb{C}[\Bbb{Z}^n]$. Why does the coordinate ring look like this? This is because in the ring $\Bbb{C}[x_1,x_2,\dots,x_n,y]/(1-x_1x_2\dots x_ny)$, all the $x_i$’s become invertible (in general, all of the $n+1$ variables become invertible; however, $y$ can be expressed in terms of the $x_i$’s).

A torus is an affine variety isomorphic to $(\Bbb{C^*})^n$ for some $n$, whose group structure is inherited from that of $(\Bbb{C^*})^n$ through a group isomorphism.

Example: Let $V=V(x^2-y)\subset \Bbb{C}^2$, and consider $V_{xy}=V\cap (\Bbb{C}^*)^2$. We will now establish an isomorphism between $\Bbb{C}^*$ and $V_{xy}$. Consider the map $t\to (t,t^2)$ from $\Bbb{C}^*$ to $V_{xy}$. This map is bijective. How? If $t$ is non-zero, then so is each coordinate of $(t,t^2)$. Also, each point in $V_{xy}$ looks like $(t,t^2)$, where $t$ is a non-zero number, and each such point has been mapped to by $t\in\Bbb{C}^*$. Hence, we have a bijection. How does $V_{xy}$ inherit the group structure of $\Bbb{C}^*$? By the following relation: $(a,a^2).(b,b^2)=(ab,(ab)^2)$. Remember that $V_{xy}$ had no natural group structure before. Now it has one.

A map $\phi:(\Bbb{C}^*)^n\to (\Bbb{C}^*)^m$ is algebraic if and only if the map $\phi^*: \Bbb{C}[y_1^{\pm},y_2^{\pm},\dots,y_m^{\pm}]\to \Bbb{C}[x_1^{\pm},x_2^{\pm},\dots,x_n^{\pm}]$ is given by $y_i\to x^{\alpha_i}$ for $\alpha_i\in \Bbb{Z}^n$. In other words, the maps correspond bijectively to lattice maps $\Bbb{Z}^m\to \Bbb{Z}^n$. What does this mean? The condition that the variety morphism also be a group homomorphism was surely expected to place certain restrictions on the nature of the morphism. The restriction it places is that a unit can only map to a unit. And the only units in $\Bbb{C}[x_1^{\pm},x_2^{\pm},\dots,x_n^{\pm}]$ are monomials times a constant. Why’s that? Why isn’t an expression of the form $x_1+x_2$, for instance, a unit? Because $\Bbb{C}[x_1^{\pm},x_2^{\pm},\dots,x_n^{\pm}]$ is not a field! It is just a polynomial ring in which the variables happen to be invertible. Polynomials in those variables need not be! This is not the same as the field of fractions of the polynomial ring $\Bbb{C}[x_1,x_2,\dots,x_n]$. Returning to the proof, the constant is found to be $1$, and one side of the theorem is proved. The converse is trivial.
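To see the lattice description in action, here is a numerical sanity check (the exponent vector $\alpha=(2,3)$ and the sample points are my own choices) that a monomial map really is a group homomorphism of tori:

```python
import cmath
import random

# The lattice vector alpha = (2, 3) defines the algebraic group map
# phi: (C*)^2 -> C*,  (s, t) -> s^2 * t^3   (i.e. y -> x^alpha).
def phi(s, t):
    return s**2 * t**3

random.seed(0)

def rand_unit():
    # a random point on the unit circle, hence a nonzero complex number
    return cmath.exp(2j * cmath.pi * random.random())

for _ in range(5):
    s1, t1, s2, t2 = rand_unit(), rand_unit(), rand_unit(), rand_unit()
    # homomorphism property: phi(a * b) = phi(a) * phi(b)
    assert abs(phi(s1 * s2, t1 * t2) - phi(s1, t1) * phi(s2, t2)) < 1e-9

print("phi is a homomorphism on the sampled points")
```

The identity $(st)^2(uv)^3 = s^2u^3\cdot t^2v^3$ holds exactly; in contrast, a map involving a sum like $s+t$ would fail this check, mirroring the fact that $x_1+x_2$ is not a unit of the Laurent polynomial ring.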

A character of a Torus $T$ is an element $\chi\in\text{Hom}(T,\Bbb{C}^*)$. An analogy that immediately comes to mind is that of a functional on an $n$-dimensional vector space. Characters are important in studying toric varieties.

### Birational Geometry

This is a blog post on birational geometry. I will broadly be following this article for the exposition.

A birational map $f:X\to Y$ is a rational map such that its inverse map $g:Y\to X$ is also a rational map. The two (quasiprojective) varieties $X$ and $Y$ are known as birational varieties. An example is $X=Y=\Bbb{R}\setminus \{0\}$, and $f=g: x\to \frac{1}{x}$.

Varieties are birational if and only if their function fields are isomorphic as extension fields of $k$. What are function fields? A function field of a variety $X$ is the field of rational functions defined on $X$. In a way, it is the fraction field of the coordinate ring of $X$. But what about the functions which are $0$ on some part of $X$, although not all of it? They can still be inverted, as rational functions need only be defined on a dense open subset. In the complex domain, such functions are called meromorphic functions (isolated poles are allowed).

A variety $X$ is called rational if it is birational to affine space of some dimension. For instance, take the circle $x^2+y^2=1$. This is birational to the affine space $\Bbb{R}$. Consider the map $\Bbb{R}\to \Bbb{R}^2: t\to (\frac{2t}{1+t^2}, \frac{1-t^2}{1+t^2})$. This is a rational map, for which the inverse is $(x,y)\to (1-y)/x$.
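This parametrization can be verified symbolically; here is a quick sketch using sympy:

```python
from sympy import symbols, simplify

t, x, y = symbols('t x y')

# Rational parametrization of the circle x^2 + y^2 = 1:
X = 2*t / (1 + t**2)
Y = (1 - t**2) / (1 + t**2)

# The image really lies on the circle:
on_circle = simplify(X**2 + Y**2 - 1)
print(on_circle)  # 0

# The inverse rational map (x, y) -> (1 - y)/x recovers the parameter:
recovered = simplify((1 - Y) / X)
print(recovered)  # t
```

Note that neither map is defined everywhere: the parametrization misses the point $(0,-1)$, and the inverse is undefined where $x=0$. That is exactly the "rational map" caveat, since equality is only required on a dense open set.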

In general, a smooth quadric hypersurface (degree 2) is rational by stereographic projection. How? Choose a point on the hypersurface, say $p$, and consider all lines through $p$ to the various other points on the hypersurface. The lines through $p$ form a projective space of one dimension lower than the ambient space, so this gives a map from the hypersurface to that projective space. Note that this map is not defined on the whole of the hypersurface. How do we know that the line joining $p$ and a point does not pass through a third point on the hypersurface? This is precisely because this is a quadric surface: a quadratic equation can only have a maximum of two distinct solutions, and one of them is already $p$.

Now we state some well-known theorems. Chow’s Theorem states that every algebraic variety is birational to a projective variety. Hence, if one is to classify varieties up to birational isomorphism, then considering only the projective varieties is sufficient. Hironaka further went on to prove that every variety is birational to a smooth projective variety. Hence, we now have to classify a much smaller set of varieties. In dimension $1$, if two smooth projective curves are birational, then they’re isomorphic. However, this breaks down in higher dimensions due to blowing up. Due to the blowing up construction, every smooth projective variety of dimension at least $2$ is birational to infinitely many “bigger” varieties with higher Betti numbers. This leads to the idea of minimal models: is there a unique “simplest” variety in each birational equivalence class? The modern definition states that a projective variety is minimal if the canonical bundle has non-negative degree on every curve in the variety. It turns out that blown-up varieties are never minimal.

### Filtrations and Gradings

This is going to be a blog post on Filtrations and Gradings. We’re going to closely follow the development in Local Algebra by Serre.

A filtered ring is a ring $A$ with a set of ideals $\{A_n\}_{n\in\Bbb{Z}}$ such that $A_0=A$, $A_{n+1}\subset A_n$, and $A_pA_q\subset A_{p+q}$. An example would be $A_n=(2^n)$, where $(2^n)$ is the ideal generated by $2^n$ in $\Bbb{Z}$.

Similarly, a filtered module $M$ over a filtered ring $A$ is defined as a module with a set of submodules $\{M_n\}_{n\in\Bbb{N}}$ such that $M_0=M$, $M_{n+1}\subset M_n$, and $A_pM_q\subset M_{p+q}$. Why not just have $M_pM_q\subset M_{p+q}$? This is because multiplication between elements of a module may not be defined. An example would be the module generated by the element $v$ over $\Bbb{Z}$, where $M_n=2^n M$.
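These axioms are easy to spot-check computationally; a small sketch for the $2$-adic examples above (the sample ranges are my own choice, so this is an illustration, not a proof):

```python
# Spot-check the filtered-ring axioms for A = Z with A_n = (2^n),
# and the filtered-module axioms for M = Zv with M_n = 2^n M.
N = 6
for p in range(N):
    # A_{p+1} subset A_p: the generator 2^(p+1) is divisible by 2^p.
    assert (2**(p + 1)) % (2**p) == 0
    for q in range(N):
        # A_p A_q subset A_{p+q}: the product of generators 2^p * 2^q
        # lies in (2^(p+q)).  The module condition A_p M_q subset M_{p+q}
        # reads 2^p * (2^q v) in 2^(p+q) M, which is the same
        # divisibility statement on the coefficients.
        assert (2**p * 2**q) % (2**(p + q)) == 0

print("filtration axioms hold on the sample")
```

In this example the containment $A_pA_q\subset A_{p+q}$ happens to be an equality; in general only the containment is required.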

Filtered modules form an additive category $F_A$ with morphisms $u:M\to N$ such that $u(M_n)\subset N_n$. A trivial example is $\Bbb{Z}\to\Bbb{Z}$, defined using the filtration above, and the map being defined as $x\to -x$.

If $P\subset M$ is a submodule, then the induced filtration is defined as $P_n=P\cap M_n$. Is every $P_n$ a submodule of $P$? Yes, because every $M_n$ is by definition a submodule of $M$, and the intersection of two submodules ($M_n$ and $P$ in particular) is always a submodule. Similarly, the quotient filtration on $N=M/P$ is also defined. As the quotient of two modules, the meaning of $M/P$ is clear. However, what about the filtration of $M/P$? Turns out the filtration of $N=M/P$ is defined the following way: $N_n=(M_n+P)/P$. We need to have $M_n+P$ as the object under consideration because it is not necessary that $M_n\subset P$.

An important example of filtration is the $m$-adic filtration. Let $m$ be an ideal of $A$, and let the filtration of $A$ be defined as $A_n=m^n$. Similarly, for a module $M$ over $A$, the $m$-adic filtration of $M$ is defined by $M_n=m^nM$.

Now we shall discuss the topology defined by a filtration. If $M$ is a filtered module over the filtered ring, then the $M_n$ form a basis of neighbourhoods around $0$. This is a nested chain of submodules, so the intersection of finitely many of these neighbourhoods is again one of them, and the usual requirement for a basis of neighbourhoods is satisfied. But why $0$? Because the group structure then spreads the topology everywhere: the neighbourhoods of any point $m\in M$ are the translates $m+M_n$.

Proposition: Let $N$ be a submodule of a filtered module $M$. Then the closure $\overline{N}$ of $N$ is given by $\bigcap(N+M_n)$. How does this work? If one were to hand wave a bit, we are essentially finding the intersection of all neighbourhoods of $N$. Remember that each $M_n$ is a neighbourhood of $0$. We’re translating each such neighbourhood by $N$, which is another way of saying we’re now considering all neighbourhoods of $N$. And then we find the intersection of all such neighbourhoods to find the smallest closed set containing $N$. There is an analogous concept in metric spaces- the intersection of all open sets containing $[0,1]$, for instance, is the closed set $[0,1]$. The analogy is not perfect, as the intersection of all neighbourhoods of $(0,1)$ is $(0,1)$ itself, which is not a closed set. But hey. We at least have something to go by.
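The proposition can be applied concretely in $M=\Bbb{Z}$ with the $2$-adic filtration $M_n=2^n\Bbb{Z}$ (the submodules chosen below are my own examples). For $N=d\Bbb{Z}$ we have $N+M_n=d\Bbb{Z}+2^n\Bbb{Z}=\gcd(d,2^n)\Bbb{Z}$, a nested decreasing chain that stabilizes once $2^n$ exceeds the largest power of $2$ dividing $d$:

```python
from math import gcd

# Closure of N = dZ in M = Z with the 2-adic filtration M_n = 2^n Z:
# N + M_n = gcd(d, 2^n) Z, and the chain stabilizes at the 2-part of d.
def closure_generator(d, n_max=20):
    # generator of the intersection of the nested sets gcd(d, 2^n) Z
    return gcd(d, 2**n_max)

# 3 is invertible 2-adically, so 3Z is dense: its closure is all of Z.
print(closure_generator(3))   # 1
# 12Z closes up to 4Z: only the 2-part of the generator survives.
print(closure_generator(12))  # 4
```

Taking $d=0$ recovers $\bigcap M_n=0$ here, consistent with the Hausdorff criterion stated next.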

Corollary: $M$ is Hausdorff if and only if $\bigcap M_n=0$.