
Invertible Sheaves and Picard Groups

This is a blog post on invertible sheaves, which (over a fixed algebraic variety) form the elements of the Picard group. The group operation here is the tensor product. We will closely follow the development in Victor I. Piercey's paper.

We will develop invertible sheaves on algebraic varieties. However, instead of studying sheaves over varieties directly, we will study the algebraic analogues of these geometric entities: modules over coordinate rings.

First we discuss what it means for a module to be invertible over a ring. Over a ring A, a module I is invertible if it is finitely generated and if for every prime ideal p\subset A we have I_p\simeq A_p as A_p-modules. Here A_p is the localization of the ring A at the prime ideal p, and I_p is the corresponding localization of the module I. What does the condition I_p\simeq A_p mean? One way it is easily seen to be satisfied is when I is generated over A by a single element that is a non-zerodivisor. I can't think of any other ways right now. This condition is summarized by saying that I is locally free of rank 1.
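Here is a small worked check of the single-generator case (my own verification, not from the paper). If I=aA with a a non-zerodivisor, then for any prime p the A_p-module map A_p\to I_p, x\mapsto ax, is surjective by construction. It is also injective: if ax/1=0 in A_p, then sax=0 in A for some s\notin p, hence sx=0 (as a is a non-zerodivisor), i.e. x/1=0 in A_p. Therefore I_p\simeq A_p for every prime p.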

The reason that the notation I is chosen for an invertible module is that we shall soon see that every invertible module is isomorphic to an invertible ideal. How might one see that? An ideal of a ring is certainly a module over that ring. If the ideal is principal and the module under consideration is also generated by a single element, all we need to do is map the generator of the module to the generator of the ideal. The reason we may restrict to this case for intuition is that we want both modules to be locally free of rank 1, and this is the easiest way of arranging that.

We will now discuss an ideal that is locally free (of rank 1), but not principal. Let A=\Bbb{Z}[\sqrt{-5}] and I=(2,1+\sqrt{-5}). It is a standard exercise that this ideal is not principal. Also, A/I\simeq\Bbb{F}_2; hence I is maximal in A. Now if I\not\subset p, where p is the prime ideal under consideration, then I\cap (A\setminus p)\neq\emptyset. Hence, I_p=A_p: some element of I has been inverted, which forces the localized ideal to be the whole ring. We therefore assume that I\subset p. As I is maximal, we conclude that I=p. We observe that 3 is not in I, and hence 3 is invertible in A_p (which can now be written as A_I). Now 2, which is one of the generators of I, satisfies 2=\{(1+\sqrt{-5})(1-\sqrt{-5})\}/3 in A_p, since (1+\sqrt{-5})(1-\sqrt{-5})=6. Hence 2 is an A_p-multiple of 1+\sqrt{-5}, so I_p is generated by the single element 1+\sqrt{-5}, which makes it isomorphic to A_p.

The isomorphism classes of invertible modules over the ring A form the Picard group. The identity element is the isomorphism class of A over itself. Given an invertible module I, its inverse is the module I^*=\text{Hom}(I,A). Why is this the inverse element? Because there is a natural map I^*\otimes I\to A, defined by \psi\otimes a\mapsto\psi(a), and (as Theorem 1 below states) this map is an isomorphism precisely when I is invertible; since the isomorphism class of A is the identity element, the product of the two classes is the identity. What about I\otimes I^*? Shouldn't we have a two-sided inverse? Remember that in general, for any two modules M and N, M\otimes N\simeq N\otimes M. Hence, we can identify a\otimes \psi with \psi\otimes a, and get away with it.

Theorem 1: If I is an A-module, then I is invertible if and only if the natural map \mu:I^*\otimes I\to A is an isomorphism.

The proof and subsequent theorems in the paper will be discussed in a later blog post.

Tight Closure

This is a small introduction to tight closure, an active field of research in commutative algebra; this post is essentially a survey. It will closely follow the paper "An introduction to tight closure" by Karen Smith.

Definition: Let R be a Noetherian domain of prime characteristic p (note that the characteristic of a general ring need not be prime; here we insist that it is). Let I\subset R be an ideal with generators (y_1,y_2,\dots,y_r). Then an element z is defined to be in the tight closure I^* if there exists a fixed c\neq 0 in R such that cz^{p^e}\in (y_1^{p^e},y_2^{p^e},\dots,y_r^{p^e}) for all sufficiently large powers p^e.

What does this condition even mean? Let the ring under consideration be \Bbb{F}_3[x,y,z], and let the ideal I be (x,y). Does the tight closure I^* contain I? For example, x+y\in (x,y). Then is it true that (x+y)^9\in (x^9, y^9)? Yes! Remember that the ring has characteristic 3, so all the mixed terms in the binomial expansion vanish: every \binom{9}{k} with 0<k<9 is divisible by 3. In general, I\subset I^*, and it is easy to see why: if z=\sum a_iy_i, then z^{p^e}=\sum a_i^{p^e}y_i^{p^e} by the Frobenius, so c=1 works. What is an example of an element outside of I that belongs to I^*? I don't have one yet; but clearly z\notin I^*. Why? Why can we not have a value for c such that cz^{3^e}\in (x^{3^e},y^{3^e})? For each fixed e, some c works (for example c=x^{3^e}). However, the value of c has to remain the same for all prime powers p^e, and clearly there is no such fixed c.
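As a quick sanity check of the Frobenius step (my own verification with SymPy, not from Smith's paper):

```python
from sympy import symbols, expand

x, y = symbols('x y')

# In characteristic 3, all mixed binomial terms of (x + y)^9 vanish,
# so the result visibly lies in the ideal (x^9, y^9).
print(expand((x + y)**9, modulus=3))   # x**9 + y**9
```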

Is I^* an ideal? Yes, and this part is quite clear: if c_1 works for z_1 and c_2 works for z_2, then c_1c_2 works for z_1+z_2, since (z_1+z_2)^{p^e}=z_1^{p^e}+z_2^{p^e} in characteristic p; and if c works for z, the same c works for rz, since c(rz)^{p^e}=r^{p^e}cz^{p^e}.

Properties of tight closure:

1. If R is regular, then all ideals of R are tightly closed. In fact, one of the most important uses of tight closure is to compensate for the fact that the ring under consideration may not be regular.

2. If R\hookrightarrow S is an integral extension, then IS\cap R\subset I^* for all ideals I\subset R. What does this condition mean? You're extending I to the bigger ring S and then intersecting back with R. This may create elements of R that lie outside I, but not elements outside I^*: the former is possible, the latter is not.

3. If R is local, with system of parameters x_1,x_2,\dots,x_d, then (x_1,x_2,\dots,x_i):x_{i+1}\subset (x_1,x_2,\dots,x_i)^*. Recall that the colon ideal (x_1,\dots,x_i):x_{i+1} consists of those r with rx_{i+1}\in(x_1,\dots,x_i). So the property says: anything that multiplies the next parameter into the ideal built so far must already lie in the tight closure of that ideal. In this sense, as we build the ideal up parameter by parameter, the new relations that appear are all trapped inside the tight closure of the pre-existing ideal.

4. If \mu denotes the minimal number of generators of I, then \overline{I^\mu}\subset I^*\subset\overline{I}. Here \overline{I} denotes the integral closure of I. Note that the number of generators of an ideal is generally not well-defined: for instance, the ideal (x)\subset \Bbb{Q}[x] can also be written as (x^2+2x,x^2). However, the minimal number of generators is well-defined, and it is finite because we're working in a Noetherian ring. The inclusion I^*\subset \overline{I} is not hard to see. For instance, let I=(x,y) in \Bbb{F}_p[x,y]. Then a\in I^* means ca^{p^e}\in (x^{p^e},y^{p^e})\subset I^{p^e} for all large e; applying any valuation v gives v(c)+p^e\,v(a)\geq p^e\,v(I), and dividing by p^e and letting e\to\infty gives v(a)\geq v(I). This is the valuative criterion for a\in\overline{I}, so I^*\subset \overline{I}. What about \overline{I^\mu}\subset I^*?

5. If \phi:R\to S is any ring map, then I^*S\subset (IS)^*, where IS really means \phi(I)S. This property is labelled "persistence" in the paper. I suppose the name means that membership in the tight closure persists along ring maps: an element of I^* is carried into (IS)^*, rather than being lost when we change rings.

But I’m probably just putting words into Karen’s mouth. What do I know.

It seems to me that a tight closure is a “tighter” form of closure; tighter than integral closure for instance. And for a lot of analytic requirements, it is just the right size; integral closure would be too big.

Notes on Speyer’s paper titled “Some Sums over Irreducible Polynomials”

Let \mathcal{P} be the set of irreducible polynomials in \Bbb{F}_2[T]. Then \sum\limits_{P\in \mathcal{P}}\frac{1}{1-P}=0. The paper lists several examples of \frac{1}{1-P}, all expanded as geometric series in T^{-1}. As one can see, only P=T and P=T+1 contribute to the coefficient of T^{-1} in the sum \sum\limits_{P\in \mathcal{P}}\frac{1}{1-P}. Why don't the other irreducible polynomials do the same? These are the only two linear polynomials in \Bbb{F}_2[T]; all other irreducible polynomials are of higher degree. Moreover, all other irreducible polynomials have constant term 1, since otherwise T would be a common factor and they would be reducible. Hence \frac{1}{P-1} is of the form \frac{1}{T^{a_1}+T^{a_2}+\dots+T^{a_n}}, where a_1=\deg P\geq 2 (the constant terms cancel). Now divide both the numerator and denominator by T^{a_1}, to get \frac{1}{T^{a_1}}\cdot\frac{1}{1+T^{a_2-a_1}+T^{a_3-a_1}+\dots+T^{a_n-a_1}}. As a_i-a_1<0 for all i\neq 1, this expands as a power series in negative powers of T, and since a_1\geq 2, every exponent appearing is at most -2. This proves that only the polynomials T and T+1 contribute to the coefficient of T^{-1} in \sum\limits_{P\in \mathcal{P}}\frac{1}{1-P}.
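To make the cancellation concrete (my own check, not spelled out in the paper): in \Bbb{F}_2 we have \frac{1}{1-T}=\frac{1}{1+T}=T^{-1}+T^{-2}+T^{-3}+\dots, while \frac{1}{1-(T+1)}=\frac{1}{T}=T^{-1}. Each of the two linear polynomials contributes 1 to the coefficient of T^{-1}, and 1+1=0 in characteristic 2, consistent with the sum vanishing.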

We now try to understand Theorem 1.1 of this paper. Let \mathcal{P}_1 be the set of monic irreducible polynomials in \Bbb{F}_{2^n}[T]. Then \sum\limits_{P\in \mathcal{P}_1}\frac{1}{P^k-1}\in \Bbb{F}_{2^n}(T) for any k\equiv 0\pmod{2^n-1}.

A corollary of this is that \sum\limits_{P\in \mathcal{P}}\frac{1}{P^k-1}\in \Bbb{F}_{2^n}(T), where \mathcal{P} now denotes the set of all irreducible polynomials in \Bbb{F}_{2^n}[T].

Proof of corollary: We rewrite \sum\limits_{P\in \mathcal{P}}\frac{1}{P^k-1} as \sum\limits_{P\in \mathcal{P}_1}\sum\limits_{a\in \Bbb{F}_q^\times}\frac{1}{(aP)^k-1}, where q=2^n. Why can we do that? Every irreducible polynomial is of the form aP for exactly one monic P\in\mathcal{P}_1 and exactly one a\in\Bbb{F}_q^\times. Aren't we counting each term |\Bbb{F}_q^\times| times? No: for a fixed monic P, the polynomials aP are q-1 distinct irreducible polynomials, so each irreducible appears exactly once. Now consider the identity \sum\limits_{a\in \Bbb{F}_q^\times}\frac{1}{(aX)^k-1}=\frac{1}{X^{\text{lcm}(k,q-1)}-1} in \Bbb{F}_q(X). Why is this true? First, \frac{1}{(aX)^k-1} can be written as \sum\limits_{j=1}^\infty\frac{1}{(aX)^{kj}} (just multiply and divide \frac{1}{(aX)^k-1} by \frac{1}{(aX)^k} and expand the geometric series).

Now, \sum\limits_{a\in\Bbb{F}_q^\times}a^m=1 if m\equiv 0 \pmod{q-1}, and \sum\limits_{a\in\Bbb{F}_q^\times}a^m=0 otherwise. In the first case, \sum a^m is just adding 1 to itself q-1 times, and as q-1=2^n-1 is odd and the characteristic is 2, this sum equals 1. When m\not\equiv 0\pmod{q-1}, write \Bbb{F}_q^\times=\langle g\rangle; then \sum a^m=\sum_{i=0}^{q-2}g^{im}=\frac{g^{m(q-1)}-1}{g^m-1}=0, since g^m\neq 1 but g^{m(q-1)}=1.
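Putting the two facts together (my own expansion of this step): \sum\limits_{a\in \Bbb{F}_q^\times}\frac{1}{(aX)^k-1}=\sum\limits_{j\geq 1}X^{-kj}\sum\limits_{a\in\Bbb{F}_q^\times}a^{-kj}=\sum\limits_{j\geq 1,\ (q-1)\mid kj}X^{-kj}=\sum\limits_{i\geq 1}X^{-i\,\text{lcm}(k,q-1)}=\frac{1}{X^{\text{lcm}(k,q-1)}-1}, since kj is a multiple of q-1 exactly when kj is a multiple of \text{lcm}(k,q-1).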

Introduction to Schemes

This is a short introduction to Scheme Theory, as modeled on the article by Brian Lawrence.

A variety here is a zero set that can be covered by a finite number of affine varieties. Hence, a morphism between varieties can be considered to be a bunch of affine morphisms, as long as they agree on the intersections.

We need a shift in perspective. What this means is that we need to start thinking about the coordinate ring rather than the points themselves.

Now let us think about the following example: the coordinate ring of y=0 in K^2 is K[x,y]/(y). However, the coordinate ring of y^2=0 is also K[x,y]/(y); it is not K[x,y]/(y^2). The reason is that the coordinate ring only sees the zero set, and the zero sets of y and y^2 coincide. Hence, the defining equation is not accurately recovered from the coordinate ring: we started off with y^2=0, and got back y=0. We need a new concept, which would allow us to accurately get back what we started with from the ring: something that would allow nilpotents.

An affine scheme, written as \text{Spec }A, is the data of a ring A. A morphism of affine schemes \text{Spec } A\to \text{Spec }B is a morphism of rings B\to A. An affine scheme over a field k is a scheme \text{Spec }A where A is equipped with a k-algebra structure.

Why are morphisms defined backwards here? In other words, why is \text{Spec }A\to \text{Spec }B defined by a ring map B\to A? This is because A and B are the coordinate rings. Let Var(A) be the variety corresponding to the coordinate ring A. Then a map Var(A)\to Var(B) induces a map B\to A (pull back functions along the map), and vice-versa. Maybe \text{Spec }A is best thought of as a formal representation of Var(A). It is at least easy to remember which way the arrow goes this way.

How do we recover points from coordinate rings? Hilbert's Nullstellensatz tells us that (over an algebraically closed field) points correspond to maximal ideals. Hence, our aim right now is to take a ring morphism and construct a morphism between varieties: if the ring morphism is B\to A, we want to construct a map Var(A)\to Var(B).

Given a ring homomorphism \phi:R\to S, for any prime ideal p\subset S, \phi^{-1}(p) is also prime. This is an elementary exercise in ring theory. It is, however, not true in general that the inverse image of a maximal ideal is also maximal. For example, consider the inclusion \psi:\Bbb{Z}[x]\to\Bbb{Q}[x] and the maximal ideal (x)\subset\Bbb{Q}[x]. Its inverse image is (x)\subset\Bbb{Z}[x], which is not maximal: 2\notin (x), yet (x,2) is still a proper ideal, since every element of (x,2) has even constant term and so can never equal 1.

We define the points of the affine scheme to be prime ideals. Why? Let us work this through. We have a ring morphism \phi:B\to A, where both B and A are coordinate rings. Now let us take a prime ideal p in A. From the discussion above, we know that \phi^{-1}(p) is a prime ideal in B. Hence, if prime ideals were points, we would have taken a point of \text{Spec }A and mapped it to a point of \text{Spec }B. In a way, we have constructed a map from Var(A) to Var(B).

However, this is a little weird. Points correspond to maximal ideals, and not prime ideals. All maximal ideals are prime, but the converse is not true. Do we really have a map from Var(A)\to Var(B)? No. At least not in the traditional sense. What we have is a map from some “stuff” in A, which includes points, to “stuff” in B, which too includes points (possibly not all). Hence, something that’s not a point in A may map to a point in B, and a point in A may map to something that is not a point in B. We’re gonna call this “stuff” generic points. Hence, generic points in A go to generic points in B. This is a classic example of formulating new definitions to suit our world-view.

Now that we have the concept of “generic” points, we also need a name for “actual points”. This name is “classical points”. Hence, we’ll refer to maximal ideals in A as classical points.

So what exactly is a scheme? For our purposes, an affine scheme is a coordinate ring, whose prime ideals are its points. Simple. It generalizes the notion of a variety. How? A variety has a set of points and an associated coordinate ring. A scheme has a larger set of points, and an associated coordinate ring; hence the generalization, at least in this instance, is in the set of points. Also, as discussed before, although k[x,y]/(y) and k[x,y]/(y^2) are different rings corresponding to the same variety, they correspond to different schemes. Why? A scheme is the data of its ring, not just its zero set, and the two rings genuinely differ: in k[x,y]/(y^2) the class of y is a non-zero nilpotent, while k[x,y]/(y) has no non-zero nilpotents. Hence, we now allow for distinguishing between multiplicities.

Puiseux Series and Tropical Varieties

Puiseux series: we build up to these in stages. The ring of formal power series is denoted by \Bbb{C}[[t]]. Note that we have a double brace "[[ ]]" instead of "[ ]": this signifies infinite series instead of finite ones (which would be polynomials). The field of formal Laurent series is denoted by \Bbb{C}((t)); here t is also allowed (finitely many) negative powers. Finally, the field of Puiseux series is \mathcal{K}=\bigcup\limits_{n\geq 1} \Bbb{C}((t^{1/n})), which just means that \mathcal{K} allows rational powers of t, and not just the integral powers of a Laurent series. We are generalizing at every successive step.

Now we define a valuation v:\mathcal{K}^{\times}\to\Bbb{Q} by v\left(\sum a_it^{i/N}\right)=\min\{i/N: a_i\neq 0\}. Every element of \mathcal{K} lies in \Bbb{C}((t^{1/N})) for some N, so all its exponents can be written as fractions with the common denominator N; the valuation picks out the lowest exponent that actually occurs.
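As a toy illustration (my own; `valuation` is a hypothetical helper, and a series is represented just by the list of exponents occurring with non-zero coefficient):

```python
from fractions import Fraction

def valuation(exponents):
    """v(f) for a Puiseux series f: the minimum of the rational
    exponents that occur with non-zero coefficient."""
    return min(exponents)

# f = t^(1/2) + 3t^(4/3) + t^2: the exponents have common denominator N = 6,
# and v(f) = 1/2.
print(valuation([Fraction(1, 2), Fraction(4, 3), Fraction(2)]))   # 1/2
```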

If the Puiseux series converges, then we have v(f)=\lim\limits_{t\to 0^+}\frac{\log (f(t))}{\log t}.

Why is that? At first it seemed to me that \lim\limits_{t\to 0^+}\frac{\log (f(t))}{\log t} should somehow see all the rational powers of t. But the lowest-order term dominates as t\to 0^+: writing f(t)=at^{v(f)}(1+o(1)), we get \log f(t)=v(f)\log t+\log a+o(1), and dividing by \log t (which tends to -\infty) kills everything except v(f).

Now, for a subvariety X\subset (\mathcal{K}^\times)^n, set \text{Trop}(X)=\{(v(x_1),\dots,v(x_n)):(x_1,\dots,x_n)\in X\}\subset\Bbb{Q}^n. Take X=\{x+y+1=0\}, for example. One should think of a point of this variety as a pair (x(t),y(t)) with x(t)+y(t)+1=0, where x and y are power series in t (with rational powers). Then there are three possibilities:

i) v(x)>0 and v(y)=0.

ii) v(x)=0 and v(y)>0.

iii) v(x)=v(y)\leq 0.

These cases can be easily deduced to contain all possibilities. For instance, if v(x)<0, then v(y)<0 too: v(x)<0 means that x has negative powers of t, and since x(t)+y(t)=-1, those negative-power terms can only cancel against matching terms of y, forcing v(y)=v(x). And when one of them has strictly positive valuation, the other must contain the constant -1 and no negative powers of t, so if v(x)>0 then v(y)=0.
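Taking valuations in the three cases traces out the tropical line (my own summary of where this is headed): \text{Trop}(X)=\{(a,0):a\geq 0\}\cup\{(0,b):b\geq 0\}\cup\{(c,c):c\leq 0\}, three rays from the origin in the directions (1,0), (0,1) and (-1,-1).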

Adjoint Functors

Today we’re going to talk about adjoint functors.

Definition: Let L:C\to D be a functor, and let R:D\to C be another functor. Then L and R are adjoint functors (L left adjoint to R) if for X\in Obj(C) and Y\in Obj(D) we have Mor_C(X,RY)\simeq Mor_D(LX,Y), naturally in X and Y.

Mac Lane has famously stated in his seminal book that "adjoint functors arise everywhere". However, what is the utility of such functors?

1. Solutions to Optimization Problems: Suppose you have an rng R (a ring without the identity element), and you want to "adjoin" the minimal number of elements to the rng so that it becomes a ring. Explicitly, you want to adjoin 1, along with all the elements this forces, such as r\pm 1 for r\in R. How can you solve this problem? Consider the category E whose objects are rng morphisms of the form R\to S, where S is a unital ring, and where a morphism between the objects R\to S_i and R\to S_j is a unital ring morphism S_i\to S_j commuting with the maps from R. Then the initial object in this category is R with a unity adjoined: every other solution factors through it. This is the smallest ring containing R, and hence the most efficient solution to our problem; a sketch of the construction in code follows below.
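Here is a minimal sketch of the construction in Python (my own illustration: the unitalization \Bbb{Z}\oplus R with multiplication (m,r)(n,s)=(mn,\ ms+nr+rs); the class name and the test rng 2\Bbb{Z} are my choices, not from any source):

```python
class Unitalized:
    """An element (m, r) of Z + R (as pairs), the rng R with an identity
    adjoined. R's elements must support +, * and multiplication by integers;
    the even integers 2Z (an rng with no unit) work as a test case."""
    def __init__(self, m, r):
        self.m, self.r = m, r
    def __add__(self, other):
        return Unitalized(self.m + other.m, self.r + other.r)
    def __mul__(self, other):
        # (m + r)(n + s) = mn + (ms + nr + rs), keeping the Z and R parts apart
        return Unitalized(self.m * other.m,
                          self.m * other.r + other.m * self.r + self.r * other.r)
    def __repr__(self):
        return f"({self.m}, {self.r})"

one = Unitalized(1, 0)      # the adjoined identity
r = Unitalized(0, 4)        # the element 4 of 2Z, embedded as (0, 4)
print(one * r, r * one)     # (0, 4) (0, 4): (1, 0) really acts as a unit
```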

When we have a functor L, we ask ourselves: what is the problem, encoded by a functor R, to which L is the most efficient solution? Then L is the left adjoint of R. In the example above, R is the forgetful functor from unital rings to rngs, and L is the unitalization functor.

We have different formulations of adjointness. The unit-counit formulation gives natural transformations \eta:1_C\Rightarrow R\circ L (the unit) and \epsilon:L\circ R\Rightarrow 1_D (the counit) satisfying the triangle identities; note that these are natural transformations, not equalities of functors.

In terms of the Hom isomorphism: an adjunction L\dashv R is a natural isomorphism between functors C^{op}\times D\to Set. What does this mean? We take a tuple (X,Y), and make the following two functors act on it: (X,Y)\mapsto Hom_D(LX,Y) and (X,Y)\mapsto Hom_C(X,RY). The adjunction is a natural isomorphism between these two functors, given by Hom_C(X,RY)\simeq Hom_D(LX,Y). But why C^{op}? Why not just C? Because Hom is contravariant in its first argument: a morphism X'\to X induces Hom(X,RY)\to Hom(X',RY), so these Hom constructions are (covariant) functors only on C^{op}\times D.
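A concrete instance of this Hom isomorphism, in the category of sets, is the product-exponential adjunction Hom(X\times A,Y)\simeq Hom(X,Hom(A,Y)), where L=-\times A and R=Hom(A,-); the natural bijection is currying. A minimal Python sketch (my own illustration, not from any source):

```python
def curry(f):
    """One direction of the bijection Hom(X x A, Y) -> Hom(X, Hom(A, Y))."""
    return lambda x: lambda a: f((x, a))

def uncurry(g):
    """The inverse direction Hom(X, Hom(A, Y)) -> Hom(X x A, Y)."""
    return lambda pair: g(pair[0])(pair[1])

add = lambda pair: pair[0] + pair[1]    # a morphism X x A -> Y
print(curry(add)(2)(3))                 # 5
print(uncurry(curry(add))((2, 3)))      # 5: the round trip is the identity
```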

De Rham Cohomology - I

This will be a rambling progression towards the De Rham cohomology. First we have a map (a chart) from an open set in the manifold to an open subset of \Bbb{C}^n. A collection of such charts over a cover of the manifold is known as an atlas. There can be many charts and hence many atlases. Do these charts have to agree on the intersection of open sets? No. But the transition map \phi_2\circ\phi_1^{-1} has to be a diffeomorphism. Why do we have this condition? As discussed in an earlier post, we cannot consistently map an arbitrary manifold to \Bbb{C}^n with a single chart; the flat "mattress" of \Bbb{C}^n is a fairly restrictive shape. Hence, we need this weaker condition.

What is the union of two atlases? You're mapping the same point on the manifold to different open sets of \Bbb{C}^n. The more atlases, the more different images the same point on the manifold may have. However, as a whole neighbourhood of that point is mapped along with it, differentiability is preserved. In a manner of speaking, things are still very much under control: we can still do calculus. Now, it is possible that the union of two smooth atlases is not a smooth atlas. Hence, we form equivalence classes based on the "compatibility" of atlases: two atlases are called compatible if their union is smooth. The union of all atlases in the same equivalence class is then a maximal atlas.

Let us now talk about manifolds with boundary. An n-dimensional manifold with boundary is one in which each point has a neighbourhood that can be mapped homeomorphically to an open set of the half-space \Bbb{H}^n. A smooth map f:M\to N between manifolds is one such that for any U\subset M there exists V\subset N with f(U)\subset V, and there exist charts (U,\phi) with \phi:U\to\Bbb{R}^m and (V,\psi) with \psi:V\to\Bbb{R}^n such that \psi\circ f\circ \phi^{-1} is differentiable. Here the manifold M is assumed to be m-dimensional and N to be n-dimensional. If f is a smooth homeomorphism and so is f^{-1}, then M and N are called diffeomorphic manifolds. It turns out that smooth manifolds form a category, Man: morphisms are smooth maps, and the chain rule shows why morphisms compose to give smooth morphisms.

Now we shall discuss tangent spaces. These, as intuition would suggest, are determined by differentiation. However, differentiation cannot be done on the manifold itself; it has to be done in the image of a chart around each point. Hence, we take a path through the point, map that path to \Bbb{C}^n, and differentiate there. The resulting derivative is of course path-dependent, but the notion it defines does not depend on the chart. Two paths are called equivalent if their derivatives at the image of p are the same. The tangent space is the vector space formed by all such equivalence classes of paths. Why is the dimension of the tangent space the same as the dimension of the manifold? Just think about paths along each of the m coordinate directions of the manifold, and think about why their classes are linearly independent.

Let f:M\to N be a smooth map. Then the pushforward f_* is a map from T_pM to T_{f(p)}N. It maps [\alpha] to [f\circ\alpha]. This map is well-defined: if \alpha_1'(0)=\alpha_2'(0), then (f\circ\alpha_1)'(0)=(f\circ\alpha_2)'(0). Now we define a basis for T_pM. The basis is defined as \frac{\partial}{\partial x_i}\Big|_p=(\phi^{-1})_*(e_i) for a chart \phi. What does this mean? We just pick out the equivalence class of paths that maps to e_i: the class of paths that "looks like" the standard i-th basis vector.

The tangent bundle TM is a way of considering the union of all the tangent spaces. Let \pi:TM\to M be the projection; for p\in M, the fiber \pi^{-1}(p) is T_pM. Then for suitable U\subset M, we have \pi^{-1}(U) homeomorphic to U\times \Bbb{C}^n. Also, if \text{pr}_1 is the projection operator from U\times \Bbb{C}^n to U, and f is the homeomorphism from \pi^{-1}(U) to U\times \Bbb{C}^n, then \text{pr}_1\circ f=\pi on \pi^{-1}(U). What does this condition imply? That \{p\}\times\Bbb{C}^n is also a sort of tangent space on the manifold M, and T_pM maps to \{p\}\times\Bbb{C}^n. A vector bundle \pi:E\to M is smooth if E and M are both smooth manifolds and \pi (together with the local trivializations) is smooth. Let TM be the tangent bundle of the smooth manifold M. Why is TM a smooth manifold? What we need to do is the following: take any chart on M, extend it to TM in the obvious way, and then prove that the transition functions are smooth. They are smooth because the transition function on TM is just the old transition function on M together with its Jacobian acting linearly on the fibre \Bbb{C}^n.

What is a smooth bundle map between two smooth bundles? Remember that a bundle is itself a smooth manifold; hence, we can think of such a map as, first of all, an ordinary smooth map. In addition, it should take fibers to fibers, the maps between fibers should be linear, and the map between bundles and the induced map between the base manifolds should commute in the obvious way.

The cotangent space at a point p\in M is the dual vector space of T_pM. The basis vectors dx^i|_p of the cotangent space are developed in the usual way, and are called differentials. Pushforwards (f_*) become pullbacks (f^*) in the same way that the direction of a map between vector spaces gets reversed when passing to duals: f^* takes a functional \omega on T_{f(p)}N and maps it to the functional \omega\circ f_* on T_pM. A cotangent bundle is constructed in a way that is similar to the tangent bundle, and it is also a smooth manifold for the very same reasons: one has a homeomorphism from \pi^{-1}(U) to U\times \Bbb{C}^n.

A differential 1-form is a smooth map f:M\to T^*M such that \pi\circ f=id_M; in other words, a smooth section of the cotangent bundle. It is important to note that this is a smooth map. For a map f:M\to N between manifolds, the pullback of differential forms is defined in the obvious way.

The Lagrangian Method

What exactly is the Lagrangian method? It seems to be a popular method to solve Max/Min problems in Calculus. But generations of Calculus students may have found it troubling to understand why it works. We shall discuss this method today.

This is a method of finding local maxima and minima; derivatives are very much a local property, and don't tell us much about the global behaviour of a function. We are to maximize f(x,y) under the condition that g(x,y)=c. Note that the graph of f(x,y) lives in three dimensions, while the constraint set g(x,y)=c is a curve in the plane.

In order to crack this problem, we need to rely upon the following intuition: at a constrained maximum or minimum, f cannot increase or decrease to first order along the constraint curve, i.e. the directional derivative of f along the curve is zero. As the gradient is the direction of fastest increase, \nabla f must therefore be orthogonal to the curve. But \nabla g is also orthogonal to the curve, since the curve is a level set of g. Hence the two gradients are parallel.

Hence, \nabla f=\lambda \nabla g.
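As a worked example (my own toy choice of objective and constraint, not from any particular text), we can hand the conditions \nabla f=\lambda\nabla g and g=c to SymPy:

```python
from sympy import symbols, solve

x, y, lam = symbols('x y lambda', real=True)

f = x * y        # objective: a toy choice
g = x + y        # constraint g(x, y) = 1

# grad f = lambda * grad g, together with the constraint itself
equations = [f.diff(x) - lam * g.diff(x),
             f.diff(y) - lam * g.diff(y),
             g - 1]
print(solve(equations, [x, y, lam], dict=True))
# -> x = y = 1/2, lambda = 1/2: the constrained maximum of xy on x + y = 1
```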

Why manifolds

We know what complex manifolds are: entities which "locally" look like \Bbb{C}^n. We also know about transition functions. However, today we're going to ask an important question, the kind of question that impedes all progress in modern math: **why** manifolds?

We can sort of understand why manifolds have the condition that each point lies inside a neighbourhood which is homeomorphic to an open set in \Bbb{C}^n. This allows us to do a lot of things on the manifold, because we know how to do those things on \Bbb{C}^n. Calculus is just the tip of the iceberg. It allows us to establish a metric (at least locally), and we also gain a lot of intuition as to how the manifold “looks” if we zoom in a lot. Which sometimes is enough.

However, why transition functions? Why can we not just "continuously" map neighbourhoods to \Bbb{C}^n, sending the intersection of two neighbourhoods to the same points of \Bbb{C}^n? After all, isn't a manifold just a slightly perturbed, slightly wavy copy of \Bbb{C}^n? Why are we mapping two overlapping sets to humungously different open sets, mapping points in the intersection to different points in \Bbb{C}^n, and then just ensuring that \phi_2\circ\phi_1^{-1} is holomorphic? (I realize that I have not specified what \phi_1^{-1} and \phi_2 are, but the reader who's read up on complex manifolds will easily be able to infer this.) This is because we want to be able to study objects that locally look like \Bbb{C}^n, but are *not* all slightly perturbed, wavy versions of \Bbb{C}^n.

Consider \Bbb{P}^1. It is easy to see that it locally looks like \Bbb{C}. However, there's a major difference between \Bbb{P}^1 and \Bbb{C}: \Bbb{P}^1 is compact while \Bbb{C} is not. This global property prevents even a homeomorphism between them, so we cannot "continuously" map the open neighbourhoods of \Bbb{P}^1 into \Bbb{C} in a globally consistent way. To picture this, imagine walking along \Bbb{P}^1 through a chain of such neighbourhoods: on \Bbb{P}^1, you eventually loop around and move back towards where you started; on \Bbb{C}, you just keep going farther. These two pictures are incompatible.

Hence, we have to weaken what we can ask for. We need to make smaller demands of our mathematical gods. We cannot have a continuous mapping of neighbourhoods. However, we can at least ensure that the transition functions are holomorphic.

Small wins.

Notes on the Zero Forcing Algorithm

In this post I will try and understand the gist of the paper Zero Forcing Sets and the Minimum Rank of Graphs by Brualdi.

Let F be a field. The set of symmetric matrices of order n\times n with entries from F is called S_n(F). A matrix corresponds to a graph in the following way: for i\neq j, the entries a_{ij}=a_{ji} are non-zero exactly when there is an edge between vertices i and j (the diagonal entries are unconstrained). Clearly, multiple symmetric matrices can correspond to a single graph.

Let \mathfrak{S}(G) be the set of all symmetric matrices in S_n(F) corresponding to a graph G. Then the minimum rank of G is mr(G)=\min\{\text{rank}(A):A\in\mathfrak{S}(G)\}. Also, the corank of a matrix is the dimension of its null space, and the maximum corank of G is M(G)=\max\{\text{corank}(A):A\in\mathfrak{S}(G)\}. There's a theorem which states that for a graph G on |G| vertices, mr(G)+M(G)=|G|. All this is relative to the field F.

Here we talk about zero forcing sets, and discuss whether this notion is similar to the game that Pete and I developed. The colour change rule is the following: let all vertices of a graph G be coloured either white or black; if a black vertex has exactly one white neighbour, then that white neighbour too is coloured black. A zero forcing set of G is a set of vertices of minimum size (it need not be unique) such that colouring them black ensures that the whole graph will eventually be coloured black. Is this related to the game that Pete and I developed? (A small implementation of the colour change rule follows below.)
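Here is a minimal sketch of the colour change rule in Python (my own illustration; the function name and graph encoding are my choices, not from the paper):

```python
def zero_forcing_closure(adjacency, black):
    """Repeatedly apply the colour change rule: a black vertex with exactly
    one white neighbour forces that neighbour to become black.
    `adjacency` maps each vertex to the set of its neighbours."""
    black = set(black)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            white = [u for u in adjacency[v] if u not in black]
            if len(white) == 1:          # v forces its unique white neighbour
                black.add(white[0])
                changed = True
    return black

# A path on 4 vertices: one black endpoint forces the whole path,
# so {1} is a zero forcing set of the path.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(zero_forcing_closure(path, {1}))   # {1, 2, 3, 4}
```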

If we can build a graph which becomes all black by one algorithm but not by another, then we can know that they’re not the same.

1. Assume that 3-vertices are white and 2-vertices are black.

The three-squares graph (pictured below) turns all black under the zero forcing algorithm, but not under our game.

[image: hand-drawn three-squares graph]

The diagram given below is an example of a graph which is forced to have zero sections by our game, but not by the zero forcing algorithm.

[image: hand-drawn counterexample graph]

2. Assume that 3-vertices are black and 2-vertices are white.

The three-squares graph is again an example of a graph that is forced to have zero sections by the zero forcing algorithm, but not by our game.

The graph given above is again an example of a graph which is forced to have zero sections by our game, but not the zero forcing algorithm.

Hence, the zero-forcing algorithm does not have much to do with the game that we’ve developed.