cozilikethinking


Month: July, 2014

Algebraic sets

This is a longish post on algebraic sets. I bought Hartshorne, but ended up studying from Fulton’s “Algebraic Curves”. I will focus on the geometric aspects of algebraic sets.

So what are algebraic sets? They could be points, curves, surfaces. But if they are curves, they cannot be one of those finitely long segments snipped out of a longer curve: a full circle qualifies, but an arc does not. If they are surfaces, they cannot be cut off along some arbitrary edge. In short, some geometric entities qualify as algebraic sets, whilst others don’t. Confused?

Algebraic sets V(S) in affine space k^n are those sets of points at which every polynomial in S\subset k[x_1,x_2,\dots,x_n] vanishes. More on algebraic sets can be found in Hartshorne and Fulton, amongst other books on introductory Algebraic Geometry.

As an example, take all points satisfying y-x=0. This is an algebraic set: it is the set of common zeroes of all the polynomials in \langle y-x\rangle.
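
To see why, note that any polynomial in the ideal can be written as f=g\cdot(y-x) for some g\in k[x,y]. At any point (a,a) on the line,

f(a,a)=g(a,a)\cdot(a-a)=0,

so every polynomial in \langle y-x\rangle does indeed vanish on the whole line y=x.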

Now we move on to discuss the nature of k. It is quite clearly a field. But does it matter whether it is algebraically closed or not? It does. Take \Bbb{R} and \Bbb{C} for instance. In affine spaces k^n where n\geq 2, every nonconstant polynomial in x,y has infinitely many zeroes if k=\Bbb{C}, but may have only a finite number of zeroes if k=\Bbb{R}. For example, take the polynomial x^2+y^2 in k[x,y]. Clearly, it has only one zero in \Bbb{R}^2, but infinitely many in \Bbb{C}^2. Now you might wonder why n has to be at least 2. Why does this not work for n=1? Figure that out for yourself [hint: it has something to do with the algebraically closed nature of \Bbb{C} and the fact that you can substitute any value for y in every equation and solve for x in \Bbb{C}, while in k^1 there is only one variable].
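
To make the \Bbb{C}^2 claim concrete, factor the polynomial over \Bbb{C}:

x^2+y^2=(x+iy)(x-iy).

So over \Bbb{C}^2 the zero set is the union of the two complex lines y=ix and y=-ix, each containing infinitely many points, while over \Bbb{R}^2 the only zero is (0,0).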

Now let us move on to more important things. Say you have k^n. Can you construct a set of polynomials whose zero set is a single point (say the point a)? What about a set of polynomials which have a curve as their zero set? What about a surface? What kinds of curves and surfaces are possible? I will try to answer all these questions in the subsequent paragraphs. But first a note about polynomials in n variables. These are not the usual polynomials whose graphs you can draw on a plane or a sheet of paper, like y=x^2. Their zero sets can be planes, surfaces, cylinders, 25-dimensional quantum surfaces, whatever you will. Please spend some time thinking about these concepts.

What is a zero-dimensional dot in n-dimensional affine space A? What is a one-dimensional curve in A? What is a two-dimensional surface in A? How are these things defined? In n-dimensional space, all lower-dimensional algebraic sets are defined through intersections. For example, in k^3, all curves are defined by the intersection of surfaces: it is possible to define a curve as the intersection of two surfaces, and a zero-dimensional point as the intersection of 3 surfaces. Generalizing this idea, in n-dimensional space, an (n-a)-dimensional algebraic set can be defined as the intersection of a hypersurfaces, each of dimension n-1. This is probably the most important line in the whole article. Read it again. Draw it. Gulp it down with some lemon sikanjee.
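
A concrete instance of this in k^3: the z-axis (a one-dimensional curve) is the intersection of the two surfaces V(x) and V(y), and a single point (a,b,c) is the intersection of the three surfaces V(x-a), V(y-b) and V(z-c):

V(x)\cap V(y)=\{(0,0,t):t\in k\},\qquad V(x-a)\cap V(y-b)\cap V(z-c)=\{(a,b,c)\}.

In each case, an (n-a)-dimensional set is carved out by a hypersurfaces.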

What are algebraic sets, geometrically speaking? In an n-dimensional affine space, they are finite unions of zero sets of polynomials, each of dimension \leq n. They are polynomials (or rather, their zero sets). Polynomials they are (says Yoda). Why do they have to be polynomials? Because they are in essence either zero sets of single polynomials, or intersections of such zero sets. Who says that? The definition. An algebraic set is the set of points at which every polynomial in an assorted bunch of polynomials in k[x_1,x_2,\dots,x_n] vanishes. In other words, it is the intersection of the zero sets of all polynomials in that bunch. Can it be an infinite intersection? Yes, and you will still end up with an algebraic set. That is the thing with zero sets of polynomials. They intersect to give zero sets of polynomials.

Why a finite union? Because you need to multiply the polynomials of one set with those of another to get the union of the corresponding algebraic sets, and the multiplication of an infinite number of polynomials is not defined, as every polynomial needs to have a finite degree.
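
A worked example of union-by-multiplication in the plane: V(x) is the y-axis and V(y) is the x-axis, and their union, the pair of coordinate axes, is

V(x)\cup V(y)=V(xy),

since a product fg vanishes at a point exactly when f or g does. Multiplying the polynomials of one set with those of another is what realizes a union, and this only makes sense for finitely many sets at a time.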

Hence, an infinite bunch of disconnected points, or a finitely long line segment, cannot be an algebraic set.

Let S be an algebraic set. Then I(S) is quite clearly an ideal. I will try to motivate why I(S) needs to be finitely generated. First let us get a lemma out of the way: if an ideal P is generated by polynomials p_1,p_2,\dots,p_n, then Z(P) is exactly equal to Z(p_1)\cap Z(p_2)\cap\dots\cap Z(p_n). OK. Now let S be a-dimensional, where a\leq n. Then we can surely find n-a hypersurfaces of dimension n-1 which intersect to give S. These polynomials have to exist (this becomes clear on thinking a little about the concepts involved). Do these n-a polynomials generate I(S)? This is a question that I am yet to answer. Hopefully I will continue this post tomorrow.
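
In the meantime, here is a sanity check on the finite-generation question: take S to be the single point (a_1,a_2,\dots,a_n)\in k^n. Then

I(S)=\langle x_1-a_1,x_2-a_2,\dots,x_n-a_n\rangle,

which is generated by n polynomials; the zero set of each is an (n-1)-dimensional hyperplane through the point, exactly the intersection picture described above.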

A little something about the two equivalent definitions of compactness in a metric space

The two statements are equivalent in a metric space:

1. Every infinite sequence has a convergent subsequence. In other words, there is an accumulation point for every infinite sequence.

2. Every open cover of the metric space has a finite subcover; that is, the space is compact.

Proving (2) from (1) follows the standard practice of deducing that any open set containing the accumulation point contains all but finitely many points of the sequence, and the remaining finitely many points can be covered by a finite number of open sets.

I used to wonder: what if there are infinitely many accumulation points? Can there be sequences with infinitely many subsequences converging to infinitely many separate points? The answer is yes, as the example below shows. But we notice that the above argument alone is not enough.
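
A concrete example: take [0,1] with the usual metric, and let (q_1,q_2,q_3,\dots) be an enumeration of the rationals in [0,1]. Since the rationals are dense, every single point of [0,1] is the limit of some subsequence, so this one sequence has uncountably many accumulation points.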

However, the problem soon gets resolved if we notice that the set of all limit points of a sequence (the limit points of all its subsequences) is closed! See if you can figure out why this necessarily produces the finite subcover we need. If you’re unable to, leave a comment. I will include the full solution.

My new project

When we learn mathematics while young, we are taught numbers using pictures and other kinds of visual devices. That is what helps us learn those concepts. Had they been taught as axiomatic constructs, children would find it exceedingly difficult to learn them. This is not substantiated, but something I truly feel.

The things that I feel I truly understand well are things that I learn from pictures and images. Every picture carries a thousand words (an overused cliché, but true all the same). Moreover, these pictures are easy to recall later, as compared to mathematical ideas in written, formalized form.

My new pet project is to convert mathematical definitions, ideas and proofs into images that I will construct in Microsoft Paint or elsewhere, and then embed in this blog.

I remember reading an article by a Fields medallist recently, who described how he learned mathematics. He said he used to write and rewrite proofs and ideas a hundred times over, until he had memorized and internalized them well enough for future use.

With all due respect to him, I hope that the visual conveyance of ideas gets this done much quicker than that. I strongly believe in this project, and shall spend copious amounts of time constructing tell-tale images. I hope they are of help in attracting the best minds of the world to Mathematics.

Note:- I was talking to a bachelors student from IIT Bombay yesterday, clearly one of the most competitive colleges in the world to get into! He sounded really intelligent and well-informed, although that is difficult to judge on a first meeting. He too was interested in pursuing mathematics, having qualified for the olympiad program twice. However, he soon lost interest. He couldn’t understand what was going on in things like Topology and Group Theory. He worked relentlessly, and also took advanced classes like Algebraic Geometry, which he aced. However, he confessed that he couldn’t understand or recall much. Hence, we switched to combinatorics. I also know IMO medallists who feel the same way and eventually shift to something more intuitive like combinatorics. One of them is currently attending the VSRP program with me.

I remember feeling lost too for a long time. Although typing my thoughts down would help me understand the proofs and concepts involved, retention and developing an intuitive feel for said concepts for future use were not up to the mark.

I have tried the concept of visualization- the construction of images has helped me tremendously. Although this project may be huge and monumental, I feel it will be worth it in the end.


Just a small note.

I am exactly like this guy. EXACTLY. Will Mackenzie. Kinda bad looking. Bad teeth. Unremarkable complexion. High morals which have been discarded only too often. And an idealism-fuelled self-assurance.

 

I used to think I was a lot like my girlfriend. But I think everyone knows she’s too good for me.

A continuation of the previous post

Let V be an n-dimensional vector space, and let T:V\to W be a surjective linear mapping. If W is m-dimensional, then T is represented, once bases are chosen, by a matrix of order m\times n.

Is it possible that m>n? Let us assume that it is. Then W has a basis \{b_1,b_2,\dots,b_m\}, each element of which is mapped to by some vector in V; call these vectors u_1,u_2,\dots,u_m, with T(u_j)=b_j. Moreover, these vectors have to be linearly independent: any linear dependence among the u_j would be carried by T to a linear dependence among the b_j. But m linearly independent vectors in V are impossible, as V is n-dimensional and m>n. Hence m\leq n. To rephrase this, there cannot be a surjective linear mapping from a vector space of lower dimension onto one of higher dimension.

Choose a basis \{v_1,v_2,\dots,v_n\} for V, and choose a corresponding representation of the transformation T. Note that a different choice of basis would entail a different representation of the same transformation T. How do you visualise multiple matrices denoting the same transformation? Visualize T geometrically. It maps the same vectors (arrows) to the same vectors (arrows).
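
A small example of this in \Bbb{R}^2: let T swap coordinates, so T(x,y)=(y,x). In the standard basis \{(1,0),(0,1)\}, T is represented by

\begin{pmatrix}0&1\\1&0\end{pmatrix},

while in the basis \{(1,1),(1,-1)\} the same T is represented by

\begin{pmatrix}1&0\\0&-1\end{pmatrix},

because T(1,1)=(1,1) and T(1,-1)=-(1,-1). The same arrows go to the same arrows; only the matrices differ.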

The kernel of T will always be a vector subspace of V. However, it may or may not be generated by a subset of \{v_1,v_2,\dots,v_n\}. But then again, you can always create a new basis in which a subset generates the kernel, and then formulate the accompanying representation of T. Say the new basis is \{r_1,r_2,\dots,r_n\}, where \{r_1,r_2,\dots,r_k\} generates the kernel. Then T is of the form

\begin{pmatrix}0&0&\dots&0&a_{1,k+1}&a_{1,k+2}&\dots&a_{1,n}\\ 0&0&\dots&0&a_{2,k+1}&a_{2,k+2}&\dots&a_{2,n}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&0&a_{m,k+1}&a_{m,k+2}&\dots&a_{m,n}\end{pmatrix}.

That T maps \{r_{k+1},r_{k+2},\dots,r_{n}\} to linearly independent vectors can easily be verified. And as the mapping is onto, n-k is the dimension of W.

If the mapping were not surjective, we would have \dim(\text{Ker }T)+\dim W>n. But then again, even if the mapping is not surjective, we have \dim(\text{Ker }T)+\dim(\text{Im }T)=n.
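
A quick check of the formula: let T:\Bbb{R}^3\to\Bbb{R}^2 be the projection T(x,y,z)=(y,z). The kernel is the x-axis, spanned by (1,0,0), and the image is all of \Bbb{R}^2, so

\dim(\text{Ker }T)+\dim(\text{Im }T)=1+2=3=\dim V.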

Now think of T in terms of the original basis \{v_1,v_2,\dots,v_n\}. Note that if a set of vectors is linearly independent in one basis representation, it is linearly independent in EVERY basis representation. Moreover, the kernel of a linear mapping T consists of the same vectors, regardless of basis. Hence, if a_{1g}v_1+a_{2g}v_2+\dots+a_{ng}v_n=r_g for g\leq k, the coordinate vectors \begin{pmatrix}a_{1g}&a_{2g}&\dots&a_{ng}\end{pmatrix} will generate the kernel. What I’m trying to say is that it does not matter whether a subset of the chosen basis generates the kernel or not. The dimensions of the kernel and of the image will remain the same. It is just that T is easier to visualize if a subset of the basis does indeed generate the kernel.

Just a reminder: the mapping being “onto” is imperative for the formula \dim(\text{Ker }T)+\dim W=n to hold; without surjectivity, \dim W must be replaced by \dim(\text{Im }T).

So now we have two formulae for the dimension of V. In the first case we chose a subspace of any dimension, and then calculated the dimension of the subspace of V^* which had the former subspace as kernel. In the second case, we first chose the linear mapping, and then calculated the dimensions of the kernel and the image (yes, the dimension of the image, which works even if the mapping is not surjective). You get the feeling that you’re pretty much doing the same thing using two different methods. I leave you to explore further whether you’re indeed doing the same thing or not.

Dual spaces- An exposition

Studying Dual Spaces can be confusing. I know it was for me. I’m going to try to break down the arguments into a more coherent whole. I am following Serge Lang’s “Linear Algebra” (the Master is my favourite author). However, I am not strictly following the order he follows to develop the theory.

Say we have a vector space V over field \Bbb{K} satisfying \dim V=n. Then this vector space is isomorphic to the vector space of n-tuples over \Bbb{K}, provided the operations defined are component-wise addition and scalar multiplication across all components.

Convince yourself of the fact above. It really is simple, and the n-tuple representation is most clearly suggestive of the steps of the proof.
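
For the sceptical, here is the isomorphism spelt out: fix a basis \{v_1,v_2,\dots,v_n\} of V and send each vector to its coordinate tuple,

a_1v_1+a_2v_2+\dots+a_nv_n\mapsto (a_1,a_2,\dots,a_n).

This map is linear, injective (only the zero vector has all coordinates zero) and surjective, which is all an isomorphism asks for.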

Now in such an n-tuple vector space, every tuple can multiply with every vector in V (component-wise, summing the products) to give a map into \Bbb{K}. Note that it is not a map into V. It is a map into \Bbb{K}. This n-tuple vector space is called the dual space of V, as it mimics the properties of V amazingly well. Notationally, the dual space of V is represented by Lang as V^*.
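
To spell out how a tuple acts as a map into \Bbb{K}: if \varphi=(c_1,c_2,\dots,c_n) and v has coordinates (a_1,a_2,\dots,a_n) in the chosen basis, then

\varphi(v)=c_1a_1+c_2a_2+\dots+c_na_n\in\Bbb{K}.

For instance, in \Bbb{R}^2 the tuple (2,-1) sends (x,y) to 2x-y: a scalar, not a vector in V.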

I feel studying dual spaces in this order gives the main motivation behind the nomenclature- it is isomorphic to V, just like any other n-tuple vector space over \Bbb{K}.

Now onto a major property. Say V has a basis \{v_1,v_2,\dots, v_n\}. Take W to be a vector subspace of V. Then 0\in W for obvious reasons. Let W be generated by \{a_1,a_2,\dots,a_i\}, where \{a_1,a_2,\dots,a_i\}\subseteq \{v_1,v_2,\dots, v_n\}.

Is V\setminus W a vector space? No. It does not contain 0. However, it is generated (almost) by n-i basis vectors of V. Hence, if it were a vector space, we could write \dim W+\dim(V\setminus W)=\dim V=n. But we can’t, as V\setminus W is not a vector space. However, if we could somehow embed V\setminus W into a vector space of dimension n-i, we’d be done.

There may be multiple ways of doing this. We’re going to look at one in particular- the set of functionals which map W to 0. Note that such a functional can map vectors in V\setminus W to 0 too. But it definitely maps all vectors in W to 0, regardless of what it does with the vectors in V\setminus W.

Let us call this set \beta, and pick an arbitrary functional T\in \beta. Then T(a_k)=0 for all k\in \{1,2,3,\dots,i\}. Check this for yourself. It is also easy to check that \beta is a vector space over \Bbb{K}, with \dim \beta=n-i. If the basis of V is written as \{a_1,a_2,\dots,a_i,b_1,b_2,\dots,b_{n-i}\}, then each element of \beta can be visualised as the tuple (0,0,\dots,0,c_1,c_2,\dots,c_{n-i}), where c_1,c_2,\dots,c_{n-i} are the elements of \Bbb{K} that b_1,b_2,\dots,b_{n-i} are mapped to, respectively.
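
A small example: take V=\Bbb{K}^3 with basis \{a_1,b_1,b_2\}, and let W be generated by a_1 alone, so i=1. Then \beta consists of exactly the functionals of the form (0,c_1,c_2), and

\dim\beta=2=n-i,

as claimed.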

Hence, we have \dim W+\dim\beta=n, as i+(n-i)=n. It is strange that we are adding dimensions of subspaces that belong to different vector spaces (W belongs to V while \beta belongs to V^*). However, we are only adding natural numbers, and nothing else. Hence there is no contradiction.

Note: Another possibility that we could have looked into for embedding V\setminus W into a suitable vector space would have been the vector subspace of elements whose coefficients of a_1,a_2,\dots,a_i are 0. This is obviously isomorphic to \beta. The point here is that we didn’t necessarily need to traverse to a different vector space to find a suitable subspace of dimension n-i. We had one right at home.