My original proof of “continuous mappings on compact metric spaces are uniformly continuous”

by ayushkhaitan3437

The proof of “continuous mappings on compact metric spaces are uniformly continuous” is rather convoluted and opaque, as given in most real analysis textbooks (Rudin included). There is a lot of unnecessary bookkeeping, and the treatment is not really motivated. Given below is my own proof of the theorem, which is inspired by the existing proof to an extent I am unaware of. I hope it helps the readers. The motivation for each step is explicitly stated.

A mapping f:X\to Y is uniformly continuous if for every \epsilon>0, there exists a single \delta>0 (independent of x) such that f(B(x,\delta))\subset B(f(x),\epsilon) for all x\in X.
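For contrast, a standard non-example (not from the original argument) shows why compactness of the domain matters: on the non-compact interval (0,1), a continuous function can fail to be uniformly continuous.

```latex
% f(x) = 1/x is continuous on (0,1) but not uniformly continuous:
\[
  f(x)=\frac{1}{x}, \qquad |f(x)-f(y)|=\frac{|x-y|}{xy},
\]
% so keeping |f(x)-f(y)| < \epsilon near a point x forces |x-y| to be
% on the order of \epsilon x^2, which shrinks to 0 as x \to 0^+;
% hence no single \delta > 0 works for all x simultaneously.
```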

Let f:X\to Y be the continuous mapping under consideration. Fix \epsilon>0. By continuity, for every x\in X there is a \lambda_x>0 such that f(B(x,\lambda_x))\subset B(f(x),\epsilon/2).

\bigcup_{x\in X} B(x,\lambda_x) forms a cover of X. Call this cover U.

We will do two things with this cover U.

Firstly, as X is compact, let us choose a finite subcover \bigcup_{i=1}^n B(x_i,\lambda_{x_i}). We shall call this cover U'.
Secondly, we form another cover \bigcup_{x\in X} B(x,\lambda_x/2). Call it W. As X is compact, W too will have a finite subcover. We shall call it W'.

Both U' and W' consist of balls centred on certain points in X. These two sets of centres need not coincide. Let the set of centres of W' be S and the corresponding set for U' be R. Seek out the points in S\setminus R, and add them to R. Call this new set R'=R\cup(S\setminus R); note that in consequence S\subset R'. For the points in R'\setminus R, draw \lambda_x-balls around them, and add these balls to U'. Note that U' still remains a finite cover of X. Call the new finite cover U''.

Final bit of notation: let \delta=\min\limits_{x_i\in R'}\{\lambda_{x_i}/2\}.
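Since R' is finite, this is a minimum over finitely many positive numbers, and two facts about \delta that the argument below relies on are worth recording explicitly:

```latex
\[
  \delta \;=\; \min_{x_i \in R'} \frac{\lambda_{x_i}}{2} \;>\; 0,
  \qquad\text{and}\qquad
  \delta \;\le\; \frac{\lambda_p}{2} \quad \text{for every } p \in R'.
\]
```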

Take any two points a and b such that d(a,b)<\delta. Clearly a is contained within some ball of W'; let that ball have centre p, so that a\in B(p,\lambda_p/2). Note that p\in S\subset R', and hence U'' contains the \lambda_p-ball around p. Since a\in B(p,\lambda_p), we have d(f(p),f(a))<\epsilon/2.

In addition to this, we have d(p,b)\leq d(p,a)+d(a,b)<\lambda_p/2+\lambda_p/2=\lambda_p, using d(p,a)<\lambda_p/2 and d(a,b)<\delta\leq\lambda_p/2. Hence b\in B(p,\lambda_p), and so d(f(p),f(b))<\epsilon/2.

Now d(f(a),f(b))\leq d(f(a),f(p))+d(f(p),f(b))<\epsilon/2+\epsilon/2=\epsilon.

Hence proved.
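For readers who like machine-checked statements, the theorem proved above (Heine–Cantor) can be stated in Lean 4 with mathlib. This is a sketch: the closing lemma name is my best recollection of mathlib's API and should be checked against the current library.

```lean
import Mathlib

-- Heine–Cantor: a continuous map from a compact metric space is
-- uniformly continuous. The lemma name below is an assumption about
-- mathlib's current API.
example {X Y : Type*} [MetricSpace X] [CompactSpace X] [MetricSpace Y]
    {f : X → Y} (hf : Continuous f) : UniformContinuous f :=
  CompactSpace.uniformContinuous_of_continuous hf
```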

An explanation for some steps:

The essential motive in this proof is to determine a distance \delta>0 such that for any two points a and b separated by less than \delta, their images are separated by less than \epsilon. For this we need both points to lie in a single ball which f maps into an \epsilon/2-ball.

I will continue this explanation when I get the time.