The topology of data

The paper that I wish to discuss today is An Introduction to Topological Data Analysis: fundamental and practical aspects for data scientists, by Frederic Chazal and Bertrand Michel. Topological data analysis is an exciting new field, and this paper can be understood by people from a wide range of backgrounds.

Notation: For this paper, \Bbb{X} denotes a topological space, and \Bbb{X}_n=\{x_1,\dots,x_n\}\subset \Bbb{X} a sample of points from it.

Introduction

Topological Data Analysis works on the assumption that the topology of data is important. But why? Let us take an example from physics. Suppose we want to study the energy emitted by atoms after absorbing sunlight. We observe that energy emitted by atoms forms a discrete set, and not a continuous one. We take a large number of readings, and observe the very same phenomenon. Hence, we can conclude with a high degree of probability that the topology of the energy states of atoms is discrete. This is one of the most fundamental discoveries of modern Science, and heralded the Quantum revolution.

It becomes clear from the above example that understanding the topology of data can lead us to understand the universe around us. However, we have to “guess” this topology of the “population” from a much smaller “sample”. Topological Data Analysis can be thought of as the study of making “good” guesses in this realm.

Metric Spaces, Covers, and Simplicial Complexes

Let us take a metric space (X,d). The Hausdorff distance between two compact sets A,B\subset X is defined as d_H(A,B)=\max\big(\sup\limits_{a\in A}\inf\limits_{b\in B}d(a,b),\ \sup\limits_{b\in B}\inf\limits_{a\in A}d(a,b)\big). It is essentially a measure of how “similar” two compact sets look. We need compactness, or at least boundedness, so that these suprema and infima are well-defined and finite. However, what if A,B are not subsets of the same space? Gromov generalized the definition in the following way: the Gromov-Hausdorff distance between two compact metric spaces A,B is the infimum of all r\geq 0 such that d_H(\phi_A(A),\phi_B(B))\leq r for some metric space M and isometric embeddings \phi_A:A\to M and \phi_B:B\to M. Essentially, we want to see how “close” the two sets can be brought together across all possible isometric embeddings into all possible metric spaces. As one can imagine, computing it is a seemingly impossible task in most situations, and it is primarily useful when an upper bound on it implies other useful mathematical facts.
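For finite point clouds the suprema and infima above become maxima and minima, so the Hausdorff distance is easy to compute directly. Below is a minimal numpy sketch (the function name and the toy example are mine):

```python
import numpy as np

def hausdorff_distance(A, B):
    """Hausdorff distance between two finite point sets A (n, d) and B (m, d)."""
    # Pairwise Euclidean distances, shape (n, m).
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # sup_{a in A} inf_{b in B} d(a, b), and the symmetric term.
    d_AB = D.min(axis=1).max()
    d_BA = D.min(axis=0).max()
    return max(d_AB, d_BA)

# Two noisy samplings of the same unit circle should be Hausdorff-close.
theta = np.random.uniform(0, 2 * np.pi, 200)
A = np.c_[np.cos(theta), np.sin(theta)]
B = A + np.random.normal(scale=0.01, size=A.shape)
print(hausdorff_distance(A, B))   # a small number, on the order of the noise
```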

Simplicial complexes

Now, given a set \{x_0,\dots,x_k\}\subset \Bbb{R}^d of k+1 affinely independent points, we can form a k-simplex in \Bbb{R}^d, namely the convex hull of these points. A simplicial complex K is a collection of simplices such that

  1. Any face of a simplex in K is also a simplex in K. We need this condition because we want to define a boundary map from the simplicial complex to itself, which will allow us to calculate topological invariants like the homology of the complex.
  2. The intersection of two simplices is either empty, or a common face of both. This condition ensures that only topological “holes” are detected in homology groups, and not other topological features.

Note that K can be thought of as both a topological space and a combinatorial entity. The topological perspective is useful when we’re trying to break up a topological space into simplices in order to calculate homology, and the combinatorial perspective is useful when we use simplices to represent mathematical entities that are not originally topological spaces. For instance, certain monomial ideals in polynomial rings can be studied through associated simplicial complexes (Stanley-Reisner theory), where each vertex corresponds to a variable. Of course, each combinatorial simplicial complex can also be given a topological description, via its geometric realization. Note also that the combinatorial description is more suitable than the topological one in the realm of algorithms and machine learning.

Vietoris-Rips Complex: Given a finite set of points \{x_0,\dots,x_k\}\subset \Bbb{R}^d with pre-determined distances, and a parameter \alpha>0, form all simplices of the form [x_{i_1},\dots,x_{i_l}] whenever the distance between every pair of points in \{x_{i_1},\dots,x_{i_l}\} is at most \alpha. The resulting complex is denoted Rips_{\alpha}(\Bbb{X}). Note that it is an abstract simplicial complex: it need not embed in \Bbb{R}^d itself, although it can always be realized in some higher-dimensional Euclidean space. As we increase the value of \alpha>0, this simplicial complex, and consequently its homology groups, will change.
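Here is a deliberately naive Python sketch of this construction (libraries such as GUDHI do it far more efficiently; the function name and the brute-force enumeration are purely illustrative):

```python
import numpy as np
from itertools import combinations

def rips_complex(points, alpha, max_dim=2):
    """Vietoris-Rips complex: a simplex is included whenever all pairwise
    distances among its vertices are at most alpha.  Returns a list of
    simplices given as tuples of point indices, up to dimension max_dim."""
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    simplices = [(i,) for i in range(n)]                      # 0-simplices
    for dim in range(1, max_dim + 1):
        for idx in combinations(range(n), dim + 1):
            if all(D[i, j] <= alpha for i, j in combinations(idx, 2)):
                simplices.append(idx)
    return simplices

# Four corners of a unit square: at alpha = 1 the edges of the square appear
# (a 1-cycle), while the diagonals (length ~1.41) are still absent.
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
for s in rips_complex(square, alpha=1.0):
    print(s)
```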

Čech Complex: Given a set of points \{x_0,\dots,x_k\}\subset \Bbb{R}^d with pre-determined distances, we form an l-simplex whenever the \alpha-balls around l+1 points have a non-empty common intersection. Hence, an edge now appears as soon as two points are within distance 2\alpha of each other, not \alpha. This complex is denoted {Cech}_{\alpha}(\Bbb{X}). How is it different from Rips_{2\alpha}(\Bbb{X}) though? As shown in the diagram above, three pairwise intersecting discs always give a 2-simplex in Rips_{2\alpha}(\Bbb{X}), but only a hollow 1-cycle in {Cech}_{\alpha}(\Bbb{X}) if the three discs have no common point.
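To make the difference concrete: for three points in the plane, the \alpha-balls have a common point exactly when the radius of the smallest enclosing ball of the three points is at most \alpha. The following sketch (my own helper, not from the paper) checks an equilateral triangle of side 1 with \alpha=0.55, where the 2-simplex belongs to Rips_{2\alpha} but not to the Čech complex:

```python
import numpy as np

def min_enclosing_radius_3pts(p, q, r):
    """Radius of the smallest ball containing three points in the plane:
    half the longest side if the triangle is obtuse, else the circumradius."""
    a, b, c = np.linalg.norm(q - r), np.linalg.norm(p - r), np.linalg.norm(p - q)
    longest = max(a, b, c)
    if longest**2 >= a**2 + b**2 + c**2 - longest**2:        # obtuse or right
        return longest / 2
    area = 0.5 * abs((q - p)[0] * (r - p)[1] - (q - p)[1] * (r - p)[0])
    return a * b * c / (4 * area)                            # circumradius

# Equilateral triangle with side 1, and alpha = 0.55.
p, q, r = np.array([0., 0.]), np.array([1., 0.]), np.array([0.5, np.sqrt(3) / 2])
alpha = 0.55
pairwise_ok = max(np.linalg.norm(p - q), np.linalg.norm(q - r),
                  np.linalg.norm(p - r)) <= 2 * alpha
cech_ok = min_enclosing_radius_3pts(p, q, r) <= alpha
print("2-simplex in Rips_{2a}:", pairwise_ok)   # True: all sides are 1 <= 1.1
print("2-simplex in Cech_a:  ", cech_ok)        # False: circumradius ~0.577 > 0.55
```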

The Nerve Theorem: Given a cover \{U_i\} of a manifold, we can form its nerve: a simplicial complex with one vertex for each U_i and a simplex for every finite sub-collection of the U_i with non-empty intersection (the Čech complex is exactly the nerve of the cover by \alpha-balls). The Nerve Theorem says that if the intersection of every sub-collection of \{U_i\} is either empty or contractible, then the nerve is homotopy equivalent to the manifold.

The contractibility of intersections ensures that we do not “cover up” any holes in the manifold using our discs. But why the Čech complex, and not the Rips complex? This is because a hole would be covered up by three pair-wise intersecting discs in a Rips complex, but not in the Čech complex. Hence, the Čech complex is a useful way of preserving the homology of the underlying manifold.

Why is this theorem important? This is because it takes a continuous property of a topological space, and converts it into a combinatorial entity. For instance, if I only needed to know the homotopy class of a manifold for my problem, studying a simplicial complex on the manifold with the same homotopy type is much easier for me, as I can now feed the combinatorial data of the simplicial complex into well-known algorithms to make deductions about the space.

Mapper Algorithm

For a space X, let f:X\to \Bbb{R}^d be a continuous map. For a cover \{U_i\} of \Bbb{R}^d, the sets \{f^{-1}(U_i)\} form a cover of X. Now consider the set of all connected components of \{f^{-1}(U_i)\}. If the function f and the open cover \{U_i\} are well chosen, the nerve of this “refined” cover gives us useful information about the underlying space X. Note that the \{f^{-1}(U_i)\} don’t have to be contractible anymore. An example is given below:

The function here is the height function (look out for the slightly camouflaged yellow parts). This nerve of the two-holed torus does a decent job of representing its topology, although we fail to detect the small hole at the bottom because the cover chosen of \Bbb{R} is not fine enough.

In practice, however, we don’t map continuous manifolds, but just data points. A suitable example is given below:

From the nerve drawn on the right, one may conclude that the topology of the underlying population, from which the data has been extracted, is a circle.

Some popular choices of f are the centrality function f(x)=\sum\limits_{y\in \Bbb{X}} d(x,y), or the eccentricity function f(x)=\max\limits_{y\in \Bbb{X}} d(x,y). These functions do not require any special knowledge of the data.

As one may imagine, the choice of the cover \{U_i\} determines the nerve that we get from the Mapper algorithm. Generally, one may choose regularly spaced intervals or rectangles U_i which overlap with each other. If f:X\to \Bbb{R}, then the length of the intervals U_i is known as the resolution, and the fraction of overlap between consecutive intervals is called the gain, denoted g. One must explore various resolutions and values of g in order to find the “best” nerve of X.

Now remember that the connected components of the f^{-1}(U_i)’s form just the vertex set of the simplicial complex we are building. Although we could build a Čech complex (the full nerve) from these pre-images, we may also cluster the points lying in each f^{-1}(U_i) in other ways. For instance, we may build a k-NN graph on the points of X, restrict it to the points falling in each f^{-1}(U_i), and take the connected components of these subgraphs as our vertices. This gives a slightly different algorithm, because the clusters now depend on the graph structure and not only on whether the f^{-1}(U_i)’s intersect. A rough sketch of the basic procedure is given below.
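This sketch assumes a 1-D filter, regularly spaced overlapping intervals, and DBSCAN from scikit-learn as the clustering step; the function and parameter names are my own, and real Mapper implementations differ in many details:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_1d(X, f_values, n_intervals=10, gain=0.3, eps=0.3):
    """Minimal Mapper sketch with a 1-D filter function.

    X           : (n, d) point cloud
    f_values    : (n,) values of the filter f on the points
    n_intervals : number of cover intervals (the "resolution")
    gain        : fraction of overlap between consecutive intervals
    Returns the nodes (clusters of points) and the edges of the nerve:
    two nodes are joined whenever their clusters share a data point."""
    lo, hi = f_values.min(), f_values.max()
    length = (hi - lo) / n_intervals
    nodes = []                                   # each node is a set of point indices
    for i in range(n_intervals):
        a = lo + i * length - gain * length
        b = lo + (i + 1) * length + gain * length
        idx = np.where((f_values >= a) & (f_values <= b))[0]
        if len(idx) == 0:
            continue
        # Cluster the preimage f^{-1}(U_i); each cluster becomes a vertex.
        labels = DBSCAN(eps=eps, min_samples=1).fit_predict(X[idx])
        for lab in np.unique(labels):
            nodes.append(set(idx[labels == lab]))
    edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]]
    return nodes, edges

# Noisy circle: with the height function as filter, the nerve should itself be a cycle.
theta = np.random.uniform(0, 2 * np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)] + np.random.normal(scale=0.05, size=(500, 2))
nodes, edges = mapper_1d(X, X[:, 1], n_intervals=6, gain=0.3, eps=0.3)
print(len(nodes), "nodes,", len(edges), "edges")
```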

Geometric Reconstruction and Homology Inference

Let us now bring probability into the mix. Suppose we have a space X\subset \Bbb{R}^d with a probability measure \mu. If we take a sample of points \{x_0,\dots,x_k\}\subset X, we want the topology of Cech_{\alpha}(\{x_0,\dots,x_k\}) to somehow approximate the topology of X.

Now, we introduce some mathematical definitions. For a compact set K\subset \Bbb{R}^d, the r-offset K^r is defined as the set of all points x such that d(x,K)\leq r. Why do we care about r-offsets? Because the r-offset of a finite sample is exactly a union of closed balls, and for appropriate values of r its topology can capture that of the underlying space, even though the sample itself is just a discrete set of points. For instance, a finite sample of a loop has trivial homology, but for suitable values of r the r-offset of the sample is an annulus, which has the same homology as the loop itself.

A function \phi:\Bbb{R}^d\to \Bbb{R}_+ is distance-like if it is proper (pre-images of compact sets are compact), and x\mapsto \|x\|^2-\phi^2(x) is convex. Properness rules out functions that stay bounded on unbounded sets: an honest distance function to a compact set grows without bound as \|x\|\to\infty, and we want to retain that behaviour. The second condition is that of semi-concavity, and I would like to re-write it in its more natural form: \phi^2(x)-\|x\|^2 is concave. The reason that it is called “semi”-concave is that \phi^2 becomes concave only after a very concave function is added to it; \phi(x) can actually be a convex function itself. Why do we want this condition here? This is because we want to generate a continuous flow along the gradient of \phi, and functions whose gradient behaves too wildly may not generate a well-defined flow.

A point x\in \Bbb{R}^d is said to be \alpha-critical if \|\nabla_x \phi\|\leq \alpha. This generalizes the notion of a critical point (which is the case \alpha=0). The \alpha-reach of \phi is the largest r such that \phi^{-1}((0,r]) contains no \alpha-critical point. What this means is that the sublevel sets \phi^{-1}([0,r']) for r'<r can “flow” along the gradient of \phi, whose norm exceeds \alpha there, at least until they reach the level set \phi^{-1}(r). Why do we want sublevel sets flowing into each other at all? Because if the sublevel set at one value can flow onto the sublevel set at a larger value without meeting critical points, then no topological features are created or destroyed in between; this is made precise by the Isotopy Lemma below.

Isotopy Lemma– Let \phi be a distance-like function and r_1<r_2 be such that \phi^{-1}([r_1,r_2]) has no critical points. Then the sublevel sets \phi^{-1}([0,r]) are isotopic for r\in [r_1,r_2]. Two sets are isotopic when one can be continuously deformed into the other through homeomorphisms. Essentially, the part of the sublevel set inside \phi^{-1}([0,r_1]) can stay where it is, and the part inside \phi^{-1}([r_1,r_2]) can be pushed along the gradient flow, because there are no critical points inside. Whether there are critical points in \phi^{-1}([0,r_1]) is irrelevant.

Reconstruction Theorem– This theorem essentially states that when two distance-like functions \phi and \psi are “close enough” in the sup norm, suitable sublevel sets of both are homotopy equivalent. Of course, it is not obvious from this informal statement how these sublevel sets are deformed into one another, or which function’s gradient flow is used for the deformation. Why is this theorem important?

Let \phi=d_M and \psi=d_{\Bbb{X}_n}, the distance functions to the compact set M (the support of \mu) and to the sample \Bbb{X}_n respectively. The Reconstruction Theorem can then tell us that for appropriate values of r and \eta, the r-offset of M is homotopy equivalent to the union of the \eta-offsets of the points in \Bbb{X}_n, which in turn (by the Nerve Theorem) is homotopy equivalent to the Čech complex built on these balls. Essentially, the Reconstruction Theorem provides the basis for studying the topology of \Bbb{X} using the nerve of \Bbb{X}_n.

An important result of Chazal and Oudot in this direction is the following: let M\subset \Bbb{R}^d be a compact set whose distance function d_M has \alpha-reach R>0, and let X be a finite set of points such that d_H(M,X)<\epsilon<\frac{R}{9}. Then, for a suitably chosen r and for \eta\in (0,R), \beta_k(M^{\eta})=rk\big(H_k(Rips_r(X))\to H_k(Rips_{4r}(X))\big). Why is this theorem important? Because it allows us to calculate the Betti numbers of the offsets of M using just information gleaned from the Rips complexes of X.
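In practice, the rank on the right-hand side can be read off a persistence computation: it equals the number of k-dimensional persistence intervals of the Rips filtration that are born by r and are still alive at 4r. A sketch assuming the GUDHI library is available (the helper name and parameter values are mine):

```python
import numpy as np
import gudhi

def persistent_betti(points, k, r):
    """rk( H_k(Rips_r(X)) -> H_k(Rips_{4r}(X)) ): the number of k-dimensional
    persistence intervals born by r and still alive at 4r."""
    st = gudhi.RipsComplex(points=points, max_edge_length=8 * r).create_simplex_tree(
        max_dimension=k + 1)
    st.persistence()                      # must be called before querying intervals
    intervals = st.persistence_intervals_in_dimension(k)
    return sum(1 for (birth, death) in intervals if birth <= r < 4 * r < death)

# Sample from a circle; for a suitable r the estimate of beta_1 should be 1.
theta = np.random.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta)] + np.random.normal(scale=0.02, size=(300, 2))
print(persistent_betti(X, k=1, r=0.3))
```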

Distance to measure

Note that in all of the theorems discussed above, d_M and d_{X_n} have been “pretty close” as distance functions. This forms the basis of all our topological deductions. What if they’re not? What if we have outlier points in X_n? Note that it is not in general possible to select a point from outside of the support of the probability measure \mu on M, as the probability of selecting a point outside of it is by definition 0. Hence, the existence of such a point is purely due to noise, which comes from a probability distribution independent of \mu.

To deal with noise of this sort, we have the notion of “distance to a measure”. Given a probability distribution P and a parameter 0<u\leq 1, define \delta_{P,u}(x)=\inf\{t>0: P(B(x,t))\geq u\}.

Note that this map can be discontinuous if the support of P is badly behaved. Hence, to further regularize this distance, we define d_{P,m,r}(x)=\Big(\frac{1}{m}\int_0^m \delta_{P,u}(x)^r\, du\Big)^{1/r}

A nice property of d_{P,m,r} is that it is stable with respect to the Wasserstein metric. In other words if P,P' are two probability measures, then \|d_{P,m,r}-d_{P',m,r}\|_{\infty}\leq m^{-1/r}W_r(P,P'). Hence, d_{P,m,r} is a good distance-like function to consider to analyze the topology of M, or at least the support of \mu.

In practice, P is not known; we can only hope to approximate it from \Bbb{X}_n. Consider the empirical measure P_n on \Bbb{X}_n, which places mass \frac{1}{n} at each sample point. For m=\frac{k}{n}, the corresponding distance to measure takes the explicit form

d_{P_n,k/n,r}(x)=\Big(\frac{1}{k}\sum\limits_{j=1}^k \|x-X_n\|^r_{(j)}\Big)^{1/r}

Here \|x-X_n\|_{(j)} denotes the distance between x and its jth nearest neighbour in \Bbb{X}_n. If the Wasserstein distance between P and P_n is small, which is what we can hope for if we take a large enough sample from the population, then d_{P_n,k/n,r} is pretty close to d_{P,k/n,r} in the L^{\infty} norm.
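This k-nearest-neighbour form makes the empirical distance to measure very easy to compute. A small sketch with numpy and scipy (function and parameter names are mine), illustrating how little a single outlier affects it compared to the plain distance function:

```python
import numpy as np
from scipy.spatial import cKDTree

def dtm(query, sample, k, r=2):
    """Empirical distance to measure with mass parameter m = k/n:
    d_{P_n, k/n, r}(x) = ( (1/k) * sum_{j=1}^{k} ||x - X_(j)||^r )^(1/r),
    where X_(j) is the j-th nearest sample point to x."""
    tree = cKDTree(sample)
    knn_dist, _ = tree.query(query, k=k)          # shape (len(query), k)
    return np.mean(knn_dist ** r, axis=1) ** (1.0 / r)

# One outlier far from a circle barely affects the DTM, unlike the plain
# distance function d_{X_n}, which is 0 at the outlier itself.
theta = np.random.uniform(0, 2 * np.pi, 200)
sample = np.r_[np.c_[np.cos(theta), np.sin(theta)], [[10.0, 10.0]]]   # one noisy point
grid = np.array([[0.0, 0.0], [10.0, 10.0]])
print(dtm(grid, sample, k=20))   # the outlier no longer looks "close" to the data
```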

Persistent Homology

Persistent homology is used to study how the Čech complexes and Betti numbers change with a parameter (generally the radius of the overlapping balls). Why is it important? We can never really be sure if the homology of the Čech complex that we have is the same as the homology of the underlying space. Hence, why not study all possible homologies obtained from an infinite family of Čech complexes? Two spaces with “similar” such families of homologies are likely to be the “same” in some well defined topological sense. Hence, this is another attempt at determining the topology of the underlying space.

A filtration of a set X is a family of subsets \{X_r\} such that r<r'\implies X_r\subset X_{r'}. Some useful filtrations are the family of simplicial complexes \{Cech_{r}(\Bbb{X}_n)\}, or the sublevel sets \{f^{-1}([0,r])\} of a function f.

Given a filtration F_r, the homology of F_r changes as r increases. Consider the persistence diagram below:

I will briefly try to explain what is happening here. f is the height function defined on this graph, and F_r=f^{-1}([0,r]) for a_1<r<a_2 looks like a single interval that expands in size. When r=a_2, a_3, etc., new intervals are created, which die when they merge with some other interval that was created before them. For example, the interval created at r=a_3 joins the interval created at r=a_1 when we reach r=a_4. The pair (value of r at the birth of an interval, value of r at the death of that interval) can be plotted as a point in \Bbb{R}^2, as shown by the red dots in the diagram on the right. We also add all diagonal points with infinite multiplicity. One way to interpret this is that for every value of r, an infinite number of intervals come alive and die instantly. The reason why we add this seemingly arbitrary line is that when we sample a population of data points, we might receive some noise. Hence, we can create a small neighborhood of the diagonal, and only interpret the points that lie outside of it as genuine topological features of the underlying manifold. The points inside the neighborhood denote topological features that are born and die “too soon”, and hence might just be noise. More will be said about this later.
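For the 0-dimensional homology of such a sublevel-set filtration in one dimension, the whole bookkeeping reduces to a union-find with the elder rule described above. A self-contained toy implementation (entirely my own sketch, on a function sampled along a line):

```python
import numpy as np

def sublevel_h0_diagram(values):
    """0-dimensional persistence of the sublevel-set filtration of a function
    sampled on a line graph (vertices 0..n-1, edges between neighbours).
    Elder rule: when two components merge, the one born later dies.
    Returns (birth, death) pairs; the oldest component never dies."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth = list(values)              # birth value of the component rooted here
    diagram = []
    # Add vertices in increasing order of function value; the edge {i, j}
    # enters the filtration at max(values[i], values[j]).
    for i in sorted(range(n), key=lambda v: values[v]):
        for j in (i - 1, i + 1):
            if 0 <= j < n and values[j] <= values[i]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    diagram.append((birth[young], values[i]))   # younger one dies
                    parent[young] = old
    diagram.append((min(values), float("inf")))
    return diagram

# A double-well function: the shallower well is a component that is born later
# and dies when the two wells merge at the central bump.
xs = np.linspace(-1.6, 1.6, 400)
print(sublevel_h0_diagram((xs**2 - 1) ** 2 + 0.3 * xs))
```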

Given below is the corresponding diagram for Čech complexes. The blue dots correspond to the birth and death times of “topological holes”.

Note that in the diagram above, all the balls grow at the same rate. Hence, we don’t have a clear way of choosing which red interval should “die” and which one should survive. We arbitrarily decide that if two balls that were born at the same time join, the red line below remains, and the one above ends. The persistence diagram is given on the bottom right.

Persistence modules and persistence diagrams

The inclusion diagram \dots\to F_r\to F_{r'}\to \dots, where r<r', induces the diagram of vector spaces \dots\to H_k(F_r)\to H_k(F_{r'})\to \dots, whose maps are induced by inclusion (though they need not be injective themselves). The latter diagram is called a persistence module. Why is this important? Because it records, as r increases, the evolution of the kth Betti number of the filtration.

In general, a persistence module \Bbb{V} over a subset T of the real numbers is an indexed family of vector spaces \{V_r\mid r\in T\} together with linear maps \psi^r_s:V_r\to V_s for r\leq s, satisfying \psi^s_t\circ \psi^r_s=\psi^r_t whenever r\leq s\leq t (and \psi^r_r=\mathrm{id}). In many cases, such diagrams can be expressed as a direct sum of “interval” (or “inclusion”) modules \Bbb{I}_{(b,d)} of the form

\dots\to 0 \dots\to 0 \to \Bbb{Z}_2\to \Bbb{Z}_2\to \dots\to \Bbb{Z}_2\to 0 \to\dots

where the \Bbb{Z}_2\to \Bbb{Z}_2 maps are identity maps, and the rest are 0 maps. In some sense, when the vector spaces in the persistence module are the H_k groups, we are breaking up the diagram \dots\to H_k(F_r)\to H_k(F_{r'})\to \dots into one summand for each k-dimensional hole, and tracking when each hole appears (at r=b) and disappears (at r=d). Chazal et al proved that if the map \psi^r_s:V_r\to V_s in a persistence module has finite rank for each r\leq s in T, then the module can be decomposed as a direct sum of such interval modules in an essentially unique way. One way to think of the finite-rank condition is the following: for any two values r\leq s, only finitely many k-dimensional holes born by r are still alive at s, so the module really does behave like a (locally finite) sum of interval modules, up to features that live only near the diagonal.

Persistence landscapes

The persistence landscape was introduced by Bubenik, who stated that the “dots” in a persistence diagram should be replaced by “functions”. But why? Because there’s not much algebra that we can do with dots. We can’t really add or multiply them in canonical ways. However, functions lend themselves to such operations easily. Hence, the inferences that we make about such functions may help us make inferences about the underlying topological space.

How do we construct such a function? Take a point p=(a,b) in the diagram. We construct a function \Lambda_{p}(t) that looks like a “tent”, by joining the points (a,0) and (\frac{a+b}{2},\frac{b-a}{2}) by a straight line, then joining (\frac{a+b}{2},\frac{b-a}{2}) and (b,0) by another straight line, and setting \Lambda_p to zero outside [a,b]. Three such “tents” for three different points are given below. They’re drawn in red.

The persistence landscape for this diagram is defined as \lambda_{dgm}(k,t)=\mathrm{kmax}_{p\in dgm}\,\Lambda_p (t), where \mathrm{kmax} denotes the kth highest value in the set \{\Lambda_p(t)\}_{p\in dgm}. For instance, the function drawn in blue above is \lambda_{dgm}(1,t).

A short note about the axes in the two diagrams above: the b and d on the left diagram correspond to time of birth and death respectively. For the diagram on the right, the axes denote the coordinates of the black “dots” on the functions. The “tent”-ed functions themselves may be thought of as a progression from left to right, in which a topological feature is birthed and then dies.
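Computing a landscape from a diagram is then mechanical: evaluate every tent on a grid of t values and take the kth largest value pointwise. A small numpy sketch (names and the toy diagram are mine):

```python
import numpy as np

def landscape(diagram, k, ts):
    """Persistence landscape lambda(k, t) on a grid ts, from a list of finite
    (birth, death) pairs.  Each pair contributes the "tent" function
    Lambda_p(t) = max(0, min(t - b, d - t)); lambda(k, .) is the k-th largest
    tent value at each t (k = 1 is the top envelope)."""
    ts = np.asarray(ts, dtype=float)
    tents = np.array([np.maximum(0.0, np.minimum(ts - b, d - ts))
                      for (b, d) in diagram])
    # Sort tent values at each t in decreasing order and take the k-th one.
    tents_sorted = -np.sort(-tents, axis=0)
    return tents_sorted[k - 1] if k <= len(diagram) else np.zeros_like(ts)

dgm = [(0.0, 4.0), (1.0, 3.0), (2.0, 6.0)]
ts = np.linspace(0, 6, 7)
print(landscape(dgm, 1, ts))   # upper envelope of the three tents
print(landscape(dgm, 2, ts))   # second-highest tent value at each t
```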

One of the advantages of persistence landscapes is that they share the same stability properties as persistence diagrams. Hence, if two probability measures are “close enough”, then their persistence landscapes will also be “quite close”.

Metrics on the space of persistence diagrams

We know that when two probability distributions are “close enough”, the distance functions to those probability distributions are also “pretty close”. However, what about the persistence diagrams of those probability distributions? Does closeness of the persistence diagrams of two probability distributions imply that the distributions themselves are close? Before we can answer this question, we must find a good metric to calculate the distance between two persistence diagrams.

One such metric is the bottleneck metric, which is defined as

d_b(dgm_1,dgm_2)=\inf_{m}\max_{(p,q)\in m}\|p-q\|_{\infty}

Here m is a “matching”: a bipartite pairing of the points of dgm_1 with those of dgm_2, in which points may also be matched to points of the diagonal (another reason for including the diagonal in every diagram). Because the bottleneck distance only depends on the single largest displacement, a metric that takes all of the points into account is

W_p(dgm_1,dgm_2)^p=\inf_{m}\sum_{(p,q)\in m}\|p-q\|^p_{\infty}
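If the GUDHI library is available, the bottleneck distance between two small diagrams can be computed directly; note how the near-diagonal point is matched to the diagonal and contributes very little:

```python
import gudhi

# Two small diagrams given as lists of (birth, death) points.
dgm1 = [(0.0, 2.0), (1.0, 4.0)]
dgm2 = [(0.0, 2.1), (1.2, 4.0), (3.0, 3.05)]   # an extra near-diagonal point

# The optimal matching pairs the first two points of each diagram and sends
# (3.0, 3.05) to the diagonal, so the result should be about 0.2 here.
print(gudhi.bottleneck_distance(dgm1, dgm2))
```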

Stability properties of persistence diagrams

But if two persistence diagrams are “close”, are the underlying probability distributions also “close”? We don’t know. But the converse is true.

Let \Bbb{X} and \Bbb{Y} be two compact metric spaces and let Filt(\Bbb{X}) and Filt(\Bbb{Y}) be the Vietoris-Rips or Čech filtrations built on top of \Bbb{X} and \Bbb{Y}. Then

d_b(dgm(Filt(\Bbb{X})),dgm(Filt(\Bbb{Y})))\leq 2d_{GH}(\Bbb{X},\Bbb{Y})

We can also conclude that if two persistence diagrams are close, then their persistence landscapes are also close: Let \Bbb{X} and \Bbb{\tilde{X}} be two compact sets. Then for any t, we have

|\lambda_{\Bbb{X}}(k,t)-\lambda_{\tilde{\Bbb{X}}}(k,t)|\leq d_b(dgm(Filt(\Bbb{X})),dgm(Filt(\tilde{\Bbb{X}})))

Whether the closeness of persistence diagrams denotes the closeness of the underlying topological spaces remains woefully unanswered.

Statistical aspects of persistent homology

While talking about persistent homology, we have so far only talked about topological spaces, and not about probability distributions. We do so here.

Let \Bbb{X} be an underlying space with probability measure \mu, and let \Bbb{X}_{\mu} be the compact support of this measure. If we take n independent readings from this space, say \Bbb{X}_n=\{X_1,\dots,X_n\}, then we can estimate the space \Bbb{X}_{\mu} by an estimator \hat{\Bbb{X}} built from the sample, the simplest choice being \Bbb{X}_n itself. The empirical measure on \Bbb{X}_n has support \{X_1,\dots,X_n\}.

For some a,b>0, let \mu satisfy the condition \mu(B(x,r))\geq \min(ar^b,1) for all x\in \Bbb{X}_{\mu} and all r>0. Then

\Bbb{P}\big(d_b(dgm(Filt(\Bbb{X}_{\mu})),dgm(Filt(\Bbb{X}_n)))>\epsilon \big)\leq\min\big(\frac{2^b}{a\epsilon^b}\exp(-na\epsilon^b),1\big)

Because we do not exactly know \mu, and hence the persistence diagram of \Bbb{X}_{\mu}, we can only bound the probability that the persistence diagram of \Bbb{X}_n is far from that of \Bbb{X}_{\mu}. Clearly, as n grows large, the probability that the two diagrams differ by more than \epsilon becomes smaller. This can also be seen from the following statement:

P\big(d_b(dgm(Filt(\Bbb{X}_{\mu})),dgm(Filt(\Bbb{X}_n)))\leq C\big(\frac{\log n}{n}\big)^{1/b}\big)

approaches 1 as n\to\infty. Here, C is some constant.

Estimation of the persistent homology of functions

If two functions f,g on a manifold are “close”, then the persistence diagrams induced by them are also close. More precisely,

d_b(dgm_k(f),dgm_k(g))\leq \sup\limits_{x\in M}|f(x)-g(x)|

This opens up a vista of opportunities, in that we can now study density estimators, regression functions, etc. But how? Suppose we do not know how to calculate the persistent homology of a complicated function. We take a more regular function that is “close” to it in the L^{\infty} norm, calculate its persistent homology, and are then assured that the persistence diagram of the complicated function looks almost like the persistence diagram of this simpler, better-behaved function.

Confidence regions for persistent homology

When estimating a persistence diagram dgm with an estimator \hat{dgm}, we look for a value \eta_{\alpha} such that P(d_b(\hat{dgm},dgm)\geq \eta_{\alpha})\leq \alpha, for some \alpha\in (0,1). The \eta_{\alpha} gives us an upper bound, holding with probability at least 1-\alpha, on how “far” the two diagrams can be.

In some sense, if we were in the space of persistence diagrams (each point in this space is a persistence diagram), B(\hat{dgm},\eta_{\alpha}) is a confidence region for dgm at level 1-\alpha. How does this translate to confidence intervals for the actual points in \hat{dgm}? One way is to center a box of side length 2\eta_{\alpha} at each of these points. Another way is to create an \eta_{\alpha}-neighborhood of the diagonal line in the persistence diagram. The points outside of it correspond, with confidence 1-\alpha, to genuine topological features of dgm. This is perhaps an important reason why we include the diagonal in persistence diagrams: points near the diagonal are topological features that might just be noise. Hence, we can infer the persistence diagram of the underlying space from that of the sample, as long as the points that we get are “far enough” from the diagonal.

Of course, all of this depends on whether we can successfully approximate the value of \eta_{\alpha} from the sample. Methods like the Subsampling Approach and the Bottleneck Bootstrap are important in this context.
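As a very rough sketch of the bottleneck bootstrap idea (not the paper's exact procedure): resample the data with replacement, recompute the persistence diagram, and take a high quantile of the bottleneck distances to the original diagram as an estimate of \eta_{\alpha}. GUDHI is assumed to be installed, and all helper names and parameter values are mine:

```python
import numpy as np
import gudhi

def rips_diagram(points, max_edge=2.0, dim=1):
    """Finite persistence intervals of dimension `dim` of a Rips filtration."""
    st = gudhi.RipsComplex(points=points, max_edge_length=max_edge).create_simplex_tree(
        max_dimension=dim + 1)
    st.persistence()
    return [p for p in st.persistence_intervals_in_dimension(dim) if p[1] < float("inf")]

def bottleneck_bootstrap_eta(points, alpha=0.05, n_boot=50):
    """Resample with replacement, recompute the diagram, and return the
    (1 - alpha) quantile of the bottleneck distances to the original diagram."""
    base = rips_diagram(points)
    rng = np.random.default_rng(0)
    dists = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(points), size=len(points))
        dists.append(gudhi.bottleneck_distance(base, rips_diagram(points[idx])))
    return np.quantile(dists, 1 - alpha)

theta = np.random.uniform(0, 2 * np.pi, 80)
X = np.c_[np.cos(theta), np.sin(theta)] + np.random.normal(scale=0.05, size=(80, 2))
print(bottleneck_bootstrap_eta(X))
```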

Central tendency for persistent homology

Persistence diagrams are just a bunch of dots and a diagonal line. As they’re not elements of a Hilbert space, we cannot determine an “expected” or “mean” persistence diagram. Hence, we need to move to persistence landscapes, which are elements of a Hilbert space, and consequently lend themselves to such an analysis.

For any positive integer m, let \{x_1,\dots,x_m\}\subset \Bbb{X}_{\mu} be a sample of m points from \mu. For a fixed k, let the corresponding persistence landscape be \lambda_{X}. Now consider the space of all persistence landscapes (corresponding to different samples drawn from \Bbb{X}_{\mu}), and let \mu^{\otimes m} be the product measure on \Bbb{X}_{\mu}^m, which then induces the measure \Psi^m_{\mu} on the space of persistence landscapes. It is now possible to calculate the expected persistence landscape, which is \Bbb{E}_{\Psi^m_{\mu}}[\lambda_X(t)]. This quantity is quite stable. In fact, the following is true:

Let X\sim \mu^{\otimes m} and Y\sim \nu^{\otimes m}, where \mu and \nu are two probability measures on M. For any p\geq 1, we have

\|\Bbb{E}_{\Psi^m_{\mu}}[\lambda_X]-\Bbb{E}_{\Psi^m_{\nu}}[\lambda_Y]\|_{\infty}\leq 2m^{\frac{1}{p}}W_p(\mu,\nu)

Persistent homology and machine learning

Due to the highly non-linear nature of the space of persistence diagrams, diagrams generally need to be converted into persistence landscapes (or similar functional summaries) before they can be used in machine learning. Such persistence landscapes have been useful in protein binding and object recognition.

The construction of kernels for persistence diagrams has also drawn some interest. Can we directly compare two persistence diagrams by taking their “dot product” in some sense? Convolving a symmetrized version of a persistence diagram (with respect to the diagonal) with the two dimensional Gaussian measure gives us exactly such a kernel. Such a kernel can be used for texture recognition, etc.
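A sketch along these lines is the persistence scale-space kernel: each point of a diagram contributes a Gaussian bump, its mirror image across the diagonal contributes a negative bump (so that diagonal points contribute nothing), and the kernel is the L^2 inner product of the two resulting functions. The normalisation and bandwidth conventions below are just one choice of mine:

```python
import numpy as np

def pss_kernel(dgm1, dgm2, sigma=1.0):
    """Sketch of a persistence scale-space-style kernel: each diagram is
    smoothed by a Gaussian, with a mirrored (negative) copy across the
    diagonal so that points on the diagonal contribute nothing; the kernel
    is the L2 inner product of the two smoothed diagrams."""
    dgm1, dgm2 = np.asarray(dgm1, float), np.asarray(dgm2, float)
    k = 0.0
    for p in dgm1:
        for q in dgm2:
            q_mirror = q[::-1]                      # reflection across the diagonal
            k += np.exp(-np.sum((p - q) ** 2) / (8 * sigma)) \
               - np.exp(-np.sum((p - q_mirror) ** 2) / (8 * sigma))
    return k / (8 * np.pi * sigma)

dgm1 = [(0.0, 2.0), (1.0, 4.0)]
dgm2 = [(0.1, 2.2), (1.0, 3.5)]
print(pss_kernel(dgm1, dgm2, sigma=0.5))
```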

Sometimes, identifying topological features from a persistence diagram becomes a difficult task, and the choice of kernel becomes an important factor. Deep learning can also be used to identify the relevant topological features in a given situation.

References

  1. Frédéric Chazal and Bertrand Michel, An Introduction to Topological Data Analysis: Fundamental and Practical Aspects for Data Scientists.
