In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space $M$. It is named after Leonid Vaseršteĭn.
Intuitively, if each distribution is viewed as a unit amount of earth (soil) piled on $M$, the metric is the minimum "cost" of turning one pile into the other, which is assumed to be the amount of earth that needs to be moved times the mean distance it has to be moved. This problem was first formalised by Gaspard Monge in 1781. Because of this analogy, the metric is known in computer science as the earth mover's distance.
The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after learning of it in the work of Leonid Vaseršteĭn on Markov processes describing large systems of automata[1] (Russian, 1969). However the metric was first defined by Leonid Kantorovich in The Mathematical Method of Production Planning and Organization[2] (Russian original 1939) in the context of optimal transport planning of goods and materials. Some scholars thus encourage use of the terms "Kantorovich metric" and "Kantorovich distance". Most English-language publications use the German spelling "Wasserstein" (attributed to the name "Vaseršteĭn" (Russian: Васерштейн) being of Yiddish origin).
Let $(M, d)$ be a metric space that is a Polish space. For $p \in [1, +\infty]$, the Wasserstein $p$-distance between two probability measures $\mu$ and $\nu$ on $M$ with finite $p$-moments is

$$W_p(\mu, \nu) = \left( \inf_{\gamma \in \Gamma(\mu, \nu)} \mathbf{E}_{(x, y) \sim \gamma} \, d(x, y)^p \right)^{1/p},$$

where $\Gamma(\mu, \nu)$ is the set of all couplings of $\mu$ and $\nu$; $W_\infty$ is defined to be $\lim_{p \to +\infty} W_p$ and corresponds to a supremum norm. Here, a coupling $\gamma$ is a joint probability measure on $M \times M$ whose marginals are $\mu$ and $\nu$ on the first and second factors, respectively. This means that for all measurable $A \subseteq M$, it fulfills $\gamma(A \times M) = \mu(A)$ and $\gamma(M \times A) = \nu(A)$.
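For a concrete feel for couplings, the discrete case is easy to check by hand: a coupling of two discrete distributions is a nonnegative matrix whose row sums and column sums reproduce the two marginals. A minimal sketch in Python (the three-point distributions are arbitrary illustrative values):

```python
import numpy as np

# Two discrete probability distributions on three points (arbitrary values).
mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.4, 0.4, 0.2])

# The independent (product) coupling is always a valid coupling.
gamma = np.outer(mu, nu)

# Check the marginal conditions gamma(A x M) = mu(A) and gamma(M x A) = nu(A).
assert np.allclose(gamma.sum(axis=1), mu)
assert np.allclose(gamma.sum(axis=0), nu)
```

The Wasserstein distance then minimizes the expected cost $\mathbf{E}_{(x, y) \sim \gamma} \, d(x, y)^p$ over all such matrices, not just the product one.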
One way to understand the above definition is to consider the optimal transport problem. That is, for a distribution of mass $\mu(x)$ on a space $X$, we wish to transport the mass in such a way that it is transformed into the distribution $\nu(x)$ on the same space; transforming the 'pile of earth' $\mu$ to the pile $\nu$. This problem only makes sense if the pile to be created has the same mass as the pile to be moved; therefore without loss of generality assume that $\mu$ and $\nu$ are probability distributions containing a total mass of 1. Assume also that there is given some cost function

$$c(x, y) \geq 0$$

that gives the cost of transporting a unit mass from the point $x$ to the point $y$. A transport plan to move $\mu$ into $\nu$ can be described by a function $\gamma(x, y)$ which gives the amount of mass to move from $x$ to $y$. You can imagine the task as the need to move a pile of earth of shape $\mu$ to the hole in the ground of shape $\nu$ such that at the end, both the pile of earth and the hole in the ground completely vanish. In order for this plan to be meaningful, it must satisfy the following properties:

$$\int \gamma(x, y) \, \mathrm{d}y = \mu(x), \qquad \int \gamma(x, y) \, \mathrm{d}x = \nu(y).$$
That is, that the total mass moved out of an infinitesimal region around $x$ must be equal to $\mu(x) \, \mathrm{d}x$ and the total mass moved into a region around $y$ must be $\nu(y) \, \mathrm{d}y$. This is equivalent to the requirement that $\gamma$ be a joint probability distribution with marginals $\mu$ and $\nu$. Thus, the infinitesimal mass transported from $x$ to $y$ is $\gamma(x, y) \, \mathrm{d}x \, \mathrm{d}y$, and the cost of moving is $c(x, y) \, \gamma(x, y) \, \mathrm{d}x \, \mathrm{d}y$, following the definition of the cost function. Therefore, the total cost of a transport plan $\gamma$ is

$$\iint c(x, y) \, \gamma(x, y) \, \mathrm{d}x \, \mathrm{d}y = \int c(x, y) \, \mathrm{d}\gamma(x, y).$$
The plan $\gamma$ is not unique; the optimal transport plan is the plan with the minimal cost out of all possible transport plans. As mentioned, the requirement for a plan to be valid is that it is a joint distribution with marginals $\mu$ and $\nu$; letting $\Gamma$ denote the set of all such measures as in the first section, the cost of the optimal plan is

$$C = \inf_{\gamma \in \Gamma(\mu, \nu)} \int c(x, y) \, \mathrm{d}\gamma(x, y).$$

If the cost of a move is simply the distance between the two points, then the optimal cost is identical to the definition of the $W_1$ distance.
Let $\mu_1 = \delta_{a_1}$ and $\mu_2 = \delta_{a_2}$ be two degenerate distributions (i.e. Dirac delta distributions) located at points $a_1$ and $a_2$ in $\mathbb{R}$. There is only one possible coupling of these two measures, namely the point mass $\delta_{(a_1, a_2)}$ located at $(a_1, a_2)$. Thus, using the usual absolute value function as the distance function on $\mathbb{R}$, for any $p \geq 1$, the $p$-Wasserstein distance between $\mu_1$ and $\mu_2$ is

$$W_p(\mu_1, \mu_2) = |a_1 - a_2|.$$

By similar reasoning, if $\mu_1 = \delta_{a_1}$ and $\mu_2 = \delta_{a_2}$ are point masses located at points $a_1$ and $a_2$ in $\mathbb{R}^n$, and we use the usual Euclidean norm on $\mathbb{R}^n$ as the distance function, then

$$W_p(\mu_1, \mu_2) = \|a_1 - a_2\|_2.$$
If $\mu$ is an empirical measure with samples $X_1, \dots, X_n$ and $\nu$ is an empirical measure with samples $Y_1, \dots, Y_n$, the distance is a simple function of the order statistics:

$$W_p(\mu, \nu) = \left( \frac{1}{n} \sum_{i=1}^n \left\| X_{(i)} - Y_{(i)} \right\|^p \right)^{1/p}.$$
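In code, this order-statistics formula amounts to sorting both samples and comparing them position by position. A minimal sketch in Python (the helper name `wasserstein_p_1d` is ours, not a library function):

```python
import numpy as np

def wasserstein_p_1d(x, y, p=1):
    """p-Wasserstein distance between two 1-D empirical measures
    with equally many samples, via the order-statistics formula."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "this formula assumes equal sample sizes"
    return float(np.mean(np.abs(x - y) ** p) ** (1.0 / p))
```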
If $\mu$ and $\nu$ are empirical distributions, each based on $n$ observations, then

$$W_p(\mu, \nu) = \left( \inf_\pi \frac{1}{n} \sum_{i=1}^n \left\| X_i - Y_{\pi(i)} \right\|^p \right)^{1/p},$$

where the infimum is over all permutations $\pi$ of $n$ elements. This is a linear assignment problem, and can be solved by the Hungarian algorithm in cubic time.
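In practice the assignment formulation can be handed to an off-the-shelf solver; for instance, SciPy's `linear_sum_assignment` solves the linear assignment problem. A sketch (the helper name and the choice of Euclidean ground cost are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_p_empirical(X, Y, p=2):
    """p-Wasserstein distance between two empirical measures given as
    (n, d) sample arrays, via the optimal assignment of samples."""
    # Pairwise Euclidean costs raised to the p-th power.
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p
    rows, cols = linear_sum_assignment(cost)  # optimal permutation
    return float(cost[rows, cols].mean() ** (1.0 / p))
```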
Let $\mu_1 = \mathcal{N}(m_1, C_1)$ and $\mu_2 = \mathcal{N}(m_2, C_2)$ be two non-degenerate Gaussian measures (i.e. normal distributions) on $\mathbb{R}^n$, with respective expected values $m_1, m_2 \in \mathbb{R}^n$ and symmetric positive semi-definite covariance matrices $C_1$ and $C_2$. Then,[3] with respect to the usual Euclidean norm on $\mathbb{R}^n$, the 2-Wasserstein distance between $\mu_1$ and $\mu_2$ is

$$W_2(\mu_1, \mu_2)^2 = \left\| m_1 - m_2 \right\|_2^2 + \operatorname{trace}\left( C_1 + C_2 - 2 \left( C_2^{1/2} C_1 C_2^{1/2} \right)^{1/2} \right),$$

where $C^{1/2}$ denotes the principal square root of $C$. Note that the second term (involving the trace) is precisely the (unnormalised) Bures metric between $C_1$ and $C_2$. This result generalises the earlier example of the Wasserstein distance between two point masses (at least in the case $p = 2$), since a point mass can be regarded as a normal distribution with covariance matrix equal to zero, in which case the trace term disappears and only the term involving the Euclidean distance between the means remains.
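The closed form for Gaussians is straightforward to evaluate numerically. A sketch using SciPy's matrix square root (the helper name is ours; discarding the tiny imaginary parts that `sqrtm` can return is an assumption that is safe for well-conditioned covariances):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, C1, m2, C2):
    """2-Wasserstein distance between N(m1, C1) and N(m2, C2)."""
    root = sqrtm(C2)
    cross = sqrtm(root @ C1 @ root)  # (C2^{1/2} C1 C2^{1/2})^{1/2}
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2.0 * np.real(cross))
    return float(np.sqrt(max(w2_sq, 0.0)))
```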
Let $\mu$ and $\nu$ be probability measures on $\mathbb{R}$, and denote their cumulative distribution functions by $F_\mu(x)$ and $F_\nu(x)$. Then the transport problem has an analytic solution: optimal transport preserves the order of probability mass elements, so the mass at quantile $q$ of $\mu$ moves to quantile $q$ of $\nu$. Thus, the $p$-Wasserstein distance between $\mu$ and $\nu$ is

$$W_p(\mu, \nu) = \left( \int_0^1 \left| F_\mu^{-1}(q) - F_\nu^{-1}(q) \right|^p \, \mathrm{d}q \right)^{1/p},$$

where $F_\mu^{-1}$ and $F_\nu^{-1}$ are the quantile functions (inverse CDFs). In the case of $p = 1$, a change of variables leads to the formula

$$W_1(\mu, \nu) = \int_{\mathbb{R}} \left| F_\mu(x) - F_\nu(x) \right| \mathrm{d}x.$$
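For one-dimensional samples these formulas are cheap to evaluate: SciPy's `scipy.stats.wasserstein_distance` computes the $p = 1$ case, and the quantile formula can be approximated on a grid for other $p$ (the sample parameters and grid resolution below are arbitrary choices):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=5000)
y = rng.normal(0.5, 1.2, size=5000)

w1 = wasserstein_distance(x, y)  # p = 1, via the CDF-difference formula

# General p via the quantile-function formula, discretized over (0, 1).
p = 2
qs = (np.arange(1000) + 0.5) / 1000
wp = float(np.mean(np.abs(np.quantile(x, qs) - np.quantile(y, qs)) ** p) ** (1 / p))
```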
The Wasserstein metric is a natural way to compare the probability distributions of two variables $X$ and $Y$, where one variable is derived from the other by small, non-uniform perturbations (random or deterministic).
In computer science, for example, the metric $W_1$ is widely used to compare discrete distributions, e.g. the color histograms of two digital images; see earth mover's distance for more details.
In their paper 'Wasserstein GAN', Arjovsky et al.[4] use the Wasserstein-1 metric as a way to improve the original framework of generative adversarial networks (GAN), to alleviate the vanishing gradient and the mode collapse issues. The special case of normal distributions is used in the Fréchet inception distance.
The Wasserstein metric has a formal link with Procrustes analysis, with application to chirality measures,[5] and to shape analysis.[6]
In computational biology, the Wasserstein metric can be used to compare persistence diagrams of cytometry datasets.[7]
The Wasserstein metric has also been used in inverse problems in geophysics.[8]
The Wasserstein metric is used in integrated information theory to compute the difference between concepts and conceptual structures.[9]
The Wasserstein metric and related formulations have also been used to provide a unified theory for shape observable analysis in high energy and collider physics datasets.[10][11]
It can be shown that $W_p$ satisfies all the axioms of a metric on the Wasserstein space $P_p(M)$ consisting of all Borel probability measures on $M$ having finite $p$th moment. Furthermore, convergence with respect to $W_p$ is equivalent to the usual weak convergence of measures plus convergence of the first $p$th moments.[12]
The following dual representation of $W_1$ is a special case of the duality theorem of Kantorovich and Rubinstein (1958): when $\mu$ and $\nu$ have bounded support,

$$W_1(\mu, \nu) = \sup \left\{ \int_M f(x) \, \mathrm{d}(\mu - \nu)(x) \;\middle|\; \text{continuous } f : M \to \mathbb{R}, \ \operatorname{Lip}(f) \leq 1 \right\},$$

where $\operatorname{Lip}(f)$ denotes the minimal Lipschitz constant for $f$. This form shows that $W_1$ is an integral probability metric.
Compare this with the definition of the Radon metric:

$$\rho(\mu, \nu) := \sup \left\{ \int_M f(x) \, \mathrm{d}(\mu - \nu)(x) \;\middle|\; \text{continuous } f : M \to [-1, 1] \right\}.$$
If the metric $d$ of the metric space $(M, d)$ is bounded by some constant $C$, then

$$2 \, W_1(\mu, \nu) \leq C \, \rho(\mu, \nu),$$

and so convergence in the Radon metric (identical to total variation convergence when $M$ is a Polish space) implies convergence in the Wasserstein metric, but not vice versa.
The following is an intuitive proof which skips over technical points. A fully rigorous proof can be found in [13].
Discrete case: When $M$ is discrete, solving for the 1-Wasserstein distance is a problem in linear programming:

$$\begin{aligned} \min_\gamma \ & \sum_{x, y} c(x, y) \, \gamma(x, y) \\ \text{subject to} \ & \sum_y \gamma(x, y) = \mu(x) \quad \forall x, \\ & \sum_x \gamma(x, y) = \nu(y) \quad \forall y, \\ & \gamma(x, y) \geq 0, \end{aligned}$$

where $c : M \times M \to [0, \infty)$ is a general "cost function".
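This primal linear program can be solved directly with a generic LP solver. A sketch using SciPy (the helper name is ours; the flattening convention for $\gamma$ is row-major):

```python
import numpy as np
from scipy.optimize import linprog

def w1_discrete(mu, nu, cost):
    """1-Wasserstein distance between discrete distributions mu (n,) and
    nu (m,), given a ground cost matrix `cost` of shape (n, m)."""
    n, m = cost.shape
    # Marginal constraints: row sums of gamma equal mu, column sums equal nu.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # sum_y gamma(x_i, y) = mu(x_i)
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # sum_x gamma(x, y_j) = nu(y_j)
    b_eq = np.concatenate([mu, nu])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun
```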
By carefully writing the above equations as matrix equations, we obtain its dual problem:[14]

$$\begin{aligned} \max_{f, g} \ & \sum_x \mu(x) f(x) + \sum_y \nu(y) g(y) \\ \text{subject to} \ & f(x) + g(y) \leq c(x, y) \quad \forall x, y, \end{aligned}$$

and by the duality theorem of linear programming, since the primal problem is feasible and bounded, so is the dual problem, and the minimum in the first problem equals the maximum in the second problem. That is, the problem pair exhibits strong duality.
For the general case, the dual problem is found by converting sums to integrals:

$$\sup_{f, g} \left\{ \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)] \;:\; f(x) + g(y) \leq c(x, y) \right\},$$

and the strong duality still holds. This is the Kantorovich duality theorem. Cédric Villani recounts the following interpretation from Luis Caffarelli:[15]
Suppose you want to ship some coal from mines, distributed as $\mu$, to factories, distributed as $\nu$. The cost function of transport is $c$. Now a shipper comes and offers to do the transport for you. You would pay him $f(x)$ per coal for loading the coal at $x$, and pay him $g(y)$ per coal for unloading the coal at $y$.
For you to accept the deal, the price schedule must satisfy $f(x) + g(y) \leq c(x, y)$. The Kantorovich duality states that the shipper can make a price schedule that makes you pay almost as much as you would ship yourself.
This result can be pressed further to yield:
Theorem (Kantorovich–Rubinstein duality) — When the probability space $\Omega$ is a metric space, then for any fixed $K > 0$,

$$W_1(\mu, \nu) = \frac{1}{K} \sup_{\|f\|_{\mathrm{Lip}} \leq K} \left( \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)] \right),$$

where $\|f\|_{\mathrm{Lip}}$ is the Lipschitz norm.
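The dual form is itself a linear program when the measures share a finite support: maximize $\sum_x f(x) \left( \mu(x) - \nu(x) \right)$ subject to $f(x_i) - f(x_j) \leq d(x_i, x_j)$ for all pairs. A sketch (the helper name is ours; $f$ is pinned at one point since the objective is invariant under constant shifts):

```python
import numpy as np
from scipy.optimize import linprog

def w1_dual(mu, nu, D):
    """W1 via Kantorovich-Rubinstein duality on n shared support points
    with ground distance matrix D of shape (n, n)."""
    n = len(mu)
    c = -(np.asarray(mu, float) - np.asarray(nu, float))  # maximize (mu-nu).f
    # Lipschitz constraints: f(i) - f(j) <= D[i, j] for every ordered pair.
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.zeros(n)
                r[i], r[j] = 1.0, -1.0
                rows.append(r)
                rhs.append(D[i, j])
    bounds = [(None, None)] * n
    bounds[0] = (0.0, 0.0)  # fix f(x_0) = 0 to remove the shift degeneracy
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=bounds, method="highs")
    return -res.fun
```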
It suffices to prove the case of $K = 1$. Start with

$$W_1(\mu, \nu) = \sup_{f(x) + g(y) \leq d(x, y)} \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)].$$

Then, for any choice of $g$, one can push the term higher by setting $f(x) = \inf_y \left( d(x, y) - g(y) \right)$, making it an infimal convolution of $-g$ with a cone. This implies $f(x) - f(y) \leq d(x, y)$ for any $x, y$, that is, $\|f\|_{\mathrm{Lip}} \leq 1$.
Thus,

$$W_1(\mu, \nu) = \sup_{\|f\|_{\mathrm{Lip}} \leq 1} \, \sup_g \, \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)].$$

Next, for any choice of $f$ with $\|f\|_{\mathrm{Lip}} \leq 1$, the term $g$ can be optimized by setting $g(y) = \inf_x \left( d(x, y) - f(x) \right)$. Since $\|f\|_{\mathrm{Lip}} \leq 1$, this implies $g(y) = \inf_x \left( d(x, y) - f(x) \right) = -f(y)$.
The two infimal convolution steps are visually clear when the probability space is $\mathbb{R}$.
For notational convenience, let $\square$ denote the infimal convolution operation.
For the first step, where we used $f = \operatorname{cone} \square (-g)$, plot out the curve of $-g$, then at each point, draw a cone of slope 1, and take the lower envelope of the cones as $f$; then $f$ cannot increase with slope larger than 1, so all its secants have slope in $[-1, +1]$.
For the second step, picture the infimal convolution $\operatorname{cone} \square (-f)$; if all secants of $f$ have slope at most 1, then the lower envelope of $\operatorname{cone} \square (-f)$ consists of just the cone apices themselves, thus $\operatorname{cone} \square (-f) = -f$.
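The two steps can also be checked numerically on a grid: the first infimal convolution produces a 1-Lipschitz function, and applying the transform to that function simply negates it. A sketch on $[0, 1]$ with an arbitrary starting potential:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 201)
g = np.sin(6 * np.pi * xs)                 # arbitrary starting potential g
d = np.abs(xs[:, None] - xs[None, :])      # ground distance |x - y|

f = np.min(d - g[None, :], axis=1)         # f = cone infimal-convolved with -g
slopes = np.abs(np.diff(f) / np.diff(xs))
assert np.all(slopes <= 1.0 + 1e-9)        # f is 1-Lipschitz

g2 = np.min(d - f[None, :], axis=1)        # second infimal convolution
assert np.allclose(g2, -f)                 # recovers exactly -f
```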
1D Example. When both $\mu$ and $\nu$ are distributions on $\mathbb{R}$, then integration by parts gives

$$\mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)] = \int f'(x) \left( F_\nu(x) - F_\mu(x) \right) \mathrm{d}x,$$

thus the supremum over $\|f\|_{\mathrm{Lip}} \leq 1$ is attained by taking $f'(x) = \operatorname{sign}\left( F_\nu(x) - F_\mu(x) \right)$, and

$$W_1(\mu, \nu) = \int_{\mathbb{R}} \left| F_\nu(x) - F_\mu(x) \right| \mathrm{d}x.$$
Benamou & Brenier found a dual representation of by fluid mechanics, which allows efficient solution by convex optimization.[16][17]
Given two probability densities $p$ and $q$ on $\mathbb{R}^n$,

$$W_2(p, q) = \min_{\mathbf{v}} \left( \int_0^1 \int_{\mathbb{R}^n} \left\| \mathbf{v}(x, t) \right\|^2 \rho(x, t) \, \mathrm{d}x \, \mathrm{d}t \right)^{1/2},$$

where $\mathbf{v}$ ranges over velocity fields driving the continuity equation with boundary conditions on the fluid density field $\rho$:

$$\dot{\rho} + \nabla \cdot (\rho \mathbf{v}) = 0, \qquad \rho(\cdot, 0) = p, \quad \rho(\cdot, 1) = q.$$

That is, the mass should be conserved, and the velocity field should transport the probability distribution $p$ to $q$ during the time interval $[0, 1]$.
Under suitable assumptions, the Wasserstein distance $W_2$ of order two is Lipschitz equivalent to a negative-order homogeneous Sobolev norm. More precisely, if we take $M$ to be a connected Riemannian manifold equipped with a positive measure $\pi$, then we may define for $f : M \to \mathbb{R}$ the seminorm

$$\|f\|_{\dot{H}^1(\pi)}^2 = \int_M |\nabla f(x)|^2 \, \pi(\mathrm{d}x),$$

and for a signed measure $\mu$ on $M$ the dual norm

$$\|\mu\|_{\dot{H}^{-1}(\pi)} = \sup \left\{ |\langle f, \mu \rangle| \;\middle|\; \|f\|_{\dot{H}^1(\pi)} \leq 1 \right\}.$$

Then any two probability measures $\mu$ and $\nu$ on $M$ satisfy the upper bound[18]

$$W_2(\mu, \nu) \leq 2 \, \|\mu - \nu\|_{\dot{H}^{-1}(\pi)}.$$

In the other direction, if $\mu$ and $\nu$ each have densities with respect to the standard volume measure on $M$ that are both bounded above by some $C < \infty$, and $M$ has non-negative Ricci curvature, then[19][20]

$$\|\mu - \nu\|_{\dot{H}^{-1}(\pi)} \leq \sqrt{C} \, W_2(\mu, \nu).$$
For any $p \geq 1$, the metric space $(P_p(M), W_p)$ is separable, and is complete if $(M, d)$ is separable and complete.[21]
It is also possible to consider the Wasserstein metric for $p = \infty$. In this case, the defining formula becomes

$$W_\infty(\mu, \nu) = \lim_{p \to +\infty} W_p(\mu, \nu) = \inf_{\gamma \in \Gamma(\mu, \nu)} \ \gamma\text{-}\operatorname{esssup} \, d(x, y),$$

where $\gamma\text{-}\operatorname{esssup} \, d(x, y)$ denotes the essential supremum of $d(x, y)$ with respect to the measure $\gamma$. The metric space $(P_\infty(M), W_\infty)$ is complete if $(M, d)$ is separable and complete. Here, $P_\infty$ is the space of all probability measures with bounded support.[22]
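For equal-size one-dimensional samples, this essential supremum reduces, under the monotone (sorted) coupling that is optimal in one dimension, to the largest gap between matched order statistics. A sketch (the sample parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=1000)
y = rng.normal(0.3, 1.0, size=1000)

# W_infinity between the two empirical measures: the maximum gap
# between matched order statistics under the monotone coupling.
w_inf = float(np.max(np.abs(np.sort(x) - np.sort(y))))
```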