
Journal articles on the topic 'Distances de Wasserstein'


Consult the top 50 journal articles for your research on the topic 'Distances de Wasserstein.'


1

Solomon, Justin, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. "Convolutional Wasserstein distances." ACM Transactions on Graphics 34, no. 4 (July 27, 2015): 1–11. http://dx.doi.org/10.1145/2766963.

2

Kindelan Nuñez, Rolando, Mircea Petrache, Mauricio Cerda, and Nancy Hitschfeld. "A Class of Topological Pseudodistances for Fast Comparison of Persistence Diagrams." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13202–10. http://dx.doi.org/10.1609/aaai.v38i12.29220.

Abstract:
Persistence diagrams (PD)s play a central role in topological data analysis, and are used in an ever increasing variety of applications. The comparison of PD data requires computing distances among large sets of PDs, with metrics which are accurate, theoretically sound, and fast to compute. Especially for denser multi-dimensional PDs, such comparison metrics are lacking. While on the one hand, Wasserstein-type distances have high accuracy and theoretical guarantees, they incur high computational cost. On the other hand, distances between vectorizations such as Persistence Statistics (PS)s have lower computational cost, but lack the accuracy guarantees and theoretical properties of a true distance over PD space. In this work we introduce a class of pseudodistances called Extended Topological Pseudodistances (ETD)s, which have tunable complexity, and can approximate Sliced and classical Wasserstein distances at the high-complexity extreme, while being computationally lighter and close to Persistence Statistics at the lower complexity extreme, and thus allow users to interpolate between the two metrics. We build theoretical comparisons to show how to fit our new distances at an intermediate level between persistence vectorizations and Wasserstein distances. We also experimentally verify that ETDs outperform PSs in terms of accuracy and outperform Wasserstein and Sliced Wasserstein distances in terms of computational complexity.
3

Panaretos, Victor M., and Yoav Zemel. "Statistical Aspects of Wasserstein Distances." Annual Review of Statistics and Its Application 6, no. 1 (March 7, 2019): 405–31. http://dx.doi.org/10.1146/annurev-statistics-030718-104938.

Abstract:
Wasserstein distances are metrics on probability distributions inspired by the problem of optimal mass transportation. Roughly speaking, they measure the minimal effort required to reconfigure the probability mass of one distribution in order to recover the other distribution. They are ubiquitous in mathematics, with a long history that has seen them catalyze core developments in analysis, optimization, and probability. Beyond their intrinsic mathematical richness, they possess attractive features that make them a versatile tool for the statistician: They can be used to derive weak convergence and convergence of moments, and can be easily bounded; they are well-adapted to quantify a natural notion of perturbation of a probability distribution; and they seamlessly incorporate the geometry of the domain of the distributions in question, thus being useful for contrasting complex objects. Consequently, they frequently appear in the development of statistical theory and inferential methodology, and they have recently become an object of inference in themselves. In this review, we provide a snapshot of the main concepts involved in Wasserstein distances and optimal transportation, and a succinct overview of some of their many statistical aspects.
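As a minimal, generic illustration of the one-dimensional case surveyed here (a sketch added for this listing, not code from the review), the snippet below computes an empirical Wasserstein distance between two samples, once with scipy.stats.wasserstein_distance and once directly from sorted samples, using the fact that on the real line the optimal coupling pairs order statistics.

```python
# Sketch: empirical 1-D Wasserstein distances between two samples.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=500)   # sample from P
y = rng.normal(loc=1.0, scale=2.0, size=500)   # sample from Q

# W1 from scipy, directly on the samples.
w1 = wasserstein_distance(x, y)

# Same quantity from sorted samples: with equal sample sizes the optimal
# (monotone) coupling matches order statistics one-to-one.
w1_manual = np.mean(np.abs(np.sort(x) - np.sort(y)))
w2_manual = np.sqrt(np.mean((np.sort(x) - np.sort(y)) ** 2))  # empirical W2

print(w1, w1_manual, w2_manual)
```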
4

Kelbert, Mark. "Survey of Distances between the Most Popular Distributions." Analytics 2, no. 1 (March 1, 2023): 225–45. http://dx.doi.org/10.3390/analytics2010012.

Abstract:
We present a number of upper and lower bounds for the total variation distances between the most popular probability distributions. In particular, some estimates of the total variation distances in the cases of multivariate Gaussian distributions, Poisson distributions, binomial distributions, between a binomial and a Poisson distribution, and also in the case of negative binomial distributions are given. Next, estimates of the Lévy–Prohorov distance in terms of Wasserstein metrics are discussed, and Fréchet, Wasserstein and Hellinger distances for multivariate Gaussian distributions are evaluated. Some novel context-sensitive distances are introduced and a number of bounds mimicking the classical results from information theory are proved.
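The Gaussian case mentioned above has a well-known closed form for the 2-Wasserstein (Fréchet) distance; the sketch below implements that standard formula as an illustration (it is not taken from the survey), with scipy.linalg.sqrtm for the matrix square roots.

```python
# W2^2(N(m1, S1), N(m2, S2)) = ||m1 - m2||^2 + tr(S1 + S2 - 2 (S2^1/2 S1 S2^1/2)^1/2)
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    root = np.real(sqrtm(sqrtm(S2) @ S1 @ sqrtm(S2)))   # discard tiny imaginary parts
    trace_term = np.trace(S1 + S2 - 2.0 * root)
    return np.sqrt(np.sum((m1 - m2) ** 2) + max(trace_term, 0.0))

m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.array([1.0, 0.0]), np.diag([2.0, 0.5])
print(gaussian_w2(m1, S1, m2, S2))
```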
5

Vayer, Titouan, Laetitia Chapel, Remi Flamary, Romain Tavenard, and Nicolas Courty. "Fused Gromov-Wasserstein Distance for Structured Objects." Algorithms 13, no. 9 (August 31, 2020): 212. http://dx.doi.org/10.3390/a13090212.

Abstract:
Optimal transport theory has recently found many applications in machine learning thanks to its capacity to meaningfully compare various machine learning objects that are viewed as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects, but treats them independently, whereas the Gromov–Wasserstein distance focuses on the relations between the elements, depicting the structure of the object, yet discarding its features. In this paper, we study the Fused Gromov-Wasserstein distance that extends the Wasserstein and Gromov–Wasserstein distances in order to encode simultaneously both the feature and structure information. We provide the mathematical framework for this distance in the continuous setting, prove its metric and interpolation properties, and provide a concentration result for the convergence of finite samples. We also illustrate and interpret its use in various applications, where structured objects are involved.
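As a hedged sketch of the objects involved (one common discrete convention with a squared structure loss, not the authors' code), the snippet below evaluates the fused Gromov-Wasserstein cost of a given coupling. Finding the optimal coupling is the hard part, usually done with conditional-gradient or entropic solvers; this only shows how feature and structure terms are traded off by the parameter alpha.

```python
# Fused GW cost of a coupling T: (1-alpha) * <M, T> + alpha * sum |C1_ik - C2_jl|^2 T_ij T_kl
import numpy as np

def fgw_cost(M, C1, C2, T, alpha=0.5):
    feature_term = np.sum(M * T)                                   # Wasserstein-like part
    diff = C1[:, None, :, None] - C2[None, :, None, :]             # diff[i, j, k, l] = C1[i, k] - C2[j, l]
    structure_term = np.einsum("ijkl,ij,kl->", diff ** 2, T, T)    # Gromov-Wasserstein-like part
    return (1.0 - alpha) * feature_term + alpha * structure_term

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(4, 2)), rng.normal(size=(5, 2))            # two small structured objects
M = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)         # feature costs
C1 = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)        # intra-object distances
C2 = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
T = np.full((4, 5), 1.0 / 20)                                      # product (uninformative) coupling
print(fgw_cost(M, C1, C2, T, alpha=0.5))
```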
6

Belili, Nacereddine, and Henri Heinich. "Distances de Wasserstein et de Zolotarev." Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 330, no. 9 (May 2000): 811–14. http://dx.doi.org/10.1016/s0764-4442(00)00274-3.

7

Peyre, Rémi. "Comparison between W2 distance and Ḣ−1 norm, and Localization of Wasserstein distance." ESAIM: Control, Optimisation and Calculus of Variations 24, no. 4 (October 2018): 1489–501. http://dx.doi.org/10.1051/cocv/2017050.

Abstract:
It is well known that the quadratic Wasserstein distance W2(⋅, ⋅) is formally equivalent, for infinitesimally small perturbations, to some weighted Ḣ−1 homogeneous Sobolev norm. In this article I show that this equivalence can be integrated to get non-asymptotic comparison results between these distances. Then I give an application of these results to prove that the W2 distance exhibits some localization phenomenon: if μ and ν are measures on ℝ^n and ϕ: ℝ^n → ℝ^+ is some bump function with compact support, then under mild hypotheses, one can bound the Wasserstein distance between ϕ ⋅ μ and ϕ ⋅ ν from above by an explicit multiple of W2(μ, ν).
8

Tong, Qijun, and Kei Kobayashi. "Entropy-Regularized Optimal Transport on Multivariate Normal and q-normal Distributions." Entropy 23, no. 3 (March 3, 2021): 302. http://dx.doi.org/10.3390/e23030302.

Abstract:
The distance and divergence of the probability measures play a central role in statistics, machine learning, and many other related fields. The Wasserstein distance has received much attention in recent years because of its distinctions from other distances or divergences. Although computing the Wasserstein distance is costly, entropy-regularized optimal transport was proposed to computationally efficiently approximate the Wasserstein distance. The purpose of this study is to understand the theoretical aspect of entropy-regularized optimal transport. In this paper, we focus on entropy-regularized optimal transport on multivariate normal distributions and q-normal distributions. We obtain the explicit form of the entropy-regularized optimal transport cost on multivariate normal and q-normal distributions; this provides a perspective to understand the effect of entropy regularization, which was previously known only experimentally. Furthermore, we obtain the entropy-regularized Kantorovich estimator for the probability measure that satisfies certain conditions. We also demonstrate how the Wasserstein distance, optimal coupling, geometric structure, and statistical efficiency are affected by entropy regularization in some experiments. In particular, our results about the explicit form of the optimal coupling of the Tsallis entropy-regularized optimal transport on multivariate q-normal distributions and the entropy-regularized Kantorovich estimator are novel and will become the first step towards the understanding of a more general setting.
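For orientation, the snippet below is a generic Sinkhorn iteration for entropy-regularized optimal transport between two discretized Gaussians, written in plain NumPy; it illustrates the numerical scheme the paper analyses but is not its closed-form result for normal or q-normal distributions.

```python
# Sketch: Sinkhorn iterations for entropic OT between two histograms.
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=1000):
    """Entropic OT plan between histograms a and b for cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # alternating marginal scalings
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # plan with marginals (approximately) a and b
    return P, np.sum(P * C)              # plan and its transport cost

x = np.linspace(-3, 3, 60)
a = np.exp(-0.5 * x**2);        a /= a.sum()          # discretized N(0, 1)
b = np.exp(-0.5 * (x - 1)**2);  b /= b.sum()          # discretized N(1, 1)
C = (x[:, None] - x[None, :]) ** 2                    # squared-distance cost
P, cost = sinkhorn(a, b, C, eps=0.1)
print(cost)   # approaches W2^2 = 1 between these two Gaussians as eps -> 0
```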
9

Beier, Florian, Robert Beinert, and Gabriele Steidl. "Multi-marginal Gromov–Wasserstein transport and barycentres." Information and Inference: A Journal of the IMA 12, no. 4 (September 18, 2023): 2720–52. http://dx.doi.org/10.1093/imaiai/iaad041.

Abstract:
Gromov–Wasserstein (GW) distances are combinations of Gromov–Hausdorff and Wasserstein distances that allow the comparison of two different metric measure spaces (mm-spaces). Due to their invariance under measure- and distance-preserving transformations, they are well suited for many applications in graph and shape analysis. In this paper, we introduce the concept of multi-marginal GW transport between a set of mm-spaces as well as its regularized and unbalanced versions. As a special case, we discuss multi-marginal fused variants, which combine the structure information of an mm-space with label information from an additional label space. To tackle the new formulations numerically, we consider the bi-convex relaxation of the multi-marginal GW problem, which is tight in the balanced case if the cost function is conditionally negative definite. The relaxed model can be solved by an alternating minimization, where each step can be performed by a multi-marginal Sinkhorn scheme. We show relations of our multi-marginal GW problem to (unbalanced, fused) GW barycentres and present various numerical results, which indicate the potential of the concept.
10

Zhang, Zhonghui, Huarui Jing, and Chihwa Kao. "High-Dimensional Distributionally Robust Mean-Variance Efficient Portfolio Selection." Mathematics 11, no. 5 (March 6, 2023): 1272. http://dx.doi.org/10.3390/math11051272.

Abstract:
This paper introduces a novel distributionally robust mean-variance portfolio estimator based on the projection robust Wasserstein (PRW) distance. This approach addresses the issue of increasing conservatism of portfolio allocation strategies due to high-dimensional data. Our simulation results show the robustness of the PRW-based estimator in the presence of noisy data and its ability to achieve a higher Sharpe ratio than regular Wasserstein distances when dealing with a large number of assets. Our empirical study also demonstrates that the proposed portfolio estimator outperforms classic “plug-in” methods using various covariance estimators in terms of risk when evaluated out of sample.
11

Pont, Mathieu, Jules Vidal, Julie Delon, and Julien Tierny. "Wasserstein Distances, Geodesics and Barycenters of Merge Trees." IEEE Transactions on Visualization and Computer Graphics 28, no. 1 (January 2022): 291–301. http://dx.doi.org/10.1109/tvcg.2021.3114839.

12

Sommerfeld, Max, and Axel Munk. "Inference for empirical Wasserstein distances on finite spaces." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 80, no. 1 (May 18, 2017): 219–38. http://dx.doi.org/10.1111/rssb.12236.

13

Shao, Jinghai. "Ergodicity of regime-switching diffusions in Wasserstein distances." Stochastic Processes and their Applications 125, no. 2 (February 2015): 739–58. http://dx.doi.org/10.1016/j.spa.2014.10.007.

14

Backhoff-Veraguas, Julio, Daniel Bartl, Mathias Beiglböck, and Manu Eder. "Adapted Wasserstein distances and stability in mathematical finance." Finance and Stochastics 24, no. 3 (June 4, 2020): 601–32. http://dx.doi.org/10.1007/s00780-020-00426-3.

15

Lipman, Y., and I. Daubechies. "Conformal Wasserstein distances: Comparing surfaces in polynomial time." Advances in Mathematics 227, no. 3 (June 2011): 1047–77. http://dx.doi.org/10.1016/j.aim.2011.01.020.

16

Ponti, Andrea, Ilaria Giordani, Matteo Mistri, Antonio Candelieri, and Francesco Archetti. "The “Unreasonable” Effectiveness of the Wasserstein Distance in Analyzing Key Performance Indicators of a Network of Stores." Big Data and Cognitive Computing 6, no. 4 (November 15, 2022): 138. http://dx.doi.org/10.3390/bdcc6040138.

Abstract:
Large retail companies routinely gather huge amounts of customer data, which are to be analyzed at a low granularity. To enable this analysis, several Key Performance Indicators (KPIs), acquired for each customer through different channels, are associated with the main drivers of the customer experience. Analyzing the samples of customer behavior only through parameters such as average and variance does not cope with the growing heterogeneity of customers. In this paper, we propose a different approach in which the samples from customer surveys are represented as discrete probability distributions whose similarities can be assessed by different models. The focus is on the Wasserstein distance, which is generally well defined, even when other distributional distances are not, and it provides an interpretable distance metric between distributions. The support of the distributions can be both one- and multi-dimensional, allowing for the joint consideration of several KPIs for each store, leading to a multi-variate histogram. Moreover, the Wasserstein barycenter offers a useful synthesis of a set of distributions and can be used as a reference distribution to characterize and classify behavioral patterns. Experimental results on real data show the effectiveness of the Wasserstein distance in providing global performance measures.
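As a hedged, synthetic illustration of the barycenter idea in the simplest one-dimensional setting (invented KPI samples, not the paper's data or models): on the real line the 2-Wasserstein barycenter averages quantile functions, so for equal-size samples it is just the mean of the sorted samples, and each store can then be scored by its distance to that reference.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
stores = [rng.normal(loc=mu, scale=s, size=400)            # per-store KPI samples (synthetic)
          for mu, s in [(3.0, 1.0), (4.0, 0.5), (5.5, 2.0)]]

# 1-D Wasserstein barycenter of equal-size samples: average the sorted samples.
barycenter = np.mean([np.sort(kpi) for kpi in stores], axis=0)

# Distance of each store's KPI distribution to the reference barycenter,
# usable as a feature for characterizing behavioral patterns.
for i, kpi in enumerate(stores):
    print(i, wasserstein_distance(kpi, barycenter))
```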
17

Blanchet, Jose, Yang Kang, and Karthyek Murthy. "Robust Wasserstein profile inference and applications to machine learning." Journal of Applied Probability 56, no. 3 (September 2019): 830–57. http://dx.doi.org/10.1017/jpr.2019.49.

Abstract:
We show that several machine learning estimators, including square-root least absolute shrinkage and selection and regularized logistic regression, can be represented as solutions to distributionally robust optimization problems. The associated uncertainty regions are based on suitably defined Wasserstein distances. Hence, our representations allow us to view regularization as a result of introducing an artificial adversary that perturbs the empirical distribution to account for out-of-sample effects in loss estimation. In addition, we introduce RWPI (robust Wasserstein profile inference), a novel inference methodology which extends the use of methods inspired by empirical likelihood to the setting of optimal transport costs (of which Wasserstein distances are a particular case). We use RWPI to show how to optimally select the size of uncertainty regions, and as a consequence we are able to choose regularization parameters for these machine learning estimators without the use of cross validation. Numerical experiments are also given to validate our theoretical findings.
18

Iacobelli, Mikaela. "A New Perspective on Wasserstein Distances for Kinetic Problems." Archive for Rational Mechanics and Analysis 244, no. 1 (February 7, 2022): 27–50. http://dx.doi.org/10.1007/s00205-021-01705-9.

Abstract:
We introduce a new class of Wasserstein-type distances specifically designed to tackle questions concerning stability and convergence to equilibria for kinetic equations. Thanks to these new distances, we improve some classical estimates by Loeper (J Math Pures Appl (9) 86(1):68–79, 2006) and Dobrushin (Funktsional Anal i Prilozhen 13:48–58, 1979) on Vlasov-type equations, and we present an application to quasi-neutral limits.
19

Robin, Yoann, Pascal Yiou, and Philippe Naveau. "Detecting changes in forced climate attractors with Wasserstein distance." Nonlinear Processes in Geophysics 24, no. 3 (July 31, 2017): 393–405. http://dx.doi.org/10.5194/npg-24-393-2017.

Abstract:
The climate system can be described by a dynamical system and its associated attractor. The dynamics of this attractor depends on the external forcings that influence the climate. Such forcings can affect the mean values or variances, but regions of the attractor that are seldom visited can also be affected. It is an important challenge to measure how the climate attractor responds to different forcings. Currently, the Euclidean distance or similar measures like the Mahalanobis distance have been favored to measure discrepancies between two climatic situations. Those distances do not have a natural building mechanism to take into account the attractor dynamics. In this paper, we argue that a Wasserstein distance, stemming from optimal transport theory, offers an efficient and practical way to discriminate between dynamical systems. After treating a toy example, we explore how the Wasserstein distance can be applied and interpreted to detect non-autonomous dynamics from a Lorenz system driven by seasonal cycles and a warming trend.
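A minimal sketch of the underlying computation on toy data (not the climate experiments): the exact Wasserstein distance between two small point-cloud snapshots, written as the discrete optimal transport linear program and solved with scipy.optimize.linprog; for large clouds one would switch to a dedicated OT solver or an entropic approximation.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_point_clouds(X, Y, p=2):
    """Exact W_p between uniform empirical measures on point clouds X and Y."""
    n, m = len(X), len(Y)
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p
    # Equality constraints: row sums of the plan are 1/n, column sums are 1/m.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun ** (1.0 / p)

rng = np.random.default_rng(3)
snap_a = rng.normal(size=(40, 3))                                # toy "attractor" snapshot
snap_b = rng.normal(size=(40, 3)) + np.array([0.5, 0.0, 0.0])    # forced (shifted) snapshot
print(wasserstein_point_clouds(snap_a, snap_b))
```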
20

Öcal, Kaan, Ramon Grima, and Guido Sanguinetti. "Parameter estimation for biochemical reaction networks using Wasserstein distances." Journal of Physics A: Mathematical and Theoretical 53, no. 3 (December 23, 2019): 034002. http://dx.doi.org/10.1088/1751-8121/ab5877.

21

HAURAY, MAXIME. "WASSERSTEIN DISTANCES FOR VORTICES APPROXIMATION OF EULER-TYPE EQUATIONS." Mathematical Models and Methods in Applied Sciences 19, no. 08 (August 2009): 1357–84. http://dx.doi.org/10.1142/s0218202509003814.

Abstract:
We establish the convergence of a vortex system towards equations similar to the 2D Euler equation in vorticity formulation. The only but important difference is that we use a singular kernel of the type x⊥/|x|^(α+1), with α < 1, instead of the Biot–Savart kernel x⊥/|x|^2. This paper follows a previous work of Jabin and the author about the particles approximation of the Vlasov equation in Ref. 13. Here we study a different mean-field equation, simplify the proofs and weaken non-physical initial conditions. The simplification is due to the introduction of the infinite Wasserstein distance. The results are obtained for L1 ∩ L∞ vorticities without any sign assumption, in the periodic setting, on the whole space and on the half space (with Neumann boundary conditions). A vortex-blob result is also given, that is valid for short times in the true vortex case.
22

Loomis, Samuel P., and James P. Crutchfield. "Exploring predictive states via Cantor embeddings and Wasserstein distance." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 12 (December 2022): 123115. http://dx.doi.org/10.1063/5.0102603.

Abstract:
Predictive states for stochastic processes are a nonparametric and interpretable construct with relevance across a multitude of modeling paradigms. Recent progress on the self-supervised reconstruction of predictive states from time-series data focused on the use of reproducing kernel Hilbert spaces. Here, we examine how Wasserstein distances may be used to detect predictive equivalences in symbolic data. We compute Wasserstein distances between distributions over sequences (“predictions”) using a finite-dimensional embedding of sequences based on the Cantor set for the underlying geometry. We show that exploratory data analysis using the resulting geometry via hierarchical clustering and dimension reduction provides insight into the temporal structure of processes ranging from the relatively simple (e.g., generated by finite-state hidden Markov models) to the very complex (e.g., generated by infinite-state indexed grammars).
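The sketch below illustrates the general idea only (the embedding, data and parameters are assumptions made here, not the paper's construction): finite binary words are mapped into the Cantor set, and two bags of predicted futures are then compared with a one-dimensional Wasserstein distance on their embedded values.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def cantor_embed(word):
    """Map a 0/1 word to a point of the Cantor set: sum_i 2 * w_i / 3^(i+1)."""
    return sum(2 * w / 3 ** (i + 1) for i, w in enumerate(word))

rng = np.random.default_rng(4)
# Two hypothetical predictive states, each represented by sampled length-8 futures.
futures_a = rng.integers(0, 2, size=(200, 8))             # fair-coin futures
futures_b = (rng.random((200, 8)) < 0.8).astype(int)      # biased-coin futures

emb_a = np.array([cantor_embed(w) for w in futures_a])
emb_b = np.array([cantor_embed(w) for w in futures_b])
print(wasserstein_distance(emb_a, emb_b))   # a small value would indicate similar predictions
```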
23

Gairing, Jan, Michael Högele, Tetiana Kosenkova, and Alexei Kulik. "Coupling distances between Lévy measures and applications to noise sensitivity of SDE." Stochastics and Dynamics 15, no. 02 (April 6, 2015): 1550009. http://dx.doi.org/10.1142/s0219493715500094.

Abstract:
We introduce the notion of coupling distances on the space of Lévy measures in order to quantify rates of convergence towards a limiting Lévy jump diffusion in terms of its characteristic triplet, in particular in terms of the tail of the Lévy measure. The main result yields an estimate of the Wasserstein–Kantorovich–Rubinstein distance on path space between two Lévy diffusions in terms of the coupling distances. We want to apply this to obtain precise rates of convergence for Markov chain approximations and a statistical goodness-of-fit test for low-dimensional conceptual climate models with paleoclimatic data.
24

Tabak, Gil, Minjie Fan, Samuel Yang, Stephan Hoyer, and Geoffrey Davis. "Correcting nuisance variation using Wasserstein distance." PeerJ 8 (February 28, 2020): e8594. http://dx.doi.org/10.7717/peerj.8594.

Abstract:
Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells. One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drug compounds applied at different doses can be quantified. The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images. An important known issue for such methods is separating relevant biological signal from nuisance variation. For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for those from different weeks, despite having identical drug compounds applied in both cases. In this case, the particular batch in which a set of experiments were conducted constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information (e.g., drug effects). We develop a general framework for adjusting the image embeddings in order to “forget” domain-specific information while preserving relevant biological information. To achieve this, we minimize a loss function based on distances between marginal distributions (such as the Wasserstein distance) of embeddings across domains for each replicated treatment. For the dataset we present results with, the only replicated treatment happens to be the negative control treatment, for which we do not expect any treatment-induced cell morphology changes. We find that for our transformed embeddings (i) the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal; and (ii) less domain-specific information is present.
25

Smith, Kristen J., Brad J. White, David E. Amrine, Robert L. Larson, Miles E. Theurer, Josh I. Szasz, Tony C. Bryant, and Justin W. Waggoner. "Evaluation of First Treatment Timing, Fatal Disease Onset, and Days from First Treatment to Death Associated with Bovine Respiratory Disease in Feedlot Cattle." Veterinary Sciences 10, no. 3 (March 8, 2023): 204. http://dx.doi.org/10.3390/vetsci10030204.

Abstract:
Bovine respiratory disease (BRD) is a frequent beef cattle syndrome. Improved understanding of the timing of BRD events, including subsequent deleterious outcomes, promotes efficient resource allocation. This study’s objective was to determine differences in timing distributions of initial BRD treatments (Tx1), days to death after initial treatment (DTD), and days after arrival to fatal disease onset (FDO). Individual animal records for the first BRD treatment (n = 301,721) or BRD mortality (n = 19,332) were received from 25 feed yards. A subset of data (318–363 kg; steers/heifers) was created and Wasserstein distances were used to compare temporal distributions of Tx1, FDO, and DTD across genders (steers/heifers) and the quarter of arrival. Disease frequency varied by quarter with the greatest Wasserstein distances observed between Q2 and Q3 and between Q2 and Q4. Cattle arriving in Q3 and Q4 had earlier Tx1 events than in Q2. Evaluating FDO and DTD revealed the greatest Wasserstein distance between cattle arriving in Q2 and Q4, with cattle arriving in Q2 having later events. Distributions of FDO varied by gender and quarter and typically had wide distributions with the largest 25–75% quartiles ranging from 20 to 80 days (heifers arriving in Q2). The DTD had right-skewed distributions with 25% of cases occurring by days 3–4 post-treatment. Results illustrate temporal disease and outcome patterns are largely right-skewed and may not be well represented by simple arithmetic means. Knowledge of typical temporal patterns allows cattle health managers to focus disease control efforts on the correct groups of cattle at the appropriate time.
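A toy, synthetic illustration of the statistical point made here (invented numbers, not the study's records): for right-skewed timing data, two distributions can share nearly the same mean while remaining far apart in Wasserstein distance.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(5)
# Two days-to-event samples with (roughly) equal means but different skewed shapes.
days_a = rng.gamma(shape=1.0, scale=30.0, size=2000)          # highly skewed
days_b = rng.gamma(shape=9.0, scale=30.0 / 9.0, size=2000)    # more concentrated

print("difference in means:", abs(days_a.mean() - days_b.mean()))      # near zero
print("Wasserstein distance:", wasserstein_distance(days_a, days_b))   # clearly positive
```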
26

Liu, Jialin, Wotao Yin, Wuchen Li, and Yat Tin Chow. "Multilevel Optimal Transport: A Fast Approximation of Wasserstein-1 Distances." SIAM Journal on Scientific Computing 43, no. 1 (January 2021): A193–A220. http://dx.doi.org/10.1137/18m1219813.

27

Mémoli, Facundo. "Gromov–Wasserstein Distances and the Metric Approach to Object Matching." Foundations of Computational Mathematics 11, no. 4 (April 30, 2011): 417–87. http://dx.doi.org/10.1007/s10208-011-9093-5.

28

Bonafini, Mauro, and Bernhard Schmitzer. "Domain decomposition for entropy regularized optimal transport." Numerische Mathematik 149, no. 4 (November 19, 2021): 819–70. http://dx.doi.org/10.1007/s00211-021-01245-0.

Abstract:
We study Benamou's domain decomposition algorithm for optimal transport in the entropy regularized setting. The key observation is that the regularized variant converges to the globally optimal solution under very mild assumptions. We prove linear convergence of the algorithm with respect to the Kullback–Leibler divergence and illustrate the (potentially very slow) rates with numerical examples. On problems with sufficient geometric structure (such as Wasserstein distances between images) we expect much faster convergence. We then discuss important aspects of a computationally efficient implementation, such as adaptive sparsity, a coarse-to-fine scheme and parallelization, paving the way to numerically solving large-scale optimal transport problems. We demonstrate efficient numerical performance for computing the Wasserstein-2 distance between 2D images and observe that, even without parallelization, domain decomposition compares favorably to applying a single efficient implementation of the Sinkhorn algorithm in terms of runtime, memory and solution quality.
29

Kim, Yoon-Tae, and Hyun-Suk Park. "Bound for an Approximation of Invariant Density of Diffusions via Density Formula in Malliavin Calculus." Mathematics 11, no. 10 (May 15, 2023): 2302. http://dx.doi.org/10.3390/math11102302.

Abstract:
The Kolmogorov and total variation distance between the laws of random variables have upper bounds represented by the L1-norm of densities when random variables have densities. In this paper, we derive an upper bound, in terms of densities such as the Kolmogorov and total variation distance, for several probabilistic distances (e.g., Kolmogorov distance, total variation distance, Wasserstein distance, Fortet–Mourier distance, etc.) between the laws of F and G in the case where a random variable F follows the invariant measure that admits a density and a differentiable random variable G, in the sense of Malliavin calculus, and also allows a density function.
30

Bigot, Jérémie. "Statistical data analysis in the Wasserstein space." ESAIM: Proceedings and Surveys 68 (2020): 1–19. http://dx.doi.org/10.1051/proc/202068001.

Abstract:
This paper is concerned with statistical inference problems from a data set whose elements may be modeled as random probability measures such as multiple histograms or point clouds. We propose to review recent contributions in statistics on the use of Wasserstein distances and tools from optimal transport to analyse such data. In particular, we highlight the benefits of using the notions of barycenter and geodesic PCA in the Wasserstein space for the purpose of learning the principal modes of geometric variation in a dataset. In this setting, we discuss existing works and we present some research perspectives related to the emerging field of statistical optimal transport.
31

Koehl, Patrice, Marc Delarue, and Henri Orland. "Computing the Gromov-Wasserstein Distance between Two Surface Meshes Using Optimal Transport." Algorithms 16, no. 3 (February 28, 2023): 131. http://dx.doi.org/10.3390/a16030131.

Abstract:
The Gromov-Wasserstein (GW) formalism can be seen as a generalization of the optimal transport (OT) formalism for comparing two distributions associated with different metric spaces. It is a quadratic optimization problem and solving it usually has computational costs that can rise sharply if the problem size exceeds a few hundred points. Recently, fast techniques based on entropy regularization have been developed to solve an approximation of the GW problem quickly. There are issues, however, with the numerical convergence of those regularized approximations to the true GW solution. To circumvent those issues, we introduce a novel strategy to solve the discrete GW problem using methods taken from statistical physics. We build a temperature-dependent free energy function that reflects the GW problem’s constraints. To account for possible differences of scales between the two metric spaces, we introduce a scaling factor s in the definition of the energy. From the extremum of the free energy, we derive a mapping between the two probability measures that are being compared, as well as a distance between those measures. This distance is equal to the GW distance when the temperature goes to zero. The optimal scaling factor itself is obtained by minimizing the free energy with respect to s. We illustrate our approach on the problem of comparing shapes defined by unstructured triangulations of their surfaces. We use several synthetic and “real life” datasets. We demonstrate the accuracy and automaticity of our approach in non-rigid registration of shapes. We provide numerical evidence that there is a strong correlation between the GW distances computed from low-resolution, surface-based representations of proteins and the analogous distances computed from atomistic models of the same proteins.
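For orientation only, the sketch below estimates a Gromov-Wasserstein discrepancy between two point samples with the POT ("Python Optimal Transport") package, i.e. with a standard off-the-shelf solver rather than the statistical-physics scheme proposed in the paper; the function names follow recent POT releases and should be treated as an assumption.

```python
import numpy as np
import ot   # POT: pip install pot

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 3))              # points sampled from surface 1
Y = 2.0 * rng.normal(size=(60, 3))        # points from surface 2, at another scale

C1 = ot.dist(X, X, metric="euclidean")    # intra-shape distance matrices
C2 = ot.dist(Y, Y, metric="euclidean")
p = np.full(len(X), 1.0 / len(X))         # uniform weights on each sample
q = np.full(len(Y), 1.0 / len(Y))

gw2 = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun="square_loss")
print(gw2)                                # GW discrepancy estimate (square loss)
```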
32

Madras, Neal, and Deniz Sezer. "Quantitative bounds for Markov chain convergence: Wasserstein and total variation distances." Bernoulli 16, no. 3 (August 2010): 882–908. http://dx.doi.org/10.3150/09-bej238.

33

Muskulus, Michael, and Sjoerd Verduyn-Lunel. "Wasserstein distances in the analysis of time series and dynamical systems." Physica D: Nonlinear Phenomena 240, no. 1 (January 2011): 45–58. http://dx.doi.org/10.1016/j.physd.2010.08.005.

34

Irpino, Antonio, Rosanna Verde, and Francisco de A. T. De Carvalho. "Dynamic clustering of histogram data based on adaptive squared Wasserstein distances." Expert Systems with Applications 41, no. 7 (June 2014): 3351–66. http://dx.doi.org/10.1016/j.eswa.2013.12.001.

35

Sturm, Karl-Theodor. "Generalized Orlicz spaces and Wasserstein distances for convex–concave scale functions." Bulletin des Sciences Mathématiques 135, no. 6-7 (September 2011): 795–802. http://dx.doi.org/10.1016/j.bulsci.2011.07.013.

36

Friesen, Martin, Peng Jin, and Barbara Rüdiger. "Stochastic equation and exponential ergodicity in Wasserstein distances for affine processes." Annals of Applied Probability 30, no. 5 (October 2020): 2165–95. http://dx.doi.org/10.1214/19-aap1554.

37

Wang, Feng-Yu. "Exponential Contraction in Wasserstein Distances for Diffusion Semigroups with Negative Curvature." Potential Analysis 53, no. 3 (February 6, 2020): 1123–44. http://dx.doi.org/10.1007/s11118-019-09800-z.

38

Fraser, Jonathan M. "First and second moments for self-similar couplings and Wasserstein distances." Mathematische Nachrichten 288, no. 17-18 (June 29, 2015): 2028–41. http://dx.doi.org/10.1002/mana.201400408.

39

Luo, Dejun, and Jian Wang. "Refined basic couplings and Wasserstein-type distances for SDEs with Lévy noises." Stochastic Processes and their Applications 129, no. 9 (September 2019): 3129–73. http://dx.doi.org/10.1016/j.spa.2018.09.003.

40

Hairer, Martin, and Jonathan C. Mattingly. "Spectral gaps in Wasserstein distances and the 2D stochastic Navier–Stokes equations." Annals of Probability 36, no. 6 (November 2008): 2050–91. http://dx.doi.org/10.1214/08-aop392.

41

Sagiv, Amir. "The Wasserstein distances between pushed-forward measures with applications to uncertainty quantification." Communications in Mathematical Sciences 18, no. 3 (2020): 707–24. http://dx.doi.org/10.4310/cms.2020.v18.n3.a6.

42

Beinert, Robert, Cosmas Heiss, and Gabriele Steidl. "On Assignment Problems Related to Gromov–Wasserstein Distances on the Real Line." SIAM Journal on Imaging Sciences 16, no. 2 (June 23, 2023): 1028–32. http://dx.doi.org/10.1137/22m1497808.

43

Aguech, Rafik, Nabil Lasmar, and Hosam Mahmoud. "Limit distribution of distances in biased random tries." Journal of Applied Probability 43, no. 2 (June 2006): 377–90. http://dx.doi.org/10.1239/jap/1152413729.

Abstract:
The trie is a sort of digital tree. Ideally, to achieve balance, the trie should grow from an unbiased source generating keys of bits with equal likelihoods. In practice, the lack of bias is not always guaranteed. We investigate the distance between randomly selected pairs of nodes among the keys in a biased trie. This research complements that of Christophi and Mahmoud (2005); however, the results and some of the methodology are strikingly different. Analytical techniques are still useful for moments calculation. Both mean and variance are of polynomial order. It is demonstrated that the standardized distance approaches a normal limiting random variable. This is proved by the contraction method, whereby the limit distribution is shown to approach the fixed-point solution of a distributional equation in the Wasserstein metric space.
44

Aguech, Rafik, Nabil Lasmar, and Hosam Mahmoud. "Limit distribution of distances in biased random tries." Journal of Applied Probability 43, no. 02 (June 2006): 377–90. http://dx.doi.org/10.1017/s0021900200001704.

Abstract:
The trie is a sort of digital tree. Ideally, to achieve balance, the trie should grow from an unbiased source generating keys of bits with equal likelihoods. In practice, the lack of bias is not always guaranteed. We investigate the distance between randomly selected pairs of nodes among the keys in a biased trie. This research complements that of Christophi and Mahmoud (2005); however, the results and some of the methodology are strikingly different. Analytical techniques are still useful for moments calculation. Both mean and variance are of polynomial order. It is demonstrated that the standardized distance approaches a normal limiting random variable. This is proved by the contraction method, whereby the limit distribution is shown to approach the fixed-point solution of a distributional equation in the Wasserstein metric space.
45

R.U. Gobithaasan, Kirthana Devi Selvarajh, and Kenjiro T. Miura. "Clustering Datasaurus Dozen Using Bottleneck Distance, Wasserstein Distance (WD) and Persistence Landscapes." Journal of Advanced Research in Applied Sciences and Engineering Technology 38, no. 1 (January 24, 2024): 12–24. http://dx.doi.org/10.37934/araset.38.1.1224.

Abstract:
Topological Data Analysis (TDA) is an emerging field of study that helps to obtain insights from the topological information of datasets. Motivated by the emergence of TDA, we applied Persistent Homology (PH), one of the tools commonly used to extract topological features, to cluster the Datasaurus Dozen dataset. This dataset is ideal to show PH’s capability in clustering as it consists of twelve distinct point clouds (PC) that have identical mean values, standard deviation, and correlation values, yet produce dissimilar patterns. The methodology starts with normalizing Datasaurus Dozen, followed by computing H1 Persistence Diagrams (PD) for each dataset. Two types of PD distances, Wasserstein Distance (WD) and Bottleneck Distance (BD), are computed directly and represented as a proximity matrix. We also vectorized the H1 Persistence Diagrams to obtain the average of the first five strips of the Persistence Landscape (PL) and computed the L2 distance to represent a proximity matrix. These three distance matrices are used to generate dendrograms by using Hierarchical Agglomerative Clustering (HAC). Regardless of possessing similar descriptive statistics, PH accurately extracts the global and local geometric topological information, and clusters them accordingly. It is evident that for clustering based on global geometric information, BD is suitable and computationally cheap, whereas for clustering based on local geometric information, WD and average PL vectors are suitable but may incur extra computation.
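A hedged sketch of this kind of pipeline on toy diagrams (not the Datasaurus Dozen data): pairwise Wasserstein distances between persistence diagrams via the persim package, followed by hierarchical agglomerative clustering with scipy; the persim function names reflect recent releases and are an assumption here, and persim.bottleneck can be swapped in for the cheaper comparison.

```python
import numpy as np
from persim import wasserstein                         # pip install persim
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Three toy H1 persistence diagrams as (birth, death) arrays.
diagrams = [
    np.array([[0.1, 0.9], [0.2, 0.4]]),
    np.array([[0.1, 0.8], [0.3, 0.5]]),
    np.array([[0.5, 0.6]]),
]

n = len(diagrams)
D = np.zeros((n, n))                                   # proximity matrix of WD values
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein(diagrams[i], diagrams[j])

Z = linkage(squareform(D), method="average")           # HAC on the proximity matrix
print(fcluster(Z, t=2, criterion="maxclust"))          # two-cluster assignment
```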
46

Brecheteau, Claire, Edouard Genetay, Timothee Mathieu, and Adrien Saumard. "Topics in robust statistical learning." ESAIM: Proceedings and Surveys 74 (November 2023): 119–36. http://dx.doi.org/10.1051/proc/202374119.

Abstract:
Some recent contributions to robust inference are presented. Firstly, the classical problem of robust M-estimation of a location parameter is revisited using an optimal transport approach - with specifically designed Wasserstein-type distances - that reduces robustness to a continuity property. Secondly, a procedure of estimation of the distance function to a compact set is described, using union of balls. This methodology originates in the field of topological inference and offers as a byproduct a robust clustering method. Thirdly, a robust Lloyd-type algorithm for clustering is constructed, using a bootstrap variant of the median-of-means strategy. This algorithm comes with a robust initialization.
47

Gattone, Stefano, Angela De Sanctis, Stéphane Puechmorel, and Florence Nicol. "On the Geodesic Distance in Shapes K-means Clustering." Entropy 20, no. 9 (August 29, 2018): 647. http://dx.doi.org/10.3390/e20090647.

Abstract:
In this paper, the problem of clustering rotationally invariant shapes is studied and a solution using Information Geometry tools is provided. Landmarks of a complex shape are defined as probability densities in a statistical manifold. Then, in the setting of shapes clustering through a K-means algorithm, the discriminative power of two different shape distances is evaluated. The first, derived from the Fisher–Rao metric, is related to the minimization of information in the Fisher sense, and the other is derived from the Wasserstein distance, which measures the minimal transportation cost. A modification of the K-means algorithm is also proposed which allows the variances to vary not only among the landmarks but also among the clusters.
48

Kwessi, Eddy. "Topological Comparison of Some Dimension Reduction Methods Using Persistent Homology on EEG Data." Axioms 12, no. 7 (July 18, 2023): 699. http://dx.doi.org/10.3390/axioms12070699.

Abstract:
In this paper, we explore how to use topological tools to compare dimension reduction methods. We first make a brief overview of some of the methods often used in dimension reduction such as isometric feature mapping, Laplacian Eigenmaps, fast independent component analysis, kernel ridge regression, and t-distributed stochastic neighbor embedding. We then give a brief overview of some of the topological notions used in topological data analysis, such as barcodes, persistent homology, and Wasserstein distance. Theoretically, when these methods are applied on a data set, they can be interpreted differently. From EEG data embedded into a manifold of high dimension, we discuss these methods and we compare them across persistent homologies of dimensions 0, 1, and 2, that is, across connected components, tunnels and holes, shells around voids, or cavities. We find that from three-dimensional clouds of points, it is not clear how distinct from each other the methods are, but Wasserstein and Bottleneck distances, topological hypothesis tests, and various methods show that the methods qualitatively and significantly differ across homologies. We can infer from this analysis that topological persistent homologies do change dramatically at seizure, a finding already obtained in previous analyses. This suggests that looking at changes in homology landscapes could be a predictor of seizure.
49

Berthet, Philippe, Jean-Claude Fort, and Thierry Klein. "A Central Limit Theorem for Wasserstein type distances between two distinct univariate distributions." Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 56, no. 2 (May 2020): 954–82. http://dx.doi.org/10.1214/19-aihp990.

50

Feyeux, Nelson, Arthur Vidard, and Maëlle Nodet. "Optimal transport for variational data assimilation." Nonlinear Processes in Geophysics 25, no. 1 (January 30, 2018): 55–66. http://dx.doi.org/10.5194/npg-25-55-2018.

Abstract:
Usually, data assimilation methods evaluate observation-model misfits using weighted L2 distances. However, this is not well suited when observed features are present in the model with position error. In this context, the Wasserstein distance stemming from optimal transport theory is more relevant. This paper proposes the adaptation of variational data assimilation for the use of such a measure. It provides a short introduction to optimal transport theory and discusses the importance of a proper choice of scalar product to compute the cost function gradient. It also extends the discussion to the way the descent is performed within the minimization process. These algorithmic changes are tested on a nonlinear shallow-water model, leading to the conclusion that optimal transport-based data assimilation seems to be promising to capture position errors in the model trajectory.