Academic literature on the topic 'Distances de Wasserstein'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Distances de Wasserstein.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Distances de Wasserstein"

1

Solomon, Justin, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. "Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains." ACM Transactions on Graphics 34, no. 4 (July 27, 2015): 1–11. http://dx.doi.org/10.1145/2766963.

2

Kindelan Nuñez, Rolando, Mircea Petrache, Mauricio Cerda, and Nancy Hitschfeld. "A Class of Topological Pseudodistances for Fast Comparison of Persistence Diagrams." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13202–10. http://dx.doi.org/10.1609/aaai.v38i12.29220.

Abstract:
Persistence diagrams (PDs) play a central role in topological data analysis and are used in an ever-increasing variety of applications. The comparison of PD data requires computing distances among large sets of PDs, with metrics which are accurate, theoretically sound, and fast to compute. Especially for denser multi-dimensional PDs, such comparison metrics are lacking. On the one hand, Wasserstein-type distances have high accuracy and theoretical guarantees, but they incur high computational cost. On the other hand, distances between vectorizations such as Persistence Statistics (PSs) have lower computational cost, but lack the accuracy guarantees and theoretical properties of a true distance over PD space. In this work we introduce a class of pseudodistances called Extended Topological Pseudodistances (ETDs), which have tunable complexity: they can approximate Sliced and classical Wasserstein distances at the high-complexity extreme, while being computationally lighter and close to Persistence Statistics at the low-complexity extreme, thus allowing users to interpolate between the two metrics. We build theoretical comparisons to show how to fit our new distances at an intermediate level between persistence vectorizations and Wasserstein distances. We also experimentally verify that ETDs outperform PSs in terms of accuracy and outperform Wasserstein and Sliced Wasserstein distances in terms of computational complexity.
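For readers who have not met the Sliced Wasserstein distance that ETDs approximate, the following minimal sketch (our own illustration, not the construction from the paper; it treats the inputs as plain 2D point clouds and ignores PD-specific details such as the diagonal) estimates it by averaging one-dimensional Wasserstein distances over random projections, assuming NumPy and SciPy are available.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced 1-Wasserstein distance between
    two point clouds X (n, d) and Y (m, d)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)              # random unit direction
        total += wasserstein_distance(X @ theta, Y @ theta)
    return total / n_projections

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))                       # stand-in for one diagram
Y = rng.normal(loc=2.0, size=(400, 2))              # stand-in for another
print(sliced_wasserstein(X, Y))
```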
3

Panaretos, Victor M., and Yoav Zemel. "Statistical Aspects of Wasserstein Distances." Annual Review of Statistics and Its Application 6, no. 1 (March 7, 2019): 405–31. http://dx.doi.org/10.1146/annurev-statistics-030718-104938.

Abstract:
Wasserstein distances are metrics on probability distributions inspired by the problem of optimal mass transportation. Roughly speaking, they measure the minimal effort required to reconfigure the probability mass of one distribution in order to recover the other distribution. They are ubiquitous in mathematics, with a long history that has seen them catalyze core developments in analysis, optimization, and probability. Beyond their intrinsic mathematical richness, they possess attractive features that make them a versatile tool for the statistician: They can be used to derive weak convergence and convergence of moments, and can be easily bounded; they are well-adapted to quantify a natural notion of perturbation of a probability distribution; and they seamlessly incorporate the geometry of the domain of the distributions in question, thus being useful for contrasting complex objects. Consequently, they frequently appear in the development of statistical theory and inferential methodology, and they have recently become an object of inference in themselves. In this review, we provide a snapshot of the main concepts involved in Wasserstein distances and optimal transportation, and a succinct overview of some of their many statistical aspects.
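To make the "minimal reconfiguration effort" picture concrete, here is a small sketch (ours, not taken from the review) that computes an empirical 1-Wasserstein distance in one dimension and the closed-form 2-Wasserstein distance between two univariate Gaussians; it only assumes NumPy and SciPy.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Empirical 1-Wasserstein (earth mover's) distance between two 1D samples.
x = rng.normal(loc=0.0, scale=1.0, size=5000)
y = rng.normal(loc=1.0, scale=2.0, size=5000)
print("empirical W1:", wasserstein_distance(x, y))

# For univariate Gaussians N(m1, s1^2) and N(m2, s2^2) the 2-Wasserstein
# distance is available in closed form: W2^2 = (m1 - m2)^2 + (s1 - s2)^2.
m1, s1, m2, s2 = 0.0, 1.0, 1.0, 2.0
print("Gaussian W2:", np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2))
```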
4

Kelbert, Mark. "Survey of Distances between the Most Popular Distributions." Analytics 2, no. 1 (March 1, 2023): 225–45. http://dx.doi.org/10.3390/analytics2010012.

Abstract:
We present a number of upper and lower bounds for the total variation distances between the most popular probability distributions. In particular, some estimates of the total variation distances in the cases of multivariate Gaussian distributions, Poisson distributions, binomial distributions, between a binomial and a Poisson distribution, and also in the case of negative binomial distributions are given. Next, estimates of the Lévy–Prohorov distance in terms of Wasserstein metrics are discussed, and the Fréchet, Wasserstein and Hellinger distances for multivariate Gaussian distributions are evaluated. Some novel context-sensitive distances are introduced, and a number of bounds mimicking classical results from information theory are proved.
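The Fréchet/2-Wasserstein distance between multivariate Gaussians that the survey evaluates has a well-known closed form; the sketch below (ours, assuming NumPy and SciPy) implements it.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """W2 between N(m1, S1) and N(m2, S2):
    W2^2 = |m1 - m2|^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    r = sqrtm(S2)
    bures = np.trace(S1 + S2 - 2.0 * np.real(sqrtm(r @ S1 @ r)))
    return np.sqrt(np.sum((m1 - m2) ** 2) + max(bures, 0.0))

m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.ones(2), np.array([[2.0, 0.5], [0.5, 1.0]])
print(gaussian_w2(m1, S1, m2, S2))
```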
5

Vayer, Titouan, Laetitia Chapel, Remi Flamary, Romain Tavenard, and Nicolas Courty. "Fused Gromov-Wasserstein Distance for Structured Objects." Algorithms 13, no. 9 (August 31, 2020): 212. http://dx.doi.org/10.3390/a13090212.

Abstract:
Optimal transport theory has recently found many applications in machine learning thanks to its capacity to meaningfully compare various machine learning objects that are viewed as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects, but treats them independently, whereas the Gromov–Wasserstein distance focuses on the relations between the elements, depicting the structure of the object, yet discarding its features. In this paper, we study the Fused Gromov-Wasserstein distance that extends the Wasserstein and Gromov–Wasserstein distances in order to encode simultaneously both the feature and structure information. We provide the mathematical framework for this distance in the continuous setting, prove its metric and interpolation properties, and provide a concentration result for the convergence of finite samples. We also illustrate and interpret its use in various applications, where structured objects are involved.
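As a rough illustration of how such a fused distance is used in practice, here is a sketch based on the POT (Python Optimal Transport) package; the function and parameter names (ot.dist, ot.unif, ot.gromov.fused_gromov_wasserstein2, alpha) reflect our reading of the POT documentation rather than the authors' own code, so they should be checked against the installed version.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

rng = np.random.default_rng(0)
# Two "structured objects": node features plus intra-object structure matrices.
X1, X2 = rng.normal(size=(20, 3)), rng.normal(size=(25, 3))
C1 = ot.dist(X1, X1)             # structure of object 1 (pairwise costs)
C2 = ot.dist(X2, X2)             # structure of object 2
M = ot.dist(X1, X2)              # feature cost across the two objects
p, q = ot.unif(20), ot.unif(25)  # uniform weights on the nodes

# alpha trades off features against structure: alpha=0 recovers Wasserstein,
# alpha=1 recovers Gromov-Wasserstein (per the POT convention assumed here).
fgw_cost = ot.gromov.fused_gromov_wasserstein2(
    M, C1, C2, p, q, loss_fun="square_loss", alpha=0.5
)
print(fgw_cost)
```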
6

Belili, Nacereddine, and Henri Heinich. "Distances de Wasserstein et de Zolotarev." Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 330, no. 9 (May 2000): 811–14. http://dx.doi.org/10.1016/s0764-4442(00)00274-3.

7

Peyre, Rémi. "Comparison between W2 distance and Ḣ−1 norm, and Localization of Wasserstein distance." ESAIM: Control, Optimisation and Calculus of Variations 24, no. 4 (October 2018): 1489–501. http://dx.doi.org/10.1051/cocv/2017050.

Abstract:
It is well known that the quadratic Wasserstein distance W2(⋅, ⋅) is formally equivalent, for infinitesimally small perturbations, to some weighted H−1 homogeneous Sobolev norm. In this article I show that this equivalence can be integrated to get non-asymptotic comparison results between these distances. Then I give an application of these results to prove that the W2 distance exhibits some localization phenomenon: if μ and ν are measures on ℝn and ϕ: ℝn → ℝ+ is some bump function with compact support, then under mild hypotheses, the Wasserstein distance between ϕ ⋅ μ and ϕ ⋅ ν can be bounded from above by an explicit multiple of W2(μ, ν).
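For readers who want the "formal equivalence" spelled out, the standard linearization behind the abstract can be stated informally as follows (a well-known heuristic written in our own notation; the paper gives the precise hypotheses and the non-asymptotic versions).

```latex
W_2\bigl(\mu,\;\mu + \varepsilon\,\delta\bigr)
  = \varepsilon\,\lVert \delta \rVert_{\dot H^{-1}(\mu)} + o(\varepsilon),
\qquad
\lVert \delta \rVert_{\dot H^{-1}(\mu)}^{2}
  = \int_{\mathbb{R}^n} \lvert \nabla \varphi \rvert^{2}\,\mathrm{d}\mu,
\quad\text{where } -\nabla\cdot(\mu\,\nabla\varphi) = \delta .
```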
8

Tong, Qijun, and Kei Kobayashi. "Entropy-Regularized Optimal Transport on Multivariate Normal and q-normal Distributions." Entropy 23, no. 3 (March 3, 2021): 302. http://dx.doi.org/10.3390/e23030302.

Abstract:
Distances and divergences between probability measures play a central role in statistics, machine learning, and many other related fields. The Wasserstein distance has received much attention in recent years because of its distinctions from other distances or divergences. Although computing the Wasserstein distance is costly, entropy-regularized optimal transport was proposed as a computationally efficient approximation of the Wasserstein distance. The purpose of this study is to understand the theoretical aspects of entropy-regularized optimal transport. In this paper, we focus on entropy-regularized optimal transport on multivariate normal distributions and q-normal distributions. We obtain the explicit form of the entropy-regularized optimal transport cost on multivariate normal and q-normal distributions; this provides a perspective from which to understand the effect of entropy regularization, which was previously known only experimentally. Furthermore, we obtain the entropy-regularized Kantorovich estimator for the probability measure that satisfies certain conditions. We also demonstrate how the Wasserstein distance, optimal coupling, geometric structure, and statistical efficiency are affected by entropy regularization in some experiments. In particular, our results about the explicit form of the optimal coupling of the Tsallis entropy-regularized optimal transport on multivariate q-normal distributions and the entropy-regularized Kantorovich estimator are novel and will become the first step towards the understanding of a more general setting.
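Entropy-regularized optimal transport of the kind studied here is typically computed with Sinkhorn matrix scaling; a minimal self-contained NumPy sketch (ours, not the paper's code) is shown below.

```python
import numpy as np

def sinkhorn_cost(a, b, M, reg, n_iter=1000):
    """Entropy-regularized OT cost <P, M> between histograms a and b
    with ground-cost matrix M, computed by Sinkhorn iterations."""
    K = np.exp(-M / reg)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]    # approximate optimal coupling
    return np.sum(P * M)

# Toy example: two histograms on a 1D grid with squared-distance cost.
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-(x - 0.3) ** 2 / 0.01); a /= a.sum()
b = np.exp(-(x - 0.7) ** 2 / 0.01); b /= b.sum()
M = (x[:, None] - x[None, :]) ** 2
print(sinkhorn_cost(a, b, M, reg=1e-2))
```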
9

Beier, Florian, Robert Beinert, and Gabriele Steidl. "Multi-marginal Gromov–Wasserstein transport and barycentres." Information and Inference: A Journal of the IMA 12, no. 4 (September 18, 2023): 2720–52. http://dx.doi.org/10.1093/imaiai/iaad041.

Abstract:
Gromov–Wasserstein (GW) distances are combinations of Gromov–Hausdorff and Wasserstein distances that allow the comparison of two different metric measure spaces (mm-spaces). Due to their invariance under measure- and distance-preserving transformations, they are well suited for many applications in graph and shape analysis. In this paper, we introduce the concept of multi-marginal GW transport between a set of mm-spaces as well as its regularized and unbalanced versions. As a special case, we discuss multi-marginal fused variants, which combine the structure information of an mm-space with label information from an additional label space. To tackle the new formulations numerically, we consider the bi-convex relaxation of the multi-marginal GW problem, which is tight in the balanced case if the cost function is conditionally negative definite. The relaxed model can be solved by an alternating minimization, where each step can be performed by a multi-marginal Sinkhorn scheme. We show relations of our multi-marginal GW problem to (unbalanced, fused) GW barycentres and present various numerical results, which indicate the potential of the concept.
10

Zhang, Zhonghui, Huarui Jing, and Chihwa Kao. "High-Dimensional Distributionally Robust Mean-Variance Efficient Portfolio Selection." Mathematics 11, no. 5 (March 6, 2023): 1272. http://dx.doi.org/10.3390/math11051272.

Abstract:
This paper introduces a novel distributionally robust mean-variance portfolio estimator based on the projection robust Wasserstein (PRW) distance. This approach addresses the issue of increasing conservatism of portfolio allocation strategies due to high-dimensional data. Our simulation results show the robustness of the PRW-based estimator in the presence of noisy data and its ability to achieve a higher Sharpe ratio than regular Wasserstein distances when dealing with a large number of assets. Our empirical study also demonstrates that the proposed portfolio estimator outperforms classic “plug-in” methods using various covariance estimators in terms of risk when evaluated out of sample.

Dissertations / Theses on the topic "Distances de Wasserstein"

1

Boissard, Emmanuel. "Problèmes d'interaction discret-continu et distances de Wasserstein." Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1389/.

Abstract:
We study in this thesis several approximation problems using tools from optimal transportation theory. Wasserstein distances provide error bounds for the particle approximation of the solutions of certain partial differential equations. They also serve as natural measures of distortion in quantization and clustering problems. A question associated with these problems is to study the rate of convergence in the empirical law of large numbers for this distortion. The first part of this thesis establishes non-asymptotic bounds, in particular in infinite-dimensional Banach spaces, as well as in cases where the observations are not independent. The second part is devoted to the study of two models arising from the modelling of animal movement. We introduce a new individual-based model of ant trail formation, which we study experimentally through numerical simulations and a representation in terms of kinetic equations. We also study a variant of the Cucker-Smale model of bird flocking: we show that the associated Vlasov-type transport equation is well posed and establish results on its long-time behaviour. Finally, in a third part, we study some statistical applications of the notion of barycentre in the space of probability measures equipped with the Wasserstein distance, recently introduced by M. Agueh and G. Carlier.
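As a quick numerical companion to the first part (our toy experiment, not taken from the thesis), one can watch the empirical law of large numbers for the Wasserstein distortion at work by computing W1 between the empirical measure of n samples and a large reference sample; only NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
reference = rng.normal(size=200_000)   # large sample standing in for the N(0, 1) law

for n in [100, 1_000, 10_000, 100_000]:
    sample = rng.normal(size=n)
    # W1 between the empirical measure of n points and the reference;
    # in dimension one (with enough moments) the decay is of order n^{-1/2}.
    print(n, wasserstein_distance(sample, reference))
```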
2

Schrieber, Jörn. "Algorithms for Optimal Transport and Wasserstein Distances." PhD thesis (supervisor: Dominic Schuhmacher; reviewers: Dominic Schuhmacher, Anita Schöbel), Georg-August-Universität Göttingen. Göttingen: Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://d-nb.info/1179449304/34.

3

Seguy, Vivien Pierre François. "Measure Transport Approaches for Data Visualization and Learning." Kyoto University, 2018. http://hdl.handle.net/2433/233857.

4

Gairing, Jan, Michael Högele, Tetiana Kosenkova, and Alexei Kulik. "On the calibration of Lévy driven time series with coupling distances : an application in paleoclimate." Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/6978/.

Abstract:
This article aims at the statistical assessment of time series with large fluctuations in short time, which are assumed to stem from a continuous process perturbed by a Lévy process exhibiting a heavy tail behavior. We propose an easily implementable procedure to estimate efficiently the statistical difference between the noisy behavior of the data and a given reference jump measure in terms of so-called coupling distances. After a short introduction to Lévy processes and coupling distances we recall basic statistical approximation results and derive rates of convergence. In the sequel the procedure is elaborated in detail in an abstract setting and eventually applied in a case study to simulated and paleoclimate data. It indicates the dominant presence of a non-stable heavy-tailed jump Lévy component for some tail index greater than 2.
5

Flenghi, Roberta. "Théorème de la limite centrale pour des fonctionnelles non linéaires de la mesure empirique et pour le rééchantillonnage stratifié." Electronic Thesis or Diss., Marne-la-vallée, ENPC, 2023. http://www.theses.fr/2023ENPC0051.

Abstract:
This thesis is devoted to the central limit theorem, which, together with the strong law of large numbers, is one of the two fundamental limit theorems of probability theory. The central limit theorem, well known for linear functionals of the empirical measure of independent and identically distributed random vectors, has recently been extended to non-linear functionals. The main tool permitting this extension is the linear functional derivative, one of the notions of differentiation on the Wasserstein space of probability measures. We generalize this extension, first by relaxing the assumption of equal distributions and then by relaxing independence, so as to deal with the successive states of an ergodic Markov chain. In the second place, we focus on the stratified resampling mechanism, one of the resampling schemes commonly used in particle filters. We prove a central limit theorem for the first resampling according to this mechanism, under the assumption that the initial positions are independent and identically distributed and that the weights are proportional to a positive function of the positions such that the image of their common distribution by this function has a non-zero component absolutely continuous with respect to the Lebesgue measure. This result relies on the convergence in distribution of the fractional part of partial sums of the normalized weights to a random variable uniformly distributed on [0,1]. More generally, we prove the joint convergence in distribution of q variables modulo one, obtained as partial sums of a sequence of i.i.d. square-integrable random variables multiplied by a common factor given by some function of an empirical mean of the same sequence; the limit is uniformly distributed over [0,1]^q. To deal with the coupling introduced by the common factor, we assume that the common distribution of the random variables has a non-zero component absolutely continuous with respect to the Lebesgue measure, so that the convergence in the central limit theorem for this sequence holds in total variation distance. Under the conjecture that the convergence in distribution of fractional parts to a uniform random variable remains valid at the subsequent steps of a particle filter, which alternates selections according to the stratified resampling mechanism and mutations according to Markov kernels, we provide an inductive formula for the asymptotic variance of the resampled population after n steps. We perform numerical experiments which support the validity of this formula.
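For readers unfamiliar with the stratified resampling mechanism analysed in the second part, a standard NumPy implementation of the generic particle-filter routine looks as follows (our sketch, not code from the thesis).

```python
import numpy as np

def stratified_resample(weights, rng):
    """Stratified resampling: draw one uniform in each stratum
    [i/N, (i+1)/N) and invert the CDF of the normalized weights."""
    N = len(weights)
    positions = (np.arange(N) + rng.uniform(size=N)) / N
    cdf = np.cumsum(weights / np.sum(weights))
    return np.searchsorted(cdf, positions)   # indices of the selected particles

rng = np.random.default_rng(0)
particles = rng.normal(size=1000)
weights = np.exp(-0.5 * (particles - 1.0) ** 2)  # e.g. a Gaussian likelihood
resampled = particles[stratified_resample(weights, rng)]
print(resampled.mean())
```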
6

Bobbia, Benjamin. "Régression quantile extrême : une approche par couplage et distance de Wasserstein." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCD043.

Abstract:
This work deals with the estimation of conditional extreme quantiles. More precisely, we estimate high quantiles of a real distribution conditionally on the value of a covariate, potentially in high dimension. Such an estimation is carried out by introducing the proportional tail model. This model is studied with coupling methods: the first is based on empirical processes, whereas the second relies on optimal transport and optimal coupling. We provide estimators of both the quantiles and the model parameters, and we show their asymptotic normality using our coupling methods. We also provide a validation procedure for the proportional tail model. Moreover, we develop the second approach in the general framework of univariate extreme value theory.
7

Liu, Lu. "A Risk-Oriented Clustering Approach for Asset Categorization and Risk Measurement." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39444.

Abstract:
When faced with market risk for investments and portfolios, people often calculate a risk measure, which is a real number assigned to each random payoff. There are many ways to quantify the potential risk, among which the most important input is the features of future performance. Future distributions are unknown and thus always estimated from historical Profit and Loss (P&L) distributions. However, past data may not be appropriate for estimating the future; risk measures generated from single historical distributions can be subject to error. To overcome these shortcomings, one natural way implemented is to identify and categorize similar assets whose Profit and Loss distributions can be used as alternative scenarios. In practice, one of the most common and intuitive categorizations is sector, based on industry. It is widely agreed that companies in the same sector share the same, or related, business types and operating characteristics. But in the field of risk management, sector-based categorization does not necessarily mean assets are grouped in terms of their risk profiles, and we show that risk measures in the same sector tend to have large variation. Although improved risk measures related to distribution ambiguity have been discussed at length, we seek to develop a more risk-oriented categorization by providing a new clustering approach. Furthermore, our method can better inform us of the potential risk and the extreme worst-case scenario within the same category.
8

Lescornel, Hélène. "Covariance estimation and study of models of deformations between distributions with the Wasserstein distance." Toulouse 3, 2014. http://www.theses.fr/2014TOU30045.

Abstract:
The first part of this thesis is devoted to the covariance estimation of non-stationary stochastic processes. The model studied leads us to estimate the covariance of the process in different vector spaces of matrices. In Chapter 3 we study a model selection procedure based on the minimization of a penalized criterion, using concentration inequalities, and Chapter 4 presents a method based on unbiased risk estimation. In both cases oracle inequalities are obtained. The second part concerns the study of models of deformation between distributions. We assume that a random quantity epsilon is observed through a deformation function; the magnitude of the deformation, represented by a parameter theta, is what we aim to recover. We present several estimation methods based on the Wasserstein distance, which align the distributions of the observations in order to recover the deformation parameter. In the case of real-valued random variables, Chapter 7 gives consistency properties for an M-estimator and its asymptotic distribution, using Hadamard differentiability techniques to apply a functional Delta method. Chapter 8 concerns a Robbins-Monro-type estimator for the deformation parameter and presents convergence properties for a kernel estimator of the density of the variable epsilon obtained from the observations. The model is generalized to random variables in complete metric spaces in Chapter 9; then, with the aim of building a goodness-of-fit test, Chapter 10 gives results on the asymptotic distribution of a test statistic.
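To give a flavour of the Wasserstein-alignment estimators studied in the second part, here is a deliberately simplified toy version (ours; it replaces the thesis's general deformation by a pure scale change and the M-estimation by a grid search), assuming NumPy and SciPy.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
theta_true = 2.5
eps = rng.normal(size=2000)               # reference sample from the law of epsilon
obs = theta_true * rng.normal(size=2000)  # observations: a scale deformation of that law

# M-estimator idea: pick the theta whose deformation of the reference sample
# best aligns, in 1D Wasserstein distance, with the observed sample.
grid = np.linspace(0.5, 5.0, 200)
costs = [wasserstein_distance(t * eps, obs) for t in grid]
print("estimated theta:", grid[int(np.argmin(costs))])
```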
9

Boistard, Hélène. "Eficacia asintotica tests relacionados con el estadística de Wasserstein." Toulouse 3, 2007. http://www.theses.fr/2007TOU30155.

Abstract:
The goodness-of-fit test based on the Wasserstein distance is well adapted to location-scale families. Its asymptotic distribution under the null hypothesis has been known since the works by del Barrio et al. (1999, 2000). The subject of this thesis is the study of the asymptotic power of this test and of some related tests, by means of several efficiency criteria. After a short introduction, presented in the first chapter, which sets out the problem and the tools used, the second chapter is devoted to asymptotic results for multiple integrals with respect to the empirical process. These statistics are closely related to U-statistics, but they allow an important simplification of the classical hypotheses needed to establish the asymptotic distribution under the null hypothesis, under contiguous alternatives, and for the bootstrap. In the third chapter, we prove that the Wasserstein test statistic is equivalent to a test based on a double integral with respect to the empirical process. This allows us to apply the results of the previous chapter to this test, and to obtain information about its asymptotic efficiency in the framework of Gaussian shift experiments. The fourth chapter is devoted to Bahadur efficiency, an efficiency criterion based on large deviations theory. We establish a functional large deviations principle for L-statistics, under assumptions on the extremes of the underlying distribution. We also obtain a result for normalized L-statistics, the family to which the Wasserstein test statistic belongs.
10

Lebrat, Léo. "Projection au sens de Wasserstein 2 sur des espaces structurés de mesures." Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0035.

Abstract:
This thesis focuses on the approximation, for the 2-Wasserstein metric, of probability measures by structured measures. The structured measures under consideration are consistent discretizations of measures carried by continuous curves with bounded speed and acceleration. We compare two types of approximation of these curves: piecewise constant and piecewise linear. For each method, we develop fast algorithms that remain tractable for fine discretizations. The approximation problem splits into two steps, each with its own theoretical and algorithmic challenges: the computation of the 2-Wasserstein distance between a given measure and the structured measure, and its optimization with respect to the structure parameters. This work was initially motivated by the generation of MRI sampling trajectories for compressed-sensing acquisition, but we also give new potential applications of these methods.

Books on the topic "Distances de Wasserstein"

1

Computational Inversion with Wasserstein Distances and Neural Network Induced Loss Functions. [New York, N.Y.?]: [publisher not identified], 2022.

2

Figalli, Alessio, and Federico Glaudo. An Invitation to Optimal Transport, Wasserstein Distances, and Gradient Flows. European Mathematical Society, 2021.


Book chapters on the topic "Distances de Wasserstein"

1

Bachmann, Fynn, Philipp Hennig, and Dmitry Kobak. "Wasserstein t-SNE." In Machine Learning and Knowledge Discovery in Databases, 104–20. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-26387-3_7.

Abstract:
Scientific datasets often have hierarchical structure: for example, in surveys, individual participants (samples) might be grouped at a higher level (units) such as their geographical region. In these settings, the interest is often in exploring the structure on the unit level rather than on the sample level. Units can be compared based on the distance between their means, however this ignores the within-unit distribution of samples. Here we develop an approach for exploratory analysis of hierarchical datasets using the Wasserstein distance metric that takes into account the shapes of within-unit distributions. We use t-SNE to construct 2D embeddings of the units, based on the matrix of pairwise Wasserstein distances between them. The distance matrix can be efficiently computed by approximating each unit with a Gaussian distribution, but we also provide a scalable method to compute exact Wasserstein distances. We use synthetic data to demonstrate the effectiveness of our Wasserstein t-SNE, and apply it to data from the 2017 German parliamentary election, considering polling stations as samples and voting districts as units. The resulting embedding uncovers meaningful structure in the data.
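A minimal reconstruction of the Gaussian-approximation pipeline described in the abstract (ours, not the authors' published implementation; it assumes NumPy, SciPy, and scikit-learn, and the TSNE arguments metric='precomputed' and init='random' should be checked against the installed scikit-learn version): approximate each unit by a Gaussian, fill a pairwise W2 matrix, and embed the units with t-SNE on that precomputed matrix.

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.manifold import TSNE

def gaussian_w2(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between two Gaussian approximations."""
    r = sqrtm(S2)
    bures = np.trace(S1 + S2 - 2.0 * np.real(sqrtm(r @ S1 @ r)))
    return np.sqrt(np.sum((m1 - m2) ** 2) + max(bures, 0.0))

rng = np.random.default_rng(0)
# 30 "units" (e.g. voting districts), each a cloud of 2D "samples".
units = [rng.normal(loc=rng.normal(scale=3.0, size=2), size=(200, 2)) for _ in range(30)]
means = [u.mean(axis=0) for u in units]
covs = [np.cov(u.T) for u in units]

D = np.zeros((len(units), len(units)))
for i in range(len(units)):
    for j in range(i + 1, len(units)):
        D[i, j] = D[j, i] = gaussian_w2(means[i], covs[i], means[j], covs[j])

embedding = TSNE(metric="precomputed", init="random", perplexity=10).fit_transform(D)
print(embedding.shape)   # (30, 2): one 2D point per unit
```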
2

Villani, Cédric. "The Wasserstein distances." In Grundlehren der mathematischen Wissenschaften, 93–111. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-71050-9_6.

3

Barbe, Amélie, Marc Sebban, Paulo Gonçalves, Pierre Borgnat, and Rémi Gribonval. "Graph Diffusion Wasserstein Distances." In Machine Learning and Knowledge Discovery in Databases, 577–92. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67661-2_34.

4

Jacobs, Bart. "Drawing from an Urn is Isometric." In Lecture Notes in Computer Science, 101–20. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57228-9_6.

Abstract:
Drawing (a multiset of) coloured balls from an urn is one of the most basic models in discrete probability theory. Three modes of drawing are commonly distinguished: multinomial (draw-replace), hypergeometric (draw-delete), and Pólya (draw-add). These drawing operations are represented as maps from urns to distributions over multisets of draws. The set of urns is a metric space via the Wasserstein distance. The set of distributions over draws is also a metric space, using Wasserstein-over-Wasserstein. The main result of this paper is that the three draw operations are all isometries, that is, they preserve the Wasserstein distances.
5

Santambrogio, Filippo. "Wasserstein distances and curves in the Wasserstein spaces." In Optimal Transport for Applied Mathematicians, 177–218. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20828-2_5.

6

Öcal, Kaan, Ramon Grima, and Guido Sanguinetti. "Wasserstein Distances for Estimating Parameters in Stochastic Reaction Networks." In Computational Methods in Systems Biology, 347–51. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31304-3_24.

7

Carrillo, José Antonio, Young-Pil Choi, and Maxime Hauray. "The derivation of swarming models: Mean-field limit and Wasserstein distances." In Collective Dynamics from Bacteria to Crowds, 1–46. Vienna: Springer Vienna, 2014. http://dx.doi.org/10.1007/978-3-7091-1785-9_1.

8

Haeusler, Erich, and David M. Mason. "Asymptotic Distributions of Trimmed Wasserstein Distances Between the True and the Empirical Distribution Function." In Stochastic Inequalities and Applications, 279–98. Basel: Birkhäuser Basel, 2003. http://dx.doi.org/10.1007/978-3-0348-8069-5_16.

9

Walczak, Szymon M. "Wasserstein Distance." In SpringerBriefs in Mathematics, 1–10. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57517-9_1.

10

Breiding, Paul, Kathlén Kohn, and Bernd Sturmfels. "Wasserstein Distance." In Oberwolfach Seminars, 53–66. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51462-3_5.


Conference papers on the topic "Distances de Wasserstein"

1

Lopez, Adrian Tovar, and Varun Jog. "Generalization error bounds using Wasserstein distances." In 2018 IEEE Information Theory Workshop (ITW). IEEE, 2018. http://dx.doi.org/10.1109/itw.2018.8613445.

2

Memoli, Facundo. "Spectral Gromov-Wasserstein distances for shape matching." In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009. http://dx.doi.org/10.1109/iccvw.2009.5457690.

3

Prossel, Dominik, and Uwe D. Hanebeck. "Dirac Mixture Reduction Using Wasserstein Distances on Projected Cumulative Distributions." In 2022 25th International Conference on Information Fusion (FUSION). IEEE, 2022. http://dx.doi.org/10.23919/fusion49751.2022.9841286.

4

Steuernagel, Simon, Aaron Kurda, and Marcus Baum. "Point Cloud Registration based on Gaussian Mixtures and Pairwise Wasserstein Distances." In 2023 IEEE Symposium Sensor Data Fusion and International Conference on Multisensor Fusion and Integration (SDF-MFI). IEEE, 2023. http://dx.doi.org/10.1109/sdf-mfi59545.2023.10361440.

5

Perkey, Scott, Ana Carvalho, and Alberto Krone-Martins. "Using Fourier Coefficients and Wasserstein Distances to Estimate Entropy in Time Series." In 2023 IEEE 19th International Conference on e-Science (e-Science). IEEE, 2023. http://dx.doi.org/10.1109/e-science58273.2023.10254949.

6

Barbe, Amelie, Paulo Goncalves, Marc Sebban, Pierre Borgnat, Remi Gribonval, and Titouan Vayer. "Optimization of the Diffusion Time in Graph Diffused-Wasserstein Distances: Application to Domain Adaptation." In 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2021. http://dx.doi.org/10.1109/ictai52525.2021.00125.

7

Garcia Ramirez, Jesus. "Which Kernels to Transfer in Deep Q-Networks?" In LatinX in AI at Neural Information Processing Systems Conference 2019. Journal of LatinX in AI Research, 2019. http://dx.doi.org/10.52591/lxai201912087.

Abstract:
Deep Reinforcement Learning (DRL) combines the benefits of Deep Learning and Reinforcement Learning. However, it still requires long training times and a large number of instances to reach an acceptable performance. Transfer Learning (TL) offers an alternative to reduce the training time of DRL agents, using fewer instances and possibly improving performance. In this work, we propose a transfer learning formulation for DRL across tasks. Relevant source tasks are selected considering the action spaces and the Wasserstein distances of an output in a hidden layer of a convolutional neural network. Rather than transferring the whole source model, we propose a method for selecting only relevant kernels based on their entropy values, which results in smaller models that can produce better performances. In our experiments we use Deep Q-Networks (DQN) with Atari games. We evaluated the proposed method with different percentages of selected kernels and show that we can obtain similar performance to DQN in fewer interactions and with smaller models.
8

Choi, Youngwon, and Joong-Ho Won. "Ornstein Auto-Encoders." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/301.

Abstract:
We propose the Ornstein auto-encoder (OAE), a representation learning model for correlated data. In many interesting applications, data have nested structures. Examples include the VGGFace and MNIST datasets. We view such data as consisting of i.i.d. copies of a stationary random process, and seek a latent space representation of the observed sequences. This viewpoint necessitates a distance measure between two random processes. We propose to use Ornstein's d-bar distance, a process extension of Wasserstein's distance. We first show that the theorem by Bousquet et al. (2017) for Wasserstein auto-encoders extends to stationary random processes. This result, however, requires both encoder and decoder to map an entire sequence to another. We then show that, when exchangeability within a process, valid for VGGFace and MNIST, is assumed, these maps reduce to univariate ones, resulting in a much simpler, tractable optimization problem. Our experiments show that OAEs successfully separate individual sequences in the latent space, and can generate new variations of unknown, as well as known, identity. The latter has not been possible with other existing methods.
9

Kasai, Hiroyuki. "Multi-View Wasserstein Discriminant Analysis with Entropic Regularized Wasserstein Distance." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054427.

10

Su, Yuxin, Shenglin Zhao, Xixian Chen, Irwin King, and Michael Lyu. "Parallel Wasserstein Generative Adversarial Nets with Multiple Discriminators." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/483.

Abstract:
Wasserstein Generative Adversarial Nets (GANs) are newly proposed GAN algorithms and are widely used in computer vision, web mining, information retrieval, etc. However, the existing algorithms with approximated Wasserstein loss converge slowly due to heavy computation cost and usually generate unstable results as well. In this paper, we solve the computation cost problem by speeding up Wasserstein GANs with a well-designed, communication-efficient parallel architecture. Specifically, we develop a new problem formulation targeting the accurate evaluation of the Wasserstein distance and propose an easily parallelized optimization algorithm to train Wasserstein GANs. Compared to a traditional parallel architecture, our proposed framework is designed explicitly for the skew parameter updates between the generator network and the discriminator network. Rigorous experiments reveal that our proposed framework achieves a significant improvement in convergence speed, with comparable stability on generating images, compared to state-of-the-art Wasserstein GAN algorithms.