Selection of scholarly literature on the topic "Divergences de Bregman"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Divergences de Bregman".

Next to every work in the bibliography an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Divergences de Bregman"

1

Amari, S., and A. Cichocki. "Information geometry of divergence functions". Bulletin of the Polish Academy of Sciences: Technical Sciences 58, no. 1 (March 1, 2010): 183–95. http://dx.doi.org/10.2478/v10175-010-0019-1.

Annotation:
Measures of divergence between two points play a key role in many engineering problems. One such measure is a distance function, but there are many important measures which do not satisfy the properties of the distance. The Bregman divergence, Kullback-Leibler divergence and f-divergence are such measures. In the present article, we study the differential-geometrical structure of a manifold induced by a divergence function. It consists of a Riemannian metric and a pair of dually coupled affine connections, which are studied in information geometry. The class of Bregman divergences is characterized by a dually flat structure, which originates from the Legendre duality. A dually flat space admits a generalized Pythagorean theorem. The class of f-divergences, defined on a manifold of probability distributions, is characterized by information monotonicity, and the Kullback-Leibler divergence belongs to the intersection of both classes. The f-divergence always gives the α-geometry, which consists of the Fisher information metric and a dual pair of ±α-connections. The α-divergence is a special class of f-divergences. It is unique, sitting at the intersection of the f-divergence and Bregman divergence classes in a manifold of positive measures. The geometry derived from the Tsallis q-entropy and related divergences is also addressed.
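
To make the definition concrete: a Bregman divergence is generated by a convex function F via D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>. The following minimal Python sketch (an illustration, not code from the paper) instantiates it with the two generators behind the squared Euclidean distance and the Kullback-Leibler divergence:

    import numpy as np

    def bregman(F, grad_F, x, y):
        """Bregman divergence D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>."""
        return F(x) - F(y) - np.dot(grad_F(y), x - y)

    # Generator ||x||^2 yields the squared Euclidean distance.
    sq = lambda x: np.dot(x, x)
    sq_grad = lambda x: 2.0 * x

    # Generator = negative Shannon entropy yields the generalized KL divergence,
    # which reduces to the usual KL divergence on the probability simplex.
    negent = lambda p: np.sum(p * np.log(p))
    negent_grad = lambda p: np.log(p) + 1.0

    x = np.array([0.2, 0.3, 0.5])
    y = np.array([0.4, 0.4, 0.2])
    print(bregman(sq, sq_grad, x, y))          # equals ||x - y||^2
    print(bregman(negent, negent_grad, x, y))  # equals sum(x * log(x / y)) here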
2

ABDULLAH, AMIRALI, JOHN MOELLER, and SURESH VENKATASUBRAMANIAN. "APPROXIMATE BREGMAN NEAR NEIGHBORS IN SUBLINEAR TIME: BEYOND THE TRIANGLE INEQUALITY". International Journal of Computational Geometry & Applications 23, no. 04n05 (August 2013): 253–301. http://dx.doi.org/10.1142/s0218195913600066.

Annotation:
Bregman divergences are important distance measures that are used extensively in data-driven applications such as computer vision, text mining, and speech processing, and are a key focus of interest in machine learning. Answering nearest neighbor (NN) queries under these measures is very important in these applications and has been the subject of extensive study, but is problematic because these distance measures lack metric properties like symmetry and the triangle inequality. In this paper, we present the first provably approximate nearest-neighbor (ANN) algorithms for a broad sub-class of Bregman divergences under some assumptions. Specifically, we examine Bregman divergences which can be decomposed along each dimension and our bounds also depend on restricting the size of our allowed domain. We obtain bounds for both the regular asymmetric Bregman divergences as well as their symmetrized versions. To do so, we develop two geometric properties vital to our analysis: a reverse triangle inequality (RTI) and a relaxed triangle inequality called μ-defectiveness where μ is a domain-dependent value. Bregman divergences satisfy the RTI but not μ-defectiveness. However, we show that the square root of a Bregman divergence does satisfy μ-defectiveness. This allows us to then utilize both properties in an efficient search data structure that follows the general two-stage paradigm of a ring-tree decomposition followed by a quad tree search used in previous near-neighbor algorithms for Euclidean space and spaces of bounded doubling dimension. Our first algorithm resolves a query for a d-dimensional (1 + ε)-ANN in [Formula: see text] time and O(n log^(d-1) n) space and holds for generic μ-defective distance measures satisfying a RTI. Our second algorithm is more specific in analysis to the Bregman divergences and uses a further structural parameter, the maximum ratio of second derivatives over each dimension of our allowed domain (c0). This allows us to locate a (1 + ε)-ANN in O(log n) time and O(n) space, where there is a further (c0)^d factor in the big-Oh for the query time.
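
The metric failure discussed here is easy to observe numerically. The sketch below (an illustration under the paper's setting of a bounded domain, not the paper's data structure) samples random triples and compares the worst triangle-inequality ratio of the generalized KL divergence against that of its square root:

    import numpy as np

    rng = np.random.default_rng(0)

    def kl(p, q):
        """Generalized KL divergence, the Bregman divergence of negative entropy."""
        return np.sum(p * np.log(p / q) - p + q)

    # Sample triples from a bounded domain (coordinates in [0.1, 1]) and record
    # the worst ratio d(x, z) / (d(x, y) + d(y, z)) for the divergence itself
    # and for its square root.
    worst_raw, worst_sqrt = 0.0, 0.0
    for _ in range(20000):
        x, y, z = rng.uniform(0.1, 1.0, size=(3, 4))
        worst_raw = max(worst_raw, kl(x, z) / (kl(x, y) + kl(y, z)))
        worst_sqrt = max(worst_sqrt,
                         np.sqrt(kl(x, z)) / (np.sqrt(kl(x, y)) + np.sqrt(kl(y, z))))

    print(f"worst ratio, KL itself:  {worst_raw:.2f}")   # typically well above 1
    print(f"worst ratio, sqrt of KL: {worst_sqrt:.2f}")  # much closer to 1 here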
3

Nielsen, Frank. "On Voronoi Diagrams on the Information-Geometric Cauchy Manifolds". Entropy 22, no. 7 (June 28, 2020): 713. http://dx.doi.org/10.3390/e22070713.

Annotation:
We study the Voronoi diagrams of a finite set of Cauchy distributions and their dual complexes from the viewpoint of information geometry by considering the Fisher-Rao distance, the Kullback-Leibler divergence, the chi square divergence, and a flat divergence derived from Tsallis entropy related to the conformal flattening of the Fisher-Rao geometry. We prove that the Voronoi diagrams of the Fisher-Rao distance, the chi square divergence, and the Kullback-Leibler divergences all coincide with a hyperbolic Voronoi diagram on the corresponding Cauchy location-scale parameters, and that the dual Cauchy hyperbolic Delaunay complexes are Fisher orthogonal to the Cauchy hyperbolic Voronoi diagrams. The dual Voronoi diagrams with respect to the dual flat divergences amount to dual Bregman Voronoi diagrams, and their dual complexes are regular triangulations. The primal Bregman Voronoi diagram is the Euclidean Voronoi diagram and the dual Bregman Voronoi diagram coincides with the Cauchy hyperbolic Voronoi diagram. In addition, we prove that the square root of the Kullback-Leibler divergence between Cauchy distributions yields a metric distance which is Hilbertian for the Cauchy scale families.
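
A piece of this study that is simple to reproduce is the closed-form Kullback-Leibler divergence between two Cauchy densities, which is, unusually, symmetric in its arguments. The formula below is the one reported in this line of work; treat it as an assumption of this sketch, sanity-checked here by Monte Carlo:

    import numpy as np

    def kl_cauchy(l1, s1, l2, s2):
        """Closed-form KL divergence between Cauchy(l1, s1) and Cauchy(l2, s2),
        as reported in the information-geometry literature (assumption of this
        sketch).  Note that it is symmetric in the two distributions."""
        return np.log(((l1 - l2) ** 2 + (s1 + s2) ** 2) / (4.0 * s1 * s2))

    print(kl_cauchy(0.0, 1.0, 2.0, 0.5))   # equals kl_cauchy(2.0, 0.5, 0.0, 1.0)

    # Monte Carlo sanity check: E_p[log(p/q)] with p = Cauchy(0, 1), q = Cauchy(2, 0.5).
    rng = np.random.default_rng(1)
    x = rng.standard_cauchy(1_000_000)
    log_ratio = (np.log(1.0 / 0.5)
                 + np.log((x - 2.0) ** 2 + 0.5 ** 2)
                 - np.log(x ** 2 + 1.0 ** 2))
    print(log_ratio.mean())  # approximately kl_cauchy(0.0, 1.0, 2.0, 0.5)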
4

Li, Wuchen. "Transport information Bregman divergences". Information Geometry 4, no. 2 (November 15, 2021): 435–70. http://dx.doi.org/10.1007/s41884-021-00063-5.

5

Nielsen, Frank. "On a Generalization of the Jensen–Shannon Divergence and the Jensen–Shannon Centroid". Entropy 22, no. 2 (February 16, 2020): 221. http://dx.doi.org/10.3390/e22020221.

Annotation:
The Jensen–Shannon divergence is a renowned bounded symmetrization of the Kullback–Leibler divergence which does not require probability densities to have matching supports. In this paper, we introduce a vector-skew generalization of the scalar α-Jensen–Bregman divergences and derive from it the vector-skew α-Jensen–Shannon divergences. We prove that the vector-skew α-Jensen–Shannon divergences are f-divergences and study the properties of these novel divergences. Finally, we report an iterative algorithm to numerically compute the Jensen–Shannon-type centroids for a set of probability densities belonging to a mixture family: This includes the case of the Jensen–Shannon centroid of a set of categorical distributions or normalized histograms.
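
A minimal sketch of the skewing idea, using one common scalar convention (the paper's generalization replaces the scalar skew parameter by a vector of them):

    import numpy as np

    def kl(p, q):
        return np.sum(p * np.log(p / q))

    def skew_js(p, q, alpha=0.5):
        """alpha-skew Jensen-Shannon divergence: KL divergences to the
        alpha-weighted mixture m = (1 - alpha) p + alpha q."""
        m = (1.0 - alpha) * p + alpha * q
        return (1.0 - alpha) * kl(p, m) + alpha * kl(q, m)

    p = np.array([0.1, 0.4, 0.5])
    q = np.array([0.3, 0.3, 0.4])
    print(skew_js(p, q))        # alpha = 1/2 recovers the usual Jensen-Shannon divergence
    print(skew_js(p, q, 0.25))  # a skewed, asymmetric variant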
6

Chen, P., Y. Chen, and M. Rao. "Metrics defined by Bregman Divergences". Communications in Mathematical Sciences 6, no. 4 (2008): 915–26. http://dx.doi.org/10.4310/cms.2008.v6.n4.a6.

7

Bock, Andreas A., and Martin S. Andersen. "Preconditioner Design via Bregman Divergences". SIAM Journal on Matrix Analysis and Applications 45, no. 2 (June 7, 2024): 1148–82. http://dx.doi.org/10.1137/23m1566637.

8

Zhang, Jianwen, and Changshui Zhang. "Multitask Bregman Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 655–60. http://dx.doi.org/10.1609/aaai.v24i1.7674.

Annotation:
Traditional clustering methods deal with a single clustering task on a single data set. However, in some newly emerging applications, multiple similar clustering tasks are involved simultaneously. In this case, we not only desire a partition for each task, but also want to discover the relationship among clusters of different tasks. It is also expected that the learnt relationship among tasks can improve the performance of each single task. In this paper, we propose a general framework for this problem and further suggest a specific approach. In our approach, we alternately update clusters and learn the relationship between clusters of different tasks, and the two phases boost each other. Our approach is based on the general Bregman divergence, hence it is suitable for a large family of assumptions on data distributions and divergences. Empirical results on several benchmark data sets validate the approach.
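
The single-task building block underlying this framework is Bregman hard clustering: k-means with the squared Euclidean distance replaced by an arbitrary Bregman divergence, for which the optimal centroid remains the arithmetic mean. A minimal sketch of that building block (illustrative; not the multitask algorithm of the paper):

    import numpy as np

    def kl_div(x, c):
        """Generalized KL divergence, a Bregman divergence on the positive orthant."""
        return np.sum(x * np.log(x / c) - x + c, axis=-1)

    def bregman_kmeans(X, k, divergence, iters=50, seed=0):
        """Bregman hard clustering: assign each point to the closest centroid
        under the divergence; the optimal centroid of a cluster is the
        arithmetic mean, for every Bregman divergence."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(
                np.stack([divergence(X, c) for c in centroids], axis=1), axis=1)
            centroids = np.stack([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)])  # keep old centroid if a cluster empties
        return labels, centroids

    rng = np.random.default_rng(1)
    X = np.vstack([rng.gamma(2.0, 1.0, (50, 3)), rng.gamma(9.0, 1.0, (50, 3))])
    labels, centroids = bregman_kmeans(X, k=2, divergence=kl_div)
    print(centroids)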
9

Liang, Xiao. "A Note on Divergences". Neural Computation 28, no. 10 (October 2016): 2045–62. http://dx.doi.org/10.1162/neco_a_00878.

Annotation:
In many areas of neural computation, like learning, optimization, estimation, and inference, suitable divergences play a key role. In this note, we study the conjecture presented by Amari (2009) and find a counterexample to show that the conjecture does not hold generally. Moreover, we investigate two classes of [Formula: see text]-divergence (Zhang, 2004), weighted f-divergence and weighted [Formula: see text]-divergence, and prove that if a divergence is a weighted f-divergence, as well as a Bregman divergence, then it is a weighted [Formula: see text]-divergence. This result reduces in form to the main theorem established by Amari (2009) when [Formula: see text] [Formula: see text].
10

Voronov, Yuri, and Matvej Sviridov. "NEW INTERREGIONAL DIFFERENCES MEASUREMENT TOOLS". Interexpo GEO-Siberia 3, no. 1 (2019): 71–77. http://dx.doi.org/10.33764/2618-981x-2019-3-1-71-77.

Annotation:
The article proposes new methods for measuring differences between the economic development levels of RF regions. Currently, indicators such as the mean, median, and mode are used for such comparisons, and the choice among the three options is made not on formal criteria but on the basis of general reasoning. The authors propose to build a system of comparisons on the basis of Bregman divergences, which biologists are already beginning to use when comparing regions; these measures have not yet been applied in economic analysis. The Kullback-Leibler divergence (KL-divergence), the primary variant of the Bregman divergence, is investigated. Pairwise comparisons of the level of immigration to the regions are proposed as a criterion of the quality of difference measurement. The main calculations are carried out on the example of wage levels in the regions of the south of Western Siberia. The article concludes on the prospects of using the existing variants of the Bregman divergence, as well as the possibility of developing new divergence options that best match the comparative attractiveness of the regions for immigrants.
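
For readers unfamiliar with the proposed tool, a toy pairwise comparison of two regions' wage-band shares under the KL divergence looks like this (the numbers are hypothetical):

    import numpy as np

    def kl(p, q):
        """KL divergence, the Bregman divergence generated by negative entropy."""
        return np.sum(p * np.log(p / q))

    # Hypothetical shares of four wage bands in two regions (illustrative only).
    region_a = np.array([0.30, 0.45, 0.20, 0.05])
    region_b = np.array([0.15, 0.35, 0.35, 0.15])

    # KL is asymmetric, so the direction of a pairwise regional comparison matters.
    print(kl(region_a, region_b), kl(region_b, region_a))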

Dissertations on the topic "Divergences de Bregman"

1

Wu, Xiaochuan. "Clustering and visualisation with Bregman divergences". Thesis, University of the West of Scotland, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.730022.

2

Sun, Jiang. "Extending the metric multidimensional scaling with Bregman divergences". Thesis, University of the West of Scotland, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556070.

3

Godeme, Jean-Jacques. "Phase retrieval with non-Euclidean Bregman based geometry". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC214.

Annotation:
In this work, we investigate the phase retrieval problem for real-valued signals in finite dimension, a challenge encountered across various scientific and engineering disciplines. We explore two complementary approaches: retrieval with and without regularization. In both settings, our work focuses on relaxing the Lipschitz-smoothness assumption generally required by first-order splitting algorithms, which does not hold for phase retrieval cast as a minimization problem. The key idea is to replace the Euclidean geometry by a non-Euclidean Bregman divergence associated with an appropriate kernel. We use a Bregman gradient/mirror descent algorithm with this divergence to solve the phase retrieval problem without regularization, and we show exact (up to a global sign) recovery both in a deterministic setting and with high probability for a sufficient number of random measurements (Gaussian and Coded Diffraction Patterns). Furthermore, we establish the robustness of this approach against small additive noise. Shifting to regularized phase retrieval, we first develop and analyze an Inertial Bregman Proximal Gradient algorithm for minimizing the sum of two functions in finite dimension, one of which is convex and possibly nonsmooth and the second of which is relatively smooth in the Bregman geometry. We provide both global and local convergence guarantees for this algorithm. Finally, we study noiseless and stable recovery of low-complexity regularized phase retrieval. For this, we formulate the problem as the minimization of an objective functional involving a nonconvex smooth data-fidelity term and a convex regularizer promoting solutions conforming to some notion of low complexity related to their nonsmoothness points. We establish conditions for exact and stable recovery and provide sample-complexity bounds for random measurements to ensure that these conditions hold. These sample bounds depend on the low complexity of the signals to be recovered. Our new results allow us to go far beyond the case of sparse phase retrieval.
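
A minimal sketch of the unregularized approach, assuming the quartic kernel h(x) = ||x||^4/4 + ||x||^2/2 that is standard in the relative-smoothness literature on phase retrieval; the thesis's exact-recovery guarantees also rely on a careful initialization, which this random-start sketch omits:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 10, 80
    A = rng.standard_normal((m, n))
    x_true = rng.standard_normal(n)
    y = (A @ x_true) ** 2                      # phaseless (squared) measurements

    def grad_f(x):
        """Gradient of f(x) = (1/4m) * sum_i (<a_i, x>^2 - y_i)^2."""
        r = A @ x
        return A.T @ ((r ** 2 - y) * r) / m

    # Relative-smoothness constant of f w.r.t. h (a standard estimate).
    row_sq = np.sum(A ** 2, axis=1)
    L = np.mean(3.0 * row_sq ** 2 + y * row_sq)

    def mirror_step(x, g, step):
        """Bregman/mirror step for h(x) = ||x||^4/4 + ||x||^2/2, whose gradient
        is (||x||^2 + 1) x; inverting grad h only needs the unique real root
        of t^3 + t = ||v||."""
        v = (np.dot(x, x) + 1.0) * x - step * g
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return v
        roots = np.roots([1.0, 0.0, 1.0, -nv])
        t = np.real(roots[np.abs(roots.imag) < 1e-9][0])
        return (t / nv) * v

    x = rng.standard_normal(n)
    for _ in range(5000):
        x = mirror_step(x, grad_f(x), step=1.0 / L)

    # Recovery is only possible up to a global sign; with a random start this
    # often (not always) succeeds.
    print(min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))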
4

Danilevicz, Ian Meneghel. "Detecting Influential observations in spatial models using Bregman divergence". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/104/104131/tde-13112018-160231/.

Annotation:
How can one evaluate whether a spatial model is well adjusted to a problem? How can one know whether it is the best model within the class of conditional autoregressive (CAR) and simultaneous autoregressive (SAR) models, including the homoscedastic and heteroscedastic cases? To answer these questions within the Bayesian framework, we propose new ways to apply the Bregman divergence, as well as recent information criteria such as the widely applicable information criterion (WAIC) and leave-one-out cross-validation (LOO). The functional Bregman divergence is a generalized form of the well-known Kullback-Leibler (KL) divergence. It has many special cases which can be used to identify influential points. All the posterior distributions displayed in this text were estimated by Hamiltonian Monte Carlo (HMC), an optimized version of the Metropolis-Hastings algorithm. All the ideas presented here were evaluated by both simulation and real data.
5

Adamcik, Martin. "Collective reasoning under uncertainty and inconsistency". Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/collective-reasoning-under-uncertainty-and-inconsistency(7fab8021-8beb-45e7-8b45-7cb4fadd70be).html.

Annotation:
In this thesis we investigate some global desiderata for probabilistic knowledge merging given several possibly jointly inconsistent, but individually consistent knowledge bases. We show that the most naive methods of merging, which combine applications of a single expert inference process with the application of a pooling operator, fail to satisfy certain basic consistency principles. We therefore adopt a different approach. Following recent developments in machine learning where Bregman divergences appear to be powerful, we define several probabilistic merging operators which minimise the joint divergence between merged knowledge and given knowledge bases. In particular we prove that in many cases the result of applying such operators coincides with the sets of fixed points of averaging projective procedures - procedures which combine knowledge updating with pooling operators of decision theory. We develop relevant results concerning the geometry of Bregman divergences and prove new theorems in this field. We show that this geometry connects nicely with some desirable principles which have arisen in the epistemology of merging. In particular, we prove that the merging operators which we define by means of convex Bregman divergences satisfy analogues of the principles of merging due to Konieczny and Pino-Perez. Additionally, we investigate how such merging operators behave with respect to principles concerning irrelevant information, independence and relativisation which have previously been intensively studied in case of single-expert probabilistic inference. Finally, we argue that two particular probabilistic merging operators which are based on Kullback-Leibler divergence, a special type of Bregman divergence, have overall the most appealing properties amongst merging operators hitherto considered. By investigating some iterative procedures we propose algorithms to practically compute them.
6

Sears, Timothy Dean. "Generalized Maximum Entropy, Convexity and Machine Learning". The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20090525.210315.

Annotation:
This thesis identifies and extends techniques that can be linked to the principle of maximum entropy (maxent) and applied to parameter estimation in machine learning and statistics. Entropy functions based on deformed logarithms are used to construct Bregman divergences, and together these represent a generalization of relative entropy. The framework is analyzed using convex analysis to characterize generalized forms of exponential family distributions. Various connections to the existing machine learning literature are discussed and the techniques are applied to the problem of non-negative matrix factorization (NMF).
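
The pipeline described here (deformed logarithm -> entropy -> Bregman divergence) can be sketched for the Tsallis family; the generator below, an illustration of that standard construction rather than code from the thesis, recovers the KL divergence in the q -> 1 limit:

    import numpy as np

    def bregman(F, grad_F, x, y):
        return F(x) - F(y) - np.dot(grad_F(y), x - y)

    def tsallis_generator(q):
        """Negative Tsallis q-entropy, the convex generator obtained from the
        deformed (q-)logarithm; its Bregman divergence tends to KL as q -> 1."""
        F = lambda p: (np.sum(p ** q) - 1.0) / (q - 1.0)
        grad_F = lambda p: q * p ** (q - 1.0) / (q - 1.0)
        return F, grad_F

    p = np.array([0.2, 0.3, 0.5])
    r = np.array([0.4, 0.4, 0.2])
    for q in (1.001, 1.5, 2.0):
        F, grad_F = tsallis_generator(q)
        print(q, bregman(F, grad_F, p, r))
    print("KL:", np.sum(p * np.log(p / r)))  # close to the q = 1.001 line above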
7

Silveti, Falls Antonio. „First-order noneuclidean splitting methods for large-scale optimization : deterministic and stochastic algorithms“. Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC204.

Annotation:
In this work we develop and examine two novel first-order splitting algorithms for solving large-scale composite optimization problems in infinite-dimensional spaces. Such problems are ubiquitous in many areas of science and engineering, particularly in data science and imaging sciences. Our work is focused on relaxing the Lipschitz-smoothness assumptions generally required by first-order splitting algorithms by replacing the Euclidean energy with a Bregman divergence. These developments allow one to solve problems having more exotic geometry than that of the usual Euclidean setting. One algorithm is a hybridization of the conditional gradient algorithm, making use of a linear minimization oracle at each iteration, with an augmented Lagrangian algorithm, allowing for affine constraints. The other algorithm is a primal-dual splitting algorithm incorporating Bregman divergences for computing the associated proximal operators. For both of these algorithms, our analysis shows convergence of the Lagrangian values, subsequential weak convergence of the iterates to solutions, and rates of convergence. In addition to these novel deterministic algorithms, we also introduce and study their stochastic extensions from a perturbation perspective. Our results in this part include almost sure convergence results for all the same quantities as in the deterministic setting, with rates as well. Finally, we tackle new problems that are only accessible through the relaxed assumptions our algorithms allow. We demonstrate numerical efficiency and verify our theoretical results on problems like low-rank sparse matrix completion, inverse problems on the simplex, and entropically regularized Wasserstein inverse problems.
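
The flavor of a Bregman proximal step is easiest to see on the probability simplex, where the negative-entropy kernel turns the proximal-gradient update into a closed-form multiplicative update. A generic mirror-step sketch (not one of the two algorithms of the thesis):

    import numpy as np

    def entropic_mirror_step(x, grad, step):
        """Bregman proximal-gradient step on the probability simplex with the
        negative-entropy kernel: argmin_u <grad, u> + (1/step) * KL(u, x).
        The minimizer is the exponentiated-gradient (multiplicative) update."""
        w = x * np.exp(-step * grad)
        return w / w.sum()

    # Minimize f(u) = 0.5 * ||u - t||^2 over the simplex, with t outside it.
    t = np.array([0.8, 0.6, -0.1])
    u = np.ones(3) / 3
    for _ in range(500):
        u = entropic_mirror_step(u, u - t, step=0.5)
    print(u)  # close to (0.6, 0.4, 0), the Euclidean projection of t onto the simplex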
8

Hasnat, Md Abul. "Unsupervised 3D image clustering and extension to joint color and depth segmentation". Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.

Annotation:
Access to 3D images at a reasonable frame rate is widespread now, thanks to recent advances in low-cost depth sensors as well as efficient methods to compute 3D from 2D images. As a consequence, there is strong demand for enhancing existing computer vision applications by incorporating 3D information. Indeed, numerous studies have demonstrated that the accuracy of different tasks increases when 3D information is included as an additional feature. However, for the task of indoor scene analysis and segmentation, several important issues remain, such as: (a) how can the 3D information itself be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we consider image normals as the prominent feature from the 3D image and cluster them with methods based on finite statistical mixture models. We consider the Bregman Soft Clustering method to ensure computationally efficient clustering. Moreover, we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel Model-Based Clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application to 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color images, also called RGB-D images. To this aim, we first propose a statistical image generation model for RGB-D images. Then, we propose a novel RGB-D segmentation method using joint color-spatial-axial clustering and a statistical planar region merging method. Results show that the proposed method is comparable with state-of-the-art methods and requires less computation time. Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable to clustering other types of data, such as speech and gene expressions, and that they can be used for complex tasks such as joint image-speech data analysis.
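
The directional-clustering core can be pictured with a small sketch: hard clustering of unit surface normals by cosine similarity, which is the assignment rule of a von Mises-Fisher mixture with shared concentration (a simplified stand-in for the Bregman Soft Clustering used in the thesis):

    import numpy as np

    def spherical_kmeans(N, k, iters=30, seed=0):
        """Hard clustering of unit vectors by cosine similarity; the cluster
        representative is the re-normalized mean direction."""
        rng = np.random.default_rng(seed)
        mu = N[rng.choice(len(N), k, replace=False)]
        for _ in range(iters):
            labels = np.argmax(N @ mu.T, axis=1)   # most-aligned mean direction
            for j in range(k):
                if np.any(labels == j):
                    m = N[labels == j].sum(axis=0)
                    mu[j] = m / np.linalg.norm(m)
        return labels, mu

    # Synthetic normals around two directions (e.g., floor and wall of a scene).
    rng = np.random.default_rng(1)
    N = np.vstack([rng.normal([0.0, 0.0, 1.0], 0.1, (100, 3)),
                   rng.normal([1.0, 0.0, 0.0], 0.1, (100, 3))])
    N /= np.linalg.norm(N, axis=1, keepdims=True)
    labels, mu = spherical_kmeans(N, k=2)
    print(mu)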
9

Acharyya, Sreangsu. "Learning to rank in supervised and unsupervised settings using convexity and monotonicity". 2013. http://hdl.handle.net/2152/21154.

Annotation:
This dissertation addresses the task of learning to rank, both in the supervised and unsupervised settings, by exploiting the interplay of convex functions, monotonic mappings and their fixed points. In the supervised setting of learning to rank, one wishes to learn from examples of correctly ordered items, whereas in the unsupervised setting, one tries to maximize some quantitatively defined characteristic of a "good" ranking. A ranking method selects one permutation from among the combinatorially many permutations defined on the items to rank. Accomplishing this optimally in the supervised setting, with minimal loss in generality, if any, is challenging. In this dissertation this problem is addressed by optimizing, globally and efficiently, a statistically consistent loss functional over the class of compositions of a linear function by an arbitrary, strictly monotonic, separable mapping with large margins. This capability also enables learning the parameters of a generalized linear model with an unknown link function. The method can handle infinite-dimensional feature spaces if the corresponding kernel function is known. In the unsupervised setting, a popular ranking approach is link analysis over a graph of recommendations, as exemplified by PageRank. This dissertation shows that PageRank may be viewed as an instance of an unsupervised consensus optimization problem. The dissertation then solves a more general problem of unsupervised consensus over noisy, directed recommendation graphs that have uncertainty over the set of "out" edges that emanate from a vertex. The proposed consensus rank is essentially the PageRank over the expected edge-set, where the expectation is computed over the distribution that achieves the most agreeable consensus. This consensus is measured geometrically by a suitable Bregman divergence between the consensus rank and the ranks induced by item-specific distributions. Real-world deployed ranking methods need to be resistant to spam, a particularly sophisticated type of which is link-spam. A popular class of countermeasures "de-spams" the corrupted webgraph by removing abusive pages identified by supervised learning. Since exhaustive detection and neutralization is infeasible, there is a need for ranking functions that can, on one hand, attenuate the effects of link-spam without supervision and, on the other hand, counter spam more aggressively when supervision is available. A family of non-linear, iteratively defined monotonic functions is proposed that propagates "rank" and "trust" scores through the webgraph. It relies on non-linearity, monotonicity and Schur-convexity to provide the resistance against spam.
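
For reference, the baseline that the consensus formulation generalizes is the standard PageRank power iteration (a textbook sketch, not the dissertation's consensus-rank algorithm):

    import numpy as np

    def pagerank(adj, damping=0.85, iters=100):
        """Standard PageRank power iteration over a directed adjacency matrix."""
        n = adj.shape[0]
        out = adj.sum(axis=1, keepdims=True)
        # Row-stochastic transition matrix; dangling nodes jump uniformly.
        P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = (1 - damping) / n + damping * (P.T @ r)
        return r

    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 0],
                    [0, 0, 1, 0]], dtype=float)
    print(pagerank(adj))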
10

Sprung, Benjamin. "Convergence rates for variational regularization of statistical inverse problems". Doctoral thesis, 2019. http://hdl.handle.net/21.11130/00-1735-0000-0005-1398-A.


Book chapters on the topic "Divergences de Bregman"

1

Nielsen, Frank, and Gaëtan Hadjeres. "Quasiconvex Jensen Divergences and Quasiconvex Bregman Divergences". In Springer Proceedings in Mathematics & Statistics, 196–218. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77957-3_11.

2

Nielsen, Frank, and Richard Nock. "Bregman Divergences from Comparative Convexity". In Lecture Notes in Computer Science, 639–47. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68445-1_74.

3

Lai, Pei Ling, and Colin Fyfe. "Bregman Divergences and Multi-dimensional Scaling". In Advances in Neuro-Information Processing, 935–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03040-6_114.

4

Wang, Xi, and Colin Fyfe. "Independent Component Analysis Using Bregman Divergences". In Trends in Applied Intelligent Systems, 627–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13025-0_64.

5

Jang, Eunsong, Colin Fyfe, and Hanseok Ko. "Bregman Divergences and the Self Organising Map". In Lecture Notes in Computer Science, 452–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88906-9_57.

6

Santos-Rodríguez, Raúl, Alicia Guerrero-Curieses, Rocío Alaiz-Rodríguez, and Jesús Cid-Sueiro. "Cost-Sensitive Learning Based on Bregman Divergences". In Machine Learning and Knowledge Discovery in Databases, 12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04180-8_12.

7

Sun, Jigang, Malcolm Crowe, and Colin Fyfe. "Extending Metric Multidimensional Scaling with Bregman Divergences". In Trends in Applied Intelligent Systems, 615–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13025-0_63.

8

Wang, Shaojun, and Dale Schuurmans. "Learning Continuous Latent Variable Models with Bregman Divergences". In Lecture Notes in Computer Science, 190–204. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39624-6_16.

9

Wu, Yan, Liang Du, and Honghong Cheng. "Multi-view K-Means Clustering with Bregman Divergences". In Communications in Computer and Information Science, 26–38. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2122-1_3.

10

Stummer, Wolfgang, and Anna-Lena Kißlinger. "Some New Flexibilizations of Bregman Divergences and Their Asymptotics". In Lecture Notes in Computer Science, 514–22. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68445-1_60.


Conference papers on the topic "Divergences de Bregman"

1

Banerjee, Arindam, Srujana Merugu, Inderjit Dhillon, and Joydeep Ghosh. "Clustering with Bregman Divergences". In Proceedings of the 2004 SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2004. http://dx.doi.org/10.1137/1.9781611972740.22.

2

Acharyya, Sreangsu, Arindam Banerjee, and Daniel Boley. "Bregman Divergences and Triangle Inequality". In Proceedings of the 2013 SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2013. http://dx.doi.org/10.1137/1.9781611972832.53.

3

Inan, Huseyin A., Mehmet A. Donmez, and Suleyman S. Kozat. "Adaptive mixture methods using Bregman divergences". In ICASSP 2012 - 2012 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2012. http://dx.doi.org/10.1109/icassp.2012.6288740.

4

Cayton, Lawrence. "Fast nearest neighbor retrieval for Bregman divergences". In Proceedings of the 25th International Conference on Machine Learning. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390171.

5

Harandi, Mehrtash, Mathieu Salzmann, and Fatih Porikli. "Bregman Divergences for Infinite Dimensional Covariance Matrices". In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. http://dx.doi.org/10.1109/cvpr.2014.132.

6

Ackermann, Marcel R., and Johannes Blömer. "Coresets and Approximate Clustering for Bregman Divergences". In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2009. http://dx.doi.org/10.1137/1.9781611973068.118.

7

Ferreira, Daniela P. L., Eraldo Ribeiro, and Celia A. Z. Barcelos. "Variational non rigid registration with Bregman divergences". In SAC 2017: Symposium on Applied Computing. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3019612.3019646.

8

Wang, Shaojun, and Dale Schuurmans. "Learning latent variable models with Bregman divergences". In IEEE International Symposium on Information Theory, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/isit.2003.1228234.

9

Escolano, Francisco, Meizhu Liu, and Edwin R. Hancock. "Tensor-based total Bregman divergences between graphs". In 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops). IEEE, 2011. http://dx.doi.org/10.1109/iccvw.2011.6130420.

10

Magron, Paul, Pierre-Hugo Vial, Thomas Oberlin, and Cedric Fevotte. "Phase Recovery with Bregman Divergences for Audio Source Separation". In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413717.


Organizational reports on the topic "Divergences de Bregman"

1

Li, Xinyao. Block active ADMM to Minimize NMF with Bregman Divergences. Ames (Iowa): Iowa State University, January 2021. http://dx.doi.org/10.31274/cc-20240624-307.

2

Cherian, Anoop, Suvrit Sra, Arindam Banerjee, and Nikos Papanikolopoulos. Jensen-Bregman LogDet Divergence for Efficient Similarity Computations on Positive Definite Tensors. Fort Belvoir, VA: Defense Technical Information Center, May 2012. http://dx.doi.org/10.21236/ada561322.
