Academic literature on the topic 'Kullback-Leibler divergence'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kullback-Leibler divergence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Kullback-Leibler divergence"

1

Nielsen, Frank. "Statistical Divergences between Densities of Truncated Exponential Families with Nested Supports: Duo Bregman and Duo Jensen Divergences." Entropy 24, no. 3 (March 17, 2022): 421. http://dx.doi.org/10.3390/e24030421.

Abstract:
By calculating the Kullback–Leibler divergence between two probability measures belonging to different exponential families dominated by the same measure, we obtain a formula that generalizes the ordinary Fenchel–Young divergence. Inspired by this formula, we define the duo Fenchel–Young divergence and report a majorization condition on its pair of strictly convex generators, which guarantees that this divergence is always non-negative. The duo Fenchel–Young divergence is also equivalent to a duo Bregman divergence. We show how to use these duo divergences by calculating the Kullback–Leibler divergence between densities of truncated exponential families with nested supports, and report a formula for the Kullback–Leibler divergence between truncated normal distributions. Finally, we prove that the skewed Bhattacharyya distances between truncated exponential families amount to equivalent skewed duo Jensen divergences.
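
As a purely numerical illustration of the quantity this paper treats in closed form, the following sketch approximates the Kullback-Leibler divergence between two truncated normal densities with nested supports by direct integration (all parameter values are arbitrary, and this is not the paper's duo Bregman formula):

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    # Two normals truncated so that the support of p, [0, 1], is nested
    # inside the support of q, [-1, 2]; the parameters are arbitrary.
    a_p, b_p, mu_p, s_p = 0.0, 1.0, 0.3, 0.5
    a_q, b_q, mu_q, s_q = -1.0, 2.0, 0.0, 1.0
    p = stats.truncnorm((a_p - mu_p) / s_p, (b_p - mu_p) / s_p, loc=mu_p, scale=s_p)
    q = stats.truncnorm((a_q - mu_q) / s_q, (b_q - mu_q) / s_q, loc=mu_q, scale=s_q)

    # KL(p||q) = integral of p(x) log(p(x)/q(x)) over supp(p); it is finite
    # here precisely because supp(p) is contained in supp(q).
    kl, _ = quad(lambda x: p.pdf(x) * np.log(p.pdf(x) / q.pdf(x)), a_p, b_p)
    print(f"KL(p||q) ≈ {kl:.6f}")
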
2

Nielsen, Frank. "Generalizing the Alpha-Divergences and the Oriented Kullback–Leibler Divergences with Quasi-Arithmetic Means." Algorithms 15, no. 11 (November 17, 2022): 435. http://dx.doi.org/10.3390/a15110435.

Abstract:
The family of α-divergences including the oriented forward and reverse Kullback–Leibler divergences is often used in signal processing, pattern recognition, and machine learning, among others. Choosing a suitable α-divergence can either be done beforehand according to some prior knowledge of the application domains or directly learned from data sets. In this work, we generalize the α-divergences using a pair of strictly comparable weighted means. Our generalization allows us to obtain in the limit case α→1 the 1-divergence, which provides a generalization of the forward Kullback–Leibler divergence, and in the limit case α→0, the 0-divergence, which corresponds to a generalization of the reverse Kullback–Leibler divergence. We then analyze the condition for a pair of weighted quasi-arithmetic means to be strictly comparable and describe the family of quasi-arithmetic α-divergences including its subfamily of power homogeneous α-divergences. In particular, we study the generalized quasi-arithmetic 1-divergences and 0-divergences and show that these counterpart generalizations of the oriented Kullback–Leibler divergences can be rewritten as equivalent conformal Bregman divergences using strictly monotone embeddings. Finally, we discuss the applications of these novel divergences to k-means clustering by studying the robustness property of the centroids.
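
For orientation, here is a minimal sketch of the classical (Amari-type) alpha-divergence for discrete distributions that this paper generalizes, checking numerically that the limit α→1 recovers the forward Kullback-Leibler divergence and α→0 the reverse one (the paper's quasi-arithmetic construction itself is not reproduced):

    import numpy as np

    def alpha_divergence(p, q, alpha):
        # Classical alpha-divergence between discrete distributions p and q.
        p, q = np.asarray(p, float), np.asarray(q, float)
        if np.isclose(alpha, 1.0):      # limit case: forward KL(p||q)
            return float(np.sum(p * np.log(p / q)))
        if np.isclose(alpha, 0.0):      # limit case: reverse KL, i.e. KL(q||p)
            return float(np.sum(q * np.log(q / p)))
        return (1.0 - np.sum(p**alpha * q**(1.0 - alpha))) / (alpha * (1.0 - alpha))

    p = np.array([0.2, 0.5, 0.3])
    q = np.array([0.4, 0.4, 0.2])
    for a in (0.0, 0.001, 0.5, 0.999, 1.0):
        print(a, alpha_divergence(p, q, a))   # values near a=1 approach KL(p||q)
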
3

van Erven, Tim, and Peter Harremoes. "Rényi Divergence and Kullback-Leibler Divergence." IEEE Transactions on Information Theory 60, no. 7 (July 2014): 3797–820. http://dx.doi.org/10.1109/tit.2014.2320500.

4

Nielsen, Frank. "On Voronoi Diagrams on the Information-Geometric Cauchy Manifolds." Entropy 22, no. 7 (June 28, 2020): 713. http://dx.doi.org/10.3390/e22070713.

Abstract:
We study the Voronoi diagrams of a finite set of Cauchy distributions and their dual complexes from the viewpoint of information geometry by considering the Fisher-Rao distance, the Kullback-Leibler divergence, the chi square divergence, and a flat divergence derived from Tsallis entropy related to the conformal flattening of the Fisher-Rao geometry. We prove that the Voronoi diagrams of the Fisher-Rao distance, the chi square divergence, and the Kullback-Leibler divergences all coincide with a hyperbolic Voronoi diagram on the corresponding Cauchy location-scale parameters, and that the dual Cauchy hyperbolic Delaunay complexes are Fisher orthogonal to the Cauchy hyperbolic Voronoi diagrams. The dual Voronoi diagrams with respect to the dual flat divergences amount to dual Bregman Voronoi diagrams, and their dual complexes are regular triangulations. The primal Bregman Voronoi diagram is the Euclidean Voronoi diagram and the dual Bregman Voronoi diagram coincides with the Cauchy hyperbolic Voronoi diagram. In addition, we prove that the square root of the Kullback-Leibler divergence between Cauchy distributions yields a metric distance which is Hilbertian for the Cauchy scale families.
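
The closed-form Kullback-Leibler divergence between two Cauchy densities, a known result consistent with the abstract's final claim, is compact enough to state as a sketch (locations l1, l2 and scales s1, s2; note that the expression is symmetric):

    import math

    def kl_cauchy(l1, s1, l2, s2):
        # Closed-form KL divergence between Cauchy(l1, s1) and Cauchy(l2, s2).
        return math.log(((s1 + s2) ** 2 + (l1 - l2) ** 2) / (4.0 * s1 * s2))

    # For scale families (same location), the abstract states that the square
    # root of the KL divergence yields a Hilbertian metric distance.
    print(math.sqrt(kl_cauchy(0.0, 1.0, 0.0, 3.0)))
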
5

Nielsen, Frank. "On the Jensen–Shannon Symmetrization of Distances Relying on Abstract Means." Entropy 21, no. 5 (May 11, 2019): 485. http://dx.doi.org/10.3390/e21050485.

Abstract:
The Jensen–Shannon divergence is a renowned bounded symmetrization of the unbounded Kullback–Leibler divergence which measures the total Kullback–Leibler divergence to the average mixture distribution. However, the Jensen–Shannon divergence between Gaussian distributions is not available in closed form. To bypass this problem, we present a generalization of the Jensen–Shannon (JS) divergence using abstract means which yields closed-form expressions when the mean is chosen according to the parametric family of distributions. More generally, we define the JS-symmetrizations of any distance using parameter mixtures derived from abstract means. In particular, we first show that the geometric mean is well-suited for exponential families, and report two closed-form formulas for (i) the geometric Jensen–Shannon divergence between probability densities of the same exponential family; and (ii) the geometric JS-symmetrization of the reverse Kullback–Leibler divergence between probability densities of the same exponential family. As a second illustrating example, we show that the harmonic mean is well-suited for the scale Cauchy distributions, and report a closed-form formula for the harmonic Jensen–Shannon divergence between scale Cauchy distributions. Applications to clustering with respect to these novel Jensen–Shannon divergences are touched upon.
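
For context, a minimal sketch of the ordinary Jensen-Shannon divergence between discrete distributions, i.e. the arithmetic-mean special case of the abstract-mean construction studied in this paper:

    import numpy as np

    def kl(p, q):
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    def js(p, q):
        # Average KL divergence of p and q to their arithmetic-mean mixture m;
        # unlike KL, this is symmetric and bounded by log 2.
        m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    p = [0.1, 0.7, 0.2]
    q = [0.5, 0.25, 0.25]
    print(js(p, q), np.log(2))
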
6

Ba, Amadou Diadie, and Gane Samb Lo. "Divergence Measures Estimation and its Asymptotic Normality Theory in the Discrete Case." European Journal of Pure and Applied Mathematics 12, no. 3 (July 25, 2019): 790–820. http://dx.doi.org/10.29020/nybg.ejpam.v12i3.3437.

Abstract:
In this paper we provide the asymptotic theory of general φ-divergence measures, which include the most common divergence measures: the Rényi and Tsallis families and the Kullback-Leibler measure. We are interested in divergence measures in the discrete case. One-sided and two-sided statistical tests are derived, as well as symmetrized estimators. Almost-sure rates of convergence and an asymptotic normality theorem are obtained in the general case, and then particularized for the Rényi and Tsallis families and for the Kullback-Leibler measure. Our theoretical results are validated by simulations.
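
A toy sketch of divergence estimation in the discrete case, using a naive plug-in estimator with Laplace smoothing (illustrative only; these are not the estimators or tests analyzed in the paper):

    import numpy as np

    def plugin_kl(x, y, k):
        # Laplace-smoothed empirical distributions avoid empty cells.
        p_hat = (np.bincount(x, minlength=k) + 1.0) / (len(x) + k)
        q_hat = (np.bincount(y, minlength=k) + 1.0) / (len(y) + k)
        return float(np.sum(p_hat * np.log(p_hat / q_hat)))

    rng = np.random.default_rng(0)
    x = rng.choice(3, size=5000, p=[0.2, 0.5, 0.3])   # sample from P
    y = rng.choice(3, size=5000, p=[0.4, 0.4, 0.2])   # sample from Q
    print(plugin_kl(x, y, 3))   # close to the true KL(P||Q) ≈ 0.095
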
7

Yanagimoto, Hidekazu, and Sigeru Omatu. "Information Filtering Using Kullback-Leibler Divergence." IEEJ Transactions on Electronics, Information and Systems 125, no. 7 (2005): 1147–52. http://dx.doi.org/10.1541/ieejeiss.125.1147.

8

Sunoj, S. M., P. G. Sankaran, and N. Unnikrishnan Nair. "Quantile-based cumulative Kullback–Leibler divergence." Statistics 52, no. 1 (May 22, 2017): 1–17. http://dx.doi.org/10.1080/02331888.2017.1327534.

9

Ponti, Moacir, Josef Kittler, Mateus Riva, Teófilo de Campos, and Cemre Zor. "A decision cognizant Kullback–Leibler divergence." Pattern Recognition 61 (January 2017): 470–78. http://dx.doi.org/10.1016/j.patcog.2016.08.018.

10

Sankaran, P. G., S. M. Sunoj, and N. Unnikrishnan Nair. "Kullback–Leibler divergence: A quantile approach." Statistics & Probability Letters 111 (April 2016): 72–79. http://dx.doi.org/10.1016/j.spl.2016.01.007.


Dissertations / Theses on the topic "Kullback-Leibler divergence"

1

Mesejo-León, Daniel Alejandro. "Approximate Nearest Neighbor Search for the Kullback-Leibler Divergence." Pontifícia Universidade Católica do Rio de Janeiro, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=33305@1.

Abstract:
In a number of applications, data points can be represented as probability distributions. For instance, documents can be represented as topic models, images as histograms, and even music as a probability distribution. In this work, we address the Approximate Nearest Neighbor problem where the points are probability distributions and the distance function is the Kullback-Leibler (KL) divergence. We show how to accelerate existing data structures such as the Bregman Ball Tree by posing the KL divergence as an inner product embedding. On the practical side, we investigate the use of two very popular indexing techniques: the Inverted Index and Locality Sensitive Hashing. Experiments performed on six real-world data sets showed that the Inverted Index performs better than LSH and the Bregman Ball Tree in terms of queries per second and precision.
2

Nounagnon, Jeannette Donan. "Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/86593.

Abstract:
Geolocation accuracy is a crucial, life-or-death factor for rescue teams. Natural disasters or man-made disasters are just a few convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is through the use of collaborative positioning, which consists of simultaneously solving for the positions of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation on the performance of collaborative positioning has been disproportionately lacking. This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed states: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is the acquisition of another set of information between the collaborating nodes. This acquisition of new information reduces the uncertainty on the localization of both nodes. Under certain conditions, this reduction in uncertainty occurs for both nodes by the same amount. Hence collaboration is beneficial in terms of uncertainty. However, reduced uncertainty does not necessarily imply improved accuracy. So we define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine factors that affect the improvement in accuracy due to collaboration. We derive range conditions under which collaborative positioning starts to degrade the performance of standalone positioning. We derive and test criteria to determine on the fly (ahead of time) whether it is worth collaborating in order to improve accuracy. The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications.
3

Junior, Willian Darwin. "Agrupamento de textos utilizando divergência Kullback-Leibler." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-30032016-160011/.

Abstract:
This work proposes a methodology for grouping texts for the purposes of textual searching in general, but also specifically for aiding in distributing law processes in order to reduce the time spent in resolving judicial conflicts. The proposed methodology uses the Kullback-Leibler divergence applied to frequency distributions of the word stems occurring in the texts. Several groups of stems are considered, built up on their occurrence frequency among the texts, and the resulting distributions are taken with regard to each one of those groups. For each group, divergences are computed based on the distribution taken from a reference text formed by assembling all sample texts, yielding one value for each text in relation to each group of stems. Finally, those values are taken as attributes of each text in a clustering process driven by a K-Means algorithm implementation, providing a grouping of the texts. The methodology is tested on simple toy examples and applied to cases of electrical failure registering, texts with similar issues, and law texts, and the result is compared to an expert's classification. As byproducts of the conducted research, a graphical development environment for models based on Pattern Recognition and Bayesian Networks, and a study on the possibilities of using parallel processing in Bayesian Network learning, have also been obtained.
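
A minimal sketch of the described pipeline under simplifying assumptions: raw token counts stand in for stem (semanteme) frequencies, and a single group of terms replaces the several frequency-based groups used in the thesis:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import CountVectorizer

    texts = ["court ruling on contract dispute", "contract law appeal ruling",
             "transformer failure in power grid", "power grid fault report"]

    counts = CountVectorizer().fit_transform(texts).toarray().astype(float) + 1e-12
    dists = counts / counts.sum(axis=1, keepdims=True)   # one distribution per text
    ref = counts.sum(axis=0) / counts.sum()              # aggregated reference text

    # One KL value per text, taken against the reference distribution, then
    # used as the clustering attribute for K-Means.
    kl_features = np.sum(dists * np.log(dists / ref), axis=1).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(kl_features)
    print(labels)
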
4

Harmouche, Jinane. "Statistical Incipient Fault Detection and Diagnosis with Kullback-Leibler Divergence : from Theory to Applications." Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0022/document.

Abstract:
This PhD dissertation deals with the detection and diagnosis of incipient faults in engineering and industrial systems by non-parametric statistical approaches. An incipient fault, like any fault, is supposed to provoke an abnormal change in the measurements of the system variables. However, this change is imperceptible and also unpredictable due to the large signal-to-fault ratio and the low fault-to-noise ratio characterizing the incipient fault. The detection and identification of a global change require a 'global' approach that takes into account the total fault signature. In this context, the Kullback-Leibler divergence is proposed as a 'global' fault indicator, sensitive to small abnormal variations hidden in noise. A 'global' spectral analysis approach is also proposed for the diagnosis of faults with a frequency signature. The 'global' statistical approach is demonstrated in two application studies. The first one concerns the eddy-current detection and characterization of minor cracks in conductive structures. The second application concerns the diagnosis of bearing faults in electrical rotating machines. In addition, the fault estimation problem is addressed in this work. A theoretical study, conducted within a principal component analysis modeling framework, leads to an analytical model of the KL divergence depending only on the fault parameters, from which an estimate of the amplitude of the incipient fault is derived.
5

Chhogyal, Kinzang. "Belief Change: A Probabilistic Inquiry." Thesis, Griffith University, 2016. http://hdl.handle.net/10072/366331.

Abstract:
The belief state of a rational agent may be viewed as consisting of sentences that are either beliefs, disbeliefs or neither (non-beliefs). When probabilities are used to model the belief state, beliefs hold a probability of 1, disbeliefs a probability of 0, and non-beliefs a probability between 0 and 1. Probabilistic belief contraction is an operation on the belief state that takes a belief as input and turns it into a non-belief, whereas probabilistic belief revision takes a disbelief and turns it into a belief. Given a probabilistic belief state P, the contraction of P by an input a is denoted Pa− and can be determined as the mixture of P and P∗¬a, where P∗¬a is the belief state that results from revising P by ¬a. The proportions of P and P∗¬a used in the mixture are set by the mixing factor. Thus, the mixing factor plays an important role in determining the contracted belief state Pa−.
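
A tiny numerical sketch of this mixture view of contraction; since the thesis leaves the revision operator abstract, the revised state below is simply posited (plain conditioning is unavailable when P assigns ¬a probability 0):

    import numpy as np

    # Four worlds; proposition a holds in worlds 0 and 1.
    a = np.array([True, True, False, False])
    P = np.array([0.6, 0.4, 0.0, 0.0])     # a is believed: P(a) = 1

    # Revision by not-a cannot be plain conditioning here (P(not-a) = 0), so we
    # posit some revised state supported on the not-a worlds, e.g. uniform.
    P_rev = np.array([0.0, 0.0, 0.5, 0.5])

    eps = 0.5                               # the mixing factor
    P_contracted = eps * P + (1.0 - eps) * P_rev
    print(P_contracted, P_contracted[a].sum())  # P(a) = 0.5: a is now a non-belief
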
6

Zhou, Ruikun. "A Kullback-Leibler Divergence Filter for Anomaly Detection in Non-Destructive Pipeline Inspection." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40987.

Abstract:
Anomaly detection generally refers to algorithmic procedures aimed at identifying relatively rare events that differ substantially from the majority of the data set to which they belong. In the context of data series generated by sensors mounted on mobile devices for non-destructive inspection and monitoring, anomalies typically identify the defects to be detected, therefore defining the main task of this class of devices. In this case, a useful way of operationally defining anomalies is to look at their information content with respect to the background data, which is typically noisy and therefore easily masks the relevant events if unfiltered. In this thesis, a Kullback-Leibler (KL) divergence filter is proposed to detect signals with relatively high information content, namely anomalies, within data series. The data are generated using the model of a broad class of proximity sensors that applies to devices commonly used in engineering practice. This includes, for example, sensory devices mounted on mobile robotic devices for the non-destructive inspection of hazardous or other environments that may not be accessible to humans for direct inspection. The raw sensory data generated by this class of sensors are often challenging to analyze due to the prevalence of noise over the signal content that reveals the presence of relevant features, such as damage in gas pipelines. The proposed filter is built to detect the difference in information content between the data series collected by the sensor and a baseline data series, with the advantage of not requiring the design of a threshold. Moreover, differing from traditional filters, which require prior knowledge of, or distributional assumptions about, the data, this KL divergence filter is model-free and suitable for all kinds of raw sensory data. It is, of course, also compatible with classical signal distribution assumptions, such as the Gaussian approximation. Finally, the robustness and sensitivity of the KL divergence filter are discussed under different scenarios, with various signal-to-noise ratios, using data generated by a simulator that reproduces very realistic scenarios based on models of real sensors provided by manufacturers or widely accepted in the literature.
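
A rough sketch of such a histogram-based sliding-window KL filter; the window length, bin count, and injected anomaly are illustrative choices, not values from the thesis:

    import numpy as np

    def kl_filter(signal, baseline, win=200, bins=30):
        # Histogram of the baseline, with add-one smoothing to avoid empty bins.
        lo = min(signal.min(), baseline.min())
        hi = max(signal.max(), baseline.max())
        edges = np.linspace(lo, hi, bins + 1)
        q = np.histogram(baseline, bins=edges)[0] + 1.0
        q = q / q.sum()
        scores = []
        for i in range(len(signal) - win):
            p = np.histogram(signal[i:i + win], bins=edges)[0] + 1.0
            p = p / p.sum()
            scores.append(float(np.sum(p * np.log(p / q))))  # KL(window || baseline)
        return np.array(scores)

    rng = np.random.default_rng(1)
    baseline = rng.normal(0.0, 1.0, 5000)
    signal = rng.normal(0.0, 1.0, 3000)
    signal[1500:1700] += 1.5                       # a small defect buried in noise
    print(kl_filter(signal, baseline).argmax())    # the score peaks near index 1500
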
7

Jung, Daniel. "Diagnosability performance analysis of models and fault detectors." Doctoral thesis, Linköpings universitet, Fordonssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117058.

Abstract:
Model-based diagnosis compares observations from a system with predictions from a mathematical model in order to detect and isolate faulty components. Analyzing which faults can be detected and isolated given the model provides useful information when designing a diagnosis system. This information can be used, for example, to determine which residual generators can be generated or to select a sufficient set of sensors that can be used to detect and isolate the faults. With more information about the system taken into consideration during such an analysis, more accurate estimates can be computed of how good fault detectability and isolability can be achieved. Model uncertainties and measurement noise are the main reasons for reduced fault detection and isolation performance and can make it difficult to design a diagnosis system that fulfills given performance requirements. By taking information about different uncertainties into consideration early in the development process of a diagnosis system, it is possible to predict how good performance can be achieved by a diagnosis system and to avoid bad design choices. This thesis deals with quantitative analysis of fault detectability and isolability performance when taking model uncertainties and measurement noise into consideration. The goal is to analyze fault detectability and isolability performance given a mathematical model of the monitored system, before a diagnosis system is developed. A quantitative measure of fault detectability and isolability performance for a given model, called distinguishability, is proposed based on the Kullback-Leibler divergence. The distinguishability measure answers questions like "How difficult is it to isolate a fault fi from another fault fj?". Different properties of the distinguishability measure are analyzed. It is shown, for example, that for linear descriptor models with Gaussian noise, distinguishability gives an upper limit for the fault-to-noise ratio of any linear residual generator. The proposed measure is used for quantitative analysis of a nonlinear mean value model of gas flows in a heavy-duty diesel engine to analyze how fault diagnosability performance varies between operating points. It is also used to formulate the sensor selection problem, i.e., to find the cheapest set of available sensors that should be used in a system to achieve required fault diagnosability performance. As a case study, quantitative fault diagnosability analysis is used during the design of an engine misfire detection algorithm based on the crankshaft angular velocity measured at the flywheel. Decisions during the development of the misfire detection algorithm are motivated using quantitative analysis of the misfire detectability performance, showing, for example, varying detection performance at different operating points and for different cylinders, to identify when it is more difficult to detect misfires. This thesis presents a framework for quantitative fault detectability and isolability analysis that is a useful tool during the design of a diagnosis system. The different applications show examples of how quantitative analysis can be applied during a design process, either as feedback to an engineer or when formulating different design steps as optimization problems to assure that the required performance can be achieved.
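
Since distinguishability is built on Kullback-Leibler divergences between the stochastic behaviors induced by different faults, the generic closed-form KL divergence between univariate Gaussians gives a feel for the ingredients (this is not the thesis' full measure):

    import math

    def kl_gauss(mu1, var1, mu2, var2):
        # Closed-form KL(N(mu1, var1) || N(mu2, var2)) for univariate Gaussians.
        return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

    # A fault that only shifts the mean of a residual relative to the noise floor:
    print(kl_gauss(0.5, 1.0, 0.0, 1.0))   # = 0.125, i.e. (mu1 - mu2)^2 / (2 * var)
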
8

White, Staci A. "Quantifying Model Error in Bayesian Parameter Estimation." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433771825.

9

Adamcik, Martin. "Collective reasoning under uncertainty and inconsistency." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/collective-reasoning-under-uncertainty-and-inconsistency(7fab8021-8beb-45e7-8b45-7cb4fadd70be).html.

Abstract:
In this thesis we investigate some global desiderata for probabilistic knowledge merging given several possibly jointly inconsistent, but individually consistent knowledge bases. We show that the most naive methods of merging, which combine applications of a single expert inference process with the application of a pooling operator, fail to satisfy certain basic consistency principles. We therefore adopt a different approach. Following recent developments in machine learning where Bregman divergences appear to be powerful, we define several probabilistic merging operators which minimise the joint divergence between merged knowledge and given knowledge bases. In particular we prove that in many cases the result of applying such operators coincides with the sets of fixed points of averaging projective procedures - procedures which combine knowledge updating with pooling operators of decision theory. We develop relevant results concerning the geometry of Bregman divergences and prove new theorems in this field. We show that this geometry connects nicely with some desirable principles which have arisen in the epistemology of merging. In particular, we prove that the merging operators which we define by means of convex Bregman divergences satisfy analogues of the principles of merging due to Konieczny and Pino-Perez. Additionally, we investigate how such merging operators behave with respect to principles concerning irrelevant information, independence and relativisation which have previously been intensively studied in case of single-expert probabilistic inference. Finally, we argue that two particular probabilistic merging operators which are based on Kullback-Leibler divergence, a special type of Bregman divergence, have overall the most appealing properties amongst merging operators hitherto considered. By investigating some iterative procedures we propose algorithms to practically compute them.
10

Macêra, Márcia Aparecida Centanin. "Uso dos métodos clássico e bayesiano para os modelos não-lineares heterocedásticos simétricos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-14092011-164458/.

Abstract:
Normal regression models have been used for many years for data analysis. Even in cases where normality could not be assumed, some transformation was usually attempted in order to achieve the normality sought. In practice, however, these assumptions about normality and linearity are not always satisfied. As alternatives to the classical technique, new classes of regression models have been developed. In this context, we focus on the class of models in which the distribution assumed for the response variable belongs to the class of symmetric distributions. The aim of this work is the modeling of this class in the Bayesian context, in particular the modeling of the class of symmetric heteroscedastic nonlinear models. Note that this work is connected with two research lines: statistical inference addressing aspects of asymptotic theory, and Bayesian inference considering aspects of modeling and criteria for model selection based on Markov chain Monte Carlo (MCMC) simulation methods. A first step is to present the class of symmetric heteroscedastic nonlinear models, as well as the classical inference of the parameters of these models. Subsequently, we propose a Bayesian approach to these models, whose objective is to show its feasibility and to compare the Bayesian inference of the parameters estimated via MCMC methods with the classical inference of the estimates obtained by means of the GAMLSS tool. In addition, we use a Bayesian method of case-by-case influence analysis based on the Kullback-Leibler divergence to detect influential observations in the data. The computational implementation was developed in the software R; for details of the programs, the authors of the work may be consulted.

Books on the topic "Kullback-Leibler divergence"

1

Soloviev, Vladimir, Andrii Bielinskyi, A. V. Matviychuk, and O. A. Serdyuk. Permutation Based Complexity Measures and Crashes. Bratislava-Kharkiv: VŠEM – S. Kuznets KhNUE, 2021. http://dx.doi.org/10.31812/123456789/4397.

Abstract:
A comprehensive analysis of permutation measures of the complexity of economic systems is performed by calculating the permutation entropy and the Kullback-Leibler divergence within a sliding-window algorithm. A comparative analysis of these measures with the daily values of the Dow Jones index, WTI oil prices, and Bitcoin prices indicates the possibility of their use as indicator-precursors of the known crashes in the selected markets.
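
A compact sketch of the two sliding-window ingredients named in the abstract, computed here for a single window: the permutation entropy of ordinal patterns and the KL divergence of the pattern distribution from the uniform one (the order d and the smoothing are illustrative choices):

    import numpy as np
    from itertools import permutations
    from math import factorial, log

    def ordinal_dist(x, d=3):
        # Empirical distribution of ordinal (permutation) patterns of order d,
        # with add-one smoothing.
        pats = {p: 0 for p in permutations(range(d))}
        for i in range(len(x) - d + 1):
            pats[tuple(np.argsort(x[i:i + d]))] += 1
        counts = np.array(list(pats.values()), float) + 1.0
        return counts / counts.sum()

    def perm_entropy_and_kl(x, d=3):
        p = ordinal_dist(x, d)
        h = float(-np.sum(p * np.log(p)))       # permutation entropy
        return h, log(factorial(d)) - h         # KL(p || uniform) = log d! - H(p)

    rng = np.random.default_rng(2)
    print(perm_entropy_and_kl(rng.normal(size=1000)))         # noise: entropy near log 6
    print(perm_entropy_and_kl(np.sin(np.arange(1000) / 10)))  # regular: low entropy, high KL
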

Book chapters on the topic "Kullback-Leibler divergence"

1

Polani, Daniel. "Kullback-Leibler Divergence." In Encyclopedia of Systems Biology, 1087–88. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1551.

2

Joyce, James M. "Kullback-Leibler Divergence." In International Encyclopedia of Statistical Science, 720–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-04898-2_327.

3

Roldán, Édgar. "Dissipation and Kullback–Leibler Divergence." In Irreversibility and Dissipation in Microscopic Systems, 37–59. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07079-7_2.

4

Roldán, Édgar. "Estimating the Kullback–Leibler Divergence." In Irreversibility and Dissipation in Microscopic Systems, 61–85. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07079-7_3.

5

Yang, Zhirong, He Zhang, Zhijian Yuan, and Erkki Oja. "Kullback-Leibler Divergence for Nonnegative Matrix Factorization." In Lecture Notes in Computer Science, 250–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21735-7_31.

6

Agrawal, Rohit, Yi-Hsiu Chen, Thibaut Horel, and Salil Vadhan. "Unifying Computational Entropies via Kullback–Leibler Divergence." In Advances in Cryptology – CRYPTO 2019, 831–58. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26951-7_28.

7

Chen, Hongtian, Bin Jiang, Ningyun Lu, and Wen Chen. "PCA and Kullback-Leibler Divergence-Based FDD Methods." In Data-driven Detection and Diagnosis of Faults in Traction Systems of High-speed Trains, 119–35. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46263-5_7.

8

Huynh, Hiep Xuan, Cang Anh Phan, Tu Cam Thi Tran, Hai Thanh Nguyen, and Dinh Quoc Truong. "Threshold Text Classification with Kullback–Leibler Divergence Approach." In Machine Learning and Mechanics Based Soft Computing Applications, 1–11. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6450-3_2.

9

Corduas, Marcella. "Assessing Similarity of Rating Distributions by Kullback-Leibler Divergence." In Classification and Multivariate Analysis for Complex Data Structures, 221–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13312-1_22.

10

Chirco, Goffredo. "Rényi Relative Entropy from Homogeneous Kullback-Leibler Divergence Lagrangian." In Lecture Notes in Computer Science, 744–51. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80209-7_80.


Conference papers on the topic "Kullback-Leibler divergence"

1

Raiber, Fiana, and Oren Kurland. "Kullback-Leibler Divergence Revisited." In ICTIR '17: ACM SIGIR International Conference on the Theory of Information Retrieval. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3121050.3121062.

2

Li, Xiangfei, Huan Zhao, and Han Ding. "Kullback-Leibler Divergence-Based Visual Servoing." In 2021 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE, 2021. http://dx.doi.org/10.1109/aim46487.2021.9517706.

3

Nomura, Ryo. "Source Resolvability with Kullback-Leibler Divergence." In 2018 IEEE International Symposium on Information Theory (ISIT). IEEE, 2018. http://dx.doi.org/10.1109/isit.2018.8437647.

4

Ahuja, Kartik. "Estimating Kullback-Leibler Divergence Using Kernel Machines." In 2019 53rd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2019. http://dx.doi.org/10.1109/ieeeconf44664.2019.9049082.

5

Im, Chaewon, Seongjin Ahn, and Dongweon Yoon. "Modulation Classification Based on Kullback-Leibler Divergence." In 2020 IEEE 15th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET). IEEE, 2020. http://dx.doi.org/10.1109/tcset49122.2020.235457.

6

Sum, John, Chi-sing Leung, and Lipin Hsu. "Fault tolerant learning using Kullback-Leibler divergence." In TENCON 2007 - 2007 IEEE Region 10 Conference. IEEE, 2007. http://dx.doi.org/10.1109/tencon.2007.4429073.

7

Zeng, Jia, Xiao-Qin Cao, and Hong Yan. "Human Promoter Recognition using Kullback-Leibler Divergence." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370721.

8

Pheng, Hang See, Siti Mariyam Shamsuddin, Wong Yee Leng, and Razana Alwee. "Kullback Leibler divergence for image quantitative evaluation." In ADVANCES IN INDUSTRIAL AND APPLIED MATHEMATICS: Proceedings of 23rd Malaysian National Symposium of Mathematical Sciences (SKSM23). Author(s), 2016. http://dx.doi.org/10.1063/1.4954516.

9

Perez-Cruz, Fernando. "Kullback-Leibler divergence estimation of continuous distributions." In 2008 IEEE International Symposium on Information Theory - ISIT. IEEE, 2008. http://dx.doi.org/10.1109/isit.2008.4595271.

10

Mansouri, Majdi, Hazem Nounou, and Mohamed Nounou. "Kullback-Leibler divergence -based improved particle filter." In 2014 11th International Multi-Conference on Systems, Signals & Devices (SSD). IEEE, 2014. http://dx.doi.org/10.1109/ssd.2014.6808793.


Reports on the topic "Kullback-Leibler divergence"

1

Wilson, D., Matthew Kamrath, Caitlin Haedrich, Daniel Breton, and Carl Hart. Urban noise distributions and the influence of geometric spreading on skewness. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42483.

Abstract:
Statistical distributions of urban noise levels are influenced by many complex phenomena, including spatial and temporal variations in the source level, multisource mixtures, propagation losses, and random fading from multipath reflections. This article provides a broad perspective on the varying impacts of these phenomena. Distributions incorporating random fading and averaging (e.g., gamma and noncentral Erlang) tend to be negatively skewed on logarithmic (decibel) axes but can be positively skewed if the fading process is strongly modulated by source power variations (e.g., compound gamma). In contrast, distributions incorporating randomly positioned sources and explicit geometric spreading [e.g., exponentially modified Gaussian (EMG)] tend to be positively skewed with exponential tails on logarithmic axes. To evaluate the suitability of the various distributions, one-third octave band sound-level data were measured at 37 locations in the North End of Boston, MA. Based on the Kullback-Leibler divergence as calculated across all of the locations and frequencies, the EMG provides the most consistently good agreement with the data, which were generally positively skewed. The compound gamma also fits the data well and even outperforms the EMG for the small minority of cases exhibiting negative skew. The lognormal provides a suitable fit in cases in which particular non-traffic noise sources dominate.
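
As a simplified stand-in for the model-comparison procedure described above, the following sketch bins synthetic data into a histogram and scores fitted scipy distributions by a discretized KL divergence (the EMG corresponds to scipy's exponnorm; all values are illustrative):

    import numpy as np
    from scipy import stats

    def kl_data_vs_model(data, dist, bins=40):
        # Discretized KL divergence between the empirical histogram of the data
        # and a fitted scipy distribution, with light smoothing.
        params = dist.fit(data)
        counts, edges = np.histogram(data, bins=bins)
        p = (counts + 1.0) / (counts + 1.0).sum()
        q = np.diff(dist.cdf(edges, *params)) + 1e-12
        return float(np.sum(p * np.log(p / (q / q.sum()))))

    rng = np.random.default_rng(3)
    levels = stats.exponnorm.rvs(2.0, loc=60.0, scale=3.0, size=2000, random_state=rng)
    # The EMG (exponnorm) should fit its own synthetic data better (lower KL)
    # than the lognormal:
    print(kl_data_vs_model(levels, stats.exponnorm))
    print(kl_data_vs_model(levels, stats.lognorm))
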