Scientific literature on the topic "Kullback-Leibler average"

Create an accurate reference in APA, MLA, Chicago, Harvard, and various other styles

Choose a source:

Browse thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Kullback-Leibler average".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever these details are included in the metadata.

Journal articles on the topic "Kullback-Leibler average"

1

Luan, Yu, Hong Zuo Li, and Ya Fei Wang. "Acoustic Features Selection of Speaker Verification Based on Average KL Distance". Applied Mechanics and Materials 373-375 (August 2013): 629–33. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.629.

Full text
Abstract:
This paper proposes a new Average Kullback-Leibler distance to build an optimal feature selection algorithm for the matching score fusion of speaker verification. The advantage of this novel distance is that it overcomes the asymmetry of the conventional Kullback-Leibler distance, which ensures the accuracy and robustness of computing the information content between the matching scores of two acoustic features. The experimental results for a variety of fusion schemes show that the matching score fusion between MFCC and the residual phase gains the most information content, indicating that this scheme can yield excellent performance.
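The symmetrized "average KL distance" the abstract describes is commonly taken as the mean of the two directed divergences; the paper's exact construction may differ, so treat this as an illustrative sketch for discrete score distributions:

```python
import math

def kl(p, q):
    """Directed Kullback-Leibler divergence KL(p || q) for discrete pmfs."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def average_kl(p, q):
    """Symmetric 'average KL distance': mean of the two directed divergences."""
    return 0.5 * (kl(p, q) + kl(q, p))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
print(kl(p, q))          # asymmetric: differs from kl(q, p) in general
print(average_kl(p, q))  # symmetric: average_kl(p, q) == average_kl(q, p)
```

The averaging removes the asymmetry, so the quantity behaves like a distance score between the two score distributions.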
2

Lu, Wanbo, and Wenhui Shi. "Model Averaging Estimation Method by Kullback–Leibler Divergence for Multiplicative Error Model". Complexity 2022 (April 27, 2022): 1–13. http://dx.doi.org/10.1155/2022/7706992.

Full text
Abstract:
In this paper, we propose the model averaging estimation method for multiplicative error model and construct the corresponding weight choosing criterion based on the Kullback–Leibler divergence with a hyperparameter to avoid the problem of overfitting. The resulting model average estimator is proved to be asymptotically optimal. It is shown that the Kullback–Leibler model averaging (KLMA) estimator asymptotically minimizes the in-sample Kullback–Leibler divergence and improves the forecast accuracy of out-of-sample even under different loss functions. In simulations, we show that the KLMA estimator compares favorably with smooth-AIC estimator (SAIC), smooth-BIC estimator (SBIC), and Mallows model averaging estimator (MMA), especially when some nonlinear noise is added to the data generation process. The empirical applications in the daily range of S&P500 and price duration of IBM show that the out-of-sample forecasting capacity of the KLMA estimator is better than that of other methods.
3

Nielsen, Frank. "On the Jensen–Shannon Symmetrization of Distances Relying on Abstract Means". Entropy 21, no. 5 (May 11, 2019): 485. http://dx.doi.org/10.3390/e21050485.

Full text
Abstract:
The Jensen–Shannon divergence is a renowned bounded symmetrization of the unbounded Kullback–Leibler divergence which measures the total Kullback–Leibler divergence to the average mixture distribution. However, the Jensen–Shannon divergence between Gaussian distributions is not available in closed form. To bypass this problem, we present a generalization of the Jensen–Shannon (JS) divergence using abstract means which yields closed-form expressions when the mean is chosen according to the parametric family of distributions. More generally, we define the JS-symmetrizations of any distance using parameter mixtures derived from abstract means. In particular, we first show that the geometric mean is well-suited for exponential families, and report two closed-form formulas for (i) the geometric Jensen–Shannon divergence between probability densities of the same exponential family; and (ii) the geometric JS-symmetrization of the reverse Kullback–Leibler divergence between probability densities of the same exponential family. As a second illustrating example, we show that the harmonic mean is well-suited for the scale Cauchy distributions, and report a closed-form formula for the harmonic Jensen–Shannon divergence between scale Cauchy distributions. Applications to clustering with respect to these novel Jensen–Shannon divergences are touched upon.
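The paper's starting point, the classical Jensen–Shannon divergence (total KL to the arithmetic mixture), can be sketched for discrete distributions as follows; the paper's generalizations replace the mixture m with other abstract means:

```python
import math

def kl(p, q):
    """Directed Kullback-Leibler divergence KL(p || q) for discrete pmfs."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """JS divergence: average KL of p and q to their mixture m = (p + q)/2.
    Bounded by log(2), unlike the unbounded KL divergence."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [1.0, 0.0]
q = [0.0, 1.0]
print(jensen_shannon(p, q))  # log(2), the maximum, even for disjoint supports
```

Note that KL(p || q) would be infinite here; the mixture in the JS construction is what keeps the divergence bounded.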
4

Battistelli, Giorgio, and Luigi Chisci. "Kullback–Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability". Automatica 50, no. 3 (March 2014): 707–18. http://dx.doi.org/10.1016/j.automatica.2013.11.042.

Full text
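For Gaussian densities, the Kullback–Leibler average studied by Battistelli and Chisci has a closed form: a weighted arithmetic mean in information (inverse-covariance) coordinates, which coincides with the covariance intersection fusion rule. A scalar sketch with hypothetical numbers:

```python
def kla_gaussian(means, variances, weights):
    """Weighted Kullback-Leibler average of scalar Gaussians N(m_i, v_i).
    In information form (omega = 1/v, q = m/v) the KLA is the weighted
    arithmetic mean, i.e. the covariance intersection fusion rule."""
    omega = sum(w / v for w, v in zip(weights, variances))
    q = sum(w * m / v for w, m, v in zip(weights, means, variances))
    return q / omega, 1.0 / omega  # fused mean, fused variance

mean, var = kla_gaussian(means=[0.0, 2.0], variances=[1.0, 1.0], weights=[0.5, 0.5])
print(mean, var)  # 1.0 1.0: variance is not reduced, avoiding double counting
```

Unlike the naive product of independent likelihoods (which would halve the variance here), the KLA does not assume the two estimates carry independent information, which is exactly what makes it safe for consensus over a network with unknown correlations.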
5

Hsu, Chia-Ling, and Wen-Chung Wang. "Multidimensional Computerized Adaptive Testing Using Non-Compensatory Item Response Theory Models". Applied Psychological Measurement 43, no. 6 (October 26, 2018): 464–80. http://dx.doi.org/10.1177/0146621618800280.

Full text
Abstract:
Current use of multidimensional computerized adaptive testing (MCAT) has been developed in conjunction with compensatory multidimensional item response theory (MIRT) models rather than with non-compensatory ones. In recognition of the usefulness of MCAT and the complications associated with non-compensatory data, this study aimed to develop MCAT algorithms using non-compensatory MIRT models and to evaluate their performance. For the purpose of the study, three item selection methods were adapted and compared, namely, the Fisher information method, the mutual information method, and the Kullback–Leibler information method. The results of a series of simulations showed that the Fisher information and mutual information methods performed similarly, and both outperformed the Kullback–Leibler information method. In addition, it was found that the more stringent the termination criterion and the higher the correlation between the latent traits, the higher the resulting measurement precision and test reliability. Test reliability was very similar across the dimensions, regardless of the correlation between the latent traits and termination criterion. On average, the difficulties of the administered items were found to be at a lower level than the examinees’ abilities, which shed light on item bank construction for non-compensatory items.
6

Marsh, Patrick. "The Properties of Kullback–Leibler Divergence for the Unit Root Hypothesis". Econometric Theory 25, no. 6 (December 2009): 1662–81. http://dx.doi.org/10.1017/s0266466609990284.

Full text
Abstract:
The fundamental contributions made by Paul Newbold have highlighted how crucial it is to detect when economic time series have unit roots. This paper explores the effects that model specification has on our ability to do that. Asymptotic power, a natural choice to quantify these effects, does not accurately predict finite-sample power. Instead, here the Kullback–Leibler divergence between the unit root null and any alternative is used and its numeric and analytic properties detailed. Numerically it behaves in a similar way to finite-sample power. However, because it is analytically available we are able to prove that it is a minimizable function of the degree of trending in any included deterministic component and of the correlation of the underlying innovations. It is explicitly confirmed, therefore, that it is approximately linear trends and negative unit root moving average innovations that minimize the efficacy of unit root inferential tools. Applied to the Nelson and Plosser macroeconomic series the effect that different types of trends included in the model have on unit root inference is clearly revealed.
7

Yang, Ce, Dong Han, Weiqing Sun, and Kunpeng Tian. "Distributionally Robust Model of Energy and Reserve Dispatch Based on Kullback–Leibler Divergence". Electronics 8, no. 12 (December 1, 2019): 1454. http://dx.doi.org/10.3390/electronics8121454.

Full text
Abstract:
This paper proposes a distance-based distributionally robust energy and reserve (DB-DRER) dispatch model via Kullback–Leibler (KL) divergence, considering the volatility of renewable energy generation. Firstly, a two-stage optimization model is formulated to minimize the expected total cost of energy and reserve (ER) dispatch. Then, KL divergence is adopted to establish the ambiguity set. Distinguished from conventional robust optimization methodology, the volatile output of renewable power generation is assumed to follow an unknown probability distribution that is restricted to the ambiguity set. DB-DRER aims at minimizing the expected total cost under the worst-case probability distribution of renewables. Combined with the designed empirical distribution function, the proposed DB-DRER model can be reformulated into a mixed integer nonlinear programming (MINLP) problem. Furthermore, using the generalized Benders decomposition, a decomposition method is proposed and the sample average approximation (SAA) method is applied to solve this problem. Finally, simulation results of the proposed method are compared with those of stochastic optimization and conventional robust optimization methods on the 6-bus system and IEEE 118-bus system, which demonstrates the effectiveness and advantages of the proposed method.
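The KL ambiguity set used in such models has a classical dual representation: the worst-case expectation over all distributions within KL radius eta of the empirical distribution equals inf over a > 0 of a·log E[exp(X/a)] + a·eta. A rough numerical sketch (not the paper's MINLP reformulation; a crude grid search stands in for a proper one-dimensional convex solver, and the cost values are hypothetical):

```python
import math

def worst_case_mean(samples, eta):
    """Worst-case mean over the KL ambiguity ball {Q : KL(Q || P_hat) <= eta}
    around the empirical distribution of `samples`, computed via the dual
    inf_{a > 0}  a * log E[exp(X / a)] + a * eta."""
    def dual(a):
        m = max(samples)  # log-sum-exp shift for numerical stability
        lse = math.log(sum(math.exp((x - m) / a) for x in samples) / len(samples))
        return m + a * lse + a * eta
    # crude geometric grid over the dual variable a
    return min(dual(0.01 * 1.2 ** k) for k in range(80))

costs = [1.0, 2.0, 5.0]  # hypothetical scenario costs
print(worst_case_mean(costs, eta=0.0))  # close to the empirical mean
print(worst_case_mean(costs, eta=0.5))  # larger: hedges against distribution shift
```

As eta grows, the bound moves from the empirical mean toward the worst single scenario, which is the sense in which KL-ball robustness interpolates between stochastic and conventional robust optimization.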
8

Makalic, E., and D. F. Schmidt. "Fast Computation of the Kullback–Leibler Divergence and Exact Fisher Information for the First-Order Moving Average Model". IEEE Signal Processing Letters 17, no. 4 (April 2010): 391–93. http://dx.doi.org/10.1109/lsp.2009.2039659.

Full text
9

Weijs, Steven V., and Nick van de Giesen. "Accounting for Observational Uncertainty in Forecast Verification: An Information-Theoretical View on Forecasts, Observations, and Truth". Monthly Weather Review 139, no. 7 (July 1, 2011): 2156–62. http://dx.doi.org/10.1175/2011mwr3573.1.

Full text
Abstract:
Recently, an information-theoretical decomposition of Kullback–Leibler divergence into uncertainty, reliability, and resolution was introduced. In this article, this decomposition is generalized to the case where the observation is uncertain. Along with a modified decomposition of the divergence score, a second measure, the cross-entropy score, is presented, which measures the estimated information loss with respect to the truth instead of relative to the uncertain observations. The difference between the two scores is equal to the average observational uncertainty and vanishes when observations are assumed to be perfect. Not accounting for observation uncertainty can lead to both overestimation and underestimation of forecast skill, depending on the nature of the noise process.
10

Gao, Zhang, Xiao, and Li. "Kullback–Leibler Divergence Based Probabilistic Approach for Device-Free Localization Using Channel State Information". Sensors 19, no. 21 (November 3, 2019): 4783. http://dx.doi.org/10.3390/s19214783.

Full text
Abstract:
Recently, people have become more and more interested in wireless sensing applications, among which indoor localization is one of the most attractive. Generally, indoor localization can be classified as device-based and device-free localization (DFL). The former requires a target to carry certain devices or sensors to assist the localization process, whereas the latter has no such requirement, merely requiring the wireless network to be deployed around the environment to sense the target, rendering it much more challenging. Channel State Information (CSI), a kind of information collected in the physical layer, is composed of multiple subcarriers, boasting highly fine granularity, and has gradually become a focus of indoor localization applications. In this paper, we propose an approach to performing DFL tasks by exploiting the uncertainty of CSI. We respectively utilize the CSI amplitudes and phases of multiple communication links to construct fingerprints, each of which is a set of multivariate Gaussian distributions that reflect the uncertainty information of CSI. Additionally, we propose a kind of combined fingerprint to simultaneously utilize the CSI amplitudes and phases, hoping to improve localization accuracy. Then, we adopt a Kullback–Leibler divergence (KL-divergence) based kernel function to calculate the probabilities that a testing fingerprint belongs to all the reference locations. Next, to localize the target, we utilize the computed probabilities as weights to average the reference locations. Experimental results show that the proposed approach, whatever type of fingerprint is used, outperforms the existing Pilot and Nuzzer systems in two typical indoor environments. We conduct extensive experiments to explore the effects of different parameters on localization performance, and the results demonstrate the efficiency of the proposed approach.
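The weighting step the abstract describes, turning KL divergences between a test fingerprint and reference fingerprints into location weights, can be sketched as follows. The 1-D Gaussian fingerprints, the kernel exp(-gamma * KL), and all numbers are illustrative assumptions, not the paper's exact model:

```python
import math

def kl_gauss(m1, v1, m2, v2):
    """KL divergence KL(N(m1, v1) || N(m2, v2)) between scalar Gaussians."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def localize(test_fp, reference_fps, locations, gamma=1.0):
    """Weight each reference location by a KL-divergence kernel exp(-gamma*KL)
    and return the weighted average location (illustrative 1-D fingerprints)."""
    weights = [math.exp(-gamma * kl_gauss(*test_fp, *fp)) for fp in reference_fps]
    total = sum(weights)
    return sum(w * loc for w, loc in zip(weights, locations)) / total

# hypothetical CSI fingerprints: (mean, variance) per reference location
refs = [(0.0, 1.0), (5.0, 1.0)]
est = localize(test_fp=(0.5, 1.0), reference_fps=refs, locations=[0.0, 10.0])
print(est)  # close to 0.0: the test fingerprint nearly matches the first reference
```

The kernel collapses the influence of poorly matching references exponentially fast, so the weighted average is dominated by the nearest fingerprints rather than being pulled toward the centroid of all reference points.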

Theses on the topic "Kullback-Leibler average"

1

Fantacci, Claudio. "Distributed multi-object tracking over sensor networks: a random finite set approach". Doctoral thesis, 2015. http://hdl.handle.net/2158/1003256.

Full text
Abstract:
The aim of the present dissertation is to address distributed tracking over a network of heterogeneous and geographically dispersed nodes (or agents) with sensing, communication and processing capabilities. Tracking is carried out in the Bayesian framework and its extension to a distributed context is made possible via an information-theoretic approach to data fusion which exploits consensus algorithms and the notion of Kullback–Leibler Average (KLA) of the Probability Density Functions (PDFs) to be fused. The first step toward distributed tracking considers a single moving object. Consensus takes place in each agent for spreading information over the network so that each node can track the object. To achieve such a goal, consensus is carried out on the local single-object posterior distribution, which is the result of local data processing, in the Bayesian setting, exploiting the last available measurement about the object. Such an approach is called Consensus on Posteriors (CP). The first contribution of the present work is an improvement to the CP algorithm, namely Parallel Consensus on Likelihoods and Priors (CLCP). The idea is to carry out, in parallel, a separate consensus for the novel information (likelihoods) and one for the prior information (priors). This parallel procedure is conceived to avoid underweighting the novel information during the fusion steps. The outcomes of the two consensuses are then combined to provide the fused posterior density. Furthermore, the case of a single highly-maneuvering object is addressed. To this end, the object is modeled as a jump Markovian system and the multiple model (MM) filtering approach is adopted for local estimation. Thus, the consensus algorithms need to be re-designed to cope with this new scenario. The second contribution has been to devise two novel consensus MM filters to be used for tracking a maneuvering object.
The novel consensus-based MM filters are based on the First Order Generalized Pseudo-Bayesian (GPB1) and Interacting Multiple Model (IMM) filters. The next step is in the direction of distributed estimation of multiple moving objects. In order to model, in a rigorous and elegant way, a possibly time-varying number of objects present in a given area of interest, the Random Finite Set (RFS) formulation is adopted since it provides the notion of probability density for multi-object states that allows to directly extend existing tools in distributed estimation to multi-object tracking. The multi-object Bayes filter proposed by Mahler is a theoretically grounded solution to recursive Bayesian tracking based on RFSs. However, the multi-object Bayes recursion, unlike the single-object counterpart, is affected by combinatorial complexity and is, therefore, computationally infeasible except for very small-scale problems involving few objects and/or measurements. For this reason, the computationally tractable Probability Hypothesis Density (PHD) and Cardinalized PHD (CPHD) filtering approaches will be used as a first endeavour to distributed multiobject filtering. The third contribution is the generalisation of the single-object KLA to the RFS framework, which is the theoretical fundamental step for developing a novel consensus algorithm based on CPHD filtering, namely the Consensus CPHD (CCPHD). Each tracking agent locally updates multi-object CPHD, i.e. the cardinality distribution and the PHD, exploiting the multi-object dynamics and the available local measurements, exchanges such information with communicating agents and then carries out a fusion step to combine the information from all neighboring agents. The last theoretical step of the present dissertation is toward distributed filtering with the further requirement of unique object identities. To this end the labeled RFS framework is adopted as it provides a tractable approach to the multi-object Bayesian recursion. 
The δ-GLMB filter is an exact closed-form solution to the multi-object Bayes recursion which jointly yields state and label (or trajectory) estimates in the presence of clutter, misdetections and association uncertainty. Due to the presence of explicit data associations in the δ-GLMB filter, the number of components in the posterior grows without bound in time. The fourth contribution of this thesis is an efficient approximation of the δ-GLMB filter, namely Marginalized δ-GLMB (Mδ-GLMB), which preserves key summary statistics (i.e. both the PHD and cardinality distribution) of the full labeled posterior. This approximation also facilitates efficient multi-sensor tracking with detection-based measurements. Simulation results are presented to verify the proposed approach. Finally, distributed labeled multi-object tracking over sensor networks is taken into account. The last contribution is a further generalization of the KLA to the labeled RFS framework, which enables the development of two novel consensus tracking filters, namely the Consensus Marginalized δ-Generalized Labeled Multi-Bernoulli (CM-δGLMB) and the Consensus Labeled Multi-Bernoulli (CLMB) tracking filters. The proposed algorithms provide a fully distributed, scalable and computationally efficient solution for multi-object tracking. Simulation experiments on challenging single-object or multi-object tracking scenarios confirm the effectiveness of the proposed contributions.
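The KLA that underpins the consensus fusion in this thesis is the distribution minimizing the weighted sum of KL divergences to the densities being fused; for PDFs this works out to a normalized weighted geometric mean. A sketch for discrete pmfs with illustrative distributions:

```python
import math

def kl(p, q):
    """Directed Kullback-Leibler divergence KL(p || q) for discrete pmfs."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def kla(pmfs, weights):
    """Kullback-Leibler average of discrete pmfs: the pmf minimizing the
    weighted sum of divergences KL(p_bar || p_i), which equals the
    normalized weighted geometric mean of the inputs."""
    fused = [math.prod(p[k] ** w for p, w in zip(pmfs, weights))
             for k in range(len(pmfs[0]))]
    total = sum(fused)
    return [f / total for f in fused]

p1 = [0.7, 0.2, 0.1]
p2 = [0.1, 0.2, 0.7]
p_bar = kla([p1, p2], weights=[0.5, 0.5])
print(p_bar)  # symmetric pmf lying "between" p1 and p2
```

Repeated pairwise averaging of this form over a network graph is what the consensus iterations implement: every agent converges to the collective KLA without any node ever seeing all the local posteriors.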

Book chapters on the topic "Kullback-Leibler average"

1

"* for a replicate design account for VarD; * for example here, set VarD=0; varD=0.0; s = sqrt(varD + sigmaW*sigmaW); It is worth noting here that REML modelling in replicate designs and the resulting ABE assessments are sensitive to the way in which the variance-covariance matrix is constructed (Patterson and Jones, 2002a). The recommended FDA procedure (FDA Guidance, 2001) provides biased variance estimates (Patterson and Jones, 2002c) in certain situations; however, it also constrains the Type I error rate to be less than 5% for average bioequivalence due to the constraints placed on the variance-covariance parameter space, which is a desirable property for regulators reviewing such data. 7.7 Kullback–Leibler divergence. Dragalin and Fedorov (1999) and Dragalin et al. (2002) pointed out some disadvantages of using the metrics for ABE, PBE and IBE that we have described in the previous sections, and proposed a unified approach to equivalence testing based on the Kullback–Leibler divergence (KLD) (Kullback and Leibler, 1951). In this approach bioequivalence testing is regarded as evaluating the distance between two distributions of selected pharmacokinetic statistics or parameters for T and R. For example, the selected statistics might be log(AUC) or log(Cmax), as used in the previous sections. To demonstrate bioequivalence, the hypotheses H0 : d(fT, fR) ≥ d0 vs. H1 : d(fT, fR) < d0 (7.15) are tested, where fT and fR are the appropriate density functions of the observations from T and R, respectively, and d0 is a pre-defined boundary or goal-post. Equivalence is determined if the null hypothesis is rejected; for convenience the upper bound dU of a 90% confidence interval for d is compared with d0: if dU < d0 then bioequivalence is accepted, otherwise it is rejected. If the statistics (e.g., log(AUC)) for T and R are normally distributed with means μT and μR, respectively, and the same variance σ², the KLD for ABE becomes d(fT, fR) = (μT − μR)²/σ², which differs from the (unscaled) measure defined in Section 4.2." In Design and Analysis of Cross-Over Trials, 371. Chapman and Hall/CRC, 2003. http://dx.doi.org/10.1201/9781420036091-26.

Full text
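Under the chapter's equal-variance assumption, the KLD-based ABE metric of Dragalin and Fedorov reduces, for two scalar Gaussians sharing variance σ², to the symmetrized divergence KL(T||R) + KL(R||T) = (μT − μR)²/σ². A quick numerical check with hypothetical values (a sketch, not the book's code):

```python
import math

def kl_gauss(m1, v1, m2, v2):
    """KL divergence KL(N(m1, v1) || N(m2, v2)) for scalar Gaussians."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def symmetric_kl_equal_var(mu_t, mu_r, sigma2):
    """Closed form when both densities share variance sigma2:
    KL(T || R) + KL(R || T) = (mu_t - mu_r)^2 / sigma2."""
    return (mu_t - mu_r) ** 2 / sigma2

mu_t, mu_r, sigma2 = 1.0, 0.2, 0.5  # hypothetical log(AUC) means and variance
lhs = kl_gauss(mu_t, sigma2, mu_r, sigma2) + kl_gauss(mu_r, sigma2, mu_t, sigma2)
print(lhs, symmetric_kl_equal_var(mu_t, mu_r, sigma2))  # both equal 1.28
```

Scaling the squared mean difference by σ² is what makes the KLD criterion a genuine divergence between the T and R distributions, rather than the unscaled mean-difference measure mentioned in the excerpt.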

Conference papers on the topic "Kullback-Leibler average"

1

Lu, Kelin, Kuo-Chu Chang, and Rui Zhou. "Weighted Kullback-Leibler average-based distributed filtering algorithm". In SPIE Defense + Security, edited by Ivan Kadar. SPIE, 2015. http://dx.doi.org/10.1117/12.2177493.

Full text
2

Wang, Baobao, and Lianzhong Zhang. "Weight Kullback-Leibler Average Interactive multiple model Probabilistic Data Association Filter". In 2020 5th International Conference on Mechanical, Control and Computer Engineering (ICMCCE). IEEE, 2020. http://dx.doi.org/10.1109/icmcce51767.2020.00269.

Full text
3

Fan, Cody, Tsang-Kai Chang, and Ankur Mehta. "Kullback-Leibler Average of von Mises Distributions in Multi-Agent Systems". In 2020 59th IEEE Conference on Decision and Control (CDC). IEEE, 2020. http://dx.doi.org/10.1109/cdc42340.2020.9303876.

Full text
4

YuMing, Du. "Evaluation Criterion of Linear Model Order Selection Approaches Based Average Kullback-Leibler Divergence". In 2009 WRI Global Congress on Intelligent Systems. IEEE, 2009. http://dx.doi.org/10.1109/gcis.2009.340.

Full text
5

Katariya, Sumeet, Branislav Kveton, Csaba Szepesvári, Claire Vernade, and Zheng Wen. "Bernoulli Rank-1 Bandits for Click Feedback". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/278.

Full text
Abstract:
The probability that a user will click a search result depends both on its relevance and its position on the results page. The position based model explains this behavior by ascribing to every item an attraction probability, and to every position an examination probability. To be clicked, a result must be both attractive and examined. The probabilities of an item-position pair being clicked thus form the entries of a rank-1 matrix. We propose the learning problem of a Bernoulli rank-1 bandit where at each step, the learning agent chooses a pair of row and column arms, and receives the product of their Bernoulli-distributed values as a reward. This is a special case of the stochastic rank-1 bandit problem considered in recent work that proposed an elimination based algorithm Rank1Elim, and showed that Rank1Elim's regret scales linearly with the number of rows and columns on "benign" instances. These are the instances where the minimum of the average row and column rewards mu is bounded away from zero. The issue with Rank1Elim is that it fails to be competitive with straightforward bandit strategies as mu tends to 0. In this paper we propose Rank1ElimKL, which replaces the crude confidence intervals of Rank1Elim with confidence intervals based on Kullback-Leibler (KL) divergences. With the help of a novel result concerning the scaling of KL divergences we prove that with this change, our algorithm will be competitive no matter the value of mu. Experiments with synthetic data confirm that on benign instances the performance of Rank1ElimKL is significantly better than that of even Rank1Elim. Similarly, experiments with models derived from real-data confirm that the improvements are significant across the board, regardless of whether the data is benign or not.
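The KL-based confidence intervals that distinguish Rank1ElimKL are in the spirit of KL-UCB for Bernoulli rewards: the upper confidence bound is the largest mean q whose KL divergence from the empirical mean stays under a log(t)-scaled threshold. A hedged sketch (the paper's exact threshold and elimination schedule may differ):

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(p_hat, n, t, c=0.0):
    """KL-based upper confidence bound after n pulls at round t: the largest
    q >= p_hat with n * KL(p_hat || q) <= log(t) + c * log(log(t)),
    found by bisection (KL(p_hat || q) is increasing in q on [p_hat, 1])."""
    threshold = (math.log(t) + c * math.log(max(math.log(t), 1.0))) / n
    lo, hi = p_hat, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= threshold:
            lo = mid
        else:
            hi = mid
    return lo

print(kl_ucb(p_hat=0.2, n=50, t=1000))  # a bound strictly between 0.2 and 1
```

Because the KL interval adapts to the curvature of the Bernoulli likelihood, it tightens much faster than a Hoeffding-style interval for means near 0 or 1, which is precisely the regime (small mu) where the abstract says the crude intervals of Rank1Elim fail.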
