Journal articles on the topic "Kullback–Leibler average"

Consult the top 50 journal articles for your research on the topic "Kullback–Leibler average".


Browse journal articles across diverse disciplines and organize your bibliography correctly.

1

Luan, Yu, Hong Zuo Li and Ya Fei Wang. "Acoustic Features Selection of Speaker Verification Based on Average KL Distance". Applied Mechanics and Materials 373-375 (August 2013): 629–33. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.629.

Abstract:
This paper proposes a new average Kullback-Leibler distance to build an optimal feature selection algorithm for matching-score fusion in speaker verification. This novel distance overcomes the asymmetry of the conventional Kullback-Leibler distance, ensuring accurate and robust computation of the information content shared between the matching scores of two acoustic features. Experimental results across a variety of fusion schemes show that the matching-score fusion of MFCC and residual phase gains the most information content, indicating that this scheme yields excellent performance.
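As a rough illustration of the symmetrization idea this abstract describes, the average Kullback-Leibler distance between two discrete distributions can be sketched as follows (function names and example distributions are illustrative, not taken from the paper):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler distance D(p || q) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # 0 * log(0/q) = 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def average_kl(p, q):
    """Symmetrized (average) KL distance: 0.5 * (D(p||q) + D(q||p))."""
    return 0.5 * (kl(p, q) + kl(q, p))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
# Unlike kl(p, q), average_kl(p, q) equals average_kl(q, p).
d = average_kl(p, q)
```

Averaging the two directed divergences removes the asymmetry, which is exactly the property the abstract exploits for robust information-content comparisons.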

2

Lu, Wanbo, and Wenhui Shi. "Model Averaging Estimation Method by Kullback–Leibler Divergence for Multiplicative Error Model". Complexity 2022 (April 27, 2022): 1–13. http://dx.doi.org/10.1155/2022/7706992.

Abstract:
In this paper, we propose a model averaging estimation method for the multiplicative error model and construct the corresponding weight-choosing criterion based on the Kullback–Leibler divergence with a hyperparameter to avoid the problem of overfitting. The resulting model average estimator is proved to be asymptotically optimal. It is shown that the Kullback–Leibler model averaging (KLMA) estimator asymptotically minimizes the in-sample Kullback–Leibler divergence and improves out-of-sample forecast accuracy even under different loss functions. In simulations, we show that the KLMA estimator compares favorably with the smooth-AIC estimator (SAIC), the smooth-BIC estimator (SBIC), and the Mallows model averaging estimator (MMA), especially when some nonlinear noise is added to the data generation process. Empirical applications to the daily range of the S&P500 and the price duration of IBM show that the out-of-sample forecasting capacity of the KLMA estimator is better than that of the other methods.

3

Nielsen, Frank. "On the Jensen–Shannon Symmetrization of Distances Relying on Abstract Means". Entropy 21, no. 5 (May 11, 2019): 485. http://dx.doi.org/10.3390/e21050485.

Abstract:
The Jensen–Shannon divergence is a renowned bounded symmetrization of the unbounded Kullback–Leibler divergence which measures the total Kullback–Leibler divergence to the average mixture distribution. However, the Jensen–Shannon divergence between Gaussian distributions is not available in closed form. To bypass this problem, we present a generalization of the Jensen–Shannon (JS) divergence using abstract means which yields closed-form expressions when the mean is chosen according to the parametric family of distributions. More generally, we define the JS-symmetrizations of any distance using parameter mixtures derived from abstract means. In particular, we first show that the geometric mean is well-suited for exponential families, and report two closed-form formulas for (i) the geometric Jensen–Shannon divergence between probability densities of the same exponential family; and (ii) the geometric JS-symmetrization of the reverse Kullback–Leibler divergence between probability densities of the same exponential family. As a second illustrating example, we show that the harmonic mean is well-suited for the scale Cauchy distributions, and report a closed-form formula for the harmonic Jensen–Shannon divergence between scale Cauchy distributions. Applications to clustering with respect to these novel Jensen–Shannon divergences are touched upon.
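The core construction in this abstract — the ordinary Jensen–Shannon divergence as the average Kullback–Leibler divergence to the arithmetic mixture — can be sketched for discrete distributions (a minimal illustration, not code from the paper):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def jensen_shannon(p, q):
    """JS divergence: average KL of p and q to their mixture (p+q)/2.
    Bounded by log(2), unlike the unbounded KL divergence."""
    mix = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return 0.5 * kl(p, mix) + 0.5 * kl(q, mix)

p, q = [1.0, 0.0], [0.0, 1.0]
js = jensen_shannon(p, q)  # disjoint supports attain the log(2) bound
```

The paper generalizes exactly this averaging step by replacing the arithmetic mixture with abstract (e.g., geometric or harmonic) means.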

4

Battistelli, Giorgio, and Luigi Chisci. "Kullback–Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability". Automatica 50, no. 3 (March 2014): 707–18. http://dx.doi.org/10.1016/j.automatica.2013.11.042.

5

Hsu, Chia-Ling, and Wen-Chung Wang. "Multidimensional Computerized Adaptive Testing Using Non-Compensatory Item Response Theory Models". Applied Psychological Measurement 43, no. 6 (October 26, 2018): 464–80. http://dx.doi.org/10.1177/0146621618800280.

Abstract:
Current use of multidimensional computerized adaptive testing (MCAT) has been developed in conjunction with compensatory multidimensional item response theory (MIRT) models rather than with non-compensatory ones. In recognition of the usefulness of MCAT and the complications associated with non-compensatory data, this study aimed to develop MCAT algorithms using non-compensatory MIRT models and to evaluate their performance. For the purpose of the study, three item selection methods were adapted and compared, namely, the Fisher information method, the mutual information method, and the Kullback–Leibler information method. The results of a series of simulations showed that the Fisher information and mutual information methods performed similarly, and both outperformed the Kullback–Leibler information method. In addition, it was found that the more stringent the termination criterion and the higher the correlation between the latent traits, the higher the resulting measurement precision and test reliability. Test reliability was very similar across the dimensions, regardless of the correlation between the latent traits and termination criterion. On average, the difficulties of the administered items were found to be at a lower level than the examinees’ abilities, which shed light on item bank construction for non-compensatory items.

6

Marsh, Patrick. "THE PROPERTIES OF KULLBACK–LEIBLER DIVERGENCE FOR THE UNIT ROOT HYPOTHESIS". Econometric Theory 25, no. 6 (December 2009): 1662–81. http://dx.doi.org/10.1017/s0266466609990284.

Abstract:
The fundamental contributions made by Paul Newbold have highlighted how crucial it is to detect when economic time series have unit roots. This paper explores the effects that model specification has on our ability to do that. Asymptotic power, a natural choice to quantify these effects, does not accurately predict finite-sample power. Instead, here the Kullback–Leibler divergence between the unit root null and any alternative is used and its numeric and analytic properties detailed. Numerically it behaves in a similar way to finite-sample power. However, because it is analytically available we are able to prove that it is a minimizable function of the degree of trending in any included deterministic component and of the correlation of the underlying innovations. It is explicitly confirmed, therefore, that it is approximately linear trends and negative unit root moving average innovations that minimize the efficacy of unit root inferential tools. Applied to the Nelson and Plosser macroeconomic series, the effect that different types of trends included in the model have on unit root inference is clearly revealed.

7

Yang, Ce, Dong Han, Weiqing Sun and Kunpeng Tian. "Distributionally Robust Model of Energy and Reserve Dispatch Based on Kullback–Leibler Divergence". Electronics 8, no. 12 (December 1, 2019): 1454. http://dx.doi.org/10.3390/electronics8121454.

Abstract:
This paper proposes a distance-based distributionally robust energy and reserve (DB-DRER) dispatch model via Kullback–Leibler (KL) divergence, considering the volatility of renewable energy generation. Firstly, a two-stage optimization model is formulated to minimize the expected total cost of energy and reserve (ER) dispatch. Then, KL divergence is adopted to establish the ambiguity set. Distinguished from conventional robust optimization methodology, the volatile output of renewable power generation is assumed to follow an unknown probability distribution restricted to the ambiguity set. DB-DRER aims at minimizing the expected total cost under the worst-case probability distribution of renewables. Combined with the designed empirical distribution function, the proposed DB-DRER model can be reformulated into a mixed integer nonlinear programming (MINLP) problem. Furthermore, using generalized Benders decomposition, a decomposition method is proposed, and the sample average approximation (SAA) method is applied to solve this problem. Finally, simulation results of the proposed method are compared with those of stochastic optimization and conventional robust optimization methods on the 6-bus system and the IEEE 118-bus system, which demonstrates the effectiveness and advantages of the proposed method.

8

Makalic, E., and D. F. Schmidt. "Fast Computation of the Kullback–Leibler Divergence and Exact Fisher Information for the First-Order Moving Average Model". IEEE Signal Processing Letters 17, no. 4 (April 2010): 391–93. http://dx.doi.org/10.1109/lsp.2009.2039659.

9

Weijs, Steven V., and Nick van de Giesen. "Accounting for Observational Uncertainty in Forecast Verification: An Information-Theoretical View on Forecasts, Observations, and Truth". Monthly Weather Review 139, no. 7 (July 1, 2011): 2156–62. http://dx.doi.org/10.1175/2011mwr3573.1.

Abstract:
Recently, an information-theoretical decomposition of Kullback–Leibler divergence into uncertainty, reliability, and resolution was introduced. In this article, this decomposition is generalized to the case where the observation is uncertain. Along with a modified decomposition of the divergence score, a second measure, the cross-entropy score, is presented, which measures the estimated information loss with respect to the truth instead of relative to the uncertain observations. The difference between the two scores is equal to the average observational uncertainty and vanishes when observations are assumed to be perfect. Not accounting for observation uncertainty can lead to both overestimation and underestimation of forecast skill, depending on the nature of the noise process.

10

Gao, Zhang, Xiao and Li. "Kullback–Leibler Divergence Based Probabilistic Approach for Device-Free Localization Using Channel State Information". Sensors 19, no. 21 (November 3, 2019): 4783. http://dx.doi.org/10.3390/s19214783.

Abstract:
Recently, people have become more and more interested in wireless sensing applications, among which indoor localization is one of the most attractive. Generally, indoor localization can be classified as device-based or device-free localization (DFL). The former requires a target to carry certain devices or sensors to assist the localization process, whereas the latter has no such requirement, merely requiring the wireless network deployed around the environment to sense the target, rendering it much more challenging. Channel State Information (CSI), a kind of information collected in the physical layer, is composed of multiple subcarriers with highly fine granularity, and has gradually become a focus of indoor localization applications. In this paper, we propose an approach to performing DFL tasks by exploiting the uncertainty of CSI. We respectively utilize the CSI amplitudes and phases of multiple communication links to construct fingerprints, each of which is a set of multivariate Gaussian distributions that reflect the uncertainty information of CSI. Additionally, we propose a kind of combined fingerprint that simultaneously utilizes the CSI amplitudes and phases, hoping to improve localization accuracy. Then, we adopt a Kullback–Leibler divergence (KL-divergence) based kernel function to calculate the probabilities that a testing fingerprint belongs to each of the reference locations. Next, to localize the target, we utilize the computed probabilities as weights to average the reference locations. Experimental results show that the proposed approach, regardless of the type of fingerprint used, outperforms the existing Pilot and Nuzzer systems in two typical indoor environments. We conduct extensive experiments to explore the effects of different parameters on localization performance, and the results demonstrate the efficiency of the proposed approach.
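As a loose sketch of the probabilistic step this abstract describes — KL-divergence-based kernel weights used to average reference locations — consider univariate Gaussian fingerprints (all function names, fingerprint values, and coordinates below are invented for illustration; the paper uses multivariate Gaussians over CSI subcarriers):

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL divergence between 1-D Gaussians N(m1, s1^2) and N(m2, s2^2)."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def localize(test, references):
    """Weight each reference location by a KL-based kernel exp(-KL),
    then return the weighted average of the reference coordinates.
    `references` maps (x, y) -> (mean, std) fingerprint."""
    weights = {loc: math.exp(-kl_gauss(*test, *fp)) for loc, fp in references.items()}
    z = sum(weights.values())
    x = sum(w * loc[0] for loc, w in weights.items()) / z
    y = sum(w * loc[1] for loc, w in weights.items()) / z
    return x, y

# Test fingerprint matches the first reference location almost exactly.
refs = {(0.0, 0.0): (-40.0, 2.0), (5.0, 0.0): (-55.0, 2.0)}
est = localize((-40.0, 2.0), refs)
```

The exp(-KL) kernel converts divergences into probability-like weights, so locations whose fingerprints are close to the test fingerprint dominate the weighted average.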

11

Liang, Yunyun, and Shengli Zhang. "Integrating Second-order Moving Average and Over-sampling Algorithm to Predict Apoptosis Protein Subcellular Localization". Current Bioinformatics 15, no. 6 (November 11, 2020): 517–27. http://dx.doi.org/10.2174/1574893614666190902155811.

Abstract:
Background: Apoptosis proteins have a key role in the development and homeostasis of the organism, and are very important for understanding the mechanism of cell proliferation and death. The function of an apoptosis protein is closely related to its subcellular location. Objective: Prediction of apoptosis protein subcellular localization is a meaningful task. Methods: In this study, we predict the apoptosis protein subcellular location by using the PSSM-based second-order moving average descriptor, nonnegative matrix factorization based on Kullback-Leibler divergence, and over-sampling algorithms. This model is named SOMAPKLNMF-OS and constructed on the ZD98, ZW225 and CL317 benchmark datasets. Then, the support vector machine is adopted as the classifier, and the bias-free jackknife test method is used to evaluate the accuracy. Results: Our prediction system achieves favorable and promising overall accuracy on the three datasets and also outperforms the other listed models. Conclusion: The results show that our model offers a high-throughput tool for the identification of apoptosis protein subcellular localization.

12

Gul, Noor, Ijaz Mansoor Qureshi, Sadiq Akbar, Muhammad Kamran and Imtiaz Rasool. "One-to-Many Relationship Based Kullback Leibler Divergence against Malicious Users in Cooperative Spectrum Sensing". Wireless Communications and Mobile Computing 2018 (September 2, 2018): 1–14. http://dx.doi.org/10.1155/2018/3153915.

Abstract:
Centralized cooperative spectrum sensing (CSS) allows unlicensed users to share their local sensing observations with the fusion center (FC) for sensing the licensed user spectrum. Although collaboration leads to better sensing, malicious user (MU) participation in CSS results in performance degradation. The proposed technique is based on a Kullback–Leibler divergence (KLD) algorithm for mitigating MU attacks in CSS. The secondary users (SUs) inform the FC about the primary user (PU) spectrum availability by sending received energy statistics. Unlike the previous KLD algorithm, where the individual SU sensing information is utilized for measuring the KLD, in this work MUs are identified and separated based on the individual SU decision and the average sensing statistics received from all other users. The proposed KLD assigns lower weights to the sensing information of MUs, while the normal SUs' information receives higher weights. The proposed method has been tested in the presence of always-yes, always-no, opposite, and random-opposite MUs. Simulations confirm that the proposed KLD scheme surpasses the existing soft combination schemes in estimating the PU activity.

13

Darscheid, Paul, Anneli Guthke and Uwe Ehret. "A Maximum-Entropy Method to Estimate Discrete Distributions from Samples Ensuring Nonzero Probabilities". Entropy 20, no. 8 (August 13, 2018): 601. http://dx.doi.org/10.3390/e20080601.

Abstract:
When constructing discrete (binned) distributions from samples of a data set, applications exist where it is desirable to assure that all bins of the sample distribution have nonzero probability: for example, if the sample distribution is part of a predictive model for which we require returning a response for the entire codomain, or if we use Kullback–Leibler divergence to measure the (dis-)agreement of the sample distribution and the original distribution of the variable, which, in the described case, is inconveniently infinite. Several sample-based distribution estimators exist which assure nonzero bin probability, such as adding one counter to each zero-probability bin of the sample histogram, adding a small probability to the sample pdf, smoothing methods such as kernel-density smoothing, or Bayesian approaches based on the Dirichlet and multinomial distributions. Here, we suggest and test an approach based on the Clopper–Pearson method, which makes use of the binomial distribution. Based on the sample distribution, confidence intervals for bin-occupation probability are calculated. The mean of each confidence interval is a strictly positive estimator of the true bin-occupation probability and is convergent with increasing sample size. For small samples, it converges towards a uniform distribution, i.e., the method effectively applies a maximum-entropy approach. We apply this nonzero method and four alternative sample-based distribution estimators to a range of typical distributions (uniform, Dirac, normal, multimodal, and irregular) and measure the effect with Kullback–Leibler divergence. While the performance of each method strongly depends on the distribution type it is applied to, on average, and especially for small sample sizes, the nonzero, the simple "add one counter", and the Bayesian Dirichlet-multinomial model show very similar behavior and perform best. We conclude that, when estimating distributions without an a priori idea of their shape, applying one of these methods is favorable.
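Of the estimators compared in this abstract, the simplest is the "add one counter" (Laplace) estimator; a minimal sketch, assuming discrete bin counts, shows why guaranteeing nonzero bins keeps the Kullback–Leibler divergence finite:

```python
import numpy as np

def add_one_estimator(counts):
    """'Add one counter' estimator: one pseudo-count per bin guarantees
    strictly positive bin probabilities."""
    c = np.asarray(counts, dtype=float) + 1.0
    return c / c.sum()

def kl(p, q):
    """KL divergence D(p || q); infinite if q has a zero bin where p does not."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

counts = np.array([6, 3, 1, 0])   # small sample with one empty bin
est = add_one_estimator(counts)   # every bin strictly positive
# Against a true distribution with mass in the last bin, the KL
# divergence stays finite only because est has no zero bins.
d = kl(np.array([0.4, 0.3, 0.2, 0.1]), est)
```

The paper's Clopper–Pearson-based estimator achieves the same nonzero guarantee via confidence-interval means rather than a fixed pseudo-count, but this simple comparator illustrates the shared goal.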

14

Bounoua, Wahiba, Amina B. Benkara, Abdelmalek Kouadri and Azzeddine Bakdi. "Online monitoring scheme using principal component analysis through Kullback-Leibler divergence analysis technique for fault detection". Transactions of the Institute of Measurement and Control 42, no. 6 (December 18, 2019): 1225–38. http://dx.doi.org/10.1177/0142331219888370.

Abstract:
Principal component analysis (PCA) is a common tool in the literature and widely used for process monitoring and fault detection. Traditional PCA is associated with two well-known control charts, Hotelling's T2 and the squared prediction error (SPE), as monitoring statistics. This paper develops the use of new measures based on a distribution dissimilarity technique named Kullback-Leibler divergence (KLD) through PCA by measuring the difference between online estimated and offline reference density functions. For processes with PCA scores following a multivariate Gaussian distribution, KLD is computed on both the principal and residual subspaces defined by PCA in a moving window to extract the local disparity information. The potential of the proposed algorithm is then demonstrated through an application to two well-known processes in the chemical industries: the Tennessee Eastman process as a reference benchmark and a three-tank system as an experimental validation. The monitoring performance was compared to recent results from other multivariate statistical process monitoring (MSPM) techniques. The proposed method showed superior robustness and effectiveness, recording the lowest average missed detection and false alarm rates in process fault detection.

15

Fuentes, Jesús, and Jorge Gonçalves. "Rényi Entropy in Statistical Mechanics". Entropy 24, no. 8 (August 5, 2022): 1080. http://dx.doi.org/10.3390/e24081080.

Abstract:
Rényi entropy was originally introduced in the field of information theory as a parametric relaxation of Shannon (in physics, Boltzmann–Gibbs) entropy. This has also fuelled different attempts to generalise statistical mechanics, although mostly skipping the physical arguments behind this entropy and instead tending to introduce it artificially. However, as we will show, modifications to the theory of statistical mechanics are needless to see how Rényi entropy automatically arises as the average rate of change of free energy over an ensemble at different temperatures. Moreover, this notion is extended by considering distributions for isospectral, non-isothermal processes, resulting in relative versions of free energy, in which the Kullback–Leibler divergence or the relative version of Rényi entropy appear within the structure of the corrections to free energy. These generalisations of free energy recover the ordinary thermodynamic potential whenever isothermal processes are considered.

16

Rathnayake, Tharindu, Ruwan Tennakoon, Amirali Khodadadian Gostar, Alireza Bab-Hadiashar and Reza Hoseinnezhad. "Information Fusion for Industrial Mobile Platform Safety via Track-Before-Detect Labeled Multi-Bernoulli Filter". Sensors 19, no. 9 (April 29, 2019): 2016. http://dx.doi.org/10.3390/s19092016.

Abstract:
This paper presents a novel Track-Before-Detect (TBD) Labeled Multi-Bernoulli (LMB) filter tailored for industrial mobile platform safety applications. At the core of the developed solution are two techniques for the fusion of color and edge information in visual tracking. We derive an application-specific separable likelihood function that captures the geometric shape of human targets wearing safety vests. We use this novel geometric shape likelihood along with a color likelihood to devise two Bayesian update steps which fuse shape- and color-related information. One approach is sequential and the other is based on the weighted Kullback–Leibler average (KLA). Experimental results show that the KLA-based fusion variant of the proposed algorithm outperforms both the sequential-update-based variant and a state-of-the-art method in terms of the performance metrics commonly used in the computer vision literature.

17

Zhang, Jian, and Ling Shen. "Applied Technology in an Adaptive Particle Filter Based on Interval Estimation and KLD-Resampling". Advanced Materials Research 1014 (July 2014): 452–58. http://dx.doi.org/10.4028/www.scientific.net/amr.1014.452.

Abstract:
The particle filter, as a sequential Monte Carlo method, is widely applied in stochastic sampling for state estimation in a recursive Bayesian filtering framework. The efficiency and accuracy of the particle filter depend on the number of particles and the relocating method. The automatic selection of sample size for a given task is therefore essential for reducing unnecessary computation and for optimal performance, especially when the posterior distribution varies greatly over time. This paper presents an adaptive resampling method (IE_KLD_PF) based on interval estimation; after interval-estimating the expectation of the system states, the new algorithm adopts the Kullback-Leibler distance (KLD) to determine the number of particles to resample from the interval and updates the filter results with current observation information. Simulations are performed to show that the proposed filter can reduce the average number of samples significantly compared to the fixed-sample-size particle filter.

18

Merchant, Naveed, and Jeffrey D. Hart. "A Bayesian Motivated Two-Sample Test Based on Kernel Density Estimates". Entropy 24, no. 8 (August 3, 2022): 1071. http://dx.doi.org/10.3390/e24081071.

Abstract:
A new nonparametric test of equality of two densities is investigated. The test statistic is an average of log-Bayes factors, each of which is constructed from a kernel density estimate. Prior densities for the bandwidths of the kernel estimates are required, and it is shown how to choose priors so that the log-Bayes factors can be calculated exactly. Critical values of the test statistic are determined by a permutation distribution, conditional on the data. An attractive property of the methodology is that a critical value of 0 leads to a test for which both type I and II error probabilities tend to 0 as sample sizes tend to ∞. Existing results on Kullback–Leibler loss of kernel estimates are crucial to obtaining these asymptotic results, and also imply that the proposed test works best with heavy-tailed kernels. Finite sample characteristics of the test are studied via simulation, and extensions to multivariate data are straightforward, as illustrated by an application to bivariate connectionist data.

19

Newton, Paul K., and Stephen A. DeSalvo. "The Shannon entropy of Sudoku matrices". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 466, no. 2119 (February 10, 2010): 1957–75. http://dx.doi.org/10.1098/rspa.2009.0522.

Abstract:
We study properties of an ensemble of Sudoku matrices (a special type of doubly stochastic matrix when normalized) using their statistically averaged singular values. The determinants are very nearly Cauchy distributed about the origin. The largest singular value is 45, while the others decrease approximately linearly. The normalized singular values (obtained by dividing each singular value by the sum of all nine singular values) are then used to calculate the average Shannon entropy of the ensemble, a measure of the distribution of 'energy' among the singular modes and interpreted as a measure of the disorder of a typical matrix. We show the Shannon entropy of the ensemble to be 1.7331±0.0002, which is slightly lower than for an ensemble of 9×9 Latin squares, but higher than for a certain collection of 9×9 random matrices used for comparison. Using the notion of relative entropy or Kullback–Leibler divergence, which gives a measure of how one distribution differs from another, we show that the relative entropy between the ensemble of Sudoku matrices and Latin squares is of the order of 10⁻⁵. By contrast, the relative entropy between Sudoku matrices and the collection of random matrices has the much higher value, of the order of 10⁻³, with the Shannon entropy of the Sudoku matrices having better distribution among the modes. We finish by 'reconstituting' the 'average' Sudoku matrix from its averaged singular components.
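The entropy computation this abstract describes — Shannon entropy of singular values normalized to sum to one — can be sketched as follows (illustrative code; the paper's averaging over an ensemble of Sudoku matrices is omitted):

```python
import numpy as np

def singular_value_entropy(a):
    """Shannon entropy of the normalized singular values of a matrix,
    a measure of how 'energy' is spread among the singular modes."""
    s = np.linalg.svd(np.asarray(a, dtype=float), compute_uv=False)
    p = s / s.sum()          # normalize singular values to a distribution
    p = p[p > 0]             # 0 * log(0) = 0 by convention
    return float(-np.sum(p * np.log(p)))

h_rank1 = singular_value_entropy(np.outer([1, 2], [3, 4]))  # energy in one mode
h_id = singular_value_entropy(np.eye(2))                    # energy spread evenly
```

A rank-1 matrix concentrates all singular-value mass in one mode (entropy near 0), while the identity spreads it evenly (entropy log n), matching the abstract's reading of entropy as disorder.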

20

Xin, Dongjin, and Lingfeng Shi. "Trajectory Modeling by Distributed Gaussian Processes in Multiagent Systems". Sensors 22, no. 20 (October 17, 2022): 7887. http://dx.doi.org/10.3390/s22207887.

Abstract:
This paper considers a trajectory modeling problem for a multi-agent system using Gaussian processes. The Gaussian process, as a typical data-driven method, is well suited to characterize model uncertainties and perturbations in a complex environment. To address model uncertainties and noise disturbances, a distributed Gaussian process is proposed to characterize the system model by using local information exchange among neighboring agents, in which a number of agents cooperate without central coordination to estimate a common Gaussian process function based on local measurements and data received from neighbors. In addition, both the continuous-time system model and the discrete-time system model are considered: we design a control Lyapunov function to learn the continuous-time model, and a distributed model predictive control-based approach is used to learn the discrete-time model. Furthermore, we apply a Kullback–Leibler average consensus fusion algorithm to fuse the local prediction results (mean and variance) of the desired Gaussian process. The performance of the proposed distributed Gaussian process is analyzed and verified by two trajectory tracking examples.
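A minimal sketch of the Kullback–Leibler average consensus fusion mentioned in this abstract, for scalar Gaussian predictions (the function name and uniform weighting are illustrative; the paper fuses Gaussian process means and variances across agents):

```python
def kla_fuse(estimates, weights=None):
    """Weighted Kullback-Leibler average of 1-D Gaussian estimates.
    For Gaussians the KLA is computed in information form: the fused
    precision (resp. precision-weighted mean) is the weighted sum of the
    local precisions (resp. precision-weighted means)."""
    if weights is None:
        weights = [1.0 / len(estimates)] * len(estimates)
    prec = sum(w / var for w, (_, var) in zip(weights, estimates))
    info = sum(w * m / var for w, (m, var) in zip(weights, estimates))
    return info / prec, 1.0 / prec  # fused (mean, variance)

# Two agents' local predictions (mean, variance), fused with uniform weights.
mean, var = kla_fuse([(1.0, 2.0), (3.0, 2.0)])
```

This information-form average is the closed-form KLA of Gaussian densities used in KLA consensus schemes; agreeing local estimates pass through unchanged, while disagreeing ones are averaged precision-wise.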

21

Tang, Yan, Zhijin Zhao, Chun Li and Xueyi Ye. "Open set recognition algorithm based on Conditional Gaussian Encoder". Mathematical Biosciences and Engineering 18, no. 5 (2021): 6620–37. http://dx.doi.org/10.3934/mbe.2021328.

Abstract:
Because existing Closed Set Recognition (CSR) methods mistakenly identify unknown jamming signals as a known class, a Conditional Gaussian Encoder (CG-Encoder) for 1-dimensional signal Open Set Recognition (OSR) is designed. The network retains the original form of the signal as much as possible, and a deep neural network is used to extract useful information. CG-Encoder adopts a residual network structure and a new Kullback-Leibler (KL) divergence is defined. In the training phase, the known classes are approximated to different Gaussian distributions in the latent space and the discrimination between classes is increased to improve the recognition performance on the known classes. In the testing phase, a specific and effective OSR algorithm flow is designed. Simulation experiments are carried out on 9 jamming types. The results show that the CSR and OSR performance of CG-Encoder is better than that of the other three kinds of network structures. When the openness is at its maximum, the open set average accuracy of CG-Encoder is more than 70%, which is about 30% higher than the worst algorithm and about 20% higher than the better one. When the openness is at its minimum, the average accuracy of OSR is more than 95%.

22

Tran, Quang-Duy, and Sang-Hoon Bae. "An Efficiency Enhancing Methodology for Multiple Autonomous Vehicles in an Urban Network Adopting Deep Reinforcement Learning". Applied Sciences 11, no. 4 (February 8, 2021): 1514. http://dx.doi.org/10.3390/app11041514.

Abstract:
To reduce the impact of congestion, it is necessary to improve our overall understanding of the influence of the autonomous vehicle. Recently, deep reinforcement learning has become an effective means of solving complex control tasks. Accordingly, we present an advanced deep reinforcement learning study that investigates how leading autonomous vehicles affect an urban network under a mixed-traffic environment. We also suggest a set of hyperparameters for achieving better performance. Firstly, we feed a set of hyperparameters into our deep reinforcement learning agents. Secondly, we investigate the leading-autonomous-vehicle experiment in the urban network with different autonomous vehicle penetration rates. Thirdly, the advantage of leading autonomous vehicles is evaluated using entire-manual-vehicle and leading-manual-vehicle experiments. Finally, proximal policy optimization with a clipped objective is compared to proximal policy optimization with an adaptive Kullback–Leibler penalty to verify the superiority of the proposed hyperparameter. We demonstrate that full-automation traffic increased the average speed by a factor of 1.27 compared with the entire-manual-vehicle experiment. Our proposed method becomes significantly more effective at a higher autonomous vehicle penetration rate. Furthermore, leading autonomous vehicles could help to mitigate traffic congestion.
APA, Harvard, Vancouver, ISO, etc. styles
23

Monteiro, Rodrigo Paula, Carmelo Jose Albanez Bastos-Filho, Mariela Cerrada, Diego Cabrera, and Rene Vinicio Sanchez. "Using the Kullback-Leibler Divergence and Kolmogorov-Smirnov Test to Select Input Sizes to the Fault Diagnosis Problem Based on a CNN Model." Learning and Nonlinear Models 18, no. 2 (30 June 2021): 16–26. http://dx.doi.org/10.21528/lnlm-vol18-no2-art2.

Full text
Abstract:
Choosing a suitable size for signal representations, e.g., frequency spectra, in a given machine learning problem is not a trivial task. It may strongly affect the performance of the trained models. Many solutions have been proposed to solve this problem. Most of them rely on designing an optimized input or selecting the most suitable input according to an exhaustive search. In this work, we used the Kullback-Leibler Divergence and the Kolmogorov-Smirnov Test to measure the dissimilarity among signal representations belonging to equal and different classes, i.e., we measured the intraclass and interclass dissimilarities. Moreover, we analyzed how this information relates to the classifier performance. The results suggested that both the interclass and intraclass dissimilarities were related to the model accuracy since they indicate how easy a model can learn discriminative information from the input data. The highest ratios between the average interclass and intraclass dissimilarities were related to the most accurate classifiers. We can use this information to select a suitable input size to train the classification model. The approach was tested on two data sets related to the fault diagnosis of reciprocating compressors.
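The two dissimilarity measures the paper combines can be sketched on small hand-made discrete distributions (the histograms below are illustrative, not from the data sets):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL(p || q), smoothed to avoid log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def ks_statistic(p, q):
    """Kolmogorov-Smirnov statistic: max gap between the two CDFs."""
    cp = cq = d = 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        d = max(d, abs(cp - cq))
    return d

intra = kl_divergence([0.5, 0.3, 0.2], [0.45, 0.35, 0.2])  # same class: small
inter = kl_divergence([0.5, 0.3, 0.2], [0.1, 0.2, 0.7])    # different class: large
print(intra < inter)  # True
```

A high ratio of interclass to intraclass dissimilarity, as in this toy case, is the signal the paper associates with input sizes that yield accurate classifiers.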
APA, Harvard, Vancouver, ISO, etc. styles
24

Zheng, Han, Zanyang Cui, and Xingchen Zhang. "Automatic Discovery of Railway Train Driving Modes Using Unsupervised Deep Learning." ISPRS International Journal of Geo-Information 8, no. 7 (27 June 2019): 294. http://dx.doi.org/10.3390/ijgi8070294.

Full text
Abstract:
Driving modes play vital roles in understanding the stochastic nature of a railway system and can support studies of automatic driving and capacity utilization optimization. Integrated trajectory data containing information such as GPS trajectories and gear changes can be good proxies in the study of driving modes. However, in the absence of labeled data, discovering driving modes is challenging. In this paper, instead of classical models (railway-specified feature extraction and classical clustering), we used five deep unsupervised learning models to overcome this difficulty. In these models, adversarial autoencoders and stacked autoencoders are used as feature extractors, along with generative adversarial network-based and Kullback–Leibler (KL) divergence-based networks as clustering models. An experiment based on real and artificial datasets showed the following: (i) The proposed deep learning models outperform the classical models by 27.64% on average. (ii) Integrated trajectory data can improve the accuracy of unsupervised learning by approximately 13.78%. (iii) The different performance rankings of models based on indices with labeled data and indices without labeled data demonstrate the insufficiency of people’s understanding of the existing modes. This study also analyzes the relationship between the discovered modes and railway carrying capacity.
APA, Harvard, Vancouver, ISO, etc. styles
25

Lian, Feng, Liming Hou, Bo Wei, and Chongzhao Han. "Sensor Selection for Decentralized Large-Scale Multi-Target Tracking Network." Sensors 18, no. 12 (23 November 2018): 4115. http://dx.doi.org/10.3390/s18124115.

Full text
Abstract:
A new optimization algorithm for sensor selection is proposed in this paper for a decentralized large-scale multi-target tracking (MTT) network within a labeled random finite set (RFS) framework. The method operates on a marginalized δ-generalized labeled multi-Bernoulli RFS. The rule of weighted Kullback-Leibler average (KLA) is used to fuse local multi-target densities. A new metric, named the label assignment (LA) metric, is proposed to measure the distance between two labeled sets. The lower bound of the LA-metric-based mean square error between the labeled multi-target state set and its estimate is taken as the objective function of sensor selection. The proposed bound is obtained by applying the information inequality to the RFS measurement. We then present sequential Monte Carlo and Gaussian mixture implementations of the bound. Another advantage of the bound is that it provides a basis for setting the weights of the KLA. The coordinate descent method is used to trade off the computational cost of sensor selection against the accuracy of MTT. Simulations verify the effectiveness of our method under different signal-to-noise ratio scenarios.
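For Gaussian densities, the weighted KLA has a closed form with information-weighted moments. A scalar sketch of that underlying rule (the paper fuses labeled multi-Bernoulli densities, so this shows only the Gaussian building block; the numbers are illustrative):

```python
def kla_fuse(estimates, weights):
    """Weighted Kullback-Leibler average of scalar Gaussians N(x_i, P_i).

    The KLA of Gaussians is itself Gaussian, with information-weighted
    moments (the same algebra as covariance intersection)."""
    info = sum(w / P for (_, P), w in zip(estimates, weights))
    mean = sum(w * x / P for (x, P), w in zip(estimates, weights)) / info
    return mean, 1.0 / info

# Two local trackers report (mean, variance) for the same target:
mean, var = kla_fuse([(10.0, 4.0), (12.0, 1.0)], [0.5, 0.5])
print(mean, var)  # 11.6 1.6
```

The more confident estimate (variance 1.0) dominates the fused mean, which is the behavior the KLA rule is chosen for in decentralized fusion.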
APA, Harvard, Vancouver, ISO, etc. styles
26

Jeyaraj, Pandia Rajan, and Edward Rajan Samuel Nadar. "Effective textile quality processing and an accurate inspection system using the advanced deep learning technique." Textile Research Journal 90, no. 9-10 (23 October 2019): 971–80. http://dx.doi.org/10.1177/0040517519884124.

Full text
Abstract:
This research paper focuses on the innovative detection of defects in fabric. The approach is based on the design and development of a computer-assisted system using the deep learning technique. The classification network is modeled using a ResNet512-based Convolutional Neural Network to learn the deep features of the presented fabric. Being an accurate method, it also enables accurate localization of minute defects. Our classification is based on three major steps: firstly, an image is acquired by the NI Vision module and pre-processed into a standard pattern for Kullback-Leibler divergence calculation. Secondly, standard textile fabrics are presented to train the Convolutional Neural Network to classify the defective and defect-free regions. Finally, the test fabrics are examined by the trained deep Convolutional Neural Network algorithm. To verify the performance, multiple fabrics are presented and the classification accuracy is evaluated. For standard defects on defective fabrics, an average accuracy of 96.5% with 98.5% precision is obtained. Experimental results on the standard Textile Texture Database confirmed that our method provides better results than similar recent classification methods, such as the Support Vector Machine and the Bayesian classifier.
APA, Harvard, Vancouver, ISO, etc. styles
27

Khitsenko, Vladimir E., and Nikita A. Fedotov. "Possibilities of analysis of nominative signs in tasks of information security." Digital technology security, no. 1 (30 March 2022): 61–84. http://dx.doi.org/10.17212/2782-2230-2022-1-61-84.

Full text
Abstract:
Using various examples, the article demonstrates and discusses the possibilities of testing hypotheses and applying information measures to identify and assess the strength of the connection of nominative features in classification problems in the analysis of information security. The main type of presentation of the initial data in this scale is a contingency table of nominative features or an "object-feature" table, from which the frequencies of coincidence of feature categories and a contingency table can be obtained. Using this table, it is easy to test the hypothesis of independence or homogeneity of features. An alternative approach to this analysis is considered based on the Kullback statistic, which is the average discriminating information in favor of the hypothesis of the dependence of features. In particular cases, the hypothesis of the symmetry of square tables is of practical interest, which can also be tested on the basis of information measures and criteria. An example of the processing of dichotomous "yes-no" data according to the Cochran test is shown. The paper discusses ways to measure the strength of the connection of features. Illustrative examples of calculating measures based on chi-square statistics and directed measures are considered. The possibilities of various information characteristics are discussed, in the form of a relative decrease in the entropy of one feature given another, or in the form of a weighted average amount of information falling on different categories of a feature. These measures are useful for comparative analysis of nominative features in decision-making problems. Shannon's informativeness index, the Kullback-Leibler divergence, and a measure of pairwise differentiation of protection efficiency classes according to the distribution laws of the corresponding categories of a feature are used. The classical procedures for testing hypotheses and the approaches based on information characteristics are systematically compared. The methods and examples considered in the work cover many urgent problems of information security associated with nominative features.
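For a contingency table, the Kullback statistic mentioned here is the minimum discrimination information statistic 2NÎ, better known as the G-test for independence. A minimal sketch (the table values are illustrative):

```python
import math

def g_statistic(table):
    """Kullback's 2*N*I statistic (G-test) for independence in a
    contingency table; compare to chi-square with (r-1)(c-1) d.o.f."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            if obs > 0:
                expected = row_tot[i] * col_tot[j] / n
                g += 2.0 * obs * math.log(obs / expected)
    return g

# Security incidents cross-classified by department and severity:
print(round(g_statistic([[30, 10], [10, 30]]), 3))
```

A perfectly independent table gives G = 0; large values are evidence of dependence between the two nominative features.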
APA, Harvard, Vancouver, ISO, etc. styles
28

Xiao, Yu, and Xiaoxiang Hu. "Waveform Design for Multi-Target Detection Based on Two-Stage Information Criterion." Entropy 24, no. 8 (3 August 2022): 1075. http://dx.doi.org/10.3390/e24081075.

Full text
Abstract:
Parameter estimation accuracy and average sample number (ASN) reduction are important to improving target detection performance in sequential hypothesis tests. Multiple-input multiple-output (MIMO) radar can balance between parameter estimation accuracy and ASN reduction through waveform diversity. In this study, we propose a waveform design method based on a two-stage information criterion to improve multi-target detection performance. In the first stage, the waveform is designed to estimate the target parameters based on the criterion of single-hypothesis mutual information (MI) maximization under the constraint of the signal-to-noise ratio (SNR). In the second stage, the objective function is designed based on the criterion of MI minimization and Kullback–Leibler divergence (KLD) maximization between multi-hypothesis posterior probabilities, and the waveform is chosen from the waveform library of the first-stage parameter estimation. Furthermore, an adaptive waveform design algorithm framework for multi-target detection is proposed. The simulation results reveal that the waveform design based on the two-stage information criterion can rapidly detect the target direction. In addition, the waveform design based on the criterion of dual-hypothesis MI minimization can improve the parameter estimation performance, whereas the design based on the criterion of dual-hypothesis KLD maximization can improve the target detection performance.
APA, Harvard, Vancouver, ISO, etc. styles
29

Hu, Yanzhu, Zhen Meng, Xinbo Ai, Yu Hu, Yixin Zhang, and Yanchao Shao. "Performance Enhancement of the Location and Recognition of a Φ-OTDR System Using CEEMDAN-KL and AMNBP." Applied Sciences 10, no. 9 (27 April 2020): 3047. http://dx.doi.org/10.3390/app10093047.

Full text
Abstract:
It is commonly known that, owing to characteristics such as long distance, high sensitivity, and full-scale monitoring, phase-sensitive optical time-domain reflectometry (Φ-OTDR) has developed rapidly in many fields, especially with the arrival of 5G. Nevertheless, some problems still obstruct its application in practical environments. First, the fading effect causes some results to fall into the dead zone, where they cannot be demodulated effectively. Second, because of its high sensitivity, the Φ-OTDR system is easily disturbed by strong noise in practical environments. Third, the large volume of data produced by the fast response requires a great deal of computation. All of these problems hinder the performance of Φ-OTDR in practical applications. This paper proposes an integrated method based on complete ensemble empirical mode decomposition with adaptive noise and Kullback–Leibler divergence (CEEMDAN-KL) and an adaptive moving neighbor binary pattern (AMNBP) to enhance the performance of Φ-OTDR. CEEMDAN-KL improves the signal characteristics in low signal-to-noise ratio (SNR) conditions. AMNBP optimizes location and recognition with high computational efficiency. Experimental results show that the average recognition rate for four kinds of events reached 94.03% and the computational efficiency increased by 20.0%, demonstrating the excellent location and recognition performance of Φ-OTDR in practical environments.
APA, Harvard, Vancouver, ISO, etc. styles
30

Zhou, Yang, Rui Fu, Chang Wang, and Ruibin Zhang. "Modeling Car-Following Behaviors and Driving Styles with Generative Adversarial Imitation Learning." Sensors 20, no. 18 (4 September 2020): 5034. http://dx.doi.org/10.3390/s20185034.

Full text
Abstract:
Building a human-like car-following model that can accurately simulate drivers’ car-following behaviors is helpful to the development of driving assistance systems and autonomous driving. Recent studies have shown the advantages of applying reinforcement learning methods to car-following modeling. However, a problem remains in that it is difficult to determine the reward function manually. This paper proposes a novel car-following model based on generative adversarial imitation learning. The proposed model can learn the strategy from drivers’ demonstrations without a specified reward. Gated recurrent units were incorporated into the actor-critic network to enable the model to use historical information. Drivers’ car-following data collected by a test vehicle equipped with a millimeter-wave radar and a controller area network acquisition card were used. The participants were divided into two driving styles by K-means, with time-headway and time-headway when braking used as input features. Adopting five-fold cross-validation for model evaluation, the results show that the proposed model can reproduce drivers’ car-following trajectories and driving styles more accurately than the intelligent driver model and the recurrent neural network-based model, with the lowest average spacing error (19.40%) and speed validation error (5.57%), as well as the lowest Kullback-Leibler divergences of the two indicators used for driving style clustering.
APA, Harvard, Vancouver, ISO, etc. styles
31

Alroy, John. "The shape of terrestrial abundance distributions." Science Advances 1, no. 8 (September 2015): e1500082. http://dx.doi.org/10.1126/sciadv.1500082.

Full text
Abstract:
Ecologists widely accept that the distribution of abundances in most communities is fairly flat but heavily dominated by a few species. The reason for this is that species abundances are thought to follow certain theoretical distributions that predict such a pattern. However, previous studies have focused on either a few theoretical distributions or a few empirical distributions. I illustrate abundance patterns in 1055 samples of trees, bats, small terrestrial mammals, birds, lizards, frogs, ants, dung beetles, butterflies, and odonates. Five existing theoretical distributions make inaccurate predictions about the frequencies of the most common species and of the average species, and most of them fit the overall patterns poorly, according to the maximum likelihood–related Kullback-Leibler divergence statistic. Instead, the data support a low-dominance distribution here called the “double geometric.” Depending on the value of its two governing parameters, it may resemble either the geometric series distribution or the lognormal series distribution. However, unlike any other model, it assumes both that richness is finite and that species compete unequally for resources in a two-dimensional niche landscape, which implies that niche breadths are variable and that trait distributions are neither arrayed along a single dimension nor randomly associated. The hypothesis that niche space is multidimensional helps to explain how numerous species can coexist despite interacting strongly.
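The KL-based goodness-of-fit comparison can be sketched by measuring the divergence of observed relative abundances from a candidate model, here a geometric series (the counts and the parameter k are illustrative, not the paper's data):

```python
import math

def kl_fit(observed_counts, model_probs):
    """KL divergence from the empirical relative-abundance distribution
    to a candidate theoretical distribution (smaller = better fit)."""
    n = sum(observed_counts)
    kl = 0.0
    for c, q in zip(observed_counts, model_probs):
        if c > 0:
            p = c / n
            kl += p * math.log(p / q)
    return kl

# Ranked species counts versus a geometric series with k = 0.5:
counts = [64, 32, 16, 8, 4]
k = 0.5
geom = [k * (1 - k) ** i for i in range(len(counts))]
geom = [g / sum(geom) for g in geom]  # renormalise over the 5 ranks
print(kl_fit(counts, geom))  # ~0, since these counts are themselves geometric
```

Comparing such divergences across candidate models (geometric, lognormal, the paper's "double geometric", etc.) is equivalent to comparing maximum likelihoods, which is why the statistic is described as maximum likelihood-related.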
APA, Harvard, Vancouver, ISO, etc. styles
32

Diamond, Phil, Peter Kloeden, and Igor Vladimirov. "Mean anisotropy of homogeneous Gaussian random fields and anisotropic norms of linear translation-invariant operators on multidimensional integer lattices." Journal of Applied Mathematics and Stochastic Analysis 16, no. 3 (1 January 2003): 209–31. http://dx.doi.org/10.1155/s1048953303000169.

Full text
Abstract:
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
APA, Harvard, Vancouver, ISO, etc. styles
33

Yang, Zhen, Xuefei Xu, Keke Wang, Xin Li, and Chi Ma. "Multitarget Detection of Transmission Lines Based on DANet and YOLOv4." Scientific Programming 2021 (17 December 2021): 1–12. http://dx.doi.org/10.1155/2021/6235452.

Full text
Abstract:
In order to accurately identify targets such as insulators, shock hammers, bird nests, and spacers on high-voltage transmission lines, this paper proposes a multitarget detection model for transmission lines based on DANet and YOLOv4. First, DANet and YOLOv4 are fused to address the difficulty of scene understanding and pixel discrimination caused by the complex and diverse scenes of UAV (unmanned aerial vehicle) aerial images (lighting, viewing angle, scale, occlusion, and so on), so as to improve the saliency of the detection target. A Gaussian function and KL (Kullback–Leibler) divergence are used to improve the non-maximum suppression in YOLOv4 so as to improve the recognition rate of occluded targets; the focal loss function and the balanced cross-entropy function are used to improve the loss function of YOLOv4 in order to reduce the impact of both the imbalance between the background and the detection target and the imbalance among the samples, with the aim of improving detection accuracy. Then, a data set is built for the experiment using the UAV inspection images provided by a power grid company in Eastern Inner Mongolia. Finally, the algorithm proposed in this paper is compared with other target detection algorithms. Experimental results show that the average detection accuracy of the proposed algorithm reaches 94.7%, with a detection time of 0.05 seconds per image. The method has good accuracy, real-time performance, and robustness.
APA, Harvard, Vancouver, ISO, etc. styles
34

Feng, Jianxin, Geng Zhang, Xinhui Li, Yuanming Ding, Zhiguo Liu, Chengsheng Pan, Siyuan Deng, and Hui Fang. "A Compositional Transformer Based Autoencoder for Image Style Transfer." Electronics 12, no. 5 (1 March 2023): 1184. http://dx.doi.org/10.3390/electronics12051184.

Full text
Abstract:
Image style transfer has become a key technique in modern photo-editing applications. Although significant progress has been made in blending content from one image with style from another, the synthesized image may exhibit hallucinatory effects in high-resolution style transfer tasks when the texture of the style image is rich. In this paper, we propose a novel attention mechanism, named compositional attention, to design a compositional transformer-based autoencoder (CTA) that solves this issue. With the support of this module, our model is capable of generating high-quality images when transferring from texture-rich style images to content images with semantics. Additionally, we embed region-based consistency terms in our loss function to ensure preservation of internal structure semantics in the synthesized image. Moreover, an information theory-based view of CTA is discussed, and a Kullback–Leibler divergence loss is introduced to preserve more brightness information for photo-realistic style transfer. Extensive experimental results on three benchmark datasets, namely Churches, Flickr Landscapes, and Flickr Faces HQ, confirmed excellent performance compared to several state-of-the-art methods. In a user study assessment, the majority of users, ranging from 61% to 66%, gave high scores to the transfer effects of our method, compared with 9% of users who supported the second-best method. Further, for the questions on realism and style transfer quality, we achieved the best score, i.e., an average of 4.5 out of 5, compared to other style transfer methods.
APA, Harvard, Vancouver, ISO, etc. styles
35

Li, Yang, Zhuang Li, Yanping Wang, Guangda Xie, Yun Lin, Wenjie Shen, and Wen Jiang. "Improving the Performance of RODNet for MMW Radar Target Detection in Dense Pedestrian Scene." Mathematics 11, no. 2 (10 January 2023): 361. http://dx.doi.org/10.3390/math11020361.

Full text
Abstract:
In the field of autonomous driving, millimeter-wave (MMW) radar is often used to supplement other types of sensors, such as optical sensors, in severe weather conditions to provide target-detection services for autonomous driving. RODNet (A Real-Time Radar Object-Detection Network) is one of the most widely used CNN-based target-detection algorithms for MMW radar range–azimuth (RA) image sequences. However, RODNet adopts an object-location similarity (OLS) detection method that is independent of the number of targets to obtain the final detections from the predicted confidence map, and it therefore performs poorly in terms of missed-detection ratio in dense pedestrian scenes. Based on an analysis of the distribution characteristics of the predicted confidence map, we propose a new generative-model-based target-location detection algorithm to improve the performance of RODNet in dense pedestrian scenes. The confidence values and spatial distribution predicted by RODNet are analyzed in this paper, showing that the spatial distribution is more robust than the value distribution for clustering. This is useful in selecting a clustering method to estimate the cluster centers of multiple targets at close range under the effects of distributed targets, radar measurement variance, and multipath scattering. Another key idea of the algorithm is the derivation of a Gaussian Mixture Model with target number (GMM-TN) for generating the likelihood probability distributions under different target-number assumptions. Furthermore, a minimum Kullback–Leibler (KL) divergence target-number estimation scheme is proposed, combining K-means clustering and the GMM-TN model. Using the CRUW dataset, a target-detection experiment on a dense pedestrian scene is carried out, and the confidence distribution under typical hidden-variable conditions is analyzed. The effectiveness of the improved algorithm is verified: the Average Precision (AP) is improved by 29% and the Average Recall (AR) by 36%.
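The minimum-KL target-number idea can be sketched in one dimension: compare the empirical histogram of confidence-map samples against an equal-weight Gaussian mixture for each target-number hypothesis and keep the hypothesis with the smallest divergence. All data and cluster centres below are synthetic (the paper's GMM-TN operates on 2-D confidence maps with K-means-derived centres):

```python
import math, random

def mixture_pdf(x, comps):
    """Equal-weight Gaussian mixture density at x; comps = [(mean, std), ...]."""
    return sum(math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
               for m, s in comps) / len(comps)

def kl_to_mixture(samples, comps, bins=20, lo=-2.0, hi=8.0):
    """Discrete KL(empirical histogram || mixture) over a fixed grid."""
    width = (hi - lo) / bins
    hist = [0] * bins
    for x in samples:
        hist[min(bins - 1, max(0, int((x - lo) / width)))] += 1
    kl = 0.0
    for i, h in enumerate(hist):
        if h:
            p = h / len(samples)
            centre = lo + (i + 0.5) * width
            q = mixture_pdf(centre, comps) * width
            kl += p * math.log(p / max(q, 1e-12))
    return kl

random.seed(0)
# Confidence-map peaks from two closely spaced pedestrians:
samples = ([random.gauss(0.0, 0.5) for _ in range(500)]
           + [random.gauss(5.0, 0.5) for _ in range(500)])
# Candidate target-number hypotheses (centres assumed given, e.g. by K-means):
hypotheses = {1: [(2.5, 2.5)], 2: [(0.0, 0.5), (5.0, 0.5)]}
best = min(hypotheses, key=lambda k: kl_to_mixture(samples, hypotheses[k]))
print(best)  # 2
```

The bimodal sample strongly penalizes the single-component hypothesis, so the two-target hypothesis wins; this is the selection step that replaces OLS's fixed, target-count-agnostic peak picking.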
APA, Harvard, Vancouver, ISO, etc. styles
36

Zhao, Yun, Xiuguo Zhang, Zijing Shang, and Zhiying Cao. "A Novel Hybrid Method for KPI Anomaly Detection Based on VAE and SVDD." Symmetry 13, no. 11 (5 November 2021): 2104. http://dx.doi.org/10.3390/sym13112104.

Full text
Abstract:
Key performance indicator (KPI) anomaly detection is the underlying core technology in Artificial Intelligence for IT operations (AIOps). It has an important impact on subsequent anomaly location and root cause analysis. The variational auto-encoder (VAE) is a symmetric network structure composed of an encoder and a decoder, which has attracted extensive attention because of its ability to capture complex KPI data features and deliver better detection results. However, the VAE is not well suited to modeling KPI time series data, and it is often necessary to set a threshold to obtain more accurate results. In response to these problems, this paper proposes a novel hybrid method for KPI anomaly detection based on VAE and support vector data description (SVDD). The method consists of two modules: a VAE reconstructor and an SVDD anomaly detector. In the VAE reconstruction module, bi-directional long short-term memory (BiLSTM) first replaces the traditional feedforward neural network in the VAE to capture the temporal correlation of sequences; then, batch normalization is applied at the output of the encoder to prevent the KL (Kullback–Leibler) divergence from vanishing, which would otherwise let the decoder ignore the latent variables and reconstruct the data directly. Finally, an exponentially weighted moving average (EWMA) smooths the reconstruction error, reducing false positives and false negatives during detection. In the SVDD anomaly detection module, the smoothed reconstruction errors are fed into the SVDD for training, so that the anomaly detection threshold is determined adaptively. Experimental results on a public dataset show that this method detects anomalies better than the baseline methods.
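The EWMA smoothing step can be sketched directly (the smoothing factor and error trace are illustrative):

```python
def ewma(errors, alpha=0.2):
    """Exponentially weighted moving average of reconstruction errors.

    Smoothing damps isolated single-point spikes that would otherwise
    trigger false alarms in the downstream detector."""
    out = []
    s = errors[0]
    for e in errors:
        s = alpha * e + (1 - alpha) * s
        out.append(s)
    return out

raw = [0.1, 0.1, 3.0, 0.1, 0.1]   # reconstruction errors with one spike
print(max(ewma(raw)) < max(raw))  # True: the spike is damped
```

A sustained run of high errors, by contrast, drives the smoothed value up and stays detectable, which is the asymmetry that reduces both false positives and false negatives.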
APA, Harvard, Vancouver, ISO, etc. styles
37

Bhattacharjee, Shameek, Venkata Praveen Kumar Madhavarapu, Simone Silvestri, and Sajal K. Das. "Attack Context Embedded Data Driven Trust Diagnostics in Smart Metering Infrastructure." ACM Transactions on Privacy and Security 24, no. 2 (February 2021): 1–36. http://dx.doi.org/10.1145/3426739.

Full text
Abstract:
Spurious power consumption data reported from compromised meters controlled by organized adversaries in the Advanced Metering Infrastructure (AMI) may have drastic consequences for a smart grid’s operations. While existing research on data falsification in smart grids mostly defends against isolated electricity theft, we introduce a taxonomy of data falsification attack types for smart meters compromised by organized or strategic rivals. To counter these attacks, we first propose coarse-grained and fine-grained anomaly-based security event detection techniques that use indicators such as deviation and directional change in the time series of the proposed anomaly detection metrics to indicate: (i) the occurrence, (ii) the type of attack, and (iii) the attack strategy used, collectively known as the attack context. Leveraging the attack context information, we propose three attack response metrics for the inferred attack context: (a) an unbiased mean indicating a robust location parameter; (b) a median absolute deviation indicating a robust scale parameter; and (c) an attack probability time ratio metric indicating the active time horizon of attacks. Subsequently, we propose a trust scoring model based on Kullback-Leibler (KL) divergence that embeds the appropriate unbiased mean, median absolute deviation, and attack probability ratio metric at runtime to produce trust scores for each smart meter. These trust scores help separate compromised smart meters from non-compromised ones. Embedding the attack context into the trust scoring model facilitates accurate and rapid classification of compromised meters, even under large fractions of compromised meters, and generalizes across various attack strategies and margins of false data. Using real datasets collected from two different AMIs, experimental results show that our proposed framework has a high true-positive detection rate, while the average false-alarm and missed-detection rates are well below 10% for most attack combinations on the two real AMI micro-grid datasets. Finally, we also establish fundamental theoretical limits of the proposed method, which will help assess its applicability to other domains.
APA, Harvard, Vancouver, ISO, etc. styles
38

Breuer, Rinat, and Itamar Levi. "How Bad Are Bad Templates? Optimistic Design-Stage Side-Channel Security Evaluation and its Cost." Cryptography 4, no. 4 (8 December 2020): 36. http://dx.doi.org/10.3390/cryptography4040036.

Full text
Abstract:
Cryptographic designs are vulnerable to side-channel analysis attacks, so evaluating their security during design stages is of crucial importance. The latter is achieved by very expensive (slow) analog transient-noise simulations over advanced fabrication process technologies. The main challenge of such a rigorous security-evaluation analysis lies in the fact that technologies are becoming more and more complex and the physical properties of manufactured devices vary significantly due to process variations. In turn, a detailed security evaluation process imposes time complexity that is exponential in the circuit size, the number of physical implementation corners (statistical variations), and the accuracy of the circuit simulator. Given these circumstances, what is the cost of not exhausting the entire implementation space? In terms of simulation-time complexity, the benefits would clearly be significant; however, we are interested in evaluating the security implications. This question can be formulated in many other interesting side-channel contexts: for example, how would the attack outcome vary when the adversary builds a leakage template over one device, i.e., one physical corner, and performs the evaluation (attack) phase on a device drawn from a different statistical corner? Alternatively, is it safe to assume that a typical (average) corner represents the worst case in terms of security evaluation, or would it be advisable to perform the security evaluation over another specific view? Finally, how would the outcome vary concretely? We ran in-depth experiments to answer these questions in the hope of finding a good tradeoff between simulation effort and expertise on the one hand and security-evaluation degradation on the other. We evaluate the results using methodologies such as template attacks, with a clear distinction between the profiling and attack-phase statistical views. This exemplary view of what an adversary might capture in these scenarios is followed by a more complete statistical evaluation analysis using tools such as the Kullback–Leibler (KL) divergence and the Jensen-Shannon (JS) divergence to draw conclusions.
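Both divergences used in the statistical evaluation are straightforward to compute on discrete leakage histograms. A minimal sketch (the histograms below are illustrative, not measured traces):

```python
import math

def kl(p, q):
    """Discrete KL(p || q); assumes q > 0 wherever p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by log 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

profile = [0.7, 0.2, 0.1]  # leakage histogram at the profiling corner
attack  = [0.5, 0.3, 0.2]  # leakage histogram at the attacked corner
print(js(profile, attack) <= math.log(2))                      # True
print(abs(js(profile, attack) - js(attack, profile)) < 1e-12)  # True
```

The JS divergence's symmetry and boundedness make it convenient for comparing corner pairs in both directions, whereas raw KL is asymmetric and can blow up on non-overlapping support.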
APA, Harvard, Vancouver, ISO, etc. styles
39

McElroy, Tucker, and Anindya Roy. "The Inverse Kullback-Leibler Method for Fitting Vector Moving Averages." Journal of Time Series Analysis 39, no. 2 (7 December 2017): 172–91. http://dx.doi.org/10.1111/jtsa.12276.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
40

Bokova, Olga A., and Anatoly A. Veryaev. "Subjective perception of inequality and injustice by schoolchildren and students: experience of empirical research." Perspectives of Science and Education 56, no. 2 (1 May 2022): 381–407. http://dx.doi.org/10.32744/pse.2022.2.23.

Full text
Abstract:
Introduction. Modern society pays considerable attention to the development of young people, so it is relevant to study the parameters of their public mood associated with the perception of inequality and injustice. The purpose of the article is to describe the results of an empirical study of how schoolchildren and students perceive inequality and injustice. Materials and methods. Empirical data were obtained using several standardized methodologies aimed at identifying parameters of human development that are significant for the perception of inequality and injustice. The results of using the Kullback-Leibler measure to determine the frequency of words related to the understanding of inequality and injustice are presented. The study sample consists of 677 people: 334 students aged 18 to 23 and 343 high school students aged 14 to 17. Research results. Correlation analysis showed high (r ≥ 0.8-0.9) and average (r ≥ 0.5-0.7) levels of correlation, with both negative and positive values, between the parameters of the chosen methods: various aspects of social frustration are negatively associated with positive coping strategies, optimism, and fairness (in the range r = 0.92-0.52; p ≤ 0.01); various aspects of justice are associated with positive expectations, seeking social support, respect for others and self-respect, and achievement of results (range r = 0.98-0.77; p ≤ 0.01); and verbal aggression is negatively associated (r = -0.84 to -0.67; p ≤ 0.01). We consider these values a continuum: the association of justice with positive parameters suggests that in situations of injustice they will take on negative values. Three significant factors were identified that characterize the parameters affecting the perception of inequality and injustice by modern schoolchildren and students.
The factor loading matrix after Promax rotation (PCA) contains values from r = 0.5 to r = 0.9. Correlation coefficients were calculated at the confidence level p ≤ 0.01. Calculations verifying personal ideas about inequality and injustice against statistical data yield frequency-distribution densities ranging from 250 to 450 units. Conclusion. The results show that certain cognitive, emotional, behavioral, and social parameters are associated with the perception of inequality and injustice by schoolchildren and students. An important result is that words with a positive emotional connotation correspond, on average, to high frequencies of word use recorded by the respondents.
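The abstract reports applying the Kullback-Leibler measure to word frequencies. A minimal sketch of that kind of comparison, with wholly hypothetical word lists standing in for the respondents' answers:

```python
import math
from collections import Counter

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) for discrete word distributions given as dicts;
    words missing from Q are floored at eps."""
    return sum(pk * math.log(pk / q.get(k, eps)) for k, pk in p.items() if pk > 0)

def word_distribution(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Hypothetical answers from two groups of respondents
students = "injustice unfair inequality hope support".split()
pupils = "unfair unfair anger inequality support".split()

p, q = word_distribution(students), word_distribution(pupils)
print(kl_divergence(p, q))  # non-negative; 0 only when the distributions match
```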
APA, Harvard, Vancouver, ISO, and other citation styles
41

Amari, Shun-ichi. "Integration of Stochastic Models by Minimizing α-Divergence." Neural Computation 19, no. 10 (October 2007): 2780–96. http://dx.doi.org/10.1162/neco.2007.19.10.2780.

Full text
Abstract:
When there are a number of stochastic models in the form of probability distributions, one needs to integrate them. Mixtures of distributions are frequently used, but exponential mixtures also provide a good means of integration. This letter proposes a one-parameter family of integration, called α-integration, which includes all of these well-known integrations. These are generalizations of various averages of numbers such as arithmetic, geometric, and harmonic averages. There are psychophysical experiments that suggest that α-integrations are used in the brain. The α-divergence between two distributions is defined, which is a natural generalization of Kullback-Leibler divergence and Hellinger distance, and it is proved that α-integration is optimal in the sense of minimizing α-divergence. The theory is applied to generalize the mixture of experts and the product of experts to the α-mixture of experts. The α-predictive distribution is also stated in the Bayesian framework.
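The α-integrations described here generalize the familiar arithmetic, geometric, and harmonic averages. A minimal sketch of the underlying weighted power-mean family (the exponent r is an illustrative parameterization; the exact mapping to Amari's α convention is omitted):

```python
import math

def power_mean(values, r, weights=None):
    """Weighted power mean: arithmetic (r=1), geometric (r -> 0), harmonic (r=-1)."""
    n = len(values)
    w = weights or [1.0 / n] * n
    if r == 0:  # geometric mean, the r -> 0 limit
        return math.exp(sum(wi * math.log(v) for wi, v in zip(w, values)))
    return sum(wi * v ** r for wi, v in zip(w, values)) ** (1.0 / r)

# Integrating three hypothetical probability estimates for the same event
probs = [0.2, 0.4, 0.8]
print(power_mean(probs, 1))   # arithmetic mixture
print(power_mean(probs, 0))   # geometric ("exponential") mixture
print(power_mean(probs, -1))  # harmonic mean
```

Sweeping r traces out a one-parameter family of integrations between these classical averages, mirroring how α-integration interpolates between mixture and exponential-mixture combination of distributions.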
APA, Harvard, Vancouver, ISO, and other citation styles
42

Wang, Li-Min, Peng Chen, Musa Mammadov, Yang Liu, and Si-Yuan Wu. "Alleviating the independence assumptions of averaged one-dependence estimators by model weighting." Intelligent Data Analysis 25, no. 6 (October 29, 2021): 1431–51. http://dx.doi.org/10.3233/ida-205400.

Full text
Abstract:
Of numerous proposals to refine naive Bayes by weakening its attribute independence assumption, averaged one-dependence estimators (AODE) has been shown to achieve significantly higher classification accuracy at a moderate cost in classification efficiency. However, all one-dependence estimators (ODEs) in AODE have the same weights and are treated equally. To address this issue, model weighting, which assigns discriminative weights to ODEs and then linearly combines their probability estimates, has been proved to be an efficient and effective approach. Most information-theoretic weighting metrics, including mutual information, the Kullback-Leibler measure, and information gain, place more emphasis on the correlation between the root attribute (value) and the class variable. We argue that the topology of each ODE can be divided into a set of local directed acyclic graphs (DAGs) based on the independence assumption, and multivariate mutual information is introduced to measure the extent to which the DAGs fit the data. Based on this premise, in this study we propose a novel weighted AODE algorithm, called AWODE, that adaptively selects weights to alleviate the independence assumption and make the learned probability distribution fit the instance. The proposed approach is validated on 40 benchmark datasets from the UCI machine learning repository. The experimental results reveal that AWODE achieves a bias-variance trade-off and is a competitive alternative to single-model Bayesian learners (such as TAN and KDB) and other weighted AODEs (such as WAODE).
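The model-weighting idea — assign each ODE a weight and linearly combine its class posteriors — can be sketched as follows; the posteriors and weights are invented for illustration:

```python
def weighted_aode(ode_posteriors, weights):
    """Linearly combine per-ODE class posteriors using normalized weights."""
    total_w = sum(weights)
    n_classes = len(ode_posteriors[0])
    combined = [0.0] * n_classes
    for w, posterior in zip(weights, ode_posteriors):
        for c, p in enumerate(posterior):
            combined[c] += (w / total_w) * p
    return combined

# Hypothetical posteriors from three one-dependence estimators over two classes
odes = [[0.7, 0.3], [0.6, 0.4], [0.2, 0.8]]
uniform = weighted_aode(odes, [1, 1, 1])         # plain AODE: equal weights
weighted = weighted_aode(odes, [0.5, 0.3, 0.2])  # discriminative weights
print(uniform, weighted)
```

Plain AODE corresponds to uniform weights; a weighted variant like AWODE replaces them with data-driven values (e.g., derived from mutual information) before the same linear combination.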
APA, Harvard, Vancouver, ISO, and other citation styles
43

Mikami, Tsuyoshi, Hirotaka Takahashi, and Kazuya Yonezawa. "DETECTING NONLINEAR AND NONSTATIONARY PROPERTIES OF POST-APNEIC SNORING SOUNDS USING HILBERT–HUANG TRANSFORM." Biomedical Engineering: Applications, Basis and Communications 31, no. 03 (May 27, 2019): 1950017. http://dx.doi.org/10.4015/s1016237219500170.

Full text
Abstract:
This study focuses on patients with severe obstructive sleep apnea syndrome (OSAS) and clarifies the existence of nonlinear and nonstationary properties in post-apneic snoring sounds. Many researchers have tried for decades to discover intrinsic properties of the snoring sounds of OSAS patients using linear frequency analysis, but no one has shown evidence of nonlinearity and nonstationarity based on quantitative evaluation of post-apneic snoring sounds. In this study, the Hilbert–Huang transform (HHT), which is designed for analyzing nonlinear and nonstationary signals, is adopted to generate a time-frequency map, and the temporal variation in the spectral density is quantitatively evaluated using the averaged Kullback–Leibler divergence (AKL). As a result, for six OSAS patients, most of the AKL values calculated from post-apneic snores are higher than those from non-apneic snores, which indicates that post-apneic snores are more nonstationary. In addition, we evaluated the difference between the HHT time-frequency maps and spectrograms obtained with the short-time Fourier transform (STFT). These analyses revealed that frequency fluctuations inherent to snoring can be adequately represented with HHT, but not with STFT. These nonlinear and nonstationary properties seem to be highly related to the physiological phenomena of OSAS, two of which are the explosive airflow after reopening of the closed airway and the collision vibration of the soft tissues.
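An averaged Kullback–Leibler divergence over a time-frequency map can be sketched as the mean KL divergence between successive normalized spectral frames; the frame data below are illustrative, not the paper's:

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two normalized spectra (lists of equal length)."""
    return sum(pi * math.log(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

def averaged_kl(frames):
    """Mean KL divergence between successive normalized spectral frames."""
    norm = [[v / sum(f) for v in f] for f in frames]
    return sum(kl(a, b) for a, b in zip(norm, norm[1:])) / (len(norm) - 1)

# Hypothetical spectral frames: a stationary signal vs. one whose spectrum drifts
stationary = [[1, 2, 1], [1, 2, 1], [1, 2, 1]]
drifting = [[4, 1, 1], [1, 4, 1], [1, 1, 4]]
print(averaged_kl(stationary))  # 0.0: no frame-to-frame change
print(averaged_kl(drifting))    # positive: nonstationary spectrum
```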
APA, Harvard, Vancouver, ISO, and other citation styles
44

Stoy, Paul C., Anam M. Khan, Aaron Wipf, Nick Silverman, and Scott L. Powell. "The spatial variability of NDVI within a wheat field: Information content and implications for yield and grain protein monitoring." PLOS ONE 17, no. 3 (March 22, 2022): e0265243. http://dx.doi.org/10.1371/journal.pone.0265243.

Full text
Abstract:
Wheat is a staple crop that is critical for feeding a hungry and growing planet, but its nutritive value has declined as global temperatures have warmed. The price offered to producers depends not only on yield but also grain protein content (GPC), which are often negatively related at the field scale but can positively covary depending in part on management strategies, emphasizing the need to understand their variability within individual fields. We measured yield and GPC in a winter wheat field in Sun River, Montana, USA, and tested the ability of normalized difference vegetation index (NDVI) measurements from an unoccupied aerial vehicle (UAV) on spatial scales of ~10 cm and from Landsat on spatial scales of 30 m to predict them. Landsat observations were poorly related to yield and GPC measurements. A multiple linear model using information from four (three) UAV flyovers was selected as the most parsimonious and predicted 26% (40%) of the variability in wheat yield (GPC). We sought to understand the optimal spatial scale for interpreting UAV observations given that the ~ 10 cm pixels yielded more than 12 million measurements at far finer resolution than the 12 m scale of the harvester. The variance in NDVI observations was “averaged out” at larger pixel sizes but only ~ 20% of the total variance was averaged out at the spatial scale of the harvester on some measurement dates. Spatial averaging to the scale of the harvester also made little difference in the total information content of NDVI fit using Beta distributions as quantified using the Kullback-Leibler divergence. Radially-averaged power spectra of UAV-measured NDVI revealed relatively steep power-law relationships with exponentially less variance at finer spatial scales. 
Results suggest that larger pixels can reasonably capture the information content of within-field NDVI, but the 30 m Landsat scale is too coarse to describe some of the key features of the field, which are consistent with topography, historic management practices, and edaphic variability. Future research should seek to determine an ‘optimum’ spatial scale for NDVI observations that minimizes effort (and therefore cost) while maintaining the ability of producers to make management decisions that positively impact wheat yield and GPC.
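The comparison of information content across pixel scales can be sketched with discrete histograms standing in for the paper's fitted Beta distributions; the NDVI values and bin edges are invented for illustration:

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete histograms (equal-length lists)."""
    return sum(pi * math.log(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

def histogram(values, edges):
    """Normalized histogram over the given bin edges."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(counts)):
            if edges[i] <= v < edges[i + 1] or (i == len(counts) - 1 and v == edges[-1]):
                counts[i] += 1
                break
    total = sum(counts)
    return [c / total for c in counts]

# Hypothetical fine-scale NDVI pixels and their 2-pixel block averages
fine = [0.1, 0.3, 0.8, 0.9, 0.2, 0.4, 0.6, 0.8]
coarse = [(fine[i] + fine[i + 1]) / 2 for i in range(0, len(fine), 2)]
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
p, q = histogram(fine, edges), histogram(coarse, edges)
print(kl(p, q))  # divergence between fine- and coarse-scale distributions
```

A small divergence indicates that spatial averaging preserved most of the distribution's information content, which is the criterion the study applies at the scale of the harvester.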
APA, Harvard, Vancouver, ISO, and other citation styles
45

ANDERSON, KIMBERLY M., JASON ABBOTT, SHAOHUA ZHAO, EILEEN LIU, and SUNEE HIMATHONGKHAM. "Molecular Subtyping of Shiga Toxin–Producing Escherichia coli Using a Commercial Repetitive Sequence–Based PCR Assay†." Journal of Food Protection 78, no. 5 (May 1, 2015): 902–11. http://dx.doi.org/10.4315/0362-028x.jfp-14-430.

Full text
Abstract:
PCR-based typing methods, such as repetitive sequence–based PCR (rep-PCR), may facilitate the identification of Shiga toxin–producing Escherichia coli (STEC) by serving as screening methods to reduce the number of isolates to be processed for further confirmation. In this study, we used a commercial rep-PCR typing system to generate DNA fingerprint profiles for STEC O157 (n = 60) and non-O157 (n = 91) isolates from human, food, and animal samples and then compared the results with those obtained from pulsed-field gel electrophoresis (PFGE). Fifteen serogroups were analyzed using the Kullback-Leibler or extended Jaccard statistical method, and the unweighted pair group method of averages algorithm was used to create dendrograms. Among the 151 STEC isolates tested, all were typeable by rep-PCR. Among the non-O157 isolates, rep-PCR clustered 79 (88.8%) of 89 isolates according to serogroup status, with peak differences ranging from 1 (96.4% similarity) to 12 (58.7% similarity). The genetic relatedness of the non-O157 serogroups mirrored the branching of distinct clonal groups elucidated by other investigators. Although the discriminatory power of rep-PCR (Simpson's index of diversity [SID] = 0.954) for the O157 isolates was less than that of PFGE (SID = 0.993), rep-PCR was able to identify 29 pattern types, suggesting that this method can be used for strain typing, although not to the same level as PFGE. Similar results were obtained from analysis of the non-O157 isolates. With rep-PCR, we assigned non-O157 isolates to 46 pattern types with a SID of 0.977. By PFGE, non-O157 STEC strains were divided into 77 pattern types with a SID of 0.996. Together, these results indicate the ability of the rep-PCR typing system to distinguish between and within O157 and non-O157 STEC groups. Rapid PCR-based typing methods could be invaluable tools for use in outbreak investigations by excluding unrelated STEC isolates within 24 h.
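Simpson's index of diversity (SID), used above to compare discriminatory power, has a simple closed form; a sketch with hypothetical pattern-type counts:

```python
def simpsons_index_of_diversity(type_counts):
    """SID = 1 - sum n_j*(n_j - 1) / (N*(N - 1)): the probability that two
    randomly drawn isolates fall into different pattern types."""
    n = sum(type_counts)
    return 1.0 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))

# Hypothetical: 10 isolates split across pattern types in two ways
evenly_split = [2, 2, 2, 2, 2]  # isolates spread evenly: high discrimination
one_dominant = [6, 1, 1, 1, 1]  # one pattern type dominates: lower discrimination
print(simpsons_index_of_diversity(evenly_split))
print(simpsons_index_of_diversity(one_dominant))
```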
APA, Harvard, Vancouver, ISO, and other citation styles
46

Mao, Xian-Ling, Bo-Si Feng, Yi-Jing Hao, Liqiang Nie, Heyan Huang, and Guihua Wen. "S2JSD-LSH: A Locality-Sensitive Hashing Schema for Probability Distributions." Proceedings of the AAAI Conference on Artificial Intelligence 31, no. 1 (February 12, 2017). http://dx.doi.org/10.1609/aaai.v31i1.10989.

Full text
Abstract:
To compare the similarity of probability distributions, information-theoretically motivated metrics like the Kullback-Leibler divergence (KL) and Jensen-Shannon divergence (JSD) are often more reasonable than metrics for vectors like Euclidean and angular distance. However, existing locality-sensitive hashing (LSH) algorithms cannot support information-theoretically motivated metrics for probability distributions. In this paper, we first introduce a new approximation formula for the S2JSD-distance, and then propose a novel LSH scheme adapted to the S2JSD-distance for approximate nearest-neighbor search in high-dimensional probability distributions. We define the specific hashing functions and prove their locality sensitivity. Furthermore, extensive empirical evaluations illustrate the effectiveness of the proposed hashing schema on six public image datasets and two text datasets, in terms of mean Average Precision, Precision@N, and Precision-Recall curve.
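The Jensen-Shannon divergence underlying the S2JSD-distance can be sketched directly; taking sqrt(2·JSD) as the metric form is a common convention and is assumed here rather than taken from the paper:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions (equal-length lists)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: symmetrized, bounded KL to the midpoint m."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def s2jsd(p, q):
    """sqrt(2 * JSD), which is a proper metric on probability distributions."""
    return math.sqrt(2 * jsd(p, q))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
print(jsd(p, q), s2jsd(p, q))
```

Unlike KL, JSD is symmetric and bounded (by ln 2 in nats), which is what makes a distance-based hashing scheme for it feasible.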
APA, Harvard, Vancouver, ISO, and other citation styles
47

Kiso-Farnè, Kaori, and Tatsuaki Tsuruyama. "Epidermal growth factor receptor cascade prioritizes the maximization of signal transduction." Scientific Reports 12, no. 1 (October 10, 2022). http://dx.doi.org/10.1038/s41598-022-20663-0.

Full text
Abstract:
Many studies have been performed to quantify cell signaling. Cell signaling molecules are phosphorylated in response to extracellular stimuli, with the phosphorylation sequence forming a signal cascade. The information gain during a signal event is given by the logarithm of the phosphorylation molecule ratio. The average information gain can be regarded as the signal transduction quantity (ST), which is identical to the Kullback–Leibler divergence (KLD), a relative entropy. We previously reported that if the total ST value in a given signal cascade is maximized, the ST rate (STR) of each signaling molecule per signal duration (min) approaches a constant value. To experimentally verify this theoretical conclusion, we measured the STR of the epidermal growth factor (EGF)-related cascade in A431 skin cancer cells following stimulation with EGF, using antibody microarrays against phosphorylated signal molecules. The results were consistent with those from the theoretical analysis. Thus, signal transduction systems may adopt a strategy that prioritizes the maximization of ST. Furthermore, signal molecules with similar STRs may form a signal cascade. In conclusion, ST and STR are promising properties for quantitative analysis of signal transduction.
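The signal transduction quantity (ST) as an average information gain — a KL divergence between phosphorylation-state distributions — can be sketched as follows; the state fractions and duration are hypothetical, not the paper's measurements:

```python
import math

def signal_transduction(p_stimulated, p_baseline):
    """Average information gain sum p_i * log(p_i / q_i): the KL divergence
    between phosphorylation-state distributions after and before stimulation."""
    return sum(p * math.log(p / q)
               for p, q in zip(p_stimulated, p_baseline) if p > 0)

# Hypothetical fractions of a signaling molecule in each phosphorylation state
baseline = [0.8, 0.15, 0.05]
stimulated = [0.4, 0.35, 0.25]
st = signal_transduction(stimulated, baseline)
duration_min = 5.0
print(st, st / duration_min)  # ST, and the ST rate (STR) per minute
```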
APA, Harvard, Vancouver, ISO, and other citation styles
48

Xie, Yi, Fei Shen, Jianqing Zhu, and Huanqiang Zeng. "Viewpoint robust knowledge distillation for accelerating vehicle re-identification." EURASIP Journal on Advances in Signal Processing 2021, no. 1 (July 26, 2021). http://dx.doi.org/10.1186/s13634-021-00767-x.

Full text
Abstract:
Vehicle re-identification is a challenging task that matches vehicle images captured by different cameras. Recent vehicle re-identification approaches exploit complex deep networks to learn viewpoint-robust features for accurate re-identification, which incurs heavy computation in the testing phase and restricts re-identification speed. In this paper, we propose a viewpoint robust knowledge distillation (VRKD) method for accelerating vehicle re-identification. The VRKD method consists of a complex teacher network and a simple student network. Specifically, the teacher network uses quadruple directional deep networks to learn viewpoint-robust features. The student network only contains a shallow backbone sub-network and a global average pooling layer. The student network distills viewpoint-robust knowledge from the teacher network by minimizing the Kullback-Leibler divergence between the posterior probability distributions produced by the student and teacher networks. As a result, vehicle re-identification is significantly accelerated, since only the student network, with its small testing computation, is needed. Experiments on the VeRi776 and VehicleID datasets show that the proposed VRKD method outperforms many state-of-the-art vehicle re-identification approaches in both accuracy and speed.
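The distillation objective — minimizing KL divergence between teacher and student posteriors — can be sketched on toy logits; the temperature softening is a standard distillation convention assumed here, not a detail taken from the paper:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened posteriors:
    the quantity the student minimizes to mimic the teacher."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s) if ti > 0)

teacher = [4.0, 1.0, 0.2]  # hypothetical logits from the complex teacher
student = [3.0, 1.5, 0.5]  # hypothetical logits from the shallow student
print(distillation_loss(student, teacher))
```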
APA, Harvard, Vancouver, ISO, and other citation styles
49

Trivedi, Prashant, and Nandyala Hemachandra. "Multi-Agent Natural Actor-Critic Reinforcement Learning Algorithms." Dynamic Games and Applications, June 16, 2022. http://dx.doi.org/10.1007/s13235-022-00449-9.

Full text
Abstract:
Multi-agent actor-critic algorithms are an important part of the Reinforcement Learning (RL) paradigm. We propose three fully decentralized multi-agent natural actor-critic (MAN) algorithms in this work. The objective is to collectively find a joint policy that maximizes the average long-term return of the agents. In the absence of a central controller, and to preserve privacy, agents communicate some information to their neighbors via a time-varying communication network. We prove convergence of all three MAN algorithms, which use linear function approximations, to a globally asymptotically stable set of the ODE corresponding to the actor update. We show that the Kullback–Leibler divergence between policies of successive iterates is proportional to the objective function's gradient. We observe that the minimum singular value of the Fisher information matrix is well within the reciprocal of the policy parameter dimension. Using this, we theoretically show that the optimal value of the deterministic variant of the MAN algorithm at each iterate dominates that of the standard gradient-based multi-agent actor-critic (MAAC) algorithm. To our knowledge, this is the first such result in multi-agent reinforcement learning (MARL). To illustrate the usefulness of the proposed algorithms, we implement them on a bi-lane traffic network to reduce average network congestion. We observe an almost 25% reduction in average congestion with two of the MAN algorithms; the average congestion of the third MAN algorithm is on par with the MAAC algorithm. We also consider a generic 15-agent MARL setting; the performance of the MAN algorithms is again as good as the MAAC algorithm.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Gallagher, Ryan J., Morgan R. Frank, Lewis Mitchell, Aaron J. Schwartz, Andrew J. Reagan, Christopher M. Danforth, and Peter Sheridan Dodds. "Generalized word shift graphs: a method for visualizing and explaining pairwise comparisons between texts." EPJ Data Science 10, no. 1 (January 19, 2021). http://dx.doi.org/10.1140/epjds/s13688-021-00260-3.

Full text
Abstract:
A common task in computational text analyses is to quantify how two corpora differ according to a measurement like word frequency, sentiment, or information content. However, collapsing the texts' rich stories into a single number is often conceptually perilous, and it is difficult to confidently interpret interesting or unexpected textual patterns without looming concerns about data artifacts or measurement validity. To better capture fine-grained differences between texts, we introduce generalized word shift graphs, visualizations which yield a meaningful and interpretable summary of how individual words contribute to the variation between two texts for any measure that can be formulated as a weighted average. We show that this framework naturally encompasses many of the most commonly used approaches for comparing texts, including relative frequencies, dictionary scores, and entropy-based measures like the Kullback–Leibler and Jensen–Shannon divergences. Through a diverse set of case studies ranging from presidential speeches to tweets posted in urban green spaces, we demonstrate how generalized word shift graphs can be flexibly applied across domains for diagnostic investigation, hypothesis generation, and substantive interpretation. By providing a detailed lens into textual shifts between corpora, generalized word shift graphs help computational social scientists, digital humanists, and other text analysis practitioners fashion more robust scientific narratives.
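For a measure expressible as a weighted average (here a hypothetical sentiment dictionary), each word's contribution to the between-corpus difference can be computed directly — the core quantity a word shift graph visualizes. A simplified sketch, without the reference-score decomposition of the full method:

```python
from collections import Counter

def word_shift(corpus1, corpus2, scores):
    """Per-word contributions to the difference in a weighted-average measure
    between two corpora, sorted by magnitude of contribution."""
    def freqs(tokens):
        counts = Counter(tokens)
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}
    p1, p2 = freqs(corpus1), freqs(corpus2)
    contributions = {w: (p2.get(w, 0.0) - p1.get(w, 0.0)) * scores.get(w, 0.0)
                     for w in set(p1) | set(p2)}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Hypothetical corpora and sentiment dictionary
speech_a = "great hope great future fear".split()
speech_b = "fear crisis fear hope".split()
sentiment = {"great": 2.0, "hope": 1.0, "future": 1.0, "fear": -2.0, "crisis": -1.5}
for word, delta in word_shift(speech_a, speech_b, sentiment):
    print(f"{word:8s} {delta:+.3f}")
```

The contributions sum exactly to the difference in average sentiment between the two corpora, which is what lets the graph "explain" a single aggregate number word by word.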
APA, Harvard, Vancouver, ISO, and other citation styles