Dissertations / Theses on the topic 'Kullback-Leibler divergence'
Consult the top 48 dissertations / theses for your research on the topic 'Kullback-Leibler divergence.'
Mesejo-Leon, Daniel Alejandro. "Approximate Nearest Neighbor Search for the Kullback-Leibler Divergence." Pontifícia Universidade Católica do Rio de Janeiro, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=33305@1.
Funding: Coordenação de Aperfeiçoamento do Pessoal de Ensino Superior, Programa de Excelência Acadêmica.
In a number of applications, data points can be represented as probability distributions: documents can be represented as topic models, images as histograms, and music, too, as a probability distribution. In this work, we address the Approximate Nearest Neighbor problem where the points are probability distributions and the distance function is the Kullback-Leibler (KL) divergence. We show how to accelerate existing data structures, such as the Bregman Ball Tree, by posing the KL divergence as an inner product embedding. On the practical side, we investigate the use of two very popular indexing techniques: the Inverted Index and Locality Sensitive Hashing. Experiments performed on six real-world datasets show that the Inverted Index performs better than LSH and the Bregman Ball Tree in terms of queries per second and precision.
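As a minimal illustration of the search problem described in this abstract (a toy sketch with made-up topic distributions, not code from the thesis), the KL divergence between discrete distributions p and q is D(p||q) = sum_i p_i log(p_i / q_i), and an exact nearest-neighbor query is the linear scan that indexes such as the Inverted Index, LSH, and the Bregman Ball Tree aim to avoid:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def nearest_neighbor(query, database):
    """Exact linear scan; approximate indexes (Inverted Index, LSH,
    Bregman Ball Tree) aim to answer this without visiting every point."""
    return min(range(len(database)), key=lambda i: kl_divergence(query, database[i]))

# Hypothetical topic distributions for three documents.
docs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
print(nearest_neighbor([0.6, 0.3, 0.1], docs))  # → 0
```

Approximate indexes trade a little precision for skipping most of the scan, which is what the experiments above measure as queries per second versus precision.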
Nounagnon, Jeannette Donan. "Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/86593.
Ph. D.
Junior, Willian Darwin. "Agrupamento de textos utilizando divergência Kullback-Leibler." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-30032016-160011/.
This work proposes a methodology for grouping texts, for the purposes of textual search in general but also specifically for aiding the distribution of law processes in order to reduce the time spent resolving judicial conflicts. The proposed methodology applies the Kullback-Leibler divergence to the frequency distributions of the word stems occurring in the texts. Several groups of stems are considered, built from their occurrence frequency across the texts, and the resulting distributions are taken with respect to each of those groups. For each group, divergences are computed against the distribution taken from a reference text formed by assembling all sample texts, yielding one value for each text in relation to each group of stems. Finally, those values are taken as attributes of each text in a clustering process driven by a K-Means algorithm implementation, providing a grouping of the texts. The methodology is tested on simple toy examples and applied to electrical failure records, texts with similar issues, and law texts, and it is compared to an expert's classification. As byproducts of the conducted research, a graphical development environment for Pattern Recognition and Bayesian Network models and a study on the possibilities of using parallel processing in Bayesian Network learning have also been produced.
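The feature-extraction step described above can be sketched as follows (toy stems and a single stem group are assumed for brevity; the thesis uses several stem groups and then clusters the resulting attribute vectors with K-Means):

```python
import math
from collections import Counter

def distribution(stems, vocab):
    """Relative frequency of each vocabulary stem in a text."""
    counts = Counter(stems)
    total = sum(counts.values())
    return [counts[w] / total for w in vocab]

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical stemmed texts: two legal ones and one about electrical failures.
texts = [["contract", "law", "court"], ["court", "law", "law"], ["power", "grid", "fault"]]
vocab = sorted({w for t in texts for w in t})

# Reference text assembled from all sample texts, as in the abstract.
reference = distribution([w for t in texts for w in t], vocab)

# One KL attribute per text; with several stem groups this becomes an
# attribute vector that is fed to K-Means.
features = [kl(distribution(t, vocab), reference) for t in texts]
print(features)
```

On this toy data, the electrical-failure text diverges most from the pooled reference, so the single attribute already separates it from the two legal texts.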
Harmouche, Jinane. "Statistical Incipient Fault Detection and Diagnosis with Kullback-Leibler Divergence : from Theory to Applications." Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0022/document.
This PhD dissertation deals with the detection and diagnosis of incipient faults in engineering and industrial systems using non-parametric statistical approaches. An incipient fault is assumed to provoke an abnormal change in the measurements of the system variables. However, this change is imperceptible and also unpredictable, due to the large signal-to-fault ratio and the low fault-to-noise ratio characterizing the incipient fault. The detection and identification of a global change require a 'global' approach that takes the total fault signature into account. In this context, the Kullback-Leibler divergence is considered as a 'global' fault indicator, shown to be sensitive to small abnormal variations hidden in noise. A 'global' spectral analysis approach is also proposed for the diagnosis of faults with a frequency signature. The 'global' statistical approach is validated in two application studies. The first concerns the detection and characterization of minor cracks in conductive structures. The second concerns the diagnosis of bearing faults in electrical rotating machines. In addition, the fault estimation problem is addressed in this work: a theoretical study is conducted to obtain an analytical model of the KL divergence, from which an estimate of the amplitude of the incipient fault is derived.
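A toy sketch of the 'global' KL fault indicator idea (not the thesis code; Gaussian measurements and a small mean shift are assumed): the closed-form divergence between two normal distributions rises even when the fault amplitude is well below the noise level.

```python
import math, random

def kl_gauss(m1, s1, m0, s0):
    """Closed-form KL divergence KL(N(m1, s1^2) || N(m0, s0^2))."""
    return math.log(s0 / s1) + (s1 ** 2 + (m1 - m0) ** 2) / (2 * s0 ** 2) - 0.5

def moments(xs):
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return m, s

random.seed(0)
healthy = [random.gauss(0.0, 1.0) for _ in range(5000)]   # reference operation
faulty = [random.gauss(0.15, 1.0) for _ in range(5000)]   # incipient fault: tiny mean shift

m0, s0 = moments(healthy)
for name, window in (("healthy", healthy), ("faulty", faulty)):
    m, s = moments(window)
    print(name, kl_gauss(m, s, m0, s0))  # divergence from the healthy reference
```

Thresholding this scalar indicator is the detection step; the analytical model mentioned in the abstract goes further and inverts the divergence to estimate the fault amplitude.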
Chhogyal, Kinzang. "Belief Change: A Probabilistic Inquiry." Thesis, Griffith University, 2016. http://hdl.handle.net/10072/366331.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Science, Environment, Engineering and Technology
Zhou, Ruikun. "A Kullback-Leibler Divergence Filter for Anomaly Detection in Non-Destructive Pipeline Inspection." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40987.
Jung, Daniel. "Diagnosability performance analysis of models and fault detectors." Doctoral thesis, Linköpings universitet, Fordonssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117058.
White, Staci A. "Quantifying Model Error in Bayesian Parameter Estimation." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433771825.
Adamcik, Martin. "Collective reasoning under uncertainty and inconsistency." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/collective-reasoning-under-uncertainty-and-inconsistency(7fab8021-8beb-45e7-8b45-7cb4fadd70be).html.
Macêra, Márcia Aparecida Centanin. "Uso dos métodos clássico e bayesiano para os modelos não-lineares heterocedásticos simétricos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-14092011-164458/.
Normal regression models have been used for data analysis for many years. Even in cases where normality could not be assumed, some kind of transformation was usually attempted in order to achieve the normality sought. In practice, however, these assumptions of normality and linearity are not always satisfied, and new classes of regression models have been developed as alternatives to the classical technique. In this context, we focus on the class of models in which the distribution assumed for the response variable belongs to the class of symmetric distributions. The aim of this work is the modeling of this class in the Bayesian context, in particular the modeling of the class of nonlinear heteroscedastic symmetric models. Note that this work is connected to two research lines: statistical inference, addressing aspects of asymptotic theory, and Bayesian inference, considering aspects of modeling and criteria for model selection based on Markov Chain Monte Carlo (MCMC) simulation methods. A first step is to present the class of nonlinear heteroscedastic symmetric models as well as the classical inference for the parameters of these models. Subsequently, we propose a Bayesian approach to these models, whose objective is to show their feasibility and to compare the Bayesian parameter estimates obtained by MCMC methods with the classical estimates obtained by the GAMLSS tool. In addition, we use the Bayesian case-deletion influence analysis method, based on the Kullback-Leibler divergence, to detect influential observations in the data. The computational implementation was developed in the software R, and program details can be obtained from the study's authors.
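The case-deletion influence analysis mentioned at the end of this abstract can be illustrated in a toy conjugate normal model (hypothetical data; the thesis applies the idea to nonlinear heteroscedastic symmetric models fitted by MCMC): the KL divergence between the case-deleted posterior and the full-data posterior flags influential observations.

```python
import math

def kl_normal(m1, v1, m0, v0):
    """KL(N(m1, v1) || N(m0, v0)) with variances v1, v0."""
    return 0.5 * (math.log(v0 / v1) + (v1 + (m1 - m0) ** 2) / v0 - 1.0)

def posterior(data, prior_mean=0.0, prior_var=100.0, noise_var=1.0):
    """Conjugate normal-normal posterior for an unknown mean."""
    var = 1.0 / (1.0 / prior_var + len(data) / noise_var)
    mean = var * (prior_mean / prior_var + sum(data) / noise_var)
    return mean, var

data = [1.1, 0.9, 1.0, 1.2, 8.0]          # the last point is a suspected outlier
full = posterior(data)
influence = [kl_normal(*posterior(data[:i] + data[i + 1:]), *full)
             for i in range(len(data))]    # KL(case-deleted posterior || full posterior)
print(influence.index(max(influence)))     # → 4: deleting the outlier moves the posterior most
```

In realistic models the two posteriors are only available as MCMC samples, so the divergence is estimated from the draws rather than in closed form.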
Lando, Tommaso. "Funzionali statistici nella classe delle equazioni di stima generalizzate." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/16889.
Zhao, Ying. "Effective Authorship Attribution in Large Document Collections." RMIT University, Computer Science and Information Technology, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080730.162501.
Johnson, Nicholas Alexander. "Delay estimation in computer networks." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4595.
Ho, Fu-Hsuan. "Aspects algorithmiques du modèle continu à énergie aléatoire." Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30184.
This thesis explores algorithmic perspectives on the branching random walk and the continuous random energy model (CREM). Namely, we are interested in constructing polynomial-time algorithms that can sample the model's Gibbs measure with high probability, and in identifying the hardness regime, which consists of the inverse temperatures beta for which no such polynomial-time algorithm exists. In Chapter 1, we provide a historical overview of the models and motivate the algorithmic problems under investigation. We also provide an overview of the mean-field spin glasses that motivate this line of research. In Chapter 2, we address the problem of sampling the Gibbs measure in the context of the branching random walk. We identify a critical inverse temperature beta_c, identical to the static critical point, at which a hardness transition occurs. In the subcritical regime beta < beta_c, we establish that a recursive sampling algorithm samples the Gibbs measure efficiently. In the supercritical regime beta > beta_c, we show that no polynomial-time algorithm belonging to a certain class of algorithms exists. In Chapter 3, we turn our attention to the same sampling problem for the continuous random energy model (CREM). For the case where the covariance function of this model is concave, we show that for any finite inverse temperature beta, the recursive sampling algorithm considered in Chapter 2 samples the Gibbs measure efficiently. For the non-concave case, we identify a critical point beta_G at which a hardness transition similar to the one in Chapter 2 occurs. We also provide a lower bound on the CREM free energy that might be of independent interest. In Chapter 4, we study the negative moments of the CREM partition function; while not connected directly to the main theme of the thesis, this arose as a spin-off during the course of the research. In Chapter 5, we provide an outlook on some further directions that might be interesting to investigate.
Chatoux, Hermine. "Prise en compte métrologique de la couleur dans un contexte de classification et d'indexation." Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2267/document.
The objective of this PhD thesis is to study the correct and complete processing of colour, respecting metrological constraints. The lack of compatible approaches justified reformulating the main image processing tools: gradient, key point detector, and descriptor. The proposed approaches are generic: independent of the channel count, and taking the sensitivity curves of the sensor or the eye into account. The full-vector gradient was born from this metrological objective. A proof of concept was realised on colour, multi- and hyper-spectral images. The extension developed for human vision deficiencies opens interesting perspectives for the study of the human visual system. This gradient is at the centre of the proposed key point detector, which is also generic. We also showed how necessary a mathematically valid choice of distance between features is. We revealed the importance of the feature/distance pair and completed the work with one such pair: RC2O with a Kullback-Leibler divergence based on colour differences. For each development, we propose unbiased validation protocols linked to synthetic image generators exploring the greatest possible spatio-chromatic complexity, our hypothesis being that the extraction difficulty comes from the complexity of discriminating between colour distributions in the processing area. We also compared our propositions to state-of-the-art approaches on standard datasets and protocols.
Blons, Estelle. "Dynamiques individuelles et collectives de la complexité de signaux physiologiques en situation de stress induit." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0152.
Recent studies in human health assume a causal link between the complexity of psychophysiological control systems and the complexity of their resulting biosignals. This PhD illustrates this principle through an interdisciplinary approach combining physiology, psychology, and signal processing. The dynamics of physiological output signals are studied in response to induced stress in individual or collective situations. The objective is to extract individual signatures depicting central and autonomic regulation at rest and in different experimental situations. Since stress is a multifactorial process depending on the individual's perception and interpretation of a situation, the study of physiological signals is combined with the evaluation of contextual and dispositional psychological characteristics. We focus our attention on cardiac regulation, analysed from the time series defined by the successive durations of the RR intervals. Statistical signal processing methods, whether temporal, frequency-based, or non-linear, are used to study the adaptive capacities of individuals facing different cognitive task situations, associated or not with stressors. Particular interest is given to multiscale entropy for assessing the complexity of signals, which makes it possible to consider the interconnections between cortical structures, subcortical structures, and autonomic cardiac regulation. The probability density functions of the cardiac signals recorded in the different experimental situations are compared pairwise using the Kullback-Leibler divergence, in particular an estimate of its asymptotic increment. The results show that studying cardiac signals makes it possible to discriminate the psychophysiological state of an individual facing cognitive tasks or stressful situations.
Psychophysiological state differences emerge during stress not only at the individual level but also at a collective one, in which the subject is not directly confronted with the stressful stimuli; the stress is therefore empathic. Two experimental applications are derived from our results. First, we show that cardiac complexity, which is altered in people stressed at work, can be improved by cardiac coherence biofeedback training. Second, the signal processing methods are also applied to the study of postural regulation. Overall, our results strengthen the case for human physiological monitoring in health.
Sibim, Alessandra Cristiane. "Estimação e diagnóstico na distribuição exponencial por partes em análise de sobrevivência com fração de cura." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-09062011-151222/.
The main objective is to develop inference procedures from a Bayesian perspective for survival models with (or without) a cure fraction based on the piecewise exponential distribution. The methodology relies on Bayesian Markov Chain Monte Carlo (MCMC) methods. To detect influential observations in the models considered, Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence are used (Cho et al., 2009). Furthermore, we propose the destructive negative binomial cure rate model. The proposed model is more general than existing survival models with a cure rate, since it makes it possible to estimate the number of cases that were not eliminated by an initial treatment.
Zegers, Pablo, B. Frieden, Carlos Alarcón, and Alexis Fuentes. "Information Theoretical Measures for Achieving Robust Learning Machines." MDPI AG, 2016. http://hdl.handle.net/10150/621411.
Full textMohammad, Maruf. "Cellular diagnostic systems using hidden Markov models." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/29520.
Ph. D.
Mohammad, Maruf H. "Cellular diagnostic systems using hidden Markov models." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29520.
Ph. D.
Al Hage, Joelle. "Fusion de données tolérante aux défaillances : application à la surveillance de l’intégrité d’un système de localisation." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10074/document.
Interest in research on multi-sensor data fusion is growing because of its various application sectors. In particular, in the field of robotics and localization, the use of information from different sensors is a vital step toward a reliable position estimate. In this context of multi-sensor data fusion, we consider diagnosis, leading to the identification of the cause of a failure, and the sensor fault tolerance aspect, which has been discussed in only limited work in the literature. We chose to develop an approach based on a purely informational formalism: the information filter on the one hand, and tools from information theory on the other. Residuals based on the Kullback-Leibler divergence are developed; these residuals make it possible to detect and exclude faulty sensors through optimized thresholding methods. This theory is tested in two applications. The first is the fault-tolerant collaborative localization of a multi-robot system. The second is localization in outdoor environments using a tightly coupled GNSS/odometer system with fault tolerance.
Mamouni, Nezha. "Utilisation des Copules en Séparation Aveugle de Sources Indépendantes/Dépendantes." Thesis, Reims, 2020. http://www.theses.fr/2020REIMS007.
The problem of Blind Source Separation (BSS) consists in retrieving unobserved source signals from unknown mixtures of them, when there is no, or only very limited, information about the source signals and/or the mixing system. In this thesis, we present algorithms for separating instantaneous and convolutive mixtures. The principle of these algorithms is to minimize appropriate separation criteria, based on copula densities, using gradient descent type algorithms. These methods can successfully separate instantaneous and convolutive mixtures of possibly dependent source components, even when the copula model is unknown.
Krishnan, Sharenya. "Text-Based Information Retrieval Using Relevance Feedback." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-53603.
Full textDehideniya, Mahasen Bandara. "Optimal Bayesian experimental designs for complex models." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/131625/1/Dissanayake%20Mudiyanselage%20Mahasen_Dehideniya_Thesis.pdf.
Full textSayyareh, Abdolreza. "Test of fit and model selection based on likelihood function." Phd thesis, AgroParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003400.
Full textJesus, Sandra Rêgo de. "Análise bayesiana objetiva para as distribuições normal generalizada e lognormal generalizada." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/4495.
The Generalized Normal (GN) and Generalized Lognormal (logGN) distributions are flexible enough to accommodate features present in the data that are not captured by traditional distributions such as the normal and the lognormal, respectively. These distributions are considered tools for the reduction of outliers and for obtaining robust estimates. However, computational problems have always been the major obstacle to the effective use of these distributions. This work proposes the Bayesian reference analysis methodology to estimate the GN and logGN models. The reference prior for a given ordering of the model parameters is obtained, and it is shown that the reference prior leads to a proper posterior distribution for all the proposed models. Markov Chain Monte Carlo (MCMC) methods are developed for inference purposes. To detect possibly influential observations in the models considered, the Bayesian case-deletion influence analysis method based on the Kullback-Leibler divergence is used. In addition, a scale mixture of uniforms representation of the GN and logGN distributions is exploited as an alternative method to allow the development of efficient Gibbs sampling algorithms. Simulation studies were performed to analyze the frequentist properties of the estimation procedures, and applications to real data demonstrate the use of the proposed models.
Silveti-Falls, Antonio. "First-order noneuclidean splitting methods for large-scale optimization : deterministic and stochastic algorithms." Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC204.
In this work we develop and examine two novel first-order splitting algorithms for solving large-scale composite optimization problems in infinite-dimensional spaces. Such problems are ubiquitous in many areas of science and engineering, particularly in data science and the imaging sciences. Our work focuses on relaxing the Lipschitz-smoothness assumptions generally required by first-order splitting algorithms by replacing the Euclidean energy with a Bregman divergence. These developments allow one to solve problems having more exotic geometry than that of the usual Euclidean setting. One algorithm is a hybridization of the conditional gradient algorithm, which uses a linear minimization oracle at each iteration, with an augmented Lagrangian algorithm, allowing for affine constraints. The other is a primal-dual splitting algorithm incorporating Bregman divergences for computing the associated proximal operators. For both algorithms, our analysis shows convergence of the Lagrangian values, subsequential weak convergence of the iterates to solutions, and rates of convergence. Beyond these deterministic algorithms, we also introduce and study their stochastic extensions from a perturbation perspective; our results here include almost sure convergence of the same quantities as in the deterministic setting, again with rates. Finally, we tackle new problems that become accessible only under the relaxed assumptions our algorithms allow, demonstrating numerical efficiency and verifying our theoretical results on problems such as low-rank sparse matrix completion, inverse problems on the simplex, and entropically regularized Wasserstein inverse problems.
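The Bregman divergence that replaces the Euclidean energy above is defined, for a convex generator h, as D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>. A small sketch (not code from the thesis) shows that the half squared Euclidean distance and the KL divergence are both special cases, obtained from h = (1/2)||.||^2 and h = negative entropy respectively:

```python
import math

def bregman(h, grad_h, x, y):
    """D_h(x, y) = h(x) - h(y) - <grad_h(y), x - y>."""
    return h(x) - h(y) - sum(g * (xi - yi) for g, xi, yi in zip(grad_h(y), x, y))

# Negative entropy generates the KL divergence (on the probability simplex).
neg_entropy = lambda p: sum(pi * math.log(pi) for pi in p)
grad_neg_entropy = lambda p: [math.log(pi) + 1.0 for pi in p]

# Half squared norm generates the usual Euclidean energy.
half_sq_norm = lambda x: 0.5 * sum(v * v for v in x)
grad_half_sq_norm = lambda x: list(x)

p, q = [0.2, 0.8], [0.5, 0.5]
kl_value = bregman(neg_entropy, grad_neg_entropy, p, q)     # = sum_i p_i log(p_i / q_i)
euclidean = bregman(half_sq_norm, grad_half_sq_norm, p, q)  # = 0.5 * ||p - q||^2
print(kl_value, euclidean)
```

Swapping the generator h is exactly what lets these splitting methods adapt to geometries, such as the simplex, where no Lipschitz-smooth Euclidean model exists.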
Daher, Mohamad. "Fusion multi-capteurs tolérante aux fautes pour un niveau d'intégrité élevé du suivi de la personne." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10136/document.
About one third of home-dwelling older people suffer a fall each year. The most serious falls occur when the person is alone and unable to get up, and such falls are associated with institutionalization and a high morbidity-mortality rate. The PAL (Personally Assisted Living) system appears to be one solution to this problem: an ambient intelligence system that allows elderly people to live in an intelligent and proactive environment. This thesis describes ongoing work on in-home elder tracking, recognition of activities of daily living (ADLs), and automatic fall detection using a set of non-intrusive sensors that grant privacy and comfort to the elders. In addition, a fault-tolerant fusion method is proposed using a purely informational formalism: the information filter on the one hand, and information theory tools on the other. Residuals based on the Kullback-Leibler divergence are used; with appropriate thresholding, these residuals lead to the detection and exclusion of sensor faults. The proposed algorithms were validated on many different scenarios containing the activities walking, sitting, standing, lying down, and falling. The developed methods achieved a sensitivity of more than 94% for fall detection and more than 92% for discrimination between the different ADLs.
Santos, Tiago Souza dos. "Segmentação Fuzzy de Texturas e Vídeos." Universidade Federal do Rio Grande do Norte, 2012. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18063.
Full textConselho Nacional de Desenvolvimento Cient?fico e Tecnol?gico
The segmentation of an image aims to subdivide it into constituent regions or objects that have some relevant semantic content. This subdivision can also be applied to videos, where the objects appear across the various frames that compose them. The task of segmenting an image becomes more complex when it is composed of objects defined by textural features, for which color information alone is not a good descriptor. Fuzzy Segmentation is a region-growing segmentation algorithm that uses affinity functions to assign each element in an image a grade of membership (between 0 and 1) for each object. This work presents a modification of the Fuzzy Segmentation algorithm aimed at improving its time and space complexity. The algorithm was adapted to segment color videos, treating them as 3D volumes. To segment the videos, either a conventional color model or a hybrid model obtained by a method for choosing the best channels was used. The Fuzzy Segmentation algorithm was also applied to texture segmentation by using affinity functions adapted to the texture of each object. Two types of affinity functions were used: one defined using the normal (Gaussian) probability distribution, and the other using the Skew Divergence. The latter, a variation of the Kullback-Leibler divergence, is a measure of the difference between two probability distributions. Finally, the algorithm was tested on several videos and on texture mosaic images composed of images from the Brodatz album.
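The Skew Divergence used as an affinity measure above can be sketched as follows (toy distributions assumed, with Lee's smoothing parameter alpha set to 0.99): mixing a small amount of p into q keeps the divergence finite where the plain KL divergence would blow up on zero probabilities.

```python
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def skew_divergence(p, q, alpha=0.99):
    """s_alpha(p, q) = KL(p || alpha*q + (1 - alpha)*p).
    Mixing a little of p into q keeps the value finite even where
    q assigns zero probability."""
    mixed = [alpha * qi + (1.0 - alpha) * pi for pi, qi in zip(p, q)]
    return kl(p, mixed)

p = [0.5, 0.5, 0.0]
q = [0.5, 0.0, 0.5]      # q is zero where p is not: plain KL(p, q) is undefined
print(skew_divergence(p, q))
```

This robustness to empty histogram bins is what makes the smoothed divergence practical for comparing empirical texture distributions inside small segmentation windows.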
Xie, Xinwen. "Quality strategy and method for transmission : application to image." Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2251/document.
This thesis focuses on the study of image quality strategies in wireless communication systems and the design of new quality evaluation metrics. First, a new reduced-reference image quality metric, based on a statistical model in the complex wavelet domain, is proposed. The magnitude and relative phase information of the Dual-tree Complex Wavelet Transform coefficients are modelled using probability density functions, whose parameters serve as reduced-reference features to be transmitted to the receiver. A Generalized Regression Neural Network approach is then exploited to construct the mapping between the reduced-reference features and the objective score. Second, using the new metric, a new decoding strategy is proposed for a realistic wireless transmission system, which can improve the quality of experience (QoE) while ensuring the quality of service (QoS). For this purpose, a new database built from large-scale physiological vision tests was constructed to collect people's visual preferences when selecting images with different decoding configurations, and a classifier based on support vector machines or K-nearest neighbors is used to select the decoding configuration automatically. Finally, according to the specific properties of the distortion and people's preferences, an improved metric is proposed that combines global and local features; it is shown to perform well in optimizing the decoding strategy. The experimental results validate the effectiveness of the proposed image quality metrics and quality strategies.
Martín, Fernández Josep Antoni. "Medidas de diferencia y clasificación automática no paramétrica de datos composicionales." Doctoral thesis, Universitat Politècnica de Catalunya, 2001. http://hdl.handle.net/10803/6704.
Full text
The thesis opens with an introductory chapter presenting the basic elements of non-parametric automatic classification (cluster analysis) techniques, with special emphasis on those elements that can be adapted for the classification of compositional data. The second chapter analyses the most important concepts surrounding compositional data; here the effort concentrates mainly on studying measures of difference between compositional data together with measures of central tendency and dispersion. This provides the tools needed to develop an appropriate methodology for the non-parametric classification of compositional data, which consists of incorporating the above elements into the usual techniques and adapting them where necessary. The third chapter is devoted exclusively to proposing new measures of difference between compositional data based on divergence measures between probability distributions. The fourth chapter incorporates the peculiarities of compositional data into classification techniques and sets out guidelines for their practical use; the chapter closes with an application of the methodology to a practical case. The fifth chapter addresses the so-called zero problem: the drawbacks of the usual substitution methods are analysed and a new formula for replacing zeros by rounding is proposed, followed by a practical case study. The epilogue presents the conclusions of the research and indicates future lines of work. The final appendices collect the data sets used in the practical cases developed in this thesis.
The thesis is completed by a list of the most relevant bibliographic references consulted in carrying out this research.
On March 23, 2001, Josep Antoni Martín-Fernández from the Dept. of Computer Sciences and Applied Mathematics of the University of Girona (Catalonia, Spain) presented his PhD thesis, entitled "Measures of difference and non-parametric cluster analysis for compositional data", at the Technical University of Barcelona. A short summary follows:
Compositional data are by definition proportions of some whole. Thus, their natural sample space is the open simplex, and interest lies in the relative behaviour of the components. Basic operations defined on the simplex induce a vector space structure, which justifies the development of its algebraic-geometric structure: scalar product, norm, and distance. At the same time, hierarchical methods of classification require establishing in advance some or all of the following measures: difference, central tendency and dispersion, in accordance with the nature of the data. J. A. Martín-Fernández studies the requirements on these measures when the data are compositional in type and presents specific measures to be used with the most common non-parametric methods of cluster analysis. As part of his thesis he also introduced the centering operation, which has been shown to be a powerful tool for visualizing compositional data sets. Furthermore, he defines a new dissimilarity based on measures of divergence between multinomial probability distributions, which is compatible with the nature of compositional data. Finally, J. A. Martín-Fernández presents in his thesis a new method to attack the "Achilles heel" of any statistical analysis of compositional data, the presence of zero values, based on a multiplicative approach that respects the essential properties of this type of data.
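The simplex geometry this summary refers to (scalar product, norm, distance) is commonly realized through log-ratio transforms. The following Python sketch is our own illustration of that idea, not code from the thesis:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a composition with positive parts."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))  # geometric mean of the parts
    return np.log(x / g)

def aitchison_distance(x, y):
    """Distance on the simplex: Euclidean distance between clr images."""
    return float(np.linalg.norm(clr(x) - clr(y)))
```

A key consequence is scale invariance: rescaling a composition (e.g., reporting it in percentages instead of proportions) leaves the distance unchanged, which is exactly the relative behaviour compositional analysis cares about.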
Abci, Boussad. "Approche informationnelle pour la navigation autonome tolérante aux défauts : application aux systèmes robotiques mobiles." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I073.
Full text
Over the last years, autonomous navigation for mobile robotic systems has attracted increasing interest from the scientific community, mainly because of the diversity of its applications and the challenges it presents. Without any human intervention, autonomous navigation must be safe, reliable and accurate. Nevertheless, it may be subject to various degradations that could compromise its objective: external disturbances, as well as sensor and actuator faults, may affect the different aspects of autonomous navigation, namely localization, path planning and trajectory tracking. This thesis is therefore devoted to the design of new algorithms that make the navigation system robust against external disturbances and tolerant to sensor and actuator faults. We adopt a residual-generation-based fault-diagnosis strategy combined with a sliding-mode controller that is robust against a class of perturbations that are not necessarily uniformly bounded. The proposed diagnostic layer is purely informational: it is based on two information filters with different evolution models, and on the Bhattacharyya and Kullback-Leibler divergences for residual design. These residuals are evaluated using statistical methods in order to detect, isolate and then exclude sensor and actuator faults from the navigation system. The proposed approach is applied to different differential-drive mobile-robot systems. Experimental results obtained with the CRIStAL robotic platform PRETIL are presented and discussed.
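For the Gaussian state estimates that information filters produce, the two divergences named above have closed forms. The sketch below is our own illustration of those formulas, not the thesis's residual-generation code:

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """Kullback-Leibler divergence KL(N(m0,S0) || N(m1,S1))."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    dm = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + dm @ S1_inv @ dm - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def bhattacharyya_gauss(m0, S0, m1, S1):
    """Bhattacharyya distance between two Gaussians."""
    S = 0.5 * (S0 + S1)
    dm = m1 - m0
    return (0.125 * dm @ np.linalg.inv(S) @ dm
            + 0.5 * np.log(np.linalg.det(S)
                           / np.sqrt(np.linalg.det(S0) * np.linalg.det(S1))))
```

In a residual-based scheme, either quantity, computed between the outputs of two filters with different evolution models, is near zero in fault-free operation and grows when a fault makes the filters disagree.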
Degenne, Rémy. "Impact of structure on the design and analysis of bandit algorithms." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7179.
Full text
In this thesis, we study sequential learning problems called stochastic multi-armed bandits. First, a new bandit algorithm is presented. Its analysis uses confidence intervals on the means of the arm reward distributions, as most bandit proofs do. In a parametric setting, we derive concentration inequalities that quantify the deviation between the mean parameter of a distribution and its empirical estimate in order to obtain confidence intervals; these inequalities are stated as bounds on the Kullback-Leibler divergence. Three extensions of the stochastic multi-armed bandit problem are then studied. First, we study the so-called combinatorial semi-bandit problem, in which an algorithm chooses a set of arms and the reward of each of these arms is observed; the minimal attainable regret then depends on the correlation between the arm distributions. We then consider a setting in which the observation mechanism changes: one source of difficulty of the bandit problem is the scarcity of information, since only the arm pulled is observed, and we show how to make efficient use of any supplementary free information (which does not influence the regret). Finally, a new family of algorithms is introduced that provides both regret-minimization and best-arm-identification guarantees; each algorithm of the family realizes a trade-off between regret and the time needed to identify the best arm. In a second part we study the so-called pure exploration problem, in which an algorithm is evaluated not on its regret but on the probability that it returns a wrong answer to a question about the arm distributions. We determine the complexity of such problems and design algorithms with performance close to that complexity.
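KL-based confidence intervals of the kind described above can be illustrated with the classic Bernoulli KL upper confidence bound, computed by bisection. This sketch is our own, and the exploration level `log(t)/pulls` is a typical textbook choice, not necessarily the one used in the thesis:

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(mean, pulls, t, tol=1e-6):
    """Largest q >= mean with pulls * KL(mean, q) <= log(t),
    found by bisection on [mean, 1]."""
    level = math.log(t) / pulls
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(mean, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo
```

The bound tightens as an arm is pulled more often: the allowed level `log(t)/pulls` shrinks, so the returned upper confidence bound moves toward the empirical mean.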
Torres, Huircaman Milflen. "Detección de condición falla de encolamientos de cambios de estado de móviles prepago a través de divergencia de Kullback-Leibler." Tesis, Universidad de Chile, 2012. http://www.repositorio.uchile.cl/handle/2250/111949.
Full text
The Chilean prepaid mobile telephony industry accounts for 70% of the mobile customers of the country's main operators. This service uses an online debit-and-credit process that almost instantly deducts the credit consumed when the voice and data services enabled on the handset are used, and credits the corresponding amount when a prepaid top-up is applied; these are the most common operations that change the operating state of a prepaid mobile terminal. The dynamics of these transitions depend intimately on the operation of the computer system that manages and executes the state changes. Its architecture, a server with a command queue, uses a first-in first-out (FIFO) policy to process each command associated with the state transition to be applied to each terminal in the network. This command-management system can collapse if the demand for state changes rises suddenly and exceeds the server's processing capacity. The consequence is uncontrolled growth of the command queue, which in turn can cause problems in the telecommunication services within the network and monetary losses for the operator when the charging system goes offline. This phenomenon, called queue buildup (encolamiento), is controlled in commercial systems using threshold alarms, which tell the system administrators when to activate the countermeasures needed to restore correct operation. However, the threshold value is set without necessarily applying performance-optimality criteria, which reduces the efficiency of the technical and commercial operation of the service.
The working hypothesis of this research is that a "hard" threshold can be improved by an approach that incorporates the history of the process describing the command-queue length, such as one based on the probability distributions of the normal-operation and queue-buildup conditions. To validate this conjecture, a queue-buildup detector based on the Kullback-Leibler divergence was designed, which compares the instantaneous distribution of the observations with the distributions corresponding to the normal-operation and queue-buildup conditions. The methodology used to validate this thesis was based on computer simulation of the state transitions, described by a 3-state Markov chain, which was used to quantify the operation of the detector and compare it with the metrics of hard threshold detection. The performance metrics were the percentage of type I errors (missed detections) and type II errors (false positives), computed empirically for both detectors. In addition, the detector was validated with real operational data from a 14-month record of observations. The results support the stated hypothesis: improvements of up to 60% in queue-buildup detection and an 85% reduction in false positives were observed when comparing the Kullback-Leibler detector with threshold-based ones. These results constitute an important advance in the precision and reliability of fault-condition detection, justifying the incorporation of this new strategy into the operations environment of a telecommunications company, and make it potentially extensible to other processes controlled through queues.
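The detection idea described above can be sketched as follows. This toy Python version is ours, with hypothetical bin edges and reference distributions; it labels a window of queue-length observations by whichever reference condition is closer in Kullback-Leibler divergence:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between discrete distributions, with smoothing."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def classify_window(window, bins, p_normal, p_queue):
    """Histogram a window of queue-length samples and label it by the
    closer reference distribution in KL divergence."""
    hist, _ = np.histogram(window, bins=bins)
    p_emp = hist / hist.sum()
    return "queueing" if kl(p_emp, p_queue) < kl(p_emp, p_normal) else "normal"
```

Unlike a hard threshold on the latest queue length, the decision here uses the whole window's empirical distribution, which is what makes the detector less sensitive to isolated spikes.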
Filippi, Sarah. "Stratégies optimistes en apprentissage par renforcement." Phd thesis, Ecole nationale supérieure des telecommunications - ENST, 2010. http://tel.archives-ouvertes.fr/tel-00551401.
Full text
Hee, Sonke. "Computational Bayesian techniques applied to cosmology." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273346.
Full text
Cen, Kun. "The Use Of Kullback-Leibler Divergence In Opinion Retrieval." Thesis, 2008. http://hdl.handle.net/10012/4081.
Full text
陳力嘉. "Kullback-Leibler divergence based test for detecting differentially expressed genes." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/51228353977258529392.
Full text
Chen, Shun-Lung, and 陳順隆. "Acoustic Recognition Using Tandem System with Kullback-Leibler Divergence and Hierarchical Multi-layer Perceptron." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/80451777015559629372.
Full text
DELLA, CIOPPA LORENZO. "Differential entropy based methods for thresholding in wavelet bases and other applications." Doctoral thesis, 2021. http://hdl.handle.net/11573/1566221.
Full text
(8695017), Surabhi Bhadauria. "ASSOCIATION OF TOO SHORT ARCS USING ADMISSIBLE REGION." Thesis, 2020.
Find full text
Near-Earth space is filled with over 300,000 artificial debris objects with a diameter larger than one cm. For objects in the GEO and MEO regions, observations are made mainly through optical sensors. These sensors take observations over a short time span, covering only a negligible part of the object's orbit. Two or more such observations are grouped into a single Too Short Arc (TSA). Each TSA from an optical sensor consists of the right ascension and declination angles along with their rates of change. However, because it covers only a very small fraction of the orbit, the observational data from one TSA is not sufficient for a complete initial determination of an object's orbit. For a newly detected unknown object, only TSAs are available, with no information about the object's orbit; therefore, two or more TSAs belonging to the same object are required for its orbit determination. To solve this correlation problem, the framework of the probabilistic Admissible Region is used, which restricts the possible orbits consistent with a single TSA. To propagate the Admissible Region to the time of a second TSA, it is represented as a closed-form Gaussian Mixture, which makes propagation with an Extended Kalman filter possible. To decide whether two TSAs are correlated, that is, whether they belong to the same object, an overlap between the regions is sought in a suitable orbital-mechanics-based coordinate frame. The overlap is computed using the information measure of Kullback-Leibler divergence.
Sečkárová, Vladimíra. "Kombinování diskrétních pravděpodobnostních rozdělení pomocí křížové entropie pro distribuované rozhodování." Doctoral thesis, 2015. http://www.nusl.cz/ntk/nusl-350939.
Full text
Kolba, Mark Philip. "Information-Based Sensor Management for Static Target Detection Using Real and Simulated Data." Diss., 2009. http://hdl.handle.net/10161/1313.
Full text
In the modern sensing environment, large numbers of sensor tasking decisions must be made using an increasingly diverse and powerful suite of sensors in order to best fulfill mission objectives under situationally varying resource constraints. Sensor management algorithms automate some or all of the sensor tasking process: they can assist or replace a human operator, and they can ensure that operator's safety by removing the need for a presence in a dangerous operational environment. Sensor managers also provide improved system performance over unmanaged sensing approaches through intelligent control of the available sensors. In particular, information-theoretic sensor management approaches have shown promise for providing robust and effective performance.
This work develops information-theoretic sensor managers for a general static target detection problem. Two types of sensor managers are developed. The first considers a set of discrete objects, such as anomalies identified by an anomaly detector or grid cells in a gridded region of interest. The second considers a continuous spatial region in which targets may be located at any point. In both cases, the sensor manager uses a Bayesian probabilistic framework to model the environment and tasks the sensor suite to make the new observations that maximize the expected information gain for the system. The sensor managers are compared to unmanaged sensing approaches using simulated data and real data from landmine detection and unexploded ordnance (UXO) discrimination applications; the sensor managers consistently outperform the unmanaged approaches, detecting targets more quickly. This rapid detection is of crucial importance in many static target detection applications, yielding higher rates of advance and reduced costs and resource consumption in both military and civilian applications.
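For the discrete-object case, the expected-information-gain criterion can be sketched as follows. This is our own illustration for a per-cell Bernoulli target model with assumed detection and false-alarm probabilities, not the manager developed in the dissertation:

```python
import math

def entropy(p):
    """Binary entropy in nats."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def expected_info_gain(prior, pd, pfa):
    """Expected entropy reduction of a cell's target posterior after one
    observation from a sensor with detection probability pd and
    false-alarm probability pfa."""
    p_pos = pd * prior + pfa * (1 - prior)       # prob. of a 'detect' reading
    post_pos = pd * prior / p_pos                # posterior given 'detect'
    post_neg = (1 - pd) * prior / (1 - p_pos)    # posterior given 'no detect'
    expected_post = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    return entropy(prior) - expected_post

def next_cell(priors, pd=0.9, pfa=0.1):
    """Task the sensor to the cell with the largest expected information gain."""
    gains = [expected_info_gain(p, pd, pfa) for p in priors]
    return max(range(len(priors)), key=gains.__getitem__)
```

The greedy rule naturally sends the sensor to the most uncertain cells first, since a near-certain cell (prior close to 0 or 1) offers little entropy left to remove.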
Mielke, Matthias. "Maximum Likelihood Theory for Retention of Effect Non-Inferiority Trials." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B3D4-3.
Full text
(6561242), Piyush Pandita. "BAYESIAN OPTIMAL DESIGN OF EXPERIMENTS FOR EXPENSIVE BLACK-BOX FUNCTIONS UNDER UNCERTAINTY." Thesis, 2019.
Find full text
Vaidhiyan, Nidhin Koshy. "Neuronal Dissimilarity Indices that Predict Oddball Detection in Behaviour." Thesis, 2016. http://etd.iisc.ac.in/handle/2005/2669.
Full text
FANTACCI, CLAUDIO. "Distributed multi-object tracking over sensor networks: a random finite set approach." Doctoral thesis, 2015. http://hdl.handle.net/2158/1003256.
Full text