Dissertations / Theses on the topic 'Kullback-Leibler divergence'

To see the other types of publications on this topic, follow the link: Kullback-Leibler divergence.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 48 dissertations / theses for your research on the topic 'Kullback-Leibler divergence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Mesejo-Leon, Daniel Alejandro. "Approximate Nearest Neighbor Search for the Kullback-Leibler Divergence." Pontifícia Universidade Católica do Rio de Janeiro, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=33305@1.

Full text
Abstract:
Em uma série de aplicações, os pontos de dados podem ser representados como distribuições de probabilidade. Por exemplo, os documentos podem ser representados como modelos de tópicos, as imagens podem ser representadas como histogramas e também a música pode ser representada como uma distribuição de probabilidade. Neste trabalho, abordamos o problema do Vizinho Próximo Aproximado onde os pontos são distribuições de probabilidade e a função de distância é a divergência de Kullback-Leibler (KL). Mostramos como acelerar as estruturas de dados existentes, como a Bregman Ball Tree, em teoria, colocando a divergência KL como um produto interno. No lado prático, investigamos o uso de duas técnicas de indexação muito populares: Índice Invertido e Locality Sensitive Hashing. Os experimentos realizados em 6 conjuntos de dados do mundo real mostraram que o Índice Invertido é melhor do que LSH e Bregman Ball Tree, em termos de consultas por segundo e precisão.
In a number of applications, data points can be represented as probability distributions. For instance, documents can be represented as topic models, images can be represented as histograms, and music can also be represented as a probability distribution. In this work, we address the Approximate Nearest Neighbor problem where the points are probability distributions and the distance function is the Kullback-Leibler (KL) divergence. We show how to accelerate existing data structures such as the Bregman Ball Tree, in theory, by posing the KL divergence as an inner product embedding. On the practical side, we investigate the use of two very popular indexing techniques: the Inverted Index and Locality Sensitive Hashing. Experiments performed on 6 real-world data sets showed that the Inverted Index performs better than LSH and the Bregman Ball Tree in terms of queries per second and precision.
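As background for this entry (a sketch of the general setting, not the author's implementation), the snippet below does brute-force nearest-neighbour search under the KL divergence between discrete distributions; the toy distributions and the smoothing constant eps are assumptions of the example.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as 1-D arrays."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def nearest_neighbor_kl(query, corpus):
    """Index of the corpus distribution minimizing KL(query || candidate), by brute force."""
    return int(np.argmin([kl_divergence(query, c) for c in corpus]))

# Toy "topic distributions" and a query histogram.
corpus = [np.array([0.7, 0.2, 0.1]),
          np.array([0.1, 0.8, 0.1]),
          np.array([0.3, 0.3, 0.4])]
query = np.array([0.6, 0.3, 0.1])
print(nearest_neighbor_kl(query, corpus))   # 0: the first distribution is closest to the query
```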
APA, Harvard, Vancouver, ISO, and other styles
2

Nounagnon, Jeannette Donan. "Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/86593.

Full text
Abstract:
Geolocation accuracy is a crucial, even life-or-death, factor for rescue teams. Natural and man-made disasters are just a few convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is through the use of collaborative positioning, which consists of simultaneously solving for the positions of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation on the performance of collaborative positioning has been largely lacking. This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed states: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is the acquisition of another set of information between the collaborating nodes. This acquisition of new information reduces the uncertainty on the localization of both nodes. Under certain conditions, this reduction in uncertainty occurs for both nodes by the same amount; hence collaboration is beneficial in terms of uncertainty. However, reduced uncertainty does not necessarily imply improved accuracy. So, we define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine factors that affect the improvement in accuracy due to collaboration. We derive range conditions under which collaborative positioning starts to degrade the performance of standalone positioning. We derive and test criteria to determine on the fly (ahead of time) whether or not it is worth collaborating in order to improve accuracy. The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
3

Junior, Willian Darwin. "Agrupamento de textos utilizando divergência Kullback-Leibler." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-30032016-160011/.

Full text
Abstract:
O presente trabalho propõe uma metodologia para agrupamento de textos que possa ser utilizada tanto em busca textual em geral como mais especificamente na distribuição de processos jurídicos para fins de redução do tempo de resolução de conflitos judiciais. A metodologia proposta utiliza a divergência Kullback-Leibler aplicada às distribuições de frequência dos radicais (semantemas) das palavras presentes nos textos. Diversos grupos de radicais são considerados, formados a partir da frequência com que ocorrem entre os textos, e as distribuições são tomadas em relação a cada um desses grupos. Para cada grupo, as divergências são calculadas em relação à distribuição de um texto de referência formado pela agregação de todos os textos da amostra, resultando em um valor para cada texto em relação a cada grupo de radicais. Ao final, esses valores são utilizados como atributos de cada texto em um processo de clusterização utilizando uma implementação do algoritmo K-Means, resultando no agrupamento dos textos. A metodologia é testada em exemplos simples de bancada e aplicada a casos concretos de registros de falhas elétricas, de textos com temas em comum e de textos jurídicos e o resultado é comparado com uma classificação realizada por um especialista. Como subprodutos da pesquisa realizada, foram gerados um ambiente gráfico de desenvolvimento de modelos baseados em Reconhecimento de Padrões e Redes Bayesianas e um estudo das possibilidades de utilização de processamento paralelo na aprendizagem de Redes Bayesianas.
This work proposes a methodology for grouping texts that can be used for textual search in general and, more specifically, for distributing legal cases in order to reduce the time spent resolving judicial conflicts. The proposed methodology uses the Kullback-Leibler divergence applied to the frequency distributions of the word stems occurring in the texts. Several groups of stems are considered, built from their occurrence frequency among the texts, and the resulting distributions are taken with respect to each of those groups. For each group, divergences are computed against the distribution of a reference text formed by aggregating all sample texts, yielding one value for each text in relation to each group of stems. Finally, those values are used as attributes of each text in a clustering process driven by a K-Means algorithm implementation, providing a grouping of the texts. The methodology is tested on simple toy examples and applied to concrete cases of electrical failure records, texts with common themes, and legal texts, and the result is compared with a classification performed by an expert. As byproducts of the research, a graphical environment for developing models based on Pattern Recognition and Bayesian Networks and a study of the possibilities of using parallel processing in Bayesian Network learning were also produced.
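A rough illustration of the pipeline described above, not the thesis code: one KL value per text and per stem group, computed against an aggregate reference text and used as attributes for K-Means. The stem counts, the grouping of stems, and the number of clusters are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def kl(p, q, eps=1e-12):
    """KL(p || q) for count vectors, with smoothing so zero counts are handled."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy stem-count matrix: one row per text, one column per word stem.
counts = np.array([
    [9, 1, 0, 0],
    [8, 2, 0, 0],
    [0, 1, 9, 2],
    [0, 0, 8, 3],
])
reference = counts.sum(axis=0)     # aggregate "reference text" built from all sample texts
groups = [[0, 1], [2, 3]]          # illustrative groups of stems

# One KL value per text and per stem group, used as clustering attributes.
features = np.array([[kl(row[g], reference[g]) for g in groups] for row in counts])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)                      # cluster labels assigned to the four toy texts
```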
APA, Harvard, Vancouver, ISO, and other styles
4

Harmouche, Jinane. "Statistical Incipient Fault Detection and Diagnosis with Kullback-Leibler Divergence : from Theory to Applications." Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0022/document.

Full text
Abstract:
Les travaux de cette thèse portent sur la détection et le diagnostic des défauts naissants dans les systèmes d’ingénierie et industriels, par des approches statistiques non-paramétriques. Un défaut naissant est censé provoquer comme tout défaut un changement anormal dans les mesures des variables du système. Ce changement est imperceptible mais aussi imprévisible dû à l’important rapport signal-sur défaut, et le faible rapport défaut-sur-bruit caractérisant le défaut naissant. La détection et l’identification d’un changement général nécessite une approche globale qui prend en compte la totalité de la signature des défauts. Dans ce cadre, la divergence de Kullback-Leibler est proposée comme indicateur général de défauts, sensible aux petites variations anormales cachées dans les variations du bruit. Une approche d’analyse spectrale globale est également proposée pour le diagnostic de défauts ayant une signature fréquentielle. L’application de l’approche statistique globale est illustrée sur deux études différentes. La première concerne la détection et la caractérisation, par courants de Foucault, des fissures dans les structures conductrices. La deuxième application concerne le diagnostic des défauts de roulements dans les machines électriques tournantes. En outre, ce travail traite le problème d’estimation de l’amplitude des défauts naissants. Une analyse théorique menée dans le cadre d’une modélisation par analyse en composantes principales, conduit à un modèle analytique de la divergence ne dépendant que des paramètres du défaut
This PhD dissertation deals with the detection and diagnosis of incipient faults in engineering and industrial systems using non-parametric statistical approaches. An incipient fault is supposed to provoke, like any fault, an abnormal change in the measurements of the system variables. However, this change is imperceptible and also unpredictable due to the large signal-to-fault ratio and the low fault-to-noise ratio characterizing the incipient fault. The detection and identification of a global change require a 'global' approach that takes into account the total fault signature. In this context, the Kullback-Leibler divergence is proposed as a 'global' fault indicator, sensitive to small abnormal variations hidden in noise. A 'global' spectral analysis approach is also proposed for the diagnosis of faults with a frequency signature. The 'global' statistical approach is demonstrated on two application studies. The first one concerns the eddy-current detection and characterization of minor cracks in conductive structures. The second application concerns the diagnosis of bearing faults in electrical rotating machines. In addition, the problem of estimating the amplitude of incipient faults is addressed. A theoretical study, conducted within a principal component analysis modelling framework, leads to an analytical model of the KL divergence from which an estimate of the amplitude of the incipient fault is derived.
APA, Harvard, Vancouver, ISO, and other styles
5

Chhogyal, Kinzang. "Belief Change: A Probabilistic Inquiry." Thesis, Griffith University, 2016. http://hdl.handle.net/10072/366331.

Full text
Abstract:
The belief state of a rational agent may be viewed as consisting of sentences that are either beliefs, disbeliefs or neither (non-beliefs). When probabilities are used to model the belief state, beliefs hold a probability of 1, disbeliefs a probability of 0, and non-beliefs a probability between 0 and 1. Probabilistic belief contraction is an operation on the belief state that takes a belief as input and turns it into a non-belief, whereas probabilistic belief revision takes a disbelief and turns it into a belief. Given a probabilistic belief state P, the contraction of P by an input a is denoted P_a^- and can be determined as the mixture of P and P_a^*, where P_a^* is the belief state that results from revising P by ¬a. The proportion of P and P_a^* used in the mixture is set by the mixing factor. Thus, the mixing factor has an important role to play in determining the contracted belief state P_a^-.
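In the abstract's notation, the contraction can be written as a convex mixture; the mixing-factor symbol ε below is an assumption of this note, not necessarily the thesis' symbol:

```latex
P_a^-(\cdot) \;=\; \varepsilon\, P(\cdot) \;+\; (1-\varepsilon)\, P_a^{*}(\cdot),
\qquad 0 \le \varepsilon \le 1 .
```

Since P assigns a the probability 1 and P_a^* assigns it 0, the contracted state assigns a the intermediate probability ε, which is what turns the belief into a non-belief.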
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Science, Environment, Engineering and Technology
APA, Harvard, Vancouver, ISO, and other styles
6

Zhou, Ruikun. "A Kullback-Leibler Divergence Filter for Anomaly Detection in Non-Destructive Pipeline Inspection." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40987.

Full text
Abstract:
Anomaly detection generally refers to algorithmic procedures aimed at identifying relatively rare events in data sets that differ substantially from the majority of the data to which they belong. In the context of data series generated by sensors mounted on mobile devices for non-destructive inspection and monitoring, anomalies typically identify the defects to be detected, thereby defining the main task of this class of devices. In this case, a useful way of operationally defining anomalies is to look at their information content with respect to the background data, which is typically noisy and therefore easily masks the relevant events if unfiltered. In this thesis, a Kullback-Leibler (KL) divergence filter is proposed to detect signals with relatively high information content, namely anomalies, within data series. The data are generated using a model of a broad class of proximity sensors that applies to devices commonly used in engineering practice. This includes, for example, sensory devices mounted on mobile robotic platforms for the non-destructive inspection of hazardous or other environments that may not be accessible to humans for direct inspection. The raw sensory data generated by this class of sensors are often challenging to analyze due to the prevalence of noise over the signal content that reveals the presence of relevant features, such as damage in gas pipelines. The proposed filter is built to detect the difference in information content between the data series collected by the sensor and a baseline data series, with the advantage of not requiring the design of a threshold. Moreover, unlike traditional filters, which need prior knowledge or distributional assumptions about the data, the KL divergence filter is model-free and suitable for all kinds of raw sensory data; it remains compatible, of course, with classical signal distribution assumptions such as the Gaussian approximation. Finally, the robustness and sensitivity of the KL divergence filter are discussed under different scenarios, with various signal-to-noise ratios, using data generated by a simulator reproducing very realistic conditions and based on models of real sensors provided by manufacturers or widely accepted in the literature.
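A minimal sketch of the underlying idea (not the thesis' filter): estimate the distribution of each sliding window of sensor readings and of a noise-only baseline series with histograms, and score each window by its KL divergence from the baseline. The synthetic signal, window length, and binning are assumptions of the example.

```python
import numpy as np

def kl_from_histograms(window, baseline, bins, eps=1e-12):
    """KL(window || baseline) using histogram estimates over shared bins."""
    p, _ = np.histogram(window, bins=bins)
    q, _ = np.histogram(baseline, bins=bins)
    p = p.astype(float) + eps
    q = q.astype(float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)            # noise-only reference series
signal = rng.normal(0.0, 1.0, 2000)
signal[1000:1100] += 2.5                         # small defect-like deviation buried in noise

bins = np.linspace(-6.0, 6.0, 41)
w = 100                                          # sliding-window length
scores = [kl_from_histograms(signal[i:i + w], baseline, bins)
          for i in range(0, len(signal) - w + 1, w)]
print(int(np.argmax(scores)) * w)                # 1000: start of the most divergent window
```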
APA, Harvard, Vancouver, ISO, and other styles
7

Jung, Daniel. "Diagnosability performance analysis of models and fault detectors." Doctoral thesis, Linköpings universitet, Fordonssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117058.

Full text
Abstract:
Model-based diagnosis compares observations from a system with predictions from a mathematical model in order to detect and isolate faulty components. Analyzing which faults can be detected and isolated given the model provides useful information when designing a diagnosis system. This information can be used, for example, to determine which residual generators can be generated or to select a sufficient set of sensors that can be used to detect and isolate the faults. With more information about the system taken into consideration during such an analysis, more accurate estimates can be computed of how good a fault detectability and isolability performance can be achieved. Model uncertainties and measurement noise are the main reasons for reduced fault detection and isolation performance and can make it difficult to design a diagnosis system that fulfills given performance requirements. By taking information about different uncertainties into consideration early in the development process of a diagnosis system, it is possible to predict how good a performance can be achieved by a diagnosis system and to avoid bad design choices. This thesis deals with quantitative analysis of fault detectability and isolability performance when taking model uncertainties and measurement noise into consideration. The goal is to analyze fault detectability and isolability performance given a mathematical model of the monitored system, before a diagnosis system is developed. A quantitative measure of fault detectability and isolability performance for a given model, called distinguishability, is proposed based on the Kullback-Leibler divergence. The distinguishability measure answers questions like "How difficult is it to isolate a fault f_i from another fault f_j?". Different properties of the distinguishability measure are analyzed. It is shown, for example, that for linear descriptor models with Gaussian noise, distinguishability gives an upper limit for the fault-to-noise ratio of any linear residual generator. The proposed measure is used for quantitative analysis of a nonlinear mean-value model of gas flows in a heavy-duty diesel engine, to analyze how fault diagnosability performance varies between operating points. It is also used to formulate the sensor selection problem, i.e., to find the cheapest set of available sensors that should be used in a system to achieve a required fault diagnosability performance. As a case study, quantitative fault diagnosability analysis is used during the design of an engine misfire detection algorithm based on the crankshaft angular velocity measured at the flywheel. Decisions during the development of the misfire detection algorithm are motivated using quantitative analysis of misfire detectability performance, showing, for example, varying detection performance at different operating points and for different cylinders, to identify when it is more difficult to detect misfires. This thesis presents a framework for quantitative fault detectability and isolability analysis that is a useful tool during the design of a diagnosis system. The different applications show examples of how quantitative analysis can be applied during a design process, either as feedback to an engineer or when formulating different design steps as optimization problems to assure that the required performance can be achieved.
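Not the thesis' distinguishability measure itself, but the building block it rests on in the linear-Gaussian case: the closed-form KL divergence between two multivariate Gaussian distributions, applied here to two illustrative residual distributions.

```python
import numpy as np

def kl_gaussians(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = np.asarray(mu1, float) - np.asarray(mu0, float)
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Residual distribution under fault f_i versus under fault f_j (illustrative values).
mu_i, cov_i = np.array([1.0, 0.0]), np.eye(2)
mu_j, cov_j = np.array([0.0, 0.0]), np.eye(2)
print(kl_gaussians(mu_i, cov_i, mu_j, cov_j))   # 0.5: half the squared mean shift here
```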
APA, Harvard, Vancouver, ISO, and other styles
8

White, Staci A. "Quantifying Model Error in Bayesian Parameter Estimation." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433771825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Adamcik, Martin. "Collective reasoning under uncertainty and inconsistency." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/collective-reasoning-under-uncertainty-and-inconsistency(7fab8021-8beb-45e7-8b45-7cb4fadd70be).html.

Full text
Abstract:
In this thesis we investigate some global desiderata for probabilistic knowledge merging given several possibly jointly inconsistent, but individually consistent knowledge bases. We show that the most naive methods of merging, which combine applications of a single expert inference process with the application of a pooling operator, fail to satisfy certain basic consistency principles. We therefore adopt a different approach. Following recent developments in machine learning where Bregman divergences appear to be powerful, we define several probabilistic merging operators which minimise the joint divergence between merged knowledge and given knowledge bases. In particular we prove that in many cases the result of applying such operators coincides with the sets of fixed points of averaging projective procedures - procedures which combine knowledge updating with pooling operators of decision theory. We develop relevant results concerning the geometry of Bregman divergences and prove new theorems in this field. We show that this geometry connects nicely with some desirable principles which have arisen in the epistemology of merging. In particular, we prove that the merging operators which we define by means of convex Bregman divergences satisfy analogues of the principles of merging due to Konieczny and Pino-Perez. Additionally, we investigate how such merging operators behave with respect to principles concerning irrelevant information, independence and relativisation which have previously been intensively studied in case of single-expert probabilistic inference. Finally, we argue that two particular probabilistic merging operators which are based on Kullback-Leibler divergence, a special type of Bregman divergence, have overall the most appealing properties amongst merging operators hitherto considered. By investigating some iterative procedures we propose algorithms to practically compute them.
APA, Harvard, Vancouver, ISO, and other styles
10

Macêra, Márcia Aparecida Centanin. "Uso dos métodos clássico e bayesiano para os modelos não-lineares heterocedásticos simétricos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-14092011-164458/.

Full text
Abstract:
Os modelos normais de regressão têm sido utilizados durante muitos anos para a análise de dados. Mesmo nos casos em que a normalidade não podia ser suposta, tentava-se algum tipo de transformação com o intuito de alcançar a normalidade procurada. No entanto, na prática, essas suposições sobre normalidade e linearidade nem sempre são satisfeitas. Como alternativas à técnica clássica, foram desenvolvidas novas classes de modelos de regressão. Nesse contexto, focamos a classe de modelos em que a distribuição assumida para a variável resposta pertence à classe de distribuições simétricas. O objetivo geral desse trabalho é a modelagem desta classe no contexto bayesiano, em particular a modelagem da classe de modelos não-lineares heterocedásticos simétricos. Vale ressaltar que esse trabalho tem ligação com duas linhas de pesquisa, a saber: a inferência estatística abordando aspectos da teoria assintótica e a inferência bayesiana considerando aspectos de modelagem e critérios de seleção de modelos baseados em métodos de simulação de Monte Carlo em Cadeia de Markov (MCMC). Uma primeira etapa consiste em apresentar a classe dos modelos não-lineares heterocedásticos simétricos bem como a inferência clássica dos parâmetros desses modelos. Posteriormente, propomos uma abordagem bayesiana para esses modelos, cujo objetivo é mostrar sua viabilidade e comparar a inferência bayesiana dos parâmetros estimados via métodos MCMC com a inferência clássica das estimativas obtidas por meio da ferramenta GAMLSS. Além disso, utilizamos o método bayesiano de análise de influência caso a caso baseado na divergência de Kullback-Leibler para detectar observações influentes nos dados. A implementação computacional foi desenvolvida no software R e para detalhes dos programas pode ser consultado aos autores do trabalho
Normal regression models have been used for many years for data analysis. Even in cases where normality could not be assumed, some kind of transformation was attempted in order to achieve the normality sought. In practice, however, these assumptions of normality and linearity are not always satisfied. As alternatives to the classical technique, new classes of regression models have been developed. In this context, we focus on the class of models in which the distribution assumed for the response variable belongs to the class of symmetric distributions. The aim of this work is the modelling of this class in the Bayesian context, in particular the modelling of the class of symmetric heteroscedastic nonlinear models. Note that this work is connected with two research lines: statistical inference addressing aspects of asymptotic theory, and Bayesian inference considering aspects of modelling and model selection criteria based on Markov Chain Monte Carlo (MCMC) simulation methods. A first step is to present the class of symmetric heteroscedastic nonlinear models as well as the classical inference for the parameters of these models. Subsequently, we propose a Bayesian approach to these models, whose objective is to show its feasibility and to compare the Bayesian inference for the parameters, estimated via MCMC methods, with the classical inference for the estimates obtained with the GAMLSS tool. In addition, we use a Bayesian method of case-by-case influence analysis based on the Kullback-Leibler divergence to detect influential observations in the data. The computational implementation was developed in the software R, and details of the programs can be obtained from the authors of the work.
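For illustration, a sketch of case-by-case influence analysis of the kind described above, using the common MCMC estimator K_i = E[log f(y_i | θ)] + log E[1 / f(y_i | θ)] for the KL divergence between the full posterior and the case-deleted posterior; the toy data, the stand-in posterior draws, and the normal likelihood are assumptions of the example, not the thesis' model.

```python
import numpy as np

def kl_case_deletion(theta_draws, y, loglik_point):
    """
    Estimate K(P, P_(-i)) for each observation i from posterior draws, via
    K_i = E[log f(y_i | theta)] + log E[1 / f(y_i | theta)]
    (expectations over the full posterior; CPO via the harmonic-mean identity).
    """
    divergences = []
    for yi in y:
        ll = np.array([loglik_point(yi, th) for th in theta_draws])
        divergences.append(ll.mean() + np.log(np.mean(np.exp(-ll))))
    return np.array(divergences)

# Illustrative use: normal mean model with known variance and stand-in "posterior" draws.
rng = np.random.default_rng(1)
y = np.array([0.1, -0.3, 0.2, 4.0])                 # the last point is a suspicious case
theta_draws = rng.normal(0.0, 0.2, 2000)            # stand-in for real MCMC output
loglik = lambda yi, th: -0.5 * np.log(2 * np.pi) - 0.5 * (yi - th) ** 2
print(kl_case_deletion(theta_draws, y, loglik).round(3))   # largest value flags y = 4.0
```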
APA, Harvard, Vancouver, ISO, and other styles
11

Lando, Tommaso. "Funzionali statistici nella classe delle equazioni di stima generalizzate." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/16889.

Full text
Abstract:
A study of majorization for the sake of measuring the "distance" or "divergence" between distributions, useful for estimation methods. "Modified" divergence measures are proposed for estimation with small samples. Some solutions to estimation problems in the Rasch model are introduced.
APA, Harvard, Vancouver, ISO, and other styles
12

Zhao, Ying. "Effective Authorship Attribution in Large Document Collections." RMIT University. Computer Science and Information Technology, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080730.162501.

Full text
Abstract:
Techniques that can effectively identify the authors of texts are of great importance in scenarios such as detecting plagiarism and identifying a source of information. A range of attribution approaches has been proposed in recent years, but none of these is particularly satisfactory; some of them are ad hoc and most have defects in terms of scalability, effectiveness, and computational cost. Good test collections are critical for the evaluation of authorship attribution (AA) techniques. However, there are no standard benchmarks available in this area; it is almost always the case that researchers have their own test collections. Furthermore, collections that have been explored in AA are usually small, and thus whether the existing approaches are reliable or scalable is unclear. We develop several AA collections that are substantially larger than those in the literature; machine learning methods are used to establish the value of using such corpora in AA. The results, also used as baseline results in this thesis, show that the developed text collections can be used as standard benchmarks and are able to clearly distinguish between different approaches. One of the major contributions is that we propose the use of the Kullback-Leibler divergence, a measure of how different two distributions are, to identify authors based on elements of writing style. The results show that our approach is at least as effective as, if not always better than, the best existing attribution methods (that is, support vector machines) for two-class AA, and is superior for multi-class AA. Moreover, our proposed method has much lower computational cost and is cheaper to train. Style markers are the key elements of style analysis. We explore several approaches to tokenising documents to extract style markers, examining which marker type works best. We also propose three systems that boost AA performance by combining evidence from various marker types, motivated by the observation that there is no one type of marker that can satisfy all AA scenarios. To address the scalability of AA, we propose the novel task of authorship search (AS), inspired by document search and intended for large document collections. Our results show that AS is reasonably effective in finding documents by a particular author, even within a collection consisting of half a million documents. Beyond search, we also propose an AS-based method to identify authorship. Our method is substantially more scalable than any method published in prior AA research, in terms of the collection size and the number of candidate authors; the discrimination is scaled up to several hundred authors.
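A minimal sketch of the general idea, not the thesis' method or its style markers: build a smoothed token-frequency profile per candidate author and attribute a test document to the author whose profile minimises the KL divergence from the document's distribution. The vocabulary, smoothing constant, and toy texts are assumptions of the example.

```python
import numpy as np
from collections import Counter

def distribution(tokens, vocab, alpha=0.1):
    """Smoothed relative frequencies of vocabulary items in a token list."""
    counts = Counter(tokens)
    freq = np.array([counts[w] + alpha for w in vocab], dtype=float)
    return freq / freq.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

author_texts = {
    "A": "the cat sat on the mat the cat slept".split(),
    "B": "stocks rallied while markets digested the earnings report".split(),
}
unknown = "the cat sat near the mat".split()

vocab = sorted(set(w for t in list(author_texts.values()) + [unknown] for w in t))
profiles = {a: distribution(t, vocab) for a, t in author_texts.items()}
doc = distribution(unknown, vocab)

best = min(profiles, key=lambda a: kl(doc, profiles[a]))
print(best)   # "A": the document's token distribution is closest to author A's profile
```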
APA, Harvard, Vancouver, ISO, and other styles
13

Johnson, Nicholas Alexander. "Delay estimation in computer networks." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4595.

Full text
Abstract:
Computer networks are becoming increasingly large and complex, more so with the recent penetration of the internet into all walks of life. It is essential to be able to monitor and analyse networks in a timely and efficient manner, to extract important metrics and measurements, and to do so in a way that does not unduly disturb or affect the performance of the network under test. Network tomography is one possible method to accomplish these aims. Drawing upon the principles of statistical inference, it is often possible to determine the statistical properties of either the links or the paths of the network, whichever is desired, by measuring at the most convenient points, thus reducing the effort required. In particular, bottleneck-link detection methods, in which estimates of the delay distributions on network links are inferred from measurements made at end-points on network paths, are examined as a means to determine which links of the network are experiencing the highest delay. Initially, two published methods, one based upon a single Gaussian distribution and the other based upon the method of moments, are examined by comparing their performance using three metrics: robustness to scaling, bottleneck detection accuracy, and computational complexity. Whilst there are many published algorithms, there is little literature in which said algorithms are objectively compared. In this thesis, two network topologies are considered, each with three configurations, in order to determine performance in six scenarios. Two new estimation methods are then introduced, both based on Gaussian mixture models, which are believed to offer an advantage over existing methods in certain scenarios. Computationally, a mixture model algorithm is much more complex than a simple parametric algorithm, but the flexibility in modelling an arbitrary distribution is vastly increased. Better model accuracy potentially leads to more accurate estimation and detection of the bottleneck. The concept of increasing flexibility is again considered by using a Pearson type-1 distribution as an alternative to the single Gaussian distribution. This increases the flexibility but with reduced complexity when compared with mixture model approaches, which necessitate the use of iterative approximation methods. A hybrid approach is also considered, where the method of moments is combined with the Pearson type-1 method in order to circumvent problems with the output stage of the former. This algorithm has a higher variance than the method of moments, but its output stage is more convenient for manipulation. Also considered is a new approach to detection algorithms which is not dependent on any a priori parameter selection and makes use of the Kullback-Leibler divergence. The results show that it accomplishes its aim but is not robust enough to replace the current algorithms. Delay estimation is then cast in a different role, as an integral part of an algorithm to correlate input and output streams in an anonymising network such as the onion router (TOR). TOR is used by users in an attempt to conceal network traffic from observation. Breaking the encryption protocols used is not possible without significant effort, but by correlating the unencrypted input and output streams from the TOR network it is possible to provide a degree of certainty about the ownership of traffic streams. The delay model is essential, as the network is treated as providing a pseudo-random delay to each packet; having an accurate model allows the algorithm to better correlate the streams.
APA, Harvard, Vancouver, ISO, and other styles
14

Ho, Fu-Hsuan. "Aspects algorithmiques du modèle continu à énergie aléatoire." Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30184.

Full text
Abstract:
Cette thèse explore les perspectives algorithmiques de la marche aléatoire branchante et du modèle continu d'énergie aléatoire (CREM). Nous nous intéressons notamment à la construction d'algorithmes en temps polynomial capables d'échantillonner la mesure de Gibbs du modèle avec une grande probabilité, et à identifier le régime de dureté, qui consiste en toute température inverse bêta telle que de tels algorithmes en temps polynomial n'existent pas. Dans le Chapitre 1, nous fournissons un aperçu historique des modèles et motivons les problèmes algorithmiques étudiés. Nous donnons également un aperçu des verres de spin à champ moyen qui motive la ligne de notre recherche. Dans le Chapitre 2, nous abordons le problème de l'échantillonnage de la mesure de Gibbs dans le contexte de la marche aléatoire branchante. Nous identifions une température inverse critique bêta_c, identique au point critique statique, où une transition de dureté se produit. Dans le régime sous-critique bêta < bêta_c, nous établissons qu'un algorithme d'échantillonnage récursif est capable d'échantillonner efficacement la mesure de Gibbs. Dans le régime supercritique bêta > bêta_c, nous montrons que nous ne pouvons pas trouver d'algorithme en temps polynomial qui appartienne à une certaine classe d'algorithmes. Dans le Chapitre 3, nous portons notre attention sur le même problème d'échantillonnage pour le modèle continu d'énergie aléatoire (CREM). Dans le cas où la fonction de covariance de ce modèle est concave, nous montrons que pour toute température inverse bêta < à l'infini, l'algorithme d'échantillonnage récursif considéré au Chapitre 2 est capable d'échantillonner efficacement la mesure de Gibbs. Pour le cas non concave, nous identifions un point critique bêta_G où une transition de dureté similaire à celle du Chapitre 2 se produit. Nous fournissons également une borne inférieure de l'énergie libre du CREM qui pourrait être d'un intérêt indépendant. Dans le Chapitre 4, nous étudions le moment négatif de la fonction de partition du CREM. Bien que cela ne soit pas directement lié au thème principal de la thèse, cela découle du cours de la recherche. Dans le Chapitre 5, nous donnons un aperçu de certaines orientations futures qui pourraient être intéressantes à étudier.
This thesis explores algorithmic perspectives on the branching random walk and the continuous random energy model (CREM). Namely, we are interested in constructing polynomial-time algorithms that can sample the model's Gibbs measure with high probability, and in identifying the hardness regime, which consists of every inverse temperature beta such that such polynomial-time algorithms do not exist. In Chapter 1, we provide a historical overview of the models and motivate the algorithmic problems under investigation. We also provide an overview of the mean-field spin glasses that motivate this line of research. In Chapter 2, we address the problem of sampling the Gibbs measure in the context of the branching random walk. We identify a critical inverse temperature beta_c, identical to the static critical point, at which a hardness transition occurs. In the subcritical regime beta < beta_c, we establish that a recursive sampling algorithm is able to sample the Gibbs measure efficiently. In the supercritical regime beta > beta_c, we show that no polynomial-time algorithm belonging to a certain class of algorithms can be found. In Chapter 3, we turn our attention to the same sampling problem for the continuous random energy model (CREM). For the case where the covariance function of this model is concave, we show that for any finite inverse temperature beta, the recursive sampling algorithm considered in Chapter 2 is able to sample the Gibbs measure efficiently. For the non-concave case, we identify a critical point beta_G at which a hardness transition similar to the one in Chapter 2 occurs. We also provide a lower bound on the CREM free energy that may be of independent interest. In Chapter 4, we study the negative moment of the CREM partition function. While this is not connected directly to the main theme of the thesis, it arose during the course of the research. In Chapter 5, we provide an outlook on some further directions that might be interesting to investigate.
APA, Harvard, Vancouver, ISO, and other styles
15

Chatoux, Hermine. "Prise en compte métrologique de la couleur dans un contexte de classification et d'indexation." Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2267/document.

Full text
Abstract:
Cette thèse aborde la question du traitement correct et complet de la couleur selon les contraintes métrologiques. Le manque d'approches adaptées a justifié la reformulation des principaux outils de traitement d'images que sont le gradient, la détection et la description de points d'intérêt. Les approches proposées sont génériques : indépendantes du nombre de canaux d'acquisition (de la couleur à l'hyper-spectral), de la plage spectrale considérée et prenant en compte les courbes de sensibilité spectrales du capteur ou de l'œil. Le full-vector gradient nait de cet objectif métrologique. La preuve de concept est effectuée sur des images couleurs, multi et hyper-spectrales. L'extension développée pour l'analyse de la déficience visuelle ouvre également de nombreuses perspectives intéressantes pour l'analyse du système visuel humain. Ce gradient est au cœur de la proposition d'un détecteur de points d'intérêt, lui aussi générique. Nous montrons la nécessité d'un choix mathématiquement valide de la distance entre attributs et l'importance de la cohérence de la paire attribut/distance. Une paire attribut/distance complète l'ensemble. Pour chaque développement, nous proposons des protocoles objectifs de validation liés à des générateurs d'images de synthèse explorant toute la complexité spatio-chromatique possible. Notre hypothèse est que la difficulté d'extraction du gradient/des points d'intérêts… est liée à la complexité de discrimination des distributions couleur dans la zone de traitement. Une confrontation aux approches courantes du domaine a été également mise en œuvre.
The objective of this PhD thesis is to study the correct and complete processing of colour under metrological constraints. The lack of suitable approaches justified reformulating the main image processing tools: the gradient, key point detection, and key point description. The proposed approaches are generic: independent of the number of acquisition channels (from colour to hyper-spectral) and of the spectral range considered, and taking the sensor's or the eye's sensitivity curves into account. The full-vector gradient is born from this metrological objective. A proof of concept was carried out on colour, multi- and hyper-spectral images. The extension developed for human vision deficiency opens interesting perspectives for the study of the human visual system. This gradient is at the centre of the proposed key point detector, which is also generic. We also showed the necessity of a mathematically valid choice of distance between features and revealed the importance of the feature/distance pair; the work is completed with such a pair: RC2O with a Kullback-Leibler divergence based on colour differences. For each development, we propose unbiased validation protocols linked to synthetic image generators exploring as much spatio-chromatic complexity as possible, our hypothesis being that the extraction difficulty comes from the discrimination complexity between colour distributions in the processing area. A comparison with the usual approaches of the field was also carried out.
APA, Harvard, Vancouver, ISO, and other styles
16

Blons, Estelle. "Dynamiques individuelles et collectives de la complexité de signaux physiologiques en situation de stress induit." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0152.

Full text
Abstract:
Les études récentes en santé humaine supposent un lien de causalité entre la complexité des systèmes de contrôle psychophysiologique et la complexité des biosignaux qu’ils émettent. Le travail mené dans le cadre de cette thèse illustre ce principe en s’appuyant sur une démarche interdisciplinaire, combinant physiologie, psychologie et traitement du signal. Il vise à étudier les dynamiques des signaux physiologiques émis par l’Homme, en réponse à un stress induit en situation individuelle ou collective. Le stress étant un processus multifactoriel qui dépend de la perception et de l’interprétation d’une situation donnée par un individu, l’étude des signaux physiologiques est combinée à l’évaluation de caractéristiques psychologiques contextuelles et dispositionnelles. En particulier, nous nous intéressons aux régulations cardiaques qui sont analysées à partir des séries temporelles définies par les durées successives des intervalles RR. Des approches statistiques temporelles, fréquentielles ou non-linéaires sont utilisées afin d’étudier les capacités d’adaptation des individus confrontés à différentes situations de tâches cognitives associées ou non à des facteurs stressants. Il s’agit d’extraire des signatures caractéristiques des régulations centrales et autonomes, au repos ou dans différentes situations expérimentales. Dans ce travail, un intérêt particulier est accordé à l’entropie multi-échelles afin d’évaluer la complexité des signaux, une complexité induite par les interconnexions existant entre structures corticales, sous-corticales et régulations autonomes cardiaques. Nous proposons également d’analyser les signaux collectés durant les différentes situations expérimentales, en comparant deux à deux leurs densités de probabilité à partir de la divergence de Kullback-Leibler, et en particulier d’une estimation de l'incrément asymptotique de la divergence de Kullback-Leibler. Les résultats obtenus mettent en évidence que l’étude des signaux cardiaques peut permettre d’appréhender l’état psychophysiologique d’un individu lorsqu’il est confronté à des situations de tâches cognitives et de stress. Des différences d’états apparaissent non seulement à l’échelle individuelle, mais également à l’échelle collective, lorsque l’individu n’est pas directement confronté aux stimuli stressants mais que le stress est de nature empathique. Enfin, deux applications sont réalisées. Nous montrons que la complexité des signaux cardiaques, altérée chez des personnes stressées au travail, peut être améliorée par un entraînement à la cohérence cardiaque. Nous appliquons également les méthodes de traitement du signal à l’étude de la régulation posturale. L’ensemble de nos résultats renforcent l’intérêt du monitoring de l’humain en matière de santé
Recent studies in human health assume a causal link between the complexity of psychophysiological control systems and the complexity of their resulting biosignals. This PhD illustrates this principle by relying on an interdisciplinary approach combining physiology, psychology, and signal processing. The dynamics of human physiological output signals are studied in response to induced stress in individual or collective situations. The objective is to extract individual signatures depicting central and autonomic regulations at rest or in different experimental situations. Since stress is a multifactorial process depending on the individual perception and interpretation of a situation, the study of physiological signals is combined with the evaluation of contextual and dispositional psychological characteristics. We focus our attention on cardiac regulation, which is analysed from the time series defined by the successive durations of the RR intervals. Statistical signal processing methods, whether temporal, frequency-based or non-linear, are used to study the adaptive capacities of individuals facing different situations of cognitive tasks associated or not with stressors. Particular interest is given to multiscale entropy to assess the complexity of signals, which makes it possible to account for the interconnections existing between cortical structures, subcortical structures, and autonomic cardiac regulation. The probability density functions of the cardiac signals recorded in the different experimental situations are compared pairwise using the Kullback-Leibler divergence, and in particular an estimate of the asymptotic increment of the Kullback-Leibler divergence. The results show that studying cardiac signals makes it possible to discriminate the psychophysiological state of an individual facing either cognitive tasks or stressful situations. Psychophysiological state differences emerge during stress not only at an individual level, but also at a collective one, in which the subject is not directly confronted with the stressful stimuli; the stress is therefore empathic. Two experimental applications are carried out from our results. First, we show that cardiac complexity, which is altered in people stressed at work, can be improved by cardiac coherence biofeedback training. Second, the signal processing methods are also applied to the study of postural regulation. Overall, our results strengthen the interest of human monitoring for health.
APA, Harvard, Vancouver, ISO, and other styles
17

Sibim, Alessandra Cristiane. "Estimação e diagnóstico na distribuição exponencial por partes em análise de sobrevivência com fração de cura." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-09062011-151222/.

Full text
Abstract:
O principal objetivo deste trabalho é desenvolver procedimentos inferências em uma perspectiva bayesiana para modelos de sobrevivência com (ou sem) fração de cura baseada na distribuição exponencial por partes. A metodologia bayesiana é baseada em métodos de Monte Carlo via Cadeias de Markov (MCMC). Para detectar observações influentes nos modelos considerados foi usado o método bayesiano de análise de influência caso a caso (Cho et al., 2009), baseados na divergência de Kullback-Leibler. Além disso, propomos o modelo destrutivo binomial negativo com fração de cura. O modelo proposto é mais geral que os modelos de sobrevivência com fração de cura, já que permitem estimar a probabilidade do número de causas que não foram eliminadas por um tratamento inicial
The main objective of this work is to develop inference procedures, from a Bayesian perspective, for survival models with (or without) a cure fraction based on the piecewise exponential distribution. The Bayesian methodology relies on Markov Chain Monte Carlo (MCMC) methods. To detect influential observations in the considered models, the Bayesian case-deletion influence diagnostic based on the Kullback-Leibler divergence (Cho et al., 2009) is used. Furthermore, we propose the destructive negative binomial cure rate model. The proposed model is more general than the usual survival models with a cure fraction, since it allows estimating the probability of the number of causes that were not eliminated by an initial treatment.
APA, Harvard, Vancouver, ISO, and other styles
18

Zegers, Pablo, B. Frieden, Carlos Alarcón, and Alexis Fuentes. "Information Theoretical Measures for Achieving Robust Learning Machines." MDPI AG, 2016. http://hdl.handle.net/10150/621411.

Full text
Abstract:
Information theoretical measures are used to design, from first principles, an objective function that can drive a learning machine process to a solution that is robust to perturbations in parameters. Full analytic derivations are given and tested with computational examples showing that indeed the procedure is successful. The final solution, implemented by a robust learning machine, expresses a balance between Shannon differential entropy and Fisher information. This is also surprising in being an analytical relation, given the purely numerical operations of the learning machine.
APA, Harvard, Vancouver, ISO, and other styles
19

Mohammad, Maruf. "Cellular diagnostic systems using hidden Markov models." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/29520.

Full text
Abstract:
Radio frequency system optimization and troubleshooting remains one of the most challenging aspects of working in a cellular network. To stay competitive, cellular providers continually monitor the performance of their networks and use this information to determine where to improve or expand services. As a result, operators are saddled with the task of wading through overwhelmingly large amounts of data in order to trouble-shoot system problems. Part of the difficulty of this task is that for many complicated problems such as hand-off failure, clues about the cause of the failure are hidden deep within the statistics of underlying dynamic physical phenomena like fading, shadowing, and interference. In this research we propose that Hidden Markov Models (HMMs) be used as a method to infer signature statistics about the nature and sources of faults in a cellular system by fitting models to various time-series data measured throughout the network. By including HMMs in the network management tool, a provider can explore the statistical relationships between channel dynamics endemic to a cell and its resulting performance. This research effort also includes a new distance measure between a pair of HMMs that approximates the Kullback-Leibler divergence (KLD). Since there is no closed-form solution to calculate the KLD between the HMMs, the proposed analytical expression is very useful in classification and identification problems. A novel HMM based position location technique has been introduced that may be very useful for applications involving cognitive radios.
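The analytical expression proposed in the thesis is not reproduced here; as context, the snippet below shows the standard Monte Carlo approximation of the KL divergence rate between two discrete-observation HMMs, namely the per-symbol log-likelihood ratio over a long sequence sampled from the first model. The two example HMMs are illustrative.

```python
import numpy as np

def sample_hmm(pi, A, B, T, rng):
    """Sample an observation sequence of length T from a discrete HMM (pi, A, B)."""
    obs = []
    s = rng.choice(len(pi), p=pi)
    for _ in range(T):
        obs.append(rng.choice(B.shape[1], p=B[s]))   # emit from the current state
        s = rng.choice(len(pi), p=A[s])               # then transition
    return np.array(obs)

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

def kld_rate(hmm1, hmm2, T=5000, seed=0):
    """Monte Carlo estimate of the KL divergence rate D(hmm1 || hmm2)."""
    rng = np.random.default_rng(seed)
    obs = sample_hmm(*hmm1, T, rng)
    return (log_likelihood(obs, *hmm1) - log_likelihood(obs, *hmm2)) / T

hmm1 = (np.array([0.5, 0.5]), np.array([[0.9, 0.1], [0.1, 0.9]]), np.array([[0.8, 0.2], [0.2, 0.8]]))
hmm2 = (np.array([0.5, 0.5]), np.array([[0.5, 0.5], [0.5, 0.5]]), np.array([[0.6, 0.4], [0.4, 0.6]]))
print(kld_rate(hmm1, hmm2))   # positive; a value of 0 would mean the models are indistinguishable
```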
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Mohammad, Maruf H. "Cellular diagnostic systems using hidden Markov models." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29520.

Full text
Abstract:
Radio frequency system optimization and troubleshooting remains one of the most challenging aspects of working in a cellular network. To stay competitive, cellular providers continually monitor the performance of their networks and use this information to determine where to improve or expand services. As a result, operators are saddled with the task of wading through overwhelmingly large amounts of data in order to trouble-shoot system problems. Part of the difficulty of this task is that for many complicated problems such as hand-off failure, clues about the cause of the failure are hidden deep within the statistics of underlying dynamic physical phenomena like fading, shadowing, and interference. In this research we propose that Hidden Markov Models (HMMs) be used as a method to infer signature statistics about the nature and sources of faults in a cellular system by fitting models to various time-series data measured throughout the network. By including HMMs in the network management tool, a provider can explore the statistical relationships between channel dynamics endemic to a cell and its resulting performance. This research effort also includes a new distance measure between a pair of HMMs that approximates the Kullback-Leibler divergence (KLD). Since there is no closed-form solution to calculate the KLD between the HMMs, the proposed analytical expression is very useful in classification and identification problems. A novel HMM based position location technique has been introduced that may be very useful for applications involving cognitive radios.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Al, Hage Joelle. "Fusion de données tolérante aux défaillances : application à la surveillance de l’intégrité d’un système de localisation." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10074/document.

Full text
Abstract:
L'intérêt des recherches dans le domaine de la fusion de données multi-capteurs est en plein essor en raison de la diversité de ses secteurs d'applications. Plus particulièrement, dans le domaine de la robotique et de la localisation, l'exploitation des différentes informations fournies par les capteurs constitue une étape primordiale afin d'assurer une estimation fiable de la position. Dans ce contexte de fusion de données multi-capteurs, nous nous attachons à traiter le diagnostic, menant à l'identification de la cause d'une défaillance, et la tolérance de l'approche proposée aux défauts de capteurs, peu abordés dans la littérature.Nous avons fait le choix de développer une approche basée sur un formalisme purement informationnel : filtre informationnel d'une part, et outils de la théorie de l'information d'autre part. Des résidus basés sur la divergence de Kullback-Leibler sont développés. Via des méthodes optimisées de seuillage, ces résidus conduisent à la détection et à l'exclusion de ces défauts capteurs. La théorie proposée est éprouvée sur deux applications de localisation. La première application concerne la localisation collaborative, tolérante aux défauts d'un système multi-robots. La seconde application traite de la localisation en milieu ouvert utilisant un couplage serré GNSS/odométrie tolérant aux défauts
Interest in research in the field of multi-sensor data fusion is growing because of its various application sectors. Particularly in the field of robotics and localization, the use of information from different sensors is a vital step in ensuring a reliable position estimate. In this context of multi-sensor data fusion, we consider diagnosis, leading to the identification of the cause of a failure, and the tolerance of the proposed approach to sensor faults, an aspect addressed in only limited work in the literature. We chose to develop an approach based on a purely informational formalism: the information filter on the one hand, and tools of information theory on the other. Residuals based on the Kullback-Leibler divergence are developed. Through optimized thresholding methods, these residuals allow faulty sensors to be detected and excluded. The proposed theory is tested on two localization applications. The first is the fault-tolerant collaborative localization of a multi-robot system. The second concerns fault-tolerant localization in outdoor environments using a tightly coupled GNSS/odometry system.
APA, Harvard, Vancouver, ISO, and other styles
22

Mamouni, Nezha. "Utilisation des Copules en Séparation Aveugle de Sources Indépendantes/Dépendantes." Thesis, Reims, 2020. http://www.theses.fr/2020REIMS007.

Full text
Abstract:
Le problème de la séparation aveugle de sources (SAS) consiste à retrouver des signaux non observés à partir de mélanges inconnus de ceux-ci, où on ne dispose pas, ou très peu, d'informations sur les signaux source et/ou le système de mélange. Dans cette thèse, nous présentons des algorithmes pour séparer des mélanges linéaires instantanés et convolutifs de sources avec composantes indépendantes/dépendantes. Le principe des algorithmes proposés est de minimiser des critères de séparation, bien définis, basés sur les densités de copules, en utilisant des algorithmes de type descente du gradient.Nous montrons que les méthodes proposées peuvent séparer des mélanges de sources avec composantes dépendantes, où le modèle de copule est inconnu
The problem of Blind Source Separation (BSS) consists in retrieving unobserved signals from unknown mixtures of them, when there is no, or very limited, information about the source signals and/or the mixing system. In this thesis, we present algorithms to separate instantaneous and convolutive linear mixtures of sources with independent/dependent components. The principle of these algorithms is to minimize appropriate, well-defined separation criteria based on copula densities, using gradient descent type algorithms. We show that the proposed methods can separate mixtures of sources with possibly dependent components even when the copula model is unknown.
APA, Harvard, Vancouver, ISO, and other styles
23

Krishnan, Sharenya. "Text-Based Information Retrieval Using Relevance Feedback." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-53603.

Full text
Abstract:
Europeana, a freely accessible digital library intended to make Europe's cultural and scientific heritage available to the public, was founded by the European Commission in 2008. The goal was to deliver semantically enriched digital content with multilingual access. Even though the amount of content grew, the portal increasingly faced the problem of retrieving information held in unstructured form. To complement the Europeana portal services, ASSETS (Advanced Search Service and Enhanced Technological Solutions) was introduced, with services that sought to improve the usability and accessibility of Europeana. My contribution is to study different text-based information retrieval models and their relevance feedback techniques, and to implement one simple model. The thesis gives a detailed overview of the information retrieval process along with the implementation of the chosen relevance feedback strategy, which generates automatic query expansion. Finally, the thesis concludes with an analysis of the results obtained using relevance feedback, a discussion of the implemented model, and an assessment of its future use, both as a continuation of this work and within ASSETS.
APA, Harvard, Vancouver, ISO, and other styles
24

Dehideniya, Mahasen Bandara. "Optimal Bayesian experimental designs for complex models." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/131625/1/Dissanayake%20Mudiyanselage%20Mahasen_Dehideniya_Thesis.pdf.

Full text
Abstract:
The complexity of statistical models that are used to describe biological processes poses significant computational challenges in design of experiments. To address such challenges, in this thesis, new methods are developed in optimisation and approximate inference, and are applied in real-world experiments. The proposed methods enable practitioners to gain greater insight and understanding into the biological processes they are studying, and this is demonstrated by designing experiments to understand important biological processes in epidemiology and ecology such as the spread of infectious diseases and interactions between predator and prey in environmental systems.
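As a rough illustration of the Bayesian design criterion such work typically optimises, the sketch below estimates, by Monte Carlo, the expected prior-to-posterior Kullback-Leibler divergence (mutual information) of a toy one-parameter model; the model, the grid prior and the candidate designs are all assumptions made for this example, not taken from the thesis.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-parameter model: y = exp(-theta * d) + noise, observed at time d.
theta_grid = np.linspace(0.1, 2.0, 200)          # discretized prior support
prior = np.ones_like(theta_grid) / theta_grid.size
sigma = 0.05

def expected_kl_utility(d, n_mc=2000):
    """Monte Carlo estimate of the expected prior-to-posterior KL divergence
    (mutual information) for a candidate design d."""
    total = 0.0
    for _ in range(n_mc):
        theta = rng.choice(theta_grid, p=prior)
        y = np.exp(-theta * d) + sigma * rng.standard_normal()
        like = np.exp(-0.5 * ((y - np.exp(-theta_grid * d)) / sigma) ** 2)
        post = like * prior
        post /= post.sum()
        nz = post > 0
        total += np.sum(post[nz] * np.log(post[nz] / prior[nz]))
    return total / n_mc

designs = [0.2, 1.0, 3.0]
print({d: round(expected_kl_utility(d), 3) for d in designs})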
APA, Harvard, Vancouver, ISO, and other styles
25

Sayyareh, Abdolreza. "Test of fit and model selection based on likelihood function." Phd thesis, AgroParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003400.

Full text
Abstract:
Our work concerns inference about Akaike's AIC (1973), an instance of penalized likelihood, which, as an estimator of the Kullback-Leibler divergence, is intimately related to the maximum likelihood estimator. Within inferential statistics, in the context of hypothesis testing, the Kullback-Leibler divergence and the Neyman-Pearson lemma are two fundamental concepts. Both are about likelihood ratios: the Neyman-Pearson lemma concerns the error rate of the likelihood ratio test, while the Kullback-Leibler divergence is the expectation of the log-likelihood ratio.
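For reference, the connection alluded to above can be stated with the standard definitions (these formulas are textbook material, not quoted from the thesis):

\[
  \mathrm{KL}(g \,\|\, f_\theta)
  = \int g(y)\,\log\frac{g(y)}{f_\theta(y)}\,dy
  = \mathbb{E}_g[\log g(Y)] - \mathbb{E}_g[\log f_\theta(Y)],
\]
so that minimizing the divergence over \(\theta\) amounts to maximizing the expected log-likelihood \(\mathbb{E}_g[\log f_\theta(Y)]\), and
\[
  \mathrm{AIC} = -2\,\log f_{\hat\theta}(y) + 2k
\]
(with \(\hat\theta\) the maximum likelihood estimator and \(k\) the number of free parameters) is an approximately unbiased estimator of \(-2\,\mathbb{E}_g[\log f_{\hat\theta}(Y)]\), up to an additive constant that does not depend on the model.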
APA, Harvard, Vancouver, ISO, and other styles
26

Jesus, Sandra Rêgo de. "Análise bayesiana objetiva para as distribuições normal generalizada e lognormal generalizada." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/4495.

Full text
Abstract:
The generalized normal (GN) and generalized lognormal (logGN) distributions are flexible enough to accommodate features of the data that are not captured by traditional distributions such as the normal and the lognormal, respectively. These distributions are regarded as tools for reducing the influence of outliers and obtaining robust estimates. However, computational problems have always been the major obstacle to their effective use. This work proposes a Bayesian reference analysis methodology to estimate the parameters of the GN and logGN models. The reference prior for a possible ordering of the model parameters is obtained, and it is shown that this prior leads to a proper posterior distribution for all the proposed models. Markov chain Monte Carlo (MCMC) methods are developed for inference purposes. To detect possibly influential observations in the models considered, a Bayesian case-by-case influence analysis based on the Kullback-Leibler divergence is used. In addition, a scale mixture of uniform representation of the GN and logGN distributions is exploited as an alternative approach to allow the development of efficient Gibbs sampling algorithms. Simulation studies were performed to analyze the frequentist properties of the estimation procedures, and applications to real data demonstrate the use of the proposed models.
As distribuições normal generalizada (NG) e lognormal generalizada (logNG) são flexíveis por acomodarem características presentes nos dados que não são capturadas por distribuições tradicionais, como a normal e a lognormal, respectivamente. Essas distribuições são consideradas ferramentas para reduzir as observações aberrantes e obter estimativas robustas. Entretanto o maior obstáculo para a utilização eficiente dessas distribuições tem sido os problemas computacionais. Este trabalho propõe a metodologia da análise de referência Bayesiana para estimar os parâmetros dos modelos NG e logNG. A função a priori de referência para uma possível ordem dos parâmetros do modelo é obtida. Mostra-se que a função a priori de referência conduz a uma distribuição a posteriori própria, em todos os modelos propostos. Para fins de inferência, é considerado o desenvolvimento de métodos Monte Carlo em Cadeias de Markov (MCMC). Para detectar possíveis observações influentes nos modelos considerados, é utilizado o método Bayesiano de análise de influência caso a caso, baseado na divergência de Kullback-Leibler. Além disso, uma representação de mistura de escala uniforme para as distribuições NG e logNG é utilizada, como um método alternativo, para permitir o desenvolvimento de algoritmos de amostrador de Gibbs. Estudos de simulação foram desenvolvidos para analisar as propriedades frequentistas dos processos de estimação. Aplicações a conjuntos de dados reais mostraram a aplicabilidade dos modelos propostos.
APA, Harvard, Vancouver, ISO, and other styles
27

Silveti, Falls Antonio. "First-order noneuclidean splitting methods for large-scale optimization : deterministic and stochastic algorithms." Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC204.

Full text
Abstract:
Dans ce travail, nous développons et examinons deux nouveaux algorithmes d'éclatement du premier ordre pour résoudre des problèmes d'optimisation composites à grande échelle dans des espaces à dimensions infinies. Ces problèmes sont au coeur de nombres de domaines scientifiques et d'ingénierie, en particulier la science des données et l'imagerie. Notre travail est axé sur l'assouplissement des hypothèses de régularité de Lipschitz généralement requises par les algorithmes de fractionnement du premier ordre en remplaçant l'énergie euclidienne par une divergence de Bregman. Ces développements permettent de résoudre des problèmes ayant une géométrie plus exotique que celle du cadre euclidien habituel. Un des algorithmes développés est l'hybridation de l'algorithme de gradient conditionnel, utilisant un oracle de minimisation linéaire à chaque itération, avec méthode du Lagrangien augmenté, permettant ainsi la prise en compte de contraintes affines. L'autre algorithme est un schéma d'éclatement primal-dual incorporant les divergences de Bregman pour le calcul des opérateurs proximaux associés. Pour ces deux algorithmes, nous montrons la convergence des valeurs Lagrangiennes, la convergence faible des itérés vers les solutions ainsi que les taux de convergence. En plus de ces nouveaux algorithmes déterministes, nous introduisons et étudions également leurs extensions stochastiques au travers d'un point de vue d'analyse de stablité aux perturbations. Nos résultats dans cette partie comprennent des résultats de convergence presque sûre pour les mêmes quantités que dans le cadre déterministe, avec des taux de convergence également. Enfin, nous abordons de nouveaux problèmes qui ne sont accessibles qu'à travers les hypothèses relâchées que nos algorithmes permettent. Nous démontrons l'efficacité numérique et illustrons nos résultats théoriques sur des problèmes comme la complétion de matrice parcimonieuse de rang faible, les problèmes inverses sur le simplexe, ou encore les problèmes inverses impliquant la distance de Wasserstein régularisée
In this work we develop and examine two novel first-order splitting algorithms for solving large-scale composite optimization problems in infinite-dimensional spaces. Such problems are ubiquitous in many areas of science and engineering, particularly in data science and imaging. Our work focuses on relaxing the Lipschitz-smoothness assumptions generally required by first-order splitting algorithms by replacing the Euclidean energy with a Bregman divergence. These developments allow one to solve problems whose geometry is more exotic than that of the usual Euclidean setting. One algorithm is a hybridization of the conditional gradient algorithm, which uses a linear minimization oracle at each iteration, with an augmented Lagrangian method, allowing for affine constraints. The other is a primal-dual splitting algorithm incorporating Bregman divergences for computing the associated proximal operators. For both algorithms, our analysis shows convergence of the Lagrangian values, subsequential weak convergence of the iterates to solutions, and rates of convergence. In addition to these deterministic algorithms, we also introduce and study their stochastic extensions from a perturbation perspective. Our results in this part include almost sure convergence of the same quantities as in the deterministic setting, again with rates. Finally, we tackle new problems that are only accessible under the relaxed assumptions our algorithms allow. We demonstrate numerical efficiency and verify our theoretical results on problems such as low-rank sparse matrix completion, inverse problems on the simplex, and entropically regularized Wasserstein inverse problems
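A small sketch of the Bregman machinery mentioned above: with the negative-entropy kernel on the simplex, the Bregman proximal step of a linear term has a closed-form multiplicative (mirror-descent) update. This is a generic illustration on assumed toy data, not one of the algorithms of the thesis.

import numpy as np

def entropic_prox(x, grad, step):
    """Bregman proximal step with the negative-entropy kernel on the simplex:
    argmin_u <grad, u> + (1/step) * KL(u || x), for x in the simplex.
    The closed-form solution is a multiplicative (mirror-descent) update."""
    u = x * np.exp(-step * grad)
    return u / u.sum()

# Toy usage: a few mirror-descent steps minimizing f(u) = 0.5 * ||u - target||^2 over the simplex.
target = np.array([0.7, 0.2, 0.1])
u = np.full(3, 1.0 / 3.0)
for _ in range(200):
    u = entropic_prox(u, grad=u - target, step=0.5)
print(np.round(u, 3))  # approaches `target`, which already lies in the simplex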
APA, Harvard, Vancouver, ISO, and other styles
28

Daher, Mohamad. "Fusion multi-capteurs tolérante aux fautes pour un niveau d'intégrité élevé du suivi de la personne." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10136/document.

Full text
Abstract:
Environ un tiers des personnes âgées vivant à domicile souffrent d'une chute chaque année. Les chutes les plus graves se produisent lorsque la personne est seule et incapable de se lever, ce qui entraîne un grand nombre de personnes âgées admis au service de gériatrique et un taux de mortalité malheureusement élevé. Le système PAL (Personally Assisted Living) apparaît comme une des solutions de ce problème. Ce système d’intelligence ambiante permet aux personnes âgées de vivre dans un environnement intelligent et pro-actif. Le travail de cette thèse s’inscrit dans le cadre de suivi des personnes âgées avec un maintien à domicile, la reconnaissance quotidienne des activités et le système automatique de détection des chutes à l'aide d'un ensemble de capteurs non intrusifs qui accorde l'intimité et le confort aux personnes âgées. En outre, une méthode de fusion tolérante aux fautes est proposée en utilisant un formalisme purement informationnel: filtre informationnel d’une part, et outils de la théorie de l’information d’autre part. Des résidus basés sur la divergence de Kullback-Leibler sont utilisés. Via un seuillage adéquat, ces résidus conduisent à la détection et à l’exclusion des défauts capteurs. Les algorithmes proposés ont été validés avec plusieurs scénarii différents contenant les différentes activités: marcher, s’asseoir, debout, se coucher et tomber. Les performances des méthodes développées ont montré une sensibilité supérieure à 94% pour la détection de chutes de personnes et plus de 92% pour la discrimination entre les différentes ADL (Activités de la vie quotidienne)
About one third of home-dwelling older people suffer a fall each year. The most serious falls occur when the person is alone and unable to get up, resulting in a large number of elders being admitted to geriatric services and in an unfortunately high morbidity and mortality rate. The PAL (Personally Assisted Living) system appears to be one of the solutions to this problem. This ambient intelligence system allows elderly people to live in an intelligent and pro-active environment. This thesis describes work on in-home elder tracking, recognition of activities of daily living, and automatic fall detection using a set of non-intrusive sensors that preserves the privacy and comfort of the elders. In addition, a fault-tolerant fusion method is proposed using a purely informational formalism: an information filter on the one hand, and information theory tools on the other. Residuals based on the Kullback-Leibler divergence are used; with appropriate thresholding, these residuals lead to the detection and exclusion of sensor faults. The proposed algorithms were validated on many different scenarios containing the activities of walking, sitting, standing, lying down, and falling. The developed methods showed a sensitivity of more than 94% for fall detection and more than 92% for discrimination between the different ADLs (activities of daily living)
APA, Harvard, Vancouver, ISO, and other styles
29

Santos, Tiago Souza dos. "Segmentação Fuzzy de Texturas e Vídeos." Universidade Federal do Rio Grande do Norte, 2012. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18063.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The segmentation of an image aims to subdivide it into constituent regions or objects that have some relevant semantic content. This subdivision can also be applied to videos; in that case, however, the objects appear across the various frames that compose the video. The task of segmenting an image becomes more complex when it is composed of objects defined by textural features, where color information alone is not a good descriptor of the image. Fuzzy segmentation is a region-growing segmentation algorithm that uses affinity functions to assign to each element in an image a grade of membership (between 0 and 1) for each object. This work presents a modification of the fuzzy segmentation algorithm aimed at improving its temporal and spatial complexity. The algorithm was adapted to segment color videos, treating them as 3D volumes. To segment the videos, either a conventional color model or a hybrid model obtained by a method for choosing the best channels was used. The fuzzy segmentation algorithm was also applied to texture segmentation by using affinity functions adapted to the texture of each object. Two types of affinity functions were used, one defined using the normal (Gaussian) probability distribution and the other using the Skew divergence. The latter, a variation of the Kullback-Leibler divergence, is a measure of the difference between two probability distributions. Finally, the algorithm was tested on several videos and on texture mosaic images composed of images from the Brodatz album, among others
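As an illustration of the dissimilarity named above, here is a minimal sketch of the Skew divergence between two normalized histograms; the histograms are hypothetical stand-ins for texture descriptors, not data from the thesis.

import numpy as np

def kl(p, q):
    """Discrete KL divergence D(p || q); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def skew_divergence(p, q, alpha=0.99):
    """Skew divergence: KL of p from q smoothed with p itself,
    D(p || alpha*q + (1-alpha)*p). It stays finite even when q has zeros."""
    return kl(p, alpha * q + (1.0 - alpha) * p)

# Hypothetical texture histograms (already normalized).
h1 = np.array([0.50, 0.30, 0.20, 0.00])
h2 = np.array([0.10, 0.20, 0.30, 0.40])
print(skew_divergence(h1, h2), skew_divergence(h2, h1))  # asymmetric, both finite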
A segmentação de uma imagem tem como objetivo subdividi-la em partes ou objetos constituintes que tenham algum conteúdo semântico relevante. Esta subdivisão pode também ser aplicada a um vídeo, porém, neste, os objetos estão presentes nos diversos quadros que compõem o vídeo. A tarefa de segmentar uma imagem torna-se mais complexa quando estas são compostas por objetos que contenham características texturais, com pouca ou nenhuma informação de cor. A segmentação difusa, do inglês fuzzy, é uma técnica de segmentação por crescimento de regiões que determina para cada elemento da imagem um grau de pertinência (entre zero e um) indicando a confiança de que esse elemento pertença a um determinado objeto ou região existente na imagem, fazendo-se uso de funções de afinidade para obter esses valores de pertinência. Neste trabalho é apresentada uma modificação do algoritmo de segmentação fuzzy proposto por Carvalho [Carvalho et al. 2005], a fim de se obter melhorias na complexidade temporal e espacial. O algoritmo foi adaptado para segmentar vídeos coloridos tratando-os como volumes 3D. Para segmentar os vídeos, foram utilizadas informações provenientes de um modelo de cor convencional ou de um modelo híbrido obtido através de uma metodologia para a escolha dos melhores canais para realizar a segmentação. O algoritmo de segmentação fuzzy foi aplicado também na segmentação de texturas, fazendo-se uso de funções de afinidade adaptativas às texturas de cada objeto. Dois tipos de funções de afinidade foram utilizados, uma utilizando a distribuição normal de probabilidade, ou Gaussiana, e outra utilizando a divergência Skew. Esta última, uma variação da divergência de Kullback-Leibler, é uma medida da divergência entre duas distribuições de probabilidade. Por fim, o algoritmo foi testado com alguns vídeos e também com imagens de mosaicos de texturas criadas a partir do álbum de Brodatz e outros
APA, Harvard, Vancouver, ISO, and other styles
30

Xie, Xinwen. "Quality strategy and method for transmission : application to image." Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2251/document.

Full text
Abstract:
Cette thèse porte sur l’étude des stratégies d’amélioration de la qualité d’image dans les systèmes de communication sans fil et sur la conception de nouvelles métriques d’évaluation de la qualité. Tout d'abord, une nouvelle métrique de qualité d'image à référence réduite, basée sur un modèle statistique dans le domaine des ondelettes complexes, a été proposée. Les informations d’amplitude et de phase relatives des coefficients issues de la transformée en ondelettes complexes sont modélisées à l'aide de fonctions de densité de probabilité. Les paramètres associés à ces fonctions constituent la référence réduite qui sera transmise au récepteur. Ensuite, une approche basée sur les réseaux de neurones à régression généralisée est exploitée pour construire la relation de cartographie entre les caractéristiques de la référence réduite et le score objectif.Deuxièmement, avec la nouvelle métrique, une nouvelle stratégie de décodage est proposée pour la transmission d’image sur un canal de transmission sans fil réaliste. Ainsi, la qualité d’expérience (QoE) est améliorée tout en garantissant une bonne qualité de service (QoS). Pour cela, une nouvelle base d’images a été construite et des tests d’évaluation subjective de la qualité de ces images ont été effectués pour collecter les préférences visuelles des personnes lorsqu’elles sélectionnent les images avec différentes configurations de décodage. Un classificateur basé sur les algorithmes SVM et des k plus proches voisins sont utilisés pour la sélection automatique de la meilleure configuration de décodage.Enfin, une amélioration de la métrique a été proposée permettant de mieux prendre en compte les spécificités de la distorsion et la préférence des utilisateurs. Pour cela, nous avons combiné les caractéristiques globales et locales de l’image conduisant ainsi à une amélioration de la stratégie de décodage.Les résultats expérimentaux valident l'efficacité des métriques de qualité d'image et des stratégies de transmission d’images proposées
This thesis focuses on the study of image quality strategies in wireless communication systems and on the design of new quality evaluation metrics. Firstly, a new reduced-reference image quality metric, based on a statistical model in the complex wavelet domain, is proposed. The magnitude and relative phase information of the dual-tree complex wavelet transform coefficients are modelled using probability density functions, and the associated parameters serve as the reduced-reference features transmitted to the receiver. A generalized regression neural network is then used to construct the mapping between the reduced-reference features and the objective score. Secondly, with the new metric, a new decoding strategy is proposed for a realistic wireless transmission system, improving the quality of experience (QoE) while ensuring the quality of service (QoS). For this, a new database was constructed and subjective quality evaluation tests were carried out to collect people's visual preferences when selecting images with different decoding configurations; a classifier based on support vector machines or K-nearest neighbours is then used to automatically select the best decoding configuration. Finally, an improved metric is proposed that accounts for the specific properties of the distortion and for users' preferences by combining global and local image features, leading to a further improvement of the decoding strategy. The experimental results validate the effectiveness of the proposed image quality metrics and image transmission strategies
APA, Harvard, Vancouver, ISO, and other styles
31

Martín, Fernández Josep Antoni. "Medidas de diferencia y clasificación automática no paramétrica de datos composicionales." Doctoral thesis, Universitat Politècnica de Catalunya, 2001. http://hdl.handle.net/10803/6704.

Full text
Abstract:
Es muy frecuente encontrar datos de tipo composicional en disciplinas tan dispares como son, entre otras, las ciencias de la tierra, la medicina, y la economía. También es frecuente en estos ámbitos el uso de técnicas de clasificación no paramétrica para la detección de agrupaciones naturales en los datos. Sin embargo, una búsqueda bibliográfica bastante exhaustiva y la presentación de resultados preliminares sobre el tema en congresos de ámbito internacional han permitido constatar la inexistencia de un cuerpo teórico y metodológico apropiado que permita desarrollar pautas y recomendaciones a seguir en el momento de realizar una clasificación no paramétrica de datos composicionales. Por estos motivos se ha elegido como tema de tesis la adaptación y desarrollo de métodos de agrupación adecuados a datos de naturaleza composicional, es decir, datos tales que el valor de cada una de sus componentes expresa una proporción respecto de un total. El título de la misma, "Medidas de diferencia y clasificación automática no paramétrica de datos composicionales", recoge no sólo este propósito, sino que añade la expresión "medidas de diferencia" con el propósito de reflejar el peso específico importante que tiene el estudio de este tipo de medida en el desarrollo del trabajo. La expresión "no paramétrica'' se refiere a que en la misma no se considerarán técnicas de clasificación que presuponen la existencia de un modelo de distribución de probabilidad para las observaciones objeto de la agrupación.

La memoria de la tesis se inicia con un capítulo introductorio donde se presentan los elementos básicos de las técnicas de clasificación automática no paramétrica. Se pone especial énfasis en aquellos elementos susceptibles de ser adaptados para su aplicación en clasificaciones de datos composicionales. En el segundo capítulo se aborda el análisis de los conceptos más importantes en torno a los datos composicionales. En este capítulo, los esfuerzos se han concentrado principalmente en estudiar las medidas de diferencia entre datos composicionales junto con las medidas de tendencia central y de dispersión. Con ello se dispone de las herramientas necesarias para proceder al desarrollo de una metodología apropiada para la clasificación no paramétrica de datos composicionales, consistente en incorporar los elementos anteriores a las técnicas habituales y adaptarlas en la medida de lo necesario. El tercer capítulo se dedica exclusivamente a proponer nuevas medidas de diferencia entre datos composicionales basadas en las medidas de divergencia entre distribuciones de probabilidad. En el cuarto capítulo se incorporan las peculiaridades de los datos composicionales a las técnicas de clasificación y se exponen las pautas a seguir en el uso práctico de estas técnicas. El capítulo se completa con la aplicación de la metodología expuesta a un caso práctico. En el quinto capítulo de esta tesis se aborda el denominado problema de los ceros. Se analizan los inconvenientes de los métodos usuales de substitución y se propone una nueva fórmula de substitución de los ceros por redondeo. El capítulo finaliza con el estudio de un caso práctico. En el epílogo de esta memoria se presentan las conclusiones del trabajo de investigación y se indican la líneas futuras de trabajo. En los apéndices finales de esta memoria se recogen los conjuntos de datos utilizados en los casos prácticos que se han desarrollado en la presente tesis. Esta memoria se completa con la lista de las referencias bibliográficas más relevantes que se han consultado para llevar a cabo este trabajo de investigación.
On March 23, 2001, Josep Antoni Martín-Fernández from the Dept. of Computer Sciences and Applied Mathematics of the University of Girona (Catalonia, Spain) presented his PhD thesis, entitled "Measures of difference and non-parametric cluster analysis for compositional data", at the Technical University of Barcelona. A short summary follows:

Compositional data are by definition proportions of some whole. Thus, their natural sample space is the open simplex and interest lies in the relative behaviour of the components. Basic operations defined on the simplex induce a vector space structure, which justifies the development of its algebraic-geometric structure: scalar product, norm, and distance. At the same time, hierarchical methods of classification require establishing in advance some or all of the following measures: difference, central tendency and dispersion, in accordance with the nature of the data. J. A. Martín-Fernández studies the requirements for these measures when the data are compositional in type and presents specific measures to be used with the most usual non-parametric methods of cluster analysis. As a part of his thesis he also introduced the centering operation, which has been shown to be a powerful tool to visualize compositional data sets. Furthermore, he defines a new dissimilarity based on measures of divergence between multinomial probability distributions, which is compatible with the nature of compositional data. Finally, J. A. Martín-Fernández presents in his thesis a new method to attack the "Achilles heel" of any statistical analysis of compositional data: the presence of zero values, based on a multiplicative approach which respects the essential properties of this type of data.
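To illustrate the kind of measures discussed in this summary, here is a small sketch of the Aitchison distance between compositions together with a simple multiplicative zero-replacement step; the exact formulas proposed in the thesis may differ, so this should be read as a generic example with invented numbers.

import numpy as np

def aitchison_distance(x, y):
    """Aitchison distance between two compositions: the Euclidean distance
    between their centred log-ratio (clr) transforms."""
    clr = lambda z: np.log(z) - np.log(z).mean()
    return float(np.linalg.norm(clr(np.asarray(x, float)) - clr(np.asarray(y, float))))

def multiplicative_zero_replacement(x, delta=1e-3):
    """Replace zeros by a small value delta and shrink the non-zero parts
    multiplicatively so the composition still sums to one."""
    x = np.asarray(x, float)
    zeros = x == 0
    return np.where(zeros, delta, x * (1.0 - delta * zeros.sum()))

a = np.array([0.6, 0.3, 0.1])
b = multiplicative_zero_replacement(np.array([0.7, 0.3, 0.0]))
print(b, aitchison_distance(a, b))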
APA, Harvard, Vancouver, ISO, and other styles
32

Abci, Boussad. "Approche informationnelle pour la navigation autonome tolérante aux défauts : application aux systèmes robotiques mobiles." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I073.

Full text
Abstract:
La navigation autonome des systèmes robotiques mobiles a suscité un grand intérêt dans la communauté scientifique ces dernières années. Cela est principalement dû à la diversité de ses secteurs d’applications et les différents challenges qu'elle représente. En raison de l'absence d'une intervention humaine, la navigation autonome doit être sûre, fiable et précise. Néanmoins, elle peut être sujet à différentes dégradations qui peuvent compromettre son objectif. En effet, les perturbations externes, tout comme les défauts capteurs et actionneurs, affectent les différents aspects de la navigation autonome que sont la localisation, la planification et le suivi de trajectoire. C'est pourquoi nous consacrons cette thèse à l'étude et à la conception de nouveaux algorithmes qui contribuent à rendre le système de navigation robuste et tolérant aux défauts. Nous avons fait le choix d'utiliser des algorithmes de diagnostic de défauts capteurs et actionneurs à base de résidus, et une commande robuste par modes glissants permettant d'assurer une tolérance passive contre une classe plus large de perturbations externes, qui ne sont pas forcément bornées d'une manière uniforme. La couche de diagnostic proposée est purement informationnelle. Elle se base sur l'utilisation de deux filtres informationnels avec différents modèles d'évolution, et les divergences de Bhattacharyya et de Kullback-Leibler pour la conception des résidus. Ces résidus sont évalués via des méthodes statistiques pour permettre la détection, la localisation et l'exclusion de défauts capteurs et actionneurs. L'approche proposée est appliquée sur des systèmes robotiques mobiles à roues avec entraînement différentiel. Les résultats expérimentaux obtenus sur la plate-forme robotique PRETIL de CRIStAL sont présentés et discutés
Over the last years, autonomous navigation for mobile robot systems has attracted increasing interest from the scientific community. This is mainly due to the diversity of its applications and the different challenges it represents. Without any human intervention, autonomous navigation must be safe, reliable and accurate. Nevertheless, it may be subject to various degradations that could compromise its objective. Indeed, external disturbances, as well as sensor and actuator faults, may affect the different aspects of autonomous navigation, namely localization, path planning and trajectory tracking. This is why we devote this thesis to the design of new algorithms that contribute to making the navigation system robust against external disturbances and tolerant to sensor and actuator faults. We adopt a residual-generation-based fault-diagnosis strategy combined with a sliding mode controller that is robust against a class of perturbations that are not necessarily uniformly bounded. The proposed diagnostic layer is purely informational. It is based on the use of two information filters with different evolution models, and on the Bhattacharyya and Kullback-Leibler divergences for residual design. These residuals are evaluated using statistical methods in order to detect, isolate and then exclude sensor and actuator faults from the navigation system. The proposed approach is applied to differential-drive wheeled mobile-robot systems. Experimental results obtained using PRETIL, the robotic platform of CRIStAL, are presented and discussed
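As a concrete complement to the residual design described above, here is a minimal sketch of the Bhattacharyya distance between two univariate Gaussian estimates, one of the two divergences named in the abstract; the numbers are hypothetical and this is not the thesis implementation.

import numpy as np

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between univariate Gaussians N(m1, v1) and N(m2, v2)."""
    return 0.25 * (m1 - m2) ** 2 / (v1 + v2) + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2)))

# Hypothetical residual between two filters' estimates of the same state component:
# close estimates give a small distance, conflicting ones a large distance.
print(bhattacharyya_gauss(0.0, 1.0, 0.2, 1.1))
print(bhattacharyya_gauss(0.0, 1.0, 3.0, 0.4))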
APA, Harvard, Vancouver, ISO, and other styles
33

Degenne, Rémy. "Impact of structure on the design and analysis of bandit algorithms." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7179.

Full text
Abstract:
Cette thèse porte sur des problèmes d'apprentissage statistique séquentiel, dits bandits stochastiques à plusieurs bras. Dans un premier temps un algorithme de bandit est présenté. L'analyse de cet algorithme, comme la majorité des preuves usuelles de bornes de regret pour algorithmes de bandits, utilise des intervalles de confiance pour les moyennes des bras. Dans un cadre paramétrique,on prouve des inégalités de concentration quantifiant la déviation entre le paramètre d'une distribution et son estimation empirique, afin d'obtenir de tels intervalles. Ces inégalités sont exprimées en fonction de la divergence de Kullback-Leibler. Trois extensions du problème de bandits sont ensuite étudiées. Premièrement on considère le problème dit de semi-bandit combinatoire, dans lequel un algorithme choisit un ensemble de bras et la récompense de chaque bras est observée. Le regret minimal atteignable dépend alors de la corrélation entre les bras. On considère ensuite un cadre où on change le mécanisme d'obtention des observations provenant des différents bras. Une source de difficulté du problème de bandits est la rareté de l'information: seul le bras choisi est observé. On montre comment on peut tirer parti de la disponibilité d'observations supplémentaires gratuites, ne participant pas au regret. Enfin, une nouvelle famille d'algorithmes est présentée afin d'obtenir à la fois des guaranties de minimisation de regret et d'identification du meilleur bras. Chacun des algorithmes réalise un compromis entre regret et temps d'identification. On se penche dans un deuxième temps sur le problème dit d'exploration pure, dans lequel un algorithme n'est pas évalué par son regret mais par sa probabilité d'erreur quant à la réponse à une question posée sur le problème. On détermine la complexité de tels problèmes et on met au point des algorithmes approchant cette complexité
In this thesis, we study sequential learning problems called stochastic multi-armed bandits. First, a new bandit algorithm is presented. The analysis of that algorithm uses confidence intervals on the means of the arms' reward distributions, as most bandit proofs do. In a parametric setting, we derive concentration inequalities that quantify the deviation between the parameter of a distribution and its empirical estimate in order to obtain such confidence intervals. These inequalities are expressed in terms of the Kullback-Leibler divergence. Three extensions of the stochastic multi-armed bandit problem are then studied. First, we study the so-called combinatorial semi-bandit problem, in which an algorithm chooses a set of arms and the reward of each of these arms is observed. The minimal attainable regret then depends on the correlation between the arm distributions. We then consider a setting in which the observation mechanism changes. One source of difficulty of the bandit problem is the scarcity of information: only the arm pulled is observed. We show how to make efficient use of supplementary free observations, when available (these do not contribute to the regret). Finally, a new family of algorithms is introduced to obtain both regret minimization and best arm identification guarantees. Each algorithm of the family realizes a trade-off between regret and the time needed to identify the best arm. In a second part, we study the so-called pure exploration problem, in which an algorithm is not evaluated on its regret but on the probability that it returns a wrong answer to a question about the arm distributions. We determine the complexity of such problems and design algorithms that approach this complexity
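The KL-expressed confidence intervals mentioned above can be illustrated by the classical KL-UCB index for a Bernoulli arm, computed here by bisection; the exploration level log(budget) and the numbers are simplifying assumptions for this example, not the thesis's exact algorithm.

import math

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(mean, pulls, budget, tol=1e-6):
    """Upper confidence bound: the largest q >= mean such that
    pulls * KL(mean, q) <= log(budget), found by bisection."""
    level = math.log(max(budget, 1)) / pulls
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if kl_bernoulli(mean, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo

print(kl_ucb(mean=0.4, pulls=50, budget=1000))  # roughly 0.66 with these hypothetical numbers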
APA, Harvard, Vancouver, ISO, and other styles
34

Torres, Huircaman Milflen. "Detección de condición falla de encolamientos de cambios de estado de móviles prepago a través de divergencia de Kullvack-Leibler." Tesis, Universidad de Chile, 2012. http://www.repositorio.uchile.cl/handle/2250/111949.

Full text
Abstract:
Magíster en Ingeniería de Redes de Comunicaciones
The Chilean prepaid mobile telephony industry accounts for 70% of the mobile customers of the main operators in the country. This service uses an online debit and credit process that almost instantaneously deducts the credit consumed when using the voice and data services enabled on the handset, and credits the corresponding amount when a prepaid top-up is applied; these are the most common operations used to change the operating state of a prepaid mobile terminal. The dynamics of these transitions depend intimately on the operation of the computer system that manages and executes the changes. Its architecture, of the server/command-queue type, uses a first-in first-out (FIFO) policy to process each command associated with the state transition to be applied to each terminal in the network. This command management system can collapse if the demand for state changes increases suddenly and exceeds the processing capacity of the server. This results in excessive growth of the command queue, which in turn can cause problems in the telecommunication services within the network and monetary losses for the operator by taking the billing system offline. This phenomenon, called queueing (encolamiento), is controlled in commercial systems using threshold alarms, which indicate to the system administrators the need to activate the countermeasures required to restore the correct operation of the system. However, the value of this threshold is set without necessarily using performance optimality criteria, which reduces the efficiency of the technical and commercial operation of the service. The working hypothesis of this research is that the use of a "hard" threshold can be improved by employing an approach that incorporates the history of the process describing the length of the command queue, such as one based on the probability distributions of the normal operating condition and of the queueing condition. To validate this conjecture, a queueing detector based on the Kullback-Leibler divergence was designed, which compares the instantaneous distribution of the observations with those corresponding to the normal operating and queueing conditions. The methodology used to validate this thesis was based on computational simulation of the state transitions, described by a 3-state Markov chain, which was used to quantify the operation of the detector and compare it with the metrics associated with hard threshold detection. The performance metrics used were the percentage of type I errors (missed detections) and type II errors (false positives), which were computed empirically for both detectors. In addition, the operation of the detector was validated with real operational data from a record of 14 months of observations. The results support the proposed hypothesis, in the sense that performance improvements of up to 60% in queueing detection and an 85% reduction in false positives were observed when comparing the Kullback-Leibler detector with threshold-based detectors. In this sense, these results constitute an important advance in improving the accuracy and reliability of fault condition detection, which justifies incorporating this new strategy into the operations environment of a telecommunications company. Moreover, it is potentially extensible to other processes controlled through queues.
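A minimal sketch of the kind of detector described above: the empirical distribution of recent queue-length observations is compared, via the Kullback-Leibler divergence, with reference distributions for normal and congested operation; the bins, reference probabilities and sample windows are invented for this example.

import numpy as np

def kl(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def detect_congestion(window, bins, p_normal, p_congested):
    """Classify a window of queue-length samples by comparing its empirical
    histogram to the normal and congested reference distributions."""
    hist, _ = np.histogram(window, bins=bins)
    emp = hist / hist.sum()
    return kl(emp, p_congested) < kl(emp, p_normal)

# Hypothetical references over 4 queue-length bins (short ... very long queues).
bins = [0, 10, 50, 200, 10_000]
p_normal = [0.70, 0.25, 0.04, 0.01]
p_congested = [0.05, 0.15, 0.40, 0.40]
print(detect_congestion([3, 7, 12, 5, 40, 8], bins, p_normal, p_congested))             # False
print(detect_congestion([180, 400, 950, 2200, 75, 300], bins, p_normal, p_congested))   # True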
APA, Harvard, Vancouver, ISO, and other styles
35

Filippi, Sarah. "Stratégies optimistes en apprentissage par renforcement." Phd thesis, Ecole nationale supérieure des telecommunications - ENST, 2010. http://tel.archives-ouvertes.fr/tel-00551401.

Full text
Abstract:
This thesis deals with model-based methods for solving reinforcement learning problems. We consider an agent faced with a sequence of decisions and an environment whose state varies according to the decisions taken by the agent. Throughout the interaction, the agent receives rewards that depend both on the action taken and on the state of the environment. The agent does not know the interaction model and aims to maximize the sum of rewards received in the long run. We consider different interaction models: Markov decision processes, partially observed Markov decision processes, and bandit models. For these different models, we propose algorithms that consist in constructing, at each time step, a set of models that best explain the interaction between the agent and the environment. The model-based methods we develop are intended to perform well both in practice and from a theoretical point of view. The theoretical performance of the algorithms is measured in terms of regret, which is the difference between the sum of rewards received by an agent that knew the interaction model in advance and the rewards accumulated by the algorithm. In particular, these algorithms guarantee a good balance between acquiring new knowledge about the environment's response (exploration) and choosing actions that seem to lead to high rewards (exploitation). We propose two different types of methods to control this trade-off between exploration and exploitation. The first algorithm proposed in this thesis consists in successively following an exploration strategy, during which the interaction model is estimated, and then an exploitation strategy. The duration of the exploration phase is controlled adaptively, which yields logarithmic regret in a parametric Markov decision process even if the state of the environment is only partially observed. This type of model is motivated by an application of interest in cognitive radio, namely opportunistic access to a communication network by a secondary user. The two other proposed algorithms follow optimistic strategies: the agent chooses the actions that are optimal for the best possible model among the set of plausible models. We construct and analyze such an algorithm for a parametric bandit model in the case of generalized linear models, which makes it possible to consider applications such as online advertising management. We also propose to use the Kullback-Leibler divergence to construct the set of plausible models in optimistic algorithms for Markov decision processes with finite state and action spaces. Using this metric significantly improves the behavior of optimistic algorithms in practice. Moreover, a regret analysis of each of the algorithms guarantees theoretical performance similar to the best state-of-the-art algorithms.
APA, Harvard, Vancouver, ISO, and other styles
36

Hee, Sonke. "Computational Bayesian techniques applied to cosmology." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273346.

Full text
Abstract:
This thesis presents work around 3 themes: dark energy, gravitational waves and Bayesian inference. Both dark energy and gravitational wave physics are not yet well constrained. They present interesting challenges for Bayesian inference, which attempts to quantify our knowledge of the universe given our astrophysical data. A dark energy equation of state reconstruction analysis finds that the data favours the vacuum dark energy equation of state $w {=} -1$ model. Deviations from vacuum dark energy are shown to favour the super-negative ‘phantom’ dark energy regime of $w {< } -1$, but at low statistical significance. The constraining power of various datasets is quantified, finding that data constraints peak around redshift $z = 0.2$ due to baryonic acoustic oscillation and supernovae data constraints, whilst cosmic microwave background radiation and Lyman-$\alpha$ forest constraints are less significant. Specific models with a conformal time symmetry in the Friedmann equation and with an additional dark energy component are tested and shown to be competitive to the vacuum dark energy model by Bayesian model selection analysis: that they are not ruled out is believed to be largely due to poor data quality for deciding between existing models. Recent detections of gravitational waves by the LIGO collaboration enable the first gravitational wave tests of general relativity. An existing test in the literature is used and sped up significantly by a novel method developed in this thesis. The test computes posterior odds ratios, and the new method is shown to compute these accurately and efficiently. Compared to computing evidences, the method presented provides an approximate 100 times reduction in the number of likelihood calculations required to compute evidences at a given accuracy. Further testing may identify a significant advance in Bayesian model selection using nested sampling, as the method is completely general and straightforward to implement. We note that efficiency gains are not guaranteed and may be problem specific: further research is needed.
APA, Harvard, Vancouver, ISO, and other styles
37

Cen, Kun. "The Use Of Kullback-Leibler Divergence In Opinion Retrieval." Thesis, 2008. http://hdl.handle.net/10012/4081.

Full text
Abstract:
With the huge amount of subjective content in on-line documents, there is a clear need for an information retrieval system that supports retrieval of documents containing opinions about the topic expressed in a user's query. In recent years, blogs, a new publishing medium, have attracted a large number of people who express personal opinions covering all kinds of topics in response to real-world events. The opinionated nature of blogs makes them an interesting new research area for opinion retrieval. Identification and extraction of subjective content from blogs has become the subject of several research projects. In this thesis, four novel methods are proposed to retrieve blog posts that express opinions about given topics. The first method uses the Kullback-Leibler divergence (KLD) to weight a lexicon of subjective adjectives around query terms. The second method re-ranks documents using KLD scores of subjective adjectives that take into account their distances from the query terms. The third method calculates KLD scores of subjective adjectives for predefined query categories. In the fourth method, collocates, words co-occurring with query terms in the corpus, are used to construct the subjective lexicon automatically; the KLD scores of the collocates are then calculated and used for document ranking. Four groups of experiments are conducted to evaluate the proposed methods on the TREC test collections. The results are compared with baseline systems to determine the effectiveness of using KLD in opinion retrieval. Further studies are recommended to explore more sophisticated approaches to identifying subjectivity and promising techniques for extracting opinions.
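As an illustration of KLD-based term weighting of the sort described above (a generic sketch with invented toy documents, not the thesis's exact formulation), candidate subjective terms can be scored by their pointwise KL contribution in an opinionated subset relative to the whole collection.

import math
from collections import Counter

def kld_term_scores(opinionated_docs, collection_docs, candidates):
    """Score each candidate term t by its pointwise KL contribution:
    p(t | opinionated) * log( p(t | opinionated) / p(t | collection) )."""
    op = Counter(w for d in opinionated_docs for w in d)
    co = Counter(w for d in collection_docs for w in d)
    n_op, n_co = sum(op.values()), sum(co.values())
    scores = {}
    for t in candidates:
        p_op = op[t] / n_op if op[t] else 0.0
        p_co = (co[t] + 1) / (n_co + len(co))   # add-one smoothing for the background model
        scores[t] = p_op * math.log(p_op / p_co) if p_op > 0 else 0.0
    return scores

docs_opinion = [["terrible", "camera", "love", "it"], ["great", "camera", "awful", "battery"]]
docs_all = docs_opinion + [["camera", "ships", "with", "battery"], ["camera", "specs", "listed"]]
print(kld_term_scores(docs_opinion, docs_all, ["great", "terrible", "camera"]))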
APA, Harvard, Vancouver, ISO, and other styles
38

陳力嘉. "Kullback-Leibler divergence based test for detecting differentially expressed genes." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/51228353977258529392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Shun-Lung, and 陳順隆. "Acoustic Recognition Using Tandem System with Kullback-Leibler Divergence and Hierarchical Multi-layer Perceptron." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/80451777015559629372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

DELLA, CIOPPA LORENZO. "Differential entropy based methods for thresholding in wavelet bases and other applications." Doctoral thesis, 2021. http://hdl.handle.net/11573/1566221.

Full text
Abstract:
In this thesis the problem of automatically selecting expansion coefficients of a function f(t) in a wavelet basis expansion is considered. The problem is approached from an information-theoretic point of view, and three different differential-entropy-based measures are proposed for comparing expansion coefficients in order to find the optimal subset. Several theoretical results about the differential entropy of a function are developed in order to provide a solid theoretical framework for the proposed measures. A numerical scheme to compute the differential entropy of a function is presented and its stability and convergence properties are proved. Numerical experiments concerning different wavelet bases and frames are presented, and the behaviour of the proposed measures is compared to state-of-the-art methods. Moreover, the numerical experiments highlight a connection between the proposed measures and the well-known information measure Normalized Compression Distance (NCD). In addition, the Fourier basis is considered, and an application to Fourier shape descriptors is developed. As a last contribution, the problem of locating time-frequency interferences in multicomponent signals is considered. A method for time-domain localization of mode interferences is presented, relying on a filtered energy signal. The optimal amount of filtering is automatically detected on a rate/distortion-like curve by means of the proposed information measures.
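To make the central quantity concrete, a simple plug-in (histogram) estimate of the differential entropy of a set of expansion coefficients is sketched below; the thesis develops its own numerical scheme, so this is only a generic illustration on synthetic coefficients.

import numpy as np

def differential_entropy_hist(samples, bins=64):
    """Plug-in estimate of differential entropy from a histogram:
    -sum_k p_k * log(p_k / width_k), with empty bins ignored."""
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz] / widths[nz])))

rng = np.random.default_rng(0)
coeffs_spread = rng.normal(0.0, 1.0, 5000)      # spread-out coefficients: higher entropy
coeffs_sparse = rng.laplace(0.0, 0.1, 5000)     # concentrated coefficients: lower entropy
print(differential_entropy_hist(coeffs_spread), differential_entropy_hist(coeffs_sparse))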
APA, Harvard, Vancouver, ISO, and other styles
41

(8695017), Surabhi Bhadauria. "ASSOCIATION OF TOO SHORT ARCS USING ADMISSIBLE REGION." Thesis, 2020.

Find full text
Abstract:

The near-Earth space is filled with over 300,000 artificial debris objects with a diameter larger than one cm. For objects in the GEO and MEO regions, observations are made mainly through optical sensors. These sensors take observations over a short time, covering only a negligible part of the object's orbit. Two or more such observations are taken as one single Too Short Arc (TSA). Each TSA from an optical sensor consists of several angular measurements: the right ascension and declination angles, along with the rates of change of the right ascension and the declination. However, because it covers only a very small fraction of the orbit, the observational data obtained from one TSA is not sufficient for the complete initial determination of an object's orbit. For a newly detected unknown object, only TSAs are available, with no information about the object's orbit. Therefore, two or more TSAs that belong to the same object are required for its orbit determination. To solve this correlation problem, the framework of the probabilistic Admissible Region is used, which restricts the possible orbits based on a single TSA. To propagate the Admissible Region to the time of a second TSA, it is represented in a closed-form Gaussian mixture representation. This way, propagation with an Extended Kalman filter is possible. To decide whether two TSAs are correlated, that is, whether they belong to the same object, an overlap between the regions is computed in a suitable orbital-mechanics-based coordinate frame. To compute the overlap, the information measure of Kullback-Leibler divergence is used.
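Since the Kullback-Leibler divergence between two Gaussian mixtures has no closed form, a common way to compute the kind of overlap measure mentioned above is Monte Carlo estimation, sketched here for one-dimensional mixtures with invented parameters (the thesis works with full orbital-state mixtures).

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def gmm_pdf(x, weights, means, sigmas):
    return sum(w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

def gmm_sample(n, weights, means, sigmas):
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.asarray(means)[comp], np.asarray(sigmas)[comp])

def kl_gmm_mc(p, q, n=20000):
    """Monte Carlo estimate of KL(p || q) for two 1-D Gaussian mixtures,
    each given as (weights, means, sigmas)."""
    x = gmm_sample(n, *p)
    return float(np.mean(np.log(gmm_pdf(x, *p) / gmm_pdf(x, *q))))

p = ([0.6, 0.4], [0.0, 3.0], [1.0, 0.5])
q = ([0.5, 0.5], [0.2, 2.5], [1.0, 0.7])
print(kl_gmm_mc(p, q))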

APA, Harvard, Vancouver, ISO, and other styles
42

Sečkárová, Vladimíra. "Kombinování diskrétních pravděpodobnostních rozdělení pomocí křížové entropie pro distribuované rozhodování." Doctoral thesis, 2015. http://www.nusl.cz/ntk/nusl-350939.

Full text
Abstract:
Dissertation abstract Title: Cross-entropy based combination of discrete probability distributions for distributed decision making Author: Vladimíra Sečkárová Author's email: seckarov@karlin.mff.cuni.cz Department: Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University in Prague Supervisor: Ing. Miroslav Kárný, DrSc., The Institute of Information Theory and Automation of the Czech Academy of Sciences Supervisor's email: school@utia.cas.cz Abstract: In this work we propose a systematic way to combine discrete probability distributions based on decision making theory and information theory, namely the cross-entropy (also known as the Kullback-Leibler (KL) divergence). The optimal combination is a probability mass function minimizing the conditional expected KL-divergence. The expectation is taken with respect to a probability density function also minimizing the KL divergence under problem-reflecting constraints. Although the combination is derived for the case when sources provide probabilistic information on a common support, it can be applied to other types of information through a proposed transformation and/or extension. The discussion regarding proposed combining and sequential processing of available data, duplicate data, influence...
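For context, the two classical KL-optimal pooling rules for discrete distributions are easy to state and compute; the sketch below shows both, though the combination derived in the thesis (a conditional expected KL-divergence minimizer under problem-reflecting constraints) is more elaborate, so this is only a neighbouring illustration with invented sources and weights.

import numpy as np

def linear_pool(pmfs, weights):
    """Minimizes sum_i w_i * KL(p_i || q) over q: the weighted arithmetic mean."""
    return np.average(pmfs, axis=0, weights=weights)

def log_linear_pool(pmfs, weights):
    """Minimizes sum_i w_i * KL(q || p_i) over q: the normalized weighted geometric mean."""
    q = np.exp(np.average(np.log(pmfs), axis=0, weights=weights))
    return q / q.sum()

sources = np.array([[0.7, 0.2, 0.1],
                    [0.5, 0.3, 0.2],
                    [0.6, 0.1, 0.3]])
w = [0.5, 0.3, 0.2]
print(linear_pool(sources, w), log_linear_pool(sources, w))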
APA, Harvard, Vancouver, ISO, and other styles
43

Kolba, Mark Philip. "Information-Based Sensor Management for Static Target Detection Using Real and Simulated Data." Diss., 2009. http://hdl.handle.net/10161/1313.

Full text
Abstract:

In the modern sensing environment, large numbers of sensor tasking decisions must be made using an increasingly diverse and powerful suite of sensors in order to best fulfill mission objectives in the presence of situationally-varying resource constraints. Sensor management algorithms allow the automation of some or all of the sensor tasking process, meaning that sensor management approaches can either assist or replace a human operator as well as ensure the safety of the operator by removing that operator from a dangerous operational environment. Sensor managers also provide improved system performance over unmanaged sensing approaches through the intelligent control of the available sensors. In particular, information-theoretic sensor management approaches have shown promise for providing robust and effective sensor manager performance.

This work develops information-theoretic sensor managers for a general static target detection problem. Two types of sensor managers are developed. The first considers a set of discrete objects, such as anomalies identified by an anomaly detector or grid cells in a gridded region of interest. The second considers a continuous spatial region in which targets may be located at any point in continuous space. In both types of sensor managers, the sensor manager uses a Bayesian, probabilistic framework to model the environment and tasks the sensor suite to make new observations that maximize the expected information gain for the system. The sensor managers are compared to unmanaged sensing approaches using simulated data and using real data from landmine detection and unexploded ordnance (UXO) discrimination applications, and it is demonstrated that the sensor managers consistently outperform the unmanaged approaches, enabling targets to be detected more quickly using the sensor managers. The performance improvement represented by the rapid detection of targets is of crucial importance in many static target detection applications, resulting in higher rates of advance and reduced costs and resource consumption in both military and civilian applications.
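A toy sketch of the information-theoretic tasking criterion described above: for each grid cell, compute the expected reduction in the Bernoulli entropy of "target present" after one more observation, under an assumed detection/false-alarm sensor model, and task the sensor to the cell with the largest expected gain. The sensor model and per-cell priors are hypothetical.

import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_info_gain(prior, pd=0.9, pfa=0.1):
    """Expected reduction in Bernoulli entropy of 'target present in cell'
    after one observation with detection prob pd and false-alarm prob pfa."""
    p_obs1 = pd * prior + pfa * (1 - prior)              # P(detection)
    post1 = pd * prior / p_obs1                          # posterior after a detection
    post0 = (1 - pd) * prior / (1 - p_obs1)              # posterior after no detection
    return entropy(prior) - (p_obs1 * entropy(post1) + (1 - p_obs1) * entropy(post0))

priors = np.array([0.05, 0.5, 0.9])      # hypothetical per-cell target probabilities
gains = expected_info_gain(priors)
print(gains, "-> observe cell", int(np.argmax(gains)))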


APA, Harvard, Vancouver, ISO, and other styles
44

Mielke, Matthias. "Maximum Likelihood Theory for Retention of Effect Non-Inferiority Trials." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B3D4-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

(6561242), Piyush Pandita. "BAYESIAN OPTIMAL DESIGN OF EXPERIMENTS FOR EXPENSIVE BLACK-BOX FUNCTIONS UNDER UNCERTAINTY." Thesis, 2019.

Find full text
Abstract:
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions or inputs for computer simulations in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material that has great applicability. One might also be interested in accurately modeling and analyzing a simulation of a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints; this gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations under aleatory and epistemic uncertainty needs to be treated in a Bayesian way. The aim of this thesis is to extend the state of the art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
46

Vaidhiyan, Nidhin Koshy. "Neuronal Dissimilarity Indices that Predict Oddball Detection in Behaviour." Thesis, 2016. http://etd.iisc.ac.in/handle/2005/2669.

Full text
Abstract:
Our vision is as yet unsurpassed by machines because of the sophisticated representations of objects in our brains. This representation is vastly different from a pixel-based representation used in machine storages. It is this sophisticated representation that enables us to perceive two faces as very different, i.e, they are far apart in the “perceptual space”, even though they are close to each other in their pixel-based representations. Neuroscientists have proposed distances between responses of neurons to the images (as measured in macaque monkeys) as a quantification of the “perceptual distance” between the images. Let us call these neuronal dissimilarity indices of perceptual distances. They have also proposed behavioural experiments to quantify these perceptual distances. Human subjects are asked to identify, as quickly as possible, an oddball image embedded among multiple distractor images. The reciprocal of the search times for identifying the oddball is taken as a measure of perceptual distance between the oddball and the distractor. Let us call such estimates as behavioural dissimilarity indices. In this thesis, we describe a decision-theoretic model for visual search that suggests a connection between these two notions of perceptual distances. In the first part of the thesis, we model visual search as an active sequential hypothesis testing problem. Our analysis suggests an appropriate neuronal dissimilarity index which correlates strongly with the reciprocal of search times. We also consider a number of alternative possibilities such as relative entropy (Kullback-Leibler divergence), the Chernoff entropy and the L1-distance associated with the neuronal firing rate profiles. We then come up with a means to rank the various neuronal dissimilarity indices based on how well they explain the behavioural observations. Our proposed dissimilarity index does better than the other three, followed by relative entropy, then Chernoff entropy and then L1 distance. In the second part of the thesis, we consider a scenario where the subject has to find an oddball image, but without any prior knowledge of the oddball and distractor images. Equivalently, in the neuronal space, the task for the decision maker is to find the image that elicits firing rates different from the others. Here, the decision maker has to “learn” the underlying statistics and then make a decision on the oddball. We model this scenario as one of detecting an odd Poisson point process having a rate different from the common rate of the others. The revised model suggests a new neuronal dissimilarity index. The new dissimilarity index is also strongly correlated with the behavioural data. However, the new dissimilarity index performs worse than the dissimilarity index proposed in the first part on existing behavioural data. The degradation in performance may be attributed to the experimental setup used for the current behavioural tasks, where search tasks associated with a given image pair were sequenced one after another, thereby possibly cueing the subject about the upcoming image pair, and thus violating the assumption of this part on the lack of prior knowledge of the image pairs to the decision maker. In conclusion, the thesis provides a framework for connecting the perceptual distances in the neuronal and the behavioural spaces. Our framework can possibly be used to analyze the connection between the neuronal space and the behavioural space for various other behavioural tasks.
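The Poisson modelling described above leads to closed-form divergences; here is a small sketch (with hypothetical firing rates) of the KL divergence between two Poisson distributions, one of the candidate dissimilarity indices of this kind, together with a simple symmetrized version.

import math

def kl_poisson(lam1, lam2):
    """KL divergence D( Poisson(lam1) || Poisson(lam2) ) = lam2 - lam1 + lam1*log(lam1/lam2)."""
    return lam2 - lam1 + lam1 * math.log(lam1 / lam2)

def symmetrized_kl_poisson(lam1, lam2):
    """A symmetric dissimilarity between two firing rates (spikes/s)."""
    return 0.5 * (kl_poisson(lam1, lam2) + kl_poisson(lam2, lam1))

# Hypothetical mean firing rates of a neuron for an oddball vs. a distractor image.
print(kl_poisson(20.0, 5.0), symmetrized_kl_poisson(20.0, 5.0))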
APA, Harvard, Vancouver, ISO, and other styles
47

Vaidhiyan, Nidhin Koshy. "Neuronal Dissimilarity Indices that Predict Oddball Detection in Behaviour." Thesis, 2016. http://etd.iisc.ernet.in/handle/2005/2669.

Full text
Abstract:
Our vision is as yet unsurpassed by machines because of the sophisticated representations of objects in our brains. This representation is vastly different from the pixel-based representation used in machine storage. It is this sophisticated representation that enables us to perceive two faces as very different, i.e., far apart in the “perceptual space”, even though they are close to each other in their pixel-based representations. Neuroscientists have proposed distances between the responses of neurons to images (as measured in macaque monkeys) as a quantification of the “perceptual distance” between the images; let us call these the neuronal dissimilarity indices of perceptual distances. They have also proposed behavioural experiments to quantify these perceptual distances. Human subjects are asked to identify, as quickly as possible, an oddball image embedded among multiple distractor images. The reciprocal of the search time for identifying the oddball is taken as a measure of the perceptual distance between the oddball and the distractor. Let us call such estimates behavioural dissimilarity indices. In this thesis, we describe a decision-theoretic model for visual search that suggests a connection between these two notions of perceptual distance. In the first part of the thesis, we model visual search as an active sequential hypothesis testing problem. Our analysis suggests an appropriate neuronal dissimilarity index which correlates strongly with the reciprocal of search times. We also consider a number of alternative possibilities, such as the relative entropy (Kullback-Leibler divergence), the Chernoff entropy, and the L1 distance associated with the neuronal firing-rate profiles. We then develop a means to rank the various neuronal dissimilarity indices based on how well they explain the behavioural observations. Our proposed dissimilarity index performs best, followed by relative entropy, then Chernoff entropy, and then the L1 distance. In the second part of the thesis, we consider a scenario where the subject has to find an oddball image, but without any prior knowledge of the oddball and distractor images. Equivalently, in the neuronal space, the task for the decision maker is to find the image that elicits firing rates different from the others. Here, the decision maker has to “learn” the underlying statistics and then make a decision on the oddball. We model this scenario as one of detecting an odd Poisson point process having a rate different from the common rate of the others. The revised model suggests a new neuronal dissimilarity index, which is also strongly correlated with the behavioural data. However, on existing behavioural data, the new dissimilarity index performs worse than the one proposed in the first part. The degradation in performance may be attributed to the experimental setup of the current behavioural tasks: search tasks associated with a given image pair were sequenced one after another, possibly cueing the subject about the upcoming image pair and thus violating this part's assumption that the decision maker has no prior knowledge of the image pair. In conclusion, the thesis provides a framework for connecting the perceptual distances in the neuronal and the behavioural spaces. Our framework can possibly be used to analyze the connection between the neuronal space and the behavioural space for various other behavioural tasks.
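As a companion to the second part of this abstract, here is a small, self-contained sketch of the "odd Poisson process" task: a decision maker observes spike counts elicited by several images, none of whose rates are known in advance, and declares as odd the arm whose empirical rate is farthest, in the KL sense, from the pooled rate of the remaining arms. The number of arms, the rates, and this one-shot decision rule are illustrative assumptions; they are not the sequential policy analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def declare_odd_arm(counts):
    # counts: (n_arms, n_windows) array of spike counts, one row per image.
    # Score each arm by the KL rate between a Poisson with its empirical
    # rate and a Poisson with the pooled empirical rate of the other arms,
    # then declare the highest-scoring arm to be the oddball.
    counts = np.asarray(counts, dtype=float)
    n_arms = counts.shape[0]
    scores = np.empty(n_arms)
    for i in range(n_arms):
        lam_i = counts[i].mean() + 1e-9
        lam_rest = counts[np.arange(n_arms) != i].mean() + 1e-9
        scores[i] = lam_i * np.log(lam_i / lam_rest) - lam_i + lam_rest
    return int(np.argmax(scores)), scores

# Hypothetical experiment: six images, image 3 evokes a different mean rate.
true_rates = np.array([8.0, 8.0, 8.0, 14.0, 8.0, 8.0])   # spikes per window
counts = rng.poisson(true_rates[:, None], size=(6, 40))  # 40 observation windows
odd, scores = declare_odd_arm(counts)
print("declared odd image:", odd)
print("per-image scores  :", np.round(scores, 3))
```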
APA, Harvard, Vancouver, ISO, and other styles
48

FANTACCI, CLAUDIO. "Distributed multi-object tracking over sensor networks: a random finite set approach." Doctoral thesis, 2015. http://hdl.handle.net/2158/1003256.

Full text
Abstract:
The aim of the present dissertation is to address distributed tracking over a network of heterogeneous and geographically dispersed nodes (or agents) with sensing, communication and processing capabilities. Tracking is carried out in the Bayesian framework, and its extension to a distributed context is made possible via an information-theoretic approach to data fusion which exploits consensus algorithms and the notion of Kullback–Leibler Average (KLA) of the Probability Density Functions (PDFs) to be fused. The first step toward distributed tracking considers a single moving object. Consensus takes place in each agent for spreading information over the network so that each node can track the object. To achieve such a goal, consensus is carried out on the local single-object posterior distribution, which is the result of local data processing in the Bayesian setting, exploiting the last available measurement about the object. Such an approach is called Consensus on Posteriors (CP). The first contribution of the present work is an improvement to the CP algorithm, namely Parallel Consensus on Likelihoods and Priors (CLCP). The idea is to carry out, in parallel, a separate consensus for the novel information (likelihoods) and one for the prior information (priors). This parallel procedure is conceived to avoid underweighting the novel information during the fusion steps. The outcomes of the two consensuses are then combined to provide the fused posterior density. Furthermore, the case of a single highly-maneuvering object is addressed. To this end, the object is modeled as a jump Markovian system and the multiple model (MM) filtering approach is adopted for local estimation. Thus, the consensus algorithms need to be re-designed to cope with this new scenario. The second contribution is the design of two novel consensus MM filters for tracking a maneuvering object, based on the First Order Generalized Pseudo-Bayesian (GPB1) and Interacting Multiple Model (IMM) filters. The next step is in the direction of distributed estimation of multiple moving objects. In order to model, in a rigorous and elegant way, a possibly time-varying number of objects present in a given area of interest, the Random Finite Set (RFS) formulation is adopted, since it provides a notion of probability density for multi-object states that allows existing tools in distributed estimation to be directly extended to multi-object tracking. The multi-object Bayes filter proposed by Mahler is a theoretically grounded solution to recursive Bayesian tracking based on RFSs. However, the multi-object Bayes recursion, unlike its single-object counterpart, is affected by combinatorial complexity and is therefore computationally infeasible except for very small-scale problems involving few objects and/or measurements. For this reason, the computationally tractable Probability Hypothesis Density (PHD) and Cardinalized PHD (CPHD) filtering approaches are used as a first endeavour toward distributed multi-object filtering. The third contribution is the generalisation of the single-object KLA to the RFS framework, which is the fundamental theoretical step for developing a novel consensus algorithm based on CPHD filtering, namely the Consensus CPHD (CCPHD). Each tracking agent locally updates the multi-object CPHD, i.e. the cardinality distribution and the PHD, exploiting the multi-object dynamics and the available local measurements, exchanges this information with communicating agents, and then carries out a fusion step to combine the information from all neighboring agents. The last theoretical step of the present dissertation is toward distributed filtering with the further requirement of unique object identities. To this end, the labeled RFS framework is adopted, as it provides a tractable approach to the multi-object Bayesian recursion. The δ-Generalized Labeled Multi-Bernoulli (δ-GLMB) filter is an exact closed-form solution to the multi-object Bayes recursion which jointly yields state and label (or trajectory) estimates in the presence of clutter, misdetections and association uncertainty. Due to the presence of explicit data associations in the δ-GLMB filter, the number of components in the posterior grows without bound in time. The fourth contribution of this thesis is an efficient approximation of the δ-GLMB filter, namely the Marginalized δ-GLMB (Mδ-GLMB), which preserves key summary statistics (i.e. both the PHD and the cardinality distribution) of the full labeled posterior. This approximation also facilitates efficient multi-sensor tracking with detection-based measurements. Simulation results are presented to verify the proposed approach. Finally, distributed labeled multi-object tracking over sensor networks is taken into account. The last contribution is a further generalisation of the KLA to the labeled RFS framework, which enables the development of two novel consensus tracking filters, namely the Consensus Marginalized δ-Generalized Labeled Multi-Bernoulli (CM-δGLMB) and the Consensus Labeled Multi-Bernoulli (CLMB) tracking filters. The proposed algorithms provide a fully distributed, scalable and computationally efficient solution for multi-object tracking. Simulation experiments on challenging single-object or multi-object tracking scenarios confirm the effectiveness of the proposed contributions.
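To make the KLA fusion rule concrete, the sketch below shows the well-known Gaussian special case from the distributed-estimation literature this work builds on: the weighted KLA of Gaussian densities is again Gaussian, obtained by averaging the agents' information matrices and information vectors. The two local posteriors, the equal weights, and the function name are illustrative assumptions, and the sketch covers only a single fusion step, not the full consensus CPHD or δ-GLMB machinery.

```python
import numpy as np

def kla_gaussian(means, covs, weights=None):
    # Weighted Kullback-Leibler Average (KLA) of Gaussian densities.
    # In information form the KLA is again Gaussian: its information matrix
    # and information vector are the weighted arithmetic means of the
    # agents' ones (a covariance-intersection-like fusion rule).
    n = len(means)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    infos = [np.linalg.inv(P) for P in covs]            # Omega_i = P_i^{-1}
    qvecs = [Om @ m for Om, m in zip(infos, means)]     # q_i = Omega_i * x_i
    Omega = sum(wi * Om for wi, Om in zip(w, infos))
    q = sum(wi * qi for wi, qi in zip(w, qvecs))
    P_fused = np.linalg.inv(Omega)
    return P_fused @ q, P_fused

# Hypothetical local posteriors (state = [position, velocity]) of two agents
# tracking the same object before a fusion step.
x1, P1 = np.array([10.0, 1.0]), np.diag([4.0, 1.0])
x2, P2 = np.array([11.0, 0.5]), np.diag([1.0, 2.0])

x_fused, P_fused = kla_gaussian([x1, x2], [P1, P2])
print("fused state     :", x_fused)
print("fused covariance:\n", P_fused)
```

In a consensus implementation, each agent would repeatedly apply a rule of this kind to its own and its neighbours' current estimates so that, under suitable choices of the weights, all agents are driven toward the collective average.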
APA, Harvard, Vancouver, ISO, and other styles