Dissertations / Theses on the topic 'Bayesian Inference Damage Detection'


Consult the top 31 dissertations / theses for your research on the topic 'Bayesian Inference Damage Detection.'


1

Goi, Yoshinao. "Bayesian Damage Detection for Vibration Based Bridge Health Monitoring." Kyoto University, 2018. http://hdl.handle.net/2433/232013.

2

Lebre, Sophie. "Stochastic process analysis for Genomics and Dynamic Bayesian Networks inference." PhD thesis, Université d'Evry-Val d'Essonne, 2007. http://tel.archives-ouvertes.fr/tel-00260250.

Abstract:
This thesis is dedicated to the development of statistical and computational methods for the analysis of DNA sequences and gene expression time series.

First we study a parsimonious Markov model called the Mixture Transition Distribution (MTD) model, which is a mixture of Markovian transitions. The large number of constraints on the parameters of this model hampers the derivation of an analytical expression for the Maximum Likelihood Estimate (MLE). We propose to approximate the MLE with an EM algorithm. After comparing the performance of this algorithm to results from the literature, we use it to evaluate the relevance of MTD modelling for bacterial DNA coding sequences in comparison with standard Markovian modelling.

Then we propose two different approaches for recovering genetic regulatory networks. We model these networks with Dynamic Bayesian Networks (DBNs), whose edges describe the dependency relationships between time-delayed gene expression levels. The aim is to estimate the topology of this graph despite the small number of repeated measurements relative to the number of observed genes.

To address this dimensionality problem, we first assume that the dependency relationships are homogeneous, that is, the graph topology is constant over time. We then propose to approximate this graph by considering partial order dependencies. The concept of partial order dependence graphs, already introduced for static and non-directed graphs, is adapted and characterized for DBNs using the theory of graphical models. From these results, we develop a deterministic procedure for DBN inference.

Finally, we relax the homogeneity assumption by considering a succession of several homogeneous phases, via a multiple changepoint regression model. Each changepoint indicates a change in the regression model parameters, that is, in the way an expression level depends on the others. Using reversible jump MCMC methods, we develop a stochastic algorithm that simultaneously infers the changepoint locations and the structure of the network within the phases delimited by the changepoints.

Both approaches are validated on simulated and real data.
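The MTD decomposition described above is simple enough to state in a few lines. Below is a minimal sketch (not the thesis's estimation code) of the MTD conditional law and sequence log-likelihood, with hypothetical lag weights `phi` and a shared transition matrix `pi` over a binary alphabet:

```python
import numpy as np

# Mixture Transition Distribution (MTD) model: the conditional law of x_t given
# the last L symbols is a convex mixture of single-lag Markov transitions:
#   P(x_t = j | x_{t-1}, ..., x_{t-L}) = sum_g phi[g] * pi[x_{t-g}, j]
phi = np.array([0.6, 0.3, 0.1])          # hypothetical lag weights, L = 3
pi = np.array([[0.8, 0.2],               # hypothetical shared transition matrix
               [0.3, 0.7]])

def mtd_conditional(past):
    """P(x_t = . | past), with past = (x_{t-1}, ..., x_{t-L})."""
    return sum(phi[g] * pi[past[g]] for g in range(len(phi)))

def mtd_loglik(seq):
    """Log-likelihood of a sequence, conditioning on the first L symbols."""
    L = len(phi)
    ll = 0.0
    for t in range(L, len(seq)):
        past = seq[t-1::-1][:L]          # (x_{t-1}, ..., x_{t-L})
        ll += np.log(mtd_conditional(past)[seq[t]])
    return ll

print(mtd_loglik([0, 1, 1, 0, 0, 1, 0]))
```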
3

Ko, Kyungduk. "Bayesian wavelet approaches for parameter estimation and change point detection in long memory processes." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/2804.

Abstract:
The main goal of this research is to estimate the model parameters and to detect multiple change points in the long memory parameter of Gaussian ARFIMA(p, d, q) processes. Our approach is Bayesian and inference is carried out in the wavelet domain. Long memory processes have been widely used in many scientific fields such as economics, finance and computer science. Wavelets have a strong connection with these processes: the ability of wavelets to simultaneously localize a process in the time and scale domains allows many dense variance-covariance matrices of the process to be represented in a sparse form. A wavelet-based Bayesian estimation procedure for the parameters of a Gaussian ARFIMA(p, d, q) process is proposed. This entails calculating the exact variance-covariance matrix of the given ARFIMA(p, d, q) process and transforming it into the wavelet domain using the two-dimensional discrete wavelet transform (DWT2). The Metropolis algorithm is used to sample the model parameters from the posterior distributions. Simulations with different values of the parameters and of the sample size are performed. A real data application to the U.S. GNP data is also reported. Detection and estimation of multiple change points in the long memory parameter is also investigated. Reversible jump MCMC is used for posterior inference. Performance is evaluated on simulated data and on the Nile River dataset.
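To illustrate the wavelet-domain Metropolis idea, here is a hedged sketch that replaces the exact ARFIMA covariance used in the thesis with a common simplification: wavelet coefficients at level j are treated as independent Gaussians whose variance follows the long-memory scaling sigma^2 * 2^(2dj). All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized wavelet-domain likelihood for a long-memory parameter d: coefficients
# at level j ~ N(0, sigma2 * 2**(2*d*j)), a decorrelation approximation used
# here only for illustration.
levels = [1, 2, 3, 4, 5]
d_true, sigma2 = 0.3, 1.0
coeffs = {j: rng.normal(0.0, np.sqrt(sigma2 * 2**(2*d_true*j)), size=2**(8-j))
          for j in levels}

def loglik(d):
    ll = 0.0
    for j, w in coeffs.items():
        v = sigma2 * 2**(2*d*j)
        ll += -0.5*np.sum(w**2)/v - 0.5*len(w)*np.log(2*np.pi*v)
    return ll

# Random-walk Metropolis on d with a flat prior on (-0.5, 0.5).
d, samples = 0.0, []
for _ in range(5000):
    prop = d + 0.05*rng.normal()
    if -0.5 < prop < 0.5 and np.log(rng.uniform()) < loglik(prop) - loglik(d):
        d = prop
    samples.append(d)

print("posterior mean of d:", np.mean(samples[1000:]))
```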
4

Reichl, Johannes, and Sylvia Frühwirth-Schnatter. "A Censored Random Coefficients Model for the Detection of Zero Willingness to Pay." Springer, 2011. http://epub.wu.ac.at/3707/1/WU_epub_(2).pdf.

Abstract:
In this paper we address the problem of negative estimates of willingness to pay (WTP). We find that there exist a number of goods and services, especially in the fields of marketing and environmental valuation, for which only zero or positive WTP is meaningful. For the valuation of these goods, an econometric model for the analysis of repeated dichotomous choice data is proposed. Our model restricts the domain of the WTP estimates to strictly positive values, while also allowing for the detection of zero WTP. The model is tested on a simulated and a real data set.
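A sketch of the kind of data-generating process such a model targets, assuming a point mass at zero WTP and a log-normal law for positive WTP (both assumptions are illustrative, not the paper's exact specification):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process: a point mass at zero WTP, strictly positive
# (here log-normal) WTP otherwise, and repeated dichotomous-choice
# answers to a set of bids. All values are illustrative.
n, bids = 500, np.array([5.0, 10.0, 20.0])
p_zero = 0.3
wtp = np.where(rng.uniform(size=n) < p_zero, 0.0,
               rng.lognormal(mean=2.0, sigma=0.8, size=n))

answers = wtp[:, None] >= bids[None, :]       # "yes" iff WTP exceeds the bid

# Respondents refusing every bid are the candidates for zero WTP that the
# model is designed to separate from small-but-positive WTP.
print("share refusing all bids:", np.mean(~answers.any(axis=1)))
```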
5

Zhang, Hanze. "Bayesian inference on quantile regression-based mixed-effects joint models for longitudinal-survival data from AIDS studies." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7456.

Abstract:
In HIV/AIDS studies, viral load (the number of copies of HIV-1 RNA) and CD4 cell counts are important biomarkers of the severity of viral infection, disease progression, and treatment evaluation. Recently, joint models, which can reduce bias and improve the efficiency of estimates, have been developed to assess the longitudinal process, the survival process, and the relationship between them simultaneously. However, the majority of joint models are based on mean regression, which concentrates only on the mean effect of the outcome variable conditional on certain covariates. In fact, in HIV/AIDS research, the mean effect may not always be of interest. Additionally, if obvious outliers or heavy tails exist, mean regression models may lead to non-robust results. Moreover, due to some data features, such as left-censoring caused by the limit of detection (LOD), covariates with measurement errors, and skewness, the analysis of such complicated longitudinal and survival data still poses many challenges; ignoring these features may result in biased inference. Compared to mean regression models, the quantile regression (QR) model belongs to a robust model family: it can give a full scan of covariate effects at different quantiles of the response and may be more robust to extreme values. QR is also more flexible, since the distribution of the outcome does not need to satisfy strict parametric assumptions. These advantages have brought QR increasing attention in diverse areas. To the best of our knowledge, few studies focus on QR-based joint models applied to longitudinal-survival data with multiple features. Thus, in this dissertation research, we develop three QR-based joint models via a Bayesian inferential approach: (i) QR-based nonlinear mixed-effects joint models for longitudinal-survival data with multiple features; (ii) QR-based partially linear mixed-effects joint models for longitudinal data with multiple features; and (iii) QR-based partially linear mixed-effects joint models for longitudinal-survival data with multiple features. The proposed joint models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also implemented to assess the performance of the proposed methods under different scenarios. Although this is a biostatistical methodology study, some interesting clinical findings are also discovered.
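The core of quantile regression is the check (pinball) loss, whose Bayesian counterpart is the asymmetric-Laplace working likelihood. A minimal frequentist sketch on simulated heavy-tailed data (the thesis's models add mixed effects, censoring, and a linked survival component on top of this core):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Quantile regression via the check loss rho_tau(u) = u * (tau - 1{u < 0}).
def check_loss(u, tau):
    return u * (tau - (u < 0))

n = 300
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.5*x + rng.standard_t(df=3, size=n)   # heavy-tailed noise

def objective(beta, tau):
    return np.sum(check_loss(y - beta[0] - beta[1]*x, tau))

for tau in (0.25, 0.5, 0.75):
    fit = minimize(objective, x0=[0.0, 0.0], args=(tau,), method="Nelder-Mead")
    print(f"tau={tau}: intercept={fit.x[0]:.2f}, slope={fit.x[1]:.2f}")
```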
6

Osborne, Michael A. "Bayesian Gaussian processes for sequential prediction, optimisation and quadrature." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:1418c926-6636-4d96-8bf6-5d94240f3d1f.

Abstract:
We develop a family of Bayesian algorithms built around Gaussian processes for various problems posed by sensor networks. We firstly introduce an iterative Gaussian process for multi-sensor inference problems, and show how our algorithm is able to cope with data that may be noisy, missing, delayed and/or correlated. Our algorithm can also effectively manage data that features changepoints, such as sensor faults. Extensions to our algorithm allow us to tackle some of the decision problems faced in sensor networks, including observation scheduling. Along these lines, we also propose a general method of global optimisation, Gaussian process global optimisation (GPGO), and demonstrate how it may be used for sensor placement. Our algorithms operate within a complete Bayesian probabilistic framework. As such, we show how the hyperparameters of our system can be marginalised by use of Bayesian quadrature, a principled method of approximate integration. Similar techniques also allow us to produce full posterior distributions for any hyperparameters of interest, such as the location of changepoints. We frame the selection of the positions of the hyperparameter samples required by Bayesian quadrature as a decision problem, with the aim of minimising the uncertainty we possess about the values of the integrals we are approximating. Taking this approach, we have developed sampling for Bayesian quadrature (SBQ), a principled competitor to Monte Carlo methods. We conclude by testing our proposals on real weather sensor networks. We further benchmark GPGO on a wide range of canonical test problems, over which it achieves a significant improvement on its competitors. Finally, the efficacy of SBQ is demonstrated in the context of both prediction and optimisation.
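The building block behind all of these algorithms is standard Gaussian process posterior prediction. A minimal numpy sketch with a squared-exponential kernel and illustrative data:

```python
import numpy as np

rng = np.random.default_rng(3)

# GP posterior mean and variance with a squared-exponential kernel.
def k(a, b, ell=1.0, sf=1.0):
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

x = rng.uniform(-3, 3, size=12)                 # hypothetical sensor readings
y = np.sin(x) + 0.1*rng.normal(size=x.size)
xs = np.linspace(-3, 3, 200)                    # prediction grid
sn2 = 0.1**2                                    # observation noise variance

K = k(x, x) + sn2*np.eye(x.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

mu = k(xs, x) @ alpha                           # posterior mean
v = np.linalg.solve(L, k(x, xs))
var = np.diag(k(xs, xs)) - np.sum(v**2, axis=0) # posterior variance

print(mu[:3], var[:3])
```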
7

Suvorov, Anton. "Molecular Evolution of Odonata Opsins, Odonata Phylogenomics and Detection of False Positive Sequence Homology Using Machine Learning." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7320.

Abstract:
My dissertation comprises three related topics in evolutionary and computational biology, corresponding to the three chapters. Chapter 1 focuses on the tempo and mode of evolution in visual genes, namely opsins, via duplication events and subsequent molecular adaptation in Odonata (dragonflies and damselflies). Gene duplication plays a central role in adaptation to novel environments by providing new genetic material for functional divergence and the evolution of biological complexity. Odonata have the largest opsin repertoire of any insect currently known. In particular, our results suggest that both the blue-sensitive (BS) and long-wave-sensitive (LWS) opsin classes were subjected to strong positive selection that greatly weakens after multiple duplication events, a pattern that is consistent with the permanent heterozygote model. Due to the immense interspecific variation and duplicability potential of opsin genes among odonates, they represent a unique model system to test hypotheses regarding opsin gene duplication and diversification at the molecular level. Chapter 2 primarily focuses on reconstruction of the phylogenetic backbone of Odonata using RNA-seq data. In order to reconstruct the evolutionary history of Odonata, we performed comprehensive phylotranscriptomic analyses of 83 species covering 75% of all extant odonate families. Using maximum likelihood, Bayesian, coalescent-based and alignment-free tree inference frameworks, we were able to test, refine and resolve previously controversial relationships within the order. In particular, we confirmed the monophyly of Zygoptera, recovered Gomphidae and Petaluridae as sister groups with high confidence, and identified Calopterygoidea as monophyletic. Fossil calibration coupled with diversification analyses provided insight into key events that influenced the evolution of Odonata. Specifically, we determined that there was a possible mass extinction of ancient odonate diversity during the P-Tr crisis and that a single odonate lineage persisted following this extinction event. Lastly, Chapter 3 focuses on the identification of erroneously assigned sequence homology using machine learning techniques. Accurate detection of homologous relationships between biological sequences (DNA or amino acid) across organisms is an important and often difficult task, essential to various evolutionary studies ranging from building phylogenies to predicting functional gene annotations. We developed biologically informative features that can be extracted from multiple sequence alignments of putative homologous genes (orthologs and paralogs) and further utilized in the context of guided experimentation to verify false positive outcomes.
8

Zhang, Fan. "Statistical Methods for Characterizing Genomic Heterogeneity in Mixed Samples." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/419.

Abstract:
"Recently, sequencing technologies have generated massive and heterogeneous data sets. However, interpretation of these data sets is a major barrier to understand genomic heterogeneity in complex diseases. In this dissertation, we develop a Bayesian statistical method for single nucleotide level analysis and a global optimization method for gene expression level analysis to characterize genomic heterogeneity in mixed samples. The detection of rare single nucleotide variants (SNVs) is important for understanding genetic heterogeneity using next-generation sequencing (NGS) data. Various computational algorithms have been proposed to detect variants at the single nucleotide level in mixed samples. Yet, the noise inherent in the biological processes involved in NGS technology necessitates the development of statistically accurate methods to identify true rare variants. At the single nucleotide level, we propose a Bayesian probabilistic model and a variational expectation maximization (EM) algorithm to estimate non-reference allele frequency (NRAF) and identify SNVs in heterogeneous cell populations. We demonstrate that our variational EM algorithm has comparable sensitivity and specificity compared with a Markov Chain Monte Carlo (MCMC) sampling inference algorithm, and is more computationally efficient on tests of relatively low coverage (27x and 298x) data. Furthermore, we show that our model with a variational EM inference algorithm has higher specificity than many state-of-the-art algorithms. In an analysis of a directed evolution longitudinal yeast data set, we are able to identify a time-series trend in non-reference allele frequency and detect novel variants that have not yet been reported. Our model also detects the emergence of a beneficial variant earlier than was previously shown, and a pair of concomitant variants. Characterization of heterogeneity in gene expression data is a critical challenge for personalized treatment and drug resistance due to intra-tumor heterogeneity. Mixed membership factorization has become popular for analyzing data sets that have within-sample heterogeneity. In recent years, several algorithms have been developed for mixed membership matrix factorization, but they only guarantee estimates from a local optimum. At the gene expression level, we derive a global optimization (GOP) algorithm that provides a guaranteed epsilon-global optimum for a sparse mixed membership matrix factorization problem for molecular subtype classification. We test the algorithm on simulated data and find the algorithm always bounds the global optimum across random initializations and explores multiple modes efficiently. The GOP algorithm is well-suited for parallel computations in the key optimization steps. "
9

Asgrimsson, David Steinar. "Quantifying uncertainty in structural condition with Bayesian deep learning : A study on the Z-24 bridge benchmark." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-251451.

Abstract:
A machine learning approach to damage detection is presented for a bridge structural health monitoring system, validated on the renowned Z-24 bridge benchmark dataset, in which a sensor-instrumented, three-span bridge was realistically damaged in stages. A Bayesian autoencoder neural network is trained to reconstruct raw sensor data sequences, with uncertainty bounds on its predictions. The reconstruction error is then compared with a healthy-state error distribution, and the sequence is classified as coming from a healthy state or not. Several realistic damage stages were successfully detected, making this a viable approach for a data-based monitoring system of an operational bridge. The result is a fully operational, machine-learning-based bridge damage detection system, learned directly from raw sensor data.
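A hedged sketch of reconstruction-error damage detection, with PCA standing in for the thesis's Bayesian autoencoder (a linear autoencoder spans the PCA subspace); the healthy-state error distribution supplies the alarm threshold, as in the approach above. Data and the simulated "damage" are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

# Train on healthy-state sequences, threshold on the healthy error
# distribution, flag sequences whose reconstruction error exceeds it.
def make_data(n, freq=5.0):
    t = np.linspace(0.0, 1.0, 64)
    return np.sin(2*np.pi*freq*t) + 0.1*rng.normal(size=(n, t.size))

healthy = make_data(500)
mean = healthy.mean(axis=0)
U = np.linalg.svd(healthy - mean, full_matrices=False)[2][:8]  # top 8 components

def recon_error(X):
    Z = (X - mean) @ U.T                              # encode
    return np.linalg.norm(X - mean - Z @ U, axis=1)   # decode and compare

threshold = np.quantile(recon_error(healthy), 0.99)   # healthy-state bound
damaged = make_data(50, freq=5.4)   # simulated stiffness/frequency shift
print("fraction flagged:", np.mean(recon_error(damaged) > threshold))
```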
10

Kennedy, Justin M. "Wave-induced marine craft motion estimation and control." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/213481/1/Justin_Kennedy_Thesis.pdf.

Abstract:
Marine craft at sea are affected by environmental disturbances, including long-term ocean currents and relatively higher-frequency wave disturbances. These disturbances act on vessels, resulting in wave-induced motion that reduces the performance of motion control systems and affects the safety of crew and cargo. This thesis investigates parameter estimation techniques for the online estimation of wave-induced motion models and platform control of marine craft in the presence of environmental disturbances.
11

Stanaway, Mark Andrew. "Hierarchical Bayesian models for estimating the extent of plant pest invasions." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/40852/1/Mark_Stanaway_Thesis.pdf.

Abstract:
Plant biosecurity requires statistical tools to interpret field surveillance data in order to manage pest incursions that threaten crop production and trade. Ultimately, management decisions need to be based on the probability that an area is infested or free of a pest. Current informal approaches to delimiting pest extent rely upon expert ecological interpretation of presence / absence data over space and time. Hierarchical Bayesian models provide a cohesive statistical framework that can formally integrate the available information on both pest ecology and data. The overarching method involves constructing an observation model for the surveillance data, conditional on the hidden extent of the pest and uncertain detection sensitivity. The extent of the pest is then modelled as a dynamic invasion process that includes uncertainty in ecological parameters. Modelling approaches to assimilate this information are explored through case studies on spiralling whitefly, Aleurodicus dispersus and red banded mango caterpillar, Deanolis sublimbalis. Markov chain Monte Carlo simulation is used to estimate the probable extent of pests, given the observation and process model conditioned by surveillance data. Statistical methods, based on time-to-event models, are developed to apply hierarchical Bayesian models to early detection programs and to demonstrate area freedom from pests. The value of early detection surveillance programs is demonstrated through an application to interpret surveillance data for exotic plant pests with uncertain spread rates. The model suggests that typical early detection programs provide a moderate reduction in the probability of an area being infested but a dramatic reduction in the expected area of incursions at a given time. Estimates of spiralling whitefly extent are examined at local, district and state-wide scales. The local model estimates the rate of natural spread and the influence of host architecture, host suitability and inspector efficiency. These parameter estimates can support the development of robust surveillance programs. Hierarchical Bayesian models for the human-mediated spread of spiralling whitefly are developed for the colonisation of discrete cells connected by a modified gravity model. By estimating dispersal parameters, the model can be used to predict the extent of the pest over time. An extended model predicts the climate restricted distribution of the pest in Queensland. These novel human-mediated movement models are well suited to demonstrating area freedom at coarse spatio-temporal scales. At finer scales, and in the presence of ecological complexity, exploratory models are developed to investigate the capacity for surveillance information to estimate the extent of red banded mango caterpillar. It is apparent that excessive uncertainty about observation and ecological parameters can impose limits on inference at the scales required for effective management of response programs. The thesis contributes novel statistical approaches to estimating the extent of pests and develops applications to assist decision-making across a range of plant biosecurity surveillance activities. Hierarchical Bayesian modelling is demonstrated as both a useful analytical tool for estimating pest extent and a natural investigative paradigm for developing and focussing biosecurity programs.
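The elementary calculation underlying such surveillance models is the posterior probability that an area is infested after repeated negative surveys with imperfect detection. A sketch with assumed prior and sensitivity values (the thesis embeds this in a dynamic spatial invasion model):

```python
import numpy as np

# Bayes' rule for area freedom under imperfect detection.
prior = 0.10          # assumed prior probability the area is infested
sensitivity = 0.6     # assumed probability one survey detects an existing pest

def posterior_infested(n_negative_surveys, p=prior, s=sensitivity):
    miss = (1 - s) ** n_negative_surveys   # P(all surveys negative | infested)
    return p * miss / (p * miss + (1 - p))

for n in range(6):
    print(n, round(posterior_infested(n), 4))
```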
12

Saade, Alaa. "Spectral inference methods on sparse graphs : theory and applications." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE024/document.

Abstract:
In an era of unprecedented deluge of (mostly unstructured) data, graphs are proving more and more useful, across the sciences, as a flexible abstraction to capture complex relationships between complex objects. One of the main challenges arising in the study of such networks is the inference of macroscopic, large-scale properties, affecting a large number of objects, based solely on the microscopic interactions between their elementary constituents. Statistical physics, created precisely to recover the macroscopic laws of thermodynamics from an idealized model of interacting particles, provides significant insight for tackling such complex networks. In this dissertation, we use methods derived from the statistical physics of disordered systems to design and study new algorithms for inference on graphs. Our focus is on spectral methods, based on certain eigenvectors of carefully chosen matrices, and on sparse graphs, which contain only a small amount of information. We develop an original theory of spectral inference based on a relaxation of various mean-field free-energy optimizations. Our approach is therefore fully probabilistic, and contrasts with more traditional motivations based on the optimization of a cost function. We illustrate the efficiency of our approach on various problems, including community detection, randomized similarity-based clustering, and matrix completion.
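One spectral operator studied in this line of work is the Bethe Hessian H(r) = (r^2 - 1)I - rA + D, whose negative eigenvalues reveal community structure in sparse graphs. A sketch on a toy two-group stochastic block model (parameter choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Sample a two-group stochastic block model.
n, cin, cout = 400, 9.0, 1.0
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], cin / n, cout / n)
A = np.triu(rng.uniform(size=(n, n)) < P, 1).astype(float)
A = A + A.T

# Bethe Hessian with r near the square root of the average degree.
D = np.diag(A.sum(axis=1))
r = np.sqrt(A.sum() / n)
H = (r**2 - 1.0) * np.eye(n) - r * A + D

vals, vecs = np.linalg.eigh(H)                # ascending eigenvalues
guess = (vecs[:, 1] > 0).astype(int)          # split on 2nd-lowest eigenvector
overlap = max(np.mean(guess == labels), np.mean(guess != labels))
print("negative eigenvalues:", np.sum(vals < 0), "| overlap:", overlap)
```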
13

Decelle, Aurélien. "Statistical physics of disordered networks - Spin Glasses on hierarchical lattices and community inference on random graphs." PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00653375.

Abstract:
This thesis addresses fundamental and applied aspects of the theory of spin glasses and, more generally, of complex systems. The first theoretical models describing the glass transition appeared in the 1970s, describing glasses by means of random interactions. It then took several years before a mean-field theory for these systems was understood. Nowadays there is a large number of models falling into the "mean-field" class that are well understood both analytically and numerically, thanks to tools such as Monte Carlo simulation and the cavity method. Moreover, it is well known that the renormalization group has so far failed to predict the behaviour of critical observables in glasses beyond mean field. We therefore chose to study systems with long-range interactions, for which it is still unknown whether their physics is identical to the mean-field one. In a first part, we show how easily a renormalization-group transformation can be described in ferromagnetic systems with long-range interactions defined on the Dyson hierarchical lattice. We then turn our attention to spin-glass models on the same lattice. A first analysis of these real-space transformations is presented, together with a comparison of the measurement of the critical exponent nu by different methods. While the transformation described seems promising, it must still be improved before it can be considered a valid method for our system. We continued in the same direction by analysing a random-energy model, again using the topology of the hierarchical lattice. We studied this system numerically and observed a phase transition of the "entropy crisis" type, quite similar to that of Derrida's REM. However, our model displays important differences from the latter, such as the non-analytic behaviour of the entropy at the transition and the emergence of "criticality", whose presence remains to be confirmed by further studies. We also show, using our numerical method, how the critical temperature of this system can be estimated in three different ways. In a final part, we addressed problems related to complex systems. It has recently been noticed that models studied in various fields, for instance physics, biology or computer science, are very close to one another. This is particularly true in combinatorial optimization, which has in part been studied with statistical physics methods. These methods, stemming from the theory of spin glasses and structural glasses, have been widely used to study the phase transitions occurring in these systems and to devise new algorithms for these models. We studied the problem of module inference in networks using these same methods. We present an analysis of the detection of topological modules in random networks and demonstrate the presence of a phase transition between a region where these modules are undetectable and a region where they are detectable. Furthermore, we implemented for these problems an algorithm using belief propagation in order to infer the modules and to learn their properties using only the network structure as information. Finally, we applied this algorithm to networks built from real data and discuss the developments our method still requires.
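For the two-group sparse stochastic block model, the detectability transition discussed above has a closed form: the planted modules are detectable if and only if |c_in - c_out| > 2*sqrt(c), with c the average degree. A small check:

```python
import numpy as np

# Detectability threshold for the symmetric two-group sparse SBM:
# modules are recoverable only when |c_in - c_out| > 2 * sqrt(c),
# where c = (c_in + c_out) / 2 is the average degree.
def detectable(c_in, c_out):
    c = 0.5 * (c_in + c_out)
    return abs(c_in - c_out) > 2.0 * np.sqrt(c)

for c_in, c_out in [(5, 1), (4, 2), (3.5, 2.5)]:
    print(c_in, c_out, "->",
          "detectable" if detectable(c_in, c_out) else "undetectable")
```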
14

Harlé, Flore. "Détection de ruptures multiples dans des séries temporelles multivariées : application à l'inférence de réseaux de dépendance." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT043/document.

Abstract:
This thesis presents a method for multiple change-point detection in multivariate time series, and exploits the results to estimate the relationships between the components of the system. The originality of the model, called the Bernoulli Detector, lies in the combination of local statistics from a robust rank-based test with a global Bayesian framework. This nonparametric model does not require strong hypotheses on the distribution of the observations. It is applicable without modification to Gaussian data as well as to data corrupted by outliers. The detection of a single change-point is controlled even for small samples. In a multivariate context, a term is introduced to model the dependencies between the changes, assuming that if two components are connected, events occurring in the first tend to affect the second instantaneously. Thanks to this flexible model, the segmentation is sensitive to common changes shared by several signals as well as to isolated changes occurring in a single signal. The method is compared with other solutions from the literature, in particular on real datasets of household electricity consumption and genomic measurements. These experiments highlight the interest of the model for the detection of change-points in independent, conditionally independent or fully connected signals. The synchronization of change-points within the time series is finally exploited to estimate the relationships between the variables, using the Bayesian network formalism. By adapting the score function of a structure-learning method, it is verified that the independence model describing the system can be partly retrieved through the information given by the change-points estimated by the Bernoulli Detector.
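The flavour of the rank-based local statistic can be shown with a simple scan: test every candidate split with a Wilcoxon-Mann-Whitney comparison of the two segments. This is a univariate, single change-point sketch, not the Bernoulli Detector itself:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)

# Mean shift at t = 120 in a noisy series.
x = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)])

best_t, best_p = None, 1.0
for t in range(20, len(x) - 20):               # enforce a minimal segment length
    p = mannwhitneyu(x[:t], x[t:]).pvalue      # rank test: robust to outliers
    if p < best_p:
        best_t, best_p = t, p

print(f"estimated change point: {best_t} (true: 120), p-value {best_p:.2e}")
```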
15

Teixeira, Josiele da Silva. "Identificação de danos estruturais via método de Monte Carlo com cadeias de Markov." Universidade do Estado do Rio de Janeiro, 2014. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6733.

Abstract:
This work presents a study on the application of the Bayesian approach as a technique for solving the inverse problem of structural damage identification, where the integrity of the structure is continuously described by a structural cohesion parameter. The structure chosen for analysis is a simply supported Euler-Bernoulli beam. The damage identification is based on changes in the impulse response of the structure caused by the presence of damage. The direct problem is solved by the finite element method (FEM), which, in turn, is parameterized by the cohesion parameter of the structure. The damage identification problem is formulated as an inverse problem whose solution, in the Bayesian framework, is an a posteriori probability distribution of the cohesion parameters, obtained using Markov chain Monte Carlo sampling. The uncertainties inherent in the measured data are included in the likelihood function. Three solution strategies are presented. In Strategy 1, the cohesion parameters of the structure are sampled from a posteriori probability density functions that have the same standard deviation. In Strategy 2, after a preliminary analysis of the damage identification process, potentially damaged regions of the beam are determined and the cohesion parameters associated with these regions are sampled from a posteriori probability density functions with different deviations. In Strategy 3, after a preliminary analysis of the damage identification process, only the parameters associated with regions identified as potentially damaged are updated. A set of numerical results is presented, taking into account different noise levels for the three strategies considered.
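A hedged sketch of the Metropolis machinery on a toy problem: a single stiffness ("cohesion") parameter is sampled from its posterior, with a one-degree-of-freedom oscillator standing in for the Euler-Bernoulli finite element model of the thesis. All constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy forward model: first natural frequency of a spring-mass system.
m = 1.0
def forward(k):
    return np.sqrt(k / m) / (2.0 * np.pi)

k_true, noise = 0.7, 0.002                    # damaged stiffness, sensor noise
data = forward(k_true) + noise * rng.normal(size=20)

def log_post(k):
    if not 0.0 < k <= 1.0:                    # prior: damage can only reduce k
        return -np.inf
    return -0.5 * np.sum((data - forward(k))**2) / noise**2

k, chain = 1.0, []
for _ in range(20000):                        # random-walk Metropolis
    prop = k + 0.02 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop
    chain.append(k)

post = np.array(chain[5000:])
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f} (true {k_true})")
```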
16

Rozas, Rony. "Intégration du retour d'expérience pour une stratégie de maintenance dynamique." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1112/document.

Abstract:
The optimization of maintenance strategies is a major issue for many industrial applications. It involves establishing a maintenance plan that ensures high levels of safety, security and reliability at minimal cost, while respecting any constraints. The growing number of works on the optimization of maintenance parameters, in particular on scheduling preventive maintenance actions, underlines the importance of this issue. A large number of studies on maintenance are based on modelling the degradation of the system under study. Probabilistic Graphical Models (PGMs), and especially Markovian PGMs (M-PGMs), provide a framework for modelling complex stochastic processes. The issue with this approach is that the quality of the results depends on the quality of the model. Moreover, the parameters of the system under consideration may change over time, usually as a consequence of a change of supplier for replacement parts or a change in operating parameters. This thesis deals with the dynamic adaptation of a maintenance strategy to a system whose parameters change. The proposed methodology is based on change-detection algorithms for streams of sequential data and on a new probabilistic inference method specific to dynamic Bayesian networks. Furthermore, the algorithms proposed in this thesis are implemented in the framework of a research project with Bombardier Transportation. The study focuses on the maintenance of the passenger access system of a new trainset designed to operate on the rail network of Ile-de-France. The overall objective is to ensure a high level of safety and reliability during train operation.
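A classic sequential change-detection statistic of the kind such a methodology relies on is the Page-Hinkley test. A compact sketch on a simulated stream with a mean shift (threshold and drift values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)

# Page-Hinkley test for an upward mean shift in a data stream.
def page_hinkley(stream, delta=0.05, lam=5.0):
    mean, cum, cum_min = 0.0, 0.0, 0.0
    for t, x in enumerate(stream, 1):
        mean += (x - mean) / t                 # running mean
        cum += x - mean - delta                # cumulative deviation
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:                # alarm threshold
            return t
    return None

stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.0, 1, 100)])
print("change detected at sample:", page_hinkley(stream))
```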
17

Qin, Yingying. "Early breast anomalies detection with microwave and ultrasound modalities." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG058.

Abstract:
Imaging of the breast for early detection of tumors is studied by associating microwave (MW) and ultrasound (US) data. No registration is enforced, since a freely hanging breast is considered. A first approach uses prior information on tissue boundaries yielded by US reflection data. Regularization incorporates the idea that two neighboring pixels should exhibit similar MW properties when not on a boundary, while a jump is allowed otherwise. This is enforced in the distorted Born iterative and contrast source inversion methods. A second approach involves deterministic edge-preserving regularization via auxiliary variables indicating whether a pixel is on an edge, the edge markers being shared by the MW and US parameters. These are jointly optimized from the latest parameter profiles and guide the next optimization as regularization-term coefficients. Alternate minimization updates the US contrast, the edge markers, and the MW contrast in turn. A third approach involves convolutional neural networks. The estimated contrast current and the scattered field are the inputs. A multi-stream structure is employed to feed in the MW and US data. The network outputs the maps of MW and US parameters in real time. Apart from the regression task, a multi-task learning strategy is used with a classifier that associates each pixel with a tissue type to yield a segmentation image. A weighted loss assigns a higher penalty to pixels in tumors when they are wrongly classified. A fourth approach involves a Bayesian formalism where the joint posterior distribution is obtained via Bayes' rule; this true distribution is then approximated by a free-form separable law for each set of unknowns to obtain the sought estimate. All these solution methods are illustrated and compared on a wealth of simulated data, on simple synthetic models and on 2D cross-sections of anatomically realistic MRI-derived numerical breast phantoms in which small artificial tumors are inserted.
18

Tiomoko, Ali Hafiz. "Nouvelles méthodes pour l’apprentissage non-supervisé en grandes dimensions." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC074/document.

Abstract:
Spurred by recent advances in the theoretical analysis of the performance of machine learning algorithms, this thesis tackles the performance analysis and improvement of clustering for high-dimensional data and graphs. Specifically, in the first and larger part of the thesis, using advanced tools from random matrix theory, the performance of spectral methods on dense realistic graph models and on high-dimensional kernel random matrices is analysed through the study of the eigenvalues and eigenvectors of the similarity matrices characterizing those data. New, improved methods are proposed on the basis of this analysis and are shown through numerous simulations to outperform state-of-the-art approaches. In the second part, a new algorithm is proposed for the detection of heterogeneous communities across the layers of a multi-layer graph with several interaction types, using variational Bayes approaches to approximate the posterior distribution of the model's latent variables. All methods proposed in this thesis are applied to synthetic benchmarks as well as real-world datasets and are shown to outperform standard clustering approaches in those specific contexts.
19

Monteiro, João Filipe Gonçalves. "Modelo combinado captura-recaptura e transectos lineares: uma abordagem bayesiana." Doctoral thesis, Universidade de Évora, 2010. http://hdl.handle.net/10174/17969.

Abstract:
This work presents a Bayesian approach to estimate the probability of an animal/object being detected on the transect line, known as g0, using the combined line transect and capture-recapture model (Alpizar-Jara and Pollock, 1999, in Marine Mammal Survey and Assessment Methods, 99-114 pp.). An estimator for population size is generally biased in the presence of heterogeneity in capture probabilities relative to the inherent characteristics of the individuals; that sort of heterogeneity is difficult to measure because it is not observable. This kind of problem has traditionally been approached using capture-recapture models for closed populations, designated Mh and Mth. This thesis formulates a generalized combined capture-recapture and line transect model for closed populations that takes into account heterogeneity in detection probability relative to the inherent characteristics of the individuals. The probability of sighting an animal (or an object) on the transect line is estimated assuming that it is less than or equal to 1. We assume that resighting probabilities depend on individual characteristics. Logistic regression is used to model observable heterogeneity in individual detection probabilities using covariates such as sex, age, and group size. Non-observable heterogeneity is modelled as a random effect. g0 is estimated as an average of individual-based information, as if each individual were on the centre transect line. The performance of estimators of the probability of detection at distance zero based on combined models is analysed through simulation, using the software R, comparing the cases where only observable heterogeneity is modelled and where both observable and non-observable heterogeneity are modelled. The posterior distributions of the key parameters of the detection function were obtained using Gibbs sampling through Markov chain Monte Carlo implemented in WINBUGS. The results are illustrated by an example using a chamois (Rupicapra p. pyrenaica) population from Cauterets, Parc National des Pyrénées (southern France).
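A sketch of the observable-heterogeneity part: fit a logistic detection model in covariates (here a hypothetical group-size effect) by maximum likelihood and average the fitted probabilities to estimate g0. This simplified setting assumes detections and misses are both recorded, which the combined capture-recapture design is there to provide:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

# Simulated on-line sightings with a group-size covariate.
n = 400
group_size = rng.poisson(3, size=n) + 1
X = np.column_stack([np.ones(n), np.log(group_size)])
beta_true = np.array([0.2, 0.9])
seen = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ beta_true))

def negloglik(beta):                           # Bernoulli-logistic likelihood
    eta = X @ beta
    return -np.sum(seen * eta - np.log1p(np.exp(eta)))

beta_hat = minimize(negloglik, x0=np.zeros(2)).x
g0_hat = np.mean(1.0 / (1.0 + np.exp(-X @ beta_hat)))   # average over individuals
print(f"estimated g0 = {g0_hat:.3f}")
```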
20

Narasimha, Rajesh. "Application of Information Theory and Learning to Network and Biological Tomography." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19889.

Abstract:
Studying the internal characteristics of a network using measurements obtained from end hosts is known as network tomography. The foremost challenge in measurement-based approaches is the large size of a network, where only a subset of measurements can be obtained because the entire network is inaccessible. As the network grows, a question arises as to how rapidly the monitoring resources (number of measurements or number of samples) must grow to maintain a desired monitoring accuracy. Our work studies the scalability of the measurements with respect to the size of the network. We investigate the issues of scalability and performance evaluation in IP networks, focusing specifically on fault and congestion diagnosis. We formulate network monitoring as a machine learning problem using probabilistic graphical models that infer network states from path-based measurements. We consider the theoretical and practical management resources needed to reliably diagnose congested or faulty network elements, and we provide fundamental limits on the relationships between the number of probe packets, the size of the network, and the ability to diagnose such elements accurately. We derive lower bounds on the average number of probes per edge using the variational inference technique proposed in the context of graphical models under noisy probe measurements, and then propose an entropy lower (EL) bound by drawing similarities between the coding problem over a binary symmetric channel and the diagnosis problem. Our investigation is supported by simulation results. For the congestion diagnosis case, we propose a solution based on decoding linear error-control codes on a binary symmetric channel for various probing experiments. To identify the congested nodes, we construct a graphical model and infer congestion using the belief propagation algorithm.

In the second part of the work, we focus on methods to automatically analyze the information contained in electron tomograms, a major challenge since tomograms are extremely noisy. Advances in automated data acquisition in electron tomography have led to an explosion in the amount of data that can be obtained about the spatial architecture of a variety of biologically and medically relevant objects with sizes in the range of 10-1000 nm. A fundamental step in the statistical inference over such large amounts of data is to segment relevant 3D features in cellular tomograms. Segmentation procedures must work robustly and rapidly despite the low signal-to-noise ratios inherent in biological electron microscopy. This work evaluates various denoising techniques and then extracts relevant features of biological interest in tomograms of HIV-1 in infected human macrophages and in Bdellovibrio bacterial tomograms recorded at room and cryogenic temperatures. Our approach represents an important step in automating the efficient extraction of useful information from large datasets in biological tomography and in speeding up the process of reducing gigabyte-sized tomograms to relevant byte-sized data. Next, we investigate automatic techniques for segmentation and quantitative analysis of mitochondria in MNT-1 cells imaged with an ion-abrasion scanning electron microscope, and of tomograms of liposomal doxorubicin formulations (Doxil), an anticancer nanodrug, imaged at cryogenic temperatures. A machine learning approach is formulated that exploits texture features; joint block-wise image classification and segmentation is performed by histogram matching, using a nearest-neighbor classifier with the chi-squared statistic as the distance measure.
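To make the final histogram-matching step concrete, here is a minimal sketch of block-wise nearest-neighbor classification under the chi-squared distance; the bin count, intensity range, and template format are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    # Chi-squared distance between two normalized histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify_block(block, templates, bins=32):
    # templates: list of (label, normalized histogram) pairs built from
    # labeled training blocks; the test block is assigned the label of
    # its nearest template under the chi-squared distance.
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    hist = hist / max(hist.sum(), 1)
    label, _ = min(templates, key=lambda t: chi2_distance(hist, t[1]))
    return label
```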
APA, Harvard, Vancouver, ISO, and other styles
21

Sahin, Serdar. "Advanced receivers for distributed cooperation in mobile ad hoc networks." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0089.

Full text
Abstract:
Mobile ad hoc networks (MANETs) are rapidly deployable wireless communication systems that operate with minimal coordination in order to avoid the spectral efficiency losses caused by signaling overhead. Cooperative transmission schemes are attractive for MANETs, but the distributed nature of such protocols comes with an increased level of interference, whose impact grows more severe as the limits of energy and spectral efficiency are pushed. Hence, the impact of interference has to be mitigated through the use of PHY-layer signal processing algorithms with reasonable computational complexity. Recent advances in iterative digital receiver design exploit approximate Bayesian inference and the associated message passing techniques to improve the capabilities of well-established turbo detectors. In particular, expectation propagation (EP) is a flexible technique which offers attractive complexity-performance trade-offs in situations where conventional belief propagation is limited by computational complexity. Moreover, thanks to emerging techniques in deep learning, such iterative structures can be cast into deep detection networks, where learning the algorithmic hyper-parameters further improves receiver performance. In this thesis, EP-based finite-impulse-response decision feedback equalizers are designed; they achieve significant improvements over more conventional turbo-equalization techniques, especially in high spectral efficiency applications, while having the advantage of being asymptotically predictable. A framework for designing frequency-domain EP-based receivers is proposed in order to obtain detection architectures with low computational complexity. This framework is analysed theoretically and numerically, with a focus on channel equalization, and is then extended to handle detection for time-varying channels and multiple-antenna systems. The design of multi-user detectors and the impact of channel estimation are also explored in order to understand the capabilities and limits of this framework. Finally, a finite-length performance prediction method is presented for carrying out link abstraction for the EP-based frequency-domain equalizer. The impact of accurate physical layer modelling is evaluated in the context of cooperative broadcasting in tactical MANETs, using a flexible MAC-level simulator.
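The core EP operation in such equalizers is a moment-matching projection of the symbol posterior back onto a Gaussian family. Below is a minimal sketch of that single step, assuming a uniform symbol prior and a scalar extrinsic message; the constellation and variable names are illustrative, not the thesis's notation:

```python
import numpy as np

def ep_symbol_projection(mu_ext, var_ext, constellation):
    # Combine the Gaussian extrinsic message N(mu_ext, var_ext) with a
    # uniform discrete prior over the constellation, then project the
    # resulting discrete posterior onto a Gaussian by moment matching.
    logw = -np.abs(constellation - mu_ext) ** 2 / var_ext
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mean = np.sum(w * constellation)
    var = np.sum(w * np.abs(constellation - mean) ** 2)
    return mean, max(var, 1e-12)

# Example: project onto a QPSK alphabet.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
print(ep_symbol_projection(0.3 + 0.2j, 0.5, qpsk))
```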
APA, Harvard, Vancouver, ISO, and other styles
22

Pepi, Chiara. "Suitability of dynamic identification for damage detection in the light of uncertainties on a cable stayed footbridge." Doctoral thesis, 2019. http://hdl.handle.net/2158/1187384.

Full text
Abstract:
Structural identification is a very important task, especially in countries characterized by a significant historical and architectural heritage and by strongly vulnerable infrastructure subjected to inherent degradation over time and to natural hazards, e.g., seismic loads. The structural response of existing constructions is usually estimated using suitable numerical models driven by a set of geometrical and/or mechanical parameters that are largely unknown and/or affected by different levels of uncertainty. Some of this information can be obtained by experimental tests, but it is practically impossible to collect all the data required for reliable response estimates. For these reasons it is current practice to calibrate some of the significant unknown and/or uncertain geometrical and mechanical parameters using measurements of the actual response (static and/or dynamic) and solving an inverse structural problem. Model calibration is also affected by uncertainties due to the quality (e.g., signal-to-noise ratio, random properties) of the measured data and to the algorithms used to estimate the structural parameters. In this thesis a new robust framework for structural identification is proposed in order to obtain a reliable numerical model that can be used both for random response estimation and for structural health monitoring. First, a parametric numerical model of the existing structural system is developed and updated within a probabilistic Bayesian framework. Second, virtual samples of the structural response under random loads are evaluated. Third, these virtual samples are used as virtual experimental responses in order to analyze the uncertainties in the main modal parameters while varying the number and time length of the samples, the identification technique, and the target response. Finally, the information given by the measurement uncertainties is used to assess the capability of vibration-based damage identification methods.
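As an illustration of the Bayesian updating step, here is a minimal random-walk Metropolis sketch calibrating a single stiffness scaling factor against identified natural frequencies; the prior bounds, noise level, and toy frequency model are assumptions made for the example, not the thesis's actual model:

```python
import numpy as np

def log_posterior(theta, f_measured, f_model, sigma):
    # Gaussian likelihood on the identified natural frequencies, with a
    # flat prior on a physically plausible range of the stiffness scaling
    # factor theta (the bounds are assumptions for this example).
    if not 0.1 < theta < 10.0:
        return -np.inf
    resid = f_measured - f_model(theta)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(f_measured, f_model, sigma, n_iter=20000, step=0.05):
    # Random-walk Metropolis over the single calibration parameter.
    theta = 1.0
    lp = log_posterior(theta, f_measured, f_model, sigma)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * np.random.randn()
        lp_prop = log_posterior(prop, f_measured, f_model, sigma)
        if np.log(np.random.rand()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)

# Toy model: natural frequencies scale with the square root of stiffness.
f_model = lambda th: np.sqrt(th) * np.array([1.2, 3.4, 7.1])
draws = metropolis(np.array([1.25, 3.3, 7.0]), f_model, sigma=0.1)
print(draws[5000:].mean())  # posterior mean after burn-in
```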
APA, Harvard, Vancouver, ISO, and other styles
23

Liu, Che-Hsun, and 劉哲勳. "A Novel Android Malware Detection Using Bayesian Inference." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/16037116098850344753.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
103 (ROC calendar year, i.e., 2014)
Android malware detection has been a popular research topic due to the non-negligible amount of malware targeting the Android operating system. In particular, the naive Bayes generative classifier is a common technique widely adopted in many papers. However, we found that the naive Bayes classifier performs badly on the Contagio Malware Dump dataset, which could result from its assumption that no feature dependencies exist. In this paper, we propose a lightweight method for Android malware detection that improves the performance of Bayesian classification on the Contagio Malware Dump dataset. It performs static analysis to gather malicious features from an application and applies principal component analysis to reduce the dependencies among them. With the hidden naive Bayes model, we can infer the identity of the application. In an evaluation with 15,573 normal applications and 3,150 malicious samples, our work detects 94.5% of the malware with a false positive rate of 1.0%. The experiments also show that our approach is feasible on smartphones.
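A rough sketch of the decorrelation idea follows: PCA applied to static-analysis features ahead of a Bayes classifier. GaussianNB is used here as a simple stand-in for the hidden naive Bayes model of the thesis, and the data are synthetic placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((1000, 120))     # stand-in for permission/API-call features
y = rng.integers(0, 2, 1000)    # 0 = benign, 1 = malware (synthetic labels)

# PCA removes linear dependencies among the features, so the conditional
# independence assumption of the downstream Bayes classifier is less
# severely violated; GaussianNB stands in for the hidden naive Bayes model.
clf = make_pipeline(PCA(n_components=20), GaussianNB())
clf.fit(X, y)
malware_prob = clf.predict_proba(X)[:, 1]
```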
APA, Harvard, Vancouver, ISO, and other styles
24

Bhattacharya, Archan. "Inference for controlled branching process, Bayesian inference for zero-inflated count data and Bayesian techniques for hairline fracture detection and reconstruction." 2007. http://purl.galileo.usg.edu/uga%5Fetd/bhattacharya%5Farchan%5F200705%5Fphd.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Gonzalez, Ruben. "Bayesian Methods for On-Line Gross Error Detection and Compensation." Master's thesis, 2010. http://hdl.handle.net/10048/1541.

Full text
Abstract:
Data reconciliation and gross error detection are traditional methods for detecting mass-balance inconsistency within process instrument data. These methods use a static approach for statistical evaluation. This thesis is concerned with using an alternative statistical approach (Bayesian statistics) to detect mass-balance inconsistency in real time. The proposed dynamic Bayesian solution makes use of a state-space process model that incorporates mass-balance relationships, so that a governing set of mass-balance variables can be estimated using a Kalman filter. Due to the incorporation of mass balances, many model parameters are defined by first principles. However, some parameters, namely the observation and state covariance matrices, need to be estimated from process data before the dynamic Bayesian methods can be applied. This thesis makes use of Bayesian machine learning techniques to estimate these parameters, separating process disturbances from instrument measurement noise.
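A minimal sketch of the estimation step: one Kalman predict/update cycle over the mass-balance states. The matrices F, H, Q, and R stand for the process model, measurement map, and the two covariance matrices the thesis learns from data; their construction is not shown here:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # Predict: propagate the mass-balance state and its covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the instrument measurements z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```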
Process Control
APA, Harvard, Vancouver, ISO, and other styles
26

Ratto, Christopher Ralph. "Nonparametric Bayesian Context Learning for Buried Threat Detection." Diss., 2012. http://hdl.handle.net/10161/5413.

Full text
Abstract:

This dissertation addresses the problem of detecting buried explosive threats (i.e., landmines and improvised explosive devices) with ground-penetrating radar (GPR) and hyperspectral imaging (HSI) across widely varying environmental conditions. Automated detection of buried objects with GPR and HSI is particularly difficult due to the sensitivity of sensor phenomenology to variations in local environmental conditions. Past approaches have attempted to mitigate the effects of ambient factors by designing statistical detection and classification algorithms to be invariant to such conditions. These methods have generally taken the approach of extracting features that exploit the physics of a particular sensor to provide a low-dimensional representation of the raw data for distinguishing targets from non-targets. A statistical classification rule is then usually applied to the features. However, it may be difficult for feature extraction techniques to adapt to the highly nonlinear effects of near-surface environmental conditions on sensor phenomenology, as well as to re-train the classifier for use under new conditions. Furthermore, the search for an invariant set of features ignores the possibility that one approach may yield the best performance under one set of terrain conditions (e.g., dry), while another might be better under another set of conditions (e.g., wet).

An alternative approach to improving detection performance is to consider exploiting differences in sensor behavior across environments rather than mitigating them, and treat changes in the background data as a possible source of supplemental information for the task of classifying targets and non-targets. This approach is referred to as context-dependent learning.

Although past researchers have proposed context-based approaches to detection and decision fusion, the definition of context used in this work differs from those used in the past. In this work, context is motivated by the physical state of the world from which an observation is made, and not from properties of the observation itself. The proposed context-dependent learning technique therefore utilized additional features that characterize soil properties from the sensor background, and a variety of nonparametric models were proposed for clustering these features into individual contexts. The number of contexts was assumed to be unknown a priori, and was learned via Bayesian inference using Dirichlet process priors.

The learned contextual information was then exploited by an ensemble of classifiers trained for classifying targets in each of the learned contexts. For GPR applications, the classifiers were trained to perform algorithm fusion; for HSI applications, the classifiers were trained to perform band selection. The detection performance of all proposed methods was evaluated on data from U.S. government test sites and compared to several algorithms from the recent literature, several of which have been deployed in fielded systems. Experimental results illustrate the potential for context-dependent learning to improve the detection performance of GPR and HSI across varying environments.
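A sketch of the context-clustering idea using a truncated Dirichlet process mixture; the soil features are synthetic placeholders, and scikit-learn's variational implementation stands in for the inference machinery developed in the dissertation:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
soil_features = rng.normal(size=(500, 4))   # placeholder background features

# Truncated Dirichlet process mixture: up to n_components contexts are
# allowed, but the DP prior lets the data decide how many remain active.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(soil_features)
contexts = dpgmm.predict(soil_features)
print(np.unique(contexts))  # indices of the contexts actually used
```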


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
27

Xun, Xiaolei. "Statistical Inference in Inverse Problems." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10874.

Full text
Abstract:
Inverse problems have recently gained popularity in statistical research. This dissertation consists of two statistical inverse problems: a Bayesian approach to the detection of small low-emission sources on a large random background, and parameter estimation methods for partial differential equation (PDE) models. The source detection problem arises, for instance, in some homeland security applications. We address the problem of detecting the presence and location of a small low-emission source inside an object when the background noise dominates. The goal is to reach signal-to-noise ratio levels on the order of 10^-3. We develop a Bayesian approach to this problem in two dimensions. The method allows inference not only about the existence of the source but also about its location. We derive Bayes factors for model selection and estimate the location based on Markov chain Monte Carlo simulation. A simulation study shows that, with a sufficiently high total emission level, our method can effectively locate the source. Differential equation (DE) models are widely used to model dynamic processes in many fields. The forward problem of solving the equations for given parameters that define the DEs has been extensively studied in the past. However, the statistical literature on the inverse problem of estimating parameters from observed state variables is relatively sparse, especially for PDE models. We propose two joint modeling schemes to solve for constant parameters in PDEs: a parameter cascading method and a Bayesian treatment. In both methods, the unknown functions are expressed via basis function expansions. For the parameter cascading method, we develop an algorithm to estimate the parameters and derive a sandwich estimator of the covariance matrix. For the Bayesian method, we develop a joint model for the data and the PDE, and describe how Markov chain Monte Carlo techniques are employed to make posterior inference. A straightforward two-stage method is to first fit the data and then estimate the parameters by the least squares principle. The three approaches are illustrated using simulated examples and compared via simulation studies. Simulation results show that the proposed methods outperform the two-stage method.
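For reference, here is a compact sketch of the two-stage baseline the abstract compares against, applied to the heat equation u_t = theta * u_xx; the smoothing stage is elided, and finite differences stand in for the basis-expansion fit:

```python
import numpy as np

def estimate_theta(u, dx, dt):
    # Stage 2 of the two-stage method: given (smoothed) observations u on
    # a space-time grid, regress u_t on u_xx and solve the least squares
    # problem for the constant diffusion parameter theta.
    u_t = np.gradient(u, dt, axis=0)
    u_xx = np.gradient(np.gradient(u, dx, axis=1), dx, axis=1)
    a, b = u_xx.ravel(), u_t.ravel()
    return np.dot(a, b) / np.dot(a, a)   # closed-form least squares slope
```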
APA, Harvard, Vancouver, ISO, and other styles
28

Su, Wanhua. "Efficient Kernel Methods for Statistical Detection." Thesis, 2008. http://hdl.handle.net/10012/3598.

Full text
Abstract:
This research is motivated by a drug discovery problem -- the AIDS antiviral database from the National Cancer Institute. The objective of the study is to develop effective statistical methods to model the relationship between the chemical structure of a compound and its activity against the HIV-1 virus. As a result, the structure-activity model can be used to predict the activity of new compounds and thus helps identify active chemical compounds that can serve as drug candidates. Since active compounds are generally rare in a compound library, we recognize the drug discovery problem as an application of the so-called statistical detection problem. In a typical statistical detection problem, we have data {Xi, Yi}, where Xi is the predictor vector of the ith observation and Yi ∈ {0, 1} is its class label. The objective of a statistical detection problem is to identify class-1 observations, which are extremely rare. Besides drug discovery, other applications of statistical detection include direct marketing and fraud detection.

We propose a computationally efficient detection method called LAGO, which stands for "locally adjusted GO estimator". The original idea is inspired by an ancient game known today as "Go". The construction of LAGO consists of two steps. In the first step, we estimate the density of class 1 with an adaptive-bandwidth kernel density estimator. The kernel functions are located at, and only at, the class-1 observations. The bandwidth of the kernel function centered at a given class-1 observation is calculated as the average distance between that observation and its K nearest class-0 neighbors. In the second step, we adjust the density estimated in the first step locally according to the density of class 0. It can be shown that the amount of adjustment in the second step is approximately inversely proportional to the bandwidth calculated in the first step. Application to the NCI data demonstrates that LAGO is superior to methods such as K nearest neighbors and support vector machines.

One drawback of the existing LAGO is that it only provides a point estimate of a test point's probability of being class 1, ignoring the uncertainty of the model. In the second part of this thesis, we present a Bayesian framework for LAGO, referred to as BLAGO. This Bayesian approach enables quantification of uncertainty. Non-informative priors are adopted. The posterior distribution is calculated over a grid of (K, alpha) pairs by integrating out beta0 and beta1 using the Laplace approximation, where K and alpha are the two parameters used to construct the LAGO score, and beta0 and beta1 are the coefficients of the logistic transformation that converts the LAGO score to the probability scale. BLAGO provides proper probabilistic predictions with support on (0, 1) and captures the uncertainty of the predictions as well. By avoiding Markov chain Monte Carlo algorithms and using the Laplace approximation, BLAGO is computationally very efficient. Without the need for cross-validation, BLAGO is even more computationally efficient than LAGO.
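A minimal sketch of the two LAGO steps described above; the Gaussian kernel and the division by the local bandwidth (approximating the class-0 adjustment) are simplifications, and cKDTree is used for the neighbor search:

```python
import numpy as np
from scipy.spatial import cKDTree

def lago_scores(X_test, X1, X0, k=5, alpha=1.0):
    # Step 1: adaptive bandwidth for each class-1 point = alpha times the
    # average distance to its k nearest class-0 neighbors.
    d, _ = cKDTree(X0).query(X1, k=k)
    bw = alpha * d.mean(axis=1)
    # Step 2: kernel density estimate of class 1, locally adjusted by a
    # factor approximately inversely proportional to the bandwidth.
    diff = np.linalg.norm(X_test[:, None, :] - X1[None, :, :], axis=2)
    return np.sum(np.exp(-0.5 * (diff / bw) ** 2) / bw, axis=1)
```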
APA, Harvard, Vancouver, ISO, and other styles
29

Mustafa, Ghulam. "High fidelity micromechanics-based statistical analysis of composite material properties." Thesis, 2016. http://hdl.handle.net/1828/7100.

Full text
Abstract:
Composite materials are widely used in lightweight structural applications due to their high specific stiffness and strength. However, predicting their mechanical behaviour accurately is a difficult task because of the complicated nature of these heterogeneous materials, which is not easily captured by most existing macromechanics-based models. Designers compensate for the model unknowns in failure predictions by generating overly conservative designs with relatively simple ply stacking sequences, thereby forgoing many of the benefits promised by composites. The research presented in this dissertation was undertaken with the primary goal of providing efficient methodologies for the design of composite structures that account for inherent material variability and model shortcomings. A micromechanics-based methodology is proposed to simulate the stiffness, strength, and fatigue behaviour of composites. The computational micromechanics framework is based on the properties of the constituents of the composite: the fiber, the matrix, and the fiber/matrix interface. This model helps the designer understand in depth the failure modes of these materials and design efficient structures with arbitrary layups while reducing the need for supporting experimental testing. The main limiting factor in using a micromechanics model is the challenge of obtaining the constituent properties. The overall novelty of this dissertation is the calibration of these constituent properties by integrating the micromechanics approach with a Bayesian statistical model. The early research explored the probabilistic aspects of the constituent properties to calculate the stiffness characteristics of a unidirectional lamina. These stochastic stiffness properties were then used as input to analyze the wing box of a wind turbine blade. The results of this study provided a gateway for mapping constituent uncertainties to the top-level structure. Next, a stochastic first-ply failure load method was developed based on micromechanics and Bayesian inference. Finally, probabilistic S-N curves of composite materials were calculated after calibrating the fatigue model parameters using Bayesian inference. Throughout this research, extensive experimental data sets from the literature have been used to calibrate and evaluate the proposed models. The micromechanics-based probabilistic framework formulated here is quite general and is applied to the specific case of a wind turbine blade. The procedure may easily be generalized to other structural applications such as storage tanks, pressure vessels, civil structural cladding, unmanned air vehicles, and automotive bodies, which can be explored in future work.
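To illustrate the uncertainty-mapping idea, here is a Monte Carlo sketch that propagates constituent scatter to the longitudinal ply modulus; the rule of mixtures is a deliberately cheap stand-in for the thesis's full micromechanics model, and the distributions are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
Ef = rng.normal(230e9, 10e9, n)    # fiber modulus [Pa], invented scatter
Em = rng.normal(3.5e9, 0.3e9, n)   # matrix modulus [Pa], invented scatter
Vf = rng.normal(0.60, 0.02, n)     # fiber volume fraction, invented scatter

# Rule-of-mixtures longitudinal modulus: a simple surrogate that shows
# how constituent uncertainty propagates up to a ply-level property.
E1 = Vf * Ef + (1.0 - Vf) * Em
print(f"E1 mean = {E1.mean():.3e} Pa, std = {E1.std():.3e} Pa")
```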
APA, Harvard, Vancouver, ISO, and other styles
30

Kolba, Mark Philip. "Information-Based Sensor Management for Static Target Detection Using Real and Simulated Data." Diss., 2009. http://hdl.handle.net/10161/1313.

Full text
Abstract:

In the modern sensing environment, large numbers of sensor tasking decisions must be made using an increasingly diverse and powerful suite of sensors in order to best fulfill mission objectives under situationally varying resource constraints. Sensor management algorithms automate some or all of the sensor tasking process, so they can assist or replace a human operator and can ensure operator safety by removing the operator from a dangerous operational environment. Sensor managers also provide improved system performance over unmanaged sensing approaches through intelligent control of the available sensors. In particular, information-theoretic sensor management approaches have shown promise for providing robust and effective performance.

This work develops information-theoretic sensor managers for a general static target detection problem. Two types of sensor managers are developed. The first considers a set of discrete objects, such as anomalies identified by an anomaly detector or grid cells in a gridded region of interest. The second considers a continuous spatial region in which targets may be located at any point in continuous space. In both cases, the sensor manager uses a Bayesian probabilistic framework to model the environment and tasks the sensor suite to make the new observations that maximize the expected information gain for the system. The sensor managers are compared to unmanaged sensing approaches using simulated data and real data from landmine detection and unexploded ordnance (UXO) discrimination applications, and it is demonstrated that the sensor managers consistently outperform the unmanaged approaches, enabling targets to be detected more quickly. This rapid detection of targets is of crucial importance in many static target detection applications, resulting in higher rates of advance and reduced costs and resource consumption in both military and civilian applications.
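A minimal sketch of the discrete-cell case: the expected information gain of observing each grid cell under a binary sensor model, from which the manager tasks the most informative cell. The detection and false-alarm probabilities are illustrative assumptions:

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def next_cell(prior, pd=0.9, pfa=0.1):
    # For each cell, compute the expected reduction in entropy of the
    # target-presence probability after one binary observation, then
    # task the sensor at the cell with the largest expected gain.
    p_pos = pd * prior + pfa * (1 - prior)
    post_pos = pd * prior / p_pos
    post_neg = (1 - pd) * prior / (1 - p_pos)
    expected_post = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    gain = entropy(prior) - expected_post
    return int(np.argmax(gain))

print(next_cell(np.array([0.05, 0.5, 0.95])))  # the most uncertain cell wins
```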


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
31

Huang, Qindan. "Adaptive Reliability Analysis of Reinforced Concrete Bridges Using Nondestructive Testing." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7920.

Full text
Abstract:
There has been increasing interest in evaluating the performance of existing reinforced concrete (RC) bridges immediately after natural disasters or man-made events, especially when the defects are invisible, and in quantifying the improvement after rehabilitation. In order to obtain an accurate assessment of the reliability of an RC bridge, it is critical to incorporate information about its current structural properties, which reflect possible aging and deterioration. This dissertation develops an adaptive reliability analysis of RC bridges that incorporates damage detection information obtained from nondestructive testing (NDT). In this study, seismic fragility is used to describe the reliability of a structure withstanding future seismic demand. It is defined as the conditional probability that a seismic demand quantity attains or exceeds a specified capacity level for given values of earthquake intensity. The dissertation first develops a probabilistic capacity model for RC columns that can be used when the flexural stiffness decays nonuniformly over the column height. Then, a general methodology for constructing probabilistic seismic demand models for RC highway bridges with a single-column bent is presented. Next, a combination of global and local NDT methods is proposed to identify in-place structural properties. The global NDT uses the dynamic responses of a structure to assess its global/equivalent structural properties and detect potential damage locations, while the local NDT uses local measurements to identify the local characteristics of the structure. Measurement and modeling errors are considered in the application of the NDT methods and the analysis of the NDT data. The information obtained from NDT is then used in the probabilistic capacity and demand models to estimate the seismic fragility of the bridge. As an illustration, the proposed probabilistic framework is applied to a reinforced concrete bridge with a one-column bent. The results show that the proposed framework can successfully provide up-to-date structural properties and accurate fragility estimates.
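For reference, the standard parametric form of the fragility function described above, as a minimal sketch; the median capacity and dispersion would come from the updated probabilistic capacity and demand models, and the values below are illustrative:

```python
import numpy as np
from scipy.stats import norm

def fragility(im, median, beta):
    # Lognormal fragility: P(demand >= capacity | earthquake intensity im),
    # where median is the intensity at 50% exceedance probability and beta
    # is the logarithmic standard deviation.
    return norm.cdf(np.log(np.asarray(im) / median) / beta)

print(fragility([0.2, 0.5, 1.0], median=0.6, beta=0.4))
```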
APA, Harvard, Vancouver, ISO, and other styles