Dissertations / Theses on the topic 'Hidden Markov process'


Listed below are the top 29 dissertations / theses on the topic 'Hidden Markov process.'


1

Jin, Chao. "A Sequential Process Monitoring Approach using Hidden Markov Model for Unobservable Process Drift." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1445341969.

2

Mattila, Robert. "Hidden Markov models : Identification, control and inverse filtering." Licentiate thesis, KTH, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223683.

Abstract:
The hidden Markov model (HMM) is one of the workhorse tools in, for example, statistical signal processing and machine learning. It has found applications in a vast number of fields, ranging all the way from bioscience to speech recognition to modeling of user interactions in social networks. In an HMM, a latent state transitions according to Markovian dynamics. The state is only observed indirectly via a noisy sensor – that is, it is hidden. This type of model is at the center of this thesis, which in turn touches upon three main themes. Firstly, we consider how the parameters of an HMM can be estimated from data. In particular, we explore how recently proposed methods of moments can be combined with more standard maximum likelihood (ML) estimation procedures. The motivation for this is that, although the ML estimate possesses many attractive statistical properties, many ML schemes have to rely on local-search procedures in practice, which are only guaranteed to converge to local stationary points in the likelihood surface – potentially inhibiting them from reaching the ML estimate. By combining the two types of algorithms, the goal is to obtain the benefits of both approaches: the consistency and low computational complexity of the former, and the high statistical efficiency of the latter. The filtering problem – estimating the hidden state of the system from observations – is of fundamental importance in many applications. As a second theme, we consider inverse filtering problems for HMMs. In these problems, the setup is reversed; what information about an HMM-filtering system is exposed by its state estimates? We show that it is possible to reconstruct the specifications of the sensor, as well as the observations that were made, from the filtering system's posterior distributions of the latent state. This can be seen as a way of reverse engineering such a system, or as using an alternative data source to build a model.
Thirdly, we consider Markov decision processes (MDPs) – systems with Markovian dynamics where the parameters can be influenced by the choice of a control input. In particular, we show how it is possible to incorporate prior information regarding monotonic structure of the optimal decision policy so as to accelerate its computation. Subsequently, we consider a real-world application by investigating how these models can be used to model the treatment of abdominal aortic aneurysms (AAAs). Our findings are that the structural properties of the optimal treatment policy are different than those used in clinical practice – in particular, that younger patients could benefit from earlier surgery. This indicates an opportunity for improved care of patients with AAAs.
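The filtering recursion that this abstract builds on (computing the posterior over the hidden state from noisy observations) can be sketched for a finite-state HMM. This is a generic illustration with made-up numbers, not the algorithms developed in the thesis:

```python
import numpy as np

def hmm_filter(A, B, pi, observations):
    """Recursive Bayesian filter for a finite-state HMM.

    A[i, j] = P(x_{t+1} = j | x_t = i)   (transition matrix)
    B[i, k] = P(y_t = k | x_t = i)       (observation matrix)
    pi[i]   = P(x_0 = i)                 (initial distribution)
    Returns the posterior P(x_t | y_0, ..., y_t) for every t.
    """
    posterior = pi * B[:, observations[0]]
    posterior = posterior / posterior.sum()
    history = [posterior]
    for y in observations[1:]:
        predicted = A.T @ posterior       # time update (prediction)
        posterior = predicted * B[:, y]   # measurement update
        posterior = posterior / posterior.sum()
        history.append(posterior)
    return np.stack(history)

# Illustrative two-state example; all numbers are hypothetical.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])
post = hmm_filter(A, B, pi, [0, 0, 1, 1])
```

The inverse filtering problems studied in the thesis start from the sequence of such posteriors and ask what can be recovered about `B` and the observations.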


3

Chamroukhi, Faicel. "Hidden process regression for curve modeling, classification and tracking." Compiègne, 2010. http://www.theses.fr/2010COMP1911.

Abstract:
This research addresses the problem of diagnosis and monitoring for predictive maintenance of the railway infrastructure. In particular, the switch mechanism is a vital component because its operating state directly impacts the overall safety of the railway system, and its proper functioning is required for the full availability of the transportation system; monitoring it is a key task within maintenance team actions. To monitor and diagnose the switch mechanism, the main available data are curves of electric power acquired during several switch operations. This study therefore focuses on modeling curve-valued (functional) data presenting regime changes. In this thesis we propose new probabilistic generative machine learning methodologies for curve modeling, classification, clustering and tracking. First, the models we propose for a single curve or independent sets of curves are based on specific regression models incorporating a flexible hidden process. They are able to capture non-stationary (dynamic) behavior within the curves and address the problem of missing information regarding the underlying regimes, as well as the problem of complex-shaped classes. We then propose dynamic models for learning from curve sequences to make decisions and predictions over time. The developed approaches rely on autoregressive dynamic models governed by hidden processes. The learning of the models is performed both in a batch mode (in which the curves are stored in advance) and in an online mode (in which the curves are analyzed one at a time as they arrive). The obtained results on both simulated curves and real-world switch operation curves demonstrate the practical value of the ideas introduced in this thesis.
4

Balali, Samaneh. "Incorporating expert judgement into condition based maintenance decision support using a coupled hidden markov model and a partially observable markov decision process." Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=19510.

Abstract:
Preventive maintenance consists of activities performed to maintain a system in a satisfactory functional condition. Condition Based Maintenance (CBM) aims to reduce the cost of preventive maintenance by supporting decisions on performing maintenance actions, based on information reflecting a system's health condition. In practice, the condition-related information can be obtained in various ways, including continuous condition monitoring performed by sensors, or subjective assessment performed by humans. An experienced engineer might provide such subjective assessment by visually inspecting a system, or by interpreting the data collected by condition monitoring devices, and hence give an 'expert judgement' on the state of the system. There is limited academic literature on the development of CBM models incorporating expert judgement. This research aims to reduce this gap by developing models that formally incorporate expert judgement into the CBM decision process. A Coupled Hidden Markov Model is proposed to model the evolutionary relationship between expert judgement and the true deterioration state of a system. This model is used to estimate the underlying condition of the system and predict the remaining time to failure. A training algorithm is developed to support model parameter estimation. The algorithm's performance is evaluated with respect to the number of expert judgements and the initial settings of model parameters. A decision-making problem is formulated to account for the use of expert judgement in selecting maintenance actions in light of the physical investigation of the system's condition. A Partially Observable Markov Decision Process is proposed to recommend the most cost-effective decisions on inspection choice and maintenance action in two consecutive steps. An approximate method is developed to solve the proposed decision optimisation model and obtain the optimal policy. The sensitivity of the optimal policy is evaluated with respect to model parameter settings, such as the accuracy of the expert judgement.
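As a rough illustration of the decision-optimisation side (a fully observed simplification, not the thesis's POMDP, which additionally handles partial observability), a small condition-based maintenance problem can be solved by value iteration. All states, costs, and transition probabilities below are hypothetical:

```python
import numpy as np

# Hypothetical 3-state deterioration model: 0 = good, 1 = degraded, 2 = failed.
# Actions: 0 = do nothing, 1 = maintain (restores the system to "good").
P = np.array([
    # action 0: natural deterioration
    [[0.9, 0.1, 0.0],
     [0.0, 0.8, 0.2],
     [0.0, 0.0, 1.0]],
    # action 1: maintenance resets the system
    [[1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0]],
])
cost = np.array([
    [0.0, 0.0, 50.0],   # staying failed under "do nothing" is expensive
    [5.0, 5.0, 20.0],   # maintenance / repair costs per state
])

def value_iteration(P, cost, gamma=0.95, tol=1e-8):
    """Minimise expected discounted cost; returns values and greedy policy."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        Q = cost + gamma * P @ V          # Q[a, s]
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)
        V = V_new

V, policy = value_iteration(P, cost)
```

With these numbers the optimal policy is monotone in the deterioration state (do nothing when good, maintain once degraded), which is the kind of structural property the POMDP analysis in the thesis exploits.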
5

Wong, Wee Chin. "Estimation and control of jump stochastic systems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31775.

Abstract:
Thesis (Ph.D.)--Chemical Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Jay H. Lee; Committee Member: Alexander Gray; Committee Member: Erik Verriest; Committee Member: Magnus Egerstedt; Committee Member: Martha Grover; Committee Member: Matthew Realff. Part of the SMARTech Electronic Thesis and Dissertation Collection.
6

Löhr, Wolfgang. "Models of Discrete-Time Stochastic Processes and Associated Complexity Measures." Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-38267.

Abstract:
Many complexity measures are defined as the size of a minimal representation in a specific model class. One such complexity measure, which is important because it is widely applied, is statistical complexity. It is defined for discrete-time, stationary stochastic processes within a theory called computational mechanics. Here, a mathematically rigorous, more general version of this theory is presented, and abstract properties of statistical complexity as a function on the space of processes are investigated. In particular, weak-* lower semi-continuity and concavity are shown, and it is argued that these properties should be shared by all sensible complexity measures. Furthermore, a formula for the ergodic decomposition is obtained. The same results are also proven for two other complexity measures that are defined by different model classes, namely process dimension and generative complexity. These two quantities, and also the information theoretic complexity measure called excess entropy, are related to statistical complexity, and this relation is discussed here. It is also shown that computational mechanics can be reformulated in terms of Frank Knight's prediction process, which is of both conceptual and technical interest. In particular, it allows for a unified treatment of different processes and facilitates topological considerations. Continuity of the Markov transition kernel of a discrete version of the prediction process is obtained as a new result.
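Statistical complexity, as used in computational mechanics, is the Shannon entropy of the stationary distribution over the causal states of the process's epsilon-machine. A small numeric sketch for the well-known golden mean process (the machine is specified by hand here; inferring causal states from data, and the measure's abstract properties, are what the thesis treats rigorously):

```python
import numpy as np

def stationary(T):
    """Stationary distribution of a stochastic matrix T (left eigenvector for 1)."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def entropy(p):
    """Shannon entropy in bits."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Epsilon-machine of the golden mean process (no two 1s in a row), p = 0.5:
# T[symbol][i, j] = probability of going from causal state i to j emitting symbol.
T = {0: np.array([[0.5, 0.0],
                  [1.0, 0.0]]),
     1: np.array([[0.0, 0.5],
                  [0.0, 0.0]])}
M = T[0] + T[1]                      # state-to-state transition matrix
pi = stationary(M)
C_mu = entropy(pi)                   # statistical complexity (bits)
h_mu = sum(pi[i] * entropy(M[i]) for i in range(len(pi)))  # entropy rate
```

For this process the stationary causal-state distribution is (2/3, 1/3), giving a statistical complexity of about 0.918 bits and an entropy rate of 2/3 bits per symbol.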
7

Damian, Camilla, Zehra Eksi-Altay, and Rüdiger Frey. "EM algorithm for Markov chains observed via Gaussian noise and point process information: Theory and case studies." De Gruyter, 2018. http://dx.doi.org/10.1515/strm-2017-0021.

Abstract:
In this paper we study parameter estimation via the Expectation Maximization (EM) algorithm for a continuous-time hidden Markov model with diffusion and point process observation. Inference problems of this type arise for instance in credit risk modelling. A key step in the application of the EM algorithm is the derivation of finite-dimensional filters for the quantities that are needed in the E-Step of the algorithm. In this context we obtain exact, unnormalized and robust filters, and we discuss their numerical implementation. Moreover, we propose several goodness-of-fit tests for hidden Markov models with Gaussian noise and point process observation. We run an extensive simulation study to test speed and accuracy of our methodology. The paper closes with an application to credit risk: we estimate the parameters of a hidden Markov model for credit quality where the observations consist of rating transitions and credit spreads for US corporations.
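The continuous-time, point-process setting of this paper has a well-known discrete-time analogue: the Baum-Welch (EM) algorithm for a discrete-observation HMM, in which the E-step is exactly a forward-backward filtering/smoothing pass. A minimal sketch with randomly initialised parameters, not the authors' filters:

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass; returns E-step quantities and log-likelihood."""
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n)); beta = np.zeros((T, n)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return alpha, beta, gamma, np.log(c).sum()

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    """EM for a discrete-observation HMM with random initialisation."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)
    pi = np.full(n_states, 1.0 / n_states)
    ll = -np.inf
    for _ in range(n_iter):
        alpha, beta, gamma, ll = forward_backward(A, B, pi, obs)  # E-step
        xi = np.zeros((n_states, n_states))
        for t in range(len(obs) - 1):
            x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            xi += x / x.sum()
        A = xi / xi.sum(axis=1, keepdims=True)                    # M-step
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= B.sum(axis=1, keepdims=True)
        pi = gamma[0]
    return A, B, pi, ll

A_hat, B_hat, pi_hat, ll = baum_welch(
    [0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0], n_states=2, n_symbols=2, n_iter=25)
```

The paper's contribution is the continuous-time counterpart of this E-step: exact finite-dimensional filters for the sufficient statistics under diffusion and point-process observations.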
8

Löhr, Wolfgang. "Models of Discrete-Time Stochastic Processes and Associated Complexity Measures." Doctoral thesis, Max Planck Institut für Mathematik in den Naturwissenschaften, 2009. https://ul.qucosa.de/id/qucosa%3A11017.

Abstract:
Many complexity measures are defined as the size of a minimal representation in a specific model class. One such complexity measure, which is important because it is widely applied, is statistical complexity. It is defined for discrete-time, stationary stochastic processes within a theory called computational mechanics. Here, a mathematically rigorous, more general version of this theory is presented, and abstract properties of statistical complexity as a function on the space of processes are investigated. In particular, weak-* lower semi-continuity and concavity are shown, and it is argued that these properties should be shared by all sensible complexity measures. Furthermore, a formula for the ergodic decomposition is obtained. The same results are also proven for two other complexity measures that are defined by different model classes, namely process dimension and generative complexity. These two quantities, and also the information theoretic complexity measure called excess entropy, are related to statistical complexity, and this relation is discussed here. It is also shown that computational mechanics can be reformulated in terms of Frank Knight's prediction process, which is of both conceptual and technical interest. In particular, it allows for a unified treatment of different processes and facilitates topological considerations. Continuity of the Markov transition kernel of a discrete version of the prediction process is obtained as a new result.
9

Carvalho, Walter Augusto Fonsêca de. "Processos de renovação obtidos por agregação de estados a partir de um processo markoviano." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306196.

Abstract:
Advisors: Nancy Lopes Garcia, Alexsandro Giacomo Grimbert Gallo
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
This thesis is devoted to the study of binary renewal processes obtained as aggregation of states from Markov processes with finite alphabet. In the first part, we use a matrix approach to obtain conditions under which the aggregated process belongs to each of the following classes: (1) Markov of finite order, (2) process of infinite order with continuous transition probabilities, (3) Gibbsian process. The second part deals with the distance d between binary renewal processes. We obtain conditions under which this distance can be achieved between these processes.
Doctorate in Statistics
10

Starke, Martin, Benjamin Beck, Denis Ritz, Frank Will, and Jürgen Weber. "Frequency based efficiency evaluation - from pattern recognition via backwards simulation to purposeful drive design." Technische Universität Dresden, 2020. https://tud.qucosa.de/id/qucosa%3A71072.

Abstract:
The efficiency of hydraulic drive systems in mobile machines is influenced by several factors, like the operators' guidance, weather conditions, material and loading properties, and primarily the working cycle. This leads to varying operation points, which have to be performed by the drive system. Regarding efficiency analysis, the usage of standardized working cycles, gained through measurements or synthetically generated, is state of the art. Thereby, only a small extract of the real usage profile is taken into account. This contribution deals with process pattern recognition (PPR) and frequency based efficiency evaluation to gain more precise information and conclusions for the drive design of mobile machines. Using the example of an 18 t mobile excavator, the recognition system based on Hidden Markov Models (HMM) and the efficiency evaluation process by means of backwards simulation of measured operation points will be described.
11

Tang, Man. "Statistical methods for variant discovery and functional genomic analysis using next-generation sequencing data." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104039.

Abstract:
The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data, allowing the identification of biomarkers in early disease diagnosis and driving the transformation of most disciplines in biology and medicine. More effort is needed to develop novel, powerful, and efficient tools for NGS data analysis. This dissertation focuses on modeling ``omics'' data in various NGS applications with a primary goal of developing novel statistical methods to identify sequence variants, find transcription factor (TF) binding patterns, and decode the relationship between TF and gene expression levels. Accurate and reliable identification of sequence variants, including single nucleotide polymorphisms (SNPs) and insertion-deletion polymorphisms (INDELs), plays a fundamental role in NGS applications. Existing methods for calling these variants often make the simplifying assumption of positional independence and fail to leverage the dependence of genotypes at nearby loci induced by linkage disequilibrium. We propose vi-HMM, a hidden Markov model (HMM)-based method for calling SNPs and INDELs in mapped short read data. Simulation experiments show that, under various sequencing depths, vi-HMM outperforms existing methods in terms of sensitivity and F1 score. When applied to human whole genome sequencing data, vi-HMM demonstrates higher accuracy in calling SNPs and INDELs. One important NGS application is chromatin immunoprecipitation followed by sequencing (ChIP-seq), which characterizes protein-DNA relations through genome-wide mapping of TF binding sites. Multiple TFs, binding to DNA sequences, often show complex binding patterns, which indicate how TFs with similar functionalities work together to regulate the expression of target genes. To help uncover the transcriptional regulation mechanism, we propose a novel nonparametric Bayesian method to detect the clustering pattern of multiple-TF bindings from ChIP-seq datasets. Simulation studies demonstrate that our method performs best with regard to precision, recall, and F1 score, in comparison to traditional methods. We also apply the method to real data and observe several TF clusters that have been recognized previously in mouse embryonic stem cells. Recent advances in ChIP-seq and RNA sequencing (RNA-Seq) technologies provide more reliable and accurate characterization of TF binding sites and gene expression measurements, which serves as a basis to study the regulatory functions of TFs on gene expression. We propose a log Gaussian Cox process with a wavelet-based functional model to quantify the relationship between TF binding site locations and gene expression levels. Through a simulation study, we demonstrate that our method performs well, especially with large sample size and small variance. It also shows a remarkable ability to distinguish real local features in the function estimates.
Doctor of Philosophy
The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data and brings innovations to biology and medicine. More effort is needed to develop novel, powerful, and efficient tools for NGS data analysis. In this dissertation, we mainly focus on three problems closely related to NGS and its applications: (1) how to improve variant calling accuracy, (2) how to model transcription factor (TF) binding patterns, and (3) how to quantify the contribution of TF binding to gene expression. We develop novel statistical methods to identify sequence variants, find TF binding patterns, and explore the relationship between TF binding and gene expression. We expect our findings will be helpful in promoting a better understanding of disease causality and facilitating the design of personalized treatments.
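HMM-based variant callers like the vi-HMM described above ultimately need a decoding step that finds the most probable hidden-state path; this is the classic Viterbi dynamic program. A generic sketch with toy numbers (the actual vi-HMM emission and transition models are more elaborate):

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Most probable hidden-state path, computed in the log domain."""
    T, n = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, n))
    back = np.zeros((T, n), dtype=int)
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j]: best via i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):              # backtrack
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy two-state example (e.g. "reference-like" vs "variant-like" loci);
# all probabilities are made up for illustration.
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])
path = viterbi(A, B, pi, [0, 0, 1, 1, 1])
```

Because the transition matrix makes state changes unlikely, a run of symbol-1 observations is decoded as a contiguous block of state 1, which is the kind of positional dependence the abstract argues independent per-site callers miss.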
12

Almeida, Gustavo Matheus de. "Detecção de situações anormais em caldeiras de recuperação química." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3137/tde-01122006-155750/.

Abstract:
The greatest challenge faced by the area of process monitoring in chemical industries still resides in the fault detection task, which aims at developing reliable systems. One may say that a system is reliable if it is able to perform early fault detection and, at the same time, to reduce the generation of false alarms. Once a reliable system is available, it can be employed to help operators in factories in the decision-making process. The aim of this study is to present a methodology, based on the Hidden Markov Model (HMM) technique, suggesting its use in the detection of abnormal situations in chemical recovery boilers. The most successful applications of HMM are in the area of speech recognition. Some of its advantages are: probabilistic reasoning, explicit modeling, and identification based on process history data. This study discusses two applications. The first one is on a benchmark of a multiple-effect evaporation system in a sugar factory. A HMM representative of normal operation was identified, in order to detect five abnormal situations at the actuator responsible for controlling the syrup flow to the first evaporator. The detection result for the three abrupt situations was immediate, since the HMM was capable of detecting the statistical changes in the signal of the monitored variable as soon as they occurred. Regarding the two incipient situations, the detection was done at an early stage: for both events, the value of the vector f (responsible for representing the strength of an abnormal event over time) at the time of detection was near zero, equal to 2.8% and 2.1%, respectively. The second case study deals with the application of HMM in a chemical recovery boiler belonging to a cellulose mill in Brazil. The aim is to monitor the accumulation of ash deposits on the equipment of the convective heat transfer section, through pressure drop measurements. This is one of the main challenges to be overcome nowadays, bearing in mind the interest in increasing the operational efficiency of this equipment. Initially, a HMM for high values of pressure drop was identified. With this model, it was possible to check its capacity to inform the current state and, consequently, the tendency of the system (similarly to a predictor). It was also possible to show the utility of defining control limits, in order to inform the operator of the relative distance between the current state of the system and the alarm levels of pressure drop.
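The detection idea described above (scoring incoming data against an HMM identified on normal operation, and flagging windows whose likelihood is low) can be sketched generically; the model, discretization, and windows below are all illustrative, not the study's boiler data:

```python
import numpy as np

def log_likelihood(A, B, pi, obs):
    """Scaled forward pass; returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]
        s = alpha.sum()
        ll += np.log(s)
        alpha = alpha / s
    return ll

# Hypothetical HMM fitted to normal operation; symbols are discretized
# levels of the monitored variable (e.g. low / high pressure drop).
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B = np.array([[0.85, 0.15],
              [0.60, 0.40]])
pi = np.array([0.5, 0.5])

normal_window = [0, 0, 1, 0, 0, 0]
abnormal_window = [1, 1, 1, 1, 1, 1]   # sustained high readings
scores = [log_likelihood(A, B, pi, w) for w in (normal_window, abnormal_window)]
# A threshold (control limit) on the per-window score flags the abnormal window.
```

In this sketch the sustained-high window scores well below the normal one, so a fixed control limit between the two scores would raise an alarm only for the abnormal behavior.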
13

White, Nicole. "Bayesian mixtures for modelling complex medical data : a case study in Parkinson’s disease." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/48202/1/Nicole_White_Thesis.pdf.

Abstract:
Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants on the mixture model, namely, finite mixtures, Dirichlet Process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects relating to uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient's true cluster membership and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson's disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered. The first source of data, on symptoms associated with PD recorded using the Unified Parkinson's Disease Rating Scale (UPDRS), constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centers on the problems of unsupervised detection and sorting of action potentials or "spikes" in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
14

Baysse, Camille. "Analyse et optimisation de la fiabilité d'un équipement opto-électrique équipé de HUMS." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00986112.

Abstract:
In the context of reliability optimization, Thales Optronique now integrates into its equipment systems that monitor their operating state. This function is performed by HUMS (Health & Usage Monitoring Systems). The objective of this thesis is to implement within the HUMS a program capable of assessing the state of the system, detecting operating drifts, optimizing maintenance operations, and evaluating the risk of mission failure, by combining the processing of operational data (collected on each device by the HUMS) with predictive data (derived from reliability analyses and from maintenance, repair, and downtime costs). Three algorithms were developed. The first, based on a hidden Markov chain model, uses operational data to estimate the state of the system at each instant and thus to detect a degraded operating mode of the equipment (diagnosis). The second algorithm proposes an optimal, dynamic maintenance strategy. It consists in finding the best time to perform maintenance, as a function of the estimated state of the equipment. This algorithm relies on modelling the system as a piecewise-deterministic Markov process (PDMP) and on the optimal stopping principle. The maintenance date is determined from the operational data, the predictive data, and the estimated state of the system (prognosis). 
The third algorithm determines a risk of mission failure and makes it possible to compare the risks incurred under different maintenance policies. This research, built on sophisticated tools from theoretical and numerical probability, has led to a maintenance protocol conditioned on the estimated state of the system, improving the maintenance strategy, equipment availability at the best cost, and customer satisfaction, while reducing operating costs.
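The diagnostic core of the first algorithm above, recursively estimating a hidden operating state from noisy observations, corresponds to the standard HMM forward filter. A minimal sketch follows; the two-state model and all probabilities are illustrative placeholders, not values from the thesis:

```python
import numpy as np

# Illustrative two-state model: 0 = nominal, 1 = degraded (placeholder values).
A = np.array([[0.98, 0.02],    # state transition probabilities
              [0.00, 1.00]])   # degradation assumed irreversible here
B = np.array([[0.8, 0.2],      # P(observation | state): rows = states,
              [0.3, 0.7]])     # columns = discretised sensor symbols
pi = np.array([1.0, 0.0])      # equipment starts in the nominal state

def forward_filter(obs):
    """Return P(state_t | obs_1..t) for each t (normalised forward pass)."""
    belief = pi * B[:, obs[0]]
    belief /= belief.sum()
    history = [belief]
    for o in obs[1:]:
        belief = (belief @ A) * B[:, o]
        belief /= belief.sum()
        history.append(belief)
    return np.array(history)

beliefs = forward_filter([0, 0, 1, 1, 1])
print(beliefs[-1])  # posterior over {nominal, degraded} after five observations
```

Each row of `beliefs` is the posterior state distribution after the corresponding observation; a condition-based maintenance rule can then threshold the degraded-state probability.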
APA, Harvard, Vancouver, ISO, and other styles
15

Padilla, Pérez Nicolás. "Heterogeneidad de estados en Hidden Markov models." Tesis, Universidad de Chile, 2014. http://www.repositorio.uchile.cl/handle/2250/129971.

Full text
Abstract:
Master's in Operations Management
Industrial Civil Engineer
Hidden Markov models (HMMs) have been widely used to model dynamic behaviours such as consumer attention, web browsing, customer relationships, product choice, and physicians' prescription of medications. Usually, when an HMM is estimated simultaneously for all customers, the model parameters are estimated assuming the same number of hidden states for each customer. This thesis studies the validity of that assumption by identifying whether there is a potential estimation bias when the number of states is heterogeneous. To study the potential bias, an extensive Monte Carlo simulation exercise is carried out. In particular, it examines: a) whether or not there is bias in parameter estimation, b) which factors increase or decrease the bias, and c) which methods can be used to estimate the model correctly when the number of states is heterogeneous. In the simulation exercise, data are generated using a two-state HMM for 50% of the customers and a three-state HMM for the remaining 50%. A hierarchical Bayesian MCMC procedure is then used to estimate the parameters of an HMM with the same number of states for all customers. Regarding the existence of bias, the results show that individual-level parameters are recovered correctly; however, the aggregate-level parameters corresponding to the heterogeneity distribution of the individual parameters must be reported carefully. This difficulty arises from mixing two customer segments with different behaviour. 
Regarding the factors that affect the bias, the results show that: 1) as the proportion of two-state customers increases, the bias in the aggregate results also increases; 2) when heterogeneity is incorporated into the conditional probabilities, duplicated states are generated for two-state customers and the states no longer represent the same thing for all customers, increasing the aggregate-level bias; and 3) when the intercept of the conditional probabilities is heterogeneous, incorporating exogenous variables can help to identify the states equally across customers. Two approaches are proposed to reduce these problems: first, using a mixture of Gaussians as the prior distribution to capture multimodal heterogeneity; and second, using a latent-class model with HMMs of a different number of states for each class. The first model helps to represent the aggregate results better, but it does not prevent duplicated states for customers with fewer states. The second model captures the heterogeneity in the number of states, correctly identifying aggregate-level behaviour and avoiding duplicated states for two-state customers. Finally, this thesis shows that in most of the cases studied, the fixed-number-of-states assumption does not generate bias at the individual level when heterogeneity is incorporated. This improves estimation; nevertheless, caution must be exercised when drawing conclusions from the aggregate results.
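The Monte Carlo design described above, half of the customers generated by a two-state HMM and half by a three-state HMM, can be sketched as follows. All transition matrices, emission means, and sample sizes are hypothetical stand-ins for the simulation settings in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hmm(A, means, T):
    """Simulate T steps of a Gaussian-emission HMM with transition matrix A."""
    K = A.shape[0]
    state = rng.integers(K)          # random initial state
    obs = []
    for _ in range(T):
        state = rng.choice(K, p=A[state])      # Markovian state transition
        obs.append(rng.normal(means[state], 1.0))  # noisy emission
    return np.array(obs)

# Hypothetical parameters: 50% of customers follow a 2-state HMM,
# the other 50% a 3-state HMM, mirroring the simulation design above.
A2 = np.array([[0.9, 0.1], [0.2, 0.8]])
A3 = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
panel = [simulate_hmm(A2, means=[0.0, 3.0], T=100) for _ in range(50)]
panel += [simulate_hmm(A3, means=[0.0, 3.0, 6.0], T=100) for _ in range(50)]
print(len(panel), panel[0].shape)
```

Fitting a common fixed-state-count HMM to such a mixed panel is what exposes the aggregate-level bias discussed in the abstract.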
APA, Harvard, Vancouver, ISO, and other styles
16

Le, Hai-Son Phuoc. "Probabilistic Models for Collecting, Analyzing, and Modeling Expression Data." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/245.

Full text
Abstract:
Advances in genomics allow researchers to measure the complete set of transcripts in cells. These transcripts include messenger RNAs (which encode for proteins) and microRNAs, short RNAs that play an important regulatory role in cellular networks. While this data is a great resource for reconstructing the activity of networks in cells, it also presents several computational challenges. These challenges include the data collection stage, which often results in incomplete and noisy measurements, developing methods to integrate several experiments within and across species, and designing methods that can use this data to map the interactions and networks that are activated in specific conditions. Novel and efficient algorithms are required to successfully address these challenges. In this thesis, we present probabilistic models to address the set of challenges associated with expression data. First, we present a novel probabilistic error correction method for RNA-Seq reads. RNA-Seq generates large and comprehensive datasets that have revolutionized our ability to accurately recover the set of transcripts in cells. However, sequencing reads inevitably contain errors, which affect all downstream analyses. To address these problems, we develop an efficient hidden Markov model-based error correction method for RNA-Seq data. Second, for the analysis of expression data across species, we develop clustering and distance function learning methods for querying large expression databases. The methods use a Dirichlet Process Mixture Model with latent matchings and infer soft assignments between genes in two species to allow comparison and clustering across species. Third, we introduce new probabilistic models to integrate expression and interaction data in order to predict targets and networks regulated by microRNAs. 
Combined, the methods developed in this thesis provide a solution to the pipeline of expression analysis used by experimentalists when performing expression experiments.
APA, Harvard, Vancouver, ISO, and other styles
17

Rosser, Gabriel A. "Mathematical modelling and analysis of aspects of bacterial motility." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:1af98367-aa2f-4af3-9344-8c361311b553.

Full text
Abstract:
The motile behaviour of bacteria underlies many important aspects of their actions, including pathogenicity, foraging efficiency, and ability to form biofilms. In this thesis, we apply mathematical modelling and analysis to various aspects of the planktonic motility of flagellated bacteria, guided by experimental observations. We use data obtained by tracking free-swimming Rhodobacter sphaeroides under a microscope, taking advantage of the availability of a large dataset acquired using a recently developed, high-throughput protocol. A novel analysis method using a hidden Markov model for the identification of reorientation phases in the tracks is described. This is assessed and compared with an established method using a computational simulation study, which shows that the new method has a reduced error rate and less systematic bias. We proceed to apply the novel analysis method to experimental tracks, demonstrating that we are able to successfully identify reorientations and record the angle changes of each reorientation phase. The analysis pipeline developed here is an important proof of concept, demonstrating a rapid and cost-effective protocol for the investigation of myriad aspects of the motility of microorganisms. In addition, we use mathematical modelling and computational simulations to investigate the effect that the microscope sampling rate has on the observed tracking data. This is an important, but often overlooked aspect of experimental design, which affects the observed data in a complex manner. Finally, we examine the role of rotational diffusion in bacterial motility, testing various models against the analysed data. This provides strong evidence that R. sphaeroides undergoes some form of active reorientation, in contrast to the mainstream belief that the process is passive.
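Identifying reorientation phases in a track amounts to decoding the most likely hidden state sequence, typically via the Viterbi algorithm. A minimal sketch for a two-state (run/reorientation) model, with made-up transition probabilities and per-step log-likelihoods, is:

```python
import numpy as np

# Hypothetical two-state model: 0 = run, 1 = reorientation.
logA = np.log([[0.95, 0.05], [0.4, 0.6]])   # transition probabilities
logpi = np.log([0.9, 0.1])                  # initial state distribution

def viterbi(log_lik):
    """Most probable state path given per-step log-likelihoods (T x K)."""
    T, K = log_lik.shape
    delta = logpi + log_lik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA      # K x K: previous state -> current
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_lik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):           # trace best predecessors backwards
        path.append(back[t, path[-1]])
    return path[::-1]

# Per-step log-likelihood of each state (e.g. derived from observed speeds):
# four run-like steps, three reorientation-like steps, three run-like steps.
log_lik = np.log(np.array([[0.9, 0.1]] * 4 + [[0.05, 0.95]] * 3 + [[0.9, 0.1]] * 3))
path = viterbi(log_lik)
print(path)
```

The decoded path segments the track into run and reorientation phases, from which quantities such as reorientation angle changes can then be measured.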
APA, Harvard, Vancouver, ISO, and other styles
18

Berberovic, Adnan, and Alexander Eriksson. "A Multi-Factor Stock Market Model with Regime-Switches, Student's T Margins, and Copula Dependencies." Thesis, Linköpings universitet, Produktionsekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143715.

Full text
Abstract:
Investors constantly seek information that provides an edge over the market. One of the conventional methods is to find factors that can predict asset returns. In this study we improve the Fama and French five-factor model with regime switches, Student's t distributions, and copula dependencies. We also add price momentum as a sixth factor and a one-day lag to the factors. The regime switches are obtained from a hidden Markov model with conditional Student's t distributions. For the return process we use factor data as input, Student's t distributed residuals, and Student's t copula dependencies. To fit the copulas, we develop a novel approach based on the Expectation-Maximisation algorithm. The results are promising, as the quantiles for most of the portfolios show a good fit to the theoretical quantiles. Using a sophisticated stochastic programming model, we back-test the predictive power over a 26-year out-of-sample period. Furthermore, we analyse the performance of different factors during different market regimes.
APA, Harvard, Vancouver, ISO, and other styles
19

Tong, Xiao Thomas. "Statistical Learning of Some Complex Systems: From Dynamic Systems to Market Microstructure." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10917.

Full text
Abstract:
A complex system is one with many parts, whose behaviors are strongly dependent on each other. There are two interesting questions about complex systems. One is to understand how to recover the true structure of a complex system from noisy data. The other is to understand how the system interacts with its environment. In this thesis, we address these two questions by studying two distinct complex systems: dynamic systems and market microstructure. To address the first question, we focus on some nonlinear dynamic systems. We develop a novel Bayesian statistical method, Gaussian Emulator, to estimate the parameters of dynamic systems from noisy data, when the data are either fully or partially observed. Our method shows that estimation accuracy is substantially improved and computation is faster, compared to the numerical solvers. To address the second question, we focus on the market microstructure of hidden liquidity. We propose some statistical models to explain the hidden liquidity under different market conditions. Our statistical results suggest that hidden liquidity can be reliably predicted given the visible state of the market.
Statistics
APA, Harvard, Vancouver, ISO, and other styles
20

YOU, GUO-HUI, and 游國輝. "Hardware implementation of hidden Markov model scoring process." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/41607730896590372987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chang, Wang-Jung, and 張旺榮. "Improving Performance of Process Monitoring Using Hidden Markov Tree Model." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/abw47w.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Department of Chemical Engineering
92
A wavelet-based hidden Markov tree (HMT) model is proposed to improve conventional time-scale-only statistical process control (SPC) for process monitoring. An HMT in the wavelet domain can not only analyse measurements at multiple scales in time and frequency but also capture the statistical behaviour of real-world measurements across these scales. The former provides better noise reduction and less signal distortion than conventional filtering methods; the latter extracts the statistical characteristics of unmeasured dynamic disturbances, such as the clustering and persistence of practical data, which are not considered in SPC. Based on the HMT, a univariate and a multivariate SPC scheme are developed. Initially, the HMT model is trained in the wavelet domain using data obtained from normal operating regions; the model parameters are estimated with the expectation-maximization algorithm. After extracting past operating information, the proposed method, in the spirit of traditional SPC, generates simple monitoring charts that easily track and monitor the occurrence of observable upsets. Comparisons with existing SPC methods illustrate the advantages of the proposed approach, indicating that it yields more accurate results when the unmeasured disturbance series are strongly correlated. Data from industrial monitoring practice are presented to illustrate the method.
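The multiscale front end of this approach, transforming measurements into wavelet coefficients at several scales before modelling their statistics with an HMT, can be illustrated with a plain Haar decomposition (NumPy only; the HMT training by expectation maximization is beyond a short sketch, and the signal below is synthetic):

```python
import numpy as np

def haar_dwt(x):
    """One Haar wavelet step: return (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def multiscale_details(x, levels):
    """Detail coefficients at each scale; the inputs a wavelet-domain HMT models."""
    details = []
    for _ in range(levels):
        x, d = haar_dwt(x)   # recurse on the approximation at each level
        details.append(d)
    return details

# Synthetic process measurement: a slow trend plus sensor noise.
signal = np.sin(np.linspace(0, 4 * np.pi, 64)) \
    + 0.1 * np.random.default_rng(1).normal(size=64)
for level, d in enumerate(multiscale_details(signal, 3), start=1):
    print(level, d.shape, float(np.mean(d ** 2)))
```

An HMT then attaches a hidden state (e.g. small/large variance) to each detail coefficient and links states across scales in a tree, which is what captures the clustering and persistence mentioned above.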
APA, Harvard, Vancouver, ISO, and other styles
22

Guo, Jia-Liang, and 郭家良. "Process Discovery using Rule-Integrated Trees Hidden Semi-Markov Models." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/456975.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Information Management
105
To predict or to explain? With the dramatic growth in the volume of information generated by various information systems, data science has become popular and important in recent years, with machine learning algorithms providing strong support and a foundation for many data applications. Many data applications are based on black-box models. For example, a fraud detection system can predict which person will default, but we cannot understand how the system reaches that conclusion. White-box models, in contrast, are easy to understand but have relatively poor predictive performance. Hence, in this thesis, we propose a novel grafted-tree algorithm to integrate the trees of a random forest. The model attempts to strike a balance between a decision tree and a random forest: the grafted tree has better interpretability and performance than a single decision tree. The decision tree integrated from a random forest is then applied within hidden semi-Markov models (HSMM) to build a Classification Tree Hidden Semi-Markov Model (CTHSMM) in order to discover the underlying changes of a system. The experimental results show that our proposed model, RITHSMM, is better than a simple decision tree based on Classification and Regression Trees, and that it can find more states/leaves, allowing it to answer questions of the form: given an observable sequence, what is the most probable sequence of changes of a dynamic system?
APA, Harvard, Vancouver, ISO, and other styles
23

Frost, Andrew James. "Spatio-temporal hidden Markov models for incorporating interannual variability in rainfall." Thesis, 2004. http://hdl.handle.net/1959.13/24868.

Full text
Abstract:
Two new spatio-temporal hidden Markov models (HMM) are introduced in this thesis, with the purpose of capturing the persistent, spatially non-homogeneous nature of climate influence on annual rainfall series observed in Australia. The models extend the two-state HMM applied by Thyer (2001) by relaxing the assumption that all sites are under the same climate control. The Switch HMM (SHMM) allows at-site anomalous states, whilst still maintaining a regional control. The Regional HMM (RHMM), on the other hand, allows sites to be partitioned into different Markovian state regions. The analyses were conducted using a Bayesian framework to explicitly account for parameter uncertainty and select between competing hypotheses. Bayesian model averaging was used for comparison of the HMM and its generalisations. The HMM, SHMM and RHMM were applied to four groupings of four sites located on the Eastern coast of Australia, an area that has previously shown evidence of interannual persistence. In the majority of case studies, the RHMM variants showed greatest posterior weight, indicating that the data favoured the multiple region RHMM over the single region HMM or the SHMM variants. In no cases does the HMM produce the maximum marginal likelihood when compared to the SHMM and RHMM. The HMM state series and preferred model variants were sensitive to the parameterisation of the small-scale site-to-site correlation structure. Several parameterisations of the small-scale Gaussian correlation were trialled, namely Fitted Correlation, Exponential Decay Correlation, Empirical and Zero Correlation. Significantly, it was shown that annual rainfall data outliers can have a large effect on inference for a model that uses Gaussian distributions. The practical value of this modelling is demonstrated by the conditioning of the event based point rainfall model DRIP on the hidden state series of the HMM variants. 
Short timescale models typically underestimate annual variability because there is no explicit structure to incorporate long-term persistence. The two-state conditioned DRIP model was shown to reproduce the annual variability observed to a greater degree than the single state DRIP.
PhD Doctorate
APA, Harvard, Vancouver, ISO, and other styles
25

Obado, Victor Owino. "A hidden Markov model process for wormhole attack detection in a localised underwater wireless sensor network." 2012. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1000399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Hines, Keegan. "Bayesian approaches for modeling protein biophysics." Thesis, 2014. http://hdl.handle.net/2152/26016.

Full text
Abstract:
Proteins are the fundamental unit of computation and signal processing in biological systems. A quantitative understanding of protein biophysics is of paramount importance, since even slight malfunction of proteins can lead to diverse and severe disease states. However, developing accurate and useful mechanistic models of protein function can be strikingly elusive. I demonstrate that the adoption of Bayesian statistical methods can greatly aid in modeling protein systems. I first discuss the pitfall of parameter non-identifiability and how a Bayesian approach to modeling can yield reliable and meaningful models of molecular systems. I then delve into a particular case of non-identifiability within the context of an emerging experimental technique called single molecule photobleaching. I show that the interpretation of this data is non-trivial and provide a rigorous inference model for the analysis of this pervasive experimental tool. Finally, I introduce the use of nonparametric Bayesian inference for the analysis of single molecule time series. These methods aim to circumvent problems of model selection and parameter identifiability and are demonstrated with diverse applications in single molecule biophysics. The adoption of sophisticated inference methods will lead to a more detailed understanding of biophysical systems.
text
APA, Harvard, Vancouver, ISO, and other styles
27

"A MULTI-FUNCTIONAL PROVENANCE ARCHITECTURE: CHALLENGES AND SOLUTIONS." Thesis, 2013. http://hdl.handle.net/10388/ETD-2013-12-1419.

Full text
Abstract:
In service-oriented environments, services are put together in the form of a workflow with the aim of distributed problem solving. Capturing the execution details of the services' transformations is a significant advantage of using workflows. These execution details, referred to as provenance information, are usually traced automatically and stored in provenance stores. Provenance data contains the data recorded by a workflow engine during a workflow execution. It identifies what data is passed between services, which services are involved, and how results are eventually generated for particular sets of input values. Provenance information is of great importance and has found its way into areas of computer science such as bioinformatics, databases, social networks, and sensor networks. Current exploitation and application of provenance data is very limited, as provenance systems were initially developed for specific applications. Thus, applying learning and knowledge discovery methods to provenance data can provide rich and useful information on workflows and services. Therefore, in this work, the challenges with workflows and services are studied to discover the possibilities and benefits of providing solutions by using provenance data. A multifunctional architecture is presented which addresses the workflow and service issues by exploiting provenance data. These challenges include workflow composition, abstract workflow selection, refinement, evaluation, and graph model extraction. The specific contribution of the proposed architecture is its novelty in providing a basis for taking advantage of the previous execution details of services and workflows, along with artificial intelligence and knowledge management techniques, to resolve the major challenges regarding workflows. The presented architecture is application-independent and could be deployed in any area. The requirements for such an architecture along with its building components are discussed. 
Furthermore, the responsibility of the components, related works and the implementation details of the architecture along with each component are presented.
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, Michael J. "Optimal Control and Estimation of Stochastic Systems with Costly Partial Information." Thesis, 2012. http://hdl.handle.net/1807/32792.

Full text
Abstract:
Stochastic control problems that arise in sequential decision making applications typically assume that information used for decision-making is obtained according to a predetermined sampling schedule. In many real applications however, there is a high sampling cost associated with collecting such data. It is therefore of equal importance to determine when information should be collected as it is to decide how this information should be utilized for optimal decision-making. This type of joint optimization has been a long-standing problem in the operations research literature, and very few results regarding the structure of the optimal sampling and control policy have been published. In this thesis, the joint optimization of sampling and control is studied in the context of maintenance optimization. New theoretical results characterizing the structure of the optimal policy are established, which have practical interpretation and give new insight into the value of condition-based maintenance programs in life-cycle asset management. Applications in other areas such as healthcare decision-making and statistical process control are discussed. Statistical parameter estimation results are also developed with illustrative real-world numerical examples.
APA, Harvard, Vancouver, ISO, and other styles
29

Jiang, Rui. "System Availability Maximization and Residual Life Prediction under Partial Observations." Thesis, 2011. http://hdl.handle.net/1807/31792.

Full text
Abstract:
Many real-world systems experience deterioration with usage and age, which often leads to low product quality, high production cost, and low system availability. Most previous maintenance and reliability models in the literature do not incorporate condition monitoring information for decision making, which often results in poor failure prediction for partially observable deteriorating systems. For that reason, the development of fault prediction and control scheme using condition-based maintenance techniques has received considerable attention in recent years. This research presents a new framework for predicting failures of a partially observable deteriorating system using Bayesian control techniques. A time series model is fitted to a vector observation process representing partial information about the system state. Residuals are then calculated using the fitted model, which are indicative of system deterioration. The deterioration process is modeled as a 3-state continuous-time homogeneous Markov process. States 0 and 1 are not observable, representing healthy (good) and unhealthy (warning) system operational conditions, respectively. Only the failure state 2 is assumed to be observable. Preventive maintenance can be carried out at any sampling epoch, and corrective maintenance is carried out upon system failure. The form of the optimal control policy that maximizes the long-run expected average availability per unit time has been investigated. It has been proved that a control limit policy is optimal for decision making. The model parameters have been estimated using the Expectation Maximization (EM) algorithm. The optimal Bayesian fault prediction and control scheme, considering long-run average availability maximization along with a practical statistical constraint, has been proposed and compared with the age-based replacement policy. The optimal control limit and sampling interval are calculated in the semi-Markov decision process (SMDP) framework. 
Another Bayesian fault prediction and control scheme has been developed based on the average run length (ARL) criterion. Comparisons with traditional control charts are provided. Formulae for the mean residual life and the distribution function of system residual life have been derived in explicit forms as functions of a posterior probability statistic. The advantage of the Bayesian model over the well-known 2-parameter Weibull model in system residual life prediction is shown. The methodologies are illustrated using simulated data, real data obtained from the spectrometric analysis of oil samples collected from transmission units of heavy hauler trucks in the mining industry, and vibration data from a planetary gearbox machinery application.
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography