Journal articles on the topic 'Hidden Markov process'

To see the other types of publications on this topic, follow the link: Hidden Markov process.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Hidden Markov process.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Alshraideh, Hussam, and George Runger. "Process Monitoring Using Hidden Markov Models." Quality and Reliability Engineering International 30, no. 8 (September 2, 2013): 1379–87. http://dx.doi.org/10.1002/qre.1560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Bing, Ping Yan, Qiang Zhou, and Libing Feng. "State recognition method for machining process of a large spot welder based on improved genetic algorithm and hidden Markov model." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 231, no. 11 (January 27, 2016): 2135–46. http://dx.doi.org/10.1177/0954406215626942.

Full text
Abstract:
The large spot welder is an important piece of equipment in the rail transit equipment manufacturing industry, but it suffers from low utilization and a low effective machining rate. State monitoring can capture its operating states in real time and comprehensively, providing data support for state recognition. The hidden Markov model is a state classification method, but it is sensitive to the initial model parameters and easily trapped in local optima. The genetic algorithm is a global search method; however, it is quite poor at hill climbing and also suffers from premature convergence. In this paper, an improved genetic algorithm is proposed, and by combining the improved genetic algorithm with the hidden Markov model, a new state recognition method named improved genetic algorithm–hidden Markov model is proposed. In the proposed method, the improved genetic algorithm is used to optimize the initial parameters, and the hidden Markov model serves as a classifier to recognize the operating states of the machining process. This method is also compared with two other recognition methods, adaptive genetic algorithm–hidden Markov model and plain hidden Markov model, in which the adaptive genetic algorithm is similarly used to optimize the initial parameters while the hidden Markov model serves as the classifier. Experimental results show that the proposed method is very effective and that the improved genetic algorithm–hidden Markov model recognition method is superior to both the adaptive genetic algorithm–hidden Markov model and the plain hidden Markov model.
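The optimization loop this abstract describes can be sketched compactly: a genetic algorithm evolves candidate HMM parameter sets, scoring each with the forward-algorithm log-likelihood. The sketch below is illustrative only, with a made-up observation sequence, a discrete-emission HMM, and a deliberately simple GA; it is not the paper's improved genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_loglik(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs) for a discrete-emission HMM."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

def random_stochastic(shape):
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

def ga_init_params(obs, n_states, n_symbols, pop=20, gens=30):
    """Toy GA: keep the fitter half of the population, refill the rest with
    mutated copies (a convex mix with fresh random matrices, which keeps
    every row stochastic)."""
    def candidate():
        return (random_stochastic(n_states),
                random_stochastic((n_states, n_states)),
                random_stochastic((n_states, n_symbols)))
    def mutate(c):
        return tuple(0.8 * x + 0.2 * random_stochastic(x.shape) for x in c)
    population = [candidate() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: -forward_loglik(*c, obs))
        population[pop // 2:] = [mutate(c) for c in population[:pop // 2]]
    return population[0]

obs = np.array([0, 1, 0, 0, 1, 2, 2, 1, 0, 1])
best = ga_init_params(obs, n_states=2, n_symbols=3)
print(forward_loglik(*best, obs))   # log-likelihood of the fittest candidate
```

In the paper's setup, parameters found this way would then seed Baum–Welch training rather than be used directly.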
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Yangsheng, and Ming Ge. "Hidden Markov model-based process monitoring system." Journal of Intelligent Manufacturing 15, no. 3 (June 2004): 337–50. http://dx.doi.org/10.1023/b:jims.0000026572.03164.64.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Elkimakh, Karima, and Abdelaziz Nasroallah. "Hidden Markov Model with Markovian emission." Monte Carlo Methods and Applications 26, no. 4 (December 1, 2020): 303–13. http://dx.doi.org/10.1515/mcma-2020-2072.

Full text
Abstract:
In our paper [A. Nasroallah and K. Elkimakh, HMM with emission process resulting from a special combination of independent Markovian emissions, Monte Carlo Methods Appl. 23 (2017), no. 4, 287–306], we studied, in a first scenario, the three fundamental hidden Markov problems assuming that, given the hidden process, the observed one selects emissions from a combination of independent Markov chains evolving at the same time. Here, we propose to conduct the same study with a second scenario, assuming that, given the hidden process, the emission process selects emissions from a combination of independent Markov chains, each evolving according to its own clock. Three basic numerical examples are studied to show the proper functioning of the iterative algorithm adapted to the proposed model.
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Feng-Hui, Wai-Ki Ching, Jia-Wen Gu, and Tak-Kuen Siu. "Interacting default intensity with a hidden Markov process." Quantitative Finance 17, no. 5 (November 7, 2016): 781–94. http://dx.doi.org/10.1080/14697688.2016.1237036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zuk, Or, Ido Kanter, and Eytan Domany. "The Entropy of a Binary Hidden Markov Process." Journal of Statistical Physics 121, no. 3-4 (November 2005): 343–60. http://dx.doi.org/10.1007/s10955-005-7576-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ko, Stanley I. M., Terence T. L. Chong, and Pulak Ghosh. "Dirichlet Process Hidden Markov Multiple Change-point Model." Bayesian Analysis 10, no. 2 (June 2015): 275–96. http://dx.doi.org/10.1214/14-ba910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jacquet, Philippe, Gadiel Seroussi, and Wojciech Szpankowski. "On the entropy of a hidden Markov process." Theoretical Computer Science 395, no. 2-3 (May 2008): 203–19. http://dx.doi.org/10.1016/j.tcs.2008.01.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Hongmin, Yisheng Guan, and Juan Rojas. "Analysis of multimodal Bayesian nonparametric autoregressive hidden Markov models for process monitoring in robotic contact tasks." International Journal of Advanced Robotic Systems 16, no. 2 (March 1, 2019): 172988141983484. http://dx.doi.org/10.1177/1729881419834840.

Full text
Abstract:
Robot introspection helps robots understand what they do and how they do it. Previous robot introspection techniques have often used parametric hidden Markov models or supervised learning techniques, implying that the number of hidden states or classes is defined a priori and fixed throughout the modeling process. Fixed parameterizations limit the power of a model to properly encode the data. Furthermore, first-order Markov models are limited in their ability to model complex data sequences that represent highly dynamic behaviors, as they assume observations are conditionally independent given the state. In this work, we contribute a Bayesian nonparametric autoregressive hidden Markov model for the monitoring of robot contact tasks, which are characterized by complex dynamical data that are hard to model. We use a nonparametric prior that endows our hidden Markov models with an unbounded number of hidden states for a given robot skill (or subtask). Specifically, we use a hierarchical Dirichlet process prior to learn a hidden Markov model with a switching vector autoregressive observation model of wrench signatures and end-effector pose for the manipulation contact tasks. The proposed scheme monitors both nominal skill execution and anomalous behaviors. Two contact tasks are used to measure the effectiveness of our approach: (i) a traditional pick-and-place task composed of four skills and (ii) a cantilever snap assembly task (also composed of four skills). The modeling performance of our approach was compared with that of other methods, and classification accuracy measures were computed for skill and anomaly identification. The hierarchical Dirichlet process prior combined with the switching vector autoregressive observation model was shown to have excellent process monitoring performance, with higher identification rates and monitoring ability.
APA, Harvard, Vancouver, ISO, and other styles
10

Qi-feng, Yao, Dong Yun, and Wang Zhong-Zhi. "An Entropy Rate Theorem for a Hidden Inhomogeneous Markov Chain." Open Statistics & Probability Journal 8, no. 1 (September 30, 2017): 19–26. http://dx.doi.org/10.2174/1876527001708010019.

Full text
Abstract:
Objective: The main object of our study is to extend some entropy rate theorems to a Hidden Inhomogeneous Markov Chain (HIMC) and establish an entropy rate theorem under some mild conditions. Introduction: A hidden inhomogeneous Markov chain contains two different stochastic processes; one is an inhomogeneous Markov chain whose states are hidden, and the other is a stochastic process whose states are observable. Materials and Methods: The proof of the theorem requires some ergodic properties of an inhomogeneous Markov chain; the flexible application of the properties of norms and the boundedness conditions of series are also indispensable. Results: This paper presents an entropy rate theorem for an HIMC under some mild conditions and two corollaries for a hidden Markov chain and an inhomogeneous Markov chain. Conclusion: Under some mild conditions, the entropy rates of an inhomogeneous Markov chain, a hidden Markov chain and an HIMC are similar and easy to calculate.
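For the fully observed, homogeneous special case underlying results like this one, the entropy rate has the closed form H = -Σ_i π_i Σ_j p_ij log p_ij, where π is the stationary distribution. A minimal numpy sketch, with a made-up two-state chain for illustration:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an ergodic chain: the left eigenvector
    of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

def entropy_rate(P):
    """H = -sum_i pi_i sum_j P_ij log2(P_ij), in bits per step."""
    pi = stationary(P)
    logP = np.log2(np.where(P > 0, P, 1.0))   # log2(1) = 0 kills zero entries
    return -np.sum(pi[:, None] * P * logP)

# Hypothetical two-state chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(stationary(P))     # stationary distribution (0.8, 0.2)
print(entropy_rate(P))   # weighted row entropies: 0.8*H(0.9,0.1) + 0.2*H(0.6,0.4)
```

The hidden-chain case treated in the paper has no such closed form in general, which is why entropy rate theorems for HIMCs require the ergodicity machinery the abstract mentions.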
APA, Harvard, Vancouver, ISO, and other styles
11

Sarno, Riyanarto, and Kelly Rossa Sungkono. "Recovering Truncated Streaming Event Log Using Coupled Hidden Markov Model." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 04 (August 8, 2019): 2059012. http://dx.doi.org/10.1142/s0218001420590120.

Full text
Abstract:
Process discovery is a technique for obtaining a process model based on traces recorded in the event log. Nowadays, information systems produce streaming event logs to record their huge numbers of processes. The truncated streaming event log is a big issue in process discovery because it produces incomplete traces that cause process discovery to depict wrong processes in a process model. Earlier research suggested several methods for recovering the truncated streaming event log, and none of them utilized the Coupled Hidden Markov Model. This research proposes a method that combines a Coupled Hidden Markov Model with Double States and a Modification of the Viterbi–Backward method for recovering the truncated streaming event log. The first layer of states contains the transition probabilities of activities. The second layer of states uses patterns to detect traces that appear rarely in the event log. The experimental results showed that the proposed method appropriately recovered the truncated streaming event log. These results also prove that the accuracies of recovered traces obtained by the proposed method are higher than those obtained by the Hidden Markov Model and the Coupled Hidden Markov Model.
APA, Harvard, Vancouver, ISO, and other styles
12

Lu, Sheng, Michael Carl, Xinyue Yao, and Wenchao Su. "Predicting translation behaviors by using Hidden Markov Model." Translation, Cognition & Behavior 3, no. 1 (May 13, 2020): 76–99. http://dx.doi.org/10.1075/tcb.00035.lu.

Full text
Abstract:
The translation process can be studied as sequences of activity units. The application of machine learning technology offers researchers new possibilities in the study of the translation process. This research project developed a program, the activity unit predictor, using the Hidden Markov Model. The program takes duration, translation phase, target language, and fixation as input and produces an activity unit type as output. The highest prediction accuracy reached is 61%. As one of the first endeavors of its kind, the program demonstrates the strong potential of applying machine learning in translation process research.
APA, Harvard, Vancouver, ISO, and other styles
13

Fuentes-Beals, Camilo, Alejandro Valdés-Jiménez, and Gonzalo Riadi. "Hidden Markov Modeling with HMMTeacher." PLOS Computational Biology 18, no. 2 (February 10, 2022): e1009703. http://dx.doi.org/10.1371/journal.pcbi.1009703.

Full text
Abstract:
Is it possible to learn and create a first Hidden Markov Model (HMM) without programming skills or understanding the algorithms in detail? In this concise tutorial, we present the HMM through the two general questions it was initially developed to answer and describe its elements. The HMM elements include variables, hidden and observed parameters, the vector of initial probabilities, and the transition and emission probability matrices. Then, we suggest a set of ordered steps for modeling the variables and illustrate them with a simple exercise of modeling and predicting transmembrane segments in a protein sequence. Finally, we show how to interpret the results of the algorithms for this particular problem. To guide the process of entering information and explicitly solving the basic HMM algorithms that answer the HMM questions posed, we developed an educational webserver called HMMTeacher. Additional solved HMM modeling exercises can be found in the user's manual, along with answers to frequently asked questions. HMMTeacher is available at https://hmmteacher.mobilomics.org, mirrored at https://hmmteacher1.mobilomics.org. A repository with the code of the tool and the webpage is available at https://gitlab.com/kmilo.f/hmmteacher.
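The elements listed in this abstract (initial probability vector, transition matrix, emission matrix) and the transmembrane toy problem can be written down directly. The sketch below runs forward-backward posterior decoding on a hypothetical two-state model; all probabilities are invented for illustration and are not taken from HMMTeacher.

```python
import numpy as np

# Hypothetical 2-state HMM for transmembrane prediction:
# states M (membrane) / L (loop), observations H (hydrophobic) / P (polar).
pi = np.array([0.5, 0.5])              # vector of initial probabilities
A = np.array([[0.9, 0.1],              # transition probability matrix
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],              # emission matrix: P(obs | state)
              [0.3, 0.7]])

def posteriors(obs):
    """Forward-backward: P(state_t = i | whole observation sequence)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

seq = [0, 0, 0, 1, 1, 0]               # H H H P P H
g = posteriors(seq)
labels = "".join("ML"[i] for i in g.argmax(axis=1))
print(labels)                          # per-residue most probable state
```

This is the same posterior-decoding computation HMMTeacher steps through interactively, here for a deliberately tiny example.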
APA, Harvard, Vancouver, ISO, and other styles
14

Kairong Li, Guixiang Chen, and Jilin Cheng. "Research on Hidden Markov Model-based Text Categorization Process." International Journal of Digital Content Technology and its Applications 5, no. 6 (June 30, 2011): 244–51. http://dx.doi.org/10.4156/jdcta.vol5.issue6.29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Li, Tao, Zhaojie Wang, Guoyu Yang, Yang Cui, Yuling Chen, and Xiaomei Yu. "Semi‐selfish mining based on hidden Markov decision process." International Journal of Intelligent Systems 36, no. 7 (April 6, 2021): 3596–612. http://dx.doi.org/10.1002/int.22428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wong, Wee Chin, and Jay H. Lee. "DISTURBANCE MODELING FOR PROCESS CONTROL VIA HIDDEN MARKOV MODELS." IFAC Proceedings Volumes 40, no. 5 (2007): 233–38. http://dx.doi.org/10.3182/20070606-3-mx-2915.00157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Chen, Tao, Richard M. Cullen, and Marshall Godwin. "Hidden Markov model using Dirichlet process for de-identification." Journal of Biomedical Informatics 58 (December 2015): S60—S66. http://dx.doi.org/10.1016/j.jbi.2015.09.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Fan, Shuai Tan, and Hongbo Shi. "Hidden Markov model-based approach for multimode process monitoring." Chemometrics and Intelligent Laboratory Systems 148 (November 2015): 51–59. http://dx.doi.org/10.1016/j.chemolab.2015.08.025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Zhao, Jing, Yi Zhang, Shiliang Sun, and Haiwei Dai. "Variational Beta Process Hidden Markov Models with Shared Hidden States for Trajectory Recognition." Entropy 23, no. 10 (September 30, 2021): 1290. http://dx.doi.org/10.3390/e23101290.

Full text
Abstract:
The hidden Markov model (HMM) is a vital model for trajectory recognition. As the number of hidden states in an HMM is important and hard to determine, many nonparametric methods such as hierarchical Dirichlet process HMMs and Beta process HMMs (BP-HMMs) have been proposed to determine it automatically. Among these methods, the sampled BP-HMM models the information shared among different classes, which has proved effective in several trajectory recognition settings. However, the existing BP-HMM maintains a state transition probability matrix for each trajectory, which is inconvenient for classification. Furthermore, the approximate inference of the BP-HMM is based on sampling methods, which usually take a long time to converge. To develop an efficient nonparametric sequential model that can capture cross-class shared information for trajectory recognition, we propose a novel variational BP-HMM model, in which the hidden states can be shared among different classes and each class chooses its own hidden states and maintains a unified transition probability matrix. In addition, we derive a variational inference method for the proposed model, which is more efficient than sampling-based methods. Experimental results on a synthetic dataset and two real-world datasets show that, compared with the sampled BP-HMM and other related models, the variational BP-HMM has better performance in trajectory recognition.
APA, Harvard, Vancouver, ISO, and other styles
20

Jensen, Jens Ledet. "Asymptotic normality of M-estimators in nonhomogeneous hidden Markov models." Journal of Applied Probability 48, A (August 2011): 295–306. http://dx.doi.org/10.1239/jap/1318940472.

Full text
Abstract:
Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore, a class of estimating equations is considered instead of the maximum likelihood estimate. The basic ingredients are mixing properties of the process and a general central limit theorem for weakly dependent variables.
APA, Harvard, Vancouver, ISO, and other styles
21

Jensen, Jens Ledet. "Asymptotic normality of M-estimators in nonhomogeneous hidden Markov models." Journal of Applied Probability 48, A (August 2011): 295–306. http://dx.doi.org/10.1017/s0021900200099290.

Full text
Abstract:
Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore, a class of estimating equations is considered instead of the maximum likelihood estimate. The basic ingredients are mixing properties of the process and a general central limit theorem for weakly dependent variables.
APA, Harvard, Vancouver, ISO, and other styles
22

Almutiri, Talal, and Farrukh Nadeem. "Markov Models Applications in Natural Language Processing: A Survey." International Journal of Information Technology and Computer Science 14, no. 2 (April 8, 2022): 1–16. http://dx.doi.org/10.5815/ijitcs.2022.02.01.

Full text
Abstract:
Markov models are among the widely used techniques in machine learning for processing natural language. Markov chains and hidden Markov models are stochastic techniques employed for modeling dynamic systems in which the future state depends on the current state. The Markov chain, which generates a sequence of words to create a complete sentence, is frequently used in generating natural language. The hidden Markov model is employed in named-entity recognition and part-of-speech tagging, where it tries to predict hidden tags based on observed words. This paper reviews the use of Markov models in three applications of natural language processing (NLP): natural language generation, named-entity recognition, and part-of-speech tagging. Nowadays, researchers try to reduce dependence on lexicons and annotation tasks in NLP. In this paper, we focus on Markov models as a stochastic approach to NLP. A literature review was conducted to summarize research attempts, focusing on the methods and techniques that use Markov models for NLP, together with their advantages and disadvantages. Most NLP research studies apply supervised models, improved by using Markov models to decrease the dependency on annotation tasks. Some others employ unsupervised solutions to reduce dependence on a lexicon or labeled datasets.
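The word-sequence generation this survey describes reduces, in its simplest form, to a bigram chain: each word's successors are learned from text, and generation repeatedly samples a successor of the last word. A minimal sketch with a toy corpus (the corpus and function names are invented for illustration):

```python
import random
from collections import defaultdict

def train_bigram(tokens):
    """Bigram Markov chain: map each word to the list of words that
    followed it in the training text (duplicates encode frequency)."""
    chain = defaultdict(list)
    for w1, w2 in zip(tokens, tokens[1:]):
        chain[w1].append(w2)
    return chain

def generate(chain, start, n, seed=0):
    """Random walk on the chain, at most n words, stopping at dead ends."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nxts = chain.get(out[-1])
        if not nxts:
            break
        out.append(rng.choice(nxts))
    return " ".join(out)

corpus = "the model learns the states and the states emit the words".split()
chain = train_bigram(corpus)
print(generate(chain, "the", 8))
```

HMM-based taggers in the survey add a hidden layer on top of exactly this kind of transition structure.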
APA, Harvard, Vancouver, ISO, and other styles
23

Lamine, Benrais, and Baha Nadia. "Object-Based Scene Classification Modeled by Hidden Markov Models Architecture." International Journal of Cognitive Informatics and Natural Intelligence 15, no. 4 (October 2021): 1–30. http://dx.doi.org/10.4018/ijcini.20211001.oa6.

Full text
Abstract:
Multiclass classification problems such as document classification, medical diagnosis, or scene classification are very challenging to address due to similarities between mutual classes. Reliable tools are necessary to obtain good classification results. This paper addresses the scene classification problem using objects as attributes. The classification process is modeled by a well-known mathematical tool: the hidden Markov model. We introduce suitable relations that map the parameters of the hidden Markov model onto the variables of scene classification. The construction of hidden Markov chains is supported by weight measures and sorting functions. Lastly, inference algorithms extract the most suitable scene categories from the discrete Markov chain. A parallel approach constructs several discrete Markov chains in order to improve the accuracy of the classification process. We provide numerous tests on different datasets and compare classification accuracies with some state-of-the-art methods. The proposed approach distinguishes itself by outperforming the others.
APA, Harvard, Vancouver, ISO, and other styles
24

CAI, JINHAI, and ZHI-QIANG LIU. "MARKOV PROCESS IN PATTERN RECOGNITION." International Journal of Image and Graphics 01, no. 02 (April 2001): 287–311. http://dx.doi.org/10.1142/s0219467801000189.

Full text
Abstract:
Using the Markov random process, we developed two new approaches to pattern recognition: (1) Hidden Markov model for modeling spectral features for recognizing 2D shapes. This is because Fourier spectra are suitable for describing 2D shapes of simple closed contours and probabilistic models are capable of coping with random variations in object shapes. We will analyze the properties of spectral features derived from contours of 2D shapes and use these features in 2D pattern recognition. (2) Markov random fields for modeling 2D structural and statistical features. We will give a theoretic analysis of this approach, discuss the issues in the design of neighborhood system and cliques for Markov random field models, and analyze the properties of the models. We have applied the proposed approach to the recognition of unconstrained handwritten numerals and 2D shapes. Our extensive experimental results show that the proposed approach can achieve a higher performance than that reported recently in the literature.
APA, Harvard, Vancouver, ISO, and other styles
25

Mursyit, Mohammad, Aji Prasetya Wibawa, Ilham Ari Elbaith Zaeni, and Harits Ar Rosyid. "Pelabelan Kelas Kata Bahasa Jawa Menggunakan Hidden Markov Model." Mobile and Forensics 2, no. 2 (August 29, 2020): 71–83. http://dx.doi.org/10.12928/mf.v2i2.2450.

Full text
Abstract:
Part of Speech Tagging, or POS Tagging, is the process of automatically assigning a label to each word in a sentence. This study uses the Hidden Markov Model (HMM) algorithm for the POS Tagging process. Unknown words are handled with the Most Probable POS-Tag. The dataset consists of 10 short stories in Javanese, comprising 10,180 words annotated with the Javanese tagset. The POS Tagging process uses two scenarios. The first scenario uses the Hidden Markov Model algorithm with no treatment for unknown words. The second scenario uses the HMM together with the Most Probable POS-Tag for the treatment of unknown words. The results show that the first scenario produces an accuracy of 45.5% and the second scenario produces an accuracy of 70.78%. The Most Probable POS-Tag can improve accuracy in POS Tagging but does not always produce correct labels. It can also remove zero-valued probabilities from HMM-based POS Tagging. The results of this study indicate that POS Tagging using the Hidden Markov Model is influenced by the treatment of unknown words, the vocabulary, and the word-label relationships in the dataset.
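The two-scenario setup above can be illustrated with a tiny Viterbi decoder in which unknown words fall back to the corpus's most probable tag instead of receiving a zero emission probability. The tagset, probabilities, and words below are invented for illustration; they are not the paper's Javanese corpus statistics.

```python
import numpy as np

# Hypothetical tiny tagset and probabilities; in the paper these would be
# estimated from the annotated Javanese corpus.
tags = ["NOUN", "VERB"]
trans = np.array([[0.3, 0.7],            # P(tag_t | tag_{t-1})
                  [0.8, 0.2]])
start = np.array([0.6, 0.4])             # P(tag_1)
emit = {"kucing": np.array([0.9, 0.1]),  # known-word emission probabilities
        "mangan": np.array([0.1, 0.9])}
most_probable_tag = 0   # fallback index: the corpus's most frequent tag

def viterbi(words):
    """Viterbi decoding; unknown words use the most-probable-tag fallback
    rather than a zero emission probability."""
    unk = np.full(len(tags), 1e-6)
    unk[most_probable_tag] = 1.0
    e = [emit.get(w, unk) for w in words]
    delta = start * e[0]
    back = []
    for t in range(1, len(words)):
        scores = delta[:, None] * trans      # scores[i, j]: from tag i to j
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) * e[t]
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [tags[i] for i in reversed(path)]

print(viterbi(["kucing", "mangan", "xyz"]))  # → ['NOUN', 'VERB', 'NOUN']
```

Without the fallback, the unseen word "xyz" would zero out every path, which is exactly the failure mode the paper's second scenario avoids.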
APA, Harvard, Vancouver, ISO, and other styles
26

Borucka, Anna, Edward Kozłowski, Rafał Parczewski, Katarzyna Antosz, Leszek Gil, and Daniel Pieniak. "Supply Sequence Modelling Using Hidden Markov Models." Applied Sciences 13, no. 1 (December 24, 2022): 231. http://dx.doi.org/10.3390/app13010231.

Full text
Abstract:
Logistics processes, their effective planning, proper management, and effective implementation are of key importance in an enterprise. This article analyzes the process of supplying the raw materials necessary for carrying out production tasks. The specific character of the examined waste processing company requires knowledge of the size of potential deliveries, because the delivered waste must be properly managed and stored due to its toxicity to the natural environment. In the article, hidden Markov models were used to assess the level of supply. They are a statistical modeling tool for analyzing and predicting sequences of events. The existing classical methods cannot always provide sufficiently reliable information in this regard. Therefore, the article proposes modeling techniques based on stochastic processes. In hidden Markov models, the system is represented as a Markov process with states that are invisible to the observer but with a visible output (observation) that is a random function of the state. In the article, the distribution of outputs from the hidden states is given by a multinomial distribution.
APA, Harvard, Vancouver, ISO, and other styles
27

Damian, Camilla, Zehra Eksi, and Rüdiger Frey. "EM algorithm for Markov chains observed via Gaussian noise and point process information: Theory and case studies." Statistics & Risk Modeling 35, no. 1-2 (January 1, 2018): 51–72. http://dx.doi.org/10.1515/strm-2017-0021.

Full text
Abstract:
In this paper we study parameter estimation via the Expectation Maximization (EM) algorithm for a continuous-time hidden Markov model with diffusion and point process observation. Inference problems of this type arise for instance in credit risk modelling. A key step in the application of the EM algorithm is the derivation of finite-dimensional filters for the quantities that are needed in the E-Step of the algorithm. In this context we obtain exact, unnormalized and robust filters, and we discuss their numerical implementation. Moreover, we propose several goodness-of-fit tests for hidden Markov models with Gaussian noise and point process observation. We run an extensive simulation study to test speed and accuracy of our methodology. The paper closes with an application to credit risk: we estimate the parameters of a hidden Markov model for credit quality where the observations consist of rating transitions and credit spreads for US corporations.
APA, Harvard, Vancouver, ISO, and other styles
28

Härdle, Wolfgang Karl, Ostap Okhrin, and Weining Wang. "HIDDEN MARKOV STRUCTURES FOR DYNAMIC COPULAE." Econometric Theory 31, no. 5 (December 22, 2014): 981–1015. http://dx.doi.org/10.1017/s0266466614000607.

Full text
Abstract:
Understanding the time series dynamics of a multi-dimensional dependency structure is a challenging task. Multivariate covariance-driven Gaussian or mixed normal time-varying models have only a limited ability to capture important features of the data such as heavy tails, asymmetry, and nonlinear dependencies. The present paper tackles this problem by proposing and analyzing a hidden Markov model (HMM) for hierarchical Archimedean copulae (HAC). The HAC constitute a wide class of models for multi-dimensional dependencies, and the HMM is a statistical technique for describing regime-switching dynamics. HMM applied to HAC flexibly models multi-dimensional non-Gaussian time series. We apply the expectation maximization (EM) algorithm for parameter estimation. Consistency results for both parameters and HAC structures are established in an HMM framework. The model is calibrated to exchange rate data with a VaR application. This example is motivated by a local adaptive analysis that yields a time-varying HAC model. We compare its forecasting performance with that of other classical dynamic models. In a second application, we model a rainfall process. This task is of particular theoretical and practical interest because of the specific structure and the atypical treatment required by precipitation data.
APA, Harvard, Vancouver, ISO, and other styles
29

Sarno, Riyanarto, and Kelly R. Sungkono. "Hidden Markov Model for Process Mining of Parallel Business Processes." International Review on Computers and Software (IRECOS) 11, no. 4 (April 30, 2016): 290. http://dx.doi.org/10.15866/irecos.v11i4.8700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wong, Wee Chin, and Jay H. Lee. "Fault Detection in Process Systems using Hidden Markov Disturbance Models." IFAC Proceedings Volumes 42, no. 11 (2009): 291–96. http://dx.doi.org/10.3182/20090712-4-tr-2008.00045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hovland, Geir E., and Brenan J. McCarragher. "Hidden Markov Models as a Process Monitor in Robotic Assembly." Modeling, Identification and Control: A Norwegian Research Bulletin 20, no. 4 (1999): 201–23. http://dx.doi.org/10.4173/mic.1999.4.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hovland, Geir E., and Brenan J. McCarragher. "Hidden Markov Models as a Process Monitor in Robotic Assembly." International Journal of Robotics Research 17, no. 2 (February 1998): 153–68. http://dx.doi.org/10.1177/027836499801700204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sun, Wei, Ahmet Palazoğlu, and Jose A. Romagnoli. "Detecting abnormal process trends by wavelet-domain hidden Markov models." AIChE Journal 49, no. 1 (January 2003): 140–50. http://dx.doi.org/10.1002/aic.690490113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jing, Renzhi, and Ning Lin. "Tropical Cyclone Intensity Evolution Modeled as a Dependent Hidden Markov Process." Journal of Climate 32, no. 22 (October 22, 2019): 7837–55. http://dx.doi.org/10.1175/jcli-d-19-0027.1.

Full text
Abstract:
A hidden Markov model is developed to simulate tropical cyclone intensity evolution dependent on the surrounding large-scale environment. The model considers three unobserved (hidden) discrete states of storm intensity change and associates each state with a probability distribution of intensity change. The storm’s transit from one state to another is described as a Markov chain. Both the intensity change and state transit components of the model are dependent on environmental variables including potential intensity, vertical wind shear, relative humidity, and ocean feedback. This Markov Environment-Dependent Hurricane Intensity Model (MeHiM) is used to simulate the evolution of storm intensity along the storm track over the ocean, and a simple decay model is added to estimate the intensity change when the storm moves over land. Data for the North Atlantic (NA) basin from 1979 to 2014 (555 storms) are used for model development and evaluation. Probability distributions of 6- and 24-h intensity change, lifetime maximum intensity, and landfall intensity based on model simulations and observations compare well. Although the MeHiM is still limited in fully describing rapid intensification, it shows a significant improvement over previous statistical models (e.g., linear, nonlinear, and finite mixture models).
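The generative structure this abstract describes (a Markov chain over intensity-change regimes, each regime emitting intensity changes from its own distribution) can be sketched as a plain simulation. The three regimes, transition matrix, and Gaussian emissions below are invented for illustration and omit the environmental covariates that the paper's MeHiM conditions on.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state intensity-change regimes:
# 0 = weakening, 1 = steady, 2 = intensifying.
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
mu = np.array([-5.0, 0.0, 6.0])   # mean 6-h intensity change per regime (kt)
sd = np.array([3.0, 2.0, 4.0])    # spread of the change per regime (kt)

def simulate(T, s0=1):
    """Sample a hidden regime path, then the intensity changes it emits."""
    states = np.empty(T, dtype=int)
    states[0] = s0
    for t in range(1, T):
        states[t] = rng.choice(3, p=P[states[t - 1]])
    changes = rng.normal(mu[states], sd[states])
    return states, changes

states, changes = simulate(40)
intensity = 40 + np.cumsum(changes)   # storm intensity track starting at 40 kt
print(states[:10], intensity[-1])
```

In the actual model, both P and the emission distributions would shift with potential intensity, shear, humidity, and ocean feedback at each step.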
APA, Harvard, Vancouver, ISO, and other styles
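The core mechanism described in this abstract, a hidden Markov chain over discrete intensity-change states, each emitting a state-conditional intensity change, can be sketched in a few lines. All numbers below (transition matrix, per-state mean changes, noise scale) are illustrative assumptions; the actual MeHiM conditions both components on environmental covariates such as potential intensity and wind shear.

```python
import numpy as np

# Hypothetical transition matrix over three hidden states
# (weakening, steady, intensifying); rows sum to 1.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
])
# Illustrative mean 6-h intensity change (kt) for each hidden state.
state_means = np.array([-5.0, 0.0, 5.0])

def simulate_intensity(v0, steps, rng):
    """Simulate an intensity track driven by a hidden Markov chain of change states."""
    state = 1                      # start in the 'steady' state
    v = [v0]
    for _ in range(steps):
        state = rng.choice(3, p=P[state])           # Markov transit between states
        dv = rng.normal(state_means[state], 2.0)    # state-conditional intensity change
        v.append(v[-1] + dv)
    return np.array(v)

track = simulate_intensity(50.0, 40, np.random.default_rng(0))
```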
35

SETIAWATY, B. "THE ERGODICITY OF THE OBSERVED PROCESS OF A HIDDEN MARKOV MODEL." Journal of Mathematics and Its Applications 3, no. 1 (July 1, 2004): 27. http://dx.doi.org/10.29244/jmap.3.1.27-34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Larget, Bret. "A canonical representation for aggregated Markov processes." Journal of Applied Probability 35, no. 2 (June 1998): 313–24. http://dx.doi.org/10.1239/jap/1032192850.

Full text
Abstract:
A deterministic function of a Markov process is called an aggregated Markov process. We give necessary and sufficient conditions for the equivalence of continuous-time aggregated Markov processes. For both discrete- and continuous-time, we show that any aggregated Markov process which satisfies mild regularity conditions can be directly converted to a canonical representation which is unique for each class of equivalent models, and furthermore, is a minimal parameterization of all that can be identified about the underlying Markov process. Hidden Markov models on finite state spaces may be framed as aggregated Markov processes by expanding the state space and thus also have canonical representations.
APA, Harvard, Vancouver, ISO, and other styles
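The abstract's central object, a deterministic function of a Markov chain, is easy to illustrate: the observer sees only a many-to-one labelling of the states, never the chain itself. The chain and labelling below are made up for illustration; the paper's contribution is a canonical, identifiable parameterization of such processes, not this simulation.

```python
import numpy as np

# A 4-state Markov chain aggregated through a deterministic labelling f:
# states {0, 1} -> label 'A', states {2, 3} -> label 'B'.
Q = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.5, 0.2],
    [0.1, 0.1, 0.3, 0.5],
])
labels = np.array(['A', 'A', 'B', 'B'])

def aggregate_path(steps, rng):
    """Return the observed sequence f(X_0), ..., f(X_steps) of the aggregated process."""
    x = 0
    out = [labels[x]]
    for _ in range(steps):
        x = rng.choice(4, p=Q[x])   # the underlying chain moves...
        out.append(labels[x])       # ...but only its label is observed
    return out

path = aggregate_path(10, np.random.default_rng(1))
```

Expanding the state space in the other direction is what lets a finite hidden Markov model be framed as an aggregated Markov process, as the abstract notes.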
37

Larget, Bret. "A canonical representation for aggregated Markov processes." Journal of Applied Probability 35, no. 02 (June 1998): 313–24. http://dx.doi.org/10.1017/s0021900200014972.

Full text
Abstract:
A deterministic function of a Markov process is called an aggregated Markov process. We give necessary and sufficient conditions for the equivalence of continuous-time aggregated Markov processes. For both discrete- and continuous-time, we show that any aggregated Markov process which satisfies mild regularity conditions can be directly converted to a canonical representation which is unique for each class of equivalent models, and furthermore, is a minimal parameterization of all that can be identified about the underlying Markov process. Hidden Markov models on finite state spaces may be framed as aggregated Markov processes by expanding the state space and thus also have canonical representations.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Zhongsheng, Yongmin Yang, Zheng Hu, and Qinghu Zeng. "Fault prognosis of complex mechanical systems based on multi-sensor mixtured hidden semi-Markov models." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 227, no. 8 (November 21, 2012): 1853–63. http://dx.doi.org/10.1177/0954406212467260.

Full text
Abstract:
Accurate fault prognosis is of vital importance for condition-based maintenance. For complex mechanical systems, multiple sensors are often used to collect condition signals, and the observation process may be non-Gaussian and non-stationary. Traditional hidden semi-Markov models cannot adequately represent multivariate non-Gaussian and non-stationary time series. The innovation of this article is to extend classical hidden semi-Markov models by modeling the observation as a linear mixture of non-Gaussian multi-sensor signals. The proposed model is called a multi-sensor mixtured hidden semi-Markov model. Under this new framework, modified parameter re-estimation algorithms are derived in detail based on the complete-data expectation maximization algorithm. Finally, the proposed prognostic methodology is validated on a practical bearing application. The experimental results show that the proposed method obtains better prognostic performance than classical hidden semi-Markov models.
APA, Harvard, Vancouver, ISO, and other styles
39

Boys, R. J., and D. A. Henderson. "On Determining the Order of Markov Dependence of an Observed Process Governed by a Hidden Markov Model." Scientific Programming 10, no. 3 (2002): 241–51. http://dx.doi.org/10.1155/2002/683164.

Full text
Abstract:
This paper describes a Bayesian approach to determining the order of a finite state Markov chain whose transition probabilities are themselves governed by a homogeneous finite state Markov chain. It extends previous work on homogeneous Markov chains to more general and applicable hidden Markov models. The method we describe uses a Markov chain Monte Carlo algorithm to obtain samples from the (posterior) distribution for both the order of Markov dependence in the observed sequence and the other governing model parameters. These samples allow coherent inferences to be made straightforwardly in contrast to those which use information criteria. The methods are illustrated by their application to both simulated and real data sets.
APA, Harvard, Vancouver, ISO, and other styles
40

ELLIOTT, ROBERT J., and BING HAN. "A HIDDEN MARKOV APPROACH TO THE FORWARD PREMIUM PUZZLE." International Journal of Theoretical and Applied Finance 09, no. 07 (November 2006): 1009–20. http://dx.doi.org/10.1142/s0219024906003949.

Full text
Abstract:
A Hidden Markov Chain (HMC) is applied to study the forward premium puzzle. The weekly quotient of the interest rate differential divided by the log exchange rate change is modeled as a Hidden Markov process. Compared with existing standard approaches, the Hidden Markov approach allows a detailed analysis of the puzzle on a day-to-day basis while taking into full account the presence of noise in the observations. Two and three state models are investigated. A three-state HMC model performs better than two-state models. Application of the three-state model reveals that the above quotient is mostly zero, and hence leads to the rejection of the uncovered interest rate parity hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Yaping, Haiyan Li, Zhen Chen, and Ying Zhu. "An Improved Hidden Markov Model for Monitoring the Process with Autocorrelated Observations." Energies 15, no. 5 (February 24, 2022): 1685. http://dx.doi.org/10.3390/en15051685.

Full text
Abstract:
With the development of intelligent manufacturing, automated data acquisition techniques are widely used. The autocorrelations between data that are collected from production processes have become more common. Residual charts are a good approach to monitoring the process with data autocorrelation. An improved hidden Markov model (IHMM) for the prediction of autocorrelated observations and a new expectation maximization (EM) algorithm are proposed. A residual chart based on IHMM is employed to monitor the autocorrelated process. The numerical experiment shows that, in general, IHMMs outperform both conventional hidden Markov models (HMMs) and autoregressive (AR) models in quality shift diagnosis, decreasing the cost of missing alarms. Moreover, the times taken by IHMMs for training and prediction are found to be much less than those of HMMs.
APA, Harvard, Vancouver, ISO, and other styles
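The residual-chart idea underlying this paper is generic: obtain one-step-ahead predictions from any model of the autocorrelated process, then apply a standard Shewhart chart to the prediction residuals, which are approximately i.i.d. when the predictor is adequate. The sketch below uses a least-squares AR(1) predictor as a stand-in (the paper's IHMM would supply the predictions instead); the process parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Autocorrelated in-control data from an AR(1) process (phi chosen for illustration).
phi, n = 0.6, 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Least-squares AR(1) fit and one-step-ahead prediction residuals.
phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
residuals = x[1:] - phi_hat * x[:-1]

# Residuals of an adequate predictor are roughly i.i.d., so a Shewhart
# chart with 3-sigma limits can be applied to them directly.
center = residuals.mean()
sigma = residuals.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma
out_of_control = np.flatnonzero((residuals > ucl) | (residuals < lcl))
```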
42

LÖHR, WOLFGANG, and NIHAT AY. "ON THE GENERATIVE NATURE OF PREDICTION." Advances in Complex Systems 12, no. 02 (April 2009): 169–94. http://dx.doi.org/10.1142/s0219525909002143.

Full text
Abstract:
Given an observed stochastic process, computational mechanics provides an explicit and efficient method of constructing a minimal hidden Markov model within the class of maximally predictive models. Here, the corresponding so-called ε-machine encodes the mechanisms of prediction. We propose an alternative notion of predictive models in terms of a hidden Markov model capable of generating the underlying stochastic process. A comparison of these two notions of prediction reveals that our approach is less restrictive and thereby allows for predictive models that are more concise than the ε-machine.
APA, Harvard, Vancouver, ISO, and other styles
43

Xu, Xiaoxiao, and Xiongbo Wan. "Dynamic-Event-Based Fault Detection for Markov Jump Systems Under Hidden-Markov Mode Observation." Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 7 (December 20, 2020): 917–24. http://dx.doi.org/10.20965/jaciii.2020.p0917.

Full text
Abstract:
The fault detection (FD) problem is investigated for event-triggered discrete-time Markov jump systems (MJSs) with hidden-Markov mode observation. A dynamic-event-triggered mechanism, which includes some existing ones as special cases, is proposed to reduce unnecessary data transmissions to save network resources. Mode observation of the MJS by the FD filter (FDF) is governed by a hidden Markov process. By constructing a Markov-mode-dependent Lyapunov function, a sufficient condition in terms of linear matrix inequalities (LMIs) is obtained under which the filtering error system of the FD is stochastically stable with a prescribed H∞ performance index. The parameters of the FDF are explicitly given when these LMIs have feasible solutions. The effectiveness of the FD method is demonstrated by two numerical examples.
APA, Harvard, Vancouver, ISO, and other styles
44

Korolkiewicz, Małgorzata Wiktoria. "A Dependent Hidden Markov Model of Credit Quality." International Journal of Stochastic Analysis 2012 (August 13, 2012): 1–13. http://dx.doi.org/10.1155/2012/719237.

Full text
Abstract:
We propose a dependent hidden Markov model of credit quality. We suppose that the "true" credit quality is not observed directly but only through noisy observations given by posted credit ratings. The model is formulated in discrete time with a Markov chain observed in martingale noise, where "noise" terms of the state and observation processes are possibly dependent. The model provides estimates for the state of the Markov chain governing the evolution of the credit rating process and the parameters of the model, where the latter are estimated using the EM algorithm. The dependent dynamics allow for the so-called "rating momentum" discussed in the credit literature and also provide a convenient test of independence between the state and observation dynamics.
APA, Harvard, Vancouver, ISO, and other styles
45

Herkenrath, Ulrich. "Generalized adaptive exponential smoothing of observations from an ergodic hidden Markov model." Journal of Applied Probability 36, no. 4 (December 1999): 987–98. http://dx.doi.org/10.1239/jap/1032374749.

Full text
Abstract:
We consider a sequence of observations which is generated by a so-called hidden Markov model. An exponential smoothing procedure applied to such an observation sequence generates an inhomogeneous Markov process as a sequence of smoothed values. If the state sequence of the underlying hidden Markov model is moreover ergodic, then for two classes of smoothing functions the strong ergodicity of the sequence of smoothed values is proved. As a consequence a central limit theorem and a law of large numbers hold true for the smoothed values. The proof uses general results for so-called convergent inhomogeneous Markov processes. The procedure proposed by the author can be applied to some time series discussed in the literature.
APA, Harvard, Vancouver, ISO, and other styles
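The procedure the abstract analyses is ordinary exponential smoothing, s_t = s_{t-1} + α(y_t − s_{t-1}), applied to observations emitted by an ergodic hidden Markov chain; the smoothed values then form an inhomogeneous Markov process. A minimal sketch, with made-up two-state chain and emission parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two-state ergodic hidden chain with Gaussian emissions (illustrative parameters).
P = np.array([[0.9, 0.1], [0.2, 0.8]])
means = np.array([0.0, 3.0])

def smooth_hmm_observations(n, alpha):
    """Exponentially smooth observations generated by the hidden Markov model."""
    state, s = 0, 0.0
    smoothed = []
    for _ in range(n):
        state = rng.choice(2, p=P[state])
        y = rng.normal(means[state], 1.0)
        s = s + alpha * (y - s)   # smoothed values form a Markov process themselves
        smoothed.append(s)
    return np.array(smoothed)

s = smooth_hmm_observations(2000, 0.1)
```

Consistent with the law of large numbers the paper proves, the smoothed values settle around the stationary mean of the emissions (here π = [2/3, 1/3], so about 1.0).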
46

Herkenrath, Ulrich. "Generalized adaptive exponential smoothing of observations from an ergodic hidden Markov model." Journal of Applied Probability 36, no. 04 (December 1999): 987–98. http://dx.doi.org/10.1017/s0021900200017800.

Full text
Abstract:
We consider a sequence of observations which is generated by a so-called hidden Markov model. An exponential smoothing procedure applied to such an observation sequence generates an inhomogeneous Markov process as a sequence of smoothed values. If the state sequence of the underlying hidden Markov model is moreover ergodic, then for two classes of smoothing functions the strong ergodicity of the sequence of smoothed values is proved. As a consequence a central limit theorem and a law of large numbers hold true for the smoothed values. The proof uses general results for so-called convergent inhomogeneous Markov processes. The procedure proposed by the author can be applied to some time series discussed in the literature.
APA, Harvard, Vancouver, ISO, and other styles
47

Bäuerle, Nicole, Igor Gilitschenski, and Uwe Hanebeck. "Exact and approximate hidden Markov chain filters based on discrete observations." Statistics & Risk Modeling 32, no. 3-4 (December 1, 2015): 159–76. http://dx.doi.org/10.1515/strm-2015-0004.

Full text
Abstract:
We consider a Hidden Markov Model (HMM) where the integrated continuous-time Markov chain can be observed at discrete time points perturbed by a Brownian motion. The aim is to derive a filter for the underlying continuous-time Markov chain. The recursion formula for the discrete-time filter is easy to derive, however involves densities which are very hard to obtain. In this paper we derive exact formulas for the necessary densities in the case the state space of the HMM consists of two elements only. This is done by relating the underlying integrated continuous-time Markov chain to the so-called asymmetric telegraph process and by using recent results on this process. In case the state space consists of more than two elements we present three different ways to approximate the densities for the filter. The first approach is based on the continuous filter problem. The second approach is to derive a PDE for the densities and solve it numerically. The third approach is a crude discrete time approximation of the Markov chain. All three approaches are compared in a numerical study.
APA, Harvard, Vancouver, ISO, and other styles
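The easy-to-derive discrete-time filter recursion the abstract refers to is the standard predict/correct/normalise cycle for a hidden chain observed in noise. The sketch below shows it for a two-state chain with Gaussian observation noise; the transition matrix, state means, and noise scale are assumptions for illustration, whereas the paper's hard part, exact transition densities via the asymmetric telegraph process, is not attempted here.

```python
import numpy as np

# Illustrative two-state model: transition matrix, state means, noise scale.
P = np.array([[0.95, 0.05], [0.10, 0.90]])
means = np.array([-1.0, 1.0])

def gauss_pdf(y, mu, sd=0.5):
    """Gaussian observation likelihood, vectorized over the state means."""
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def hmm_filter(observations, prior=np.array([0.5, 0.5])):
    """Recursive Bayes filter: predict with P, correct with the likelihood, renormalise."""
    pi = prior.copy()
    history = []
    for y in observations:
        pi = pi @ P                       # prediction step
        pi = pi * gauss_pdf(y, means)     # correction by observation likelihood
        pi = pi / pi.sum()                # normalisation
        history.append(pi.copy())
    return np.array(history)

posteriors = hmm_filter(np.array([-1.1, -0.9, 1.0, 1.2]))
```

Each row of `posteriors` is the filtered distribution over the hidden state after the corresponding observation; here the filter locks onto state 0 for the negative observations and switches to state 1 for the positive ones.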
48

Sarno, Riyanarto, and Kelly R. Sungkono. "Coupled Hidden Markov Model for Process Mining of Invisible Prime Tasks." International Review on Computers and Software (IRECOS) 11, no. 6 (June 30, 2016): 539. http://dx.doi.org/10.15866/irecos.v11i6.9555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Sun, Shiliang, Jing Zhao, and Qingbin Gao. "Modeling and recognizing human trajectories with beta process hidden Markov models." Pattern Recognition 48, no. 8 (August 2015): 2407–17. http://dx.doi.org/10.1016/j.patcog.2015.02.028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Bayraktar, Erhan, and Michael Ludkovski. "Sequential tracking of a hidden Markov chain using point process observations." Stochastic Processes and their Applications 119, no. 6 (June 2009): 1792–822. http://dx.doi.org/10.1016/j.spa.2008.09.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles