A selection of scholarly literature on the topic "Hidden Markov models indexed by trees"

Generate citations in APA, MLA, Chicago, Harvard, and other styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Hidden Markov models indexed by trees".

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Hidden Markov models indexed by trees"

1

Huang, Huilin. "Strong Law of Large Numbers for Hidden Markov Chains Indexed by Cayley Trees." ISRN Probability and Statistics 2012 (September 23, 2012): 1–11. http://dx.doi.org/10.5402/2012/768657.

Abstract:
We extend the idea of hidden Markov chains on lines to the situation of hidden Markov chains indexed by Cayley trees. Then, we study the strong law of large numbers for hidden Markov chains indexed by Cayley trees. As a corollary, we get the strong limit law of the conditional sample entropy rate.
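The object studied in this first entry — a hidden Markov chain indexed by a Cayley tree, with hidden states propagating from parent to children and an observation emitted at every vertex — is straightforward to simulate. A minimal sketch, not taken from the paper; the binary branching, the transition and emission matrices, and the seed are all illustrative assumptions:

```python
import random

def simulate_tree_hmm(depth, d, P, Q, root_state=0, seed=0):
    """Simulate a hidden Markov chain indexed by a rooted d-ary tree.

    Hidden states propagate from each vertex to its d children via the
    transition matrix P; every vertex emits a symbol via the emission
    matrix Q. Returns hidden states and observations in BFS order.
    """
    rng = random.Random(seed)

    def draw(dist):
        # Sample an index from a discrete distribution given as a list.
        u, acc = rng.random(), 0.0
        for i, p in enumerate(dist):
            acc += p
            if u < acc:
                return i
        return len(dist) - 1

    states = [root_state]
    obs = [draw(Q[root_state])]
    frontier = [root_state]
    for _ in range(depth):
        nxt = []
        for s in frontier:
            for _ in range(d):
                c = draw(P[s])          # child's hidden state
                nxt.append(c)
                states.append(c)
                obs.append(draw(Q[c]))  # child's observation
        frontier = nxt
    return states, obs

P = [[0.8, 0.2], [0.3, 0.7]]  # hidden-state transitions along tree edges
Q = [[0.9, 0.1], [0.2, 0.8]]  # emission probabilities
states, obs = simulate_tree_hmm(depth=8, d=2, P=P, Q=Q)
# The strong law in the paper concerns averages over the growing tree:
freq1 = sum(states) / len(states)
print(len(states), round(freq1, 3))
```

The strong law of large numbers proved in the paper describes the almost-sure behaviour of averages such as `freq1` as the tree depth tends to infinity.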
2

Milone, Diego H., Leandro E. Di Persia, and María E. Torres. "Denoising and recognition using hidden Markov models with observation distributions modeled by hidden Markov trees." Pattern Recognition 43, no. 4 (April 2010): 1577–89. http://dx.doi.org/10.1016/j.patcog.2009.11.010.

3

ANIGBOGU, J. C., and A. BELAÏD. "HIDDEN MARKOV MODELS IN TEXT RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 06 (December 1995): 925–58. http://dx.doi.org/10.1142/s0218001495000389.

Abstract:
A multi-level multifont character recognition system is presented. The system proceeds by first delimiting the context of the characters. As a way of enhancing system performance, typographical information is extracted and used for font identification before actual character recognition is performed. This has the advantage of reliable character identification as well as text reproduction in its original form. The font identification is based on decision trees where the characters are automatically arranged differently in confusion classes according to the physical characteristics of fonts. The character recognizers are built around first- and second-order hidden Markov models (HMMs) as well as Euclidean distance measures. The HMMs use the Viterbi and Extended Viterbi algorithms, to which enhancements were made. Also present is a majority-vote system that polls the other systems for “advice” before deciding on the identity of a character. Among other things, this last system is shown to give better results than each of the other systems applied individually. The system finally uses combinations of stochastic and dictionary verification methods for word recognition and error-correction.
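The recognizers described above are built around first- and second-order HMMs decoded with the Viterbi algorithm. For reference, a generic textbook sketch of first-order Viterbi decoding in log space; the toy parameters are illustrative assumptions, not values from the paper:

```python
import math

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path of a first-order HMM (log space).

    obs: observation indices; pi: initial state probabilities;
    A[i][j]: transition probability i -> j; B[i][o]: emission probability.
    """
    n = len(pi)
    logp = [math.log(pi[s]) + math.log(B[s][obs[0]]) for s in range(n)]
    back = []
    for o in obs[1:]:
        prev, logp, ptr = logp, [], []
        for j in range(n):
            best = max(range(n), key=lambda i: prev[i] + math.log(A[i][j]))
            logp.append(prev[best] + math.log(A[best][j]) + math.log(B[j][o]))
            ptr.append(best)
        back.append(ptr)
    # Backtrack from the best final state.
    path = [max(range(n), key=lambda s: logp[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy two-state model in which observations strongly indicate the state.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.9, 0.1], [0.1, 0.9]]
print(viterbi([0, 0, 1, 1, 1], pi, A, B))  # → [0, 0, 1, 1, 1]
```

The Extended Viterbi variant mentioned in the abstract returns several high-probability candidate paths rather than the single best one.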
4

Narayana, Pradyumna, J. Ross Beveridge, and Bruce A. Draper. "Interacting Hidden Markov Models for Video Understanding." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 11 (July 24, 2018): 1855020. http://dx.doi.org/10.1142/s0218001418550200.

Abstract:
People, cars and other moving objects in videos generate time series data that can be labeled in many ways. For example, classifiers can label motion tracks according to the object type, the action being performed, or the trajectory of the motion. These labels can be generated for every frame as long as the object stays in view, so object tracks can be modeled as Markov processes with multiple noisy observation streams. A challenge in video recognition is to recover the true state of the track (i.e. its class, action and trajectory) using Markov models without (a) counter-factually assuming that the streams are independent or (b) creating a fully coupled Hidden Markov Model (FCHMM) with an infeasibly large state space. This paper introduces a new method for labeling sequences of hidden states. The method exploits external consistency constraints among streams without modeling complex joint distributions between them. For example, common sense semantics suggest that trees cannot walk. This is an example of an external constraint between an object label (“tree”) and an action label (“walk”). The key to exploiting external constraints is a new variation of the Viterbi algorithm which we call the Viterbi–Segre (VS) algorithm. VS restricts the solution spaces of factorized HMMs to marginal distributions that are compatible with joint distributions satisfying sets of external constraints. Experiments on synthetic data show that VS does a better job of estimating true states with the given observations than the traditional Viterbi algorithm applied to (a) factorized HMMs, (b) FCHMMs, or (c) partially-coupled HMMs that model pairwise dependencies. We then show that VS outperforms factorized and pairwise HMMs on real video data sets for which FCHMMs cannot feasibly be trained.
5

Fredes, Luis, and Jean-François Marckert. "Invariant measures of interacting particle systems: Algebraic aspects." ESAIM: Probability and Statistics 24 (2020): 526–80. http://dx.doi.org/10.1051/ps/2020008.

Abstract:
Consider a continuous time particle system ηt = (ηt(k), k ∈ 𝕃), indexed by a lattice 𝕃 which will be either ℤ, ℤ∕nℤ, a segment {1, ⋯ , n}, or ℤd, and taking its values in the set Eκ𝕃 where Eκ = {0, ⋯ , κ − 1} for some fixed κ ∈{∞, 2, 3, ⋯ }. Assume that the Markovian evolution of the particle system (PS) is driven by some translation invariant local dynamics with bounded range, encoded by a jump rate matrix ⊤. These are standard settings, satisfied by the TASEP, the voter models, the contact processes. The aim of this paper is to provide some sufficient and/or necessary conditions on the matrix ⊤ so that this Markov process admits some simple invariant distribution, as a product measure (if 𝕃 is any of the spaces mentioned above), the law of a Markov process indexed by ℤ or [1, n] ∩ ℤ (if 𝕃 = ℤ or {1, …, n}), or a Gibbs measure if 𝕃 = ℤ/nℤ. Multiple applications follow: efficient ways to find invariant Markov laws for a given jump rate matrix or to prove that none exists. The voter models and the contact processes are shown not to possess any Markov laws as invariant distribution (for any memory m). (As usual, a random process X indexed by ℤ or ℕ is said to be a Markov chain with memory m ∈ {0, 1, 2, ⋯ } if ℙ(Xk ∈ A | Xk−i, i ≥ 1) = ℙ(Xk ∈ A | Xk−i, 1 ≤ i ≤ m), for any k.) We also prove that some models close to these models do. We exhibit PS admitting hidden Markov chains as invariant distribution and design many PS on ℤ2, with jump rates indexed by 2 × 2 squares, admitting product invariant measures.
6

Durand, J. B., P. Goncalves, and Y. Guedon. "Computational Methods for Hidden Markov Tree Models—An Application to Wavelet Trees." IEEE Transactions on Signal Processing 52, no. 9 (September 2004): 2551–60. http://dx.doi.org/10.1109/tsp.2004.832006.

7

Tso, Brandt, and Joe L. Tseng. "Multi-resolution semantic-based imagery retrieval using hidden Markov models and decision trees." Expert Systems with Applications 37, no. 6 (June 2010): 4425–34. http://dx.doi.org/10.1016/j.eswa.2009.11.086.

8

Do, M. N. "Fast approximation of Kullback-Leibler distance for dependence trees and hidden Markov models." IEEE Signal Processing Letters 10, no. 4 (April 2003): 115–18. http://dx.doi.org/10.1109/lsp.2003.809034.

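Do's letter derives a fast closed-form approximation of the Kullback–Leibler distance; the quantity being approximated can also be estimated, slowly, by plain Monte Carlo with the forward algorithm. A naive baseline sketch under assumed toy parameters — this is not the paper's method:

```python
import math
import random

def sample_hmm(T, pi, A, B, rng):
    """Draw one length-T observation sequence from an HMM."""
    def draw(dist):
        u, acc = rng.random(), 0.0
        for i, p in enumerate(dist):
            acc += p
            if u < acc:
                return i
        return len(dist) - 1
    s = draw(pi)
    seq = [draw(B[s])]
    for _ in range(T - 1):
        s = draw(A[s])
        seq.append(draw(B[s]))
    return seq

def loglik(obs, pi, A, B):
    """log p(obs) via the scaled forward algorithm."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    ll = 0.0
    for o in obs[1:]:
        c = sum(alpha)
        ll += math.log(c)
        alpha = [a / c for a in alpha]           # rescale to avoid underflow
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return ll + math.log(sum(alpha))

def kl_rate_mc(hmm1, hmm2, T=200, n_seq=50, seed=0):
    """Monte Carlo estimate of the per-symbol KL rate
    (1/T) E[log p1(O) - log p2(O)], with O drawn from hmm1."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(n_seq):
        o = sample_hmm(T, *hmm1, rng=rng)
        tot += loglik(o, *hmm1) - loglik(o, *hmm2)
    return tot / (n_seq * T)

h1 = ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.2, 0.8]])
h2 = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]])
print(kl_rate_mc(h1, h1))      # exactly 0.0 by construction
print(kl_rate_mc(h1, h2) > 0)  # True: h2 is an i.i.d. uniform source
```

The appeal of a closed-form approximation is precisely that it avoids this sampling cost and variance.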
9

Maua, D. D., C. P. De Campos, A. Benavoli, and A. Antonucci. "Probabilistic Inference in Credal Networks: New Complexity Results." Journal of Artificial Intelligence Research 50 (July 28, 2014): 603–37. http://dx.doi.org/10.1613/jair.4355.

Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove a similar result for inference in naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and naive Bayes structures are used in real applications of imprecise probability.
10

Segers, Johan. "One- versus multi-component regular variation and extremes of Markov trees." Advances in Applied Probability 52, no. 3 (September 2020): 855–78. http://dx.doi.org/10.1017/apr.2020.22.

Abstract:
A Markov tree is a random vector indexed by the nodes of a tree whose distribution is determined by the distributions of pairs of neighbouring variables and a list of conditional independence relations. Upon an assumption on the tails of the Markov kernels associated to these pairs, the conditional distribution of the self-normalized random vector when the variable at the root of the tree tends to infinity converges weakly to a random vector of coupled random walks called a tail tree. If, in addition, the conditioning variable has a regularly varying tail, the Markov tree satisfies a form of one-component regular variation. Changing the location of the root, that is, changing the conditioning variable, yields a different tail tree. When the tails of the marginal distributions of the conditioning variables are balanced, these tail trees are connected by a formula that generalizes the time change formula for regularly varying stationary time series. The formula is most easily understood when the various one-component regular variation statements are tied up into a single multi-component statement. The theory of multi-component regular variation is worked out for general random vectors, not necessarily Markov trees, with an eye towards other models, graphical or otherwise.

Dissertations on the topic "Hidden Markov models indexed by trees"

1

Weibel, Julien. "Graphons de probabilités, limites de graphes pondérés aléatoires et chaînes de Markov branchantes cachées." Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1031.

Abstract:
Graphs are mathematical objects used to model all kinds of networks, such as electrical networks, communication networks, and social networks. Formally, a graph consists of a set of vertices and a set of edges connecting pairs of vertices. The vertices represent, for example, individuals, while the edges represent the interactions between these individuals. In the case of a weighted graph, each edge has a weight or a decoration that can model a distance, an interaction intensity, or a resistance. Modeling real-world networks often involves large graphs with a large number of vertices and edges. The first part of this thesis is dedicated to introducing and studying the properties of the limit objects of large weighted graphs: probability-graphons. These objects are a generalization of the graphons introduced and studied by Lovász and his co-authors in the case of unweighted graphs. Starting from a distance that induces the weak topology on measures, we define a cut distance on probability-graphons. We exhibit a tightness criterion for probability-graphons related to relative compactness in the cut distance. Finally, we prove that this topology coincides with the topology induced by the convergence in distribution of the sampled subgraphs. In the second part of this thesis, we focus on hidden Markov models indexed by trees. We show the strong consistency and asymptotic normality of the maximum likelihood estimator for these models under standard assumptions. We prove an ergodic theorem for branching Markov chains indexed by trees with general shapes. Finally, we show that for a stationary and reversible chain, the line graph is the tree shape that induces the minimal variance for the empirical mean estimator among trees with a given number of vertices.
2

Lanka, Venkata Raghava Ravi Teja. "VEHICLE RESPONSE PREDICTION USING PHYSICAL AND MACHINE LEARNING MODELS." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511891682062084.

3

Guo, Jia-Liang (郭家良). "Process Discovery using Rule-Integrated Trees Hidden Semi-Markov Models." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/456975.

Abstract:
Master's thesis, National Sun Yat-sen University, Department of Information Management, academic year 105 (2016–2017).
To predict or to explain? With the dramatic growth of the volume of information generated by various information systems, data science has become popular and important in recent years, while machine learning algorithms provide a very strong support and foundation for various data applications. Many data applications are based on black-box models. For example, a fraud detection system can predict which person will default, but we cannot understand how the system decides that it is fraud. White-box models, in contrast, are easy to understand but have relatively poor predictive performance. Hence, in this thesis, we propose a novel grafted-tree algorithm to integrate the trees of a random forest. The model attempts to find a balance between a decision tree and a random forest; that is, the grafted tree has better interpretability and performance than a single decision tree. The decision tree integrated from a random forest is then applied to hidden semi-Markov models (HSMM) to build a Classification Tree Hidden Semi-Markov Model (CTHSMM) in order to discover underlying changes of a system. The experimental results show that our proposed RITHSMM model is better than a simple decision tree based on Classification and Regression Trees, and that it can find more states/leaves so as to answer questions of the kind: given an observable sequence, what is the most probable/relevant sequence of changes of a dynamic system?
4

Tu, Cheng-En (杜承恩). "Mandarin Tone Recognition based on Decision Trees and Hidden Markov Models." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/74449857537411484291.


Book chapters on the topic "Hidden Markov models indexed by trees"

1

Oswald, Julie N., Christine Erbe, William L. Gannon, Shyam Madhusudhana, and Jeanette A. Thomas. "Detection and Classification Methods for Animal Sounds." In Exploring Animal Behavior Through Sound: Volume 1, 269–317. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97540-1_8.

Abstract:
Classification of the acoustic repertoires of animals into sound types is a useful tool for taxonomic studies, behavioral studies, and for documenting the occurrence of animals. Classification of acoustic repertoires enables the identification of species, age, gender, and individual identity, correlations between sound types and behavior, the identification of changes in vocal behavior over time or in response to anthropogenic noise, comparisons between the repertoires of populations living in different geographic regions and environments, and the development of software tools for automated signal processing. Techniques for classification have evolved over time as technical capabilities have expanded. Initially, researchers applied qualitative methods, such as listening and visually discerning sounds in spectrograms. Advances in computer technology and the development of software for the automatic detection and classification of sounds have allowed bioacousticians to quickly find sounds in recordings, thus significantly reducing analysis time and enabling the analysis of larger datasets. In this chapter, we present software algorithms for automated signal detection (based on energy, Teager–Kaiser energy, spectral entropy, matched filtering, and spectrogram cross-correlation) as well as for signal classification (e.g., parametric clustering, principal component analysis, discriminant function analysis, classification trees, artificial neural networks, random forests, Gaussian mixture models, support vector machines, dynamic time-warping, and hidden Markov models). Methods for evaluating the performance of automated tools are presented (i.e., receiver operating characteristics and precision-recall) and challenges with classifying animal sounds are discussed.
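The simplest of the detection schemes listed in this chapter, energy detection, fits in a few lines. A minimal sketch with made-up frame length, threshold, and test signal — the chapter's actual tools are considerably more elaborate:

```python
def energy_detector(signal, frame_len, threshold):
    """Flag non-overlapping frames whose mean squared amplitude
    exceeds a threshold (plain energy detection)."""
    hits = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        hits.append(energy > threshold)
    return hits

# Synthetic recording: silence, a loud burst, then silence again.
sig = [0.01] * 100 + [0.8, -0.7, 0.9, -0.8] * 25 + [0.02] * 100
print(energy_detector(sig, frame_len=100, threshold=0.1))  # → [False, True, False]
```

The more robust detectors surveyed in the chapter (Teager–Kaiser energy, spectral entropy, matched filtering, spectrogram cross-correlation) refine this same frame-by-frame decision.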
2

Elakya, R., S. Surya, G. Abinaya, S. Shanthana, and T. Manoranjitham. "Unveiling the Depths." In Advances in Environmental Engineering and Green Technologies, 279–92. IGI Global, 2024. https://doi.org/10.4018/979-8-3693-6670-7.ch013.

Abstract:
Deep-sea ecosystems, harbouring extraordinary biodiversity and geological wonders, remain largely unexplored due to their remote and inhospitable nature. Recent technological advancements, particularly in the realm of machine learning (ML), offer promising avenues for unravelling the mysteries of the deep and safeguarding its fragile ecosystems. This abstract presents an overview of the manifold applications of ML in deep-sea ecosystem analysis, focusing on key areas such as species identification, habitat characterization, and event detection. Primarily, ML algorithms, including Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs), and Random Forests, are deployed for automated species identification, leveraging vast repositories of underwater imagery. By discerning intricate visual patterns, these algorithms facilitate rapid biodiversity assessment and aid in monitoring shifts in species distributions. Moreover, ML techniques, such as Decision Trees, K-Nearest Neighbour (KNN), and deep learning architectures, are instrumental in habitat classification endeavours. Through the analysis of multifaceted sensor data, these algorithms enable the delineation of distinct seafloor habitats, empowering conservationists with invaluable insights into ecosystem structure and function. Furthermore, ML-based anomaly detection algorithms, Hidden Markov Models (HMMs), and reinforcement learning (RL) strategies contribute to event detection efforts in the deep sea. By scrutinizing temporal data streams from underwater sensors, these algorithms detect aberrations indicative of natural phenomena or anthropogenic disturbances, thus facilitating timely interventions and mitigative actions. In conclusion, the integration of ML techniques into deep-sea ecosystem analysis represents a paradigm shift in our approach to understanding and preserving these enigmatic realms. 
By automating labor-intensive tasks, enhancing data processing capabilities, and fostering real-time monitoring, ML empowers scientists and conservationists to unlock the secrets of the deep and embark on a journey towards sustainable stewardship of these invaluable ecosystems.

Conference papers on the topic "Hidden Markov models indexed by trees"

1

Jin, Shaohua, Yongxue Wang, Huitao Liu, Ying Tian, and Hui Li. "Some Strong Limit Theorems for Hidden Markov Models Indexed by a Non-homogeneous Tree." In 2010 Third International Symposium on Intelligent Information Technology and Security Informatics (IITSI). IEEE, 2010. http://dx.doi.org/10.1109/iitsi.2010.68.

2

Milone, Diego H., Diego R. Tomassi, and Leandro E. Di Persia. "Signal denoising with hidden Markov models using hidden Markov trees as observation densities." In 2008 IEEE Workshop on Signal Processing for Machine Learning. IEEE, 2008. http://dx.doi.org/10.1109/mlsp.2008.4685509.

3

Lacey, Arron, Jingjing Deng, and Xianghua Xie. "Protein classification using Hidden Markov models and randomised decision trees." In 2014 7th International Conference on Biomedical Engineering and Informatics (BMEI). IEEE, 2014. http://dx.doi.org/10.1109/bmei.2014.7002856.

