A selection of scholarly literature on the topic "Hidden Markov models indexed by trees"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Hidden Markov models indexed by trees".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Hidden Markov models indexed by trees"

1

Huang, Huilin. "Strong Law of Large Numbers for Hidden Markov Chains Indexed by Cayley Trees". ISRN Probability and Statistics 2012 (September 23, 2012): 1–11. http://dx.doi.org/10.5402/2012/768657.

Full text of the source
Annotation:
We extend the idea of hidden Markov chains on lines to the situation of hidden Markov chains indexed by Cayley trees. Then, we study the strong law of large numbers for hidden Markov chains indexed by Cayley trees. As a corollary, we get the strong limit law of the conditional sample entropy rate.
APA, Harvard, Vancouver, ISO, and other citation styles
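To make the setting of this entry concrete, the following is a minimal sketch, in generic notation, of a hidden Markov chain indexed by a rooted Cayley tree and of the general shape such a strong law of large numbers takes; the symbols (transition matrix p, emission kernel q, level sets T(n), averaged function f) are illustrative assumptions and not the paper's exact statement.

% Sketch only: generic notation for a tree-indexed hidden Markov chain, not the paper's theorem.
% T is a rooted Cayley tree, (X_t) the hidden chain, (Y_t) the observations attached to the nodes.
\[
  \mathbb{P}\big(X_t = j \mid X_{\mathrm{parent}(t)} = i\big) = p_{ij}, \qquad
  \mathbb{P}\big(Y_t = y \mid X_t = j\big) = q_j(y).
\]
% A strong law of large numbers in this setting asserts that empirical averages over the
% subtree T^{(n)} of the first n generations converge almost surely to a deterministic limit:
\[
  \frac{1}{\lvert T^{(n)} \rvert} \sum_{t \in T^{(n)}} f\big(X_t, Y_t\big)
  \;\xrightarrow[\;n \to \infty\;]{\text{a.s.}}\; \mu_f .
\]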
2

Milone, Diego H., Leandro E. Di Persia, and María E. Torres. "Denoising and recognition using hidden Markov models with observation distributions modeled by hidden Markov trees". Pattern Recognition 43, no. 4 (April 2010): 1577–89. http://dx.doi.org/10.1016/j.patcog.2009.11.010.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

ANIGBOGU, J. C., and A. BELAÏD. "HIDDEN MARKOV MODELS IN TEXT RECOGNITION". International Journal of Pattern Recognition and Artificial Intelligence 9, no. 6 (December 1995): 925–58. http://dx.doi.org/10.1142/s0218001495000389.

Full text of the source
Annotation:
A multi-level, multifont character recognition system is presented. The system proceeds by first delimiting the context of the characters. As a way of enhancing system performance, typographical information is extracted and used for font identification before actual character recognition is performed. This has the advantage of reliable character identification as well as text reproduction in its original form. The font identification is based on decision trees where the characters are automatically arranged differently in confusion classes according to the physical characteristics of fonts. The character recognizers are built around first- and second-order hidden Markov models (HMMs) as well as Euclidean distance measures. The HMMs use the Viterbi and the Extended Viterbi algorithms, to which enhancements were made. Also present is a majority-vote system that polls the other systems for “advice” before deciding on the identity of a character. Among other things, this last system is shown to give better results than each of the other systems applied individually. The system finally uses combinations of stochastic and dictionary verification methods for word recognition and error correction.
APA, Harvard, Vancouver, ISO, and other citation styles
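Since this entry's recognizers are built around first-order HMMs decoded with the Viterbi algorithm, a generic log-space Viterbi decoder is sketched below for orientation; it is a standard textbook formulation with assumed variable names and toy dimensions, not the extended Viterbi variants developed in the paper.

import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden state sequence for a first-order HMM (log-space).

    log_pi : (S,) log initial state probabilities
    log_A  : (S, S) log transition probabilities, log_A[i, j] = log P(j | i)
    log_B  : (S, O) log emission probabilities,   log_B[i, o] = log P(o | i)
    obs    : sequence of observation indices
    """
    S = log_pi.shape[0]
    T = len(obs)
    delta = np.empty((T, S))             # best log-probability of a path ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers

    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (S, S): score of moving from state i to state j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]

    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path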
4

Narayana, Pradyumna, J. Ross Beveridge, and Bruce A. Draper. "Interacting Hidden Markov Models for Video Understanding". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 11 (July 24, 2018): 1855020. http://dx.doi.org/10.1142/s0218001418550200.

Full text of the source
Annotation:
People, cars and other moving objects in videos generate time series data that can be labeled in many ways. For example, classifiers can label motion tracks according to the object type, the action being performed, or the trajectory of the motion. These labels can be generated for every frame as long as the object stays in view, so object tracks can be modeled as Markov processes with multiple noisy observation streams. A challenge in video recognition is to recover the true state of the track (i.e. its class, action and trajectory) using Markov models without (a) counter-factually assuming that the streams are independent or (b) creating a fully coupled Hidden Markov Model (FCHMM) with an infeasibly large state space. This paper introduces a new method for labeling sequences of hidden states. The method exploits external consistency constraints among streams without modeling complex joint distributions between them. For example, common sense semantics suggest that trees cannot walk. This is an example of an external constraint between an object label (“tree”) and an action label (“walk”). The key to exploiting external constraints is a new variation of the Viterbi algorithm which we call the Viterbi–Segre (VS) algorithm. VS restricts the solution spaces of factorized HMMs to marginal distributions that are compatible with joint distributions satisfying sets of external constraints. Experiments on synthetic data show that VS does a better job of estimating true states with the given observations than the traditional Viterbi algorithm applied to (a) factorized HMMs, (b) FCHMMs, or (c) partially-coupled HMMs that model pairwise dependencies. We then show that VS outperforms factorized and pairwise HMMs on real video data sets for which FCHMMs cannot feasibly be trained.
APA, Harvard, Vancouver, ISO, and other citation styles
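The external-constraint idea in the annotation (e.g., an object labeled "tree" cannot perform the action "walk") can be pictured as a compatibility mask over pairs of labels from two streams. The toy sketch below shows only that masking step on a single frame's scores; the label sets, scores and forbidden pairs are hypothetical, and this is not the Viterbi–Segre algorithm itself, which operates on whole factorized HMMs rather than single frames.

import numpy as np

# Hypothetical label sets for two observation streams.
objects = ["person", "car", "tree"]
actions = ["walk", "drive", "stand"]

# External consistency constraints: label pairs that common sense rules out.
forbidden = {("tree", "walk"), ("tree", "drive")}

# Compatibility mask over the joint label space (True = allowed).
mask = np.array([[(o, a) not in forbidden for a in actions] for o in objects])

# Per-stream scores from independent classifiers (illustrative numbers).
obj_scores = np.array([0.2, 0.1, 0.7])   # favours "tree"
act_scores = np.array([0.6, 0.1, 0.3])   # favours "walk"

# Joint scores under the independence assumption, then restricted by the mask.
joint = np.outer(obj_scores, act_scores)
joint[~mask] = 0.0

i, j = np.unravel_index(joint.argmax(), joint.shape)
print(objects[i], actions[j])   # the constraint forces a consistent pair, here "tree stand"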
5

Fredes, Luis, and Jean-François Marckert. "Invariant measures of interacting particle systems: Algebraic aspects". ESAIM: Probability and Statistics 24 (2020): 526–80. http://dx.doi.org/10.1051/ps/2020008.

Full text of the source
Annotation:
Consider a continuous time particle system ηt = (ηt(k), k ∈ 𝕃), indexed by a lattice 𝕃 which will be either ℤ, ℤ∕nℤ, a segment {1, ⋯ , n}, or ℤd, and taking its values in the set Eκ𝕃 where Eκ = {0, ⋯ , κ − 1} for some fixed κ ∈{∞, 2, 3, ⋯ }. Assume that the Markovian evolution of the particle system (PS) is driven by some translation invariant local dynamics with bounded range, encoded by a jump rate matrix ⊤. These are standard settings, satisfied by the TASEP, the voter models, the contact processes. The aim of this paper is to provide some sufficient and/or necessary conditions on the matrix ⊤ so that this Markov process admits some simple invariant distribution, as a product measure (if 𝕃 is any of the spaces mentioned above), the law of a Markov process indexed by ℤ or [1, n] ∩ ℤ (if 𝕃 = ℤ or {1, …, n}), or a Gibbs measure if 𝕃 = ℤ/nℤ. Multiple applications follow: efficient ways to find invariant Markov laws for a given jump rate matrix or to prove that none exists. The voter models and the contact processes are shown not to possess any Markov laws as invariant distribution (for any memory m). (As usual, a random process X indexed by ℤ or ℕ is said to be a Markov chain with memory m ∈ {0, 1, 2, ⋯ } if ℙ(Xk ∈ A | Xk−i, i ≥ 1) = ℙ(Xk ∈ A | Xk−i, 1 ≤ i ≤ m), for any k.) We also prove that some models close to these models do. We exhibit PS admitting hidden Markov chains as invariant distribution and design many PS on ℤ2, with jump rates indexed by 2 × 2 squares, admitting product invariant measures.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Durand, J. B., P. Goncalves, and Y. Guedon. "Computational Methods for Hidden Markov Tree Models—An Application to Wavelet Trees". IEEE Transactions on Signal Processing 52, no. 9 (September 2004): 2551–60. http://dx.doi.org/10.1109/tsp.2004.832006.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Tso, Brandt, and Joe L. Tseng. "Multi-resolution semantic-based imagery retrieval using hidden Markov models and decision trees". Expert Systems with Applications 37, no. 6 (June 2010): 4425–34. http://dx.doi.org/10.1016/j.eswa.2009.11.086.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Do, M. N. "Fast approximation of Kullback-Leibler distance for dependence trees and hidden Markov models". IEEE Signal Processing Letters 10, no. 4 (April 2003): 115–18. http://dx.doi.org/10.1109/lsp.2003.809034.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Maua, D. D., C. P. De Campos, A. Benavoli, and A. Antonucci. "Probabilistic Inference in Credal Networks: New Complexity Results". Journal of Artificial Intelligence Research 50 (July 28, 2014): 603–37. http://dx.doi.org/10.1613/jair.4355.

Full text of the source
Annotation:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove a similar result for inference in naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and naive Bayes structures are used in real applications of imprecise probability.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Segers, Johan. "One- versus multi-component regular variation and extremes of Markov trees". Advances in Applied Probability 52, no. 3 (September 2020): 855–78. http://dx.doi.org/10.1017/apr.2020.22.

Full text of the source
Annotation:
A Markov tree is a random vector indexed by the nodes of a tree whose distribution is determined by the distributions of pairs of neighbouring variables and a list of conditional independence relations. Upon an assumption on the tails of the Markov kernels associated to these pairs, the conditional distribution of the self-normalized random vector when the variable at the root of the tree tends to infinity converges weakly to a random vector of coupled random walks called a tail tree. If, in addition, the conditioning variable has a regularly varying tail, the Markov tree satisfies a form of one-component regular variation. Changing the location of the root, that is, changing the conditioning variable, yields a different tail tree. When the tails of the marginal distributions of the conditioning variables are balanced, these tail trees are connected by a formula that generalizes the time change formula for regularly varying stationary time series. The formula is most easily understood when the various one-component regular variation statements are tied up into a single multi-component statement. The theory of multi-component regular variation is worked out for general random vectors, not necessarily Markov trees, with an eye towards other models, graphical or otherwise.
APA, Harvard, Vancouver, ISO, and other citation styles
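In schematic notation (a paraphrase of the annotation under assumed standardization, not the paper's exact formulation), the one-component statement for a Markov tree X = (X_v)_{v in V} rooted at a node r looks roughly as follows.

% Schematic only: conditional convergence to the tail tree, plus regular variation at the root.
\[
  \mathcal{L}\!\left( \Big( \tfrac{X_v}{X_r} \Big)_{v \in V} \,\Big|\, X_r > t \right)
  \;\xrightarrow[\;t \to \infty\;]{w}\; \mathcal{L}\big( (\Theta_v)_{v \in V} \big),
  \qquad
  \mathbb{P}(X_r > t) = t^{-\alpha} L(t),
\]
% with L slowly varying. The limit vector (Theta_v) is the "tail tree" rooted at r; choosing a
% different root gives a different tail tree, the versions being linked by a time-change-type formula.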

Dissertations on the topic "Hidden Markov models indexed by trees"

1

Weibel, Julien. "Graphons de probabilités, limites de graphes pondérés aléatoires et chaînes de Markov branchantes cachées". Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1031.

Full text of the source
Annotation:
Graphs are mathematical objects used to model all kinds of networks, such as electrical networks, communication networks, and social networks. Formally, a graph consists of a set of vertices and a set of edges connecting pairs of vertices. The vertices represent, for example, individuals, while the edges represent the interactions between these individuals. In the case of a weighted graph, each edge has a weight or a decoration that can model a distance, an interaction intensity, or a resistance. Modeling real-world networks often involves large graphs with a large number of vertices and edges. The first part of this thesis is dedicated to introducing and studying the properties of the limit objects of large weighted graphs: probability-graphons. These objects are a generalization of graphons, introduced and studied by Lovász and his co-authors in the case of unweighted graphs. Starting from a distance that induces the weak topology on measures, we define a cut distance on probability-graphons. We exhibit a tightness criterion for probability-graphons related to relative compactness in the cut distance. Finally, we prove that this topology coincides with the topology induced by the convergence in distribution of the sampled subgraphs. In the second part of this thesis, we focus on hidden Markov models indexed by trees. We show the strong consistency and asymptotic normality of the maximum likelihood estimator for these models under standard assumptions. We prove an ergodic theorem for branching Markov chains indexed by trees with general shapes. Finally, we show that for a stationary and reversible chain, the line graph is the tree shape that induces the minimal variance for the empirical mean estimator among trees with a given number of vertices.
APA, Harvard, Vancouver, ISO, and other citation styles
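For orientation, the objective maximized by the estimator studied in this thesis — the likelihood of a hidden Markov model indexed by a rooted tree — can be evaluated by a leaves-to-root recursion. The sketch below is a minimal generic implementation assuming a finite state space, a parent-to-child transition matrix A, a discrete emission matrix B and a tree given as child lists; none of these names or modelling choices come from the thesis, and no underflow protection (log-space scaling) is included.

import numpy as np

def tree_hmm_loglik(pi, A, B, obs, children, root=0):
    """Log-likelihood of a hidden Markov model indexed by a rooted tree.

    pi       : (S,) distribution of the root's hidden state
    A        : (S, S) parent-to-child transition matrix, A[i, j] = P(child state j | parent state i)
    B        : (S, O) emission matrix, B[i, o] = P(observation o | state i)
    obs      : (N,) observed symbol index at each node
    children : list of lists, children[v] = indices of the children of node v
    """
    N, S = len(obs), pi.shape[0]
    beta = np.ones((N, S))  # beta[v, i] = P(observations in the subtree of v | X_v = i)

    # Visit nodes so that every child is processed before its parent (reverse of a DFS preorder).
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    for v in reversed(order):
        beta[v] = B[:, obs[v]]
        for c in children[v]:
            beta[v] *= A @ beta[c]   # sum over the child's hidden state

    return np.log(pi @ beta[root])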
2

Lanka, Venkata Raghava Ravi Teja. "VEHICLE RESPONSE PREDICTION USING PHYSICAL AND MACHINE LEARNING MODELS". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511891682062084.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Guo, Jia-Liang, and 郭家良. "Process Discovery using Rule-Integrated Trees Hidden Semi-Markov Models". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/456975.

Full text of the source
Annotation:
Master's thesis
National Sun Yat-sen University
Department of Information Management
Academic year 105 (ROC calendar)
To predict or to explain? With the dramatic growth of the volume of information generated by various information systems, data science has become popular and important in recent years, while machine learning algorithms provide a very strong support and foundation for various data applications. Many data applications are based on black-box models. For example, a fraud detection system can predict which person will default, but we cannot understand how the system decides that it is fraud. White-box models, by contrast, are easy to understand but have relatively poor predictive performance. Hence, in this thesis, we propose a novel grafted tree algorithm to integrate the trees of a random forest. The model attempts to find a balance between a decision tree and a random forest. That is, the grafted tree has better interpretability and performance than a single decision tree. The decision tree integrated from a random forest is then applied to hidden semi-Markov models (HSMM) to build a Classification Tree Hidden Semi-Markov Model (CTHSMM) in order to discover underlying changes of a system. The experimental results show that our proposed model RITHSMM is better than a simple decision tree based on Classification and Regression Trees, and it can find more states/leaves so as to answer questions of the form: given an observable sequence, what is the most probable/relevant sequence of changes of a dynamic system?
APA, Harvard, Vancouver, ISO, and other citation styles
4

Tu, Cheng-En, and 杜承恩. "Mandarin Tone Recognition based on Decision Trees and Hidden Markov Models". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/74449857537411484291.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "Hidden Markov models indexed by trees"

1

Oswald, Julie N., Christine Erbe, William L. Gannon, Shyam Madhusudhana, and Jeanette A. Thomas. "Detection and Classification Methods for Animal Sounds". In Exploring Animal Behavior Through Sound: Volume 1, 269–317. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97540-1_8.

Full text of the source
Annotation:
Classification of the acoustic repertoires of animals into sound types is a useful tool for taxonomic studies, behavioral studies, and for documenting the occurrence of animals. Classification of acoustic repertoires enables the identification of species, age, gender, and individual identity, correlations between sound types and behavior, the identification of changes in vocal behavior over time or in response to anthropogenic noise, comparisons between the repertoires of populations living in different geographic regions and environments, and the development of software tools for automated signal processing. Techniques for classification have evolved over time as technical capabilities have expanded. Initially, researchers applied qualitative methods, such as listening and visually discerning sounds in spectrograms. Advances in computer technology and the development of software for the automatic detection and classification of sounds have allowed bioacousticians to quickly find sounds in recordings, thus significantly reducing analysis time and enabling the analysis of larger datasets. In this chapter, we present software algorithms for automated signal detection (based on energy, Teager–Kaiser energy, spectral entropy, matched filtering, and spectrogram cross-correlation) as well as for signal classification (e.g., parametric clustering, principal component analysis, discriminant function analysis, classification trees, artificial neural networks, random forests, Gaussian mixture models, support vector machines, dynamic time-warping, and hidden Markov models). Methods for evaluating the performance of automated tools are presented (i.e., receiver operating characteristics and precision-recall) and challenges with classifying animal sounds are discussed.
APA, Harvard, Vancouver, ISO, and other citation styles
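As a concrete instance of the simplest detector family listed in this chapter's annotation (energy-based detection), here is a short self-contained sketch; the frame length, hop size, threshold rule and the synthetic test signal are illustrative assumptions, not parameters taken from the chapter.

import numpy as np

def energy_detector(signal, frame_len=1024, hop=512, threshold_db=6.0):
    """Flag frames whose short-time energy exceeds the median frame energy by a margin (in dB)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    energy_db = 10.0 * np.log10([np.mean(f ** 2) + 1e-12 for f in frames])
    noise_floor = np.median(energy_db)            # crude noise-floor estimate
    return np.flatnonzero(energy_db > noise_floor + threshold_db)

# Toy usage: a faint tone embedded in noise should trigger detections around its frames.
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(48_000)
x[20_000:24_000] += 0.2 * np.sin(2 * np.pi * 3000 * np.arange(4_000) / 48_000)
print(energy_detector(x))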
2

Elakya, R., S. Surya, Abinaya G., S. Shanthana, and T. Manoranjitham. "Unveiling the Depths". In Advances in Environmental Engineering and Green Technologies, 279–92. IGI Global, 2024. https://doi.org/10.4018/979-8-3693-6670-7.ch013.

Full text of the source
Annotation:
Deep-sea ecosystems, harbouring extraordinary biodiversity and geological wonders, remain largely unexplored due to their remote and inhospitable nature. Recent technological advancements, particularly in the realm of machine learning (ML), offer promising avenues for unravelling the mysteries of the deep and safeguarding its fragile ecosystems. This abstract presents an overview of the manifold applications of ML in deep-sea ecosystem analysis, focusing on key areas such as species identification, habitat characterization, and event detection. Primarily, ML algorithms, including Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs), and Random Forests, are deployed for automated species identification, leveraging vast repositories of underwater imagery. By discerning intricate visual patterns, these algorithms facilitate rapid biodiversity assessment and aid in monitoring shifts in species distributions. Moreover, ML techniques, such as Decision Trees, K-Nearest Neighbour (KNN), and deep learning architectures, are instrumental in habitat classification endeavours. Through the analysis of multifaceted sensor data, these algorithms enable the delineation of distinct seafloor habitats, empowering conservationists with invaluable insights into ecosystem structure and function. Furthermore, ML-based anomaly detection algorithms, Hidden Markov Models (HMMs), and reinforcement learning (RL) strategies contribute to event detection efforts in the deep sea. By scrutinizing temporal data streams from underwater sensors, these algorithms detect aberrations indicative of natural phenomena or anthropogenic disturbances, thus facilitating timely interventions and mitigative actions. In conclusion, the integration of ML techniques into deep-sea ecosystem analysis represents a paradigm shift in our approach to understanding and preserving these enigmatic realms. By automating labor-intensive tasks, enhancing data processing capabilities, and fostering real-time monitoring, ML empowers scientists and conservationists to unlock the secrets of the deep and embark on a journey towards sustainable stewardship of these invaluable ecosystems.
APA, Harvard, Vancouver, ISO, and other citation styles

Conference papers on the topic "Hidden Markov models indexed by trees"

1

Jin, Shaohua, Yongxue Wang, Huitao Liu, Ying Tian, and Hui Li. "Some Strong Limit Theorems for Hidden Markov Models Indexed by a Non-homogeneous Tree". In 2010 Third International Symposium on Intelligent Information Technology and Security Informatics (IITSI). IEEE, 2010. http://dx.doi.org/10.1109/iitsi.2010.68.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Milone, Diego H., Diego R. Tomassi, and Leandro E. Di Persia. "Signal denoising with hidden Markov models using hidden Markov trees as observation densities". In 2008 IEEE Workshop on Signal Processing for Machine Learning. IEEE, 2008. http://dx.doi.org/10.1109/mlsp.2008.4685509.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Lacey, Arron, Jingjing Deng, and Xianghua Xie. "Protein classification using Hidden Markov models and randomised decision trees". In 2014 7th International Conference on Biomedical Engineering and Informatics (BMEI). IEEE, 2014. http://dx.doi.org/10.1109/bmei.2014.7002856.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
