To view the other types of publications on this topic, follow the link: Bayesian non-Parametric model.

Dissertations on the topic "Bayesian non-Parametric model"

Consult the top 20 dissertations for your research on the topic "Bayesian non-Parametric model".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its abstract online, where these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Bartcus, Marius. „Bayesian non-parametric parsimonious mixtures for model-based clustering“. Thesis, Toulon, 2015. http://www.theses.fr/2015TOUL0010/document.

Abstract:
This thesis focuses on statistical learning and the analysis of multi-dimensional data, in particular on the unsupervised learning of generative models for model-based clustering. We study Gaussian mixture models, both in the context of maximum-likelihood estimation via the EM algorithm and in the Bayesian context of maximum-a-posteriori estimation via Markov chain Monte Carlo (MCMC) sampling. We mainly consider parsimonious mixture models, which are based on a spectral decomposition of the covariance matrix and provide a flexible framework, particularly for the analysis of high-dimensional data. We then investigate non-parametric Bayesian mixtures, which are based on general flexible processes such as the Dirichlet process and the Chinese restaurant process. This non-parametric formulation is relevant both for learning the model and for dealing with the difficult issue of model selection. We propose new Bayesian non-parametric parsimonious mixtures and derive an MCMC sampling technique in which the mixture model and its number of components are learned simultaneously from the data, with the model structure selected using Bayes factors. By their non-parametric and sparse formulation, these models are useful for analysing large data sets when the number of classes is undetermined and grows with the data, and when the dimension is high. The models are validated on simulated data and on standard real data sets, and then applied to a difficult real-world problem: the automatic structuring of complex bioacoustic data derived from whale song signals. Finally, we open Markovian perspectives via hierarchical Dirichlet process hidden Markov models.
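
As a rough illustration of the Dirichlet-process machinery this abstract describes, the sketch below implements a collapsed Gibbs sampler (Neal's Algorithm 3) for a DP mixture of unit-variance Gaussians, in which the number of clusters is inferred from the data through Chinese-restaurant-process seating probabilities. It is a minimal stand-in, not code from the dissertation; the conjugate N(0, tau2) prior on means, the value of alpha, and the toy data are illustrative assumptions, and the parsimonious covariance structure and Bayes-factor selection of the thesis are beyond its scope.

```python
import numpy as np

def crp_gibbs(x, alpha=1.0, tau2=4.0, iters=50, seed=0):
    """Collapsed Gibbs sampling (Neal's Algorithm 3) for a Dirichlet
    process mixture of unit-variance Gaussians with a N(0, tau2) prior
    on cluster means; the number of clusters is inferred, not fixed."""
    rng = np.random.default_rng(seed)
    z = np.zeros(len(x), dtype=int)          # all points start in one cluster
    for _ in range(iters):
        for i in range(len(x)):
            z[i] = -1                        # remove x[i] from its cluster
            labels = [k for k in np.unique(z) if k >= 0]
            logp = []
            for k in labels:                 # predictive for existing clusters
                members = x[z == k]
                lam = 1.0 / tau2 + len(members)   # posterior precision of mean
                m = members.sum() / lam           # posterior mean
                v = 1.0 + 1.0 / lam               # predictive variance
                logp.append(np.log(len(members)) - 0.5 * np.log(v)
                            - 0.5 * (x[i] - m) ** 2 / v)
            # probability of opening a brand-new cluster
            logp.append(np.log(alpha) - 0.5 * np.log(1.0 + tau2)
                        - 0.5 * x[i] ** 2 / (1.0 + tau2))
            logp = np.array(logp)
            p = np.exp(logp - logp.max()); p /= p.sum()
            c = rng.choice(len(p), p=p)
            z[i] = labels[c] if c < len(labels) else max(labels, default=-1) + 1
    return z

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-3, 1, 40), rng.normal(3, 1, 40)])
print("clusters found:", len(np.unique(crp_gibbs(x))))
```
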
2

Ren, Yan. „A Non-parametric Bayesian Method for Hierarchical Clustering of Longitudinal Data“. University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337085531.

3

Gebremeskel, Haftu Gebrehiwot. „Implementing hierarchical bayesian model to fertility data: the case of Ethiopia“. Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424458.

Abstract:
Background: Ethiopia is a country with 9 ethnically based administrative regions and 2 city administrations, often cited, among other things, for its high fertility and rapid population growth. Despite the country's efforts to reduce them, both remain high, especially at the regional level. The study of fertility in Ethiopia, and particularly in its regions, where fertility variation and its repercussions are acute, is therefore of paramount importance. An easy way to characterise a fertility distribution is to build a suitable model of the fertility pattern from mathematical curves, and age-specific fertility rates (ASFRs) are worthwhile in this regard. The age-specific fertility pattern has long been considered to have a typical shape common to all human populations, but many countries, several of them in Africa, have already started to deviate from this classical bell-shaped curve, so some existing models are inadequate for describing the patterns of many African countries, including Ethiopia. A number of parametric and non-parametric functions have been exploited in the developed world to describe the ASF curve, but fitting these models to African curves in general, and to Ethiopian data in particular, has not yet been undertaken. To model fertility patterns in Ethiopia accurately, a new mathematical model is required that is both easy to use and fits the data well. Objective: The principal goals of this thesis are fourfold: (1) to examine the pattern of ASFRs at the country and regional level in Ethiopia; (2) to propose a model that best captures the various shapes of ASFRs at both the country and regional level, and to compare its performance with that of existing models; (3) to fit the proposed model using hierarchical Bayesian techniques and show that this method is flexible enough for local estimates, unlike traditional formulae, whose estimates can be very imprecise due to small sample sizes; and (4) to compare the resulting estimates with their non-hierarchical counterparts, namely simple Bayesian and maximum-likelihood estimates. Methodology: We propose a four-parameter model, the skew-normal model, to fit the fertility schedules, and show that it is flexible enough to capture the fertility patterns observed at the country level and in most regions of Ethiopia. To assess its performance, we conducted a preliminary analysis comparing it with ten other parametric and non-parametric models commonly used in the demographic literature: the quadratic spline function, cubic splines, the Coale-Trussell function, the Beta and Gamma distributions, the Hadwiger distribution, polynomial models, the Adjusted Error Model, the Gompertz curve, and the Peristera & Kostaki model. The models were fitted by nonlinear regression with nonlinear least squares (nls) estimation, and the Akaike Information Criterion (AIC) was used for model selection. For many demographers, however, estimating a region-specific ASFR model and the associated uncertainty can be difficult, especially when sample sizes vary greatly across regions. It has recently been argued that hierarchical procedures may provide more reliable parameter estimates than non-hierarchical procedures, such as complete pooling or independence, for local and regional-level analyses.
In this study, a hierarchical Bayesian procedure was therefore formulated to explore the posterior distribution of the model parameters and generate region-specific ASFR point estimates and uncertainty bounds. Non-hierarchical approaches, namely simple Bayesian and maximum-likelihood methods, were also used to estimate the parameters, and their results were compared with the hierarchical Bayesian counterparts. Gibbs sampling together with the Metropolis-Hastings algorithm in R (R Development Core Team, 2005) was applied to draw posterior samples for each parameter, and data augmentation was implemented to ease the sampling process. Sensitivity analysis, convergence diagnostics, and model checking were conducted thoroughly to ensure the robustness of the results. In all cases, non-informative prior distributions were used for all regional parameter vectors, reflecting the lack of prior knowledge about these random variables. Results: The preliminary analysis showed that the AIC of the proposed skew-normal (SN) model is lowest in the capital Addis Ababa, in Dire Dawa, Harari, Affar, Gambela, and Benshangul-Gumuz, and for the country-level data as well; in the remaining regions (Tigray, Oromiya, Amhara, Somali, and SNNP) its value was lower than some of the competing models and higher than others. The proposed model thus captured the pattern of the empirical fertility data of Ethiopia and its regions better than the other models considered in 6 of the 11 regions. The results from the hierarchical Bayesian analysis indicate that most of the posterior means were close to the true fertility values; they were also more precise, with lower uncertainty and narrower credible intervals than their maximum-likelihood and simple Bayesian analogues. Conclusion: The preliminary analysis shows that the proposed model captures the ASFR pattern at the national and regional level better than the other common models considered. Following this result, we conducted inference and prediction on the model parameters using the three approaches: hierarchical Bayesian, simple Bayesian, and maximum likelihood. The overall results suggest that the hierarchical Bayesian approach is the best choice for such data, giving more consistent and more precise (lower-uncertainty) estimates than the other approaches. In general, both maximum-likelihood and Bayesian methods can be used to analyse our model, but under different conditions: maximum likelihood is suitable when the model parameters are well identified and a large sample is available, while the Bayesian method is suitable when there is uncertainty about the model parameters, prior knowledge is available, and only limited data exist.
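
To make the preliminary analysis concrete, here is a hedged sketch of its core step: fitting a four-parameter skew-normal curve to age-specific fertility rates by nonlinear least squares and scoring the fit with AIC. The data, starting values, and parameter names are invented for illustration; the thesis additionally fits this model hierarchically with MCMC.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

def asfr_skewnormal(age, C, loc, scale, shape):
    """Four-parameter skew-normal fertility schedule: a scaled
    skew-normal density, as proposed for the Ethiopian ASFR curves."""
    return C * skewnorm.pdf(age, shape, loc=loc, scale=scale)

ages = np.arange(15, 50)                       # reproductive ages 15-49
# hypothetical ASFR observations (births per woman per year)
asfr = asfr_skewnormal(ages, 6.0, 24.0, 8.0, 3.0) \
       + np.random.default_rng(0).normal(0, 0.002, ages.size)

popt, _ = curve_fit(asfr_skewnormal, ages, asfr, p0=[5.0, 25.0, 7.0, 2.0])
resid = asfr - asfr_skewnormal(ages, *popt)
k, n = 4, ages.size
aic = n * np.log(np.sum(resid**2) / n) + 2 * k  # least-squares AIC
print("fitted parameters:", popt.round(2), " AIC:", round(aic, 1))
```
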
4

Bratières, Sébastien. „Non-parametric Bayesian models for structured output prediction“. Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274973.

Abstract:
Structured output prediction is a machine learning task in which an input object is not just assigned a single class, as in classification, but multiple, interdependent labels. This means that the presence or value of a given label affects the other labels, for instance in text labelling problems, where output labels are applied to each word and their interdependencies must be modelled. Non-parametric Bayesian (NPB) techniques are probabilistic modelling techniques with the attractive property of allowing model capacity to grow, in a controllable way, with data complexity, while maintaining the advantages of Bayesian modelling. In this thesis, we develop NPB algorithms to solve structured output problems. We first study a map-reduce implementation of a stochastic inference method designed for the infinite hidden Markov model, applied to a computational linguistics task, part-of-speech tagging. We show that mainstream map-reduce frameworks do not easily support highly iterative algorithms. The main contribution of this thesis is a conceptually novel discriminative model, GPstruct. It is motivated by labelling tasks and combines attractive properties of conditional random fields (CRF), structured support vector machines, and Gaussian process (GP) classifiers. In probabilistic terms, GPstruct combines a CRF likelihood with a GP prior on factors; it can also be described as a Bayesian kernelized CRF. To train this model, we develop a Markov chain Monte Carlo algorithm based on elliptical slice sampling and investigate its properties. We then validate it in experiments on real data and explore two topologies: sequence output with text labelling tasks, and grid output with semantic segmentation of images. The latter case poses scalability issues, which are addressed using likelihood approximations and an ensemble method that allows distributed inference and prediction. The experimental validation demonstrates that (a) the model is flexible and its constituent parts are modular and easy to engineer; (b) predictive performance and, most crucially, the probabilistic calibration of predictions are better than or equal to those of competitor models; and (c) model hyperparameters can be learnt from data.
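
The MCMC workhorse named in this abstract, elliptical slice sampling, is compact enough to sketch. Below is a minimal implementation of one update for a latent vector with a GP prior; the toy kernel and the placeholder log-likelihood (standing in for GPstruct's CRF likelihood) are illustrative assumptions, not the dissertation's setup.

```python
import numpy as np

def elliptical_slice(f, chol_K, log_lik, rng):
    """One elliptical slice sampling update (Murray et al., 2010) for a
    latent vector f with prior N(0, K); chol_K is the Cholesky factor
    of K. log_lik may be any log-likelihood, e.g. a CRF likelihood."""
    nu = chol_K @ rng.standard_normal(f.size)   # prior draw defining the ellipse
    log_y = log_lik(f) + np.log(rng.uniform())  # slice threshold
    theta = rng.uniform(0.0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        # shrink the bracket toward the current state and retry
        if theta < 0: lo = theta
        else:         hi = theta
        theta = rng.uniform(lo, hi)

rng = np.random.default_rng(0)
K = np.exp(-0.5 * np.subtract.outer(np.arange(5), np.arange(5)) ** 2)  # toy kernel
L = np.linalg.cholesky(K + 1e-8 * np.eye(5))
loglik = lambda f: -0.5 * np.sum((f - 1.0) ** 2)   # placeholder likelihood
f = np.zeros(5)
for _ in range(100):
    f = elliptical_slice(f, L, loglik, rng)
print(f.round(2))
```
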
5

Zhang, Jufen. „Bayesian density estimation and classification of incomplete data using semi-parametric and non parametric models“. Thesis, University of Exeter, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426082.

6

Xu, Yangyi. „Frequentist-Bayesian Hybrid Tests in Semi-parametric and Non-parametric Models with Low/High-Dimensional Covariate“. Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/71285.

Abstract:
In this dissertation we provide a frequentist-Bayesian hybrid test statistic for two testing problems: designing a test for significant differences between non-parametric functions, and designing a test that allows any departure from constancy of the predictors of a high-dimensional X. The construction of the proposed test statistics is given for both problems. For the first problem, the statistical difference among massive outcomes or signals is of interest in many diverse fields, including neurophysiology, imaging, and engineering. Such data often come from nonlinear systems, exhibit row/column patterns and non-normal distributions, and contain other hard-to-identify internal relationships, which makes testing the significance of differences between them difficult, owing both to the unknown relationships and to the high dimensionality. We propose an Adaptive Bayes Sum Test capable of testing the significance of the difference between two nonlinear systems based on universal non-parametric mathematical decomposition and smoothing components. Our approach adapts the Bayes sum test statistic of Hart (2009): internal patterns are treated through a Fourier transform, and resampling techniques are applied to construct the empirical distribution of the test statistic, reducing the effect of non-normality. A simulation study suggests that our approach performs better than the alternative, the Adaptive Neyman Test of Fan and Lin (1998). Its usefulness is demonstrated with an application to the identification of electronic chips and an application to testing for changes in precipitation patterns. For the second problem, numerous statistical methods have been developed for analyzing high-dimensional data, but they mainly focus on variable selection, are limited for testing purposes, and often require explicit, differentiable likelihood functions. We propose a "Hybrid Omnibus Test" for high-dimensional testing with far fewer requirements, developed in a semi-parametric framework where a likelihood function is no longer necessary. It is a frequentist-Bayesian hybrid score-type test for a functional generalized partial linear single-index model, whose link is a functional of the predictors through a generalized partially linear single index. We propose an efficient score based on an estimating equation to circumvent the mathematical difficulty of likelihood derivation, and use it to construct the test. In a simulation study we compare our approach with an empirical likelihood ratio test and with Bayesian inference based on Bayes factors, in terms of false positive rate and true positive rate; the results suggest that our approach outperforms both in false positive rate, true positive rate, and computational cost, in high-dimensional and low-dimensional cases alike. The advantage of our approach is also demonstrated on published biological results, with an application to a genetic-pathway dataset for type II diabetes.
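
As a schematic of the resampling step described here (constructing the empirical null distribution of a test statistic), the sketch below runs a permutation test with a Fourier-energy statistic for the difference between two signals. It is only a hedged stand-in for the Adaptive Bayes Sum Test; the statistic, the swap scheme, and the data are invented for illustration.

```python
import numpy as np

def fourier_stat(x, y, m=10):
    """Energy in the first m Fourier coefficients of the difference
    between two signals, a stand-in for an adaptive sum statistic."""
    d = np.fft.rfft(x - y)[:m]
    return np.sum(np.abs(d) ** 2)

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Empirical null by randomly swapping the two signals pointwise,
    mimicking the resampling step described in the abstract."""
    rng = np.random.default_rng(seed)
    obs = fourier_stat(x, y)
    null = np.empty(n_perm)
    for b in range(n_perm):
        swap = rng.random(x.size) < 0.5
        xb, yb = np.where(swap, y, x), np.where(swap, x, y)
        null[b] = fourier_stat(xb, yb)
    return (1 + np.sum(null >= obs)) / (1 + n_perm)

t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 3 * t) + np.random.default_rng(1).normal(0, .3, t.size)
y = np.sin(2 * np.pi * 3 * t) + 0.4 * np.cos(2 * np.pi * 7 * t) \
    + np.random.default_rng(2).normal(0, .3, t.size)
print("p-value:", permutation_pvalue(x, y))
```
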
7

Knowles, David Arthur. „Bayesian non-parametric models and inference for sparse and hierarchical latent structure“. Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610403.

8

Hadrich, Ben Arab Atizez. „Étude des fonctions B-splines pour la fusion d'images segmentées par approche bayésienne“. Thesis, Littoral, 2015. http://www.theses.fr/2015DUNK0385/document.

Abstract:
In this thesis we treat the problem of non-parametric estimation of probability distributions. First, we assume that the unknown density f can be approximated by a mixture of quadratic B-spline basis functions, and we propose a new estimator of f based on quadratic B-splines with two estimation methods: the first based on maximum likelihood, the second on Bayesian MAP estimation. We then generalise this study to the mixture setting and propose a new estimator for a mixture of unknown distributions based on the two adapted estimation methods. Second, we treat the problem of semi-supervised statistical segmentation of images, based on the hidden Markov model and B-spline functions, and we show the contribution of hybridising the hidden Markov model with B-spline functions for semi-supervised Bayesian statistical image segmentation. Third, we present a fusion approach based on the maximum-likelihood method, through the non-parametric estimation of probabilities for each pixel of the image. We then apply this approach to multi-spectral and multi-temporal images segmented by our non-parametric, unsupervised algorithm.
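
A minimal sketch of the density estimator this abstract describes: the unknown density is represented as a mixture of normalized quadratic B-spline basis functions, and the mixture weights are fitted by maximum likelihood via EM-style updates. The knot placement, data, and iteration count are illustrative assumptions, and the thesis's MAP variant is not shown.

```python
import numpy as np
from scipy.interpolate import BSpline

# Quadratic B-spline basis on [0, 1] with uniform interior knots.
k, J = 2, 10
knots = np.concatenate(([0] * k, np.linspace(0, 1, J + 1), [1] * k))
basis = [BSpline.basis_element(knots[j:j + k + 2], extrapolate=False)
         for j in range(J + k)]

def phi(x):
    """Evaluate all normalized basis densities at points x. Each
    B-spline integrates to (t[j+k+1] - t[j]) / (k+1), so dividing by
    that constant turns it into a proper density on [0, 1]."""
    out = np.zeros((len(basis), x.size))
    for j, B in enumerate(basis):
        v = B(x)
        v[np.isnan(v)] = 0.0              # outside the element's support
        out[j] = v / ((knots[j + k + 1] - knots[j]) / (k + 1))
    return out

def fit_weights(x, iters=200):
    """EM updates for the mixture weights (spline components fixed):
    a simple route to the maximum-likelihood estimate."""
    P = phi(x)                            # (n_basis, n) component densities
    w = np.full(len(basis), 1.0 / len(basis))
    for _ in range(iters):
        r = w[:, None] * P
        r /= r.sum(axis=0, keepdims=True) # responsibilities
        w = r.mean(axis=1)
    return w

x = np.random.default_rng(0).beta(2, 5, size=500)  # toy data on [0, 1]
print("estimated weights:", fit_weights(x).round(3))
```
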
9

Yang, Sikun [Verfasser], Heinz [Akademischer Betreuer] Köppl und Kristian [Akademischer Betreuer] Kersting. „Non-parametric Bayesian Latent Factor Models for Network Reconstruction / Sikun Yang ; Heinz Köppl, Kristian Kersting“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-96957.

10

Yang, Sikun [Verfasser], Heinz [Akademischer Betreuer] Köppl und Kristian [Akademischer Betreuer] Kersting. „Non-parametric Bayesian Latent Factor Models for Network Reconstruction / Sikun Yang ; Heinz Köppl, Kristian Kersting“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1204200769/34.

11

Wei, Wei. „Probabilistic Models of Topics and Social Events“. Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/941.

Abstract:
Structured probabilistic inference has been shown to be useful in modeling complex latent structures of data. One successful application of this technique is the discovery of latent topical structure in text, usually referred to as topic modeling. With the recent popularity of mobile devices and social networking, we can now easily acquire text data attached to meta information, such as geo-spatial coordinates and time stamps. This metadata can provide rich and accurate information that is helpful in answering many research questions related to spatial and temporal reasoning; however, such data must be treated differently from plain text. For example, spatial data are usually organized over a two-dimensional region, while temporal information can exhibit periodicities. While some existing work in the topic modeling community utilizes part of this meta information, those models largely focus on incorporating metadata into text analysis rather than making full use of the joint distribution of meta information and text. In this thesis, I propose the event detection problem, a multidimensional latent clustering problem on spatial, temporal, and topical data. I start with a simple parametric model to discover independent events using geo-tagged Twitter data. The model is then improved in two directions. First, I augment it with the Recurrent Chinese Restaurant Process (RCRP) to discover events that are dynamic in nature. Second, I study a model that can detect events using data from multiple media sources, examining the characteristics of different media in terms of reported event times and linguistic patterns. The approaches studied in this thesis are largely based on Bayesian non-parametric methods, to handle streaming data and an unpredictable number of clusters. The research not only serves the event detection problem itself but also sheds light on the more general structured clustering problem in spatial, temporal, and textual data.
12

Okabe, Shu. „Modèles faiblement supervisés pour la documentation automatique des langues“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG091.

Abstract:
In the wake of the threat of extinction of half of the languages spoken today by the end of the century, language documentation is a field of linguistics notably dedicated to the recording, annotation, and archiving of data. In this context, computational language documentation aims to devise tools for linguists to ease several documentation steps through natural language processing approaches. As part of the CLD2025 computational language documentation project, this thesis focuses mainly on two tasks: word segmentation, identifying word boundaries in an unsegmented transcription of a recorded sentence, and automatic interlinear glossing, predicting linguistic annotations for each sentence unit. For the first task, we improve the performance of the Bayesian non-parametric models used until now through weak supervision, leveraging resources that are realistically available during documentation, such as already-segmented sentences or dictionaries. Since we still observe an over-segmenting tendency in our models, we introduce a second segmentation level: morphemes. Our experiments with various types of two-level segmentation models indicate a slight improvement in segmentation quality; however, we also face limitations in differentiating words from morphemes using statistical cues only. The second task concerns the generation of either grammatical or lexical glosses. As the latter cannot be predicted using training data alone, our statistical sequence-labelling model adapts the set of possible labels for each sentence and provides a competitive alternative to the most recent neural models.
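
To give a feel for the segmentation task, here is a hedged sketch of the dynamic-programming core of unigram word segmentation, with a small lexicon acting as the weak supervision mentioned above. The Bayesian non-parametric prior of the thesis is replaced by fixed unigram log-probabilities for brevity; all words, scores, and the unknown-word penalty are invented.

```python
import math

def segment(chars, lexicon_logp, max_len=8, unk_logp=-4.0):
    """Viterbi segmentation of an unsegmented string under a unigram
    word model. lexicon_logp holds log-probabilities for known words
    (the weak supervision); unknown chunks pay a per-character penalty."""
    n = len(chars)
    best = [0.0] + [-math.inf] * n        # best[i]: score of chars[:i]
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            w = chars[j:i]
            s = best[j] + lexicon_logp.get(w, unk_logp * len(w))
            if s > best[i]:
                best[i], back[i] = s, j
    words, i = [], n
    while i > 0:                           # recover the best segmentation
        words.append(chars[back[i]:i]); i = back[i]
    return words[::-1]

lex = {"the": -2.0, "cat": -3.0, "sat": -3.0, "on": -2.5, "mat": -3.0}
print(segment("thecatsatonthemat", lex))
```
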
13

Das, Debasish. „Bayesian Sparse Regression with Application to Data-driven Understanding of Climate“. Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/313587.

Abstract:
Sparse regressions based on constraining the L1-norm of the coefficients became popular due to their ability to handle high-dimensional data, unlike regular regressions, which suffer from overfitting and model-identifiability issues, especially when the sample size is small. They are often the method of choice in many fields of science and engineering for simultaneously selecting covariates and fitting parsimonious linear models that generalize well and are easily interpretable. However, significant challenges may be posed by the need to accommodate extremes and other domain constraints, such as dynamical relations among variables, spatial and temporal constraints, the need to provide uncertainty estimates, and feature correlations. We adopt a hierarchical Bayesian version of the sparse regression framework and exploit its inherent flexibility to accommodate these constraints. We apply sparse regression to the feature selection problem of statistical downscaling of climate variables, with particular focus on their extremes. This is important for many impact studies, where climate change information is required at a spatial scale much finer than that provided by global or regional climate models. Characterizing the dependence of extremes on covariates can help identify plausible causal drivers and inform the downscaling of extremes. We propose a general-purpose sparse Bayesian framework for covariate discovery that accommodates the non-Gaussian distribution of extremes within a hierarchical Bayesian sparse regression model. Using a variational Bayes approximation, we obtain posteriors over regression coefficients, which indicate the dependence of extremes on the corresponding covariates and provide uncertainty estimates. The method is applied to selecting informative atmospheric covariates at multiple spatial scales, as well as indices of large-scale circulation and global warming, related to the frequency of precipitation extremes over the continental United States. Our results confirm the dependence relations that may be expected from known precipitation physics and generate novel insights that can inform physical understanding. We plan to extend the model to discover covariates for extreme intensity in the future. We further extend our framework to handle dynamic relationships among the climate variables using a non-parametric Bayesian mixture of sparse regression models based on the Dirichlet process (DP). The extended model achieves simultaneous clustering and covariate discovery within each cluster. Moreover, a priori knowledge about associations between pairs of data points is incorporated into the model through must-link constraints on a Markov random field (MRF) prior. A scalable and efficient variational Bayes approach is developed to infer posteriors on the regression coefficients and cluster variables.
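
As a stand-in for the hierarchical Bayesian sparse regression described here, the sketch below uses scikit-learn's ARDRegression, whose per-coefficient precisions prune irrelevant covariates and whose predictions come with uncertainty estimates. It is illustrative only: the dissertation's model additionally handles non-Gaussian extremes and MRF constraints, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n, p = 200, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[2, 7, 11]] = [1.5, -2.0, 1.0]   # sparse truth
y = X @ beta + rng.normal(0, 0.5, n)

# Automatic Relevance Determination: a hierarchical Bayesian sparse
# regression whose per-coefficient precisions shrink irrelevant covariates.
model = ARDRegression().fit(X, y)
keep = np.flatnonzero(np.abs(model.coef_) > 0.1)
print("selected covariates:", keep, "coefs:", model.coef_[keep].round(2))

# posterior predictive mean and standard deviation for new inputs
mu, sd = model.predict(X[:3], return_std=True)
print("predictions:", np.round(mu, 2), "+/-", np.round(sd, 2))
```
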
14

Rodrigues, Agatha Sacramento. „Estatística em confiabilidade de sistemas: uma abordagem Bayesiana paramétrica“. Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10102018-232055/.

Abstract:
The reliability of a system of components depends on the reliability of each component, so estimating the reliability function of every component in the system is of interest. This is not an easy task: when the system fails, the failure time of a given component may not be observed, which is a censored-data problem. We propose parametric Bayesian models for estimating the reliability functions of systems and components in four scenarios. First, a Weibull model is proposed to estimate a component's failure-time distribution in non-repairable coherent systems, when the system failure time and the component's status at the moment of system failure are available. Identically distributed failure times are not a required restriction, but independence among the components' failure times is, as stated in a theorem that is duly proved: without the assumption that the components' lifetimes are mutually independent, a given set of sub-reliability functions does not identify the corresponding marginal reliability function. In situations with a masked cause of failure, the statuses of the components at the moment of system failure cannot be identified, and for this second scenario we propose a Bayesian Weibull model with latent variables in the estimation process. The two models described above estimate the components' reliability functions marginally, when information on the other components is unavailable or unnecessary, so the assumption of independence among the components' failure times is needed. To avoid imposing this assumption, the Hougaard multivariate Weibull model is proposed for estimating the reliability functions of components in non-repairable coherent systems. Finally, a Weibull model is proposed for estimating the reliability functions of the components of a repairable series system with a masked cause of failure. For each scenario, simulation studies are carried out to evaluate the proposed models, always comparing them with the best solutions previously found in the literature; in general, the proposed models give better results. To demonstrate their applicability, data analyses are performed on real problems not only from the reliability area but also from the social sciences.
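
The censored-data likelihood at the heart of these models is easy to sketch. Below, a right-censored Weibull log-likelihood is maximized on synthetic lifetimes: failures contribute the density, and censored observations contribute the survival function. This is a hedged maximum-likelihood miniature with invented parameters, not the thesis's Bayesian models for coherent systems.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, observed):
    """Right-censored Weibull negative log-likelihood: failures use the
    log-density, censored times use the log-survival function."""
    shape, scale = np.exp(params)               # keep both positive
    z = t / scale
    log_f = np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape
    log_S = -z ** shape
    return -np.sum(np.where(observed, log_f, log_S))

rng = np.random.default_rng(0)
true_shape, true_scale = 1.8, 10.0
t_fail = true_scale * rng.weibull(true_shape, 300)
censor = rng.uniform(0, 15, 300)                # e.g. the system fails first
t = np.minimum(t_fail, censor)
observed = t_fail <= censor

res = minimize(neg_loglik, x0=[0.0, 0.0], args=(t, observed))
print("estimated shape, scale:", np.exp(res.x).round(2))
```
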
15

Tran, Gia-Lac. „Advances in Deep Gaussian Processes : calibration and sparsification“. Electronic Thesis or Diss., Sorbonne université, 2020. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2020SORUS410.pdf.

Abstract:
Gaussian processes (GPs) are an attractive way of doing non-parametric Bayesian modeling in supervised learning problems. It is well known that GPs can produce inferences as well as predictive uncertainties on a firm mathematical footing. However, practitioners often pass over GPs because of limits on the expressiveness of their kernels and because of their computational requirements. Integrating (convolutional) neural networks with GPs is a promising way to enhance their representational power. As our first contribution, we show empirically that such combinations are miscalibrated, which leads to over-confident predictions. We then propose a novel, well-calibrated solution for merging neural structures and GPs using random features and variational inference techniques. These frameworks can be extended intuitively, using structured random features, to improve model accuracy and reduce the computational cost. Regarding that cost, exact Gaussian processes have cubic complexity in the training-set size, which makes them unusable beyond a few thousand training points. Inducing-point-based Gaussian processes are a common way to mitigate this bottleneck: a small set of active (inducing) points is selected through a global distillation from all observations and then used to make predictions. Several related works build on Titsias et al. (2009) and Hensman et al. (2015). However, the general case remains elusive, and the required number of active points may still exceed a given computational budget. In our second study, we propose Sparse-within-Sparse Gaussian Processes (SWSGP), which enable approximation with a large number of inducing points without a prohibitive computational cost.
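
A minimal sketch of the random-feature route to scalable GPs mentioned in this abstract: random Fourier features approximate an RBF kernel, turning GP regression into Bayesian linear regression with cost linear in the number of data points. The kernel choice, feature count, noise level, and toy data are assumptions for illustration, not the thesis's calibrated architecture.

```python
import numpy as np

def rff(X, n_feat=200, lengthscale=1.0, seed=0):
    """Random Fourier features for the RBF kernel (Rahimi & Recht):
    k(x, x') is approximated by z(x) @ z(x'), so an exact GP becomes
    Bayesian linear regression on the features."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 1.0 / lengthscale, (X.shape[1], n_feat))
    b = rng.uniform(0, 2 * np.pi, n_feat)
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 500)

Z = rff(X)                                     # (n, n_feat) feature map
noise = 0.1 ** 2
A = Z.T @ Z + noise * np.eye(Z.shape[1])       # posterior precision (unit prior)
w_mean = np.linalg.solve(A, Z.T @ y)           # posterior mean weights
Xs = np.array([[0.5]])
print("prediction at 0.5:", (rff(Xs) @ w_mean).round(3),
      "truth:", np.sin(0.5).round(3))
```
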
16

Forti, Marco. „Dynamic Factor Models. Improvements and applications“. Doctoral thesis, 2022. http://hdl.handle.net/11573/1613357.

Abstract:
High-dimensional financial data are characterised by panels of heterogeneous time series. To deal with such complex panels, I adopt infinite-dimensional Dynamic Factor Models (DFM) to extract volatilities, and Bayesian non-parametric techniques to estimate the parameters of a stochastic volatility model. The non-parametric estimation is realised as an infinite mixture of normals, and combining this specification with DFM appears to be an original element of this work. The applied exercises use S&P 500 daily data spanning 12 years; the forecasted volatility and returns adhere well to the actual realisations, improving on existing approaches and offering an effective tool for dealing with large, high-frequency datasets. A second application lies in macroeconomics, where a structural DFM is applied to the open-market sector in order to validate the theoretical model.
17

FORTI, MARCO. „Dynamic factor models: improvements and applications“. Doctoral thesis, 2021. http://hdl.handle.net/11573/1482196.

Abstract:
Over the last two decades, data collection, aided by increased computational capability, has considerably expanded both the size and the structure of datasets; statisticians and economists can today work with time series of remarkable dimension drawn from different sources. Dealing with such datasets is not easy and requires the development of ad hoc mathematical models. Dynamic Factor Models (DFM) represent one of the newest techniques in big-data management. Adopting them allowed me to deepen the study of volatility by introducing Bayesian non-parametric techniques, and to carry out structural analysis with improved impulse-response functions. All of this is applied in the fields of economics and finance.
18

Du, Lan. „Non-parametric bayesian methods for structured topic models“. Phd thesis, 2011. http://hdl.handle.net/1885/149800.

Abstract:
The proliferation of large electronic document archives requires new techniques for automatically analysing large collections, which has posed several new and interesting research challenges. Topic modelling, as a promising statistical technique, has gained significant momentum in recent years in information retrieval, sentiment analysis, image processing, and other areas. Beyond existing topic models, the field still needs to be explored with more powerful tools. One potentially useful direction is to consider document structure directly, from semantically high-level segments (e.g., chapters, sections, or paragraphs) down to low-level segments (e.g., sentences or words), in topic modelling. This thesis introduces a family of structured topic models for statistically modelling text documents together with their intrinsic document structure. These models take advantage of non-parametric Bayesian techniques (e.g., the two-parameter Poisson-Dirichlet process (PDP)) and Markov chain Monte Carlo methods. Two preliminary contributions of this thesis are: 1. the Compound Poisson-Dirichlet process (CPDP), an extension of the PDP that can be applied to multiple input distributions; 2. two Gibbs sampling algorithms for the PDP in a finite state space, both based on the Chinese restaurant process, which provides an elegant analogy for incremental sampling from the PDP. The first, a two-stage Gibbs sampler, arises from a table-multiplicity representation of the PDP; the second is built on a table-indicator representation. In a simple controlled environment of multinomial sampling, the two new samplers converge quickly. These support the major contribution of this thesis, a set of structured topic models: the Segmented Topic Model (STM), which models a simple document structure with a four-level hierarchy by mapping the document layout to a hierarchical subject structure, and performs significantly better than the latent Dirichlet allocation model and other segmented models at predicting unseen words; Sequential Latent Dirichlet Allocation (SeqLDA), which is motivated by topical correlations among adjacent segments (i.e., the sequential document structure), uses the PDP and a simple first-order Markov chain to link a set of LDAs together, and provides a novel approach for exploring topic evolution within each document; and the Adaptive Topic Model (AdaTM), which embeds the CPDP in a simple directed acyclic graph to jointly model both hierarchical and sequential document structure, and demonstrates, in terms of per-word predictive accuracy and topic-distribution profile analysis, that it is beneficial to consider both forms of structure in topic modelling.
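
The two-parameter Poisson-Dirichlet process central to this thesis has the Chinese-restaurant seating analogy mentioned above; the sketch below simulates that seating scheme, showing how the number of tables (clusters) grows with the number of customers. It is a generic illustration with invented parameter values, not code from the thesis.

```python
import numpy as np

def pdp_restaurant(n, discount=0.5, concentration=1.0, seed=0):
    """Chinese-restaurant seating for the two-parameter Poisson-Dirichlet
    process PDP(discount, concentration): with i customers seated, the
    next joins table k with probability (c_k - discount)/(i + concentration)
    and opens a new table with probability
    (concentration + discount*K)/(i + concentration)."""
    rng = np.random.default_rng(seed)
    counts = []                          # customers per table
    for i in range(n):
        K = len(counts)
        probs = np.array([c - discount for c in counts]
                         + [concentration + discount * K]) / (i + concentration)
        table = rng.choice(K + 1, p=probs)
        if table == K:
            counts.append(1)             # open a new table
        else:
            counts[table] += 1
    return counts

for n in (100, 1000, 10000):
    print(n, "customers ->", len(pdp_restaurant(n)), "tables")
```
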
19

Yang, Sikun. „Non-parametric Bayesian Latent Factor Models for Network Reconstruction“. Phd thesis, 2020. https://tuprints.ulb.tu-darmstadt.de/9695/13/2019-12-11_YANG_SIKUN.pdf.

Abstract:
This thesis is concerned with the statistical learning of probabilistic models for graph-structured data. It addresses both the theoretical aspects of network modelling, such as learning appropriate representations for networks, and the practical difficulties in developing inference algorithms for the proposed models. The first part of the thesis addresses discrete-time dynamic network modelling. The objective is to learn the common structure and the underlying interaction dynamics among the entities involved in the observed temporal network. Two probabilistic modelling frameworks are developed. First, a Bayesian non-parametric framework is proposed to capture the static latent community structure and the evolving node-community memberships over time. More specifically, the hierarchical gamma process is utilized to capture the underlying intra-community and inter-community interactions, and the appropriate number of latent communities is estimated automatically via the inherent shrinkage mechanism of the hierarchical gamma process prior. Gamma Markov processes are constructed to capture the evolving node-community relations. As the Bernoulli-Poisson link function is used to map the binary edges to the latent parameter space, the proposed method scales with the number of non-zero edges and is therefore particularly well suited to modelling large sparse networks. Moreover, a time-dependent hierarchical gamma process dynamic network model is proposed to capture the birth and death dynamics of the underlying communities. For performance evaluation, the proposed methods are compared with state-of-the-art statistical network models on both synthetic and real-world data. In the second part of the thesis, the main objective is to analyze continuous-time, event-based dynamic networks. A fundamental problem in modelling such continuously generated temporal interaction data is capturing the reciprocal nature of the interactions among entities: actions performed by one individual toward another increase the probability that an action of the same type will be returned. Hence, the mutually exciting Hawkes process is utilized to capture the reciprocity between each pair of individuals in the observed dynamic network. In particular, the base rate of the Hawkes process is built upon the latent parameters inferred using the hierarchical gamma process edge partition model, to capture the underlying community structure. Moreover, each interaction event between two individuals is augmented with a pair of latent variables, referred to as latent patterns, indicating which of their communities led to the occurrence of that interaction. Accordingly, the proposed model lets the excitatory effect of each interaction on the opposite direction be determined by its latent patterns. Efficient Gibbs sampling and expectation-maximization algorithms are developed to perform inference. Evaluations on real-world data demonstrate the interpretability and competitive performance of the model compared with state-of-the-art methods. In the third part of the thesis, the objective is to analyze the common structure of multiple related data sources in a generative framework. A Bayesian non-parametric group factor analysis method is developed to factorize multiple related groups of data into a common latent factor space. The hierarchical beta-Bernoulli process is exploited to induce sparsity over the group-specific factor loadings and strengthen the model's interpretability. A collapsed variational inference scheme is proposed for efficient inference in large-scale, real-world applications. Moreover, a Poisson-gamma memberships framework is investigated for the joint modelling of a network and related node features.
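
To illustrate the mutually exciting Hawkes process used in the second part, here is a hedged sketch of Ogata-style thinning for two processes whose events raise each other's intensity through an exponential kernel, the reciprocity effect described above. All rates and kernel parameters are invented; the thesis couples these intensities to a latent community structure not shown here.

```python
import numpy as np

def simulate_hawkes(T=50.0, mu=(0.2, 0.2), a=0.5, beta=1.0, seed=0):
    """Ogata thinning for two mutually exciting Hawkes processes: an
    event of one type raises the other's intensity by a*exp(-beta*dt)."""
    rng = np.random.default_rng(seed)
    events = [[], []]                    # event times per process
    t = 0.0

    def rate(u, s):                      # intensity of process u at time s
        other = events[1 - u]
        return mu[u] + a * sum(np.exp(-beta * (s - ti)) for ti in other if ti < s)

    while t < T:
        # intensities only decay between events, so the current total
        # rate plus the largest possible jump (2*a) bounds the future rate
        lam_bar = rate(0, t) + rate(1, t) + 2 * a
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam0, lam1 = rate(0, t), rate(1, t)
        u = rng.uniform(0, lam_bar)
        if u < lam0:
            events[0].append(t)
        elif u < lam0 + lam1:
            events[1].append(t)          # otherwise: thinned, no event
    return events

ev = simulate_hawkes()
print("events per process:", len(ev[0]), len(ev[1]))
```
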
20

Martínez, Vargas Danae Mirel. „Régression de Cox avec partitions latentes issues du modèle de Potts“. Thèse, 2019. http://hdl.handle.net/1866/22552.
