Dissertations on the topic "Bayesian non-parametric model"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 20 dissertations for research on the topic "Bayesian non-parametric model".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Bartcus, Marius. „Bayesian non-parametric parsimonious mixtures for model-based clustering“. Thesis, Toulon, 2015. http://www.theses.fr/2015TOUL0010/document.
This thesis focuses on statistical learning and multi-dimensional data analysis, in particular on unsupervised learning of generative models for model-based clustering. We study Gaussian mixture models, both in the context of maximum likelihood estimation via the EM algorithm and in the context of Bayesian estimation by maximum a posteriori via Markov chain Monte Carlo (MCMC) sampling techniques. We mainly consider parsimonious mixture models, which are based on a spectral decomposition of the covariance matrix and provide a flexible framework, particularly for the analysis of high-dimensional data. We then investigate non-parametric Bayesian mixtures based on general flexible processes such as the Dirichlet process and the Chinese restaurant process. This non-parametric formulation is relevant both for learning the model and for dealing with the issue of model selection. We propose new Bayesian non-parametric parsimonious mixtures and derive an MCMC sampling technique in which the mixture model and the number of mixture components are learned simultaneously from the data. The model structure is selected using Bayes factors. By virtue of their non-parametric and sparse formulation, these models are useful for the analysis of large data sets when the number of classes is undetermined and grows with the data, and when the dimension is high. The models are validated on simulated data and standard real data sets, and then applied to a difficult real-world problem: the automatic structuring of complex bioacoustic data derived from whale song signals. Finally, we open Markovian perspectives via hierarchical Dirichlet process hidden Markov models.
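The Chinese restaurant process prior described in this abstract can be sketched in a few lines (a generic sampler with an illustrative concentration parameter, not the author's implementation):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a random partition of n items from a Chinese restaurant
    process: item i joins an existing cluster k with probability
    size_k / (i + alpha), or opens a new cluster with probability
    alpha / (i + alpha)."""
    rng = random.Random(seed)
    sizes = []   # sizes[k] = number of items in cluster k
    labels = []  # cluster label assigned to each item
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for k, s in enumerate(sizes):
            acc += s
            if r < acc:
                sizes[k] += 1
                labels.append(k)
                break
        else:
            sizes.append(1)  # open a new cluster
            labels.append(len(sizes) - 1)
    return labels, sizes

labels, sizes = crp_partition(500, alpha=2.0)
```

The expected number of clusters grows roughly logarithmically with the sample size, which is exactly the "number of classes increases with the data" behavior the abstract exploits.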
Ren, Yan. „A Non-parametric Bayesian Method for Hierarchical Clustering of Longitudinal Data“. University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337085531.
Gebremeskel, Haftu Gebrehiwot. „Implementing hierarchical bayesian model to fertility data: the case of Ethiopia“. Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424458.
Background: Ethiopia is a nation divided into nine administrative regions (defined on an ethnic basis) and two city administrations. It is often cited as an example of high fertility and rapid population growth. Despite government efforts, fertility and population growth remain high, especially at the regional level. The study of fertility in Ethiopia and its regions, characterized by high variability, is therefore of vital importance. A simple way to capture the different features of the fertility distribution is to build a suitable model by specifying different mathematical functions. In this sense, it is worth focusing on age-specific fertility rates, which exhibit a well-defined shape common to all populations. However, many countries show a "symmetrization" that many models fail to capture adequately. Several parametric models have therefore been used to capture the shape of age-specific rates, but their use is still very limited in Africa, and in Ethiopia in particular. Objective: This work uses a new model for fertility in Ethiopia, with four specific aims: (1) to examine the shape of Ethiopia's age-specific fertility rates at the national and regional levels; (2) to propose a model that best captures the various shapes of the age-specific rates at both the national and the regional level, comparing its performance with that of existing models; (3) to fit the proposed fertility function through a hierarchical Bayesian model and show that this model is flexible enough to estimate fertility in the individual regions, where estimates can be imprecise because of small sample sizes; (4) to compare the resulting estimates with those provided by non-hierarchical methods (maximum likelihood or simple Bayesian estimation).
Methodology: We propose a four-parameter model, the skew normal distribution, for age-specific fertility rates, and show that it is flexible enough to capture the shapes of the rates at both the national and the regional level. To assess its performance, a preliminary analysis was conducted comparing it with ten other parametric and non-parametric models used in the demographic literature: the quadratic spline, the cubic spline, the Coale-Trussell, Beta, Gamma, Hadwiger, polynomial, Gompertz, and Peristera-Kostaki models, and the Adjustment Error Model. The models were fitted using non-linear least squares (nls), and the Akaike Information Criterion was used to assess their performance. However, estimation for individual regions can be difficult where sample sizes vary widely. We therefore propose hierarchical procedures, which yield more reliable estimates than non-hierarchical models (complete pooling or no pooling) for the regional-level analysis. We formulate a hierarchical Bayesian model and obtain the posterior distribution of the parameters of the regional age-specific fertility rates, together with the associated uncertainty estimates. Non-hierarchical methods (simple Bayesian and maximum likelihood) are also used for comparison. The Gibbs sampling and Metropolis-Hastings algorithms are used to sample from the posterior distribution of each parameter, and data augmentation is used to obtain the estimates. The robustness of the results is checked through a sensitivity analysis, and convergence diagnostics for the algorithms are reported.
In all cases, non-informative prior distributions were used. Results: The preliminary analysis shows that the skew normal model has the lowest AIC in the Addis Ababa, Dire Dawa, Harari, Affar, Gambela, and Benshangul-Gumuz regions, as well as for the national estimates. In the remaining regions (Tigray, Oromiya, Amhara, Somali, and SNNP) the skew normal model is not the best, but it still fits the data well. The skew normal model is thus the best in 6 regions out of 11 and for the age-specific rates of the country as a whole. Conclusions: Overall, the skew normal model performs best. Starting from this result, the hierarchical Bayesian, simple Bayesian, and maximum likelihood models were built; the comparison of the three approaches shows that the hierarchical model provides more precise estimates than the others.
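The four-parameter skew-normal fertility schedule studied in this thesis can be sketched as follows (parameter values are purely illustrative, not fitted to Ethiopian data):

```python
import math

def skew_normal_asfr(age, C, loc, scale, shape):
    """Age-specific fertility rate modeled as a level C times a
    skew-normal density with location loc, scale, and skewness shape:
    C * (2/scale) * phi((age-loc)/scale) * Phi(shape*(age-loc)/scale)."""
    z = (age - loc) / scale
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    cdf = 0.5 * (1.0 + math.erf(shape * z / math.sqrt(2.0)))  # Phi(shape*z)
    return C * (2.0 / scale) * pdf * cdf

ages = range(15, 50)
# illustrative parameters: overall level C, early location, right skew
rates = [skew_normal_asfr(a, C=5.0, loc=20.0, scale=9.0, shape=3.0)
         for a in ages]
peak_age = max(ages, key=lambda a: skew_normal_asfr(a, 5.0, 20.0, 9.0, 3.0))
```

A positive shape parameter produces the asymmetric, right-skewed curve typical of age-specific fertility rates, with the peak displaced above the location parameter.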
Bratières, Sébastien. „Non-parametric Bayesian models for structured output prediction“. Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274973.
Zhang, Jufen. „Bayesian density estimation and classification of incomplete data using semi-parametric and non parametric models“. Thesis, University of Exeter, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426082.
Der volle Inhalt der QuelleXu, Yangyi. „Frequentist-Bayesian Hybrid Tests in Semi-parametric and Non-parametric Models with Low/High-Dimensional Covariate“. Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/71285.
Knowles, David Arthur. „Bayesian non-parametric models and inference for sparse and hierarchical latent structure“. Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610403.
Hadrich, Ben Arab Atizez. „Étude des fonctions B-splines pour la fusion d'images segmentées par approche bayésienne“. Thesis, Littoral, 2015. http://www.theses.fr/2015DUNK0385/document.
In this thesis we address the problem of non-parametric estimation of probability distributions. First, we assume that the unknown density f can be approximated by a mixture of quadratic B-spline basis functions. We then propose a new estimator of the unknown density f based on quadratic B-splines, with two estimation methods: the first based on maximum likelihood and the second on Bayesian MAP estimation. We then generalize this study to the mixture setting and propose a new estimator for mixtures of unknown distributions, based on adaptations of the two methods. Second, we address the problem of semi-supervised statistical image segmentation based on hidden Markov models and B-spline functions, and show the contribution of hybridizing hidden Markov models with B-spline functions in unsupervised Bayesian statistical image segmentation. Third, we present a fusion approach based on the maximum likelihood method, through the non-parametric estimation of probabilities for each pixel of the image. We then apply this approach to multi-spectral and multi-temporal images segmented by our non-parametric, unsupervised algorithm.
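The quadratic B-spline mixture underlying this abstract can be sketched with the Cox-de Boor recursion (a generic sketch with a hypothetical uniform knot vector, not the thesis code):

```python
def bspline_basis(x, k, p, t):
    """Cox-de Boor recursion: value at x of the k-th B-spline basis
    function of degree p over knot vector t."""
    if p == 0:
        return 1.0 if t[k] <= x < t[k + 1] else 0.0
    out = 0.0
    if t[k + p] > t[k]:
        out += (x - t[k]) / (t[k + p] - t[k]) * bspline_basis(x, k, p - 1, t)
    if t[k + p + 1] > t[k + 1]:
        out += ((t[k + p + 1] - x) / (t[k + p + 1] - t[k + 1])
                * bspline_basis(x, k + 1, p - 1, t))
    return out

def mixture_density(x, weights, t, p=2):
    """Density estimate as a weighted mixture of normalized quadratic
    B-splines; each basis integrates to (t[k+p+1]-t[k])/(p+1), so it is
    rescaled to make every component a proper density."""
    return sum(w * bspline_basis(x, k, p, t) * (p + 1) / (t[k + p + 1] - t[k])
               for k, w in enumerate(weights))

knots = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # hypothetical uniform knots
n_basis = len(knots) - 2 - 1                # len(knots) - degree - 1 = 8
weights = [1.0 / n_basis] * n_basis         # mixture weights sum to one
```

Inside the interior interval the quadratic bases form a partition of unity, which is what makes a weighted combination with normalized weights a valid density estimate.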
Yang, Sikun [author], Heinz [academic supervisor] Köppl and Kristian [academic supervisor] Kersting. „Non-parametric Bayesian Latent Factor Models for Network Reconstruction / Sikun Yang ; Heinz Köppl, Kristian Kersting“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-96957.
Yang, Sikun [author], Heinz [academic supervisor] Köppl and Kristian [academic supervisor] Kersting. „Non-parametric Bayesian Latent Factor Models for Network Reconstruction / Sikun Yang ; Heinz Köppl, Kristian Kersting“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1204200769/34.
Wei, Wei. „Probabilistic Models of Topics and Social Events“. Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/941.
Okabe, Shu. „Modèles faiblement supervisés pour la documentation automatique des langues“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG091.
In the wake of the threat of extinction of half of the languages spoken today by the end of the century, language documentation is a field of linguistics notably dedicated to the recording, annotation, and archiving of data. In this context, computational language documentation aims to devise tools for linguists to ease several documentation steps through natural language processing approaches. As part of the CLD2025 computational language documentation project, this thesis focuses mainly on two tasks: word segmentation, to identify word boundaries in an unsegmented transcription of a recorded sentence, and automatic interlinear glossing, to predict linguistic annotations for each sentence unit. For the first task, we improve the performance of the Bayesian non-parametric models used until now through weak supervision. For this purpose, we leverage resources realistically available during documentation, such as already-segmented sentences or dictionaries. Since we still observe an over-segmentation tendency in our models, we introduce a second segmentation level: the morphemes. Our experiments with various types of two-level segmentation models indicate a slight improvement in segmentation quality. However, we also face limitations in differentiating words from morphemes using only statistical cues. The second task concerns the generation of either grammatical or lexical glosses. As the latter cannot be predicted from training data alone, our statistical sequence-labelling model adapts the set of possible labels for each sentence and provides a competitive alternative to the most recent neural models.
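A toy illustration of the word-segmentation task: given word probabilities (here a hypothetical hand-built lexicon; in the thesis they come from Bayesian non-parametric models), a dynamic program recovers the most probable segmentation of an unsegmented string:

```python
import math

def best_segmentation(text, word_logprob, max_len=8):
    """Viterbi-style dynamic program: best[j] holds the log-probability
    of the best segmentation of text[:j] under a unigram word model,
    together with the start index of its last word."""
    n = len(text)
    best = [(-math.inf, -1)] * (n + 1)
    best[0] = (0.0, -1)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            w = text[i:j]
            if w in word_logprob and best[i][0] + word_logprob[w] > best[j][0]:
                best[j] = (best[i][0] + word_logprob[w], i)
    words, j = [], n  # backtrack from the end of the string
    while j > 0:
        i = best[j][1]
        if i < 0:
            return None  # no segmentation covers the whole string
        words.append(text[i:j])
        j = i
    return list(reversed(words))

lex = {"the": math.log(0.4), "cat": math.log(0.3),
       "at": math.log(0.2), "c": math.log(0.1)}
seg = best_segmentation("thecat", lex)  # → ['the', 'cat']
```

This is a maximum-likelihood sketch of the decoding step only; the Bayesian non-parametric machinery in the thesis is concerned with learning the lexicon probabilities themselves.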
Das, Debasish. „Bayesian Sparse Regression with Application to Data-driven Understanding of Climate“. Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/313587.
Sparse regressions based on constraining the L1-norm of the coefficients became popular due to their ability to handle high-dimensional data, unlike regular regression, which suffers from overfitting and model-identifiability issues, especially when the sample size is small. They are often the method of choice in many fields of science and engineering for simultaneously selecting covariates and fitting parsimonious linear models that generalize better and are easily interpretable. However, significant challenges may be posed by the need to accommodate extremes and other domain constraints, such as dynamical relations among variables, spatial and temporal constraints, the need to provide uncertainty estimates, and feature correlations, among others. We adopted a hierarchical Bayesian version of the sparse regression framework and exploited its inherent flexibility to accommodate these constraints. We applied sparse regression to the feature-selection problem of statistical downscaling of climate variables, with particular focus on their extremes. This is important for many impact studies where climate change information is required at a spatial scale much finer than that provided by global or regional climate models. Characterizing the dependence of extremes on covariates can help identify plausible causal drivers and inform the downscaling of extremes. We propose a general-purpose sparse Bayesian framework for covariate discovery that accommodates the non-Gaussian distribution of extremes within a hierarchical Bayesian sparse regression model. We obtain posteriors over the regression coefficients, which indicate the dependence of extremes on the corresponding covariates and provide uncertainty estimates, using a variational Bayes approximation.
The method is applied to select informative atmospheric covariates at multiple spatial scales, as well as indices of large-scale circulation and global warming, related to the frequency of precipitation extremes over the continental United States. Our results confirm the dependence relations that may be expected from known precipitation physics and generate novel insights that can inform physical understanding. We plan to extend our model to discover covariates for extreme intensity in the future. We further extend our framework to handle the dynamic relationships among climate variables using a non-parametric Bayesian mixture of sparse regression models based on the Dirichlet process (DP). The extended model achieves simultaneous clustering and discovery of covariates within each cluster. Moreover, a priori knowledge about the association between pairs of data points is incorporated in the model through must-link constraints on a Markov random field (MRF) prior. A scalable and efficient variational Bayes approach is developed to infer posteriors on the regression coefficients and cluster variables.
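The hierarchical route to sparsity can be sketched via the Gaussian scale-mixture representation of a sparsity-inducing (Laplace) prior: each EM step solves a ridge problem with per-coefficient penalties. This is a minimal MAP sketch, not the authors' variational Bayes implementation, and `b` and `tau2` are illustrative hyperparameters:

```python
import numpy as np

def laplace_map(X, y, b=1.0, tau2=0.1, n_iter=100):
    """MAP estimate under a Laplace prior via iteratively reweighted
    ridge regression: the penalty on coefficient j is proportional to
    1/|w_j|, so small coefficients are driven toward zero."""
    n, p = X.shape
    w = np.linalg.solve(X.T @ X + np.eye(p), X.T @ y)  # ridge initialization
    for _ in range(n_iter):
        penal = 1.0 / np.maximum(np.abs(w) * b, 1e-10)  # local scale updates
        w = np.linalg.solve(X.T @ X / tau2 + np.diag(penal), X.T @ y / tau2)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, 0.0, 0.0, 0.0, -1.5])
y = X @ w_true + 0.05 * rng.normal(size=100)
w_hat = laplace_map(X, y)
```

On this synthetic problem the irrelevant coefficients collapse toward zero while the two true covariates are recovered, which is the covariate-discovery behavior the abstract describes.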
Rodrigues, Agatha Sacramento. „Estatística em confiabilidade de sistemas: uma abordagem Bayesiana paramétrica“. Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10102018-232055/.
The reliability of a system of components depends on the reliability of each component. Thus, the initial statistical task should be to estimate the reliability of each component of the system. This is not easy because, when the system fails, the failure time of a given component may not be observed; that is, we face a problem of censored data. We propose parametric Bayesian models for estimating the reliability functions of the systems and components involved in four scenarios. First, a Weibull model is proposed to estimate the component failure-time distribution in non-repairable coherent systems when the system failure time and the component statuses at the moment of system failure are available; moreover, identically distributed failure times are not a required restriction. An important result is proved: without the assumption that the components' lifetimes are mutually independent, a given set of sub-reliability functions does not identify the corresponding marginal reliability function. In masked-cause-of-failure situations it is not possible to identify the statuses of the components at the moment of system failure, and in this second scenario we propose a Bayesian Weibull model that uses latent variables in the estimation process. The two models described above estimate the reliability functions of the components marginally, when information on the other components is not available or necessary, so the assumption of independence among the components' failure times is needed. To avoid imposing this assumption, the Hougaard multivariate Weibull model is proposed for estimating the reliability functions of components in non-repairable coherent systems. Finally, a Weibull model is proposed for estimating the reliability functions of the components of a repairable series system with masked cause of failure.
For each scenario, different simulation studies are carried out to evaluate the proposed models, always comparing them with the best solution found in the literature so far. In general, the proposed models present better results. To demonstrate the applicability of the models, data analyses are performed on real problems, not only from the reliability area but also from the social sciences.
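The series-system setting above has a simple closed form under independent Weibull component lifetimes (a textbook sketch, not the thesis' Bayesian estimator; the shape/scale parameters are illustrative):

```python
import math

def weibull_reliability(t, shape, scale):
    """Weibull reliability (survival) function R(t) = exp(-(t/scale)^shape)."""
    return math.exp(-((t / scale) ** shape))

def series_reliability(t, components):
    """A series system survives only if every component survives; under
    independence, system reliability is the product of component ones."""
    r = 1.0
    for shape, scale in components:
        r *= weibull_reliability(t, shape, scale)
    return r

comps = [(1.5, 10.0), (0.9, 25.0), (2.2, 8.0)]  # illustrative (shape, scale)
r5 = series_reliability(5.0, comps)
```

The independence assumption that makes this product formula valid is precisely what the thesis' Hougaard multivariate Weibull model is designed to relax.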
Tran, Gia-Lac. „Advances in Deep Gaussian Processes : calibration and sparsification“. Electronic Thesis or Diss., Sorbonne université, 2020. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2020SORUS410.pdf.
Gaussian processes (GPs) are an attractive way of doing non-parametric Bayesian modeling in supervised learning problems. It is well known that GPs can provide inferences as well as predictive uncertainties with a firm mathematical foundation. However, practitioners often avoid GPs because of the limited expressiveness of their kernels and their computational requirements. Integrating (convolutional) neural networks with GPs is a promising way to enhance their representational power. As our first contribution, we show empirically that these combinations are miscalibrated, which leads to over-confident predictions. We also propose a novel, well-calibrated solution for merging neural structures with GPs by using random features and variational inference techniques. In addition, these frameworks can be intuitively extended to reduce the computational cost by using structural random features. In terms of computational cost, exact Gaussian processes require complexity cubic in the training size. Inducing-point-based Gaussian processes are a common choice for mitigating this bottleneck, selecting a small set of active points through a global distillation from the available observations. However, the general case remains elusive, and the required number of active points may still exceed a given computational budget. In our second study, we propose Sparse-within-Sparse Gaussian Processes, which enable approximation with a large number of inducing points without incurring a prohibitive computational cost.
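The random-feature idea mentioned in this abstract can be sketched for the RBF kernel (a generic random Fourier features sketch with illustrative sizes; the thesis' actual architectures combine such features with neural networks):

```python
import numpy as np

def random_fourier_features(X, n_features, lengthscale, seed=0):
    """Random Fourier features: a map phi such that phi(x) @ phi(x')
    approximates the RBF kernel exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
Phi = random_fourier_features(X, n_features=5000, lengthscale=1.5)
K_approx = Phi @ Phi.T

# exact RBF kernel for comparison
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq / (2 * 1.5 ** 2))
```

The approximation error shrinks as the number of features grows, turning the cubic-in-data GP computation into linear algebra on a fixed-size feature matrix.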
Forti, Marco. „Dynamic Factor Models. Improvements and applications“. Doctoral thesis, 2022. http://hdl.handle.net/11573/1613357.
FORTI, MARCO. „Dynamic factor models: improvements and applications“. Doctoral thesis, 2021. http://hdl.handle.net/11573/1482196.
Du, Lan. „Non-parametric bayesian methods for structured topic models“. Phd thesis, 2011. http://hdl.handle.net/1885/149800.
Yang, Sikun. „Non-parametric Bayesian Latent Factor Models for Network Reconstruction“. Phd thesis, 2020. https://tuprints.ulb.tu-darmstadt.de/9695/13/2019-12-11_YANG_SIKUN.pdf.
Martínez Vargas, Danae Mirel. „Régression de Cox avec partitions latentes issues du modèle de Potts“. Thèse, 2019. http://hdl.handle.net/1866/22552.