Dissertations / Theses on the topic 'Mixture models'
Consult the top 50 dissertations / theses for your research on the topic 'Mixture models.'
Xiang, Sijia. "Semiparametric mixture models." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/17338.
Department of Statistics
Weixin Yao
This dissertation consists of three parts related to semiparametric mixture models. In Part I, we construct the minimum profile Hellinger distance (MPHD) estimator for a class of semiparametric mixture models in which one component has a known distribution with possibly unknown parameters, while the other component density and the mixing proportion are unknown. Such semiparametric mixture models have often been used in biology and in sequential clustering algorithms. In Part II, we propose a new class of semiparametric mixture of regression models, where the mixing proportions and variances are constants but the component regression functions are smooth functions of a covariate. A one-step backfitting estimate and two EM-type algorithms are proposed to achieve the optimal convergence rate for both the global parameters and the nonparametric regression functions. We derive the asymptotic properties of the proposed estimates and show that both proposed EM-type algorithms preserve the asymptotic ascent property. In Part III, we apply the idea of single-index models to mixtures of regression models and propose three new classes of models: the mixture of single-index models (MSIM), the mixture of regression models with varying single-index proportions (MRSIP), and the mixture of regression models with varying single-index proportions and variances (MRSIPV). Backfitting estimates and the corresponding algorithms are proposed for the new models to achieve the optimal convergence rate for both the parameters and the nonparametric functions. We show that the nonparametric functions can be estimated as if the parameters were known, and that the parameters can be estimated with the same root-n rate of convergence, n^(-1/2), that is achieved in a parametric model.
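The EM-type estimation referred to in Part II builds on the standard mixture-of-regressions EM. Below is a minimal, illustrative Python sketch of that parametric backbone, assuming two components with constant proportion and variances and linear component means; the thesis itself replaces the linear means with smooth nonparametric functions estimated by backfitting, which this sketch does not attempt.

```python
import numpy as np

def em_mixture_regression(x, y, n_iter=200, seed=0):
    """EM for a two-component Gaussian mixture of linear regressions
    (illustrative sketch only, not the thesis' semiparametric procedure)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    X = np.column_stack([np.ones(n), x])        # design matrix with intercept
    prop = 0.5                                  # mixing proportion
    beta = rng.normal(size=(2, 2))              # one coefficient vector per component
    sigma2 = np.array([np.var(y), np.var(y)])   # component variances

    for _ in range(n_iter):
        # E-step: posterior component-membership probabilities
        dens = np.empty((n, 2))
        for k in range(2):
            resid = y - X @ beta[k]
            dens[:, k] = np.exp(-0.5 * resid**2 / sigma2[k]) / np.sqrt(2 * np.pi * sigma2[k])
        w = np.array([prop, 1 - prop]) * dens
        tau = w / w.sum(axis=1, keepdims=True)

        # M-step: weighted least squares and weighted variances per component
        prop = tau[:, 0].mean()
        for k in range(2):
            Xw = X * tau[:, [k]]
            beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            resid = y - X @ beta[k]
            sigma2[k] = (tau[:, k] * resid**2).sum() / tau[:, k].sum()
    return prop, beta, sigma2

# Toy usage: two regression lines mixed 40/60, recovered from the pooled sample.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
z = rng.random(500) < 0.4
y = np.where(z, 1.0 + 2.0 * x, 5.0 - 1.0 * x) + rng.normal(scale=1.0, size=500)
print(em_mixture_regression(x, y))
```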
Haider, Peter. "Prediction with Mixture Models." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2014/6961/.
Learning a model of the relationship between the input attributes and annotated target attributes of data instances serves two purposes. On the one hand, it enables the prediction of the target attribute for instances without annotation. On the other hand, the model's parameters can provide useful insights into the structure of the data. If the data possess an inherent partition structure, it is natural to reflect this structure in the model. Such mixture models generate predictions by combining the individual predictions of the mixture components, which correspond to the partitions of the data. Often the partition structure is latent and must be inferred jointly while learning the mixture model. A direct evaluation of the accuracy of the inferred partition structure is impossible in many cases, because no ground-truth reference data are available for comparison. However, it can be assessed indirectly by measuring the predictive accuracy of the mixture model built on it. This thesis is concerned with the interplay between improving predictive accuracy by uncovering latent partitionings in data and evaluating the estimated structure by measuring the accuracy of the resulting predictive model. In the application of filtering unwanted e-mails, the e-mails in the training set are latently partitioned into advertising campaigns. Uncovering this latent structure allows future e-mails to be filtered with very low false-positive rates. In this thesis, a Bayesian partitioning model is developed to model this partition structure. Knowledge about the partitioning of e-mails into campaigns also helps to determine which e-mails were sent at the behest of the same network of compromised machines, so-called botnets. This is a further layer of latent partitioning. Uncovering this latent structure makes it possible to increase the accuracy of e-mail filters and to defend effectively against distributed denial-of-service attacks. To this end, a discriminative partitioning model is derived that is based on the graph of the observed e-mails. The partitionings inferred with this model are evaluated via their performance in predicting the campaigns of new e-mails. Furthermore, when classifying the content of an e-mail, statistical information about the sending server can be valuable. Learning a model that can use this information requires training data that contain server statistics. In order to additionally use training data in which the server statistics are missing, a model is developed that is a mixture over potentially all imputations of them. A further application is the prediction of the navigation behaviour of users of a website. Here there is no a priori partitioning of the users. However, it is necessary to create a partitioning in order to understand different usage scenarios and to design different layouts for them. The proposed approach simultaneously optimises the model's ability both to determine the best partition and to generate predictions of behaviour by means of this partition. Each model is evaluated on real data and compared with reference methods.
The results show that explicitly modelling the assumptions about the latent partition structure leads to improved predictions. In cases where predictive accuracy cannot be optimised directly, adding a small number of higher-level, directly adjustable parameters proves useful.
Qi, Meng. "Development in Normal Mixture and Mixture of Experts Modeling." UKnowledge, 2016. http://uknowledge.uky.edu/statistics_etds/15.
Polsen, Orathai. "Nonparametric regression and mixture models." Thesis, University of Leeds, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.578651.
James, S. D. "Mixture models for time series." Thesis, Swansea University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637395.
Sandhu, Manjinder Kaur. "Optimal designs for mixture models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31213583.
Sánchez, Luis Enrique Benites. "Finite mixture of regression models." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10052018-131627/.
This thesis, composed of three articles, aims to propose extensions of finite mixtures in regression models. We consider a flexible class of both univariate and multivariate distributions that allows asymmetric data exhibiting multimodality, heavy tails and atypical observations to be modelled adequately. This class has special cases such as the skew-normal, skew-t, skew-slash and contaminated skew-normal distributions, as well as the symmetric cases. First, a model is proposed based on the assumption that the errors follow a finite mixture of scale mixtures of skew-normal (SMSN) distributions rather than the conventional normal distribution. Next, we consider a censored regression model in which the error follows a finite mixture of scale mixtures of normal (SMN) distributions. Finally, a finite mixture of multivariate regressions is considered in which the error has a multivariate SMSN distribution. For all the proposed models, two R packages were developed, which are illustrated in the appendix.
Li, Xiongya. "Robust multivariate mixture regression models." Diss., Kansas State University, 2017. http://hdl.handle.net/2097/38427.
Department of Statistics
Weixing Song
In this dissertation, we propose a new robust estimation procedure for two multivariate mixture regression models and apply this novel method to the functional mapping of dynamic traits. In the first part, a robust estimation procedure for the mixture of classical multivariate linear regression models is discussed by assuming that the error terms follow a multivariate Laplace distribution. An EM algorithm is developed based on the fact that the multivariate Laplace distribution is a scale mixture of the multivariate standard normal distribution. The performance of the proposed algorithm is thoroughly evaluated by simulation and comparison studies. In the second part, a similar idea is extended to the mixture of linear mixed regression models by assuming that the random effects and the regression errors jointly follow a multivariate Laplace distribution. Simulation studies indicate that the finite-sample performance of the proposed estimation procedure outperforms, or is at least comparable to, that of the existing robust t procedure in the literature. Unlike the t procedure, there is no need to determine the degrees of freedom, so the new robust estimation procedure is computationally more efficient. The ascent property of both EM algorithms is also proved. In the third part, the proposed robust method is applied to identify quantitative trait loci (QTL) within a functional mapping framework for dynamic traits of agricultural or biomedical interest. A robust multivariate Laplace mapping framework is proposed to replace the normality assumption. Simulation studies show that the proposed method is comparable to the robust multivariate t procedure developed in the literature and outperforms the normal procedure. As an illustration, the proposed method is also applied to a real data set.
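The EM construction described above rests on the scale-mixture representation of the multivariate Laplace distribution. A minimal simulation sketch of that representation, assuming the standard symmetric multivariate Laplace with unit exponential mixing (illustrative only, not the dissertation's estimation code):

```python
import numpy as np

def rmvlaplace(n, mu, sigma, seed=0):
    """Draw n multivariate Laplace vectors via X = mu + sqrt(W) * Z,
    with W ~ Exponential(1) and Z ~ N(0, sigma). Treating W as a latent
    variable is what turns the EM M-step into weighted least squares."""
    rng = np.random.default_rng(seed)
    w = rng.exponential(scale=1.0, size=n)                         # latent scales
    z = rng.multivariate_normal(np.zeros(len(mu)), sigma, size=n)  # Gaussian part
    return np.asarray(mu) + np.sqrt(w)[:, None] * z

x = rmvlaplace(2000, mu=[0.0, 1.0], sigma=[[1.0, 0.3], [0.3, 2.0]])
print(x.mean(axis=0))   # close to mu; tails are heavier than the Gaussian's
```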
Kunkel, Deborah Elizabeth. "Anchored Bayesian Gaussian Mixture Models." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524134234501475.
Evers, Ludger. "Model fitting and model selection for 'mixture of experts' models." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445776.
Heath, Jeffrey W. "Global optimization of finite mixture models." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7179.
Thesis research directed by: Applied Mathematics and Scientific Computation Program. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Huang, Qingqing Ph D. Massachusetts Institute of Technology. "Efficient algorithms for learning mixture models." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107337.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 261-274).
We study statistical learning problems for a class of probabilistic models called mixture models. Mixture models are usually used to model settings where the observed data consist of different sub-populations, yet we only have access to a limited number of samples of the pooled data. The class includes many widely used models such as Gaussian mixture models, hidden Markov models (HMMs), and topic models. We focus on parametric learning: given unlabeled data generated according to a mixture model, infer the parameters of the underlying model. The hierarchical structure of the probabilistic model leads to non-convexity of the likelihood function in the model parameters, thus imposing great challenges in finding statistically and computationally efficient solutions. We start with a simple yet general setup of mixture models in the first part. We study the problem of estimating a low-rank M x M matrix which represents a discrete distribution over M^2 outcomes, given access to samples drawn according to the distribution. We propose a learning algorithm that accurately recovers the underlying matrix using Θ(M) samples, which immediately leads to improved learning algorithms for various mixture models, including topic models and HMMs. We show that this linear sample complexity is actually optimal in the minimax sense. There are "hard" mixture models for which there exist worst-case lower bounds on sample complexity that scale exponentially in the model dimensions. In the second part, we study Gaussian mixture models and HMMs. We propose new learning algorithms with polynomial runtime. We leverage techniques in probabilistic analysis to prove that worst-case instances are actually rare, and our algorithm can efficiently handle all non-worst-case instances. In the third part, we study the problem of super-resolution. Despite the lower bound for any deterministic algorithm, we propose a new randomized algorithm whose complexity scales only quadratically in all dimensions, and show that it can handle any instance with high probability over the randomization.
by Qingqing Huang.
Ph. D.
Nkadimeng, Calvin. "Language identification using Gaussian mixture models." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4170.
Full textENGLISH ABSTRACT: The importance of Language Identification for African languages is seeing a dramatic increase due to the development of telecommunication infrastructure and, as a result, an increase in volumes of data and speech traffic in public networks. By automatically processing the raw speech data the vital assistance given to people in distress can be speeded up, by referring their calls to a person knowledgeable in that language. To this effect a speech corpus was developed and various algorithms were implemented and tested on raw telephone speech data. These algorithms entailed data preparation, signal processing, and statistical analysis aimed at discriminating between languages. The statistical model of Gaussian Mixture Models (GMMs) were chosen for this research due to their ability to represent an entire language with a single stochastic model that does not require phonetic transcription. Language Identification for African languages using GMMs is feasible, although there are some few challenges like proper classification and accurate study into the relationship of langauges that need to be overcome. Other methods that make use of phonetically transcribed data need to be explored and tested with the new corpus for the research to be more rigorous.
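To make the modelling approach concrete, here is a minimal sketch of GMM-based language identification with scikit-learn: one GMM per language, and an utterance is assigned to the language whose model gives the highest average log-likelihood. The language labels and random arrays below are placeholders standing in for acoustic features (e.g. MFCC frames) extracted from the telephone-speech corpus.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-language training features (one row per frame).
train = {"language_A": rng.normal(size=(5000, 13)),
         "language_B": rng.normal(loc=0.5, size=(5000, 13))}
utterance = rng.normal(size=(300, 13))        # frames of the call to identify

# One GMM per language: each language is summarised by a single stochastic
# model, with no phonetic transcription required.
models = {lang: GaussianMixture(n_components=16, covariance_type="diag",
                                random_state=0).fit(feats)
          for lang, feats in train.items()}

# Decide by the average per-frame log-likelihood of the utterance under each model.
scores = {lang: gmm.score(utterance) for lang, gmm in models.items()}
print(max(scores, key=scores.get), scores)
```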
Chanialidis, Charalampos. "Bayesian mixture models for count data." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6371/.
Tong, Edward N. C. "Mixture models for consumer credit risk." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/374795/.
Schwander, Olivier. "Information-geometric methods for mixture models." Palaiseau, Ecole polytechnique, 2013. http://pastel.archives-ouvertes.fr/docs/00/93/17/22/PDF/these.pdf.
This thesis presents new methods for mixture model learning based on information geometry. We focus on mixtures of exponential families, which encompass a large number of the mixtures used in practice. With information geometry, statistical problems can be studied with geometrical tools. This framework offers new perspectives for designing algorithms that are both fast and generic. Two main contributions are proposed. The first is a method for simplifying kernel density estimators. The simplification is carried out with clustering algorithms, first with the Bregman divergence and then, for speed, with the Fisher-Rao distance and model centroids. The second contribution is a generalization of the k-MLE algorithm that can handle mixtures whose components do not all belong to the same family; this method is applied to mixtures of generalized Gaussians and of Gamma distributions and is faster than existing methods. The description of these two algorithms comes with a complete software implementation, and their efficiency is evaluated through applications in bioinformatics and texture classification.
Julien, Charbel. "Image statistical learning using mixture models." Lyon 2, 2008. http://theses.univ-lyon2.fr/documents/lyon2/2008/julien_c.
The work of this thesis focuses essentially on modelling the low-level visual content of images (colour, texture, etc.). Modelling visual content is the first step to consider in any automatic content-based image retrieval system, including supervised, unsupervised and semi-supervised learning approaches. In this thesis, we chose to model low-level visual content by a "discrete distribution" signature or by a Gaussian mixture model (GMM), instead of the simple statistical models widely used in the literature. Using these two types of representation, a prototype for clustering image databases was implemented. This prototype can extract the signatures and GMMs that represent the images; they are stored for later processing, including image clustering. With this type of representation, classical distances such as the Euclidean distance, the L2 distance, etc., are no longer applicable. Distances that require linear optimisation can be used to measure the distance between signatures or GMMs, for example the Mallows distance and the Earth Mover's Distance (EMD). Computing a mean vector is relatively easy when images are represented by fixed-length multidimensional vectors. In our case, by contrast, an iterative algorithm that again requires linear optimisation was proposed to learn a model, signature or GMM, by exploiting the constraints set by the users.
Julien, Charbel Zighed Djamel Abdelkader Saitta Lorenza. "Image statistical learning using mixture models." Lyon : Université Lumière Lyon 2, 2008. http://theses.univ-lyon2.fr/sdx/theses/lyon2/2008/julien_c.
Thesis defended under joint international supervision (cotutelle). Title from the title screen. Includes bibliography.
Wade, Sara Kathryn. "Bayesian nonparametric regression through mixture models." Doctoral thesis, Università Bocconi, 2013. https://hdl.handle.net/11565/4054326.
Kutal, Durga Hari. "Various Approaches on Parameter Estimation in Mixture and Non-mixture Cure Models." Thesis, Florida Atlantic University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10929031.
Analyzing lifetime data with long-term survivors is an important topic in medical applications. Cure models are usually used to analyze survival data with a proportion of cured subjects or long-term survivors. In order to include the proportion of cured subjects, mixture and non-mixture cure models are considered. In this dissertation, we utilize both maximum likelihood and Bayesian methods to estimate the model parameters. Simulation studies are carried out to verify the finite-sample performance of the estimation methods. Real data analyses are reported to illustrate goodness-of-fit via Fréchet, Weibull and exponentiated exponential susceptible distributions. Among the three parametric susceptible distributions, the Fréchet is the most promising.
Next, we extend the non-mixture cure model to include a change point in a covariate for right-censored data. The smoothed likelihood approach is used to address the problem of a log-likelihood function that is not differentiable with respect to the change point. The simulation study is based on the non-mixture change-point cure model with an exponential distribution for the susceptible subjects. The simulation results reveal convincing performance of the proposed estimation method.
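For context, the mixture cure model referred to above writes the population survival function as a cured fraction plus a susceptible survival curve, S(t) = π + (1 − π)S_u(t). A minimal sketch with a Weibull susceptible distribution (parameter values are illustrative only):

```python
import numpy as np

def mixture_cure_survival(t, cure_prob, shape, scale):
    """Population survival under a mixture cure model:
    S(t) = pi + (1 - pi) * S_u(t), here with a Weibull susceptible part."""
    s_u = np.exp(-(np.asarray(t, dtype=float) / scale) ** shape)
    return cure_prob + (1.0 - cure_prob) * s_u

# Survival levels off at the cure fraction (0.3) instead of dropping to zero.
print(mixture_cure_survival([1, 5, 50], cure_prob=0.3, shape=1.2, scale=4.0))
```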
Frühwirth-Schnatter, Sylvia. "Model Likelihoods and Bayes Factors for Switching and Mixture Models." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2002. http://epub.wu.ac.at/474/1/document.pdf.
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
Frühwirth-Schnatter, Sylvia. "Model Likelihoods and Bayes Factors for Switching and Mixture Models." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/1146/1/document.pdf.
Series: Forschungsberichte / Institut für Statistik
Haas, Markus. "Dynamic mixture models for financial time series /." Berlin : Pro Business, 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=012999049&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Gundersen, Terje. "Voice Transformation based on Gaussian mixture models." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10878.
In this thesis, a probabilistic model for transforming a voice to sound like another specific voice is tested. The model is fully automatic and only requires some 100 training sentences from both speakers with the same acoustic content. The classical source-filter decomposition allows prosodic and spectral transformation to be performed independently. The transformations are based on a Gaussian mixture model and a transformation function suggested by Y. Stylianou. Feature vectors of the same content from the source and target speaker, aligned in time by dynamic time warping, are fitted to a GMM. The short-time spectra, represented as cepstral coefficients derived from LPC, and the pitch periods, represented as fundamental frequency estimated with the RAPT algorithm, are transformed with the same probabilistic transformation function. Several techniques of spectrum and pitch transformation were assessed, in addition to some novel smoothing techniques for the fundamental frequency contour. The pitch transform was implemented on the excitation signal from the inverse LP filtering by time-domain PSOLA. The transformed spectrum parameters were used in the synthesis filter with the transformed excitation as input to yield the transformed voice. A listening test was performed with the best setup from the objective tests, and the results indicate that it is possible to recognise the transformed voice as the target speaker with a 72% probability. However, the synthesised voice was affected by a muffling effect due to incorrect frequency transformation, and the prosody sounded somewhat robotic.
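The spectral mapping described above is the classical GMM joint-density conversion: fit a GMM to time-aligned source/target feature pairs and convert with a probabilistic regression rule in the spirit of Stylianou's transformation function. A minimal sketch, assuming feature extraction and DTW alignment have already been done (the random arrays below merely stand in for aligned cepstral features):

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(x, y, n_components=8):
    """Fit a GMM to time-aligned source/target feature pairs stacked as [x | y]."""
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(np.hstack([x, y]))

def convert(gmm, x):
    """GMM regression rule:
    y_hat = sum_k p_k(x) * (mu_y_k + Sigma_yx_k Sigma_xx_k^{-1} (x - mu_x_k))."""
    d = x.shape[1]
    resp = np.zeros((len(x), gmm.n_components))
    cond_mean = []
    for k in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[k, :d], gmm.means_[k, d:]
        S = gmm.covariances_[k]
        Sxx, Sxy = S[:d, :d], S[:d, d:]
        resp[:, k] = gmm.weights_[k] * multivariate_normal.pdf(x, mu_x, Sxx)
        cond_mean.append(mu_y + (x - mu_x) @ np.linalg.solve(Sxx, Sxy))
    resp /= resp.sum(axis=1, keepdims=True)
    y_hat = np.zeros_like(x)
    for k in range(gmm.n_components):
        y_hat += resp[:, [k]] * cond_mean[k]
    return y_hat

# Toy usage with random stand-ins for aligned cepstral features.
rng = np.random.default_rng(0)
src = rng.normal(size=(2000, 5))
tgt = 0.8 * src + rng.normal(scale=0.2, size=(2000, 5))
gmm = fit_joint_gmm(src, tgt)
print(convert(gmm, src[:3]))
```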
Saba, Laura M. "Latent pattern mixture models for binary outcomes /." Connect to full text via ProQuest. Limited to UCD Anschutz Medical Campus, 2007.
Typescript. Includes bibliographical references (leaves 70-71). Free to UCD affiliates. Online version available via ProQuest Digital Dissertations.
Liu, Zhao, and 劉釗. "On mixture double autoregressive time series models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/196465.
Statistics and Actuarial Science
Master of Philosophy
Fahey, Michael Thomas. "Finite mixture models for dietary pattern identification." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611505.
Wei, Yan. "Robust mixture regression models using t-distribution." Kansas State University, 2012. http://hdl.handle.net/2097/14110.
Full textDepartment of Statistics
Weixin Yao
In this report, we propose a robust mixture of regressions based on the t-distribution by extending the mixture of t-distributions proposed by Peel and McLachlan (2000) to the regression setting. This new mixture of regressions model is robust to outliers in the y direction but not to outliers at high-leverage points. To combat this, we also propose a modified version of the method, which fits the t-based mixture of regressions to the data after adaptively trimming high-leverage points. We further propose to choose the degrees of freedom of the t-distribution adaptively using the profile likelihood. The proposed robust mixture regression estimate has high efficiency due to this adaptive choice of the degrees of freedom. We demonstrate the effectiveness of the proposed method and compare it with some existing methods through a simulation study.
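A quick way to see why the t error yields robustness in the y direction: in the EM algorithm each observation carries a latent scale weight that shrinks as its standardized residual grows, so outliers are down-weighted in the weighted-least-squares M-step. A minimal sketch of that generic t-EM weight (not the report's exact implementation):

```python
import numpy as np

def t_em_weights(resid, sigma, nu):
    """E-step weight of the t-based EM: u = (nu + 1) / (nu + (r / sigma)^2).
    Large standardized residuals receive small weights, which robustifies the fit."""
    return (nu + 1.0) / (nu + (np.asarray(resid) / sigma) ** 2)

# An observation 10 sigmas away is heavily down-weighted; inliers keep weight near 1.
print(t_em_weights([0.1, 1.0, 10.0], sigma=1.0, nu=3.0))
```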
Morfopoulou, S. "Bayesian mixture models for metagenomic community profiling." Thesis, University College London (University of London), 2015. http://discovery.ucl.ac.uk/1473450/.
Zhang, Xiuyun. "Efficient Algorithms for Fitting Bayesian Mixture Models." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243990513.
He, Ruofei. "Bayesian mixture models for frequent itemset mining." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/bayesian-mixture-models-for-frequent-itemset-mining(6d88d0d1-3066-4545-8565-56d651eeadc4).html.
Subramaniam, Anand D. "Gaussian mixture models in compression and communication /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3112847.
Kang, An. "Online Bayesian nonparametric mixture models via regression." Thesis, University of Kent, 2018. https://kar.kent.ac.uk/66306/.
Full textYu, Chun. "Robust mixture modeling." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/18153.
Department of Statistics
Weixin Yao and Kun Chen
Ordinary least-squares (OLS) estimators for a linear model are very sensitive to unusual values in the design space or outliers among the y values. Even a single atypical value may have a large effect on the parameter estimates. In this proposal, we first review and describe some available and popular robust techniques, including some recently developed ones, and compare them in terms of breakdown point and efficiency. In addition, we use a simulation study and a real data application to compare the performance of existing robust methods under different scenarios. Finite mixture models are widely applied to model a variety of random phenomena. However, inference for mixture models is challenging when outliers exist in the data. The traditional maximum likelihood estimator (MLE) is sensitive to outliers. We propose Robust Mixture via Mean-shift penalization (RMM) for mixture models and Robust Mixture Regression via Mean-shift penalization (RMRM) for mixture regression, to achieve simultaneous outlier detection and parameter estimation. A mean-shift parameter is added to the mixture models and penalized by a nonconvex penalty function. With this model setting, we develop an iterative thresholding-embedded EM algorithm to maximize the penalized objective function. Compared with other existing robust methods, the proposed methods show outstanding performance in both identifying outliers and estimating the parameters.
Cilliers, Francois Dirk. "Tree-based Gaussian mixture models for speaker verification." Thesis, Link to the online version, 2005. http://hdl.handle.net/10019.1/1639.
Chang, Ilsung. "Bayesian inference on mixture models and their applications." Texas A&M University, 2003. http://hdl.handle.net/1969.1/3990.
Yen, Ming-Fang. "Frailty and mixture models in cancer screening evaluation." Thesis, University College London (University of London), 2004. http://discovery.ucl.ac.uk/1446761/.
Zhang, Xuekui. "Mixture models for analysing high throughput sequencing data." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/35982.
Lu, Liang. "Subspace Gaussian mixture models for automatic speech recognition." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8065.
Hernandez-Vela, Carlos Erwin Rodriguez. "Contributions to the Bayesian analysis of mixture models." Thesis, University of Kent, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.594272.
Al Hakmani, Rahab. "Bayesian Estimation of Mixture IRT Models using NUTS." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1641.
Qarmalah, Najla Mohammed A. "Finite mixture models : visualisation, localised regression, and prediction." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12486/.
Meddings, D. P. "Statistical inference in mixture models with random effects." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1455733/.
Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.
Full textThis thesis’ original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through data. This algorithm, called Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint state and Q-values space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. Results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks.
Jayaram, Vikram. "Reduced dimensionality hyperspectral classification using finite mixture models." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.
Desai, Manisha. "Mixture models for genetic changes in cancer cells /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9566.
Yu, Chen. "The use of mixture models in capture-recapture." Thesis, University of Kent, 2015. https://kar.kent.ac.uk/50775/.
Paganin, Sally. "Prior-driven cluster allocation in bayesian mixture models." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426831.
Full textKremer, Laura. "Assessment of a Credit Value atRisk for Corporate Credits." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-124146.
Full textHe, Xiaojun Velu Rajabather Palani. "Two essays on applications of mixture models in finance." Related Electronic Resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2003. http://wwwlib.umi.com/cr/syr/main.