Academic literature on the topic 'Mixture models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Mixture models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Mixture models"

1

Razzaghi, Mehdi, Geoffrey J. McLachlan, and Kaye E. Basford. "Mixture Models." Technometrics 33, no. 3 (August 1991): 365. http://dx.doi.org/10.2307/1268796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Razzaghi, Mehdi. "Mixture Models." Technometrics 33, no. 3 (August 1991): 365–66. http://dx.doi.org/10.1080/00401706.1991.10484850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ueda, Naonori, Ryohei Nakano, Zoubin Ghahramani, and Geoffrey E. Hinton. "SMEM Algorithm for Mixture Models." Neural Computation 12, no. 9 (September 1, 2000): 2109–28. http://dx.doi.org/10.1162/089976600300015088.

Full text
Abstract:
We present a split-and-merge expectation-maximization (SMEM) algorithm to overcome the local maxima problem in parameter estimation of finite mixture models. In the case of mixture models, local maxima often involve having too many components of a mixture model in one part of the space and too few in another, widely separated part of the space. To escape from such configurations, we repeatedly perform simultaneous split-and-merge operations using a new criterion for efficiently selecting the split-and-merge candidates. We apply the proposed algorithm to the training of gaussian mixtures and mixtures of factor analyzers using synthetic and real data and show the effectiveness of using the split-and-merge operations to improve the likelihood of both the training data and of held-out test data. We also show the practical usefulness of the proposed algorithm by applying it to image compression and pattern recognition problems.
APA, Harvard, Vancouver, ISO, and other styles
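The plain EM iteration that SMEM extends can be sketched minimally. The following is a hypothetical two-component univariate Gaussian-mixture fit in NumPy, not the authors' implementation; initialization and iteration counts are our own choices:

```python
import numpy as np

def em_gmm_1d(x, iters=200):
    """EM for a two-component univariate Gaussian mixture (minimal sketch)."""
    # Crude initialization: means at the data extremes, equal weights
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
w, mu, var = em_gmm_1d(x)
```

SMEM augments exactly this loop with split and merge moves on the fitted components when the iteration stalls in a poor local maximum.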
4

Achcar, Jorge A., Emílio A. Coelho-Barros, and Josmar Mazucheli. "Cure fraction models using mixture and non-mixture models." Tatra Mountains Mathematical Publications 51, no. 1 (November 1, 2012): 1–9. http://dx.doi.org/10.2478/v10127-012-0001-4.

Full text
Abstract:
We introduce the Weibull distributions in the presence of cure fraction, censored data and covariates. Two models are explored in this paper: mixture and non-mixture models. Inferences for the proposed models are obtained under the Bayesian approach, using standard MCMC (Markov Chain Monte Carlo) methods. An illustration of the proposed methodology is given considering a lifetime data set.
APA, Harvard, Vancouver, ISO, and other styles
5

Le, Si Quang, Nicolas Lartillot, and Olivier Gascuel. "Phylogenetic mixture models for proteins." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1512 (October 7, 2008): 3965–76. http://dx.doi.org/10.1098/rstb.2008.0180.

Full text
Abstract:
Standard protein substitution models use a single amino acid replacement rate matrix that summarizes the biological, chemical and physical properties of amino acids. However, site evolution is highly heterogeneous and depends on many factors: genetic code; solvent exposure; secondary and tertiary structure; protein function; etc. These impact the substitution pattern and, in most cases, a single replacement matrix is not enough to represent all the complexity of the evolutionary processes. This paper explores, in a maximum-likelihood framework, phylogenetic mixture models that combine several amino acid replacement matrices to better fit protein evolution. We learn these mixture models from a large alignment database extracted from HSSP, and test the performance using independent alignments from TreeBase. We compare unsupervised learning approaches, where the site categories are unknown, to supervised ones, where in estimations we use the known category of each site, based on its exposure or its secondary structure. All our models are combined with gamma-distributed rates across sites. Results show that highly significant likelihood gains are obtained when using mixture models compared with the best available single replacement matrices. Mixtures of matrices also improve over mixtures of profiles in the manner of the CAT model. The unsupervised approach tends to be better than the supervised one, but it appears difficult to implement and highly sensitive to the starting values of the parameters, meaning that the supervised approach is still of interest for initialization and model comparison. Using an unsupervised model involving three matrices, the average AIC gain per site with TreeBase test alignments is 0.31, 0.49 and 0.61 compared with LG (named after Le & Gascuel 2008 Mol. Biol. Evol. 25, 1307–1320), WAG and JTT, respectively. This three-matrix model is significantly better than LG for 34 alignments (among 57), and significantly worse for 1 alignment only.
Moreover, tree topologies inferred with our mixture models frequently differ from those obtained with single matrices, indicating that using these mixtures impacts not only the likelihood value but also the output tree. All our models and a PhyML implementation are available from http://atgc.lirmm.fr/mixtures.
APA, Harvard, Vancouver, ISO, and other styles
6

McLachlan, Geoffrey J., Sharon X. Lee, and Suren I. Rathnayake. "Finite Mixture Models." Annual Review of Statistics and Its Application 6, no. 1 (March 7, 2019): 355–78. http://dx.doi.org/10.1146/annurev-statistics-031017-100325.

Full text
Abstract:
The important role of finite mixture models in the statistical analysis of data is underscored by the ever-increasing rate at which articles on mixture applications appear in the statistical and general scientific literature. The aim of this article is to provide an up-to-date account of the theory and methodological developments underlying the applications of finite mixture models. Because of their flexibility, mixture models are being increasingly exploited as a convenient, semiparametric way in which to model unknown distributional shapes. This is in addition to their obvious applications where there is group-structure in the data or where the aim is to explore the data for such structure, as in a cluster analysis. It has now been three decades since the publication of the monograph by McLachlan & Basford (1988) with an emphasis on the potential usefulness of mixture models for inference and clustering. Since then, mixture models have attracted the interest of many researchers and have found many new and interesting fields of application. Thus, the literature on mixture models has expanded enormously, and as a consequence, the bibliography here can only provide selected coverage.
APA, Harvard, Vancouver, ISO, and other styles
7

Shanmugam, Ramalingam. "Finite Mixture Models." Technometrics 44, no. 1 (February 2002): 82. http://dx.doi.org/10.1198/tech.2002.s651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nemec, James M., and Amanda F. L. Nemec. "Mixture models for studying stellar populations. II - Multivariate finite mixture models." Astronomical Journal 105 (April 1993): 1455. http://dx.doi.org/10.1086/116523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Verbeek, J. J., N. Vlassis, and B. Kröse. "Efficient Greedy Learning of Gaussian Mixture Models." Neural Computation 15, no. 2 (February 1, 2003): 469–85. http://dx.doi.org/10.1162/089976603762553004.

Full text
Abstract:
This article concerns the greedy learning of gaussian mixtures. In the greedy approach, mixture components are inserted into the mixture one after the other. We propose a heuristic for searching for the optimal component to insert. In a randomized manner, a set of candidate new components is generated. For each of these candidates, we find the locally optimal new component and insert it into the existing mixture. The resulting algorithm resolves the sensitivity to initialization of state-of-the-art methods, like expectation maximization, and has running time linear in the number of data points and quadratic in the (final) number of mixture components. Due to its greedy nature, the algorithm can be particularly useful when the optimal number of mixture components is unknown. Experimental results comparing the proposed algorithm to other methods on density estimation and texture segmentation are provided.
APA, Harvard, Vancouver, ISO, and other styles
10

Focke, Walter W. "Mixture Models Based on Neural Network Averaging." Neural Computation 18, no. 1 (January 1, 2006): 1–9. http://dx.doi.org/10.1162/089976606774841576.

Full text
Abstract:
A modified version of the single hidden-layer perceptron architecture is proposed for modeling mixtures. A particular flexible mixture model is obtained by implementing the Box-Cox transformation as transfer function. In this case, the network response can be expressed in closed form as a weighted power mean. The quadratic Scheffé K-polynomial and the exponential Wilson equation turn out to be special forms of this general mixture model. Advantages of the proposed network architecture are that binary data sets suffice for “training” and that it is readily extended to incorporate additional mixture components while retaining all previously determined weights.
APA, Harvard, Vancouver, ISO, and other styles
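The closed-form response Focke's abstract mentions, a weighted power mean, is simple to state concretely. Below is a hypothetical helper illustrating the quantity; the function name and parameters are ours, not the paper's:

```python
import numpy as np

def weighted_power_mean(x, w, p):
    """M_p(x; w) = (sum_i w_i * x_i**p) ** (1/p) for weights w summing to 1.
    The p -> 0 limit is the weighted geometric mean exp(sum_i w_i * log x_i)."""
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    if abs(p) < 1e-12:
        return float(np.exp(np.sum(w * np.log(x))))
    return float(np.sum(w * x ** p) ** (1.0 / p))

# p = 1 recovers the arithmetic mean, p = -1 the harmonic mean
x, w = [2.0, 8.0], [0.5, 0.5]
arith = weighted_power_mean(x, w, 1)   # arithmetic mean
harm = weighted_power_mean(x, w, -1)   # harmonic mean
geom = weighted_power_mean(x, w, 0)    # geometric mean
```

Varying the exponent p is what gives the mixture model its flexibility: a single functional form interpolates between familiar mixing rules.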

Dissertations / Theses on the topic "Mixture models"

1

Xiang, Sijia. "Semiparametric mixture models." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/17338.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Weixin Yao
This dissertation consists of three parts that are related to semiparametric mixture models. In Part I, we construct the minimum profile Hellinger distance (MPHD) estimator for a class of semiparametric mixture models where one component has a known distribution with possibly unknown parameters while the other component density and the mixing proportion are unknown. Such semiparametric mixture models have often been used in biology and the sequential clustering algorithm. In Part II, we propose a new class of semiparametric mixture of regression models, where the mixing proportions and variances are constants, but the component regression functions are smooth functions of a covariate. A one-step backfitting estimate and two EM-type algorithms have been proposed to achieve the optimal convergence rate for both the global parameters and the nonparametric regression functions. We derive the asymptotic property of the proposed estimates and show that both proposed EM-type algorithms preserve the asymptotic ascent property. In Part III, we apply the idea of the single-index model to the mixture of regression models and propose three new classes of models: the mixture of single-index models (MSIM), the mixture of regression models with varying single-index proportions (MRSIP), and the mixture of regression models with varying single-index proportions and variances (MRSIPV). Backfitting estimates and the corresponding algorithms have been proposed for the new models to achieve the optimal convergence rate for both the parameters and the nonparametric functions. We show that the nonparametric functions can be estimated as if the parameters were known and the parameters can be estimated with the same rate of convergence, n^(-1/2), that is achieved in a parametric model.
APA, Harvard, Vancouver, ISO, and other styles
2

Haider, Peter. "Prediction with Mixture Models." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2014/6961/.

Full text
Abstract:
Learning a model for the relationship between the attributes and the annotated labels of data examples serves two purposes. Firstly, it enables the prediction of the label for examples without annotation. Secondly, the parameters of the model can provide useful insights into the structure of the data. If the data has an inherent partitioned structure, it is natural to mirror this structure in the model. Such mixture models predict by combining the individual predictions generated by the mixture components which correspond to the partitions in the data. Often the partitioned structure is latent, and has to be inferred when learning the mixture model. Directly evaluating the accuracy of the inferred partition structure is, in many cases, impossible because the ground truth cannot be obtained for comparison. However it can be assessed indirectly by measuring the prediction accuracy of the mixture model that arises from it. This thesis addresses the interplay between the improvement of predictive accuracy by uncovering latent cluster structure in data, and further addresses the validation of the estimated structure by measuring the accuracy of the resulting predictive model. In the application of filtering unsolicited emails, the emails in the training set are latently clustered into advertisement campaigns. Uncovering this latent structure allows filtering of future emails with very low false positive rates. In order to model the cluster structure, a Bayesian clustering model for dependent binary features is developed in this thesis. Knowing the clustering of emails into campaigns can also aid in uncovering which emails have been sent on behalf of the same network of captured hosts, so-called botnets. This association of emails to networks is another layer of latent clustering. Uncovering this latent structure allows service providers to further increase the accuracy of email filtering and to effectively defend against distributed denial-of-service attacks. 
To this end, a discriminative clustering model is derived in this thesis that is based on the graph of observed emails. The partitionings inferred using this model are evaluated through their capacity to predict the campaigns of new emails. Furthermore, when classifying the content of emails, statistical information about the sending server can be valuable. Learning a model that is able to make use of it requires training data that includes server statistics. In order to also use training data where the server statistics are missing, a model that is a mixture over potentially all substitutions thereof is developed. Another application is to predict the navigation behavior of the users of a website. Here, there is no a priori partitioning of the users into clusters, but to understand different usage scenarios and design different layouts for them, imposing a partitioning is necessary. The presented approach simultaneously optimizes the discriminative as well as the predictive power of the clusters. Each model is evaluated on real-world data and compared to baseline methods. The results show that explicitly modeling the assumptions about the latent cluster structure leads to improved predictions compared to the baselines. It is beneficial to incorporate a small number of hyperparameters that can be tuned to yield the best predictions in cases where the prediction accuracy can not be optimized directly.
APA, Harvard, Vancouver, ISO, and other styles
3

Qi, Meng. "Development in Normal Mixture and Mixture of Experts Modeling." UKnowledge, 2016. http://uknowledge.uky.edu/statistics_etds/15.

Full text
Abstract:
In this dissertation, first we consider the problem of testing homogeneity and order in a contaminated normal model, when the data are correlated under some known covariance structure. To address this problem, we developed a moment-based homogeneity and order test, and designed weights for the test statistics to increase the power of the homogeneity test. We applied our test to microarray data on Down's syndrome. This dissertation also studies a singular Bayesian information criterion (sBIC) for a bivariate hierarchical mixture model with varying weights, and develops a new data-dependent information criterion (sFLIC). We apply our model and criteria to birth weight and gestational age data, where the purpose is to select model complexity from the data.
APA, Harvard, Vancouver, ISO, and other styles
4

Polsen, Orathai. "Nonparametric regression and mixture models." Thesis, University of Leeds, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.578651.

Full text
Abstract:
Nonparametric regression estimation has become popular in the last 50 years. A commonly used nonparametric method for estimating the regression curve is the kernel estimator, exemplified by the Nadaraya-Watson estimator. The first part of the thesis concentrates on the important issue of how to make a good choice of smoothing parameter for the Nadaraya-Watson estimator. In this study three types of smoothing parameter selectors are investigated: cross-validation, plug-in and bootstrap. In addition, two situations are examined: the same smoothing parameter and different smoothing parameters are employed for the estimates of the numerator and the denominator. We study the asymptotic bias and variance of the Nadaraya-Watson estimator when different smoothing parameters are used. We propose various plug-in methods for selecting the smoothing parameter, including a bootstrap smoothing parameter selector. The performances of the proposed selectors are investigated and also compared with cross-validation via a simulation study. Numerical results demonstrate that the proposed plug-in selectors outperform cross-validation when the data are bivariate normally distributed. Numerical results also suggest that the proposed bootstrap selector with asymptotic pilot smoothing parameter compares favourably with cross-validation. We consider a circular-circular parametric regression model proposed by Taylor (2009), including parameter estimation and inference. In addition, we investigate diagnostic tools for circular regression which can be generally applied. A final thread is related to mixture models, in particular a mixture of linear regression models and a mixture of circular-circular regression models where there is unobserved group membership of the observation. We investigate methods for selecting starting values for the EM algorithm which is used to fit mixture models, and also the distributions of these values.
Our experiments suggest that the proposed method compares favourably with the common method in mixture of linear regression models.
APA, Harvard, Vancouver, ISO, and other styles
5

James, S. D. "Mixture models for time series." Thesis, Swansea University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637395.

Full text
Abstract:
This thesis reviews some known results for the class of mixture models introduced by Jalali and Pemberton (1995) and presents two examples from the literature, which are based on the theory. The first has a countable number of mixture elements while the second has a finite number, K, and is called the Bernstein mixture model, since it involves the use of Bernstein polynomials in its construction. By including an additional parameter, λ, in the Binomial weights function, we obtain a parameterised version of the Bernstein model. The elements of the transition matrix for this model are polynomials in λ of degree K and the stationary distribution assumes a more complicated structure compared with its unparameterised counterpart. A series of elementary mathematical techniques is applied to reduce the elements of the transition matrix to much simpler polynomials and Cramer's Rule is adopted as a solution to obtain an explicit, analytical expression for the stationary distribution of the time series. Through maximum likelihood estimation of the parameters, λ, and K, in the parameterised Bernstein model, the solution developed using Cramer's Rule is compared with an alternative approach for evaluating the stationary distribution. This approach involves implementing a NAG subroutine based on Crout's factorisation method to solve the usual equations for the stationary probability row-vector. Finally, a relatively straightforward treatment of asymptotic maximum likelihood theory is given for the parameterised Bernstein model by employing regularity conditions stated in Billingsley (1961).
APA, Harvard, Vancouver, ISO, and other styles
6

Sandhu, Manjinder Kaur. "Optimal designs for mixture models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31213583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sánchez, Luis Enrique Benites. "Finite mixture of regression models." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10052018-131627/.

Full text
Abstract:
This dissertation consists of three articles, proposing extensions of finite mixtures in regression models. Here we consider a flexible class of both univariate and multivariate distributions, which allows adequate modeling of asymmetric data that exhibit multimodality, heavy tails and outlying observations. This class has special cases such as the skew-normal, skew-t, skew-slash and skew normal contaminated distributions, as well as symmetric cases. Initially, a model is proposed based on the assumption that the errors follow a finite mixture of scale mixture of skew-normal (FM-SMSN) distributions rather than the conventional normal distribution. Next, we propose a censored regression model where we consider that the error follows a finite mixture of scale mixture of normal (SMN) distributions. Finally, we consider a finite mixture of multivariate regressions where the error has a multivariate SMSN distribution. For all the proposed models, two R packages were developed, which are reported in the appendix.
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Xiongya. "Robust multivariate mixture regression models." Diss., Kansas State University, 2017. http://hdl.handle.net/2097/38427.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Weixing Song
In this dissertation, we proposed a new robust estimation procedure for two multivariate mixture regression models and applied this novel method to functional mapping of dynamic traits. In the first part, a robust estimation procedure for the mixture of classical multivariate linear regression models is discussed by assuming that the error terms follow a multivariate Laplace distribution. An EM algorithm is developed based on the fact that the multivariate Laplace distribution is a scale mixture of the multivariate standard normal distribution. The performance of the proposed algorithm is thoroughly evaluated by some simulation and comparison studies. In the second part, a similar idea is extended to the mixture of linear mixed regression models by assuming that the random effect and the regression error jointly follow a multivariate Laplace distribution. Compared with the existing robust t procedure in the literature, simulation studies indicate that the finite sample performance of the proposed estimation procedure outperforms or is at least comparable to the robust t procedure. Compared to the t procedure, there is no need to determine the degrees of freedom, so the new robust estimation procedure is computationally more efficient than the robust t procedure. The ascent property for both EM algorithms is also proved. In the third part, the proposed robust method is applied to identify quantitative trait loci (QTL) underlying a functional mapping framework with dynamic traits of agricultural or biomedical interest. A robust multivariate Laplace mapping framework was proposed to replace the normality assumption. Simulation studies show the proposed method is comparable to the robust multivariate t-distribution developed in the literature and outperforms the normal procedure. As an illustration, the proposed method is also applied to a real data set.
APA, Harvard, Vancouver, ISO, and other styles
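The scale-mixture fact this dissertation builds its EM algorithm on, that a multivariate Laplace variate is a standard normal draw rescaled by an independent exponential draw, can be checked directly by simulation. The sketch below is hypothetical illustration code, not the dissertation's estimation procedure; the function name and parameterization (unit-mean exponential mixing, so the covariance equals Sigma) are our own:

```python
import numpy as np

def sample_mv_laplace(n, mu, Sigma, seed=None):
    """Draw from a multivariate Laplace via its scale-mixture representation:
    X = mu + sqrt(W) * (L @ z), with W ~ Exp(1), z ~ N(0, I), and L L^T = Sigma."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    L = np.linalg.cholesky(np.asarray(Sigma, dtype=float))
    W = rng.exponential(1.0, size=n)         # exponential mixing scales
    Z = rng.standard_normal((n, mu.size))    # standard normal draws
    return mu + np.sqrt(W)[:, None] * (Z @ L.T)

# Since E[W] = 1, the sample covariance should approach Sigma itself
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
X = sample_mv_laplace(100_000, [0.0, 0.0], Sigma, seed=0)
```

Treating the latent scale W as missing data is what turns maximum-likelihood estimation under this distribution into a tractable EM iteration.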
9

Kunkel, Deborah Elizabeth. "Anchored Bayesian Gaussian Mixture Models." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524134234501475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Evers, Ludger. "Model fitting and model selection for 'mixture of experts' models." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445776.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Mixture models"

1

Lindsay, Bruce G. Mixture Models. Hayward, CA, and Alexandria, VA: Institute of Mathematical Statistics and American Statistical Association, 1995. http://dx.doi.org/10.1214/cbms/1462106013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

McLachlan, Geoffrey, and David Peel. Finite Mixture Models. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2000. http://dx.doi.org/10.1002/0471721182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Basford, Kaye E., ed. Mixture models: Inference and applications to clustering. New York, N.Y.: M. Dekker, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bouguila, Nizar, and Wentao Fan, eds. Mixture Models and Applications. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-23876-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Jiahua. Statistical Inference Under Mixture Models. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6141-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

von Davier, Matthias. Multivariate and Mixture Distribution Rasch Models. Edited by Claus H. Carstensen. New York, NY: Springer New York, 2007. http://dx.doi.org/10.1007/978-0-387-49839-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hancock, Gregory R., and Karen M. Samuelsen, eds. Advances in latent variable mixture models. Charlotte, NC: Information Age Pub., 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mixture models: Theory, geometry, and applications. Hayward, Calif: Institute of Mathematical Statistics, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

SpringerLink (Online service), ed. Medical Applications of Finite Mixture Models. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jepson, Allan D. Mixture models for optical flow computation. Toronto: University of Toronto, Dept. of Computer Science, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Mixture models"

1

Yao, Weixin, and Sijia Xiang. "Hypothesis testing and model selection for mixture models." In Mixture Models, 188–209. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-6.

2

Yao, Weixin, and Sijia Xiang. "Mixture models for discrete data." In Mixture Models, 77–120. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-2.

3

Yao, Weixin, and Sijia Xiang. "Semiparametric mixture regression models." In Mixture Models, 300–338. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-10.

4

Yao, Weixin, and Sijia Xiang. "Label switching for mixture models." In Mixture Models, 157–87. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-5.

5

Yao, Weixin, and Sijia Xiang. "Robust mixture regression models." In Mixture Models, 210–46. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-7.

6

Yao, Weixin, and Sijia Xiang. "Semiparametric mixture models." In Mixture Models, 274–99. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-9.

7

Yao, Weixin, and Sijia Xiang. "Introduction to mixture models." In Mixture Models, 1–76. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-1.

8

Yao, Weixin, and Sijia Xiang. "Mixture models for high-dimensional data." In Mixture Models, 247–73. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-8.

9

Yao, Weixin, and Sijia Xiang. "Mixture regression models." In Mixture Models, 121–44. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-3.

10

Yao, Weixin, and Sijia Xiang. "Bayesian mixture models." In Mixture Models, 145–56. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003038511-4.


Conference papers on the topic "Mixture models"

1

Sandler, Mark. "Hierarchical mixture models." In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1281192.1281255.

2

Sak, Hasim, Cyril Allauzen, Kaisuke Nakajima, and Francoise Beaufays. "Mixture of mixture n-gram language models." In 2013 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2013. http://dx.doi.org/10.1109/asru.2013.6707701.

3

Simo-Serra, Edgar, Carme Torras, and Francesc Moreno-Noguer. "Geodesic Finite Mixture Models." In British Machine Vision Conference 2014. British Machine Vision Association, 2014. http://dx.doi.org/10.5244/c.28.91.

4

Beaufays, F., M. Weintraub, and Yochai Konig. "Discriminative mixture weight estimation for large Gaussian mixture models." In 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258). IEEE, 1999. http://dx.doi.org/10.1109/icassp.1999.758131.

5

Bar-Yosef, Yossi, and Yuval Bistritz. "Discriminative simplification of mixture models." In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5946927.

6

Maas, Ryan, Jeremy Hyrkas, Olivia Grace Telford, Magdalena Balazinska, Andrew Connolly, and Bill Howe. "Gaussian Mixture Models Use-Case." In the 3rd VLDB Workshop. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2803140.2803143.

7

Evangelio, Ruben Heras, Michael Patzold, and Thomas Sikora. "Splitting Gaussians in Mixture Models." In 2012 9th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2012. http://dx.doi.org/10.1109/avss.2012.69.

8

Yang, Zhixian, and Xiaojun Wan. "Dependency-based Mixture Language Models." In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.acl-long.535.

9

Kim, Kion, Damla Şentürk, and Runze Li. "Recent History Functional Linear Models." In Nonparametric Statistics and Mixture Models - A Festschrift in Honor of Thomas P Hettmansperger. WORLD SCIENTIFIC, 2011. http://dx.doi.org/10.1142/9789814340564_0011.

10

Marden, John I. "QQ Plots for Assessing Symmetry Models." In Nonparametric Statistics and Mixture Models - A Festschrift in Honor of Thomas P Hettmansperger. WORLD SCIENTIFIC, 2011. http://dx.doi.org/10.1142/9789814340564_0013.


Reports on the topic "Mixture models"

1

Lavrenko, Victor. Optimal Mixture Models in IR. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada440363.

2

Liu, Songqi. Mixture Models: From Latent Classes/Profiles to Latent Growth, Transitions, and Multilevel Mixture Models. Instats Inc., 2022. http://dx.doi.org/10.61700/ky72m8g8cc8x2469.

Abstract:
This seminar introduces mixture modeling and explores its application in applied psychology research and beyond. Topics and worked examples include latent class analysis (LCA), latent profile analysis (LPA), LCA/LPA with covariates, multilevel LCA/LPA, growth mixture modeling (GMM), and latent transition analysis (LTA). A certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS equivalent points.
3

Mueller, Shane, Andrew Boettcher, and Michael Young. Delineating Cultural Models: Extending the Cultural Mixture Model. Fort Belvoir, VA: Defense Technical Information Center, December 2011. http://dx.doi.org/10.21236/ada572740.

4

Koenker, Roger, Jiaying Gu, and Stanislav Volgushev. Testing for homogeneity in mixture models. Cemmap, March 2013. http://dx.doi.org/10.1920/wp.cem.2013.0913.

5

Gu, Jiaying, Stanislav Volgushev, and Roger Koenker. Testing for homogeneity in mixture models. The IFS, August 2017. http://dx.doi.org/10.1920/wp.cem.2017.3917.

6

Yu, Guoshen, and Guillermo Sapiro. Statistical Compressive Sensing of Gaussian Mixture Models. Fort Belvoir, VA: Defense Technical Information Center, October 2010. http://dx.doi.org/10.21236/ada540728.

7

Chen, Xiaohong, Elie Tamer, and Maria Ponomareva. Likelihood inference in some finite mixture models. Cemmap, May 2013. http://dx.doi.org/10.1920/wp.cem.2013.1913.

8

Steele, Russell J., Adrian E. Raftery, and Mary J. Emond. Computing Normalizing Constants for Finite Mixture Models via Incremental Mixture Importance Sampling (IMIS). Fort Belvoir, VA: Defense Technical Information Center, July 2003. http://dx.doi.org/10.21236/ada459853.

9

Kam, Chester. Mixture Modeling for Measurement Scale Assessment. Instats Inc., 2023. http://dx.doi.org/10.61700/8ll0tq1hym0nq469.

Abstract:
This seminar will introduce the use of mixture models for measurement scale assessment, covering topics such as factor analysis and careless response detection. As a set of worked examples, mixture models will be applied to multitrait-multimethod (MTMM) and bifactor models. By attending this seminar, you will learn how to understand and statistically handle different types of method or source variance using mixture models, improving the quality and rigor of your research. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS equivalent points.
10

Heckman, James, and Christopher Taber. Econometric Mixture Models and More General Models for Unobservables in Duration Analysis. Cambridge, MA: National Bureau of Economic Research, June 1994. http://dx.doi.org/10.3386/t0157.
