A ready-made bibliography on the topic "Gaussian mixture models"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Choose a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Gaussian mixture models".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the work's abstract online, provided the relevant details are available in the metadata.

Journal articles on the topic "Gaussian mixture models"

1

Ju, Zhaojie, and Honghai Liu. "Fuzzy Gaussian Mixture Models". Pattern Recognition 45, no. 3 (March 2012): 1146–58. http://dx.doi.org/10.1016/j.patcog.2011.08.028.

2

McNicholas, Paul David, and Thomas Brendan Murphy. "Parsimonious Gaussian mixture models". Statistics and Computing 18, no. 3 (April 19, 2008): 285–96. http://dx.doi.org/10.1007/s11222-008-9056-0.

3

Viroli, Cinzia, and Geoffrey J. McLachlan. "Deep Gaussian mixture models". Statistics and Computing 29, no. 1 (December 1, 2017): 43–51. http://dx.doi.org/10.1007/s11222-017-9793-z.

4

Verbeek, J. J., N. Vlassis, and B. Kröse. "Efficient Greedy Learning of Gaussian Mixture Models". Neural Computation 15, no. 2 (February 1, 2003): 469–85. http://dx.doi.org/10.1162/089976603762553004.

Abstract:
This article concerns the greedy learning of Gaussian mixtures. In the greedy approach, mixture components are inserted into the mixture one after the other. We propose a heuristic for searching for the optimal component to insert. In a randomized manner, a set of candidate new components is generated. For each of these candidates, we find the locally optimal new component and insert it into the existing mixture. The resulting algorithm resolves the sensitivity to initialization of state-of-the-art methods, like expectation maximization, and has running time linear in the number of data points and quadratic in the (final) number of mixture components. Due to its greedy nature, the algorithm can be particularly useful when the optimal number of mixture components is unknown. Experimental results comparing the proposed algorithm to other methods on density estimation and texture segmentation are provided.
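The grow-one-component-at-a-time idea in the abstract can be imitated loosely in scikit-learn. This is a simplified sketch, not the authors' randomized candidate search: it refits the mixture at each size and keeps the size with the lowest BIC, whereas the paper selects components by local optimization.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy data: two well-separated 1-D clusters
rng = np.random.default_rng(42)
X = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)]).reshape(-1, 1)

# Grow the mixture one component at a time and keep the size with the best BIC
best_k, best_bic = 0, np.inf
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic = gmm.bic(X)
    if bic < best_bic:
        best_k, best_bic = k, bic

print(best_k)  # expected: 2 for this well-separated sample
```

As in the paper, the stopping criterion falls out naturally: growth stops paying off once extra components no longer improve the (penalized) likelihood.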
5

Kunkel, Deborah, and Mario Peruggia. "Anchored Bayesian Gaussian mixture models". Electronic Journal of Statistics 14, no. 2 (2020): 3869–913. http://dx.doi.org/10.1214/20-ejs1756.

6

Chassagnol, Bastien, Antoine Bichat, Cheïma Boudjeniba, Pierre-Henri Wuillemin, Mickaël Guedj, David Gohel, Gregory Nuel, and Etienne Becht. "Gaussian Mixture Models in R". R Journal 15, no. 2 (November 1, 2023): 56–76. http://dx.doi.org/10.32614/rj-2023-043.

7

Ruzgas, Tomas, and Indrė Drulytė. "Kernel Density Estimators for Gaussian Mixture Models". Lietuvos statistikos darbai 52, no. 1 (December 20, 2013): 14–21. http://dx.doi.org/10.15388/ljs.2013.13919.

Abstract:
The problem of nonparametric estimation of probability density function is considered. The performance of kernel estimators based on various common kernels and a new kernel K (see (14)) with both fixed and adaptive smoothing bandwidth is compared in terms of the symmetric mean absolute percentage error using the Monte Carlo method. The kernel K is everywhere positive but has lighter tails than the Gaussian density. Gaussian mixture models from a collection introduced by Marron and Wand (1992) are taken for Monte Carlo simulations. The adaptive kernel method outperforms the smoothing with a fixed bandwidth in the majority of models. The kernel K shows better performance for Gaussian mixtures with considerably overlapping components and multiple peaks (double claw distribution).
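The fixed-versus-adaptive comparison in the abstract can be sketched with SciPy's Gaussian KDE as the fixed-bandwidth pilot and Abramson-style local bandwidths for the adaptive estimate. This is a generic sketch: the paper's custom kernel K and its SMAPE evaluation are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
# Sample from a two-component Gaussian mixture (a Marron-Wand-style target)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(4.0, 0.5, 500)])

pilot = gaussian_kde(x)           # fixed bandwidth via Scott's rule
h = pilot.factor * x.std(ddof=1)  # the fixed bandwidth it effectively uses

# Abramson adaptive bandwidths: narrower kernels where the pilot density is high
f = pilot(x)
lam = (f / np.exp(np.log(f).mean())) ** -0.5

def adaptive_pdf(t):
    """Adaptive kernel density estimate at point t."""
    z = (t - x) / (h * lam)
    return np.mean(np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * h * lam))

grid = np.linspace(-5.0, 9.0, 400)
mass = trapezoid([adaptive_pdf(t) for t in grid], grid)
print(round(mass, 2))  # the adaptive estimate still integrates to about 1
```

The local factors `lam` shrink the bandwidth near sharp peaks and widen it in the tails, which is why adaptive smoothing tends to win on multimodal mixtures like the double claw.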
8

Chen, Yongxin, Tryphon T. Georgiou, and Allen Tannenbaum. "Optimal Transport for Gaussian Mixture Models". IEEE Access 7 (2019): 6269–78. http://dx.doi.org/10.1109/access.2018.2889838.

9

Nasios, N., and A. G. Bors. "Variational learning for Gaussian mixture models". IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 36, no. 4 (August 2006): 849–62. http://dx.doi.org/10.1109/tsmcb.2006.872273.

10

Zhang, Baibo, Changshui Zhang, and Xing Yi. "Active curve axis Gaussian mixture models". Pattern Recognition 38, no. 12 (December 2005): 2351–62. http://dx.doi.org/10.1016/j.patcog.2005.01.017.


Doctoral dissertations on the topic "Gaussian mixture models"

1

Kunkel, Deborah Elizabeth. "Anchored Bayesian Gaussian Mixture Models". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524134234501475.

2

Nkadimeng, Calvin. "Language identification using Gaussian mixture models". Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4170.

Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: The importance of language identification for African languages is seeing a dramatic increase due to the development of telecommunication infrastructure and, as a result, an increase in volumes of data and speech traffic in public networks. By automatically processing the raw speech data, the vital assistance given to people in distress can be sped up by referring their calls to a person knowledgeable in that language. To this effect, a speech corpus was developed and various algorithms were implemented and tested on raw telephone speech data. These algorithms entailed data preparation, signal processing, and statistical analysis aimed at discriminating between languages. Gaussian mixture models (GMMs) were chosen as the statistical model for this research due to their ability to represent an entire language with a single stochastic model that does not require phonetic transcription. Language identification for African languages using GMMs is feasible, although a few challenges remain, such as proper classification and a more accurate study of the relationships between languages. Other methods that make use of phonetically transcribed data need to be explored and tested with the new corpus for the research to be more rigorous.
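The GMM-per-language scheme described in the abstract can be sketched as follows. The feature vectors here are synthetic stand-ins; a real system would extract MFCCs from telephone speech.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-in "acoustic feature" frames for two languages (real systems use MFCCs)
feats = {
    "langA": rng.normal(loc=0.0, scale=1.0, size=(500, 4)),
    "langB": rng.normal(loc=3.0, scale=1.0, size=(500, 4)),
}

# One GMM per language; no phonetic transcription is needed
models = {lang: GaussianMixture(n_components=4, random_state=0).fit(F)
          for lang, F in feats.items()}

def identify(utterance):
    # Score the utterance by average per-frame log-likelihood under each model
    scores = {lang: m.score(utterance) for lang, m in models.items()}
    return max(scores, key=scores.get)

test_utt = rng.normal(loc=3.0, scale=1.0, size=(50, 4))
print(identify(test_utt))  # expected: "langB"
```

This captures the thesis' key point: each language is represented by a single stochastic model, and identification reduces to a likelihood comparison.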
3

Gundersen, Terje. "Voice Transformation based on Gaussian mixture models". Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10878.

Abstract:

In this thesis, a probabilistic model for transforming a voice to sound like another specific voice is tested. The model is fully automatic and only requires some 100 training sentences from both speakers with the same acoustic content. The classical source-filter decomposition allows prosodic and spectral transformation to be performed independently. The transformations are based on a Gaussian mixture model and a transformation function suggested by Y. Stylianou. Feature vectors of the same content from the source and target speaker, aligned in time by dynamic time warping, are fitted to a GMM. The short time spectra, represented as cepstral coefficients and derived from LPC, and the pitch periods, represented as fundamental frequency estimated from the RAPT algorithm, are transformed with the same probabilistic transformation function. Several techniques of spectrum and pitch transformation were assessed in addition to some novel smoothing techniques of the fundamental frequency contour. The pitch transform was implemented on the excitation signal from the inverse LP filtering by time domain PSOLA. The transformed spectrum parameters were used in the synthesis filter with the transformed excitation as input to yield the transformed voice. A listening test was performed with the best setup from objective tests and the results indicate that it is possible to recognise the transformed voice as the target speaker with a 72 % probability. However, the synthesised voice was affected by a muffling effect due to incorrect frequency transformation and the prosody sounded somewhat robotic.

4

Subramaniam, Anand D. "Gaussian mixture models in compression and communication /". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3112847.

5

Cilliers, Francois Dirk. "Tree-based Gaussian mixture models for speaker verification". Thesis, Link to the online version, 2005. http://hdl.handle.net/10019.1/1639.

6

Lu, Liang. "Subspace Gaussian mixture models for automatic speech recognition". Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8065.

Abstract:
In most state-of-the-art speech recognition systems, Gaussian mixture models (GMMs) are used to model the density of the emitting states in the hidden Markov models (HMMs). In a conventional system, the model parameters of each GMM are estimated directly and independently given the alignment. This results in a large number of model parameters to be estimated and, consequently, a large amount of training data is required to fit the model. In addition, different sources of acoustic variability that impact the accuracy of a recogniser, such as pronunciation variation, accent, speaker factors and environmental noise, are only weakly modelled and factorized by adaptation techniques such as maximum likelihood linear regression (MLLR), maximum a posteriori adaptation (MAP) and vocal tract length normalisation (VTLN). In this thesis, we discuss an alternative acoustic modelling approach, the subspace Gaussian mixture model (SGMM), which is expected to deal with these two issues better. In an SGMM, the model parameters are derived from low-dimensional model and speaker subspaces that can capture phonetic and speaker correlations. Given these subspaces, only a small number of state-dependent parameters are required to derive the corresponding GMMs. Hence, the total number of model parameters can be reduced, which allows acoustic modelling with a limited amount of training data. In addition, the SGMM-based acoustic model factorizes the phonetic and speaker factors, and within this framework other sources of acoustic variability may also be explored. In this thesis, we propose a regularised model estimation for SGMMs, which avoids overtraining when the training data are sparse. We also take advantage of the structure of SGMMs to explore cross-lingual acoustic modelling for low-resource speech recognition, where the model subspace is estimated from out-of-domain data and ported to the target language system. In this case, only the state-dependent parameters need to be estimated, which relaxes the requirement on the amount of training data. To improve the robustness of SGMMs against environmental noise, we propose to apply the joint uncertainty decoding (JUD) technique, which is shown to be efficient and effective. We report experimental results on the Wall Street Journal (WSJ) database and GlobalPhone corpora to evaluate the regularisation and cross-lingual modelling of SGMMs. Noise compensation using JUD for SGMM acoustic models is evaluated on the Aurora 4 database.
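The core parameter-tying idea in the abstract, deriving each state's GMM means from low-dimensional subspace vectors, can be sketched in a few lines of NumPy. The dimensions below are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
D, S, I, J = 39, 5, 4, 10  # feature dim, subspace dim, mixture comps, HMM states

# Globally shared subspace projection matrices, one per mixture component
M = rng.normal(size=(I, D, S))

# Per-state parameters are just low-dimensional vectors ...
v = rng.normal(size=(J, S))

# ... from which the full state-dependent GMM means are derived
means = np.einsum('ids,js->jid', M, v)  # shape: (states, components, feature dim)

# Each state now has an I-component GMM in D dimensions, but owns only S
# free parameters instead of I * D.
print(means.shape)
```

This is why SGMMs need far less training data per state: the expensive structure lives in the shared matrices `M`, estimated once (possibly even from out-of-domain data, as in the cross-lingual setting).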
7

Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.

Abstract:
This thesis’ original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through data. This algorithm, called Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint state and Q-values space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. Results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks.
8

Chockalingam, Prakash. "Non-rigid multi-modal object tracking using Gaussian mixture models". Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1252937467/.

Abstract:
Thesis (M.S.) -- Clemson University, 2009.
Contains additional supplemental files. Title from first page of PDF file. Document formatted into pages; contains vii, 54 p. ; also includes color graphics.
9

Wang, Bo Yu. "Deterministic annealing EM algorithm for robust learning of Gaussian mixture models". Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2493309.

10

Plasse, Joshua H. "The EM Algorithm in Multivariate Gaussian Mixture Models using Anderson Acceleration". Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/290.

Abstract:
Over the years analysts have used the EM algorithm to obtain maximum likelihood estimates from incomplete data for various models. The general algorithm admits several appealing properties such as strong global convergence; however, the rate of convergence is linear which in some cases may be unacceptably slow. This work is primarily concerned with applying Anderson acceleration to the EM algorithm for Gaussian mixture models (GMM) in hopes of alleviating slow convergence. As preamble we provide a review of maximum likelihood estimation and derive the EM algorithm in detail. The iterates that correspond to the GMM are then formulated and examples are provided. These examples show how faster convergence is experienced when the data are well separated, whereas much slower convergence is seen whenever the sample is poorly separated. The Anderson acceleration method is then presented, and its connection to the EM algorithm is discussed. The work is then concluded by applying Anderson acceleration to the EM algorithm which results in reducing the number of iterations required to obtain convergence.
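The EM iterates that the thesis accelerates can be sketched for a univariate two-component GMM. This is plain EM only; Anderson acceleration would treat the update below as a fixed-point map G(theta) and extrapolate from the history of iterates.

```python
import numpy as np

rng = np.random.default_rng(0)
# A well-separated sample, where (as the thesis notes) EM converges quickly
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(4, 1, 300)])

w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):  # the EM fixed-point iteration theta <- G(theta)
    # E-step: responsibilities of each component for each point
    pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form weight, mean, and variance updates
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n

print(np.round(np.sort(mu), 1))  # means recovered near -4 and 4
```

On poorly separated samples this loop needs many more sweeps, which is exactly the regime where wrapping it in Anderson acceleration pays off.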

Books on the topic "Gaussian mixture models"

1

1st, Krishna M. Vamsi. Brain Tumor Segmentation Using Bivariate Gaussian Mixture Models. Selfypage Developers Pvt Ltd, 2022.

2

Speaker Verification in the Presence of Channel Mismatch Using Gaussian Mixture Models. Storming Media, 1997.

3

Cheng, Russell. Finite Mixture Examples; MAPIS Details. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0018.

Abstract:
Two detailed numerical examples are given in this chapter illustrating and comparing mainly the reversible jump Markov chain Monte Carlo (RJMCMC) and the maximum a posteriori/importance sampling (MAPIS) methods. The numerical examples are the well-known galaxy data set with sample size 82, and the Hidalgo stamp issues thickness data with sample size 485. A comparison is made of the estimates obtained by the RJMCMC and MAPIS methods for (i) the posterior k-distribution of the number of components, k, (ii) the predictive finite mixture distribution itself, and (iii) the posterior distributions of the component parameters and weights. The estimates obtained by MAPIS are shown to be more satisfactory and meaningful. Details are given of the practical implementation of MAPIS for five non-normal mixture models, namely: the extreme value, gamma, inverse Gaussian, lognormal, and Weibull. Mathematical details are also given of the acceptance-rejection importance sampling used in MAPIS.

Book chapters on the topic "Gaussian mixture models"

1

Yu, Dong, and Li Deng. "Gaussian Mixture Models". In Automatic Speech Recognition, 13–21. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-5779-3_2.

2

Reynolds, Douglas. "Gaussian Mixture Models". In Encyclopedia of Biometrics, 659–63. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-73003-5_196.

3

Reynolds, Douglas. "Gaussian Mixture Models". In Encyclopedia of Biometrics, 827–32. Boston, MA: Springer US, 2015. http://dx.doi.org/10.1007/978-1-4899-7488-4_196.

4

Liu, Honghai, Zhaojie Ju, Xiaofei Ji, Chee Seng Chan, and Mehdi Khoury. "Fuzzy Gaussian Mixture Models". In Human Motion Sensing and Recognition, 95–121. Berlin, Heidelberg: Springer Berlin Heidelberg, 2017. http://dx.doi.org/10.1007/978-3-662-53692-6_5.

5

Lee, Hyoung-joo, and Sungzoon Cho. "Combining Gaussian Mixture Models". In Lecture Notes in Computer Science, 666–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28651-6_98.

6

Scrucca, Luca, Chris Fraley, T. Brendan Murphy, and Adrian E. Raftery. "Visualizing Gaussian Mixture Models". In Model-Based Clustering, Classification, and Density Estimation Using mclust in R, 153–88. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003277965-6.

7

Aladjem, Mayer. "Projection Pursuit Fitting Gaussian Mixture Models". In Lecture Notes in Computer Science, 396–404. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-70659-3_41.

8

Blömer, Johannes, and Kathrin Bujna. "Adaptive Seeding for Gaussian Mixture Models". In Advances in Knowledge Discovery and Data Mining, 296–308. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-31750-2_24.

9

Zeng, Jia, and Zhi-Qiang Liu. "Type-2 Fuzzy Gaussian Mixture Models". In Type-2 Fuzzy Graphical Models for Pattern Recognition, 45–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44690-4_4.

10

Ponsa, Daniel, and Xavier Roca. "Unsupervised Parameterisation of Gaussian Mixture Models". In Lecture Notes in Computer Science, 388–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36079-4_34.


Conference papers on the topic "Gaussian mixture models"

1

Maas, Ryan, Jeremy Hyrkas, Olivia Grace Telford, Magdalena Balazinska, Andrew Connolly, and Bill Howe. "Gaussian Mixture Models Use-Case". In the 3rd VLDB Workshop. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2803140.2803143.

2

Beaufays, F., M. Weintraub, and Yochai Konig. "Discriminative mixture weight estimation for large Gaussian mixture models". In 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258). IEEE, 1999. http://dx.doi.org/10.1109/icassp.1999.758131.

3

Levine, Stacey, Katie Heaps, Joshua Koslosky, and Glenn Sidle. "Image Fusion using Gaussian Mixture Models". In British Machine Vision Conference 2013. British Machine Vision Association, 2013. http://dx.doi.org/10.5244/c.27.89.

4

Keselman, Leonid, and Martial Hebert. "Direct Fitting of Gaussian Mixture Models". In 2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019. http://dx.doi.org/10.1109/crv.2019.00012.

5

Zeng, Jia, Lei Xie, and Zhi-Qiang Liu. "Gaussian Mixture Models with Uncertain Parameters". In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370617.

6

D'souza, Kevin, and K. T. V. Talele. "Voice conversion using Gaussian Mixture Models". In 2015 International Conference on Communication, Information & Computing Technology (ICCICT). IEEE, 2015. http://dx.doi.org/10.1109/iccict.2015.7045743.

7

Bouguila, Nizar. "Non-Gaussian mixture image models prediction". In 2008 15th IEEE International Conference on Image Processing. IEEE, 2008. http://dx.doi.org/10.1109/icip.2008.4712321.

8

Gupta, Hitesh Anand, and Vinay M. Varma. "Noise classification using Gaussian Mixture Models". In 2012 1st International Conference on Recent Advances in Information Technology (RAIT). IEEE, 2012. http://dx.doi.org/10.1109/rait.2012.6194530.

9

Zelinka, Petr. "Smooth interpolation of Gaussian mixture models". In 2009 19th International Conference Radioelektronika (RADIOELEKTRONIKA). IEEE, 2009. http://dx.doi.org/10.1109/radioelek.2009.5158781.

10

Pfaff, Patrick, Christian Plagemann, and Wolfram Burgard. "Gaussian mixture models for probabilistic localization". In 2008 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2008. http://dx.doi.org/10.1109/robot.2008.4543251.


Reports on the topic "Gaussian mixture models"

1

Yu, Guoshen, and Guillermo Sapiro. Statistical Compressive Sensing of Gaussian Mixture Models. Fort Belvoir, VA: Defense Technical Information Center, October 2010. http://dx.doi.org/10.21236/ada540728.

2

Hogden, J., and J. C. Scovel. MALCOM X: Combining maximum likelihood continuity mapping with Gaussian mixture models. Office of Scientific and Technical Information (OSTI), November 1998. http://dx.doi.org/10.2172/677150.

3

Yu, Guoshen, Guillermo Sapiro, and Stephane Mallat. Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity. Fort Belvoir, VA: Defense Technical Information Center, June 2010. http://dx.doi.org/10.21236/ada540722.

4

Ramakrishnan, Aravind, Ashraf Alrajhi, Egemen Okte, Hasan Ozer, and Imad Al-Qadi. Truck-Platooning Impacts on Flexible Pavements: Experimental and Mechanistic Approaches. Illinois Center for Transportation, November 2021. http://dx.doi.org/10.36501/0197-9191/21-038.

Abstract:
Truck platoons are expected to improve safety and reduce fuel consumption. However, their use is projected to accelerate pavement damage due to channelized load application (lack of wander) and potentially reduced duration between truck-loading applications (reduced rest period). The effect of wander on pavement damage is well documented, while relatively few studies are available on the effect of rest period on pavement permanent deformation. Therefore, the main objective of this study was to quantify the impact of rest period theoretically, using a numerical method, and experimentally, using laboratory testing. A 3-D finite-element (FE) pavement model was developed and run to quantify the effect of rest period. Strain recovery and accumulation were predicted by fitting Gaussian mixture models to the strain values computed from the FE model. The effect of rest period was found to be insignificant for truck spacing greater than 10 ft. An experimental program was conducted, and several asphalt concrete (AC) mixes were considered at various stress levels, temperatures, and rest periods. Test results showed that AC deformation increased with rest period, irrespective of AC-mix type, stress level, and/or temperature. This observation was attributed to a well-documented hardening-relaxation mechanism that occurs during AC plastic deformation. Hence, the experimental and FE-model results conflict, owing to the modeling of AC as viscoelastic and the difference in loading mechanism. A shift model was developed by extending the time-temperature superposition concept to incorporate rest period, using the experimental data. The shift factors were used to compute the equivalent number of cycles for various platoon scenarios (truck spacings or rest periods). The shift model was implemented in the AASHTOware pavement mechanistic-empirical design (PMED) guidelines for the calculation of rutting using the equivalent number of cycles.
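The report's use of a GMM to separate strain responses can be imitated on synthetic data. The values below are invented microstrain magnitudes for illustration, not the study's FE output: a dominant low-strain (recovery) mode plus a smaller high-strain (accumulation) mode.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Synthetic microstrain values: 80% low-strain mode, 20% high-strain mode
strain = np.concatenate([rng.normal(5.0, 1.0, 400),
                         rng.normal(50.0, 5.0, 100)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(strain)
order = np.argsort(gmm.means_.ravel())  # sort components by mean strain
weights = gmm.weights_[order]
print(weights.round(1))  # recovers the 80/20 split between the two modes
```

Once fitted, the component means and weights summarize how much of the response belongs to each strain regime, which is the role the GMM plays in the report's FE post-processing.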
5

De Leon, Phillip L., and Richard D. McClanahan. Efficient speaker verification using Gaussian mixture model component clustering. Office of Scientific and Technical Information (OSTI), April 2012. http://dx.doi.org/10.2172/1039402.

