Ready-made bibliography on the topic "Gaussian mixture models"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Gaussian mixture models".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if the relevant data are available in the metadata.
Journal articles on the topic "Gaussian mixture models"
Ju, Zhaojie, and Honghai Liu. "Fuzzy Gaussian Mixture Models". Pattern Recognition 45, no. 3 (March 2012): 1146–58. http://dx.doi.org/10.1016/j.patcog.2011.08.028.
McNicholas, Paul David, and Thomas Brendan Murphy. "Parsimonious Gaussian mixture models". Statistics and Computing 18, no. 3 (April 19, 2008): 285–96. http://dx.doi.org/10.1007/s11222-008-9056-0.
Viroli, Cinzia, and Geoffrey J. McLachlan. "Deep Gaussian mixture models". Statistics and Computing 29, no. 1 (December 1, 2017): 43–51. http://dx.doi.org/10.1007/s11222-017-9793-z.
Verbeek, J. J., N. Vlassis, and B. Kröse. "Efficient Greedy Learning of Gaussian Mixture Models". Neural Computation 15, no. 2 (February 1, 2003): 469–85. http://dx.doi.org/10.1162/089976603762553004.
Kunkel, Deborah, and Mario Peruggia. "Anchored Bayesian Gaussian mixture models". Electronic Journal of Statistics 14, no. 2 (2020): 3869–913. http://dx.doi.org/10.1214/20-ejs1756.
Chassagnol, Bastien, Antoine Bichat, Cheïma Boudjeniba, Pierre-Henri Wuillemin, Mickaël Guedj, David Gohel, Gregory Nuel, and Etienne Becht. "Gaussian Mixture Models in R". R Journal 15, no. 2 (November 1, 2023): 56–76. http://dx.doi.org/10.32614/rj-2023-043.
Ruzgas, Tomas, and Indrė Drulytė. "Kernel Density Estimators for Gaussian Mixture Models". Lietuvos statistikos darbai 52, no. 1 (December 20, 2013): 14–21. http://dx.doi.org/10.15388/ljs.2013.13919.
Chen, Yongxin, Tryphon T. Georgiou, and Allen Tannenbaum. "Optimal Transport for Gaussian Mixture Models". IEEE Access 7 (2019): 6269–78. http://dx.doi.org/10.1109/access.2018.2889838.
Nasios, N., and A. G. Bors. "Variational learning for Gaussian mixture models". IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 36, no. 4 (August 2006): 849–62. http://dx.doi.org/10.1109/tsmcb.2006.872273.
Zhang, Baibo, Changshui Zhang, and Xing Yi. "Active curve axis Gaussian mixture models". Pattern Recognition 38, no. 12 (December 2005): 2351–62. http://dx.doi.org/10.1016/j.patcog.2005.01.017.
Pełny tekst źródłaRozprawy doktorskie na temat "Gaussian mixture models"
Kunkel, Deborah Elizabeth. "Anchored Bayesian Gaussian Mixture Models". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524134234501475.
Nkadimeng, Calvin. "Language identification using Gaussian mixture models". Thesis, Stellenbosch: University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4170.
ENGLISH ABSTRACT: The importance of language identification for African languages is increasing dramatically owing to the development of telecommunication infrastructure and the resulting growth in data and speech traffic on public networks. By automatically processing raw speech data, vital assistance to people in distress can be delivered more quickly, by routing their calls to a person who knows that language. To this end a speech corpus was developed and various algorithms were implemented and tested on raw telephone speech data. These algorithms entailed data preparation, signal processing, and statistical analysis aimed at discriminating between languages. Gaussian Mixture Models (GMMs) were chosen as the statistical model for this research because they can represent an entire language with a single stochastic model that does not require phonetic transcription. Language identification for African languages using GMMs is feasible, although a few challenges remain, such as proper classification and an accurate study of the relationships between the languages. Other methods that make use of phonetically transcribed data should be explored and tested with the new corpus to make the research more rigorous.
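The GMM approach summarized in the abstract above lends itself to a short illustration. The sketch below is a hypothetical minimal pipeline, not the thesis' actual implementation: it assumes frame-level acoustic features such as MFCCs have already been extracted and uses scikit-learn's GaussianMixture; one model is fitted per language, and an utterance is assigned to the language whose model scores it highest.

```python
# Minimal, hypothetical sketch of GMM-based language identification.
# Assumes frame-level features (e.g. MFCCs) are already extracted; the
# component count and covariance type are illustrative choices only.
from sklearn.mixture import GaussianMixture

def train_language_models(features_by_language, n_components=16):
    """Fit one GMM per language on an (n_frames, n_dims) array of feature frames."""
    models = {}
    for language, frames in features_by_language.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              max_iter=200,
                              random_state=0)
        gmm.fit(frames)
        models[language] = gmm
    return models

def identify_language(models, utterance_frames):
    """Pick the language whose GMM assigns the highest average log-likelihood."""
    scores = {lang: gmm.score(utterance_frames) for lang, gmm in models.items()}
    return max(scores, key=scores.get)
```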
Gundersen, Terje. "Voice Transformation based on Gaussian mixture models". Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10878.
In this thesis, a probabilistic model for transforming a voice to sound like another specific voice is tested. The model is fully automatic and only requires some 100 training sentences from both speakers with the same acoustic content. The classical source-filter decomposition allows prosodic and spectral transformation to be performed independently. The transformations are based on a Gaussian mixture model and a transformation function suggested by Y. Stylianou. Feature vectors of the same content from the source and target speaker, aligned in time by dynamic time warping, are fitted to a GMM. The short time spectra, represented as cepstral coefficients and derived from LPC, and the pitch periods, represented as fundamental frequency estimated from the RAPT algorithm, are transformed with the same probabilistic transformation function. Several techniques of spectrum and pitch transformation were assessed in addition to some novel smoothing techniques of the fundamental frequency contour. The pitch transform was implemented on the excitation signal from the inverse LP filtering by time domain PSOLA. The transformed spectrum parameters were used in the synthesis filter with the transformed excitation as input to yield the transformed voice. A listening test was performed with the best setup from objective tests and the results indicate that it is possible to recognise the transformed voice as the target speaker with a 72 % probability. However, the synthesised voice was affected by a muffling effect due to incorrect frequency transformation and the prosody sounded somewhat robotic.
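For context, the GMM-based transformation function associated with Stylianou, to which the abstract refers, generally takes the following form in the voice-conversion literature (a sketch of the standard formulation, not reproduced from the thesis itself), mapping a source feature vector to an estimate of the target speaker's features:

$$\hat{y}_t = F(x_t) = \sum_{i=1}^{M} P(c_i \mid x_t)\left[\mu_i^{y} + \Sigma_i^{yx}\left(\Sigma_i^{xx}\right)^{-1}\left(x_t - \mu_i^{x}\right)\right],$$

where $x_t$ is a source feature vector (e.g. cepstral coefficients), $P(c_i \mid x_t)$ is the posterior probability of the $i$-th of $M$ mixture components given the source frame, and $\mu_i^{x}, \mu_i^{y}, \Sigma_i^{xx}, \Sigma_i^{yx}$ are the component means and (cross-)covariances estimated from the time-aligned source-target feature pairs.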
Subramaniam, Anand D. "Gaussian mixture models in compression and communication". Diss., 2003. Connect to a 24 p. preview or request complete full text in PDF format; access restricted to UC campuses. http://wwwlib.umi.com/cr/ucsd/fullcit?p3112847.
Cilliers, Francois Dirk. "Tree-based Gaussian mixture models for speaker verification". Thesis, 2005. http://hdl.handle.net/10019.1/1639.
Lu, Liang. "Subspace Gaussian mixture models for automatic speech recognition". Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8065.
Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models". Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.
This thesis' original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through data. This algorithm, called Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint state and Q-values space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. Results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks.
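The pairing of a Gaussian mixture over the state space with linear Q-learning, as described in the abstract, can be sketched roughly as follows. This is a simplified, hypothetical illustration rather than the FIGMN algorithm itself: the mixture is fitted offline with scikit-learn instead of incrementally, and the component count, learning rate, and discount factor are placeholder values.

```python
# Hypothetical sketch: GMM responsibilities over the state space used as
# features for linear Q-learning. Not the FIGMN algorithm itself.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMLinearQ:
    def __init__(self, state_sample, n_actions, n_components=10, alpha=0.1, gamma=0.99):
        # Offline fit over a sample of states stands in for incremental learning.
        self.gmm = GaussianMixture(n_components=n_components, random_state=0)
        self.gmm.fit(state_sample)
        self.w = np.zeros((n_actions, n_components))  # one weight vector per action
        self.alpha, self.gamma = alpha, gamma

    def features(self, state):
        # Posterior responsibilities of the mixture components for this state.
        return self.gmm.predict_proba(np.asarray(state).reshape(1, -1)).ravel()

    def q_values(self, state):
        return self.w @ self.features(state)

    def update(self, state, action, reward, next_state, done):
        # One step of linear Q-learning on the GMM-derived features.
        phi = self.features(state)
        target = reward if done else reward + self.gamma * self.q_values(next_state).max()
        td_error = target - self.w[action] @ phi
        self.w[action] += self.alpha * td_error * phi
```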
Chockalingam, Prakash. "Non-rigid multi-modal object tracking using Gaussian mixture models". Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1252937467/.
Contains additional supplemental files. Title from first page of PDF file. Document formatted into pages; contains vii, 54 p.; also includes color graphics.
Wang, Bo Yu. "Deterministic annealing EM algorithm for robust learning of Gaussian mixture models". Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2493309.
Plasse, Joshua H. "The EM Algorithm in Multivariate Gaussian Mixture Models using Anderson Acceleration". Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/290.
Pełny tekst źródłaKsiążki na temat "Gaussian mixture models"
Krishna, M. Vamsi. Brain Tumor Segmentation Using Bivariate Gaussian Mixture Models. 1st ed. Selfypage Developers Pvt Ltd, 2022.
Speaker Verification in the Presence of Channel Mismatch Using Gaussian Mixture Models. Storming Media, 1997.
Cheng, Russell. Finite Mixture Examples; MAPIS Details. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0018.
Book chapters on the topic "Gaussian mixture models"
Yu, Dong, and Li Deng. "Gaussian Mixture Models". In Automatic Speech Recognition, 13–21. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-5779-3_2.
Reynolds, Douglas. "Gaussian Mixture Models". In Encyclopedia of Biometrics, 659–63. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-73003-5_196.
Reynolds, Douglas. "Gaussian Mixture Models". In Encyclopedia of Biometrics, 827–32. Boston, MA: Springer US, 2015. http://dx.doi.org/10.1007/978-1-4899-7488-4_196.
Liu, Honghai, Zhaojie Ju, Xiaofei Ji, Chee Seng Chan, and Mehdi Khoury. "Fuzzy Gaussian Mixture Models". In Human Motion Sensing and Recognition, 95–121. Berlin, Heidelberg: Springer Berlin Heidelberg, 2017. http://dx.doi.org/10.1007/978-3-662-53692-6_5.
Lee, Hyoung-joo, and Sungzoon Cho. "Combining Gaussian Mixture Models". In Lecture Notes in Computer Science, 666–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28651-6_98.
Scrucca, Luca, Chris Fraley, T. Brendan Murphy, and Adrian E. Raftery. "Visualizing Gaussian Mixture Models". In Model-Based Clustering, Classification, and Density Estimation Using mclust in R, 153–88. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003277965-6.
Aladjem, Mayer. "Projection Pursuit Fitting Gaussian Mixture Models". In Lecture Notes in Computer Science, 396–404. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-70659-3_41.
Blömer, Johannes, and Kathrin Bujna. "Adaptive Seeding for Gaussian Mixture Models". In Advances in Knowledge Discovery and Data Mining, 296–308. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-31750-2_24.
Zeng, Jia, and Zhi-Qiang Liu. "Type-2 Fuzzy Gaussian Mixture Models". In Type-2 Fuzzy Graphical Models for Pattern Recognition, 45–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44690-4_4.
Ponsa, Daniel, and Xavier Roca. "Unsupervised Parameterisation of Gaussian Mixture Models". In Lecture Notes in Computer Science, 388–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36079-4_34.
Pełny tekst źródłaStreszczenia konferencji na temat "Gaussian mixture models"
Maas, Ryan, Jeremy Hyrkas, Olivia Grace Telford, Magdalena Balazinska, Andrew Connolly, and Bill Howe. "Gaussian Mixture Models Use-Case". In the 3rd VLDB Workshop. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2803140.2803143.
Beaufays, F., M. Weintraub, and Yochai Konig. "Discriminative mixture weight estimation for large Gaussian mixture models". In 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258). IEEE, 1999. http://dx.doi.org/10.1109/icassp.1999.758131.
Levine, Stacey, Katie Heaps, Joshua Koslosky, and Glenn Sidle. "Image Fusion using Gaussian Mixture Models". In British Machine Vision Conference 2013. British Machine Vision Association, 2013. http://dx.doi.org/10.5244/c.27.89.
Keselman, Leonid, and Martial Hebert. "Direct Fitting of Gaussian Mixture Models". In 2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019. http://dx.doi.org/10.1109/crv.2019.00012.
Zeng, Jia, Lei Xie, and Zhi-Qiang Liu. "Gaussian Mixture Models with Uncertain Parameters". In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370617.
D'souza, Kevin, and K. T. V. Talele. "Voice conversion using Gaussian Mixture Models". In 2015 International Conference on Communication, Information & Computing Technology (ICCICT). IEEE, 2015. http://dx.doi.org/10.1109/iccict.2015.7045743.
Bouguila, Nizar. "Non-Gaussian mixture image models prediction". In 2008 15th IEEE International Conference on Image Processing. IEEE, 2008. http://dx.doi.org/10.1109/icip.2008.4712321.
Gupta, Hitesh Anand, and Vinay M. Varma. "Noise classification using Gaussian Mixture Models". In 2012 1st International Conference on Recent Advances in Information Technology (RAIT). IEEE, 2012. http://dx.doi.org/10.1109/rait.2012.6194530.
Zelinka, Petr. "Smooth interpolation of Gaussian mixture models". In 2009 19th International Conference Radioelektronika (RADIOELEKTRONIKA). IEEE, 2009. http://dx.doi.org/10.1109/radioelek.2009.5158781.
Pfaff, Patrick, Christian Plagemann, and Wolfram Burgard. "Gaussian mixture models for probabilistic localization". In 2008 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2008. http://dx.doi.org/10.1109/robot.2008.4543251.
Pełny tekst źródłaRaporty organizacyjne na temat "Gaussian mixture models"
Yu, Guoshen, and Guillermo Sapiro. Statistical Compressive Sensing of Gaussian Mixture Models. Fort Belvoir, VA: Defense Technical Information Center, October 2010. http://dx.doi.org/10.21236/ada540728.
Hogden, J., and J. C. Scovel. MALCOM X: Combining maximum likelihood continuity mapping with Gaussian mixture models. Office of Scientific and Technical Information (OSTI), November 1998. http://dx.doi.org/10.2172/677150.
Yu, Guoshen, Guillermo Sapiro, and Stephane Mallat. Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity. Fort Belvoir, VA: Defense Technical Information Center, June 2010. http://dx.doi.org/10.21236/ada540722.
Ramakrishnan, Aravind, Ashraf Alrajhi, Egemen Okte, Hasan Ozer, and Imad Al-Qadi. Truck-Platooning Impacts on Flexible Pavements: Experimental and Mechanistic Approaches. Illinois Center for Transportation, November 2021. http://dx.doi.org/10.36501/0197-9191/21-038.
De Leon, Phillip L., and Richard D. McClanahan. Efficient speaker verification using Gaussian mixture model component clustering. Office of Scientific and Technical Information (OSTI), April 2012. http://dx.doi.org/10.2172/1039402.