Journal articles on the topic "Automatic speech recognition – Statistical methods"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the 50 best journal articles for your research on the topic "Automatic speech recognition – Statistical methods".
Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.
Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.
Boyer, A., J. Di Martino, P. Divoux, J. P. Haton, J. F. Mari, and K. Smaili. "Statistical methods in multi-speaker automatic speech recognition". Applied Stochastic Models and Data Analysis 6, no. 3 (September 1990): 143–55. http://dx.doi.org/10.1002/asm.3150060302.
Kłosowski, Piotr. "A Rule-Based Grapheme-to-Phoneme Conversion System". Applied Sciences 12, no. 5 (March 7, 2022): 2758. http://dx.doi.org/10.3390/app12052758.
Toth, Laszlo, Ildiko Hoffmann, Gabor Gosztolya, Veronika Vincze, Greta Szatloczki, Zoltan Banreti, Magdolna Pakaski, and Janos Kalman. "A Speech Recognition-based Solution for the Automatic Detection of Mild Cognitive Impairment from Spontaneous Speech". Current Alzheimer Research 15, no. 2 (January 3, 2018): 130–38. http://dx.doi.org/10.2174/1567205014666171121114930.
Gellatly, Andrew W., and Thomas A. Dingus. "Speech Recognition and Automotive Applications: Using Speech to Perform in-Vehicle Tasks". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 17 (October 1998): 1247–51. http://dx.doi.org/10.1177/154193129804201715.
Seman, Noraini, and Ahmad Firdaus Norazam. "Hybrid methods of Brandt's generalised likelihood ratio and short-term energy for Malay word speech segmentation". Indonesian Journal of Electrical Engineering and Computer Science 16, no. 1 (October 1, 2019): 283. http://dx.doi.org/10.11591/ijeecs.v16.i1.pp283-291.
Cabral, Frederico Soares, Hidekazu Fukai, and Satoshi Tamura. "Feature Extraction Methods Proposed for Speech Recognition Are Effective on Road Condition Monitoring Using Smartphone Inertial Sensors". Sensors 19, no. 16 (August 9, 2019): 3481. http://dx.doi.org/10.3390/s19163481.
Hai, Yanfei. "Computer-aided teaching mode of oral English intelligent learning based on speech recognition and network assistance". Journal of Intelligent & Fuzzy Systems 39, no. 4 (October 21, 2020): 5749–60. http://dx.doi.org/10.3233/jifs-189052.
Markovnikov, Nikita, and Irina Kipyatkova. "Encoder-decoder models for recognition of Russian speech". Information and Control Systems, no. 4 (October 4, 2019): 45–53. http://dx.doi.org/10.31799/1684-8853-2019-4-45-53.
Afli, Haithem, Loïc Barrault, and Holger Schwenk. "Building and using multimodal comparable corpora for machine translation". Natural Language Engineering 22, no. 4 (June 15, 2016): 603–25. http://dx.doi.org/10.1017/s1351324916000152.
Kozlova, A. T. "Temporal Characteristics of Prosody in Imperative Utterances and the Phenomenon of Emphatic Length in the English Language". Bulletin of Kemerovo State University, no. 3 (October 27, 2018): 192–96. http://dx.doi.org/10.21603/2078-8975-2018-3-192-196.
Ling, Xufeng, Jie Yang, Jingxin Liang, Huaizhong Zhu, and Hui Sun. "A Deep-Learning Based Method for Analysis of Students' Attention in Offline Class". Electronics 11, no. 17 (August 25, 2022): 2663. http://dx.doi.org/10.3390/electronics11172663.
Asgari, Meysam, Robert Gale, Katherine Wild, Hiroko Dodge, and Jeffrey Kaye. "Automatic Assessment of Cognitive Tests for Differentiating Mild Cognitive Impairment: A Proof of Concept Study of the Digit Span Task". Current Alzheimer Research 17, no. 7 (November 16, 2020): 658–66. http://dx.doi.org/10.2174/1567205017666201008110854.
Woo, MinJae, Prabodh Mishra, Ju Lin, Snigdhaswin Kar, Nicholas Deas, Caleb Linduff, Sufeng Niu, et al. "Complete and Resilient Documentation for Operational Medical Environments Leveraging Mobile Hands-free Technology in a Systems Approach: Experimental Study". JMIR mHealth and uHealth 9, no. 10 (October 12, 2021): e32301. http://dx.doi.org/10.2196/32301.
Levinson, S. E. "Structural methods in automatic speech recognition". Proceedings of the IEEE 73, no. 11 (1985): 1625–50. http://dx.doi.org/10.1109/proc.1985.13344.
Sun, Don X., and Frederick Jelinek. "Statistical Methods for Speech Recognition". Journal of the American Statistical Association 94, no. 446 (June 1999): 650. http://dx.doi.org/10.2307/2670189.
Rigazio, Luca. "Discriminative clustering methods for automatic speech recognition". Journal of the Acoustical Society of America 114, no. 4 (2003): 1719. http://dx.doi.org/10.1121/1.1627548.
Russell, M. J., R. K. Moore, and M. J. Tomlinson. "Dynamic Programming and Statistical Modelling in Automatic Speech Recognition". Journal of the Operational Research Society 37, no. 1 (January 1986): 21. http://dx.doi.org/10.2307/2582543.
Russell, M. J., R. K. Moore, and M. J. Tomlinson. "Dynamic Programming and Statistical Modelling in Automatic Speech Recognition". Journal of the Operational Research Society 37, no. 1 (January 1986): 21–30. http://dx.doi.org/10.1057/jors.1986.4.
Bourlard, H., and N. Morgan. "Continuous speech recognition by connectionist statistical methods". IEEE Transactions on Neural Networks 4, no. 6 (1993): 893–909. http://dx.doi.org/10.1109/72.286885.
Bojanic, Milana, Vlado Delic, and Milan Secujski. "Relevance of the types and the statistical properties of features in the recognition of basic emotions in speech". Facta universitatis - series: Electronics and Energetics 27, no. 3 (2014): 425–33. http://dx.doi.org/10.2298/fuee1403425b.
Kundegorski, Mikolaj, Philip J. B. Jackson, and Bartosz Ziółko. "Two-Microphone Dereverberation for Automatic Speech Recognition of Polish". Archives of Acoustics 39, no. 3 (March 1, 2015): 411–20. http://dx.doi.org/10.2478/aoa-2014-0045.
Schultz, Benjamin G., Venkata S. Aditya Tarigoppula, Gustavo Noffs, Sandra Rojas, Anneke van der Walt, David B. Grayden, and Adam P. Vogel. "Automatic speech recognition in neurodegenerative disease". International Journal of Speech Technology 24, no. 3 (May 4, 2021): 771–79. http://dx.doi.org/10.1007/s10772-021-09836-w.
O'Shaughnessy, Douglas. "Invited paper: Automatic speech recognition: History, methods and challenges". Pattern Recognition 41, no. 10 (October 2008): 2965–79. http://dx.doi.org/10.1016/j.patcog.2008.05.008.
O'Shaughnessy, Douglas D., and T. Nagarajan Li. "Better model and decoding methods for automatic speech recognition". Journal of the Acoustical Society of America 119, no. 5 (May 2006): 3441–42. http://dx.doi.org/10.1121/1.4786938.
Debnath, Saswati, and Pinki Roy. "Audio-Visual Automatic Speech Recognition Using PZM, MFCC and Statistical Analysis". International Journal of Interactive Multimedia and Artificial Intelligence 7, no. 2 (2021): 121. http://dx.doi.org/10.9781/ijimai.2021.09.001.
Dashtaki, Parnyan Bahrami. "An Investigation into Methodology and Metrics Employed to Evaluate the (Speech-to-Speech) Way in Translation Systems". Modern Applied Science 11, no. 4 (February 8, 2017): 55. http://dx.doi.org/10.5539/mas.v11n4p55.
Stolcke, Andreas, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. "Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech". Computational Linguistics 26, no. 3 (September 2000): 339–73. http://dx.doi.org/10.1162/089120100561737.
Steeneken, Herman J. M., and Andrew Varga. "Assessment for automatic speech recognition: I. Comparison of assessment methods". Speech Communication 12, no. 3 (July 1993): 241–46. http://dx.doi.org/10.1016/0167-6393(93)90094-2.
Hadiwinoto, P. N., and D. P. Lestari. "Data augmentation on spontaneous Indonesian automatic speech recognition using statistical machine translation". IOP Conference Series: Materials Science and Engineering 803 (May 28, 2020): 012030. http://dx.doi.org/10.1088/1757-899x/803/1/012030.
Partila, Pavol, Miroslav Voznak, and Jaromir Tovarek. "Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System". Scientific World Journal 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/573068.
Singh, Satyanand. "High level speaker specific features modeling in automatic speaker recognition system". International Journal of Electrical and Computer Engineering (IJECE) 10, no. 2 (April 1, 2020): 1859. http://dx.doi.org/10.11591/ijece.v10i2.pp1859-1867.
Skowronski, Mark D., and John G. Harris. "Statistical automatic species identification of microchiroptera from echolocation calls: Lessons learned from human automatic speech recognition". Journal of the Acoustical Society of America 116, no. 4 (October 2004): 2639. http://dx.doi.org/10.1121/1.4808665.
Ding, Ing-Jr, and Yen-Ming Hsu. "An HMM-Like Dynamic Time Warping Scheme for Automatic Speech Recognition". Mathematical Problems in Engineering 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/898729.
Kawahara, Tatsuya. "Transcription System Using Automatic Speech Recognition for the Japanese Parliament (Diet)". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 2 (July 22, 2012): 2224–28. http://dx.doi.org/10.1609/aaai.v26i2.18962.
Jafari, Ayyoob, and Farshad Almasganj. "Using Nonlinear Modeling of Reconstructed Phase Space and Frequency Domain Analysis to Improve Automatic Speech Recognition Performance". International Journal of Bifurcation and Chaos 22, no. 03 (March 2012): 1250053. http://dx.doi.org/10.1142/s0218127412500538.
Rojathai, S., and M. Venkatesulu. "Investigation of ANFIS and FFBNN Recognition Methods Performance in Tamil Speech Word Recognition". International Journal of Software Innovation 2, no. 2 (April 2014): 43–53. http://dx.doi.org/10.4018/ijsi.2014040103.
Dua, Mohit, Rajesh Kumar Aggarwal, and Mantosh Biswas. "Optimizing Integrated Features for Hindi Automatic Speech Recognition System". Journal of Intelligent Systems 29, no. 1 (October 1, 2018): 959–76. http://dx.doi.org/10.1515/jisys-2018-0057.
Liu, Chang, Pengyuan Zhang, Ta Li, and Yonghong Yan. "Semantic Features Based N-Best Rescoring Methods for Automatic Speech Recognition". Applied Sciences 9, no. 23 (November 22, 2019): 5053. http://dx.doi.org/10.3390/app9235053.
Stern, Richard, and Nelson Morgan. "Hearing Is Believing: Biologically Inspired Methods for Robust Automatic Speech Recognition". IEEE Signal Processing Magazine 29, no. 6 (November 2012): 34–43. http://dx.doi.org/10.1109/msp.2012.2207989.
Deng, Li, and Don X. Sun. "A statistical approach to automatic speech recognition using the atomic speech units constructed from overlapping articulatory features". Journal of the Acoustical Society of America 95, no. 5 (May 1994): 2702–19. http://dx.doi.org/10.1121/1.409839.
Mamyrbayev, Orken, Keylan Alimhan, Dina Oralbekova, Akbayan Bekarystankyzy, and Bagashar Zhumazhanov. "Identifying the influence of transfer learning method in developing an end-to-end automatic speech recognition system with a low data level". Eastern-European Journal of Enterprise Technologies 1, no. 9(115) (February 28, 2022): 84–92. http://dx.doi.org/10.15587/1729-4061.2022.252801.
Raval, Deepang, Vyom Pathak, Muktan Patel, and Brijesh Bhatt. "Improving Deep Learning based Automatic Speech Recognition for Gujarati". ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 3 (May 31, 2022): 1–18. http://dx.doi.org/10.1145/3483446.
Matveev, Yuri, Anton Matveev, Olga Frolova, Elena Lyakso, and Nersisson Ruban. "Automatic Speech Emotion Recognition of Younger School Age Children". Mathematics 10, no. 14 (July 6, 2022): 2373. http://dx.doi.org/10.3390/math10142373.
Liao, Lyuchao, Francis Afedzie Kwofie, Zhifeng Chen, Guangjie Han, Yongqiang Wang, Yuyuan Lin, and Dongmei Hu. "A Bidirectional Context Embedding Transformer for Automatic Speech Recognition". Information 13, no. 2 (January 29, 2022): 69. http://dx.doi.org/10.3390/info13020069.
Garberg, Roger B. "Automatic Speech Recognition Applications: A Study of Methods for Defining Command Vocabularies". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 39, no. 3 (October 1995): 203–7. http://dx.doi.org/10.1177/154193129503900307.
Aggarwal, Rajesh Kumar, and Mayank Dave. "Acoustic modeling problem for automatic speech recognition system: conventional methods (Part I)". International Journal of Speech Technology 14, no. 4 (September 23, 2011): 297–308. http://dx.doi.org/10.1007/s10772-011-9108-2.
Haider, Fasih, Pierre Albert, and Saturnino Luz. "User Identity Protection in Automatic Emotion Recognition through Disguised Speech". AI 2, no. 4 (November 25, 2021): 636–49. http://dx.doi.org/10.3390/ai2040038.
Pipiras, Laurynas, Rytis Maskeliūnas, and Robertas Damaševičius. "Lithuanian Speech Recognition Using Purely Phonetic Deep Learning". Computers 8, no. 4 (October 18, 2019): 76. http://dx.doi.org/10.3390/computers8040076.
Proksch, Sven-Oliver, Christopher Wratil, and Jens Wäckerle. "Testing the Validity of Automatic Speech Recognition for Political Text Analysis". Political Analysis 27, no. 3 (February 19, 2019): 339–59. http://dx.doi.org/10.1017/pan.2018.62.
Şchiopu, Daniela. "Using Statistical Methods in a Speech Recognition System for Romanian Language". IFAC Proceedings Volumes 46, no. 28 (2013): 99–103. http://dx.doi.org/10.3182/20130925-3-cz-3023.00078.