Journal articles on the topic "Perceptual features for speech recognition"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Check out the top 50 scholarly journal articles on the topic "Perceptual features for speech recognition".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, when those details are available in the work's metadata.
Browse journal articles across many disciplines and compile accurate bibliographies.
Li, Guan Yu, Hong Zhi Yu, Yong Hong Li, and Ning Ma. "Features Extraction for Lhasa Tibetan Speech Recognition". Applied Mechanics and Materials 571-572 (June 2014): 205–8. http://dx.doi.org/10.4028/www.scientific.net/amm.571-572.205.
Haque, Serajul, Roberto Togneri, and Anthony Zaknich. "Perceptual features for automatic speech recognition in noisy environments". Speech Communication 51, no. 1 (January 2009): 58–75. http://dx.doi.org/10.1016/j.specom.2008.06.002.
Trabelsi, Imen, and Med Salim Bouhlel. "Comparison of Several Acoustic Modeling Techniques for Speech Emotion Recognition". International Journal of Synthetic Emotions 7, no. 1 (January 2016): 58–68. http://dx.doi.org/10.4018/ijse.2016010105.
Dua, Mohit, Rajesh Kumar Aggarwal, and Mantosh Biswas. "Optimizing Integrated Features for Hindi Automatic Speech Recognition System". Journal of Intelligent Systems 29, no. 1 (October 1, 2018): 959–76. http://dx.doi.org/10.1515/jisys-2018-0057.
Al Mahmud, Nahyan, and Shahfida Amjad Munni. "Qualitative Analysis of PLP in LSTM for Bangla Speech Recognition". International Journal of Multimedia & Its Applications 12, no. 5 (October 30, 2020): 1–8. http://dx.doi.org/10.5121/ijma.2020.12501.
Kamińska, Dorota. "Emotional Speech Recognition Based on the Committee of Classifiers". Entropy 21, no. 10 (September 21, 2019): 920. http://dx.doi.org/10.3390/e21100920.
Dmitrieva, E., V. Gelman, K. Zaitseva, and A. Orlov. "Psychophysiological features of perceptual learning in the process of speech emotional prosody recognition". International Journal of Psychophysiology 85, no. 3 (September 2012): 375. http://dx.doi.org/10.1016/j.ijpsycho.2012.07.034.
Seyedin, Sanaz, Seyed Mohammad Ahadi, and Saeed Gazor. "New Features Using Robust MVDR Spectrum of Filtered Autocorrelation Sequence for Robust Speech Recognition". Scientific World Journal 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/634160.
Kaur, Gurpreet, Mohit Srivastava, and Amod Kumar. "Genetic Algorithm for Combined Speaker and Speech Recognition using Deep Neural Networks". Journal of Telecommunications and Information Technology 2 (June 29, 2018): 23–31. http://dx.doi.org/10.26636/jtit.2018.119617.
Trabelsi, Imen, and Med Salim Bouhlel. "Feature Selection for GUMI Kernel-Based SVM in Speech Emotion Recognition". International Journal of Synthetic Emotions 6, no. 2 (July 2015): 57–68. http://dx.doi.org/10.4018/ijse.2015070104.
Lalitha, S., and Deepa Gupta. "An Encapsulation of Vital Non-Linear Frequency Features for Various Speech Applications". Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 303–7. http://dx.doi.org/10.1166/jctn.2020.8666.
Bu, Linkai, and T. D. Church. "Perceptual speech processing and phonetic feature mapping for robust vowel recognition". IEEE Transactions on Speech and Audio Processing 8, no. 2 (March 2000): 105–14. http://dx.doi.org/10.1109/89.824695.
Helali, W., Z. Hajaiej, and A. Cherif. "Real Time Speech Recognition based on PWP Thresholding and MFCC using SVM". Engineering, Technology & Applied Science Research 10, no. 5 (October 26, 2020): 6204–8. http://dx.doi.org/10.48084/etasr.3759.
Burgos, Pepi, Roeland van Hout, and Brigitte Planken. "Matching Acoustical Properties and Native Perceptual Assessments of L2 Speech". Open Linguistics 4, no. 1 (January 1, 2018): 199–226. http://dx.doi.org/10.1515/opli-2018-0011.
Smith, Kimberly G., and Daniel Fogerty. "Integration of Partial Information Within and Across Modalities: Contributions to Spoken and Written Sentence Recognition". Journal of Speech, Language, and Hearing Research 58, no. 6 (December 2015): 1805–17. http://dx.doi.org/10.1044/2015_jslhr-h-14-0272.
Cai, Shang, Yeming Xiao, Jielin Pan, Qingwei Zhao, and Yonghong Yan. "Noise Robust Feature Scheme for Automatic Speech Recognition Based on Auditory Perceptual Mechanisms". IEICE Transactions on Information and Systems E95.D, no. 6 (2012): 1610–18. http://dx.doi.org/10.1587/transinf.e95.d.1610.
Nashipudimath, Madhu M., Pooja Pillai, Anupama Subramanian, Vani Nair, and Sarah Khalife. "Voice Feature Extraction for Gender and Emotion Recognition". ITM Web of Conferences 40 (2021): 03008. http://dx.doi.org/10.1051/itmconf/20214003008.
Nair, Vani, Pooja Pillai, Anupama Subramanian, Sarah Khalife, and Madhu Nashipudimath. "Voice Feature Extraction for Gender and Emotion Recognition". International Journal on Recent and Innovation Trends in Computing and Communication 9, no. 5 (May 31, 2021): 17–22. http://dx.doi.org/10.17762/ijritcc.v9i5.5463.
Davies-Venn, Evelyn, and Pamela Souza. "The Role of Spectral Resolution, Working Memory, and Audibility in Explaining Variance in Susceptibility to Temporal Envelope Distortion". Journal of the American Academy of Audiology 25, no. 6 (June 2014): 592–604. http://dx.doi.org/10.3766/jaaa.25.6.9.
Cabral, Frederico Soares, Hidekazu Fukai, and Satoshi Tamura. "Feature Extraction Methods Proposed for Speech Recognition Are Effective on Road Condition Monitoring Using Smartphone Inertial Sensors". Sensors 19, no. 16 (August 9, 2019): 3481. http://dx.doi.org/10.3390/s19163481.
Massaro, Dominic W. "Multiple Book Review of Speech perception by ear and eye: A paradigm for psychological inquiry". Behavioral and Brain Sciences 12, no. 4 (December 1989): 741–55. http://dx.doi.org/10.1017/s0140525x00025619.
Dua, Mohit, Rajesh Kumar Aggarwal, and Mantosh Biswas. "Discriminative Training Using Noise Robust Integrated Features and Refined HMM Modeling". Journal of Intelligent Systems 29, no. 1 (February 20, 2018): 327–44. http://dx.doi.org/10.1515/jisys-2017-0618.
Schädler, Marc R., David Hülsmeier, Anna Warzybok, and Birger Kollmeier. "Individual Aided Speech-Recognition Performance and Predictions of Benefit for Listeners With Impaired Hearing Employing FADE". Trends in Hearing 24 (January 2020): 233121652093892. http://dx.doi.org/10.1177/2331216520938929.
Myronova, T. Yu., and O. V. Kovalevska. "Methods of development orientational skills in a foreign text". Bulletin of Luhansk Taras Shevchenko National University, no. 4 (335) (2020): 195–202. http://dx.doi.org/10.12958/2227-2844-2020-4(335)-195-202.
Bach, Jörg-Hendrik, Jörn Anemüller, and Birger Kollmeier. "Robust speech detection in real acoustic backgrounds with perceptually motivated features". Speech Communication 53, no. 5 (May 2011): 690–706. http://dx.doi.org/10.1016/j.specom.2010.07.003.
Wolfe, Jace, Mila Duke, Erin Schafer, Christine Jones, and Lori Rakita. "Evaluation of Adaptive Noise Management Technologies for School-Age Children with Hearing Loss". Journal of the American Academy of Audiology 28, no. 5 (May 2017): 415–35. http://dx.doi.org/10.3766/jaaa.16015.
Abid Noor, Ali O. "Robust speaker verification in band-localized noise conditions". Indonesian Journal of Electrical Engineering and Computer Science 13, no. 2 (February 1, 2019): 499. http://dx.doi.org/10.11591/ijeecs.v13.i2.pp499-506.
Kwak, Yuna, Hosung Nam, Hyun-Woong Kim, and Chai-Youn Kim. "Cross-Modal Correspondence Between Speech Sound and Visual Shape Influencing Perceptual Representation of Shape: the Role of Articulation and Pitch". Multisensory Research 33, no. 6 (June 17, 2020): 569–98. http://dx.doi.org/10.1163/22134808-20191330.
Frey, Brendan J., and Geoffrey E. Hinton. "Variational Learning in Nonlinear Gaussian Belief Networks". Neural Computation 11, no. 1 (January 1, 1999): 193–213. http://dx.doi.org/10.1162/089976699300016872.
Gfeller, Kate, Dingfeng Jiang, Jacob J. Oleson, Virginia Driscoll, and John F. Knutson. "Temporal Stability of Music Perception and Appraisal Scores of Adult Cochlear Implant Recipients". Journal of the American Academy of Audiology 21, no. 1 (January 2010): 28–34. http://dx.doi.org/10.3766/jaaa.21.1.4.
Sheldon, Claire A., George L. Malcolm, and Jason J. S. Barton. "Alexia With and Without Agraphia: An Assessment of Two Classical Syndromes". Canadian Journal of Neurological Sciences / Journal Canadien des Sciences Neurologiques 35, no. 5 (November 2008): 616–24. http://dx.doi.org/10.1017/s0317167100009410.
Nusbaum, Howard C. "Perceptual expectations, attention, and speech recognition". Journal of the Acoustical Society of America 127, no. 3 (March 2010): 1890. http://dx.doi.org/10.1121/1.3384714.
Aliūkaitė, Daiva, and Danguolė Mikulėnienė. "The narrative of an ordinary member of language community: WHERE and WHY is dialecticity of a locality created". Lietuvių kalba, no. 13 (December 20, 2019): 1–22. http://dx.doi.org/10.15388/lk.2019.22481.
Zhao, Kai, and Dan Wang. "Research on Speech Recognition Method in Multi Layer Perceptual Network Environment". International Journal of Circuits, Systems and Signal Processing 15 (August 24, 2021): 996–1004. http://dx.doi.org/10.46300/9106.2021.15.107.
Axelrod, Scott E. "Speech recognition utilizing multitude of speech features". Journal of the Acoustical Society of America 128, no. 4 (2010): 2259. http://dx.doi.org/10.1121/1.3500788.
Allen, Jont B., and Marion Regnier. "SPEECH AND METHOD FOR IDENTIFYING PERCEPTUAL FEATURES". Journal of the Acoustical Society of America 132, no. 4 (2012): 2779. http://dx.doi.org/10.1121/1.4757834.
Mattys, Sven L., and Shekeila D. Palmer. "Divided attention disrupts perceptual encoding during speech recognition". Journal of the Acoustical Society of America 137, no. 3 (March 2015): 1464–72. http://dx.doi.org/10.1121/1.4913507.
Eide, Ellen M. "Speech recognition using discriminant features". Journal of the Acoustical Society of America 126, no. 3 (2009): 1646. http://dx.doi.org/10.1121/1.3230471.
Huang, Chang-Han, and Frank Torsten Bernd Seide. "Tone features for speech recognition". Journal of the Acoustical Society of America 117, no. 5 (2005): 2698. http://dx.doi.org/10.1121/1.1932393.
Bahl, Lalit R. "Speech recognition using dynamic features". Journal of the Acoustical Society of America 102, no. 6 (1997): 3252. http://dx.doi.org/10.1121/1.420242.
Shafiro, Valeriy, Daniel Fogerty, Kimberly Smith, and Stanley Sheft. "Perceptual Organization of Interrupted Speech and Text". Journal of Speech, Language, and Hearing Research 61, no. 10 (October 26, 2018): 2578–88. http://dx.doi.org/10.1044/2018_jslhr-h-17-0477.
Jones, Harrison N., Kelly D. Crisp, Maragatha Kuchibhatla, Leslie Mahler, Thomas Risoli, Carlee W. Jones, and Priya Kishnani. "Auditory-Perceptual Speech Features in Children With Down Syndrome". American Journal on Intellectual and Developmental Disabilities 124, no. 4 (July 1, 2019): 324–38. http://dx.doi.org/10.1352/1944-7558-124.4.324.
Richter, Caitlin, Naomi H. Feldman, Harini Salgado, and Aren Jansen. "Evaluating Low-Level Speech Features Against Human Perceptual Data". Transactions of the Association for Computational Linguistics 5 (December 2017): 425–40. http://dx.doi.org/10.1162/tacl_a_00071.
Small, Larry H. "Listeners' Perceptual Strategies in Word Recognition: Shadowing Misarticulated Speech". Perceptual and Motor Skills 69, no. 3_suppl (December 1989): 1211–16. http://dx.doi.org/10.2466/pms.1989.69.3f.1211.
Small, Larry H. "Listeners' Perceptual Strategies in Word Recognition: Shadowing Misarticulated Speech". Perceptual and Motor Skills 69, no. 3-2 (December 1989): 1211–16. http://dx.doi.org/10.1177/00315125890693-226.
Nusbaum, Howard. "Perceptual learning and expectations: Cognitive mechanisms in speech recognition". Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2604. http://dx.doi.org/10.1121/1.4783910.
Thomas-Stonell, Nancy, Ava-Lee Kotler, Herbert Leeper, and Philip Doyle. "Computerized speech recognition: influence of intelligibility and perceptual consistency on recognition accuracy". Augmentative and Alternative Communication 14, no. 1 (January 1998): 51–56. http://dx.doi.org/10.1080/07434619812331278196.
Najnin, Shamima, and Bonny Banerjee. "Speech recognition using cepstral articulatory features". Speech Communication 107 (February 2019): 26–37. http://dx.doi.org/10.1016/j.specom.2019.01.002.
Potamianos, Alexandros. "Novel features for robust speech recognition". Journal of the Acoustical Society of America 112, no. 5 (November 2002): 2278. http://dx.doi.org/10.1121/1.4779131.
Lee, Youngjik, and Kyu-Woong Hwang. "Selecting Good Speech Features for Recognition". ETRI Journal 18, no. 1 (April 1, 1996): 29–40. http://dx.doi.org/10.4218/etrij.96.0196.0013.