Table of contents
A selection of scholarly literature on the topic "Acoustic analysis of speech"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Acoustic analysis of speech".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the work's metadata.
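To make concrete what the automatic formatting described above amounts to, here is a minimal, purely illustrative sketch that renders a single book record in two of the citation styles mentioned. The record fields and the format_apa/format_mla helpers are hypothetical simplifications invented for this example; they are not the site's implementation, and real APA and MLA rules cover many more cases.

```python
# Illustrative only: one simplified source record rendered in two citation styles.
# Field names and helper functions are hypothetical, not the site's actual code.
record = {
    "authors": ["Kent, Raymond D.", "Read, Charles"],
    "title": "The acoustic analysis of speech",
    "place": "San Diego",
    "publisher": "Singular Publishing Group",
    "year": 1992,
}

def format_apa(rec):
    """Very rough APA-style book reference (simplified assumption)."""
    authors = " & ".join(rec["authors"])
    return f'{authors} ({rec["year"]}). {rec["title"]}. {rec["place"]}: {rec["publisher"]}.'

def format_mla(rec):
    """Very rough MLA-style book reference (simplified assumption)."""
    first, *rest = rec["authors"]
    authors = first if not rest else f'{first}, and {" and ".join(rest)}'
    return f'{authors}. {rec["title"]}. {rec["publisher"]}, {rec["year"]}.'

print(format_apa(record))
print(format_mla(record))
```

The point of the sketch is only that one structured record can be serialized into any of the supported styles; the entries listed below are what such records look like once rendered.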
Journal articles on the topic "Acoustic analysis of speech"
Masih, Dawa A. A., Nawzad K. Jalal, Manar N. A. Mohammed, and Sulaiman A. Mustafa. "The Assessment of Acoustical Characteristics for Recent Mosque Buildings in Erbil City of Iraq". ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 9, no. 1 (01.03.2021): 51–66. http://dx.doi.org/10.14500/aro.10784.
Duran, Sebastian, Martyn Chambers, and Ioannis Kanellopoulos. "An Archaeoacoustics Analysis of Cistercian Architecture: The Case of the Beaulieu Abbey". Acoustics 3, no. 2 (26.03.2021): 252–69. http://dx.doi.org/10.3390/acoustics3020018.
Askenfelt, Anders G., and Britta Hammarberg. "Speech Waveform Perturbation Analysis". Journal of Speech, Language, and Hearing Research 29, no. 1 (March 1986): 50–64. http://dx.doi.org/10.1044/jshr.2901.50.
Chenausky, Karen, Joel MacAuslan, and Richard Goldhor. "Acoustic Analysis of PD Speech". Parkinson's Disease 2011 (2011): 1–13. http://dx.doi.org/10.4061/2011/435232.
M, Manjutha. "Acoustic Analysis of Formant Frequency Variation in Tamil Stuttered Speech". Journal of Advanced Research in Dynamical and Control Systems 12, SP7 (25.07.2020): 2934–44. http://dx.doi.org/10.5373/jardcs/v12sp7/20202438.
Weedon, B., E. Hellier, J. Edworthy, and K. Walters. "Perceived Urgency in Speech Warnings". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 22 (July 2000): 690–93. http://dx.doi.org/10.1177/154193120004402251.
Keller, Eric, Patrick Vigneux, and Martine Laframboise. "Acoustic analysis of neurologically impaired speech". International Journal of Language & Communication Disorders 26, no. 1 (January 1991): 75–94. http://dx.doi.org/10.3109/13682829109011993.
Thakore, Jogin, Viliam Rapcan, Shona Darcy, Sherlyn Yeap, Natasha Afzal, and Richard Reilly. "Acoustic and temporal analysis of speech". International Clinical Psychopharmacology 26 (September 2011): e131. http://dx.doi.org/10.1097/01.yic.0000405855.63819.e2.
Sondhi, Savita, Munna Khan, Ritu Vijay, Ashok K. Salhan, and Satish Chouhan. "Acoustic analysis of speech under stress". International Journal of Bioinformatics Research and Applications 11, no. 5 (2015): 417. http://dx.doi.org/10.1504/ijbra.2015.071942.
O'Shaughnessy, Douglas. "Acoustic Analysis for Automatic Speech Recognition". Proceedings of the IEEE 101, no. 5 (May 2013): 1038–53. http://dx.doi.org/10.1109/jproc.2013.2251592.
Der volle Inhalt der QuelleDissertationen zum Thema "Acoustic analysis of speech"
John, Jeeva. "Acoustic Analysis of Speech of Persons with Autistic Spectrum Disorders". Bowling Green State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1206329066.
Nulsen, Susan. "Combining acoustic analysis and phonotactic analysis to improve automatic speech recognition". University of Canberra. Information Sciences & Engineering, 1998. http://erl.canberra.edu.au./public/adt-AUC20060825.131042.
Brock, James L. "Acoustic classification using independent component analysis". Link to online version, 2006. https://ritdml.rit.edu/dspace/handle/1850/2067.
Singh-Miller, Natasha, 1981-. "Neighborhood analysis methods in acoustic modeling for automatic speech recognition". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62450.
Der volle Inhalt der QuelleCataloged from PDF version of thesis.
Includes bibliographical references (p. 121-134).
This thesis investigates the problem of using nearest-neighbor based non-parametric methods for performing multi-class class-conditional probability estimation. The methods developed are applied to the problem of acoustic modeling for speech recognition. Neighborhood components analysis (NCA) (Goldberger et al. [2005]) serves as the departure point for this study. NCA is a non-parametric method that can be seen as providing two things: (1) low-dimensional linear projections of the feature space that allow nearest-neighbor algorithms to perform well, and (2) nearest-neighbor based class-conditional probability estimates. First, NCA is used to perform dimensionality reduction on acoustic vectors, a commonly addressed problem in speech recognition. NCA is shown to perform competitively with another commonly employed dimensionality reduction technique in speech known as heteroscedastic linear discriminant analysis (HLDA) (Kumar [1997]). Second, a nearest neighbor-based model related to NCA is created to provide a class-conditional estimate that is sensitive to the possible underlying relationship between the acoustic-phonetic labels. An embedding of the labels is learned that can be used to estimate the similarity or confusability between labels. This embedding is related to the concept of error-correcting output codes (ECOC) and therefore the proposed model is referred to as NCA-ECOC. The estimates provided by this method along with nearest neighbor information are shown to provide improvements in speech recognition performance (2.5% relative reduction in word error rate). Third, a model for calculating class-conditional probability estimates is proposed that generalizes GMM, NCA, and kernel density approaches. This model, called locally-adaptive neighborhood components analysis, LA-NCA, learns different low-dimensional projections for different parts of the space. The model exploits the fact that in different parts of the space different directions may be important for discrimination between the classes. This model is computationally intensive and prone to over-fitting, so methods for sub-selecting neighbors used for providing the class-conditional estimates are explored. The estimates provided by LA-NCA are shown to give significant gains in speech recognition performance (7-8% relative reduction in word error rate) as well as in phonetic classification.
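The abstract above turns on one idea: NCA learns a low-dimensional linear projection of the acoustic feature space under which a nearest-neighbor classifier performs well. The sketch below illustrates only that baseline step (not the NCA-ECOC or LA-NCA extensions, and not the thesis's own code), using scikit-learn's NeighborhoodComponentsAnalysis on synthetic stand-ins for acoustic vectors; the feature dimension, number of classes, projection size, and k are arbitrary assumptions.

```python
# Sketch of the basic NCA + k-NN pipeline described in the abstract above.
# Synthetic data stands in for acoustic vectors; all sizes are arbitrary choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline

# Stand-in for 39-dimensional MFCC-like vectors with phone-class labels.
X, y = make_classification(n_samples=2000, n_features=39, n_informative=20,
                           n_classes=5, n_clusters_per_class=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn a projection (39 -> 8 here) that makes k-NN discriminate well,
# then classify in the projected space.
model = Pipeline([
    ("nca", NeighborhoodComponentsAnalysis(n_components=8, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=15)),
])
model.fit(X_train, y_train)
print("k-NN accuracy in the NCA-projected space:", model.score(X_test, y_test))
```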
Williams, A. Lynn. "Phonologic and Acoustic Analyses of Final Consonant Omission". Digital Commons @ East Tennessee State University, 1998. https://dc.etsu.edu/etsu-works/2008.
Lee, Matthew E. "Acoustic Models for the Analysis and Synthesis of the Singing Voice". Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6859.
Ng, So-sum. "Acoustic analysis of contour tones produced by Cantonese dysarthric speakers". The University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record/B36208024.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, May 4, 2001."
Srinivasan, Nandini. "Acoustic Analysis of English Vowels by Young Spanish-English Bilingual Language Learners". Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10815722.
Several studies across various languages have shown that monolingual listeners perceive significant differences between the speech of monolinguals and bilinguals. However, these differences may not always affect the phoneme category as identified by the listener or the speaker; differences may often be found between tokens corresponding to unique phonological categories and, as such, be more easily detectable through acoustic analysis. We hypothesized that unshared English vowels produced by young Spanish-English bilinguals would have measurably different formant values and duration than the same vowels produced by young English monolinguals because of Spanish influence on English phonology. We did not find significant differences in formant values between the two groups, but we found that Spanish-English bilinguals produced certain vowels with longer duration than English monolinguals. Our findings add to the ever-growing body of literature on bilingual language acquisition and the perception of accentedness.
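The comparison described in the abstract above rests on two routine measurements per vowel token: its duration and its formant frequencies (typically F1 and F2, often read at the vowel midpoint). Below is a minimal sketch of how such measurements are commonly taken with the praat-parselmouth library; the file name, vowel boundaries, and formant ceiling are placeholder assumptions, not values or tools reported by the study.

```python
# Minimal sketch of per-token vowel duration and midpoint formant measurement.
# File path, vowel boundaries, and the 5500 Hz formant ceiling are placeholders.
import parselmouth

sound = parselmouth.Sound("speaker01_bed.wav")   # hypothetical recording
vowel_start, vowel_end = 0.12, 0.27              # hypothetical vowel boundaries (seconds)

duration = vowel_end - vowel_start
midpoint = (vowel_start + vowel_end) / 2

# Burg formant tracking; 5500 Hz is a common ceiling for higher-pitched voices.
formants = sound.to_formant_burg(maximum_formant=5500.0)
f1 = formants.get_value_at_time(1, midpoint)
f2 = formants.get_value_at_time(2, midpoint)

print(f"duration = {duration * 1000:.0f} ms, F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```

Group comparisons of the resulting duration and formant tables are then a separate statistical step, independent of the acoustic extraction sketched here.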
Odlozinski, Lisa M. "An acoustic analysis of speech rate control procedures in Parkinson's disease". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0004/MQ30738.pdf.
Cao, Ying Alisa, 1979-. "Analysis of acoustic cues for identifying consonant /ð/ in continuous speech". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87279.
Der volle Inhalt der QuelleBücher zum Thema "Acoustic analysis of speech"
Kent, Raymond D. The acoustic analysis of speech. San Diego: Singular, 1996.
Read, Charles, 1940-, ed. The acoustic analysis of speech. 2nd ed. Australia: Singular/Thomson Learning, 2002.
Read, Charles, 1940-, ed. The acoustic analysis of speech. San Diego, Calif: Singular Pub. Group, 1992.
Kent, Raymond D. The acoustic analysis of speech. London: Whurr, 1992.
Patryn, Ryszard. Phonetic-acoustic analysis of Polish speech sounds. Warszawa: Wydawnictwa Uniwersytetu Warszawskiego, 1987.
Chuang, Ming-Fei. Interactive tools for sound signal analysis. Monterey, Calif: Naval Postgraduate School, 1997.
Harrington, Jonathan. Techniques in speech acoustics. Dordrecht: Kluwer Academic Publishers, 1999.
Cassidy, Steve, 1952-, ed. Techniques in speech acoustics. Dordrecht: Kluwer Academic Publishers, 1999.
Bolla, Kálmán. A phonetic conspectus of English: The articulatory and acoustic features of British English speech sounds. Budapest: Linguistics Institute of the Hungarian Academy of Sciences, 1989.
Schuller, Björn W. Intelligent Audio Analysis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Acoustic analysis of speech"
Verkhodanova, Vasilisa, Vladimir Shapranov, and Irina Kipyatkova. "Hesitations in Spontaneous Speech: Acoustic Analysis and Detection". In Speech and Computer, 398–406. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66429-3_39.
Fant, Gunnar, Anita Kruckenberg, and Johan Liljencrants. "Acoustic-phonetic Analysis of Prominence in Swedish". In Text, Speech and Language Technology, 55–86. Dordrecht: Springer Netherlands, 2000. http://dx.doi.org/10.1007/978-94-011-4317-2_3.
Howell, Peter, Mark Williams, and Louise Vause. "Acoustic Analysis of Repetitions in Stutterers’ Speech". In Speech Motor Dynamics in Stuttering, 371–80. Vienna: Springer Vienna, 1987. http://dx.doi.org/10.1007/978-3-7091-6969-8_29.
de Cheveigné, Alain. "The Cancellation Principle in Acoustic Scene Analysis". In Speech Separation by Humans and Machines, 245–59. Boston, MA: Springer US, 2005. http://dx.doi.org/10.1007/0-387-22794-6_16.
Li, Aijun. "Acoustic and Articulatory Analysis of Emotional Vowels". In Encoding and Decoding of Emotional Speech, 109–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47691-8_4.
Fant, Gunnar. "Acoustical Analysis of Speech". In Encyclopedia of Acoustics, 1589–98. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2007. http://dx.doi.org/10.1002/9780470172544.ch127.
Bauer, Dominik, Jim Kannampuzha, and Bernd J. Kröger. "Articulatory Speech Re-synthesis: Profiting from Natural Acoustic Speech Data". In Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, 344–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03320-9_32.
Drugman, Thomas, Myriam Rijckaert, George Lawson, and Marc Remacle. "Analysis and Quantification of Acoustic Artefacts in Tracheoesophageal Speech". In Advances in Nonlinear Speech Processing, 104–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38847-7_14.
Ludeña-Choez, Jimmy, and Ascensión Gallardo-Antolín. "NMF-Based Spectral Analysis for Acoustic Event Classification Tasks". In Advances in Nonlinear Speech Processing, 9–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38847-7_2.
Cui, Dandan, and Lianhong Cai. "Acoustic and Physiological Feature Analysis of Affective Speech". In Lecture Notes in Computer Science, 912–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-37275-2_114.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Acoustic analysis of speech"
Pucher, Michael, and Dietmar Schabus. "Visio-articulatory to acoustic conversion of speech". In FAA '15: Facial Analysis and Animation. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2813852.2813858.
Itoh, Taisuke, Kazuya Takeda, and Fumitada Itakura. "Acoustic analysis and recognition of whispered speech". In Proceedings of ICASSP '02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.5743736.
Itoh, Takeda, and Itakura. "Acoustic analysis and recognition of whispered speech". In IEEE International Conference on Acoustics Speech and Signal Processing ICASSP-02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.1005758.
Hakim, Faisal Abdul, Miranti Indar Mandasari, Joko Sarwono, Khairurrijal, Mikrajuddin Abdullah, Wahyu Srigutomo, Sparisoma Viridi, and Novitrian. "Acoustic Speech Analysis Of Wayang Golek Puppeteer". In THE 4TH ASIAN PHYSICS SYMPOSIUM—AN INTERNATIONAL SYMPOSIUM. AIP, 2010. http://dx.doi.org/10.1063/1.3537939.
Krishnamurthy, Nitish, and John H. L. Hansen. "Speech babble: Analysis and modeling for speech systems". In ICASSP 2008. IEEE International Conference on Acoustic, Speech and Signal Processes. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4518657.
Fan, Xing, and John H. L. Hansen. "Acoustic analysis for speaker identification of whispered speech". In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5495059.
Castellanos, G., G. Daza, L. Sanchez, O. Castrillon, and J. Suarez. "Acoustic Speech Analysis for Hypernasality Detection in Children". In Conference Proceedings. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2006. http://dx.doi.org/10.1109/iembs.2006.260572.
Castellanos, G., G. Daza, L. Sanchez, O. Castrillon, and J. Suarez. "Acoustic Speech Analysis for Hypernasality Detection in Children". In Conference Proceedings. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2006. http://dx.doi.org/10.1109/iembs.2006.4398702.
Berg, Yana A., Anastasia V. Nenko, and Daria V. Borovikova. "Analysis of Acoustic Parameters of the Speech Apparatus". In 2020 21st International Conference of Young Specialists on Micro/Nanotechnologies and Electron Devices (EDM). IEEE, 2020. http://dx.doi.org/10.1109/edm49804.2020.9153533.
Geethashree, A., and D. J. Ravi. "Acoustic and Spectral Analysis of Kannada Emotional Speech". In Third International Conference on Current Trends in Engineering Science and Technology ICCTEST-2017. Grenze Scientific Society, 2017. http://dx.doi.org/10.21647/icctest/2017/48934.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Acoustic analysis of speech"
Colosi, John A. An Analysis of Long-Range Acoustic Propagation Fluctuations and Upper Ocean Sound Speed Variability. Fort Belvoir, VA: Defense Technical Information Center, December 2005. http://dx.doi.org/10.21236/ada441242.
Colosi, John A. An Analysis of Long-Range Acoustic Propagation Fluctuations and Upper Ocean Sound Speed Variability. Fort Belvoir, VA: Defense Technical Information Center, September 2003. http://dx.doi.org/10.21236/ada629913.
Colosi, John A. An Analysis of Long-Range Acoustic Propagation Fluctuations and Upper Ocean Sound Speed Variability. Fort Belvoir, VA: Defense Technical Information Center, September 2001. http://dx.doi.org/10.21236/ada625607.
Colosi, John A., and Jinshan Xu. An Analysis of Upper Ocean Sound Speed Variability and its Effects on Long-Range Acoustic Fluctuations Observed for the North Pacific Acoustic Laboratory. Fort Belvoir, VA: Defense Technical Information Center, July 2006. http://dx.doi.org/10.21236/ada450109.
Colosi, John A. Analysis and Modeling of Ocean Acoustic Fluctuations and Moored Observations of Philippine Sea Sound-Speed Structure. Fort Belvoir, VA: Defense Technical Information Center, September 2009. http://dx.doi.org/10.21236/ada531640.
Colosi, John A. Analysis and Modeling of Ocean Acoustic Fluctuations and Moored Observations of Philippine Sea Sound-Speed Structure. Fort Belvoir, VA: Defense Technical Information Center, September 2011. http://dx.doi.org/10.21236/ada571573.
Colosi, John A. Analysis and Modeling of Ocean Acoustic Fluctuations and Moored Observations of Philippine Sea Sound-Speed Structure. Fort Belvoir, VA: Defense Technical Information Center, September 2012. http://dx.doi.org/10.21236/ada574824.
Ostendorf, Mari, and J. R. Rohlicek. Segment-Based Acoustic Models for Continuous Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, December 1992. http://dx.doi.org/10.21236/ada259780.
Brown, Peter F. The Acoustic-Modeling Problem in Automatic Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, December 1987. http://dx.doi.org/10.21236/ada188529.
Ostendorf, Mari, and J. R. Rohlicek. Segment-Based Acoustic Models for Continuous Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, February 1994. http://dx.doi.org/10.21236/ada276109.