Ready-made bibliography on the topic "Speech"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Speech".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a ".pdf" file and read its abstract online, whenever the source metadata provides them.
Journal articles on the topic "Speech"
Mohamad Nasir, A. B., N. R. M. Nasir, and F. H. M. Salleh. "SPEESH: speech-based mobile application for dysarthric speech recognition". Journal of Physics: Conference Series 1860, no. 1 (March 1, 2021): 012003. http://dx.doi.org/10.1088/1742-6596/1860/1/012003.
Rogers, David, and Geoffrey Hill. "Speech! Speech!" World Literature Today 76, no. 1 (2002): 152. http://dx.doi.org/10.2307/40157092.
Mohammed Hashim, Suhair Safwat. "Speech Acts in Political Speeches". Journal of Modern Education Review 5, no. 7 (July 20, 2015): 699–706. http://dx.doi.org/10.15341/jmer(2155-7993)/07.05.2015/008.
Laird, Andrew. "Speech in Speech". Classical Review 49, no. 2 (October 1999): 417–18. http://dx.doi.org/10.1093/cr/49.2.417.
Furui, S., T. Kikuchi, Y. Shinnaka, and C. Hori. "Speech-to-Text and Speech-to-Speech Summarization of Spontaneous Speech". IEEE Transactions on Speech and Audio Processing 12, no. 4 (July 2004): 401–8. http://dx.doi.org/10.1109/tsa.2004.828699.
Park, Seongjin. "Interpretation of speech rhythm: Speech error, speech rhythm, and speech proficiency". Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A343. http://dx.doi.org/10.1121/10.0019094.
Janai, Siddhanna, Shreekanth T., Chandan M., and Ajish K. Abraham. "Speech-to-Speech Conversion". International Journal of Ambient Computing and Intelligence 12, no. 1 (January 2021): 184–206. http://dx.doi.org/10.4018/ijaci.2021010108.
Widari, Kadek, and Ni Luh Yaniasti. "Japanese Directive Speech". International Journal of Multidisciplinary Sciences 1, no. 2 (June 1, 2023): 147–58. http://dx.doi.org/10.37329/ijms.v1i2.2285.
Oder, Alp Bugra. "Speech Acts Revisited: Examining Illocutionary Speech Acts in Speeches of Mustafa Kemal Ataturk". Proceedings of The International Conference on Research in Humanities and Social Sciences 1, no. 1 (December 22, 2023): 24–35. http://dx.doi.org/10.33422/icrhs.v1i1.130.
Chen, Jing, Xihong H. Wu, Xuefei F. Zou, Zhiping P. Zhang, Lijuan J. Xu, Mengyuan Y. Wang, Liang Li, and Huisheng S. Chi. "Effect of speech rate on speech‐on‐speech masking". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3713. http://dx.doi.org/10.1121/1.2935152.
Doctoral dissertations on the topic "Speech"
Sun, Felix (Felix W.). "Speech Representation Models for Speech Synthesis and Multimodal Speech Recognition". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106378.
Pełny tekst źródłaThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-63).
The field of speech recognition has seen steady advances over the last two decades, leading to the accurate, real-time recognition systems available on mobile phones today. In this thesis, I apply speech modeling techniques developed for recognition to two other speech problems: speech synthesis and multimodal speech recognition with images. In both problems, there is a need to learn a relationship between speech sounds and another source of information. For speech synthesis, I show that using a neural network acoustic model results in a synthesizer that is more tolerant of noisy training data than previous work. For multimodal recognition, I show how information from images can be effectively integrated into the recognition search framework, resulting in improved accuracy when image data is available.
by Felix Sun.
M. Eng.
Alcaraz, Meseguer Noelia. "Speech Analysis for Automatic Speech Recognition". Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9092.
Pełny tekst źródłaThe classical front end analysis in speech recognition is a spectral analysis which parametrizes the speech signal into feature vectors; the most popular set of them is the Mel Frequency Cepstral Coefficients (MFCC). They are based on a standard power spectrum estimate which is first subjected to a log-based transform of the frequency axis (mel- frequency scale), and then decorrelated by using a modified discrete cosine transform. Following a focused introduction on speech production, perception and analysis, this paper gives a study of the implementation of a speech generative model; whereby the speech is synthesized and recovered back from its MFCC representations. The work has been developed into two steps: first, the computation of the MFCC vectors from the source speech files by using HTK Software; and second, the implementation of the generative model in itself, which, actually, represents the conversion chain from HTK-generated MFCC vectors to speech reconstruction. In order to know the goodness of the speech coding into feature vectors and to evaluate the generative model, the spectral distance between the original speech signal and the one produced from the MFCC vectors has been computed. For that, spectral models based on Linear Prediction Coding (LPC) analysis have been used. During the implementation of the generative model some results have been obtained in terms of the reconstruction of the spectral representation and the quality of the synthesized speech.
Kleinschmidt, Tristan Friedrich. "Robust speech recognition using speech enhancement". Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/31895/1/Tristan_Kleinschmidt_Thesis.pdf.
Blank, Sarah Catrin. "Speech comprehension, speech production and recovery of propositional speech following aphasic stroke". Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407772.
Price, Moneca C. "Interactions between speech coders and disordered speech". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ28640.pdf.
Chong, Fong Loong. "Objective speech quality measurement for Chinese speech". Thesis, University of Canterbury. Computer Science and Software Engineering, 2005. http://hdl.handle.net/10092/9607.
Stedmon, Alexander Winstan. "Putting speech in, taking speech out: human factors in the use of speech interfaces". Thesis, University of Nottingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420342.
Miyajima, C., D. Negi, Y. Ninomiya, M. Sano, K. Mori, K. Itou, K. Takeda, and Y. Suenaga. "Audio-Visual Speech Database for Bimodal Speech Recognition". INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2005. http://hdl.handle.net/2237/10460.
Tang, Lihong. "Nonsensical speech: speech acts in postsocialist Chinese culture". Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/6662.
Itakura, Fumitada, Tetsuya Shinde, Kiyoshi Tatara, Taisuke Ito, Ikuya Yokoo, Shigeki Matsubara, Kazuya Takeda, and Nobuo Kawaguchi. "CIAIR speech corpus for real world speech recognition". The oriental chapter of COCOSDA (The International Committee for the Co-ordination and Standardization of Speech Databases and Assessment Techniques), 2002. http://hdl.handle.net/2237/15462.
Books on the topic "Speech"
Hill, Geoffrey. Speech! Speech! London: Penguin Books, 2001.
Kidawara, Yutaka, Eiichiro Sumita, and Hisashi Kawai, eds. Speech-to-Speech Translation. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0595-9.
Kitano, Hiroaki. Speech-to-Speech Translation. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2732-9.
Ayers, Joe. Speech sampler: Speeches and analyses. 2nd ed. Ruston, Wash.: Communication Ventures, 1999.
Canada. Dept. of External Affairs. Speech. S.l.: s.n., 1988.
Checkland, Michael. Speech. London: BBC Corporate Press Office, 1987.
Hardcastle, William J. Speech Production and Speech Modelling. Dordrecht: Springer Netherlands, 1990.
NATO Advanced Study Institute on Speech Production and Speech Modelling (1st: 1989: Bonas, France). Speech production and speech modelling. Dordrecht, Netherlands: Kluwer Academic Publishers, 1990.
Spicer, Robert N. Free Speech and False Speech. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-69820-5.
Hardcastle, William J., and Alain Marchal, eds. Speech Production and Speech Modelling. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2037-8.
Book chapters on the topic "Speech"
Danon-Boileau, Laurent. "Associative Speech, Compulsive Speech". In Psychoanalysts in Session, 7–11. The New Library of Psychoanalysis. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429196751-1a.
Hartmann, William M. "Speech". In Principles of Musical Acoustics, 227–36. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-6786-1_22.
Turkstra, Lyn S. "Speech". In Encyclopedia of Clinical Neuropsychology, 3243. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-57111-9_923.
Turkstra, Lyn S. "Speech". In Encyclopedia of Clinical Neuropsychology, 1. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56782-2_923-3.
Weik, Martin H. "speech". In Computer Science and Communications Dictionary, 1635. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_17909.
Bean Ellawadi, Allison. "Speech". In Encyclopedia of Autism Spectrum Disorders, 1. New York, NY: Springer New York, 2017. http://dx.doi.org/10.1007/978-1-4614-6435-8_1699-3.
Bean, Allison. "Speech". In Encyclopedia of Autism Spectrum Disorders, 2953. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-1698-3_1699.
Howard-Jones, Paul. "Speech". In Evolution of the Learning Brain, 101–18. Abingdon, Oxon; New York, NY: Routledge, 2018. http://dx.doi.org/10.4324/9781315150857-6.
Lucey, Michael. "Speech". In The Proustian Mind, 191–207. London: Routledge, 2022. http://dx.doi.org/10.4324/9780429341472-16.
Tait, James. "Speech". In Entering Architectural Practice, 317–31. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429346569-22.
Conference papers on the topic "Speech"
Alonso-Mora, Javier. "Keynote Speech 1". In 2020 3rd International Conference on Unmanned Systems (ICUS). IEEE, 2020. http://dx.doi.org/10.1109/icus50048.2020.9274922.
Newell, Alan F., John L. Arnott, and R. Dye. "A full speed speech simulation of speech recognition machines". In European Conference on Speech Technology. ISCA, 1987. http://dx.doi.org/10.21437/ecst.1987-207.
Louw, Johannes A., Daniel R. van Niekerk, and Georg I. Schlünz. "Introducing the Speect speech synthesis platform". In The Blizzard Challenge 2010. ISCA, 2010. http://dx.doi.org/10.21437/blizzard.2010-4.
Zhu, Jinwei, Huan Chen, Xing Wen, Zhenlin Huang, and Liuqi Zhao. "An Adaptive Speech Speed Algorithm for Improving Continuous Speech Recognition". In ICMLCA 2023: 2023 4th International Conference on Machine Learning and Computer Application. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3650215.3650322.
Huckvale, Mark. "Speech synthesis, speech simulation and speech science". In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-388.
Mineeva, M. I., and D. A. Korneev. "Low-Speed Speech Compression Systems". In 2023 Dynamics of Systems, Mechanisms and Machines (Dynamics). IEEE, 2023. http://dx.doi.org/10.1109/dynamics60586.2023.10349495.
Lin, Weigan. "Cassinian Waveguides [Keynote Speeches: Speech 1]". In 2007 5th International Conference on Communications, Circuits and Systems. IEEE, 2007. http://dx.doi.org/10.1109/icccas.2007.4348120.
Izzad, M., Nursuriati Jamil, and Zainab Abu Bakar. "Speech/non-speech detection in Malay language spontaneous speech". In 2013 International Conference on Computing, Management and Telecommunications (ComManTel). IEEE, 2013. http://dx.doi.org/10.1109/commantel.2013.6482394.
Krishna, Gautam, Co Tran, Jianguo Yu, and Ahmed H. Tewfik. "Speech Recognition with No Speech or with Noisy Speech". In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683453.
Clark, Leigh, Benjamin R. Cowan, Abi Roper, Stephen Lindsay, and Owen Sheers. "Speech diversity and speech interfaces". In CUI '20: 2nd Conference on Conversational User Interfaces. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3405755.3406139.
Reports on the topic "Speech"
Weinstein, C. J. Speech-to-Speech Translation: Technology and Applications Study. Fort Belvoir, VA: Defense Technical Information Center, May 2002. http://dx.doi.org/10.21236/ada401684.
Samuel, Arthur G. Levels of Processing of Speech and Non-Speech. Fort Belvoir, VA: Defense Technical Information Center, May 1991. http://dx.doi.org/10.21236/ada237796.
Gordon, Jane. Use of synthetic speech in tests of speech discrimination. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5327.
Ore, Brian M. Speech Recognition, Articulatory Feature Detection, and Speech Synthesis in Multiple Languages. Fort Belvoir, VA: Defense Technical Information Center, November 2009. http://dx.doi.org/10.21236/ada519140.
Artayev, S. N., L. A. Lidzhiyeva, and Zh. A. Mukabenova. Listen to native speech! OFERNIO, February 2019. http://dx.doi.org/10.12731/ofernio.2019.24050.
Sabrin, Howard. UNIX Speech Processing Development. Fort Belvoir, VA: Defense Technical Information Center, October 1997. http://dx.doi.org/10.21236/ada332983.
Hogden, J. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding. Office of Scientific and Technical Information (OSTI), December 1996. http://dx.doi.org/10.2172/432946.
Shao, Yang, Soundararajan Srinivasan, Zhaozhang Jin, and DeLiang Wang. A Computational Auditory Scene Analysis System for Speech Segregation and Robust Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 2007. http://dx.doi.org/10.21236/ad1001212.
Josephson, John, and James Russell. Separation of Speech from Background. Fort Belvoir, VA: Defense Technical Information Center, December 2004. http://dx.doi.org/10.21236/ada428696.
Maybury, Mark T. Auditory Models for Speech Analysis. Fort Belvoir, VA: Defense Technical Information Center, January 1988. http://dx.doi.org/10.21236/ada197322.