Ready-made bibliography on the topic "Speech"

Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Speech".

Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these are available in the source's metadata.

Journal articles on the topic "Speech"

1

Mohamad Nasir, A. B., N. R. M. Nasir, and F. H. M. Salleh. "SPEESH: speech-based mobile application for dysarthric speech recognition". Journal of Physics: Conference Series 1860, no. 1 (March 1, 2021): 012003. http://dx.doi.org/10.1088/1742-6596/1860/1/012003.

2

Rogers, David, and Geoffrey Hill. "Speech! Speech!" World Literature Today 76, no. 1 (2002): 152. http://dx.doi.org/10.2307/40157092.

3

Mohammed Hashim, Suhair Safwat. "Speech Acts in Political Speeches". Journal of Modern Education Review 5, no. 7 (July 20, 2015): 699–706. http://dx.doi.org/10.15341/jmer(2155-7993)/07.05.2015/008.

4

Laird, Andrew. "Speech in Speech". Classical Review 49, no. 2 (October 1999): 417–18. http://dx.doi.org/10.1093/cr/49.2.417.

5

Furui, S., T. Kikuchi, Y. Shinnaka, and C. Hori. "Speech-to-Text and Speech-to-Speech Summarization of Spontaneous Speech". IEEE Transactions on Speech and Audio Processing 12, no. 4 (July 2004): 401–8. http://dx.doi.org/10.1109/tsa.2004.828699.

6

Park, Seongjin. "Interpretation of speech rhythm: Speech error, speech rhythm, and speech proficiency". Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A343. http://dx.doi.org/10.1121/10.0019094.

Abstract:
It is well known that second language learners are affected by their first language when producing their L2. For speech rhythm, it has been suggested that L2 speakers are influenced by L1 speech rhythm (e.g., Korean learners of English produce English without reducing the duration of unstressed vowels), and that the effect is greater for beginner- or intermediate-level learners. This study, however, suggests that the direction of the effect is not always what researchers expect, and shows how easily speech rhythm is influenced by speech errors. The results show the relationship between the type of speech error and speech rhythm metrics, and how that affects the perceived proficiency of L2 speakers as well as L1 speakers. Future studies will examine how to infer the type of speech error from speech rhythm metrics.

7

Janai, Siddhanna, Shreekanth T., Chandan M., and Ajish K. Abraham. "Speech-to-Speech Conversion". International Journal of Ambient Computing and Intelligence 12, no. 1 (January 2021): 184–206. http://dx.doi.org/10.4018/ijaci.2021010108.

Abstract:
A novel approach to building a speech-to-speech conversion (STSC) system for individuals with dysarthria, a speech impairment, is described. The STSC system takes impaired speech with its inherent disturbances as input and produces synthesized output speech with good pronunciation and noise-free utterances. The system involves two stages, namely automatic speech recognition (ASR) and automatic speech synthesis. ASR transforms speech into text, while automatic speech synthesis (or text-to-speech [TTS]) performs the reverse task. At present, the recognition system is developed for a small vocabulary of 50 words, and an accuracy of 94% is achieved for normal speakers and 88% for speakers with dysarthria. The output speech of the TTS system achieved a MOS value of 4.5 out of 5, obtained by averaging the responses of 20 listeners. This method of STSC would serve as an augmentative and alternative communication aid for speakers with dysarthria.

8

Widari, Kadek, and Ni Luh Yaniasti. "Japanese Directive Speech". International Journal of Multidisciplinary Sciences 1, no. 2 (June 1, 2023): 147–58. http://dx.doi.org/10.37329/ijms.v1i2.2285.

Abstract:
Language, in its role as a means of communication, creates relationships between speakers and their interlocutors. The use of language in communication is essential, because community life is made possible by speech. Speech is used to convey ideas and intentions, directly or indirectly. A speech act serves to express what the speaker means to the speech partner. Directive speech is the type of speech act used by a speaker to get the speech partner to do something. In conveying a directive, the speaker should consider the factors affecting the speech. The purpose of this research was to identify directive speech acts and the factors that affect their level of politeness. The research used a qualitative method with descriptive analysis, drawing on the speech act theory of Yule and the politeness theory of Mizutani. The analysis found that the forms of directive speech acts in the comic Ore wo Suki Nano wa Omae dake kayo are: 1) commands, marked by tamae and nasai; 2) requests, marked by te kure, naide kure, te kudasai, tte, te, and te hoshii; 3) invitations, marked by mashou; 4) permission, marked by te mo ii; and 5) suggestions, marked by houga ii. Directive speech is influenced by several factors, namely: 1) familiarity; 2) age; 3) social relationship; 4) gender; and 5) situation.

9

Oder, Alp Bugra. "Speech Acts Revisited: Examining Illocutionary Speech Acts in Speeches of Mustafa Kemal Ataturk". Proceedings of The International Conference on Research in Humanities and Social Sciences 1, no. 1 (December 22, 2023): 24–35. http://dx.doi.org/10.33422/icrhs.v1i1.130.

Abstract:
Pragmatics is an interdisciplinary subfield of applied linguistics that investigates meaning in context. One of its research areas, speech acts, provides important insights into how the meaning behind utterances is perceived and what effect it may have on the hearer. The theories and classifications proposed by Austin (1962) and Searle (1979) are particularly useful for understanding hidden meaning and its effect on the audience. Political discourse is directly connected with speech acts, and there is a body of research that focuses on the classification of illocutionary acts embedded within politicians' speeches. In this regard, the present research aimed to analyze the illocutionary speech acts in two speeches by Mustafa Kemal Ataturk: the speech at the 10th anniversary of the Turkish Republic, and the Address to Turkish Youth, which was part of his great speech delivered to deputies and representatives of the Republican Party on 15th-20th October 1927, by employing qualitative content analysis on English translations of the speeches. Following meticulous analysis, the present qualitative study concluded that Ataturk used more speech acts in his speech at the 10th anniversary of the Turkish Republic than in his Address to Turkish Youth. Speech acts in the anniversary speech primarily featured expressive, representative, commissive, and directive speech acts, while the Address to Turkish Youth featured representative, commissive, directive, and expressive speech acts, respectively. In total, the most used speech acts were representatives, followed by expressives, commissives, and directives. No declaration speech act was observed in either speech.

10

Chen, Jing, Xihong H. Wu, Xuefei F. Zou, Zhiping P. Zhang, Lijuan J. Xu, Mengyuan Y. Wang, Liang Li, and Huisheng S. Chi. "Effect of speech rate on speech‐on‐speech masking". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3713. http://dx.doi.org/10.1121/1.2935152.


Doctoral dissertations on the topic "Speech"

1

Sun, Felix (Felix W. ). "Speech Representation Models for Speech Synthesis and Multimodal Speech Recognition". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106378.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-63).
The field of speech recognition has seen steady advances over the last two decades, leading to the accurate, real-time recognition systems available on mobile phones today. In this thesis, I apply speech modeling techniques developed for recognition to two other speech problems: speech synthesis and multimodal speech recognition with images. In both problems, there is a need to learn a relationship between speech sounds and another source of information. For speech synthesis, I show that using a neural network acoustic model results in a synthesizer that is more tolerant of noisy training data than previous work. For multimodal recognition, I show how information from images can be effectively integrated into the recognition search framework, resulting in improved accuracy when image data is available.
by Felix Sun.
M. Eng.
2

Alcaraz, Meseguer Noelia. "Speech Analysis for Automatic Speech Recognition". Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9092.

Abstract:
The classical front end analysis in speech recognition is a spectral analysis which parametrizes the speech signal into feature vectors; the most popular set of them is the Mel Frequency Cepstral Coefficients (MFCC). They are based on a standard power spectrum estimate which is first subjected to a log-based transform of the frequency axis (mel- frequency scale), and then decorrelated by using a modified discrete cosine transform. Following a focused introduction on speech production, perception and analysis, this paper gives a study of the implementation of a speech generative model; whereby the speech is synthesized and recovered back from its MFCC representations. The work has been developed into two steps: first, the computation of the MFCC vectors from the source speech files by using HTK Software; and second, the implementation of the generative model in itself, which, actually, represents the conversion chain from HTK-generated MFCC vectors to speech reconstruction. In order to know the goodness of the speech coding into feature vectors and to evaluate the generative model, the spectral distance between the original speech signal and the one produced from the MFCC vectors has been computed. For that, spectral models based on Linear Prediction Coding (LPC) analysis have been used. During the implementation of the generative model some results have been obtained in terms of the reconstruction of the spectral representation and the quality of the synthesized speech.

3

Kleinschmidt, Tristan Friedrich. "Robust speech recognition using speech enhancement". Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/31895/1/Tristan_Kleinschmidt_Thesis.pdf.

Abstract:
Automatic Speech Recognition (ASR) has matured into a technology which is becoming more common in our everyday lives, and is emerging as a necessity to minimise driver distraction when operating in-car systems such as navigation and infotainment. In "noise-free" environments, word recognition performance of these systems has been shown to approach 100%; however, this performance degrades rapidly as the level of background noise is increased. Speech enhancement is a popular method for making ASR systems more robust. Single-channel spectral subtraction was originally designed to improve human speech intelligibility, and many attempts have been made to optimise this algorithm in terms of signal-based metrics such as maximised Signal-to-Noise Ratio (SNR) or minimised speech distortion. Such metrics are used to assess enhancement performance for intelligibility, not speech recognition, therefore making them sub-optimal for ASR applications. This research investigates two methods for closely coupling subtractive-type enhancement algorithms with ASR: (a) a computationally-efficient Mel-filterbank noise subtraction technique based on likelihood-maximisation (LIMA), and (b) introducing phase spectrum information to enable spectral subtraction in the complex frequency domain. Likelihood-maximisation uses gradient-descent to optimise parameters of the enhancement algorithm to best fit the acoustic speech model given a word sequence known a priori. Whilst this technique is shown to improve ASR word accuracy performance, it is also identified to be particularly sensitive to non-noise mismatches between the training and testing data. Phase information has long been ignored in spectral subtraction as it is deemed to have little effect on human intelligibility. In this work it is shown that phase information is important in obtaining highly accurate estimates of the clean speech magnitudes which are typically used in ASR feature extraction. Phase Estimation via Delay Projection is proposed based on the stationarity of sinusoidal signals, and demonstrates the potential to produce improvements in ASR word accuracy over a wide range of SNRs. Throughout the dissertation, consideration is given to practical implementation in vehicular environments, which resulted in two novel contributions: a LIMA framework which takes advantage of the grounding procedure common to speech dialogue systems, and a resource-saving formulation of frequency-domain spectral subtraction for realisation in field-programmable gate array hardware. The techniques proposed in this dissertation were evaluated using the Australian English In-Car Speech Corpus, which was collected as part of this work. This database is the first of its kind within Australia and captures real in-car speech of 50 native Australian speakers in seven driving conditions common to Australian environments.
4

Blank, Sarah Catrin. "Speech comprehension, speech production and recovery of propositional speech following aphasic stroke". Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407772.

5

Price, Moneca C. "Interactions between speech coders and disordered speech". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ28640.pdf.

6

Chong, Fong Loong. "Objective speech quality measurement for Chinese speech". Thesis, University of Canterbury. Computer Science and Software Engineering, 2005. http://hdl.handle.net/10092/9607.

Abstract:
In the search for the optimisation of transmission speed and storage, speech information is often coded, or transmitted with a reduced bandwidth. As a result, quality and/or intelligibility are sometimes degraded. Speech quality is normally defined as the degree of goodness in the perception of speech, while speech intelligibility is how well or clearly one can understand what is being said. In order to assess the level of acceptability of degraded speech, various subjective methods have been developed to test codecs or sound processing systems. Although good results have been demonstrated with these, they are time consuming and expensive due to the necessary involvement of teams of professional or naive subjects [56]. To reduce cost, computerised objective systems were created with the hope of replacing human subjects [90][43]. While reasonable standards have been reported by several of these systems, they have not yet reached the accuracy of well-constructed subjective tests [92][84]. Therefore, their evaluation and improvement are constantly being researched for further breakthroughs. To date, objective speech quality measurement systems (OSQMs) have been developed mostly in Europe or the United States, and effectiveness has been tested only for English and several European and Asian languages, but not Chinese (Mandarin) [38][70][32].
7

Stedmon, Alexander Winstan. "Putting speech in, taking speech out : human factors in the use of speech interfaces". Thesis, University of Nottingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420342.

8

Miyajima, C., D. Negi, Y. Ninomiya, M. Sano, K. Mori, K. Itou, K. Takeda, and Y. Suenaga. "Audio-Visual Speech Database for Bimodal Speech Recognition". INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2005. http://hdl.handle.net/2237/10460.

9

Tang, Lihong. "Nonsensical speech : speech acts in postsocialist Chinese culture /". Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/6662.

10

Itakura, Fumitada, Tetsuya Shinde, Kiyoshi Tatara, Taisuke Ito, Ikuya Yokoo, Shigeki Matsubara, Kazuya Takeda, and Nobuo Kawaguchi. "CIAIR speech corpus for real world speech recognition". The oriental chapter of COCOSDA (The International Committee for the Co-ordination and Standardization of Speech Databases and Assessment Techniques), 2002. http://hdl.handle.net/2237/15462.


Books on the topic "Speech"

1

Hill, Geoffrey. Speech! Speech! London: Penguin Books, 2001.

2

Kidawara, Yutaka, Eiichiro Sumita, and Hisashi Kawai, eds. Speech-to-Speech Translation. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0595-9.

3

Kitano, Hiroaki. Speech-to-Speech Translation. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2732-9.

4

Ayers, Joe. Speech sampler: Speeches and analyses. 2nd ed. Ruston, Wash.: Communication Ventures, 1999.

5

Affairs, Canada Dept. of External. Speech. S.l.: s.n., 1988.

6

Checkland, Michael. Speech. London: BBC Corporate Press Office, 1987.

7

Hardcastle, William J. Speech Production and Speech Modelling. Dordrecht: Springer Netherlands, 1990.

8

NATO Advanced Study Institute on Speech Production and Speech Modelling (1st: 1989: Bonas, France). Speech production and speech modelling. Dordrecht, Netherlands: Kluwer Academic Publishers, 1990.

9

Spicer, Robert N. Free Speech and False Speech. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-69820-5.

10

Hardcastle, William J., and Alain Marchal, eds. Speech Production and Speech Modelling. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2037-8.


Book chapters on the topic "Speech"

1

Danon-Boileau, Laurent. "Associative Speech, Compulsive Speech". In Psychoanalysts in Session, 7–11. The New Library of Psychoanalysis. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429196751-1a.

2

Hartmann, William M. "Speech". In Principles of Musical Acoustics, 227–36. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-6786-1_22.

3

Turkstra, Lyn S. "Speech". In Encyclopedia of Clinical Neuropsychology, 3243. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-57111-9_923.

4

Turkstra, Lyn S. "Speech". In Encyclopedia of Clinical Neuropsychology, 1. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56782-2_923-3.

5

Weik, Martin H. "speech". In Computer Science and Communications Dictionary, 1635. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_17909.

6

Bean Ellawadi, Allison. "Speech". In Encyclopedia of Autism Spectrum Disorders, 1. New York, NY: Springer New York, 2017. http://dx.doi.org/10.1007/978-1-4614-6435-8_1699-3.

7

Bean, Allison. "Speech". In Encyclopedia of Autism Spectrum Disorders, 2953. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-1698-3_1699.

8

Howard-Jones, Paul. "Speech". In Evolution of the Learning Brain, 101–18. Abingdon, Oxon; New York, NY: Routledge, 2018. http://dx.doi.org/10.4324/9781315150857-6.

9

Lucey, Michael. "Speech". In The Proustian Mind, 191–207. London: Routledge, 2022. http://dx.doi.org/10.4324/9780429341472-16.

10

Tait, James. "Speech". In Entering Architectural Practice, 317–31. Abingdon, Oxon; New York: Routledge, 2020. http://dx.doi.org/10.4324/9780429346569-22.


Conference abstracts on the topic "Speech"

1

Alonso-Mora, Javier. "Keynote Speeches *Ranked by speech time Keynote Speech 1". In 2020 3rd International Conference on Unmanned Systems (ICUS). IEEE, 2020. http://dx.doi.org/10.1109/icus50048.2020.9274922.

2

Newell, Alan F., John L. Arnott, and R. Dye. "A full speed speech simulation of speech recognition machines". In European Conference on Speech Technology. ISCA, 1987. http://dx.doi.org/10.21437/ecst.1987-207.

3

Louw, Johannes A., Daniel R. van Niekerk, and Georg I. Schlünz. "Introducing the Speect speech synthesis platform". In The Blizzard Challenge 2010. ISCA, 2010. http://dx.doi.org/10.21437/blizzard.2010-4.

4

Zhu, Jinwei, Huan Chen, Xing Wen, Zhenlin Huang, and Liuqi Zhao. "An Adaptive Speech Speed Algorithm for Improving Continuous Speech Recognition". In ICMLCA 2023: 2023 4th International Conference on Machine Learning and Computer Application. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3650215.3650322.

5

Huckvale, Mark. "Speech synthesis, speech simulation and speech science". In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-388.

6

Mineeva, M. I., and D. A. Korneev. "Low-Speed Speech Compression Systems". In 2023 Dynamics of Systems, Mechanisms and Machines (Dynamics). IEEE, 2023. http://dx.doi.org/10.1109/dynamics60586.2023.10349495.

7

Lin, Weigan. "Cassinian Waveguides [Keynote Speeches: Speech 1]". In 2007 5th International Conference on Communications, Circuits and Systems. IEEE, 2007. http://dx.doi.org/10.1109/icccas.2007.4348120.

8

Izzad, M., Nursuriati Jamil, and Zainab Abu Bakar. "Speech/non-speech detection in Malay language spontaneous speech". In 2013 International Conference on Computing, Management and Telecommunications (ComManTel). IEEE, 2013. http://dx.doi.org/10.1109/commantel.2013.6482394.

9

Krishna, Gautam, Co Tran, Jianguo Yu, and Ahmed H. Tewfik. "Speech Recognition with No Speech or with Noisy Speech". In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683453.

10

Clark, Leigh, Benjamin R. Cowan, Abi Roper, Stephen Lindsay, and Owen Sheers. "Speech diversity and speech interfaces". In CUI '20: 2nd Conference on Conversational User Interfaces. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3405755.3406139.


Reports on the topic "Speech"

1

Weinstein, C. J. Speech-to-Speech Translation: Technology and Applications Study. Fort Belvoir, VA: Defense Technical Information Center, May 2002. http://dx.doi.org/10.21236/ada401684.

2

Samuel, Arthur G. Levels of Processing of Speech and Non-Speech. Fort Belvoir, VA: Defense Technical Information Center, May 1991. http://dx.doi.org/10.21236/ada237796.

3

Gordon, Jane. Use of synthetic speech in tests of speech discrimination. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5327.

4

Ore, Brian M. Speech Recognition, Articulatory Feature Detection, and Speech Synthesis in Multiple Languages. Fort Belvoir, VA: Defense Technical Information Center, November 2009. http://dx.doi.org/10.21236/ada519140.

5

Artayev, S. N., L. A. Lidzhiyeva, and Zh. A. Mukabenova. Listen to native speech! OFERNIO, February 2019. http://dx.doi.org/10.12731/ofernio.2019.24050.

6

Sabrin, Howard. UNIX Speech Processing Development. Fort Belvoir, VA: Defense Technical Information Center, October 1997. http://dx.doi.org/10.21236/ada332983.

7

Hogden, J. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding. Office of Scientific and Technical Information (OSTI), December 1996. http://dx.doi.org/10.2172/432946.

8

Shao, Yang, Soundararajan Srinivasan, Zhaozhang Jin, and DeLiang Wang. A Computational Auditory Scene Analysis System for Speech Segregation and Robust Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 2007. http://dx.doi.org/10.21236/ad1001212.

9

Josephson, John, and James Russell. Separation of Speech from Background. Fort Belvoir, VA: Defense Technical Information Center, December 2004. http://dx.doi.org/10.21236/ada428696.

10

Maybury, Mark T. Auditory Models for Speech Analysis. Fort Belvoir, VA: Defense Technical Information Center, January 1988. http://dx.doi.org/10.21236/ada197322.
