Ready-made bibliography on the topic "Speech recognition"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles


Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Speech recognition".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the relevant details are included in the source's metadata.

Journal articles on the topic "Speech recognition"

1. Miyazaki, Toshiyuki, and Yoji Ishikawa. "Speech recognition device and speech recognition method". Journal of the Acoustical Society of America 126, no. 3 (2009): 1648. http://dx.doi.org/10.1121/1.3230481.

2. Shinotsuka, Hiroshi, and Noritoshi Hino. "Speech recognition method and speech recognition device". Journal of the Acoustical Society of America 111, no. 4 (2002): 1518. http://dx.doi.org/10.1121/1.1479008.

3. Mohamad Nasir, A. B., N. R. M. Nasir, and F. H. M. Salleh. "SPEESH: speech-based mobile application for dysarthric speech recognition". Journal of Physics: Conference Series 1860, no. 1 (1 March 2021): 012003. http://dx.doi.org/10.1088/1742-6596/1860/1/012003.

4. Downey, Simon N. "Speech Recognition". Journal of the Acoustical Society of America 130, no. 6 (2011): 4183. http://dx.doi.org/10.1121/1.3669379.

5. Lennig, Matthew. "Speech recognition". Journal of the Acoustical Society of America 91, no. 1 (January 1992): 546. http://dx.doi.org/10.1121/1.402661.

6. Alotaibi, Y. A., and M. M. Shahsavari. "Speech recognition". IEEE Potentials 17, no. 1 (1998): 23–28. http://dx.doi.org/10.1109/45.652853.

7. Feldman, Joel A. "Speech recognition". Journal of the Acoustical Society of America 86, no. 6 (December 1989): 2478. http://dx.doi.org/10.1121/1.398356.

8. Gadbois, Gregory J., and Stijn A. Van Even. "Speech recognition". Journal of the Acoustical Society of America 107, no. 5 (2000): 2325. http://dx.doi.org/10.1121/1.428607.

9. Cameron, Ian R., and Paul C. Millar. "Speech recognition". Journal of the Acoustical Society of America 95, no. 2 (February 1994): 1185. http://dx.doi.org/10.1121/1.408412.

10. Abe, Kenji. "SPEECH RECOGNITION SYSTEM AND METHOD FOR SPEECH RECOGNITION". Journal of the Acoustical Society of America 134, no. 1 (2013): 738. http://dx.doi.org/10.1121/1.4813052.

Doctoral dissertations and theses on the topic "Speech recognition"

1. Chuchilina, L. M., and I. E. Yeskov. "Speech recognition". Thesis, Видавництво СумДУ, 2008. http://essuir.sumdu.edu.ua/handle/123456789/15995.

2. Alcaraz Meseguer, Noelia. "Speech Analysis for Automatic Speech Recognition". Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9092.

Abstract: The classical front-end analysis in speech recognition is a spectral analysis which parametrizes the speech signal into feature vectors; the most popular set of them is the Mel Frequency Cepstral Coefficients (MFCC). They are based on a standard power spectrum estimate which is first subjected to a log-based transform of the frequency axis (mel-frequency scale), and then decorrelated using a modified discrete cosine transform. Following a focused introduction to speech production, perception and analysis, this thesis studies the implementation of a speech generative model, whereby the speech is synthesized and recovered back from its MFCC representation. The work was developed in two steps: first, the computation of the MFCC vectors from the source speech files using the HTK software; and second, the implementation of the generative model itself, which represents the conversion chain from HTK-generated MFCC vectors back to speech. In order to assess the quality of the speech coding into feature vectors and to evaluate the generative model, the spectral distance between the original speech signal and the one produced from the MFCC vectors was computed. For that, spectral models based on Linear Prediction Coding (LPC) analysis were used. During the implementation of the generative model, results were obtained in terms of the reconstruction of the spectral representation and the quality of the synthesized speech.

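As a quick illustration of the MFCC analysis and resynthesis chain described in the abstract above, the following is a minimal Python sketch. It assumes librosa and soundfile are installed and that a file named speech.wav exists; librosa's built-in MFCC inversion (mel reconstruction plus Griffin-Lim) stands in for the HTK-based generative model used in the thesis.

```python
# Minimal MFCC analysis/resynthesis sketch (illustrative parameters, not the thesis setup).
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=16000)   # hypothetical input file

# 13 cepstral coefficients per 25 ms frame with a 10 ms hop (a typical HTK-style setup)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=400, hop_length=160)

# Rough "generative model": MFCC -> mel spectrogram -> waveform via Griffin-Lim
y_hat = librosa.feature.inverse.mfcc_to_audio(mfcc, sr=sr, n_fft=400, hop_length=160)
sf.write("speech_from_mfcc.wav", y_hat, sr)
```
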
3. Kleinschmidt, Tristan Friedrich. "Robust speech recognition using speech enhancement". Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/31895/1/Tristan_Kleinschmidt_Thesis.pdf.

Abstract: Automatic Speech Recognition (ASR) has matured into a technology which is becoming more common in our everyday lives, and is emerging as a necessity to minimise driver distraction when operating in-car systems such as navigation and infotainment. In "noise-free" environments, word recognition performance of these systems has been shown to approach 100%; however, this performance degrades rapidly as the level of background noise increases. Speech enhancement is a popular method for making ASR systems more robust. Single-channel spectral subtraction was originally designed to improve human speech intelligibility, and many attempts have been made to optimise this algorithm in terms of signal-based metrics such as maximised Signal-to-Noise Ratio (SNR) or minimised speech distortion. Such metrics assess enhancement performance for intelligibility, not speech recognition, and are therefore sub-optimal for ASR applications. This research investigates two methods for closely coupling subtractive-type enhancement algorithms with ASR: (a) a computationally efficient Mel-filterbank noise subtraction technique based on likelihood maximisation (LIMA), and (b) introducing phase spectrum information to enable spectral subtraction in the complex frequency domain. Likelihood maximisation uses gradient descent to optimise the parameters of the enhancement algorithm to best fit the acoustic speech model given a word sequence known a priori. Whilst this technique is shown to improve ASR word accuracy, it is also found to be particularly sensitive to non-noise mismatches between the training and testing data. Phase information has long been ignored in spectral subtraction because it is deemed to have little effect on human intelligibility. This work shows that phase information is important for obtaining highly accurate estimates of the clean speech magnitudes typically used in ASR feature extraction. Phase Estimation via Delay Projection is proposed, based on the stationarity of sinusoidal signals, and demonstrates the potential to improve ASR word accuracy across a wide range of SNRs. Throughout the dissertation, consideration is given to practical implementation in vehicular environments, which resulted in two novel contributions: a LIMA framework which takes advantage of the grounding procedure common to speech dialogue systems, and a resource-saving formulation of frequency-domain spectral subtraction for realisation in field-programmable gate array hardware. The techniques proposed in this dissertation were evaluated using the Australian English In-Car Speech Corpus, which was collected as part of this work. This database is the first of its kind within Australia and captures real in-car speech of 50 native Australian speakers in seven driving conditions common to Australian environments.

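To make the subtractive enhancement idea above concrete, here is a minimal magnitude spectral-subtraction sketch in plain NumPy. The frame length, noise-estimation window, oversubtraction factor and spectral floor are illustrative assumptions; this is not the LIMA-optimised or phase-aware formulation proposed in the thesis.

```python
# Minimal magnitude spectral subtraction with noisy-phase resynthesis (illustrative sketch).
import numpy as np

def spectral_subtraction(x, frame=400, hop=160, noise_frames=10, alpha=2.0, floor=0.02):
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    # Windowed analysis frames and their short-time spectra
    frames = np.stack([x[i * hop:i * hop + frame] * win for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Noise magnitude estimated from the first few frames (assumed speech-free)
    noise = mag[:noise_frames].mean(axis=0)
    # Oversubtract and clamp to a spectral floor to limit musical noise
    clean = np.maximum(mag - alpha * noise, floor * noise)
    # Resynthesise with the noisy phase; windowed overlap-add (approximate reconstruction)
    out = np.zeros(len(x))
    frames_out = np.fft.irfft(clean * np.exp(1j * phase), n=frame, axis=1)
    for i in range(n_frames):
        out[i * hop:i * hop + frame] += frames_out[i] * win
    return out
```
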
4. Eriksson, Mattias. "Speech recognition availability". Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2651.

Abstract: This project investigates the importance of availability in the scope of dictation programs. Dictation by speech recognition has not reached the general public, and that may well be a result of poor availability in today's technical solutions. I constructed a persona, Johanna, who personifies the target user, and developed a solution that streams audio to a speech recognition server and returns the interpreted text. Evaluated against the persona, the solution appeared successful in theory. I then had test users try the solution in practice; half of them report that their usage has increased, and will continue to increase, thanks to the new level of availability.

5. Uebler, Ulla. Multilingual speech recognition. Berlin: Logos Verlag, 2000. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=009117880&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

6. Wang, Yonglian. "Speech Recognition under Stress". Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1968468151&sid=9&Fmt=2&clientId=1509&RQT=309&VName=PQD.

7. Lucas, Adrian Edward. "Acoustic level speech recognition". Thesis, University of Surrey, 1991. http://epubs.surrey.ac.uk/2819/.

Abstract: A number of techniques have been developed over the last forty years which attempt to solve the problem of recognizing human speech by machine. Although the general problem of unconstrained, speaker-independent connected speech recognition is still not solved, some of the methods have demonstrated varying degrees of success on a number of constrained speech recognition tasks. Human speech communication is considered to take place on a number of levels, from the acoustic signal through to higher linguistic and semantic levels. At the acoustic level, the recognition process can be divided into time alignment (the removal of global and local timing differences between the unknown input speech and the stored reference templates) and reference template matching. Little attention seems to have been given to the effective use of acoustic-level contextual information to improve the performance of these tasks. In this thesis, a new template matching scheme is developed which addresses this issue and successfully allows the utilization of acoustic-level context. The method, based on Bayesian decision theory, is a dynamic time warping approach which incorporates statistical dependencies in matching errors between frames along the entire length of the reference template. In addition, the method includes a speaker compensation technique operating simultaneously. Implementation is carried out using the highly efficient branch-and-bound algorithm. Speech model storage requirements are quite small as a result of an elegant feature of the recursive matching criterion. Furthermore, a novel method for inferring the special speech models is introduced. The new method is tested on data drawn from nearly 8000 utterances of the 26 letters of the British English alphabet spoken by 104 speakers, split almost equally between male and female speakers. Experiments show that the new approach is a powerful acoustic-level speech recognizer, achieving up to 34% better recognition performance when compared with a conventional method based on the dynamic programming algorithm.

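For readers unfamiliar with the template-matching baseline the thesis above builds on, here is a minimal dynamic time warping sketch in NumPy. It uses a plain Euclidean frame distance and nearest-template classification; the Bayesian, context-aware matching and speaker compensation described in the abstract are not reproduced here.

```python
# Minimal dynamic time warping (DTW) template matching sketch.
import numpy as np

def dtw_distance(ref, test):
    """ref, test: arrays of shape (n_frames, n_features)."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],            # insertion
                                 cost[i, j - 1],            # deletion
                                 cost[i - 1, j - 1])        # match
    return cost[n, m]

def recognise(test, templates):
    """templates: dict mapping word label -> reference feature array."""
    return min(templates, key=lambda label: dtw_distance(templates[label], test))
```
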
8. Žmolíková, Kateřina. "Far-Field Speech Recognition". Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255331.

Abstract: Speech recognition systems nowadays achieve fairly high accuracy. However, when speech is captured by a distant microphone and is therefore corrupted by considerable noise and reverberation, recognition accuracy degrades substantially. This problem can be mitigated by using microphone arrays. This thesis deals with techniques for combining the signals from multiple microphones so as to improve the quality of the resulting signal and, in turn, the recognition accuracy. It first summarizes the theory of speech recognition and presents the most widely used algorithms for microphone-array processing. It then demonstrates and analyses the results of two beamforming methods and of a dereverberation method for multichannel signals. Finally, an alternative beamforming approach based on neural networks is explored.

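As a pointer to what the beamforming stage mentioned above does, here is a minimal frequency-domain delay-and-sum beamformer sketch in NumPy for a uniform linear array. The microphone spacing, sampling rate and steering angle are illustrative assumptions; the thesis itself evaluates more advanced beamformers and neural alternatives.

```python
# Minimal delay-and-sum beamformer for a uniform linear microphone array (illustrative sketch).
import numpy as np

def delay_and_sum(signals, mic_spacing=0.05, angle_deg=0.0, fs=16000, c=343.0):
    """signals: array of shape (n_mics, n_samples), one row per microphone."""
    n_mics, n_samples = signals.shape
    # Far-field time delay of arrival for each microphone given the steering angle
    delays = np.arange(n_mics) * mic_spacing * np.sin(np.deg2rad(angle_deg)) / c
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples // 2 + 1, dtype=complex)
    for m in range(n_mics):
        # Align each channel with a frequency-domain phase shift, then average
        out += np.fft.rfft(signals[m]) * np.exp(2j * np.pi * freqs * delays[m])
    return np.fft.irfft(out / n_mics, n=n_samples)
```
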
9. Sun, Felix (Felix W.). "Speech Representation Models for Speech Synthesis and Multimodal Speech Recognition". Thesis (M.Eng.), Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. http://hdl.handle.net/1721.1/106378.

Abstract: The field of speech recognition has seen steady advances over the last two decades, leading to the accurate, real-time recognition systems available on mobile phones today. In this thesis, I apply speech modeling techniques developed for recognition to two other speech problems: speech synthesis and multimodal speech recognition with images. In both problems, there is a need to learn a relationship between speech sounds and another source of information. For speech synthesis, I show that using a neural network acoustic model results in a synthesizer that is more tolerant of noisy training data than previous work. For multimodal recognition, I show how information from images can be effectively integrated into the recognition search framework, resulting in improved accuracy when image data is available.

10. Miyajima, C., D. Negi, Y. Ninomiya, M. Sano, K. Mori, K. Itou, K. Takeda, and Y. Suenaga. "Audio-Visual Speech Database for Bimodal Speech Recognition". INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2005. http://hdl.handle.net/2237/10460.

Books on the topic "Speech recognition"

1. Yu, Dong, and Li Deng. Automatic Speech Recognition. London: Springer London, 2015. http://dx.doi.org/10.1007/978-1-4471-5779-3.

2. Lee, Kai-Fu. Automatic Speech Recognition. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4615-3650-5.

3. Bourlard, Hervé A., and Nelson Morgan. Connectionist Speech Recognition. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-3210-1.

4. Markowitz, Judith A. Using speech recognition. Upper Saddle River, N.J.: Prentice Hall PTR, 1996.

5. Woelfel, Matthias. Distant speech recognition. Chichester, West Sussex, U.K.: Wiley, 2009.

6. Buydos, John F. Speech recognition and processing. Washington, D.C.: Science Reference Section, Science and Technology Division, Library of Congress, 1994.

7. Huang, X. D. Hidden Markov models for speech recognition. Edinburgh: Edinburgh University Press, 1990.

8. Holmes, J. N. Speech synthesis and recognition. 2nd ed. New York: Taylor & Francis, 2001.

9. Laface, Pietro, and Renato De Mori, eds. Speech Recognition and Understanding. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-76626-8.

10. Neustein, Amy, ed. Advances in Speech Recognition. Boston, MA: Springer US, 2010. http://dx.doi.org/10.1007/978-1-4419-5951-5.

Book chapters on the topic "Speech recognition"

1. Fink, Gernot A. "Speech Recognition". In Markov Models for Pattern Recognition, 229–36. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6308-4_13.

2. Morris, Tim. "Speech Recognition". In Multimedia Systems, 89–100. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0455-1_7.

3. Xin, Jack, and Yingyong Qi. "Speech Recognition". In Mathematical Modeling and Signal Processing in Speech and Hearing Sciences, 115–39. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03086-9_4.

4. Paulus, Dietrich W. R., and Joachim Hornegger. "Speech Recognition". In Pattern Recognition of Images and Speech in C++, 329–53. Wiesbaden: Vieweg+Teubner Verlag, 1997. http://dx.doi.org/10.1007/978-3-663-13991-1_25.

5. Farouk, Mohamed Hesham. "Speech Recognition". In SpringerBriefs in Electrical and Computer Engineering, 27–29. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02732-6_6.

6. Stölzle, Anton. "Speech Recognition". In The Kluwer International Series in Engineering and Computer Science, 321–38. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3570-6_21.

7. Farouk, Mohamed Hesham. "Speech Recognition". In SpringerBriefs in Electrical and Computer Engineering, 41–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69002-5_7.

8. Weik, Martin H. "speech recognition". In Computer Science and Communications Dictionary, 1636. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_17919.

9. Sinha, Priyabrata. "Speech Recognition". In Speech Processing in Embedded Systems, 143–55. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-75581-6_10.

10. Renals, Steve, and Thomas Hain. "Speech Recognition". In The Handbook of Computational Linguistics and Natural Language Processing, 297–332. Oxford, UK: Wiley-Blackwell, 2010. http://dx.doi.org/10.1002/9781444324044.ch12.

Conference papers on the topic "Speech recognition"

1. Manabe, Hiroyuki, Akira Hiraiwa, and Toshiaki Sugimura. "Speech recognition using EMG; mime speech recognition". In 8th European Conference on Speech Communication and Technology (Eurospeech 2003). ISCA, 2003. http://dx.doi.org/10.21437/eurospeech.2003-524.

2. Negoita, Alexandru, George Suciu, Svetlana Segarceanu, and Dan Trufin. "SPEECH RECOGNITION SYSTEM". In eLSE 2021. ADL Romania, 2021. http://dx.doi.org/10.12753/2066-026x-21-095.

Abstract: Speech recognition, also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, is a capability which enables a program to process human speech into a written format. While it is commonly confused with voice recognition, speech recognition focuses on translating speech from a spoken format into text, whereas voice recognition seeks only to identify an individual user's voice. Speech recognition applications are becoming more and more useful nowadays, and various interactive speech-aware applications are available on the market. They are, however, usually meant for and executed on traditional general-purpose computers. With the growth of embedded computing and the demand for emerging embedded platforms, speech recognition systems (SRS) need to be available on them too; such systems are efficient alternatives for devices on which typing is difficult because of small screens. The paper aims to test a speech recognition system that can be used for human-machine interaction through speech. The goal is to allow the machine to recognize a set of instructions sent by the user through the voice signal. An automatic speech recognition system is tested in order to identify words that belong to a limited vocabulary. It is implemented using a deep neural network (DNN) built with the TensorFlow library, which provides support for the development of artificial-intelligence algorithms. The system is tested on a non-homogeneous group of people, since the aim is a recognition system that is independent of the speaker.

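Below is a minimal sketch of the kind of limited-vocabulary DNN command recogniser the abstract above describes, written with TensorFlow/Keras. The feature dimensionality, layer sizes and ten-word vocabulary are illustrative assumptions rather than the authors' configuration.

```python
# Minimal limited-vocabulary DNN command classifier (TensorFlow/Keras, illustrative sketch).
import tensorflow as tf

NUM_WORDS = 10        # size of the limited command vocabulary (assumption)
FEAT_DIM = 13 * 100   # e.g. 13 MFCCs over 100 frames, flattened per utterance (assumption)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(FEAT_DIM,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_WORDS, activation="softmax"),  # one class per command word
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: (n_utterances, FEAT_DIM) feature matrix; y_train: integer word labels
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)
```
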
3. Woodland, P. "Speech recognition". In IEE Colloquium Speech and Language Engineering - State of the Art. IEE, 1998. http://dx.doi.org/10.1049/ic:19980956.

4. Manabe, Hiroyuki, Akira Hiraiwa, and Toshiaki Sugimura. "Unvoiced speech recognition using EMG - mime speech recognition". In CHI '03 extended abstracts. New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/765891.765996.

5. Dong Wang, Lie Lu, and Hong-Jiang Zhang. "Speech segmentation without speech recognition". In 2003 International Conference on Multimedia and Expo (ICME '03), Proceedings (Cat. No.03TH8698). IEEE, 2003. http://dx.doi.org/10.1109/icme.2003.1220940.

6. Newell, Alan F., John L. Arnott, and R. Dye. "A full speed speech simulation of speech recognition machines". In European Conference on Speech Technology. ISCA, 1987. http://dx.doi.org/10.21437/ecst.1987-207.

7. Paulose, Supriya, Shikhamoni Nath, and Samudravijaya K. "Marathi Speech Recognition". In The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages. ISCA, 2018. http://dx.doi.org/10.21437/sltu.2018-48.

8. Thorat, Roopa A., and Ruchira A. Jadhav. "Speech recognition system". In the International Conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1523103.1523226.

9. Hui Lin and Jeff Bilmes. "Polyphase speech recognition". In ICASSP 2008 - 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4518558.

10. Devi, Sulochana, Siddhi Chokshi, Kritika Kotian, and Juili Warwatkar. "Visual Speech Recognition". In 2021 4th Biennial International Conference on Nascent Technologies in Engineering (ICNTE). IEEE, 2021. http://dx.doi.org/10.1109/icnte51185.2021.9487784.

Reports on the topic "Speech recognition"

1. Hoeferlin, David M., Brian M. Ore, Stephen A. Thorn, and David Snyder. Speech Processing and Recognition (SPaRe). Fort Belvoir, VA: Defense Technical Information Center, January 2011. http://dx.doi.org/10.21236/ada540142.

2. Kubala, F., S. Austin, C. Barry, J. Makhoul, P. Placeway, and R. Schwartz. Byblos Speech Recognition Benchmark Results. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada459943.

3. Schwartz, Richard, and Owen Kimball. Toward Real-Time Continuous Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, March 1989. http://dx.doi.org/10.21236/ada208196.

4. Liu, Fu-Hua, Pedro J. Moreno, Richard M. Stern, and Alejandro Acero. Signal Processing for Robust Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 1994. http://dx.doi.org/10.21236/ada457798.

5. Schwartz, R., Y.-L. Chow, A. Derr, M.-W. Feng, and O. Kimball. Statistical Modeling for Continuous Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada192054.

6. STANDARD OBJECT SYSTEMS INC SHALIMAR FL. Auditory Modeling for Noisy Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada373379.

7. Pfister, M. Software Package for Speaker Independent or Dependent Speech Recognition Using Standard Objects for Phonetic Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, February 1998. http://dx.doi.org/10.21236/ada341198.

8. Ore, Brian M. Speech Recognition, Articulatory Feature Detection, and Speech Synthesis in Multiple Languages. Fort Belvoir, VA: Defense Technical Information Center, November 2009. http://dx.doi.org/10.21236/ada519140.

9. Draelos, Timothy J., Stephen Heck, Jennifer Galasso, and Ronald Brogan. Seismic Phase Identification with Speech Recognition Algorithms. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1474260.

10. Schwartz, Richard, and John Makhoul. Combining Multiple Knowledge Sources for Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, September 1988. http://dx.doi.org/10.21236/ada198928.