Journal articles on the topic 'Automatic speech recognition'

Consult the top 50 journal articles for your research on the topic 'Automatic speech recognition.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Fried, Louis. "AUTOMATIC SPEECH RECOGNITION." Information Systems Management 13, no. 1 (January 1996): 29–37. http://dx.doi.org/10.1080/10580539608906969.

2

Chigier, Benjamin. "Automatic speech recognition." Journal of the Acoustical Society of America 103, no. 1 (January 1998): 19. http://dx.doi.org/10.1121/1.423151.

3

Hovell, Simon Alexander. "Automatic speech recognition." Journal of the Acoustical Society of America 107, no. 5 (2000): 2325. http://dx.doi.org/10.1121/1.428610.

4

Espy‐Wilson, Carol. "Automatic speech recognition." Journal of the Acoustical Society of America 117, no. 4 (April 2005): 2403. http://dx.doi.org/10.1121/1.4786105.

5

Merrill, John W. "Automatic speech recognition." Journal of the Acoustical Society of America 121, no. 1 (2007): 29. http://dx.doi.org/10.1121/1.2434334.

6

Rao, P. V. S., and K. K. Paliwal. "Automatic speech recognition." Sadhana 9, no. 2 (September 1986): 85–120. http://dx.doi.org/10.1007/bf02747521.

7

SAYEM, Asm. "Speech Analysis for Alphabets in Bangla Language: Automatic Speech Recognition." International Journal of Engineering Research 3, no. 2 (February 1, 2014): 88–93. http://dx.doi.org/10.17950/ijer/v3s2/211.

8

Carlson, Gloria Stevens, and Jared Bernstein. "Automatic speech recognition of impaired speech." International Journal of Rehabilitation Research 11, no. 4 (December 1988): 396–97. http://dx.doi.org/10.1097/00004356-198812000-00013.

9

SAGISAKA, Yoshinori. "AUTOMATIC SPEECH RECOGNITION MODELS." Kodo Keiryogaku (The Japanese Journal of Behaviormetrics) 22, no. 1 (1995): 40–47. http://dx.doi.org/10.2333/jbhmk.22.40.

10

Receveur, Simon, Robin Weiss, and Tim Fingscheidt. "Turbo Automatic Speech Recognition." IEEE/ACM Transactions on Audio, Speech, and Language Processing 24, no. 5 (May 2016): 846–62. http://dx.doi.org/10.1109/taslp.2016.2520364.

11

Dutta Majumder, D. "Fuzzy sets in pattern recognition, image analysis and automatic speech recognition." Applications of Mathematics 30, no. 4 (1985): 237–54. http://dx.doi.org/10.21136/am.1985.104148.

12

Carlson, Gloria Stevens, Jared Bernstein, and Donald W. Bell. "Automatic speech recognition for speech‐impaired people." Journal of the Acoustical Society of America 84, S1 (November 1988): S46. http://dx.doi.org/10.1121/1.2026325.

13

King, Simon, Joe Frankel, Karen Livescu, Erik McDermott, Korin Richmond, and Mirjam Wester. "Speech production knowledge in automatic speech recognition." Journal of the Acoustical Society of America 121, no. 2 (February 2007): 723–42. http://dx.doi.org/10.1121/1.2404622.

14

Sarı, Leda, Mark Hasegawa-Johnson, and Chang D. Yoo. "Counterfactually Fair Automatic Speech Recognition." IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 3515–25. http://dx.doi.org/10.1109/taslp.2021.3126949.

15

Arora, Shipra J., and Rishi Pal Singh. "Automatic Speech Recognition: A Review." International Journal of Computer Applications 60, no. 9 (December 18, 2012): 34–44. http://dx.doi.org/10.5120/9722-4190.

16

Cooke, N. J., and M. Russell. "Gaze-contingent automatic speech recognition." IET Signal Processing 2, no. 4 (2008): 369. http://dx.doi.org/10.1049/iet-spr:20070127.

17

Francart, Tom, Marc Moonen, and Jan Wouters. "Automatic testing of speech recognition." International Journal of Audiology 48, no. 2 (January 2009): 80–90. http://dx.doi.org/10.1080/14992020802400662.

18

Vensko, George. "Apparatus for automatic speech recognition." Journal of the Acoustical Society of America 85, no. 4 (April 1989): 1812. http://dx.doi.org/10.1121/1.397922.

19

JONES, DYLAN M., CLIVE R. FRANKISH, and KEVIN HAPESHI. "Automatic speech recognition in practice." Behaviour & Information Technology 11, no. 2 (March 1992): 109–22. http://dx.doi.org/10.1080/01449299208924325.

20

Espy-Wilson, Carol Y. "Linguistically informed automatic speech recognition." Journal of the Acoustical Society of America 112, no. 5 (November 2002): 2278. http://dx.doi.org/10.1121/1.1526574.

21

Cerisara, Christophe, and Dominique Fohr. "Multi-band automatic speech recognition." Computer Speech & Language 15, no. 2 (April 2001): 151–74. http://dx.doi.org/10.1006/csla.2001.0163.

22

Kure, Namita, et al. "Survey of Automatic Dysarthric Speech Recognition." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (November 2, 2023): 1028–36. http://dx.doi.org/10.17762/ijritcc.v11i10.8622.

Abstract:
The need for automated speech recognition has expanded as a result of significant industrial growth in a variety of automation and human-machine interface applications. Speech impairment brought on by communication disorders, neurogenic speech disorders, or psychological speech disorders limits the performance of artificial intelligence-based systems. Dysarthria is a neurogenic speech disease that restricts the capacity of the human voice to articulate. This article presents a comprehensive survey of recent advances in automatic Dysarthric Speech Recognition (DSR) using machine learning and deep learning paradigms. It focuses on the methodology, databases, evaluation metrics, and major findings from the study of previous approaches. From the literature survey, it identifies the gaps between existing and previous work on DSR and suggests future directions for improving DSR.
23

Singhal, Shiwani, Muskan Deswal, and Er Shafalii Sharma. "AUTOMATIC SPEECH RECOGNITION USING DEEP NEURAL NETWORKS." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27313.

Abstract:
This research concentrates on the progression and present status of automatic speech recognition systems powered by deep neural networks. It discusses model architectures, training approaches, evaluating model efficacy, and recent advancements specific to deep neural networks applied in automatic speech recognition models. It considers the challenges faced in crafting these speech recognition models, such as data scarcity and the necessity for adaptability. Our exploration traces the evolution of automatic speech recognition through deep neural networks, presenting valuable insights aimed at propelling the domain of speech recognition for diverse applications, spanning from smart devices to healthcare. KEYWORDS - Automatic Speech Recognition, Deep Neural Networks, Language Modeling, Robustness to Noise, Speech Modelling
24

Kewley-Port, Diane, Jonathan Dalby, and Deborah Burleson. "Speech intelligibility training using automatic speech recognition technology." Journal of the Acoustical Society of America 112, no. 5 (November 2002): 2303–4. http://dx.doi.org/10.1121/1.4779274.

25

Benzeghiba, M., R. De Mori, O. Deroo, S. Dupont, T. Erbes, D. Jouvet, L. Fissore, et al. "Automatic speech recognition and speech variability: A review." Speech Communication 49, no. 10-11 (October 2007): 763–86. http://dx.doi.org/10.1016/j.specom.2007.02.006.

26

Kushwaha, Shivam, Piyush Deep, Mohd Muaz, and Er Shafalii Sharma. "AUTOMATIC SPEECH RECOGNITION USING DEEP NEURAL NETWORKS." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27292.

Abstract:
This research paper revolves around the evolution and the present scenario of Automatic Speech Recognition systems using Deep Neural Networks. It includes the designs, techniques of training, the evaluations of the model performance, and the emerging trends that are specific to deep neural networks embedded in automatic speech recognition models. This research also incorporates the challenges faced while building and deploying the speech recognition model including limited data availability and adaptability. We have examined how deep neural networks have transformed automatic speech recognition and this research provides valuable insights to improve the speech recognition technology across a number of applications from healthcare to smart devices. INDEX TERMS - Automatic Speech Recognition, Deep Neural Networks, Language Modeling, Robustness to Noise, Speech Modelling
27

Janai, Siddhanna, Shreekanth T., Chandan M., and Ajish K. Abraham. "Speech-to-Speech Conversion." International Journal of Ambient Computing and Intelligence 12, no. 1 (January 2021): 184–206. http://dx.doi.org/10.4018/ijaci.2021010108.

Abstract:
A novel approach to build a speech-to-speech conversion (STSC) system for individuals with speech impairment dysarthria is described. STSC system takes impaired speech having inherent disturbance as input and produces a synthesized output speech with good pronunciation and noise free utterance. The STSC system involves two stages, namely automatic speech recognition (ASR) and automatic speech synthesis. ASR transforms speech into text, while automatic speech synthesis (or text-to-speech [TTS]) performs the reverse task. At present, the recognition system is developed for a small vocabulary of 50 words and the accuracy of 94% is achieved for normal speakers and 88% for speakers with dysarthria. The output speech of TTS system has achieved a MOS value of 4.5 out of 5 as obtained by averaging the response of 20 listeners. This method of STSC would be an augmentative and alternative communication aid for speakers with dysarthria.
28

Salaja, Rosemary T., Ronan Flynn, and Michael Russell. "A Life-Based Classifier for Automatic Speech Recognition." Applied Mechanics and Materials 679 (October 2014): 189–93. http://dx.doi.org/10.4028/www.scientific.net/amm.679.189.

Abstract:
Research in speech recognition has produced different approaches that have been used for the classification of speech utterances in the back-end of an automatic speech recognition (ASR) system. As speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. This paper proposes a new back-end classifier that is based on artificial life (ALife) and describes how the proposed classifier can be used in a speech recognition system.
29

Schultz, Benjamin G., Venkata S. Aditya Tarigoppula, Gustavo Noffs, Sandra Rojas, Anneke van der Walt, David B. Grayden, and Adam P. Vogel. "Automatic speech recognition in neurodegenerative disease." International Journal of Speech Technology 24, no. 3 (May 4, 2021): 771–79. http://dx.doi.org/10.1007/s10772-021-09836-w.

Abstract:
Automatic speech recognition (ASR) could potentially improve communication by providing transcriptions of speech in real time. ASR is particularly useful for people with progressive disorders that lead to reduced speech intelligibility or difficulties performing motor tasks. ASR services are usually trained on healthy speech and may not be optimized for impaired speech, creating a barrier for accessing augmented assistance devices. We tested the performance of three state-of-the-art ASR platforms on two groups of people with neurodegenerative disease and healthy controls. We further examined individual differences that may explain errors in ASR services within groups, such as age and sex. Speakers were recorded while reading a standard text. Speech was elicited from individuals with multiple sclerosis, Friedreich’s ataxia, and healthy controls. Recordings were manually transcribed and compared to ASR transcriptions using Amazon Web Services, Google Cloud, and IBM Watson. Accuracy was measured as the proportion of words that were correctly classified. ASR accuracy was higher for controls than clinical groups, and higher for multiple sclerosis compared to Friedreich’s ataxia for all ASR services. Amazon Web Services and Google Cloud yielded higher accuracy than IBM Watson. ASR accuracy decreased with increased disease duration. Age and sex did not significantly affect ASR accuracy. ASR faces challenges for people with neuromuscular disorders. Until improvements are made in recognizing less intelligible speech, the true value of ASR for people requiring augmented assistance devices and alternative communication remains unrealized. We suggest potential methods to improve ASR for those with impaired speech.
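The accuracy metric described in this abstract — the proportion of reference words correctly recognized — is conventionally derived from a word-level edit-distance alignment between the manual and ASR transcripts. A minimal sketch (an illustrative helper, not the paper's actual scoring pipeline):

```python
def word_accuracy(reference: str, hypothesis: str) -> float:
    """Proportion of reference words correctly recognized, via a
    word-level Levenshtein alignment (illustrative, not the paper's
    exact scoring script)."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    # word accuracy = 1 - WER, floored at 0 for badly mismatched outputs
    return max(0.0, 1.0 - dp[len(ref)][len(hyp)] / max(1, len(ref)))

print(word_accuracy("the quick brown fox", "the quick brown box"))  # 0.75
```

One substitution out of four reference words gives a word error rate of 0.25, hence 75% accuracy.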
30

Rista, Amarildo, and Arbana Kadriu. "Automatic Speech Recognition: A Comprehensive Survey." SEEU Review 15, no. 2 (December 1, 2020): 86–112. http://dx.doi.org/10.2478/seeur-2020-0019.

Abstract:
Speech recognition is an interdisciplinary subfield of natural language processing (NLP) that facilitates the recognition and translation of spoken language into text by machine. Speech recognition plays an important role in digital transformation. It is widely used in different areas such as education, industry, and healthcare and has recently been used in many Internet of Things and Machine Learning applications. The process of speech recognition is one of the most difficult processes in computer science. Despite numerous searches in this domain, an optimal method for speech recognition has not yet been found. This is due to the fact that there are many attributes that characterize natural languages and every language has its particular highlights. The aim of this research is to provide a comprehensive understanding of the various techniques within the domain of Speech Recognition through a systematic literature review of the existing work. We will introduce the most significant and relevant techniques that may provide some directions in the future research.
31

Auti, Dr Nisha, Atharva Pujari, Anagha Desai, Shreya Patil, Sanika Kshirsagar, and Rutika Rindhe. "Advanced Audio Signal Processing for Speaker Recognition and Sentiment Analysis." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 1717–24. http://dx.doi.org/10.22214/ijraset.2023.51825.

Abstract:
Automatic Speech Recognition (ASR) technology has revolutionized human-computer interaction by allowing users to communicate with computer interfaces using their voice in a natural way. Speaker recognition is a biometric recognition method that identifies individuals based on their unique speech signal, with potential applications in security, communication, and personalization. Sentiment analysis is a statistical method that analyzes unique acoustic properties of the speaker's voice to identify emotions or sentiments in speech. This allows for automated speech recognition systems to accurately categorize speech as Positive, Neutral, or Negative. While sentiment analysis has been developed for various languages, further research is required for regional languages. This project aims to improve the accuracy of automatic speech recognition systems by implementing advanced audio signal processing and sentiment analysis detection. The proposed system will identify the speaker's voice and analyze the audio signal to detect the context of speech, including the identification of foul language and aggressive speech. The system will be developed for the Marathi Language dataset, with potential for further development in other languages.
32

Galatang, Danny Henry, and Suyanto Suyanto. "Syllable-Based Indonesian Automatic Speech Recognition." International Journal on Electrical Engineering and Informatics 12, no. 4 (December 31, 2020): 720–28. http://dx.doi.org/10.15676/ijeei.2020.12.4.2.

Abstract:
The syllable-based automatic speech recognition (ASR) systems commonly perform better than the phoneme-based ones. This paper focuses on developing an Indonesian monosyllable-based ASR (MSASR) system using an ASR engine called SPRAAK and comparing it to a phoneme-based one. The Mozilla DeepSpeech-based end-to-end ASR (MDS-E2EASR), one of the state-of-the-art models based on character (similar to the phoneme-based model), is also investigated to confirm the result. Besides, a novel Kaituoxu SpeechTransformer (KST) E2EASR is also examined. Testing on the Indonesian speech corpus of 5,439 words shows that the proposed MSASR produces much higher word accuracy (76.57%) than the monophone-based one (63.36%). Its performance is comparable to the character-based MDS-E2EASR, which produces 76.90%, and the character-based KST-E2EASR (78.00%). In the future, this monosyllable-based ASR is possible to be improved to the bisyllable-based one to give higher word accuracy. Nevertheless, extensive bisyllable acoustic models must be handled using an advanced method.
33

GURTUEVA, I. A. "MODERN PROBLEMS OF AUTOMATIC SPEECH RECOGNITION." News of the Kabardin-Balkar Scientific Center of RAS 6, no. 98 (2020): 20–33. http://dx.doi.org/10.35330/1991-6639-2020-6-98-20-33.

34

Abushariah, Mohammad A. M., and Assal A. M. Alqudah. "Automatic Identity Recognition Using Speech Biometric." European Scientific Journal, ESJ 12, no. 12 (April 28, 2016): 43. http://dx.doi.org/10.19044/esj.2016.v12n12p43.

Abstract:
Biometric technology refers to the automatic identification of a person using physical or behavioral traits associated with him/her. This technology can be an excellent candidate for developing intelligent systems such as speaker identification, facial recognition, signature verification...etc. Biometric technology can be used to design and develop automatic identity recognition systems, which are highly demanded and can be used in banking systems, employee identification, immigration, e-commerce…etc. The first phase of this research emphasizes on the development of automatic identity recognizer using speech biometric technology based on Artificial Intelligence (AI) techniques provided in MATLAB. For our phase one, speech data is collected from 20 (10 male and 10 female) participants in order to develop the recognizer. The speech data include utterances recorded for the English language digits (0 to 9), where each participant recorded each digit 3 times, which resulted in a total of 600 utterances for all participants. For our phase two, speech data is collected from 100 (50 male and 50 female) participants in order to develop the recognizer. The speech data is divided into text-dependent and text-independent data, whereby each participant selected his/her full name and recorded it 30 times, which makes up the text-independent data. On the other hand, the text-dependent data is represented by a short Arabic language story that contains 16 sentences, whereby every sentence was recorded by every participant 5 times. As a result, this new corpus contains 3000 (30 utterances * 100 speakers) sound files that represent the text-independent data using their full names and 8000 (16 sentences * 5 utterances * 100 speakers) sound files that represent the text-dependent data using the short story. 
For the purpose of our phase one of developing the automatic identity recognizer using speech, the 600 utterances have undergone the feature extraction and feature classification phases. The speech-based automatic identity recognition system is based on the most dominating feature extraction technique, which is known as the Mel-Frequency Cepstral Coefficient (MFCC). For feature classification phase, the system is based on the Vector Quantization (VQ) algorithm. Based on our experimental results, the highest accuracy achieved is 76%. The experimental results have shown acceptable performance, but can be improved further in our phase two using larger speech data size and better performance classification techniques such as the Hidden Markov Model (HMM).
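The classification stage this abstract describes — Vector Quantization (VQ) over per-speaker feature vectors — amounts to building a small codebook of centroids per speaker and picking the speaker whose codebook quantizes the test vectors with the least distortion. A toy sketch with made-up 2-D points standing in for MFCC frames (illustrative only, not the authors' MATLAB implementation):

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.dist(a, b)

def build_codebook(vectors, k=2, iters=10, seed=0):
    """Toy k-means codebook: k centroids summarizing one speaker's
    training feature vectors (stand-ins for MFCC frames)."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda c: dist(v, centroids[c]))
            clusters[i].append(v)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids

def identify(vectors, codebooks):
    """Return the speaker whose codebook yields the lowest average
    quantization distortion over the test vectors."""
    def distortion(cb):
        return sum(min(dist(v, c) for c in cb) for v in vectors) / len(vectors)
    return min(codebooks, key=lambda spk: distortion(codebooks[spk]))

# Two hypothetical speakers with clearly separated 2-D "features"
alice = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (1.0, 1.1)]
bob   = [(5.0, 5.2), (5.1, 4.9), (4.8, 5.0), (6.0, 6.1)]
books = {"alice": build_codebook(alice), "bob": build_codebook(bob)}
print(identify([(0.12, 0.18), (1.05, 1.0)], books))  # alice
```

A real system would extract MFCC vectors from framed audio before this step; the nearest-codebook decision rule is the same.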
35

Sain, Nakshatra. "Automatic Speech Recognition: - Ease of Typing." International Journal for Research in Applied Science and Engineering Technology 6, no. 4 (April 30, 2018): 1932–34. http://dx.doi.org/10.22214/ijraset.2018.4331.

36

Mihajlik, Péter, Tibor Révész, and Péter Tatai. "Phonetic transcription in automatic speech recognition." Acta Linguistica Hungarica 49, no. 3-4 (November 2002): 407–25. http://dx.doi.org/10.1556/aling.49.2002.3-4.9.

37

Coro, Gianpaolo, Fabio Valerio Massoli, Antonio Origlia, and Francesco Cutugno. "Psycho-acoustics inspired automatic speech recognition." Computers & Electrical Engineering 93 (July 2021): 107238. http://dx.doi.org/10.1016/j.compeleceng.2021.107238.

38

Aldarmaki, Hanan, Asad Ullah, Sreepratha Ram, and Nazar Zaki. "Unsupervised Automatic Speech Recognition: A review." Speech Communication 139 (April 2022): 76–91. http://dx.doi.org/10.1016/j.specom.2022.02.005.

39

Sain, Nakshatra. "Automatic Speech Recognition: - A Literature Survey." International Journal for Research in Applied Science and Engineering Technology 6, no. 4 (April 30, 2018): 3644–45. http://dx.doi.org/10.22214/ijraset.2018.4606.

40

Gjoreski, Martin, Hristijan Gjoreski, and Andrea Kulakov. "Automatic Recognition of Emotions from Speech." International Journal of Computational Linguistics Research 10, no. 4 (December 1, 2019): 101. http://dx.doi.org/10.6025/jcl/2019/10/4/101-107.

41

Hartwell, Walter T. "Automatic speech recognition using echo cancellation." Journal of the Acoustical Society of America 91, no. 3 (March 1992): 1791. http://dx.doi.org/10.1121/1.403754.

42

Noyes, Jan M., Chris Baber, and Andrew P. Leggatt. "Automatic Speech Recognition, Noise and Workload." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 22 (July 2000): 762–65. http://dx.doi.org/10.1177/154193120004402269.

Abstract:
Despite the increasing use of technology in the developed world, most computer communications still take place via a QWERTY keyboard and a mouse. The use of Automatic Speech Recognition (ASR) whereby individuals can ‘talk’ to their computers has yet to be realised to any great extent. This is despite the benefits relating to greater efficiency, use in adverse environments and in the ‘hands-eyes busy’ situation. There are now affordable ASR products in the marketplace, and many people are able to buy these products and try ASR for themselves. However, anecdotal reports suggest that these same people will use ASR for a few days or weeks and then revert to conventional interaction techniques; only a hardy few appear to persist long enough to reap the benefits. Thus, it is our contention that ASR is a commercially viable technology but that it still requires further development to make a significant contribution to usability. Admittedly, there are some very successful applications that have used ASR for a number of decades, but these are often characterised by relatively small vocabularies, dedicated users and non-threatening situations; typical applications are in offices (Noyes & Frankish, 1989) or for disabled users (Noyes & Frankish, 1992). Given that Armoured Fighting Vehicles (AFVs) could employ ASR with limited vocabulary and dedicated users, the use of ASR in this application is considered here. The principal difference between ASR for AFV and previous applications is the environmental conditions in which the technology will be used.
43

Martinčić-Ipšić, Sanda, Miran Pobar, and Ivo Ipšić. "Croatian Large Vocabulary Automatic Speech Recognition." Automatika 52, no. 2 (January 2011): 147–57. http://dx.doi.org/10.1080/00051144.2011.11828413.

44

Bilmes, Jeff A. "Graphical models and automatic speech recognition." Journal of the Acoustical Society of America 112, no. 5 (November 2002): 2278–79. http://dx.doi.org/10.1121/1.4779134.

45

O’Shaughnessy, Douglas, and Sid‐Ahmed Selouani. "Robust automatic recognition of telephone speech." Journal of the Acoustical Society of America 113, no. 4 (April 2003): 2198. http://dx.doi.org/10.1121/1.4780179.

46

Levinson, S. E. "Structural methods in automatic speech recognition." Proceedings of the IEEE 73, no. 11 (1985): 1625–50. http://dx.doi.org/10.1109/proc.1985.13344.

47

O'Shaughnessy, Douglas. "Acoustic Analysis for Automatic Speech Recognition." Proceedings of the IEEE 101, no. 5 (May 2013): 1038–53. http://dx.doi.org/10.1109/jproc.2013.2251592.

48

Arora, Neerja. "Automatic Speech Recognition System: A Review." International Journal of Computer Applications 151, no. 1 (October 17, 2016): 24–28. http://dx.doi.org/10.5120/ijca2016911368.

49

Zhang, Shi-Xiong, and M. J. F. Gales. "Structured SVMs for Automatic Speech Recognition." IEEE Transactions on Audio, Speech, and Language Processing 21, no. 3 (March 2013): 544–55. http://dx.doi.org/10.1109/tasl.2012.2227734.

50

Zhu, Qifeng. "DYNAMIC PRUNING FOR AUTOMATIC SPEECH RECOGNITION." Journal of the Acoustical Society of America 134, no. 3 (2013): 2377. http://dx.doi.org/10.1121/1.4820207.
