Journal articles on the topic "Automatic speech recognition"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles.

Check out the top 50 journal articles on the topic "Automatic speech recognition".

An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever those details are available in the work's metadata.

Browse journal articles from many disciplines and assemble the bibliography you need.

1. Fried, Louis. "AUTOMATIC SPEECH RECOGNITION". Information Systems Management 13, no. 1 (January 1996): 29–37. http://dx.doi.org/10.1080/10580539608906969.

2. Chigier, Benjamin. "Automatic speech recognition". Journal of the Acoustical Society of America 103, no. 1 (January 1998): 19. http://dx.doi.org/10.1121/1.423151.

3. Hovell, Simon Alexander. "Automatic speech recognition". Journal of the Acoustical Society of America 107, no. 5 (2000): 2325. http://dx.doi.org/10.1121/1.428610.

4. Espy‐Wilson, Carol. "Automatic speech recognition". Journal of the Acoustical Society of America 117, no. 4 (April 2005): 2403. http://dx.doi.org/10.1121/1.4786105.

5. Merrill, John W. "Automatic speech recognition". Journal of the Acoustical Society of America 121, no. 1 (2007): 29. http://dx.doi.org/10.1121/1.2434334.

6. Rao, P. V. S., and K. K. Paliwal. "Automatic speech recognition". Sadhana 9, no. 2 (September 1986): 85–120. http://dx.doi.org/10.1007/bf02747521.
7. SAYEM, Asm. "Speech Analysis for Alphabets in Bangla Language: Automatic Speech Recognition". International Journal of Engineering Research 3, no. 2 (February 1, 2014): 88–93. http://dx.doi.org/10.17950/ijer/v3s2/211.

8. Carlson, Gloria Stevens, and Jared Bernstein. "Automatic speech recognition of impaired speech". International Journal of Rehabilitation Research 11, no. 4 (December 1988): 396–97. http://dx.doi.org/10.1097/00004356-198812000-00013.

9. SAGISAKA, Yoshinori. "AUTOMATIC SPEECH RECOGNITION MODELS". Kodo Keiryogaku (The Japanese Journal of Behaviormetrics) 22, no. 1 (1995): 40–47. http://dx.doi.org/10.2333/jbhmk.22.40.

10. Receveur, Simon, Robin Weiss, and Tim Fingscheidt. "Turbo Automatic Speech Recognition". IEEE/ACM Transactions on Audio, Speech, and Language Processing 24, no. 5 (May 2016): 846–62. http://dx.doi.org/10.1109/taslp.2016.2520364.

11. Dutta Majumder, D. "Fuzzy sets in pattern recognition, image analysis and automatic speech recognition". Applications of Mathematics 30, no. 4 (1985): 237–54. http://dx.doi.org/10.21136/am.1985.104148.

12. Carlson, Gloria Stevens, Jared Bernstein, and Donald W. Bell. "Automatic speech recognition for speech‐impaired people". Journal of the Acoustical Society of America 84, S1 (November 1988): S46. http://dx.doi.org/10.1121/1.2026325.

13. King, Simon, Joe Frankel, Karen Livescu, Erik McDermott, Korin Richmond, and Mirjam Wester. "Speech production knowledge in automatic speech recognition". Journal of the Acoustical Society of America 121, no. 2 (February 2007): 723–42. http://dx.doi.org/10.1121/1.2404622.

14. Sarı, Leda, Mark Hasegawa-Johnson, and Chang D. Yoo. "Counterfactually Fair Automatic Speech Recognition". IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 3515–25. http://dx.doi.org/10.1109/taslp.2021.3126949.
15. Arora, Shipra J., and Rishi Pal Singh. "Automatic Speech Recognition: A Review". International Journal of Computer Applications 60, no. 9 (December 18, 2012): 34–44. http://dx.doi.org/10.5120/9722-4190.

16. Cooke, N. J., and M. Russell. "Gaze-contingent automatic speech recognition". IET Signal Processing 2, no. 4 (2008): 369. http://dx.doi.org/10.1049/iet-spr:20070127.

17. Francart, Tom, Marc Moonen, and Jan Wouters. "Automatic testing of speech recognition". International Journal of Audiology 48, no. 2 (January 2009): 80–90. http://dx.doi.org/10.1080/14992020802400662.

18. Vensko, George. "Apparatus for automatic speech recognition". Journal of the Acoustical Society of America 85, no. 4 (April 1989): 1812. http://dx.doi.org/10.1121/1.397922.

19. JONES, DYLAN M., CLIVE R. FRANKISH, and KEVIN HAPESHI. "Automatic speech recognition in practice". Behaviour & Information Technology 11, no. 2 (March 1992): 109–22. http://dx.doi.org/10.1080/01449299208924325.

20. Espy-Wilson, Carol Y. "Linguistically informed automatic speech recognition". Journal of the Acoustical Society of America 112, no. 5 (November 2002): 2278. http://dx.doi.org/10.1121/1.1526574.

21. Cerisara, Christophe, and Dominique Fohr. "Multi-band automatic speech recognition". Computer Speech & Language 15, no. 2 (April 2001): 151–74. http://dx.doi.org/10.1006/csla.2001.0163.
22. Kure, Namita, et al. "Survey of Automatic Dysarthric Speech Recognition". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (November 2, 2023): 1028–36. http://dx.doi.org/10.17762/ijritcc.v11i10.8622.

Abstract: The need for automatic speech recognition has expanded as a result of significant industrial growth in a variety of automation and human-machine interface applications. Speech impairment caused by communication disorders, neurogenic speech disorders, or psychological speech disorders limits the performance of various artificial-intelligence-based systems. Dysarthria is a neurogenic speech disorder that restricts the human voice's capacity for articulation. This article presents a comprehensive survey of recent advances in automatic Dysarthric Speech Recognition (DSR) using machine learning and deep learning paradigms. It focuses on the methodology, databases, evaluation metrics, and major findings of previous approaches. From the literature survey, it identifies the gaps in existing work on DSR and provides future directions for improving DSR.
23. Singhal, Shiwani, Muskan Deswal, and Er Shafalii Sharma. "AUTOMATIC SPEECH RECOGNITION USING DEEP NEURAL NETWORKS". International Journal of Scientific Research in Engineering and Management 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27313.

Abstract: This research concentrates on the progression and present status of automatic speech recognition systems powered by deep neural networks. It discusses model architectures, training approaches, evaluation of model efficacy, and recent advancements specific to deep neural networks applied in automatic speech recognition models. It considers the challenges faced in crafting these speech recognition models, such as data scarcity and the necessity for adaptability. Our exploration traces the evolution of automatic speech recognition through deep neural networks, presenting valuable insights aimed at propelling the domain of speech recognition for diverse applications, spanning from smart devices to healthcare.
Keywords: Automatic Speech Recognition, Deep Neural Networks, Language Modeling, Robustness to Noise, Speech Modelling
24. Kewley-Port, Diane, Jonathan Dalby, and Deborah Burleson. "Speech intelligibility training using automatic speech recognition technology". Journal of the Acoustical Society of America 112, no. 5 (November 2002): 2303–4. http://dx.doi.org/10.1121/1.4779274.

25. Benzeghiba, M., R. De Mori, O. Deroo, S. Dupont, T. Erbes, D. Jouvet, L. Fissore, et al. "Automatic speech recognition and speech variability: A review". Speech Communication 49, no. 10–11 (October 2007): 763–86. http://dx.doi.org/10.1016/j.specom.2007.02.006.

26. Kushwaha, Shivam, Piyush Deep, Mohd Muaz, and Er Shafalii Sharma. "AUTOMATIC SPEECH RECOGNITION USING DEEP NEURAL NETWORKS". International Journal of Scientific Research in Engineering and Management 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27292.

Abstract: This research paper revolves around the evolution and present scenario of Automatic Speech Recognition systems using Deep Neural Networks. It covers model designs, training techniques, evaluation of model performance, and emerging trends specific to deep neural networks embedded in automatic speech recognition models. The research also examines the challenges faced while building and deploying speech recognition models, including limited data availability and adaptability. We have examined how deep neural networks have transformed automatic speech recognition, and this research provides valuable insights for improving speech recognition technology across a number of applications, from healthcare to smart devices.
Index terms: Automatic Speech Recognition, Deep Neural Networks, Language Modeling, Robustness to Noise, Speech Modelling
27. Janai, Siddhanna, Shreekanth T., Chandan M., and Ajish K. Abraham. "Speech-to-Speech Conversion". International Journal of Ambient Computing and Intelligence 12, no. 1 (January 2021): 184–206. http://dx.doi.org/10.4018/ijaci.2021010108.

Abstract: A novel approach to building a speech-to-speech conversion (STSC) system for individuals with the speech impairment dysarthria is described. The STSC system takes impaired speech with inherent disturbance as input and produces synthesized output speech with good pronunciation and noise-free utterances. The system involves two stages: automatic speech recognition (ASR) and automatic speech synthesis. ASR transforms speech into text, while automatic speech synthesis (text-to-speech, TTS) performs the reverse task. At present, the recognition system is developed for a small vocabulary of 50 words; an accuracy of 94% is achieved for normal speakers and 88% for speakers with dysarthria. The output speech of the TTS system achieved a MOS value of 4.5 out of 5, obtained by averaging the responses of 20 listeners. This method of STSC would serve as an augmentative and alternative communication aid for speakers with dysarthria.
28. Salaja, Rosemary T., Ronan Flynn, and Michael Russell. "A Life-Based Classifier for Automatic Speech Recognition". Applied Mechanics and Materials 679 (October 2014): 189–93. http://dx.doi.org/10.4028/www.scientific.net/amm.679.189.

Abstract: Research in speech recognition has produced different approaches for classifying speech utterances in the back-end of an automatic speech recognition (ASR) system. Because speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. This paper proposes a new back-end classifier based on artificial life (ALife) and describes how the proposed classifier can be used in a speech recognition system.
29. Schultz, Benjamin G., Venkata S. Aditya Tarigoppula, Gustavo Noffs, Sandra Rojas, Anneke van der Walt, David B. Grayden, and Adam P. Vogel. "Automatic speech recognition in neurodegenerative disease". International Journal of Speech Technology 24, no. 3 (May 4, 2021): 771–79. http://dx.doi.org/10.1007/s10772-021-09836-w.

Abstract: Automatic speech recognition (ASR) could potentially improve communication by providing transcriptions of speech in real time. ASR is particularly useful for people with progressive disorders that lead to reduced speech intelligibility or difficulties performing motor tasks. ASR services are usually trained on healthy speech and may not be optimized for impaired speech, creating a barrier for accessing augmented assistance devices. We tested the performance of three state-of-the-art ASR platforms on two groups of people with neurodegenerative disease and healthy controls. We further examined individual differences that may explain errors in ASR services within groups, such as age and sex. Speakers were recorded while reading a standard text. Speech was elicited from individuals with multiple sclerosis, Friedreich's ataxia, and healthy controls. Recordings were manually transcribed and compared to ASR transcriptions using Amazon Web Services, Google Cloud, and IBM Watson. Accuracy was measured as the proportion of words that were correctly classified. ASR accuracy was higher for controls than clinical groups, and higher for multiple sclerosis compared to Friedreich's ataxia for all ASR services. Amazon Web Services and Google Cloud yielded higher accuracy than IBM Watson. ASR accuracy decreased with increased disease duration. Age and sex did not significantly affect ASR accuracy. ASR faces challenges for people with neuromuscular disorders. Until improvements are made in recognizing less intelligible speech, the true value of ASR for people requiring augmented assistance devices and alternative communication remains unrealized. We suggest potential methods to improve ASR for those with impaired speech.
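The Schultz et al. abstract above measures ASR accuracy as the proportion of words correctly recognized. A common way to compute such a score is as the complement of word error rate (WER), using an edit-distance alignment of reference and hypothesis transcripts. The following is a minimal, self-contained sketch of that metric; the function name is illustrative, and the paper's exact scoring procedure may differ.

```python
def word_accuracy(reference: str, hypothesis: str) -> float:
    """Proportion of reference words correctly recognized: 1 - WER.

    WER is the word-level Levenshtein distance (substitutions +
    deletions + insertions) divided by the reference word count.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    wer = dp[len(ref)][len(hyp)] / max(len(ref), 1)
    return max(0.0, 1.0 - wer)

print(word_accuracy("the quick brown fox", "the quick brown fox"))  # 1.0
print(word_accuracy("the quick brown fox", "the quik brown"))       # 0.5
```

Note that accuracy is clamped at zero: insertions can push WER above 1, so the raw complement can go negative for very poor hypotheses.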
30. Rista, Amarildo, and Arbana Kadriu. "Automatic Speech Recognition: A Comprehensive Survey". SEEU Review 15, no. 2 (December 1, 2020): 86–112. http://dx.doi.org/10.2478/seeur-2020-0019.

Abstract: Speech recognition is an interdisciplinary subfield of natural language processing (NLP) that facilitates the recognition and translation of spoken language into text by machine. Speech recognition plays an important role in digital transformation. It is widely used in different areas such as education, industry, and healthcare, and has recently been used in many Internet of Things and Machine Learning applications. The process of speech recognition is one of the most difficult processes in computer science. Despite extensive research in this domain, an optimal method for speech recognition has not yet been found. This is due to the fact that many attributes characterize natural languages, and every language has its own particular features. The aim of this research is to provide a comprehensive understanding of the various techniques within the domain of speech recognition through a systematic literature review of existing work. We introduce the most significant and relevant techniques, which may provide some directions for future research.
31. Auti, Nisha, Atharva Pujari, Anagha Desai, Shreya Patil, Sanika Kshirsagar, and Rutika Rindhe. "Advanced Audio Signal Processing for Speaker Recognition and Sentiment Analysis". International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 1717–24. http://dx.doi.org/10.22214/ijraset.2023.51825.

Abstract: Automatic Speech Recognition (ASR) technology has revolutionized human-computer interaction by allowing users to communicate with computer interfaces using their voice in a natural way. Speaker recognition is a biometric recognition method that identifies individuals based on their unique speech signal, with potential applications in security, communication, and personalization. Sentiment analysis is a statistical method that analyzes unique acoustic properties of the speaker's voice to identify emotions or sentiments in speech. This allows automated speech recognition systems to accurately categorize speech as Positive, Neutral, or Negative. While sentiment analysis has been developed for various languages, further research is required for regional languages. This project aims to improve the accuracy of automatic speech recognition systems by implementing advanced audio signal processing and sentiment analysis detection. The proposed system will identify the speaker's voice and analyze the audio signal to detect the context of speech, including the identification of foul language and aggressive speech. The system will be developed for a Marathi-language dataset, with potential for further development in other languages.
32. Galatang, Danny Henry, and Suyanto Suyanto. "Syllable-Based Indonesian Automatic Speech Recognition". International Journal on Electrical Engineering and Informatics 12, no. 4 (December 31, 2020): 720–28. http://dx.doi.org/10.15676/ijeei.2020.12.4.2.

Abstract: Syllable-based automatic speech recognition (ASR) systems commonly perform better than phoneme-based ones. This paper focuses on developing an Indonesian monosyllable-based ASR (MSASR) system using an ASR engine called SPRAAK and comparing it to a phoneme-based one. The Mozilla DeepSpeech-based end-to-end ASR (MDS-E2EASR), one of the state-of-the-art character-based models (similar to the phoneme-based model), is also investigated to confirm the result. Besides, a novel Kaituoxu SpeechTransformer (KST) E2EASR is also examined. Testing on an Indonesian speech corpus of 5,439 words shows that the proposed MSASR produces much higher word accuracy (76.57%) than the monophone-based one (63.36%). Its performance is comparable to the character-based MDS-E2EASR, which produces 76.90%, and the character-based KST-E2EASR (78.00%). In the future, this monosyllable-based ASR could be extended to a bisyllable-based one to give higher word accuracy. Nevertheless, extensive bisyllable acoustic models must be handled using an advanced method.
33. GURTUEVA, I. A. "MODERN PROBLEMS OF AUTOMATIC SPEECH RECOGNITION". News of the Kabardin-Balkar Scientific Center of RAS 6, no. 98 (2020): 20–33. http://dx.doi.org/10.35330/1991-6639-2020-6-98-20-33.
34. Abushariah, Mohammad A. M., and Assal A. M. Alqudah. "Automatic Identity Recognition Using Speech Biometric". European Scientific Journal, ESJ 12, no. 12 (April 28, 2016): 43. http://dx.doi.org/10.19044/esj.2016.v12n12p43.

Abstract: Biometric technology refers to the automatic identification of a person using physical or behavioral traits associated with him/her. This technology can be an excellent candidate for developing intelligent systems such as speaker identification, facial recognition, signature verification, etc. Biometric technology can be used to design and develop automatic identity recognition systems, which are highly demanded and can be used in banking systems, employee identification, immigration, e-commerce, etc. The first phase of this research emphasizes the development of an automatic identity recognizer using speech biometric technology based on Artificial Intelligence (AI) techniques provided in MATLAB. For phase one, speech data were collected from 20 participants (10 male and 10 female) in order to develop the recognizer. The speech data include utterances recorded for the English-language digits (0 to 9), where each participant recorded each digit 3 times, resulting in a total of 600 utterances across all participants. For phase two, speech data were collected from 100 participants (50 male and 50 female). These data are divided into text-dependent and text-independent sets: each participant recorded his/her full name 30 times, making up the text-independent data, while the text-dependent data consist of a short Arabic-language story of 16 sentences, each recorded 5 times by every participant. As a result, this new corpus contains 3000 (30 utterances * 100 speakers) sound files representing the text-independent data (full names) and 8000 (16 sentences * 5 utterances * 100 speakers) sound files representing the text-dependent data (the short story).

For phase one of developing the automatic identity recognizer using speech, the 600 utterances underwent feature extraction and feature classification. The speech-based automatic identity recognition system is based on the most dominant feature extraction technique, the Mel-Frequency Cepstral Coefficient (MFCC). For the feature classification phase, the system is based on the Vector Quantization (VQ) algorithm. Based on our experimental results, the highest accuracy achieved is 76%. The experimental results show acceptable performance, which can be improved further in phase two using a larger speech data set and better-performing classification techniques such as the Hidden Markov Model (HMM).
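The Abushariah and Alqudah abstract above describes a speaker-identification pipeline built from MFCC features classified with Vector Quantization (VQ): each speaker gets a codebook, and a test utterance is assigned to the speaker whose codebook yields the lowest distortion. The following is a minimal sketch of that classification stage only, with random 2-D points standing in for real MFCC frames; all names are illustrative, not taken from the paper.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_codebook(vectors, k, iters=20, seed=0):
    """Toy k-means (LBG-style) codebook: k centroids for one speaker."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda c: dist2(v, codebook[c]))
            clusters[nearest].append(v)
        for i, members in enumerate(clusters):
            if members:  # recompute centroid of each non-empty cluster
                codebook[i] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return codebook

def avg_distortion(vectors, codebook):
    """Mean distance from each vector to its nearest codeword."""
    return sum(min(dist2(v, c) for c in codebook) for v in vectors) / len(vectors)

def identify(test_vectors, codebooks):
    """Pick the speaker whose codebook explains the test vectors best."""
    return min(codebooks, key=lambda spk: avg_distortion(test_vectors, codebooks[spk]))

# Synthetic 2-D "feature" clouds standing in for two speakers' MFCC frames.
rng = random.Random(1)
spk_a = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200)]
spk_b = [(rng.gauss(5, 1), rng.gauss(5, 1)) for _ in range(200)]
codebooks = {"A": train_codebook(spk_a, 4), "B": train_codebook(spk_b, 4)}

test = [(rng.gauss(5, 1), rng.gauss(5, 1)) for _ in range(50)]  # drawn like speaker B
print(identify(test, codebooks))  # B
```

In a real system the tuples would be frames of MFCC vectors extracted from audio, and the codebooks would be trained per enrolled speaker; the decision rule (minimum average distortion) is the same.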
35. Sain, Nakshatra. "Automatic Speech Recognition: Ease of Typing". International Journal for Research in Applied Science and Engineering Technology 6, no. 4 (April 30, 2018): 1932–34. http://dx.doi.org/10.22214/ijraset.2018.4331.
36. Mihajlik, Péter, Tibor Révész, and Péter Tatai. "Phonetic transcription in automatic speech recognition". Acta Linguistica Hungarica 49, no. 3–4 (November 2002): 407–25. http://dx.doi.org/10.1556/aling.49.2002.3-4.9.

37. Coro, Gianpaolo, Fabio Valerio Massoli, Antonio Origlia, and Francesco Cutugno. "Psycho-acoustics inspired automatic speech recognition". Computers & Electrical Engineering 93 (July 2021): 107238. http://dx.doi.org/10.1016/j.compeleceng.2021.107238.

38. Aldarmaki, Hanan, Asad Ullah, Sreepratha Ram, and Nazar Zaki. "Unsupervised Automatic Speech Recognition: A review". Speech Communication 139 (April 2022): 76–91. http://dx.doi.org/10.1016/j.specom.2022.02.005.

39. Sain, Nakshatra. "Automatic Speech Recognition: A Literature Survey". International Journal for Research in Applied Science and Engineering Technology 6, no. 4 (April 30, 2018): 3644–45. http://dx.doi.org/10.22214/ijraset.2018.4606.

40. Gjoreski, Martin, Hristijan Gjoreski, and Andrea Kulakov. "Automatic Recognition of Emotions from Speech". International Journal of Computational Linguistics Research 10, no. 4 (December 1, 2019): 101. http://dx.doi.org/10.6025/jcl/2019/10/4/101-107.

41. Hartwell, Walter T. "Automatic speech recognition using echo cancellation". Journal of the Acoustical Society of America 91, no. 3 (March 1992): 1791. http://dx.doi.org/10.1121/1.403754.
42. Noyes, Jan M., Chris Baber, and Andrew P. Leggatt. "Automatic Speech Recognition, Noise and Workload". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 22 (July 2000): 762–65. http://dx.doi.org/10.1177/154193120004402269.

Abstract: Despite the increasing use of technology in the developed world, most computer communications still take place via a QWERTY keyboard and a mouse. The use of Automatic Speech Recognition (ASR), whereby individuals can 'talk' to their computers, has yet to be realised to any great extent. This is despite the benefits relating to greater efficiency, use in adverse environments, and use in 'hands-eyes busy' situations. There are now affordable ASR products in the marketplace, and many people are able to buy these products and try ASR for themselves. However, anecdotal reports suggest that these same people will use ASR for a few days or weeks and then revert to conventional interaction techniques; only a hardy few appear to persist long enough to reap the benefits. Thus, it is our contention that ASR is a commercially viable technology but that it still requires further development to make a significant contribution to usability. Admittedly, there are some very successful applications that have used ASR for a number of decades, but these are often characterised by relatively small vocabularies, dedicated users, and non-threatening situations; typical applications are in offices (Noyes & Frankish, 1989) or for disabled users (Noyes & Frankish, 1992). Given that Armoured Fighting Vehicles (AFVs) could employ ASR with limited vocabulary and dedicated users, the use of ASR in this application is considered here. The principal difference between ASR for AFVs and previous applications is the environmental conditions in which the technology will be used.
43. Martinčić-Ipšić, Sanda, Miran Pobar, and Ivo Ipšić. "Croatian Large Vocabulary Automatic Speech Recognition". Automatika 52, no. 2 (January 2011): 147–57. http://dx.doi.org/10.1080/00051144.2011.11828413.

44. Bilmes, Jeff A. "Graphical models and automatic speech recognition". Journal of the Acoustical Society of America 112, no. 5 (November 2002): 2278–79. http://dx.doi.org/10.1121/1.4779134.

45. O'Shaughnessy, Douglas, and Sid‐Ahmed Selouani. "Robust automatic recognition of telephone speech". Journal of the Acoustical Society of America 113, no. 4 (April 2003): 2198. http://dx.doi.org/10.1121/1.4780179.

46. Levinson, S. E. "Structural methods in automatic speech recognition". Proceedings of the IEEE 73, no. 11 (1985): 1625–50. http://dx.doi.org/10.1109/proc.1985.13344.

47. O'Shaughnessy, Douglas. "Acoustic Analysis for Automatic Speech Recognition". Proceedings of the IEEE 101, no. 5 (May 2013): 1038–53. http://dx.doi.org/10.1109/jproc.2013.2251592.

48. Arora, Neerja. "Automatic Speech Recognition System: A Review". International Journal of Computer Applications 151, no. 1 (October 17, 2016): 24–28. http://dx.doi.org/10.5120/ijca2016911368.

49. Zhang, Shi-Xiong, and M. J. F. Gales. "Structured SVMs for Automatic Speech Recognition". IEEE Transactions on Audio, Speech, and Language Processing 21, no. 3 (March 2013): 544–55. http://dx.doi.org/10.1109/tasl.2012.2227734.

50. Zhu, Qifeng. "DYNAMIC PRUNING FOR AUTOMATIC SPEECH RECOGNITION". Journal of the Acoustical Society of America 134, no. 3 (2013): 2377. http://dx.doi.org/10.1121/1.4820207.