A selection of scientific literature on the topic "Acoustic analysis of speech"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Acoustic analysis of speech".

Next to each work in the bibliography you will find the "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are included in the work's metadata.

Journal articles on the topic "Acoustic analysis of speech"

1

Masih, Dawa A. A., Nawzad K. Jalal, Manar N. A. Mohammed, and Sulaiman A. Mustafa. "The Assessment of Acoustical Characteristics for Recent Mosque Buildings in Erbil City of Iraq." ARO - The Scientific Journal of Koya University 9, no. 1 (March 1, 2021): 51–66. http://dx.doi.org/10.14500/aro.10784.

Abstract:
The study of mosque acoustics, concerning acoustical features, sound quality for speech intelligibility, and additional practical acoustic criteria, is commonly overlooked. Acoustic quality is vital to the fundamental use of mosques, in terms of contributing toward prayers and worshippers' appreciation. This paper undertakes a comparative analysis of the acoustic quality level and the acoustical characteristics of two modern mosque buildings constructed in Erbil city. It investigates the acoustical quality and performance of these two mosques and their prayer halls through room simulation using ODEON Room Acoustics Software, to assess the degree of speech intelligibility according to acoustic criteria relative to the spatial requirements and design guidelines. The sound pressure level and other room-acoustic indicators, such as reverberation time (T30), early decay time, and speech transmission index, are tested. The outcomes demonstrate the quality of acoustics in the investigated mosques under semi-occupied and fully occupied conditions. The results indicate that the sound quality within both mosques is unsatisfactory when the loudspeakers are switched off.
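
The room-acoustic indicators named in this abstract (reverberation time T30, early decay time, speech transmission index) are all derived from a room's energy decay. Purely as a minimal illustration of that idea, and not of the ODEON simulation workflow used in the paper, the sketch below estimates a T30-style reverberation time from an impulse response via Schroeder backward integration; the sampling rate, the fit range, and the synthetic impulse response are assumptions made only so the example runs.

    import numpy as np

    def schroeder_decay_db(ir):
        """Backward-integrated (Schroeder) energy decay curve in decibels."""
        remaining_energy = np.cumsum(ir[::-1] ** 2)[::-1]   # energy left after each sample
        return 10.0 * np.log10(remaining_energy / remaining_energy[0])

    def reverberation_time_t30(ir, fs):
        """Fit the decay between -5 dB and -35 dB, then extrapolate to a 60 dB decay."""
        edc = schroeder_decay_db(ir)
        t = np.arange(len(ir)) / fs
        fit = (edc <= -5.0) & (edc >= -35.0)
        slope, _ = np.polyfit(t[fit], edc[fit], 1)           # decay rate in dB per second
        return -60.0 / slope

    if __name__ == "__main__":
        fs = 16000                                           # assumed sampling rate
        t = np.arange(0, 2.0, 1 / fs)
        ir = np.random.randn(t.size) * np.exp(-3.0 * t)      # synthetic decaying "impulse response"
        print(f"Estimated T30: {reverberation_time_t30(ir, fs):.2f} s")
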
2

Duran, Sebastian, Martyn Chambers, and Ioannis Kanellopoulos. "An Archaeoacoustics Analysis of Cistercian Architecture: The Case of the Beaulieu Abbey." Acoustics 3, no. 2 (March 26, 2021): 252–69. http://dx.doi.org/10.3390/acoustics3020018.

Abstract:
The Cistercian order is of acoustic interest because previous research has hypothesized that Cistercian architectural structures were designed for longer reverberation times in order to reinforce Gregorian chants. The presented study focused on an archaeoacoustics analysis of the Cistercian Beaulieu Abbey (Hampshire, England, UK), using Geometrical Acoustics (GA) to recreate and investigate the acoustical properties of the original structure. To construct an acoustic model of the Abbey, the building's dimensions and layout were retrieved from published archaeology research and comparison with equivalent structures. Absorption and scattering coefficients were assigned to emulate the original room surface materials' acoustic properties. CATT-Acoustics was then used to perform the acoustic analysis of the simplified building structure. Shorter reverberation times (RTs) were generally observed at higher frequencies for all the simulated scenarios. Low speech transmission index (STI) and speech clarity (C50) values were observed across the Abbey's nave section. Despite the limitation that the model could not be calibrated against in situ measurements in the original structure, the simulated acoustic performance suggests that the Abbey could have been designed to promote sacral music and chants rather than to preserve high speech intelligibility.
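
Speech clarity C50, one of the criteria reported above, has a simple definition once an impulse response is available: the ratio, in decibels, of the energy arriving within 50 ms of the direct sound to the energy arriving later. The sketch below applies that textbook definition to a synthetic decaying response; it is not the CATT-Acoustics simulation used in the study, and the onset index and test signal are assumptions for illustration only.

    import numpy as np

    def clarity_c50(ir, fs, onset=0):
        """C50: early-to-late energy ratio in dB, split 50 ms after the direct sound."""
        split = onset + int(round(0.050 * fs))
        early = np.sum(ir[onset:split] ** 2)
        late = np.sum(ir[split:] ** 2)
        return 10.0 * np.log10(early / late)

    if __name__ == "__main__":
        fs = 16000
        t = np.arange(0, 1.5, 1 / fs)
        ir = np.random.randn(t.size) * np.exp(-4.0 * t)      # synthetic impulse response
        print(f"C50 = {clarity_c50(ir, fs):.1f} dB")
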
3

Askenfelt, Anders G., and Britta Hammarberg. "Speech Waveform Perturbation Analysis." Journal of Speech, Language, and Hearing Research 29, no. 1 (March 1986): 50–64. http://dx.doi.org/10.1044/jshr.2901.50.

Abstract:
The performance of seven acoustic measures of cycle-to-cycle variations (perturbations) in the speech waveform was compared. All measures were calculated automatically and applied to running speech. Three of the measures refer to the frequency of occurrence and severity of waveform perturbations in specially selected parts of the speech, identified by means of the rate of change in the fundamental frequency. Three other measures refer to statistical properties of the distribution of the relative frequency differences between adjacent pitch periods. One perturbation measure refers to the percentage of consecutive pitch period differences with alternating signs. The acoustic measures were tested on tape-recorded speech samples from 41 voice patients, before and after successful therapy. Scattergrams of acoustic waveform perturbation data versus an average of perceived deviant voice qualities, as rated by voice clinicians, are presented. The perturbation measures were compared with regard to the acoustic-perceptual correlation and their ability to discriminate between normal and pathological voice status. The standard deviation of the distribution of the relative frequency differences was suggested as the most useful acoustic measure of waveform perturbations for clinical applications.
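
The measure the authors single out, the standard deviation of the relative fundamental-frequency differences between adjacent pitch periods, takes only a few lines once cycle-by-cycle pitch periods have been extracted. The sketch below is one plausible reading of that measure, not the authors' exact implementation; the simulated pitch periods and the normalisation by the pair mean are assumptions.

    import numpy as np

    def relative_f0_perturbation_sd(periods):
        """SD of relative F0 differences between adjacent pitch periods (periods in seconds)."""
        f0 = 1.0 / np.asarray(periods, dtype=float)               # cycle-by-cycle F0 values
        rel_diff = 2.0 * np.diff(f0) / (f0[1:] + f0[:-1])         # difference relative to the pair mean
        return float(np.std(rel_diff))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        periods = 0.008 * (1.0 + 0.01 * rng.standard_normal(200))  # ~125 Hz voice with ~1% jitter
        print(f"Relative F0 perturbation SD: {relative_f0_perturbation_sd(periods):.4f}")
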
4

Chenausky, Karen, Joel MacAuslan, and Richard Goldhor. "Acoustic Analysis of PD Speech." Parkinson's Disease 2011 (2011): 1–13. http://dx.doi.org/10.4061/2011/435232.

Abstract:
According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for whom medication produces disabling dyskinesias. This study investigated speech changes as a result of DBS settings chosen to maximize motor performance. The speech of 10 PD patients and 12 normal controls was analyzed for syllable rate and variability, syllable length patterning, vowel fraction, voice-onset time variability, and spirantization. These were normalized by the controls' standard deviation to represent distance from normal and combined into a composite measure. Results show that DBS settings relieving motor symptoms can improve speech, making it up to three standard deviations closer to normal. However, the clinically motivated settings evaluated here show greater capacity to impair, rather than improve, speech. A feedback device developed from these findings could be useful to clinicians adjusting DBS parameters, as a means for ensuring they do not unwittingly choose DBS settings which impair patients' communication.
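
The composite measure described above, each acoustic measure expressed as a distance from normal in units of the control group's standard deviation and then combined, amounts to an averaged z-score. The sketch below shows that construction in generic form; the measure names, values, and the equal weighting are assumptions for illustration, not the paper's exact recipe.

    import numpy as np

    def composite_distance_from_normal(patient, controls):
        """Average absolute z-score of one speaker's measures relative to control-group data.

        patient:  dict mapping measure name -> a single value
        controls: dict mapping measure name -> array of control-group values
        """
        z_scores = []
        for name, value in patient.items():
            ref = np.asarray(controls[name], dtype=float)
            z_scores.append(abs(value - ref.mean()) / ref.std(ddof=1))
        return float(np.mean(z_scores))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        controls = {
            "syllable_rate": rng.normal(5.0, 0.5, 12),        # syllables per second (hypothetical)
            "vot_variability": rng.normal(0.015, 0.004, 12),  # seconds (hypothetical)
        }
        patient = {"syllable_rate": 3.6, "vot_variability": 0.028}
        print(f"Composite distance from normal: {composite_distance_from_normal(patient, controls):.2f} SD")
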
5

M, Manjutha. "Acoustic Analysis of Formant Frequency Variation in Tamil Stuttered Speech." Journal of Advanced Research in Dynamical and Control Systems 12, SP7 (July 25, 2020): 2934–44. http://dx.doi.org/10.5373/jardcs/v12sp7/20202438.

6

Weedon, B., E. Hellier, J. Edworthy, and K. Walters. "Perceived Urgency in Speech Warnings." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 22 (July 2000): 690–93. http://dx.doi.org/10.1177/154193120004402251.

Abstract:
Two experiments are reported that investigate the effects of acoustics and semantics in verbal warnings. In the first experiment subjects rated the urgency of warning signal words spoken in different presentation styles (URGENT, NON-URGENT, MONOTONE). Significant differences in urgency ratings were found between presentation styles. Acoustic analysis revealed how acoustic parameters differed within these different presentation styles. These acoustic measurements were used to construct synthesised speech warnings that differed in urgency. They were rated in experiment 2 and the predicted differences between the urgency of the words were found. These studies indicate that urgency in natural speech is produced by alterations in a few acoustic parameters and that these alterations can easily be incorporated into synthetic speech to reproduce variations in urgency.
7

Keller, Eric, Patrick Vigneux, and Martine Laframboise. "Acoustic analysis of neurologically impaired speech." International Journal of Language & Communication Disorders 26, no. 1 (January 1991): 75–94. http://dx.doi.org/10.3109/13682829109011993.

8

Thakore, Jogin, Viliam Rapcan, Shona Darcy, Sherlyn Yeap, Natasha Afzal, and Richard Reilly. "Acoustic and temporal analysis of speech." International Clinical Psychopharmacology 26 (September 2011): e131. http://dx.doi.org/10.1097/01.yic.0000405855.63819.e2.

9

Sondhi, Savita, Munna Khan, Ritu Vijay, Ashok K. Salhan, and Satish Chouhan. "Acoustic analysis of speech under stress." International Journal of Bioinformatics Research and Applications 11, no. 5 (2015): 417. http://dx.doi.org/10.1504/ijbra.2015.071942.

10

O'Shaughnessy, Douglas. "Acoustic Analysis for Automatic Speech Recognition." Proceedings of the IEEE 101, no. 5 (May 2013): 1038–53. http://dx.doi.org/10.1109/jproc.2013.2251592.


Dissertations and theses on the topic "Acoustic analysis of speech"

1

John, Jeeva. "Acoustic Analysis of Speech of Persons with Autistic Spectrum Disorders." Bowling Green State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1206329066.

2

Nulsen, Susan. "Combining acoustic analysis and phonotactic analysis to improve automatic speech recognition." University of Canberra, Information Sciences & Engineering, 1998. http://erl.canberra.edu.au./public/adt-AUC20060825.131042.

Abstract:
This thesis addresses the problem of automatic speech recognition, specifically, how to transform an acoustic waveform into a string of words or phonemes. A preliminary chapter gives linguistic information potentially useful in automatic speech recognition. This is followed by a description of the Wave Analysis Laboratory (WAL), a rule-based system which detects features in speech and was designed as the acoustic front end of a speech recognition system. Temporal reasoning as used in WAL rules is examined. The use of WAL in recognizing one particular class of speech sounds, the nasal consonants, is described in detail. The remainder of the thesis looks at the statistical analysis of samples of spontaneous speech. An orthographic transcription of a large sample of spontaneous speech is automatically translated into phonemes. Tables of the frequencies of word-initial and word-final phoneme clusters are constructed to illustrate some of the phonotactic constraints of the language. Statistical data are used to assign phonemes to phonotactic classes. These classes are unlike the acoustic classes, although there is a general distinction between the vowels, the consonants and the word boundary. A way of measuring the phonetic balance of a sample of speech is described. This can be used as a means of ranking potential test samples in terms of how well they represent the language. A phoneme n-gram model is used to measure the entropy of the language. The broad acoustic encoding output from WAL is used with this language model to reconstruct a small test sample. "Branching", a simpler alternative to perplexity, is introduced and found to give similar results. Finally, the drop in branching is calculated as knowledge of various sets of acoustic classes is considered. In the work described in this thesis, the main contributions made to automatic speech recognition and the study of speech are in the development of the Wave Analysis Laboratory and in the analysis of speech from a phonotactic point of view. The phoneme cluster frequencies provide new information on spoken language, as do the phonotactic classes. The measures of phonetic balance and branching provide additional tools for use in the development of speech recognition systems.
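
The phoneme n-gram model and the entropy and perplexity figures mentioned in this abstract follow the standard language-modelling recipe. As a toy illustration only (an add-alpha smoothed bigram over a made-up phoneme string, not the thesis's corpus, phoneme inventory, or its "branching" measure), the sketch below computes per-phoneme cross-entropy and perplexity.

    import math
    from collections import Counter

    def bigram_perplexity(train, test, alpha=1.0):
        """Per-phoneme perplexity of `test` under an add-alpha smoothed bigram model."""
        vocab = set(train) | set(test)
        context_counts = Counter(train[:-1])
        bigram_counts = Counter(zip(train[:-1], train[1:]))

        def p(prev, cur):
            return (bigram_counts[(prev, cur)] + alpha) / (context_counts[prev] + alpha * len(vocab))

        log_prob = sum(math.log2(p(prev, cur)) for prev, cur in zip(test[:-1], test[1:]))
        cross_entropy = -log_prob / (len(test) - 1)               # bits per phoneme
        return 2.0 ** cross_entropy, cross_entropy

    if __name__ == "__main__":
        train = list("#kat#sat#matters#")     # toy "phoneme" string; '#' marks word boundaries
        test = list("#mat#")
        perplexity, entropy = bigram_perplexity(train, test)
        print(f"cross-entropy: {entropy:.2f} bits/phoneme, perplexity: {perplexity:.2f}")
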
3

Brock, James L. "Acoustic classification using independent component analysis." Link to online version, 2006. https://ritdml.rit.edu/dspace/handle/1850/2067.

4

Singh-Miller, Natasha. "Neighborhood analysis methods in acoustic modeling for automatic speech recognition." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62450.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 121-134).
This thesis investigates the problem of using nearest-neighbor based non-parametric methods for performing multi-class class-conditional probability estimation. The methods developed are applied to the problem of acoustic modeling for speech recognition. Neighborhood components analysis (NCA) (Goldberger et al. [2005]) serves as the departure point for this study. NCA is a non-parametric method that can be seen as providing two things: (1) low-dimensional linear projections of the feature space that allow nearest-neighbor algorithms to perform well, and (2) nearest-neighbor based class-conditional probability estimates. First, NCA is used to perform dimensionality reduction on acoustic vectors, a commonly addressed problem in speech recognition. NCA is shown to perform competitively with another commonly employed dimensionality reduction technique in speech known as heteroscedastic linear discriminant analysis (HLDA) (Kumar [1997]). Second, a nearest neighbor-based model related to NCA is created to provide a class-conditional estimate that is sensitive to the possible underlying relationship between the acoustic-phonetic labels. An embedding of the labels is learned that can be used to estimate the similarity or confusability between labels. This embedding is related to the concept of error-correcting output codes (ECOC) and therefore the proposed model is referred to as NCA-ECOC. The estimates provided by this method, along with nearest neighbor information, are shown to provide improvements in speech recognition performance (2.5% relative reduction in word error rate). Third, a model for calculating class-conditional probability estimates is proposed that generalizes GMM, NCA, and kernel density approaches. This model, called locally-adaptive neighborhood components analysis, LA-NCA, learns different low-dimensional projections for different parts of the space. The model exploits the fact that in different parts of the space different directions may be important for discrimination between the classes. This model is computationally intensive and prone to over-fitting, so methods for sub-selecting neighbors used for providing the class-conditional estimates are explored. The estimates provided by LA-NCA are shown to give significant gains in speech recognition performance (7-8% relative reduction in word error rate) as well as in phonetic classification.
by Natasha Singh-Miller.
Ph.D.
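
Plain neighbourhood components analysis, the departure point of this thesis, is available off the shelf in scikit-learn, which makes the basic idea easy to demonstrate: learn a low-dimensional linear projection under which k-nearest-neighbour classification and class-conditional probability estimation work well. The sketch below uses the stock digits dataset purely as stand-in data; it does not reproduce the acoustic features, the NCA-ECOC label embedding, or the LA-NCA model developed in the thesis.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
    from sklearn.pipeline import Pipeline

    # Stand-in data; in the thesis the inputs would be acoustic vectors with phonetic labels.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Learn a discriminative low-dimensional projection, then classify with k-NN;
    # predict_proba gives nearest-neighbour based class-conditional estimates.
    model = Pipeline([
        ("nca", NeighborhoodComponentsAnalysis(n_components=8, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=15)),
    ])
    model.fit(X_train, y_train)
    print("held-out accuracy:", round(model.score(X_test, y_test), 3))
    print("class posteriors for one sample:", model.predict_proba(X_test[:1]).round(2))
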
5

Williams, A. Lynn. "Phonologic and Acoustic Analyses of Final Consonant Omission." Digital Commons @ East Tennessee State University, 1998. https://dc.etsu.edu/etsu-works/2008.

Abstract:
Acoustic analyses have recently been brought to bear on the phonological error pattern of final consonant omission. The results from such acoustic analyses have generally supported the correctness of the phonological analyses. The purpose of this report is to present seemingly conflicting results from a generative phonological analysis and an acoustic analysis of one misarticulating child who omitted word-final obstruents. The apparent conflict is resolved in terms of two possible explanations with differing treatment implications.
6

Lee, Matthew E. "Acoustic Models for the Analysis and Synthesis of the Singing Voice." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6859.

Abstract:
Throughout our history, the singing voice has been a fundamental tool for musical expression. While analysis and digital synthesis techniques have been developed for normal speech, few models and techniques have been focused on the singing voice. The central theme of this research is the development of models aimed at the characterization and synthesis of the singing voice. First, a spectral model is presented in which asymmetric generalized Gaussian functions are used to represent the formant structure of a singing voice in a flexible manner. Efficient methods for searching the parameter space are investigated and challenges associated with smooth parameter trajectories are discussed. Next a model for glottal characterization is introduced by first presenting an analysis of the relationship between measurable spectral qualities of the glottal waveform and perceptually relevant time-domain parameters. A mathematical derivation of this relationship is presented and is extended as a method for parameter estimation. These concepts are then used to outline a procedure for modifying glottal textures and qualities in the frequency domain. By combining these models with the Analysis-by-Synthesis/Overlap-Add sinusoidal model, the spectral and glottal models are shown to be capable of characterizing the singing voice according to traits such as level of training and registration. An application is presented in which these parameterizations are used to implement a system for singing voice enhancement. Subjective listening tests were conducted in which listeners showed an overall preference for outputs produced by the proposed enhancement system over both unmodified voices and voices enhanced with competitive methods.
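
The spectral model sketched in this abstract builds a formant envelope from asymmetric generalized Gaussian functions, i.e. peaks whose width parameter differs below and above the formant centre. The snippet below is only a guessed, minimal parametrisation of such a function (the thesis's actual parameter set and fitting procedure are not reproduced here), with made-up formant values.

    import numpy as np

    def asymmetric_generalized_gaussian(freqs, centre, amplitude, width_left, width_right, shape):
        """One formant peak: a generalized Gaussian with different widths on each side of the centre."""
        freqs = np.asarray(freqs, dtype=float)
        width = np.where(freqs < centre, width_left, width_right)
        return amplitude * np.exp(-np.abs((freqs - centre) / width) ** shape)

    if __name__ == "__main__":
        freqs = np.linspace(0.0, 4000.0, 801)
        # Hypothetical two-formant envelope; parameter values are chosen for illustration only.
        envelope = (asymmetric_generalized_gaussian(freqs, 700.0, 1.0, 80.0, 180.0, 2.0)
                    + asymmetric_generalized_gaussian(freqs, 1200.0, 0.6, 100.0, 250.0, 2.0))
        print(f"Envelope peaks near {freqs[np.argmax(envelope)]:.0f} Hz")
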
7

Ng, So-sum. "Acoustic analysis of contour tones produced by Cantonese dysarthric speakers." E-thesis, The University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record/B36208024.

Abstract:
Thesis (B.Sc)--University of Hong Kong, 2001.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, May 4, 2001." Also available in print.
8

Srinivasan, Nandini. "Acoustic Analysis of English Vowels by Young Spanish-English Bilingual Language Learners." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10815722.

Abstract:

Several studies across various languages have shown that monolingual listeners perceive significant differences between the speech of monolinguals and bilinguals. However, these differences may not always affect the phoneme category as identified by the listener or the speaker; differences may often be found between tokens corresponding to unique phonological categories and, as such, be more easily detectable through acoustic analysis. We hypothesized that unshared English vowels produced by young Spanish-English bilinguals would have measurably different formant values and duration than the same vowels produced by young English monolinguals because of Spanish influence on English phonology. We did not find significant differences in formant values between the two groups, but we found that Spanish-English bilinguals produced certain vowels with longer duration than English monolinguals. Our findings add to the ever-growing body of literature on bilingual language acquisition and the perception of accentedness.

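
Formant values and vowel durations of the kind compared in this study are routinely measured with Praat; in Python, the praat-parselmouth package exposes the same routines. The sketch below assumes that package is installed and that "vowel.wav" is a hypothetical recording of a single vowel token; the midpoint measurement and the formant-tracking ceiling are assumptions for illustration, not the study's protocol.

    import parselmouth  # praat-parselmouth package (assumed installed)

    snd = parselmouth.Sound("vowel.wav")           # hypothetical single-vowel recording
    duration = snd.get_total_duration()            # vowel duration in seconds
    midpoint = duration / 2.0

    # Burg-method formant tracking with a Praat-style 5500 Hz ceiling.
    formants = snd.to_formant_burg(maximum_formant=5500.0)
    f1 = formants.get_value_at_time(1, midpoint)   # first formant (Hz) at the vowel midpoint
    f2 = formants.get_value_at_time(2, midpoint)   # second formant (Hz)

    print(f"duration = {duration * 1000:.0f} ms, F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
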
9

Odlozinski, Lisa M. "An acoustic analysis of speech rate control procedures in Parkinson's disease." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0004/MQ30738.pdf.

10

Cao, Ying Alisa. "Analysis of acoustic cues for identifying consonant /ð/ in continuous speech." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87279.


Books on the topic "Acoustic analysis of speech"

1

Kent, Raymond D. The acoustic analysis of speech. San Diego: Singular, 1996.

2

Read, Charles, ed. The acoustic analysis of speech. 2nd ed. Australia: Singular/Thomson Learning, 2002.

3

Read, Charles, ed. The acoustic analysis of speech. San Diego, Calif.: Singular Pub. Group, 1992.

4

Kent, Raymond D. The acoustic analysis of speech. London: Whurr, 1992.

5

Patryn, Ryszard. Phonetic-acoustic analysis of Polish speech sounds. Warszawa: Wydawnictwa Uniwersytetu Warszawskiego, 1987.

6

Chuang, Ming-Fei. Interactive tools for sound signal analysis. Monterey, Calif: Naval Postgraduate School, 1997.

7

Harrington, Jonathan. Techniques in speech acoustics. Dordrecht: Kluwer Academic Publishers, 1999.

8

Cassidy, Steve, ed. Techniques in speech acoustics. Dordrecht: Kluwer Academic Publishers, 1999.

9

Bolla, Kálmán. A phonetic conspectus of English: The articulatory and acoustic features of British English speech sounds. Budapest: Linguistics Institute of the Hungarian Academy of Sciences, 1989.

10

Schuller, Björn W. Intelligent Audio Analysis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.


Book chapters on the topic "Acoustic analysis of speech"

1

Verkhodanova, Vasilisa, Vladimir Shapranov, and Irina Kipyatkova. "Hesitations in Spontaneous Speech: Acoustic Analysis and Detection." In Speech and Computer, 398–406. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66429-3_39.

2

Fant, Gunnar, Anita Kruckenberg, and Johan Liljencrants. "Acoustic-phonetic Analysis of Prominence in Swedish." In Text, Speech and Language Technology, 55–86. Dordrecht: Springer Netherlands, 2000. http://dx.doi.org/10.1007/978-94-011-4317-2_3.

3

Howell, Peter, Mark Williams, and Louise Vause. "Acoustic Analysis of Repetitions in Stutterers' Speech." In Speech Motor Dynamics in Stuttering, 371–80. Vienna: Springer Vienna, 1987. http://dx.doi.org/10.1007/978-3-7091-6969-8_29.

4

de Cheveigné, Alain. "The Cancellation Principle in Acoustic Scene Analysis." In Speech Separation by Humans and Machines, 245–59. Boston, MA: Springer US, 2005. http://dx.doi.org/10.1007/0-387-22794-6_16.

5

Li, Aijun. "Acoustic and Articulatory Analysis of Emotional Vowels." In Encoding and Decoding of Emotional Speech, 109–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47691-8_4.

6

Fant, Gunnar. "Acoustical Analysis of Speech." In Encyclopedia of Acoustics, 1589–98. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2007. http://dx.doi.org/10.1002/9780470172544.ch127.

7

Bauer, Dominik, Jim Kannampuzha, and Bernd J. Kröger. "Articulatory Speech Re-synthesis: Profiting from Natural Acoustic Speech Data." In Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, 344–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03320-9_32.

8

Drugman, Thomas, Myriam Rijckaert, George Lawson, and Marc Remacle. "Analysis and Quantification of Acoustic Artefacts in Tracheoesophageal Speech." In Advances in Nonlinear Speech Processing, 104–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38847-7_14.

9

Ludeña-Choez, Jimmy, and Ascensión Gallardo-Antolín. "NMF-Based Spectral Analysis for Acoustic Event Classification Tasks." In Advances in Nonlinear Speech Processing, 9–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38847-7_2.

10

Cui, Dandan, and Lianhong Cai. "Acoustic and Physiological Feature Analysis of Affective Speech." In Lecture Notes in Computer Science, 912–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-37275-2_114.


Conference papers on the topic "Acoustic analysis of speech"

1

Pucher, Michael, and Dietmar Schabus. "Visio-articulatory to acoustic conversion of speech." In FAA '15: Facial Analysis and Animation. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2813852.2813858.

2

Itoh, Taisuke, Kazuya Takeda, and Fumitada Itakura. "Acoustic analysis and recognition of whispered speech." In Proceedings of ICASSP '02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.5743736.

3

Itoh, Takeda, and Itakura. "Acoustic analysis and recognition of whispered speech." In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-02). IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.1005758.

4

Hakim, Faisal Abdul, Miranti Indar Mandasari, Joko Sarwono, Khairurrijal, Mikrajuddin Abdullah, Wahyu Srigutomo, Sparisoma Viridi, and Novitrian. "Acoustic Speech Analysis of Wayang Golek Puppeteer." In The 4th Asian Physics Symposium—An International Symposium. AIP, 2010. http://dx.doi.org/10.1063/1.3537939.

5

Krishnamurthy, Nitish, and John H. L. Hansen. "Speech babble: Analysis and modeling for speech systems." In ICASSP 2008, IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4518657.

6

Fan, Xing, and John H. L. Hansen. "Acoustic analysis for speaker identification of whispered speech." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5495059.

7

Castellanos, G., G. Daza, L. Sanchez, O. Castrillon, and J. Suarez. "Acoustic Speech Analysis for Hypernasality Detection in Children." In Conference Proceedings, Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2006. http://dx.doi.org/10.1109/iembs.2006.260572.

8

Castellanos, G., G. Daza, L. Sanchez, O. Castrillon, and J. Suarez. "Acoustic Speech Analysis for Hypernasality Detection in Children." In Conference Proceedings, Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2006. http://dx.doi.org/10.1109/iembs.2006.4398702.

9

Berg, Yana A., Anastasia V. Nenko, and Daria V. Borovikova. "Analysis of Acoustic Parameters of the Speech Apparatus." In 2020 21st International Conference of Young Specialists on Micro/Nanotechnologies and Electron Devices (EDM). IEEE, 2020. http://dx.doi.org/10.1109/edm49804.2020.9153533.

10

Geethashree, A., and D. J. Ravi. "Acoustic and Spectral Analysis of Kannada Emotional Speech." In Third International Conference on Current Trends in Engineering Science and Technology ICCTEST-2017. Grenze Scientific Society, 2017. http://dx.doi.org/10.21647/icctest/2017/48934.


Reports of organizations on the topic "Acoustic analysis of speech"

1

Colosi, John A. An Analysis of Long-Range Acoustic Propagation Fluctuations and Upper Ocean Sound Speed Variability. Fort Belvoir, VA: Defense Technical Information Center, December 2005. http://dx.doi.org/10.21236/ada441242.

2

Colosi, John A. An Analysis of Long-Range Acoustic Propagation Fluctuations and Upper Ocean Sound Speed Variability. Fort Belvoir, VA: Defense Technical Information Center, September 2003. http://dx.doi.org/10.21236/ada629913.

3

Colosi, John A. An Analysis of Long-Range Acoustic Propagation Fluctuations and Upper Ocean Sound Speed Variability. Fort Belvoir, VA: Defense Technical Information Center, September 2001. http://dx.doi.org/10.21236/ada625607.

4

Colosi, John A., and Jinshan Xu. An Analysis of Upper Ocean Sound Speed Variability and its Effects on Long-Range Acoustic Fluctuations Observed for the North Pacific Acoustic Laboratory. Fort Belvoir, VA: Defense Technical Information Center, July 2006. http://dx.doi.org/10.21236/ada450109.

5

Colosi, John A. Analysis and Modeling of Ocean Acoustic Fluctuations and Moored Observations of Philippine Sea Sound-Speed Structure. Fort Belvoir, VA: Defense Technical Information Center, September 2009. http://dx.doi.org/10.21236/ada531640.

6

Colosi, John A. Analysis and Modeling of Ocean Acoustic Fluctuations and Moored Observations of Philippine Sea Sound-Speed Structure. Fort Belvoir, VA: Defense Technical Information Center, September 2011. http://dx.doi.org/10.21236/ada571573.

7

Colosi, John A. Analysis and Modeling of Ocean Acoustic Fluctuations and Moored Observations of Philippine Sea Sound-Speed Structure. Fort Belvoir, VA: Defense Technical Information Center, September 2012. http://dx.doi.org/10.21236/ada574824.

8

Ostendorf, Mari, and J. R. Rohlicek. Segment-Based Acoustic Models for Continuous Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, December 1992. http://dx.doi.org/10.21236/ada259780.

9

Brown, Peter F. The Acoustic-Modeling Problem in Automatic Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, December 1987. http://dx.doi.org/10.21236/ada188529.

10

Ostendorf, Mari, and J. R. Rohlicek. Segment-Based Acoustic Models for Continuous Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, February 1994. http://dx.doi.org/10.21236/ada276109.
