Ready-made bibliography on the topic "Acoustic-Articulatory Mapping"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

See lists of up-to-date articles, books, dissertations, abstracts, and other scholarly sources on the topic "Acoustic-Articulatory Mapping".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the work's metadata.

Journal articles on the topic "Acoustic-Articulatory Mapping"

1

Zussa, F., Q. Lin, G. Richard, D. Sinder, and J. Flanagan. "Open-loop acoustic-to-articulatory mapping". Journal of the Acoustical Society of America 98, no. 5 (November 1995): 2931. http://dx.doi.org/10.1121/1.414151.
2

Riegelsberger, Edward L., and Ashok K. Krishnamurthy. "Acoustic-to-articulatory mapping of fricatives". Journal of the Acoustical Society of America 97, no. 5 (May 1995): 3417. http://dx.doi.org/10.1121/1.412480.
3

Ananthakrishnan, G., and Olov Engwall. "Mapping between acoustic and articulatory gestures". Speech Communication 53, no. 4 (April 2011): 567–89. http://dx.doi.org/10.1016/j.specom.2011.01.009.
4

Sepulveda-Sepulveda, Alexander, and German Castellanos-Domínguez. "Time-Frequency Energy Features for Articulator Position Inference on Stop Consonants". Ingeniería y Ciencia 8, no. 16 (November 30, 2012): 37–56. http://dx.doi.org/10.17230/ingciencia.8.16.2.

Abstract:
Acoustic-to-articulatory inversion offers new perspectives and interesting applications in the speech processing field; however, it remains an open issue. This paper presents a method to estimate the distribution of the articulatory information contained in the acoustics of stop consonants, whose parametrization is achieved using the wavelet packet transform. The main focus is on measuring the acoustic information relevant, in terms of statistical association, to inferring the position of the critical articulators involved in stop consonant production. The Kendall rank correlation coefficient is used as the relevance measure. Maps of relevant time–frequency features are calculated for the MOCHA–TIMIT database, from which stop consonants are extracted and analysed. The proposed method obtains a set of time–frequency components closely related to articulatory phenomena, which offers a deeper understanding of the relationship between the articulatory and acoustic domains. The relevance maps are tested in an acoustic-to-articulatory mapping system based on Gaussian mixture models, where they are shown to improve the performance of such a system on stop consonants. The method could be extended to other manner-of-articulation categories, e.g. fricatives, in order to adapt it to acoustic-to-articulatory mapping over the whole of speech.
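The relevance measure described in the abstract above can be sketched in a few lines. This is an illustrative example only, not the authors' code: the "wavelet-packet energies" and "articulator position" below are synthetic stand-ins, and only the use of Kendall's rank correlation as a feature-relevance score follows the abstract.

```python
# Illustrative sketch (not the paper's code): rank time-frequency energy
# features by their Kendall rank correlation with an articulator position.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n_frames, n_features = 200, 8
# Stand-in for wavelet-packet energies of stop-consonant frames.
features = rng.normal(size=(n_frames, n_features))
# Synthetic "tongue-tip height" that depends mostly on feature 2.
articulator = 0.8 * features[:, 2] + rng.normal(scale=0.5, size=n_frames)

# |tau| per feature: strength of statistical association with the trace.
relevance = np.array([abs(kendalltau(features[:, j], articulator)[0])
                      for j in range(n_features)])
most_relevant = int(np.argmax(relevance))
```

A map of such scores over time–frequency components is what the paper feeds into its GMM-based inversion system.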
5

Sorokin, V. N., and A. V. Trushkin. "Articulatory-to-acoustic mapping for inverse problem". Speech Communication 19, no. 2 (August 1996): 105–18. http://dx.doi.org/10.1016/0167-6393(96)00028-3.
6

Wu, Zhiyong, Kai Zhao, Xixin Wu, Xinyu Lan, and Helen Meng. "Acoustic to articulatory mapping with deep neural network". Multimedia Tools and Applications 74, no. 22 (August 1, 2014): 9889–907. http://dx.doi.org/10.1007/s11042-014-2183-z.
7

Riegelsberger, Edward L., and Ashok K. Krishnamurthy. "Acoustic-to-articulatory mapping of isolated and intervocalic fricatives". Journal of the Acoustical Society of America 101, no. 5 (May 1997): 3175. http://dx.doi.org/10.1121/1.419149.
8

Atal, Bishnu. "A study of ambiguities in the acoustic-articulatory mapping". Journal of the Acoustical Society of America 122, no. 5 (2007): 3079. http://dx.doi.org/10.1121/1.2942998.
9

McGowan, Richard S., and Michael A. Berger. "Acoustic-articulatory mapping in vowels by locally weighted regression". Journal of the Acoustical Society of America 126, no. 4 (2009): 2011. http://dx.doi.org/10.1121/1.3184581.
10

Schmidt, Anna Marie. "Korean to English articulatory mapping: Palatometric and acoustic data". Journal of the Acoustical Society of America 95, no. 5 (May 1994): 2820–21. http://dx.doi.org/10.1121/1.409681.

Doctoral dissertations on the topic "Acoustic-Articulatory Mapping"

1

Lo, Boon Hooi. "An acoustic model for speech recognition with an articulatory layer and non-linear articulatory-to-acoustic mapping". Thesis, University of Birmingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.633225.

Abstract:
This thesis presents an extended hidden Markov model (HMM), namely the linear/non-linear multi-level segmental hidden Markov model (linear/non-linear MSHMM). In the MSHMM framework, the relationship between symbolic and acoustic representations of a speech signal is regulated by an intermediate, articulatory-based layer. Such an approach has many potential advantages for speech pattern processing. By modelling speech dynamics directly in an articulatory domain, it may be possible to characterise the articulatory phenomena which give rise to variability in speech. The intermediate representations are based on the first three formant frequencies. The speech dynamics in the formant representation of each segment are modelled as fixed linear trajectories which characterise the distribution of formant frequencies. These trajectories are mapped into the acoustic feature space by a set of one or more non-linear mappings; hence the name linear/non-linear MSHMM. This thesis describes work developing a non-linear transformation approach using a non-linear radial basis function (RBF) network for the articulatory-to-acoustic mapping. An RBF network consists of a number of hidden units, each associated with a 'Gaussian-like' distribution, and mapping weights for the linear transform component of the network. The thesis presents the training and optimisation processes for the parameters of the RBF network. The linear/non-linear MSHMMs, which form the basis for the thesis, are incorporated into an automatic speech recognition system. A gradient descent process is used to find the optimal parameters of the linear trajectory models during Viterbi training. Phone classification experiments are presented for monophone MSHMMs using the TIMIT database. The linear/non-linear MSHMM is compared with the linear/linear MSHMM, where both the model of dynamics and the articulatory-to-acoustic mappings are linear. The comparison shows no statistically significant difference in performance between the two models.
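The RBF articulatory-to-acoustic mapping described in this abstract can be sketched minimally as Gaussian hidden units over a formant vector with linear output weights fit by least squares. Everything below is an assumption for illustration (the formant ranges, the synthetic acoustic target, the number of centres, and the shared width); it is not the thesis implementation.

```python
# Minimal RBF-network sketch of an articulatory-to-acoustic mapping:
# Gaussian hidden units over 3-formant inputs, linear output weights
# fit by least squares. All data and hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic formant trajectories (F1, F2, F3) in Hz.
formants = rng.uniform([300, 900, 2200], [900, 2500, 3500], size=(500, 3))
acoustic = np.sin(formants / 1000.0)  # stand-in nonlinear acoustic target

centers = formants[rng.choice(len(formants), size=20, replace=False)]
width = 600.0  # shared Gaussian width (Hz), an assumed hyperparameter

def hidden(x):
    """'Gaussian-like' activations of each hidden unit, plus a bias column."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    h = np.exp(-d2 / (2 * width ** 2))
    return np.hstack([h, np.ones((len(x), 1))])

# Linear-transform component of the network, fit in closed form.
W, *_ = np.linalg.lstsq(hidden(formants), acoustic, rcond=None)
pred = hidden(formants) @ W
mse = float(((pred - acoustic) ** 2).mean())
```

In the thesis the RBF parameters are trained and optimised rather than fixed, and the mapping sits inside the MSHMM recogniser; the closed-form fit here only illustrates the network's structure.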
2

Riegelsberger, Edward L. "The acoustic-to-articulatory mapping of voiced and fricated speech". The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu148794750113335.
3

Altun, Halis. "Evaluation of neural learning in a MLP NN for an acoustic-to-articulatory mapping problem using different training pattern vector characteristics". Thesis, University of Nottingham, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263405.

4

Illa, Aravind. "Acoustic-Articulatory Mapping: Analysis and Improvements with Neural Network Learning Paradigms". Thesis, 2020. https://etd.iisc.ac.in/handle/2005/5525.

Abstract:
Human speech is one of many acoustic signals we perceive, and it carries linguistic and paralinguistic (e.g., speaker identity, emotional state) information. Speech acoustics are produced as a result of different temporally overlapping gestures of the speech articulators (such as the lips, tongue tip, tongue body, tongue dorsum, velum, and larynx), each of which regulates constriction in a different part of the vocal tract. Estimating speech acoustic representations from articulatory movements is known as articulatory-to-acoustic forward (AAF) mapping, i.e., articulatory speech synthesis, while estimating articulatory movements back from the speech acoustics is known as acoustic-to-articulatory inverse (AAI) mapping. These acoustic-articulatory mapping functions are known to be complex and nonlinear. The complexity of this mapping depends on a number of factors, including the kind of representations used in the acoustic and articulatory spaces. Typically these representations capture both linguistic and paralinguistic aspects of speech; how each of these aspects contributes to the complexity of the mapping is unknown. The representations, and in turn the acoustic-articulatory mapping, are also affected by the speaking rate. The nature and quality of the mapping vary across speakers, so the complexity of the mapping also depends on the amount of data from a speaker as well as the number of speakers used in learning the mapping function. Further, how language variation impacts the mapping requires detailed investigation. This thesis analyzes several such factors in detail and develops neural-network-based models to learn mapping functions robust to many of them. Electromagnetic articulography (EMA) sensor data has been used directly in the past as the articulatory representation for learning the acoustic-articulatory mapping function.

In this thesis, we address the problem of optimal EMA sensor placement such that the air-tissue boundaries seen in the mid-sagittal plane of real-time magnetic resonance imaging (rtMRI) are reconstructed with minimum error. Following the optimal sensor placement work, acoustic-articulatory data was collected using EMA from 41 subjects with speech stimuli in English and Indian native languages (Hindi, Kannada, Tamil, and Telugu), resulting in a total of ∼23 hours of data used in this thesis. Representations from the raw waveform are also learned for the AAI task using convolutional and bidirectional long short-term memory neural networks (CNN-BLSTM), where the learned CNN filters are found to be similar to those used for computing Mel-frequency cepstral coefficients (MFCCs), typically used for the AAI task. To examine the extent to which a representation carrying only linguistic information can recover articulatory representations, we replace MFCC vectors with one-hot encoded vectors representing phonemes, further modified to remove the time duration of each phoneme and keep only the phoneme sequence. Experiments with the phoneme sequence using an attention network achieve AAI performance identical to that using phonemes with timing information, while there is a drop in performance compared to that using MFCCs. Experiments examining variation in speaking rate reveal that the errors in estimating the vertical motion of the tongue articulators from acoustics at a fast speaking rate are significantly higher than those at a slow speaking rate. To reduce the demand for data from a speaker, low-resource AAI is proposed using a transfer learning approach. Further, we show that AAI can learn the acoustic-articulatory mappings of multiple speakers through a single model rather than separate speaker-specific models. This is achieved by conditioning the AAI model on speaker embeddings, which benefits AAI in both seen- and unseen-speaker evaluations. Finally, we show the benefit of estimated articulatory representations in voice conversion applications. Experiments revealed that articulatory representations estimated by speaker-independent AAI preserve linguistic information and suppress speaker-dependent factors. These articulatory representations (from an unseen speaker and language) are used to drive a target-speaker-specific AAF to synthesize speech, which preserves the linguistic information and the target speaker's voice characteristics.
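The speaker-embedding conditioning idea from this abstract can be illustrated with a toy model: one mapping shared across speakers, conditioned by concatenating a per-speaker embedding to each acoustic frame. A linear least-squares map stands in for the thesis's CNN-BLSTM, and all dimensions, embeddings, and data below are synthetic assumptions.

```python
# Toy sketch of speaker-conditioned AAI: a single multi-speaker model,
# conditioned by appending a speaker embedding to each acoustic frame.
# A linear map stands in for the BLSTM; all data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_speakers, frames, n_mfcc, emb_dim, n_art = 4, 100, 13, 3, 6
spk_emb = rng.normal(size=(n_speakers, emb_dim))   # stand-in for learned embeddings
true_W = 0.3 * rng.normal(size=(n_mfcc + emb_dim, n_art))

X, Y = [], []
for s in range(n_speakers):
    mfcc = rng.normal(size=(frames, n_mfcc))
    inp = np.hstack([mfcc, np.tile(spk_emb[s], (frames, 1))])  # condition on speaker
    X.append(inp)
    Y.append(inp @ true_W)                         # synthetic articulator trajectories
X, Y = np.vstack(X), np.vstack(Y)

# One shared AAI model fit across all speakers' data at once.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

Because the embedding columns differ per speaker, the single weight matrix can account for speaker-dependent offsets, which is the intuition behind conditioning one AAI model on speaker embeddings instead of training separate speaker-specific models.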

Books on the topic "Acoustic-Articulatory Mapping"

1

The Acoustic-to-Articulatory Mapping of Voiced and Fricated Speech. Storming Media, 1997.


Book chapters on the topic "Acoustic-Articulatory Mapping"

1

Schoentgen, Jean. "Speech Modelling Based on Acoustic-to-Articulatory Mapping". In Nonlinear Speech Modeling and Applications, 114–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11520153_6.
2

Klein, Eugen, Jana Brunner, and Phil Hoole. "Which Factors Can Explain Individual Outcome Differences When Learning a New Articulatory-to-Acoustic Mapping?" In Studies on Speech Production, 158–72. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00126-1_15.

Conference papers on the topic "Acoustic-Articulatory Mapping"

1

Prado, P. P. L., E. H. Shiva, and D. G. Childers. "Optimization of acoustic-to-articulatory mapping". In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.226127.
2

Potard, Blaise, and Yves Laprie. "Compact representations of the articulatory-to-acoustic mapping". In Interspeech 2007. ISCA: ISCA, 2007. http://dx.doi.org/10.21437/interspeech.2007-660.
3

Jospa, Paul, and Alain Soquet. "The acoustic-articulatory mapping and the variational method". In 3rd International Conference on Spoken Language Processing (ICSLP 1994). ISCA: ISCA, 1994. http://dx.doi.org/10.21437/icslp.1994-151.
4

Ananthakrishnan, G., and Olov Engwall. "Resolving non-uniqueness in the acoustic-to-articulatory mapping". In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947386.
5

Toda, Tomoki, Alan Black, and Keiichi Tokuda. "Acoustic-to-articulatory inversion mapping with Gaussian mixture model". In Interspeech 2004. ISCA: ISCA, 2004. http://dx.doi.org/10.21437/interspeech.2004-410.
6

Canevari, Claudia, Leonardo Badino, Luciano Fadiga, and Giorgio Metta. "Relevance-weighted-reconstruction of articulatory features in deep-neural-network-based acoustic-to-articulatory mapping". In Interspeech 2013. ISCA: ISCA, 2013. http://dx.doi.org/10.21437/interspeech.2013-346.
7

Csapó, Tamás Gábor, Csaba Zainkó, László Tóth, Gábor Gosztolya, and Alexandra Markó. "Ultrasound-Based Articulatory-to-Acoustic Mapping with WaveGlow Speech Synthesis". In Interspeech 2020. ISCA: ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-1031.
8

Toutios, Asterios, and Konstantinos Margaritis. "A support vector approach to the acoustic-to-articulatory mapping". In Interspeech 2005. ISCA: ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-850.
9

Tobing, Patrick Lumban, Hirokazu Kameoka, and Tomoki Toda. "Deep acoustic-to-articulatory inversion mapping with latent trajectory modeling". In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282219.
10

Ouni, Slim, and Yves Laprie. "Design of hypercube codebooks for the acoustic-to-articulatory inversion respecting the non-linearities of the articulatory-to-acoustic mapping". In 6th European Conference on Speech Communication and Technology (Eurospeech 1999). ISCA: ISCA, 1999. http://dx.doi.org/10.21437/eurospeech.1999-39.