Academic literature on the topic "Acoustic-Articulatory Mapping"

Create precise citations in APA, MLA, Chicago, Harvard, and other styles.

Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Acoustic-Articulatory Mapping". References can be generated in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc. Full texts (PDF) and abstracts are included wherever the publication metadata make them available.

Journal articles on the topic "Acoustic-Articulatory Mapping"

1. Zussa, F., Q. Lin, G. Richard, D. Sinder, and J. Flanagan. "Open-loop acoustic-to-articulatory mapping". Journal of the Acoustical Society of America 98, no. 5 (November 1995): 2931. http://dx.doi.org/10.1121/1.414151.
2. Riegelsberger, Edward L., and Ashok K. Krishnamurthy. "Acoustic-to-articulatory mapping of fricatives". Journal of the Acoustical Society of America 97, no. 5 (May 1995): 3417. http://dx.doi.org/10.1121/1.412480.
3. Ananthakrishnan, G., and Olov Engwall. "Mapping between acoustic and articulatory gestures". Speech Communication 53, no. 4 (April 2011): 567–89. http://dx.doi.org/10.1016/j.specom.2011.01.009.
4. Sepulveda-Sepulveda, Alexander, and German Castellanos-Domínguez. "Time-Frequency Energy Features for Articulator Position Inference on Stop Consonants". Ingeniería y Ciencia 8, no. 16 (November 30, 2012): 37–56. http://dx.doi.org/10.17230/ingciencia.8.16.2.

Abstract
Acoustic-to-articulatory inversion offers new perspectives and interesting applications in the speech processing field; however, it remains an open issue. This paper presents a method to estimate the distribution of the articulatory information contained in the acoustics of stop consonants, whose parametrization is achieved by using the wavelet packet transform. The main focus is on measuring the relevant acoustic information, in terms of statistical association, for inferring the position of the critical articulators involved in stop consonant production. The Kendall rank correlation coefficient is used as the relevance measure. Maps of relevant time-frequency features are calculated for the MOCHA-TIMIT database, from which stop consonants are extracted and analysed. The proposed method obtains a set of time-frequency components closely related to the articulatory phenomenon, which offers a deeper understanding of the relationship between the articulatory and acoustic phenomena. The relevance maps are tested in an acoustic-to-articulatory mapping system based on Gaussian mixture models, where they are shown to improve the performance of such systems on stop consonants. The method could be extended to other manner-of-articulation categories, e.g. fricatives, in order to adapt the present method to acoustic-to-articulatory mapping systems over the whole of speech.
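The relevance analysis the abstract describes can be sketched in a few lines: a hand-rolled Kendall rank correlation scores each time-frequency band against an articulator trajectory, and the per-band scores form a relevance map. This is a minimal sketch under stated assumptions — the band energies, the articulator signal, and the band count below are synthetic stand-ins, not the paper's MOCHA-TIMIT features or wavelet-packet parametrization.

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall rank correlation: (concordant - discordant) pair count,
    normalised by the total number of pairs n*(n-1)/2."""
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return s / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
frames = 200
# Hypothetical stand-ins: energies of 8 time-frequency bands per frame,
# and one articulator coordinate (e.g. tongue-tip height) per frame.
bands = rng.normal(size=(frames, 8))
articulator = 0.9 * bands[:, 2] + 0.1 * rng.normal(size=frames)

# Relevance map: one Kendall coefficient per band; the band with the
# largest |tau| is the most statistically associated with the articulator.
relevance = np.array([kendall_tau(bands[:, k], articulator) for k in range(8)])
most_relevant = int(np.argmax(np.abs(relevance)))
print(most_relevant)  # → 2, the band the articulator was built from
```

In the paper this map is computed per stop consonant over wavelet-packet energies; here a single synthetic band carries the articulatory information, so the relevance map singles it out.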
5. Sorokin, V. N., and A. V. Trushkin. "Articulatory-to-acoustic mapping for inverse problem". Speech Communication 19, no. 2 (August 1996): 105–18. http://dx.doi.org/10.1016/0167-6393(96)00028-3.
6. Wu, Zhiyong, Kai Zhao, Xixin Wu, Xinyu Lan, and Helen Meng. "Acoustic to articulatory mapping with deep neural network". Multimedia Tools and Applications 74, no. 22 (August 1, 2014): 9889–907. http://dx.doi.org/10.1007/s11042-014-2183-z.
7. Riegelsberger, Edward L., and Ashok K. Krishnamurthy. "Acoustic-to-articulatory mapping of isolated and intervocalic fricatives". Journal of the Acoustical Society of America 101, no. 5 (May 1997): 3175. http://dx.doi.org/10.1121/1.419149.
8. Atal, Bishnu. "A study of ambiguities in the acoustic-articulatory mapping". Journal of the Acoustical Society of America 122, no. 5 (2007): 3079. http://dx.doi.org/10.1121/1.2942998.
9. McGowan, Richard S., and Michael A. Berger. "Acoustic-articulatory mapping in vowels by locally weighted regression". Journal of the Acoustical Society of America 126, no. 4 (2009): 2011. http://dx.doi.org/10.1121/1.3184581.
10. Schmidt, Anna Marie. "Korean to English articulatory mapping: Palatometric and acoustic data". Journal of the Acoustical Society of America 95, no. 5 (May 1994): 2820–21. http://dx.doi.org/10.1121/1.409681.

Theses on the topic "Acoustic-Articulatory Mapping"

1. Lo, Boon Hooi. "An acoustic model for speech recognition with an articulatory layer and non-linear articulatory-to-acoustic mapping". Thesis, University of Birmingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.633225.

Abstract
This thesis presents an extended hidden Markov model (HMM), namely the linear/non-linear multi-level segmental hidden Markov model (linear/non-linear MSHMM). In the MSHMM framework, the relationship between symbolic and acoustic representations of a speech signal is regulated by an intermediate, articulatory-based layer. Such an approach has many potential advantages for speech pattern processing. By modelling speech dynamics directly in an articulatory domain, it may be possible to characterise the articulatory phenomena which give rise to variability in speech. The intermediate representations are based on the first three formant frequencies. The speech dynamics in the formant representation of each segment are modelled as fixed linear trajectories which characterise the distribution of formant frequencies. These trajectories are mapped into the acoustic feature space by a set of one or more non-linear mappings; hence the name linear/non-linear MSHMM. The thesis describes work developing a non-linear transformation approach using a radial basis function (RBF) network for the articulatory-to-acoustic mapping. An RBF network consists of a number of hidden units, each associated with a 'Gaussian-like' distribution, together with mapping weights for the linear transform component of the network. The thesis presents the training and optimisation processes for the parameters of the RBF network. The linear/non-linear MSHMMs, which form the basis of the thesis, are incorporated into an automatic speech recognition system. A gradient-descent process is used to find the optimal parameters of the linear trajectory models during Viterbi training. Phone classification experiments are presented for monophone MSHMMs using the TIMIT database. The linear/non-linear MSHMM is compared with the linear/linear MSHMM, in which both the model of dynamics and the articulatory-to-acoustic mappings are linear. The comparison results show no statistically significant difference in performance between the two models.
2. Riegelsberger, Edward L. "The acoustic-to-articulatory mapping of voiced and fricated speech". The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu148794750113335.
3. Altun, Halis. "Evaluation of neural learning in a MLP NN for an acoustic-to-articulatory mapping problem using different training pattern vector characteristics". Thesis, University of Nottingham, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263405.
4. Illa, Aravind. "Acoustic-Articulatory Mapping: Analysis and Improvements with Neural Network Learning Paradigms". Thesis, 2020. https://etd.iisc.ac.in/handle/2005/5525.

Abstract
Human speech is one of many acoustic signals we perceive, and it carries linguistic and paralinguistic (e.g., speaker identity, emotional state) information. Speech acoustics are produced as a result of different temporally overlapping gestures of the speech articulators (such as the lips, tongue tip, tongue body, tongue dorsum, velum, and larynx), each of which regulates constriction in a different part of the vocal tract. Estimating speech acoustic representations from articulatory movements is known as articulatory-to-acoustic forward (AAF) mapping, i.e., articulatory speech synthesis, while estimating articulatory movements back from the speech acoustics is known as acoustic-to-articulatory inverse (AAI) mapping. These acoustic-articulatory mapping functions are known to be complex and nonlinear. The complexity of this mapping depends on a number of factors, including the kind of representations used in the acoustic and articulatory spaces. Typically these representations capture both linguistic and paralinguistic aspects of speech; how each of these aspects contributes to the complexity of the mapping is unknown. These representations, and in turn the acoustic-articulatory mapping, are affected by the speaking rate as well. The nature and quality of the mapping vary across speakers. Thus, the complexity of the mapping also depends on the amount of data from a speaker as well as the number of speakers used in learning the mapping function. Further, how language variations impact the mapping requires detailed investigation. This thesis analyzes a few such factors in detail and develops neural-network-based models to learn mapping functions robust to many of them. Electromagnetic articulography (EMA) sensor data has been used directly in the past as the articulatory representation for learning the acoustic-articulatory mapping function. In this thesis, we address the problem of optimal EMA sensor placement such that the air-tissue boundaries seen in the mid-sagittal plane of real-time magnetic resonance imaging (rtMRI) are reconstructed with minimum error. Following the optimal sensor placement work, acoustic-articulatory data was collected using EMA from 41 subjects with speech stimuli in English and Indian native languages (Hindi, Kannada, Tamil, and Telugu), resulting in a total of ∼23 hours of data used in this thesis. Representations from the raw waveform are also learned for the AAI task using convolutional and bidirectional long short-term memory neural networks (CNN-BLSTM), where the learned CNN filters are found to be similar to those used for computing Mel-frequency cepstral coefficients (MFCCs), typically used for the AAI task. To examine the extent to which a representation carrying only linguistic information can recover articulatory representations, we replace MFCC vectors with one-hot encoded vectors representing phonemes, further modified to remove the time duration of each phoneme and keep only the phoneme sequence. Experiments with the phoneme sequence using an attention network achieve AAI performance identical to that using phonemes with timing information, while there is a drop in performance compared to using MFCCs. Experiments examining variation in speaking rate reveal that the errors in estimating the vertical motion of the tongue articulators from acoustics at a fast speaking rate are significantly higher than those at a slow speaking rate. To reduce the demand for data from a speaker, low-resource AAI is proposed using a transfer learning approach. Further, we show that AAI can be modeled to learn the acoustic-articulatory mappings of multiple speakers through a single AAI model, rather than building separate speaker-specific models. This is achieved by conditioning the AAI model on speaker embeddings, which benefits AAI in both seen- and unseen-speaker evaluations. Finally, we show the benefit of estimated articulatory representations in voice conversion applications. Experiments revealed that articulatory representations estimated by speaker-independent AAI preserve linguistic information and suppress speaker-dependent factors. These articulatory representations (from an unseen speaker and language) are used to drive a target speaker-specific AAF model to synthesize speech, which preserves the linguistic information and the target speaker's voice characteristics.
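The speaker-conditioning idea from the abstract can be illustrated with a deliberately simplified linear stand-in: one shared model receives the acoustic frame concatenated with a speaker embedding and recovers speaker-specific articulatory offsets that an unconditioned shared model cannot. The dimensions, the one-hot "embeddings", and the simulated linear ground truth below are assumptions for illustration; the thesis itself uses learned embeddings with BLSTM networks.

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, acoustic_dim, artic_dim, n_speakers = 400, 13, 6, 4

X = rng.normal(size=(n_frames, acoustic_dim))     # e.g. MFCC frames
spk = rng.integers(0, n_speakers, size=n_frames)  # speaker label per frame
E = np.eye(n_speakers)[spk]                       # one-hot speaker embedding

# Simulated ground truth: a shared linear mapping plus a speaker-specific
# offset (a stand-in for inter-speaker vocal-tract differences).
W_true = rng.normal(size=(acoustic_dim, artic_dim))
offsets = rng.normal(size=(n_speakers, artic_dim))
Y = X @ W_true + offsets[spk]

def fit_ridge(Z, Y, lam=1e-3):
    """Closed-form ridge regression: (Z'Z + lam*I)^-1 Z'Y."""
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)

# One shared model without speaker information vs. the same model
# conditioned by concatenating the speaker embedding to each frame.
W_plain = fit_ridge(X, Y)
Z = np.hstack([X, E])
W_cond = fit_ridge(Z, Y)

rmse_plain = float(np.sqrt(np.mean((X @ W_plain - Y) ** 2)))
rmse_cond = float(np.sqrt(np.mean((Z @ W_cond - Y) ** 2)))
print(rmse_cond < rmse_plain)  # prints True: conditioning absorbs offsets
```

The point carries over to the nonlinear case: a single conditioned model can serve multiple speakers because the embedding, not a separate network, accounts for speaker-dependent variation.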

Books on the topic "Acoustic-Articulatory Mapping"

1. The Acoustic-to-Articulatory Mapping of Voiced and Fricated Speech. Storming Media, 1997.

Book chapters on the topic "Acoustic-Articulatory Mapping"

1. Schoentgen, Jean. "Speech Modelling Based on Acoustic-to-Articulatory Mapping". In Nonlinear Speech Modeling and Applications, 114–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11520153_6.
2. Klein, Eugen, Jana Brunner, and Phil Hoole. "Which Factors Can Explain Individual Outcome Differences When Learning a New Articulatory-to-Acoustic Mapping?" In Studies on Speech Production, 158–72. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00126-1_15.

Conference papers on the topic "Acoustic-Articulatory Mapping"

1. Prado, P. P. L., E. H. Shiva, and D. G. Childers. "Optimization of acoustic-to-articulatory mapping". In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.226127.
2. Potard, Blaise, and Yves Laprie. "Compact representations of the articulatory-to-acoustic mapping". In Interspeech 2007. ISCA, 2007. http://dx.doi.org/10.21437/interspeech.2007-660.
3. Jospa, Paul, and Alain Soquet. "The acoustic-articulatory mapping and the variational method". In 3rd International Conference on Spoken Language Processing (ICSLP 1994). ISCA, 1994. http://dx.doi.org/10.21437/icslp.1994-151.
4. Ananthakrishnan, G., and Olov Engwall. "Resolving non-uniqueness in the acoustic-to-articulatory mapping". In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947386.
5. Toda, Tomoki, Alan Black, and Keiichi Tokuda. "Acoustic-to-articulatory inversion mapping with Gaussian mixture model". In Interspeech 2004. ISCA, 2004. http://dx.doi.org/10.21437/interspeech.2004-410.
6. Canevari, Claudia, Leonardo Badino, Luciano Fadiga, and Giorgio Metta. "Relevance-weighted-reconstruction of articulatory features in deep-neural-network-based acoustic-to-articulatory mapping". In Interspeech 2013. ISCA, 2013. http://dx.doi.org/10.21437/interspeech.2013-346.
7. Csapó, Tamás Gábor, Csaba Zainkó, László Tóth, Gábor Gosztolya, and Alexandra Markó. "Ultrasound-Based Articulatory-to-Acoustic Mapping with WaveGlow Speech Synthesis". In Interspeech 2020. ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-1031.
8. Toutios, Asterios, and Konstantinos Margaritis. "A support vector approach to the acoustic-to-articulatory mapping". In Interspeech 2005. ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-850.
9. Tobing, Patrick Lumban, Hirokazu Kameoka, and Tomoki Toda. "Deep acoustic-to-articulatory inversion mapping with latent trajectory modeling". In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282219.
10. Ouni, Slim, and Yves Laprie. "Design of hypercube codebooks for the acoustic-to-articulatory inversion respecting the non-linearities of the articulatory-to-acoustic mapping". In 6th European Conference on Speech Communication and Technology (Eurospeech 1999). ISCA, 1999. http://dx.doi.org/10.21437/eurospeech.1999-39.
