A selection of scholarly literature on the topic "Acoustic-Articulatory Mapping"

Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Acoustic-Articulatory Mapping".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the source metadata.

Journal articles on the topic "Acoustic-Articulatory Mapping"

1. Zussa, F., Q. Lin, G. Richard, D. Sinder, and J. Flanagan. "Open‐loop acoustic‐to‐articulatory mapping." Journal of the Acoustical Society of America 98, no. 5 (November 1995): 2931. http://dx.doi.org/10.1121/1.414151.

2. Riegelsberger, Edward L., and Ashok K. Krishnamurthy. "Acoustic‐to‐articulatory mapping of fricatives." Journal of the Acoustical Society of America 97, no. 5 (May 1995): 3417. http://dx.doi.org/10.1121/1.412480.

3. Ananthakrishnan, G., and Olov Engwall. "Mapping between acoustic and articulatory gestures." Speech Communication 53, no. 4 (April 2011): 567–89. http://dx.doi.org/10.1016/j.specom.2011.01.009.

4. Sepulveda-Sepulveda, Alexander, and German Castellanos-Domínguez. "Time-Frequency Energy Features for Articulator Position Inference on Stop Consonants." Ingeniería y Ciencia 8, no. 16 (November 30, 2012): 37–56. http://dx.doi.org/10.17230/ingciencia.8.16.2.

Abstract: Acoustic-to-articulatory inversion offers new perspectives and interesting applications in the speech processing field; however, it remains an open issue. This paper presents a method to estimate the distribution of the articulatory information contained in the acoustics of stop consonants, whose parametrization is achieved using the wavelet packet transform. The main focus is on measuring the acoustic information that is relevant, in terms of statistical association, for inferring the position of the critical articulators involved in stop consonant production. The Kendall rank correlation coefficient is used as the relevance measure. Maps of relevant time-frequency features are calculated for the MOCHA-TIMIT database, from which stop consonants are extracted and analysed. The proposed method obtains a set of time-frequency components closely related to the articulatory phenomenon, offering a deeper understanding of the relationship between the articulatory and acoustic phenomena. The relevance maps are tested in an acoustic-to-articulatory mapping system based on Gaussian mixture models, where they are shown to be suitable for improving the performance of such a system on stop consonants. The method could be extended to other manner-of-articulation categories, e.g. fricatives, in order to adapt it to acoustic-to-articulatory mapping over the whole of speech.

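The relevance analysis this abstract describes lends itself to a compact illustration. Below is a minimal sketch, not the authors' code, of ranking time-frequency energy features by their Kendall rank correlation with an articulator position channel; the feature extraction details (wavelet packet depth, frame alignment with the articulatory data) are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import kendalltau

def relevance_map(tf_energies, articulator_pos):
    """Rank time-frequency bins by statistical association with one articulator.

    tf_energies: (n_frames, n_bins) wavelet-packet energies of stop-consonant frames.
    articulator_pos: (n_frames,) position of one critical articulator
        (e.g., an EMA channel from MOCHA-TIMIT, frame-aligned with the acoustics).
    Returns |tau| per bin; high values mark acoustically relevant regions.
    """
    relevance = np.empty(tf_energies.shape[1])
    for j in range(tf_energies.shape[1]):
        # Kendall's tau measures monotonic (rank) association, so it captures
        # nonlinear but order-preserving acoustic-articulatory relationships.
        tau, _ = kendalltau(tf_energies[:, j], articulator_pos)
        relevance[j] = abs(tau)
    return relevance
```

Bins with high |tau| would then be retained as input features for the GMM-based acoustic-to-articulatory mapping stage, which is where the paper reports the performance gain on stop consonants.
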
5. Sorokin, V. N., and A. V. Trushkin. "Articulatory-to-acoustic mapping for inverse problem." Speech Communication 19, no. 2 (August 1996): 105–18. http://dx.doi.org/10.1016/0167-6393(96)00028-3.

6. Wu, Zhiyong, Kai Zhao, Xixin Wu, Xinyu Lan, and Helen Meng. "Acoustic to articulatory mapping with deep neural network." Multimedia Tools and Applications 74, no. 22 (August 1, 2014): 9889–907. http://dx.doi.org/10.1007/s11042-014-2183-z.

7. Riegelsberger, Edward L., and Ashok K. Krishnamurthy. "Acoustic‐to‐articulatory mapping of isolated and intervocalic fricatives." Journal of the Acoustical Society of America 101, no. 5 (May 1997): 3175. http://dx.doi.org/10.1121/1.419149.

8. Atal, Bishnu. "A study of ambiguities in the acoustic-articulatory mapping." Journal of the Acoustical Society of America 122, no. 5 (2007): 3079. http://dx.doi.org/10.1121/1.2942998.

9. McGowan, Richard S., and Michael A. Berger. "Acoustic-articulatory mapping in vowels by locally weighted regression." Journal of the Acoustical Society of America 126, no. 4 (2009): 2011. http://dx.doi.org/10.1121/1.3184581.

10. Schmidt, Anna Marie. "Korean to English articulatory mapping: Palatometric and acoustic data." Journal of the Acoustical Society of America 95, no. 5 (May 1994): 2820–21. http://dx.doi.org/10.1121/1.409681.

Dissertations on the topic "Acoustic-Articulatory Mapping"

1. Lo, Boon Hooi. "An acoustic model for speech recognition with an articulatory layer and non-linear articulatory-to-acoustic mapping." Thesis, University of Birmingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.633225.

Abstract: This thesis presents an extended hidden Markov model (HMM), namely the linear/non-linear multi-level segmental hidden Markov model (linear/non-linear MSHMM). In the MSHMM framework, the relationship between the symbolic and acoustic representations of a speech signal is regulated by an intermediate, articulatory-based layer. Such an approach has many potential advantages for speech pattern processing: by modelling speech dynamics directly in an articulatory domain, it may be possible to characterise the articulatory phenomena that give rise to variability in speech. The intermediate representations are based on the first three formant frequencies. The speech dynamics in the formant representation of each segment are modelled as fixed linear trajectories which characterise the distribution of formant frequencies. These trajectories are mapped into the acoustic feature space by a set of one or more non-linear mappings; hence the name linear/non-linear MSHMM. The thesis describes work developing a non-linear transformation approach that uses a radial basis function (RBF) network for the articulatory-to-acoustic mapping. An RBF network consists of a number of hidden units, each associated with a 'Gaussian-like' distribution, together with the mapping weights of the network's linear transform component. The thesis presents the training and optimisation processes for the parameters of the RBF network. The linear/non-linear MSHMMs, which form the basis of the thesis, are incorporated into an automatic speech recognition system; a gradient descent process is used to find the optimal parameters of the linear trajectory models during Viterbi training. Phone classification experiments are presented for monophone MSHMMs using the TIMIT database. The linear/non-linear MSHMM is compared with the linear/linear MSHMM, in which both the model of dynamics and the articulatory-to-acoustic mappings are linear; the comparison shows no statistically significant difference in performance between the two models.

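To make the abstract's central component concrete, here is a minimal sketch, under my own assumptions about layer shapes, of the kind of RBF network it describes: Gaussian hidden units over three-formant vectors and a linear output layer fitted by least squares. Centre selection and the integration with MSHMM training are not shown.

```python
import numpy as np

class RBFMapping:
    """Articulatory-to-acoustic mapping: formant vectors -> acoustic features."""

    def __init__(self, centres, width):
        self.centres = centres   # (n_hidden, 3) centres in formant space
        self.width = width       # shared width of the 'Gaussian-like' units
        self.weights = None      # (n_hidden, n_acoustic) linear component

    def _hidden(self, formants):
        # Gaussian activation of each hidden unit for (n_frames, 3) inputs.
        d2 = ((formants[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, formants, acoustics):
        # Least-squares fit of the linear output weights given fixed centres.
        H = self._hidden(formants)
        self.weights, *_ = np.linalg.lstsq(H, acoustics, rcond=None)

    def predict(self, formants):
        return self._hidden(formants) @ self.weights
```

In the thesis's terms, `fit` stands in for training the non-linear articulatory-to-acoustic mapping, while the fixed linear formant trajectories of each segment would supply the `formants` input.
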
2. Riegelsberger, Edward L. "The acoustic-to-articulatory mapping of voiced and fricated speech." The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu148794750113335.

3. Altun, Halis. "Evaluation of neural learning in a MLP NN for an acoustic-to-articulatory mapping problem using different training pattern vector characteristics." Thesis, University of Nottingham, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263405.

4. Illa, Aravind. "Acoustic-Articulatory Mapping: Analysis and Improvements with Neural Network Learning Paradigms." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/5525.

Abstract: Human speech is one of many acoustic signals we perceive, and it carries both linguistic and paralinguistic (e.g., speaker identity, emotional state) information. Speech acoustics are produced as a result of different temporally overlapping gestures of the speech articulators (such as the lips, tongue tip, tongue body, tongue dorsum, velum, and larynx), each of which regulates constriction in a different part of the vocal tract. Estimating speech acoustic representations from articulatory movements is known as articulatory-to-acoustic forward (AAF) mapping, i.e., articulatory speech synthesis, while estimating articulatory movements back from the speech acoustics is known as acoustic-to-articulatory inverse (AAI) mapping. These acoustic-articulatory mapping functions are known to be complex and nonlinear. The complexity of the mapping depends on a number of factors, including the kinds of representations used in the acoustic and articulatory spaces. Typically, these representations capture both linguistic and paralinguistic aspects of speech; how each of these aspects contributes to the complexity of the mapping is unknown. The representations, and in turn the acoustic-articulatory mapping, are also affected by the speaking rate. The nature and quality of the mapping vary across speakers; thus, the complexity of the mapping also depends on the amount of data from a speaker, as well as on the number of speakers used in learning the mapping function. Further, how language variation impacts the mapping requires detailed investigation. This thesis analyzes several such factors in detail and develops neural-network-based models to learn mapping functions robust to many of them. Electromagnetic articulography (EMA) sensor data has been used directly in the past as the articulatory representation for learning the acoustic-articulatory mapping function. In this thesis, we address the problem of optimal EMA sensor placement such that the air-tissue boundaries seen in the mid-sagittal plane of real-time magnetic resonance imaging (rtMRI) are reconstructed with minimum error. Following the optimal sensor placement work, acoustic-articulatory data was collected using EMA from 41 subjects with speech stimuli in English and Indian native languages (Hindi, Kannada, Tamil, and Telugu), resulting in a total of ∼23 hours of data used in this thesis. Representations from the raw waveform are also learned for the AAI task using convolutional and bidirectional long short-term memory neural networks (CNN-BLSTM), where the learned CNN filters are found to be similar to those used for computing Mel-frequency cepstral coefficients (MFCCs), typically used for the AAI task. To examine the extent to which a representation carrying only linguistic information can recover articulatory representations, we replace MFCC vectors with one-hot encoded vectors representing phonemes, further modified to remove the duration of each phoneme and keep only the phoneme sequence. Experiments with the phoneme sequence using an attention network achieve AAI performance identical to that using phonemes with timing information, while there is a drop in performance compared to using MFCCs. Experiments examining variation in speaking rate reveal that the errors in estimating the vertical motion of the tongue articulators from acoustics at a fast speaking rate are significantly higher than at a slow speaking rate.
To reduce the demand for data from a speaker, low-resource AAI is proposed using a transfer learning approach. Further, we show that AAI can be modeled to learn the acoustic-articulatory mappings of multiple speakers through a single AAI model, rather than building separate speaker-specific models. This is achieved by conditioning the AAI model on speaker embeddings, which benefits AAI in both seen- and unseen-speaker evaluations. Finally, we show the benefit of estimated articulatory representations in voice conversion applications. Experiments reveal that articulatory representations estimated by a speaker-independent AAI model preserve linguistic information and suppress speaker-dependent factors. These articulatory representations (from an unseen speaker and language) are used to drive a target-speaker-specific AAF model to synthesize speech that preserves the linguistic information and the target speaker's voice characteristics.
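
As an illustration of the speaker-conditioned AAI idea in the second paragraph of this abstract, here is a minimal PyTorch sketch; it is an assumption for illustration, not the thesis code, and the layer sizes, 13 MFCC coefficients, and 12 EMA output dimensions are placeholders.

```python
import torch
import torch.nn as nn

class SpeakerConditionedAAI(nn.Module):
    """BLSTM acoustic-to-articulatory inversion conditioned on a speaker embedding."""

    def __init__(self, n_mfcc=13, spk_dim=64, hidden=256, n_ema=12):
        super().__init__()
        # Each MFCC frame is concatenated with the speaker embedding, so one
        # model can serve multiple (seen or unseen) speakers.
        self.blstm = nn.LSTM(n_mfcc + spk_dim, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_ema)  # e.g., x/y of 6 EMA sensors

    def forward(self, mfcc, spk_emb):
        # mfcc: (batch, frames, n_mfcc); spk_emb: (batch, spk_dim)
        cond = spk_emb.unsqueeze(1).expand(-1, mfcc.size(1), -1)
        h, _ = self.blstm(torch.cat([mfcc, cond], dim=-1))
        return self.out(h)  # (batch, frames, n_ema) articulator trajectories

# Example: model = SpeakerConditionedAAI()
#          ema_hat = model(torch.randn(8, 200, 13), torch.randn(8, 64))
```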

Books on the topic "Acoustic-Articulatory Mapping"

1. The Acoustic-to-Articulatory Mapping of Voiced and Fricated Speech. Storming Media, 1997.

Book chapters on the topic "Acoustic-Articulatory Mapping"

1. Schoentgen, Jean. "Speech Modelling Based on Acoustic-to-Articulatory Mapping." In Nonlinear Speech Modeling and Applications, 114–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11520153_6.

2. Klein, Eugen, Jana Brunner, and Phil Hoole. "Which Factors Can Explain Individual Outcome Differences When Learning a New Articulatory-to-Acoustic Mapping?" In Studies on Speech Production, 158–72. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00126-1_15.

Conference papers on the topic "Acoustic-Articulatory Mapping"

1. Prado, P. P. L., E. H. Shiva, and D. G. Childers. "Optimization of acoustic-to-articulatory mapping." In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.226127.

2. Potard, Blaise, and Yves Laprie. "Compact representations of the articulatory-to-acoustic mapping." In Interspeech 2007. ISCA: ISCA, 2007. http://dx.doi.org/10.21437/interspeech.2007-660.

3. Jospa, Paul, and Alain Soquet. "The acoustic-articulatory mapping and the variational method." In 3rd International Conference on Spoken Language Processing (ICSLP 1994). ISCA: ISCA, 1994. http://dx.doi.org/10.21437/icslp.1994-151.

4. Ananthakrishnan, G., and Olov Engwall. "Resolving non-uniqueness in the acoustic-to-articulatory mapping." In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947386.

5. Toda, Tomoki, Alan Black, and Keiichi Tokuda. "Acoustic-to-articulatory inversion mapping with Gaussian mixture model." In Interspeech 2004. ISCA: ISCA, 2004. http://dx.doi.org/10.21437/interspeech.2004-410.

6. Canevari, Claudia, Leonardo Badino, Luciano Fadiga, and Giorgio Metta. "Relevance-weighted-reconstruction of articulatory features in deep-neural-network-based acoustic-to-articulatory mapping." In Interspeech 2013. ISCA: ISCA, 2013. http://dx.doi.org/10.21437/interspeech.2013-346.

7. Csapó, Tamás Gábor, Csaba Zainkó, László Tóth, Gábor Gosztolya, and Alexandra Markó. "Ultrasound-Based Articulatory-to-Acoustic Mapping with WaveGlow Speech Synthesis." In Interspeech 2020. ISCA: ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-1031.

8. Toutios, Asterios, and Konstantinos Margaritis. "A support vector approach to the acoustic-to-articulatory mapping." In Interspeech 2005. ISCA: ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-850.

9. Tobing, Patrick Lumban, Hirokazu Kameoka, and Tomoki Toda. "Deep acoustic-to-articulatory inversion mapping with latent trajectory modeling." In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282219.

10. Ouni, Slim, and Yves Laprie. "Design of hypercube codebooks for the acoustic-to-articulatory inversion respecting the non-linearities of the articulatory-to-acoustic mapping." In 6th European Conference on Speech Communication and Technology (Eurospeech 1999). ISCA: ISCA, 1999. http://dx.doi.org/10.21437/eurospeech.1999-39.