A selection of scholarly literature on the topic "Articulatory data"


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Articulatory data".


Journal articles on the topic "Articulatory data"

1

Silva, Samuel, Nuno Almeida, Conceição Cunha, Arun Joseph, Jens Frahm, and António Teixeira. "Data-Driven Critical Tract Variable Determination for European Portuguese." Information 11, no. 10 (October 21, 2020): 491. http://dx.doi.org/10.3390/info11100491.

Abstract:
Technologies, such as real-time magnetic resonance (RT-MRI), can provide valuable information to evolve our understanding of the static and dynamic aspects of speech by contributing to the determination of which articulators are essential (critical) in producing specific sounds and how (gestures). While a visual analysis and comparison of imaging data or vocal tract profiles can already provide relevant findings, the sheer amount of available data demands and can strongly profit from unsupervised data-driven approaches. Recent work, in this regard, has asserted the possibility of determining critical articulators from RT-MRI data by considering a representation of vocal tract configurations based on landmarks placed on the tongue, lips, and velum, yielding meaningful results for European Portuguese (EP). Advancing this previous work to obtain a characterization of EP sounds grounded on Articulatory Phonology, important to explore critical gestures and advance, for example, articulatory speech synthesis, entails the consideration of a novel set of tract variables. To this end, this article explores critical variable determination considering a vocal tract representation aligned with Articulatory Phonology and the Task Dynamics framework. The overall results, obtained considering data for three EP speakers, show the applicability of this approach and are consistent with existing descriptions of EP sounds.
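The abstract above describes a data-driven criterion for deciding which tract variables are critical for a given sound. As a rough, hypothetical illustration of that idea (not the authors' actual method; the function name, input layout, and threshold are invented), one simple heuristic flags a tract variable as critical for a phone when its within-phone spread is much narrower than its overall spread:

```python
# Hypothetical sketch: a tract variable whose values for a given phone vary much
# less than they do across all speech is treated as actively controlled (critical)
# for that phone. Array layout and the 0.5 threshold are illustrative assumptions.
import numpy as np

def candidate_critical_variables(frames, labels, phone, ratio_threshold=0.5):
    """frames: (N, K) array of K tract variables (e.g. lip aperture, tongue tip
    constriction degree); labels: length-N array of phone labels per frame."""
    frames = np.asarray(frames, dtype=float)
    labels = np.asarray(labels)
    global_sd = frames.std(axis=0)                   # spread over all frames
    phone_sd = frames[labels == phone].std(axis=0)   # spread within the phone
    ratio = phone_sd / np.where(global_sd > 0, global_sd, np.nan)
    return np.flatnonzero(ratio < ratio_threshold)   # indices of candidate critical variables
```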
2

Abirami, S., L. Anirudh, and P. Vijayalakshmi. "Silent Speech Interface: An Inversion Problem." Journal of Physics: Conference Series 2318, no. 1 (August 1, 2022): 012008. http://dx.doi.org/10.1088/1742-6596/2318/1/012008.

Abstract:
When conventional acoustic-verbal communication is neither possible nor desirable, silent speech interfaces (SSI) rely on biosignals, non-acoustic signals created by the human body during speech production, to facilitate communication. Despite considerable advances in sensing techniques that can be employed to capture these biosignals, the majority of them are used under controlled laboratory scenarios. One such example is the electromagnetic articulograph (EMA), which monitors articulatory motion; it is expensive, involves inconvenient wiring, and is not practically portable in the real world. Since articulator measurement is difficult, articulatory parameters may be estimated from acoustics through inversion. Acoustic-to-articulatory inversion (AAI) is a technique for determining articulatory parameters from acoustic input; automatic voice recognition, text-to-speech synthesis, and speech accent conversion can all benefit from this. However, many practical applications require inversion for speakers with no articulatory data, so articulatory reconstruction is more useful when the inversion is speaker independent. Initially, we analysed positional data to better understand the relationship between sensor data and uttered speech. Following the analysis, we built a speaker-independent articulatory reconstruction system that uses a Bi-LSTM model. Additionally, we evaluated the trained model using standard evaluation measures.
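For readers wanting a concrete picture of the speaker-independent Bi-LSTM inversion system mentioned above, the following is a minimal PyTorch sketch of such a model, mapping acoustic frames to articulator positions. The layer sizes and feature dimensions are assumptions for illustration, not the configuration reported in the paper.

```python
# Minimal sketch of an acoustic-to-articulatory inversion network: a bidirectional
# LSTM mapping a sequence of acoustic frames (e.g. MFCCs) to EMA sensor coordinates.
# Dimensions and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMInversion(nn.Module):
    def __init__(self, n_acoustic=39, n_articulatory=12, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_acoustic, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)  # 2x: forward + backward states

    def forward(self, x):                 # x: (batch, frames, n_acoustic)
        h, _ = self.lstm(x)
        return self.out(h)                # (batch, frames, n_articulatory)

model = BiLSTMInversion()
criterion = nn.MSELoss()                  # trained to minimize articulatory position error
dummy_batch = torch.randn(4, 200, 39)     # 4 utterances of 200 acoustic frames each
print(model(dummy_batch).shape)           # torch.Size([4, 200, 12])
```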
3

Browman, Catherine P., and Louis Goldstein. "Articulatory gestures as phonological units." Phonology 6, no. 2 (August 1989): 201–51. http://dx.doi.org/10.1017/s0952675700001019.

Abstract:
We have argued that dynamically defined articulatory gestures are the appropriate units to serve as the atoms of phonological representation. Gestures are a natural unit, not only because they involve task-oriented movements of the articulators, but because they arguably emerge as prelinguistic discrete units of action in infants. The use of gestures, rather than constellations of gestures as in Root nodes, as basic units of description makes it possible to characterise a variety of language patterns in which gestural organisation varies. Such patterns range from the misorderings of disordered speech through phonological rules involving gestural overlap and deletion to historical changes in which the overlap of gestures provides a crucial explanatory element. Gestures can participate in language patterns involving overlap because they are spatiotemporal in nature and therefore have internal duration. In addition, gestures differ from current theories of feature geometry by including the constriction degree as an inherent part of the gesture. Since the gestural constrictions occur in the vocal tract, which can be characterised in terms of tube geometry, all the levels of the vocal tract will be constricted, leading to a constriction degree hierarchy. The values of the constriction degree at each higher level node in the hierarchy can be predicted on the basis of the percolation principles and tube geometry. In this way, the use of gestures as atoms can be reconciled with the use of constriction degree at various levels in the vocal tract (or feature geometry) hierarchy. The phonological notation developed for the gestural approach might usefully be incorporated, in whole or in part, into other phonologies. Five components of the notation were discussed, all derived from the basic premise that gestures are the primitive phonological unit, organised into gestural scores. These components include (1) constriction degree as a subordinate of the articulator node and (2) stiffness (duration) as a subordinate of the articulator node. That is, both CD and duration are inherent to the gesture. The gestures are arranged in gestural scores using (3) articulatory tiers, with (4) the relevant geometry (articulatory, tube or feature) indicated to the left of the score and (5) structural information above the score, if desired. Association lines can also be used to indicate how the gestures are combined into phonological units. Thus, gestures can serve both as characterisations of articulatory movement data and as the atoms of phonological representation.
4

Wang, Jun, Jordan R. Green, Ashok Samal, and Yana Yunusova. "Articulatory Distinctiveness of Vowels and Consonants: A Data-Driven Approach." Journal of Speech, Language, and Hearing Research 56, no. 5 (October 2013): 1539–51. http://dx.doi.org/10.1044/1092-4388(2013/12-0030).

Abstract:
Purpose To quantify the articulatory distinctiveness of 8 major English vowels and 11 English consonants based on tongue and lip movement time series data using a data-driven approach. Method Tongue and lip movements of 8 vowels and 11 consonants from 10 healthy talkers were collected. First, classification accuracies were obtained using 2 complementary approaches: (a) Procrustes analysis and (b) a support vector machine. Procrustes distance was then used to measure the articulatory distinctiveness among vowels and consonants. Finally, the distance (distinctiveness) matrices of different vowel pairs and consonant pairs were used to derive articulatory vowel and consonant spaces using multidimensional scaling. Results Vowel classification accuracies of 91.67% and 89.05% and consonant classification accuracies of 91.37% and 88.94% were obtained using Procrustes analysis and a support vector machine, respectively. Articulatory vowel and consonant spaces were derived based on the pairwise Procrustes distances. Conclusions The articulatory vowel space derived in this study resembled the long-standing descriptive articulatory vowel space defined by tongue height and advancement. The articulatory consonant space was consistent with feature-based classification of English consonants. The derived articulatory vowel and consonant spaces may have clinical implications, including serving as an objective measure of the severity of articulatory impairment.
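As an illustration of the Procrustes-plus-multidimensional-scaling pipeline summarized above, here is a minimal Python sketch. It assumes each vowel has been reduced to a fixed set of 2-D tongue and lip landmark positions; this is not the authors' exact implementation, only the general idea.

```python
# Sketch: pairwise Procrustes distances between articulatory shapes, then
# multidimensional scaling to lay out an "articulatory vowel space". The input
# layout (a dict of (n_points, 2) landmark arrays per vowel) is an assumption.
import numpy as np
from scipy.spatial import procrustes
from sklearn.manifold import MDS

def articulatory_space(shapes):
    """shapes: dict mapping vowel label -> (n_points, 2) array of landmark positions."""
    labels = list(shapes)
    n = len(labels)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # disparity = residual distance after optimal translation, scaling, rotation
            _, _, disparity = procrustes(shapes[labels[i]], shapes[labels[j]])
            dist[i, j] = dist[j, i] = disparity
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    return labels, coords   # 2-D coordinates of each vowel in the derived space
```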
5

Kuruvilla-Dugdale, Mili, and Antje S. Mefferd. "Articulatory Performance in Dysarthria: Using a Data-Driven Approach to Estimate Articulatory Demands and Deficits." Brain Sciences 12, no. 10 (October 20, 2022): 1409. http://dx.doi.org/10.3390/brainsci12101409.

Abstract:
This study pursued two goals: (1) to establish range of motion (ROM) demand tiers (i.e., low, moderate, high) specific to the jaw (J), lower lip (LL), posterior tongue (PT), and anterior tongue (AT) for multisyllabic words based on the articulatory performance of neurotypical talkers and (2) to identify demand- and disease-specific articulatory performance characteristics in talkers with amyotrophic lateral sclerosis (ALS) and Parkinson’s disease (PD). J, LL, PT, and AT movements of 12 talkers with ALS, 12 talkers with PD, and 12 controls were recorded using electromagnetic articulography. Vertical ROM, average speed, and movement duration were measured. Results showed that in talkers with PD, J and LL ROM were already significantly reduced at the lowest tier whereas PT and AT ROM were only significantly reduced at moderate and high tiers. In talkers with ALS, J ROM was significantly reduced at the moderate tier whereas LL, PT, and AT ROM were only significantly reduced at the highest tier. In both clinical groups, significantly reduced J and LL speeds could already be observed at the lowest tier whereas significantly reduced AT speeds could only be observed at the highest tier. PT speeds were already significantly reduced at the lowest tier in the ALS group but not until the moderate tier in the PD group. Finally, movement duration, but not ROM or speed performance, differentiated between ALS and PD even at the lowest tier. Results suggest that articulatory deficits vary with stimuli-specific motor demands across articulators and clinical groups.
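The three kinematic measures named above (vertical range of motion, average speed, and movement duration) can be computed directly from a sensor trajectory. The sketch below assumes a simple array layout and sampling rate purely for illustration; it is not the study's analysis code.

```python
# Illustrative computation of vertical ROM, average speed, and duration from one
# articulator trajectory; the (n_samples, 3) layout and 100 Hz rate are assumptions.
import numpy as np

def kinematic_measures(positions, fs=100.0):
    """positions: (n_samples, 3) sensor coordinates in mm (last column = vertical)."""
    positions = np.asarray(positions, dtype=float)
    rom_vertical = positions[:, 2].max() - positions[:, 2].min()     # mm
    step_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    duration = (len(positions) - 1) / fs                              # s
    average_speed = step_lengths.sum() / duration                     # mm/s
    return rom_vertical, average_speed, duration
```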
6

M., Dhanalakshmi, Nagarajan T., and Vijayalakshmi P. "Significant sensors and parameters in assessment of dysarthric speech." Sensor Review 41, no. 3 (July 26, 2021): 271–86. http://dx.doi.org/10.1108/sr-01-2021-0004.

Abstract:
Purpose Dysarthria is a neuromotor speech disorder caused by neuromuscular disturbances that affect one or more articulators, resulting in unintelligible speech. Though inter-phoneme articulatory variations are well captured by formant frequency-based acoustic features, these variations are expected to be much higher for dysarthric speakers than for normal speakers. These substantial variations can be well captured by placing sensors in appropriate articulatory positions. This study aims to determine a set of articulatory sensors and parameters for assessing articulatory dysfunctions in dysarthric speech. Design/methodology/approach The current work identifies the significant sensors and associated parameters using motion path and correlation analyses on the TORGO database of dysarthric speech. Among eight informative sensor channels and six parameters per channel in the positional data, the sensors on the tongue middle, back, and tip and on the lower and upper lips, together with the parameters (y, z, φ), are found to contribute significantly toward capturing the articulatory information. Acoustic and positional data analyses are performed to validate these identified significant sensors. Furthermore, a convolutional neural network-based classifier is developed for both phone- and word-level classification of dysarthric speech using acoustic and positional data. Findings The average phone error rate is observed to be lower, by up to 15.54%, for positional data when compared with acoustic-only data. Further, word-level classification using a combination of both acoustic and positional information shows that positional data acquired with the significant sensors boosts classification performance even for severe dysarthric speakers. Originality/value The proposed work shows that the significant sensors and parameters can be used to assess dysfunctions in dysarthric speech effectively. The articulatory sensor data enable better assessment than the acoustic data, even for severe dysarthric speakers.
7

Byrd, Dani, Edward Flemming, Carl Andrew Mueller, and Cheng Cheng Tan. "Using Regions and Indices in EPG Data Reduction." Journal of Speech, Language, and Hearing Research 38, no. 4 (August 1995): 821–27. http://dx.doi.org/10.1044/jshr.3804.821.

Abstract:
This note describes how dynamic electropalatography (EPG) can be used for the acquisition and analysis of articulatory data. Various data reduction procedures developed to analyze the electropalatographic data are reported. Specifically, these procedures concern two interesting areas in EPG data analysis—first, the novel use of speaker-specific articulatory regions and second, the development of arithmetic indices to quantify time-varying articulatory behavior and reflect reduction and coarticulation.
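To make the idea of region-based indices concrete, the sketch below computes percent-contact values over regions of an 8x8 electropalatographic frame. The region boundaries here are arbitrary placeholders; the paper's point is precisely that such regions should be defined per speaker.

```python
# Sketch of EPG data reduction: percent electrode contact within regions of an
# 8x8 frame, plus a time-varying whole-palate contact index. Region rows are
# placeholder values, not the speaker-specific regions described in the paper.
import numpy as np

def regional_contact(frame, regions=None):
    """frame: (8, 8) binary array, 1 = contacted electrode."""
    frame = np.asarray(frame, dtype=float)
    if regions is None:
        regions = {"anterior": slice(0, 3), "posterior": slice(5, 8)}
    return {name: 100.0 * frame[rows, :].mean() for name, rows in regions.items()}

def total_contact_index(frames):
    """frames: (T, 8, 8) sequence -> (T,) percent contact over time."""
    frames = np.asarray(frames, dtype=float)
    return 100.0 * frames.reshape(len(frames), -1).mean(axis=1)
```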
8

Lee, Jimin, Michael Bell, and Zachary Simmons. "Articulatory Kinematic Characteristics Across the Dysarthria Severity Spectrum in Individuals With Amyotrophic Lateral Sclerosis." American Journal of Speech-Language Pathology 27, no. 1 (February 6, 2018): 258–69. http://dx.doi.org/10.1044/2017_ajslp-16-0230.

Abstract:
Purpose The current study investigated whether articulatory kinematic patterns can be extrapolated across the spectrum of dysarthria severity in individuals with amyotrophic lateral sclerosis (ALS). Method Temporal and spatial articulatory kinematic data were collected using electromagnetic articulography from 14 individuals with dysarthria secondary to ALS and 6 typically aging speakers. Speech intelligibility and speaking rate were used as indices of severity. Results Temporal measures (duration, speed of articulators) were significantly correlated with both indices of severity. In speakers with dysarthria, spatial measures were not correlated with severity except in 3 measures: tongue movement displacement was more reduced in the anterior–posterior dimension; jaw movement distance was greater in the inferior–superior dimension; jaw convex hull area was larger in speakers with slower speaking rates. Visual inspection of movement trajectories revealed that overall spatial kinematic characteristics in speakers with severe dysarthria differed qualitatively from those in speakers with mild or moderate dysarthria. Unlike speakers with dysarthria, typically aging speakers displayed variable tongue movement and minimal jaw movement. Conclusions The current study revealed that spatial articulatory characteristics, unlike temporal characteristics, showed a complicated pattern across the severity spectrum. The findings suggest that articulatory characteristics in speakers with severe dysarthria cannot simply be extrapolated from those in speakers with mild-to-moderate dysarthria secondary to ALS.
9

Stevens, Kenneth N. "Inferring articulatory movements from acoustic data." Journal of the Acoustical Society of America 93, no. 4 (April 1993): 2416. http://dx.doi.org/10.1121/1.405910.

10

Baum, Shari R., David H. McFarland, and Mai Diab. "Compensation to articulatory perturbation: Perceptual data." Journal of the Acoustical Society of America 99, no. 6 (June 1996): 3791–94. http://dx.doi.org/10.1121/1.414996.


Dissertations on the topic "Articulatory data"

1

Berry, Jeffrey James. "Machine Learning Methods for Articulatory Data." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/223348.

Abstract:
Humans make use of more than just the audio signal to perceive speech. Behavioral and neurological research has shown that a person's knowledge of how speech is produced influences what is perceived. With methods for collecting articulatory data becoming more ubiquitous, methods for extracting useful information are needed to make this data useful to speech scientists, and for speech technology applications. This dissertation presents feature extraction methods for ultrasound images of the tongue and for data collected with an Electro-Magnetic Articulograph (EMA). The usefulness of these features is tested in several phoneme classification tasks. Feature extraction methods for ultrasound tongue images presented here consist of automatically tracing the tongue surface contour using a modified Deep Belief Network (DBN) (Hinton et al. 2006), and methods inspired by research in face recognition which use the entire image. The tongue tracing method consists of training a DBN as an autoencoder on concatenated images and traces, and then retraining the first two layers to accept only the image at runtime. This 'translational' DBN (tDBN) method is shown to produce traces comparable to those made by human experts. An iterative bootstrapping procedure is presented for using the tDBN to assist a human expert in labeling a new data set. Tongue contour traces are compared with the Eigentongues method of (Hueber et al. 2007), and a Gabor Jet representation in a 6-class phoneme classification task using Support Vector Classifiers (SVC), with Gabor Jets performing the best. These SVC methods are compared to a tDBN classifier, which extracts features from raw images and classifies them with accuracy only slightly lower than the Gabor Jet SVC method. For EMA data, supervised binary SVC feature detectors are trained for each feature in three versions of Distinctive Feature Theory (DFT): Preliminaries (Jakobson et al. 1954), The Sound Pattern of English (Chomsky and Halle 1968), and Unified Feature Theory (Clements and Hume 1995). Each of these feature sets, together with a fourth unsupervised feature set learned using Independent Components Analysis (ICA), are compared on their usefulness in a 46-class phoneme recognition task. Phoneme recognition is performed using a linear-chain Conditional Random Field (CRF) (Lafferty et al. 2001), which takes advantage of the temporal nature of speech, by looking at observations adjacent in time. Results of the phoneme recognition task show that Unified Feature Theory performs slightly better than the other versions of DFT. Surprisingly, ICA actually performs worse than running the CRF on raw EMA data.
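As a rough sketch of the "binary SVC feature detector" idea described in the abstract, the code below trains one support-vector classifier per distinctive feature on frames of EMA data and stacks the detectors' outputs for downstream phoneme recognition. Feature names, shapes, and preprocessing are assumptions, not the dissertation's actual setup.

```python
# Hypothetical sketch: one binary SVM per distinctive feature, trained on EMA
# frames; the predicted feature values can then feed a sequence model (e.g. a CRF).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_feature_detectors(ema_frames, feature_targets):
    """ema_frames: (N, D) sensor coordinates; feature_targets: dict mapping a
    feature name (e.g. 'voiced', 'coronal') to an (N,) array of 0/1 labels."""
    detectors = {}
    for name, y in feature_targets.items():
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        detectors[name] = clf.fit(ema_frames, y)
    return detectors

def feature_vectors(detectors, ema_frames):
    # One column of binary decisions per detector, one row per frame.
    return np.column_stack([d.predict(ema_frames) for d in detectors.values()])
```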
2

Moody, Jay T. "Visualizing speech with a recurrent neural network trained on human acoustic-articulatory data /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9930904.

3

Drake, Eleanor Katherine Elizabeth. "The involvement of the speech production system in prediction during comprehension : an articulatory imaging investigation." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/22912.

Abstract:
This thesis investigates the effects in speech production of prediction during speech comprehension. The topic is raised by recent theoretical models of speech comprehension, which suggest a more integrated role for speech production and comprehension mechanisms than has previously been posited. The thesis is specifically concerned with the suggestion that during speech comprehension upcoming input is simulated with reference to the listener’s own speech production system by way of efference copy. Throughout this thesis the approach taken is to investigate whether representations elicited during comprehension impact speech production. The representations of interest are those generated endogenously by the listener during prediction of upcoming input. We investigate whether predictions are represented at a form level within the listener’s speech production system. We first present an overview of the relevant literature. We then present details of a picture word interference study undertaken to confirm that the item set employed elicits typical phonological effects within a conventional paradigm in which the competing representation is perceptually available. The main body of the thesis presents evidence concerning the nature of representations arising during prediction, specifically their effect on speech output. We first present evidence from picture naming vocal response latencies. We then complement and extend this with evidence from articulatory imaging, allowing an examination of pre-acoustic aspects of speech production. To investigate effects on speech production as a dynamic motor activity we employ the Delta method, developed to quantify articulatory variability from EPG and ultrasound recordings. We apply this technique to ultrasound data acquired during mid-sagittal imaging of the tongue and extend the approach to allow us to explore the time-course of articulation during the acoustic response latency period. We investigate whether prediction of another’s speech evokes articulatorily specified activation within the listener’s speech production system. The findings presented in this thesis suggest that representations evoked as predictions during speech comprehension do affect speech motor output. However, we found no evidence to suggest that predictions are represented in an articulatorily specified manner. We discuss this conclusion with reference to models of speech production-perception that implicate efference copies in the generation of predictions during speech comprehension.
4

Chen, Cheng. "Inter-gestural Coordination in Temporal and Spatial Domains in Italian: Synchronous EPG + UTI Data." Doctoral thesis, Scuola Normale Superiore, 2019. http://hdl.handle.net/11384/86022.

Abstract:
This dissertation explores the temporal coordination of articulatory gestures in various segmental conditions in Italian, by comparing onset and coda singletons as well as word-final and intervocalic consonant clusters in a Tuscan variety of Italian. Articulatory models of syllable structure assume that the coordination between the vocalic gesture and the consonantal gesture may differ in onset vs. coda and in singletons vs. clusters. Based on previous literature on different languages, we expect to find differences in the temporal coordination of singletons and clusters in Italian too. In addition, recent literature suggests that the articulatory and coarticulatory properties of the segments play an important role in determining the details of the coordination patterns, and that not all segments or segmental sequences behave in the same way as far as their gestural coordination relations are concerned. Thus, an additional aim of this work is to compare consonants with different coarticulatory properties (in the sense of modifications of C articulation in varying vocalic contexts) and seek for possible relations between coarticulation and coordination patterns. The methodology used is new. We used an original system for the acquisition, realtime synchronization and analysis of acoustic, electropalatographic (EPG) and ultrasound tongue imaging (UTI) data, called SynchroLing. EPG and UTI instrumental techniques provide complementary information on, respectively, linguo-palatal contact patterns in the anterior vocal tract and midsagittal profiles of the whole tongue, including postdorsum and root. SynchroLing allows real-time inspection of contacts in the artificial palate and tongue midsagittal movements, coupled with acoustics. [...]
5

Douros, Ioannis. "Towards a 3 dimensional dynamic generic speaker model to study geometry simplifications of the vocal tract using magnetic resonance imaging data." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0115.

Abstract:
In this thesis we used MRI (Magnetic Resonance Imaging) data of the vocal tract to study speech production. The first part consists of a study of the impact that the velum, the epiglottis, and the head position have on the phonation of five French vowels. Acoustic simulations were used to compare the formants of the studied cases with the reference in order to measure their impact. For this part of the work, we used 3D static MR (Magnetic Resonance) images. As speech is usually a dynamic phenomenon, a question arose as to whether it would be possible to process the 3D data in order to incorporate dynamic information from continuous speech. The second part therefore presents some algorithms that can be used to enhance speech production data. Several image transformations were combined in order to generate estimations of vocal tract shapes that are more informative than the original ones. At this point we envisaged, beyond enhancing speech production data, creating a generic speaker model that could provide enhanced information not for a specific subject but globally for speech. As a result, the third part is devoted to the investigation of an algorithm for creating a spatiotemporal atlas of the vocal tract, which can serve as a reference or standard speaker for speech studies since it is speaker independent. Finally, the last part of the thesis presents a selection of open questions in the field that remain unanswered, some interesting directions in which this work could be extended, and some potential approaches that could help move in those directions.
6

Blackwood, Ximenes Arwen. "The relation between acoustic and articulatory variation in vowels : data from American and Australian English." Thesis, 2022. http://hdl.handle.net/1959.7/uws:68957.

Abstract:
In studies of dialect variation, the articulatory nature of vowels is sometimes inferred from formant values using the following heuristic: F1 is inversely correlated with tongue height and F2 is inversely correlated with tongue backness. This study compared vowel formants and corresponding lingual articulation in two dialects of English, standard North American English and Australian English. Five speakers of North American English and four speakers of Australian English were recorded producing multiple repetitions of ten monophthongs embedded in the /sVd/ context. Simultaneous articulatory data were collected using electromagnetic articulography. Results show that there are significant correlations between tongue position and formants in the direction predicted by the heuristic but also that the relations implied by the heuristic break down under specific conditions. Articulatory vowel spaces, based on tongue dorsum (TD) position, and acoustic vowel spaces, based on formants, show systematic misalignment due in part to the influence of other articulatory factors, including lip rounding and tongue curvature on formant values. Incorporating these dimensions into our dialect comparison yields a richer description and a more robust understanding of how vowel formant patterns are reproduced within and across dialects.
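The heuristic tested in this study (F1 inversely related to tongue height, F2 inversely related to tongue backness) can be checked directly with per-token correlations, as in the minimal sketch below. Variable names and the use of tongue-dorsum coordinates are assumptions for illustration only; the study's variables and dialect comparisons are richer than this.

```python
# Sketch: correlate formants with tongue-dorsum position across vowel tokens to
# test the height/backness heuristic. Inputs are 1-D arrays, one value per token.
from scipy.stats import pearsonr

def heuristic_correlations(f1, f2, td_height, td_frontness):
    r_h, p_h = pearsonr(f1, td_height)       # heuristic predicts a negative correlation
    r_f, p_f = pearsonr(f2, td_frontness)    # heuristic predicts a positive correlation
    return {"F1 ~ tongue height": (r_h, p_h),
            "F2 ~ tongue frontness": (r_f, p_f)}
```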
7

Steiner, Ingmar Michael A. [Verfasser]. "Observations on the dynamic control of an articulatory synthesizer using speech production data / vorgelegt von Ingmar Michael Augustus Steiner." 2010. http://d-nb.info/1005833303/34.


Books on the topic "Articulatory data"

1

Seminar on Speech Production (5th 2000 Kloster Seeon). Proceedings of the 5th Seminar on Speech Production: Models and data & CREST Workshop on Models of Speech Production : motor planning and articulatory modelling. Munich: SPS5, 2000.

2

Ahlers, M. Oliver. Simulation of occlusion in restorative dentistry: The Artex system ; an up-to-date concept regarding facebow-registration, individual recordings, articulators and measuring instruments. Hamburg: DentaConcept, 2000.

3

Gibson, Mark, and Juana Gil, eds. Romance Phonetics and Phonology. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.001.0001.

Abstract:
The research in this volume addresses several recurring topics in Romance Phonetics and Phonology with a special focus on the segment, syllable, word, and phrase levels of analysis. The original research presented in this volume ranges from the low-level mechanical processes involved in speech production and perception to high-level representation and computation. The interaction between these two dimensions of speech and their effects on first- and second-language acquisition are methodically treated in later chapters. Individual chapters address rhotics in various languages (Spanish, Italian, and Brazilian Portuguese), both taps and trills, singleton and geminate; vowel nasalization and associated changes; sibilants and fricatives, the ways in which vowels are affected by their position; there are explorations of diphthongs and consonant clusters in Romanian; variant consonant production in three Catalan dialects; voice quality discrimination in Italian by native speakers of Spanish; mutual language perception by French and Spanish native speakers of each other’s language; poetry recitation (vis-à-vis rhotics in particular); French prosodic structure; glide modifications and pre-voicing in onsets in Spanish and Catalan; vowel reduction in Galician; and detailed investigations of bilinguals’ language acquisition. A number of experimental methods are employed to address the topics under study including both acoustic and articulatory data; electropalatography (EPG), ultrasound, electromagnetic articulography (EMA).
4

Recasens, Daniel. Phonetic Causes of Sound Change. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.001.0001.

Abstract:
The present study sheds light on the phonetic causes of sound change and the intermediate stages of the diachronic pathways by studying the palatalization and assibilation of velar stops (referred to commonly as ‘velar softening’, as exemplified by the replacement of Latin /ˈkɛntʊ/ by Tuscan Italian [ˈtʃɛnto] ‘one hundred’), and of labial stops and labiodental fricatives (also known as ‘labial softening’, as in the case of the dialectal variant [ˈtʃatɾə] of /ˈpjatɾə/ ‘stone’ in Romanian dialects). To a lesser extent, it also deals with the palatalization and affrication of dentoalveolar stops. The book supports an articulation-based account of those sound-change processes, and holds that, for the most part, the corresponding affricate and fricative outcomes have been issued from intermediate (alveolo)palatal-stop realizations differing in closure fronting degree. Special attention is given to the one-to-many relationship between the input and output consonantal realizations, to the acoustic cues which contribute to the implementation of these sound changes, and to those positional and contextual conditions in which those changes are prone to operate most feasibly. Different sources of evidence are taken into consideration: descriptive data from, for example, Bantu studies and linguistic atlases of Romanian dialects in the case of labial softening; articulatory and acoustic data for velar and (alveolo)palatal stops and front lingual affricates; perceptual results from phoneme identification tests. The universal character of the claims being made derives from the fact that the dialectal material, and to some extent the experimental material as well, belong to a wide range of languages from not only Europe but also all the other continents.
5

Vihman, Marilyn May. Phonological Templates in Development. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198793564.001.0001.

Abstract:
Based on cross-linguistic data from several children each learning one of eight languages and grounded in the theoretical frameworks of usage-based phonology, exemplar theory, and Dynamic Systems Theory, this book explores the patterns or phonological templates children develop once they are producing 20–50 words or more. The children are found to begin with ‘selected’ words, which match some of the vocal forms they have practised in babbling; this is followed by the production of more challenging adult word forms, adapted—differently by different children and with some shaping by the particular adult language—to fit that child’s existing word forms. Early accuracy is replaced by later recourse to an ‘inner model’ of what a word can sound like; this is a template, or fixed output pattern to which a high proportion of the children’s forms adhere for a short time, before being replaced by ‘ordinary’ (more adult-like) forms with regular substitutions and omissions. The idea of templates developed in adult theorizing about phonology and morphology; in adult language it is most productive in colloquial forms and pet names or hypocoristics, found in informal settings or ‘language at play’. These are illustrated in some detail for over 200 English rhyming compounds, 100 Estonian and 500 French short forms. The issues of emergent systematicity, the roles of articulatory and memory challenges for children, and the similarities and differences in the function of templates for adults as compared with children are central concerns.

Book chapters on the topic "Articulatory data"

1

Bauer, Dominik, Jim Kannampuzha, and Bernd J. Kröger. "Articulatory Speech Re-synthesis: Profiting from Natural Acoustic Speech Data." In Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, 344–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03320-9_32.

2

Perkell, J. S. "Testing Theories of Speech Production: Implications of Some Detailed Analyses of Variable Articulatory Data." In Speech Production and Speech Modelling, 263–88. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2037-8_11.

3

Sepulveda-Sepulveda, Alexander, and German Castellanos-Dominguez. "Assessment of the Relation Between Low-Frequency Features and Velum Opening by Using Real Articulatory Data." In Speech and Computer, 131–39. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43958-7_15.

4

Badin, Pierre, Frédéric Elisei, Gérard Bailly, and Yuliya Tarabalka. "An Audiovisual Talking Head for Augmented Speech Generation: Models and Animations Based on a Real Speaker’s Articulatory Data." In Articulated Motion and Deformable Objects, 132–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-70517-8_14.

5

Zampaulo, André. "The phonetics of palatals." In Palatal Sound Change in the Romance Languages, 31–45. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198807384.003.0003.

Abstract:
This chapter provides a detailed characterization of both the articulatory and acoustic patterns of Romance palatals and their relevance to the goals of the book. While focusing on available data for sounds that are commonly found across the Romance-speaking world, this chapter also characterizes consonants whose emergence appears more restricted and/or for which articulatory and acoustic data do not abound in the Romance literature. Knowing the articulatory and acoustic characteristics of these sounds proves crucial to understanding the basic phonetic motivations for their diachronic pathways as well as their patterns of synchronic dialectal variation.
6

Recasens, Daniel, and Meritxell Mira. "Articulatory setting, articulatory symmetry, and production mechanisms for Catalan consonant sequences." In Romance Phonetics and Phonology, 146–58. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0009.

Abstract:
This study reports articulatory and acoustic data for three Catalan dialects (Eastern, Western, Valencian), showing that the sequences /tsʃ/ and /sʃ/, and /tʃs/ and /ʃs/, are implemented through analogous production mechanisms and therefore that fricative+fricative and affricate+fricative sequences behave symmetrically at the articulatory level. Analysis results also reveal a clear trend for regressive assimilation in the case of /(t)sʃ/ and for blending or a two-target realization in the case of /(t)ʃs/; differences in degree of articulatory complexity among the segmental sequences under analysis account for these production strategies. Moreover, the final phonetic outcome is strongly dependent on the dialect-dependent articulatory differences in fricative articulation; thus, in Valencian, /(t)sʃ / may undergo regressive assimilation or blending and /(t)ʃs/ regressive assimilation, owing to a more anterior lingual constriction for /ʃ/ than in the other dialects.
7

Recasens, Daniel. "Velar palatalization." In Phonetic Causes of Sound Change, 22–76. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.003.0003.

Abstract:
An analysis of the conversion of velar stops before front vocalic segments, and in other contextual and positional conditions, into plain palatal, alveolopalatal, and even alveolar articulations is carried out using descriptive data from a considerable number of languages. Articulatory data on (alveolo)palatal stops reveal that these consonants are mostly alveolopalatal in the world’s languages, and also that their closure location may be highly variable, which accounts for their identification as /t/ or /k/. It is claimed that velar palatalization may be triggered by articulatory strengthening through an increase in tongue-to-palate contact in non-front vocalic environments.
8

Recasens, Daniel. "Introduction." In Phonetic Causes of Sound Change, 1–12. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.003.0001.

Abstract:
The chapter deals with the origin and phonetic causes of sound changes involving consonants, with the diachronic pathways connecting the input and output phonetic forms, and with models of sound change (e.g., Evolutionary Phonology, the Neogrammarian’s articulatory model, Ohala’s acoustic equivalence model). The need to use articulatory and acoustic data for ascertaining the causes of sound change (and in particular the palatalization and assibilation of velar, labial, and dentoalveolar obstruents) is emphasized. The chapter is also concerned with how allophones are phonologized in sound-change processes and with the special status of (alveolo)palatal stops regarding allophonic phonologization.
9

Chitoran, Ioana, and Stefania Marin. "Vowels and diphthongs." In Romance Phonetics and Phonology, 118–32. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0007.

Abstract:
This study compares the acoustic and articulatory properties of the Romanian mid diphthong /ea/ to the hiatus sequence /e.a/, and the high diphthong /ja/ to the hiatus sequence /i.a/. Both acoustic and articulatory (EMA) data support the analysis of the mid diphthong as forming a complex nucleus, consistent with its phonotactic behavior. This diphthong exhibits the greatest temporal overlap between the two vowels and the largest coarticulation/blend between its vocalic targets. The hiatus sequence /i.a/, which spans two syllables, shows the least overlap and coarticulation. The high diphthong /ja/ is a tautosyllabic sequence, displaying an intermediate degree of overlap, more similar to /ea/ than to hiatus sequences in its timing properties.
10

Celata, Chiara, Alessandro Vietti, and Lorenzo Spreafico. "An articulatory account of rhotic variation in Tuscan Italian." In Romance Phonetics and Phonology, 91–117. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0006.

Abstract:
Rhotic variation in a spoken variety of Tuscan Italian is investigated. The chapter takes a multilevel articulatory approach, based on real-time synchronization and analysis of acoustic, electropalatographic (EPG), and ultrasound tongue imaging (UTI) data. Contrary to the expectations based on the received dialectological literature, it emerges that speakers produce various alveolar variants: taps, trills, fricatives, and approximant realizations. To examine the factors that may constrain the variation of /r/, a multiple correspondence analysis is carried out. The result is that there are significant associations between the phonetic properties of /r/ variants and their preferred contexts of occurrence. A particular focus is then placed on the articulatory properties of the singleton–geminate distinction. It is shown that the length contrast is maintained but contrary to expectation, trills are not primarily used for geminates. Instead, each speaker differentiates the singleton from the geminate according to a variety of production strategies.

Conference papers on the topic "Articulatory data"

1

Kato, Tsuneo, Sungbok Lee, and Shrikanth Narayanan. "An analysis of articulatory-acoustic data based on articulatory strokes." In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960628.

2

Wrench, Alan A., and Korin Richmond. "Continuous speech recognition using articulatory data." In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-772.

3

Payan, Yohan. "A 2D Biomechanical Model of the Human Tongue." In ASME 1998 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/imece1998-0306.

Abstract:
This study aims to evaluate the impact of the anatomical, morphological, and biomechanical properties of one of the main articulators, namely the tongue, on the kinematic properties of speech movements. For this, a 2D biomechanical Finite Element model of the tongue was developed. It integrates four extrinsic muscles and three intrinsic ones. This model is controlled according to the Equilibrium Point Hypothesis proposed by Feldman (1966, 1986). The deformations of the model are computed in order to simulate vowel-to-vowel transitions. The articulatory patterns synthesized with this model are then compared to data collected on a male native speaker of French. Emphasis is put on the potential influence of biomechanical tongue properties on measurable kinematic features.
4

Maharana, Sarthak Kumar, Aravind Illa, Renuka Mannem, Yamini Belur, Preetie Shetty, Veeramani Preethish Kumar, Seena Vengalil, Kiran Polavarapu, Nalini Atchayaram, and Prasanta Kumar Ghosh. "Acoustic-to-Articulatory Inversion for Dysarthric Speech by Using Cross-Corpus Acoustic-Articulatory Data." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413625.

5

Ouni, Slim, Loïc Mangeonjean, and Ingmar Steiner. "Visartico: a visualization tool for articulatory data." In Interspeech 2012. ISCA: ISCA, 2012. http://dx.doi.org/10.21437/interspeech.2012-510.

6

Aron, Michael, Nicolas Ferveur, Erwan Kerrien, Marie-Odile Berger, and Yves Laprie. "Acquisition and synchronization of multimodal articulatory data." In Interspeech 2007. ISCA: ISCA, 2007. http://dx.doi.org/10.21437/interspeech.2007-25.

7

Wang, Jun, Ashok Samal, Jordan R. Green, and Tom D. Carrell. "Vowel recognition from articulatory position time-series data." In 2009 3rd International Conference on Signal Processing and Communication Systems (ICSPCS 2009). IEEE, 2009. http://dx.doi.org/10.1109/icspcs.2009.5306418.

8

Prom-on, Santitham, Peter Birkholz, and Yi Xu. "Training an articulatory synthesizer with continuous acoustic data." In Interspeech 2013. ISCA: ISCA, 2013. http://dx.doi.org/10.21437/interspeech.2013-98.

9

Krug, Paul Konstantin, Peter Birkholz, Branislav Gerazov, Daniel Rudolph van Niekerk, Anqi Xu, and Yi Xu. "Articulatory Synthesis for Data Augmentation in Phoneme Recognition." In Interspeech 2022. ISCA: ISCA, 2022. http://dx.doi.org/10.21437/interspeech.2022-10874.

10

Toth, Arthur R., and Alan W. Black. "Cross-speaker articulatory position data for phonetic feature prediction." In Interspeech 2005. ISCA: ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-132.
