Ready-made bibliography on the topic "Articulatory data"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Articulatory data".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the work's abstract online, provided the relevant details are available in the publication's metadata.

Journal articles on the topic "Articulatory data"

1

Silva, Samuel, Nuno Almeida, Conceição Cunha, Arun Joseph, Jens Frahm, and António Teixeira. "Data-Driven Critical Tract Variable Determination for European Portuguese". Information 11, no. 10 (21 October 2020): 491. http://dx.doi.org/10.3390/info11100491.

Abstract:
Technologies, such as real-time magnetic resonance (RT-MRI), can provide valuable information to evolve our understanding of the static and dynamic aspects of speech by contributing to the determination of which articulators are essential (critical) in producing specific sounds and how (gestures). While a visual analysis and comparison of imaging data or vocal tract profiles can already provide relevant findings, the sheer amount of available data demands and can strongly profit from unsupervised data-driven approaches. Recent work, in this regard, has asserted the possibility of determining critical articulators from RT-MRI data by considering a representation of vocal tract configurations based on landmarks placed on the tongue, lips, and velum, yielding meaningful results for European Portuguese (EP). Advancing this previous work to obtain a characterization of EP sounds grounded on Articulatory Phonology, important to explore critical gestures and advance, for example, articulatory speech synthesis, entails the consideration of a novel set of tract variables. To this end, this article explores critical variable determination considering a vocal tract representation aligned with Articulatory Phonology and the Task Dynamics framework. The overall results, obtained considering data for three EP speakers, show the applicability of this approach and are consistent with existing descriptions of EP sounds.
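
The abstract above does not spell out the selection criterion, but the general intuition behind data-driven criticality, namely that a critical articulator or tract variable is one whose value is tightly constrained during a given sound, can be sketched in a few lines. The sketch below is a hypothetical illustration on fabricated data; the variable names, spreads, and the 0.5 cut-off are assumptions, not the method of Silva et al.

```python
import numpy as np

# Hypothetical tract-variable frames: phone -> (n_frames, n_variables) array.
# Columns stand for e.g. lip aperture (LA), tongue-tip and tongue-body
# constriction degree (TTCD, TBCD), and velum aperture (VEL).
rng = np.random.default_rng(0)
variables = ["LA", "TTCD", "TBCD", "VEL"]
data = {
    "p": rng.normal(0.0, [0.2, 1.0, 1.0, 0.9], size=(200, 4)),  # lips tightly constrained
    "k": rng.normal(0.0, [1.0, 1.0, 0.2, 0.9], size=(200, 4)),  # tongue body tightly constrained
}

# Pool all frames to estimate each variable's overall spread.
global_sd = np.vstack(list(data.values())).std(axis=0)

# Flag a variable as "critical" for a phone when its within-phone spread is
# much smaller than its overall spread (the 0.5 cut-off is arbitrary).
for phone, frames in data.items():
    ratio = frames.std(axis=0) / global_sd
    critical = [v for v, r in zip(variables, ratio) if r < 0.5]
    print(phone, "->", critical)
```
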
2

Abirami, S., L. Anirudh, and P. Vijayalakshmi. "Silent Speech Interface: An Inversion Problem". Journal of Physics: Conference Series 2318, no. 1 (1 August 2022): 012008. http://dx.doi.org/10.1088/1742-6596/2318/1/012008.

Abstract:
When conventional acoustic-verbal communication is neither possible nor desirable, silent speech interfaces (SSI) rely on biosignals, non-acoustic signals created by the human body during speech production, to facilitate communication. Despite considerable advances in sensing techniques that can be employed to capture these biosignals, the majority of them are used under controlled scenarios in laboratories. One such example is the electromagnetic articulograph (EMA), which monitors articulatory motion. It is expensive, has inconvenient wiring, and is practically not portable in the real world. Since articulator measurement is difficult, articulatory parameters may be estimated from acoustics through inversion. Acoustic-to-articulatory inversion (AAI) is a technique for determining articulatory parameters using acoustic input. Automatic voice recognition, text-to-speech synthesis, and speech accent conversion can all benefit from this. However, for speakers with no articulatory data, inversion is required in many practical applications. Articulatory reconstruction is more useful when the inversion is speaker independent. Initially, we analysed positional data to better understand the relationship between sensor data and uttered speech. Following the analysis, we built a speaker-independent articulatory reconstruction system that uses a Bi-LSTM model. Additionally, we evaluated the trained model using standard evaluation measures.
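
As a rough, hypothetical illustration of the kind of speaker-independent acoustic-to-articulatory mapping described above, the following PyTorch sketch defines a bidirectional LSTM that regresses articulatory trajectories from acoustic feature frames. Layer sizes, feature dimensions, and the loss are placeholder assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class BiLSTMInverter(nn.Module):
    """Maps a sequence of acoustic frames to articulatory trajectories."""
    def __init__(self, n_acoustic=13, n_articulatory=12, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_acoustic, hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, x):            # x: (batch, frames, n_acoustic)
        out, _ = self.lstm(x)        # out: (batch, frames, 2 * hidden)
        return self.head(out)        # (batch, frames, n_articulatory)

model = BiLSTMInverter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on random stand-in tensors; real training would pair
# acoustic frames (e.g. MFCCs) with EMA sensor positions across many speakers.
acoustic = torch.randn(8, 200, 13)
articulatory = torch.randn(8, 200, 12)

optimizer.zero_grad()
loss = loss_fn(model(acoustic), articulatory)
loss.backward()
optimizer.step()
print(f"training MSE on random data: {loss.item():.3f}")
```
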
3

Browman, Catherine P., and Louis Goldstein. "Articulatory gestures as phonological units". Phonology 6, no. 2 (August 1989): 201–51. http://dx.doi.org/10.1017/s0952675700001019.

Abstract:
We have argued that dynamically defined articulatory gestures are the appropriate units to serve as the atoms of phonological representation. Gestures are a natural unit, not only because they involve task-oriented movements of the articulators, but because they arguably emerge as prelinguistic discrete units of action in infants. The use of gestures, rather than constellations of gestures as in Root nodes, as basic units of description makes it possible to characterise a variety of language patterns in which gestural organisation varies. Such patterns range from the misorderings of disordered speech through phonological rules involving gestural overlap and deletion to historical changes in which the overlap of gestures provides a crucial explanatory element. Gestures can participate in language patterns involving overlap because they are spatiotemporal in nature and therefore have internal duration. In addition, gestures differ from current theories of feature geometry by including the constriction degree as an inherent part of the gesture. Since the gestural constrictions occur in the vocal tract, which can be characterised in terms of tube geometry, all the levels of the vocal tract will be constricted, leading to a constriction degree hierarchy. The values of the constriction degree at each higher level node in the hierarchy can be predicted on the basis of the percolation principles and tube geometry. In this way, the use of gestures as atoms can be reconciled with the use of constriction degree at various levels in the vocal tract (or feature geometry) hierarchy. The phonological notation developed for the gestural approach might usefully be incorporated, in whole or in part, into other phonologies. Five components of the notation were discussed, all derived from the basic premise that gestures are the primitive phonological unit, organised into gestural scores. These components include (1) constriction degree as a subordinate of the articulator node and (2) stiffness (duration) as a subordinate of the articulator node. That is, both CD and duration are inherent to the gesture. The gestures are arranged in gestural scores using (3) articulatory tiers, with (4) the relevant geometry (articulatory, tube or feature) indicated to the left of the score and (5) structural information above the score, if desired. Association lines can also be used to indicate how the gestures are combined into phonological units. Thus, gestures can serve both as characterisations of articulatory movement data and as the atoms of phonological representation.
4

Wang, Jun, Jordan R. Green, Ashok Samal, and Yana Yunusova. "Articulatory Distinctiveness of Vowels and Consonants: A Data-Driven Approach". Journal of Speech, Language, and Hearing Research 56, no. 5 (October 2013): 1539–51. http://dx.doi.org/10.1044/1092-4388(2013/12-0030).

Abstract:
Purpose To quantify the articulatory distinctiveness of 8 major English vowels and 11 English consonants based on tongue and lip movement time series data using a data-driven approach. Method Tongue and lip movements of 8 vowels and 11 consonants from 10 healthy talkers were collected. First, classification accuracies were obtained using 2 complementary approaches: (a) Procrustes analysis and (b) a support vector machine. Procrustes distance was then used to measure the articulatory distinctiveness among vowels and consonants. Finally, the distance (distinctiveness) matrices of different vowel pairs and consonant pairs were used to derive articulatory vowel and consonant spaces using multidimensional scaling. Results Vowel classification accuracies of 91.67% and 89.05% and consonant classification accuracies of 91.37% and 88.94% were obtained using Procrustes analysis and a support vector machine, respectively. Articulatory vowel and consonant spaces were derived based on the pairwise Procrustes distances. Conclusions The articulatory vowel space derived in this study resembled the long-standing descriptive articulatory vowel space defined by tongue height and advancement. The articulatory consonant space was consistent with feature-based classification of English consonants. The derived articulatory vowel and consonant spaces may have clinical implications, including serving as an objective measure of the severity of articulatory impairment.
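
The two analysis steps named in this abstract, pairwise Procrustes distances between movement shapes and multidimensional scaling of the resulting distance matrix, can be illustrated generically with SciPy and scikit-learn. The trajectories below are fabricated, and the setup is a sketch of the general technique, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
# Fabricated average movement shapes per vowel: (time samples, x/y of one sensor).
vowels = ["i", "a", "u", "ae"]
shapes = {v: np.cumsum(rng.normal(size=(50, 2)), axis=0) for v in vowels}

# Pairwise Procrustes disparity as a crude articulatory distinctiveness measure.
n = len(vowels)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        _, _, disparity = procrustes(shapes[vowels[i]], shapes[vowels[j]])
        dist[i, j] = dist[j, i] = disparity

# Embed the distance matrix into a 2-D "articulatory vowel space" with MDS.
space = MDS(n_components=2, dissimilarity="precomputed",
            random_state=0).fit_transform(dist)
for v, (x, y) in zip(vowels, space):
    print(f"{v}: ({x:+.2f}, {y:+.2f})")
```
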
5

Kuruvilla-Dugdale, Mili, and Antje S. Mefferd. "Articulatory Performance in Dysarthria: Using a Data-Driven Approach to Estimate Articulatory Demands and Deficits". Brain Sciences 12, no. 10 (20 October 2022): 1409. http://dx.doi.org/10.3390/brainsci12101409.

Abstract:
This study pursued two goals: (1) to establish range of motion (ROM) demand tiers (i.e., low, moderate, high) specific to the jaw (J), lower lip (LL), posterior tongue (PT), and anterior tongue (AT) for multisyllabic words based on the articulatory performance of neurotypical talkers and (2) to identify demand- and disease-specific articulatory performance characteristics in talkers with amyotrophic lateral sclerosis (ALS) and Parkinson’s disease (PD). J, LL, PT, and AT movements of 12 talkers with ALS, 12 talkers with PD, and 12 controls were recorded using electromagnetic articulography. Vertical ROM, average speed, and movement duration were measured. Results showed that in talkers with PD, J and LL ROM were already significantly reduced at the lowest tier whereas PT and AT ROM were only significantly reduced at moderate and high tiers. In talkers with ALS, J ROM was significantly reduced at the moderate tier whereas LL, PT, and AT ROM were only significantly reduced at the highest tier. In both clinical groups, significantly reduced J and LL speeds could already be observed at the lowest tier whereas significantly reduced AT speeds could only be observed at the highest tier. PT speeds were already significantly reduced at the lowest tier in the ALS group but not until the moderate tier in the PD group. Finally, movement duration, but not ROM or speed performance, differentiated between ALS and PD even at the lowest tier. Results suggest that articulatory deficits vary with stimuli-specific motor demands across articulators and clinical groups.
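
The tiering idea, binning stimuli into low, moderate, and high demand according to the range of motion that neurotypical talkers produce for them, can be sketched with a few lines of NumPy. The tertile cut-offs and fabricated values below are assumptions for illustration only, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
words = [f"word{i:02d}" for i in range(15)]
# Control-group mean vertical ROM (mm) per word for one articulator (e.g. the jaw).
control_rom = rng.uniform(3.0, 14.0, size=len(words))

# Split words into demand tiers at the control group's tertiles.
low_cut, high_cut = np.quantile(control_rom, [1 / 3, 2 / 3])
tiers = np.where(control_rom < low_cut, "low",
                 np.where(control_rom < high_cut, "moderate", "high"))

for word, rom, tier in zip(words, control_rom, tiers):
    print(f"{word}: ROM={rom:4.1f} mm -> {tier} demand")
```
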
6

M., Dhanalakshmi, Nagarajan T., and Vijayalakshmi P. "Significant sensors and parameters in assessment of dysarthric speech". Sensor Review 41, no. 3 (26 July 2021): 271–86. http://dx.doi.org/10.1108/sr-01-2021-0004.

Abstract:
Purpose Dysarthria is a neuromotor speech disorder caused by neuromuscular disturbances that affect one or more articulators, resulting in unintelligible speech. Though inter-phoneme articulatory variations are well captured by formant frequency-based acoustic features, these variations are expected to be much higher for dysarthric speakers than normal. These substantial variations can be well captured by placing sensors in appropriate articulatory positions. This study focuses on determining a set of articulatory sensors and parameters in order to assess articulatory dysfunctions in dysarthric speech. Design/methodology/approach The current work aims to determine the significant sensors and associated parameters using motion path and correlation analyses on the TORGO database of dysarthric speech. Among eight informative sensor channels and six parameters per channel in the positional data, the sensors such as tongue middle, back and tip, lower and upper lips, and the parameters (y, z, φ) are found to contribute significantly toward capturing the articulatory information. Acoustic and positional data analyses are performed to validate these identified significant sensors. Furthermore, a convolutional neural network-based classifier is developed for both phone- and word-level classification of dysarthric speech using acoustic and positional data. Findings The average phone error rate is observed to be lower, up to 15.54%, for positional data when compared with acoustic-only data. Further, word-level classification using a combination of both acoustic and positional information is performed to show that positional data acquired using the significant sensors boost classification performance even for severe dysarthric speakers. Originality/value The proposed work shows that the significant sensors and parameters can be used to assess dysfunctions in dysarthric speech effectively. The articulatory sensor data helps in better assessment than the acoustic data, even for severe dysarthric speakers.
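
One crude, hypothetical way to approximate the kind of sensor and parameter ranking described above is to score each channel-parameter trace by its movement range and its correlation with an acoustic reference track, as in the sketch below. The synthetic traces, the scoring formula, and the choice of reference are assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
frames = 500
channels = ["tongue_tip", "tongue_mid", "tongue_back", "upper_lip", "lower_lip", "jaw"]
params = ["y", "z", "phi"]

# Synthetic positional traces per (channel, parameter) and a stand-in acoustic track (e.g. F2).
traces = {(c, p): np.cumsum(rng.normal(size=frames)) for c in channels for p in params}
acoustic = np.cumsum(rng.normal(size=frames))

def score(trace, reference):
    """Movement range plus absolute correlation with the acoustic reference."""
    rom = np.ptp(trace)
    corr = abs(np.corrcoef(trace, reference)[0, 1])
    return rom, corr

# Rank channel-parameter pairs by their correlation score and print the top five.
ranked = sorted(traces.items(),
                key=lambda kv: score(kv[1], acoustic)[1], reverse=True)
for (channel, param), trace in ranked[:5]:
    rom, corr = score(trace, acoustic)
    print(f"{channel}.{param}: range={rom:5.1f}, |r|={corr:.2f}")
```
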
7

Byrd, Dani, Edward Flemming, Carl Andrew Mueller, and Cheng Cheng Tan. "Using Regions and Indices in EPG Data Reduction". Journal of Speech, Language, and Hearing Research 38, no. 4 (August 1995): 821–27. http://dx.doi.org/10.1044/jshr.3804.821.

Abstract:
This note describes how dynamic electropalatography (EPG) can be used for the acquisition and analysis of articulatory data. Various data reduction procedures developed to analyze the electropalatographic data are reported. Specifically, these procedures concern two interesting areas in EPG data analysis—first, the novel use of speaker-specific articulatory regions and second, the development of arithmetic indices to quantify time-varying articulatory behavior and reflect reduction and coarticulation.
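
As a generic illustration of EPG data reduction, not the specific speaker-specific regions or indices defined in the paper, the following computes two common summary measures from a single 8x8 contact frame: percent contact in an assumed front "alveolar" region and a front-back centre-of-gravity index. Both the region boundaries and the row weights are assumptions.

```python
import numpy as np

# One EPG frame: 8 rows (front to back) x 8 columns, 1 = electrode contact.
rng = np.random.default_rng(4)
frame = (rng.random((8, 8)) > 0.6).astype(int)

# Percent contact within an assumed "alveolar" region (rows 0-2 here).
alveolar = frame[0:3, :]
percent_alveolar = 100.0 * alveolar.sum() / alveolar.size

# Front-back centre of gravity: rows weighted so higher values mean more anterior contact.
row_weights = np.arange(8, 0, -1)      # front row weighted 8, back row weighted 1
row_totals = frame.sum(axis=1)
cog = (row_weights * row_totals).sum() / max(frame.sum(), 1)

print(f"alveolar contact: {percent_alveolar:.1f}%")
print(f"centre of gravity (1=back, 8=front): {cog:.2f}")
```
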
8

Lee, Jimin, Michael Bell, and Zachary Simmons. "Articulatory Kinematic Characteristics Across the Dysarthria Severity Spectrum in Individuals With Amyotrophic Lateral Sclerosis". American Journal of Speech-Language Pathology 27, no. 1 (6 February 2018): 258–69. http://dx.doi.org/10.1044/2017_ajslp-16-0230.

Abstract:
Purpose The current study investigated whether articulatory kinematic patterns can be extrapolated across the spectrum of dysarthria severity in individuals with amyotrophic lateral sclerosis (ALS). Method Temporal and spatial articulatory kinematic data were collected using electromagnetic articulography from 14 individuals with dysarthria secondary to ALS and 6 typically aging speakers. Speech intelligibility and speaking rate were used as indices of severity. Results Temporal measures (duration, speed of articulators) were significantly correlated with both indices of severity. In speakers with dysarthria, spatial measures were not correlated with severity except in 3 measures: tongue movement displacement was more reduced in the anterior–posterior dimension; jaw movement distance was greater in the inferior–superior dimension; jaw convex hull area was larger in speakers with slower speaking rates. Visual inspection of movement trajectories revealed that overall spatial kinematic characteristics in speakers with severe dysarthria differed qualitatively from those in speakers with mild or moderate dysarthria. Unlike speakers with dysarthria, typically aging speakers displayed variable tongue movement and minimal jaw movement. Conclusions The current study revealed that spatial articulatory characteristics, unlike temporal characteristics, showed a complicated pattern across the severity spectrum. The findings suggest that articulatory characteristics in speakers with severe dysarthria cannot simply be extrapolated from those in speakers with mild-to-moderate dysarthria secondary to ALS.
9

Stevens, Kenneth N. "Inferring articulatory movements from acoustic data". Journal of the Acoustical Society of America 93, no. 4 (April 1993): 2416. http://dx.doi.org/10.1121/1.405910.

10

Baum, Shari R., David H. McFarland, and Mai Diab. "Compensation to articulatory perturbation: Perceptual data". Journal of the Acoustical Society of America 99, no. 6 (June 1996): 3791–94. http://dx.doi.org/10.1121/1.414996.


Doctoral dissertations on the topic "Articulatory data"

1

Berry, Jeffrey James. "Machine Learning Methods for Articulatory Data". Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/223348.

Abstract:
Humans make use of more than just the audio signal to perceive speech. Behavioral and neurological research has shown that a person's knowledge of how speech is produced influences what is perceived. With methods for collecting articulatory data becoming more ubiquitous, methods for extracting useful information are needed to make this data useful to speech scientists, and for speech technology applications. This dissertation presents feature extraction methods for ultrasound images of the tongue and for data collected with an Electro-Magnetic Articulograph (EMA). The usefulness of these features is tested in several phoneme classification tasks. Feature extraction methods for ultrasound tongue images presented here consist of automatically tracing the tongue surface contour using a modified Deep Belief Network (DBN) (Hinton et al. 2006), and methods inspired by research in face recognition which use the entire image. The tongue tracing method consists of training a DBN as an autoencoder on concatenated images and traces, and then retraining the first two layers to accept only the image at runtime. This 'translational' DBN (tDBN) method is shown to produce traces comparable to those made by human experts. An iterative bootstrapping procedure is presented for using the tDBN to assist a human expert in labeling a new data set. Tongue contour traces are compared with the Eigentongues method of (Hueber et al. 2007), and a Gabor Jet representation in a 6-class phoneme classification task using Support Vector Classifiers (SVC), with Gabor Jets performing the best. These SVC methods are compared to a tDBN classifier, which extracts features from raw images and classifies them with accuracy only slightly lower than the Gabor Jet SVC method. For EMA data, supervised binary SVC feature detectors are trained for each feature in three versions of Distinctive Feature Theory (DFT): Preliminaries (Jakobson et al. 1954), The Sound Pattern of English (Chomsky and Halle 1968), and Unified Feature Theory (Clements and Hume 1995). Each of these feature sets, together with a fourth unsupervised feature set learned using Independent Components Analysis (ICA), are compared on their usefulness in a 46-class phoneme recognition task. Phoneme recognition is performed using a linear-chain Conditional Random Field (CRF) (Lafferty et al. 2001), which takes advantage of the temporal nature of speech, by looking at observations adjacent in time. Results of the phoneme recognition task show that Unified Feature Theory performs slightly better than the other versions of DFT. Surprisingly, ICA actually performs worse than running the CRF on raw EMA data.
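
One component of this pipeline that is easy to sketch is the bank of supervised binary feature detectors trained on EMA frames, one classifier per distinctive feature, whose outputs can then feed a sequence model such as a linear-chain CRF. The features, labels, and data below are invented placeholders rather than the dissertation's actual feature sets or models.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_frames, n_dims = 400, 12                     # e.g. x/y positions of six EMA coils
X = rng.normal(size=(n_frames, n_dims))

# Invented binary distinctive-feature labels per frame (e.g. [labial], [coronal], [high]).
features = ["labial", "coronal", "high"]
Y = {f: (rng.random(n_frames) > 0.5).astype(int) for f in features}

# One binary SVC detector per feature.
detectors = {f: SVC(kernel="rbf", probability=True).fit(X, Y[f]) for f in features}

# Stack per-feature posteriors into a frame-level representation that a
# sequence model (e.g. a linear-chain CRF) could consume.
posteriors = np.column_stack([detectors[f].predict_proba(X)[:, 1] for f in features])
print(posteriors.shape)        # (400, 3)
```
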
2

Moody, Jay T. "Visualizing speech with a recurrent neural network trained on human acoustic-articulatory data /". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9930904.

3

Drake, Eleanor Katherine Elizabeth. "The involvement of the speech production system in prediction during comprehension : an articulatory imaging investigation". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/22912.

Abstract:
This thesis investigates the effects in speech production of prediction during speech comprehension. The topic is raised by recent theoretical models of speech comprehension, which suggest a more integrated role for speech production and comprehension mechanisms than has previously been posited. The thesis is specifically concerned with the suggestion that during speech comprehension upcoming input is simulated with reference to the listener’s own speech production system by way of efference copy. Throughout this thesis the approach taken is to investigate whether representations elicited during comprehension impact speech production. The representations of interest are those generated endogenously by the listener during prediction of upcoming input. We investigate whether predictions are represented at a form level within the listener’s speech production system. We first present an overview of the relevant literature. We then present details of a picture word interference study undertaken to confirm that the item set employed elicits typical phonological effects within a conventional paradigm in which the competing representation is perceptually available. The main body of the thesis presents evidence concerning the nature of representations arising during prediction, specifically their effect on speech output. We first present evidence from picture naming vocal response latencies. We then complement and extend this with evidence from articulatory imaging, allowing an examination of pre-acoustic aspects of speech production. To investigate effects on speech production as a dynamic motor-activity we employ the Delta method, developed to quantify articulatory variability from EPG and ultrasound recordings. We apply this technique to ultrasound data acquired during mid-sagittal imaging of the tongue and extend the approach to allow us to explore the time-course of articulation during the acoustic response latency period. We investigate whether prediction of another’s speech evokes articulatorily specified activation within the listener’s speech production system. The findings presented in this thesis suggest that representations evoked as predictions during speech comprehension do affect speech motor output. However, we found no evidence to suggest that predictions are represented in an articulatorily specified manner. We discuss this conclusion with reference to models of speech production-perception that implicate efference copies in the generation of predictions during speech comprehension.
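
The Delta method itself is not reproduced here, but the family of measures it belongs to, quantifying how far one tongue contour lies from another, can be illustrated with a symmetrised mean nearest-neighbour distance between two fabricated mid-sagittal splines. This is a generic stand-in under stated assumptions, not the metric used in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
x = np.linspace(0, 60, 100)                      # mm along the tongue contour

# Two fabricated mid-sagittal tongue splines from repeated productions.
contour_a = np.column_stack([x, 20 + 8 * np.sin(x / 15)])
contour_b = np.column_stack([x, 20 + 8 * np.sin(x / 15) + rng.normal(0, 1.0, x.size)])

def mean_nearest_distance(p, q):
    """Symmetrised mean nearest-neighbour distance between two point sets (mm)."""
    d_pq, _ = cKDTree(q).query(p)
    d_qp, _ = cKDTree(p).query(q)
    return 0.5 * (d_pq.mean() + d_qp.mean())

print(f"contour-to-contour distance: {mean_nearest_distance(contour_a, contour_b):.2f} mm")
```
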
4

Chen, Cheng. "Inter-gestural Coordination in Temporal and Spatial Domains in Italian: Synchronous EPG + UTI Data". Doctoral thesis, Scuola Normale Superiore, 2019. http://hdl.handle.net/11384/86022.

Abstract:
This dissertation explores the temporal coordination of articulatory gestures in various segmental conditions in Italian, by comparing onset and coda singletons as well as word-final and intervocalic consonant clusters in a Tuscan variety of Italian. Articulatory models of syllable structure assume that the coordination between the vocalic gesture and the consonantal gesture may differ in onset vs. coda and in singletons vs. clusters. Based on previous literature on different languages, we expect to find differences in the temporal coordination of singletons and clusters in Italian too. In addition, recent literature suggests that the articulatory and coarticulatory properties of the segments play an important role in determining the details of the coordination patterns, and that not all segments or segmental sequences behave in the same way as far as their gestural coordination relations are concerned. Thus, an additional aim of this work is to compare consonants with different coarticulatory properties (in the sense of modifications of C articulation in varying vocalic contexts) and seek for possible relations between coarticulation and coordination patterns. The methodology used is new. We used an original system for the acquisition, realtime synchronization and analysis of acoustic, electropalatographic (EPG) and ultrasound tongue imaging (UTI) data, called SynchroLing. EPG and UTI instrumental techniques provide complementary information on, respectively, linguo-palatal contact patterns in the anterior vocal tract and midsagittal profiles of the whole tongue, including postdorsum and root. SynchroLing allows real-time inspection of contacts in the artificial palate and tongue midsagittal movements, coupled with acoustics. [...]
5

Douros, Ioannis. "Towards a 3 dimensional dynamic generic speaker model to study geometry simplifications of the vocal tract using magnetic resonance imaging data". Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0115.

Abstract:
In this thesis we used MRI (Magnetic Resonance Imaging) data of the vocal tract to study speech production. The first part consists of the study of the impact that the velum, the epiglottis, and the head position have on the phonation of five French vowels. Acoustic simulations were used to compare the formants of the studied cases with the reference in order to measure their impact. For this part of the work, we used 3D static MR (Magnetic Resonance) images. As speech is usually a dynamic phenomenon, a question arose as to whether it would be possible to process the 3D data in order to incorporate dynamic information of continuous speech. Therefore the second part presents some algorithms that one can use in order to enhance speech production data. Several image transformations were combined in order to generate estimations of vocal tract shapes which are more informative than the original ones. At this point, we envisaged, apart from enhancing speech production data, creating a generic speaker model that could provide enhanced information not for a specific subject, but globally for speech. As a result, we devoted the third part to the investigation of an algorithm that one can use to create a spatiotemporal atlas of the vocal tract, which can be used as a reference or standard speaker for speech studies as it is speaker independent. Finally, the last part of the thesis refers to a selection of open questions in the field that are still left unanswered, some interesting directions in which this thesis can be extended, and some potential approaches that could help move forward in those directions.
6

Blackwood Ximenes, Arwen. "The relation between acoustic and articulatory variation in vowels: data from American and Australian English". Thesis, 2022. http://hdl.handle.net/1959.7/uws:68957.

Abstract:
In studies of dialect variation, the articulatory nature of vowels is sometimes inferred from formant values using the following heuristic: F1 is inversely correlated with tongue height and F2 is inversely correlated with tongue backness. This study compared vowel formants and corresponding lingual articulation in two dialects of English, standard North American English and Australian English. Five speakers of North American English and four speakers of Australian English were recorded producing multiple repetitions of ten monophthongs embedded in the /sVd/ context. Simultaneous articulatory data were collected using electromagnetic articulography. Results show that there are significant correlations between tongue position and formants in the direction predicted by the heuristic but also that the relations implied by the heuristic break down under specific conditions. Articulatory vowel spaces, based on tongue dorsum (TD) position, and acoustic vowel spaces, based on formants, show systematic misalignment due in part to the influence of other articulatory factors, including lip rounding and tongue curvature on formant values. Incorporating these dimensions into our dialect comparison yields a richer description and a more robust understanding of how vowel formant patterns are reproduced within and across dialects.
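
The heuristic under test, F1 inversely related to tongue height and F2 inversely related to tongue backness, can be checked on any paired formant/EMA data set with two correlations. The sketch below fabricates per-token values solely to show the computation; the numbers are not from the study.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n_tokens = 120

# Fabricated per-token measurements: tongue-dorsum height/backness (mm) and F1/F2 (Hz).
height = rng.uniform(0, 20, n_tokens)
backness = rng.uniform(0, 30, n_tokens)
f1 = 900 - 25 * height + rng.normal(0, 40, n_tokens)      # higher tongue -> lower F1
f2 = 2400 - 35 * backness + rng.normal(0, 80, n_tokens)   # backer tongue -> lower F2

r1, p1 = pearsonr(height, f1)
r2, p2 = pearsonr(backness, f2)
print(f"F1 vs tongue height:   r={r1:.2f} (p={p1:.1g})")
print(f"F2 vs tongue backness: r={r2:.2f} (p={p2:.1g})")
```
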
7

Steiner, Ingmar Michael A. [Verfasser]. "Observations on the dynamic control of an articulatory synthesizer using speech production data / vorgelegt von Ingmar Michael Augustus Steiner". 2010. http://d-nb.info/1005833303/34.


Books on the topic "Articulatory data"

1

Seminar on Speech Production (5th 2000 Kloster Seeon). Proceedings of the 5th Seminar on Speech Production: Models and data & CREST Workshop on Models of Speech Production : motor planning and articulatory modelling. Munich: SPS5, 2000.

2

Ahlers, M. Oliver. Simulation of occlusion in restorative dentistry: The Artex system ; an up-to-date concept regarding facebow-registration, individual recordings, articulators and measuring instruments. Hamburg: DentaConcept, 2000.

3

Gibson, Mark, and Juana Gil, eds. Romance Phonetics and Phonology. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.001.0001.

Abstract:
The research in this volume addresses several recurring topics in Romance Phonetics and Phonology with a special focus on the segment, syllable, word, and phrase levels of analysis. The original research presented in this volume ranges from the low-level mechanical processes involved in speech production and perception to high-level representation and computation. The interaction between these two dimensions of speech and their effects on first- and second-language acquisition are methodically treated in later chapters. Individual chapters address rhotics in various languages (Spanish, Italian, and Brazilian Portuguese), both taps and trills, singleton and geminate; vowel nasalization and associated changes; sibilants and fricatives, the ways in which vowels are affected by their position; there are explorations of diphthongs and consonant clusters in Romanian; variant consonant production in three Catalan dialects; voice quality discrimination in Italian by native speakers of Spanish; mutual language perception by French and Spanish native speakers of each other’s language; poetry recitation (vis-à-vis rhotics in particular); French prosodic structure; glide modifications and pre-voicing in onsets in Spanish and Catalan; vowel reduction in Galician; and detailed investigations of bilinguals’ language acquisition. A number of experimental methods are employed to address the topics under study including both acoustic and articulatory data; electropalatography (EPG), ultrasound, electromagnetic articulography (EMA).
4

Recasens, Daniel. Phonetic Causes of Sound Change. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.001.0001.

Abstract:
The present study sheds light on the phonetic causes of sound change and the intermediate stages of the diachronic pathways by studying the palatalization and assibilation of velar stops (referred to commonly as ‘velar softening’, as exemplified by the replacement of Latin /ˈkɛntʊ/ by Tuscan Italian [ˈtʃɛnto] ‘one hundred’), and of labial stops and labiodental fricatives (also known as ‘labial softening’, as in the case of the dialectal variant [ˈtʃatɾə] of /ˈpjatɾə/ ‘stone’ in Romanian dialects). To a lesser extent, it also deals with the palatalization and affrication of dentoalveolar stops. The book supports an articulation-based account of those sound-change processes, and holds that, for the most part, the corresponding affricate and fricative outcomes have been issued from intermediate (alveolo)palatal-stop realizations differing in closure fronting degree. Special attention is given to the one-to-many relationship between the input and output consonantal realizations, to the acoustic cues which contribute to the implementation of these sound changes, and to those positional and contextual conditions in which those changes are prone to operate most feasibly. Different sources of evidence are taken into consideration: descriptive data from, for example, Bantu studies and linguistic atlases of Romanian dialects in the case of labial softening; articulatory and acoustic data for velar and (alveolo)palatal stops and front lingual affricates; perceptual results from phoneme identification tests. The universal character of the claims being made derives from the fact that the dialectal material, and to some extent the experimental material as well, belong to a wide range of languages from not only Europe but also all the other continents.
5

Vihman, Marilyn May. Phonological Templates in Development. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198793564.001.0001.

Abstract:
Based on cross-linguistic data from several children each learning one of eight languages and grounded in the theoretical frameworks of usage-based phonology, exemplar theory, and Dynamic Systems Theory, this book explores the patterns or phonological templates children develop once they are producing 20–50 words or more. The children are found to begin with ‘selected’ words, which match some of the vocal forms they have practised in babbling; this is followed by the production of more challenging adult word forms, adapted—differently by different children and with some shaping by the particular adult language—to fit that child’s existing word forms. Early accuracy is replaced by later recourse to an ‘inner model’ of what a word can sound like; this is a template, or fixed output pattern to which a high proportion of the children’s forms adhere for a short time, before being replaced by ‘ordinary’ (more adult-like) forms with regular substitutions and omissions. The idea of templates developed in adult theorizing about phonology and morphology; in adult language it is most productive in colloquial forms and pet names or hypocoristics, found in informal settings or ‘language at play’. These are illustrated in some detail for over 200 English rhyming compounds, 100 Estonian and 500 French short forms. The issues of emergent systematicity, the roles of articulatory and memory challenges for children, and the similarities and differences in the function of templates for adults as compared with children are central concerns.

Book chapters on the topic "Articulatory data"

1

Bauer, Dominik, Jim Kannampuzha, and Bernd J. Kröger. "Articulatory Speech Re-synthesis: Profiting from Natural Acoustic Speech Data". In Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, 344–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03320-9_32.

2

Perkell, J. S. "Testing Theories of Speech Production: Implications of Some Detailed Analyses of Variable Articulatory Data". In Speech Production and Speech Modelling, 263–88. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2037-8_11.

3

Sepulveda-Sepulveda, Alexander, and German Castellanos-Dominguez. "Assessment of the Relation Between Low-Frequency Features and Velum Opening by Using Real Articulatory Data". In Speech and Computer, 131–39. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43958-7_15.

4

Badin, Pierre, Frédéric Elisei, Gérard Bailly, and Yuliya Tarabalka. "An Audiovisual Talking Head for Augmented Speech Generation: Models and Animations Based on a Real Speaker’s Articulatory Data". In Articulated Motion and Deformable Objects, 132–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-70517-8_14.

5

Zampaulo, André. "The phonetics of palatals". In Palatal Sound Change in the Romance Languages, 31–45. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198807384.003.0003.

Abstract:
This chapter provides a detailed characterization of both articulatory and acoustic patterns of Romance palatals and their relevance to the goals of the book. While focusing on available data for sounds that are commonly found across the Romance-speaking world, this chapter also characterizes consonants whose emergence appears more restricted and/or for which articulatory and acoustic data do not abound in the Romance literature. Knowing the articulatory and acoustic characteristics of these sounds proves crucial to understanding the basic phonetic motivations for their diachronic pathways as well as their patterns of synchronic dialectal variation.
6

Recasens, Daniel, and Meritxell Mira. "Articulatory setting, articulatory symmetry, and production mechanisms for Catalan consonant sequences". In Romance Phonetics and Phonology, 146–58. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0009.

Abstract:
This study reports articulatory and acoustic data for three Catalan dialects (Eastern, Western, Valencian), showing that the sequences /tsʃ/ and /sʃ/, and /tʃs/ and /ʃs/, are implemented through analogous production mechanisms and therefore that fricative+fricative and affricate+fricative sequences behave symmetrically at the articulatory level. Analysis results also reveal a clear trend for regressive assimilation in the case of /(t)sʃ/ and for blending or a two-target realization in the case of /(t)ʃs/; differences in degree of articulatory complexity among the segmental sequences under analysis account for these production strategies. Moreover, the final phonetic outcome is strongly dependent on the dialect-dependent articulatory differences in fricative articulation; thus, in Valencian, /(t)sʃ / may undergo regressive assimilation or blending and /(t)ʃs/ regressive assimilation, owing to a more anterior lingual constriction for /ʃ/ than in the other dialects.
7

Recasens, Daniel. "Velar palatalization". In Phonetic Causes of Sound Change, 22–76. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.003.0003.

Abstract:
An analysis of the conversion of velar stops before front vocalic segments, and in other contextual and positional conditions, into plain palatal, alveolopalatal, and even alveolar articulations is carried out using descriptive data from a considerable number of languages. Articulatory data on (alveolo)palatal stops reveal that these consonants are mostly alveolopalatal in the world’s languages, and also that their closure location may be highly variable, which accounts for their identification as /t/ or /k/. It is claimed that velar palatalization may be triggered by articulatory strengthening through an increase in tongue-to-palate contact in non-front vocalic environments.
8

Recasens, Daniel. "Introduction". In Phonetic Causes of Sound Change, 1–12. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.003.0001.

Abstract:
The chapter deals with the origin and phonetic causes of sound changes involving consonants, with the diachronic pathways connecting the input and output phonetic forms, and with models of sound change (e.g., Evolutionary Phonology, the Neogrammarian’s articulatory model, Ohala’s acoustic equivalence model). The need to use articulatory and acoustic data for ascertaining the causes of sound change (and in particular the palatalization and assibilation of velar, labial, and dentoalveolar obstruents) is emphasized. The chapter is also concerned with how allophones are phonologized in sound-change processes and with the special status of (alveolo)palatal stops regarding allophonic phonologization.
9

Chitoran, Ioana, and Stefania Marin. "Vowels and diphthongs". In Romance Phonetics and Phonology, 118–32. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0007.

Abstract:
This study compares the acoustic and articulatory properties of the Romanian mid diphthong /ea/ to the hiatus sequence /e.a/, and the high diphthong /ja/ to the hiatus sequence /i.a/. Both acoustic and articulatory (EMA) data support the analysis of the mid diphthong as forming a complex nucleus, consistent with its phonotactic behavior. This diphthong exhibits the greatest temporal overlap between the two vowels and the largest coarticulation/blend between its vocalic targets. The hiatus sequence /i.a/, which spans two syllables, shows the least overlap and coarticulation. The high diphthong /ja/ is a tautosyllabic sequence, displaying an intermediate degree of overlap, more similar to /ea/ than to hiatus sequences in its timing properties.
10

Celata, Chiara, Alessandro Vietti, and Lorenzo Spreafico. "An articulatory account of rhotic variation in Tuscan Italian". In Romance Phonetics and Phonology, 91–117. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0006.

Abstract:
Rhotic variation in a spoken variety of Tuscan Italian is investigated. The chapter takes a multilevel articulatory approach, based on real-time synchronization and analysis of acoustic, electropalatographic (EPG), and ultrasound tongue imaging (UTI) data. Contrary to the expectations based on the received dialectological literature, it emerges that speakers produce various alveolar variants: taps, trills, fricatives, and approximant realizations. To examine the factors that may constrain the variation of /r/, a multiple correspondence analysis is carried out. The result is that there are significant associations between the phonetic properties of /r/ variants and their preferred contexts of occurrence. A particular focus is then placed on the articulatory properties of the singleton–geminate distinction. It is shown that the length contrast is maintained but contrary to expectation, trills are not primarily used for geminates. Instead, each speaker differentiates the singleton from the geminate according to a variety of production strategies.

Conference papers on the topic "Articulatory data"

1

Kato, Tsuneo, Sungbok Lee, and Shrikanth Narayanan. "An analysis of articulatory-acoustic data based on articulatory strokes". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960628.

2

Wrench, Alan A., and Korin Richmond. "Continuous speech recognition using articulatory data". In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-772.

3

Payan, Yohan. "A 2D Biomechanical Model of the Human Tongue". In ASME 1998 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/imece1998-0306.

Abstract:
This study aims to evaluate the impact of the anatomical, morphological, and biomechanical properties of one of the main articulators, namely the tongue, on the kinematic properties of speech movements. For this, a 2D biomechanical Finite Element model of the tongue was developed. It integrates four extrinsic muscles and three intrinsic ones. This model is controlled according to the Equilibrium Point Hypothesis proposed by Feldman (1966, 1986). The deformations of the model are computed in order to simulate vowel-to-vowel transitions. The articulatory patterns synthesized with this model are then compared to data collected on a male native speaker of French. Emphasis is put on the potential influence of biomechanical tongue properties on measurable kinematic features.
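
In its simplest reading, the control scheme named here (Feldman's Equilibrium Point Hypothesis) drives each degree of freedom toward a commanded equilibrium target. The toy one-dimensional simulation below conveys only that idea and contains none of the paper's finite-element tongue mechanics; all constants are arbitrary assumptions.

```python
import numpy as np

# Toy 1-D "articulator": a damped mass pulled toward a commanded equilibrium position.
dt, mass, stiffness, damping = 0.001, 0.01, 4.0, 0.25
position, velocity = 0.0, 0.0

# The equilibrium target shifts halfway through, as in a vowel-to-vowel transition.
targets = np.concatenate([np.full(500, 0.0), np.full(500, 1.0)])

trajectory = []
for target in targets:
    force = stiffness * (target - position) - damping * velocity
    velocity += dt * force / mass       # semi-implicit Euler integration
    position += dt * velocity
    trajectory.append(position)

print(f"final position: {trajectory[-1]:.3f} (target 1.0)")
```
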
4

Maharana, Sarthak Kumar, Aravind Illa, Renuka Mannem, Yamini Belur, Preetie Shetty, Veeramani Preethish Kumar, Seena Vengalil, Kiran Polavarapu, Nalini Atchayaram, and Prasanta Kumar Ghosh. "Acoustic-to-Articulatory Inversion for Dysarthric Speech by Using Cross-Corpus Acoustic-Articulatory Data". In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413625.

5

Ouni, Slim, Loïc Mangeonjean, and Ingmar Steiner. "VisArtico: a visualization tool for articulatory data". In Interspeech 2012. ISCA: ISCA, 2012. http://dx.doi.org/10.21437/interspeech.2012-510.

6

Aron, Michael, Nicolas Ferveur, Erwan Kerrien, Marie-Odile Berger, and Yves Laprie. "Acquisition and synchronization of multimodal articulatory data". In Interspeech 2007. ISCA: ISCA, 2007. http://dx.doi.org/10.21437/interspeech.2007-25.

7

Jun Wang, Ashok Samal, Jordan R. Green, and Tom D. Carrell. "Vowel recognition from articulatory position time-series data". In 2009 3rd International Conference on Signal Processing and Communication Systems (ICSPCS 2009). IEEE, 2009. http://dx.doi.org/10.1109/icspcs.2009.5306418.

8

Prom-on, Santitham, Peter Birkholz, and Yi Xu. "Training an articulatory synthesizer with continuous acoustic data". In Interspeech 2013. ISCA: ISCA, 2013. http://dx.doi.org/10.21437/interspeech.2013-98.

9

Krug, Paul Konstantin, Peter Birkholz, Branislav Gerazov, Daniel Rudolph van Niekerk, Anqi Xu, and Yi Xu. "Articulatory Synthesis for Data Augmentation in Phoneme Recognition". In Interspeech 2022. ISCA: ISCA, 2022. http://dx.doi.org/10.21437/interspeech.2022-10874.

10

Toth, Arthur R., and Alan W. Black. "Cross-speaker articulatory position data for phonetic feature prediction". In Interspeech 2005. ISCA: ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-132.
