Journal articles on the topic "Articulatory data"

To see the other types of publications on this topic, follow the link: Articulatory data.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 journal articles for your research on the topic "Articulatory data".

Next to each source in the list of references there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever such details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Silva, Samuel, Nuno Almeida, Conceição Cunha, Arun Joseph, Jens Frahm, and António Teixeira. "Data-Driven Critical Tract Variable Determination for European Portuguese." Information 11, no. 10 (October 21, 2020): 491. http://dx.doi.org/10.3390/info11100491.

Abstract:
Technologies, such as real-time magnetic resonance imaging (RT-MRI), can provide valuable information to evolve our understanding of the static and dynamic aspects of speech by contributing to the determination of which articulators are essential (critical) in producing specific sounds and how (gestures). While a visual analysis and comparison of imaging data or vocal tract profiles can already provide relevant findings, the sheer amount of available data demands, and can strongly profit from, unsupervised data-driven approaches. Recent work, in this regard, has asserted the possibility of determining critical articulators from RT-MRI data by considering a representation of vocal tract configurations based on landmarks placed on the tongue, lips, and velum, yielding meaningful results for European Portuguese (EP). Advancing this previous work to obtain a characterization of EP sounds grounded on Articulatory Phonology, important to explore critical gestures and advance, for example, articulatory speech synthesis, entails the consideration of a novel set of tract variables. To this end, this article explores critical variable determination considering a vocal tract representation aligned with Articulatory Phonology and the Task Dynamics framework. The overall results, obtained considering data for three EP speakers, show the applicability of this approach and are consistent with existing descriptions of EP sounds.
2

Abirami, S., L. Anirudh, and P. Vijayalakshmi. "Silent Speech Interface: An Inversion Problem." Journal of Physics: Conference Series 2318, no. 1 (August 1, 2022): 012008. http://dx.doi.org/10.1088/1742-6596/2318/1/012008.

Abstract:
Abstract When conventional acoustic-verbal communication is neither possible nor desirable, silent speech interfaces (SSI) rely on biosignals, non-acoustic signals created by the human body during speech production, to facilitate communication. Despite considerable advances in the sensing techniques that can be employed to capture these biosignals, the majority of them are used under controlled scenarios in laboratories. One such example is the electromagnetic articulograph (EMA), which monitors articulatory motion. It is expensive, has inconvenient wiring, and is not practical to carry into real-world use. Since articulator measurement is difficult, articulatory parameters may be estimated from acoustics through inversion. Acoustic-to-articulatory inversion (AAI) is a technique for determining articulatory parameters from acoustic input. Automatic voice recognition, text-to-speech synthesis, and speech accent conversion can all benefit from this. However, in many practical applications, inversion is required for speakers with no articulatory data. Articulatory reconstruction is more useful when the inversion is speaker independent. Initially, we analysed positional data to better understand the relationship between sensor data and uttered speech. Following the analysis, we built a speaker-independent articulatory reconstruction system that uses a Bi-LSTM model. Additionally, we evaluated the trained model using standard evaluation measures.
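The acoustic-to-articulatory mapping this abstract describes can be sketched in miniature with NumPy. Everything below is a hypothetical stand-in: the feature and channel dimensions (13 MFCC-like features in, 12 EMA-style channels out), the random weights, and the absence of training. It shows only the forward pass of a bidirectional LSTM with a linear readout, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda v: 1.0 / (1.0 + np.exp(-v))

def lstm_pass(X, W, U, b, H):
    """Unidirectional LSTM forward pass over acoustic frames X of shape (T, F)."""
    h, c = np.zeros(H), np.zeros(H)
    out = np.zeros((len(X), H))
    for t in range(len(X)):
        z = W @ X[t] + U @ h + b                          # stacked gates: i, f, o, g
        i, f, o = sig(z[:H]), sig(z[H:2*H]), sig(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c + i * g
        h = o * np.tanh(c)
        out[t] = h
    return out

def bilstm_aai(X, params, H):
    """Map acoustic frames to articulator positions: Bi-LSTM + linear readout."""
    fwd = lstm_pass(X, *params["fwd"], H)
    bwd = lstm_pass(X[::-1], *params["bwd"], H)[::-1]     # backward direction
    return np.concatenate([fwd, bwd], axis=1) @ params["Wout"] + params["bout"]

F, H, A, T = 13, 16, 12, 50      # feature dims, hidden units, articulator channels, frames
def init(F, H):
    return (rng.normal(0, 0.1, (4*H, F)), rng.normal(0, 0.1, (4*H, H)), np.zeros(4*H))

params = {"fwd": init(F, H), "bwd": init(F, H),
          "Wout": rng.normal(0, 0.1, (2*H, A)), "bout": np.zeros(A)}
mfcc = rng.normal(size=(T, F))                            # stand-in acoustic features
traj = bilstm_aai(mfcc, params, H)                        # predicted trajectories, (T, A)
```

Because the pass runs both directions over the utterance, each predicted frame can draw on left and right acoustic context, which is the property that motivates Bi-LSTMs for inversion.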
3

Browman, Catherine P., and Louis Goldstein. "Articulatory gestures as phonological units." Phonology 6, no. 2 (August 1989): 201–51. http://dx.doi.org/10.1017/s0952675700001019.

Abstract:
We have argued that dynamically defined articulatory gestures are the appropriate units to serve as the atoms of phonological representation. Gestures are a natural unit, not only because they involve task-oriented movements of the articulators, but because they arguably emerge as prelinguistic discrete units of action in infants. The use of gestures, rather than constellations of gestures as in Root nodes, as basic units of description makes it possible to characterise a variety of language patterns in which gestural organisation varies. Such patterns range from the misorderings of disordered speech through phonological rules involving gestural overlap and deletion to historical changes in which the overlap of gestures provides a crucial explanatory element. Gestures can participate in language patterns involving overlap because they are spatiotemporal in nature and therefore have internal duration. In addition, gestures differ from current theories of feature geometry by including the constriction degree as an inherent part of the gesture. Since the gestural constrictions occur in the vocal tract, which can be characterised in terms of tube geometry, all the levels of the vocal tract will be constricted, leading to a constriction degree hierarchy. The values of the constriction degree at each higher level node in the hierarchy can be predicted on the basis of the percolation principles and tube geometry. In this way, the use of gestures as atoms can be reconciled with the use of constriction degree at various levels in the vocal tract (or feature geometry) hierarchy. The phonological notation developed for the gestural approach might usefully be incorporated, in whole or in part, into other phonologies. Five components of the notation were discussed, all derived from the basic premise that gestures are the primitive phonological unit, organised into gestural scores.
These components include (1) constriction degree as a subordinate of the articulator node and (2) stiffness (duration) as a subordinate of the articulator node. That is, both CD and duration are inherent to the gesture. The gestures are arranged in gestural scores using (3) articulatory tiers, with (4) the relevant geometry (articulatory, tube or feature) indicated to the left of the score and (5) structural information above the score, if desired. Association lines can also be used to indicate how the gestures are combined into phonological units. Thus, gestures can serve both as characterisations of articulatory movement data and as the atoms of phonological representation.
4

Wang, Jun, Jordan R. Green, Ashok Samal, and Yana Yunusova. "Articulatory Distinctiveness of Vowels and Consonants: A Data-Driven Approach." Journal of Speech, Language, and Hearing Research 56, no. 5 (October 2013): 1539–51. http://dx.doi.org/10.1044/1092-4388(2013/12-0030).

Abstract:
Purpose To quantify the articulatory distinctiveness of 8 major English vowels and 11 English consonants based on tongue and lip movement time series data using a data-driven approach. Method Tongue and lip movements of 8 vowels and 11 consonants from 10 healthy talkers were collected. First, classification accuracies were obtained using 2 complementary approaches: (a) Procrustes analysis and (b) a support vector machine. Procrustes distance was then used to measure the articulatory distinctiveness among vowels and consonants. Finally, the distance (distinctiveness) matrices of different vowel pairs and consonant pairs were used to derive articulatory vowel and consonant spaces using multidimensional scaling. Results Vowel classification accuracies of 91.67% and 89.05% and consonant classification accuracies of 91.37% and 88.94% were obtained using Procrustes analysis and a support vector machine, respectively. Articulatory vowel and consonant spaces were derived based on the pairwise Procrustes distances. Conclusions The articulatory vowel space derived in this study resembled the long-standing descriptive articulatory vowel space defined by tongue height and advancement. The articulatory consonant space was consistent with feature-based classification of English consonants. The derived articulatory vowel and consonant spaces may have clinical implications, including serving as an objective measure of the severity of articulatory impairment.
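The two data-driven stages named in the Method, pairwise Procrustes distances followed by multidimensional scaling, can be sketched roughly as follows. The vowel labels and trajectory shapes are random stand-ins for real movement data, and treating the Procrustes disparity as a squared dissimilarity for classical MDS is a simplifying assumption of this sketch, not a detail taken from the paper.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(1)

# Stand-in data: one (frames x 2) tongue/lip trajectory shape per vowel.
# Real EMA recordings would supply these; the vowel set is illustrative.
vowels = ["iy", "ih", "ae", "aa", "ao", "uw", "eh", "uh"]
shapes = {v: rng.normal(size=(20, 2)) for v in vowels}

# Pairwise Procrustes disparity: residual mismatch after optimal
# translation, scaling, and rotation (symmetric, so fill one triangle).
n = len(vowels)
D = np.zeros((n, n))
for a in range(n):
    for b in range(a + 1, n):
        _, _, D[a, b] = procrustes(shapes[vowels[a]], shapes[vowels[b]])
D = D + D.T

# Classical MDS: double-centre the dissimilarity matrix, eigendecompose,
# and keep the top two dimensions as the "articulatory vowel space".
J = np.eye(n) - np.ones((n, n)) / n
w, V = np.linalg.eigh(-0.5 * J @ D @ J)
top = np.argsort(w)[::-1][:2]
coords = V[:, top] * np.sqrt(np.maximum(w[top], 0))   # (8, 2) vowel coordinates
```

With real trajectories, plotting `coords` is what yields the derived vowel space that the Conclusions compare to the traditional height/advancement chart.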
5

Kuruvilla-Dugdale, Mili, and Antje S. Mefferd. "Articulatory Performance in Dysarthria: Using a Data-Driven Approach to Estimate Articulatory Demands and Deficits." Brain Sciences 12, no. 10 (October 20, 2022): 1409. http://dx.doi.org/10.3390/brainsci12101409.

Abstract:
This study pursued two goals: (1) to establish range of motion (ROM) demand tiers (i.e., low, moderate, high) specific to the jaw (J), lower lip (LL), posterior tongue (PT), and anterior tongue (AT) for multisyllabic words based on the articulatory performance of neurotypical talkers and (2) to identify demand- and disease-specific articulatory performance characteristics in talkers with amyotrophic lateral sclerosis (ALS) and Parkinson’s disease (PD). J, LL, PT, and AT movements of 12 talkers with ALS, 12 talkers with PD, and 12 controls were recorded using electromagnetic articulography. Vertical ROM, average speed, and movement duration were measured. Results showed that in talkers with PD, J and LL ROM were already significantly reduced at the lowest tier whereas PT and AT ROM were only significantly reduced at moderate and high tiers. In talkers with ALS, J ROM was significantly reduced at the moderate tier whereas LL, PT, and AT ROM were only significantly reduced at the highest tier. In both clinical groups, significantly reduced J and LL speeds could already be observed at the lowest tier whereas significantly reduced AT speeds could only be observed at the highest tier. PT speeds were already significantly reduced at the lowest tier in the ALS group but not until the moderate tier in the PD group. Finally, movement duration, but not ROM or speed performance, differentiated between ALS and PD even at the lowest tier. Results suggest that articulatory deficits vary with stimuli-specific motor demands across articulators and clinical groups.
6

M., Dhanalakshmi, Nagarajan T., and Vijayalakshmi P. "Significant sensors and parameters in assessment of dysarthric speech." Sensor Review 41, no. 3 (July 26, 2021): 271–86. http://dx.doi.org/10.1108/sr-01-2021-0004.

Abstract:
Purpose Dysarthria is a neuromotor speech disorder caused by neuromuscular disturbances that affect one or more articulators, resulting in unintelligible speech. Though inter-phoneme articulatory variations are well captured by formant frequency-based acoustic features, these variations are expected to be much higher for dysarthric speakers than for normal speakers. Such substantial variations can be well captured by placing sensors in appropriate articulatory positions. This study focuses on determining a set of articulatory sensors and parameters in order to assess articulatory dysfunctions in dysarthric speech. Design/methodology/approach The current work aims to determine significant sensors and associated parameters using motion path and correlation analyses on the TORGO database of dysarthric speech. Among eight informative sensor channels and six parameters per channel in the positional data, sensors such as the tongue middle, back, and tip and the lower and upper lips, together with the parameters (y, z, φ), are found to contribute significantly toward capturing the articulatory information. Acoustic and positional data analyses are performed to validate these identified significant sensors. Furthermore, a convolutional neural network-based classifier is developed for both phone- and word-level classification of dysarthric speech using acoustic and positional data. Findings The average phone error rate is observed to be lower, by up to 15.54%, for positional data when compared with acoustic-only data. Further, word-level classification using a combination of both acoustic and positional information shows that positional data acquired using the significant sensors boost classification performance even for severe dysarthric speakers. Originality/value The proposed work shows that the significant sensors and parameters can be used to assess dysfunctions in dysarthric speech effectively. The articulatory sensor data enable better assessment than the acoustic data, even for severe dysarthric speakers.
7

Byrd, Dani, Edward Flemming, Carl Andrew Mueller, and Cheng Cheng Tan. "Using Regions and Indices in EPG Data Reduction." Journal of Speech, Language, and Hearing Research 38, no. 4 (August 1995): 821–27. http://dx.doi.org/10.1044/jshr.3804.821.

Abstract:
This note describes how dynamic electropalatography (EPG) can be used for the acquisition and analysis of articulatory data. Various data reduction procedures developed to analyze the electropalatographic data are reported. Specifically, these procedures concern two interesting areas in EPG data analysis—first, the novel use of speaker-specific articulatory regions and second, the development of arithmetic indices to quantify time-varying articulatory behavior and reflect reduction and coarticulation.
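The idea of speaker-specific regions and arithmetic indices can be illustrated on a toy palatogram stream. The 8x8 grid and the region boundaries below are hypothetical (EPG palates typically carry 62 electrodes arranged in 8 rows), and the centre-of-gravity weighting shown is one common convention rather than the authors' exact formulas.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy EPG stream: 100 frames of an 8x8 boolean contact grid. The grid and the
# regions below are hypothetical, chosen only to illustrate region-based indices.
frames = rng.random((100, 8, 8)) < 0.3

ALVEOLAR = slice(0, 2)   # front two rows (speaker-specific in the actual method)
VELAR = slice(6, 8)      # back two rows

def percent_contact(frames, rows):
    """Per-frame proportion of contacted electrodes within a region."""
    region = frames[:, rows, :]
    return region.reshape(len(frames), -1).mean(axis=1)

def centre_of_gravity(frames):
    """Row-weighted contact index: larger values mean more anterior contact."""
    weights = np.arange(8, 0, -1)            # front row weighted 8, back row 1
    per_row = frames.sum(axis=2)             # (frames, rows) contact counts
    total = per_row.sum(axis=1)
    return np.where(total > 0, (per_row * weights).sum(axis=1) / np.maximum(total, 1), 0.0)

alv = percent_contact(frames, ALVEOLAR)      # time-varying alveolar contact
cog = centre_of_gravity(frames)              # time-varying centre of gravity
```

Tracking such indices over time is what lets reduction and coarticulation be quantified as continuous curves rather than inspected frame by frame.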
8

Lee, Jimin, Michael Bell, and Zachary Simmons. "Articulatory Kinematic Characteristics Across the Dysarthria Severity Spectrum in Individuals With Amyotrophic Lateral Sclerosis." American Journal of Speech-Language Pathology 27, no. 1 (February 6, 2018): 258–69. http://dx.doi.org/10.1044/2017_ajslp-16-0230.

Abstract:
Purpose The current study investigated whether articulatory kinematic patterns can be extrapolated across the spectrum of dysarthria severity in individuals with amyotrophic lateral sclerosis (ALS). Method Temporal and spatial articulatory kinematic data were collected using electromagnetic articulography from 14 individuals with dysarthria secondary to ALS and 6 typically aging speakers. Speech intelligibility and speaking rate were used as indices of severity. Results Temporal measures (duration, speed of articulators) were significantly correlated with both indices of severity. In speakers with dysarthria, spatial measures were not correlated with severity except in 3 measures: tongue movement displacement was more reduced in the anterior–posterior dimension; jaw movement distance was greater in the inferior–superior dimension; jaw convex hull area was larger in speakers with slower speaking rates. Visual inspection of movement trajectories revealed that overall spatial kinematic characteristics in speakers with severe dysarthria differed qualitatively from those in speakers with mild or moderate dysarthria. Unlike speakers with dysarthria, typically aging speakers displayed variable tongue movement and minimal jaw movement. Conclusions The current study revealed that spatial articulatory characteristics, unlike temporal characteristics, showed a complicated pattern across the severity spectrum. The findings suggest that articulatory characteristics in speakers with severe dysarthria cannot simply be extrapolated from those in speakers with mild-to-moderate dysarthria secondary to ALS.
9

Stevens, Kenneth N. "Inferring articulatory movements from acoustic data." Journal of the Acoustical Society of America 93, no. 4 (April 1993): 2416. http://dx.doi.org/10.1121/1.405910.

10

Baum, Shari R., David H. McFarland, and Mai Diab. "Compensation to articulatory perturbation: Perceptual data." Journal of the Acoustical Society of America 99, no. 6 (June 1996): 3791–94. http://dx.doi.org/10.1121/1.414996.

11

Kim, Hyunsoon. "The place of articulation of the Korean plain affricate in intervocalic position: an articulatory and acoustic study." Journal of the International Phonetic Association 31, no. 2 (December 2001): 229–57. http://dx.doi.org/10.1017/s0025100301002055.

Abstract:
The place of articulation of the Korean plain affricate /c/ and the obstruents /t, s/ is articulatorily and acoustically examined in the intervocalic positions /a―a, a―i, a―u/ taken from four subjects in three dialects. The articulatory data of direct palatograms and linguograms have shown that in these contexts, the plain affricate is not post-alveolar as usually assumed in the literature, but alveolar, just like the alveolar consonants /t, s/, despite some speaker variation regarding the active articulator (tip, blade, anterodorsum). The examination of LPC data has also shown that the affricate is alveolar, like the consonants /t, s/, both for its stop part and for its frication part. The phonetic results are then confirmed by the review of Skalicková's (1960) palatogram of the affricate, and the comparison of X-ray data of the affricate /c/ to those of the Korean obstruents /t, s/ (Skalicková 1960), the Korean vowel /i/ (Han 1978) and post-alveolars in other languages such as Czech (Danes et al. 1954) and Polish (Wierzchowska 1980). Based on the present phonetic results, we propose that, following IPA usage, the Korean plain affricate be transcribed as /ts/.
12

Rong, Panying. "Neuromotor Control of Speech and Speechlike Tasks: Implications From Articulatory Gestures." Perspectives of the ASHA Special Interest Groups 5, no. 5 (October 23, 2020): 1324–38. http://dx.doi.org/10.1044/2020_persp-20-00070.

Abstract:
Purpose This study aimed to provide a preliminary examination of the articulatory control of speech and speechlike tasks based on a gestural framework and identify shared and task-specific articulatory factors in speech and speechlike tasks. Method Ten healthy participants performed two speechlike tasks (i.e., alternating motion rate [AMR] and sequential motion rate [SMR]) and three speech tasks (i.e., reading of “clever Kim called the cat clinic” at the regular, fast, and slow rates) that varied in phonological complexity and rate. Articulatory kinematics were recorded using an electromagnetic kinematic tracking system (Wave, Northern Digital Inc.). Based on the gestural framework for articulatory phonology, the gestures of tongue body and lips were derived from the kinematic data. These gestures were subjected to a fine-grained analysis, which extracted (a) four gestural features (i.e., range of magnitude [ROM], frequency [Freq], acceleration time, and maximum speed [maxSpd]) for the tongue body gesture; (b) three intergestural measures including the peak intergestural coherence (InterCOH), frequency at which the peak intergestural coherence occurs (Freq_InterCOH), and the mean absolute relative phase between the tongue body and lip gestures; and (c) three intragestural (i.e., interarticulator) measures including the peak intragestural coherence (IntraCOH), Freq_IntraCOH, and mean absolute relative phase between the tongue body and the jaw, which are the component articulators that underlie the tongue body gesture. In addition, the performance rate for each task was also derived. The effects of task and sex on all the articulatory and behavioral measures were examined using mixed-design analysis of variance followed by post hoc pairwise comparisons across tasks. Results Task had a significant effect on performance rate, ROM, Freq, maxSpd, InterCOH, Freq_InterCOH, IntraCOH, and Freq_IntraCOH. 
Compared to the speech tasks, the AMR task showed a decrease in ROM and increases in Freq, InterCOH, Freq_InterCOH, IntraCOH, and Freq_IntraCOH. The SMR task showed similar ROM, Freq, maxSpd, InterCOH, and IntraCOH as the fast and regular speech tasks. Conclusions The simple phonological structure and demand for rapid syllable rate for the AMR task may elicit a distinct articulatory control mechanism. Despite being a rapid nonsense syllable repetition task, the relatively complex phonological structure of the SMR task appeared to elicit a similar articulatory control mechanism as that of speech production. Based on these shared and task-specific articulatory features between speech and speechlike tasks, the clinical implications for articulatory assessment were discussed.
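The peak intergestural coherence and its frequency (the InterCOH and Freq_InterCOH measures above) can be approximated with magnitude-squared coherence between two gesture signals. The traces and sampling rate below are synthetic stand-ins for tongue-body and lip aperture signals sharing a roughly syllable-rate oscillation; the study's actual estimation settings are not specified here.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
fs = 100.0                                    # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

# Synthetic tongue-body and lip aperture traces sharing a ~3 Hz oscillation
# (roughly a syllable rate) plus independent measurement noise.
tongue = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.normal(size=t.size)
lips = np.sin(2 * np.pi * 3 * t + 0.5) + 0.3 * rng.normal(size=t.size)

# Magnitude-squared coherence between the two gesture signals
f, Cxy = coherence(tongue, lips, fs=fs, nperseg=256)
peak = Cxy.argmax()
inter_coh = Cxy[peak]        # analogue of InterCOH
freq_inter_coh = f[peak]     # analogue of Freq_InterCOH
```

With these signals the coherence peaks near 3 Hz, mirroring how the frequency of peak coherence tracks the rate at which two articulators are coupled.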
13

Lucero, Jorge C., and Anders Lofqvist. "Studying articulatory variability using functional data analysis." Journal of the Acoustical Society of America 112, no. 5 (November 2002): 2417. http://dx.doi.org/10.1121/1.4779885.

14

Delmoral, Jessica C., Sandra M. Rua Ventura, and João Manuel RS Tavares. "Segmentation of tongue shapes during vowel production in magnetic resonance images based on statistical modelling." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 232, no. 3 (January 19, 2018): 271–81. http://dx.doi.org/10.1177/0954411917751000.

Abstract:
Quantification of the anatomic and functional aspects of the tongue is pertinent to analyse the mechanisms involved in speech production. Speech requires dynamic and complex articulation of the vocal tract organs, and the tongue is one of the main articulators during speech production. Magnetic resonance imaging has been widely used in speech-related studies. Moreover, the segmentation of such images of speech organs is required to extract reliable statistical data. However, standard solutions to analyse a large set of articulatory images have not yet been established. Therefore, this article presents an approach to segment the tongue in two-dimensional magnetic resonance images and statistically model the segmented tongue shapes. The proposed approach assesses the articulator morphology based on an active shape model, which captures the shape variability of the tongue during speech production. To validate this new approach, a dataset of mid-sagittal magnetic resonance images acquired from four subjects was used, and key aspects of the shape of the tongue during the vocal production of relevant European Portuguese vowels were evaluated.
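The active shape model at the core of this approach reduces, in essence, to PCA over aligned landmark vectors. The sketch below uses random stand-in contours and assumes alignment to a common frame has already been done; the contour counts and the 95% variance cutoff are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training set: 30 tongue contours, each with 40 mid-sagittal
# (x, y) landmarks, assumed already aligned to a common reference frame.
contours = rng.normal(size=(30, 40, 2))
X = contours.reshape(30, -1)          # flatten to (30, 80) shape vectors

mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
var = s**2 / (len(X) - 1)             # variance captured by each shape mode

# Keep enough modes to explain 95% of the shape variance (cutoff illustrative)
k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95)) + 1
P = Vt[:k]                            # (k, 80) principal modes of variation

def synthesize(b):
    """Generate a tongue contour from k mode weights b."""
    return (mean_shape + b @ P).reshape(40, 2)

shape = synthesize(np.zeros(k))       # zero weights reproduce the mean shape
```

In segmentation, the same k weights are iteratively fitted to image evidence, which constrains the result to plausible tongue shapes.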
15

Chang, Edward F., Garret Kurteff, John P. Andrews, Robert G. Briggs, Andrew K. Conner, James D. Battiste, and Michael E. Sughrue. "Pure Apraxia of Speech After Resection Based in the Posterior Middle Frontal Gyrus." Neurosurgery 87, no. 3 (February 25, 2020): E383—E389. http://dx.doi.org/10.1093/neuros/nyaa002.

Abstract:
Abstract BACKGROUND AND IMPORTANCE Apraxia of speech is a disorder of articulatory coordination and planning in speech sound production. Its diagnosis is based on deficits in articulation, prosody, and fluency. It is often described concurrent with aphasia or dysarthria, while pure apraxia of speech is a rare entity. CLINICAL PRESENTATION A right-handed man underwent focal surgical resection of a recurrent grade III astrocytoma in the left hemisphere dorsal premotor cortex located in the posterior middle frontal gyrus. After the procedure, he experienced significant long-term speech production difficulties. A battery of standard and custom language and articulatory assessments was administered, revealing intact comprehension and naming abilities and preserved strength in the orofacial articulators, but considerable deficits in articulatory coordination, fluency, and prosody, consistent with a diagnosis of pure apraxia of speech. Tractography and resection volumes compared with publicly available imaging data from the Human Connectome Project suggest possible overlap with area 55b, an under-recognized language area in the dorsal premotor cortex that has white matter connectivity with the superior longitudinal fasciculus. CONCLUSION The case reported here details a rare clinical entity, pure apraxia of speech resulting from resection of the posterior middle frontal gyrus. While this is not a classical language area, emerging literature supports its role in the production of fluent speech, with implications for surgical planning and the general neurobiology of language.
16

Jiang, Jintao, Abeer Alwan, Patricia Keating, Lynne E. Bernstein, and Edward Auer. "On the correlation between articulatory and acoustic data." Journal of the Acoustical Society of America 108, no. 5 (November 2000): 2508. http://dx.doi.org/10.1121/1.4743268.

17

Wang, Jun, Ashok Samal, Jordan Green, and Tom Carrell. "Vowel recognition from articulatory position time‐series data." Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2498. http://dx.doi.org/10.1121/1.4783353.

18

Aryal, Sandesh, and Ricardo Gutierrez-Osuna. "Data driven articulatory synthesis with deep neural networks." Computer Speech & Language 36 (March 2016): 260–73. http://dx.doi.org/10.1016/j.csl.2015.02.003.

19

Teplansky, Kristin J., Alan Wisler, Jordan R. Green, Daragh Heitzman, Sara Austin, and Jun Wang. "Measuring Articulatory Patterns in Amyotrophic Lateral Sclerosis Using a Data-Driven Articulatory Consonant Distinctiveness Space Approach." Journal of Speech, Language, and Hearing Research 66, no. 8S (August 17, 2023): 3076–88. http://dx.doi.org/10.1044/2022_jslhr-22-00320.

Abstract:
Purpose: The aim of this study was to leverage data-driven approaches, including a novel articulatory consonant distinctiveness space (ACDS) approach, to better understand speech motor control in amyotrophic lateral sclerosis (ALS). Method: Electromagnetic articulography was used to record tongue and lip movement data during the production of 10 consonants from healthy controls (n = 15) and individuals with ALS (n = 47). To assess phoneme distinctness, speech data were analyzed using two classification algorithms, Procrustes matching (PM) and support vector machine (SVM), and the area/volume of the ACDS. Pearson's correlation coefficient was used to examine the relationship between bulbar impairment and the ACDS. Analysis of variance was used to examine the effects of bulbar impairment on consonant distinctiveness and consonant classification accuracies in clinical subgroups. Results: There was a significant relationship between the ACDS and intelligible speaking rate (area, p = .003; volume, p = .010), and the Amyotrophic Lateral Sclerosis Functional Rating Scale–Revised (ALSFRS-R) bulbar subscore (area, p = .009; volume, p = .027). Consonant classification performance followed a consistent pattern with bulbar severity, where consonants produced by speakers with more severe ALS were classified less accurately (SVM = 75.27%; PM = 74.54%) than those of the healthy, asymptomatic, and mild–moderate groups. In severe ALS, the area of the ACDS was significantly condensed compared to both the asymptomatic (p = .004) and mild–moderate (p = .013) groups. There was no statistically significant difference in area between the severe ALS group and healthy speakers (p = .292). Conclusions: Our comprehensive approach is sensitive to early oromotor changes in response to disease progression. The preserved articulatory consonant space may capture the use of compensatory adaptations to counteract the influences of neurodegeneration. Supplemental Material: https://doi.org/10.23641/asha.22044320
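The area (or volume) of a consonant distinctiveness space can be obtained from the convex hull of the consonant coordinates. The 2-D coordinates below are random placeholders for points that would, in the study, come from pairwise articulatory distances.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)

# Placeholder ACDS: each of 10 consonants is a point in a 2-D space that, in
# the study, would be derived from pairwise articulatory distances.
consonant_coords = rng.normal(size=(10, 2))

hull = ConvexHull(consonant_coords)
area = hull.volume   # for 2-D input, ConvexHull.volume is the enclosed area
                     # (ConvexHull.area would be the perimeter)
```

A shrinking hull area over repeated recordings is the kind of condensation the Results section reports for the severe ALS group.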
20

Shellikeri, Sanjana, Reeman Marzouqah, Benjamin Rix Brooks, Lorne Zinman, Jordan R. Green, and Yana Yunusova. "Psychometric Properties of Rapid Word-Based Rate Measures in the Assessment of Bulbar Amyotrophic Lateral Sclerosis: Comparisons With Syllable-Based Rate Tasks." Journal of Speech, Language, and Hearing Research 64, no. 11 (November 8, 2021): 4178–91. http://dx.doi.org/10.1044/2021_jslhr-21-00038.

Abstract:
Purpose Rapid maximum performance repetition tasks have increasingly demonstrated their utility as clinimetric markers supporting diagnosis and monitoring of bulbar disease in amyotrophic lateral sclerosis (ALS). A recently developed protocol uses novel real-word repetitions instead of traditional nonword/syllable sequences in hopes of improving sensitivity to motor speech impairments by adding a phonological target constraint that would activate a greater expanse of the motor speech neuroanatomy. This study established the psychometric properties of this novel clinimetric protocol in its assessment of bulbar ALS and compared performance to traditional syllable sequence dysdiadochokinetic (DDK) tasks. Specific objectives were to (a) compare rates between controls and speakers with symptomatic versus presymptomatic bulbar disease, (b) characterize their discriminatory ability in detecting presymptomatic bulbar disease compared to healthy speech, (c) determine their articulatory movement underpinnings, and (d) establish within-individual longitudinal changes. Method DDK and novel tongue (“ticker”—TAR) and labial (“pepper”—LAR) articulatory rates were compared between n = 18 speakers with presymptomatic bulbar disease, n = 10 speakers with symptomatic bulbar disease, and n = 13 healthy controls. Bulbar disease groups were determined by a previously validated speaking rate cutoff. Discriminatory ability was determined using receiver operating characteristic analysis. Within-individual change over time was characterized in a subset of 16 participants with available longitudinal data using linear mixed-effects models. Real-time articulatory movements of the tongue front, tongue dorsum, jaw, and lips were captured using 3-D electromagnetic articulography; effects of movement displacement and speed on clinimetric rates were determined using stepwise linear regressions. 
Results All clinimetric rates (traditional DDK tasks and novel tasks) were reduced in speakers with symptomatic bulbar disease; only TAR was reduced in speakers with presymptomatic bulbar disease and was able to detect this group with an excellent discrimination ability (area under the curve = 0.83). Kinematic analyses revealed associations with expected articulators, greater motor complexity, and differential articulatory patterns for the novel real-word repetitions than their DDK counterparts. Only LAR significantly declined longitudinally over the disease course. Conclusion Novel real-word clinimetric rate tasks evaluating tongue and labial articulatory dysfunction are valid and effective markers for early detection and tracking of bulbar disease in ALS.
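The receiver operating characteristic analysis used to report discrimination ability (the area under the curve, AUC) reduces to the rank-sum identity, which is easy to compute directly. The rate values below are invented for illustration and are not data from the study.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Invented articulatory rates (items per second): lower rates for the
# bulbar-disease group, as the abstract reports.
controls = [5.1, 4.8, 5.3, 4.9, 5.0]
patients = [4.2, 4.6, 3.9, 4.4, 4.7]
scores = np.array(controls + patients)
labels = np.array([0] * 5 + [1] * 5, bool)   # 1 = bulbar disease

auc = roc_auc(-scores, labels)   # negate: a lower rate means a higher "risk" score
# These toy groups are perfectly separated, so auc == 1.0
```

The reported AUC of 0.83 for the tongue articulatory rate corresponds to a 0.83 probability that a randomly chosen presymptomatic speaker scores above a randomly chosen control on the risk scale.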
21

Munhall, K. G., and J. A. Jones. "Articulatory evidence for syllabic structure." Behavioral and Brain Sciences 21, no. 4 (August 1998): 524–25. http://dx.doi.org/10.1017/s0140525x98391268.

Abstract:
Because the evolution of speech production is beyond our expertise (and perhaps beyond everyone's expertise) we restrict our comments to areas in which data actually exist. We provide articulatory evidence consistent with the claims made about syllable structure in adult speech and infant babbling, but we also voice some disagreement about speech errors and the typing data.
22

Byrd, Dani. "A Phase Window Framework for Articulatory Timing." Phonology 13, no. 2 (August 1996): 139–69. http://dx.doi.org/10.1017/s0952675700002086.

Abstract:
One of the most significant challenges in the study of speech production is to acquire a theoretical understanding of how speakers coordinate articulatory movements. A variety of work has demonstrated that articulatory, prosodic and extralinguistic factors all influence speech timing in a complex and interactive way. Models such as Articulatory Phonology that stipulate the relative timing of articulatory units must be revised to allow for this variability. Such a revision is outlined below. The following work should be viewed as a presentation of a new framework for conceptualising articulatory timing. This approach, meant to be programmatic rather than conclusive, is productive if it motivates research that might not otherwise have been undertaken. §1 overviews Articulatory Phonology. The implementation of articulatory timing in terms of phasing relations is discussed. Speech production data bearing on timing variability are discussed in §2. §3 argues for an alternative to Articulatory Phonology's current rule-based approach to intergestural timing that can allow for linguistic and extralinguistic variables to systematically influence phasing relations. §3.2 introduces the PHASE WINDOW framework, which allows the degree of articulatory overlap between linguistic gestures to vary within a constrained range. Finally, §4 concerns the relation of intergestural timing to the postulation of the segment as a primitive unit in phonology. It is hypothesised that certain intergestural timing relations are stable and lexically specified. Gestures whose coordination is constrained by lexical PHASE WINDOWS seem to bear a close relation to those conglomerates of gestures that constitute what is traditionally considered to be a segment.
23

Perrier, Pascal, and Susanne Fuchs. "Speed–Curvature Relations in Speech Production Challenge the 1/3 Power Law." Journal of Neurophysiology 100, no. 3 (September 2008): 1171–83. http://dx.doi.org/10.1152/jn.01116.2007.

Abstract:
Relations between tangential velocity and trajectory curvature are analyzed for tongue movements during speech production in the framework of the 1/3 power law, discovered by Viviani and colleagues for arm movements. In 2004, Tasko and Westbury found for American English that the power function provides a good account of speech kinematics, but with an exponent that varies across articulators. The present work aims at broadening Tasko and Westbury's study 1) by analyzing speed–curvature relations for various languages (French, German, Mandarin) and for a biomechanical tongue model simulating speech gestures at various speaking rates and 2) by providing for each speaker or each simulated speaking rate a comparison of results found for the complete set of movements with those found for each movement separately. It is found that the 1/3 power law offers a fair description of the global speed–curvature relations for all speakers and all languages, when articulatory speech data are considered in their whole. This is also observed in the simulations, where the motor control model does not specify any kinematic property of the articulatory paths. However, the refined analysis for individual movements reveals numerous exceptions to this law: the velocity always decreases when curvature increases, but the slope in the log–log representation is variable. It is concluded that the speed–curvature relation is not controlled in speech movements and that it accounts only for general properties of the articulatory movements, which could arise from vocal tract dynamics or/and from stochastic characteristics of the measured signals.
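The relation under test, v = K·C^(−1/3) for tangential velocity v and curvature C, is usually assessed as a linear fit in log-log coordinates. A minimal sketch of such a fit for any sampled 2-D trajectory (an illustration of the method, not the study's analysis pipeline; names are assumed):

```python
import numpy as np

def speed_curvature_exponent(x, y, dt):
    """Estimate beta in v = K * C**beta by linear regression in log-log
    space; the 1/3 power law predicts beta close to -1/3."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    v = np.hypot(vx, vy)                          # tangential speed
    curv = np.abs(vx * ay - vy * ax) / v**3       # curvature magnitude
    ok = (v > 1e-9) & (curv > 1e-9)               # guard against log(0)
    beta, _ = np.polyfit(np.log(curv[ok]), np.log(v[ok]), 1)
    return beta

# Elliptic movement at constant angular frequency obeys the law exactly,
# so the fitted exponent should come out near -1/3:
t = np.linspace(0.0, 2 * np.pi, 4000)
beta = speed_curvature_exponent(2 * np.cos(t), np.sin(t), t[1] - t[0])
```

Real articulatory movements scatter around this exponent, which is exactly the movement-by-movement variability the abstract describes.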
24

Lee, Sungbok, Dani Byrd, and Jelena Krivokapić. "Functional data analysis of prosodic effects on articulatory timing." Journal of the Acoustical Society of America 119, no. 3 (March 2006): 1666–71. http://dx.doi.org/10.1121/1.2161436.

25

McGowan, Richard S., and Philip E. Rubin. "Perceptual evaluation of articulatory movement recovered from acoustic data." Journal of the Acoustical Society of America 96, no. 5 (November 1994): 3328. http://dx.doi.org/10.1121/1.410732.

26

Collins, Michael J., Stanley C. Ahalt, and Ashok K. Krishnamurthy. "Generating gestural scores from articulatory data using temporal decomposition." Journal of the Acoustical Society of America 97, no. 5 (May 1995): 3246. http://dx.doi.org/10.1121/1.411696.

27

Kröger, Bernd J., Georg Schröder, and Claudia Opgen‐Rhein. "A gesture‐based dynamic model describing articulatory movement data." Journal of the Acoustical Society of America 98, no. 4 (October 1995): 1878–89. http://dx.doi.org/10.1121/1.413374.

28

Aron, Michaël, Marie-Odile Berger, Erwan Kerrien, Brigitte Wrobel-Dautcourt, Blaise Potard, and Yves Laprie. "Multimodal acquisition of articulatory data: Geometrical and temporal registration." Journal of the Acoustical Society of America 139, no. 2 (February 2016): 636–48. http://dx.doi.org/10.1121/1.4940666.

29

Collins, M. J., A. K. Krishnamurthy, and S. C. Ahalt. "Generating gestural scores from articulatory data using temporal decomposition." IEEE Transactions on Speech and Audio Processing 7, no. 2 (March 1999): 230–33. http://dx.doi.org/10.1109/89.748129.

30

Schmidt, Anna Marie. "Korean to English articulatory mapping: Palatometric and acoustic data." Journal of the Acoustical Society of America 95, no. 5 (May 1994): 2820–21. http://dx.doi.org/10.1121/1.409681.

31

Nam, Hosung, Vikramjit Mitra, Mark K. Tiede, Elliot Saltzman, Louis Goldstein, Carol Espy‐Wilson, and Mark Hasegawa‐Johnson. "A procedure for estimating gestural scores from articulatory data." Journal of the Acoustical Society of America 127, no. 3 (March 2010): 1851. http://dx.doi.org/10.1121/1.3384376.

32

Bultena, Sybrine. "Are You in English Gear?" Toegepaste Taalwetenschap in Artikelen 79 (January 1, 2008): 9–20. http://dx.doi.org/10.1075/ttwia.79.02bul.

Abstract:
It is assumed that the overall combination of the positioning of speech articulators such as the tongue, jaws and lips differs per language, which is commonly referred to as articulatory settings. Previous studies involving analytic listening, as well as acoustic analyses and those based on modern scanning techniques that can visualize the vocal tract, claim to have found evidence for the existence of articulatory settings; yet, thus far, none of these seems to have found unambiguous measurable evidence for language-specific settings. The present study attempts to acoustically measure differences between the settings of English and Dutch under optimal conditions, based on within-speaker comparisons of comparable vowels in similar phonetic contexts. Formant frequencies of eight different Dutch-English vowel pairs that appear in interlingual homophones produced by five advanced Dutch learners of English were measured for this purpose. Statistical analyses of the acoustic data seem to point to overall distinct patterns in the positions of Dutch and English vowels, which can be related to the language-specific settings of the two languages examined. Most of all, the outcomes of the analyses seem to highlight the dynamic nature of articulation, which can explain the difficulty previous studies have encountered.
33

Silva, Adelaide H. P., and André Nogueira Xavier. "Libras and Articulatory Phonology." Gradus - Revista Brasileira de Fonologia de Laboratório 3, no. 1 (July 31, 2018): 103–24. http://dx.doi.org/10.47627/gradus.v3i1.121.

Abstract:
This paper proposes a new approach to the phonological representation of Brazilian Sign Language (Libras). We depart from the observation that traditional analyses have overlooked features of signed languages which have no (exact) correspondence in spoken languages. Moreover, traditional approaches impose spoken language theoretical constructs on signed languages analyses and, by doing so, they disregard the possibility that signed languages follow different principles, as well as that analytical categories for spoken languages may be inaccurate for signed languages. Therefore, we argue that an approach grounded on a general theory of movement can account for signed language phonology in a more accurate way. Following Articulatory Phonology, we propose the analytical primes for a motor-oriented phonological approach to Libras, i.e., we determine which are the articulatory gestures that constitute the lexical items in a signed language. Besides, we propose a representation for the sign BEETLE-CAR in terms of a gestural score, and explain how gestures coordinate in relation to each other. As it is discussed, this approach allows us to more satisfactorily explain cases of variation attested in our data.
34

Thies, Tabea, Doris Mücke, Richard Dano, and Michael T. Barbe. "Levodopa-Based Changes on Vocalic Speech Movements during Prosodic Prominence Marking." Brain Sciences 11, no. 5 (May 4, 2021): 594. http://dx.doi.org/10.3390/brainsci11050594.

Abstract:
The present study investigates speech changes in Parkinson’s disease on the acoustic and articulatory level with respect to prosodic prominence marking. To display movements of the underlying articulators, speech data from 16 patients with Parkinson’s disease were recorded using electromagnetic articulography. Speech tasks focused on strategies of prominence marking. Patients’ ability to encode prominence in the laryngeal and supra-laryngeal domain is tested in two conditions to examine the influence of motor performance on speech production further: without dopaminergic medication and with dopaminergic medication. The data reveal that patients with Parkinson’s disease are able to highlight important information in both conditions. They maintain prominence relations across- and within-accentuation by adjusting prosodic markers, such as vowel duration and pitch modulation, while the acoustic vowel space remains the same. For differentiating across-accentuation, not only intensity but also all temporal and spatial parameters related to the articulatory tongue body movements during the production of vowels are modulated to signal prominence. In response to the levodopa intake, gross motor performance improved significantly by 42%. The improvement in gross motor performance was accompanied by an improvement in speech motor performance in terms of louder speech and shorter, larger and faster tongue body movements. The tongue body is more agile under levodopa increase, a fact that is not necessarily detectable on the acoustic level but important for speech therapy.
35

Serrurier, Antoine, and Christiane Neuschaefer-Rube. "Morphological and acoustic modeling of the vocal tract." Journal of the Acoustical Society of America 153, no. 3 (March 2023): 1867–86. http://dx.doi.org/10.1121/10.0017356.

Abstract:
In speech production, the anatomical morphology forms the substrate on which the speakers build their articulatory strategy to reach specific articulatory-acoustic goals. The aim of this study is to characterize morphological inter-speaker variability by building a shape model of the full vocal tract including hard and soft structures. Static magnetic resonance imaging data from 41 speakers articulating altogether 1947 phonemes were considered, and the midsagittal articulator contours were manually outlined. A phoneme-independent average-articulation representative of morphology was calculated as the speaker mean articulation. A principal component analysis-driven shape model was derived from average-articulations, leading to five morphological components, which explained 87% of the variance. Almost three-quarters of the variance was related to independent variations of the horizontal oral and vertical pharyngeal lengths, the latter capturing male-female differences. The three additional components captured shape variations related to head tilt and palate shape. Plane wave propagation acoustic simulations were run to characterize morphological components. A lengthening of 1 cm of the vocal tract in the vertical or horizontal directions led to a decrease in formant values of 7%–8%. Further analyses are required to analyze three-dimensional variability and to understand the morphological-acoustic relationships per phoneme. Average-articulations and model code are publicly available ( https://github.com/tonioser/VTMorphologicalModel ).
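The shape model described here is, in outline, a principal component analysis over one average midsagittal contour per speaker. A hedged sketch of that recipe (array layout and names are assumptions; the authors' released code is at the GitHub URL above):

```python
import numpy as np

def build_shape_model(contours, var_target=0.87):
    """contours: (n_speakers, 2 * n_landmarks) flattened average-articulation
    contours, one row per speaker. Returns the mean shape, the principal
    components, the per-component explained-variance ratio, and the number
    of components needed to reach var_target of the total variance."""
    X = np.asarray(contours, dtype=float)
    mean_shape = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    var_ratio = S**2 / np.sum(S**2)
    n_comp = int(np.searchsorted(np.cumsum(var_ratio), var_target)) + 1
    return mean_shape, Vt, var_ratio, n_comp
```

Each speaker's morphology is then approximated as the mean shape plus a weighted sum of the first few components, which is how the five morphological components reported above are obtained.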
36

Siriwardena, Yashish M., Nadee Seneviratne, and Carol Espy-Wilson. "Emotion recognition with speech articulatory coordination features." Journal of the Acoustical Society of America 150, no. 4 (October 2021): A358. http://dx.doi.org/10.1121/10.0008586.

Abstract:
Mental health illnesses like Major Depressive Disorder and Schizophrenia affect the coordination between articulatory gestures in speech production. Coordination features derived from Vocal tract variables (TVs) predicted by a speech inversion system can quantify the changes in articulatory gestures and have proven to be effective in the classification of mental health disorders. In this study we use data from the IEMOCAP (acted emotions) and MSP Podcast (natural emotions) datasets to understand how coordination features extracted from TVs can be used to capture changes between different emotions for the first time. We compared the eigenspectra extracted from channel delay correlation matrices for Angry, Sad and Happy emotions with respect to the “Neutral” emotion. Across both the datasets, it was observed that the “Sad” emotion follows a pattern suggesting simpler articulatory coordination while the “Angry” emotion follows the opposite showing signs of complex articulatory coordination. For the majority of subjects, the ‘Happy’ emotion follows a complex articulatory coordination pattern, but has significant confusion with “Neutral” emotion. We trained a Convolutional Neural Network with the coordination features as inputs to perform emotion classification. A detailed interpretation of the differences in eigenspectra and the results of the classification experiments will be discussed.
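The coordination features referred to here follow a general recipe: stack time-delayed copies of each vocal tract variable, correlate them, and inspect the eigenvalue spectrum of the resulting matrix, where a flatter spectrum suggests more complex coordination. A simplified sketch of that construction, with assumed parameter names, not the authors' implementation:

```python
import numpy as np

def channel_delay_eigenspectrum(tvs, n_delays=5, delay_step=1):
    """tvs: (n_channels, n_samples) array of vocal tract variable
    trajectories. Builds the channel-delay correlation matrix from
    time-shifted copies of every channel and returns its eigenvalues
    in descending order."""
    n_ch, n_samp = tvs.shape
    max_shift = (n_delays - 1) * delay_step
    rows = [tvs[ch, d * delay_step : n_samp - max_shift + d * delay_step]
            for ch in range(n_ch) for d in range(n_delays)]
    corr = np.corrcoef(np.asarray(rows))
    return np.sort(np.linalg.eigvalsh(corr))[::-1]
```

The eigenvalues always sum to the matrix dimension (n_channels × n_delays); how that mass spreads across the spectrum is the feature of interest when comparing emotions.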
37

Ren, Guofeng, Jianmei Fu, Guicheng Shao, and Yanqin Xun. "Articulatory-to-Acoustic Conversion of Mandarin Emotional Speech Based on PSO-LSSVM." Complexity 2021 (January 31, 2021): 1–10. http://dx.doi.org/10.1155/2021/8876005.

Abstract:
The production of emotional speech is determined by the movement of the speaker's tongue, lips, and jaw. In order to combine speakers' articulatory and acoustic data, articulatory-to-acoustic conversion of emotional speech has been studied. In this paper, the parameters of an LSSVM model were optimized using the PSO method, and the optimized PSO-LSSVM model was applied to articulatory-to-acoustic conversion. The root mean square error (RMSE) and mean Mel-cepstral distortion (MMCD) were used to evaluate the conversion results; the evaluation shows an MMCD of 1.508 dB for the MFCCs and an RMSE of 25.10 Hz for the second formant (F2). The results of this research can be further applied to feature fusion in emotional speech recognition to improve the accuracy of emotion recognition.
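Both evaluation metrics reported in this abstract have standard closed forms: RMSE, and a per-frame mel-cepstral distortion averaged over frames. A minimal sketch follows; the (10/ln 10)·sqrt(2·Σd²) MCD convention and the exclusion of the energy coefficient c0 are assumptions, as the abstract does not state them:

```python
import numpy as np

def rmse(ref, est):
    """Root mean square error between two equally sized sequences."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((ref - est) ** 2)))

def mean_mcd(ref_mfcc, est_mfcc):
    """Mean mel-cepstral distortion in dB. Inputs: (n_frames, n_coeffs)
    MFCC matrices; the 0th (energy) coefficient is excluded."""
    diff = np.asarray(ref_mfcc, float)[:, 1:] - np.asarray(est_mfcc, float)[:, 1:]
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(per_frame.mean())
```

RMSE is reported here in the unit of the compared signal (Hz for F2 trajectories), while MCD is a log-spectral distance in dB.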
38

Gibbon, Fiona E., and Alice Lee. "Using EPG data to display articulatory separation for phoneme contrasts." Clinical Linguistics & Phonetics 25, no. 11-12 (October 3, 2011): 1014–21. http://dx.doi.org/10.3109/02699206.2011.601393.

39

Hutchins, Sandra E. "Method and apparatus for determining articulatory parameters from speech data." Journal of the Acoustical Society of America 91, no. 6 (June 1992): 3594. http://dx.doi.org/10.1121/1.402800.

40

Laprie, Yves. "An articulatory model of the velum developed from cineradiographic data." Journal of the Acoustical Society of America 137, no. 4 (April 2015): 2269. http://dx.doi.org/10.1121/1.4920288.

41

Gonzalez, Jose A., Lam A. Cheah, Angel M. Gomez, Phil D. Green, James M. Gilbert, Stephen R. Ell, Roger K. Moore, and Ed Holdsworth. "Direct Speech Reconstruction From Articulatory Sensor Data by Machine Learning." IEEE/ACM Transactions on Audio, Speech, and Language Processing 25, no. 12 (December 2017): 2362–74. http://dx.doi.org/10.1109/taslp.2017.2757263.

42

Mooshammer, Christine R., Louis Goldstein, Mark Tiede, Manisha Kulshreshtha, Scott McClure, and Argyro Katsika. "Planning time effects of phonological competition: Articulatory and acoustic data." Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2657. http://dx.doi.org/10.1121/1.4784180.

43

Badino, Leonardo, Claudia Canevari, Luciano Fadiga, and Giorgio Metta. "Integrating articulatory data in deep neural network-based acoustic modeling." Computer Speech & Language 36 (March 2016): 173–95. http://dx.doi.org/10.1016/j.csl.2015.05.005.

44

Kröger, Bernd J., Julia Gotto, Susanne Albert, and Christiane Neuschaefer-Rube. "visual articulatory model and its application to therapy of speech disorders: a pilot study." ZAS Papers in Linguistics 40 (January 1, 2005): 79–94. http://dx.doi.org/10.21248/zaspil.40.2005.259.

Abstract:
A visual articulatory model based on static MRI data of isolated sounds, and its application in the therapy of speech disorders, is described. The model is capable of generating video sequences of articulatory movements or still images of articulatory target positions within the midsagittal plane. On the basis of this model, (1) a visual stimulation technique for the therapy of patients suffering from speech disorders and (2) a rating test for visual recognition of speech movements were developed. Results indicate that patients achieve recognition rates above chance level even without any training, and that they are capable of significantly increasing their recognition rate over the course of therapy.
45

Cuzzocrea, Alfredo, Enzo Mumolo, and Giorgio Mario Grasso. "An Effective and Efficient Genetic-Fuzzy Algorithm for Supporting Advanced Human-Machine Interfaces in Big Data Settings." Algorithms 13, no. 1 (December 31, 2019): 13. http://dx.doi.org/10.3390/a13010013.

Abstract:
In this paper we describe a novel algorithm, inspired by the mirror neuron discovery, to support automatic learning oriented to advanced man-machine interfaces. The algorithm introduces several points of innovation, based on complex similarity metrics that involve different characteristics of the entire learning process. In more detail, the proposed approach deals with a humanoid robot algorithm suited for automatic vocalization acquisition from a human tutor. The learned vocalization can be used for multi-modal reproduction of speech, as the articulatory and acoustic parameters that compose the vocalization database can be used to synthesize unrestricted speech utterances and to reproduce the automatically synchronized articulatory and facial movements of the humanoid talking face. The algorithm uses fuzzy articulatory rules, which describe transitions between phonemes derived from the International Phonetic Alphabet (IPA), to allow simpler adaptation to different languages, and genetic optimization of the membership degrees. An extensive experimental evaluation and analysis of the proposed algorithm on synthetic and real data sets confirms the benefits of our proposal. Indeed, experimental results show that the acquired vocalizations respect the basic phonetic rules of the Italian language, and subjective results show the effectiveness of multi-modal speech production with automatic synchronization between facial movements and speech emissions. The algorithm has been applied to a virtual speaking face, but it may also be used in mechanical vocalization systems.
46

Albuquerque, Luciana, Ana Rita Valente, Fábio Barros, António Teixeira, Samuel Silva, Paula Martins, and Catarina Oliveira. "Exploring the Age Effects on European Portuguese Vowel Production: An Ultrasound Study." Applied Sciences 12, no. 3 (January 28, 2022): 1396. http://dx.doi.org/10.3390/app12031396.

Abstract:
For aging speech, there is limited knowledge regarding the articulatory adjustments underlying the acoustic findings observed in previous studies. In order to investigate age-related articulatory differences in European Portuguese (EP) vowels, the present study analyzes the tongue configuration of the nine EP oral vowels (in isolated context and in pseudoword context) produced by 10 female speakers of two different age groups (young and old). Two parameters, tongue height and tongue advancement, were extracted from tongue contours that were automatically segmented from the ultrasound (US) images and manually revised. The results suggest that the tongue tends to be higher and more advanced for the older females compared to the younger ones for almost all vowels. Thus, the vowel articulatory space tends to become higher, more advanced, and bigger with age. For older females, unlike younger females, who presented a sharp reduction of the articulatory vowel space in disyllabic sequences, the vowel space tends to be more advanced for isolated vowels compared with vowels produced in disyllabic sequences. This study extends our pilot research by reporting articulatory data from more speakers, based on an improved automatic method for tracing tongue contours, and performs an inter-speaker comparison through the application of a novel normalization procedure.
47

Kuruvilla-Dugdale, Mili, Claire Custer, Lindsey Heidrick, Richard Barohn, and Raghav Govindarajan. "A Phonetic Complexity-Based Approach for Intelligibility and Articulatory Precision Testing: A Preliminary Study on Talkers With Amyotrophic Lateral Sclerosis." Journal of Speech, Language, and Hearing Research 61, no. 9 (September 19, 2018): 2205–14. http://dx.doi.org/10.1044/2018_jslhr-s-17-0462.

Abstract:
Purpose This study describes a phonetic complexity-based approach for speech intelligibility and articulatory precision testing using preliminary data from talkers with amyotrophic lateral sclerosis. Method Eight talkers with amyotrophic lateral sclerosis and 8 healthy controls produced a list of 16 low and high complexity words. Sixty-four listeners judged the samples for intelligibility, and 2 trained listeners completed phoneme-level analysis to determine articulatory precision. To estimate percent intelligibility, listeners orthographically transcribed each word, and the transcriptions were scored as being either accurate or inaccurate. Percent articulatory precision was calculated based on the experienced listeners' judgments of phoneme distortions, deletions, additions, and/or substitutions for each word. Articulation errors were weighted based on the perceived impact on intelligibility to determine word-level precision. Results Between-groups differences in word intelligibility and articulatory precision were significant at lower levels of phonetic complexity as dysarthria severity increased. Specifically, more severely impaired talkers showed significant reductions in word intelligibility and precision at both complexity levels, whereas those with milder speech impairments displayed intelligibility reductions only for more complex words. Articulatory precision was less sensitive to mild dysarthria compared to speech intelligibility for the proposed complexity-based approach. Conclusions Considering phonetic complexity for dysarthria tests could result in more sensitive assessments for detecting and monitoring dysarthria progression.
48

Campbell, Jessica, Dani Byrd, and Louis Goldstein. "Frequency stability of articulatory and acoustic modulation functions." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A288. http://dx.doi.org/10.1121/10.0016304.

Abstract:
During speech perception, neural activity entrains with moments of high acoustic change. For example, periods of high change in speech amplitude envelope magnitude are tracked by neurons in the human superior temporal gyrus. However, it is unknown whether neural entrainment may also be driven by modulation in the articulatory domain. To locate periods of high articulatory change, a spatiotemporal modulation function (Goldstein, 2019) that quantifies change over time in global vocal tract posture can be used to investigate the potential for such entrainment. Here, the frequency patterning and stability of modulation maxima, called “pulses,” are assessed using articulatory point-tracking data. The median frequency of both articulatory and acoustic pulses is found to be only slightly higher than theta band frequencies (6–8 Hz), at which neural entrainment with speech has been reported. Within- and between-speaker variability of inter-pulse intervals is also compared to the variability of acoustic syllable and acoustic stress foot durations. The results show that intervals between pulses are more stable than syllable and foot durations. In sum, the spatiotemporal modulation function exhibits a stable frequency profile in the articulatory and acoustic domains that could be leveraged in the neurocognitive functions at work in speech perception. [Work supported by the NIH.]
49

Dugan, Sarah, Sarah R. Li, Kathryn Eary, AnnaKate Spotts, Nicholas S. Schoenleb, Ben Connolly, Renee Seward, Michael A. Riley, T. Douglas Mast, and Suzanne Boyce. "Articulatory response to delayed and real-time feedback based on regional tongue displacements." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A199. http://dx.doi.org/10.1121/10.0016021.

Abstract:
Speech is one of the most complex motor tasks, due to its rapid timing and necessary precision. Because articulatory movement is hard to measure in real time, motion-based biofeedback for speech has been difficult to investigate. Previously, we demonstrated the use of an automatic measure of tongue movement accuracy from ultrasound imaging. Using this measure for articulatory biofeedback in a simplified, game-like display may benefit the learning of speech movement patterns. To better understand real-time articulatory biofeedback and improve the design of this display, this study presented articulatory biofeedback for the target word /ɑr/ (“are”) in a game with two conditions for feedback timing (delayed and concurrent, indicating whether the game object started moving after or during speech production) and for difficulty level (easy and hard target width, indicating the articulatory precision necessary for achieving the target). For each participant, two blocks of biofeedback for 20–50 productions were presented in one collection session (randomizing whether the delayed or concurrent block came first), with the difficulty level randomized for each production within each block. Data from nine children with typical speech or residual speech sound disorder were analyzed, showing that response to, and preference for, the feedback conditions vary among individuals.
50

Richmond, Korin, Zhenhua Ling, and Junichi Yamagishi. "The use of articulatory movement data in speech synthesis applications: An overview — Application of articulatory movements using machine learning algorithms —." Acoustical Science and Technology 36, no. 6 (2015): 467–77. http://dx.doi.org/10.1250/ast.36.467.
