To view the other types of publications on this topic, follow this link: Automatic diagnosis of speech disorder.

Journal articles on the topic "Automatic diagnosis of speech disorder"

Cite a source using APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for research on the topic "Automatic diagnosis of speech disorder".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work if the relevant parameters are available in its metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Sarria Paja, Milton Orlando. "Automatic detection of Parkinson's disease from components of modulators in speech signals". Computer and Electronic Sciences: Theory and Applications 1, no. 1 (14.12.2020): 71–82. http://dx.doi.org/10.17981/cesta.01.01.2020.05.

Abstract:
Parkinson's disease (PD) is the second most common neurodegenerative disorder after Alzheimer's disease. This disorder mainly affects older adults at a rate of about 2%, and about 89% of people diagnosed with PD also develop speech disorders. This has led the scientific community to research the information embedded in the speech signals of Parkinson's patients, which allows not only a diagnosis of the pathology but also a follow-up of its evolution. In recent years, a large number of studies have focused on the automatic detection of voice-related pathologies in order to make objective evaluations of the voice in a non-invasive manner. In cases where the pathology primarily affects the vibratory patterns of the vocal folds, as in Parkinson's, the analyses are typically performed on sustained vowel pronunciations. This article proposes to use information from slow and rapid variations in speech signals, also known as modulating components, combined with an effective dimensionality reduction approach whose output is used as input to the classification system. The proposed approach achieves classification rates higher than 88%, surpassing the classical approach based on mel-frequency cepstral coefficients (MFCCs). The results show that the information extracted from slowly varying components is highly discriminative for the task at hand and could support assisted diagnosis systems for PD.
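
To make the MFCC baseline mentioned in this abstract concrete, here is a minimal sketch (not code from the cited paper): summarize each sustained-vowel recording by MFCC statistics and feed them to an SVM. The file names, labels, and hyperparameters are placeholders.

```python
# Illustrative MFCC + SVM baseline for voice-based PD screening (assumed data).
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_stats(path, sr=16000, n_mfcc=13):
    """Summarize one sustained-vowel recording as the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder file list and labels: 1 = Parkinson's, 0 = healthy control.
wav_paths = ["pd_subject_01.wav", "hc_subject_01.wav"]
labels = np.array([1, 0])

X = np.vstack([mfcc_stats(p) for p in wav_paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)                 # in practice, cross-validate on a real corpus
print(clf.predict(X))
```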
2

Mesallam, Tamer A., Mohamed Farahat, Khalid H. Malki, Mansour Alsulaiman, Zulfiqar Ali, Ahmed Al-nasheri, and Ghulam Muhammad. "Development of the Arabic Voice Pathology Database and Its Evaluation by Using Speech Features and Machine Learning Algorithms". Journal of Healthcare Engineering 2017 (2017): 1–13. http://dx.doi.org/10.1155/2017/8783751.

Abstract:
A voice disorder database is an essential element in doing research on automatic voice disorder detection and classification. Ethnicity affects the voice characteristics of a person, and so it is necessary to develop a database by collecting the voice samples of the targeted ethnic group. This will enhance the chances of arriving at a global solution for the accurate and reliable diagnosis of voice disorders by understanding the characteristics of a local group. Motivated by this idea, an Arabic voice pathology database (AVPD) is designed and developed in this study by recording three vowels, running speech, and isolated words. For each recorded sample, the perceptual severity is also provided, which is a unique aspect of the AVPD. During the development of the AVPD, the shortcomings of different voice disorder databases were identified so that they could be avoided in the AVPD. In addition, the AVPD is evaluated by using six different types of speech features and four types of machine learning algorithms. The results of detection and classification of voice disorders obtained with the sustained vowel and the running speech are also compared with the results of an English-language disorder database, the Massachusetts Eye and Ear Infirmary (MEEI) database.
3

Walker, Traci, Heidi Christensen, Bahman Mirheidari, Thomas Swainston, Casey Rutten, Imke Mayer, Daniel Blackburn, and Markus Reuber. "Developing an intelligent virtual agent to stratify people with cognitive complaints: A comparison of human–patient and intelligent virtual agent–patient interaction". Dementia 19, no. 4 (14.09.2018): 1173–88. http://dx.doi.org/10.1177/1471301218795238.

Abstract:
Previous work on interactions in the memory clinic has shown that conversation analysis can be used to differentiate neurodegenerative dementia from functional memory disorder. Based on this work, a screening system was developed that uses a computerised ‘talking head’ (intelligent virtual agent) and a combination of automatic speech recognition and conversation analysis-informed programming. This system can reliably differentiate patients with functional memory disorder from those with neurodegenerative dementia by analysing the way they respond to questions from either a human doctor or the intelligent virtual agent. However, much of this computerised analysis has relied on simplistic, nonlinguistic phonetic features such as the length of pauses between talk by the two parties. To gain confidence in automation of the stratification procedure, this paper investigates whether the patients’ responses to questions asked by the intelligent virtual agent are qualitatively similar to those given in response to a doctor. All the participants in this study have a clear functional memory disorder or neurodegenerative dementia diagnosis. Analyses of patients’ responses to the intelligent virtual agent showed similar, diagnostically relevant sequential features to those found in responses to doctors’ questions. However, since the intelligent virtual agent’s questions are invariant, its use results in more consistent responses across people – regardless of diagnosis – which facilitates automatic speech recognition and makes it easier for a machine to learn patterns. Our analysis also shows why doctors do not always ask the same question in the exact same way to different patients. This sensitivity and adaptation to nuances of conversation may be interactionally helpful; for instance, altering a question may make it easier for patients to understand. While we demonstrate that some of what is said in such interactions is bound to be constructed collaboratively between doctor and patient, doctors could consider ensuring that certain, particularly important and/or relevant questions are asked in as invariant a form as possible to be better able to identify diagnostically relevant differences in patients’ responses.
4

Tawhid, Md Nurul Ahad, Siuly Siuly, Hua Wang, Frank Whittaker, Kate Wang, and Yanchun Zhang. "A spectrogram image based intelligent technique for automatic detection of autism spectrum disorder from EEG". PLOS ONE 16, no. 6 (25.06.2021): e0253094. http://dx.doi.org/10.1371/journal.pone.0253094.

Abstract:
Autism spectrum disorder (ASD) is a developmental disability characterized by persistent impairments in social interaction, speech and nonverbal communication, and restricted or repetitive behaviors. Currently, electroencephalography (EEG) is the most popular tool to inspect for neurological disorders like autism due to its low setup cost, high temporal resolution and wide availability. Generally, EEG recordings produce a vast amount of data with dynamic behavior, which is visually analyzed by professional clinicians to detect autism. This is laborious, expensive, subjective, error-prone and has reliability issues. Therefore, this study intends to develop an efficient diagnostic framework based on time-frequency spectrogram images of EEG signals to automatically identify ASD. In the proposed system, primarily, the raw EEG signals are pre-processed using re-referencing, filtering and normalization. Then, the Short-Time Fourier Transform is used to transform the pre-processed signals into two-dimensional spectrogram images. Afterward, those images are evaluated by machine learning (ML) and deep learning (DL) models separately. In the ML process, textural features are extracted, significant features are selected using principal component analysis, and these are fed to six different ML classifiers for classification. In the DL process, three different convolutional neural network models are tested. The proposed DL based model achieves higher accuracy (99.15%) compared to the ML based model (95.25%) on an ASD EEG dataset and also outperforms existing methods. The findings of this study suggest that the DL based structure could discover important biomarkers for efficient and automatic diagnosis of ASD from EEG and may assist to develop computer-aided diagnosis system.
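
The preprocessing step at the core of this pipeline, converting an EEG segment into a time-frequency spectrogram image, can be sketched as follows. This is an illustration only: the signal is synthetic, and the sampling rate and STFT window are assumptions, not the parameters used in the paper.

```python
# Turn a synthetic single-channel EEG segment into a spectrogram image (illustrative).
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

fs = 256                                    # assumed EEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy alpha-band activity

# Short-Time Fourier Transform -> 2-D time-frequency representation
f, seg_t, Sxx = spectrogram(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)

plt.pcolormesh(seg_t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time [s]"); plt.ylabel("Frequency [Hz]")
plt.savefig("eeg_spectrogram.png")          # the image a CNN or texture features could consume
```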
5

Chui, Kwok Tai, Miltiadis D. Lytras, and Pandian Vasant. "Combined Generative Adversarial Network and Fuzzy C-Means Clustering for Multi-Class Voice Disorder Detection with an Imbalanced Dataset". Applied Sciences 10, no. 13 (01.07.2020): 4571. http://dx.doi.org/10.3390/app10134571.

Abstract:
The world has witnessed the success of artificial intelligence deployment for smart healthcare applications. Various studies have suggested that the prevalence of voice disorders in the general population is greater than 10%. An automatic diagnosis for voice disorders via machine learning algorithms is desired to reduce the cost and time needed for examination by doctors and speech-language pathologists. In this paper, a conditional generative adversarial network (CGAN) and improved fuzzy c-means clustering (IFCM) algorithm called CGAN-IFCM is proposed for the multi-class voice disorder detection of three common types of voice disorders. Existing benchmark datasets for voice disorders, the Saarbruecken Voice Database (SVD) and the Voice ICar fEDerico II Database (VOICED), use imbalanced classes. A generative adversarial network offers synthetic data to reduce bias in the detection model. Improved fuzzy c-means clustering considers the relationship between adjacent data points in the fuzzy membership function. To explain the necessity of CGAN and IFCM, a comparison is made between the algorithm with CGAN and that without CGAN. Moreover, the performance is compared between IFCM and traditional fuzzy c-means clustering. Lastly, the proposed CGAN-IFCM outperforms existing models in its true negative rate and true positive rate by 9.9–12.9% and 9.1–44.8%, respectively.
6

Beavis, Lizzie, Ronan O'Malley, Bahman Mirheidari, Heidi Christensen, and Daniel Blackburn. "How can automated linguistic analysis help to discern functional cognitive disorder from healthy controls and mild cognitive impairment?" BJPsych Open 7, S1 (June 2021): S7. http://dx.doi.org/10.1192/bjo.2021.78.

Abstract:
Aims The disease burden of cognitive impairment is significant and increasing. The aetiology of cognitive impairment can be structural, such as in mild cognitive impairment (MCI) due to early Alzheimer's disease (AD), or in functional cognitive disorder (FCD), where there is no structural pathology. Many people with FCD receive a delayed diagnosis following invasive or costly investigations. Accurate, timely diagnosis improves outcomes across all patients with cognitive impairment. Research suggests that analysis of linguistic features of speech may provide a non-invasive diagnostic tool. This study aimed to investigate the linguistic differences in conversations between people with early signs of cognitive impairment with and without structural pathology, with a view to developing a screening tool using linguistic analysis of conversations. Method In this explorative, cross-sectional study, we recruited 25 people with MCI considered likely due to AD (diagnosed according to Petersen's criteria and referred to as PwMCI), 25 healthy controls (HCs) and 15 people with FCD (PwFCD). Participants' responses to a standard questionnaire asked by an interactional virtual agent (Digital Doctor) were quantified using previously identified parameters. This paper presents statistical analyses of the responses and a discussion of the results. Result PwMCI produced fewer words than PwFCD and HCs. The ratio of pauses to speech was generally lower for PwMCI and PwFCD than for HCs. PwMCI showed a greater pause to speech ratio for recent questions (such as 'what did you do at the weekend?') compared with the HCs. Those with FCD showed the greatest pause to speech ratio in remote memory questions (such as 'what was your first job?'). The average age of acquisition of answers for verbal fluency questions was lower in the MCI group than HCs. Conclusion The results and qualitative observations support the relative preservation of remote memory compared to recent memory in MCI due to AD and decreased spontaneous elaboration in MCI compared with healthy controls and patients with FCD. Word count, age of acquisition and pause to speech ratio could form part of a diagnostic toolkit in identifying those with structural and functional causes of cognitive impairment. Further investigation is required using a large sample.
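
The pause-to-speech ratio discussed above can be approximated with a simple energy-based voice-activity sketch like the one below. This is not the study's analysis pipeline; the file name, frame size, and threshold are illustrative assumptions.

```python
# Rough pause-to-speech ratio from short-time energy (illustrative, assumed recording).
import numpy as np
import librosa

y, sr = librosa.load("interview_response.wav", sr=16000)   # placeholder file
frame, hop = int(0.025 * sr), int(0.010 * sr)               # 25 ms frames, 10 ms hop

energy = np.array([np.sum(y[i:i + frame] ** 2)
                   for i in range(0, len(y) - frame, hop)])
threshold = 0.1 * np.median(energy[energy > 0])              # crude silence threshold

pause_frames = np.sum(energy < threshold)
speech_frames = np.sum(energy >= threshold)
print("pause-to-speech ratio:", pause_frames / max(speech_frames, 1))
```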
7

Di Matteo, Daniel, Wendy Wang, Kathryn Fotinos, Sachinthya Lokuge, Julia Yu, Tia Sternat, Martin A. Katzman, and Jonathan Rose. "Smartphone-Detected Ambient Speech and Self-Reported Measures of Anxiety and Depression: Exploratory Observational Study". JMIR Formative Research 5, no. 1 (29.01.2021): e22723. http://dx.doi.org/10.2196/22723.

Abstract:
Background The ability to objectively measure the severity of depression and anxiety disorders in a passive manner could have a profound impact on the way in which these disorders are diagnosed, assessed, and treated. Existing studies have demonstrated links between both depression and anxiety and the linguistic properties of words that people use to communicate. Smartphones offer the ability to passively and continuously detect spoken words to monitor and analyze the linguistic properties of speech produced by the speaker and other sources of ambient speech in their environment. The linguistic properties of automatically detected and recognized speech may be used to build objective severity measures of depression and anxiety. Objective The aim of this study was to determine if the linguistic properties of words passively detected from environmental audio recorded using a participant’s smartphone can be used to find correlates of symptom severity of social anxiety disorder, generalized anxiety disorder, depression, and general impairment. Methods An Android app was designed to collect periodic audiorecordings of participants’ environments and to detect English words using automatic speech recognition. Participants were recruited into a 2-week observational study. The app was installed on the participants’ personal smartphones to record and analyze audio. The participants also completed self-report severity measures of social anxiety disorder, generalized anxiety disorder, depression, and functional impairment. Words detected from audiorecordings were categorized, and correlations were measured between words counts in each category and the 4 self-report measures to determine if any categories could serve as correlates of social anxiety disorder, generalized anxiety disorder, depression, or general impairment. Results The participants were 112 adults who resided in Canada from a nonclinical population; 86 participants yielded sufficient data for analysis. Correlations between word counts in 67 word categories and each of the 4 self-report measures revealed a strong relationship between the usage rates of death-related words and depressive symptoms (r=0.41, P<.001). There were also interesting correlations between rates of word usage in the categories of reward-related words with depression (r=–0.22, P=.04) and generalized anxiety (r=–0.29, P=.007), and vision-related words with social anxiety (r=0.31, P=.003). Conclusions In this study, words automatically recognized from environmental audio were shown to contain a number of potential associations with severity of depression and anxiety. This work suggests that sparsely sampled audio could provide relevant insight into individuals’ mental health.
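
The statistical core of this design, correlating per-category word-usage rates with questionnaire scores, reduces to a few lines. The category lexicon, transcripts, and scores below are invented for illustration and are not the study's data.

```python
# Correlate usage rates of one word category with a symptom-severity score (toy data).
import numpy as np
from scipy.stats import pearsonr

death_words = {"die", "dead", "death", "grave", "funeral"}   # toy category lexicon

def category_rate(tokens, lexicon):
    """Fraction of recognized words that belong to the category."""
    tokens = [w.lower() for w in tokens]
    return sum(w in lexicon for w in tokens) / max(len(tokens), 1)

transcripts = [["we", "talked", "about", "death"], ["nice", "sunny", "walk"],
               ["the", "funeral", "was", "sad"], ["great", "dinner", "tonight"]]
depression_scores = np.array([18.0, 4.0, 15.0, 3.0])          # invented self-report scores

rates = np.array([category_rate(t, death_words) for t in transcripts])
r, p = pearsonr(rates, depression_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```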
8

Panek, Daria, Andrzej Skalski, Janusz Gajda, and Ryszard Tadeusiewicz. "Acoustic analysis assessment in speech pathology detection". International Journal of Applied Mathematics and Computer Science 25, no. 3 (01.09.2015): 631–43. http://dx.doi.org/10.1515/amcs-2015-0046.

Abstract:
Abstract Automatic detection of voice pathologies enables non-invasive, low cost and objective assessments of the presence of disorders, as well as accelerating and improving the process of diagnosis and clinical treatment given to patients. In this work, a vector made up of 28 acoustic parameters is evaluated using principal component analysis (PCA), kernel principal component analysis (kPCA) and an auto-associative neural network (NLPCA) in four kinds of pathology detection (hyperfunctional dysphonia, functional dysphonia, laryngitis, vocal cord paralysis) using the a, i and u vowels, spoken at a high, low and normal pitch. The results indicate that the kPCA and NLPCA methods can be considered a step towards pathology detection of the vocal folds. The results show that such an approach provides acceptable results for this purpose, with the best efficiency levels of around 100%. The study brings the most commonly used approaches to speech signal processing together and leads to a comparison of the machine learning methods determining the health status of the patient
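
The dimensionality-reduction step named here corresponds to standard linear and kernel PCA. A minimal sketch with placeholder data follows (28 acoustic parameters per recording, as in the paper; the values themselves are random):

```python
# Reduce a matrix of acoustic parameters with linear PCA and kernel PCA (toy data).
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 28))               # 60 recordings x 28 acoustic parameters
X = StandardScaler().fit_transform(X)

X_pca = PCA(n_components=5).fit_transform(X)                         # linear projection
X_kpca = KernelPCA(n_components=5, kernel="rbf").fit_transform(X)    # nonlinear projection

print(X_pca.shape, X_kpca.shape)            # (60, 5) each; inputs to a downstream classifier
```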
9

Bone, Daniel, Chi-Chun Lee, Matthew P. Black, Marian E. Williams, Sungbok Lee, Pat Levitt, and Shrikanth Narayanan. "The Psychologist as an Interlocutor in Autism Spectrum Disorder Assessment: Insights From a Study of Spontaneous Prosody". Journal of Speech, Language, and Hearing Research 57, no. 4 (August 2014): 1162–77. http://dx.doi.org/10.1044/2014_jslhr-s-13-0062.

Abstract:
Purpose The purpose of this study was to examine relationships between prosodic speech cues and autism spectrum disorder (ASD) severity, hypothesizing a mutually interactive relationship between the speech characteristics of the psychologist and the child. The authors objectively quantified acoustic-prosodic cues of the psychologist and of the child with ASD during spontaneous interaction, establishing a methodology for future large-sample analysis. Method Speech acoustic-prosodic features were semiautomatically derived from segments of semistructured interviews (Autism Diagnostic Observation Schedule, ADOS; Lord, Rutter, DiLavore, & Risi, 1999; Lord et al., 2012) with 28 children who had previously been diagnosed with ASD. Prosody was quantified in terms of intonation, volume, rate, and voice quality. Research hypotheses were tested via correlation as well as hierarchical and predictive regression between ADOS severity and prosodic cues. Results Automatically extracted speech features demonstrated prosodic characteristics of dyadic interactions. As rated ASD severity increased, both the psychologist and the child demonstrated effects for turn-end pitch slope, and both spoke with atypical voice quality. The psychologist's acoustic cues predicted the child's symptom severity better than did the child's acoustic cues. Conclusion The psychologist, acting as evaluator and interlocutor, was shown to adjust his or her behavior in predictable ways based on the child's social-communicative impairments. The results support future study of speech prosody of both interaction partners during spontaneous conversation, while using automatic computational methods that allow for scalable analysis on much larger corpora.
10

Cantürk, İsmail. "A Feature Driven Intelligent System for Neurodegenerative Disorder Detection: An Application on Speech Dataset for Diagnosis of Parkinson's Disease". International Journal on Artificial Intelligence Tools 30, no. 03 (May 2021): 2150011. http://dx.doi.org/10.1142/s0218213021500111.

Abstract:
Parkinson's disease (PD) is a prevalent and progressive neurological disorder. Due to its motor and non-motor symptoms, the disease lowers patients' quality of life. Tremor, rigidity, depression, and anxiety are among the symptoms. Clinical diagnosis of PD is usually based on the appearance of motor features. Additionally, different empirical tests have been proposed by scholars for early detection of the disease. It is known that people with PD have speech impairments; therefore, voice tests are used for early detection of the disease. In this study, an automated machine learning system was proposed for high-accuracy classification of the speech signals of PD patients. The system includes feature reduction methods and classification algorithms. Feature reductions and classifications were performed for all participants, males, and females separately. Contributions of feature sets to classification accuracy were discussed. Experimental results were evaluated with different performance metrics. The proposed system obtained state-of-the-art results in all categories, with better performance for gender-based classifications.
11

Mota, Natália, Mauro Copelli, and Sidarta Ribeiro. "S110. INTERESTING HAPPY THOUGHTS: STRUCTURAL, SEMANTIC AND EMOTIONAL ANALYSIS OF PSYCHOTIC SPEECH USING TIME-LIMITED POSITIVE IMAGE NARRATIVES". Schizophrenia Bulletin 46, Supplement_1 (April 2020): S76. http://dx.doi.org/10.1093/schbul/sbaa031.176.

Abstract:
Abstract Background Speech and language analysis from free speech protocols has recently provided a discriminative signal, useful for early diagnosis of schizophrenia. Although different aspects of language (such as structural and semantic coherence) have been applied to different contexts using different data collection protocols, we need to standardize a safe and minimum-effort protocol that can reveal discriminative data, enabling large and remote dataset collection. Also, it is important to understand the correlations between semantic, structural and emotional analysis from the same dataset. In the past decade, we have developed a non-semantic structural analysis based on graph theory that was able to automatically discriminate speech samples from patients with schizophrenia diagnosis with more than 90% accuracy in chronic, first-episode patients, and in different languages. Moreover, we could verify correlations of structural attributes with negative symptoms, as well as with cognitive performances in patients and in typical children at regular school time. But the most predictive contents were a dream report (sometimes absent) or a negative image report (which could cause a psychological burden for some subjects). The current project aims to verify the accuracy to discriminate schizophrenia reports from 3 different positive image prompts, using a minimum of 30 seconds reports. Furthermore, we want to verify correlates between semantic, structural and emotional analysis. Methods We analyzed reports of 3 positive images from 31 subjects (10 matched controls and 21 at the first episode of psychosis - 11 with schizophrenia and 10 with bipolar disorder as a final diagnosis after 6 months of follow-up). We performed speech graph analysis to extract speech connectedness attributes. Next, we combined connectedness measures from the 3 prompts (after extracting collinear measures) to create a disorganization index (performing multilinear correlation with the PANSS negative subscale). We used this index as an input to a machine learning classifier to verify the accuracy to discriminate reports from the schizophrenia group. Finally, we studied the correlations between the connectedness-based disorganization index and minimum semantic coherence between consecutive sentences, and the emotional intensity measured by the proportion of emotional words. Results Speech connectedness of positive image reports was correlated with negative symptomatology severity measured by PANSS negative subscale (R2 = 0.73, p = 0.0160), and the disorganization index was able to discriminate the subjects diagnosed with schizophrenia disorder six months later with AUC = 0.82. Moreover, the disorganization index was negatively correlated with positive emotional intensity (Rho = -0.48, p = 0.0061), but not correlated with minimum semantic coherence (Rho = -0.06, p = 0.7442). Emotional intensity was not correlated with minimum semantic coherence (Rho = 0.17, p = 0.3458). Discussion This safe, short and standardized data collection protocol seems to be informative and reveals an interdependent relationship between different aspects of computational language analysis. With less than two minutes of oral speech data, we can accurately discriminate reports from the schizophrenia group at the first interview, and verify that the less connected the report, the fewer positive emotional words are used. 
Future directions point to the feasibility of automatic and remote access of a large and diverse population, allowing the upscaling of this type of assessment to big data.
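
The word-graph connectedness idea can be illustrated in a few lines (this is not the authors' code): each word becomes a node, consecutive words are linked, and the sizes of the largest connected and strongly connected components serve as connectedness attributes.

```python
# Build a directed word graph from a short narrative and measure its connectedness (toy text).
import networkx as nx

transcript = "the boy sees a dog the dog runs and the boy runs too".split()

G = nx.DiGraph()
G.add_edges_from(zip(transcript[:-1], transcript[1:]))     # edge = word-to-word transition

lcc = max(len(c) for c in nx.connected_components(G.to_undirected()))
lsc = max(len(c) for c in nx.strongly_connected_components(G))
print(f"nodes={G.number_of_nodes()}, edges={G.number_of_edges()}, LCC={lcc}, LSC={lsc}")
```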
12

Jahan, Sultana. "83 Catatonia in a 17-year-old Male Patient with Bipolar Disorder, a Case Study". CNS Spectrums 24, no. 1 (February 2019): 217. http://dx.doi.org/10.1017/s1092852919000622.

Abstract:
Abstract Study Objective(s) Catatonia is not only present in adults; children and adolescents can suffer from catatonia but are often misdiagnosed. A study by Ghaziuddin, Dhossche and Marcotte (2012) found that 18 of the 101 child and adolescent patients had symptoms of catatonia, but only 2 actually had been given a diagnosis by their providers. Method A 17-year-old male was recently discharged from the inpatient psychiatric unit with the diagnosis of Major Depressive Disorder. His discharge medication was bupropion XL 150mg daily. Within 10 days of his discharge, he was back at the emergency room with worsening anxiety and manic symptoms. At the emergency room, the patient's sister reported that he had been acting differently over the last 3–4 days, making statements that he could save the world and that everyone was talking about him. He was also speaking faster than usual and had a decreased need for sleep. He reported hearing voices and seeing things. The patient was admitted again and was given a diagnosis of Bipolar Mood Disorder, type I, manic phase, with psychosis. He was started on divalproex 500mg bid for mood stabilization. His bupropion was discontinued. Gradually his divalproex was increased to 750mg bid. During his hospital stay he developed a lack of spontaneous speech and sluggish responses to questions with automatic answers such as "I don't know". He also developed very sluggish motor movements. There was negativism. He needed one-on-one support for his daily activities of living and needed step-by-step instructions for all ADLs. All the test results were negative, including EEG, MRI and CT scan of the brain. The Bush Francis catatonia rating scale was done and he scored 15. A Lorazepam Challenge Test was performed, the scale was repeated after the patient was given an IM dose of 2mg of lorazepam, and he scored 2. At this point the catatonia diagnosis was confirmed. He was started on scheduled doses of lorazepam; gradually his lorazepam dose was increased up to 9mg per day. His catatonia responded to lorazepam treatment. Results A 17-year-old male who was initially hospitalized for symptoms of MDD and discharged with an antidepressant came back to the ER within 10 days with symptoms of mania with psychosis. During his second inpatient stay he developed catatonia, which was promptly diagnosed and appropriately treated with lorazepam. Conclusions Catatonia can happen in children and adolescents with mood disorders, or with other psychiatric or medical conditions. Timely diagnosis and treatment are very crucial to avoid a poor outcome, especially because treatment options for catatonia are well understood; benzodiazepines, electroconvulsive therapy and reduction or discontinuation of antipsychotics are successful in the treatment of catatonia (Ghaziuddin, Dhossche and Marcotte, 2012).
13

Alemami, Yahia, and Laiali Almazaydeh. "Pathological Voice Signal Analysis Using Machine Learning Based Approaches". Computer and Information Science 11, no. 1 (22.11.2017): 8. http://dx.doi.org/10.5539/cis.v11n1p8.

Abstract:
Voice signal analysis is becoming one of the most significant examinations in clinical practice due to the importance of extracting related parameters that reflect the patient's health. In this regard, various acoustic studies have revealed that the analysis of laryngeal, respiratory and articulatory function may be efficient as an early indicator in the diagnosis of Parkinson's disease (PD). PD is a common chronic neurodegenerative disorder which affects the central nervous system and is characterized by progressive loss of muscle control. Tremor, movement and speech disorders are the main symptoms of PD. The diagnostic decision for PD is obtained through continued clinical observation, which relies on an expert human observer. Therefore, an additional diagnostic method is desirable for more comfortable and timely detection of PD and faster treatment. In this study, we develop and validate automated classification algorithms, based on Naïve Bayes and K-Nearest Neighbors (KNN), using voice signal measurements to predict PD. According to the results, the diagnostic performance provided by the automated classification algorithm using Naïve Bayes was superior to that of the KNN, and it is useful as a predictive tool for PD screening with a high degree of accuracy, approximately 93.3%.
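
The classifier comparison described above is straightforward to reproduce in outline. The sketch below uses random placeholder features instead of the study's voice measurements, so the printed numbers are meaningless; only the structure of the comparison is illustrated.

```python
# Compare Naive Bayes and k-nearest-neighbour classifiers on placeholder voice features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 22))               # 22 acoustic measurements per subject (toy)
y = rng.integers(0, 2, size=120)             # 1 = PD, 0 = healthy (toy labels)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("KNN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {acc:.3f}")
```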
14

Borovikova, Daria, Oleg Grishin, Anastasia Nenko, Anton Yupashevsky, Anna Kazmina, Artem Markov, and Konstantin Metsler. "Development of a hardware and software complex for speech analysis and correction". Analysis and data processing systems, no. 2 (18.06.2021): 135–45. http://dx.doi.org/10.17212/2782-2001-2021-2-135-145.

Abstract:
In recent years, there has been a dramatic increase in the number of people suffering from functional voice disorders, usually caused by psychoemotional stress. Such disorders bring significant discomfort to a person's life as they reduce communication and social adaptation capacity, which in turn increases the psychoemotional load. As a result, functional disorders become fixed through a vicious-circle mechanism and can be transformed into a pathology of the speech apparatus. The main method of diagnosis remains expert assessment, which directly depends on the professional skills of a specialist in working with voice. In this connection, it is relevant to develop systems for diagnosing voice and speech disorders that allow an objective assessment based on the processing of voice and speech characteristics, and that can identify a disorder in time and prevent the development of pathology. Such methods and systems can be useful both for diagnostics and for monitoring the effectiveness of voice therapy. The existing methods of hardware diagnostics have not yet found their application in practice due to their inconsistency with the results of expert evaluation. In this paper, we propose a new concept of a hardware and software complex for voice analysis based on the acoustic characteristics of a set of harmonics of the voice signal. A VASA (Voice and Speech Analyzing system) complex has been developed that provides an automatic analysis of the amplitudes of the first 16 harmonics. The tests performed on three volunteers showed a high level of reproducibility and repeatability (within 10 % < %R&R < 30 %), sufficient for conducting comparative studies on healthy people and people with functional speech disorders.
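
The harmonic-amplitude measurement at the heart of the proposed complex can be sketched roughly as follows. The signal is synthetic and the fundamental frequency is fixed; a real system would estimate f0 from the recording, and this is not the VASA implementation.

```python
# Estimate the amplitudes of the first 16 harmonics of a synthetic voiced sound (illustrative).
import numpy as np

fs, f0, dur = 16000, 120.0, 1.0                       # sample rate, assumed f0 (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
x = sum((0.8 ** k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 17))

spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, 1 / fs)

amps = np.array([spectrum[np.argmin(np.abs(freqs - k * f0))]   # nearest bin to k-th harmonic
                 for k in range(1, 17)])
print(np.round(amps / amps.max(), 3))                  # normalized 16-harmonic profile
```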
15

Olivares, Rodrigo, Roberto Munoz, Ricardo Soto, Broderick Crawford, Diego Cárdenas, Aarón Ponce, and Carla Taramasco. "An Optimized Brain-Based Algorithm for Classifying Parkinson's Disease". Applied Sciences 10, no. 5 (06.03.2020): 1827. http://dx.doi.org/10.3390/app10051827.

Abstract:
In recent years, well-established computational intelligence techniques have been proposed to treat classification problems. These automatic learning approaches drive much of the most recent research because they exhibit outstanding results. Nevertheless, to achieve this performance, machine learning methods first require fine-tuning of their parameters and then need to work with the best generated model. This process usually needs an expert user to supervise the algorithm's performance. In this paper, we propose an Extreme Learning Machine optimized with the Bat Algorithm, which boosts the training phase of the machine learning method to increase accuracy while decreasing or maintaining the loss in the learning phase. To evaluate our proposal, we use the Parkinson's Disease audio dataset taken from the UCI Machine Learning Repository. Parkinson's disease is a neurodegenerative disorder that affects over 10 million people. Although it is diagnosed through motor symptoms, it is possible to detect the disorder through variations in speech using machine learning techniques. Results suggest that using the bio-inspired optimization algorithm for adjusting the parameters of the Extreme Learning Machine is a real alternative for improving its performance. During the validation phase, the classification process for Parkinson's Disease achieves a maximum accuracy of 96.74% and a minimum loss of 3.27%.
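
An Extreme Learning Machine itself is compact: a random, fixed hidden layer followed by a least-squares readout. The sketch below shows that core idea on placeholder data; the Bat-Algorithm hyperparameter search described in the paper is not included.

```python
# Minimal Extreme Learning Machine: random hidden layer + pseudo-inverse readout (toy data).
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 22))                 # placeholder voice-feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # placeholder binary labels

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))    # random input-to-hidden weights (never trained)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                         # hidden-layer activations

targets = np.eye(2)[y]                         # one-hot targets
beta = np.linalg.pinv(H) @ targets             # least-squares output weights

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```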
16

Tonn, Peter, Yoav Degani, Shani Hershko, Amit Klein, Lea Seule, and Nina Schulze. "Development of a Digital Content-Free Speech Analysis Tool for the Measurement of Mental Health and Follow-Up for Mental Disorders: Protocol for a Case-Control Study". JMIR Research Protocols 9, no. 5 (14.05.2020): e13852. http://dx.doi.org/10.2196/13852.

Abstract:
Background The prevalence of mental disorders worldwide is very high. The guideline-oriented care of patients depends on early diagnosis and regular and valid evaluation of their treatment to be able to quickly intervene should the patient’s mental health deteriorate. To ensure effective treatment, the level of experience of the physician or therapist is of importance, both in the initial diagnosis and in the treatment of mental illnesses. Nevertheless, experienced physicians and psychotherapists are not available in enough numbers everywhere, especially in rural areas or in less developed countries. Human speech can reveal a speaker’s mental state by altering its noncontent aspects (speech melody, intonations, speech rate, etc). This is noticeable in both the clinic and everyday life by having prior knowledge of the normal speech patterns of the affected person, and with enough time spent listening to the patient. However, this time and experience are often unavailable, leaving unused opportunities to capture linguistic, noncontent information. To improve the care of patients with mental disorders, we have developed a concept for assessing their most important mental parameters through a noncontent analysis of their active speech. Using speech analysis for the assessment and tracking of mental health patients opens up the possibility of remote, automatic, and ongoing evaluation when used with patients’ smartphones, as part of the current trends toward the increasing use of digital and mobile health tools. Objective The primary objective of this study is to evaluate measurements of participants' mental state by comparing the analysis of noncontent speech parameters to the results of several psychological questionnaires (Symptom Checklist-90 [SCL-90], the Patient Health Questionnaire [PHQ], and the Big 5 Test). Methods In this paper, we described a case-controlled study (with a case group and one control group). The participants will be recruited in an outpatient neuropsychiatric treatment center. Inclusion criteria are a neurological or psychiatric diagnosis made by a specialist, no terminal or life-threatening illnesses, and fluent use of the German language. Exclusion criteria include psychosis, dementia, speech or language disorders in neurological diseases, addiction history, a suicide attempt recently or in the last 12 months, or insufficient language skills. The measuring instrument will be the VoiceSense digital voice analysis tool, which enables the analysis of 200 specific speech parameters, and the assessment of findings using psychometric instruments and questionnaires (SCL-90, PHQ, Big 5 Test). Results The study is ongoing as of September 2019, but we have enrolled 254 participants. There have been 161 measurements completed at timepoint 1, and a total of 62 participants have completed every psychological and speech analysis measurement. Conclusions It appears that the tone and modulation of speech are as important, if not more so, than the content, and should not be underestimated. This is particularly evident in the interpretation of the psychological findings thus far acquired. Therefore, the application of a software analysis tool could increase the accuracy of finding assessments and improve patient care. Trial Registration ClinicalTrials.gov NCT03700008; https://clinicaltrials.gov/ct2/show/NCT03700008 International Registered Report Identifier (IRRID) PRR1-10.2196/13852
17

Mohammed, Mazin Abed, Karrar Hameed Abdulkareem, Salama A. Mostafa, Mohd Khanapi Abd Ghani, Mashael S. Maashi, Begonya Garcia-Zapirain, Ibon Oleagordia, Hosam Alhakami, and Fahad Taha AL-Dhief. "Voice Pathology Detection and Classification Using Convolutional Neural Network Model". Applied Sciences 10, no. 11 (27.05.2020): 3723. http://dx.doi.org/10.3390/app10113723.

Abstract:
Voice pathology disorders can be effectively detected using computer-aided voice pathology classification tools. These tools can diagnose voice pathologies at an early stage and offer appropriate treatment. This study aims to develop a powerful feature-extraction voice pathology detection tool based on deep learning. In this paper, a pre-trained Convolutional Neural Network (CNN) was applied to a voice pathology dataset to maximize the classification accuracy. This study also proposes a distinguished training method combined with various training strategies in order to generalize the application of the proposed system to a wide range of problems related to voice disorders. The proposed system was tested using a voice database, namely the Saarbrücken Voice Database (SVD). The experimental results show that the proposed CNN method for speech pathology detection achieves an accuracy of up to 95.41%. It also obtains 94.22% and 96.13% for F1-score and recall, respectively. The proposed system shows a high capability for real clinical application, offering fast automatic diagnosis and treatment solutions within 3 s at this classification accuracy.
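
For orientation, a small convolutional classifier of the general kind described can be written in a few lines of PyTorch. The layer sizes, the 64x64 single-channel spectrogram input, and the two-class output are assumptions for illustration, not the architecture from the paper.

```python
# A small CNN for binary voice-pathology classification on spectrogram patches (illustrative).
import torch
import torch.nn as nn

class SmallVoiceCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)   # assumes 64x64 input patches

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallVoiceCNN()
dummy = torch.randn(4, 1, 64, 64)            # batch of four single-channel spectrogram patches
print(model(dummy).shape)                    # torch.Size([4, 2])
```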
18

Lee, Jung Hyuk, Geon Woo Lee, Guiyoung Bong, Hee Jeong Yoo, and Hong Kook Kim. "Deep-Learning-Based Detection of Infants with Autism Spectrum Disorder Using Auto-Encoder Feature Representation". Sensors 20, no. 23 (26.11.2020): 6762. http://dx.doi.org/10.3390/s20236762.

Abstract:
Autism spectrum disorder (ASD) is a developmental disorder with a life-span disability. While diagnostic instruments have been developed and qualified based on the accuracy of the discrimination of children with ASD from typical development (TD) children, the stability of such procedures can be disrupted by limitations pertaining to time expenses and the subjectivity of clinicians. Consequently, automated diagnostic methods have been developed for acquiring objective measures of autism, and in various fields of research, vocal characteristics have not only been reported as distinctive characteristics by clinicians, but have also shown promising performance in several studies utilizing deep learning models based on the automated discrimination of children with ASD from children with TD. However, difficulties still exist in terms of the characteristics of the data, the complexity of the analysis, and the lack of arranged data caused by the low accessibility for diagnosis and the need to secure anonymity. In order to address these issues, we introduce a pre-trained feature extraction auto-encoder model and a joint optimization scheme, which can achieve robustness for widely distributed and unrefined data using a deep-learning-based method for the detection of autism that utilizes various models. By adopting this auto-encoder-based feature extraction and joint optimization in the extended version of the Geneva minimalistic acoustic parameter set (eGeMAPS) speech feature data set, we acquire improved performance in the detection of ASD in infants compared to the raw data set.
19

Kist, Andreas M., Pablo Gómez, Denis Dubrovskiy, Patrick Schlegel, Melda Kunduk, Matthias Echternach, Rita Patel, et al. "A Deep Learning Enhanced Novel Software Tool for Laryngeal Dynamics Analysis". Journal of Speech, Language, and Hearing Research 64, no. 6 (04.06.2021): 1889–903. http://dx.doi.org/10.1044/2021_jslhr-20-00498.

Abstract:
Purpose High-speed videoendoscopy (HSV) is an emerging, but barely used, endoscopy technique in the clinic to assess and diagnose voice disorders because of the lack of dedicated software to analyze the data. HSV allows to quantify the vocal fold oscillations by segmenting the glottal area. This challenging task has been tackled by various studies; however, the proposed approaches are mostly limited and not suitable for daily clinical routine. Method We developed a user-friendly software in C# that allows the editing, motion correction, segmentation, and quantitative analysis of HSV data. We further provide pretrained deep neural networks for fully automatic glottis segmentation. Results We freely provide our software Glottis Analysis Tools (GAT). Using GAT, we provide a general threshold-based region growing platform that enables the user to analyze data from various sources, such as in vivo recordings, ex vivo recordings, and high-speed footage of artificial vocal folds. Additionally, especially for in vivo recordings, we provide three robust neural networks at various speed and quality settings to allow a fully automatic glottis segmentation needed for application by untrained personnel. GAT further evaluates video and audio data in parallel and is able to extract various features from the video data, among others the glottal area waveform, that is, the changing glottal area over time. In total, GAT provides 79 unique quantitative analysis parameters for video- and audio-based signals. Many of these parameters have already been shown to reflect voice disorders, highlighting the clinical importance and usefulness of the GAT software. Conclusion GAT is a unique tool to process HSV and audio data to determine quantitative, clinically relevant parameters for research, diagnosis, and treatment of laryngeal disorders. Supplemental Material https://doi.org/10.23641/asha.14575533
20

Loku, Lindita, Bekim Fetaji, and Aleksandar Krsteski. "AUTOMATED MEDICAL DATA ANALYSES OF DISEASES USING BIG DATA". Knowledge International Journal 28, no. 5 (10.12.2018): 1719–24. http://dx.doi.org/10.35120/kij28051719l.

Abstract:
Diagnosis of different diseases is a growing concern and one of the most difficult challenges for modern medicine. Current diagnostic technologies (e.g. magnetic resonance imaging, electroencephalography) produce huge quantities of data (in size and dimension) for the detection, monitoring and treatment of neurological diseases. In general, analysis of this medical big data is performed manually by experts to identify and understand the abnormalities. It is a very difficult task for a person to accumulate, manage, analyse and assimilate such large volumes of data by visual inspection. As a result, experts have been demanding computerised diagnosis systems, called "computer-aided diagnosis" (CAD), that can automatically detect neurological abnormalities using medical big data. Such a system improves the consistency of diagnosis and increases the success of treatment, saves lives and reduces cost and time. Recently, some research has been performed on the development of CAD systems for the management of medical big data for diagnostic assessment. Such data analysis for diagnosis is very interesting for diabetes and autism. Many companies and research groups are working to treat diabetes, but preventing the disease will have a greater impact on health in at-risk groups. A team of US researchers is using data analytics to create a precision medicine approach to the prevention of diabetes that steers efforts towards those who are at highest risk of developing the disease and who would benefit most from drug treatment or preventive lifestyle strategies. The analyses yielded the 17 most important factors that could predict an individual's risk of diabetes. Autism Spectrum Disorder (ASD) is characterized by difficulties in social communication, social interactions, and repetitive behaviors. It is diagnosed during the first three years of life. Early and intensive interventions have been shown to improve the developmental trajectory of the affected children. The earlier the diagnosis, the sooner the intervention therapy can begin; thus, early diagnosis is set as an important research goal. Because ASD is not a neurodegenerative disorder, many of the core symptoms can improve as the individuals learn to cope with their environments under the right conditions. The earlier the age at which intervention can be started, the better their learning and daily function can be facilitated. Recent big data software packages and innovations in artificial intelligence have tremendous potential to assist with early diagnosis and improve intervention programs. The research study will focus on methodological evaluation of emerging technologies and will investigate by comparing different data sets to find a pattern that can be established as a prognosis system. The research study investigated peer-reviewed studies in order to understand the current status of empirically based evidence on clinical applications in the diagnosis and treatment of Autism Spectrum Disorders (ASD). We also survey and investigate different sensing technologies for ASD, such as eye trackers, movement trackers, electrodermal activity monitors, tactile sensors, and vocal prosody and speech detectors. We assess their effectiveness and study their limitations.
We also examine the challenges faced by this growing field that need to be addressed before these technologies can perform up to their theoretical potential. In some cases, a technology is unable to deliver up to its potential, not due to the hardware but due to the inefficiency of the accompanying algorithms, as in the case of classifiers for repetitive behavior detection. Therefore, equal emphasis needs to be placed on the improvement of all aspects of a tracking technology. The nature of the sensors makes the tracked data very sensitive to experimental and systematic errors, often causing the collected data to be discarded due to unreliability. Efforts to reduce such inaccuracies can significantly improve the performance and potential of the overall technology. By collecting specific data, these sensors may be able to acquire objective measures that can be used to identify symptoms specific to ASD. The contribution of the analyses will not only assist therapists and clinicians in their selection of suitable tools, but also guide the developers of the technologies in devising new algorithms for the prediction of autism.
21

Dahmani, Mohamed, and Mhania Guerti. "Recurrence Quantification Analysis of Glottal Signal as non Linear Tool for Pathological Voice Assessment and Classification". International Arab Journal of Information Technology 17, no. 6 (01.11.2020): 857–66. http://dx.doi.org/10.34028/iajit/17/6/4.

Abstract:
Automatic detection and assessment of vocal fold pathologies using signal processing techniques is a widely studied challenge in the voice and speech research community. This paper applies Recurrence Quantification Analysis (RQA) to the glottal signal waveform in order to evaluate the dynamics of the vocal folds (VFs) and to diagnose and classify voice disorders. The proposed solution starts by extracting the glottal signal waveform from the voice signal through an inverse filtering algorithm. In the next step, the parameters of RQA are determined via the Recurrence Plot (RP) structure of the glottal signal, where the normal voice is considered as a reference. Finally, these parameters are used as the input feature set of a hybrid Particle Swarm Optimization-Support Vector Machine (PSO-SVM) algorithm to discriminate between normal and pathological voices. For validation, we adopted the Saarbruecken Voice Database (SVD), from which we selected the long vowel /a:/ of 133 normal samples and 260 pathological samples uttered by four groups of subjects: persons with vocal fold paralysis, persons with vocal fold polyps, persons with spasmodic dysphonia, and normal voices. The obtained results show the effectiveness of RQA applied to the glottal signal as a feature extraction technique. Indeed, PSO-SVM as a classification method proved an effective tool for the assessment and diagnosis of pathological voices, with an accuracy of 97.41%.
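
The recurrence-plot construction underlying RQA can be illustrated in a few lines. The signal is a toy waveform rather than a real glottal signal, the embedding parameters and threshold are arbitrary choices, and only the recurrence rate, the simplest RQA measure, is computed.

```python
# Recurrence plot of a toy signal and its recurrence rate (illustrative RQA step).
import numpy as np

x = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * np.random.randn(400)   # toy periodic signal

dim, delay = 3, 2                                      # time-delay embedding parameters
n = len(x) - (dim - 1) * delay
emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
R = (dists < 0.2 * dists.std()).astype(int)            # thresholded recurrence matrix

print(f"recurrence rate: {R.sum() / R.size:.3f}")      # further RQA measures build on R
```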
22

Robillard, Manon, Annie Roy-Charland, and Sylvie Cazabon. "The Role of Cognition on Navigational Skills of Children and Adolescents With Autism Spectrum Disorders". Journal of Speech, Language, and Hearing Research 61, no. 7 (13.07.2018): 1579–90. http://dx.doi.org/10.1044/2018_jslhr-s-17-0206.

Abstract:
Purpose This study examined the role of cognition on the navigational process of a speech-generating device (SGD) among individuals with a diagnosis of autism spectrum disorder (ASD). The objective was to investigate the role of various cognitive factors (i.e., cognitive flexibility, sustained attention, categorization, fluid reasoning, and working memory) on the ability to navigate an SGD with dynamic paging and taxonomic grids in individuals with ASD. Method Twenty individuals aged 5 to 20 years with ASD were assessed using the Leiter International Performance Scale–Revised (Roid & Miller, 1997) and the Automated Working Memory Assessment (Alloway, 2007). They also completed a navigational task using an iPad 4 (Apple, 2017; taxonomic organization). Results Significant correlations between all of the cognitive factors and the ability to navigate an SGD were revealed. A stepwise linear regression suggested that cognitive flexibility was the best predictor of navigational ability with this population. Conclusion The importance of cognition in the navigational process of an SGD with dynamic paging in children and adolescents with ASD has been highlighted by the results of this study.
23

You, Eunice, Vincent Lin, Tamara Mijovic, Antoine Eskander, and Matthew G. Crowson. "Artificial Intelligence Applications in Otology: A State of the Art Review". Otolaryngology–Head and Neck Surgery 163, no. 6 (09.06.2020): 1123–33. http://dx.doi.org/10.1177/0194599820931804.

Abstract:
Objective Recent advances in artificial intelligence (AI) are driving innovative new health care solutions. We aim to review the state of the art of AI in otology and provide a discussion of work underway, current limitations, and future directions. Data Sources Two comprehensive databases, MEDLINE and EMBASE, were mined using a directed search strategy to identify all articles that applied AI to otology. Review Methods An initial abstract and title screening was completed. Exclusion criteria included nonavailable abstract and full text, language, and nonrelevance. References of included studies and relevant review articles were cross-checked to identify additional studies. Conclusion The database search identified 1374 articles. Abstract and title screening resulted in full-text retrieval of 96 articles. A total of N = 38 articles were retained. Applications of AI technologies involved the optimization of hearing aid technology (n = 5; 13% of all articles), speech enhancement technologies (n = 4; 11%), diagnosis and management of vestibular disorders (n = 11; 29%), prediction of sensorineural hearing loss outcomes (n = 9; 24%), interpretation of automatic brainstem responses (n = 5; 13%), and imaging modalities and image-processing techniques (n = 4; 10%). Publication counts of the included articles from each decade demonstrated a marked increase in interest in AI in recent years. Implications for Practice This review highlights several applications of AI that otologists and otolaryngologists alike should be aware of given the possibility of implementation in mainstream clinical practice. Although there remain significant ethical and regulatory challenges, AI powered systems offer great potential to shape how healthcare systems of the future operate and clinicians are key stakeholders in this process.
24

C R, Chethan. "Diagnosis of Parkinson Disorder through Speech Data". International Journal for Research in Applied Science and Engineering Technology 8, no. 9 (30.09.2020): 337–41. http://dx.doi.org/10.22214/ijraset.2020.31054.

25

Dodd, Barbara. "Differential Diagnosis of Pediatric Speech Sound Disorder". Current Developmental Disorders Reports 1, no. 3 (13.05.2014): 189–96. http://dx.doi.org/10.1007/s40474-014-0017-3.

26

Hildebrand, Michael S., Victoria E. Jackson, Thomas S. Scerri, Olivia Van Reyk, Matthew Coleman, Ruth O. Braden, Samantha Turner, et al. "Severe childhood speech disorder". Neurology 94, no. 20 (28.04.2020): e2148-e2167. http://dx.doi.org/10.1212/wnl.0000000000009441.

Abstract:
Objective Determining the genetic basis of speech disorders provides insight into the neurobiology of human communication. Despite intensive investigation over the past 2 decades, the etiology of most speech disorders in children remains unexplained. To test the hypothesis that speech disorders have a genetic etiology, we performed genetic analysis of children with severe speech disorder, specifically childhood apraxia of speech (CAS). Methods Precise phenotyping together with research genome or exome analysis were performed on children referred with a primary diagnosis of CAS. Gene coexpression and gene set enrichment analyses were conducted on high-confidence gene candidates. Results Thirty-four probands ascertained for CAS were studied. In 11/34 (32%) probands, we identified highly plausible pathogenic single nucleotide (n = 10; CDK13, EBF3, GNAO1, GNB1, DDX3X, MEIS2, POGZ, SETBP1, UPF2, ZNF142) or copy number (n = 1; 5q14.3q21.1 locus) variants in novel genes or loci for CAS. Testing of parental DNA was available for 9 probands and confirmed that the variants had arisen de novo. Eight genes encode proteins critical for regulation of gene transcription, and analyses of transcriptomic data found CAS-implicated genes were highly coexpressed in the developing human brain. Conclusion We identify the likely genetic etiology in 11 patients with CAS and implicate 9 genes for the first time. We find that CAS is often a sporadic monogenic disorder, and highly genetically heterogeneous. Highly penetrant variants implicate shared pathways in broad transcriptional regulation, highlighting the key role of transcriptional regulation in normal speech development. CAS is a distinctive, socially debilitating clinical disorder, and understanding its molecular basis is the first step towards identifying precision medicine approaches.
APA, Harvard, Vancouver, ISO, and other citation styles
27

Lansford, Kaitlin L., and Julie M. Liss. „Vowel Acoustics in Dysarthria: Speech Disorder Diagnosis and Classification“. Journal of Speech, Language, and Hearing Research 57, No. 1 (February 2014): 57–67. http://dx.doi.org/10.1044/1092-4388(2013/12-0262).

Full text of the source
Annotation:
Purpose The purpose of this study was to determine the extent to which vowel metrics are capable of distinguishing healthy from dysarthric speech and among different forms of dysarthria. Method A variety of vowel metrics were derived from spectral and temporal measurements of vowel tokens embedded in phrases produced by 45 speakers with dysarthria and 12 speakers with no history of neurological disease. Via means testing and discriminant function analysis (DFA), the acoustic metrics were used to (a) detect the presence of dysarthria and (b) classify the dysarthria subtype. Results Significant differences between dysarthric and healthy control speakers were revealed for all vowel metrics. However, the results of the DFA demonstrated some metrics (particularly metrics that capture vowel distinctiveness) to be more sensitive and specific predictors of dysarthria. Only the vowel metrics that captured slope of the second formant (F2) demonstrated between-group differences across the dysarthrias. However, when subjected to DFA, these metrics proved unreliable classifiers of dysarthria subtype. Conclusion The results of these analyses suggest that some vowel metrics may be useful clinically for the detection of dysarthria but may not be reliable indicators of dysarthria subtype using the current dysarthria classification scheme.
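To make this kind of two-stage analysis concrete, the following is a minimal Python sketch, not the authors' code: it assumes a hypothetical CSV of per-speaker vowel metrics (the column names are invented) and uses scikit-learn's linear discriminant analysis, the usual computational counterpart of DFA, first for detection and then for subtype classification.

# Sketch only: DFA-style classification of vowel metrics.
# The CSV file and column names below are hypothetical, not the study's data.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

df = pd.read_csv("vowel_metrics.csv")                            # one row per speaker (hypothetical)
features = ["vowel_space_area", "f2_slope", "vowel_duration"]    # hypothetical vowel metrics
X = df[features].values

# Stage 1: detect dysarthria (healthy vs. dysarthric)
detection_acc = cross_val_score(LinearDiscriminantAnalysis(), X, df["is_dysarthric"], cv=5).mean()
print(f"detection accuracy: {detection_acc:.2f}")

# Stage 2: classify dysarthria subtype among the dysarthric speakers only
sub = df[df["is_dysarthric"] == 1]
subtype_acc = cross_val_score(LinearDiscriminantAnalysis(), sub[features].values, sub["subtype"], cv=5).mean()
print(f"subtype classification accuracy: {subtype_acc:.2f}")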
APA, Harvard, Vancouver, ISO, and other citation styles
28

Mumtaz, Wajid, Pham Lam Vuong, Likun Xia, Aamir Saeed Malik and Rusdi Bin Abd Rashid. „Automatic diagnosis of alcohol use disorder using EEG features“. Knowledge-Based Systems 105 (August 2016): 48–59. http://dx.doi.org/10.1016/j.knosys.2016.04.026.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
29

Velichko, Alena, and Alexey Karpov. „Analytical review of automatic systems for depression detection by speech“. Informatics and Automation 20, No. 3 (28.05.2021): 497–529. http://dx.doi.org/10.15622/ia.2021.3.1.

Full text of the source
Annotation:
In recent years, interest in automatic depression detection has grown within the medical and scientific-technical communities. Depression is one of the most widespread mental illnesses affecting human life. In this review we present and analyze the latest research devoted to depression detection. Basic notions related to the definition of depression are specified, and the review covers both unimodal and multimodal corpora containing recordings of informants diagnosed with depression and of non-depressed control groups. Theoretical and practical studies that present automated systems for depression detection are reviewed, including both unimodal and multimodal systems. Some of the reviewed systems address a regression-style task of predicting the degree of depression severity (non-depressed, mild, moderate, severe), while others solve a binary classification problem of predicting whether depression is present. An original classification of methods for computing informative features for three communicative modalities (audio, video, and text) is presented, and new methods for depression detection within each modality and across all modalities are defined. The most popular methods for depression detection in the reviewed studies are neural networks. The survey shows that the main markers of depression are psychomotor retardation, which affects all communicative modalities, and a strong correlation with the affective values of valence, activation, and dominance; an inverse correlation between depression and aggression has also been observed. These correlations confirm the interrelation of affective disorders and human emotional states. The trend observed in many of the reviewed papers is that combining modalities improves the performance of depression detection systems.
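The review's closing observation, that combining modalities improves detection, can be illustrated with a small late-fusion sketch in Python; all feature matrices and labels below are synthetic placeholders, not data from any of the surveyed systems.

# Sketch of simple late fusion across audio, video, and text modalities for
# binary depression detection. All inputs are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)              # toy labels: depressed (1) / non-depressed (0)
X_audio = rng.normal(size=(200, 30))          # toy acoustic features
X_video = rng.normal(size=(200, 20))          # toy facial-expression features
X_text = rng.normal(size=(200, 50))           # toy linguistic features

split = train_test_split(X_audio, X_video, X_text, y, test_size=0.3, random_state=0)
Xa_tr, Xa_te, Xv_tr, Xv_te, Xt_tr, Xt_te, y_tr, y_te = split

# One classifier per modality, then average the predicted probabilities (late fusion).
models = [LogisticRegression(max_iter=1000).fit(X, y_tr) for X in (Xa_tr, Xv_tr, Xt_tr)]
probs = [m.predict_proba(X)[:, 1] for m, X in zip(models, (Xa_te, Xv_te, Xt_te))]
fused = np.mean(probs, axis=0)
print("fused accuracy:", float(((fused > 0.5) == y_te).mean()))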
APA, Harvard, Vancouver, ISO, and other citation styles
30

Jerger, James. „On the Diagnosis of Auditory Processing Disorder (APD)“. Journal of the American Academy of Audiology 20, No. 03 (March 2009): 160. http://dx.doi.org/10.3766/jaaa.20.3.1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
31

Chen, Tianhua, Grigoris Antoniou, Marios Adamou, Ilias Tachmazidis and Pan Su. „Automatic Diagnosis of Attention Deficit Hyperactivity Disorder Using Machine Learning“. Applied Artificial Intelligence 35, No. 9 (02.06.2021): 657–69. http://dx.doi.org/10.1080/08839514.2021.1933761.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
32

Iliadou, Vasiliki Vivian, Gail D. Chermak and Doris-Eva Bamiou. „Differential Diagnosis of Speech Sound Disorder (Phonological Disorder): Audiological Assessment beyond the Pure-tone Audiogram“. Journal of the American Academy of Audiology 26, No. 04 (April 2015): 423–35. http://dx.doi.org/10.3766/jaaa.26.4.9.

Full text of the source
Annotation:
Background: According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, diagnosis of speech sound disorder (SSD) requires a determination that it is not the result of other congenital or acquired conditions, including hearing loss or neurological conditions that may present with similar symptomatology. Purpose: To examine peripheral and central auditory function for the purpose of determining whether a peripheral or central auditory disorder was an underlying factor or contributed to the child’s SSD. Research Design: Central auditory processing disorder clinic pediatric case reports. Study Sample: Three clinical cases are reviewed of children with diagnosed SSD who were referred for audiological evaluation by their speech–language pathologists as a result of slower than expected progress in therapy. Results: Audiological testing revealed auditory deficits involving peripheral auditory function or the central auditory nervous system. These cases demonstrate the importance of increasing awareness among professionals of the need to fully evaluate the auditory system to identify auditory deficits that could contribute to a patient’s speech sound (phonological) disorder. Conclusions: Audiological assessment in cases of suspected SSD should not be limited to pure-tone audiometry given its limitations in revealing the full range of peripheral and central auditory deficits, deficits which can compromise treatment of SSD.
APA, Harvard, Vancouver, ISO, and other citation styles
33

Afify, Heba M., and Basma Ahmed. „Computer-Aided Diagnosis of Speech Disorder Signal in Parkinson’s Disease“. INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 15, No. 9 (22.07.2016): 7117–23. http://dx.doi.org/10.24297/ijct.v15i9.730.

Full text of the source
Annotation:
Computer-aided diagnosis (CAD) can be used as a decision-support system by physicians, especially those specializing in neurophysiological diseases, in the diagnosis and treatment of disordered speech. Parkinson's disease (PD) is a progressive disorder of the nervous system that affects movement. It develops gradually, sometimes starting with a barely noticeable tremor in speech. It has been found that 80% of persons with PD report speech and voice disorders, and PD symptoms worsen as the condition progresses over time. Speech may become soft or slurred, and these deficits in speech intelligibility affect health status and quality of life. Different researchers are currently working on the analysis of the speech signal of people with PD, including the study of different dimensions of speech such as phonation, articulation, prosody, and intelligibility. Here, we present the characteristics and features of normal speech and of speech disorders in people with PD, and the types of classification used to assess the efficacy of treatment interventions. The results show that our classification algorithm using an ANN outperformed KNN and SVM. The ANN is a practical and useful predictive tool for PD screening with a high degree of accuracy, with a correct detection rate of approximately 96.1% (sensitivity 94.7%, specificity 96.6%). Given the high accuracy obtained, the proposed algorithm can be used to enhance detection and discriminate PD patients from healthy people, and may be used by clinicians as a tool to confirm their diagnosis.
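As a hedged illustration of this kind of ANN-based screening pipeline, the sketch below trains a small multilayer perceptron on an assumed table of voice features and derives sensitivity and specificity from the confusion matrix; the file name, columns, and network size are assumptions, not the model reported in the paper.

# Sketch: neural-network PD screening on voice features, with sensitivity and
# specificity computed from the confusion matrix. Dataset and columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix

df = pd.read_csv("pd_voice_features.csv")                 # hypothetical dataset
X = df.drop(columns=["label"]).values
y = df["label"].values                                    # 1 = PD, 0 = healthy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)      # PD patients correctly flagged
specificity = tn / (tn + fp)      # healthy speakers correctly cleared
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy={accuracy:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")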
APA, Harvard, Vancouver, ISO, and other citation styles
34

Thomas, Sheila, Joerg Schulz and Nuala Ryder. „Assessment and diagnosis of Developmental Language Disorder: The experiences of speech and language therapists“. Autism & Developmental Language Impairments 4 (January 2019): 239694151984281. http://dx.doi.org/10.1177/2396941519842812.

Full text of the source
Annotation:
Background For many years research and practice have noted the impact of the heterogeneous nature of Developmental Language Disorder (also known as language impairment or specific language impairment) on diagnosis and assessment. Recent research suggests the disorder is not restricted to the language domain and against this background, the challenge for the practitioner is to provide accurate assessment and effective therapy. The speech and language therapist aims to support the child and their carers to achieve the best outcomes. However, little is known about the experiences of the speech and language therapist in the assessment process, in contrast to other childhood disorders, yet their expertise is central in the assessment and diagnosis of children with language disorder. Aims This study aimed to gain an in-depth understanding of the experiences of speech and language therapists involved in the assessment and diagnosis of children with Developmental Language Disorder including the linguistic and non-linguistic aspects of the disorder. Methods and procedures The qualitative study included three focus groups to provide a credible and rich description of the experiences of speech and language therapists involved in the assessment of Developmental Language Disorder. The speech and language therapists who participated in the study were recruited from different types of institution in three NHS trusts across the UK and all were directly involved in the assessment and diagnosis procedures. The lengths of speech and language therapist experience ranged from 2 years to 38 years. The data were analysed using inductive thematic analysis within a phenomenological approach. Outcomes and results The analysis of the data showed three main themes relating to the speech and language therapists’ experience in assessment and diagnosis of Developmental Language Disorder. These themes were the participants’ experiences of the barriers to early referral (subthemes – parents’ misunderstanding and misconceptions of Developmental Language Disorder, bilingualism can mask Developmental Language Disorder and public lack of knowledge of support services), factors in assessment (subthemes – individual nature of impairments, choosing appropriate assessments, key indicators and identifying non-language difficulties) and the concerns over continued future support (subthemes – disadvantages with academic curriculum, disadvantages for employment, impact of Developmental Language Disorder on general life chances). Conclusions and implications This study provides first-hand evidence from speech and language therapists in the assessment of children with Developmental Language Disorder, drawing together experiences from speech and language therapists from different regions. The implications are that support for early referral and improved assessment tools are needed together with greater public awareness of Developmental Language Disorder. The implications are discussed in relation to the provision of early and effective assessment and the use of current research in these procedures.
APA, Harvard, Vancouver, ISO, and other citation styles
35

Oh, Shu Lih, Jahmunah Vicnesh, Edward J. Ciaccio, Rajamanickam Yuvaraj and U. Rajendra Acharya. „Deep Convolutional Neural Network Model for Automated Diagnosis of Schizophrenia Using EEG Signals“. Applied Sciences 9, No. 14 (18.07.2019): 2870. http://dx.doi.org/10.3390/app9142870.

Full text of the source
Annotation:
A computerized detection system for the diagnosis of Schizophrenia (SZ) using a convolutional neural network is described in this study. Schizophrenia is an anomaly in the brain characterized by behavioral symptoms such as hallucinations and disorganized speech. Electroencephalograms (EEG) indicate brain disorders and are prominently used to study brain diseases. We collected EEG signals from 14 healthy subjects and 14 SZ patients and developed an eleven-layered convolutional neural network (CNN) model to analyze the signals. Conventional machine learning techniques are often laborious and subject to intra-observer variability. Deep learning algorithms that have the ability to automatically extract significant features and classify them are thus employed in this study. Features are extracted automatically at the convolution stage, with the most significant features extracted at the max-pooling stage, and the fully connected layer is utilized to classify the signals. The proposed model generated classification accuracies of 98.07% and 81.26% for non-subject-based testing and subject-based testing, respectively. The developed model can likely aid clinicians as a diagnostic tool to detect early stages of SZ.
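A compact sketch of the convolution, max-pooling, and fully connected pattern the abstract describes is given below in Keras; the layer sizes, input length, and channel count are assumptions and do not reproduce the eleven-layer model from the paper.

# Sketch of a 1D CNN for EEG-based SZ vs. healthy classification, illustrating
# convolution -> max-pooling -> fully connected layers. Not the paper's architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

N_SAMPLES = 6250    # assumed: 25 s of EEG at 250 Hz
N_CHANNELS = 19     # assumed electrode count

model = models.Sequential([
    layers.Input(shape=(N_SAMPLES, N_CHANNELS)),
    layers.Conv1D(16, kernel_size=5, activation="relu"),    # automatic feature extraction
    layers.MaxPooling1D(pool_size=2),                       # keep the strongest activations
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),                    # fully connected classifier
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),                  # probability of SZ
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Training would then be model.fit(X_train, y_train, validation_data=(X_val, y_val), ...)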
APA, Harvard, Vancouver, ISO, and other citation styles
36

Gonzalez-Moreira, Eduardo, Diana Torres-Boza, Héctor Arturo Kairuz, Carlos Ferrer, Marlene Garcia-Zamora, Fernando Espinoza-Cuadros and Luis Alfonso Hernandez-Gómez. „Automatic Prosodic Analysis to Identify Mild Dementia“. BioMed Research International 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/916356.

Full text of the source
Annotation:
This paper describes an exploratory technique to identify mild dementia by assessing the degree of speech deficits. A total of twenty participants took part in the experiment: ten patients with a diagnosis of mild dementia and ten healthy controls. The audio session for each subject was recorded following a methodology developed for the present study. Prosodic features in patients with mild dementia and healthy elderly controls were measured using automatic prosodic analysis of a reading task, and a novel method was applied to gather twelve prosodic features from the speech samples. The best classification rate achieved was 85% accuracy using four prosodic features. The results show that the proposed computational speech analysis offers a viable alternative for the automatic identification of dementia features in elderly adults.
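The twelve prosodic features used in the study are not listed in the abstract; as an illustration of automatic prosodic analysis of a reading task, the sketch below extracts a few generic pitch and voicing measures with librosa (the file name, pitch range, and frame settings are assumptions).

# Sketch: a few generic prosodic measures from a read-speech recording using librosa.
# The study's actual twelve features are not reproduced here; all settings are assumptions.
import numpy as np
import librosa

y, sr = librosa.load("reading_task.wav", sr=16000)        # hypothetical recording
f0, voiced_flag, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)

voiced_f0 = f0[~np.isnan(f0)]
features = {
    "f0_mean_hz": float(np.mean(voiced_f0)),               # average pitch
    "f0_std_hz": float(np.std(voiced_f0)),                 # pitch variability
    "f0_range_hz": float(np.ptp(voiced_f0)),               # pitch range
    "voiced_fraction": float(np.mean(voiced_flag)),        # rough proxy for pausing behaviour
    "speech_rate_proxy": float(len(librosa.onset.onset_detect(y=y, sr=sr)) / (len(y) / sr)),
}
print(features)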
APA, Harvard, Vancouver, ISO, and other citation styles
37

Ihori, Nami, Shigeo Araki, Kenji Ishihara and Mitsuru Kawamura. „A Case of Frontotemporal Lobar Degeneration with Progressive Dysarthria“. Behavioural Neurology 17, No. 2 (2006): 97–104. http://dx.doi.org/10.1155/2006/320638.

Full text of the source
Annotation:
We investigated the evolution of the neurological and neuropsychological characteristics in a right-handed woman who was 53 years old at onset and who showed personality changes and behavioral disorders accompanied by progressive dysarthria. She had hypernasality and a slow rate of speech with distorted consonants and vowels, which progressed as motor disturbances affecting her speech apparatus increased; finally, she became mute two years post onset. Her dysarthria due to bilateral voluntary facio-velo-linguo-pharyngeal paralysis accompanied by automatic-voluntary dissociation fit the description of anterior opercular syndrome. She showed personality changes and behavioral abnormalities from the initial stage of the disease, as is generally observed in frontotemporal degeneration (FTD), and her magnetic resonance image showed progressive atrophy in the frontotemporal lobes; thus, she was clinically diagnosed with FTLD. This patient’s symptoms suggest that FTLD, including bilateral anterior operculum degeneration, causes progressive pseudobulbar paretic dysarthria accompanied by clinical symptoms of FTD, which raises the possibility of a new clinical subtype in the FTLD spectrum.
APA, Harvard, Vancouver, ISO, and other citation styles
38

de Boer, Janna, Alban Voppel, Frank Wijnen and Iris Sommer. „T59. ACOUSTIC SPEECH MARKERS FOR SCHIZOPHRENIA“. Schizophrenia Bulletin 46, Supplement_1 (April 2020): S253–S254. http://dx.doi.org/10.1093/schbul/sbaa029.619.

Full text of the source
Annotation:
Abstract Background Clinicians routinely use impressions of speech as an element of mental status examination, including ‘pressured’ speech in mania and ‘monotone’ or ‘soft’ speech in depression or psychosis. In psychosis in particular, descriptions of speech are used to monitor (negative) symptom severity. Recent advances in computational linguistics have paved the way towards automated speech analyses as a biomarker for psychosis. In the present study, we assessed the diagnostic value of acoustic speech features in schizophrenia. We hypothesized that a classifier would be highly accurate (~80%) in classifying patients and healthy controls. Methods Natural speech samples were obtained from 86 patients with schizophrenia and 77 age- and gender-matched healthy controls through a semi-structured interview, using a set of neutral open-ended questions. Symptom severity was rated by consensus rating of two trained researchers, blinded to phonetic analysis, with the Positive And Negative Syndrome Scale (PANSS). Acoustic features were extracted with OpenSMILE, employing the Geneva Minimalistic Acoustic Parameter Set (GeMAPS), which comprises standardized analyses of pitch (F0), formants (F1, F2 and F3, i.e. acoustic resonance frequencies that indicate the position and movement of the articulatory muscles during speech production), speech quality, and the length of voiced and unvoiced regions. Speech features were fed into a linear kernel support vector machine (SVM) with leave-one-out cross-validation to assess their value for psychosis diagnosis. Results Demographic analyses revealed no differences between patients with schizophrenia and healthy controls in age or parental education. An automated machine-learning speech classifier reached an accuracy of 82.8% in classifying patients with schizophrenia and controls on speech features alone. Important features in the model were variation in loudness, spectral slope (i.e. the gradual decay in energy in high-frequency speech sounds) and the amount of voiced regions (i.e. segments of the interview where the participant was speaking). PANSS positive, negative and general scores were significantly correlated with pitch, formant frequencies and length of voiced and unvoiced regions. Discussion This study demonstrates that an algorithm using quantified features of speech can objectively differentiate patients with schizophrenia from controls with high accuracy. Further validation in an independent sample is required. Employing standardized parameter sets ensures easy replication and comparison of analyses and can be used for cross-linguistic studies. Although at an early stage, the field of clinical computational linguistics introduces a powerful tool for diagnosis and prognosis of psychosis and neuropsychiatric disorders in general. We consider this new diagnostic tool to be of high potential given its ease of acquisition, low cost, and low patient burden. For example, this tool could easily be implemented as a smartphone app to be used in treatment settings.
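A minimal sketch of the classification step described above, assuming the GeMAPS-style acoustic features have already been extracted (for example with OpenSMILE) into a per-speaker table; the file name and column names are hypothetical. It pairs a linear-kernel SVM with leave-one-out cross-validation, as in the abstract.

# Sketch: linear-kernel SVM with leave-one-out cross-validation over acoustic features.
# X is assumed to hold one row of precomputed GeMAPS-style features per speaker;
# the loading step and column names are hypothetical.
import pandas as pd
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

df = pd.read_csv("gemaps_features.csv")                   # hypothetical export, one row per speaker
X = df.drop(columns=["diagnosis"]).values
y = (df["diagnosis"] == "schizophrenia").astype(int).values

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())     # one held-out speaker per fold
print(f"leave-one-out accuracy: {scores.mean():.3f} over {len(scores)} folds")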
APA, Harvard, Vancouver, ISO, and other citation styles
39

Sininger, Yvonne S. „Otoacoustic emissions in the diagnosis of hearing disorder in infants“. Hearing Journal 55, No. 11 (November 2002): 22–26. http://dx.doi.org/10.1097/01.hj.0000324168.58983.30.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
40

Moore, David R. „Auditory processing disorder (APD): Definition, diagnosis, neural basis, and intervention“. Audiological Medicine 4, No. 1 (January 2006): 4–11. http://dx.doi.org/10.1080/16513860600568573.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
41

Caballero-Morales, Santiago-Omar. „Towards the Development of a Mexican Speech-to-Sign-Language Translator for the Deaf Community“. Acta Universitaria 22 (01.03.2012): 83–89. http://dx.doi.org/10.15174/au.2012.346.

Full text of the source
Annotation:
A significant portion of the Mexican population is deaf. This condition restricts their social interaction with hearing people and vice versa. In this paper we present our advances towards the development of a Mexican Speech-to-Sign-Language translator to assist hearing people in interacting with deaf people. The proposed design methodology considers limited resources for (1) the development of the Mexican Automatic Speech Recogniser (ASR) system, which is the main module in the translator, and (2) the Mexican Sign Language (MSL) vocabulary available to represent the decoded speech. Speech-to-MSL translation was accomplished with an accuracy level over 97% for test speakers different from those selected for ASR training.
APA, Harvard, Vancouver, ISO, and other citation styles
42

Jones, Harrison N., Tyler J. Story, Timothy A. Collins, Daniel DeJoy and Christopher L. Edwards. „Multidisciplinary Assessment and Diagnosis of Conversion Disorder in a Patient with Foreign Accent Syndrome“. Behavioural Neurology 24, No. 3 (2011): 245–55. http://dx.doi.org/10.1155/2011/786560.

Full text of the source
Annotation:
Multiple reports have described patients with disordered articulation and prosody, often following acute aphasia, dysarthria, or apraxia of speech, which results in the perception by listeners of a foreign-like accent. These features led to the term foreign accent syndrome (FAS), a speech disorder with perceptual features that suggest an indistinct, non-native speaking accent. Also known as pseudoforeign accent, the speech does not typically match a specific foreign accent, but is rather a constellation of speech features that result in the perception of a foreign accent by listeners. The primary etiologies of FAS are cerebrovascular accidents or traumatic brain injuries that affect cortical and subcortical regions critical to expressive speech and language production. Far fewer cases of FAS associated with psychiatric conditions have been reported. We present the clinical history, neurological examination, neuropsychological assessment, cognitive-behavioral and biofeedback assessments, and motor speech examination of a patient with FAS without a known vascular, traumatic, or infectious precipitant. Repeated multidisciplinary examinations of this patient provided convergent evidence in support of FAS secondary to conversion disorder. We discuss these findings and their implications for evaluation and treatment of rare neurological and psychiatric conditions.
APA, Harvard, Vancouver, ISO, and other citation styles
43

Barreto, Simone dos Santos, and Karin Zazo Ortiz. „Speech in the foreign accent syndrome: differential diagnosis between organic and functional cases“. Dementia & Neuropsychologia 14, No. 3 (September 2020): 329–32. http://dx.doi.org/10.1590/1980-57642020dn14-030015.

Full text of the source
Annotation:
Foreign accent syndrome (FAS) is an extremely rare disorder, with 112 cases described up to 2019. We compare two cases of the foreign accent syndrome in native speakers of Brazilian Portuguese in its classic form (FAS) and psychiatric variant (FALS). Two cases were analyzed: (1) a right-handed, 69-year-old man with a prior history of stroke, and (2) a right-handed, 43-year-old woman diagnosed with schizophrenia. They were evaluated for language and speech, including speech intelligibility. Both patients had complaints of speech impairment, similar to a new accent, without previous exposure to a foreign language. However, the onset of the speech disorder was sudden in case 1 and insidious, with transient events, in case 2, with speech intelligibility scores of 95.5% and 55.3%, respectively. Besides the neurologic impairment, the clinical presentation of FALS was extremely severe and differed from that expected in FAS cases, in which speech intelligibility is preserved.
APA, Harvard, Vancouver, ISO, and other citation styles
44

Lalitha, S., Deepa Gupta, Mohammed Zakariah and Yousef Ajami Alotaibi. „Mental Illness Disorder Diagnosis Using Emotion Variation Detection from Continuous English Speech“. Computers, Materials & Continua 69, No. 3 (2021): 3217–38. http://dx.doi.org/10.32604/cmc.2021.018406.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
45

Bhattacharjee, Soumyendu, Zinkar Das and Biswarup Neogi. „Design and simulation aspect towards modelling of automatic cardiovascular disorder diagnosis system“. International Journal of Biomedical Engineering and Technology 19, No. 4 (2015): 303. http://dx.doi.org/10.1504/ijbet.2015.073422.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
46

Stach, Brad, and Louise Loiselle. „Central Auditory Processing Disorder: Diagnosis and Management in a Young Child“. Seminars in Hearing 14, No. 03 (August 1993): 288–95. http://dx.doi.org/10.1055/s-0028-1085127.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
47

López-de-Ipiña, K., J. B. Alonso, J. Solé-Casals, N. Barroso, P. Henriquez, M. Faundez-Zanuy, C. M. Travieso, M. Ecay-Torres, P. Martínez-Lage and H. Eguiraun. „On Automatic Diagnosis of Alzheimer’s Disease Based on Spontaneous Speech Analysis and Emotional Temperature“. Cognitive Computation 7, No. 1 (30.08.2013): 44–55. http://dx.doi.org/10.1007/s12559-013-9229-9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
48

Li, Zhichao, Jilin Huang and Zhiping Hu. „Screening and Diagnosis of Chronic Pharyngitis Based on Deep Learning“. International Journal of Environmental Research and Public Health 16, No. 10 (14.05.2019): 1688. http://dx.doi.org/10.3390/ijerph16101688.

Full text of the source
Annotation:
Chronic pharyngitis is a common disease with a long duration and a wide range of onset. Using common diagnostic methods, it is easily misdiagnosed as other diseases, such as chronic tonsillitis. To reduce costs and avoid misdiagnosis, the search for an affordable and rapid diagnostic method is becoming increasingly important in chronic pharyngitis research. Speech disorder is one of the typical symptoms of patients with chronic pharyngitis, and this paper introduces a convolutional neural network model for diagnosis based on this symptom. First, the voice data are converted into a speech spectrogram, which better captures the characteristic information in the speech and lays a foundation for computer-based diagnosis and discrimination. Second, we construct a deep convolutional neural network for the diagnosis of chronic pharyngitis by designing its structure and network layers and describing its function. Finally, we perform a parameter-optimization experiment on the convolutional neural network and evaluate its recognition performance for chronic pharyngitis. The results show that the convolutional neural network achieves a high recognition rate for patients with chronic pharyngitis and has a good diagnostic effect.
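The preprocessing step the abstract describes, converting voice recordings into spectrogram inputs for a CNN, can be sketched as follows; the file name, mel settings, and fixed input size are assumptions rather than the paper's configuration.

# Sketch of the preprocessing described in the abstract: convert a voice recording
# into a (mel) spectrogram suitable for a 2D CNN. All parameters are assumptions.
import numpy as np
import librosa

y, sr = librosa.load("patient_voice.wav", sr=16000)        # hypothetical recording
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)             # log scale, as usually fed to CNNs

# Normalize to [0, 1] and pad/crop to a fixed width so every sample has the same shape.
log_mel = (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min())
fixed = np.zeros((128, 256), dtype=np.float32)
width = min(256, log_mel.shape[1])
fixed[:, :width] = log_mel[:, :width]
cnn_input = fixed[np.newaxis, :, :, np.newaxis]            # shape (1, 128, 256, 1) for a 2D CNN
print(cnn_input.shape)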
APA, Harvard, Vancouver, ISO, and other citation styles
49

TRIFU, Raluca Nicoleta. „Developmental coordination disorder DCD – terminology, diagnosis and intervention. The implication for speech therapy.“ Revista Română de Terapia Tulburărilor de Limbaj şi Comunicare VI, No. 2 (31.10.2020): 101–21. http://dx.doi.org/10.26744/rrttlc.2020.6.2.10.

Full text of the source
Annotation:
Developmental coordination disorder (DCD) is a specific set of impairments correlated with gross and fine motor dysfunction, poor motor planning, and impaired sensory integration. The term is widely used for this condition, based on the terminology proposed by the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), but other terms such as dyspraxia, specific motor dysfunction, and specific coordination motor dysfunction (ICD-10) are used and preferred at the same time. The article presents the multiple terms used in the literature in connection with DCD, the criteria for diagnosis, the implications for education, and targeted intervention in cases of DCD.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Nizam Mazenan, Mohd, and Tian-Swee Tan. „Malay Wordlist Modeling for Articulation Disorder Patient by Using Computerized Speech Diagnosis System“. Research Journal of Applied Sciences, Engineering and Technology 7, No. 21 (05.06.2014): 4535–40. http://dx.doi.org/10.19026/rjaset.7.830.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
