
Dissertations / Theses on the topic 'Spoken language'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Spoken language.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ryu, Koichiro, and Shigeki Matsubara. "Simultaneous Spoken Language Translation." Intelligent Media Integration Nagoya University / COE, 2006. http://hdl.handle.net/2237/10466.

2

Jones, J. M. "Iconicity and spoken language." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/1559788/.

Abstract:
Contrary to longstanding assumptions about the arbitrariness of language, recent work has highlighted how much iconicity – i.e. non-arbitrariness – exists in language, in the form of not only onomatopoeia (bang, splash, meow), but also sound-symbolism, signed vocabulary, and (in a paralinguistic channel) mimetic gesture. But is this iconicity ornamental, or does it represent a systematic feature of language important in language acquisition, processing, and evolution? Scholars have begun to address this question, and this thesis adds to that effort, focusing on spoken language (including gesture). After introducing iconicity and reviewing the literature in the introduction, Chapter 2 reviews sound-shape iconicity (the “kiki-bouba” effect), and presents a norming study that verifies the phonetic parameters of the effect, suggesting that it likely involves multiple mechanisms. Chapter 3 shows that sound-shape iconicity helps participants learn in a model of vocabulary acquisition (cross-situational learning) by disambiguating reference. Variations on this experiment show that the round association may be marginally stronger than the spiky, but only barely, suggesting that representations of lip shape may be partly but not entirely responsible for the effect. Chapter 4 models language change using the iterated learning paradigm. It shows that iconicity (both sound-shape and motion) emerges from an arbitrary initial language over ten ‘generations’ of speakers. I argue this shows that psychological biases introduce systematic pressure towards iconicity over language change, and that moreover spoken iconicity can help bootstrap a system of communication. Chapter 5 shifts to children and gesture, attempting to answer whether children can take meaning from iconic action gestures. Results here were null, but definitive conclusions must await new experiments with higher statistical power. The conclusion sums up my findings and their significance, and points towards crucial research for the future.
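
Chapter 4's iterated learning paradigm has a simple computational skeleton: each 'generation' learns the meaning-form mapping from the previous generation's output, under a weak bias, and the language drifts toward iconicity. The simulation below is a deliberately crude illustration with an invented bias parameter, not the author's experimental design:

```python
# Crude iterated-learning simulation: each generation relearns a meaning->form
# mapping from the previous generation's output, with a weak bias toward the
# iconic mapping (bias strength and setup invented for illustration).
import random

random.seed(0)
MEANINGS = ["round", "spiky"]
ICONIC = {"round": "bouba", "spiky": "kiki"}      # the iconic target mapping

def learn(parent_language, bias=0.1):
    """Copy the parent's form for each meaning, occasionally drifting to iconicity."""
    child = {}
    for m in MEANINGS:
        child[m] = ICONIC[m] if random.random() < bias else parent_language[m]
    return child

language = {"round": "kiki", "spiky": "bouba"}    # arbitrary, fully non-iconic start
for gen in range(10):
    language = learn(language)
    iconicity = sum(language[m] == ICONIC[m] for m in MEANINGS) / len(MEANINGS)
    print(f"generation {gen}: {language}  iconicity={iconicity:.1f}")
```
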
3

Dinarelli, Marco. "Spoken Language Understanding: from Spoken Utterances to Semantic Structures." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/367830.

Abstract:
In the past two decades there have been several projects on Spoken Language Understanding (SLU). In the early nineties, the DARPA ATIS project aimed at providing a natural language interface to a travel information database. Following the ATIS project, the DARPA Communicator project aimed at building a spoken dialog system that automatically provides information on flights and travel reservations. These two projects defined the first generation of conversational systems. In the late nineties, the "How may I help you" project from AT&T, with Large Vocabulary Continuous Speech Recognition (LVCSR) and mixed-initiative spoken interfaces, started the second generation of conversational systems, which were later improved by integrating approaches based on machine learning techniques. The European-funded project LUNA aims at starting the third generation of spoken language interfaces. In the context of this project, and in contrast with previous projects, we have acquired the first Italian corpus of spontaneous speech from real users engaged in a problem-solving task. The corpus contains transcriptions and annotations based on a new multilevel protocol designed specifically for the goals of the LUNA project. The task of Spoken Language Understanding is the extraction of the meaning structure from spoken utterances in conversational systems. For this purpose, two main statistical learning paradigms have been proposed in the last decades: generative and discriminative models. The former are robust to over-fitting and less affected by noise, but they cannot easily integrate complex structures (e.g. trees). In contrast, the latter can easily integrate very complex features that capture arbitrarily long-distance dependencies, but they tend to over-fit the training data and so are less robust to annotation errors in the data needed to learn the model. This work presents an exhaustive study of Spoken Language Understanding models, with a particular focus on structural features used in a joint generative and discriminative learning framework, which combines the strengths of both approaches while training segmentation and labeling models for SLU. Its main characteristic is the use of kernel methods to encode structured features in Support Vector Machines, which in turn re-rank the hypotheses produced by a first-step SLU module based either on Stochastic Finite State Transducers or on Conditional Random Fields. Joint models based on transducers can also decode word lattices generated by large-vocabulary speech recognizers. We show the benefit of our approach with comparative experiments among generative, discriminative and joint models on some of the most representative SLU corpora, four corpora in four different languages: the ATIS corpus (English), the MEDIA corpus (French) and the LUNA Italian and Polish corpora. These also represent three different kinds of application domains: informational, transactional and problem-solving. The results, although depending on the task and to some extent on the first-stage baseline, show that joint models improve on the state of the art in most cases, especially when only a small training set is available.
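
The re-ranking step at the heart of the joint framework can be sketched as follows; this is an illustrative stand-in, not the author's implementation, using a linear SVM over simple concept-bigram features in place of the tree and sequence kernels described above, with invented N-best lists:

```python
# Sketch of N-best re-ranking for SLU (toy data; a linear SVM over
# concept-bigram features stands in for the tree/sequence kernels
# described in the abstract).
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def features(concepts, first_stage_score):
    feats = {"first_stage_score": first_stage_score}
    for a, b in zip(concepts, concepts[1:]):
        feats[f"bigram={a}+{b}"] = 1.0
    return feats

# Each hypothesis: (concept sequence, first-stage log-score, is-correct label).
nbest_lists = [
    [(["from-city", "to-city", "date"], -1.2, 1),
     (["from-city", "from-city", "date"], -1.0, 0)],
    [(["airline", "to-city"], -0.9, 1),
     (["airline", "airline"], -0.5, 0)],
]

X = [features(c, s) for nbest in nbest_lists for c, s, _ in nbest]
y = [lab for nbest in nbest_lists for _, _, lab in nbest]
vec = DictVectorizer()
svm = LinearSVC().fit(vec.fit_transform(X), y)

# Re-rank each N-best list by the SVM score instead of the first-stage score.
for nbest in nbest_lists:
    best = max(nbest, key=lambda h: svm.decision_function(
        vec.transform([features(h[0], h[1])]))[0])
    print(best[0])
```
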
4

Dinarelli, Marco. "Spoken Language Understanding: from Spoken Utterances to Semantic Structures." Doctoral thesis, University of Trento, 2010. http://eprints-phd.biblio.unitn.it/280/1/PhD-Thesis-Dinarelli.pdf.

5

Melander, Linda. "Language attitudes : Evaluational Reactions to Spoken Language." Thesis, Högskolan Dalarna, Engelska, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:du-2282.

6

Harwath, David F. (David Frank). "Learning spoken language through vision." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118081.

Abstract:
Humans learn language at an early age by simply observing the world around them. Why can't computers do the same? Conventional automatic speech recognition systems have a long history and have recently made great strides thanks to the revival of deep neural networks. However, their reliance on highly supervised (and therefore expensive) training paradigms has restricted their application to the major languages of the world, accounting for a small fraction of the more than 7,000 human languages spoken worldwide. This thesis introduces datasets, models, and methodologies for grounding continuous speech signals at the raw waveform level to natural image scenes. The context and constraint provided by the visual information enables our models to efficiently learn linguistic units, such as words, along with their visual semantics. For example, our models are able to recognize instances of the spoken word "water" within spoken captions and associate them with image regions containing bodies of water. Further, we demonstrate that our models are capable of learning cross-lingual semantics by using the visual space as an interlingua to perform speech-to-speech retrieval between English and Hindi. In all cases, this learning is done without linguistic transcriptions or conventional speech recognition - yet we show that our methods achieve retrieval scores close to what is possible when transcriptions are available. This offers a promising new direction for speech processing that only requires speakers to provide narrations of what they see.
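
The cross-modal retrieval evaluation implied here (associating each spoken caption with its image via a shared embedding space) can be sketched in a few lines. The embeddings below are random stand-ins for the outputs of the trained audio and image networks, and recall@1 is one common retrieval score:

```python
# Toy cross-modal retrieval in a shared embedding space (random vectors stand
# in for the learned audio/image embeddings described in the abstract).
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 16
audio_emb = rng.normal(size=(n, d))                     # one embedding per spoken caption
image_emb = audio_emb + 0.1 * rng.normal(size=(n, d))   # paired images, slightly perturbed

# L2-normalize so the dot product is cosine similarity.
audio_emb /= np.linalg.norm(audio_emb, axis=1, keepdims=True)
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)

sim = audio_emb @ image_emb.T                           # sim[i, j]: caption i vs image j

# Recall@1 for speech-to-image retrieval: does caption i rank its own image first?
recall_at_1 = np.mean(sim.argmax(axis=1) == np.arange(n))
print(f"R@1 = {recall_at_1:.2f}")
```
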
7

Lainio, Jarmo. "Spoken Finnish in urban Sweden." Uppsala: Centre for Multiethnic Research, 1989. http://catalogue.bnf.fr/ark:/12148/cb35513801d.

8

Kanda, Naoyuki. "Open-ended Spoken Language Technology: Studies on Spoken Dialogue Systems and Spoken Document Retrieval Systems." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188874.

9

Intilisano, Antonio Rosario. "Spoken dialog systems: from automatic speech recognition to spoken language understanding." Doctoral thesis, Università di Catania, 2016. http://hdl.handle.net/10761/3920.

10

Zámečník, Jiří [author], Christian [academic supervisor] Mair, and John A. [academic supervisor] Nerbonne. "Disfluency prediction in natural spoken language." Freiburg: Universität, 2019. http://d-nb.info/1238517714/34.

11

Alhanai, Tuka (Tuka Waddah Talib Ali Al Hanai). "Detecting cognitive impairment from spoken language." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122724.

Abstract:
Dementia comes second only to spinal cord injuries in terms of its debilitating effects, from memory loss to physical disability. The standard approach to evaluating cognitive conditions is the neuropsychological exam, conducted via in-person interviews to measure memory, thinking, language, and motor skills. Work is ongoing to determine biomarkers of cognitive impairment, yet one modality that has been relatively less explored is speech. Speech has the advantage of being easy to record, and it contains the majority of the information transmitted during neuropsychological exams. To determine the viability of speech-based biomarkers, we utilize data from the Framingham Heart Study, which contains hour-long audio recordings of neuropsychological exams for over 5,000 individuals. The data is representative of a population and of the real-world prevalence of cognitive conditions (3-4%). We first explore modeling cognitive impairment from a relatively small set of 92 subjects with complete information on audio, transcripts, and speaker turns. We then loosen these constraints by modeling with only a fraction of the audio (~2-3 minutes), whose speaker segments are defined through text-based diarization. We next apply this diarization method to extract audio features from all 7,000+ recordings (most of which have no transcripts) to model cognitive impairment (AUC 0.83, spec. 78%, sens. 79%). Finally, we eliminate the need for feature engineering by training a neural network to learn higher-order representations from filterbank features (AUC 0.85, spec. 81%, sens. 82%). Our speech models exhibit strong performance and are comparable to the baseline demographic model (AUC 0.85, spec. 93%, sens. 65%). Further analysis shows that our neural network model automatically learns to detect specific speech activity which clusters according to: pause followed by onset of speech, short bursts of speech, speech activity in high-frequency spectral energy bands, and silence.
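
For readers unfamiliar with the reported metrics, this is how AUC, sensitivity, and specificity are computed from a model's scores; the labels, scores, and 0.5 threshold below are synthetic illustrations, not the thesis data:

```python
# Computing AUC, sensitivity, and specificity from classifier scores
# (synthetic labels and scores; the 0.5 threshold is an illustrative choice).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.7, 0.9, 0.4, 0.55, 0.15])

auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUC={auc:.2f}  sensitivity={tp / (tp + fn):.0%}  specificity={tn / (tn + fp):.0%}")
```
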
12

Nguyen, Tu Anh. "Spoken Language Modeling from Raw Audio." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS089.

Abstract:
Speech has always been a dominant mode of social connection and communication. However, speech processing and modeling have been challenging due to its variability. Classic speech technologies rely on cascade modeling, i.e. transcribing speech to text with an Automatic Speech Recognition (ASR) system, processing the transcribed text using Natural Language Processing (NLP) methods, and converting text back to speech with a speech synthesis model. This method eliminates speech variability but requires large textual datasets, which are not always available for all languages. In addition, it removes all the expressivity contained in the speech itself. Recent advances in self-supervised speech learning (SpeechSSL) have enabled the learning of good discrete speech representations from raw audio, bridging the gap between speech and text technologies. This makes it possible to train language models on discrete representations (discrete units, or pseudo-text) obtained from speech, and has given rise to a new domain called TextlessNLP, where the task is to learn language directly from audio signals, bypassing the need for ASR systems. The resulting Spoken Language Models (SpeechLMs) have been shown to be feasible and offer new possibilities for speech processing compared to cascade systems. The objective of this thesis is thus to explore and improve this newly formed domain. We analyze why these discrete representations work, discover new applications of SpeechLMs to spoken dialogues, extend TextlessNLP to more expressive speech, and improve the performance of SpeechLMs to reduce the gap between SpeechLMs and TextLMs.
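
A minimal sketch of the pseudo-text pipeline described above: frame-level speech features are quantized into discrete units and a language model is trained on the unit sequences. Random features, k-means quantization, and a smoothed bigram model are deliberately crude stand-ins for the self-supervised encoder and neural SpeechLM used in practice:

```python
# Toy textless pipeline: quantize frame-level speech features into discrete
# units, then train a smoothed bigram model on the unit sequence (random
# features and k-means stand in for a self-supervised encoder and quantizer).
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 39))       # e.g. one 39-dim vector per 10 ms frame

K = 50                                     # unit inventory size
units = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(frames)

bigrams = Counter(zip(units, units[1:]))   # counts over the "pseudo-text"
unigrams = Counter(units[:-1])

def bigram_prob(u, v, alpha=1.0):
    """P(v | u) with add-alpha smoothing over the K-unit vocabulary."""
    return (bigrams[(u, v)] + alpha) / (unigrams[u] + alpha * K)

print(bigram_prob(units[0], units[1]))
```
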
13

Goldie, Anna Darling. "CHATTER : a spoken language dialogue system for language learning applications." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66420.

Abstract:
The goal of this thesis is to build a Computer Aided Language Learning game that simulates a casual conversation in Mandarin Chinese. In the envisioned system, users will chat with a computer on topics ranging from relationship status to favorite Chinese dish. I hope to provide learners with more opportunities to practice speaking and reading foreign languages. The system was designed with generality in mind. The framework allows developers to easily implement dialogue systems to allow students to practice communications in a variety of situations, such as in a street market, at a restaurant, or in a hospital. A user simulator was also implemented, which was useful for the code development, as a tutor for the student, and as an evaluation tool. All of the 18 topics were covered within the 20 sample dialogues, no two dialogues took the same path, questions and remarks were worded differently, and no two users had the same profile, resulting in high variety, coherence, and natural language quality.
14

Inagaki, Yasuyoshi, Katsuhiko Toyama, Shigeki Matsubara, and Yoshihide Kato. "Spoken Language Parsing Based on Incremental Disambiguation." ISCA (International Speech Communication Association), 2000. http://hdl.handle.net/2237/15103.

15

Inagaki, Yasuyoshi, Katsuhiko Toyama, Nobuo Kawaguchi, Shigeki Matsubara, and Yasuyuki Aizawa. "Spoken Language Corpus for Machine Interpretation Research." ISCA (International Speech Communication Association), 2000. http://hdl.handle.net/2237/15104.

16

He, Y. "A statistical approach to spoken language understanding." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603917.

Abstract:
The research work described here focuses on statistical learning approaches for building a purely data-driven spoken language understanding (SLU) system whose three major components, the speech recognizer, the semantic parser, and the dialogue act decoder, are trained entirely from data. The system is comparable to existing SLU systems, which rely either on hand-crafted semantic grammar rules or on statistical models trained on fully-annotated training corpora, but it has a greatly reduced build cost. The core of the system is a novel hierarchical semantic parser called the Hidden Vector State (HVS) model. Unlike other hierarchical parsing models, which require fully-annotated treebank data for training, the HVS model can be trained using only lightly annotated data whilst simultaneously retaining sufficient ability to capture the hierarchical structure needed to robustly extract task domain semantics. The HVS parser is combined with a dialogue act detector based on Naive Bayesian networks, which have been extended and refined by introducing Tree-Augmented Naive Bayes networks (TANs) to allow inter-concept dependencies to be robustly modelled. Finally, the two semantic analyzer components, the HVS semantic parser and the modified-TAN dialogue act decoder, have been integrated with a standard HTK-based Hidden Markov Model (HMM) speech recognizer, and the additional knowledge provided by the semantic analyzer has been used to determine the best-scoring word hypothesis from the N-best lists generated by the speech recognizer. This purely data-driven SLU system has been built and tested using both the ATIS and DARPA Communicator test sets. In addition to testing on clean data, the system has been tested on various levels of noisy data and on modified application domains. The results support the claim that an SLU system which is statistically based and trained entirely from data is intrinsically robust and can be readily adapted to new applications.
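
The final integration step, where the semantic analyzer helps select a hypothesis from the recognizer's N-best list, can be sketched as a weighted combination of log-scores; the hypotheses, scores, and interpolation weight here are hypothetical:

```python
# Rescoring an ASR N-best list with an external semantic model score
# (hypothetical scores; the interpolation weight is a tuning parameter).
def rescore_nbest(nbest, weight=0.3):
    """nbest: list of (hypothesis, asr_logprob, semantic_logprob)."""
    return max(nbest, key=lambda h: (1 - weight) * h[1] + weight * h[2])

nbest = [
    ("show me flights to boston", -12.1, -3.2),
    ("show me lights to boston",  -11.8, -9.7),  # acoustically close, semantically poor
]
best, *_ = rescore_nbest(nbest)
print(best)   # -> "show me flights to boston"
```
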
17

Molle, Jo. "Shallow semantic processing in spoken language comprehension." Thesis, University of Strathclyde, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442012.

18

McGraw, Ian C. (Ian Carmichael). "Crowd-supervised training of spoken language systems." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/75641.

Abstract:
Spoken language systems are often deployed with static speech recognizers. Only rarely are parameters in the underlying language, lexical, or acoustic models updated on-the-fly. In the few instances where parameters are learned in an online fashion, developers traditionally resort to unsupervised training techniques, which are known to be inferior to their supervised counterparts. These realities make the development of spoken language interfaces a difficult and somewhat ad-hoc engineering task, since models for each new domain must be built from scratch or adapted from a previous domain. This thesis explores an alternative approach that makes use of human computation to provide crowd-supervised training for spoken language systems. We explore human-in-the-loop algorithms that leverage the collective intelligence of crowds of non-expert individuals to provide valuable training data at a very low cost for actively deployed spoken language systems. We also show that in some domains the crowd can be incentivized to provide training data for free, as a byproduct of interacting with the system itself. Through the automation of crowdsourcing tasks, we construct and demonstrate organic spoken language systems that grow and improve without the aid of an expert. Techniques that rely on collecting data remotely from non-expert users, however, are subject to the problem of noise. This noise can sometimes be heard in audio collected from poor microphones or muddled acoustic environments. Alternatively, noise can take the form of corrupt data from a worker trying to game the system - for example, a paid worker tasked with transcribing audio may leave transcripts blank in hopes of receiving a speedy payment. We develop strategies to mitigate the effects of noise in crowd-collected data and analyze their efficacy. This research spans a number of different application domains of widely-deployed spoken language interfaces, but maintains the common thread of improving the speech recognizer's underlying models with crowd-supervised training algorithms. We experiment with three central components of a speech recognizer: the language model, the lexicon, and the acoustic model. For each component, we demonstrate the utility of a crowd-supervised training framework. For the language model and lexicon, we explicitly show that this framework can be used hands-free, in two organic spoken language systems.
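
One standard defense against the corrupt-worker problem described above is to collect redundant transcripts per utterance and keep only those where workers agree. A minimal sketch with invented data, using normalized string similarity as the agreement measure:

```python
# Filtering crowd-sourced transcripts by inter-worker agreement
# (invented data; difflib's ratio serves as a simple agreement measure).
from difflib import SequenceMatcher
from itertools import combinations

def agreement(transcripts):
    """Mean pairwise similarity among one utterance's transcripts."""
    pairs = list(combinations(transcripts, 2))
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

crowd_data = {
    "utt1": ["turn on the kitchen light", "turn on the kitchen light",
             "turn on kitchen light"],
    "utt2": ["", "asdf", "play some music"],   # workers gaming the task
}

accepted = {u: ts for u, ts in crowd_data.items() if agreement(ts) >= 0.8}
print(sorted(accepted))   # -> ['utt1']
```
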
19

Den, Yasuharu. "A Uniform Approach to Spoken Language Analysis." Kyoto University, 1996. http://hdl.handle.net/2433/154673.

20

Hall, Mica. "Russian as spoken by the Crimean Tatars." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/7163.

21

Hoffiz, Benjamin Theodore III. "Morphology of United Arab Emirates Arabic, Dubai dialect." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187179.

Abstract:
This study is a synchronic descriptive analysis of the morphology of the Arabic dialect spoken by natives of the city of Dubai, United Arab Emirates. Hereafter, the dialect will be abbreviated 'DD' and also referred to as 'the dialect' or 'this dialect'. The central focus of this study is the morphological component of DD as it interacts with phonological processes. Definitions of words are provided in the form of English glosses and translations, and are elaborated upon where needed. Layout of chapters: this dissertation is presented in the following order. Chapter one is introductory. The historical background of the Arabic language and Arabic diglossia are discussed in this chapter. In the same vein, four descriptive models that treat the development of the Arabic dialects are discussed. The present linguistic situation in the U.A.E. is also touched upon. The aim of this research and the methodology followed in it are also explained. Additionally, chapter one contains a review of the literature on Gulf Arabic, of which DD is a dialect, or subdialect, and a review of related literature. Chapter two deals with the phonological system of DD. It covers consonants and vowels and their distribution, in addition to anaptyxis, assimilation, elision, emphasis, etc. Morphology is treated in chapters three through six. The morphology of DD verbs, including inflection for tense, number and gender, is dealt with in the third chapter. Because DD morphology is root-based, the triliteral root system, which is extremely productive, is explained in some detail. Chapter four deals with the morphology of DD nouns, including verbal nouns, occupational nouns, nouns of location, etc. Noun inflection for number and gender is also discussed in this chapter. The morphology of noun modifiers is treated in chapter five. This includes participles, relative adjectives, positive adjectives and the construct phrase. Pronoun morphology, and the processes associated with it, are covered in chapter six. The seventh chapter is the conclusion. It delineates the limitations of this study and contains specific comments on observations made in the process of this research. The contributions of this dissertation and suggestions for further investigation and research are also discussed in chapter seven.
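
The triliteral root system mentioned above can be illustrated with a tiny interdigitation function that slots a consonantal root into vocalic patterns. The root and patterns below are standard textbook examples for Arabic in general, not data from the Dubai dialect:

```python
# Toy root-and-pattern interdigitation for Arabic-style morphology
# (textbook examples; C1/C2/C3 slots are filled by the triliteral root).
def interdigitate(root, pattern):
    """Fill the C1, C2, C3 slots in a pattern with a triliteral root."""
    out = pattern
    for i, consonant in enumerate(root, start=1):
        out = out.replace(f"C{i}", consonant)
    return out

root = ("k", "t", "b")                 # the root k-t-b, associated with writing
patterns = {
    "C1aC2aC3": "perfective verb",     # katab   'he wrote'
    "C1aaC2iC3": "active participle",  # kaatib  'writer'
    "maC1C2uuC3": "passive participle" # maktuub 'written'
}
for pat, gloss in patterns.items():
    print(f"{interdigitate(root, pat):10s} {gloss}")
```
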
22

Mac Eoin, Gearóid. "What language was spoken in Ireland before Irish?" Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1923/.

Abstract:
Extract: That the Celtic languages were of the Indo-European family was first recognised by Rasmus Christian Rask (*1787), a young Danish linguist, in 1818. However, the fact that he wrote in Danish meant that his discovery was not noted by the linguistic establishment until long after his untimely death in 1832. The same conclusion was arrived at independently of Rask and, apparently, of each other, by Adolphe Pictet (1836) and Franz Bopp (1837). This agreement between the foremost scholars made possible the completion of the picture of the spread of the Indo-European languages in the extreme west of the European continent. However, in the Middle Ages the speakers of Irish had no awareness of any special relationship between Irish and the other Celtic languages, and a scholar as linguistically competent as Cormac mac Cuillennáin (†908), or whoever compiled Sanas Chormaic, treated Welsh on the same basis as Greek, Latin, and the lingua northmannorum in the elucidation of the meaning and history of Irish words. [...]
23

Hillard, Dustin Lundring. "Automatic sentence structure annotation for spoken language processing." Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/6080.

24

Inagaki, Yasuyoshi, Nobuo Kawaguchi, Shigeki Matsubara, and Tomohiro Ohno. "Spiral Construction of Syntactically Annotated Spoken Language Corpus." IEEE, 2003. http://hdl.handle.net/2237/15085.

25

Inagaki, Yasuyoshi, Nobuo Kawaguchi, Takahisa Murase, and Shigeki Matsubara. "Stochastic Dependency Parsing of Spontaneous Japanese Spoken Language." ACL (Association for Computational Linguistics), 2002. http://aclweb.org/anthology/.

26

Ohno, Tomohiro, Shigeki Matsubara, Nobuo Kawaguchi, and Yasuyoshi Inagaki. "Robust Dependency Parsing of Spontaneous Japanese Spoken Language." IEICE, 2005. http://hdl.handle.net/2237/7824.

27

Moss, Helen Elizabeth. "Access to word meanings during spoken language comprehension." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334148.

28

Kuo, Chen-Li. "Interpreting intonation in English-Chinese spoken language translation." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492917.

Abstract:
This thesis presents a system for translating spoken English into Mandarin, paying particular attention to the relationship between the phonologically marked emphatic/contrastive focus in English and the lexical/syntactic focus constructions in Mandarin. This is based on the assumption that information carried by intonation in English may be expressed using lexical/syntactic devices in tone languages.
29

Pon-Barry, Heather Roberta. "Inferring Speaker Affect in Spoken Natural Language Communication." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10710.

Abstract:
The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, is the task of using information in the speech signal to infer a person’s emotional or mental state. In this dissertation, our approach is to assess the utility of prosody, or manner of speaking, in classifying speaker affect. Prosody refers to the acoustic features of natural speech: rhythm, stress, intonation, and energy. Affect refers to a person’s emotions and attitudes such as happiness, frustration, or uncertainty. We focus on one specific dimension of affect: level of certainty. Our goal is to automatically infer whether a person is confident or uncertain based on the prosody of his or her speech. Potential applications include conversational dialogue systems (e.g., in educational technology) and voice search (e.g., smartphone personal assistants). There are three main contributions of this thesis. The first contribution is a method for eliciting uncertain speech that binds a speaker’s uncertainty to a single phrase within the larger utterance, allowing us to compare the utility of contextually-based prosodic features. Second, we devise a technique for computing prosodic features from utterance segments that both improves uncertainty classification and can be used to determine which phrase a speaker is uncertain about. The level of certainty classifier achieves an accuracy of 75%. Third, we examine the differences between perceived, self-reported, and internal level of certainty, concluding that perceived certainty is aligned with internal certainty for some but not all speakers and that self-reports are a good proxy for internal certainty.
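
A minimal sketch of the classification setup framed above: utterance-level prosodic statistics feed a binary certainty classifier. The feature values are synthetic, and logistic regression stands in for whatever classifier a real system would use:

```python
# Toy prosody-based certainty classifier (synthetic utterance-level features;
# columns: mean pitch (Hz), pitch range, mean energy, total pause time (s)).
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [210.0, 80.0, 0.70, 0.2],   # confident: lively pitch, little pausing
    [205.0, 75.0, 0.65, 0.3],
    [180.0, 30.0, 0.40, 1.4],   # uncertain: flat pitch, long pauses
    [175.0, 25.0, 0.35, 1.8],
])
y = np.array([1, 1, 0, 0])      # 1 = confident, 0 = uncertain

clf = LogisticRegression().fit(X, y)
new_utt = np.array([[185.0, 35.0, 0.45, 1.2]])
print("confident" if clf.predict(new_utt)[0] else "uncertain")
```
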
30

Kirk, Steven J. "Second language spoken fluency in monologue and dialogue." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/38421/.

Abstract:
Although second language spoken fluency has long been recognized as a major component of language proficiency, it has never been clearly defined. It has been shown that fluency is a complex phenomenon, with a host of relevant factors, and it has been suggested that it might be better separated into multiple concepts, such as cognitive fluency and utterance fluency. There is also evidence that fluency has a dialogic aspect, that is, that the fluency of a conversation is a co-construction of the two speakers, rather than simply alternating monologues. This can be observed in the confluence created by smooth turn exchanges, which results in minimizing gaps and avoiding overlap. The present study seeks to examine the co-construction of dialogic fluency through a parallel case study of two Japanese learners of English. One learner was of lower-intermediate proficiency, and the other was of higher proficiency, but both were able to create good impressions of fluency in conversations with native speakers of English. The case study design was semi-experimental in that it involved a story-retelling task done in monologue and dialogue, which was repeated to take into account the effect of practice. The case study allowed the close examination of the construction of fluency in the story-retelling task moment by moment through the course of the retellings, taking into account all relevant factors. The semi-experimental, parallel case study design allowed the findings to be compared (1) between monologue (where the learner recorded herself telling the story alone) and dialogue (where the learner told the story to a native speaker interlocutor), and (2) between the two learners of differing proficiency. This study was also mixed-methods in that it combined a qualitative, grounded-theory approach to data analysis involving discourse analytic techniques with quantitative comparisons of temporal variables of fluency. It was also multi-modal in that video was employed to take into account gaze, gesture, and head nods. Results of quantitative analyses revealed that the dialogues were comparatively more fluent than the monologues in terms of speech rate, articulation rate, and length of silences, for both speakers, although the higher-proficiency subject had faster speech and articulation rates than the lower-proficiency learner. This implies that narrative in dialogue is not just a listener occasionally backchanneling while the speaker delivers a monologue. The qualitative analyses revealed that the co-construction of smooth conversation was facilitated by the alignment of rhythm between the speaker and listener, supported by gaze, gestures, and head nods. The learners in these case studies were able to employ different fluency techniques for stressing words in phrases to create rhythm in spite of lower speech rates, and were able to adjust those techniques to maintain rhythm with even lower speech rates at difficult points of the story. These results confirm previous research that some apparent “dysfluencies” in speech should be considered speech management phenomena that positively contribute to the co-construction of fluent conversation. They also suggest that alignment between the speakers in terms of rhythm of speech and gaze is important in conversation, confirming previous research showing alignment at these and other levels of interaction.
Finally, it appears that fluency is a multi-level construct, and that dialogic fluency should be considered a separate construct from cognitive fluency, of equal or more importance. This has implications for language testing, such that fluency may not be able to be captured with single test types, and for language teaching and learning more generally.
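
The temporal fluency variables compared in the study (speech rate, articulation rate, pause length) can be computed from simple timing data, as in this sketch with hypothetical values:

```python
# Computing utterance fluency measures from timing data (hypothetical values).
def fluency_measures(n_syllables, total_time, pause_times):
    """Speech rate uses total time; articulation rate excludes silent pauses."""
    phonation_time = total_time - sum(pause_times)
    return {
        "speech_rate": n_syllables / total_time,          # syllables per second
        "articulation_rate": n_syllables / phonation_time,
        "mean_pause": sum(pause_times) / len(pause_times) if pause_times else 0.0,
    }

# A 12-second story-retelling turn with 30 syllables and three silent pauses.
print(fluency_measures(30, 12.0, [0.8, 1.1, 0.6]))
```
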
31

Lee, Vivienne C. (Vivienne Catherine). "LanguageLand: a multimodal conversational spoken language learning system." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/33143.

Abstract:
LanguageLand is a multimodal conversational spoken language learning system whose purpose is to help native English speakers learn Mandarin Chinese. The system is centered on a game that involves navigation on a simulated map, and it consists of an edit mode and a play mode. In edit mode, users can set up the street map with any of the objects (such as a house, a church, or a policeman) shown in the toolbar. This can be done through spoken conversation over the telephone or through typed text along with mouse clicks. In play mode, users are given a start and an end corner, and the goal is to get from the start to the end on the map. While the system only responds actively to accurate Mandarin phrases, the user can speak or type in English to obtain Mandarin translations of those English words or phrases. The LanguageLand application is built using Java and Swing. The overall system is constructed using the Galaxy Communicator architecture and existing SLS technologies, including Summit for speech recognition, Tina for NL understanding, Genesis for NL generation, and Envoice for speech synthesis.
32

Cowan, Brooke A. (Brooke Alissa) 1972. "PLUTO: a preprocessor for multilingual spoken language generation." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/30102.

Abstract:
Surface realization, a subtask of natural language generation, maps a meaning representation to a natural language string. This thesis presents an architecture for a surface realization component in a spoken dialogue system. The architecture divides the surface realization task in two: (1) modification of the meaning representation to adhere to the constraints of the target language, and (2) string production. Each subtask is handled by a separate module. PLUTO is a new module, responsible for meaning representation modification, that has been added to the Spoken Language Systems group's surface realization component. PLUTO acts as a preprocessor to the main processor, GENESIS, which is responsible for string production. We show how this new, decoupled architecture is amenable to a hybrid approach to machine translation that combines transfer and interlingua. We also present a policy for generation that specifies the roles of PLUTO, GENESIS, and the lexicon they share. This policy formalizes a way of writing robust, reusable grammars. The primary contribution of this work is to simplify the development of such grammars in multilingual speech-based applications.
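
The division of labor described above (a preprocessor adjusts the meaning representation to the target language's constraints, then a string producer renders it) can be sketched as follows. The frame format, ordering rule, and templates are invented for illustration and are not PLUTO's or GENESIS's actual formats:

```python
# Sketch of a two-stage surface realizer: a preprocessor edits the meaning
# frame to satisfy target-language constraints, then a string producer renders
# it (frame format, ordering rule, and templates invented for illustration).
def preprocess(frame, language):
    """Stage 1: adapt the language-neutral frame to the target language."""
    frame = dict(frame)
    # Japanese is verb-final, so reorder the constituents for it.
    frame["order"] = (["topic", "object", "verb"] if language == "ja"
                      else ["topic", "verb", "object"])
    return frame

TEMPLATES = {   # Stage 2 lexicon: slot -> surface string per language
    "en": {"topic": "the user", "verb": "requests", "object": "a flight to Boston"},
    "ja": {"topic": "ユーザーは", "object": "ボストン行きの便を", "verb": "希望しています"},
}

def produce(frame, language):
    """Stage 2: render the (possibly modified) frame as a string."""
    return " ".join(TEMPLATES[language][slot] for slot in frame["order"])

frame = {"intent": "request_flight", "destination": "Boston"}
for lang in ("en", "ja"):
    print(produce(preprocess(frame, lang), lang))
```
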
33

Korpusik, Mandy B. "Spoken language understanding in a nutrition dialogue system." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99860.

Abstract:
Existing approaches for the prevention and treatment of obesity are hampered by the lack of accurate, low-burden methods for self-assessment of food intake, especially for hard-to-reach, low-literate populations. For this reason, we propose a novel approach to diet tracking that utilizes speech understanding and dialogue technology in order to enable efficient self-assessment of energy and nutrient consumption. We are interested in studying whether speech can lower user workload compared to existing self-assessment methods, whether spoken language descriptions of meals can accurately quantify caloric and nutrient absorption, and whether dialogue can efficiently and effectively be used to ascertain and clarify food properties, perhaps in conjunction with other modalities. In this thesis, we explore the core innovation of our nutrition system: the language understanding component which relies on machine learning methods to automatically detect food concepts in a user's spoken meal description. In particular, we investigate the performance of conditional random field (CRF) models for semantic labeling and segmentation of spoken meal descriptions. On a corpus of 10,000 meal descriptions, we achieve an average F1 test score of 90.7 for semantic tagging and 86.3 for associating foods with properties. In a study of users interacting with an initial prototype of the system, semantic tagging achieved an accuracy of 83%, which was sufficiently high to satisfy users.
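
The semantic labeling task described above assigns a food or property tag to each token of a meal description. In this sketch, a per-token logistic regression over simple features stands in for the CRF used in the thesis, and the two training sentences are invented:

```python
# Toy semantic tagging of meal descriptions (per-token logistic regression
# stands in for the CRF described in the abstract; data invented).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def token_features(tokens, i):
    return {
        "word": tokens[i].lower(),
        "prev": tokens[i - 1].lower() if i else "<s>",
        "is_digit": tokens[i].isdigit(),
    }

sentences = [
    (["I", "had", "two", "slices", "of", "toast"],
     ["O", "O", "B-Quantity", "I-Quantity", "O", "B-Food"]),
    (["a", "bowl", "of", "oatmeal", "with", "milk"],
     ["B-Quantity", "I-Quantity", "O", "B-Food", "O", "B-Food"]),
]

X = [token_features(toks, i) for toks, _ in sentences for i in range(len(toks))]
y = [tag for _, tags in sentences for tag in tags]

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)

test = ["I", "had", "toast", "with", "milk"]
feats = vec.transform([token_features(test, i) for i in range(len(test))])
print(list(zip(test, clf.predict(feats))))
```
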
34

Lau, Tien-Lok Jonathan 1980. "SLLS: an online conversational spoken language learning system." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29684.

Abstract:
The Spoken Language Learning System (SLLS) is intended to be an engaging, educational, and extensible spoken language learning system showcasing the multilingual capabilities of the Spoken Language Systems Group's (SLS) systems. The motivation behind SLLS is to satisfy both the demand for spoken language learning in an increasingly multi-cultural society and the desire for continued development of the multilingual systems at SLS. SLLS is an integration of an Internet presence with augmentations to SLS's Mandarin systems built within the Galaxy architecture, focusing on the situation of an English speaker learning Mandarin. We offer language learners the ability to listen to spoken phrases and simulated conversations online, engage in interactive dynamic conversations over the telephone, and review audio and visual feedback of their conversations. We also provide a wide array of administration and maintenance features online for teachers and administrators to facilitate continued system development and user interaction, such as lesson plan creation, vocabulary management, and a requests forum. User studies have shown that there is an appreciation for the potential of the system and that the core operation is intuitive and entertaining. The studies have also helped to illuminate the vast array of future work necessary to further polish the language learning experience and reduce the administrative burden. The focus of this thesis is the creation of the first iteration of SLLS; we believe we have taken the first step down the long but hopeful path towards helping people speak a foreign language.
35

Mrkšić, Nikola. "Data-driven language understanding for spoken dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/276689.

Abstract:
Spoken dialogue systems provide a natural conversational interface to computer applications. In recent years, the substantial improvements in the performance of speech recognition engines have helped shift the research focus to the next component of the dialogue system pipeline: the one in charge of language understanding. The role of this module is to translate user inputs into accurate representations of the user goal in the form that can be used by the system to interact with the underlying application. The challenges include the modelling of linguistic variation, speech recognition errors and the effects of dialogue context. Recently, the focus of language understanding research has moved to making use of word embeddings induced from large textual corpora using unsupervised methods. The work presented in this thesis demonstrates how these methods can be adapted to overcome the limitations of language understanding pipelines currently used in spoken dialogue systems. The thesis starts with a discussion of the pros and cons of language understanding models used in modern dialogue systems. Most models in use today are based on the delexicalisation paradigm, where exact string matching supplemented by a list of domain-specific rephrasings is used to recognise users' intents and update the system's internal belief state. This is followed by an attempt to use pretrained word vector collections to automatically induce domain-specific semantic lexicons, which are typically hand-crafted to handle lexical variation and account for a plethora of system failure modes. The results highlight the deficiencies of distributional word vectors which must be overcome to make them useful for downstream language understanding models. The thesis next shifts focus to overcoming the language understanding models' dependency on semantic lexicons. To achieve that, the proposed Neural Belief Tracking (NBT) model forsakes the use of standard one-hot n-gram representations used in Natural Language Processing in favour of distributed representations of user utterances, dialogue context and domain ontologies. The NBT model makes use of external lexical knowledge embedded in semantically specialised word vectors, obviating the need for domain-specific semantic lexicons. Subsequent work focuses on semantic specialisation, presenting an efficient method for injecting external lexical knowledge into word vector spaces. The proposed Attract-Repel algorithm boosts the semantic content of existing word vectors while simultaneously inducing high-quality cross-lingual word vector spaces. Finally, NBT models powered by specialised cross-lingual word vectors are used to train multilingual belief tracking models. These models operate across many languages at once, providing an efficient method for bootstrapping language understanding models for lower-resource languages with limited training data.
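
The Attract-Repel idea summarized above (pull synonyms together, push antonyms apart, stay close to the original distributional space) can be illustrated with a toy update loop over random vectors and invented constraint pairs; the real algorithm uses max-margin costs over mini-batches:

```python
# Toy attract-repel updates on word vectors: synonyms pulled together,
# antonyms pushed apart, with a pull back toward the original vectors
# (random vectors and invented pairs; simplified vs. the real algorithm).
import numpy as np

rng = np.random.default_rng(0)
words = ["cheap", "inexpensive", "pricey", "expensive"]
vecs = {w: rng.normal(size=8) for w in words}
for w in words:
    vecs[w] /= np.linalg.norm(vecs[w])
orig = {w: v.copy() for w, v in vecs.items()}

synonyms = [("cheap", "inexpensive"), ("pricey", "expensive")]
antonyms = [("cheap", "expensive")]
lr, reg = 0.1, 0.05

for _ in range(100):
    for a, b in synonyms:              # attract: pull the pair together
        diff = vecs[a] - vecs[b]
        vecs[a] -= lr * diff
        vecs[b] += lr * diff
    for a, b in antonyms:              # repel: push the pair apart
        diff = vecs[a] - vecs[b]
        vecs[a] += lr * diff
        vecs[b] -= lr * diff
    for w in words:                    # stay near the original space, renormalize
        vecs[w] -= reg * (vecs[w] - orig[w])
        vecs[w] /= np.linalg.norm(vecs[w])

cos = lambda a, b: float(a @ b)        # vectors are unit length
print(cos(vecs["cheap"], vecs["inexpensive"]))  # high: synonyms attracted
print(cos(vecs["cheap"], vecs["expensive"]))    # low: antonyms repelled
```
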
36

Coria, Juan Manuel. "Continual Representation Learning in Written and Spoken Language." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG025.

Abstract:
Although machine learning has recently witnessed major breakthroughs, today's models are mostly trained once on a target task and then deployed, rarely (if ever) revisiting their parameters. This problem affects performance after deployment, as task specifications and data may evolve with user needs and distribution shifts. To solve this, continual learning proposes to train models over time as new data becomes available. However, models trained in this way suffer from significant performance loss on previously seen examples, a phenomenon called catastrophic forgetting. Although many studies have proposed different strategies to prevent forgetting, they often rely on labeled data, which is rarely available in practice. In this thesis, we study continual learning for written and spoken language. Our main goal is to design autonomous and self-learning systems able to leverage scarce on-the-job data to adapt to the new environments they are deployed in. Contrary to recent work on learning general-purpose representations (or embeddings), we propose to leverage representations that are tailored to a downstream task. We believe the latter may be easier to interpret and exploit by unsupervised training algorithms like clustering, which are less prone to forgetting. Throughout our work, we improve our understanding of continual learning in a variety of settings, such as the adaptation of a language model to new languages for sequence labeling tasks, or the adaptation to a live conversation in the context of speaker diarization. We show that task-specific representations allow for effective low-resource continual learning, and that a model's own predictions can be exploited for full self-learning.
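
A minimal sketch of the self-learning loop suggested in the closing sentence: a model's confident predictions on unlabeled data are recycled as pseudo-labels for further training. The data is synthetic and the confidence threshold is an arbitrary choice:

```python
# Toy self-training loop: confident predictions on unlabeled data are added
# to the training set as pseudo-labels (synthetic data; 0.95 threshold arbitrary).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
X_train, y_train = X[:20], y[:20]          # only 20 labeled examples
unlabeled = X[20:]

for _ in range(3):                         # a few self-training rounds
    clf = LogisticRegression().fit(X_train, y_train)
    if len(unlabeled) == 0:
        break
    proba = clf.predict_proba(unlabeled)
    confident = proba.max(axis=1) >= 0.95  # keep only confident predictions
    X_train = np.vstack([X_train, unlabeled[confident]])
    y_train = np.concatenate([y_train, proba.argmax(axis=1)[confident]])
    unlabeled = unlabeled[~confident]

print(f"labeled + pseudo-labeled examples: {len(X_train)}")
```
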
37

Tan, Chengzhu [譚成珠]. "Sentence structure in spoken modern standard Chinese." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31213649.

38

Toivanen, Juhani H. "Perspectives on intonation: English, Finnish, and English spoken by Finns." Frankfurt am Main; New York: Peter Lang, 2001. http://catalog.hathitrust.org/api/volumes/oclc/47142055.html.

39

Jensen, Marie-Thérèse 1949. "Corrective feedback to spoken errors in adult ESL classrooms." Monash University, Faculty of Education, 2001. http://arrow.monash.edu.au/hdl/1959.1/8620.

40

Cheepen, C. "The interactive basis of spoken dialogue." Thesis, University of Hertfordshire, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376103.

41

Christensen, Matthew B. "Variation in spoken and written Mandarin narrative discourse." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487859313344186.

42

Masud, Rabia. "Language spoken around the world: lessons from Le Corbusier." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33952.

Abstract:
Le Corbusier's method of creating Architecture in all regions of the world is endlessly rich in techniques. While it is impossible to exactly know his thoughts as he created his modern compositions that skillfully addressed contextual cues, I present a thesis of how Corbusier approached different sites and masterfully created residences that were places "where happiness is born". I will use Shape Grammars and formulate my own languages that will recreate Corbusier's two Monol houses: Maison Jaoul in Paris and Sarabhai Villa in Ahmedabad. Furthermore, I will expand on these houses by creating other iterations, and transforming the grammars to understand critical major and minor moves. In the end I hope to derive architectural lessons that come from formal exercises that can be used in future design processes. I explore this practical effort by creating designs for a site in Midtown, Atlanta. I compare the process of using Shape Grammars with that of the typical studio approach. In conclusion, I find that Shape Grammars allows one to produce iterations that connect to the lessons of the original houses in an intuitive manner.
43

Inagaki, Yasuyoshi, Shigeki Matsubara, Atsushi Mizuno, and Koichiro Ryu. "Incremental Japanese Spoken Language Generation in Simultaneous Machine Interpretation." IEICE, 2004. http://hdl.handle.net/2237/15091.

44

Yao, Huan 1976. "Utterance verification in large vocabulary spoken language understanding system." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47633.

45

Reimers, Stian John. "Representations of phonology in spoken language comprehension and production." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620381.

46

Zhuang, Jie. "Lexical, semantic, and syntactic processes in spoken language comprehension." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608554.

47

Hatchard, Rachel. "A construction-based approach to spoken language in aphasia." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/10385/.

Abstract:
Linguistic research into aphasia, like other areas of language research, has mainly been approached from the perspective of rule-based, generative theory (Chomsky, 1957 onwards). In turn, this has impacted on clinical practice, underpinning both aphasia assessment and therapy. However, this theory is now being widely questioned (e.g. Tomasello, 2003), and other approaches are emerging, such as the constructivist, usage-based perspective, influenced by cognitive and construction grammars (e.g. Langacker, 1987; Goldberg, 1995). This approach has yielded important results in, for example, child language (e.g. Ambridge, Noble, & Lieven, 2014), but it remains largely unapplied to language in aphasia. This thesis begins to address this by conducting an exploratory examination of spoken language in aphasia from a constructivist, usage-based perspective. Two central features of usage-based theory, the nature of constructions and the role of frequency, form the basis of the studies reported in the thesis. Reliable methods of transcription and speech segmentation appropriate for an analysis that employs this approach are developed and then applied to the examination of spoken narratives of the Cinderella story by twelve people with a range of aphasia types and severities. Beginning at the single word level, the effects of general versus ‘context-specific’ frequencies on participants’ nouns are examined, demonstrating that most participants’ noun production appears to be more influenced by context-specific frequency, that is, the frequency of nouns in the context of the Cinderella story. This is followed by an analysis of errors in marking these nouns for grammatical number. A main finding here was that error production seems to be affected by general frequency: the noun form used erroneously was always more frequent than that expected. Finally, beyond the single word level, an in-depth analysis is provided of the participants’ verbs and the strings these were produced in. This focuses on the number and productivity of constructions apparently available to the participants and shows that these speakers can be placed along a continuum largely corresponding to their expressive language capabilities. The productions of the more impaired speakers were mainly limited to a small number of high-frequency words and lexically-specific or item-based constructions. In contrast, those with greater expressive language capabilities used a larger number and variety of constructions, including more lengthy schematic patterns. They seemed much more able to use their constructions productively in creating novel utterances. In addition, an analysis of the errors in participants’ verb strings was conducted. This revealed some differences in the types of errors produced across the participant group, with the more impaired speakers making more omission and inflection errors, whilst the participants with greater expressive language capabilities produced more blending errors. The analysis demonstrates how these seemingly different error types could all be explained within a constructivist, usage-based approach, by problems with retrieval. In showing how the results of these studies can be accounted for by constructivist, usage-based theory, the thesis demonstrates how this view could help to elucidate language in aphasia and, equally, how aphasia offers new ground for testing this approach.
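
The contrast between general and context-specific frequency that drives the noun study can be made concrete with a small sketch: the same noun can be rare in a general corpus yet frequent in Cinderella retellings (all counts invented):

```python
# Contrasting general vs. context-specific noun frequency (all counts invented:
# a general-corpus list vs. counts pooled from Cinderella retellings).
from collections import Counter

general = Counter({"time": 1200, "sister": 55, "ball": 40, "slipper": 2})
cinderella = Counter({"ball": 90, "sister": 70, "slipper": 65, "time": 30})

for noun in ["slipper", "ball", "time"]:
    print(f"{noun:8s} general: {general[noun]:5d}   story-specific: {cinderella[noun]:3d}")
```
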
48

Hoshino, Takane. "An analysis of Hosii in modern spoken Japanese." Connect to this title online, 1991. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1116617297.

49

Thomson, Blaise Roger Marie. "Statistical methods for spoken dialogue management." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609054.

50

Dissawarotham, Pijitra. "The phonology of Plang as spoken in Banhuaynamkhum Chiengrai province." Abstract, 1986. http://mulinet3.li.mahidol.ac.th/thesis/2529/29E-Pijitra-D.pdf.
