Theses on the topic "Language Representation"

To see other types of publications on this topic, follow the link: Language Representation.

Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles

Choose a source:

Consult the top 50 theses for your research on the topic "Language Representation".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sukkarieh, Jana Zuheir. « Natural language for knowledge representation ». Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620452.

2

Wilhelmson, Mika. « Representations of culture in EIL : Cultural representation in Swedish EFL textbooks ». Thesis, Högskolan Dalarna, Engelska, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:du-21120.

Abstract:
The English language has become an international language and is globally used as a lingua franca. Therefore, there has been a shift in English-language education toward teaching English as an international language (EIL). Teaching from the EIL paradigm means that English is seen as an international language used in communication by people from different linguistic and cultural backgrounds. As the approach to English-language education changes from the traditional native-speaker, target-country context, so does the role of culture within English-language teaching. The aim of this thesis is to investigate and analyse cultural representations in two Swedish EFL textbooks used in upper-secondary school to see how they correspond with the EIL paradigm. This is done by focusing on the geographical origin of the cultural content as well as looking at what kinds of culture are represented in the textbooks. A content analysis of the textbooks is conducted, using Kachru's Concentric Circles of English as the model for the analysis of the geographical origin. Horibe's model of the three different kinds of culture in EIL is the model used for coding the second part of the analysis. The results of the analysis show that culture of target countries and "Culture as social custom" dominate the cultural content of the textbooks. Thus, although there are some indications that the EIL paradigm has influenced the textbooks, the traditional approach to culture in language teaching still prevails in the analysed textbooks. Because of the relatively small sample included in the thesis, further studies need to be conducted in order to make conclusions regarding the Swedish context as a whole.
3

Luebbering, Candice Rae. « The Cartographic Representation of Language : Understanding language map construction and visualizing language diversity ». Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/37543.

Abstract:
Language maps provide illustrations of linguistic and cultural diversity and distribution, appearing in outlets ranging from textbooks and news articles to websites and wall maps. They are valuable visual aids that accompany discussions of our cultural climate. Despite the prevalent use of language maps as educational tools, little recent research addresses the difficult task of map construction for this fluid cultural characteristic. The display and analysis capabilities of current geographic information systems (GIS) provide a new opportunity for revisiting and challenging the issues of language mapping. In an effort to renew language mapping research and explore the potential of GIS, this dissertation is composed of three studies that collectively present a progressive work on language mapping. The first study summarizes the language mapping literature, addressing the difficulties and limitations of assigning language to space before describing contemporary language mapping projects as well as future research possibilities with current technology. In an effort to identify common language mapping practices, the second study is a map survey documenting the cartographic characteristics of existing language maps. The survey not only consistently categorizes language map symbology, it also captures unique strategies observed for handling locations with linguistic plurality as well as representing language data uncertainty. A new typology of language map symbology is compiled based on the map survey results. Finally, the third study specifically addresses two gaps in the language mapping literature: the issue of visualizing linguistic diversity and the scarcity of GIS applications in language mapping research. The study uses census data for the Washington, D.C. Metropolitan Statistical Area to explore visualization possibilities for representing the linguistic diversity. 
After recreating mapping strategies already in use for showing linguistic diversity, the study applies an existing statistic (a linguistic diversity index) as a new mapping variable to generate a new visualization type: a linguistic diversity surface. The overall goal of this dissertation is to provide the impetus for continued language mapping research and contribute to the understanding and creation of language maps in education, research, politics, and other venues.
Ph. D.
4

Perfors, Amy (Amy Francesca). « Learnability, representation, and language : a Bayesian approach ». Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45601.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 225-243).
Within the metaphor of the "mind as a computation device" that dominates cognitive science, understanding human cognition means understanding learnability: not only what (and how) the brain learns, but also what data is available to it from the world. Ideal learnability arguments seek to characterize what knowledge is in theory possible for an ideal reasoner to acquire, which illuminates the path towards understanding what human reasoners actually do acquire. The goal of this thesis is to exploit recent advances in machine learning to revisit three common learnability arguments in language acquisition. By formalizing them in Bayesian terms and evaluating them given realistic, real-world datasets, we achieve insight about what must be assumed about a child's representational capacity, learning mechanism, and cognitive biases. Exploring learnability in the context of an ideal learner but realistic (rather than ideal) datasets enables us to investigate what could be learned in practice rather than noting what is impossible in theory. Understanding how higher-order inductive constraints can themselves be learned permits us to reconsider inferences about innate inductive constraints in a new light. And realizing how a learner who evaluates theories based on a simplicity/goodness-of-fit tradeoff can handle sparse evidence may lead to a new perspective on how humans reason based on the noisy and impoverished data in the world. The learnability arguments I consider all ultimately stem from the impoverishment of the input: either because it lacks negative evidence, it lacks a certain essential kind of positive evidence, or it lacks the sufficient quantity of evidence necessary for choosing from an infinite set of possible generalizations.
(cont.) I focus on these learnability arguments in the context of three major topics in language acquisition: the acquisition of abstract linguistic knowledge about hierarchical phrase structure, the acquisition of verb argument structures, and the acquisition of word learning biases.
by Amy Perfors.
Ph.D.
5

Nayak, Sunita. « Representation and learning for sign language recognition ». [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002362.

6

Jarosiewicz, Eugenio. « Natural language parsing and representation in XML ». [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0000707.

7

Dawborn, Timothy James. « DOCREP : Document Representation for Natural Language Processing ». Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14767.

Abstract:
The field of natural language processing (NLP) revolves around the computational interpretation and generation of natural language. The language typically processed in NLP occurs in paragraphs or documents rather than in single isolated sentences. Despite this, most NLP tools operate over one sentence at a time, not utilising the context outside of the sentence nor any of the metadata associated with the underlying document. One pragmatic reason for this disparity is that representing documents and their annotations through an NLP pipeline is difficult with existing infrastructure. Representing linguistic annotations for a text document using a plain text markup-based format is not sufficient to capture arbitrarily nested and overlapping annotations. Despite this, most linguistic text corpora and NLP tools still operate in this fashion. A document representation framework (DRF) supports the creation of linguistic annotations stored separately to the original document, overcoming this nesting and overlapping annotations problem. Despite the prevalence of pipelines in NLP, there is little published work on, or implementations of, DRFs. The main DRFs, GATE and UIMA, exhibit usability issues which have limited their uptake by the NLP community. This thesis aims to solve this problem through a novel, modern DRF, DOCREP, a portmanteau of document representation. DOCREP is designed to be efficient, programming-language and environment agnostic, and most importantly, easy to use. We want DOCREP to be powerful and simple enough to use that NLP researchers and language technology application developers would even use it in their own small projects instead of developing their own ad hoc solution. This thesis begins by presenting the design criteria for our new DRF, extending upon existing requirements from the literature with additional usability and efficiency requirements that should lead to greater use of DRFs.
We outline how our new DRF, DOCREP, differs from existing DRFs in terms of the data model, serialisation strategy, developer interactions, support for rapid prototyping, and the expected runtime and environment requirements. We then describe our provided implementations of DOCREP in Python, C++, and Java, the most common languages in NLP; outlining their efficiency, idiomaticity, and the ways in which these implementations satisfy our design requirements. We then present two different evaluations of DOCREP. First, we evaluate its ability to model complex linguistic corpora through the conversion of the OntoNotes 5 corpus to DOCREP and UIMA, outlining the differences in modelling approaches required and efficiency when using these two DRFs. Second, we evaluate DOCREP against our usability requirements from the perspective of a computational linguist who is new to DOCREP. We walk through a number of common use cases for working with text corpora and contrast traditional approaches against their DOCREP counterparts. These two evaluations conclude that DOCREP satisfies our outlined design requirements and outperforms existing DRFs in terms of efficiency, and most importantly, usability. With DOCREP designed and evaluated, we then show how NLP applications can harness document structure. We present a novel document structure-aware tokenization framework for the first stage of full-stack NLP systems. We then present a new structure-aware NER system which achieves state-of-the-art results on multiple standard NER evaluations. The tokenization framework produces its tokenization, sentence boundary, and document structure annotations as native DOCREP annotations. The NER system consumes DOCREP annotations and utilises many components of the DOCREP runtime.
We believe that the adoption of DOCREP throughout the NLP community will assist in the reproducibility of results, substitutability of components, and overall quality assurance of NLP systems and corpora, all of which are problematic areas within NLP research and applications. This adoption will make developing and combining NLP components into applications faster, more efficient, and more reliable.
8

Ramos, González Juan José. « PML - A modeling Language for Physical Knowledge Representation ». Doctoral thesis, Universitat Autònoma de Barcelona, 2003. http://hdl.handle.net/10803/5801.

Abstract:
The topic of this thesis is the automated modeling of physical systems. Modeling automation has been a common objective of many present modeling tools. Reuse of predefined models is probably the main approach adopted by many of them in order to reduce the modeling burden. However, reuse is difficult to achieve and, as is discussed thoroughly in the thesis, the reusability of models cannot be assured when they are predefined to represent the system dynamics in a particular physical context. In order to avoid the reuse constraints due to the system dynamics formulation, a modeling language should be defined with a clear separation between the physical behaviour representation aspects (declarative physical knowledge) and the computational aspects concerning model simulation (procedural computational knowledge). The physical knowledge represents the system behaviour, and it supports the analysis of the model reuse context in order to set the system dynamics formulation.
The aim of this work is the design of a modeling language, PML, able to automate the modeling process by assuring the reusability of ready-made models independently of the physical context where they were defined. The reuse of a predefined model covers both the construction of new models (structured modeling) and the use of a model for different experimentation purposes. New models are constructed by coupling predefined models according to the physical system topology. Such structured models are manipulated in order to obtain the representation of the system dynamics of interest for the experimentation purposes.
PML is an object-oriented modeling language designed to represent system behaviour by means of modular structures (modeling classes). The PML modeling classes describe physical concepts well known to the modeller. The physical knowledge declared by the modeling classes is used to analyze structured models in order to automatically generate the mathematical representation of the system dynamics. The simulation model is obtained by means of an equation-based object-oriented modeling language.
9

Stephens, Robert Andrew. « Representation and knowledge acquisition : the problem of language ». Thesis, University of the West of England, Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321831.

10

Rocktaschel, Tim. « Combining representation learning with logic for language processing ». Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10040845/.

Abstract:
The current state-of-the-art in many natural language processing and automated knowledge base completion tasks is held by representation learning methods which learn distributed vector representations of symbols via gradient based optimization. They require little or no hand-crafted features, thus avoiding the need for most preprocessing steps and task-specific assumptions. However, in many cases representation learning requires a large amount of annotated training data to generalize well to unseen data. Such labeled training data is provided by human annotators who often use formal logic as the language for specifying annotations. This thesis investigates different combinations of representation learning methods with logic for reducing the need for annotated training data, and for improving generalization. We introduce a mapping of function-free first-order logic rules to loss functions that we combine with neural link prediction models. Using this method, logical prior knowledge is directly embedded in vector representations of predicates and constants. We find that this method learns accurate predicate representations for which no or little training data is available, while at the same time generalizing to other predicates not explicitly stated in rules. However, this method relies on grounding first-order logic rules, which does not scale to large rule sets. To overcome this limitation, we propose a scalable method for embedding implications in a vector space by only regularizing predicate representations. Subsequently, we explore a tighter integration of representation learning and logical deduction. We introduce an end-to-end differentiable prover – a neural network that is recursively constructed from Prolog’s backward chaining algorithm. The constructed network allows us to calculate the gradient of proofs with respect to symbol representations and to learn these representations from proving facts in a knowledge base. 
In addition to incorporating complex first-order rules, it induces interpretable logic programs via gradient descent. Lastly, we propose recurrent neural networks with conditional encoding and a neural attention mechanism for determining the logical relationship between two natural language sentences.
11

Nordmann, Emily. « The nature of lexical representation in language production ». Thesis, University of Aberdeen, 2013. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=192283.

Abstract:
This thesis presents an investigation of the three prominent models of language production: Levelt, Roelofs, and Meyer's (1999) two-stage account; Dell's (1986) interactive account; and Caramazza's (1997) Independent Network model. In particular, the thesis investigates four questions. First, is the activation of semantic, syntactic, and phonological representations serial or parallel? Secondly, is the flow of activation strictly modular, or is it cascading? Thirdly, how does the production system deal with the lemma level/syntactic representation of entries that, whilst fixed, are larger than single words, e.g., idioms and fixed expressions? Finally, at what time point does the activation of syntactic information occur? Chapter 2 presents a tip-of-the-tongue (TOT) experiment in which support for cascading activation is found. Chapter 3 continues the use of the TOT paradigm to investigate the representation of homophones, and a homophone advantage suggestive of shared phonological representations was found. Chapter 4 extends the TOT paradigm to the investigation of idiomatic expressions, and the results suggest that both the literal and figurative meanings of an idiom are active during production – a finding that is best explained through bi-directional spreading activation. Chapter 5 continues the investigation of idiomatic expressions through a norming study, and the results indicate that both native and non-native speakers can make fine-grained distinctions regarding idioms, but that this is heavily influenced by familiarity. Chapter 6 uses the picture-word interference paradigm to investigate the representation of count and mass nouns. The findings suggest that the activation of syntactic information is an early process and that mass nouns require the activation of an additional feature compared to count nouns.
Finally, Chapter 8 presents the thesis conclusions, future directions for research, and argues that the evidence presented in the experimental chapters is strongly supportive of the interactive account of language production proposed by Dell (1986).
12

Ji, Donghong. « Conceptual relevance : representation and analysis ». Thesis, University of Oxford, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711639.

13

Parkinson, Frederick Brooke. « The representation of vowel height in phonology / ». The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487940308431245.

14

Lee, Su-Yong. « The aesthetic politics of poetic language : language and representation in Shelley's dramatic poetry ». Thesis, University of East Anglia, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396616.

15

Edwards, Carleen Marie. « Representation and simulation of a high level language using VHDL ». Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-11242009-020306/.

16

Hara, Yurie. « Grammar of knowledge representation : Japanese discourse items at interfaces ». Access to citation, abstract and download form provided by ProQuest Information and Learning Company ; downloadable PDF file 0.81 Mb., 200 p, 2006. http://wwwlib.umi.com/dissertations/fullcit/3205429.

17

Lau, Arthur Chunip. « Written representation of oral features in Cantonese Chinese / ». Access Digital Full Text version, 1995. http://pocketknowledge.tc.columbia.edu/home.php/bybib/11791603.

Abstract:
Thesis (Ed.D.)--Teachers College, Columbia University, 1995.
Includes tables. Typescript; issued also on microfilm. Sponsor: JoAnne Kleifgen. Dissertation Committee: Clifford Hill. Includes bibliographical references (leaves 171-175).
18

Jung, Tzyy-Ping. « An algorithm for deriving an articulatory-phonetic representation / ». The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487841975357253.

19

Brannigan, Holly P. « Language processing and the mental representation of syntactic structure ». Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/424.

Abstract:
This thesis investigates the mental representation of syntactic structure. It takes an interdisciplinary approach which exploits methods and insights from both experimental psychology and theoretical linguistics to explore the claim that syntactic representation can be the subject of empirical psychological study. The thesis makes use of corpus analysis and two experimental methods, agreement error elicitation and syntactic priming, to examine syntactic structure in both language production and language comprehension. I argue that assumptions about syntactic representation are fundamental to all models of language processing. However, processing models have largely assumed the representations proposed by theoretical linguists in the belief that syntactic representation is the province of theoretical linguistics. I propose that the mental representation of syntactic structure is a legitimate area of study for psycholinguists and that it can be investigated using experimental methods. The remainder of this thesis presents empirical evidence to support this claim. The main conclusion of this thesis is that syntactic representation is amenable to psychological study. The evidence which is gathered in this way is in principle relevant not only to theories of language processing but also to any linguistic theory which claims to characterise knowledge of language.
20

Wilde, C. « Myth-Representation, Language and the Other : a Jungian Perspective ». Thesis, University of Essex, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.495542.

Abstract:
This thesis is concerned with the role of myth and mythical thought in analytical psychology. It addresses the use of myth as 'other' in the theory of archetypes, and mythical/imaginal thought as the underlying process. The research is focused on linguistic representation of dream and image through the theory of archetypes. Key questions are about the use of language in representing or othering human experience, and its effects on subject positions. The thesis begins by examining the postulated functions of myth within analytical psychology. It argues that analytic discourses, particularly the theory of archetypes, utilize mythical thought and act as, or to, myth. Reference is made to Vico's and Cassirer's work on language and myth, for their suggestions that language about myth has a 'developmental' function and can be read as indicating stages of human consciousness. The thesis follows two post-Jungian arguments about language and representation, one poststructuralist, the other phenomenologically biased. It argues that, through its concern with image, analytical psychology can articulate with and draw upon phenomenological perspectives on intersubjective linguistic exchange, and on representing experience in language intra-subjectively. Reference is made to the work of Binswanger and Foucault on representing dreams. An examination is made of reading analytic discourse as symbolic, and functioning as other. Referring to Lacan, it is argued that linguistic representation can be read as a mything, imaginal process, and that awareness of linguistic relativity is important in developing postmodern analytical psychology. There follows an examination of how analysis can articulate and engage with postmodern culture and subjectivity. The notion of the nomadic subject is referred to as embodying desire, resistance and becoming other through the imaginal.
Finally, it is suggested that postmodern analytical psychology can seek out mything, imaginal aspects of culture through engagement with nomadic thinking and postmodern thought about dreaming.
21

Mao, Xinyue. « Visualization and Natural Language Representation of Simulated Cyber Attacks ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246090.

Abstract:
The attack path is an effective tool for showing possible hacking routes taken by an attacker to target a specific computer network. It also informs administrators about potential weaknesses in a network, helping them roll out network configuration changes. Based on predefined computing methods, a large number of attack paths can be generated. However, attack paths show all possible routes for each calculation and represent them with terminologies specific to the cybersecurity field. A major portion of attack routes and representations are too complicated for normal users, making it difficult to identify the parts they should pay more attention to. In this thesis project, a framework for generating a concise and user-friendly attack path through grouping continuous attack steps is described. The framework is designed with 6 levels of hierarchical abstraction. The top 3 levels of these abstractions are classified based on the predefined structure of the software and named the structural division. The other 3 lower levels are classified based on semantics involving a taxonomy for natural language representation called SCV (Security Community Vocabulary), named the semantics division. This visualization method is released as part of securiCAD, a cybersecurity product released by Foreseeti, which provides a concise and understandable interaction by aggregating original attack steps according to different requirements of customers.
Styles APA, Harvard, Vancouver, ISO, etc.
22

Hofmeister, Scott Thomas. « Intensional subsumption in a general taxonomic knowledge representation language ». Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35954.

Texte intégral
Résumé :
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 77-80).
by Scott Thomas Hofmeister.
M.S.
Styles APA, Harvard, Vancouver, ISO, etc.
23

Henderson, Eric Kord. « A text representation language for contextual and distributional processing ». Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608403.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
24

Erard, Michael-Jean. « Inscribing language : writing and scientific representation in American linguistics / ». Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004259.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
25

NOZZA, DEBORA. « Deep Learning for Feature Representation in Natural Language Processing ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/241185.

Texte intégral
Résumé :
The huge amount of textual user-generated content on the Web has incredibly grown in the last decade, creating new relevant opportunities for different real-world applications and domains. To overcome the difficulties of dealing with this large volume of unstructured data, the research field of Natural Language Processing has provided efficient solutions developing computational models able to understand and interpret human natural language without any (or almost any) human intervention. This field has gained in further computational efficiency and performance from the advent of the recent machine learning research lines concerned with Deep Learning. In particular, this thesis focuses on a specific class of Deep Learning models devoted to learning high-level and meaningful representations of input data in unsupervised settings, by computing multiple non-linear transformations of increasing complexity and abstraction. Indeed, learning expressive representations from the data is a crucial step in Natural Language Processing, because it involves the transformation from discrete symbols (e.g. characters) to a machine-readable representation as real-valued vectors, which should encode semantic and syntactic meanings of the language units. The first research direction of this thesis is aimed at giving evidence that enhancing Natural Language Processing models with representations obtained by unsupervised Deep Learning models can significantly improve the computational abilities of making sense of large volume of user-generated text. In particular, this thesis addresses tasks that were considered crucial for understanding what the text is talking about, by extracting and disambiguating the named entities (Named Entity Recognition and Linking), and which opinion the user is expressing, dealing also with irony (Sentiment Analysis and Irony Detection). 
For each task, this thesis proposes a novel Natural Language Processing model enhanced by the data representation obtained by Deep Learning. As a second research direction, this thesis investigates the development of a novel Deep Learning model for learning a meaningful textual representation taking into account the relational structure underlying user-generated content. The inferred representation comprises both textual and relational information. Once the data representation is obtained, it could be exploited by off-the-shelf machine learning algorithms in order to perform different Natural Language Processing tasks. In conclusion, the experimental investigations reveal that models able to incorporate high-level features, obtained by Deep Learning, show significant performance gains and improved generalization abilities. Further improvements can also be achieved by models able to take into account the relational information in addition to the textual content.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Zilis, Michael A. « Societal Semantics : The Linguistic Representation of Society ». Miami University Honors Theses / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=muhonors1177369105.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
27

Brendel, Claudia. « Identity and representation on the internet ». Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52301.

Texte intégral
Résumé :
Thesis (MA)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: This thesis investigates the ways in which identity is established and represented on the Internet. Through detailed case studies of different Internet sites, I examine the changing parameters of these concepts, and indeed of our concept of 'reality' itself. I then undertake a detailed reading of a number of films that represent the Internet as an integral part of their narrative. I make use, but also critique, postmodern understandings of identity and representation. Existing postmodern theories of identity and representation cannot fully account for the way Internet identity functions and the Internet interacts with other media and offline life. New analyses are required to explain the interactions between these concepts. This thesis uses the constructs of presence, performance, the body, and narrative to describe the way in which identity and representation function online, are represented in film and influence offline life.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Openshaw, James Michael. « Singular representation ». Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:1a411d1e-e7fb-410d-ada0-a24f39056670.

Texte intégral
Résumé :
This thesis is a study of aboutness. It defends the claim that we have singular thoughts about ordinary objects and argues that an essential part of how we do so is by maintaining singular representations. This proposal allows us to avoid traditional, unsatisfying conceptions of the scope of singular thought while restoring the sense in which such thought is a distinctively epistemic achievement. Reconnecting the study of aboutness with epistemology promises to alleviate the sense of directionlessness in the contemporary literature, offering a firmer grip on the phenomenon along with new, systematic resources for its investigation. Chapters 1-2 explore the effects of contextualist machinery on orthodox views about singular thought. It is widely thought that if there is to be a plausible connection between the truth of a de re attitude report about a subject and that subject's possession of a singular thought, then there can be no acquaintance requirement(s) on singular thought. Chapter 1 shows that this view rests on a faulty picture of how we talk about attitudes. Indeed, the truth of a de re attitude report cannot be taken to track the singular/non-singular distinction without collapsing it. A new, contextualist picture is needed. That there must be a distinction between singular and non-singular intentionality is emphasized in Chapter 2, where a key explanatory role for singular thought - brought out by a thought experiment due to Strawson - is examined. I show that the role does not call for any distinctive kind of mental content. Once we abandon the two widespread views questioned in Chapters 1-2, our grip on the phenomenon of singular aboutness is loosened: it is not constitutively tied to the kinds of attitude-reporting data or mental content by which it is often assumed to be revealed. Where are we to look for insight? What makes something the object of a singular thought? 
According to Russell, it is a datum of intuition that singular thought involves a kind of knowledge; a theory of aboutness will precisify the intuitive notion of 'knowing which thing one is thinking about' in order to capture this demand in a philosophically revealing way. If Russell is right, teasing out this connection to knowledge will allow us to see what it takes for a particular thing to be the immediate subject matter of thought. Chapter 3 discusses Evans's theory of this kind. Chapter 4 examines recent work by Dickie. While serious concerns emerge in each case, insights recovered are used to precisify Russell's requirement, leading to a novel picture of singular representation and the epistemic character of this achievement. While the chapters follow a narrative, providing an extended rationale for the proposal in Chapter 4, each may be read in isolation by those familiar with the philosophical issues. For those who are not, the Introduction provides sufficient background.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Kingsbury, Moore Lois Joy. « Reference and representation in Down's syndrome ». Thesis, University of Plymouth, 1996. http://hdl.handle.net/10026.1/2522.

Texte intégral
Résumé :
Previous research has highlighted a different pattern in the use of grammatical forms to successfully maintain coherent discourse by individuals with Down's syndrome. To maintain coherent discourse both linguistic and non-linguistic information must be integrated and maintained in a mental representation of current discourse. The ability of children with Down's syndrome to use such a mental representation has been assessed in this study. The ability of adults with Down's syndrome to comprehend and produce a range of grammatical forms was initially assessed, using a grammaticality judgement task, an imitation task, and a spontaneous speech sample. Results indicated that the production and comprehension of pronouns were found moderately difficult. The successful use of a pronoun depends on the ability to use a mental representation to retain information about its antecedent in order to assist correct interpretation and avoid ambiguity. A narrative task was used to investigate the use of referential forms by children with Down's syndrome and typically developing children. The effects of certain contextual features on the use of referential forms were investigated: the status of each character and the number of characters in the story; the method of presenting the story; and the position of a listener while the story was narrated. When narrating a story typically developing children distinguished the status of characters in the stories by consistently using different referential forms for each. As age increased this strategy was used more successfully and flexibly. Children with Down's syndrome did not use referential forms in the same way as typically developing children. It is likely that this is a consequence of a difficulty in maintaining information about the whole story, where many sources of information must be accessed, integrated and maintained in a mental representation.
At a local level within the story, children with Down's syndrome used referential strategies successfully, demonstrating an ability to integrate limited amounts of information about characters in a story. The inability to maintain information in a mental representation across longer periods of discourse indicates the importance of short term memory in language production.
Styles APA, Harvard, Vancouver, ISO, etc.
30

Baring-Gould, Sengan. « SemNet : the knowledge representation of LOLITA ». Thesis, Durham University, 2000. http://etheses.dur.ac.uk/4284/.

Texte intégral
Résumé :
Many systems of Knowledge Representation exist, but none were designed specifically for general purpose large scale natural language processing. This thesis introduces a set of metrics to evaluate the suitability of representations for this purpose, derived from an analysis of the problems such processing introduces. These metrics address three broad categories of question: Is the representation sufficiently expressive to perform its task? What implications has its design on the architecture of the system using it? What inefficiencies are intrinsic to its design? An evaluation of existing Knowledge Representation systems reveals that none of them satisfies the needs of general purpose large scale natural language processing. To remedy this lack, this thesis develops a new representation: SemNet. SemNet benefits not only from the detailed requirements analysis but also from insights gained from its use as the core representation of the large scale general purpose system LOLITA (Large-scale Object-based Linguistic Interactor, Translator, and Analyser). The mapping process between Natural language and representation is presented in detail, showing that the representation achieves its goals in practice.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Kuttner, Eliza Wing-Mun. « A schema & constraint-based representation to understanding natural language ». Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/25899.

Texte intégral
Résumé :
This thesis attempts to represent the syntax and semantics of English sentences using a schema and constraint-based approach. In this approach, syntactic and semantic knowledge that are represented by schemata are processed in parallel with the utilization of network consistency techniques and an augmented version of Earley's context-free parsing algorithm. A sentence's syntax and semantics are disambiguated incrementally as the interpretation proceeds left to right, word by word. Each word and recognized grammatical constituent provide additional information that helps to guide the interpretation process. It is desirable to attempt to apply network consistency techniques and schema-knowledge representations on understanding natural language since the former has been proven to be quite efficient and the latter provides modularity in representing knowledge. In addition, this approach is appealing because it can cope with ambiguities in an efficient manner. Multiple interpretations are retained if ambiguity exists as indicated by the words processed so far. However, incorrect interpretations are eliminated as soon as their inappropriateness is discovered. Thus, backtracking search which is known to be inefficient is avoided.
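The network consistency techniques mentioned above are typically arc-consistency algorithms such as AC-3, which prune each variable's domain until every remaining value is supported by its neighbours. The following is a generic illustrative sketch, not Kuttner's system; the toy agreement constraint and variable names are invented.

```python
# AC-3 arc consistency on a tiny constraint network (invented example:
# a subject and a verb must agree in grammatical number).
from collections import deque

def ac3(domains, constraints):
    """Prune domains until arc-consistent. `constraints` maps each
    directed arc (x, y) to a predicate over (x_value, y_value)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # keep only x-values supported by at least one y-value
        supported = {vx for vx in domains[x]
                     if any(pred(vx, vy) for vy in domains[y])}
        if supported != domains[x]:
            domains[x] = supported
            # x changed, so re-check every arc pointing at x
            for (a, b) in constraints:
                if b == x:
                    queue.append((a, b))
    return domains

domains = {"subject": {"dog-sg", "dogs-pl"}, "verb": {"barks-sg"}}
agree = lambda a, b: a.split("-")[1] == b.split("-")[1]
constraints = {("subject", "verb"): agree, ("verb", "subject"): agree}
print(ac3(domains, constraints))
# {'subject': {'dog-sg'}, 'verb': {'barks-sg'}}
```

The plural reading of the subject is pruned because no verb value supports it, which mirrors how ambiguous interpretations are eliminated as soon as their inappropriateness is discovered.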
Science, Faculty of
Computer Science, Department of
Graduate
Styles APA, Harvard, Vancouver, ISO, etc.
32

Nguyen, Xuan Hoai, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW. « Flexible representation for genetic programming : lessons from natural language processing ». Awarded by: University of New South Wales - Australian Defence Force Academy. School of Information Technology and Electrical Engineering, 2004. http://handle.unsw.edu.au/1959.4/38750.

Texte intégral
Résumé :
This thesis principally addresses some problems in genetic programming (GP) and grammar-guided genetic programming (GGGP) arising from the lack of operators able to make small and bounded changes on both genotype and phenotype space. It proposes a new and flexible representation for genetic programming, using a state-of-the-art formalism from natural language processing, Tree Adjoining Grammars (TAGs). It demonstrates that the new TAG-based representation possesses two important properties: non-fixed arity and locality. The former facilitates the design of new operators, including some which are bio-inspired, and others able to make small and bounded changes. The latter ensures that bounded changes in genotype space are reflected in bounded changes in phenotype space. With these two properties, the thesis shows how some well-known difficulties in standard GP and GGGP tree-based representations can be solved in the new representation. These difficulties have been previously attributed to the tree-based nature of the representations; since TAG representation is also tree-based, it has enabled a more precise delineation of the causes of the difficulties. Building on the new representation, a new grammar guided GP system known as TAG3P has been developed, and shown to be competitive with other GP and GGGP systems. A new schema theorem, explaining the behaviour of TAG3P on syntactically constrained domains, is derived. Finally, the thesis proposes a new method for understanding performance differences between GP representations requiring different ways to bound the search space, eliminating the effects of the bounds through multi-objective approaches.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Bek, Judith. « Language and Spatial Representation : Evidence from Dual-Tasking and Aphasia ». Thesis, University of Sheffield, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.521920.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
34

Downey, Daniel J. G. « Knowledge representation in natural language : the wordicle - a subconscious connection ». Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333137.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
35

Verbos, John. « Non-symbolic Exact Quantity Representation in a Language-Impaired Population ». Thesis, Duquesne University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10844959.

Texte intégral
Résumé :

The linguistic relativity hypothesis argues that language influences non-linguistic cognition. One version of the hypothesis suggests that language is a set of tools or technologies that variously enhance or dampen an individual’s capacity to perceive and operate upon the world in certain ways. A domain in which this may be tested is number, where it is hypothesized that counting language allows us to bridge our innate capacities for recognizing small exact quantities (subitizing) and approximating quantities larger than three or four (analog magnitude estimation). To test this, previous studies have asked adult participants who have limited or no access to counting language to re-present non-symbolic exact quantities—that is, for participants to create an array of objects equal in number to a target array of objects presented to the participant. In these studies, both English-speakers whose access to number language was artificially compromised by verbal interference and the Pirahã—an Amazonian tribe whose language does not contain exact number words—appeared to rely on analog magnitude estimation for representing non-symbolic exact quantities greater than three. This suggests that the ability to consistently and accurately recognize and re-present non-symbolic exact quantities is impaired by having limited or no access to counting language. Here, sixteen participants with left-hemisphere damage from stroke and resulting aphasia performed the same five non-verbal, non-symbolic matching tasks from these previous studies. It was expected that coefficients of variation for particular tasks, and correlations with target magnitude with respect to both error rate and error size across tasks, would suggest use of analog magnitude estimation by these verbally impaired participants.
Participants also completed three additional number tasks (number elicitation, confrontation naming with Arabic numerals, and a count list recitation task) and a subset of participants completed nonverbal semantic processing and short-term memory tasks (Pyramids and Palm Trees and a verbal semantic category probe) to better understand errors on nonverbal matching tasks. Results indicated that for people with aphasia, non-symbolic exact quantity representation was more difficult than for people without aphasia, except when target quantities were presented in subitizable groups. Overall, participants made more frequent and larger errors when representing larger quantities and struggled when the target was not visible. Participants who had difficulty with tasks where the target was visible during response also had difficulty with tasks where the target was not visible during response. However, another group of participants only had difficulty with tasks where the target was not visible during response. Additionally, participants who had difficulty with non-verbal aphasia assessment subtests were more likely to err on non-symbolic exact quantity representation tasks where the target was visible during response, while participants who had difficulty with aphasia assessment subtests that required verbal responses were more likely to err on non-symbolic exact quantity representation tasks where the target was not visible during response. 
These results, alongside correlations with aphasia assessment battery performance, suggest that (1) accuracy on non-symbolic exact quantity matching tasks where the target is visible on response rely more heavily on visuospatial abilities than on language or memory; (2) tasks involving subitizing small exact quantities do not appear to require the same visuospatial capacities; and (3) non-symbolic exact quantity matching tasks where the target is not visible on response rely upon language and memory abilities—especially the capacity for verbal counting. Taken together, these findings reinforce the notion that verbal counting facilitates the consistent and accurate recognition and representation of exact quantities larger than three or four by bridging innate human capacities for subitizing and analog magnitude estimation. Overall, the present results further inform our understanding of tasks previously used to understand the relationship between language and number in a culture lacking words for number concepts.
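Scalar variability, the signature of analog magnitude estimation, predicts that response spread grows in proportion to the target, so the coefficient of variation stays roughly constant across magnitudes. A minimal sketch with invented response data:

```python
# Coefficient of variation (CoV = stdev / mean) per target quantity,
# computed over hypothetical participant responses.
import statistics

def coefficient_of_variation(responses):
    """CoV of a list of responses to one target quantity."""
    return statistics.stdev(responses) / statistics.mean(responses)

# Invented responses for targets 5 and 10: the spread doubles with the
# target, so the CoV is the same for both, as scalar variability predicts.
responses = {5: [4, 5, 5, 6], 10: [8, 10, 10, 12]}
for target, r in responses.items():
    print(target, round(coefficient_of_variation(r), 3))
# 5 0.163
# 10 0.163
```

A flat CoV across target magnitudes is the pattern that would suggest analog estimation, whereas exact counting predicts near-zero error regardless of magnitude.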

Styles APA, Harvard, Vancouver, ISO, etc.
36

Elliott, Desmond. « Structured representation of images for language generation and image retrieval ». Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10524.

Texte intégral
Résumé :
A photograph typically depicts an aspect of the real world, such as an outdoor landscape, a portrait, or an event. The task of creating abstract digital representations of images has received a great deal of attention in the computer vision literature because it is rarely useful to work directly with the raw pixel data. The challenge of working with raw pixel data is that small changes in lighting can result in different digital images, which is not typically useful for downstream tasks such as object detection. One approach to representing an image is automatically extracting and quantising visual features to create a bag-of-terms vector. The bag-of-terms vector helps overcome the problems with raw pixel data but this unstructured representation discards potentially useful information about the spatial and semantic relationships between the parts of the image. The central argument of this thesis is that capturing and encoding the relationships between parts of an image will improve the performance of extrinsic tasks, such as image description or search. We explore this claim in the restricted domain of images representing events, such as riding a bicycle or using a computer. The first major contribution of this thesis is the Visual Dependency Representation: a novel structured representation that captures the prominent region–region relationships in an image. The key idea is that images depicting the same events are likely to have similar spatial relationships between the regions contributing to the event. This representation is inspired by dependency syntax for natural language, which directly captures the relationships between the words in a sentence. We also contribute a data set of images annotated with multiple human-written descriptions, labelled image regions, and gold-standard Visual Dependency Representations, and explain how the gold-standard representations can be constructed by trained human annotators. 
The second major contribution of this thesis is an approach to automatically predicting Visual Dependency Representations using a graph-based statistical dependency parser. A dependency parser is typically used in Natural Language Processing to automatically predict the dependency structure of a sentence. In this thesis we use a dependency parser to predict the Visual Dependency Representation of an image because we are working with a discrete image representation – that of image regions. Our approach can exploit features from the region annotations and the description to predict the relationships between objects in an image. In a series of experiments using gold-standard region annotations, we report significant improvements in labelled and unlabelled directed attachment accuracy over a baseline that assumes there are no relationships between objects in an image. Finally, we find significant improvements in two extrinsic tasks when we represent images as Visual Dependency Representations predicted from gold-standard region annotations. In an image description task, we show significant improvements in automatic evaluation measures and human judgements compared to state-of-the-art models that use either external text corpora or region proximity to guide the generation process. In the query-by-example image retrieval task, we show a significant improvement in Mean Average Precision and the precision of the top 10 images compared to a bag-of-terms approach. We also perform a correlation analysis of human judgements against automatic evaluation measures for the image description task. The automatic measures are standard measures adopted from the machine translation and summarization literature. The main finding of the analysis is that unigram BLEU is less correlated with human judgements than Smoothed BLEU, Meteor, or skip-bigram ROUGE.
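The labelled and unlabelled directed attachment accuracies reported above are the standard dependency-parsing measures: unlabelled accuracy (UAS) counts correctly attached heads, while labelled accuracy (LAS) also requires the correct relation label. A generic sketch with an invented toy example, not Elliott's evaluation code:

```python
# Attachment accuracy for one predicted dependency structure.
# Each entry is (head_index, relation_label) for a region/word.
def attachment_accuracy(gold, predicted):
    """Return (unlabelled, labelled) attachment accuracy."""
    assert len(gold) == len(predicted)
    uas = sum(g[0] == p[0] for g, p in zip(gold, predicted)) / len(gold)
    las = sum(g == p for g, p in zip(gold, predicted)) / len(gold)
    return uas, las

# Invented example: four regions; the parser attaches region 3 to the
# wrong head and mislabels the relation of region 2.
gold      = [(0, "root"), (1, "beside"), (1, "on"), (3, "beside")]
predicted = [(0, "root"), (1, "beside"), (1, "above"), (1, "beside")]
print(attachment_accuracy(gold, predicted))  # (0.75, 0.5)
```

The baseline that assumes no relationships between objects corresponds to attaching every region to a single dummy head, which this measure penalises whenever the gold structure is richer.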
Styles APA, Harvard, Vancouver, ISO, etc.
37

Wiesehan, Gretchen. « History, identity, and representation in recent German-language autobiographical novels / ». Thesis, Connect to this title online ; UW restricted, 1992. http://hdl.handle.net/1773/6653.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Van, Leeuwen Theo. « Language and representation : the recontextualisation of participants, activities and reactions ». Thesis, The University of Sydney, 1993. http://hdl.handle.net/2123/1615.

Texte intégral
Résumé :
This thesis proposes a model for the description of social practice which analyses social practices into the following elements: (1) the participants of the practice; (2) the activities which constitute the practice; (3) the performance indicators which stipulate how the activities are to be performed; (4) the dress and body grooming for the participants; (5) the times when, and (6)the locations where the activities take place; (7) the objects, tools and materials, required for performing the activities; and (8) the eligibility conditions for the participants and their dress, the objects, and the locations, that is, the characteristics these elements must have to be eligible to participate in, or be used in, the social practice.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Nugaliyadde, Anupiya. « Enhancing natural language understanding using meaning representation and deep learning ». PhD thesis, Murdoch University, 2019. https://researchrepository.murdoch.edu.au/id/eprint/53973/.

Résumé :
Natural Language Understanding (NLU) is one of the complex tasks in artificial intelligence. Machine learning was introduced to address the complex and dynamic nature of natural language. Deep learning gained popularity within the NLU community due to its capability of learning features directly from data and of coping with the dynamic nature of natural language. Furthermore, deep learning has been shown to learn hidden features automatically and to outperform most other machine learning approaches for NLU. Deep learning models require natural language inputs to be converted to vectors (word embeddings). Word2Vec and GloVe are word embeddings designed to capture analogies and context-based statistics and to provide lexical relations between words. A purely context-based statistical approach, however, does not capture the prior knowledge needed to understand the language built from those words. Although a deep learning model receives word embeddings, language understanding requires Reasoning, Attention and Memory (RAM). RAM are key factors in understanding language, yet current deep learning models focus on reasoning, attention or memory alone; to properly understand a language, all three factors of RAM should be considered. Natural language also typically forms long sequences, which create dependencies that must be captured in order to understand the language. However, current deep learning models developed to hold longer sequences either forget, or are affected by vanishing or exploding gradients. This thesis focuses on these three main areas. A word embedding technique is introduced which integrates analogy and context-based statistics with semantic relationships, and which draws on a knowledge base, to hold an enhanced meaning representation. A Long Short-Term Reinforced Memory (LSTRM) network is also introduced; it addresses RAM and is validated by testing on question-answering data sets which require RAM.
Finally, a Long-Term Memory network (LTM) is introduced to address language modelling, which requires learning from long sequences. Overall, this thesis demonstrates that integrating semantic knowledge and a knowledge base generates enhanced meaning representations, and that deep learning models capable of achieving RAM and long-term dependencies improve the capability of NLU.
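The "context-based statistics" behind Word2Vec and GloVe can be made concrete with a toy sketch (the corpus, window size and vocabulary here are invented for illustration; the real systems learn dense vectors rather than raw counts, but they exploit the same co-occurrence signal): words that appear in similar contexts end up with similar vectors, whether or not the model has any prior knowledge of what the words denote.

```python
from collections import Counter
from math import sqrt

# Toy corpus; real systems train Word2Vec/GloVe on billions of tokens.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

def cooccurrence_vector(word, sentences, window=2):
    """Count words appearing within `window` positions of `word`:
    the crude context-based statistic underlying distributional
    embeddings such as Word2Vec and GloVe."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

cat, dog, mat = (cooccurrence_vector(w, corpus) for w in ("cat", "dog", "mat"))
# "cat" and "dog" share contexts ("sat on", "the"), so they come out more
# similar than "cat" and "mat", even though nothing in the counts encodes
# what a cat or a dog actually is -- the gap the thesis aims to fill with
# knowledge-base information.
print(cosine(cat, dog) > cosine(cat, mat))
```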
41

Sturt, Patrick. « Syntactic re-analysis in human language processing ». Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/517.

Résumé :
This thesis combines theoretical, computational and experimental techniques in the study of reanalysis in human sentence comprehension. We begin by surveying the main claims of existing theories of reanalysis, and identify representation preservation as a key concept. We show that the models which most obviously feature representation preservation are those formulated within the monotonicity framework, which assumes that there are aspects of representation which are updated monotonically (i.e. non-destructively) from state to state, and that any reanalysis which requires a non-monotonic update is predicted to cause processing disruption. Next, we present a computational implementation, based on the monotonic theory of Gorrell (1995b). We argue that in constructing such a model of reanalysis, it is essential to consider not only declarative constraints, but also the computational processes that make the reanalysis routines explicit, leading to novel predictions in cases where there exists more than one alternative for structural revision. We show why preferences for such reanalysis ambiguities may differ between predominantly head-initial languages such as English, and head-final languages such as Japanese. After this, we consider the empirical consequences of the implemented model, in particular in relation to recent experimental data concerning modifier attachment. We show that the model is too restrictive, and we argue that the appropriate way to expand its coverage is to apply the monotonicity constraints not directly to phrase structure, but to thematic structure. We provide a general framework which allows such non-phrase-structural models to be defined, maintaining the same notion of monotonicity that was employed in the previous model. We go on to provide solutions to some computational problems which accompany this change. Finally, we present two experimental studies.
The first of these considers the issue of reanalysis ambiguity, and specifically the existence of a recency preference. The preference is confirmed in off-line tasks, such as comprehension accuracy and a questionnaire experiment, but is not confirmed in self-paced reading. We discuss some possible reasons for this dissociation between the on-line and off-line results. The second experimental study considers the effect of modifier attachment in Japanese relative clause ambiguities. In this study, we confirm the influence of thematic structure on the resolution of Japanese relative clause ambiguities, and we argue that this effect should be interpreted in terms of a constraint on reanalysis.
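The monotonicity idea central to this abstract can be pictured with a toy encoding (invented here purely for illustration; the thesis's actual implementation is far richer): a parse state is a set of dominance relations between nodes, and a reanalysis is unproblematic only if it extends that set without retracting anything already built.

```python
def is_monotonic(old_state, new_state):
    """A reanalysis is monotonic if every dominance relation already
    built survives into the new parse state (nothing is destroyed)."""
    return old_state <= new_state

# Hypothetical dominance relations (dominator, dominee) for the classic
# garden path "the horse raced past the barn fell": the parser first
# treats "raced" as the main verb.
initial = {("S", "NP"), ("S", "VP"), ("VP", "V-raced")}

# Adding an attachment under VP only adds relations: an easy reanalysis.
addition = initial | {("VP", "PP")}

# Recasting "raced" inside a reduced relative destroys ("S", "VP") and
# ("VP", "V-raced"): a non-monotonic update, predicted to disrupt processing.
garden_path = {("S", "NP"), ("NP", "RC"), ("RC", "V-raced"), ("S", "VP-fell")}

print(is_monotonic(initial, addition))
print(is_monotonic(initial, garden_path))
```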
42

Patil, Prithviraj S. « A FORMAL LANGUAGE APPROACH FOR DETECTING TEXTURE PATHS AND PATTERNS IN IMAGES ». Wright State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=wright1161611535.

43

Tourville, José. « Licensing and the representation of floating nasals ». Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=39274.

Résumé :
It is commonly agreed that phonological elements must be prosodically licensed in order to be interpreted phonetically (cf. Ito, 1986). The licensing of segments is generally assumed to follow from the Universal Association Conventions. The licensing of phonological units smaller than the segment, however, has not been fully addressed. There is no agreement on the exact licensing mechanisms at play or on what constitutes a proper anchor for the initial association of floating subsegmentals. This thesis proposes a principled account of subsegmental licensing within the theory of segmental structure known as feature geometry, as modified by Piggott (to appear). It is shown that the manifestations of nasality in Maukaka, Koyaga, Jula, and Terena result from the way licensing operates. It is argued that, universally, floating subsegmental units are licensed through mapping, which associates a unit to an available position. It is also proposed that whenever there is no proper position for the mapping of a subsegmental element, this element may be licensed by Chomsky-adjunction. This type of adjunction has played a role in syllabification but not in the organization of features.
44

Özçelik, Öner. « Representation and acquisition of stress : the case of Turkish ». Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=107766.

Résumé :
This thesis investigates the representation and acquisition of word-level stress in Turkish. Two general proposals are made in the thesis, one related to formal phonology, the other about second language (L2) acquisition of word-level prosody. The first proposes that the presence/absence of the Foot is parametric; that is, contra much previous research (see e.g. Selkirk 1995, Vogel 2009), it is argued in this thesis that the Foot is not a universal constituent of the Prosodic Hierarchy; rather, some languages, such as Turkish and French, are footless. Several types of evidence are presented in support of this proposal, from both Turkish and French, with a focus on the former language. A comparison of regular (word-final) and exceptional stress in this language reveals, for example, that regular "stress" is intonational prominence falling on the last syllable of prosodic words in the absence of foot structure. Exceptional stress, on the other hand, is argued to be the result of certain morphemes coming into the computation already footed in the lexicon, and being footed on the surface, too, because of faithfulness to this information. The grammar, then, assigns the other properties of this foot, such as binarity and foot type, which are vacuously satisfied for regular morphemes, as they are not footed, and as the grammar has no mechanism that assigns feet or stress. The result is a unified analysis of regular and exceptional stress in Turkish. Second, the thesis proposes a path for the L2 acquisition of prosody, the Prosodic Acquisition Path Hypothesis (PAPH). The PAPH predicts different levels of difficulty and paths to be followed by L2 learners based on the typological properties of their first language (L1) and the L2 they are learning, and also on the basis of a hierarchical tree representation of the relationships proposed to hold between prosodic parameters.
Most foot-related parameters are incorporated in the proposal, as well as the new parameter proposed in this thesis about the presence/absence of the Foot. The PAPH predicts that once the Foot is projected in an L1, learners of a footless L2 will not be able to expunge it from their grammar, but will, instead, be restricted to changing the values of foot-related parameters. Not every one of these parameters is, however, hypothesized to be equally easy to reset; depending on a variety of factors such as their location on the parameter tree and markedness, certain parameters, such as Foot-Type, are hypothesized to be easier to reset than others, such as Iterativity. The predictions concerning the learning path are tested through an experiment, which examines productions of English- and French-speaking learners of L2 Turkish. The results of the experiment largely confirm the predictions of the PAPH. None of the English-speaking learners of Turkish were able to rid their grammar of the Foot, though they were able to make various Universal Grammar (UG)-constrained changes to their grammar, such as resetting Extrametricality from 'Yes' to 'No', and at later stages, Foot-Type from 'Trochaic' to 'Iambic', thereby having increasingly more word types with final stress. French-speaking learners, on the other hand, produced target-like footless outputs, with word-final prominence, from the initial stages of acquisition. At no stage did any of the learners have UG-unconstrained representations such as weight-insensitive iambs, which are not permitted by the inventory of feet provided by UG.
45

Ranbom, Larissa J. « Lexical representation of phonological variation in spoken word recognition ». Diss., Online access via UMI:, 2005. http://wwwlib.umi.com/dissertations/fullcit/1425750.

46

Belasco, Alan Michael. « The role of detection in mental representation ». Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/289227.

Résumé :
This dissertation critically analyzes current theories of mental representation, with an emphasis on indicator and teleological semantics. Its central claim is that detection-based theories of mental meaning--or more generally, theories which trace the meaning of a cognitive structure S to those environmental conditions which obtain while S executes or acquires its function--cannot explain many of the representational structures invoked in common-sense and computational psychology. The dissertation emphasizes several kinds of representational states (both common-sense and computational) that are not commonly noted in the philosophical literature. By emphasizing the heterogeneity of cognitive contents, the dissertation shows just how robust a notion of content will be needed to naturalize, or even just analyze, mental representation. Chapter one introduces the fundamentals of indicator theories and the notion of a language of thought. Chapter two introduces a class of ordinary beliefs that resist explanation on indicator accounts--viz., mistaken beliefs about the physical appearance of members of a kind. Indicator theories require us to assign propositional contents to these beliefs so as to make them true (counterintuitively), at the additional cost of making false many other beliefs about the kind. Chapter three addresses the implications for content theories of internal instructions in cognitive processing--of structures which specify actions that the cognitive system is to perform. Instructions do not fit naturally within the framework of indicator semantics. Indicator theories take a symbol's meaning to be a function of conditions which regularly precede, and help cause, the symbol's tokening. By contrast, an instruction represents an action which has not yet been performed, an action that will issue from the instruction itself. Indicator theories thus must reconcile the future-directed contents of instructions with the backward-looking mechanism of detection. 
Chapter four challenges the assumption of both indicator and teleological accounts that meaning is founded on some type of causal interaction between the denoting state and the denoted conditions. It explores the conflict between this foundational assumption and the atomic prototypes invoked in theories of visual object recognition.
47

Yi, Heejong. « Exploring the formal representation of discourse units with Korean noun anaphors and null pronouns ». Access to citation, abstract and download form provided by ProQuest Information and Learning Company ; downloadable PDF file, 360 p, 2005. http://proquest.umi.com/pqdweb?did=954007141&sid=5&Fmt=2&clientId=8331&RQT=309&VName=PQD.

48

Shan, Caifeng. « Inferring facial and body language ». Thesis, Queen Mary, University of London, 2008. http://qmro.qmul.ac.uk/xmlui/handle/123456789/15020.

Résumé :
Machine analysis of human facial and body language is a challenging topic in computer vision, impacting on important applications such as human-computer interaction and visual surveillance. In this thesis, we present research building towards computational frameworks capable of automatically understanding facial expression and behavioural body language. The thesis work commences with a thorough examination of issues surrounding facial representation based on Local Binary Patterns (LBP). Extensive experiments with different machine learning techniques demonstrate that LBP features are efficient and effective for person-independent facial expression recognition, even in low-resolution settings. We then present and evaluate a conditional mutual information based algorithm to efficiently learn the most discriminative LBP features, and show the best recognition performance is obtained by using SVM classifiers with the selected LBP features. However, the recognition is performed on static images without exploiting the temporal behaviour of facial expressions. Subsequently we present a method to capture and represent the temporal dynamics of facial expression by discovering the underlying low-dimensional manifold. Locality Preserving Projections (LPP) is exploited to learn the expression manifold in the LBP-based appearance feature space. By deriving a universal discriminant expression subspace using a supervised LPP, we can effectively align the manifolds of different subjects on a generalised expression manifold. Different linear subspace methods are comprehensively evaluated in expression subspace learning. We formulate and evaluate a Bayesian framework for dynamic facial expression recognition employing the derived manifold representation. However, the manifold representation only addresses temporal correlations of the whole face image, and does not consider spatial-temporal correlations among different facial regions.
We then employ Canonical Correlation Analysis (CCA) to capture correlations among face parts. To overcome the inherent limitations of classical CCA for image data, we introduce and formalise a novel Matrix-based CCA (MCCA), which can better measure correlations in 2D image data. We show this technique can provide superior performance in regression and recognition tasks, whilst requiring significantly fewer canonical factors. All the above work focuses on facial expressions. However, the face is usually perceived not as an isolated object but as an integrated part of the whole body, and the visual channel combining facial and bodily expressions is most informative. Finally we investigate two understudied problems in body language analysis: gait-based gender discrimination and affective body gesture recognition. To effectively combine face and body cues, CCA is adopted to establish the relationship between the two modalities and to derive a semantic joint feature space for feature-level fusion. Experiments on large data sets demonstrate that our multimodal systems achieve superior performance in gender discrimination and affective state analysis.
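The LBP descriptor this abstract builds on is simple enough to sketch. Below is a minimal version of the basic 3×3 operator (the clockwise bit ordering is one common convention; implementations differ, and real systems then histogram these codes over image regions to form the feature vector):

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours at the
    centre pixel's value and read them off as an 8-bit code."""
    c = patch[1][1]
    # Clockwise from the top-left neighbour.
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, p in enumerate(neighbours):
        if p >= c:  # neighbour at least as bright as the centre -> bit set
            code |= 1 << bit
    return code

# A hypothetical grey-level patch: the code summarises the local texture
# around the centre pixel, invariant to monotonic lighting changes.
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))
```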
49

Allott, Nicholas Mark. « A natural language processing framework for automated assessment ». Thesis, Nottingham Trent University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314333.

50

Qu, Chen. « Representation and acquisition of the tonal system of Mandarin Chinese ». Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119377.

Résumé :
This thesis examines the representation and acquisition of the Mandarin tonal system. The proposals raised relate to three areas of phonological research: formal phonology, Mandarin phonology and the acquisition of phonology. Concerning formal phonology, three main proposals are made. First, tones are internally structured, following Yip (1990) and Bao (1999). Second, tone and stress can co-occur in tone languages, following Hyman (2006), and the uneven trochee should be recognized as one of the universal foot shapes across languages. Last, there is a relation between rising tone and stress, and between level tone and lack of stress. A relation also holds between high register and stress, and between low register and lack of stress. Stress determines the realization of tone: stress makes tone high and rising; lack of stress makes tone level. Turning to Mandarin phonology, the proposals made in this thesis concern the formal representations of tones, prosodic structure and their interaction in tone sandhi processes. It is argued that Mandarin tones are underspecified: high register and rising contour are specified which, when embedded in a geometry where register (either H or L) and contour are sisters under the tonal node, results in the five-way tonal contrast of Mandarin, with T2 being the most marked tone and T0 the least marked. It is suggested that Mandarin is a weight-sensitive language and that a four-way weight distinction must be recognized: super-heavy (trimoraic), heavy (bimoraic), light (monomoraic) and weightless (moraless). Relatedly, it is argued that Mandarin strives to build uneven trochees, and that word-level stress falls on the leftmost heavy syllable in the domain of the phonological word. It is also proposed that phrasal stress in Mandarin falls on the rightmost head syllable in the domain of the phrase.
The overarching point made concerning Mandarin prosodic structure is that the language respects the prosodic hierarchy most commonly adopted for other languages, in contrast to the position of many previous researchers working on the language. As far as tone sandhi is concerned, this thesis provides a unified stress-based account of the three processes attested in the language: T2 sandhi, T3 sandhi and yi-bu-qi-ba sandhi. It is argued that T2 sandhi targets the prosodic dependent position and changes the tone from more marked to less marked, while T3 sandhi and yi-bu-qi-ba sandhi target the prosodic head position and change the tone from less marked to more marked. Turning to the acquisition of phonology, a hypothesis is formulated for children acquiring contour tone languages which respects the Successive Division Algorithm (Dresher 2009) as well as Minimality and Monotonicity (Rice & Avery 1995, Rice 1995). This hypothesis leads to predictions for children's tonal behavior at each stage of development, including the point at which tone sandhi should be acquired. It is predicted that children acquiring contour tone languages may vary at the onset of acquisition because Universal Grammar provides two possible launching points: "register" and "contour". Subsequent stages in development are predicted to vary as well, if children acquire the tonal contrasts through repeated binary splits of the phonological space, as per the Successive Division Algorithm, and by adding one degree of complexity to representations at a time, following Minimality and Monotonicity. The hypothesis for children's acquisition of contour tone languages is tested against naturalistic longitudinal data collected from two children in northern China: GY and LL. The two children acquire the tonal system differently: GY focuses on "register" earlier, whereas LL focuses on "contour" earlier. The children's tonal behavior over time is also different, including the points at which they acquire tone sandhi processes.
The cross- and within- subject variation observed at the initial and subsequent stages of acquisition is shown to conform to the predictions.