
Dissertations / Theses on the topic 'Understanding language'


Consult the top 50 dissertations / theses for your research on the topic 'Understanding language.'


1

Pettit, Dean R. (Dean Reid), 1967-. "Understanding language." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/17560.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, February 2003.
Includes bibliographical references (leaves 139-140).
My dissertation concerns the nature of linguistic understanding. A standard view about linguistic understanding is that it is a propositional knowledge state. The following is an instance of this view: given a speaker S and an expression a that means M, S understands a just in case S knows that a means M. I refer to this as the epistemic view of linguistic understanding. The epistemic view would appear to be a mere conceptual truth about linguistic understanding, since it is entailed by the following two claims that themselves seem to be mere conceptual truths: (i) S understands a iff S knows what a means, and, given that a means M, (ii) S knows what a means iff S knows that a means M. I argue, however, that this is not a mere conceptual truth. Contrary to the epistemic view, propositional knowledge of the meaning of a is not necessary for understanding a. I argue that linguistic understanding does not even require belief. My positive proposal is that our understanding of language is typically realized, at least in native speakers, as a perceptual capacity. Evidence from cognitive neuropsychology suggests that our perceptual experience of language comes to us already semantically interpreted. We perceive a speaker's utterance as having content, and it is by perceiving the speaker's utterances as having the right content that we understand what the speaker says. We count as understanding language (roughly) in virtue of having this capacity to understand what speakers say when they use language. This notion of perceiving an utterance as having content gets analyzed in terms of Dretske's account of representation in terms of a teleological notion of function: you perceive a speaker's utterance as having content when the utterance produces in you a perceptual state that has a certain function in your psychology.
I show how this view about the nature of linguistic understanding provides an attractive account of how identity claims can be semantically informative, as opposed to merely pragmatically informative, an account that avoids the standard difficulties for Fregean views that attempt to account for the informativeness of identity claims in terms of their semantics.
by Dean R. Pettit.
Ph.D.
2

Rudzicz, Frank. "Clavius: Understanding language understanding in multimodal interaction." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99536.

Abstract:
Natural communication between humans is not limited to speech, but often requires the simultaneous coordination of multiple streams of information, especially hand gestures, to complement or supplement understanding. This thesis describes a software architecture, called CLAVIUS, whose purpose is to generically interpret multiple modes of input as singular semantic utterances through a modular programming interface that supports various sensing technologies. This interpretation is accomplished through a new multi-threaded parsing algorithm that co-ordinates top-down and bottom-up methods asynchronously on graph-based unification grammars. The interpretation process follows a best-first approach in which partial parses are evaluated by a combination of scoring metrics, related to such criteria as information content, grammatical structure and language models. Furthermore, CLAVIUS relaxes two traditional constraints of conventional parsing: it abandons the forced relative ordering of right-hand constituents in grammar rules, and it allows parses to be expanded with null constituents.
The effects of this parsing methodology, and of the scoring criteria it employs, are analyzed in the context of experiments and data collection with a small group of users. Both CLAVIUS and its component modules are trained on these data, and results show improvements in performance accuracy and the resolution of several difficulties found in other multimodal frameworks. A general discussion of the linguistic behaviour of speakers in a multimodal context is also included.
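As a rough illustration of the best-first strategy this abstract describes, the sketch below expands partial parses in order of a weighted combination of scoring metrics. All names, weights, and callback signatures here are invented for the example; CLAVIUS's actual data structures and metrics are far richer.

```python
import heapq

def combined_score(partial, weights):
    """Combine per-metric scores (e.g., information content, grammatical
    structure, language-model probability) into a single number."""
    return sum(weights[name] * value for name, value in partial["scores"].items())

def best_first_parse(initial_parses, expand, is_complete, weights, max_steps=1000):
    # heapq is a min-heap, so scores are negated to pop the best parse first.
    heap = [(-combined_score(p, weights), i, p) for i, p in enumerate(initial_parses)]
    heapq.heapify(heap)
    counter = len(heap)
    for _ in range(max_steps):
        if not heap:
            break
        _, _, parse = heapq.heappop(heap)
        if is_complete(parse):
            return parse
        for child in expand(parse):  # expansion may add null constituents, etc.
            counter += 1
            heapq.heappush(heap, (-combined_score(child, weights), counter, child))
    return None
```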
3

Eiben, Robert Joseph. "Understanding Dead Languages." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32798.

Abstract:
Dead languages present a case where the original language community no longer exists. This results in a language for which the evidence is limited by the paucity of surviving texts and in which no new linguistic uses can be generated. Ludwig Wittgenstein argued that the meaning of language is simply its use by a language community. On this view a dead language is coextensive with the existing corpus, with the linguistic dynamic provided by the community of readers. Donald Davidson argued that the meaning of language is not conventional, but rather is discovered in a dynamic process of "passing theories" generated by the speaker and listener. On this view a dead language is incomplete, because such dynamic theories can only be negotiated by participating in a living language community and are thus not captured by the extant corpus. We agree with Davidson's view of theories of meaning and conclude that our interpretations of dead languages will suffer epistemological underdetermination that removes any guarantee that they reflect the meanings as heard by the original language community.
Master of Arts
4

Al-Khonaizi, Mohammed Taqi. "Natural Arabic language text understanding." Thesis, University of Greenwich, 1999. http://gala.gre.ac.uk/6096/.

Abstract:
The most challenging part of natural language understanding is the representation of meaning. Current representation techniques are not sufficient to resolve ambiguities, especially when the meaning is to be used for interrogation at a later stage. The Arabic language represents a challenging field for Natural Language Processing (NLP) because of its rich eloquence and free word order, but at the same time it is a good platform on which to capture understanding because of its rich computational, morphological and grammar rules. Among different representation techniques, Lexical Functional Grammar (LFG) theory is found to be best suited for this task because of its structural approach. LFG lays down a computational approach towards NLP, especially the constituent and the functional structures, and models the completeness of relationships among the contents of each structure internally, as well as among the structures externally. The introduction of Artificial Intelligence (AI) techniques, such as knowledge representation and inferencing, enhances the capture of meaning by utilising domain-specific common-sense knowledge embedded in the model of the domain of discourse, together with the linguistic rules captured from Arabic grammar. This work has achieved the following results: (i) it is the first attempt to apply the LFG formalism to a full Arabic declarative text consisting of more than one paragraph; (ii) it extends the semantic structure of LFG theory by incorporating a representation based on thematic-role frames theory; (iii) it extends LFG theory to represent domain-specific common-sense knowledge; (iv) it automates the production process of the functional and semantic structures; and (v) it automates the production process of the domain-specific common-sense knowledge structure, which enhances the understanding ability of the system and resolves most ambiguities in subsequent question-answer sessions.
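The unification of feature structures mentioned above is the basic operation of LFG-style parsing. The minimal sketch below, with invented features, shows how two structures merge or fail on a clash; it is a simplification of the graph-based unification an LFG system would actually use.

```python
def unify(fs1, fs2):
    """Return the unification of two feature structures, or None on clash."""
    if not isinstance(fs1, dict) or not isinstance(fs2, dict):
        return fs1 if fs1 == fs2 else None   # atomic values must match exactly
    result = dict(fs1)
    for feature, value in fs2.items():
        if feature in result:
            merged = unify(result[feature], value)
            if merged is None:
                return None                   # feature clash: unification fails
            result[feature] = merged
        else:
            result[feature] = value
    return result

# Example: subject agreement features unify; a number clash would return None.
subj = {"SUBJ": {"NUM": "sg", "PERS": "3"}}
verb = {"SUBJ": {"NUM": "sg"}, "TENSE": "past"}
print(unify(subj, verb))  # {'SUBJ': {'NUM': 'sg', 'PERS': '3'}, 'TENSE': 'past'}
```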
5

Batsuren, Khuyagbaatar. "Understanding and Exploiting Language Diversity." Doctoral thesis, Università degli studi di Trento, 2018. https://hdl.handle.net/11572/368635.

Abstract:
Languages are well known to be diverse on all structural levels, from the smallest (phonemic) to the broadest (pragmatic). We propose a set of formal, quantitative measures for the language diversity of linguistic phenomena, resource incompleteness, and resource incorrectness. We apply all these measures to lexical semantics, where we show how evidence of a high degree of universality within a given language set can be used to extend lexico-semantic resources in a precise, diversity-aware manner. We demonstrate our approach on several case studies. The first concerns polysemes and homographs among cases of lexical ambiguity: contrary to past research that focused solely on exploiting systematic polysemy, the notion of universality provides us with an automated method also capable of predicting irregular polysemes. The second is the automatic identification of cognates from existing lexical resources across different orthographies of genetically unrelated languages: contrary to past research that focused on detecting cognates among the 225 concepts of the Swadesh list, we captured 3.1 million cognate pairs across 40 different orthographies and 335 languages by exploiting existing wordnet-like lexical resources.
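One building block of large-scale cognate identification of the kind described here can be illustrated with a normalized edit distance over transliterated word forms. The threshold, the toy transliteration step, and the examples below are illustrative assumptions, not the thesis's actual method.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def likely_cognates(w1, w2, translit=str.lower, threshold=0.5):
    """Flag a pair sharing a sense as cognate candidates when the normalized
    distance between transliterated forms falls below the threshold."""
    a, b = translit(w1), translit(w2)
    return edit_distance(a, b) / max(len(a), len(b)) <= threshold

print(likely_cognates("Nacht", "night"))   # True (toy example)
print(likely_cognates("Hund", "dog"))      # False
```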
6

Batsuren, Khuyagbaatar. "Understanding and Exploiting Language Diversity." Doctoral thesis, University of Trento, 2018. http://eprints-phd.biblio.unitn.it/3451/1/disclaimer_batsuren.pdf.

Abstract:
Languages are well known to be diverse on all structural levels, from the smallest (phonemic) to the broadest (pragmatic). We propose a set of formal, quantitative measures for the language diversity of linguistic phenomena, resource incompleteness, and resource incorrectness. We apply all these measures to lexical semantics, where we show how evidence of a high degree of universality within a given language set can be used to extend lexico-semantic resources in a precise, diversity-aware manner. We demonstrate our approach on several case studies. The first concerns polysemes and homographs among cases of lexical ambiguity: contrary to past research that focused solely on exploiting systematic polysemy, the notion of universality provides us with an automated method also capable of predicting irregular polysemes. The second is the automatic identification of cognates from existing lexical resources across different orthographies of genetically unrelated languages: contrary to past research that focused on detecting cognates among the 225 concepts of the Swadesh list, we captured 3.1 million cognate pairs across 40 different orthographies and 335 languages by exploiting existing wordnet-like lexical resources.
7

Williams, Clive Richard. "ATLAS : a natural language understanding system." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320139.

8

Marlen, Michael Scott. "An approach to Natural Language understanding." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/17581.

Abstract:
Doctor of Philosophy
Department of Computing and Information Sciences
David A. Gustafson
Natural Language understanding over a set of sentences or a document is a challenging problem. We approach this problem using semantic extraction and an ontology for answering questions based on the data. There is more information in a sentence than is found by extracting the visible terms and the obvious relations between them. It is this hidden information that gives the solution its advantage over alternatives. The methodology was tested against the FraCas Test Suite with near-perfect results (correct answers) for the sections that are the focus of this work (Generalized Quantifiers, Plurals, Adjectives, Comparatives, Verbs, and Attitudes). The results indicate that extracting the visible semantics as well as the unseen semantics and their interrelations, and reasoning over them with an ontology, provides reliable and provable answers to questions, validating this technology.
9

Swain, Bradley Andrew. "Path understanding using geospatial natural language." [Pensacola, Fla.] : University of West Florida, 2009. http://purl.fcla.edu/fcla/etd/WFE0000182.

Abstract:
Thesis (M.S.)--University of West Florida, 2009.
Submitted to the Dept. of Computer Science. Title from title page of source document. Document formatted into pages; contains 45 pages. Includes bibliographical references.
10

Autayeu, Aliaksandr. "Descriptive Phrases: Understanding Natural Language Metadata." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/368353.

Abstract:
The fast development of information and communication technologies has made available vast amounts of heterogeneous information. With these amounts growing faster and faster, information integration and search technologies are becoming a key to the success of the information society. To handle such amounts efficiently, data needs to be leveraged and analysed at deep levels. Metadata is a traditional way of getting leverage over the data. Deeper levels of analysis include language analysis, starting from purely string-based (keyword) approaches, continuing with syntax-based approaches, and now semantics is about to be included in the processing loop. Often a natural language, being the easiest means of expression, is used in metadata; we call such metadata "natural language metadata". Examples include various titles, captions and labels, such as web directory labels, picture titles, classification labels, and business directory category names. These short pieces of text usually describe (sets of) objects; we call them "descriptive phrases". This thesis deals with the problem of understanding natural language metadata for its further use in semantics-aware applications. It contributes by portraying descriptive phrases, using the results of the analysis of several collected and annotated datasets of natural language metadata. It provides an architecture for natural language metadata understanding, complete with the algorithms and the implementation, and it contains an evaluation of the proposed architecture.
11

Autayeu, Aliaksandr. "Descriptive Phrases: Understanding Natural Language Metadata." Doctoral thesis, University of Trento, 2010. http://eprints-phd.biblio.unitn.it/270/1/autayeu-phd-thesis.pdf.

Abstract:
The fast development of information and communication technologies has made available vast amounts of heterogeneous information. With these amounts growing faster and faster, information integration and search technologies are becoming a key to the success of the information society. To handle such amounts efficiently, data needs to be leveraged and analysed at deep levels. Metadata is a traditional way of getting leverage over the data. Deeper levels of analysis include language analysis, starting from purely string-based (keyword) approaches, continuing with syntax-based approaches, and now semantics is about to be included in the processing loop. Often a natural language, being the easiest means of expression, is used in metadata; we call such metadata "natural language metadata". Examples include various titles, captions and labels, such as web directory labels, picture titles, classification labels, and business directory category names. These short pieces of text usually describe (sets of) objects; we call them "descriptive phrases". This thesis deals with the problem of understanding natural language metadata for its further use in semantics-aware applications. It contributes by portraying descriptive phrases, using the results of the analysis of several collected and annotated datasets of natural language metadata. It provides an architecture for natural language metadata understanding, complete with the algorithms and the implementation, and it contains an evaluation of the proposed architecture.
12

Luebbering, Candice Rae. "The Cartographic Representation of Language: Understanding language map construction and visualizing language diversity." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/37543.

Abstract:
Language maps provide illustrations of linguistic and cultural diversity and distribution, appearing in outlets ranging from textbooks and news articles to websites and wall maps. They are valuable visual aids that accompany discussions of our cultural climate. Despite the prevalent use of language maps as educational tools, little recent research addresses the difficult task of map construction for this fluid cultural characteristic. The display and analysis capabilities of current geographic information systems (GIS) provide a new opportunity for revisiting and challenging the issues of language mapping. In an effort to renew language mapping research and explore the potential of GIS, this dissertation is composed of three studies that collectively present a progressive body of work on language mapping. The first study summarizes the language mapping literature, addressing the difficulties and limitations of assigning language to space before describing contemporary language mapping projects as well as future research possibilities with current technology. In an effort to identify common language mapping practices, the second study is a map survey documenting the cartographic characteristics of existing language maps. The survey not only consistently categorizes language map symbology, it also captures unique strategies observed for handling locations with linguistic plurality as well as for representing language data uncertainty. A new typology of language map symbology is compiled based on the map survey results. Finally, the third study specifically addresses two gaps in the language mapping literature: the issue of visualizing linguistic diversity and the scarcity of GIS applications in language mapping research. The study uses census data for the Washington, D.C. Metropolitan Statistical Area to explore visualization possibilities for representing linguistic diversity. After recreating mapping strategies already in use for showing linguistic diversity, the study applies an existing statistic (a linguistic diversity index) as a new mapping variable to generate a new visualization type: a linguistic diversity surface. The overall goal of this dissertation is to provide the impetus for continued language mapping research and to contribute to the understanding and creation of language maps in education, research, politics, and other venues.
Ph. D.
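The abstract does not spell out which linguistic diversity index is mapped; one common choice for census data is Greenberg's index, the probability that two randomly chosen residents speak different languages. The sketch below computes it for an invented tract, purely as an illustration.

```python
def greenberg_ldi(speaker_counts):
    """Greenberg's linguistic diversity index: LDI = 1 - sum(p_i^2)."""
    total = sum(speaker_counts.values())
    return 1.0 - sum((n / total) ** 2 for n in speaker_counts.values())

# Invented census-style counts for one tract; higher LDI = more diverse.
tract = {"English": 6200, "Spanish": 2100, "Amharic": 900, "Korean": 800}
print(round(greenberg_ldi(tract), 3))  # 0.557
```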
13

Habibovic, Asima. "Taboo language: Teenagers' understanding of and attitudes to English taboo language." Thesis, Högskolan Kristianstad, Sektionen för Lärarutbildning, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-7731.

14

Sætre, Rune. "GeneTUC: Natural Language Understanding in Medical Text." Doctoral thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-545.

Abstract:

Natural Language Understanding (NLU) is a 50-year-old research field, but its application to molecular biology literature (BioNLU) is less than 10 years old. After the complete human genome sequence was published by the Human Genome Project and Celera in 2001, there has been an explosion of research, shifting the NLU focus from domains like news articles to the domain of molecular biology and medical literature. BioNLU is needed, since almost 2000 new articles are published and indexed every day, and biologists need to know about existing knowledge regarding their own research. So far, BioNLU results are not as good as in other NLU domains, so more research is needed to solve the challenges of creating useful NLU applications for biologists.

The work in this PhD thesis is a “proof of concept”. It is the first to show that an existing Question Answering (QA) system can be successfully applied in the hard BioNLU domain, after the essential challenge of unknown entities is solved. The core contribution is a system that discovers and classifies unknown entities and relations between them automatically. The World Wide Web (through Google) is used as the main resource, and the performance is almost as good as other named entity extraction systems, but the advantage of this approach is that it is much simpler and requires less manual labor than any of the other comparable systems.

The first paper in this collection gives an overview of the field of NLU and shows how the Information Extraction (IE) problem can be formulated with Local Grammars. The second paper uses Machine Learning to automatically recognize protein names based on features from the GSearch Engine. In the third paper, GSearch is substituted with Google, and the task is to extract all unknown names belonging to one of 273 biomedical entity classes, such as genes, proteins, and processes. After getting promising results with Google, the fourth paper shows that this approach can also be used to retrieve interactions or relationships between the named entities. The fifth paper describes an online implementation of the system, and shows that the method scales well to a larger set of entities.

The final paper concludes the “proof of concept” research, and shows that the performance of the original GeneTUC NLU system has increased from handling 10% of the sentences in a large collection of abstracts in 2001 to 50% in 2006. This is still not good enough to create a commercial system, but it is believed that another 40% performance gain can be achieved by importing more verb templates into GeneTUC, just as nouns were imported during this work. Work has already begun on this, in the form of a local Master's thesis.

15

He, Y. "A statistical approach to spoken language understanding." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603917.

Abstract:
The research work described here focuses on statistical learning approaches for building a purely data-driven spoken language understanding (SLU) system whose three major components, the speech recognizer, the semantic parser, and the dialogue act decoder, are trained entirely from data. The system is comparable to existing SLU systems which rely either on hand-crafted semantic grammar rules or on statistical models trained on fully-annotated training corpora, but it has a greatly reduced build cost. The core of the system is a novel hierarchical semantic parser model called a Hidden Vector State (HVS) model. Unlike other hierarchical parsing models which require fully-annotated treebank data for training, the HVS model can be trained using only lightly annotated data whilst simultaneously retaining sufficient ability to capture the hierarchical structure needed to robustly extract task domain semantics. The HVS parser is combined with a dialogue act detector based on Naive Bayesian networks, which have been extended and refined by introducing Tree-Augmented Naive Bayes networks (TANs) to allow inter-concept dependencies to be robustly modelled. Finally, the two semantic analyzer components, the HVS semantic parser and the modified-TAN dialogue act decoder, have been integrated with a standard HTK-based Hidden Markov Model (HMM) speech recognizer, and the additional knowledge provided by the semantic analyzer has been used to determine the best-scoring word hypothesis from the N-best lists generated by the speech recognizer. This purely data-driven SLU system has been built and tested using both the ATIS and DARPA Communicator test sets. In addition to testing on clean data, the system has been tested on various levels of noisy data and on modified application domains. The results support the claim that an SLU system which is statistically based and trained entirely from data is intrinsically robust and can be readily adapted to new applications.
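To give a flavour of the dialogue act decoding component, the sketch below implements a plain Naive Bayes decoder over a tiny invented ATIS-style corpus. The thesis's TAN extension additionally models dependencies between features, which this toy version deliberately omits.

```python
from collections import Counter, defaultdict
import math

# Invented two-act training corpus, purely for illustration.
train = [
    ("i want a flight to boston", "request_flight"),
    ("show me flights to denver", "request_flight"),
    ("what is the fare", "request_fare"),
    ("how much does it cost", "request_fare"),
]

prior, word_counts = Counter(), defaultdict(Counter)
for utterance, act in train:
    prior[act] += 1
    word_counts[act].update(utterance.split())

def decode(utterance, alpha=1.0):
    """Return the most probable dialogue act under add-one smoothing."""
    vocab = len({w for c in word_counts.values() for w in c})
    scores = {}
    for act in prior:
        total = sum(word_counts[act].values())
        score = math.log(prior[act] / sum(prior.values()))
        for w in utterance.split():
            score += math.log((word_counts[act][w] + alpha) / (total + alpha * vocab))
        scores[act] = score
    return max(scores, key=scores.get)

print(decode("show me the fare to boston"))
```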
16

Kočiský, Tomáš. "Deep learning for reading and understanding language." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:cc45e366-cdd8-495b-af42-dfd726700ff0.

Abstract:
This thesis presents novel tasks and deep learning methods for machine reading comprehension and question answering, with the goal of achieving natural language understanding. First, we consider a semantic parsing task where the model understands sentences and translates them into a logical form or instructions. We present a novel semi-supervised sequential autoencoder that treats language as a discrete sequential latent variable and semantic parses as the observations. This model allows us to leverage synthetically generated unpaired logical forms, and thereby alleviate the lack of supervised training data. We show the semi-supervised model outperforms a supervised model when trained with the additional generated data. Second, reading comprehension requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess reading comprehension ability, in both artificial agents and children learning to read. We propose a new, challenging, supervised reading comprehension task. We gather a large-scale dataset of news stories from the CNN and Daily Mail websites, with Cloze-style questions created from the highlights. This dataset allows, for the first time, the training of deep learning models for reading comprehension. We also introduce novel attention-based models for this task and present a qualitative analysis of the attention mechanism. Finally, following the recent advances in reading comprehension in both models and task design, we further propose a new task for understanding complex narratives, NarrativeQA, consisting of full texts of books and movie scripts. We collect human-written questions and answers based on high-level plot summaries. This task is designed to encourage the development of models for language understanding; successfully answering its questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard reading comprehension models struggle on the tasks presented here.
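The Cloze-style question creation described for the CNN/Daily Mail data can be caricatured in a few lines: mask an entity mention in a highlight to obtain a question-answer pair. The entity list below is supplied by hand, whereas the actual pipeline used automatic entity detection and anonymization.

```python
def make_cloze(highlight, entities, placeholder="@placeholder"):
    """Yield (question, answer) pairs, one per entity found in the highlight."""
    for entity in entities:
        if entity in highlight:
            yield highlight.replace(entity, placeholder), entity

highlight = "Producers said Ben Stiller wrote the role for Tom Cruise."
for question, answer in make_cloze(highlight, ["Ben Stiller", "Tom Cruise"]):
    print(question, "->", answer)
```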
17

Kojima, Takatsugu. "Spatial language understanding based on visual information." Kyoto University, 2007. http://hdl.handle.net/2433/136376.

18

Ye, Patrick. "Natural language understanding in controlled virtual environments." Connect to thesis, 2009. http://repository.unimelb.edu.au/10187/4756.

19

Leitch, David Gideon. "The politics of understanding: Language as a model of culture." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3331060.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed Dec. 5, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 235-251).
20

Callaghan, Paul. "An evaluation of Lolita and related natural language processing systems." Thesis, Durham University, 1998. http://etheses.dur.ac.uk/5024/.

Abstract:
This research addresses the question, "How do we evaluate systems like LOLITA?" LOLITA is the Natural Language Processing (NLP) system under development at the University of Durham, intended as a platform for building NL applications. We are therefore interested in questions of evaluation for such general NLP systems. The thesis has two parts. The first, and main, part concerns the participation of LOLITA in the Sixth Message Understanding Conference (MUC-6). The MUC-relevant portion of LOLITA is described in detail. The adaptation of LOLITA for MUC-6 is discussed, including work undertaken by the author. Performance on a specimen article is analysed qualitatively, and in detail, with anonymous comparisons to competitors' output. We also examine current LOLITA performance. A template comparison tool was implemented to aid these analyses. The overall scores are then considered. A methodology for analysis is discussed, and a comparison made with current scores. The comparison tool is used to analyse how systems performed relative to each other. One method, Correctness Analysis, was particularly interesting: it provides a characterisation of task difficulty, and indicates how systems approached a task. Finally, MUC-6 itself is analysed. In particular, we consider the methodology and ways of interpreting the results. Several criticisms of MUC-6 are made, along with suggestions for future MUC-style events. The second part considers evaluation from the point of view of general systems. A literature review shows a lack of serious work on this aspect of evaluation. A first-principles discussion of evaluation, starting from a view of NL systems as a particular kind of software, raises several interesting points for single-task evaluation. No evaluations could be suggested for general systems; their value is seen as primarily economic. That is, we are unable to analyse their linguistic capability directly.
21

Di, Stefano Marialuisa. "Understanding How Emergent Bilinguals Bridge Belonging and Languages in Dual Language Immersion Settings." DigitalCommons@USU, 2017. https://digitalcommons.usu.edu/etd/6261.

Abstract:
The purpose of this study was to understand how young children bridge belonging and language in a dual language immersion (DLI) setting. I conducted a 10-week ethnographic study in a Spanish-English third-grade class in the Northeast of the U.S., where data were collected in the form of field notes, interviews, and artifacts. Here I explored the way language instruction and student participation influenced the development of the teacher's and students' multiple identities. The findings of this study suggest that emergent bilinguals' identity development derives from a process built through multiple dialogic classroom instructions and practices. The products of this process emphasize the sense of belonging and language practices as main components of students' hybrid and fluid identities. This research contributes to the field of identity development and DLI studies in terms of knowledge, policy, and practice. In particular, the findings of this study: (a) increase our knowledge of students' development of multiple identities in DLI settings; (b) inform policy implementation in elementary schools; and (c) reveal classroom strategies and successful instructional practices in elementary education.
22

Anderson, Amy L. Grumet Madeleine R. "Language matters: A study of teachers' uses of language for understanding practice." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2006. http://dc.lib.unc.edu/u?/etd,410.

Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2006.
Title from electronic title page (viewed Oct. 10, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the School of Education (Culture, Curriculum, and Change)." Discipline: Education; Department/School: Education.
23

Carroll, Kevin Sean. "Language Maintenance in Aruba and Puerto Rico: Understanding Perceptions of Language Threat." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/195400.

Abstract:
This dissertation uses qualitative research methods to describe the history of language use and maintenance on the islands of Aruba and Puerto Rico. More specifically, it examines how the islands' unique colonial circumstances have affected the maintenance of the local language. The multidisciplinary field of language planning and policy (LPP) has historically focused on documenting, categorizing and revitalizing languages that have undergone significant language shift. As a result, the majority of the discourse regarding threatened languages also implies that a threatened language will soon be endangered. The language contexts on the islands of Aruba and Puerto Rico do not conform to this often assumed linear progression. The use of document analysis, interviews with key players in LPP and observations on both islands provide the data for the position that there are unique contexts where language threat can be discussed, not in terms of language shift, but in terms of perceptions of threat. In addition to providing a detailed historical account of language situations on both islands, this dissertation frames the findings within a larger framework of redefining language threat. Special attention is paid to how social agents have influenced perceptions through the social amplification of risk framework. The work concludes with an argument for a framework that incorporates not only languages that have witnessed language shift, but also language contexts where languages are perceived to be threatened, with the understanding that such a distinction could potentially move the field of LPP toward a better understanding of language maintenance.
24

Lou, Bill Pi-ching. "New models of natural language for automated assessment." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337661.

25

Stout, Timothy G. "Understanding Successful Japanese Language Programs: Utah Case Study." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/2047.

Abstract:
Recent world events have caused Americans to reassess national political, economic, and educational priorities, resulting in a shift towards Asia. The schools, in response, have begun to introduce less commonly taught languages, such as Japanese and Chinese. Many Utah public schools have tried to implement less commonly taught language programs; some have succeeded, and others have not. The purpose of this study was to understand how and why some schools were able to successfully integrate less commonly taught language programs, and why others were not. The results of this study suggest that factors relating to students' interests and the teacher/administrator relationship were the most important positive factors in the success of the Japanese programs with staying power. It was also found that factors relating to funding issues and student enrollment were the most important negative factors in the failure of the long-term Japanese programs that were eliminated.
26

Potter, Jami L. "The Relationship of Language and Emotion Understanding to Sociable Behavior of Children with Language Impairment." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1786.

Abstract:
The purpose of this study was to examine the relationship of emotion understanding and language ability to sociable behavior in children with language impairment (LI) and their typically developing peers. Twenty-nine children with LI and 29 age- and gender-matched peers with typical language participated in this study. Each child's sociability was rated by his or her classroom teacher using the Teacher Behavior Rating Scale (Hart & Robinson, 1996). Language ability was assessed using the Comprehensive Assessment of Spoken Language (Carrow-Woolfolk, 1999). To assess emotion understanding, each participant was asked to perform several structural dissemblance tasks. Children with LI received significantly lower scores in language, dissemblance, prosocial behavior, and likeability compared to their typically developing peers. Hierarchical regression analyses indicated that language was a significant predictor of sociability. Further analyses indicated that dissemblance mediated the relationship between language and likeability in girls, but not boys, whereas dissemblance did not mediate the relationship between language and prosocial behavior. Evidence from this study supports past research indicating that children with LI experience emotional and language difficulties, which affect their social competence, particularly in girls.
27

Majid, Asifa. "Language and causal understanding : there's something about Mary." Thesis, University of Glasgow, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366213.

28

Ferrari, Flavia Dias de Oliveira. "Starting with play: Understanding and language in Gadamer." Pontifícia Universidade Católica do Rio de Janeiro, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=16085@1.

Abstract:
Pontifícia Universidade Católica do Rio de Janeiro
The present work is a study of Truth and Method, the main work of the German philosopher Hans-Georg Gadamer. In taking the phenomenon of understanding (Verstehen) as the object of his reflection, Gadamer makes clear from the outset that the hermeneutics he intends to develop is an attempt to understand the truth that is proper to the human sciences, beyond their methodological self-consciousness, as well as what links such sciences to the whole of our experience of the world. One of the central themes developed in this work, directly linked to the question of understanding, is the concept of play (Spiel), understood as an event that takes place beyond the subjectivities involved in it. According to Gadamer, the universal scope and the ontological dimension of play should not be ignored. We therefore try to show in this study that playing and understanding are interchangeable elements in Gadamer's thought, inasmuch as to think the interweaving of play and understanding is to realize that the structure of understanding demands a certain surrender to the situation, in which subjectivity is no longer the determining instance in the moment of understanding.
29

Korpusik, Mandy B. "Spoken language understanding in a nutrition dialogue system." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99860.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-111).
Existing approaches for the prevention and treatment of obesity are hampered by the lack of accurate, low-burden methods for self-assessment of food intake, especially for hard-to-reach, low-literate populations. For this reason, we propose a novel approach to diet tracking that utilizes speech understanding and dialogue technology in order to enable efficient self-assessment of energy and nutrient consumption. We are interested in studying whether speech can lower user workload compared to existing self-assessment methods, whether spoken language descriptions of meals can accurately quantify caloric and nutrient absorption, and whether dialogue can efficiently and effectively be used to ascertain and clarify food properties, perhaps in conjunction with other modalities. In this thesis, we explore the core innovation of our nutrition system: the language understanding component which relies on machine learning methods to automatically detect food concepts in a user's spoken meal description. In particular, we investigate the performance of conditional random field (CRF) models for semantic labeling and segmentation of spoken meal descriptions. On a corpus of 10,000 meal descriptions, we achieve an average F1 test score of 90.7 for semantic tagging and 86.3 for associating foods with properties. In a study of users interacting with an initial prototype of the system, semantic tagging achieved an accuracy of 83%, which was sufficiently high to satisfy users.
by Mandy B. Korpusik.
S.M.
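A minimal sketch of the CRF tagging step described in this abstract, assuming the third-party sklearn-crfsuite package; the two-meal training set, feature template, and label inventory are invented stand-ins for the 10,000-description corpus.

```python
import sklearn_crfsuite

def features(tokens, i):
    """Simple per-token feature dict: the token plus its neighbours."""
    return {
        "word.lower": tokens[i].lower(),
        "prev.lower": tokens[i - 1].lower() if i > 0 else "BOS",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "EOS",
        "is_digit": tokens[i].isdigit(),
    }

meals = [
    ("I had a bowl of oatmeal with milk".split(),
     ["O", "O", "B-Quantity", "I-Quantity", "O", "B-Food", "O", "B-Food"]),
    ("two slices of toast with butter".split(),
     ["B-Quantity", "I-Quantity", "O", "B-Food", "O", "B-Food"]),
]
X = [[features(toks, i) for i in range(len(toks))] for toks, _ in meals]
y = [labels for _, labels in meals]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
test = "a cup of milk".split()
print(crf.predict([[features(test, i) for i in range(len(test))]]))
```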
30

Stolte, Rosemarie. "German language learning in England : understanding the enthusiasts." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/388471/.

Abstract:
This study explores the motivation of English undergraduates to study German. In a context-focused approach, the history of German language learning in England is reviewed first. The historical findings, in combination with a review of L2 motivation research, lead to the empirical work. Inspired by Ushioda's (2009) person-in-context relational view of motivation, I have conducted cross-sectional qualitative interview research with groups of British undergraduates who study German at two different English universities. The data collected give an insight into language learning motivation in general and show what is specific to Anglophone learners and to German language learning. Through qualitative data analysis relating to different language learning motivation models, I test the relevance of the concepts of integrativeness, instrumental orientation and the L2 motivational self system to the learning of a high-status 'niche' language which is often a third language for Anglophone students.
31

Mazidi, Karen. "Infusing Automatic Question Generation with Natural Language Understanding." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc955021/.

Abstract:
Automatically generating questions from text for educational purposes is an active research area in natural language processing. The automatic question generation system accompanying this dissertation is MARGE, a recursive acronym for: MARGE Automatically Reads, Generates, and Evaluates. MARGE generates questions from both individual sentences and the passage as a whole, and is the first question generation system to successfully generate meaningful questions from textual units larger than a sentence. Prior work in automatic question generation from text treats a sentence as a string of constituents to be rearranged into as many questions as English grammar rules allow. Consequently, such systems overgenerate and create mainly trivial questions. Further, none of these systems to date has been able to automatically determine which questions are meaningful and which are trivial, because the research focus has been placed on natural language generation (NLG) at the expense of natural language understanding (NLU). In contrast, the work presented here infuses the question generation process with natural language understanding. From the input text, MARGE creates a meaning analysis representation for each sentence in a passage via the DeconStructure algorithm presented in this work. Questions are generated from sentence meaning analysis representations using templates. The generated questions are automatically evaluated for question quality and importance via a ranking algorithm.
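The template-based generation step can be illustrated as below, with a sentence already reduced by hand to a (subject, verb, object) triple. MARGE's actual meaning analysis representations from the DeconStructure algorithm are considerably richer, so the template names here are hypothetical.

```python
# Hypothetical question templates keyed by the slot being asked about.
TEMPLATES = {
    "subject": "Who or what {verb} {obj}?",
    "object": "What did {subj} {verb_base}?",
}

def generate_questions(subj, verb, verb_base, obj):
    """Yield one question per template from a hand-built SVO triple."""
    yield TEMPLATES["subject"].format(verb=verb, obj=obj)
    yield TEMPLATES["object"].format(subj=subj, verb_base=verb_base)

# "Marie Curie discovered polonium." -> two template questions.
for q in generate_questions("Marie Curie", "discovered", "discover", "polonium"):
    print(q)
```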
32

Shao, Han. "Pretraining Deep Learning Models for Natural Language Understanding." Oberlin College Honors Theses / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin158955297757398.

33

Huber, Bernard J. Jr. "A knowledge-based approach to understanding natural language." Online version of thesis, 1991. http://hdl.handle.net/1850/11053.

34

Mrkšić, Nikola. "Data-driven language understanding for spoken dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/276689.

Abstract:
Spoken dialogue systems provide a natural conversational interface to computer applications. In recent years, the substantial improvements in the performance of speech recognition engines have helped shift the research focus to the next component of the dialogue system pipeline: the one in charge of language understanding. The role of this module is to translate user inputs into accurate representations of the user goal in the form that can be used by the system to interact with the underlying application. The challenges include the modelling of linguistic variation, speech recognition errors and the effects of dialogue context. Recently, the focus of language understanding research has moved to making use of word embeddings induced from large textual corpora using unsupervised methods. The work presented in this thesis demonstrates how these methods can be adapted to overcome the limitations of language understanding pipelines currently used in spoken dialogue systems. The thesis starts with a discussion of the pros and cons of language understanding models used in modern dialogue systems. Most models in use today are based on the delexicalisation paradigm, where exact string matching supplemented by a list of domain-specific rephrasings is used to recognise users' intents and update the system's internal belief state. This is followed by an attempt to use pretrained word vector collections to automatically induce domain-specific semantic lexicons, which are typically hand-crafted to handle lexical variation and account for a plethora of system failure modes. The results highlight the deficiencies of distributional word vectors which must be overcome to make them useful for downstream language understanding models. The thesis next shifts focus to overcoming the language understanding models' dependency on semantic lexicons. To achieve that, the proposed Neural Belief Tracking (NBT) model forsakes the use of standard one-hot n-gram representations used in Natural Language Processing in favour of distributed representations of user utterances, dialogue context and domain ontologies. The NBT model makes use of external lexical knowledge embedded in semantically specialised word vectors, obviating the need for domain-specific semantic lexicons. Subsequent work focuses on semantic specialisation, presenting an efficient method for injecting external lexical knowledge into word vector spaces. The proposed Attract-Repel algorithm boosts the semantic content of existing word vectors while simultaneously inducing high-quality cross-lingual word vector spaces. Finally, NBT models powered by specialised cross-lingual word vectors are used to train multilingual belief tracking models. These models operate across many languages at once, providing an efficient method for bootstrapping language understanding models for lower-resource languages with limited training data.
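The Attract-Repel idea sketched in this abstract can be caricatured with margin-style updates that pull synonym vectors together and push antonym vectors apart, renormalizing after each step. The margins, learning rate, and update rule below are invented simplifications of the published algorithm, not a reimplementation of it.

```python
import numpy as np

def attract_repel_step(vecs, synonyms, antonyms, margin=0.6, lr=0.1):
    """One illustrative semantic-specialisation pass over word vectors."""
    for a, b in synonyms:                      # attract: push similarity up
        if vecs[a] @ vecs[b] < margin:
            vecs[a] += lr * vecs[b]
            vecs[b] += lr * vecs[a]
    for a, b in antonyms:                      # repel: push similarity down
        if vecs[a] @ vecs[b] > -margin:
            vecs[a] -= lr * vecs[b]
            vecs[b] -= lr * vecs[a]
    for w in vecs:                             # keep vectors on the unit sphere
        vecs[w] /= np.linalg.norm(vecs[w])
    return vecs

rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=8) for w in ["cheap", "inexpensive", "expensive"]}
vecs = attract_repel_step(vecs, [("cheap", "inexpensive")], [("cheap", "expensive")])
print(vecs["cheap"] @ vecs["inexpensive"], vecs["cheap"] @ vecs["expensive"])
```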
35

Sharp, L. Kathryn, and Susan Lewis. "Moving Toward the Common Core: Understanding Academic Language." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/4270.

36

Luo, Hongyin. "Neural attentions for natural language understanding and modeling." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122760.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 85-92).
In this thesis, we explore the use of neural attention mechanisms for improving natural language representation learning, a fundamental concept in modern natural language processing. With the proposed attention algorithms, our model made significant improvements in both language modeling and natural language understanding tasks. We regard language modeling as a representation learning task that learns to align local word contexts and their following words. We explore the use of attention mechanisms for both the context and the following words to improve the performance of language models, and measure perplexity improvements on classic language modeling tasks. To learn better representations of contexts, we use a self-attention mechanism with a convolutional neural network (CNN) to simulate long short-term memory networks (LSTMs). The model processes sequential data in parallel and still achieves competitive performance. We also propose a phrase induction model with headword attention to learn the embedding of following phrases. The model is able to learn reasonable phrase segments and outperforms several state-of-the-art language models on different data sets: it improved on the AWD-LSTM model by 2 perplexity points on the Penn Treebank and WikiText-2 data sets, and achieved new state-of-the-art performance on the WikiText-103 data set with 17.4 perplexity. For language understanding tasks, we propose the use of a self-attention CNN for video question answering, whose performance is significantly higher than that of the baseline video retrieval engine. Finally, we also investigate an end-to-end co-reference resolution model that applies cross-sentence attention to utilize knowledge in contextual data and learn better contextualized word and span embeddings. The model achieved 66.69% MAP@1 and 87.42% MAP@5 accuracy on video retrieval, and 57.13% MAP@1 and 80.75% MAP@5 accuracy on a moment detection task, significantly outperforming the baselines.
The study is partly supported by Ford Motor Company
by Hongyin Luo.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
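The self-attention mechanism at the core of these models reduces to the scaled dot-product computation below, written in plain numpy with random weight matrices standing in for learned parameters; shapes are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Each position attends over all positions."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # (seq_len, seq_len)
    return softmax(scores) @ V                  # weighted sum of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                    # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (5, 16)
```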
37

La Quatra, Moreno. "Deep Learning for Natural Language Understanding and Summarization." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2972201.

38

Litman, Diane Judith. "Plan recognition and discourse analysis: An integrated approach for understanding dialogues." Rochester, NY: University of Rochester, Department of Computer Science, 1985. http://doi.library.cmu.edu/10.1184/OCLC/14397594.

39

McKinnon, Maija Leena. "A procedural account of some English modals." Thesis, University of Edinburgh, 1985. http://hdl.handle.net/1842/20010.

40

Bromage, Jeanette. "Metaphor and understanding : a philosophical investigation." Thesis, University of Birmingham, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251831.

41

Livingstone, G. M. "Semantics, understanding and knowledge." Thesis, University of Oxford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234326.

42

Cross, Sandra A. "Understanding verbal accounts of racism." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8233.

43

Kuperberg, Gina Rosalind. "The cognitive neuroscience of language processing : towards an understanding of language dysfunction in schizophrenia." Thesis, King's College London (University of London), 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272382.

44

Shain, Cory Adam. "Language, time, and the mind: Understanding human language processing using continuous-time deconvolutional regression." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1619002281033782.

45

Goldie, Lara Lynn. "The Relationship Among Emotion Understanding, Language, and Social Behavior in Children with Language Impairment." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2709.pdf.

46

Salvador, Amaia. "Computer vision beyond the visible : image understanding through language." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/667162.

Abstract:
In the past decade, deep neural networks have revolutionized computer vision. High-performing deep neural architectures trained for visual recognition tasks have pushed the field towards methods relying on learned image representations instead of hand-crafted ones, in the quest to design end-to-end learning methods that solve challenging tasks, ranging from long-standing ones such as image classification to newly emerging tasks like image captioning. As this thesis is framed in the context of the rapid evolution of computer vision, we present contributions that are aligned with three major changes in paradigm that the field has recently experienced, namely (1) the power of re-utilizing deep features from pre-trained neural networks for different tasks, (2) the advantage of formulating problems with end-to-end solutions given enough training data, and (3) the growing interest in describing visual data with natural language rather than pre-defined categorical label spaces, which can in turn enable visual understanding beyond scene recognition.
The first part of the thesis is dedicated to the problem of visual instance search, where we particularly focus on obtaining meaningful and discriminative image representations which allow efficient and effective retrieval of similar images given a visual query. Contributions in this part involve the construction of sparse Bag-of-Words image representations from convolutional features of a pre-trained image classification network, and an analysis of the advantages of fine-tuning a pre-trained object detection network using query images as training data.
The second part presents contributions to the problem of image-to-set prediction, understood as the task of predicting a variable-sized collection of unordered elements for an input image. We conduct a thorough analysis of current methods for multi-label image classification, which are able to solve the task in an end-to-end manner by simultaneously estimating both the label distribution and the set cardinality. Further, we extend the analysis of set prediction methods to semantic instance segmentation, and present an end-to-end recurrent model that is able to predict sets of objects (binary masks and categorical labels) in a sequential manner.
Finally, the third part takes insights learned in the previous two parts to present deep learning solutions that connect images with natural language in the context of cooking recipes and food images. First, we propose a retrieval-based solution in which the written recipe and the image are encoded into compact representations that allow the retrieval of one given the other. Second, as an alternative to the retrieval approach, we propose a generative model to predict recipes directly from food images, which first predicts ingredients as sets and subsequently generates the rest of the recipe one word at a time, conditioning on both the image and the predicted ingredients.
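The sparse Bag-of-Words construction from convolutional features described in the first part can be sketched as follows: cluster local CNN activations into a visual vocabulary, then encode each image as a normalized histogram of visual-word assignments. Random arrays stand in for real conv-layer features, and the vocabulary size is an arbitrary assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 20 images, each with a 14x14 grid of 512-dim local conv features (stand-ins).
local_feats = [rng.normal(size=(14 * 14, 512)) for _ in range(20)]

# 1) Learn a visual vocabulary by clustering local features across images.
kmeans = KMeans(n_clusters=64, n_init=10, random_state=0)
kmeans.fit(np.vstack(local_feats))

# 2) Encode each image as an L2-normalized histogram of visual-word counts.
def bow(feats, kmeans):
    words = kmeans.predict(feats)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / np.linalg.norm(hist)

query = bow(local_feats[0], kmeans)
database = [bow(f, kmeans) for f in local_feats]
sims = [float(query @ d) for d in database]
print(np.argsort(sims)[::-1][:5])  # most similar images first (query itself on top)
```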
APA, Harvard, Vancouver, ISO, and other styles
47

Spivey, J. M. "Understanding Z : A specification language and its formal semantics." Thesis, University of Oxford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Cullen, Brian. "Exploring second language creativity : Understanding and helping L2 songwriters." Thesis, Leeds Beckett University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.514222.

Full text
Abstract:
This study investigated how to help Japanese songwriters writing in English. It also aimed to evaluate and develop tools for probing the mental processes underlying L2 creativity. Evaluating various methodologies (corpus analysis of lyrics, interviews, verbal protocol analysis, and journal studies) led to eight case studies of Japanese L2 songwriters centered on the "dialogues of creation" which arose in one-to-one songwriting workshops where the researcher adopted multiple roles, including researcher, EFL teacher, creative coach, audience, and songwriter. Songwriting was shown to be a flexible use of cognitive, social, linguistic, and psychological strategies to solve a "song puzzle," simultaneously manipulating mental representations, inner voice, external constraints of song norms and language, and internal constraints created by writing. Key requirements underlying L2 creativity were revealed as flexibility in strategy and language use, openness, creation of favourable circumstances, awareness of L1 norms, and development of craft through appropriate feedback and experiential learning. Other important issues addressed were motivations, inner and external validation criteria, borrowing and ownership, and L2 identity negotiation. Three approaches to helping L2 songwriters were formulated: a language-centered approach highlighting deviation from English song and linguistic norms, a skills-centered approach utilizing an L2 songwriting model, and a learner-centered approach identifying characteristics of the "good L2 songwriter." Teacher mediation through songwriting exercises and feedback created an experiential and organic curriculum which facilitated rapid development of strategies and self-correction skills. The investigation into mental processes, use of metaphor, and management of multiple roles may interest qualitative researchers. The descriptions of creative cognitive strategies in manipulating mental representations and inner voice, the importance of inner validation, and the process model of L2 songwriting may inform research in L1 and L2 creativity. Pedagogic implications for experiential learning, collaborative learning, organic curriculum, bilingual clustering, and L2 identity negotiation may contribute to EFL and ESP.
APA, Harvard, Vancouver, ISO, and other styles
49

Calderwood, Andrea. "Improving the singer's understanding of bebop language: Transcription application." Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1569377.

Full text
Abstract:

This project report analyzes the content of line construction and development in founding bebop instrumental solos, and then compares them to bebop vocal solos. Performers examined include Charlie Parker, Ella Fitzgerald, and Chet Baker. Attention is paid to harmonic content, vocal technique, and syllable selection, with consideration given to language synthesis principles. This paper is intended as an impetus for further study of method improvements for developing vocalists' line construction through the study and incorporation of bebop-era instrumental transcriptions.

APA, Harvard, Vancouver, ISO, and other styles
50

Li, William (William Pui Lum). "Language technologies for understanding law, politics, and public policy." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/103673.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 205-209).
This thesis focuses on the development of machine learning and natural language processing methods and their application to large, text-based open government datasets. We focus on models that uncover patterns and insights by inferring the origins of legal and political texts, with a particular emphasis on identifying text reuse and text similarity in these document collections. First, we present an authorship attribution model on unsigned U.S. Supreme Court opinions, offering insights into the authorship of important cases and the dynamics of Supreme Court decision-making. Second, we apply software engineering metrics to analyze the complexity of the United States Code of Laws, thereby illustrating the structure and evolution of the U.S. Code over the past century. Third, we trace policy trajectories of legislative bills in the United States Congress, enabling us to visualize the contents of four key bills during the Financial Crisis. These applications on diverse open government datasets reveal that text reuse occurs widely in legal and political texts: similar ideas often repeat in the same corpus, different historical versions of documents are usually quite similar, or legitimate reasons for copying or borrowing text may exist. Motivated by this observation, we present a novel statistical text model, Probabilistic Text Reuse (PTR), for finding repeated passages of text in large document collections. We illustrate the utility of PTR by finding template ideas, less-common voices, and insights into document structure in a large collection of public comments on regulations proposed by the U.S. Federal Communications Commission (FCC) on net neutrality. These techniques aim to help citizens better understand political processes and help governments better understand political speech.
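The Probabilistic Text Reuse model itself is specified in the thesis; as a simpler stand-in, the sketch below shows the standard word-shingling approach to flagging reused passages between two documents. The function names and example strings are invented for the demonstration and are not drawn from the thesis or its data.

# Illustrative sketch only: detecting text reuse via word n-gram
# ("shingle") overlap, a simpler relative of the PTR model above.
import re

def shingles(text, n=8):
    # Return the set of word n-grams in a document.
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_score(doc_a, doc_b, n=8):
    # Jaccard similarity over shingles: near 1.0 suggests heavy text reuse.
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Example: two public comments sharing a template sentence score well
# above unrelated pairs, exposing the shared boilerplate.
template = "I urge the commission to preserve net neutrality protections"
comment_1 = template + " because the open internet matters to my business."
comment_2 = template + " and to reject the proposed rule changes."
print(reuse_score(comment_1, comment_2, n=5))

In a corpus like the FCC net-neutrality comments described above, clusters of pairwise-high scores would surface template ideas, leaving the lower-scoring, less-common voices easier to isolate.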
by William P. Li.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles