Dissertations / Theses on the topic 'Language modelling'

Consult the top 50 dissertations / theses for your research on the topic 'Language modelling.'


1

Rajper, Noor Jehan. "VOML : virtual organization modelling language." Thesis, University of Leicester, 2012. http://hdl.handle.net/2381/10942.

Full text
Abstract:
Virtual organizations (VOs) and their breeding environments are an emerging approach for developing systems as a consortium of autonomous entities formed to share costs and resources, respond better to opportunities, achieve shorter time-to-market and exploit fast-changing market opportunities. VOs cater for those demands by incorporating reconfigurations, making VOs highly resilient and agile by design. Reconfiguration of systems is an active research area, and many policy and specification languages have been dedicated to this purpose. However, all these approaches consider reconfiguration of a system as somewhat isolated from its business and operational model; it is usually assumed that the latter two remain unaffected by such reconfigurations, and reconfiguration is usually limited to dynamic binding of the components the system consists of. However, the demands of VO reconfiguration go beyond dynamic binding and reach the level where it becomes crucial to keep changing the organizational structure (process model) of the system as well, which leads to changes in the operational/functional model. This continuous reconfiguration of the operational model emphasizes the need for a modelling language that allows specification and validation of such systems. This thesis approaches the problem of formal specification of VOs through the Virtual Organization Modelling Language (VOML) framework. At the core of this framework are three languages, each capturing a specific aspect. The first, the Virtual Organization Structural modelling language (VO-S), focuses on structural aspects and many of the characteristics particular to VOs, such as the relationships between members, expressed in domain terminology. The second, the VO Reconfiguration language (VO-R for short), permits different reconfigurations of the structure of the VO. This language is an extension of APPEL for the domain of VOs.
The third, the VO Operational modelling language (VO-O), describes the operational model of a VO in more detail. This language is an adaptation and extension of the Sensoria Reference Modelling Language for service-oriented architecture (SRML). Our framework models VOs using VO-S and VO-R, which are at a high level of abstraction and independent of a specific computational model. Mapping rules provide guidelines to generate operational models, thus ensuring that the two models conform to each other. The usability and applicability of VOML are validated through two case studies, one of which offers travel itineraries as a VO service and is a running example. The other case study is an adaptation of a case study on developing a chemical plant from [14].
2

Kent, Stuart John Harding. "Modelling events from natural language." Thesis, Imperial College London, 1993. http://kar.kent.ac.uk/21146/.

Full text
3

Andersson, Jonathan. "Modelling and Evaluating the StreamBits language." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-652.

Full text
Abstract:

This thesis presents the evaluation of a new high-level programming language for stream applications, StreamBits. The goal of the project is to evaluate programmability, focusing on expressing machine-independent parallelism and bit-level computations in StreamBits. At present, the programming language is prototyped in a Java framework. This project also involves improvement and expansion of this framework.

An examination of the framework was conducted. The conclusions of this examination were the foundation for the changes implemented in the framework during the improvement and expansion part of this project. Evaluation experiments were done using the improved version of the framework. The evaluation was based on a comparison of programs implemented in StreamBits and in another programming language typically used by industry for this kind of application. The focus of the evaluation was how well the new data-types and stream constructs of StreamBits can be used and expressed compared to other languages.

The results are partly the improvements and expansion of the framework, and partly the outcomes of the tests conducted during the evaluation. The results show that the new data-types and stream constructs of StreamBits are valuable additions to a stream programming language. The data-types and stream constructs assist the programmer in writing source code that is not closely bound to a specific architecture.

4

Sicilia, Garcia E. I. "A study in dynamic language modelling." Thesis, Queen's University Belfast, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395213.

Full text
5

Moore, Gareth Lewis. "Adaptive statistical class-based language modelling." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620315.

Full text
6

Prochazka, Katharina, and Gero Vogl. "Modelling language shift in Carinthia, Austria." Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-198527.

Full text
7

Prochazka, Katharina, and Gero Vogl. "Modelling language shift in Carinthia, Austria." Diffusion fundamentals 24 (2015) 40, S. 1, 2015. https://ul.qucosa.de/id/qucosa%3A14557.

Full text
8

Botting, Richard. "Iterative construction of data modelling language semantics." Thesis, Coventry University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362076.

Full text
9

Bull, Susan. "Collaborative student modelling in foreign language learning." Thesis, University of Edinburgh, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496469.

Full text
10

Fountain, Trevor Michael. "Modelling the acquisition of natural language categories." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/7875.

Full text
Abstract:
The ability to reason about categories and category membership is fundamental to human cognition, and as a result a considerable amount of research has explored the acquisition and modelling of categorical structure from a variety of perspectives. These range from feature norming studies involving adult participants (McRae et al. 2005) to long-term infant behavioural studies (Bornstein and Mash 2010) to modelling experiments involving artificial stimuli (Quinn 1987). In this thesis we focus on the task of natural language categorisation, modelling the cognitively plausible acquisition of semantic categories for nouns based on purely linguistic input. Focusing on natural language categories and linguistic input allows us to make use of the tools of distributional semantics to create high-quality representations of meaning in a fully unsupervised fashion, a property not commonly seen in traditional studies of categorisation. We explore how natural language categories can be represented using distributional models of semantics; we construct concept representations from corpora and evaluate their performance against psychological representations based on human-produced features, and show that distributional models can provide a high-quality substitute for equivalent feature representations. Having shown that corpus-based concept representations can be used to model category structure, we turn our focus to the task of modelling category acquisition and exploring how category structure evolves over time. We identify two key properties necessary for cognitive plausibility in a model of category acquisition, incrementality and non-parametricity, and construct a pair of models designed around these constraints. Both models are based on a graphical representation of semantics in which a category represents a densely connected subgraph.
The first model identifies such subgraphs and uses these to extract a flat organisation of concepts into categories; the second uses a generative approach to identify implicit hierarchical structure and extract a hierarchical category organisation. We compare both models against existing methods of identifying category structure in corpora, and find that they outperform their counterparts on a variety of tasks. Furthermore, the incremental nature of our models allows us to predict the structure of categories during formation and thus to more accurately model category acquisition, a task to which batch-trained exemplar and prototype models are poorly suited.
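The graph-based view of categories described above can be pictured with a toy sketch: words are represented by co-occurrence vectors, edges link sufficiently similar words, and a category corresponds to a densely connected subgraph. The counts and the threshold below are illustrative assumptions, not values from the thesis:

```python
import math

# Hypothetical co-occurrence vectors (word -> context counts); the data
# and the similarity threshold are invented for illustration.
vectors = {
    "apple":  {"eat": 4, "red": 3, "tree": 2},
    "pear":   {"eat": 3, "tree": 3, "green": 1},
    "hammer": {"hit": 5, "nail": 4, "tool": 2},
}

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in u)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v))

# Edges link sufficiently similar words; a semantic category then shows
# up as a densely connected subgraph of this graph.
THRESHOLD = 0.3
words = list(vectors)
edges = {(a, b) for i, a in enumerate(words) for b in words[i + 1:]
         if cosine(vectors[a], vectors[b]) > THRESHOLD}
# edges == {("apple", "pear")}: the two fruit words form a cluster.
```

On this toy data the two fruit words are linked while "hammer" stays isolated, which is the intuition behind extracting categories as dense subgraphs.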
11

Donnelly, Paul Gerard. "A domain based approach to natural language modelling." Thesis, Queen's University Belfast, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.481286.

Full text
12

Thompson-Walsh, Christopher David. "Semantics and extension of a biological modelling language." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648266.

Full text
13

Liang, Zhihong. "A meta-modelling language definition for specific domain." Thesis, De Montfort University, 2009. http://hdl.handle.net/2086/3539.

Full text
Abstract:
Model Driven software development has been considered a further software construction technology, following object-oriented software development methods, with the potential to bring new breakthroughs in software development research. As research has deepened, a growing number of Model Driven software development methods have been proposed, and models are now widely used in all aspects of software development. One key element determining progress in Model Driven software development research is how to better express and describe the models required for various software components. From a study of current Model Driven development technologies and methods, Domain-Specific Modelling is suggested in the thesis as a Model Driven method that better realises the potential of Model-Driven Software Development. Domain-specific modelling methods can be successfully applied to actual software development projects, but they need the support of a flexible and easily extensible meta-modelling language. Meta-modelling places particular requirements on languages for domain-specific modelling methods, as most general modelling languages are not suitable. The thesis focuses on the implementation of domain-specific modelling methods. The "domain" is stressed as a keystone of software design and development, and this is what most differentiates the approach from general software development processes and methods. Concerning the design of meta-modelling languages, a meta-modelling language based on XML is defined, including its abstract syntax, concrete syntax and semantics. It can support the description and construction of the domain meta-model and the domain application model, and can effectively realise visual descriptions, descriptions of domain objects, relationships and rules of the domain model. In the area of supporting tools, a meta-meta model is given.
The meta-meta model provides a group of general basic component meta-model elements together with the relationships between elements for the construction of the domain meta-model. It can support multi-view, multi-level description of the domain model. Developers or domain experts can complete the design and construction of the domain-specific meta-model and the domain application model in the integrated modelling environment. The thesis has laid the foundation necessary for research in descriptive languages through further study in key technologies of meta-modelling languages based on Model Driven development.
14

McGreevy, Michael. "Statistical language modelling for large vocabulary speech recognition." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16444/1/Michael_McGreevy_Thesis.pdf.

Full text
Abstract:
The move towards larger vocabulary Automatic Speech Recognition (ASR) systems places greater demands on language models. In a large vocabulary system, acoustic confusion is greater, thus there is more reliance placed on the language model for disambiguation. In addition to this, ASR systems are increasingly being deployed in situations where the speaker is not conscious of their interaction with the system, such as in recorded meetings and surveillance scenarios. This results in more natural speech, which contains many false starts and disfluencies. In this thesis we investigate a novel approach to the modelling of speech corrections. We propose a syntactic model of speech corrections, and seek to determine if this model can improve on the performance of standard language modelling approaches when applied to conversational speech. We investigate a number of related variations to our basic approach and compare these approaches against the class-based N-gram. We also investigate the modelling of styles of speech. Specifically, we investigate whether the incorporation of prior knowledge about sentence types can improve the performance of language models. We propose a sentence mixture model based on word-class N-grams, in which the sentence mixture models and the word-class membership probabilities are jointly trained. We compare this approach with word-based sentence mixture models.
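The class-based N-gram used as a baseline above factorises word prediction through word classes, P(w_i | w_{i-1}) = P(c_i | c_{i-1}) × P(w_i | c_i). A minimal sketch with hypothetical classes and counts (none of these figures come from the thesis):

```python
# Toy class-based bigram model. All classes and counts below are
# invented for illustration; none come from the thesis.
word_class = {"the": "DET", "a": "DET", "dog": "N", "cat": "N", "runs": "V"}

# Class-transition and word-emission counts from an imaginary corpus.
class_bigrams = {("DET", "N"): 8, ("N", "V"): 5, ("V", "DET"): 2}
class_unigrams = {"DET": 10, "N": 10, "V": 5}
word_in_class = {"the": 7, "a": 3, "dog": 6, "cat": 4, "runs": 5}

def class_bigram_prob(prev_word, word):
    """P(w_i | w_{i-1}) = P(c_i | c_{i-1}) * P(w_i | c_i)."""
    c_prev, c = word_class[prev_word], word_class[word]
    p_class = class_bigrams.get((c_prev, c), 0) / class_unigrams[c_prev]
    p_word = word_in_class[word] / class_unigrams[c]
    return p_class * p_word

p = class_bigram_prob("the", "dog")  # P(N|DET) * P(dog|N) = 0.8 * 0.6
```

Sharing statistics across words of the same class is what lets such models cope with sparse conversational data, which is why they serve as the comparison point in the thesis.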
15

McGreevy, Michael. "Statistical language modelling for large vocabulary speech recognition." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16444/.

Full text
Abstract:
The move towards larger vocabulary Automatic Speech Recognition (ASR) systems places greater demands on language models. In a large vocabulary system, acoustic confusion is greater, thus there is more reliance placed on the language model for disambiguation. In addition to this, ASR systems are increasingly being deployed in situations where the speaker is not conscious of their interaction with the system, such as in recorded meetings and surveillance scenarios. This results in more natural speech, which contains many false starts and disfluencies. In this thesis we investigate a novel approach to the modelling of speech corrections. We propose a syntactic model of speech corrections, and seek to determine if this model can improve on the performance of standard language modelling approaches when applied to conversational speech. We investigate a number of related variations to our basic approach and compare these approaches against the class-based N-gram. We also investigate the modelling of styles of speech. Specifically, we investigate whether the incorporation of prior knowledge about sentence types can improve the performance of language models. We propose a sentence mixture model based on word-class N-grams, in which the sentence mixture models and the word-class membership probabilities are jointly trained. We compare this approach with word-based sentence mixture models.
16

Botha, Jan Abraham. "Probabilistic modelling of morphologically rich languages." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:8df7324f-d3b8-47a1-8b0b-3a6feb5f45c7.

Full text
Abstract:
This thesis investigates how the sub-structure of words can be accounted for in probabilistic models of language. Such models play an important role in natural language processing tasks such as translation or speech recognition, but often rely on the simplistic assumption that words are opaque symbols. This assumption does not fit morphologically complex language well, where words can have rich internal structure and sub-word elements are shared across distinct word forms. Our approach is to encode basic notions of morphology into the assumptions of three different types of language models, with the intention that leveraging shared sub-word structure can improve model performance and help overcome data sparsity that arises from morphological processes. In the context of n-gram language modelling, we formulate a new Bayesian model that relies on the decomposition of compound words to attain better smoothing, and we develop a new distributed language model that learns vector representations of morphemes and leverages them to link together morphologically related words. In both cases, we show that accounting for word sub-structure improves the models' intrinsic performance and provides benefits when applied to other tasks, including machine translation. We then shift the focus beyond the modelling of word sequences and consider models that automatically learn what the sub-word elements of a given language are, given an unannotated list of words. We formulate a novel model that can learn discontiguous morphemes in addition to the more conventional contiguous morphemes that most previous models are limited to. This approach is demonstrated on Semitic languages, and we find that modelling discontiguous sub-word structures leads to improvements in the task of segmenting words into their contiguous morphemes.
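One way to picture how a distributed model can link morphologically related words is to compose a word's vector from the vectors of its morphemes. The embeddings below are hypothetical, and the purely additive scheme is a simplification, not the thesis's actual model:

```python
import numpy as np

# Hypothetical 3-dimensional morpheme embeddings; the values, and the
# purely additive composition, are simplifying assumptions.
morpheme_vecs = {
    "un":   np.array([ 1.0, -0.5,  0.0]),
    "help": np.array([ 0.2,  0.8,  0.3]),
    "ful":  np.array([-0.1,  0.1,  0.9]),
}

def word_vector(morphemes):
    """Compose a word representation as the sum of its morpheme vectors,
    so related forms share components instead of being opaque symbols."""
    return sum(morpheme_vecs[m] for m in morphemes)

helpful = word_vector(["help", "ful"])
unhelpful = word_vector(["un", "help", "ful"])
# The two forms differ exactly by the "un" morpheme vector.
```

Because "helpful" and "unhelpful" share the "help" and "ful" components, evidence about one form informs the other, which is the sparsity argument the abstract makes.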
17

Burkhardt, Rainer. "UML - Unified Modelling Language : objektorientierte Modellierung für die Praxis /." Bonn [u.a.] : Addison-Wesley, 1997. http://www.gbv.de/dms/ilmenau/toc/243054106.PDF.

Full text
18

Omar, Nazlia. "Heuristics-based entity-relationship modelling through natural language processing." Thesis, University of Ulster, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412132.

Full text
19

Rojo, Vicente Guerrero. "MML, a modelling language with dynamic selection of methods." Thesis, University of Sussex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285103.

Full text
20

Theodoulidis, Charalampos I. "A declarative specification language for temporal database applications." Thesis, University of Manchester, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316676.

Full text
21

Thomas, Kavita E. "Modelling Correction Signalled by "But" in Dialogue." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/1030/.

Full text
Abstract:
Claiming that cross-speaker "but" can signal correction in dialogue, we start by describing the types of corrections "but" can communicate by focusing on the Speech Act (SA) communicated in the previous turn and address the ways in which "but" can correct what is communicated.
We address whether "but" corrects the proposition, the direct SA or the discourse relation communicated in the previous turn. We will also briefly address other relations signalled by cross-turn "but". After presenting a typology of the situations "but" can correct, we will address how these corrections can be modelled in the Information State model of dialogue, motivating this work by showing how it can be used to potentially avoid misunderstandings. We wrap up by showing how the model presented here updates beliefs in the Information State representation of the dialogue and can be used to facilitate response deliberation.
22

Wakelin, Andrew. "A database query language for operations on graphical objects." Thesis, Abertay University, 1988. https://rke.abertay.ac.uk/en/studentTheses/826893af-0377-4ec6-a09a-6a5bd246df28.

Full text
Abstract:
The motivation for this work arose from the recognised inability of relational databases to store and manipulate data that is outside normal commercial applications (e.g. graphical data). The published work in this area is described with respect to the major problems of representation and manipulation of complex data. A general purpose data model, called GDB, that successfully tackles these major problems is developed from a formal specification in ML and is implemented using the PRECI/C database system. This model uses three basic graphical primitives (line segments, plane surfaces - facets, and volume elements - tetrons) to construct graphical objects, and it is shown how user-designed primitives can be included. It is argued that graphical database query languages should be designed to be application-specific and that the user should be protected from the relational algebra which is the basis of the database operations. Such a base language (an extended version of DEAL) is presented which is capable of performing the necessary graphical manipulation by the use of recursive functions and views. The need for object hierarchies is established and the power of the DEAL language is shown to be necessary to handle such complex structures. The importance of integrity constraints is discussed and some ideas for the provision of user-defined constraints are put forward.
23

Balram, Shivanand. "Collaborative GIS process modelling using the Delphi method, systems theory and the unified modelling language (UML)." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=85881.

Full text
Abstract:
Efforts to resolve environmental planning and decision-making conflicts usually focus on participant involvement, mutual understanding of the problem situation, evaluation criteria identification, data availability, and potential alternative solutions. However, as the alternatives become less distinct and participant values more diverse, intensified negotiations and more data are usually required for meaningful planning and decision-making. Consequently, questions such as "What collaborative spatial decision making design is best for a given context?" "How can the values and needs of stakeholders be integrated into the planning process?" and "How can we learn from decision making experiences and understanding of the past?" are crucial considerations. Answers to these questions can be developed around the analytic and discursive approaches that transform diffused subjective judgments into systematic consensus-oriented resolutions.
This dissertation examines the above issues through the design, implementation, and assessment of the Collaborative Spatial Delphi (CSD) Methodology. The CSD methodology facilitates spatial thinking and discursive strategies to describe the complex social-technical dynamics associated with the knowledge-structuring-consensus nexus of the participation process. The CSD methodology describes this nexus by synthesizing research findings from knowledge management, focus group theory, systems theory, integrated assessment, visualization and exploratory analysis, and transformative learning all represented within a collaborative geographic information system (GIS) framework.
The CSD methodology was implemented in multiple contexts. Its use in two contexts - strategic planning and management of urban green spaces in Montreal (Canada), and priority setting for North American biodiversity conservation - is reported in detail in this dissertation. The summative feedback from all the CSD planning workshops helped to incrementally improve the design of the CSD process. This dissertation also reports on the design and use of questionnaire surveys to incorporate local realities into planning, as well as the development of an evaluation index to assess the face validity and effectiveness of the CSD process from the perspective of workshop participants.
The accumulated evidence from the CSD implementations suggests that many core issues exist across spatial problem solving situations. Thus, the design and specification of a core collaborative process model provides benefits for knowledge exchange. General systems theory was used to classify the core technical components of the collaborative GIS design, and soft systems theory was used to characterize the human activity dynamics. Object oriented principles enabled the generation of a flexible domain model, and the unified modelling language (UML) visually described the collaborative process. The CSD methodology is used as a proof of concept.
This dissertation contributes to knowledge in the general areas of Geography, Geographic information systems and science, and Environmental decision making. The specific contributions are threefold. First, the CSD provides a synthesis of multi-disciplinary theories and a tested tool for environmental problem solving. Second, the CSD facilitates a fusion of local and technical knowledge for more realistic consensus planning outcomes. Third, an empirical-theoretical visual formalism of the CSD allows for process knowledge standardization and sharing across problem solving situations.
24

Bauer, Kerstin [Verfasser]. "A New Modelling Language for Cyber-physical Systems / Kerstin Bauer." München : Verlag Dr. Hut, 2012. http://d-nb.info/1021072710/34.

Full text
25

Marquard, Stephen. "Improving searchability of automatically transcribed lectures through dynamic language modelling." Thesis, University of Cape Town, 2012. http://pubs.cs.uct.ac.za/archive/00000846/.

Full text
Abstract:
Recording university lectures through lecture capture systems is increasingly common. However, a single continuous audio recording is often unhelpful for users, who may wish to navigate quickly to a particular part of a lecture, or locate a specific lecture within a set of recordings. A transcript of the recording can enable faster navigation and searching. Automatic speech recognition (ASR) technologies may be used to create automated transcripts, to avoid the significant time and cost involved in manual transcription. Low accuracy of ASR-generated transcripts may however limit their usefulness. In particular, ASR systems optimized for general speech recognition may not recognize the many technical or discipline-specific words occurring in university lectures. To improve the usefulness of ASR transcripts for the purposes of information retrieval (search) and navigating within recordings, the lexicon and language model used by the ASR engine may be dynamically adapted for the topic of each lecture. A prototype is presented which uses the English Wikipedia as a semantically dense, large language corpus to generate a custom lexicon and language model for each lecture from a small set of keywords. Two strategies for extracting a topic-specific subset of Wikipedia articles are investigated: a naïve crawler which follows all article links from a set of seed articles produced by a Wikipedia search from the initial keywords, and a refinement which follows only links to articles sufficiently similar to the parent article. Pair-wise article similarity is computed from a pre-computed vector space model of Wikipedia article term scores generated using latent semantic indexing. The CMU Sphinx4 ASR engine is used to generate transcripts from thirteen recorded lectures from Open Yale Courses, using the English HUB4 language model as a reference and the two topic-specific language models generated for each lecture from Wikipedia. 
Three standard metrics – Perplexity, Word Error Rate and Word Correct Rate – are used to evaluate the extent to which the adapted language models improve the searchability of the resulting transcripts, and in particular improve the recognition of specialist words. Ranked Word Correct Rate is proposed as a new metric better aligned with the goals of improving transcript searchability and specialist word recognition. Analysis of recognition performance shows that the language models derived using the similarity-based Wikipedia crawler outperform models created using the naïve crawler, and that transcripts using similarity-based language models have better perplexity and Ranked Word Correct Rate scores than those created using the HUB4 language model, but worse Word Error Rates. It is concluded that English Wikipedia may successfully be used as a language resource for unsupervised topic adaptation of language models to improve recognition performance for better searchability of lecture recording transcripts, although possibly at the expense of other attributes such as readability.
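Of the metrics above, Word Error Rate is the standard edit-distance measure and can be sketched as follows (Ranked Word Correct Rate is the thesis's own proposal and is not reproduced here):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed as a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # deleting all of ref[:i]
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # inserting all of hyp[:j]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match / substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Two reference words are missing from the hypothesis: WER = 2/6.
wer = word_error_rate("the cat sat on the mat", "the cat sat mat")
```

A transcript can improve on searchability measures (such as recognising more specialist words) while its overall WER worsens, which is exactly the trade-off the evaluation reports.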
26

Khowaja, Zohra Ahsan. "Structural domain modelling for policy language specialization with conflict analysis." Thesis, University of Leicester, 2012. http://hdl.handle.net/2381/10994.

Full text
Abstract:
Policies are descriptive and provide information which can be used to modify the behaviour of a system without the need for recompilation and redeployment. They are usually written in a policy definition language which allows end users to specify their requirements, preferences and constraints. Policies are used in many software application areas: network management, telecommunications, security, and access control are some typical examples. Ponder, KAoS, Rein, XACML, and WSPL are examples of policy definition languages. These languages are usually targeted at a specific domain, hence there is a plethora of languages. APPEL (the Adaptable Programmable Policy Environment Language) [69] has followed a different approach: it is a generic policy description language conceived with a clear separation between the core language and its specialization for concrete domains. So far, there has not been any formal method for the extension and domain specialization of the APPEL policy language. Policy conflict can occur when a new or a modified policy is deployed in a policy server, which leads to unspecified behaviour. To make policy-based systems conflict-free, it is necessary to detect and resolve conflicts before they occur; otherwise the intended behaviour of a policy cannot be guaranteed. We introduce a structural modelling approach to specialize the policy language for different domains, implemented in the VIATRA2 graph transformation tool. This approach is applied to APPEL. Our method for conflict analysis is based on the modelling methodology. As conflicts depend on domain knowledge, it is sensible to use this knowledge for conflict analysis. The identified conflicting actions are then encoded in the Alloy model checker, which confirms the existence of actual and potential conflicts.
27

Muresan, Gheorghe. "Using document clustering and language modelling in mediated information retrieval." Thesis, Robert Gordon University, 2002. http://hdl.handle.net/10059/623.

Full text
Abstract:
Our work addresses a well documented problem: users are frequently unable to articulate a query that clearly and comprehensively expresses their information need. This can be attributed to the information need being too ambiguous and not clearly defined in the user's mind, to a lack of knowledge of the domain of interest on the part of the user, to a lack of understanding of a retrieval system's conceptual model, or to an inability to use a certain query syntax. This thesis proposes a software tool that emulates the human search mediator. It helps a user explore a domain of interest, learn its structure, terminology and key concepts, and clarify and refine an information need. It can also help a user generate high-quality queries for searching the World Wide Web or other such large and heterogeneous document collections. Our work was inspired by library studies which have highlighted the role of the librarian in helping the user explore her information need, define the problem to be solved, articulate a formulation of the information need and adapt it for the retrieval system at hand in order to get information. Our approach, mediated access through a clustered collection, is based on an information access environment in which the user can explore a relatively small, well structured, pre-clustered document collection covering a particular subject domain, in order to understand the concepts encompassed and to clarify and refine her information need. At the same time, the user can ostensively indicate clusters and documents of interest so that the system builds a model of the user's topic of interest. Based on this model, the system assists and guides the user's exploration, or generates `mediated queries' that can be used to search other collections. We present the design and evaluation of WebCluster, a system that reifies the concept of mediated retrieval. 
Additionally, a variety of mediation experiments are presented, which provide guidelines as to which mediation strategies are more appropriate for different types of tasks. A set of experiments is presented that evaluates document clustering's capacity to group together topical documents and support mediation. In this context we propose and experimentally test a new formulation for the cluster hypothesis. We also look at the ability of language models to convey content, to represent topics and to highlight specific concepts in a given context. They are also successfully applied to generate flexible, task-dependent cluster representatives for supporting exploration through browsing and searching, respectively. Our experimental results show that mediation has the potential to significantly improve user queries and consequently the retrieval effectiveness.
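The language-model cluster representatives described in this abstract can be illustrated with a small sketch (not code from the thesis; the function name, the Dirichlet smoothing and the toy data are assumptions for illustration). Terms are ranked by how much more probable they are under the cluster's language model than under the collection's:

```python
import math
from collections import Counter

def cluster_representative(cluster_docs, collection_docs, k=5, mu=10):
    """Rank terms for a cluster by the log-ratio of their Dirichlet-smoothed
    cluster language-model probability to their collection probability,
    returning the top-k terms as a cluster representative."""
    cluster_tf = Counter(w for d in cluster_docs for w in d)
    coll_tf = Counter(w for d in collection_docs for w in d)
    n_cluster = sum(cluster_tf.values())
    n_coll = sum(coll_tf.values())
    scores = {}
    for w, tf in cluster_tf.items():
        p_coll = coll_tf[w] / n_coll                       # collection model
        p_cluster = (tf + mu * p_coll) / (n_cluster + mu)  # smoothed cluster model
        scores[w] = math.log(p_cluster / p_coll)
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```

Terms that are frequent in the cluster but common in the whole collection score low, so the representative favours topically discriminative vocabulary.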
APA, Harvard, Vancouver, ISO, and other styles
28

Ghemri, Lila. "Cognitive modelling in an intelligent tutoring system for second language." Thesis, University of Bristol, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Koppelman, Joshua D. (Joshua David). "A statistical approach to language modelling for the ATIS problem." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36601.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 62-65).
by Joshua D. Koppelman.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
30

Scheffler, Carl. "Applied Bayesian inference : natural language modelling and visual feature tracking." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kamma, Aditya. "An Approach to Language Modelling for Intelligent Document Retrieval System." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Porayska-Pomsta, Kaska. "Influence of situational context on language production : modelling teachers' corrective responses." Thesis, University of Edinburgh, 2004. http://hdl.handle.net/1842/561.

Full text
Abstract:
Natural language is characterised by enormous linguistic variation (e.g., Fetzer (2003)). Such variation is not random, but is determined by a number of contextual factors. These factors encapsulate the socio-cultural conventions of a speech community and dictate the socially acceptable, i.e. polite, use of language. Producing polite language may not always be a trivial task. The ability to assess a situation with respect to a hearer’s social, cultural or emotional needs constitutes a crucial facet of a speaker’s social and linguistic competence. It is surprising then that it is also a facet which, to date, has received very little attention from researchers in the natural language generation community. Linguistic variation occurs in all linguistic sub-domains including the language of education (Person et al., 1995). Thanks to being relatively more constrained (and hence more predictable with respect to its intentional aspects than normal conversations), teachers’ language is taken in this thesis as a starting point for building a formal, computational model of language generation based on the theory of linguistic politeness. To date, the most formalised theory of linguistic politeness is that by Brown and Levinson (1987), in which face constitutes the central notion. With its two dimensions of Autonomy and Approval, face can be used to characterise different linguistic choices available to speakers in a systematic way. In this thesis, the basic idea of face is applied in the analysis of teachers’ corrective responses produced in real one-to-one and classroom dialogues, and it is redefined to suit the educational context. A computational model of selecting corrective responses is developed which demonstrates how the two dimensions of face can be derived from a situation and how they can be used to classify the many linguistic choices available to teachers. 
The model is fully implemented using a combination of naive Bayesian Networks and Case-Based Reasoning techniques. The evaluation confirms the validity of the model by demonstrating that politeness-based natural language generation in the context of teachers' corrective responses can be used to model linguistic variation, and that the resulting language is not significantly different from that produced by a human in identical situations.
APA, Harvard, Vancouver, ISO, and other styles
33

Adams, Nathan Grant. "A 2D visual language for rapid 3D scene design." Thesis, University of Canterbury. Computer Science and Software Engineering, 2009. http://hdl.handle.net/10092/3021.

Full text
Abstract:
Automatic recognition and digitization of the features found in raster images of 2D topographic maps has a long research history. Very little such work has focused on creating and working with alternatives to the classic isoline-based topographic map. This thesis presents a system that generates 3D scenes from a 2D diagram format designed for user friendliness, with more geometric expressiveness and lower ink usage than classic topographic maps. This thesis explains the rationale for and the structure of the system, and the difficulties encountered in constructing it. It then describes a user study to evaluate the language and the usability of its various features, and draws future research directions from it.
APA, Harvard, Vancouver, ISO, and other styles
34

Scott, Erin G. "Process algebra with layers : a language for multi-scale integration modelling." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/23516.

Full text
Abstract:
Multi-scale modelling and analysis is becoming increasingly important and relevant. Analysing the emergent properties that arise from the interactions between the scales of a multi-scale system is important in developing solutions. There is no universally adopted theoretical/computational framework or language for the construction of multi-scale models. Most modelling approaches are specific to the problem that they are addressing and use a hybrid combination of modelling languages to model specific scales. This thesis addresses whether process algebra can offer a unique opportunity in the definition and analysis of multi-scale models. In this thesis the generic Process Algebra with Layers (PAL) is defined: a language for multi-scale integration modelling. This work highlights the potential of process algebra to model multi-scale systems. PAL was designed based on features and challenges found from modelling a multi-scale system in an existing process algebra. The unique features of PAL are the layers: Population and Organism. The novel language modularises the spatial scales of the system into layers, therefore modularising the detail of each scale. An Organism can represent a molecule, organelle, cell, tissue, organ or any organism. An Organism is described by internal species. An internal species, depending on the scale of the Organism, can also represent a molecule, organelle, cell, tissue, organ or any organism. Populations hold specific types of Organism, for example, life stages, cell phases, infectious states and many more. The Population and Organism layers are integrated through mirrored actions. This novel language allows the clear definition of scales, and of the interactions within and between these scales, in one model. PAL can be applied to define a variety of multi-scale systems. PAL has been applied to two unrelated multi-scale system case studies to highlight the advantages of the generic novel language.
First, the effects of ocean acidification on the life stages of the Pacific oyster; second, the effects of DNA damage from cancer treatment on the length of the cell cycle and on cell population growth.
APA, Harvard, Vancouver, ISO, and other styles
35

Whittaker, Edward William Daniel. "Statistical language modelling for automatic speech recognition of Russian and English." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hunter, Gordon James Allan. "Statistical language modelling of dialogue material in the British national corpus." Thesis, University College London (University of London), 2004. http://discovery.ucl.ac.uk/1446734/.

Full text
Abstract:
Statistical language modelling may not only be used to uncover the patterns which underlie the composition of utterances and texts, but also to build practical language processing technology. Contemporary language applications in automatic speech recognition, sentence interpretation and even machine translation exploit statistical models of language. Spoken dialogue systems, where a human user interacts with a machine via a speech interface in order to get information, make bookings, complaints, etc., are examples of such systems which are now technologically feasible. The majority of statistical language modelling studies to date have concentrated on written text material (or read versions thereof). However, it is well known that dialogue is significantly different from written text in its lexical content and sentence structure. Furthermore, there are expected to be significant logical, thematic and lexical connections between successive turns within a dialogue, but "turns" are not generally meaningful in written text. There is therefore a need for statistical language modelling studies to be performed on dialogue, particularly with a longer-term aim of using such models in human-machine dialogue interfaces. In this thesis, I describe the studies I have carried out on statistically modelling the dialogue material within the British National Corpus (BNC) - a very large corpus of modern British English compiled during the 1990s. This thesis presents a general introductory survey of the field of automatic speech recognition, followed by an introduction to some standard techniques of statistical language modelling which will be employed later in the thesis. The structure of dialogue is discussed using some perspectives from linguistic theory, and some previous approaches (not necessarily statistical) to modelling dialogue are reviewed.
Then a qualitative description is given of the BNC and the dialogue data within it, together with some descriptive statistics relating to it and results from constructing simple trigram language models for both dialogue and text data. The main part of the thesis describes experiments on the application of statistical language models based on word caches, word "trigger" pairs, and turn clustering to the dialogue data. Several different approaches are used for each type of model. An analysis of the strengths and weaknesses of these techniques is then presented. The results of the experiments lead to a better understanding of how statistical language modelling might be applied to dialogue for the benefit of future language technologies.
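The trigram language models mentioned in this abstract can be sketched in a few lines (a minimal illustration with add-alpha smoothing, not the author's implementation; the function names and toy data are invented):

```python
from collections import Counter

def train_trigram(sentences):
    """Count trigrams and their bigram histories over padded sentences."""
    tri, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>", "<s>"] + s + ["</s>"]   # pad so every word has a history
        for i in range(2, len(toks)):
            tri[(toks[i - 2], toks[i - 1], toks[i])] += 1
            bi[(toks[i - 2], toks[i - 1])] += 1
    return tri, bi

def trigram_prob(tri, bi, w1, w2, w3, vocab_size, alpha=1.0):
    """Add-alpha smoothed conditional probability P(w3 | w1, w2)."""
    return (tri[(w1, w2, w3)] + alpha) / (bi[(w1, w2)] + alpha * vocab_size)
```

Real systems such as those studied in the thesis use more sophisticated smoothing and cache or trigger extensions, but the conditional-probability structure is the same.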
APA, Harvard, Vancouver, ISO, and other styles
37

Gerl, Armin. "Modelling of a privacy language and efficient policy-based de-identification." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI105.

Full text
Abstract:
Nowadays, users' personal information is of great interest to advertisers and to industry, who use it to better target their customers and to improve their offers. This often highly sensitive information needs to be protected in order to regulate its use. The GDPR is the European legislation, recently entered into force in May 2018, that aims to strengthen users' rights over the processing of their personal data. Among the key concepts of the GDPR is the definition of rules governing privacy by default and privacy by design. The possibility for each user to establish a personalised consent on how their personal data may be consumed is one of these concepts. Although these rules are clearly stated in the legal texts, they are difficult to implement because of the absence of tools for expressing them and applying them systematically, and differently, each time a user's personal information is requested for a given task by a given organisation. Applying these rules means adapting the use of personal data to the requirements of each user, by applying methods that prevent more information being revealed than desired (for example, anonymisation or pseudonymisation methods). The problem becomes more complex, however, when it comes to accessing the personal information of several users, coming from different sources and complying with heterogeneous standards, while additionally respecting each user's individual consent. The objective of this thesis is therefore to propose a framework for defining and applying rules that protect the user's privacy in accordance with the GDPR.
The first contribution of this work is the definition of the Layered Privacy Language (LPL), which makes it possible to express, personalise (for a user) and guide the application of privacy-preserving policies for the consumption of personal data. A particular feature of LPL is that it is understandable by the user, which facilitates the negotiation and then the deployment of personalised versions of privacy policies. The second contribution of the thesis is a method called Policy-based De-identification. This method enables the efficient application of privacy-protection rules in a multi-user data context governed by heterogeneous privacy standards, while respecting the protection choices made by each user. The performance evaluation of the proposed method shows a negligible computational overhead compared with the time needed to apply the data-protection methods.
The processing of personal information is omnipresent in our data-driven society, enabling personalized services, which are regulated by privacy policies. Although privacy policies are strictly defined by the General Data Protection Regulation (GDPR), no systematic mechanism is in place to enforce them. Especially if data is merged from several sources into a data-set with different privacy policies associated, the management of and compliance with all privacy requirements is challenging during the processing of the data-set. Privacy policies can vary due to different policies for each source or the personalization of privacy policies by individual users. Thus, the risk of negligent or malicious processing of personal data in defiance of privacy policies exists. To tackle this challenge, a privacy-preserving framework is proposed. Within this framework privacy policies are expressed in the proposed Layered Privacy Language (LPL), which allows legal privacy policies and privacy-preserving de-identification methods to be specified. The policies are enforced by a Policy-based De-identification (PD) process. The PD process enables efficient compliance with various privacy policies simultaneously while applying pseudonymization, personal privacy anonymization and privacy models for de-identification of the data-set. Thus, the privacy requirements of each individual privacy policy are enforced, filling the gap between legal privacy policies and their technical enforcement.
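The idea of enforcing a different de-identification method per user when processing a merged data-set, as in the Policy-based De-identification process described above, can be sketched as follows (a toy illustration; the policy vocabulary, record format and method names are invented and are not LPL's actual syntax):

```python
import hashlib

def deidentify(records, policies):
    """Apply each user's own privacy policy to a merged data-set.
    Unknown users fall back to the most protective option, suppression."""
    out = []
    for rec in records:
        method = policies.get(rec["user"], "suppress")
        r = dict(rec)                                   # leave the input untouched
        if method == "pseudonymize":
            # replace the identifier with a stable pseudonym
            r["name"] = hashlib.sha256(r["name"].encode()).hexdigest()[:8]
        elif method == "generalize":
            r["age"] = (r["age"] // 10) * 10            # coarsen age to a decade
        elif method == "suppress":
            r["name"], r["age"] = None, None            # remove the values entirely
        out.append(r)
    return out
```

Each record is processed under its own policy in a single pass, which is the per-user, multi-policy behaviour the abstract describes.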
APA, Harvard, Vancouver, ISO, and other styles
38

Andrés, Ferrer Jesús. "Statistical approaches for natural language modelling and monotone statistical machine translation." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/7109.

Full text
Abstract:
This thesis gathers several contributions to statistical pattern recognition and, more specifically, to various natural language processing tasks. Several well-known statistical techniques are revisited in this thesis, namely: parameter estimation, loss function design and statistical modelling. These techniques are applied to several natural language processing tasks such as document classification, natural language modelling and statistical machine translation. With regard to parameter estimation, we address the smoothing problem by proposing a new constrained-domain maximum likelihood estimation technique (CDMLE). The CDMLE technique avoids the need for the smoothing step that causes the loss of the properties of the maximum likelihood estimator. This technique is applied to document classification with the Naive Bayes classifier. The CDMLE technique is then extended to leaving-one-out maximum likelihood estimation and applied to language model smoothing. The results obtained on several natural language modelling tasks show an improvement in terms of perplexity. With regard to the loss function, the design of loss functions other than the 0-1 loss is studied carefully. The study focuses on those loss functions that, while retaining a decoding complexity similar to that of the 0-1 function, provide greater flexibility. We analyse and present several loss functions on a number of machine translation tasks and with several translation models. We also analyse some translation rules that stand out for practical reasons, such as the direct translation rule; and, likewise, we deepen the understanding of log-linear models, which are in fact particular cases of loss functions. Finally, several monotone translation models based on statistical modelling techniques are proposed.
Andrés Ferrer, J. (2010). Statistical approaches for natural language modelling and monotone statistical machine translation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7109
APA, Harvard, Vancouver, ISO, and other styles
39

Van, Wyk Desmond Eustin. "Virtual human modelling and animation for real-time sign language visualisation." University of the Western Cape, 2008. http://hdl.handle.net/11394/2998.

Full text
Abstract:
Magister Scientiae - MSc
This thesis investigates the modelling and animation of virtual humans for real-time sign language visualisation. Sign languages are fully developed natural languages used by Deaf communities all over the world. These languages are communicated in a visual-gestural modality by the use of manual and non-manual gestures and are completely different from spoken languages. Manual gestures include the use of hand shapes, hand movements, hand locations and orientations of the palm in space. Non-manual gestures include the use of facial expressions, eye-gazes, head and upper body movements. Both manual and non-manual gestures must be performed for sign languages to be correctly understood and interpreted. To effectively visualise sign languages, a virtual human system must have models of adequate quality and be able to perform both manual and non-manual gesture animations in real-time. Our goal was to develop a methodology and establish an open framework by using various standards and open technologies to model and animate virtual humans of adequate quality to effectively visualise sign languages. This open framework is to be used in a Machine Translation system that translates from a verbal language such as English to any sign language. Standards and technologies we employed include H-Anim, MakeHuman, Blender, Python and SignWriting. We found it necessary to adapt and extend H-Anim to effectively visualise sign languages. The adaptations and extensions we made to H-Anim include imposing joint rotational limits, developing flexible hands and the addition of facial bones based on the MPEG-4 Facial Definition Parameters facial feature points for facial animation.
By using these standards and technologies, we found that we could circumvent a few difficult problems, such as: modelling high quality virtual humans; adapting and extending H-Anim; creating a sign language animation action vocabulary; blending between animations in an action vocabulary; sharing animation action data between our virtual humans; and effectively visualising South African Sign Language.
South Africa
APA, Harvard, Vancouver, ISO, and other styles
40

Ciccone, Natalie A. "The measurement of stability in aphasia recovery: implications for language modelling." Thesis, Curtin University, 2003. http://hdl.handle.net/20.500.11937/1588.

Full text
Abstract:
Background: Performance stability is an implicit assumption within theoretical explanations of aphasia. The assumption is that when completing language processing tasks, performance will be stable from moment to moment and day to day. Theoretically, aphasia is most commonly viewed within a modular framework; that is, language processing is carried out by specific, specialised language processing modules. Aphasia is thought to result when one of these modules is damaged, leading to a unique pattern of performance results. Implicit to this view of aphasia is stability: once damaged, the module will no longer be accessed and any process using the module will be impaired. This theory of aphasia is widely held within both research and clinical communities and underlies many of our approaches to the assessment and treatment of aphasia. However, more recently researchers have been expressing doubts about the validity of assuming stability in aphasic performance. Instead, variability in performance is being reported and alternative explanations of aphasia are being provided. One of these considers aphasia to result from a reduction in, or the inefficient allocation of, cognitive resources. Aims: This research explored variability in aphasic performance, with the aim of examining variability over a range of tasks and time periods. Methods and Procedures: Eight aphasic and ten non-brain-damaged individuals participated in eight sessions. Within these sessions they completed a spontaneous language task, which contained four different narrative genres; a lexical decision task; and a simple reaction task. Performance on these tasks was examined for three different levels of variability: inter-session variability (across-session means for time measures), intra-session variability (across items for time measures) and inter-session variability (item-to-item accuracy for lexical decision).
The three different levels of variability examined performance on the same task across days and within the same task on the same day. To determine whether the change in the performance of aphasic individuals was in the same range and followed the same pattern of change and variation demonstrated by the non-brain-damaged participants, the pooled results of the non-brain-damaged individuals' performance were used to develop a 'normal' range of performance. Using the group's data, the results of each aphasic individual were then converted to a z-score. Outcomes and results: The results demonstrate that for all aphasic individuals, across the three tasks and three time periods examined, variability is a regular, if not universal, feature of aphasia. Conclusions: Stability in aphasic performance cannot be assumed. Instead, research and clinical approaches must establish stability, or consider the impact of variability, before conclusions about performance can be drawn. The presence of variability also calls into question the traditionally held view that aphasia results from the selective impairment of specialised language processing modules. Instead an alternative mechanism for impairment must be considered. The resource allocation view of aphasia was explored and found to explain the performance of the aphasic individuals in this study.
APA, Harvard, Vancouver, ISO, and other styles
41

Ciccone, Natalie A. "The measurement of stability in aphasia recovery : implications for language modelling /." Curtin University of Technology, School of Psychology, 2003. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=14631.

Full text
Abstract:
Background: Performance stability is an implicit assumption within theoretical explanations of aphasia. The assumption is that when completing language processing tasks, performance will be stable from moment to moment and day to day. Theoretically, aphasia is most commonly viewed within a modular framework; that is, language processing is carried out by specific, specialised language processing modules. Aphasia is thought to result when one of these modules is damaged, leading to a unique pattern of performance results. Implicit to this view of aphasia is stability: once damaged, the module will no longer be accessed and any process using the module will be impaired. This theory of aphasia is widely held within both research and clinical communities and underlies many of our approaches to the assessment and treatment of aphasia. However, more recently researchers have been expressing doubts about the validity of assuming stability in aphasic performance. Instead, variability in performance is being reported and alternative explanations of aphasia are being provided. One of these considers aphasia to result from a reduction in, or the inefficient allocation of, cognitive resources. Aims: This research explored variability in aphasic performance, with the aim of examining variability over a range of tasks and time periods. Methods and Procedures: Eight aphasic and ten non-brain-damaged individuals participated in eight sessions. Within these sessions they completed a spontaneous language task, which contained four different narrative genres; a lexical decision task; and a simple reaction task.
Performance on these tasks was examined for three different levels of variability: inter-session variability (across-session means for time measures), intra-session variability (across items for time measures) and inter-session variability (item-to-item accuracy for lexical decision). The three different levels of variability examined performance on the same task across days and within the same task on the same day. To determine whether the change in the performance of aphasic individuals was in the same range and followed the same pattern of change and variation demonstrated by the non-brain-damaged participants, the pooled results of the non-brain-damaged individuals' performance were used to develop a 'normal' range of performance. Using the group's data, the results of each aphasic individual were then converted to a z-score. Outcomes and results: The results demonstrate that for all aphasic individuals, across the three tasks and three time periods examined, variability is a regular, if not universal, feature of aphasia. Conclusions: Stability in aphasic performance cannot be assumed. Instead, research and clinical approaches must establish stability, or consider the impact of variability, before conclusions about performance can be drawn. The presence of variability also calls into question the traditionally held view that aphasia results from the selective impairment of specialised language processing modules. Instead an alternative mechanism for impairment must be considered. The resource allocation view of aphasia was explored and found to explain the performance of the aphasic individuals in this study.
APA, Harvard, Vancouver, ISO, and other styles
42

Spike, Matthew John. "Minimal requirements for the cultural evolution of language." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25930.

Full text
Abstract:
Human language is both a cognitive and a cultural phenomenon. Any evolutionary account of language, then, must address both biological and cultural evolution. In this thesis, I give a mainly cultural evolutionary answer to two main questions: firstly, how do working systems of learned communication arise in populations in the absence of external or internal guidance? Secondly, how do those communication systems take on the fundamental structural properties found in human languages, i.e. systematicity at both a meaningless and meaningful level? A large, multi-disciplinary literature exists for each question, full of apparently conflicting results and analyses. My aim in this thesis is to survey this work, so as to find any commonalities and bring this together in order to provide a minimal account of the cultural evolution of language. The first chapter of this thesis takes a number of well-established models of the emergence of signalling systems. These are taken from several different fields: evolutionary linguistics, evolutionary game theory, philosophy, artificial life, and cognitive science. By using a common framework to directly compare these models, I show that three underlying commonalities determine the ability of any population of agents to reliably develop optimal signalling. The three requirements are that i) agents can create and transfer referential information, ii) there is a systemic bias against ambiguity, and iii) some mechanism leading to information loss exists. Following this, I extend the model to determine the effects of including referential uncertainty. I show that, for the group of models to which this applies, this places certain extra restrictions on the three requirements stated above. In the next chapter, I use an information-theoretic framework to construct a novel analysis of signalling games in general, and rephrase the three requirements in more formal terms. 
I then show that these three criteria can be used as a diagnostic for determining whether any given signalling game will lead to optimal signalling, without the need for repeated simulations. In the final, much longer chapter, I address the topic of duality of patterning. This involves a lengthy review of the literature on duality of patterning, combinatoriality, and compositionality. I then argue that both levels of systematicity can be seen as a functional adaptation which maintains communicative accuracy in the face of noisy processes at different levels of analysis. I support this with results from a new, minimally specified model, which also clarifies and informs a number of long-fought debates within the field.
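The signalling games analysed in this abstract are typically variants of the Lewis signalling game. A minimal sketch under simple Roth-Erev urn-style reinforcement (an illustration of the general setup, not the thesis's own model or parameters) shows how referential information transfer, reinforcement of unambiguous mappings, and information loss (failures are never reinforced) can drive two agents toward reliable signalling:

```python
import random

def lewis_signalling(n=2, rounds=5000, seed=0):
    """Simulate a sender and receiver learning an n-state Lewis signalling
    game by reinforcement; returns the overall communicative success rate."""
    rng = random.Random(seed)
    send = [[1.0] * n for _ in range(n)]   # sender urns: state -> signal weights
    recv = [[1.0] * n for _ in range(n)]   # receiver urns: signal -> act weights

    def draw(weights):
        # sample an index in proportion to its weight
        r = rng.random() * sum(weights)
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                return i
        return len(weights) - 1

    successes = 0
    for _ in range(rounds):
        state = rng.randrange(n)           # nature picks a state
        signal = draw(send[state])         # sender transfers referential information
        act = draw(recv[signal])           # receiver acts on the signal
        if act == state:                   # only successes are reinforced, so
            send[state][signal] += 1       # failed mappings decay in relative
            recv[signal][act] += 1         # weight (the information-loss mechanism)
            successes += 1
    return successes / rounds
```

With two states and two signals this kind of dynamic typically converges well above the 0.5 chance baseline within a few thousand rounds.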
APA, Harvard, Vancouver, ISO, and other styles
43

Chan, Oscar. "Prosodic features for a maximum entropy language model." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0244.

Full text
Abstract:
A statistical language model attempts to characterise the patterns present in a natural language as a probability distribution defined over word sequences. Typically, such a model is trained using word co-occurrence statistics from a large sample of text. In some language modelling applications, such as automatic speech recognition (ASR), the availability of acoustic data provides an additional source of knowledge. This contains, amongst other things, the melodic and rhythmic aspects of speech referred to as prosody. Although prosody has been found to be an important factor in human speech recognition, its use in ASR has been limited. The goal of this research is to investigate how prosodic information can be employed to improve the language modelling component of a continuous speech recognition system. Because prosodic features are largely suprasegmental, operating over units larger than the phonetic segment, the language model is an appropriate place to incorporate such information. The prosodic features and standard language model features are combined under the maximum entropy framework, which provides an elegant solution to modelling information obtained from multiple, differing knowledge sources. We derive features for the model based on perceptually transcribed Tones and Break Indices (ToBI) labels, and analyse their contribution to the word recognition task. While ToBI has a solid foundation in linguistic theory, the need for human transcribers conflicts with the statistical model's requirement for a large quantity of training data. We therefore also examine the applicability of features which can be automatically extracted from the speech signal. We develop representations of an utterance's prosodic context using fundamental frequency, energy and duration features, which can be directly incorporated into the model without the need for manual labelling.
Dimensionality reduction techniques are also explored with the aim of reducing the computational costs associated with training a maximum entropy model. Experiments on a prosodically transcribed corpus show that small but statistically significant reductions to perplexity and word error rates can be obtained by using both manually transcribed and automatically extracted features.
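The maximum entropy framework mentioned in this abstract is equivalent to a conditional (multinomial logistic) model, which makes it easy to combine heterogeneous knowledge sources in one parameter vector. The sketch below is a self-contained toy version, not the thesis's system: the "n-gram" indicator features and "prosodic" continuous features are synthetic stand-ins, and the candidate-word vocabulary, dimensions, and regularisation constant are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: predict one of V candidate words from a concatenation of
# sparse binary indicators (standing in for n-gram features) and
# continuous values (standing in for f0/energy/duration features).
V, n_ngram, n_pros, N = 4, 6, 3, 400
X = np.hstack([
    (rng.random((N, n_ngram)) < 0.3).astype(float),  # sparse indicators
    rng.normal(size=(N, n_pros)),                     # continuous prosody
])
true_W = rng.normal(size=(X.shape[1], V))
y = np.argmax(X @ true_W + rng.gumbel(size=(N, V)), axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Maximum entropy training = gradient ascent on the log-likelihood of
# P(w | h) ∝ exp(lambda · f(w, h)), with a small L2 (Gaussian prior) term.
W = np.zeros((X.shape[1], V))
Y = np.eye(V)[y]
for _ in range(300):
    P = softmax(X @ W)
    grad = X.T @ (Y - P) / N - 0.01 * W
    W += 0.5 * grad

# Training-set perplexity; the uniform baseline here is V = 4.
perplexity = np.exp(-np.mean(np.log(softmax(X @ W)[np.arange(N), y])))
```

Because both feature groups live in one feature vector, the same training loop weighs lexical and prosodic evidence jointly, which is the appeal of the framework the abstract describes.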
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, David Y. W. "Modelling variation in spoken and written language : the multi-dimensional approach revisited." Thesis, Lancaster University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Vieira, Pinto Rodrigo Lamas. "A logic-based modelling language and integer-programming framework for multicriteria optimisation." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Naylor, Patrick Joseph. "A generic risk and protection integration model in the unified modelling language." Thesis, Liverpool John Moores University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Chevitarese, Daniel Salles. "Neuronal Circuit Specification Language and Tools for Modelling the Virtual Fly Brain." Pontifícia Universidade Católica do Rio de Janeiro, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=29820@1.

Full text
Abstract:
The brain of the fruit fly Drosophila melanogaster is an attractive model system for studying the logic of neural circuit function because it implements complex sensory-driven behavior with a nervous system comprising a number of neural components that is five orders of magnitude smaller than that of vertebrates. Analysis of the fly's connectome, or neural connectivity map, using the extensive toolbox of genetic manipulation techniques developed for Drosophila, has revealed that its brain comprises about 40 distinct modular subdivisions called Local Processing Units (LPUs), each of which is characterized by a unique internal information processing circuitry. LPUs can be regarded as the functional building blocks of the fly brain, since almost all identified LPUs have been found to correspond to anatomical regions of the fly brain associated with specific functional subsystems such as sensation and locomotion. We can therefore emulate the entire fly brain by integrating its constituent LPUs. Although our knowledge of the internal circuitry of many LPUs is far from complete, analyses of those LPUs comprised by the fly's olfactory and vision systems suggest the existence of repeated canonical sub-circuits that are integral to the information processing functions provided by each LPU. The development of plausible LPU models therefore requires the ability to specify and instantiate sub-circuits without explicit reference to their constituent neurons and internal connections. To this end, this work presents a framework to model and specify brain circuits, providing a neural circuit specification language called CircuitML, a Python API to better handle CircuitML files, and an optimized connector to Neurokernel for the simulation of those LPUs on GPU. CircuitML has been designed as an extension to NeuroML (NML), which is an XML-based neural model description language that provides constructs for defining sub-circuits that comprise neural primitives.
Sub-circuits are endowed with interface ports that enable their connection to other sub-circuits via neural connectivity patterns.
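The idea of sub-circuits exposed through interface ports, connected without naming internal neurons, can be sketched as an XML fragment built with Python's standard library. To be clear, the element and attribute names below are invented for exposition and are not the actual CircuitML or NeuroML schema; only the general pattern (a reusable sub-circuit, ports, and a port-to-port connectivity pattern) reflects what the abstract describes.

```python
import xml.etree.ElementTree as ET

# Hypothetical sub-circuit definition: a retinal cartridge exposing two
# interface ports. Tag/attribute names are illustrative, not CircuitML.
sub = ET.Element("subCircuit", id="cartridge")
ET.SubElement(sub, "port", id="photoreceptor_in", direction="input")
ET.SubElement(sub, "port", id="lamina_out", direction="output")

# A connectivity pattern wiring the sub-circuit by its ports only,
# with no explicit reference to its constituent neurons.
net = ET.Element("network")
net.append(sub)
conn = ET.SubElement(net, "connect", pattern="one-to-one")
ET.SubElement(conn, "from", subCircuit="cartridge", port="lamina_out")
ET.SubElement(conn, "to", subCircuit="cartridge", port="photoreceptor_in")

xml_text = ET.tostring(net, encoding="unicode")
```

The payoff of this style, as the abstract notes, is that repeated canonical sub-circuits can be instantiated and composed at the level of ports and patterns rather than individual neurons and synapses.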
APA, Harvard, Vancouver, ISO, and other styles
48

Ding, Liya. "Modelling and Recognition of Manuals and Non-manuals in American Sign Language." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1237564092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Figl, Kathrin, Michael Derntl, and Sonja Kabicher. "Visual modelling and designing for cooperative learning and development of team competences." Inderscience Enterprises Ltd, 2009. http://epub.wu.ac.at/5649/1/b807.pdf.

Full text
Abstract:
This paper proposes a holistic approach to designing for the promotion of team and social competences in blended learning courses. Planning and modelling cooperative learning scenarios based on a domain specific modelling notation in the style of UML activity diagrams, and comparing evaluation results with planned outcomes allows for iterative optimization of a course's design. In a case study - a course on project management for computer science students - the instructional design including individual and cooperative learning situations was modelled. Specific emphasis was put on visualising the hypothesised development of team competences in the course design models. These models were subsequently compared to evaluation results obtained during the course. The results show that visual modelling of planned competence promotion enables more focused design, implementation and evaluation of collaborative learning scenarios.
APA, Harvard, Vancouver, ISO, and other styles
50

Bolton, Christie. "On the refinement of state-based and event-based models." Thesis, University of Oxford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
