Dissertations / Theses on the topic 'Pattern semantics'

To see the other types of publications on this topic, follow the link: Pattern semantics.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Pattern semantics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bergmair, Richard. "Monte Carlo semantics : robust inference and logical pattern processing with natural language text." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Barb, Adrian S. "Knowledge representation and exchange of visual patterns using semantic abstractions." Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/6674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2008.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on July 21, 2009). Includes bibliographical references.
3

Sarkar, Somwrita. "Acquiring symbolic design optimization problem reformulation knowledge: On computable relationships between design syntax and semantics." Thesis, The University of Sydney, 2009. http://hdl.handle.net/2123/5683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents a computational method for the inductive inference of explicit and implicit semantic design knowledge from the symbolic-mathematical syntax of design formulations, using an unsupervised pattern recognition and extraction approach. Existing research shows that AI and machine-learning-based design computation approaches either require high levels of knowledge engineering or large training databases to acquire problem reformulation knowledge. The method presented in this thesis addresses these methodological limitations. The thesis develops, tests, and evaluates ways in which the method may be employed for design problem reformulation. The method is based on the linear-algebraic factorization method of Singular Value Decomposition (SVD), dimensionality reduction, and similarity measurement through unsupervised clustering. The method calculates linear approximations of the associative patterns of symbol co-occurrences in a design problem representation to infer induced coupling strengths between variables, constraints and system components. Unsupervised clustering of these approximations is used to identify useful reformulations. These two components of the method automate a range of reformulation tasks that have traditionally required different solution algorithms. Example reformulation tasks that it performs include selection of linked design variables, parameters and constraints, design decomposition, modularity and integrative systems analysis, heuristically aiding design “case” identification, topology modeling and layout planning. The relationship between the syntax of design representation and the encoded semantic meaning is an open design theory research question. Based on the results of the method, the thesis presents a set of theoretical postulates on computable relationships between design syntax and semantics.
The postulates relate the performance of the method with empirical findings and theoretical insights provided by cognitive neuroscience and cognitive science on how the human mind engages in symbol processing and the resulting capacities inherent in symbolic representational systems to encode “meaning”. The performance of the method suggests that semantic “meaning” is a higher order, global phenomenon that lies distributed in the design representation in explicit and implicit ways. A one-to-one local mapping between a design symbol and its meaning, a largely prevalent approach adopted by many AI and learning algorithms, may not be sufficient to capture and represent this meaning. By changing the theoretical standpoint on how a “symbol” is defined in design representations, it was possible to use a simple set of mathematical ideas to perform unsupervised inductive inference of knowledge in a knowledge-lean and training-lean manner, for a knowledge domain that traditionally relies on “giving” the system complex design domain and task knowledge for performing the same set of tasks.
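The SVD-based pipeline described in the abstract above, a low-rank approximation of symbol co-occurrences followed by unsupervised clustering, can be illustrated with a minimal, self-contained sketch. The toy incidence matrix, the rank k, and the cosine-threshold grouping below are invented for illustration and are not Sarkar's actual formulation:

```python
import numpy as np

# Toy design-structure data: rows = constraints, columns = design variables.
# A 1 means the variable appears in that constraint (symbol co-occurrence).
# Variables 0,1 share constraints, as do 2,3 -- two latent "modules".
A = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
], dtype=float)

# Low-rank approximation via SVD: keep the k strongest singular directions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
coords = Vt[:k].T * s[:k]  # each row: a variable in the reduced space

# Crude unsupervised grouping: variables whose reduced-space vectors point
# the same way are treated as coupled (cosine-similarity threshold).
def cluster(vectors, thresh=0.9):
    groups = []
    for i, v in enumerate(vectors):
        for g in groups:
            w = vectors[g[0]]
            cos = v @ w / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-12)
            if cos > thresh:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

print(cluster(coords))  # -> [[0, 1], [2, 3]]
```

The two coupled variable groups fall out of the reduced representation without any labelled training data, which is the knowledge-lean flavour of inference the abstract describes.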
4

Ducros, Théo. "Reasoning in Descriptions Logics Augmented with Refreshing Variables." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2022. http://www.theses.fr/2022UCFAC113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Description logics have been studied and used in many knowledge-based systems. They make it possible not only to represent knowledge but also to reason about it. The subsumption relation, a hierarchical relation between concepts, is one of the most common reasoning tasks. Matching and unification generalise subsumption to concept descriptions involving variables. In this thesis, we study the problem of reasoning in description logics with variables. More precisely, we consider two kinds of semantics for variables in the context of the description logic EL. We fundamentally study two kinds of reasoning in this setting: matching and pattern containment. The matching problem is used as a mechanism for evaluating patterns over a knowledge base (i.e. computing the answers to a query that takes the form of a pattern), whereas pattern containment determines whether the answers of one pattern are included in the answers of another, regardless of the knowledge base. We show that both problems, matching and pattern containment, are EXPTIME-complete. The main technical results are obtained by establishing a correspondence between the logic and variable automata.
Description logics are a family of knowledge representation formalisms that have been widely investigated and used in knowledge-based systems. Their strength lies not only in their modelling assets but in their reasoning abilities. Reasoning takes the shape of mechanisms that make implicit knowledge explicit. One of the most common mechanisms is based on the subsumption relationship, a hierarchical relationship between concepts which states whether one concept is more general than another; the associated reasoning task is to decide the subsumption relationship between two concepts. Variables were introduced into description logics to answer the need to represent incomplete information. In this context, deciding subsumption evolved into two non-standard reasoning tasks known as matching and unification. Matching decides the subsumption relationship between a concept and a pattern (i.e. a concept expressed with variables); unification extends matching to the case where both entries are patterns. The semantics associated with these variables can be qualified as non-refreshing: assignments are fixed. In this thesis, we investigate reasoning with variables equipped with refreshing semantics, which allow a variable to be released and then given a new assignment. We define recursive pattern queries as terminologies that may contain variables, leading us to investigate the problem of answering recursive pattern queries over description logic ontologies. More specifically, we focus on the description logic EL. Recursive pattern queries are expressed in the logic ELRV, an extension of EL with variables equipped with refreshing semantics. We study the complexity of query answering and query containment in ELRV, two reasoning mechanisms that can be viewed as variants of matching and unification in the presence of refreshing variables.
Our main technical results are derived by establishing a correspondence between this logic and a variant of variable automata. While the upper bound is given by specific algorithms that are proven optimal, the lower bound is obtained by a reduction from the halting problem of alternating Turing machines, leading to both problems being EXPTIME-complete.
5

Danks, Warwick. "The Arabic verb : form and meaning in the vowel-lengthening patterns." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/961.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The research presented in this dissertation adopts an empirical Saussurean structuralist approach to elucidating the true meaning of the verb patterns characterised formally by vowel lengthening in Modern Standard Arabic (MSA). The verbal system as a whole is examined in order to place the patterns of interest (III and VI) in context, the complexities of Arabic verbal morphology are explored and the challenges revealed by previous attempts to draw links between form and meaning are presented. An exhaustive dictionary survey is employed to provide quantifiable data to empirically test the largely accepted view that the vowel lengthening patterns have mutual/reciprocal meaning. Finding the traditional explanation inadequate and prone to too many exceptions, alternative commonalities of meaning are similarly investigated. Whilst confirming the detransitivising function of the ta- prefix which derives pattern VI from pattern III, analysis of valency data also precludes transitivity as a viable explanation for pattern III meaning compared with the base form. Examination of formally similar morphology in certain nouns leads to the intuitive possibility that vowel lengthening has aspectual meaning. A model of linguistic aspect is investigated for its applicability to MSA and used to isolate the aspectual feature common to the majority of pattern III and pattern VI verbs, which is determined to be atelicity. A set of verbs which appear to be exceptional in that they are not attributable to atelic aspectual categories is found to be characterised by inceptive meaning and a three-phase model of event time structure is developed to include an inceptive verbal category, demonstrating that these verbs too are atelic. Thus the form-meaning relationship which is discovered is that the vowel lengthening verbal patterns in Modern Standard Arabic have atelic aspectual meaning.
6

Renau, Araque Irene. "Gramática y diccionario : las construcciones con se en las entradas verbales del diccionario de español como lengua extranjera." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/97047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This doctoral thesis addresses the uses of se, their treatment in current Romance dictionaries, and their representation in a learner's dictionary of Spanish as a foreign language. Its main objective is to propose a representation model for verbs that show these uses. To that end, the following aspects are addressed: the state of the art in grammar studies (chapter 2) and in lexicography (chapter 3); the representation of pronominal uses in current Romance dictionaries, in particular learner's dictionaries of Spanish as a second language (chapter 4); the systematic analysis of the uses of se in the corpus, approached from the perspective of Hanks' (2004) Theory of Norms and Exploitations and Corpus Pattern Analysis (chapters 5 and 6); and the elaboration of a model verbal dictionary entry containing uses of se for a dictionary of Spanish as a foreign language (chapter 7). The main results of the thesis are a database of verbs with pronominal uses (chapter 6, SCPA) and a prototype of 20 dictionary entries for the same verbs analysed with CPA (chapter 7).
This Ph.D. thesis studies the uses of the Spanish particle se, its treatment in current Romance dictionaries, and its representation in a dictionary for learners of Spanish as a foreign language. The main objective is to propose a model for the representation of verbs that present uses of se. To this end, the following aspects are analysed: a review of related work in grammar studies (chapter 2) and in lexicography (chapter 3); the representation of pronominal uses in current Romance dictionaries, particularly those for learners of Spanish as a second language (chapter 4); the systematic analysis of se in corpora from the perspective of Hanks' (2004) Theory of Norms and Exploitations and Corpus Pattern Analysis (chapters 5 and 6); and the elaboration of a model verbal entry containing uses of se for a dictionary of Spanish as a second language (chapter 7). The main results of the thesis are a database of Spanish pronominal verbs (chapter 6, Spanish CPA) and a prototype of 20 entries for the same verbs analysed with CPA (chapter 7).
7

Berglund, Jonny. "A Construction Grammar Approach to the Phrase." Thesis, Stockholm University, Stockholm University, Department of English, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

This essay adopts a construction grammar approach to the linguistic pattern why don’t you. It argues that the pattern can have two different senses: an interrogative sense and a suggestive sense. Further, it argues that the suggestive sense fits the definition of a construction described by construction grammar theory.

In other words, the linguistic pattern why don’t you can have a specific underlying semantics that cannot be reached by an examination of its formal pattern.

Keywords: Construct, Construction, Construction Grammar, Idiom, Interpretation, Linguistic Pattern, Marker, Underlying Semantics

8

Sarkar, Somwrita. "Acquiring symbolic design optimization problem reformulation knowledge." Connect to full text, 2009. http://hdl.handle.net/2123/5683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--University of Sydney, 2009.
Title from title screen (viewed November 13, 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the Faculty of Architecture, Design and Planning in the Faculty of Science. Includes graphs and tables. Includes bibliographical references. Also available in print form.
9

Soztutar, Enis. "Mining Frequent Semantic Event Patterns." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611007/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Especially with the wide use of dynamic page generation and richer user interaction on the Web, traditional web usage mining methods, which are based on the pageview concept, are of limited usability. To overcome the difficulty of capturing usage behaviour, we define the concept of semantic events. Conceptually, events are higher-level actions of a user in a web site that are technically independent of pageviews. Events are modelled as objects in the domain of the web site, with associated properties. A sample event from a video web site is the 'play video' event, with properties 'video', 'length of video', 'name of video', etc. When the event objects belong to the domain model of the web site's ontology, they are referred to as semantic events. In this work, we propose a new algorithm and an associated framework for mining patterns of semantic events from usage logs. We present a method for tracking and logging domain-level events of a web site, adding semantic information to events, an ordering of events with respect to their genericity, and an algorithm for computing sequences of frequent events.
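The core idea of mining frequent sequences of semantic events from usage logs can be illustrated with a toy sketch. The event names, session logs and support threshold below are invented, and this simple contiguous n-gram counter is a stand-in for, not a reproduction of, the thesis's algorithm:

```python
from collections import Counter

# Toy usage log: each session is an ordered list of domain-level events
# (higher-level than pageviews, e.g. on a video web site).
sessions = [
    ["search", "play_video", "rate_video"],
    ["search", "play_video", "share"],
    ["browse", "play_video", "rate_video"],
    ["search", "play_video", "rate_video"],
]

def frequent_sequences(logs, length=2, min_support=3):
    """Count contiguous event subsequences and keep the frequent ones."""
    counts = Counter()
    for session in logs:
        for i in range(len(session) - length + 1):
            counts[tuple(session[i:i + length])] += 1
    return {seq: n for seq, n in counts.items() if n >= min_support}

print(frequent_sequences(sessions))
# -> {('search', 'play_video'): 3, ('play_video', 'rate_video'): 3}
```

Only the event pairs that reach the support threshold survive, which is the sense in which a sequence of events is "frequent" in the abstract above.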
10

Rose, Tony Gerard. "Large vocabulary semantic analysis for text recognition." Thesis, Nottingham Trent University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333961.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Liu, Jingen. "Learning Semantic Features for Visual Recognition." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Visual recognition (e.g., object, scene and action recognition) is an active area of research in computer vision due to its increasing number of real-world applications such as video (image) indexing and search, intelligent surveillance, human-machine interaction, robot navigation, etc. Effective modeling of the objects, scenes and actions is critical for visual recognition. Recently, the bag-of-visual-words (BoVW) representation, in which the image patches or video cuboids are quantized into visual words (i.e., mid-level features) based on their appearance similarity using clustering, has been widely and successfully explored. The advantages of this representation are: no explicit detection of objects or object parts and their tracking are required; the representation is somewhat tolerant to within-class deformations; and it is efficient for matching. However, the performance of the BoVW is sensitive to the size of the visual vocabulary. Therefore, computationally expensive cross-validation is needed to find the appropriate quantization granularity. This limitation is partially due to the fact that the visual words are not semantically meaningful, which limits the effectiveness and compactness of the representation. To overcome these shortcomings, in this thesis we present a principled approach to learn a semantic vocabulary (i.e., high-level features) from a large amount of visual words (mid-level features). In this context, the thesis makes two major contributions. First, we have developed an algorithm to discover a compact yet discriminative semantic vocabulary. This vocabulary is obtained by grouping the visual words, based on their distribution in videos (images), into visual-word clusters. The mutual information (MI) between the clusters and the videos (images) depicts the discriminative power of the semantic vocabulary, while the MI between visual words and visual-word clusters measures the compactness of the vocabulary.
We apply the information bottleneck (IB) algorithm to find the optimal number of visual-word clusters by finding a good tradeoff between compactness and discriminative power. We tested our proposed approach on the state-of-the-art KTH dataset and obtained an average accuracy of 94.2%. However, this approach performs one-sided clustering, because only visual words are clustered regardless of which video they appear in. In order to leverage the co-occurrence of visual words and images, we have developed a co-clustering algorithm to simultaneously group the visual words and images. We tested our approach on the publicly available fifteen-scene dataset and obtained about a 4% increase in average accuracy compared to the one-sided clustering approaches. Second, instead of grouping the mid-level features, we first embed the features into a low-dimensional semantic space by manifold learning, and then perform the clustering. We apply Diffusion Maps (DM) to capture the local geometric structure of the mid-level feature space. The DM embedding is able to preserve the explicitly defined diffusion distance, which reflects the semantic similarity between any two features. Furthermore, DM provides multi-scale analysis capability by adjusting the time steps in the Markov transition matrix. The experiments on the KTH dataset show that DM can perform much better (about 3% to 6% improvement in average accuracy) than other manifold learning approaches and the IB method. The above methods use only a single type of feature. In order to combine multiple heterogeneous features for visual recognition, we further propose the Fiedler Embedding to capture the complicated semantic relationships between all entities (i.e., videos, images, heterogeneous features). The discovered relationships are then employed to further increase the recognition rate. We tested our approach on the Weizmann dataset and achieved about 17% to 21% improvement in average accuracy.
Ph.D.
School of Electrical Engineering and Computer Science
12

Wei, Xiaoyong. "Concept-based video search by semantic and context reasoning /." access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-cs-b23750509f.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 122-133)
13

Kaewtrakulpong, Pakorn. "Adaptive probabilistic models for learning semantic patterns." Thesis, Brunel University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Lermusiaux, Pierre. "Analyse statique de transformations pour l’élimination de motifs." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Program transformation is a very common practice in computer science. From compilation to test generation, through many approaches to code analysis and formal program verification, it is a process that is both ubiquitous and crucial to the correct operation of programs and computer systems. This thesis offers a formal study of program transformation procedures, with the aim of expressing and guaranteeing syntactic properties on the behaviour and results of such transformations. In the context of formal program verification, it is indeed often necessary to characterise the shape of the terms obtained by reduction under such a transformation. Inspired by the model of compilation passes, which describe the sequencing of a program's compilation into minimal transformation steps each affecting only a small number of the language's constructs, this thesis introduces a formalism based on the notions of pattern matching and rewriting that can describe properties commonly induced by this type of transformation. The proposed formalism relies on a system of annotations of function symbols describing a specification of the expected behaviour of the associated functions. We then present a static analysis method for verifying that the transformations under study, expressed as a rewrite system, indeed satisfy these specifications.
Program transformation is an extremely common practice in computer science. From compilation to test generation, through many approaches to code analysis and formal verification of programs, it is a process that is both ubiquitous and critical to properly functioning programs and information systems. This thesis studies program transformation mechanisms in order to express and verify syntactic guarantees on the behaviour of these transformations and on their results. Giving a characterisation of the shape of terms returned by such a transformation is, indeed, a common approach to the formal verification of programs. In order to express properties often used by this type of approach, we propose a formalism inspired by the model of compilation passes, which describe the overall compilation of a program as a sequence of minimal transformations, and based on the notions of pattern matching and term rewriting. This formalism relies on an annotation mechanism for function symbols to express a set of specifications describing the behaviour of the associated functions. We then propose a static analysis method to check that a transformation, expressed as a term rewrite system, actually verifies its specifications.
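The kind of pattern matching on which such term-rewriting formalisms build, matching a pattern containing variables against a concrete term and returning a substitution, can be illustrated with a small sketch. The tuple encoding of terms and the "?"-prefixed variable syntax are invented for illustration and are not the thesis's formalism:

```python
# Terms are nested tuples: ("f", arg1, arg2); pattern variables are
# strings starting with "?". A toy first-order matcher: it returns the
# substitution making the pattern equal to the term, or None on failure.
def match(pattern, term, subst=None):
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst:               # variable already bound:
            return subst if subst[pattern] == term else None
        subst[pattern] = term              # bind the variable
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if len(pattern) != len(term) or pattern[0] != term[0]:
            return None                    # head symbols must agree
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

term = ("plus", ("succ", "zero"), "zero")
print(match(("plus", "?x", "zero"), term))  # -> {'?x': ('succ', 'zero')}
```

A rewrite engine would use such a substitution to instantiate the right-hand side of a rule; a static analysis like the one described above reasons about which shapes of terms these rules can produce.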
15

He, Feihu. "Using patterns in conceptual modeling of business activities." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/4081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Patterns are used as building blocks for design and construction in many fields such as architecture, music and literature. Researchers and practitioners in the information systems area have been exploring patterns and using them in system analysis and design. Patterns found in the analysis stage, when analysts create conceptual models to abstractly represent domain reality, are called business patterns or analysis patterns. Although various business patterns were proposed in previous studies, we found that business semantics were missing in these patterns. These business patterns fail to show the functionality that is essential to patterns in general, and most of them are also not capable of describing business activities, the dynamic aspect of business. This study is conducted to address these issues. In this thesis, we provide a brief literature review on business patterns and discuss the major problems we found in these studies. Then we introduce our research approach and the major outcomes. We propose a new definition of business patterns with business semantics, which enables us to recover the missing functionality in business patterns. We suggest the key elements to represent business patterns, and propose a two-level template (functional and operational) to describe these elements. Based on the R²M approach, we propose a modeling method with graphical notations to describe the operational level of patterns, where business activities can be modeled. Examples and a case study are provided in this thesis to demonstrate how to use the modeling method and how to use business patterns in practice.
16

Aldin, Laden. "Semantic discovery and reuse of business process patterns." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In modern organisations business process modelling has become fundamental due to the increasing rate of organisational change. As a consequence, an organisation needs to continuously redesign its business processes on a regular basis. One major problem associated with the way business process modelling (BPM) is carried out today is the lack of explicit and systematic reuse of previously developed models. Enabling the reuse of previously modelled behaviour can have a beneficial impact on the quality and efficiency of the overall information systems development process and also improve the effectiveness of an organisation’s business processes. In related disciplines, like software engineering, patterns have emerged as a widely accepted architectural mechanism for reusing solutions. In business process modelling the use of patterns is quite limited, apart from a few sporadic attempts in the literature. Thus, pattern-based BPM is not commonplace. Business process patterns should ideally be discovered from the empirical analysis of organisational processes. Empiricism is currently not the basis for the discovery of patterns for business process modelling, and no systematic methodology for collecting and analysing process models of business organisations currently exists. The purpose of the presented research project is to develop a methodological framework for achieving reuse in BPM via the discovery and adoption of patterns. The framework is called Semantic Discovery and Reuse of Business Process Patterns (SDR). SDR provides a systematic method for identifying patterns among organisational data assets representing business behaviour. The framework adopts ontologies (i.e., formalised conceptual models of real-world domains) in order to facilitate such discovery. The research has also produced an ontology of business processes that provides the underlying semantic definitions of processes and their constituent parts.
The use of ontologies to model business processes represents a novel approach and combines advances achieved by the Semantic Web and BPM communities. The methodological framework also relates to a new line of research in BPM on declarative business processes in which the models specify what should be done rather than how to ‘prescriptively’ do it. The research follows a design science method for designing and evaluating SDR. Evaluation is carried out using real world sources and reuse scenarios taken from both the financial and educational domains.
17

Alfaries, Auhood. "Ontology learning for Semantic Web Services." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The expansion of Semantic Web Services is restricted by traditional ontology engineering methods. Manual ontology development is a time-consuming, expensive and resource-exhaustive task. Consequently, it is important to support ontology engineers by automating the ontology acquisition process to help deliver the Semantic Web vision. Existing Web Services offer a rich source of domain knowledge for ontology engineers. Ontology learning can be seen as a plug-in in the Web Service ontology development process, which can be used by ontology engineers to develop and maintain an ontology that evolves with current Web Services. Supporting the domain engineer with an automated tool whilst building an ontological domain model reduces the time and effort needed to acquire the domain concepts and relations from Web Service artefacts, while effectively speeding up the adoption of Semantic Web Services, thereby allowing current Web Services to achieve their full potential. With that in mind, a Service Ontology Learning Framework (SOLF) is developed and applied to a real set of Web Services. The research contributes a rigorous method that effectively extracts domain concepts, and relations between these concepts, from Web Services and automatically builds the domain ontology. The method applies pattern-based information extraction techniques to automatically learn domain concepts and relations between those concepts. The framework is automated via a tool that implements these techniques. Applying SOLF and the tool to different sets of services results in an automatically built domain ontology model that represents semantic knowledge in the underlying domain. The framework's effectiveness in extracting domain concepts and relations is evaluated by applying it to varying sets of commercial Web Services, including the financial domain.
The standard evaluation metrics, precision and recall, are employed to determine both the accuracy and coverage of the learned ontology models. Both the lexical and structural dimensions of the models are evaluated thoroughly. The evaluation results are encouraging, providing concrete outcomes in an area that is little researched.
18

Jiang, Yugang. "Large scale semantic concept detection, fusion, and selection for domain adaptive video search /." access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-cs-b23749957f.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 145-161)
19

Šimkienė, Ina. "A functional analysis of noncanonical word order patterns in CARSON McCULLERS’ short stories." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140724_101113-92224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In communication, a language user is naturally disposed to proceed from what is known to, or shared by, both the speaker/writer and hearer/reader, and to end with the information that is most important. Such a disposition complies with the requirements of Functional Sentence Perspective (FSP), but it also makes a language user “transform” the basic word order. The present work took a functional approach to language study to explore the syntactic potential of English to produce various sentence patterns by carrying out a communicative (functional) analysis of Carson McCullers’ short stories. The analysis showed that one of the main causes of noncanonical ordering of sentence elements is thematization by means of Preposing. The preposed elements were semantically diverse, though the frequency of occurrence of different process-type sentences varied. The results of the analysis led to the conclusion that syntactic movement is determined by semantic, syntactic and contextual restrictions. Syntactically, the peripheral elements of the sentence exhibited greater flexibility than the core sentence elements. The semantic and syntactic unity of the sentence elements was disrupted when the preposed sentence elements expressed information recoverable from a very short retrievability span, which revealed the significant role of context in syntactic movement. Preposing and the resulting sentence patterns seem to be used for particular discourse functions: to enhance the... [to full text]
According to the theory of Functional Sentence Perspective (FSP), in a given context some sentence elements are communicatively more important than others. A language user is naturally inclined to begin a sentence with what is known to him or her as speaker or writer and to the listener or reader, and to end the sentence with the information that is most important. This disposition makes the language user transform the so-called grammatical sentence model Subject+Verb+Object. In other words, in the process of communication the task of the syntactic level is to “find” an appropriate sentence model and actualise it. This thesis investigates the syntactic potential of English to produce various syntactic structures that can best reflect the content and communicative purpose of the sentence. The short stories of the American writer Carson McCullers were chosen as the research material. The analysis showed that the word order of the sentences studied was most often determined by thematised semantic elements moved to sentence-initial position, in other words, by Preposing of the theme. The elements moved to the beginning of the sentence are semantically diverse, depending on the process type. Word order variation is determined by semantic, syntactic and contextual restrictions. The core and peripheral elements moved to sentence-initial position most often expressed known information. In examining cases of word order variation, the study also sought to assess the discourse role of word order patterns. The analysis showed that canonical word order is altered not only to... [see full text]
20

Luo, Ying. "Statistical semantic analysis of spatio-temporal image sequences /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/5884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nguyen, Tran Diem Hanh. "Semantic-based topic evaluation and application in information filtering." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/209882/1/Tran%20Diem%20Hanh_Nguyen_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Topic modelling techniques are used to find the main themes in a collection of documents automatically. This thesis presents effective topic evaluation models to measure the quality of the discovered topics. The proposed techniques use human-defined knowledge to solve the problems of evaluating topics in terms of their semantic meaning. The thesis also proposes methods for modelling user interest based on the topic model generated from the user's documents. The proposed techniques help to measure the quality of the topics and significantly improve the performance of text mining applications.
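As a rough illustration of semantics-aware topic evaluation, the sketch below scores topics with a UMass-style co-occurrence coherence measure. This is a generic metric rather than the models proposed in this thesis, and all counts are invented:

```python
import math
from itertools import combinations

# Hypothetical document-frequency counts from a reference corpus.
doc_freq = {"bank": 120, "money": 150, "river": 90}
co_doc_freq = {("bank", "money"): 60, ("bank", "river"): 30, ("money", "river"): 2}

def umass_style_coherence(topic_words):
    """Averages log conditional co-occurrence ratios over word pairs,
    in the spirit of UMass topic coherence (with +1 smoothing)."""
    score, pairs = 0.0, 0
    for w1, w2 in combinations(topic_words, 2):
        co = co_doc_freq.get((w1, w2), co_doc_freq.get((w2, w1), 0))
        score += math.log((co + 1) / doc_freq[w2])
        pairs += 1
    return score / pairs

coherent = umass_style_coherence(["bank", "money"])
mixed = umass_style_coherence(["money", "river"])
print(coherent > mixed)   # a topically coherent pair scores higher
```

A semantics-based evaluator along these lines penalises topics whose top words rarely co-occur, which is one way "meaningless" topics can be detected automatically.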
22

Dias, Moreira De Souza Fillipe. "Semantic Description of Activities in Videos." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Description of human activities in videos results not only in detection of actions and objects but also in identification of their active semantic relationships in the scene. Towards this broader goal, we present a combinatorial approach that assumes availability of algorithms for detecting and labeling objects and actions, albeit with some errors. Given these uncertain labels and detected objects, we link them into interpretative structures using domain knowledge encoded with concepts of Grenander’s general pattern theory. Here a semantic video description is built using basic units, termed generators, that represent labels of objects or actions. These generators have multiple out-bonds, each associated with either a type of domain semantics, spatial constraints, temporal constraints or image/video evidence. Generators combine with each other, according to a set of pre-defined combination rules that capture domain semantics, to form larger connected structures known as configurations, which are used here to represent video descriptions. This framework offers a powerful representational scheme for its flexibility in spanning a space of interpretative structures (configurations) of varying sizes and structural complexity. We impose a probability distribution on the configuration space, with inferences generated using a Markov Chain Monte Carlo-based simulated annealing algorithm. The primary advantage of the approach is that it handles known computer vision challenges – appearance variability, errors in object label annotation, object clutter, simultaneous events, temporal dependency encoding, etc. – without the need for an exponentially large (labeled) training data set.
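The inference scheme sketched in this abstract, an energy over configurations minimised by Metropolis-style simulated annealing, can be illustrated with a toy example. The detections, labels and compatibility scores below are invented placeholders, not the thesis's actual generators or bond rules:

```python
import math
import random

# Hypothetical detections: each slot has candidate labels with detector confidences.
CANDIDATES = {
    "obj1": {"cup": 0.9, "bowl": 0.4},
    "obj2": {"kettle": 0.8, "pot": 0.5},
    "act1": {"pour": 0.7, "stir": 0.3},
}

# Hypothetical compatibilities between co-occurring labels
# (a stand-in for pattern-theory combination rules over bonds).
COMPATIBLE = {
    ("pour", "kettle"): 1.0, ("pour", "cup"): 1.0,
    ("stir", "pot"): 1.0, ("stir", "bowl"): 0.5,
}

def energy(config):
    """Lower energy = better interpretation: reward confident labels
    and compatible label pairs."""
    e = -sum(CANDIDATES[slot][lab] for slot, lab in config.items())
    labels = list(config.values())
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            e -= COMPATIBLE.get((a, b), COMPATIBLE.get((b, a), 0.0))
    return e

def anneal(steps=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    config = {s: rng.choice(list(c)) for s, c in CANDIDATES.items()}
    e = energy(config)
    for k in range(steps):
        t = t0 * (0.995 ** k)            # geometric cooling schedule
        slot = rng.choice(list(CANDIDATES))
        proposal = dict(config)
        proposal[slot] = rng.choice(list(CANDIDATES[slot]))
        e_new = energy(proposal)
        # Metropolis acceptance: always accept downhill, sometimes uphill.
        if e_new <= e or rng.random() < math.exp((e - e_new) / max(t, 1e-9)):
            config, e = proposal, e_new
    return config

print(anneal())
```

The real system anneals over variable-size configurations (adding and removing generators, not just relabelling fixed slots), but the accept/reject structure is the same.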
23

Petit, Barbara. "Autour du lambda-calcul avec constructeurs." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00662500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The lambda calculus with constructors (due to Arbiser, Miquel and Ríos) is an extension of the lambda calculus with a pattern-matching mechanism. ML-style pattern matching is decomposed into two steps: a case analysis on constants (like the "case" instruction of Pascal), and a commutation of application with the match construct. This commutation rule between two constructions of different natures induces a surprising computational geometry, a priori incompatible with the usual typing intuitions. Nevertheless, the calculus has been shown to be confluent and to satisfy the (Böhm-style) separation property. This thesis proposes a polymorphic type system for the calculus, and then describes a realisability model that adapts Girard's reducibility candidates to the lambda calculus with constructors. Strong normalisation of the typed calculus and the absence of match failures during evaluation follow immediately. We then turn to the semantics of the untyped lambda calculus with constructors. A generic notion of categorical model for this calculus is defined, and a particular model (the syntactic model in the category of PERs) is constructed; a completeness result is derived from it. Finally, we propose a CPS translation of the lambda calculus with constructors into the simply typed lambda calculus with pairs. The lambda calculus with constructors can thus be simulated in a well-known calculus, and this translation also allows us to turn any continuation model into a model of the lambda calculus with constructors. A categorical equation characteristic of these models then emerges, which makes it possible to build non-syntactic models (in Scott domains) of the lambda calculus with constructors.
24

Hammar, Karl. "Towards an Ontology Design Pattern Quality Model." Licentiate thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93370.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The use of semantic technologies and Semantic Web ontologies in particular have enabled many recent developments in information integration, search engines, and reasoning over formalised knowledge. Ontology Design Patterns have been proposed to be useful in simplifying the development of Semantic Web ontologies by codifying and reusing modelling best practices. This thesis investigates the quality of Ontology Design Patterns. The main contribution of the thesis is a theoretically grounded and partially empirically evaluated quality model for such patterns including a set of quality characteristics, indicators, measurement methods and recommendations. The quality model is based on established theory on information system quality, conceptual model quality, and ontology evaluation. It has been tested in a case study setting and in two experiments. The main findings of this thesis are that the quality of Ontology Design Patterns can be identified, formalised and measured, and furthermore, that these qualities interact in such a way that ontology engineers using patterns need to make tradeoffs regarding which qualities they wish to prioritise. The developed model may aid them in making these choices. This work has been supported by Jönköping University.
25

Sörensen, Susanne. "Five English Verbs : A Comparison between Dictionary meanings and Meanings in Corpus collocations." Thesis, Högskolan i Halmstad, Sektionen för humaniora (HUM), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-6091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In Norstedts Comprehensive English-Swedish Dictionary (2000) it is said that the numbered list of senses under each headword is frequency ordered. Thus, the aim of this study is to see whether this frequency order of senses agrees with the frequencies appearing in the British National Corpus (BNC). Five English, polysemous verbs were studied. For each verb, a simple search in the corpus was carried out, displaying 50 random occurrences. Each collocate was encoded with the most compatible sense from the numbered list of senses in the dictionary. The encoded tokens were compiled and listed in frequency order. This list was compared to the dictionary's list of senses. Only two of the verbs reached agreement between the highest ranked dictionary sense and the most frequent sense in the BNC simple search. None of the verbs' dictionary orders agreed completely with the frequency order that emerged from the corpus occurrences, which is why complementary collocational learning is advocated.
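The comparison procedure described in this abstract can be sketched in a few lines. The tagged tokens and sense numbering below are hypothetical, not Sörensen's data:

```python
from collections import Counter

# Hypothetical: 50 corpus occurrences of one verb, each hand-tagged with
# the number of the best-matching dictionary sense (1 = listed first).
tagged_tokens = [1] * 12 + [3] * 20 + [2] * 10 + [5] * 8

def corpus_sense_order(tokens):
    """Ranks senses by corpus frequency, most frequent first."""
    return [sense for sense, _ in Counter(tokens).most_common()]

dictionary_order = [1, 2, 3, 4, 5]      # order as printed in the dictionary
corpus_order = corpus_sense_order(tagged_tokens)

print(corpus_order)                      # frequency order in the corpus
print(dictionary_order[0] == corpus_order[0])  # agreement on the top sense?
```

Here the dictionary's first-listed sense is not the most frequent one in the sample, the kind of mismatch the study found for three of the five verbs.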
26

Tsishkou, Dzmitry. "Face detection, matching and recognition for semantic video understanding." Ecully, Ecole centrale de Lyon, 2005. http://www.theses.fr/2005ECDL0044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of this work can be summarized as follows: to propose a face detection and recognition in video solution that is fast, accurate and reliable enough to be implemented in a semantic video understanding system capable of replacing a human expert in a variety of multimedia indexing applications. We also consider the research results obtained during this work to be complete enough to be adapted or modified as part of other image processing, pattern recognition, and video indexing and analysis systems.
27

Lodhi, Sheheryar, and Zaheer Ahmed. "Content Ontology Design Pattern Presentation." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Data- och elektroteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-15760.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ontology design patterns are used for creating quality modeling solutions for ontologies. The presentation of ontology design patterns is concerned with reusability of ontologies from a user perspective. The purpose of this research is to identify improvement areas in the presentation of content ontology design patterns. The objective is to analyze different content ontology design patterns and provide suggestions for possible changes in current templates and pattern presentation. The ontology design pattern templates were compared with existing templates of other patterns to identify improvement areas. After this, two surveys were conducted with novice users and expert ontology engineers to improve the readability and usability of content ontology design patterns from the user perspective and to discover differences in opinion while using the patterns. Based on the findings of comparison and survey results, we proposed suggestions to improve the current template and presentation of content ontology design patterns.
28

Gao, Jizhou. "VISUAL SEMANTIC SEGMENTATION AND ITS APPLICATIONS." UKnowledge, 2013. http://uknowledge.uky.edu/cs_etds/14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation addresses the difficulties of semantic segmentation when dealing with an extensive collection of images and 3D point clouds. Due to the ubiquity of digital cameras that help capture the world around us, as well as the advanced scanning techniques that are able to record 3D replicas of real cities, the sheer amount of visual data available presents many opportunities for both academic research and industrial applications. But the mere quantity of data also poses a tremendous challenge. In particular, the problem of distilling useful information from such a large repository of visual data has attracted ongoing interest in the fields of computer vision and data mining. Structural semantics are fundamental to understanding both natural and man-made objects. Buildings, for example, are like languages in that they are made up of repeated structures or patterns that can be captured in images. In order to find these recurring patterns in images, I present an unsupervised frequent visual pattern mining approach that goes beyond co-location to identify spatially coherent visual patterns, regardless of their shape, size, location and orientation. First, my approach categorizes visual items from scale-invariant image primitives with similar appearance using a suite of polynomial-time algorithms that have been designed to identify consistent structural associations among visual items, representing frequent visual patterns. After detecting repetitive image patterns, I use unsupervised and automatic segmentation of the identified patterns to generate more semantically meaningful representations. The underlying assumption is that pixels capturing the same portion of image patterns are visually consistent, while pixels that come from different backdrops are usually inconsistent. I further extend this approach to perform automatic segmentation of foreground objects from an Internet photo collection of landmark locations. 
New scanning technologies have successfully advanced the digital acquisition of large-scale urban landscapes. In addressing semantic segmentation and reconstruction of this data using LiDAR point clouds and geo-registered images of large-scale residential areas, I develop a complete system that simultaneously uses classification and segmentation methods to first identify different object categories and then apply category-specific reconstruction techniques to create visually pleasing and complete scene models.
29

Ghanem, Amer G. "Identifying Patterns of Epistemic Organization through Network-Based Analysis of Text Corpora." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1448274706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Saia, Roberto. "Similarity and diversity: two sides of the same coin in the evaluation of data streams." Doctoral thesis, Università degli Studi di Cagliari, 2016. http://hdl.handle.net/11584/266878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Information systems represent the primary instrument of growth for companies that operate in the so-called e-commerce environment. The data streams generated by the users that interact with their websites are the primary source for defining user behavioral models. Some main examples of services integrated in these websites are the Recommender Systems, where these models are exploited in order to generate recommendations of items of potential interest to users, the User Segmentation Systems, where the models are used in order to group the users on the basis of their preferences, and the Fraud Detection Systems, where these models are exploited to determine the legitimacy of a financial transaction. Even though in the literature diversity and similarity are considered two sides of the same coin, almost all approaches take them into account in a mutually exclusive manner, rather than jointly. The aim of this thesis is to demonstrate how considering both sides of this coin is instead essential to overcome some well-known problems that afflict the state-of-the-art approaches used to implement these services, improving their performance. Its contributions are the following: with regard to the recommender systems, the detection of the diversity in a user profile is used to discard incoherent items, improving the accuracy, while the exploitation of the similarity of the predicted items is used to re-rank the recommendations, improving their effectiveness; with regard to the user segmentation systems, the detection of the diversity overcomes the problem of the non-reliability of the data source, while the exploitation of the similarity reduces the problems of understandability and triviality of the obtained segments; lastly, concerning the fraud detection systems, the joint use of both diversity and similarity in the evaluation of a new transaction overcomes the problems of data scarcity and those of non-stationary and unbalanced class distribution.
31

Jonsson, Niklas. "Temporal and co-varying clause combining in Austronesian languages : Semantics, morpho-syntax and distributional patterns." Doctoral thesis, Stockholms universitet, Institutionen för lingvistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-74794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study investigates combined clause constructions for ten distinct semantic relations in a cross-section of Austronesian languages. The relations are of a temporal or co-varying nature, the former commonly expressed in English by such markers as when, then, until, etc. and the latter by if, so, because, etc. The research falls into three main sections. First, the study provides an overview of the semantic domain covered by the relevant relations in the Austronesian languages. Several subdistinctions are found to be made within the relations investigated. The study also explores polysemic relation markers, and a number of patterns are identified. The most common pattern is the overlap between open conditional and non-past co-occurrence relations, for which many Austronesian languages employ the same relation marker. Second, the study develops a morpho-syntactic typology of Austronesian clause combining based on three parameters related to features common to clause combining constructions. The typology divides the constructions into five different types that are ranked with regard to structural tightness. Some additional constructions, cutting across several types, are also discussed; in particular, asymmetric coordination, which involves the use of a coordinator to connect a fronted topicalized adverbial clause to the rest of the sentence. Finally, the study explores the distributional patterns of the morpho-syntactic types across the semantic relations, as well as across three geographical areas in the Austronesian region. In the former case, a clear correlation is found between posteriority and result relations on the one hand and looser structural types on the other. The distribution of types across the Austronesian region reveals few differences between the areas, although two tendencies could be detected: the Oceanic languages tend to employ slightly looser morpho-syntax, while the Formosan and Philippine languages employ slightly tighter morpho-syntax.
32

Li, Honglin. "Hierarchical video semantic annotation the vision and techniques /." Connect to this title online, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1071863899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xv, 146 p.; also includes graphics. Includes bibliographical references (p. 136-146).
33

Krisnadhi, Adila Alfa. "Ontology Pattern-Based Data Integration." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1453177798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Gerber, Daniel. "Statistical Extraction of Multilingual Natural Language Patterns for RDF Predicates: Algorithms and Applications." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Data Web has undergone a tremendous growth period. It currently consists of more than 3,300 publicly available knowledge bases describing millions of resources from various domains, such as life sciences, government or geography, with over 89 billion facts. In the same way, the Document Web has grown to the state where approximately 4.55 billion websites exist, 300 million photos are uploaded to Facebook and 3.5 billion Google searches are performed on average every day. However, there is a gap between the Document Web and the Data Web, since, for example, knowledge bases available on the Data Web are most commonly extracted from structured or semi-structured sources, while the majority of information available on the Web is contained in unstructured sources such as news articles, blog posts, photos, forum discussions, etc. As a result, data on the Data Web not only misses a significant fragment of information but also suffers from a lack of actuality, since typical extraction methods are time-consuming and can only be carried out periodically. Furthermore, provenance information is rarely taken into consideration and therefore gets lost in the transformation process. In addition, users are accustomed to entering keyword queries to satisfy their information needs. With the availability of machine-readable knowledge bases, lay users could be empowered to issue more specific questions and get more precise answers. In this thesis, we address the problem of Relation Extraction, one of the key challenges in closing the gap between the Document Web and the Data Web, in four ways. First, we present a distant supervision approach that allows finding multilingual natural language representations of formal relations already contained in the Data Web. We use these natural language representations to find sentences on the Document Web that contain unseen instances of this relation between two entities. 
Second, we address the problem of data actuality by presenting a real-time data stream RDF extraction framework and utilize this framework to extract RDF from RSS news feeds. Third, we present a novel fact validation algorithm, based on natural language representations, able not only to verify or falsify a given triple, but also to find trustworthy sources for it on the Web and to estimate a time scope in which the triple holds true. The features used by this algorithm to determine whether a website is indeed trustworthy are used as provenance information and thereby help to create metadata for facts in the Data Web. Finally, we present a question answering system that uses the natural language representations to map natural language questions to formal SPARQL queries, allowing lay users to make use of the large amounts of data available on the Data Web to satisfy their information needs.
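The distant-supervision step described above, finding natural-language lexicalisations of a known relation by locating seed entity pairs in text, can be sketched as follows. The seed facts and sentences are illustrative, and a real system such as the one this thesis describes uses far more robust matching:

```python
import re

# Hypothetical seed facts for a "spouse" RDF predicate, plus sentences
# retrieved from the Document Web that mention both entities.
seed_pairs = [("Barack Obama", "Michelle Obama"),
              ("Bill Clinton", "Hillary Clinton")]
sentences = [
    "Barack Obama is married to Michelle Obama.",
    "Bill Clinton is married to Hillary Clinton.",
    "Michelle Obama met Barack Obama in Chicago.",
]

def extract_patterns(pairs, sentences):
    """Replaces known subject/object mentions with placeholders and keeps
    the text between them as a candidate lexicalisation of the relation."""
    patterns = set()
    for subj, obj in pairs:
        for s in sentences:
            if subj in s and obj in s:
                templ = s.replace(subj, "?S").replace(obj, "?O")
                m = re.search(r"\?S(.*?)\?O", templ)   # non-greedy infix match
                if m and m.group(1).strip():
                    patterns.add("?S" + m.group(1) + "?O")
    return patterns

print(extract_patterns(seed_pairs, sentences))
```

Patterns that recur across many seed pairs (here, "?S is married to ?O") can then be used in reverse: matched against new sentences to propose unseen instances of the relation.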
35

Kapugama, Geeganage Dakshi T. "Concept-enhanced topic modelling technique." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227455/1/Dakshi_Kapugama%20Geeganage_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Topic modelling is a state-of-the-art technique to understand, categorize and summarise text and is beneficial for discovering the hidden themes in text collections. The existing topic modelling approaches pay little or no attention to capturing the semantics of words. Hence, meaningless topics are generated. This research addresses the main problem of existing topic modelling approaches by introducing two semantic-based topic generation approaches. The thesis makes its main contributions to the topic modelling and text mining domains by introducing a semantic-based topic representation, a semantic topic model and an ambiguity handling approach. The research outcomes are beneficial for many text mining applications.
36

Topsakal, Oguzhan. "Extracting semantics from legacy sources using reverse engineering of java code with the help of visitor patterns." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0001210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wynn, Moe Thandar. "Semantics, verification, and implementation of workflows with cancellation regions and OR-joins." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16324/1/Moe_Wynn_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Workflow systems aim to provide automated support for the conduct of certain business processes. Workflow systems are driven by workflow specifications which, among other things, capture the execution interdependencies between various activities. These interdependencies are modelled by means of different control flow constructors, e.g., sequence, choice, parallelism and synchronisation. It has been shown in the research on workflow patterns that the support for and the interpretation of various control flow constructs varies substantially across workflow systems. Two of the most problematic patterns relate to the OR-join and to cancellation. An OR-join is used in situations when we need to model "wait and see" behaviour for synchronisation. Different approaches assign a different (often only intuitive) semantics to this type of join, though they do share the common theme that synchronisation is only to be performed for active paths. Depending on context assumptions this behaviour may be relatively easy to deal with, though in general its semantics is complicated, both from a definition point of view (in terms of formally capturing a desired intuitive semantics) and from a computational point of view (how does one determine whether an OR-join is enabled?). Many systems and languages struggle with the semantics and implementation of the OR-join because its non-local semantics require a synchronisation depending on an analysis of future execution paths. This may require some non-trivial reasoning. The presence of cancellation features and other OR-joins in a workflow further complicates the formal semantics of the OR-join. The cancellation feature is commonly used to model external events that can change the behaviour of a running workflow. It can be used to either disable activities in certain parts of a workflow or to stop currently running activities. 
Even though it is possible to cancel activities in workflow systems using some sort of abort function, many workflow systems do not provide direct support for this feature in the workflow language. Sometimes, cancellation affects only a selected part of a workflow and other activities can continue after performing a cancellation action. As cancellation occurs naturally in business scenarios, comprehensive support in a workflow language is desirable. We take on the challenge of providing formal semantics, verification techniques as well as an implementation for workflows with those features. This thesis addresses three interrelated issues for workflows with cancellation regions and OR-joins. The concept of the OR-join is examined in detail in the context of the workflow language YAWL, a powerful workflow language designed to support a collection of workflow patterns and inspired by Petri nets. The OR-join semantics has been redesigned to represent a general, formal, and decidable approach for workflows in the presence of cancellation regions and other OR-joins. This approach exploits a link that is proposed between YAWL and reset nets, a variant of Petri nets with a special type of arc that can remove all tokens from a place. Next, we explore verification techniques for workflows with cancellation regions and OR-joins. Four structural properties have been identified and a verification approach that exploits coverability and reachability notions from reset nets has been proposed. The work on verification techniques has highlighted potential problems with calculating state spaces for large workflows. Applying reduction rules before carrying out verification can decrease the size of the problem by cutting down the size of the workflow that needs to be examined while preserving some essential properties. Therefore, we have extended the work on verification by proposing reduction rules for reset nets and for YAWL nets with and without OR-joins. 
The proposed OR-join semantics as well as the proposed verification approach have been implemented in the YAWL environment.
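The reset-net mechanism that this abstract links to cancellation regions can be illustrated with a minimal token-game sketch (a hypothetical encoding, not YAWL's actual implementation). A reset arc empties its place entirely when the transition fires, regardless of how many tokens it holds, which is exactly how cancelling a region is modelled:

```python
# Minimal reset-net sketch: a transition consumes one token from each input
# place, produces one to each output place, and empties every place on its
# reset arcs.
def fire(marking, transition):
    inputs, outputs, resets = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in resets:          # reset arc: remove ALL tokens, however many
        m[p] = 0
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# A 'cancel' transition whose cancellation region is the places {p2, p3}.
cancel = (["trigger"], ["done"], ["p2", "p3"])
m0 = {"trigger": 1, "p2": 3, "p3": 1, "done": 0}
print(fire(m0, cancel))   # {'trigger': 0, 'p2': 0, 'p3': 0, 'done': 1}
```

Because a reset arc's effect does not depend on the current token count, reachability analysis for reset nets differs from ordinary Petri nets, which is why the thesis turns to coverability notions for verification.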
38

Wynn, Moe Thandar. "Semantics, verification, and implementation of workflows with cancellation regions and OR-joins." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16324/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Workflow systems aim to provide automated support for the conduct of certain business processes. Workflow systems are driven by workflow specifications which, among other things, capture the execution interdependencies between various activities. These interdependencies are modelled by means of different control flow constructors, e.g., sequence, choice, parallelism and synchronisation. It has been shown in the research on workflow patterns that the support for and the interpretation of various control flow constructs varies substantially across workflow systems. Two of the most problematic patterns relate to the OR-join and to cancellation. An OR-join is used in situations when we need to model "wait and see" behaviour for synchronisation. Different approaches assign a different (often only intuitive) semantics to this type of join, though they do share the common theme that synchronisation is only to be performed for active paths. Depending on context assumptions this behaviour may be relatively easy to deal with, though in general its semantics is complicated, both from a definition point of view (in terms of formally capturing a desired intuitive semantics) and from a computational point of view (how does one determine whether an OR-join is enabled?). Many systems and languages struggle with the semantics and implementation of the OR-join because its non-local semantics require a synchronisation depending on an analysis of future execution paths. This may require some non-trivial reasoning. The presence of cancellation features and other OR-joins in a workflow further complicates the formal semantics of the OR-join. The cancellation feature is commonly used to model external events that can change the behaviour of a running workflow. It can be used to either disable activities in certain parts of a workflow or to stop currently running activities. 
Even though it is possible to cancel activities in workflow systems using some sort of abort function, many workflow systems do not provide direct support for this feature in the workflow language. Sometimes, cancellation affects only a selected part of a workflow and other activities can continue after performing a cancellation action. As cancellation occurs naturally in business scenarios, comprehensive support in a workflow language is desirable. We take on the challenge of providing formal semantics, verification techniques as well as an implementation for workflows with those features. This thesis addresses three interrelated issues for workflows with cancellation regions and OR-joins. The concept of the OR-join is examined in detail in the context of the workflow language YAWL, a powerful workflow language designed to support a collection of workflow patterns and inspired by Petri nets. The OR-join semantics has been redesigned to represent a general, formal, and decidable approach for workflows in the presence of cancellation regions and other OR-joins. This approach exploits a link that is proposed between YAWL and reset nets, a variant of Petri nets with a special type of arc that can remove all tokens from a place. Next, we explore verification techniques for workflows with cancellation regions and OR-joins. Four structural properties have been identified and a verification approach that exploits coverability and reachability notions from reset nets has been proposed. The work on verification techniques has highlighted potential problems with calculating state spaces for large workflows. Applying reduction rules before carrying out verification can decrease the size of the problem by cutting down the size of the workflow that needs to be examined while preserving some essential properties. Therefore, we have extended the work on verification by proposing reduction rules for reset nets and for YAWL nets with and without OR-joins. 
The proposed OR-join semantics as well as the proposed verification approach have been implemented in the YAWL environment.
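The reset nets mentioned in this abstract can be given a brief, hypothetical illustration (an editorial sketch, not the thesis's formalism; all names are invented): a reset arc removes all tokens from a place when its transition fires, which is what makes cancellation regions directly expressible.

```python
# Minimal sketch of a reset net: a Petri net variant where firing a
# transition consumes normal input tokens and then empties every place
# connected by a reset arc. Illustrative only; names are hypothetical.

def enabled(marking, inputs):
    """A transition is enabled if every input place holds enough tokens.
    Note: reset arcs never influence enabledness."""
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, inputs, outputs, resets=()):
    """Fire a transition: consume input tokens, empty reset places,
    then produce output tokens. Returns a new marking."""
    if not enabled(marking, inputs):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p in resets:          # reset arcs remove ALL tokens
        m[p] = 0
    for p, n in outputs.items():
        m[p] = m.get(p, 0) + n
    return m

# A cancellation region maps naturally to reset arcs: firing the
# "cancel" transition wipes the tokens of every place in the region.
m0 = {"ready": 1, "taskA": 2, "taskB": 1}
m1 = fire(m0, inputs={"ready": 1}, outputs={"done": 1},
          resets=("taskA", "taskB"))
# m1 == {"ready": 0, "taskA": 0, "taskB": 0, "done": 1}
```

The non-locality discussed in the abstract shows up here too: whether an OR-join should wait depends on which markings are still reachable, not just on the current one.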
39

Zamazal, Ondřej. "Pattern-based Ontology Matching and Ontology Alignment Evaluation." Doctoral thesis, Vysoká škola ekonomická v Praze, 2006. http://www.nusl.cz/ntk/nusl-77051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ontology matching has been one of the hottest topics within the Semantic Web in recent years, and there is still ample room for improvement in terms of performance. Furthermore, current ontology matchers mostly concentrate on simple entity-to-entity matching; matching whole structures, however, could reveal additional complex relationships. These structures of ontologies can be captured as ontology patterns. The main theme of this thesis is an examination of pattern-based ontology matching enhanced with ontology transformation, together with pattern-based ontology alignment evaluation. The former is examined for its potential benefits regarding complex matching and matching as such; the latter because complex hypotheses could provide beneficial feedback complementing traditional evaluation methods. These two tasks relate to four different topics: ontology patterns, ontology transformation, ontology alignment evaluation and ontology matching. With regard to those four topics, this work covers the following aspects:
* Examination of different aspects of ontology patterns, particularly the description of ontology patterns relevant for ontology transformation and for ontology matching (such as naming, matching and transformation patterns).
* Description of a pattern-based method for ontology transformation.
* Introduction of new methods for alignment evaluation, including the use of patterns as complex structures for more detailed analysis.
* Experiments and demonstrations of the new concepts introduced in this thesis.
The thesis first introduces the naming pattern and matching pattern classifications on which the ontology transformation framework is based. Naming patterns are useful for detecting ontology patterns and for generating new names for entities. Matching patterns are the basis for transformation patterns in that they share some building blocks.
In comparison with matching patterns, transformation patterns additionally have transformation links that represent how parts of ontology patterns are transformed. Besides several evaluations and implementations, the thesis demonstrates how complex matches can be obtained through the ontology transformation process. The ontology transformation framework has been implemented in Java, where all generic patterns are represented as corresponding Java objects. Three main implemented services are made generally available as RESTful services: ontology pattern detection, transformation instruction generation and ontology transformation.
40

Castro, Rute Nogueira Silveira de. "Descoberta de relacionamentos entre padrões de software utilizando semântica latente." Universidade Federal do Ceará, 2006. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=1695.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
O reuso de padrões de software vem se tornando cada vez mais comum no desenvolvimento de sistemas, pois se trata de uma boa prática de engenharia de software que visa promover a reutilização de soluções comprovadas para problemas recorrentes. No entanto, existe uma carência de mecanismos que promovam a busca de padrões adequados a cada situação. Também há uma dificuldade na detecção de relacionamentos existentes entre os padrões de software disponíveis na literatura. Este trabalho apresenta o uso de técnicas de mineração de texto em um conjunto de padrões de software com o objetivo de identificar como esses padrões se relacionam. A técnica de mineração de textos busca extrair conceitos inteligentes a partir de grandes volumes de informação textual. O padrão de software deve ser tratado dentro de mineração de texto como um grande volume de texto com uma estrutura definida por seu template. Os graus de relacionamentos entre os padrões são determinados nos possíveis tipos de relacionamentos entre eles, bem como através de regras fundamentadas no conceito de Padrões de Software. Essas regras, aliadas à técnica de mineração de texto, geram as informações de relacionamento desejadas.
The reuse of software patterns is becoming increasingly common in systems development, as it is a good software engineering practice that promotes the reuse of proven solutions to recurring problems. However, there is a lack of mechanisms to support the search for patterns appropriate to each situation, and it is also difficult to detect relationships among the software patterns available in the literature. This work presents the use of text mining techniques on a set of software patterns in order to identify how these patterns are related. Text mining seeks to extract meaningful concepts from large volumes of textual information. Within text mining, a software pattern is treated as a large volume of text with a structure defined by its template. The degrees of relationship between the patterns are determined over the possible types of relationships between them, as well as through rules grounded in the concept of software patterns. These rules, combined with the text mining technique, generate the desired relationship information.
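The kind of text-based relatedness scoring this abstract describes can be loosely illustrated as follows. This is an editorial sketch with invented data, not the thesis's pipeline: the thesis applies latent semantic analysis, whereas here plain cosine similarity over raw term counts stands in as a simplified proxy.

```python
# Simplified stand-in for latent-semantics relatedness: compare
# software-pattern descriptions as term-frequency vectors using
# cosine similarity. Descriptions sharing vocabulary score higher.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy pattern descriptions (illustrative, not from the thesis):
observer = vectorize("notify dependents when subject state changes")
mediator = vectorize("centralize communication so objects notify a mediator")
singleton = vectorize("ensure a class has one instance")

# Patterns whose descriptions share vocabulary score higher:
assert cosine(observer, mediator) > cosine(observer, singleton)
```

Real latent semantic analysis would factor the term-document matrix (e.g., by SVD) before comparing, so that related patterns score as similar even without shared surface vocabulary.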
41

Kompus, Kristiina. "How the past becomes present: neural mechanisms governing retrieval from episodic memory." Doctoral thesis, Umeå : Umeå university, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-31873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Rodriguez, Castro Benedicto. "Towards ontology design patterns to model multiple classification criteria of domain concepts in the Semantic Web." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/341646/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis explores a very recurrent modeling scenario in ontology design: the notion of real-world concepts that can be classified according to multiple criteria. Current ontology modeling guidelines do not explicitly consider this aspect in the representation of such concepts. This void leaves ample room for ad-hoc practices that can lead to unexpected or undesired results in ontology artifacts. The aim is to identify best practices and design patterns for representing such concepts in OWL DL ontologies suitable for deployment in the Web of Data and the Semantic Web. To assist with these issues, an initial set of basic design guidelines is put forward that mitigates the opportunity for ad-hoc modeling decisions in developing ontologies for the problem scenario described. These guidelines rely upon an existing simplified methodology for facet analysis from the field of Library and Information Science. The outcome of this facet analysis is a Faceted Classification Scheme (FCS) for the concept in question, where in most cases a facet corresponds to a classification criterion. The Value Partition, Class As Property Value and Normalisation Ontology Design Patterns (ODPs) are revisited to produce an ontology representation of a FCS. A comparative analysis between a FCS and the Normalisation ODP in particular revealed key similarities between the elements in the generic structure of both knowledge representation paradigms. These similarities make it possible to establish a series of mappings that transform a FCS into an OWL DL ontology containing a valid representation of the classification criteria involved in the characterization of the domain concept. An existing FCS example in the domain of "Dishwasher Detergent" and existing ontology examples in the domains of "Pizza", "Wine" and "Fault" (in the context of a computer system) are used to illustrate the outcome of this research.
43

Lin, Chi-San Althon. "Syntax-driven argument identification and multi-argument classification for semantic role labeling." The University of Waikato, 2007. http://hdl.handle.net/10289/2602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Semantic role labeling is an important stage in systems for Natural Language Understanding. The basic problem is one of identifying who did what to whom for each predicate in a sentence. Labeling is thus a two-step process: identify the constituent phrases that are arguments to a predicate, then label those arguments with appropriate thematic roles. Existing systems for semantic role labeling use machine learning methods to assign roles one at a time to candidate arguments. There are several drawbacks to this general approach. First, more than one candidate can be assigned the same role, which is undesirable. Second, the search for each candidate argument is exponential with respect to the number of words in the sentence. Third, single-role assignment cannot take advantage of dependencies known to exist between the semantic roles of predicate arguments, such as their relative juxtaposition. And fourth, execution times for existing algorithms are excessive, making them unsuitable for real-time use. This thesis seeks to obviate these problems by approaching semantic role labeling as a multi-argument classification process. It observes that the only valid arguments to a predicate are unembedded constituent phrases that do not overlap that predicate. Given that semantic role labeling occurs after parsing, this thesis proposes an algorithm that systematically traverses the parse tree when looking for arguments, thereby eliminating the vast majority of impossible candidates. Moreover, instead of assigning semantic roles one at a time, an algorithm is proposed to assign all labels simultaneously, leveraging dependencies between roles and eliminating the problem of duplicate assignment. Experimental results are provided as evidence that a combination of the proposed argument identification and multi-argument classification algorithms outperforms all existing systems that use the same syntactic information.
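The pruning observation above (valid arguments are unembedded constituents that do not overlap the predicate) can be sketched roughly as follows. The tree encoding and traversal are editorial assumptions for illustration, not the thesis's actual algorithm; the sketch mirrors the common heuristic of collecting siblings of nodes on the predicate-to-root path.

```python
# Hedged sketch: walk up from the predicate word and collect its
# siblings at each level. Those siblings neither overlap nor embed
# the predicate, so they are the surviving candidate arguments.
# Trees are (label, child, child, ...) tuples; leaves are strings.

def candidates(tree, predicate):
    """Return labels of constituents that are siblings of some node
    on the path from the predicate word up to the root."""
    def path_to(node):
        if isinstance(node, str):
            return [node] if node == predicate else None
        for child in node[1:]:
            p = path_to(child)
            if p is not None:
                return [node] + p
        return None

    path = path_to(tree)
    if path is None:
        return []
    found = []
    for parent, on_path in zip(path, path[1:]):
        for child in parent[1:]:
            if child is not on_path and not isinstance(child, str):
                found.append(child[0])
    return found

# "(S (NP the cat) (VP chased (NP the mouse)))", predicate "chased":
tree = ("S", ("NP", "the", "cat"), ("VP", "chased", ("NP", "the", "mouse")))
print(candidates(tree, "chased"))  # ['NP', 'NP']
```

Instead of scoring every span in the sentence, only the handful of constituents on this sibling frontier need be classified, which is the source of the speedup the abstract claims.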
44

Rosenberg, Maria. "La formation agentive en français : les composés [VN/A/Adv/P]N/A et les dérivés V-ant, V-eur et V-oir(e)." Phd thesis, Stockholms universitet, Institutionen för franska, italienska och klassiska språk, 2008. http://tel.archives-ouvertes.fr/tel-00486981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study addresses the French morphological construction [VN/A/Adv/P]N/A. The main objectives are to posit a single rule for its formation and to question the implications of the agent polysemy. The theoretical framework is lexeme-based morphology, which adheres to weak lexicalism. The first part of our analysis is qualitative and concerns the availability aspect of productivity. The method is introspective. The internal semantic patterns of the French construction are examined. Our results give evidence for the claim that a single morphological construction rule, [VN/A/Adv/P]N/A, is responsible for the cases where the first constituent is a verb stem, and the second constituent may correspond to an internal argument, an external argument or a semantic adjunct. All cases manifest the same patterns, which are related to the denotative meanings included in the agent polysemy: Agent, Instrument, Locative, Action, Result and Cause. Our contrastive analysis shows that the same patterns are found in the four Swedish agentive formations, [N/A/Adv/PV-are]N, [N/A/Adv/PV]N, [N/A/Adv/PV-a]N and [VN]N, which correspond to the French [VN/A/Adv/P]N/A construction and which also contain a verbal constituent and its internal or external argument, or an adjunct. The second part of our analysis is quantitative and concerns the profitability aspect of productivity. The method is inductive. The aim is to explore the polysemy of agent and its assumed hierarchical structure, in synchrony and diachrony. Four French agentive formations, [VN/A/Adv/P]N/A compounds and V-ant, V-eur and V-oir(e) derivations, are included in order to examine semantic competition and blocking effects. Our results give evidence for the existence of an agent polysemy but deny that it has a hierarchical structure valid for every agentive formation. The meanings in the agent polysemy are more or less profitable according to formation type: blocking effects could explain this behaviour.
45

Oita, Marilena. "Deriving Semantic Objects from the Structured Web (Inférer des Objects Sémantiques du Web Structuré)." Phd thesis, Telecom ParisTech, 2012. http://tel.archives-ouvertes.fr/tel-00922459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis focuses on the extraction and analysis of Web data objects, investigated from different points of view: temporal, structural, semantic. We first survey different strategies and best practices for deriving temporal aspects of Web pages, together with a more in-depth study of Web feeds for this particular purpose. Next, in the context of Web pages dynamically generated by content management systems, we present two keyword-based techniques that perform article extraction from such pages. Keywords, either automatically acquired through a Tf-Idf analysis or extracted from Web feeds, guide the process of object identification, either at the level of a single Web page (SIGFEED algorithm) or across different pages sharing the same template (FOREST algorithm). We finally present, in the context of the deep Web, a generic framework which aims at discovering the semantic model of a Web object (here, a data record) by, first, using FOREST for the extraction of objects and, second, representing the implicit rdf:type similarities between the object attributes and the entity of the Web interface as relationships that, together with the instances extracted from the objects, form a labeled graph. This graph is further aligned to a generic ontology like YAGO for the discovery of the graph's unknown types and relations.
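The Tf-Idf keyword acquisition step mentioned above can be sketched in a simplified form (data and names are illustrative; the actual SIGFEED and FOREST algorithms are considerably more involved). Across pages sharing a template, boilerplate terms appear in every page and so get a zero idf weight, while article-specific content words score highly.

```python
# Minimal Tf-Idf keyword sketch: terms frequent within one page but
# rare across the page collection get high scores. Template words
# shared by all pages score zero because log(n/df) = 0.
import math
from collections import Counter

def tfidf_keywords(pages, k=3):
    docs = [Counter(p.lower().split()) for p in pages]
    n = len(docs)
    df = Counter(t for d in docs for t in d)   # document frequency
    out = []
    for d in docs:
        total = sum(d.values())
        scores = {t: (c / total) * math.log(n / df[t])
                  for t, c in d.items()}
        out.append(sorted(scores, key=scores.get, reverse=True)[:k])
    return out

# Two toy pages sharing the template words "home news contact":
pages = [
    "home news contact comet discovered near jupiter comet",
    "home news contact election results announced today",
]
kw = tfidf_keywords(pages)
# Content words ("comet", "election") outrank the template words.
```

In an article-extraction setting, such keywords can then anchor the search for the DOM subtree that concentrates them, which is the role they play in the algorithms described above.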
46

DEL, BARRIO DE LA ROSA Florencio. "El régimen de los verbos en español medieval." Doctoral thesis, Biblioteca Virtual Miguel de Cervantes/Universidad de Alicante, 2005. http://hdl.handle.net/10278/22322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Oderfält, Ozelot. "Language change and collocations : A study of collocation patterns and semantic prosody during the Covid-19 pandemic." Thesis, Umeå universitet, Institutionen för språkstudier, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-187058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This essay is a corpus-based, quantitative study of language change during global events such as the Covid-19 pandemic. Global events especially affect the English language, since it is a global language. The essay discusses language change, collocation patterns and semantic prosody in order to compare language use and investigate whether any changes have occurred during the pandemic. These factors are studied because changes in collocation patterns can give words new meaning and possibly also a new semantic prosody. Collocations (two or more words that often go together) and the frequency of 10 sets of words are studied in particular, since these words have been used frequently during the Covid-19 pandemic. The British National Corpus (BNC) and the Coronavirus Corpus (CVC) are used to retrieve information on collocational patterns: the CVC makes it possible to investigate collocations during the pandemic, while the BNC provides a comparison with collocational use before it. This is done using the collocate function in the corpora, examining the collocates within two words on either side of the node. The major findings from the research reported in this essay show that many of the words have acquired additional meaning during the pandemic through their collocations, and that they are most commonly neutral in semantic prosody.
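The corpus collocate function described here (collocates within two words on either side of the node) can be approximated with a minimal sketch on toy data. The sentence below is invented for illustration; the actual study queries the BNC and the Coronavirus Corpus.

```python
# Count collocates of a node word within a symmetric window,
# mimicking the corpora's collocate search on a token list.
from collections import Counter

def collocates(tokens, node, window=2):
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo = max(0, i - window)
            hi = min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:               # skip the node itself
                    counts[tokens[j]] += 1
    return counts

text = ("the spread of the virus slowed as virus testing expanded "
        "and the virus mutated").split()
c = collocates(text, "virus")
# Words inside the +/-2 window of any "virus" occurrence are counted;
# words outside it (e.g. "spread") are not.
```

Ranking such counts (or a strength measure like mutual information built on them) across two corpora is what reveals the shifts in collocational behaviour that the essay reports.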
48

Silva, Marcos Alexandre Rose. "Uma linguagem de padrões semanticamente relacionados para o design de sistemas educacionais que permitam coautoria." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The adequacy of educational content with respect to students' culture, knowledge and values allows them to identify the relationship between what they are learning and their reality, and consequently to feel more interested and engaged in learning. However, in informatics in education, designing educational systems that allow such adequacy is a challenge, both because of a lack of techniques to support the design and because of the difficulty of identifying what to adapt and how to let users adapt it, since many users of these systems, such as educators and students, have no design knowledge. In this context, this work presents the formalization of a design pattern language with successful solutions to recurrent problems in designing co-authorship systems, analyzed and/or experienced by the author of this dissertation during the design and evaluation of such systems at the Advanced Interaction Laboratory (LIA). These patterns intend to support the design of educational systems that allow users, as co-authors, to adapt these systems by inserting the content to be displayed in them. Each pattern describes a specific problem and solution. To support identifying how these patterns are organized with respect to each other, the semantic relations defined by Minsky are adopted to organize them, based on human cognitive structure. Validations with participants of different profiles (e.g., with or without knowledge of concepts related to design, software engineering, human-computer interaction, co-authorship, etc.) were carried out to formalize, refine and observe the comprehension and/or application of these patterns in designing co-authorship system prototypes, and participants from the mathematics and pedagogy areas, as well as teachers, validated the use of these prototypes. The results show that the pattern language is comprehensible and supports design decisions about what to display on the interface, and how, in order to allow and help users insert content.
A adequação no conteúdo educacional de acordo com a cultura, o conhecimento e valores dos alunos permite aos mesmos identificarem relação entre o que estão aprendendo e suas realidades e, consequentemente, se sentirem mais interessados e engajados no aprendizado. Contudo, no contexto da informática na educação, fazer o design de sistemas educacionais para permitir a adequação é um desafio, tanto pela falta de técnicas para apoiar o design, quanto pela dificuldade em identificar o que adequar e como permitir e facilitar essa adequação, pois muitos dos usuários desses sistemas, como educadores e alunos, não têm conhecimento e experiência com design de soluções computacionais. Nesse contexto, neste trabalho é apresentada a formalização de uma linguagem de padrões de design com soluções de sucesso para problemas recorrentes no design de sistemas de coautoria, observadas e/ou experienciadas pelo proponente deste trabalho ao analisar esses sistemas e participar do processo de desenvolvimento e avaliação desses sistemas no Laboratório de Interação Avançada (LIA). Esses padrões têm como objetivo apoiar o design de sistemas educacionais que permitam aos usuários, como coautores, terem apoio para adequar os sistemas, inserindo o conteúdo que será exibido nos mesmos. Cada padrão de design se refere a um par problema-solução específico e, para apoiar a identificação e compreensão de como os padrões estão relacionados entre si, formando uma linguagem de padrões, são adotadas as relações semânticas definidas por Minsky para organizá-los e expressar o relacionamento entre eles de uma forma próxima à estrutura cognitiva humana.
Validações com diferentes perfis de participantes, por exemplo, com e sem conhecimento sobre conceitos relacionados à Engenharia de Software, Interação Humano-Computador, Coautoria, etc., foram feitas para formalizar, refinar e observar a compreensão e/ou o uso dos padrões no design de protótipos de sistemas educacionais, bem como participantes das áreas de matemática ou pedagogia e educadores para validar o uso desses protótipos. Os resultados mostram que a linguagem de padrões de design semanticamente relacionados é compreendida e apoia o design para definir o que e como exibir nas interfaces dos sistemas para permitir e auxiliar os usuários na inserção do conteúdo.
49

DI, TUCCI DONATELLA. "Reading units in Italian children: evidence from morphological, orthographic and semantic features on word reading process." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/169025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis investigates the morphological, orthographic and semantic features affecting the word reading process in Italian primary-school children, and at the same time the different reading units that young readers are able to rely on. Chapter 1 offers an overview of reading models and of studies showing a complex scenario of results. In Chapter 2, a pseudoword reading task was carried out in order to provide evidence of lexical reading in Italian children that can be based on whole-word representations. Chapter 3 presents a morphology-oriented coding scheme for reading errors made by Italian children when reading morphologically complex words. This analysis showed children's reliance on morphemic structure when reading morphologically complex words, and their ability to use morphemes as intermediate grain-size reading units. Chapters 4 and 5 present a new measure, Orthography-Semantics Consistency (OSC), which quantifies the consistency of the orthographic and semantic information carried by a word, starting from the hypothesis that orthographic-semantic associations, even when not morpheme-mediated, play a crucial role in the word reading process over and above morpheme units. To validate the OSC measure from a developmental point of view, a morphological masked priming task and a simple lexical decision task were performed first by a group of English children, as the OSC measure had previously been validated on English-language data only (Chapter 4), and then by a group of Italian children (Chapter 5).
50

PALA, FEDERICO. "Re-identification and semantic retrieval of pedestrians in video surveillance scenarios." Doctoral thesis, Università degli Studi di Cagliari, 2016. http://hdl.handle.net/11584/266625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Person re-identification consists of recognizing individuals across different sensors of a camera network. Whereas clothing appearance cues are widely used, other modalities could be exploited as additional information sources, such as anthropometric measures and gait. In this work we investigate whether the re-identification accuracy of clothing appearance descriptors can be improved by fusing them with anthropometric measures extracted from depth data, using RGB-D sensors, in unconstrained settings. We also propose a dissimilarity-based framework for building and fusing multi-modal descriptors of pedestrian images for re-identification tasks, as an alternative to the widely used score-level fusion. The experimental evaluation is carried out on two data sets including RGB-D data, one of which is a novel, publicly available data set that we acquired using Kinect sensors. In this dissertation we also consider a related task, named semantic retrieval of pedestrians in video surveillance scenarios, which consists of searching images of individuals using a textual description of clothing appearance as a query, given by a Boolean combination of predefined attributes. This can be useful in applications like forensic video analysis, where the query can be obtained from an eyewitness report. We propose a general method for implementing semantic retrieval as an extension of a given re-identification system that uses any multiple-part, multiple-component appearance descriptor. Additionally, we investigate deep learning techniques to improve both the accuracy of attribute detectors and their generalization capabilities. Finally, we experimentally evaluate our methods on several benchmark datasets originally built for re-identification tasks.
