Theses on the topic "ID. Knowledge representation"

Consult the 23 best theses for your research on the topic "ID. Knowledge representation".

Next to every source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Khor, Sebastian Wankun. « A fuzzy knowledge map framework for knowledge representation ». PhD thesis, Murdoch University, 2007. https://researchrepository.murdoch.edu.au/id/eprint/129/.

Full text
Abstract:
Cognitive Maps (CMs) have shown promise as tools for modelling and simulation of knowledge in computers as representations of real objects, concepts, perceptions or events and their relations. This thesis examines the application of fuzzy theory to the expression of these relations, and investigates the development of a framework to better manage the operations of these relations. The Fuzzy Cognitive Map (FCM) was introduced in 1986 but little progress has been made since. This is because of the difficulty of modifying or extending its reasoning mechanism from causality to relations other than causality, such as associative and deductive reasoning. The ability to express the complex relations between objects and concepts determines the usefulness of the maps. Structuring these concepts and relations in a model so that they can be consistently represented and quickly accessed and manipulated by a computer is the goal of knowledge representation. This forms the main motivation of this research. In this thesis, a novel framework is proposed whereby single-antecedent fuzzy rules can be applied to a directed graph, and reasoning ability is extended to include non-causality. The framework provides a hierarchical structure where a graph in a higher layer represents knowledge at a high level of abstraction, and graphs in a lower layer represent the knowledge in more detail. The framework allows a modular design of knowledge representation and facilitates the creation of a more complex structure for modelling and reasoning. The experiments conducted in this thesis show that the proposed framework is effective and useful for deriving inferences from input data, solving certain classification problems, and for prediction and decision-making.
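The classical FCM update that this line of work extends is simple to state: each concept's next activation is a thresholded, weighted sum of the activations of the concepts that influence it. Below is a minimal sketch of this standard Kosko-style iteration, with invented concepts and weights; the thesis's single-antecedent fuzzy rules and hierarchical layers are not modelled here.

```python
import numpy as np

def fcm_step(state, weights, steepness=1.0):
    """One synchronous FCM update: every concept's new activation is
    the sigmoid-squashed weighted sum of its predecessors' activations."""
    net = weights.T @ state            # weights[i, j] = influence of concept i on concept j
    return 1.0 / (1.0 + np.exp(-steepness * net))

# Invented toy map with three concepts: demand, production, inventory.
W = np.array([[ 0.0, 0.8, 0.0],      # demand     -> production (positive causality)
              [ 0.0, 0.0, 0.7],      # production -> inventory  (positive causality)
              [-0.5, 0.0, 0.0]])     # inventory  -> demand     (negative causality)

state = np.array([0.9, 0.1, 0.1])    # initial activations in [0, 1]
for _ in range(20):                  # iterate toward a fixed point or limit cycle
    state = fcm_step(state, W)
print(state.round(3))
```

Whatever attractor the iteration reaches (fixed point, limit cycle, or chaotic orbit) is what the map "infers" from the initial activation.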
2

Grau, Ron. « The acquisition and representation of knowledge about complex multi-dynamic processes ». Thesis, University of Sussex, 2009. http://sro.sussex.ac.uk/id/eprint/15370/.

Full text
Abstract:
This thesis is concerned with the acquisition, representation, modelling and discovery of knowledge in ill-structured domains. In the context of this work, these are referred to as domains that involve "complex multi-dynamic (CMD) processes". A CMD process is an abstract concept for thinking about combinations of different processes where any specification and explanation involves large amounts of heterogeneous knowledge. Due to manifold cognitive and representational problems, this particular knowledge is currently hard to acquire from experts and difficult to integrate in process models. The thesis focuses on two problems in the context of modelling, discovery and design of CMD processes, a knowledge representation problem and a knowledge acquisition problem. The thesis outlines a solution by drawing together different theoretical and technological developments related to the fields of Artificial Intelligence, Cognitive Science and Computer Science, including research on computational models of scientific discovery, process modelling, and representation design. An integrative framework of knowledge representations and acquisition methods has been established, underpinning a general paradigm of CMD processes. The framework takes a compositional, collaborative approach to knowledge acquisition by providing methods for the decomposition of complex process combinations into systems of process fragments and the localisation of structural change, process behaviour and function within these systems. Diagrammatic representations play an important role, as they provide a range of representational, cognitive and computational properties that are particularly useful for meeting many of the difficulties that CMD processes pose. The research has been applied to Industrial Bakery Product Manufacturing, a challenging domain that involves a variety of physical, chemical and biochemical process combinations. A software prototype (CMD SUITE) has been implemented that integrates the developed theoretical framework to create novel, interactive knowledge-based tools which are aimed towards ill-structured domains of knowledge. The utility of the software workbench and its underlying CMD Framework has been demonstrated in a case study. The bakery experts collaborating in this project were able to successfully utilise the software tools to express and integrate their knowledge in a new way, while overcoming limits of previously used models and tools.
3

Matikainen, Tiina Johanna. « Semantic Representation of L2 Lexicon in Japanese University Students ». Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/133319.

Full text
Abstract:
CITE/Language Arts
Ed.D.
In a series of studies using semantic relatedness judgment response times, Jiang (2000, 2002, 2004a) has claimed that L2 lexical entries fossilize with their equivalent L1 content or something very close to it. In another study using a more productive test of lexical knowledge (Jiang 2004b), however, the evidence for this conclusion was less clear. The present study is a partial replication of Jiang (2004b) with Japanese learners of English. The aims of the study are to investigate the influence of the first language (L1) on second language (L2) lexical knowledge, to investigate whether lexical knowledge displays frequency-related, emergent properties, and to investigate the influence of the L1 on the acquisition of L2 word pairs that have a common L1 equivalent. Data were collected from a sentence completion task completed by 244 participants, who were shown sentence contexts in which they chose between L2 word pairs sharing a common equivalent in the students' first language, Japanese. The data were analyzed using the statistical analyses available in the programming environment R to quantify the participants' ability to discriminate between synonymous and non-synonymous use of these L2 word pairs. The results showed a strong bias against synonymy for all word pairs; the participants tended to make a distinction between the two synonymous items by assigning each word a distinct meaning. With the non-synonymous items, lemma frequency was closely related to the participants' success in choosing the correct word in the word pair. In addition, lemma frequency and the degree of similarity between the words in the word pair were closely related to the participants' overall knowledge of the non-synonymous meanings of the vocabulary items. The results suggest that the participants had a stronger preference for non-synonymous options than for the synonymous option. This suggests that the learners might have adopted a one-word, one-meaning learning strategy (Willis, 1998). The reasonably strong relationship between several of the usage-based statistics and the item measures from R suggests that with exposure learners are better able to use words in ways that are similar to native speakers of English, to differentiate between appropriate and inappropriate contexts and to recognize the boundary separating semantic overlap and semantic uniqueness. Lexical similarity appears to play a secondary role, in combination with frequency, in learners' ability to differentiate between appropriate and inappropriate contexts when using L2 word pairs that have a single translation in the L1.
Temple University--Theses
4

Glinos, Demetrios. « SYNTAX-BASED CONCEPT EXTRACTION FOR QUESTION ANSWERING ». Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3565.

Full text
Abstract:
Question answering (QA) stands squarely along the path from document retrieval to text understanding. As an area of research interest, it serves as a proving ground where strategies for document processing, knowledge representation, question analysis, and answer extraction may be evaluated in real world information extraction contexts. The task is to go beyond the representation of text documents as "bags of words" or data blobs that can be scanned for keyword combinations and word collocations in the manner of internet search engines. Instead, the goal is to recognize and extract the semantic content of the text, and to organize it in a manner that supports reasoning about the concepts represented. The issue presented is how to obtain and query such a structure without either a predefined set of concepts or a predefined set of relationships among concepts. This research investigates a means for acquiring from text documents both the underlying concepts and their interrelationships. Specifically, a syntax-based formalism for representing atomic propositions that are extracted from text documents is presented, together with a method for constructing a network of concept nodes for indexing such logical forms based on the discourse entities they contain. It is shown that meaningful questions can be decomposed into Boolean combinations of question patterns using the same formalism, with free variables representing the desired answers. It is further shown that this formalism can be used for robust question answering using the concept network and WordNet synonym, hypernym, hyponym, and antonym relationships. This formalism was implemented in the Semantic Extractor (SEMEX) research tool and was tested against the factoid questions from the 2005 Text Retrieval Conference (TREC), which operated upon the AQUAINT corpus of newswire documents. After adjusting for the limitations of the tool and the document set, correct answers were found for approximately fifty percent of the questions analyzed, which compares favorably with other question answering systems.
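The four WordNet relationships used for robust matching are available in off-the-shelf libraries. Here is a minimal sketch, assuming NLTK with its WordNet corpus downloaded, of expanding a single term along those relations; the function and the example word are invented for illustration, and SEMEX's actual matching operates over logical forms, not bare words.

```python
from nltk.corpus import wordnet as wn   # assumes: pip install nltk; nltk.download('wordnet')

def related_terms(word):
    """Expand a word along WordNet synonym, hypernym, hyponym and
    antonym links, the four relations mentioned in the abstract."""
    terms = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            terms.add(lemma.name())                               # synonyms
            terms.update(ant.name() for ant in lemma.antonyms())  # antonyms
        for hyper in synset.hypernyms():                          # more general terms
            terms.update(l.name() for l in hyper.lemmas())
        for hypo in synset.hyponyms():                            # more specific terms
            terms.update(l.name() for l in hypo.lemmas())
    return terms

print(sorted(related_terms('vehicle'))[:10])
```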
Ph.D.
School of Computer Science
Engineering and Computer Science
Computer Science
5

Rudolph, Sebastian. « Relational Exploration: Combining Description Logics and Formal Concept Analysis for Knowledge Specification ». Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A25002.

Full text
Abstract:
Facing the growing amount of information in today's society, the task of specifying human knowledge in a way that can be unambiguously processed by computers becomes more and more important. Two acknowledged fields in this evolving scientific area of Knowledge Representation are Description Logics (DL) and Formal Concept Analysis (FCA). While DL concentrates on characterizing domains via logical statements and inferring knowledge from these characterizations, FCA builds conceptual hierarchies on the basis of present data. This work introduces Relational Exploration, a method for acquiring complete relational knowledge about a domain of interest by successively consulting a domain expert without ever asking redundant questions. This is achieved by combining DL and FCA: DL formalisms are used for defining FCA attributes while FCA exploration techniques are deployed to obtain or refine DL knowledge specifications.
6

Turhan, Anni-Yasmin. « On the Computation of Common Subsumers in Description Logics ». Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23919.

Full text
Abstract:
Description logic (DL) knowledge bases are often built by users with expertise in the application domain, but little expertise in logic. To support such users when building their knowledge bases, a number of extension methods have been proposed to provide the user with concept descriptions as a starting point for new concept definitions. The inference service central to several of these approaches is the computation of (least) common subsumers of concept descriptions. In case disjunction of concepts can be expressed in the DL under consideration, the least common subsumer (lcs) is just the disjunction of the input concepts. Such a trivial lcs is of little use as a starting point for a new concept definition to be edited by the user. To address this problem we propose two approaches to obtain "meaningful" common subsumers in the presence of disjunction, tailored to two different methods to extend DL knowledge bases. More precisely, we devise computation methods for the approximation-based approach and the customization of DL knowledge bases, extend these methods to DLs with number restrictions, and discuss their efficient implementation.
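The triviality observation can be stated compactly: whenever the DL at hand can express disjunction, the least common subsumer is, up to equivalence,

```latex
\operatorname{lcs}(C_1,\dots,C_n) \;\equiv\; C_1 \sqcup \cdots \sqcup C_n ,
```

since the disjunction subsumes each input concept and is itself subsumed by every other common subsumer. This is exactly the uninformative lcs that the two proposed computation methods are designed to avoid.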
7

Münnich, Stefan. « Ontologien als semantische Zündstufe für die digitale Musikwissenschaft? » De Gruyter, Berlin / Boston, 2018. https://slub.qucosa.de/id/qucosa%3A36849.

Full text
Abstract:
Ontologies play a crucial role for the formalised representation of knowledge and information as well as for the infrastructure of the semantic web. Despite early initiatives that were driven by libraries and memory institutions, German musicology as a whole has turned very slowly to the subject. In an overview the author addresses basic concepts, challenges, and approaches for ontology design and identifies models and use cases with promising applications for a 'semantic' digital musicology.
8

Baader, Franz, and Adrian Nuradiansyah. « Mixing Description Logics in Privacy-Preserving Ontology Publishing ». Springer, 2019. https://tud.qucosa.de/id/qucosa%3A75565.

Full text
Abstract:
In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an EL instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL EL. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given EL concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker’s knowledge is given by an FL0 or an FLE concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of FL0, it turns out to be actually lower (polynomial) for the more expressive DL FLE.
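For intuition about the EL setting: subsumption between EL concept descriptions without a background TBox can be decided by a well-known structural test, where C is subsumed by D exactly when the description tree of D maps homomorphically into that of C. A minimal sketch with invented toy concepts follows; it is an illustration only, and the paper's compliance and safety computations for FL0 and FLE attackers are considerably more involved.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ELConcept:
    """An EL concept: a conjunction of concept names and
    existential restrictions (role, filler concept)."""
    names: frozenset = frozenset()
    succs: tuple = ()

def subsumed_by(c, d):
    """Decide c ⊑ d for EL concepts without a TBox via the
    homomorphism criterion: every part of d must be matched in c."""
    if not d.names <= c.names:
        return False
    return all(any(r2 == r and subsumed_by(c2, d2) for r2, c2 in c.succs)
               for r, d2 in d.succs)

# Toy check:  Human ⊓ ∃hasChild.Doctor  ⊑  ∃hasChild.⊤
c = ELConcept(frozenset({'Human'}), (('hasChild', ELConcept(frozenset({'Doctor'}))),))
d = ELConcept(succs=(('hasChild', ELConcept()),))
print(subsumed_by(c, d))   # True
```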
9

Hladik, Jan. « To and Fro Between Tableaus and Automata for Description Logics ». Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A24073.

Full text
Abstract:
Description Logics (DLs) are a family of knowledge representation languages with well-defined logic-based semantics and decidable inference problems, e.g. satisfiability. Two of the most widely used decision procedures for the satisfiability problem are tableau- and automata-based algorithms. Due to their different operation, these two classes have complementary properties: tableau algorithms are well-suited for implementation and for showing PSPACE and NEXPTIME complexity results, whereas automata algorithms are particularly useful for showing EXPTIME results. Additionally, they allow for an elegant handling of infinite structures, but they are not suited for implementation. The aim of this thesis is to analyse the reasons for these differences and to find ways of transferring properties between the two approaches in order to reconcile the positive properties of both. For this purpose, we develop methods that enable us to show PSPACE results with the help of automata and to automatically derive an EXPTIME result from a tableau algorithm.
10

Steffen, Johann. « VIKA - Konzeptstudien eines virtuellen Konstruktionsberaters für additiv zu fertigende Flugzeugstrukturbauteile ». Thelem Universitätsverlag & Buchhandlung GmbH & Co. KG, 2021. https://tud.qucosa.de/id/qucosa%3A75869.

Full text
Abstract:
The subject of this work is the conceptual development of a virtual application that enables users in aircraft structure design, in the context of additive manufacturing, to make important decisions for the part development process interactively and intuitively. Depending on the use case, the application should be able to adapt the information it provides to the particular requirements and needs of the user.
11

Verbancsics, Phillip. « Effective task transfer through indirect encoding ». Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4716.

Full text
Abstract:
An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Often approaches to task transfer focus on transforming the original representation to fit the new task. Such representational transformations are necessary because the target task often requires new state information that was not included in the original representation. In RoboCup Keepaway, changing from the 3 vs. 2 variant of the task to 4 vs. 3 adds state information for each of the new players. In contrast, this dissertation explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To this end, (1) the bird's eye view (BEV) representation is introduced, which can represent different tasks on the same two-dimensional map. Because the BEV represents state information associated with positions instead of objects, it can be scaled to more objects without manipulation. In this way, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV, which is (2) demonstrated in this dissertation. Yet a challenge for such representation is that a raw two-dimensional map is high-dimensional and unstructured. This dissertation demonstrates how this problem is addressed naturally by the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach. HyperNEAT evolves an indirect encoding, which compresses the representation by exploiting its geometry. The dissertation then explores further exploiting the power of such encoding, beginning by (3) enhancing the configuration of the BEV with a focus on modularity. The need for further nonlinearity is then (4) investigated through the addition of hidden nodes. Furthermore, (5) the size of the BEV can be manipulated because it is indirectly encoded. Thus the resolution of the BEV, which is dictated by its size, is increased in precision and culminates in a HyperNEAT extension that is expressed at effectively infinite resolution. Additionally, scaling to higher resolutions through gradually increasing the size of the BEV is explored. Finally, (6) the ambitious problem of scaling from the Keepaway task to the Half-field Offense task is investigated with the BEV. Overall, this dissertation demonstrates that advanced representations in conjunction with indirect encoding can contribute to scaling learning techniques to more challenging tasks, such as the Half-field Offense RoboCup soccer domain.
ID: 030646258; Thesis (Ph.D.)--University of Central Florida, 2011; includes bibliographical references (p. 144-152).
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
12

Jin, Yi. « Belief Change in Reasoning Agents: Axiomatizations, Semantics and Computations ». Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A24983.

Full text
Abstract:
The capability of changing beliefs upon new information in a rational and efficient way is crucial for an intelligent agent. Belief change has therefore been one of the central research fields in Artificial Intelligence (AI) for over two decades. In the AI literature, two different kinds of belief change operations have been intensively investigated: belief update, which deals with situations where the new information describes changes of the world; and belief revision, which assumes the world is static. As another important research area in AI, reasoning about actions mainly studies the problem of representing and reasoning about effects of actions. These two research fields are closely related and apply a common underlying principle, that is, an agent should change its beliefs (knowledge) as little as possible whenever an adjustment is necessary. This opens the possibility of reusing the ideas and results of one field in the other, and vice versa. This thesis aims to develop a general framework and devise computational models that are applicable in reasoning about actions. Firstly, I propose a new framework for iterated belief revision by introducing a new postulate to the existing AGM/DP postulates, which provides general criteria for the design of iterated revision operators. Secondly, based on the new framework, a concrete iterated revision operator is devised. The semantic model of the operator gives nice intuitions and helps to show its satisfaction of desirable postulates. I also show that the computational model of the operator is almost optimal in time and space complexity. In order to deal with the belief change problem in multi-agent systems, I introduce a concept of mutual belief revision which is concerned with information exchange among agents. A concrete mutual revision operator is devised by generalizing the iterated revision operator. Likewise, a semantic model is used to show the intuition and many nice properties of the mutual revision operator, and the complexity of its computational model is formally analyzed. Finally, I present a belief update operator, which takes into account two important problems of reasoning about action, i.e., disjunctive updates and domain constraints. Again, the update operator is presented with both a semantic model and a computational model.
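To make the minimal-change principle concrete, here is a sketch of the classical one-step, model-based revision in the style of Dalal: among the models of the new information, keep those at minimal Hamming distance from the models of the old beliefs. It is a textbook operator given only as intuition, not the iterated or mutual revision operators devised in the thesis, and it assumes both formulas are satisfiable.

```python
from itertools import product

def models(formula, atoms):
    """All truth assignments over `atoms` satisfying `formula`
    (a Python predicate over an assignment dict)."""
    return [dict(zip(atoms, bits))
            for bits in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, bits)))]

def dalal_revise(belief, new_info, atoms):
    """Keep the models of new_info at minimal Hamming distance
    to the models of belief (classical one-step revision)."""
    old, new = models(belief, atoms), models(new_info, atoms)
    dist = lambda m: min(sum(m[a] != o[a] for a in atoms) for o in old)
    best = min(dist(m) for m in new)
    return [m for m in new if dist(m) == best]

atoms = ['p', 'q']
# Believe p AND q, then learn NOT p: minimal change keeps q.
print(dalal_revise(lambda v: v['p'] and v['q'], lambda v: not v['p'], atoms))
# -> [{'p': False, 'q': True}]
```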
13

Kursun, Olcay. « SINBAD AUTOMATION OF SCIENTIFIC PROCESS: FROM HIDDEN FACTOR ANALYSIS TO THEORY SYNTHESIS ». Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4467.

Full text
Abstract:
Modern science is turning to progressively more complex and data-rich subjects, which challenges the existing methods of data analysis and interpretation. Consequently, there is a pressing need for development of ever more powerful methods of extracting order from complex data and for automation of all steps of the scientific process. Virtual Scientist is a set of computational procedures that automate the method of inductive inference to derive a theory from observational data dominated by nonlinear regularities. The procedures utilize SINBAD - a novel computational method of nonlinear factor analysis that is based on the principle of maximization of mutual information among non-overlapping sources (Imax), yielding higher-order features of the data that reveal hidden causal factors controlling the observed phenomena. One major advantage of this approach is that it is not dependent on a particular choice of learning algorithm to use for the computations. The procedures build a theory of the studied subject by finding inferentially useful hidden factors, learning interdependencies among its variables, reconstructing its functional organization, and describing it by a concise graph of inferential relations among its variables. The graph is a quantitative model of the studied subject, capable of performing elaborate deductive inferences and explaining behaviors of the observed variables by behaviors of other such variables and discovered hidden factors. The set of Virtual Scientist procedures is a powerful analytical and theory-building tool designed to be used in research of complex scientific problems characterized by multivariate and nonlinear relations.
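The Imax principle referred to above maximizes the mutual information between the outputs X and Y that separate modules compute from non-overlapping parts of the input,

```latex
I(X;Y) \;=\; H(X) + H(Y) - H(X,Y)
       \;=\; \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} ,
```

so the modules are pushed to agree on exactly the information that is common to their separate inputs, i.e. the hidden factor controlling both.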
Ph.D.
School of Computer Science
Engineering and Computer Science
Computer Science
14

Rosen, Michael. « COLLABORATIVE PROBLEM SOLVING: THE ROLE OF TEAM KNOWLEDGE BUILDING PROCESSES AND EXTERNAL REPRESENTATIONS ». Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2727.

Full text
Abstract:
This dissertation evaluates the relationship between five team knowledge building processes (i.e., information exchange, knowledge sharing, option generation, evaluation of alternatives, and regulation), the external representations constructed by a team during a performance episode, and performance outcomes in a problem solving task. In a broad range of domains such as the military and healthcare, team-based work structures are used to solve complex problems; however, the bulk of research on teamwork to date has dealt with behavioral coordination in routine tasks. This leaves a gap in the theory available for developing interventions to support collaborative problem solving, or knowledge-based performance, in teams. Sixty-nine three-person teams participated in a strategic planning simulation using a collaborative map. Content analysis was applied to team communications and the external representations team members created using the collaborative tool. Regression and multi-way frequency analyses were used to test hypotheses about the relationship between the amount and sequence of team process behaviors, respectively, and team performance outcomes. Additionally, the moderating effects of external representation quality were evaluated. All five team knowledge building processes were significantly related to outcomes, but only one (i.e., knowledge sharing) in the simple, positive, and linear way hypothesized. Information exchange was negatively related to outcomes after controlling for the amount of acknowledgements team members made. Option generation and evaluation interacted to predict outcomes such that higher levels of evaluation were more beneficial to teams with higher levels of option generation. Regulation processes exhibited a negative curvilinear relationship with outcomes such that high and low performing teams engaged in less regulation than did moderately performing teams. External representation quality moderated a composite team knowledge building process variable such that better external representations were more beneficial for teams with poorer-quality processes than for teams with high-quality processes. Additionally, there were significant differences in the sequence of team knowledge building processes between high and low performing teams as well as between groups based on high and low levels of external representation quality. The team knowledge building process framework is useful for understanding complex collaborative problem solving. However, these processes predict performance outcomes in complex and inter-related ways. Further implications for theories of team performance and applications for training, designing performance support tools, and team performance measurement are discussed.
Ph.D.
Department of Psychology
Sciences
Psychology PhD
15

Koski, Jessica Elizabeth. « The Neural Representations of Social Status: An MVPA Study ». Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/339639.

Full text
Abstract:
Psychology
Ph.D.
Status is a salient social cue, to the extent that it shapes our attention, judgment, and memory for other people, and it guides our social interactions. While prior work has addressed the traits associated with status, as well as its effects on cognition and behavior, research on the neural mechanisms of status perception is still relatively sparse and predominantly focused on neural activity during explicit status judgments. Further, there is no research looking at the involvement of person-processing networks in status perception, or how we embed status information in our representations of others. In the present study I asked whether person-specific representations in ventral face-processing regions (occipital face area (OFA), fusiform face area (FFA)) as well as more anterior regions (anterior temporal lobe (ATL) and orbitofrontal cortex (OFC)) contain information about a person’s status, and whether regions involved in affective processing and reward (amygdala, ventral striatum) decode status information as well. Participants learned to associate names, career titles, and reputational status information (high versus low ratings) with objects and faces over a two-day training regimen. Object status served as a nonsocial comparison. Trained stimuli were presented in an fMRI experiment, where participants performed a target detection task unrelated to status. MVPA revealed that face and object sensitive regions in the ATLs and lateral OFC decoded face and object status, respectively. These data suggest that regions sensitive to abstract person knowledge and valuation interact during the perception of social status, potentially contributing to the effects of status on social perception.
Temple University--Theses
16

Straß, Hannes. « Abstract Dialectical Frameworks – An Analysis of Their Properties and Role in Knowledge Representation and Reasoning ». 2016. https://ul.qucosa.de/id/qucosa%3A16720.

Full text
Abstract:
Abstract dialectical frameworks (ADFs) are a formalism for representing knowledge about abstract arguments and various logical relationships between them. This work studies ADFs in detail. Firstly, we use the framework of approximation fixpoint theory to define, also for ADFs, various semantics that are known from related knowledge representation formalisms. We then analyse the computational complexity of a variety of reasoning problems related to ADFs. Afterwards, we also analyse the formal expressiveness in terms of realisable sets of interpretations and show how ADFs fare in comparison to other formalisms. Finally, we show how ADFs can be put to use in instantiated argumentation, where researchers try to assign meaning to sets of defeasible and strict rules. The main outcomes of our work show that in particular the sublanguage of bipolar ADFs is a useful knowledge representation formalism with meaningful representational capabilities and acceptable computational properties.
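In an ADF, every statement comes with an acceptance condition, a Boolean function of its parent statements, and under the simplest two-valued reading a model is an interpretation that agrees with all acceptance conditions. A brute-force sketch follows (exponential enumeration, for illustration only; the thesis defines its semantics via approximation fixpoint theory, and the example framework is invented).

```python
from itertools import product

def two_valued_models(statements, condition):
    """Yield interpretations v with v(s) == condition[s](v) for all s,
    i.e. the two-valued models of the ADF."""
    for bits in product([False, True], repeat=len(statements)):
        v = dict(zip(statements, bits))
        if all(condition[s](v) == v[s] for s in statements):
            yield v

# Invented example: a and b attack each other, a supports c.
statements = ['a', 'b', 'c']
condition = {
    'a': lambda v: not v['b'],   # a is accepted iff b is rejected
    'b': lambda v: not v['a'],   # b is accepted iff a is rejected
    'c': lambda v: v['a'],       # c is accepted iff a is accepted
}
for m in two_valued_models(statements, condition):
    print(m)                     # two models: {a, c} accepted, or {b} accepted
```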
17

Baumann, Ringo. « On the Existence of Characterization Logics and Fundamental Properties of Argumentation Semantics ». 2019. https://ul.qucosa.de/id/qucosa%3A36595.

Full text
Abstract:
Given the large variety of existing logical formalisms it is of utmost importance to select the most adequate one for a specific purpose, e.g. for representing the knowledge relevant for a particular application or for using the formalism as a modeling tool for problem solving. Awareness of the nature of a logical formalism, in other words, of its fundamental intrinsic properties, is indispensable and provides the basis of an informed choice. One such intrinsic property of logic-based knowledge representation languages is the context-dependency of pieces of knowledge. In classical propositional logic, for example, there is no such context-dependence: whenever two sets of formulas are equivalent in the sense of having the same models (ordinary equivalence), then they are mutually replaceable in arbitrary contexts (strong equivalence). However, a large number of commonly used formalisms are not like classical logic, which leads to a series of interesting developments. It turned out that sometimes, to characterize strong equivalence in formalism L, we can use ordinary equivalence in formalism L0: for example, strong equivalence in normal logic programs under stable models can be characterized by the standard semantics of the logic of here-and-there. Such results about the existence of characterizing logics have rightly been recognized as important for the study of concrete knowledge representation formalisms and raise a fundamental question: Does every formalism have one? In this thesis, we answer this question with a qualified "yes". More precisely, we show that the important case of considering only finite knowledge bases guarantees the existence of a canonical characterizing formalism. Furthermore, we argue that those characterizing formalisms can be seen as classical, monotonic logics which are uniquely determined (up to isomorphism) regarding their model theory. The other main part of this thesis is devoted to argumentation semantics, which play the flagship role in Dung's abstract argumentation theory. Almost all of them are motivated by an easily understandable intuition of what should be acceptable in the light of conflicts. However, although these intuitions equip us with short and comprehensible formal definitions, it turned out that their intrinsic properties such as existence and uniqueness, expressibility, replaceability and verifiability are not that easily accessible. We review the mentioned properties for almost all semantics available in the literature. In doing so we include two main axes: namely first, the distinction between extension-based and labelling-based versions, and second, the distinction between different kinds of argumentation frameworks such as finite or unrestricted ones.
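The context-dependence at issue can be made precise with two notions of equivalence over knowledge bases F and G of a formalism with model semantics Mod:

```latex
F \equiv_{\mathrm{ord}} G \;\Longleftrightarrow\; \mathrm{Mod}(F) = \mathrm{Mod}(G), \qquad
F \equiv_{\mathrm{strong}} G \;\Longleftrightarrow\; \mathrm{Mod}(F \cup H) = \mathrm{Mod}(G \cup H)\ \text{for every context } H .
```

In classical propositional logic the two notions coincide, whereas in, e.g., normal logic programs under stable models strong equivalence is strictly finer; characterizing logics close exactly this gap.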
18

Distel, Felix. « Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis ». Doctoral thesis, 2010. https://tud.qucosa.de/id/qucosa%3A25605.

Full text
Abstract:
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on similar formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure Algorithm and Attribute-Exploration. The first of the three methods computes terminological knowledge from the data, without any expert interaction. The two other methods use expert interaction where a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
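The Next-Closure algorithm underlying these FCA-based methods enumerates all closed sets of a closure operator in lectic order, which is what lets exploration proceed without redundant questions. Below is a compact sketch, assuming the closure operator is supplied as a function on (frozen) sets; the toy operator closing sets under the single implication {0} -> {2} is invented for illustration.

```python
def next_closure(A, n, closure):
    """Ganter's Next-Closure: the lectically next closed subset of
    {0, ..., n-1} after the closed set A, or None after the last one."""
    for i in reversed(range(n)):
        if i in A:
            A = A - {i}
        else:
            B = closure(A | {i})
            if min(B - A) >= i:     # closure added no element smaller than i
                return B
    return None

def all_closed_sets(n, closure):
    A = closure(frozenset())
    while A is not None:
        yield A
        A = next_closure(A, n, closure)

# Toy closure operator: close under the single implication {0} -> {2}.
def close(S):
    S = frozenset(S)
    return S | {2} if 0 in S else S

print([sorted(s) for s in all_closed_sets(3, close)])
# [[], [2], [1], [1, 2], [0, 2], [0, 1, 2]]
```

Attribute exploration wraps this enumeration in an expert loop: each candidate implication is either confirmed and added to the knowledge base or refuted by a counterexample, which in turn changes the closure operator.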
19

Straß, Hannes, and Johannes Peter Wallner. « Analyzing the Computational Complexity of Abstract Dialectical Frameworks via Approximation Fixpoint Theory ». 2013. https://ul.qucosa.de/id/qucosa%3A12226.

Full text
Abstract:
Abstract dialectical frameworks (ADFs) have recently been proposed as a versatile generalization of Dung's abstract argumentation frameworks (AFs). In this paper, we present a comprehensive analysis of the computational complexity of ADFs. Our results show that while ADFs are one level up in the polynomial hierarchy compared to AFs, there is a useful subclass of ADFs which is as complex as AFs while arguably offering more modeling capacities. As a technical vehicle, we employ the approximation fixpoint theory of Denecker, Marek and Truszczyński, thus showing that it is also a useful tool for complexity analysis of operator-based semantics.
20

Drescher, Conrad. « Action Logic Programs: How to Specify Strategic Behavior in Dynamic Domains Using Logical Rules ». Doctoral thesis, 2009. https://tud.qucosa.de/id/qucosa%3A25570.

Full text
Abstract:
We discuss a new concept of agent programs that combines logic programming with reasoning about actions. These agent logic programs are characterized by a clear separation between the specification of the agent’s strategic behavior and the underlying theory about the agent’s actions and their effects. This makes it a generic, declarative agent programming language, which can be combined with an action representation formalism of one’s choice. We present a declarative semantics for agent logic programs along with (two versions of) a sound and complete operational semantics, which combines the standard inference mechanisms for (constraint) logic programs with reasoning about actions.
21

Ellmauthaler, Stefan. « Multi-Context Reasoning in Continuous Data-Flow Environments ». 2018. https://ul.qucosa.de/id/qucosa%3A21457.

Full text
Abstract:
In the field of artificial intelligence, research on knowledge representation and reasoning has originated a large variety of formats, languages, and formalisms. Over the decades many different tools have emerged to use these underlying concepts. Each one has been designed with some specific application in mind and is still used today, in a world where the internet is seen as a service expected to suffice for the age of Industry 4.0 and the Internet of Things. In that vision of a connected world, with these many different formalisms and systems, a formal way to uniformly exchange information, such as knowledge and belief, is imperative. That alone is not enough, because ever more systems are being integrated into the online world, and nowadays we are confronted with a huge amount of continuously flowing data. Therefore a solution is needed for both allowing the integration of information and reacting dynamically to the data provided in such continuous data-flow environments. This work presents a unique and novel pair of formalisms to tackle these two important needs by proposing an abstract and general solution. We introduce and discuss reactive Multi-Context Systems (rMCS), which allow one to utilise different knowledge representation formalisms, so-called contexts, each represented as an abstract logic framework, and to exchange beliefs between contexts through bridge rules. These multiple contexts need to mutually agree on a common set of beliefs, an equilibrium of belief sets. While different Multi-Context Systems already exist, they solve this agreement problem only once; they neither consider external data streams nor reason continuously over time. rMCS do this by adding means of reacting to input streams and allowing the bridge rules to reason with this new information. In addition we propose two different kinds of bridge rules: declarative ones to find a mutual agreement and operational ones for adapting the current knowledge for future computations. The second framework is more abstract and allows computations to happen in an asynchronous way. These asynchronous Multi-Context Systems are aimed at modelling and describing communication between contexts, with different levels of self-management and centralised management of communication and computation. In this thesis rMCS are analysed with respect to usability, consistency management, and computational complexity, and we show how asynchronous Multi-Context Systems can be used to capture the asynchronous ideas and how to model an rMCS with them. Finally we show how rMCS are positioned in the current world of stream reasoning and that they can capture currently used technologies, and therefore allow one to seamlessly connect different systems of these kinds with each other. This also shows that rMCS are expressive enough to simulate on their own the mechanics used by these systems to compute the corresponding results, as an alternative to already existing ones. For asynchronous Multi-Context Systems, we discuss how to use them and argue that they are a very versatile tool for describing communication and asynchronous computation.
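Stripped of streams and reactivity, the basic flavour of bridge rules can be illustrated by a naive saturation loop: each context holds a belief set, and a bridge rule adds its head to a target context once its body atoms are believed in the referenced contexts. This is a drastic simplification with an invented example, not the equilibrium semantics of rMCS, where each context additionally applies its own, possibly nonmonotonic, logic.

```python
def saturate(beliefs, bridge_rules):
    """Naive fixpoint: fire bridge rules until no context learns anything new.
    beliefs: dict context -> set of atoms
    rule:    (target_context, head_atom, [(context, atom), ...])"""
    changed = True
    while changed:
        changed = False
        for target, head, body in bridge_rules:
            if all(atom in beliefs[ctx] for ctx, atom in body) \
                    and head not in beliefs[target]:
                beliefs[target].add(head)
                changed = True
    return beliefs

beliefs = {'sensor': {'temp_high'}, 'controller': set()}
rules = [('controller', 'open_valve', [('sensor', 'temp_high')])]
print(saturate(beliefs, rules))
# {'sensor': {'temp_high'}, 'controller': {'open_valve'}}
```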
22

Haufe, Sebastian. « Automated Theorem Proving for General Game Playing ». Doctoral thesis, 2011. https://tud.qucosa.de/id/qucosa%3A26073.

Full text
Abstract:
While automated game playing systems like Deep Blue perform excellent within their domain, handling a different game or even a slight change of rules is impossible without intervention of the programmer. Considered a great challenge for Artificial Intelligence, General Game Playing is concerned with the development of techniques that enable computer programs to play arbitrary, possibly unknown n-player games given nothing but the game rules in a tailor-made description language. A key to success in this endeavour is the ability to reliably extract hidden game-specific features from a given game description automatically. An informed general game player can efficiently play a game by exploiting structural game properties to choose the currently most appropriate algorithm, to construct a suited heuristic, or to apply techniques that reduce the search space. In addition, an automated method for property extraction can provide valuable assistance for the discovery of specification bugs during game design by providing information about the mechanics of the currently specified game description. The recent extension of the description language to games with incomplete information and elements of chance further induces the need for the detection of game properties involving player knowledge in several stages of the game. In this thesis, we develop a formal proof method for the automatic acquisition of rich game-specific invariance properties. To this end, we first introduce a simple yet expressive property description language to address knowledge-free game properties which may involve arbitrary finite sequences of successive game states. We specify a semantic based on state transition systems over the Game Description Language, and develop a provably correct formal theory which allows to show the validity of game properties with respect to their semantic across all reachable game states. Our proof theory does not require to visit every single reachable state. Instead, it applies an induction principle on the game rules based on the generation of answer set programs, allowing to apply any off-the-shelf answer set solver to practically verify invariance properties even in complex games whose state space cannot totally be explored. To account for the recent extension of the description language to games with incomplete information and elements of chance, we correctly extend our induction method to properties involving player knowledge. With an extensive evaluation we show its practical applicability even in complex games.
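The proof principle at the heart of the method is the standard invariance induction for transition systems: a property that holds in the initial state and is preserved by every legal transition holds in all reachable states. With s0 the initial state and T the transition relation induced by the game rules:

```latex
\frac{\varphi(s_0) \qquad \forall s, s'\;\bigl(\varphi(s) \land T(s, s') \rightarrow \varphi(s')\bigr)}
     {\varphi(s)\ \text{for every reachable state } s}
```

The thesis discharges the two premises not by enumerating states but by compiling the game rules and the property into answer set programs, so that any off-the-shelf answer set solver can verify them.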
23

Majschak, Jens-Peter. « Rechnerunterstützung für die Suche nach verarbeitungstechnischen Prinziplösungen ». Doctoral thesis, 1996. https://tud.qucosa.de/id/qucosa%3A26633.

Full text
Abstract:
The file provided here is unfortunately incomplete; for technical reasons the following appendices are not included: Appendix 3, concept hierarchy "processing function" (p. 141); Appendix 4, concept hierarchy "property change" (p. 144); Appendix 5, concept hierarchy "processed good" (p. 149); Appendix 6, concept hierarchy "processing principle" (p. 151). Please consult the print edition, available in the holdings of the SLUB Dresden: http://slubdd.de/katalog?TN_libero_mab21079933
In place of an abstract, the record reproduces the thesis's table of contents:
ABBREVIATIONS AND SYMBOLS
1. INTRODUCTION
2. SUPPORT TOOLS FOR THE CONCEPTUAL PHASE IN PROCESSING-MACHINE DESIGN: GENERAL REQUIREMENTS AND STATE OF THE ART
2.1. The significance of the conceptual phase in processing-machine design
2.2. General requirements on support tools for the design engineer as a problem solver
2.3. Specifics of processing-technology problems
2.4. Support tools for the conceptual phase and their suitability for processing-machine design
2.5. Conclusions from the analysis of the current state
3. REQUIREMENTS ON COMPUTER SUPPORT FOR THE PRINCIPLE PHASE OF PROCESSING-MACHINE DESIGN
3.1. Determination of functions
3.2. Delimitation of content
3.3. Requirements on the knowledge representation
4. INFORMATION MODEL OF THE PROCESSING-TECHNOLOGY PROBLEM SPACE
4.1. Overview of possible forms of representation
4.2. Overview of the system structure
4.3. Modelling of knowledge components of the processing-technology domain
5. PROBLEM SOLVING WITH THE PROCESSING-TECHNOLOGY CONSULTATION SYSTEM
5.1. Interactive problem preparation
5.2. Determination of the solution set: rough selection
5.3. Fine selection
5.4. Processing of the results
6. KNOWLEDGE ACQUISITION
6.1. Problems in knowledge acquisition
6.2. Proposals for supporting and organising acquisition for the processing-technology consultation system
7. THOUGHTS ON FURTHER DEVELOPMENT
7.1. Extension of the consultation system in content and functionality
7.2. Integration options for the consultation system
8. SUMMARY
BIBLIOGRAPHY
Appendix 1: Examples of phase-spanning computer support for design
Appendix 2: Contents of the core table "principle"
Appendix 3: Concept hierarchy "processing function"
Appendix 4: Concept hierarchy "property change"
Appendix 5: Concept hierarchy "processed good"
Appendix 6: Concept hierarchy "processing principle"
Appendix 7: Implementation of a rearrangeable formula using density calculation as an example