
Theses on the topic « Rule Representation »


Consult the 50 best theses for your research on the topic « Rule Representation ».


You can also download the full text of a publication as a PDF and consult its abstract online when this information is included in the metadata.

Browse theses across a wide range of disciplines and organize your bibliography correctly.

1

Aude, J. S. « Design rule representation within a hardware design system ». Thesis, University of Manchester, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377479.

Full text
2

Soltan-Zadeh, Yasaman. « Improved rule-based document representation and classification using genetic programming ». Thesis, Royal Holloway, University of London, 2011. http://repository.royalholloway.ac.uk/items/479a1773-779b-8b24-b334-7ed485311abe/8/.

Full text
3

Ghiasnezhad Omran, Pouya. « Rule Learning in Knowledge Graphs ». Thesis, Griffith University, 2018. http://hdl.handle.net/10072/382680.

Full text
Abstract:
With recent advancements in knowledge extraction and knowledge management systems, an enormous number of knowledge bases have been constructed, such as YAGO and Wikidata. These automatically built knowledge bases, which contain millions of entities and their relations, are stored in graph-based schemas and are therefore usually referred to as knowledge graphs (KGs). Since KGs are built from the limited available data, they are far from complete. However, learning frequent patterns in the form of logical rules from these incomplete KGs has two main advantages. First, by applying the learned rules we can infer new facts and so complete the KGs. Second, the rules are stand-alone knowledge that expresses valuable insight about the data. Learning rules from KGs in real-world scenarios, however, poses several challenges. First, due to the vast size of real-world KGs, developing a rule learning method is challenging: existing methods do not scale to learning first-order rules, even when optimisation strategies such as sampling and language bias (i.e., restrictions on the form of rules) are used. Second, applying the learned rules to a vast KG to infer new facts is another difficult issue: learned rules usually contain much noise, and adding new facts can make a KG inconsistent. Third, it is useful but non-trivial to extend an existing rule learning method to the case of stream KGs. Fourth, in many data repositories the facts are augmented with time stamps; in this case we face a stream of data (KGs), and treating time as a new dimension of the data imposes further challenges on the rule learning process. It would be useful to construct a time-sensitive model from the stream of data and apply the obtained model to stream KGs. Last, the density of information in a KG varies: although the size of a KG may be vast, it can contain only a limited amount of information for some relations, so that part of the KG is sparse. Learning a set of accurate and informative rules for the sparse part of a KG is challenging due to the lack of sufficient training data. In this thesis, we investigate these research problems and present our methods for rule learning in various scenarios. We first developed a new approach, named Rule Learning via Learning Representation (RLvLR), which learns rules from KGs by using the technique of embedding in representation learning together with a new sampling method. RLvLR learns first-order rules from vast KGs by exploring the embedding space; thanks to its novel sampling method, it can efficiently handle some large KGs that existing rule learners cannot. To improve the performance of RLvLR on sparse data, we propose a transfer learning method, Transfer Rule Learner (TRL), for rule learning. Based on a similarity characterised by the embedding representation, our method is able to select the most relevant KGs and rules to transfer from a pool of KGs whose rules have already been obtained. We have also adapted RLvLR to handle stream KGs instead of static KGs, resulting in a system called StreamLearner for learning rules from stream KGs. These proposed methods can only learn so-called closed path rules, a proper subset of Horn rules. Thus, we have also developed a transfer rule learner (T-LPAD) that learns the structure of logic programs with annotated disjunctions. T-LPAD employs transfer learning to explore the space of rule structures more efficiently. Various experiments have been conducted to test and validate the proposed methods. Our experimental results show that our methods outperform state-of-the-art methods in many ways.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
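The closed path rules that RLvLR and StreamLearner learn are chains of binary relations predicting a head relation. As a rough illustration of what evaluating such a rule against a KG involves, here is a minimal Python sketch; the toy facts, relation names, and the naive support/confidence counting are invented for illustration and are not taken from the thesis.

```python
# A closed path (chain) rule has the form
#   r1(x, z1), r2(z1, z2), ..., rn(z_{n-1}, y)  =>  rh(x, y).
# Below: naive evaluation of one such rule on a tiny invented KG.

kg = {
    ("alice", "born_in", "paris"),
    ("paris", "city_of", "france"),
    ("alice", "nationality", "france"),
    ("bob", "born_in", "lyon"),
    ("lyon", "city_of", "france"),
}

def chain_pairs(kg, body):
    """All (x, y) pairs connected by the chain of body relations."""
    pairs = {(s, o) for (s, r, o) in kg if r == body[0]}
    for rel in body[1:]:
        step = {(s, o) for (s, r, o) in kg if r == rel}
        pairs = {(x, z) for (x, m1) in pairs for (m2, z) in step if m1 == m2}
    return pairs

def rule_quality(kg, body, head):
    """(# predictions already in the KG, # predictions) for the rule."""
    predicted = chain_pairs(kg, body)
    supported = {(x, y) for (x, y) in predicted if (x, head, y) in kg}
    return len(supported), len(predicted)

# born_in(x, z), city_of(z, y) => nationality(x, y)
support, n_predicted = rule_quality(kg, ["born_in", "city_of"], "nationality")
```

Here the rule predicts two facts, one of which is already in the KG; the other (bob's nationality) is exactly the kind of inferred fact that could be used to complete the graph.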
4

Brunson, Alicia. « Light, Bright, and Out of Sight : Hollywood’s Representation of the Tragic Mulatto ». Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc407836/.

Full text
Abstract:
The purpose of this research is to examine the longevity of the stereotype of the tragic mulatto in American film history. Specifically, my research focuses on the portrayals and perceptions of biracial actresses. Media informs, entertains, and influences how we, and especially youth, self-identify and interact with others. This research focuses on the portrayal of biracial actresses throughout film history; it is also important in its investigation of the perpetuation of the one-drop rule. In this research, I examine whether historical stereotypes of the tragic mulatto are apparent in contemporary Hollywood film. The methodologies used in this research include a content analysis of films with biracial actresses and an online survey of respondents' perceptions of four actresses. Statistical techniques used for analysis include ordinary least squares regression and multinomial logistic regression. Findings suggest that the tragic mulatto stereotype is not blatant in contemporary Hollywood film, but issues of colorism may be apparent.
5

Yang, Wanzhong. « Granule-based knowledge representation for intra and inter transaction association mining ». Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/30398/1/Wanzhong_Yang_Thesis.pdf.

Full text
Abstract:
With the phenomenal growth of electronic data and information, there is great demand for efficient and effective systems (tools) that perform data mining tasks on multidimensional databases. Association rules describe associations between items in the same transaction (intra) or in different transactions (inter). Association mining attempts to find interesting or useful association rules in databases: this is the crucial issue for the application of data mining in the real world. Association mining can be used in many application areas, such as the discovery of associations between customers' locations and shopping behaviours in market basket analysis. Association mining includes two phases. The first phase, called pattern mining, is the discovery of frequent patterns. The second phase, called rule generation, is the discovery of interesting and useful association rules among the discovered patterns. The first phase, however, often takes a long time to find all frequent patterns, and these also include much noise. The second phase is also time consuming and can generate many redundant rules. To improve the quality of association mining in databases, this thesis provides an alternative technique, granule-based association mining, for knowledge discovery in databases, where a granule refers to a predicate that describes common features of a group of transactions. The new technique first transfers transaction databases into basic decision tables, then uses multi-tier structures to integrate pattern mining and rule generation in one phase for both intra- and inter-transaction association rule mining. To evaluate the proposed technique, this research defines the concept of meaningless rules by considering the correlations between data dimensions for intra-transaction association rule mining, and uses precision to evaluate the effectiveness of inter-transaction association rules. The experimental results show that the proposed technique is promising.
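As background for the two-phase process the abstract describes, a minimal sketch of intra-transaction association mining (frequent itemsets found by support counting, then rules filtered by confidence) might look as follows. The transactions and thresholds are invented, and this naive enumeration is the textbook baseline, not the thesis's granule-based technique.

```python
from itertools import combinations

# Phase 1: frequent pattern mining by support counting.
# Phase 2: rule generation filtered by confidence.

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def frequent_itemsets(min_support):
    items = {i for t in transactions for i in t}
    found = []
    for k in range(1, len(items) + 1):
        level = [frozenset(c) for c in combinations(sorted(items), k)
                 if support(frozenset(c)) >= min_support]
        if not level:
            break
        found.extend(level)
    return found

def rules(min_support, min_conf):
    """Association rules lhs => rhs with confidence above min_conf."""
    out = []
    for fs in frequent_itemsets(min_support):
        if len(fs) < 2:
            continue
        for k in range(1, len(fs)):
            for lhs in map(frozenset, combinations(sorted(fs), k)):
                conf = support(fs) / support(lhs)
                if conf >= min_conf:
                    out.append((set(lhs), set(fs - lhs), conf))
    return out
```

The exhaustive candidate enumeration here is exactly the cost that motivates alternatives such as the granule-based approach: every subset of every frequent itemset is examined.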
6

Yang, Wanzhong. « Granule-based knowledge representation for intra and inter transaction association mining ». Queensland University of Technology, 2009. http://eprints.qut.edu.au/30398/.

Full text
7

Solihin, Wawan. « A simplified BIM data representation using a relational database schema for an efficient rule checking system and its associated rule checking language ». Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54831.

Full text
Abstract:
Efforts to automate building rule checking have not brought us anywhere near the ultimate goal of a fully automated rule checking process. With the advancement of BIM and the latest tools and computing capability, we have what is necessary to achieve it, and yet challenges still abound. This research takes a holistic approach to the issue, first examining rule complexity and logic structure. Three major aspects of the rules are addressed in this research. The first is a new approach that transforms BIM data into a simple database schema and makes it easily queryable by adopting the data warehouse approach. Geometry and spatial operations are also commonly needed for automating rules, so the second approach integrates these into the database in the form of multiple representations. The third is a standardized rule language, called BIMRL, that leverages database queries integrated with geometry and spatial query capability. It is designed for a non-programmatic approach to rule definition suitable for typical rule experts. A rule definition takes the form of a triplet command, a CHECK – EVALUATE – ACTION statement, and statements can be chained to support more complex rules. A prototype system has been developed as a proof of concept, using selected rules taken from various sources, to demonstrate the validity of the approach in solving the challenges of automated building rule checking.
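The CHECK – EVALUATE – ACTION triplet can be pictured as a scoped query, a pass condition, and a report step over rows of a relational model. The sketch below mimics that shape in Python; the building elements, the 800 mm threshold, and the rule itself are invented and are not actual BIMRL syntax.

```python
# Rough sketch of a CHECK - EVALUATE - ACTION style rule over
# relational-style rows. Invented schema and rule; not real BIMRL.

doors = [  # pretend rows from a simplified BIM table
    {"id": "d1", "type": "door", "clear_width_mm": 900},
    {"id": "d2", "type": "door", "clear_width_mm": 750},
]

def run_rule(rows, check, evaluate, action):
    """CHECK selects rows, EVALUATE tests each, ACTION reports failures."""
    return [action(r) for r in rows if check(r) and not evaluate(r)]

violations = run_rule(
    doors,
    check=lambda r: r["type"] == "door",            # CHECK: rule scope
    evaluate=lambda r: r["clear_width_mm"] >= 800,  # EVALUATE: pass test
    action=lambda r: f"{r['id']}: clear width below 800 mm",  # ACTION
)
```

Chaining, as the abstract describes, would amount to feeding the rows selected by one triplet into the CHECK clause of the next.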
8

Pischedda, Doris. « Rule-guided behaviour : how and where rules are represented and processed in human brain ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50374.

Full text
Abstract:
Much of our behaviour is guided by rules defining associations between meaningful stimuli and appropriate responses. The ability to flexibly switch between rules to adapt to a continuously changing environment is one of the main challenges for the human cognitive system. Investigating how different types and combinations of rules are encoded and implemented in the human brain is crucial to understanding how we select and apply rules to guide our behaviour and react flexibly to a dynamic environment. The present thesis addressed the issue of where in the brain different types of rules are represented and how they are processed. Behavioural paradigms, functional magnetic resonance imaging, and multivariate pattern classification were combined to shed light on the cognitive mechanisms underlying rule processing and to identify brain areas encoding the contents of such processes. Using a priming paradigm, the first study assessed which types of associations (conditional, disjunctive, spatial, or quantified) can be activated automatically and trigger unconscious inferences; it showed that Modus Ponens inference is carried out unconsciously. The second study demonstrated that a condition-action rule instructed on a trial-by-trial basis and immediately marked as irrelevant causes significant interference effects when involuntarily triggered by target stimuli matching the condition in the rule. In the third study, using complex rule sets, we showed that rules at different levels in the hierarchy of action control are encoded in partially separate brain networks. Moreover, we found that rule information is represented in distinct brain areas when different types of rules are encoded jointly. In the fourth study, we used rules composed with different logical connectives to expand the set of associations considered and to assess possible differences in representation and processing between rules with distinct logical forms. We found that separate brain areas encoded task rule information during rule representation and evaluation, and that the involvement of these areas depended on the specific rule active in a trial. Taken together, our results suggest that conditional rules hold a special status in the human cognitive system, contributing to our knowledge of rule-guided behaviour.
9

Khor, Sebastian Wankun. « A Fuzzy Knowledge Map Framework for Knowledge Representation ». Murdoch University, 2007. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20070822.32701.

Full text
Abstract:
Cognitive Maps (CMs) have shown promise as tools for modelling and simulating knowledge in computers, representing real objects, concepts, perceptions or events and their relations. This thesis examines the application of fuzzy theory to the expression of these relations, and investigates the development of a framework to better manage the operations on these relations. The Fuzzy Cognitive Map (FCM) was introduced in 1986, but little progress has been made since. This is because of the difficulty of modifying or extending its reasoning mechanism from causality to relations other than causality, such as associative and deductive reasoning. The ability to express the complex relations between objects and concepts determines the usefulness of the maps. Structuring these concepts and relations in a model so that they can be consistently represented and quickly accessed and manipulated by a computer is the goal of knowledge representation, and this forms the main motivation of this research. In this thesis, a novel framework is proposed whereby single-antecedent fuzzy rules can be applied to a directed graph, and reasoning ability is extended to include non-causality. The framework provides a hierarchical structure where a graph in a higher layer represents knowledge at a high level of abstraction, and graphs in a lower layer represent the knowledge in more detail. The framework allows a modular design of knowledge representation and facilitates the creation of a more complex structure for modelling and reasoning. The experiments conducted in this thesis show that the proposed framework is effective and useful for deriving inferences from input data, for solving certain classification problems, and for prediction and decision-making.
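For context on the 1986 formalism this thesis extends: a classic Fuzzy Cognitive Map update squashes a weighted sum of concept activations through a sigmoid at each step. A minimal sketch, with invented concepts and weights, using the common variant that keeps each node's own previous activation in the sum:

```python
import math

# Classic FCM update step over a directed weighted graph of concepts.
# Concepts and causal weights are invented for illustration.

concepts = ["rain", "traffic", "accidents"]
weights = {                      # weights[(src, dst)]: influence in [-1, 1]
    ("rain", "traffic"): 0.8,
    ("traffic", "accidents"): 0.6,
    ("rain", "accidents"): 0.3,
}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(state):
    """One synchronous update: a_c <- sigmoid(a_c + sum_s w[s,c] * a_s)."""
    return {
        c: sigmoid(state[c] + sum(state[s] * w
                                  for (s, d), w in weights.items() if d == c))
        for c in concepts
    }

state = step({"rain": 1.0, "traffic": 0.0, "accidents": 0.0})
```

Iterating `step` propagates the initial activation along causal edges until the map settles (or cycles); the thesis's point is precisely that edges restricted to causal weights like these cannot express associative or deductive relations.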
10

Rocher, Swan. « Querying existential rule knowledge bases : decidability and complexity ». Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT291/document.

Full text
Abstract:
In this thesis we investigate the issue of querying knowledge bases composed of data and general background knowledge, called an ontology. Ontological knowledge can be represented under different formalisms, and we consider here a fragment of first-order logic called existential rules (also known as tuple-generating dependencies and Datalog+/-). The fundamental entailment problem at the core of this thesis asks whether a conjunctive query is entailed by an existential rule knowledge base. General existential rules are highly expressive, however, at the cost of undecidability. Various restrictions on sets of rules have been proposed to regain the decidability of the entailment problem. Our contribution is two-fold. First, we propose a new tool that allows us to unify and extend most of the known existential rule classes that rely on acyclicity conditions to tame infinite forward chaining, without increasing the complexity of acyclicity recognition. Second, we study the compatibility of known decidable rule classes with a frequently required modelling construct, namely transitivity of binary relations. We help clarify the picture of negative and positive results on this question, and provide a technique to safely combine transitivity with one of the simplest, yet useful, decidable rule classes, namely linear rules.
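The "infinite forward chaining" that acyclicity conditions are designed to tame can be seen in miniature: a single existential rule such as human(x) → ∃y parent(x, y) ∧ human(y) keeps inventing fresh nulls forever. A rough sketch with an invented rule and facts, and an explicit round cap standing in for a termination guarantee:

```python
from itertools import count

# One existential rule:  human(x) -> exists y. parent(x, y), human(y).
# Each application invents a fresh labelled null, so forward chaining
# ("the chase") never terminates on its own; we cap the rounds.

fresh = (f"_n{i}" for i in count())

def chase(facts, max_rounds):
    facts = set(facts)
    for _ in range(max_rounds):
        humans = {f[1] for f in facts if f[0] == "human"}
        parented = {f[1] for f in facts if f[0] == "parent"}
        new = set()
        for x in humans - parented:      # body matched, head not yet met
            null = next(fresh)           # fresh null witnesses y
            new.add(("parent", x, null))
            new.add(("human", null))
        if not new:
            break
        facts |= new
    return facts

after3 = chase({("human", "alice")}, max_rounds=3)
after5 = chase({("human", "alice")}, max_rounds=5)
```

Every extra round adds two more facts, which is why decidable classes either restrict rules so the chase halts (acyclicity) or bound what the query can see.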
11

Daddah, Amel. « State-society exchange in modern Sahelian Africa : Cultural representation, political mobilization, and state rule (Senegal, Mauritania, Chad, Sudan) ». Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186159.

Full text
Abstract:
Modern African states need to be analyzed from a perspective which complements, corrects, or specifies dependency/world-system and structural Marxist explanations of peripheral political dynamics. This dissertation offers such a perspective as it seeks to explain variations in state-society exchange among four comparably dependent modern nations of the Sahelian African region (Senegal, Mauritania, Chad, Sudan). The model accounts for the political ramifications of each country's ethno-religious configuration: the state's mode of rule, and the level and type of opposition mobilization. It assumes that trans-national economic (and/or geopolitical) dynamics do not necessarily weigh more heavily on the dynamics of state-society relations than local political processes.
12

Lakkaraju, Sai Kiran. « A SLDNF based formalization for updates and abduction ». University of Western Sydney, College of Science, Technology and Environment, School of Computing and Information Technology, 2001. http://handle.uws.edu.au:8081/1959.7/381.

Full text
Abstract:
Knowledge representation and inference are the backbone of artificial intelligence, and logic programming is one of the most widely used knowledge representation tools. Logic programming with deduction, induction, or abduction as the reasoning technique serves numerous fields of artificial intelligence. In dynamic domains where knowledge constantly changes, updating the knowledge base is crucial to keeping it stable. This thesis investigates the issues in updating the knowledge base. Two types of logic program based updates are considered: simple fact based updates, where the knowledge base is updated by a simple fact, and rule based updates, where the knowledge base is updated by a rule. A SLDNF based procedural approach is proposed to implement such updates. This thesis also investigates the issues involved in simple fact based and rule based abduction, and it is observed that updates are closely related to abduction. A SLDNF based procedural approach to perform simple fact/rule based updates and abduction is proposed as a result of this study.
Master of Science (Hons)
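To illustrate how negation as failure (the NF in SLDNF) interacts with a simple fact based update, here is a toy propositional evaluator; the program and predicates are invented, and this sketch handles neither variables nor loop detection, both of which a real SLDNF procedure must address.

```python
# A propositional normal program: head -> list of alternative bodies;
# a body is a list of literals, and "not p" is negation as failure.
# Program contents are invented for illustration.

program = {
    "bird(tweety)": [[]],                                  # a fact
    "fly(tweety)": [["bird(tweety)", "not penguin(tweety)"]],
}

def solve(goal, depth=50):
    """Goal succeeds if some body succeeds; 'not p' by finite failure."""
    if depth == 0:
        raise RecursionError("depth bound reached")
    if goal.startswith("not "):
        return not solve(goal[4:], depth - 1)
    return any(all(lit and solve(lit, depth - 1) for lit in body)
               for body in program.get(goal, []))

flies_before = solve("fly(tweety)")
program["penguin(tweety)"] = [[]]     # simple fact based update
flies_after = solve("fly(tweety)")
```

The update flips the derived conclusion without touching the flying rule itself, which is the sort of non-monotonic effect that makes updating logic programs subtle.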
13

Lundberg, Jacob. « Resource Efficient Representation of Machine Learning Models : investigating optimization options for decision trees in embedded systems ». Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162013.

Full text
Abstract:
Combining embedded systems and machine learning models is an exciting prospect. However, to fully target any embedded system, with the most stringent resource requirements, the models have to be designed with care so as not to overwhelm it. Decision tree ensembles are targeted in this thesis. A benchmark model is created with LightGBM, a popular framework for gradient boosted decision trees. This model is first transformed and regularized with RuleFit, a LASSO regression framework, then further optimized with quantization and weight sharing, techniques used when compressing neural networks. The entire process is combined into a novel framework called ESRule. The data used comes from the domain of frequency measurements in cellular networks, where there is a clear use case for embedded systems running the resource-optimized models. Compared with LightGBM, ESRule uses on average 72× less internal memory while simultaneously increasing predictive performance; the models use 4 kilobytes on average. The serialized variant of ESRule uses 104× less hard disk space than LightGBM. ESRule is also clearly faster at predicting a single sample.
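Quantization with weight sharing, as mentioned in the abstract, replaces each leaf value with an index into a small shared codebook, so the ensemble stores one byte (or less) per leaf instead of a float. A rough sketch with invented leaf values, using uniform binning; ESRule's actual procedure may differ.

```python
# Quantize an ensemble's leaf values into a small shared codebook
# (weight sharing): store per-leaf indices plus n_bins floats.
# Leaf values and bin count are invented for illustration.

leaf_values = [0.11, 0.13, 0.12, 0.74, 0.71, 0.42, 0.40, 0.73]

def quantize(values, n_bins):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    idx = [min(int((v - lo) / width), n_bins - 1) for v in values]
    # codebook entry = mean of the values that landed in the bin
    codebook = []
    for b in range(n_bins):
        members = [v for v, i in zip(values, idx) if i == b]
        codebook.append(sum(members) / len(members) if members else 0.0)
    return idx, codebook

idx, codebook = quantize(leaf_values, n_bins=3)
restored = [codebook[i] for i in idx]   # values used at prediction time
```

Eight floats shrink to eight small indices plus a three-entry codebook; the prediction error introduced is bounded by the spread within each bin.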
14

Felton, Emily Byas. « Strategies used in implementing the multiple eligibility criteria rule in Georgia elementary schools to increase representation of black American students in gifted education ». Ed.D. diss., Georgia Southern University, 2008. http://www.georgiasouthern.edu/etd/archive/fall2008/emily_a_byas/felton_emily_b_200808_edd.pdf.

Full text
Abstract:
Thesis (Ed.D.)--Georgia Southern University, 2008.
"A dissertation submitted to the Graduate Faculty of Georgia Southern University in partial fulfillment of the requirements for the degree Doctor of Education." Directed by Abebayehu Tekleselassie. "December 2008" ETD. Includes bibliographical references (p. 119-129) and appendices.
15

Saxena, Isha. « Information extraction and representation from free text reports ». Master's thesis, Universidade de Évora, 2021. http://hdl.handle.net/10174/29323.

Full text
Abstract:
The need to extract specific information has increased drastically with the boom in digital-born documents. These documents consist mainly of free text from which structured information can be extracted; sources include customer review reports, patient records, financial and legal documents, etc. The needs and applications for extracting specific information from free text are constantly growing, and new research keeps emerging to mine contextual information in ways that are both highly efficient and convenient to use. This thesis addresses the problem of extracting specific information from free text, particularly for domains that lack labeled data. The first step in developing an advanced information extraction system is to extract and represent structured information from unstructured natural language text. To accomplish this task, the thesis proposes a system for extracting and tagging domain-specific information, such as domain-related entities/concepts and relational phrases. The approach comprises dictionary matching for domain-specific concept extraction and rule-based pattern matching for relation extraction, tagging the free text accordingly. The experiments were performed on Altice Labs' customer reports. The system achieved over 80% recall and 90% precision for both concept and relation extraction. The proposed domain-specific concept extraction module was compared with existing concept extraction platforms, Microsoft Concept Graph and DBpedia Spotlight, and yielded higher performance than both platforms.
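The two-stage pipeline described above, dictionary matching for concepts plus rule-based patterns for relations, can be sketched roughly as follows; the dictionary, the regex pattern, and the sample sentence are all invented for illustration and are not Altice Labs data.

```python
import re

# Stage 1: dictionary matching tags known domain concepts.
# Stage 2: a rule-based (regex) pattern extracts relational phrases.
# Dictionary, pattern, and text are invented.

concepts = {"router": "EQUIPMENT", "signal": "MEASURE", "customer": "ACTOR"}

relation_pattern = re.compile(
    r"(?P<subj>\w+) (?P<rel>reports|loses) (?P<obj>\w+)")

def extract(text):
    """Return (tagged concepts, (subject, relation, object) triples)."""
    text = text.lower()
    found_concepts = {w: concepts[w] for w in text.split() if w in concepts}
    relations = [(m["subj"], m["rel"], m["obj"])
                 for m in relation_pattern.finditer(text)]
    return found_concepts, relations

cs, rels = extract("Customer reports router loses signal")
```

Real systems layer many such patterns and a much larger dictionary, but the division of labour (lexicon for entities, rules for relations) is the same one the thesis evaluates.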
16

Silva, Jackson Gois da. « A significação de representações químicas e a filosofia de Wittgenstein ». Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/48/48134/tde-29082012-104740/.

Full text
Abstract:
We investigate in this PhD dissertation the meaning of chemical representations from the standpoint of Ludwig Wittgenstein's philosophy. We compare the foundations of this philosophy with the most important research programs in Science Education, namely Conceptual Change, Mental Models and Conceptual Profiles, and identify that these proposals share a representational presumption of meaning. In this conception, meaning arises because there is a relationship of representation with mental objects, as well as a logical dependence among these representations. We show, from this point, the aspects of Wittgenstein's philosophy that allow us to understand meaning as use, not as depending on representations, mental objects or logic. In our proposal, meaning lies fully in the learning of ways of use in our language, not in entities external to it. This has practical consequences for teaching, since activities that involve speech and the actions akin to it are within the reach of teachers and researchers, but mental entities with logical dependence are not. We found in our literature review the contribution of a group of researchers who have been contributing to Science Education from Wittgenstein's philosophy for about a decade, taking his philosophy as an inspiration to produce their own epistemology. We analyze the transposition of the conceptions of Wittgenstein's philosophy into this epistemology and conclude that its two proposed categories, the importance of similarity in meaning elaboration and learning from where there is no doubt, are not new to Science Education. We propose that the main contribution of this philosopher to Science Education is the role of rules in meaning elaboration. We also seek to delimit how Science Education can use Wittgenstein's philosophy, since there is a strong therapeutic image among those who study him. As Science Education does not have the same goals and methods as Philosophy, we propose using his philosophical methods to resolve specific points of the philosophical past still present in this field. We also propose an essay on the question of the existence of atoms, in the manner of Wittgensteinian philosophy, and conclude that this question is an illusion of language. Finally, we suggest redirecting the focus of Science Education towards the epistemological implications of the fact that we invent forms of representation that present empirical and conventional aspects, instead of merely observing how scientific and philosophical practices treat representations.
Styles APA, Harvard, Vancouver, ISO, etc.
17

Oshurko, Ievgeniia. « Knowledge representation and curation in hierarchies of graphs ». Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN024.

Texte intégral
Résumé :
The task of automatically extracting insights or building computational models from knowledge on complex systems relies greatly on the choice of an appropriate representation. This work makes an effort towards building a framework suitable for representing fragmented knowledge on complex systems and for its semi-automated curation: continuous collation, integration, annotation and revision. We propose a knowledge representation system based on hierarchies of graphs related by graph homomorphisms. Individual graphs situated in such hierarchies represent distinct fragments of knowledge, and the homomorphisms allow these fragments to be related. Their graphical structure can be used efficiently to express entities and their relations. We focus on the design of mathematical mechanisms, based on algebraic approaches to graph rewriting, for the transformation of individual graphs in hierarchies that maintains consistent relations between them. Such mechanisms provide a transparent audit trail, as well as an infrastructure for maintaining multiple versions of knowledge. We describe how the developed theory can be used for building schema-aware graph databases that provide schema-data co-evolution capabilities. The proposed knowledge representation framework is used to build the KAMI (Knowledge Aggregation and Model Instantiation) framework for the curation of cellular signalling knowledge. The framework allows for semi-automated aggregation of individual facts on protein-protein interactions into knowledge corpora, reuse of this knowledge for the instantiation of signalling models in different cellular contexts, and generation of executable rule-based models.
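The hierarchies described here relate graphs by homomorphisms, e.g. typing a data graph over a schema graph. A brute-force sketch of finding such a typing (purely illustrative; the node names and the exhaustive search strategy are invented for this example, not taken from KAMI):

```python
from itertools import product

def is_homomorphism(g_edges, h_edges, mapping):
    """Check that `mapping` sends every edge of G to an edge of H."""
    return all((mapping[u], mapping[v]) in h_edges for (u, v) in g_edges)

def find_homomorphism(g_nodes, g_edges, h_nodes, h_edges):
    """Exhaustive search for a homomorphism G -> H (exponential; tiny graphs only)."""
    for image in product(h_nodes, repeat=len(g_nodes)):
        mapping = dict(zip(g_nodes, image))
        if is_homomorphism(g_edges, h_edges, mapping):
            return mapping
    return None

# A "data" graph G typed over a "schema" graph H by a homomorphism G -> H.
g_nodes = ["a1", "a2", "b1"]
g_edges = {("a1", "b1"), ("a2", "b1")}
h_nodes = ["Agent", "Interaction"]
h_edges = {("Agent", "Interaction")}

typing = find_homomorphism(g_nodes, g_edges, h_nodes, h_edges)
print(typing)  # {'a1': 'Agent', 'a2': 'Agent', 'b1': 'Interaction'}
```

Rewriting a graph in the hierarchy then amounts to transforming G while repairing such typings so they remain valid homomorphisms.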
Styles APA, Harvard, Vancouver, ISO, etc.
18

Håkansson, Anne. « Graphic Representation and Visualisation as Modelling Support for the Knowledge Acquisition Process ». Doctoral thesis, Uppsala University, Computer Science, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3812.

Texte intégral
Résumé :

The thesis describes steps taken towards using graphic representation and visual modelling support for the knowledge acquisition process in knowledge-based systems – a process commonly regarded as difficult. The performance of the systems depends on the quality of the embedded knowledge, which makes the knowledge acquisition phase particularly significant. During the acquisition phase, a main obstacle to proper extraction of information is the absence of effective modelling techniques.

The contributions of the thesis are: introducing a methodology for user-centred knowledge modelling, enhancing transparency to support the modelling of content and of the reasoning strategy, incorporating conceptualisation to simplify the grasp of the contents and to support assimilation of the domain knowledge, and supplying a visual compositional logic programming language for adding and modifying functionality.

The user-centred knowledge acquisition model, proposed in this thesis, applies a combination of different approaches to knowledge modelling. The aim is to bridge the gap between the users (i.e., knowledge engineers, domain experts and end users) and the system in transferring knowledge, by supporting the users through graphics and visualisation. Visualisation supports the users by providing several different views of the contents of the system.

The Unified Modelling Language (UML) is employed as a modelling language. A benefit of utilising UML is that the knowledge base can be modified, and the reasoning strategy and the functionality can be changed directly in the model. To make the knowledge base more comprehensible and expressive, we incorporated visual conceptualisation into UML’s diagrams to describe the contents. Visual conceptualisation of the knowledge can also facilitate assimilation in a hypermedia system through visual libraries.

Visualisation of functionality is applied to a programming paradigm, namely relational programming, often employed in artificial intelligence systems. This approach employs Venn-Euler diagrams as a graphic interface to a compositional, operator-based relational programming language.
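The compositional, operator-based relational programming mentioned here can be illustrated by treating relations as sets of pairs and composition as an operator on them (a hypothetical sketch, not the thesis's actual visual language):

```python
def compose(r, s):
    """Relational composition: (a, c) is in r;s iff (a, b) in r and (b, c) in s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def converse(r):
    """Swap the two sides of every pair."""
    return {(b, a) for (a, b) in r}

parent = {("alice", "bob"), ("bob", "carol")}
grandparent = compose(parent, parent)
print(grandparent)  # {('alice', 'carol')}
```

In a Venn-Euler-style interface, such operators would be manipulated graphically rather than written as function calls.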

The concrete result of the research is the development of a graphic representation and visual modelling approach to support the knowledge acquisition process. This approach has been evaluated for two different knowledge bases, one built for hydropower development and river regulation and the other for diagnosing childhood diseases.

Styles APA, Harvard, Vancouver, ISO, etc.
19

Paschke, Adrian [Verfasser], Martin [Akademischer Betreuer] Bichler et Bernd [Akademischer Betreuer] Brügge. « RBSLA : Rule-based Service Level Agreements : Knowledge Representation for Automated e-Contract, SLA and Policy Management / Adrian Paschke. Gutachter : Bernd Brügge ; Martin Bichler. Betreuer : Martin Bichler ». München : Universitätsbibliothek der TU München, 2007. http://d-nb.info/1054310904/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
20

De, Kock Erika. « Decentralising the codification of rules in a decision support expert knowledge base ». Pretoria : [s.n.], 2003. http://upetd.up.ac.za/thesis/available/etd-03042004-105746.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
21

Trinh, Megan. « On the Diameter of the Brauer Graph of a Rouquier Block of the Symmetric Group ». University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron152304291682246.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
22

Görgen, Kai. « On Rules and Methods : Neural Representations of Complex Rule Sets and Related Methodological Contributions ». Doctoral thesis, Humboldt-Universität zu Berlin, 2019. http://dx.doi.org/10.18452/20711.

Texte intégral
Résumé :
Where and how does the brain represent complex rule sets? This thesis presents a series of three empirical studies that decompose representations of complex rule sets to address this question directly. An additional methodological study investigates the analysis method employed and the experimental design. The empirical studies employ multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data from healthy human participants. The methodological study was inspired by the empirical work; its impact and range of application, however, extend well beyond the empirical studies of this thesis. The questions of the empirical studies (Studies 1-3) include: Where are cues and rules represented, and are they represented independently? Where are compound rules (rules consisting of multiple rules) represented, and are they composed from their single-rule representations? Where are rules from different hierarchical levels represented, and is there a hierarchy-dependent functional gradient along the ventro-lateral prefrontal cortex (VLPFC)? Where is the order of rule execution represented, and is it represented as a separate higher-level rule? All empirical studies employ information-based functional mapping (the "searchlight" approach) to localise representations of rule-set features brain-wide and in a spatially unbiased way. Key findings include: compositional coding of compound rules in VLPFC; no order information in VLPFC, suggesting that VLPFC is not a general controller for task set; and evidence against the hypothesis of a hierarchy-dependent functional gradient along VLPFC. The methodological study (Study 4) introduces the Same Analysis Approach (SAA). SAA allows confounds and other errors in experimental design and analysis to be detected, avoided and eliminated, especially mistakes caused by malicious experiment-specific design-analysis interactions. SAA is relevant for MVPA, but can also be applied in other fields, both within and outside neuroscience.
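The information-based functional mapping named in this abstract can be caricatured in a few lines: slide a small neighbourhood (the "searchlight") across features, decode the condition labels from each neighbourhood, and map where decoding succeeds. The sketch below is purely illustrative, a 1-D toy with a nearest-centroid classifier on synthetic data, not the study's fMRI pipeline:

```python
import numpy as np

def searchlight_accuracies(data, labels, radius=1):
    """For each 'voxel' (column), decode the labels from the voxel and its
    neighbours using a nearest-centroid classifier; return one accuracy per voxel."""
    n_trials, n_voxels = data.shape
    half = n_trials // 2                       # first half trains, second half tests
    accs = np.zeros(n_voxels)
    for v in range(n_voxels):
        sl = data[:, max(0, v - radius): v + radius + 1]   # the searchlight window
        train, test = sl[:half], sl[half:]
        y_train, y_test = labels[:half], labels[half:]
        centroids = {c: train[y_train == c].mean(axis=0) for c in np.unique(labels)}
        preds = [min(centroids, key=lambda c: np.linalg.norm(row - centroids[c]))
                 for row in test]
        accs[v] = np.mean(np.array(preds) == y_test)
    return accs

rng = np.random.default_rng(0)
labels = np.tile([0, 1], 50)          # 100 trials, two conditions
data = rng.normal(size=(100, 20))     # 20 'voxels' of noise
data[:, 5] += labels * 2.0            # only voxel 5 carries condition information
accs = searchlight_accuracies(data, labels)
print(accs.argmax())                  # windows containing voxel 5 decode best
```

Mapping `accs` back onto voxel positions gives the spatially unbiased, brain-wide picture of where the rule-set feature is decodable.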
Styles APA, Harvard, Vancouver, ISO, etc.
23

Lakkaraju, Sai Kiran. « A SLDNF formalization for updates and abduction / ». View thesis View thesis, 2001. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030507.112018/index.html.

Texte intégral
Résumé :
Thesis (M.Sc. (Hons.)) -- University of Western Sydney, 2001.
"A thesis submitted for the degree of Master of Science (Honours) - Computing and Information Technology at University of Western Sydney" Bibliography: leaves 93-98.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Nyman, Peter. « On relations between classical and quantum theories of information and probability ». Doctoral thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-13830.

Texte intégral
Résumé :
In this thesis we study quantum-like representation and the simulation of quantum algorithms on classical computers. The quantum-like representation algorithm (QLRA) was introduced by A. Khrennikov (1997) to solve the "inverse Born's rule problem", i.e. to construct a representation of probabilistic data, measured in any context of science, by a complex or more general probability amplitude which matches a generalization of Born's rule. The outcome of QLRA matches the formula of total probability with an additional trigonometric, hyperbolic or hyper-trigonometric interference term; this is in fact a generalization of the familiar formula of interference of probabilities. We study the representation of statistical data (of any origin) by a probability amplitude in a complex algebra and in a Clifford algebra (the algebra of hyperbolic numbers). The statistical data are collected from measurements of two dichotomous and trichotomous observables, respectively. We see that only special statistical data (satisfying a number of nonlinear constraints) have a quantum-like representation. We also study simulations of quantum computers on classical computers. Although it cannot be denied that great progress has been made in quantum technologies, there is still a huge gap between experimental quantum computers and a quantum computer that can be used in applications. The simulation of quantum computations on classical computers has therefore become an important part of the attempt to bridge the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. Of course, quantum algorithms cannot be expected to solve NP problems in polynomial time on classical computers; but that is not at all the aim of classical simulation.
The second part of this thesis is devoted to adapting the Mathematica symbolic language to known quantum algorithms and the corresponding simulations on classical computers. Concretely, we represent Simon's algorithm, the Deutsch-Jozsa algorithm, Shor's algorithm, Grover's algorithm and quantum error-correcting codes in the Mathematica symbolic language. The same framework can be used for all these algorithms; it captures the characteristic properties of the symbolic-language representation of quantum computing, and it will be a straightforward matter to include future algorithms in this framework.
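The "formula of total probability with an additional interference term" that QLRA reconstructs has, for a dichotomous observable \(a\) and outcome \(b\), the following general shape (a sketch following Khrennikov's formulation; the symbols are the standard ones, not taken from this thesis):

```latex
p(b) \;=\; p(a_1)\,p(b\mid a_1) \;+\; p(a_2)\,p(b\mid a_2)
\;+\; 2\,\lambda_b \sqrt{p(a_1)\,p(b\mid a_1)\,p(a_2)\,p(b\mid a_2)}
```

with \(\lambda_b = \cos\theta_b\) in the trigonometric case and \(\lambda_b = \pm\cosh\theta_b\) in the hyperbolic case; \(\lambda_b = 0\) recovers the classical formula of total probability, and \(|\lambda_b| \le 1\) is precisely the regime in which the data admit a complex probability amplitude satisfying Born's rule.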
Styles APA, Harvard, Vancouver, ISO, etc.
25

Beldiceanu, Nicolas. « Langage de règles et moteur d'inférences basés sur des contraintes et des actions : application aux réseaux de Petri ». Paris 6, 1988. http://www.theses.fr/1988PA066053.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Adjaoute, Akli. « Rylm : générateur de systèmes experts pour les problèmes d'aide aux diagnostics ; Ykra : système d'enseignement ». Paris 6, 1988. http://www.theses.fr/1988PA066005.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
27

Görgen, Kai [Verfasser], John-Dylan [Gutachter] Haynes, Benjamin [Gutachter] Blankertz et Felix [Gutachter] Blankenburg. « On Rules and Methods : Neural Representations of Complex Rule Sets and Related Methodological Contributions / Kai Görgen ; Gutachter : John-Dylan Haynes, Benjamin Blankertz, Felix Blankenburg ». Berlin : Humboldt-Universität zu Berlin, 2019. http://d-nb.info/1200406087/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

Seymour, Jillaine. « Judicial response to the representative parties rule in England and Australia ». Thesis, University of Oxford, 2001. https://ora.ox.ac.uk/objects/uuid:584cf9d7-4c22-4aee-97f2-4f82e327bb7c.

Texte intégral
Résumé :
Use of the representative parties rule in England and Australia has been stifled by restrictive interpretation of the circumstances in which it is available. Chapter 1 demonstrates that the predominant test in England for the 'same interest' required by the rule would, if consistently applied, defeat any claim to use the rule. The recent change of test in Australia widens the rule's potential scope but does not appear to have resulted in significantly more liberal interpretation. Chapter 2 discusses the rule's operation, including res judicata, the enforcement of judgments, and the protection of the interests of those represented and of the named parties. It concludes that the rule diverges from the traditional model of individual voluntary civil litigation, and is characterised by uncertainty. Chapter 3 argues that this uncertainty may have encouraged a defensive posture by the courts, limiting use of the rule and avoiding the need to address those issues which demand resolution. Chapter 4 notes that various features of the rule undermine a number of principles commonly associated with procedural fairness. It is argued that judicial response to these features often pays insufficient attention to two issues. The first is whether the purpose which the principle is expected to promote is in fact protected by the rule, even if the principle itself is undermined. The second is the need to balance the rule's limitation of some principles against its particular benefits. It is further argued that some successful representative claims exemplify circumstances in which the primary purpose of procedural law (accurate application of the substantive law) is served by the rule. Chapter 5 identifies other successful representative claims, particularly against representatives of the members of unincorporated associations, which, it is argued, ought to be viewed not as supporting accurate application, but rather as facilitating development, of the substantive law.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Guarini, Marcello 1970. « Rules and representations in the classicism-connectionism debate ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq31115.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Damacena, Alexandre Bento. « A função representativa do parlamento na República Federativa do Brasil ». Universidade Presbiteriana Mackenzie, 2008. http://tede.mackenzie.br/jspui/handle/tede/1204.

Texte intégral
Résumé :
When the king had serious matters to deal with, or wanted to hear the opinion of the most important men on a given subject, he would summon the great lords of the nobility and the most prominent members of the clergy to a meeting. In Portugal and in the kingdoms of Spain these meetings were called "Cortes"; in France, the "Estates General"; and in England, "Parliament". Each parliament evolved and went through several stages before reaching the models of the 21st century. Parliaments were given several functions, such as legislating and controlling the Executive Branch. Among all its attributions, however, the representative function is fundamental, because it makes Parliament an institution essential to democracy. Representing the people's will is not easy, and choosing representatives is not a simple task either. Being ruler or ruled creates rights and duties on both sides. Should a member of parliament follow the will of the voters, of his party, or his own convictions? This study seeks answers to these questions through an analysis of political representation in a democratic environment. More specifically, it examines Brazil and its Federal Parliament. The study of the complex ruler-ruled relationship and of the representative function of Parliament helps point out possible paths and alternatives for improving the Brazilian representative system.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Bellissimo, Michael Robert. « A LOWER BOUND ON THE DISTANCE BETWEEN TWO PARTITIONS IN A ROUQUIER BLOCK ». University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1523039734121649.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
32

Lamanauskas, Milton Fernando. « A jurisprudência eleitoral e seus reflexos no Estado democrático de direito ». Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/2/2134/tde-08092011-102459/.

Texte intégral
Résumé :
Brazilian society has been waiting patiently, for many years, for a real reform that introduces ethics and morality into politics and gives effect to the sovereignty of its people. On one hand, the Legislative Branch has serious difficulty in breaking its inertia and fulfilling its role of making law the expression of the general will. On the other, the Judiciary tries to meet social expectations by giving effect to the fundamental rights laid down in the 1988 Federal Constitution of the Brazilian Republic. This study analyses how the Brazilian State has been living with intense judicial activism, or the judicialization of its politics, and the consequences of this for democracy. To keep the conclusions effective, the field of study is limited to electoral matters, given their crystal-clear relation to the democratic rule of law. Recent decisions of the national courts on electoral subjects were therefore selected to assess the effects of this jurisprudence on the democratic foundations of the country. In so doing, the study assembles the elements for a grounded critique of the Judiciary's attempt to moralize political institutions, seeking to establish whether this lofty movement of the courts has in fact improved the current democratic system, preserving the harmony between the Powers, the unity of the legal order and the legitimacy of the nation's institutions, or whether only ad hoc solutions have been achieved, treating a serious illness with palliative remedies instead of attacking the real cause of the ills that plague the Brazilian democratic rule of law.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Weller, Martin L. « An analysis of the applicability of rule based technology to a representative domain ». Thesis, Teesside University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387023.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
34

Gladh, Jörgen. « Tensor products, Fusion rules and su(2) Representations ». Thesis, Karlstads universitet, Institutionen för ingenjörsvetenskap, fysik och matematik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-2801.

Texte intégral
Résumé :
In this master's thesis I have looked at two different kinds of representations of the Lie algebras su(2) and sl(2), and at tensor products of these representations. In the first case I considered a tensor product involving a representation similar to one that appears in an article by A. van Tonder. This representation and tensor product were investigated mainly to gain a good grasp of the subject and to understand some of the problems that can arise. In the other case, which is the main problem of this thesis, I considered a tensor product and representations that appear in an article by M. R. Gaberdiel. Here we deal with a tensor product of representations of su(2) at the specific level k = -4/3 and with the specific Casimir eigenvalue -2/9. This was done in the framework of finite-dimensional and affine Lie algebras, not via fusion rules as in Gaberdiel's article. In both cases some of the calculations were carried out from first principles, investigating the behaviour of the representations under the step operators, their eigenvalues and their weight systems. Results and conclusions of these investigations are discussed in the last part of the thesis.
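For the finite-dimensional su(2) representations discussed in this abstract, tensor products decompose by the standard Clebsch-Gordan rule (the affine, non-unitary case at level k = -4/3 that the thesis studies is considerably more subtle):

```latex
V_{j_1} \otimes V_{j_2} \;\cong\; \bigoplus_{j=|j_1-j_2|}^{j_1+j_2} V_j,
\qquad \text{e.g.}\quad V_{1/2} \otimes V_{1/2} \;\cong\; V_0 \oplus V_1 .
```

Here \(V_j\) denotes the irreducible representation of spin \(j\) and dimension \(2j+1\); the dimensions check out, since \(2 \times 2 = 1 + 3\).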
Styles APA, Harvard, Vancouver, ISO, etc.
35

Wennerholm, Pia. « The Role of High-Level Reasoning and Rule-Based Representations in the Inverse Base-Rate Effect ». Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Universitetsbiblioteket [distributör], 2001. http://publications.uu.se/theses/91-554-5178-0/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Gredebäck, Gustaf. « Infants’ Knowledge of Occluded Objects : Evidence of Early Spatiotemporal Representations ». Doctoral thesis, Uppsala University, Department of Psychology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4058.

Texte intégral
Résumé :

This thesis demonstrates that infants represent temporarily non-visible, or occluded, objects. From 4 months of age, infants could accurately predict the reappearance of a moving object after 660 ms of non-visibility, indicating accurate spatiotemporal representations. At this age predictions were dominated by associations between specific events and outcomes (associative rules); between 6 and 8 months of age they became dominated by extrapolation (Study III). From 6 months infants could represent occluded objects for up to 4 seconds. The number of successful predictions decreased, however, when the information contained in the occlusion event diminished (the time of accretion and deletion). As infants grew older (up to 12 months) they produced more accurate predictions (Study II). The similarities between adult and infant performance were numerous (Study I). These conclusions are based on one cross-sectional study (Study I) and two longitudinal studies (Studies II & III) in which an object, a ‘happy face’, moved on circular (Studies I, II & III) and other complex trajectories (Study III). One portion of each trajectory was covered by a screen that blocked the object from sight. In each study, participants’ gaze was recorded with an infrared eye-tracking system (ASL 504) and a magnetic head tracker (Flock of Birds). These data were combined with data from the stimulus and stored for off-line analysis.

Styles APA, Harvard, Vancouver, ISO, etc.
37

Esposito, Gabriele. « Representation, power and electoral rules : myths and paradoxes : a computational and experimental approach ». Paris, EHESS, 2011. http://www.theses.fr/2011EHES0152.

Texte intégral
Résumé :
Is the human being, as a single person or as a group, able to understand the influence she has inside a decision-making committee? Is she able to treat all members fairly when designing a parliamentary assembly, or will she give life to bizarre creatures with purely political motivations? Are current voting rules able to avoid paradoxical outcomes after an election has been run? This thesis answers these questions using tools from cooperative and non-cooperative game theory, combining computational and experimental approaches. The first part of the work analyzes federal two-tier voting systems and electoral laws. The second part focuses on human learning in games in which players must identify and choose the situation that gives them the greatest influence.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Mário, Oliveira Rodrigues Cleyton. « Component assembly and theorem proving in constraint handling rules ». Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1821.

Texte intégral
Résumé :
Owing to the growing demand for ever more robust, complex and flexible software, and above all to the very short delivery times required, software engineering has been searching for new development approaches that satisfactorily meet these demands. One way to reach these new levels of productivity comes from using a methodology based on communicating agents: instead of being strictly programmed, the behaviour of these software systems emerges from the interaction of autonomous, independent, declaratively specified agents, robots or subsystems. This provides the ability to automatically configure, optimize, monitor, adapt, diagnose, repair and protect them within their environment. A major problem of declarative languages, however, is the lack of mechanisms for structuring data well, and thus for facilitating reuse. This dissertation therefore describes the development of a new declarative logic language for programming automated reasoning systems in a modular way: C2HR∨. The base language chosen for the extension with logical components was CHR; the reasons for this choice are given throughout the dissertation. Two approaches are presented: the first, known as CHRat, was developed in partnership with the CONTRAINTES research group at INRIA/Rocquencourt-Paris, and makes the programmer directly responsible for defining CHR components, allowing their reuse by other components; the second, CHRtp, primarily targets completeness requirements and is therefore based on logical inference procedures such as forward reasoning, backward reasoning, and resolution/factoring. The dissertation also presents some practical examples in which the use of components radically simplifies the implementation.
The intended contributions of this dissertation are: the definition of a well-formalized family of automated theorem provers that can handle sentences specified in Horn logic or in first-order logic; the extension of CHR into a modular general-purpose language; better structuring of knowledge bases, including the joint use of heterogeneous bases; and the definition of a language for easy and direct structuring of data by means of components, among others.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Karimianpour, Camelia. « The Stone-von Neumann Construction in Branching Rules and Minimal Degree Problems ». Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34240.

Texte intégral
Résumé :
In Part I, we investigate the principal series representations of the n-fold covering groups of the special linear group over a p-adic field. Such representations are constructed via the Stone-von Neumann theorem. We have three interrelated results. We first compute the K-types of these representations. We then give a complete set of reducibility points for the unramified principal series representations. Among these are the unitary unramified principal series representations, for which we further investigate the distribution of the K-types among its irreducible components. In Part II, we demonstrate another application of the Stone-von Neumann theorem. Namely, we present a lower bound for the minimal degree of a faithful representation of an adjoint Chevalley group over a quotient ring of a non-Archimedean local field.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Scanlon, Joan B. « Bending the rule : some representations of male and female homosexuality in English narrative prose from c. 1880 to 1930 ». Thesis, University of Cambridge, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.278434.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
41

Hall, Jack Kingsbury, Mathematics & Statistics, Faculty of Science, UNSW. « Some branching rules for GL(N,C) ». Awarded by:University of New South Wales. Mathematics and Statistics, 2007. http://handle.unsw.edu.au/1959.4/29473.

Texte intégral
Résumé :
This thesis considers symmetric functions and algebraic combinatorics via the polynomial representation theory of GL(N,C). In particular, we utilise the theory of Jacobi-Trudi determinants to prove some new results pertaining to the Littlewood-Richardson coefficients. Our results imply, under some hypotheses on the strictness of the partition, an equality between Littlewood-Richardson coefficients and Kostka numbers. For the case that a suitable partition has two rows, an explicit formula is then obtained for the Littlewood-Richardson coefficient using the Hook Length formula. All these results are then applied to compute branching laws for GL(m+n,C) restricting to GL(m,C) x GL(n,C). The technique also implies the well-known Racah formula.
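For context, the Hook Length formula referred to in this abstract is the standard result (stated here independently of the thesis) counting the standard Young tableaux of a partition shape λ of n:

```latex
% Hook length formula (Frame-Robinson-Thrall):
% the number of standard Young tableaux of shape \lambda \vdash n is
f^{\lambda} = \frac{n!}{\prod_{(i,j) \in \lambda} h(i,j)}
% where h(i,j), the hook length of cell (i,j), counts the cells directly to
% the right of (i,j) in its row, directly below it in its column, and (i,j) itself.
```

For example, for λ = (2,1) ⊢ 3 the hook lengths are 3, 1, 1, giving f^λ = 3!/3 = 2 standard Young tableaux.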
Styles APA, Harvard, Vancouver, ISO, etc.
42

Gad, Mohamed Omar. « Representational fairness in GATT/WTO rule making : multinational enterprises and developing country interests in the TRIPS pharmaceutical-related provisions ». Thesis, Queen Mary, University of London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522314.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
43

Scott-Wright, Alicia 1949. « Managing revisions of rules and guidelines used in clinical information systems : exploring a hierarchical knowledge representation model ». Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28589.

Texte intégral
Résumé :
Thesis (S.M.)--Harvard-MIT Division of Health Sciences and Technology, 2004.
Includes bibliographical references (leaves 46-51).
One important purpose for creating clinical practice guidelines is to improve quality of care by reducing variations in practice. In the current healthcare environment, guidelines are being advocated as a means to disseminate research findings, standardize care, improve quality of care, and increase the cost-effectiveness of health care services. Unfortunately, compliance with text-based clinical practice guidelines is unsatisfactory. On the other hand, adherence to guideline recommendations is increased when providers receive patient-specific recommendations during the patient-provider consultation. Guideline-based point-of-care decision support systems have been shown to increase provider adherence to guideline recommendations. Computer-interpretable formats for clinical practice guidelines are a prerequisite for decision support systems. The development process of a text-based clinical practice guideline is long and arduous, and in most cases this process is repeated when text-based guidelines are revised to include new medical knowledge. Clearly, once text-based guideline knowledge is translated into a computer-interpretable format, the computer-interpretable guideline would also require periodic revisions to maintain the integrity of its evidence base. Therefore, representation formalisms for encoding guideline knowledge into computer-interpretable formats should enable easy revisions of the encoded guidelines. This thesis describes a study I conducted to demonstrate that modular knowledge representation of clinical practice guidelines facilitates easy guideline revisions. To test the hypothesis, I used a methodology for modular representation of guidelines, HieroGLIF, developed by the Decision Systems Group, Brigham and Women's Hospital, Boston, Massachusetts. HieroGLIF uses Axiomatic Design theory to encode "guideline knowledge modules" into a hierarchical tree structure. Axiomatic Design theory was developed in the field of engineering as a principled approach to product design. I applied HieroGLIF to encode parts of three outdated guidelines. I revised these designs to model updated guideline releases. Quantitative metrics assessed the adequacy of the tool to encode generic setting-independent guidelines and to facilitate revisions in encoded guidelines without complete recoding of the model. This work explores the use of HieroGLIF and Axiomatic Design theory to facilitate revisions of computer-interpretable guidelines.
by Alicia Scott-Wright.
S.M.
Styles APA, Harvard, Vancouver, ISO, etc.
44

Damkjer, Kristian Linn. « Architecting RUBE worlds a methodology for creating virtual analog devices as metaphorical representations of formal systems / ». [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0000670.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Morak, Michael. « The impact of disjunction on reasoning under existential rules ». Thesis, University of Oxford, 2014. https://ora.ox.ac.uk/objects/uuid:b8f012c4-0210-41f6-a0d3-a9d1ea5f8fac.

Texte intégral
Résumé :
Ontological database management systems are a powerful tool that combines traditional database techniques with ontological reasoning methods. In this setting, a classical extensional database is enriched with an ontology, or a set of logical assertions, that describes how new, intensional knowledge can be derived from the extensional data. Conjunctive queries are therefore answered against this combined knowledge base of extensional and intensional data. Many languages that represent ontologies have been introduced in the literature. In this thesis we focus on existential rules (also called tuple-generating dependencies or Datalog± rules), and three established languages in this area, namely guarded-based rules, sticky rules and weakly-acyclic rules. The main goal of the thesis is to enrich these languages with non-deterministic constructs (i.e. disjunctions) and investigate the complexity of answering conjunctive queries under these extended languages. As is common in the literature, we distinguish between combined complexity, where the database, the ontology and the query are considered as input, and data complexity, where only the database is considered as input. The latter case is relevant in practice, as the ontology and the query can usually be considered as fixed, and are usually much smaller than the database itself. After giving appropriate definitions to extend the considered languages to disjunctive existential rules, we establish a series of complexity results, completing the complexity picture for each of the above languages and four different query languages: arbitrary conjunctive queries, bounded (hyper-)treewidth queries, acyclic queries and atomic queries. For the guarded-based languages, we show a strong 2EXPTIME lower bound for general queries that holds even for fixed ontologies, and establishes 2EXPTIME-completeness of the query answering problem in this case.
For acyclic queries, the complexity can be reduced to EXPTIME, if the predicate arity is bounded, and the problem even becomes tractable for certain restricted languages, if only atomic queries are used. For ontologies represented by sticky disjunctive rules, we show that the problem becomes undecidable, even in the case of data complexity and atomic queries. Finally, for weakly-acyclic rules, we show that the complexity increases from 2EXPTIME to coN2EXPTIME in general, and from tractable to coNP in case of the data complexity, independent of which query language is used. After answering the open complexity questions, we investigate applications and relevant consequences of our results for description logics and give two generic complexity statements, respectively, for acyclic and general conjunctive query answering over description logic knowledge bases. These generic results allow for an easy determination of the complexity of this reasoning task, based on the expressivity of the considered description logic.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Bouzeghoub, Mokrane. « Secsi : un système expert en conception de systèmes d'informations, modélisation conceptuelle de schémas de bases de données ». Paris 6, 1986. http://www.theses.fr/1986PA066046.

Texte intégral
Résumé :
The main objectives of the system are, on the one hand, the construction of a knowledge base bringing together both theoretical results on models and practical experience in database design, and, on the other hand, the realization of an open toolset capable both of explaining and justifying its choices and results and of integrating new concepts and new design rules. Besides the general architecture and functionality of the system, this thesis describes the knowledge representation model based on semantic networks, the inference rules, and the design methodology adopted.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Dixon, Matt Luke. « The lateral prefrontal cortex supports an integrated representation of task-rules and expected rewards : evidence from fMRI-adaptation ». Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/36766.

Texte intégral
Résumé :
Our capacity for self-control is supported by the use of behaviour-guiding rules. A fundamental question is how we decide which one out of many potential rules to follow. If different rules were integrated with their expected reward-value, they could be compared, and the one with the highest value selected. However, it currently remains unknown whether any areas of the brain perform this integrative function. To address this question, we took advantage of functional magnetic resonance imaging (fMRI)-adaptation—the ubiquitous finding that repeated as compared to novel stimuli elicit a change in the magnitude of neural activity in areas of the brain that are sensitive to that stimulus. We created a novel fMRI-adaptation paradigm in which instruction cues signaled novel or repeated task-rules and expected rewards. We found that the inferior frontal sulcus (IFS)—a sub-region of the lateral prefrontal cortex—exhibited fMRI-adaptation uniquely when both rule and reward information repeated as compared to when it was novel. fMRI-adaptation was not observed when either factor repeated in isolation, providing strong evidence that the IFS supports an integrated representation of task-rules and rewards. Consistent with an integrative role, the IFS exhibited correlated activity with numerous rule-related and reward-related areas of the brain across the entire experimental time-course. Additionally, the correlation strength between the IFS and a subset of these regions changed as a function of the novelty of rule and reward information presented during the instruction cue period. Our results provide novel evidence that the IFS integrates rules with their expected reward-value, which in turn can guide complex decision making.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Rima, Audrius. « Verslo taisyklių panaudojimas duomenų analizei metamodelių transformacijų pagrindu ». Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070816_144909-36358.

Texte intégral
Résumé :
The rising amount of data in modern information systems calls for better and more convenient tools and methods for analyzing it. With large amounts of data, a person can no longer take in the full diversity of the information, and discovering logical links becomes difficult; tools are therefore needed that make data analysis easier, automated and intelligent. This thesis examines the use of business rules for intelligent data analysis and proposes a method for transforming business rules, written in XML, into multidimensional data analysis instructions in a software system. The method is based on metamodel transformations. The proposed method is validated by an experiment and implemented in a prototype software system.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Thomazo, Michaël. « Conjunctive Query Answering Under Existential Rules - Decidability, Complexity, and Algorithms ». Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2013. http://tel.archives-ouvertes.fr/tel-00925722.

Texte intégral
Résumé :
The goal of ontology-based data access (OBDA) is to improve query answering by taking general background knowledge into account during query evaluation. This general knowledge is represented by an ontology, expressed in this thesis by means of first-order logical formulas called existential rules, also known as tuple-generating dependencies and Datalog+/-. The expressivity of these formulas is such that query answering becomes undecidable, which has led the community to define numerous decidable cases, that is, restrictions on the sets of existential rules considered. The contribution of this thesis is twofold: first, we propose a unified view of a large fraction of the known decidable cases, and thereby provide a complexity analysis and a worst-case optimal algorithm. We also consider the widely used approach of query rewriting, and propose a generic algorithm that overcomes some obvious causes of combinatorial explosion that make classical approaches practically inapplicable.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Ghaderi, Hazhar. « The Rare Decay of the Neutral Pion into a Dielectron ». Thesis, Uppsala universitet, Kärnfysik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-211683.

Texte intégral
Résumé :
We give a rather self-contained introduction to the rare pion-to-dielectron decay, which at nontrivial leading order is given by a QED triangle loop. We work within the dispersive framework, where the imaginary part of the amplitude is obtained via the Cutkosky rules. We derive these rules in detail. Using the twofold Mellin-Barnes representation for the pion transition form factor, we derive a simple expression for the branching ratio B(π0 → e+e-), which we then test for various models, in particular a more recent form factor derived from a Lagrangian for light pseudoscalars and vector mesons inspired by effective field theories. Comparison with the KTeV experiment at Fermilab is made, and we find that we are more than 3σ below the KTeV result for some of the form factors. This is in agreement with other theoretical models, such as the Vector Meson Dominance model and the quark-loop model within the constituent-quark framework. But we also find that we can be in agreement with KTeV if we explore some freedom of the form factor not fixed by the low-energy Lagrangian.
Styles APA, Harvard, Vancouver, ISO, etc.
