Dissertations / Theses on the topic 'Combining logics'

Consult the top 25 dissertations / theses for your research on the topic 'Combining logics.'


1

Nair, Vineet. "On Extending BDI Logics." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365892.

Full text
Abstract:
In this thesis we extend BDI logics, which are normal multimodal logics with an arbitrary set of normal modal operators, from three different perspectives. Firstly, based on some recent developments in modal logic, we examine BDI logics from a combining-logics perspective and apply combination techniques like fibring/dovetailing to explain them. The second perspective is to extend the underlying logics to include action constructs in an explicit way, based on some recent action-related theories. The third perspective is to adopt a non-monotonic logic, such as defeasible logic, to reason about intentions in BDI. As such, the research captured in this thesis is theoretical in nature and situated at the crossroads of various disciplines relevant to Artificial Intelligence (AI). More specifically, this thesis makes the following contributions: 1. Combining BDI logics through fibring/dovetailing: BDI systems modelling rational agents rest on a combined system of logics of belief, time and intention, which in turn are basically combinations of well-understood modal logics. The idea behind combining logics is to develop general techniques that allow one to produce combinations of existing and well-understood logics. To this end we adopt Gabbay's fibring/dovetailing technique to provide a general framework for combinations of BDI logics. We show that the existing BDI framework is a dovetailed system. Further, we give conditions on the fibring function to accommodate interaction axioms of the type G^{k,l,m,n} (◊^k □^l φ → □^m ◊^n φ), based on Catach's multimodal semantics. This is a major result when compared with other combining techniques such as fusion, which fail to accommodate axioms of this type. 2. Extending the BDI framework to accommodate composite actions: Taking motivation from recent work on BDI theory, we incorporate the notion of composite actions, π1; π2 (interpreted as π1 followed by π2), into the existing BDI framework. To this end we introduce two new constructs, Result and Opportunity, which help in reasoning about the actual execution of such actions. We give a set of axioms that can accommodate the new constructs and analyse the set of commitment axioms given in the original work against the new framework. 3. Intention reasoning as defeasible reasoning: We argue for a non-monotonic logic of intention in BDI, as opposed to the usual normal modal logic one. Our argument is based on Bratman's policy-based intention. We show that policy-based intention has a defeasible/non-monotonic nature, and hence the traditional normal-modal-logic approach to reasoning about such intentions fails. We give a formalisation of policy-based intention in terms of defeasible logic. The problem of logical omniscience, which usually accompanies normal modal logics, is avoided to a great extent through such an approach.
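The interaction axiom schema referred to in the abstract can be written out in standard multimodal notation (a rendering of the Scott-Lemmon / Catach family, supplied here for readability; the instance names below are standard facts about modal logic, not claims from the thesis):

```latex
% Interaction axiom schema G^{k,l,m,n} (Catach / Scott-Lemmon style);
% k, l, m, n range over the natural numbers, and \Diamond^{0}\varphi = \varphi.
G^{k,l,m,n}\colon\quad \Diamond^{k}\Box^{l}\varphi \;\rightarrow\; \Box^{m}\Diamond^{n}\varphi
```

Familiar axioms arise as instances: D (□φ → ◊φ) is G^{0,1,0,1}, 4 (□φ → □□φ) is G^{0,1,2,0}, and 5 (◊φ → □◊φ) is G^{1,0,1,1}. Fusion combines logics over disjoint operators without adding any such cross-modal interactions, which is the failure of fusion the abstract points to.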
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
2

Nair, Vineet. "On Extending BDI Logics." Griffith University. School of Information Technology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030929.095254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Knorr, Matthias. "Combining open and closed world reasoning for the semantic web." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/6702.

Full text
Abstract:
Dissertation submitted for the degree of Doctor in Informatics
One important problem in the ongoing standardization of knowledge representation languages for the Semantic Web is combining open world ontology languages, such as the OWL-based ones, with closed world rule-based languages. The main difficulty of such a combination is that the two formalisms are quite orthogonal w.r.t. expressiveness and how decidability is achieved. Combining non-monotonic rules and ontologies is thus a challenging task that requires careful balancing between the expressiveness of the knowledge representation language and the computational complexity of reasoning. In this thesis, we argue in favor of a combination of ontologies and non-monotonic rules that tightly integrates the two formalisms involved, that has a computational complexity as low as possible, and that allows us to query for information instead of computing the whole model. As our starting point we choose the mature approach of hybrid MKNF knowledge bases, which is based on an adaptation of the Stable Model Semantics to knowledge bases consisting of ontology axioms and rules. We extend the two-valued framework of MKNF logics to a three-valued logic, and we propose a well-founded semantics for non-disjunctive hybrid MKNF knowledge bases. This new semantics promises more efficient reasoning, and it is faithful w.r.t. the original two-valued MKNF semantics and compatible with both the OWL-based semantics and the traditional Well-Founded Semantics for logic programs. We provide an algorithm based on operators to compute the unique model, and we extend SLG resolution with tabling to a general framework that allows us to query a combination of non-monotonic rules and any given ontology language. Finally, we investigate concrete instances of that procedure w.r.t. three tractable ontology languages, namely the three description logics underlying the OWL 2 profiles.
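The well-founded flavour of the semantics described here can be illustrated, for plain propositional logic programs without the ontology component, by van Gelder's alternating-fixpoint construction (a minimal sketch for intuition only, not code from the thesis):

```python
# A rule is (head, positive_body, negative_body), all propositional atoms.

def gamma(rules, interp):
    """Least model of the Gelfond-Lifschitz reduct of `rules` w.r.t. `interp`:
    a negative literal `not a` is treated as true iff a is not in `interp`."""
    true = set()
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in true and all(p in true for p in pos) \
                    and all(n not in interp for n in neg):
                true.add(head)
                changed = True
    return true

def well_founded(rules, atoms):
    """Return (true, false, undefined) under the well-founded semantics,
    computed via the alternating fixpoint T_{i+1} = gamma(gamma(T_i))."""
    t = set()
    while True:
        upper = gamma(rules, t)       # over-approximation of the true atoms
        t_next = gamma(rules, upper)  # refined under-approximation
        if t_next == t:
            return t, atoms - upper, upper - t
        t = t_next

# Example: a negative loop leaves p and q undefined (three-valued),
# while r is derivably true and s derivably false.
rules = [
    ("p", [], ["q"]),   # p :- not q.
    ("q", [], ["p"]),   # q :- not p.
    ("r", [], []),      # r.
    ("s", [], ["r"]),   # s :- not r.
]
t, f, u = well_founded(rules, {"p", "q", "r", "s"})
```

The `undefined` truth value is exactly what the thesis's three-valued extension of MKNF adds over the two-valued stable-model base.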
Fundação para a Ciência e Tecnologia - grant contract SFRH/BD/28745/2006
APA, Harvard, Vancouver, ISO, and other styles
4

Rudolph, Sebastian. "Relational Exploration: Combining Description Logics and Formal Concept Analysis for Knowledge Specification." Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A25002.

Full text
Abstract:
Facing the growing amount of information in today's society, the task of specifying human knowledge in a way that can be unambiguously processed by computers becomes more and more important. Two acknowledged fields in this evolving scientific area of Knowledge Representation are Description Logics (DL) and Formal Concept Analysis (FCA). While DL concentrates on characterizing domains via logical statements and inferring knowledge from these characterizations, FCA builds conceptual hierarchies on the basis of present data. This work introduces Relational Exploration, a method for acquiring complete relational knowledge about a domain of interest by successively consulting a domain expert without ever asking redundant questions. This is achieved by combining DL and FCA: DL formalisms are used for defining FCA attributes while FCA exploration techniques are deployed to obtain or refine DL knowledge specifications.
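The FCA side of this combination rests on checking implications against a formal context. A minimal illustrative sketch (the toy context and names are mine, not the thesis algorithm):

```python
# A formal context maps each object to its set of attributes.

def closure(context, attrs):
    """Compute A'': intersect the attribute sets of all objects having `attrs`."""
    extent = [o for o, a in context.items() if attrs <= a]
    if not extent:  # empty extent: the closure is the whole attribute set
        return set().union(*context.values())
    return set.intersection(*(context[o] for o in extent))

def implication_holds(context, premise, conclusion):
    """An implication A -> B holds in the context iff B is inside the closure A''."""
    return conclusion <= closure(context, premise)

# Toy context: objects described by attributes (in the thesis, DL concepts).
context = {
    "sparrow": {"bird", "flies"},
    "penguin": {"bird", "swims"},
    "trout":   {"swims"},
}
```

In exploration, each implication that holds in the current context is put to the expert, who either confirms it (it joins the specification) or refutes it with a counterexample object, which is then added to the context; in relational exploration the attributes themselves are defined by DL formulas.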
APA, Harvard, Vancouver, ISO, and other styles
5

Rudolph, Sebastian [Verfasser]. "Relational exploration : combining description logics and formal concept analysis for knowledge specification / von Sebastian Rudolph." Karlsruhe : Univ.-Verl. Karlsruhe, 2007. http://d-nb.info/983756430/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rocktäschel, Tim. "Combining representation learning with logic for language processing." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10040845/.

Full text
Abstract:
The current state-of-the-art in many natural language processing and automated knowledge base completion tasks is held by representation learning methods that learn distributed vector representations of symbols via gradient-based optimization. They require little or no hand-crafted features, thus avoiding the need for most preprocessing steps and task-specific assumptions. However, in many cases representation learning requires a large amount of annotated training data to generalize well to unseen data. Such labeled training data is provided by human annotators who often use formal logic as the language for specifying annotations. This thesis investigates different combinations of representation learning methods with logic for reducing the need for annotated training data, and for improving generalization. We introduce a mapping of function-free first-order logic rules to loss functions that we combine with neural link prediction models. Using this method, logical prior knowledge is directly embedded in vector representations of predicates and constants. We find that this method learns accurate predicate representations for which little or no training data is available, while at the same time generalizing to other predicates not explicitly stated in rules. However, this method relies on grounding first-order logic rules, which does not scale to large rule sets. To overcome this limitation, we propose a scalable method for embedding implications in a vector space by only regularizing predicate representations. Subsequently, we explore a tighter integration of representation learning and logical deduction. We introduce an end-to-end differentiable prover – a neural network that is recursively constructed from Prolog's backward chaining algorithm. The constructed network allows us to calculate the gradient of proofs with respect to symbol representations and to learn these representations from proving facts in a knowledge base.
In addition to incorporating complex first-order rules, it induces interpretable logic programs via gradient descent. Lastly, we propose recurrent neural networks with conditional encoding and a neural attention mechanism for determining the logical relationship between two natural language sentences.
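The rule-to-loss mapping described above can be sketched roughly as follows (an illustrative toy with a DistMult-style score; the names and the hinge form are my assumptions, not the thesis code):

```python
# Turn a first-order implication  body(x, y) => head(x, y)  into a
# differentiable penalty over grounded entity pairs, so gradient descent
# pushes the head predicate to score at least as high as the body.

def score(pred, subj, obj):
    """DistMult-style trilinear score of a triple (subject, predicate, object)."""
    return sum(p * s * o for p, s, o in zip(pred, subj, obj))

def implication_loss(body_vec, head_vec, entity_pairs):
    """Sum of hinge penalties max(0, score(body) - score(head)) over groundings:
    positive exactly where the current embeddings violate the implication."""
    return sum(
        max(0.0, score(body_vec, s, o) - score(head_vec, s, o))
        for s, o in entity_pairs
    )
```

Summing over all entity pairs is the grounding step that fails to scale; the lifted variant mentioned in the abstract instead constrains the body and head predicate vectors directly, with no enumeration of entities.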
APA, Harvard, Vancouver, ISO, and other styles
7

Canuto, Anne Magaly de Paula. "Combining neural networks and fuzzy logic for applications in character recognition." Thesis, University of Kent, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.344107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sirin, Evren. "Combining description logic reasoning with AI planning for composition of web services." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/4070.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
9

Linker, Sven [Verfasser], Ernst-Rüdiger [Akademischer Betreuer] Olderog, and Michael R. [Akademischer Betreuer] Hansen. "Proofs for traffic safety : combining diagrams and logic / Sven Linker. Betreuer: Ernst-Rüdiger Olderog ; Michael R. Hansen." Oldenburg : BIS der Universität Oldenburg, 2015. http://d-nb.info/106831236X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Linker, Sven [Verfasser], Ernst-Rüdiger [Akademischer Betreuer] Olderog, and Michael R. [Akademischer Betreuer] Hansen. "Proofs for traffic safety : combining diagrams and logic / Sven Linker. Betreuer: Ernst-Rüdiger Olderog ; Michael R. Hansen." Oldenburg : BIS der Universität Oldenburg, 2015. http://d-nb.info/106831236X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Macián Sorribes, Héctor. "Design of optimal reservoir operating rules in large water resources systems combining stochastic programming, fuzzy logic and expert criteria." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/82554.

Full text
Abstract:
Given the high degree of development of hydraulic infrastructure in developed countries, and the increasing opposition to constructing new facilities in developing countries, the focus of water resource systems analysis has turned to defining adequate operation strategies. Better management is necessary to cope with the challenge of supplying increasing demands, and with conflicts over water allocation, while facing climate change impacts. To this end, a large set of mathematical simulation and optimization tools has been developed. However, real-world application of these techniques is still limited. One of the main lines of research to address this issue concerns involving experts' knowledge in the definition of the mathematical algorithms. To define operating rules on which system operators can rely, their expert knowledge should be fully accounted for and merged with the results of the mathematical algorithms. This thesis develops a methodological framework, and the required tools, to improve the operation of large-scale water resource systems. In such systems, decision-making processes are complex and supported, at least partially, by the expert knowledge of decision-makers. This importance of expert judgment in operation strategies requires mathematical tools able to embed it and combine it with optimization algorithms. The methods and tools developed in this thesis rely on stochastic programming, fuzzy logic and the involvement of system operators during the whole rule-defining process. An extended stochastic programming algorithm (the CSG-SDDP), able to handle large-scale water resource systems including stream-aquifer interactions, has been developed. The methodological framework proposed uses fuzzy logic to capture expert knowledge in the definition of optimal operating rules.
Once the current decision-making process is faithfully reproduced using fuzzy logic and expert knowledge, stochastic programming results are introduced and the performance of the rules is thereby improved. The framework proposed in this thesis has been applied to the Jucar river system (Eastern Spain), in which scarce resources are allocated through complex decision-making processes. We present two applications. In the first, the CSG-SDDP algorithm is used to define economically optimal conjunctive-use strategies for the joint operation of reservoirs and aquifers. In the second, we implement a collaborative framework that couples historical records with expert knowledge and criteria to define a decision support system (DSS) for the seasonal operation of the reservoirs of the Jucar river system. The co-developed DSS tool explicitly reproduces the decision-making processes and criteria considered by the system operators. Two fuzzy logic systems were developed and linked for this purpose, along with fuzzy regressions to forecast future inflows. The DSS was validated against historical records. The framework offers managers a simple way to define a priori suitable decisions, as well as to explore the consequences of any of them. The resulting representation was then combined with the CSG-SDDP algorithm in order to improve the rules while following the current decision-making process. Results show that reducing pumping from the Mancha Oriental aquifer would lead to higher system-wide benefits due to increased flows from stream-aquifer interaction. The operating rules developed successfully combine fuzzy logic, expert judgment and stochastic programming, increasing water allocations to the demands by changing the way in which the Alarcon, Contreras and Tous reservoirs are balanced. These rules follow the same decision-making processes currently used in the system, so system operators should find them familiar.
In addition, they can be contrasted with the current operating rules to determine which operation options are coherent with the current management and, at the same time, achieve an optimal operation.
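The fuzzy-rule machinery underlying such a DSS can be sketched minimally as follows (a generic Mamdani-style toy with made-up membership functions and thresholds, not the rules elicited from the Jucar operators):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def release_fraction(storage_pct):
    """Map reservoir storage (% of capacity) to a release fraction using
    two fuzzy rules with max-min inference and centroid defuzzification:
        IF storage is LOW  THEN release is SMALL
        IF storage is HIGH THEN release is LARGE"""
    low = tri(storage_pct, -1.0, 0.0, 60.0)      # degree of 'storage is low'
    high = tri(storage_pct, 40.0, 100.0, 101.0)  # degree of 'storage is high'
    num = den = 0.0
    for r in [i / 100.0 for i in range(101)]:    # candidate release fractions
        small = tri(r, -0.01, 0.0, 0.5)
        large = tri(r, 0.5, 1.0, 1.01)
        mu = max(min(low, small), min(high, large))  # Mamdani max-min
        num += mu * r
        den += mu
    return num / den if den else 0.0
```

In the thesis's setting, the membership functions and rule base are elicited from the operators, and the stochastic programming results are then used to retune them rather than to replace them.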
Macián Sorribes, H. (2017). Design of optimal reservoir operating rules in large water resources systems combining stochastic programming, fuzzy logic and expert criteria [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/82554
APA, Harvard, Vancouver, ISO, and other styles
12

Do, Ju-Yong. "Road to seamless positioning : hybrid positioning system combining GPS and television signals /." May be available electronically, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Polyakova, Evgenia I. "A general theory for evaluating joint data interaction when combining diverse data sources /." May be available electronically, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Levinson, Dmitry. "Licensing of negative polarity particles yet, anymore, either and neither : combining downward monotonicity and assertivity /." May be available electronically, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Mondragón, González Sirenia Lizbeth. "Combining a real-time closed-loop system with neuromodulation : an integrative approach to prevent pathological repetitive behaviors." Thesis, Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=http://theses-intra.upmc.fr/modules/resources/download/theses/2019SORUS259.pdf.

Full text
Abstract:
For efficient everyday behavior, we rely on habits and routines, i.e. repetitive behaviors (RB) that we have acquired and automatized. However, the excessive execution of such RB can become pathological. The cortico-striatal circuits are affected in various conditions where a dysregulation of the expression of automatized actions leads to pathological RB. In this thesis, we focus on a neurophysiological substrate suspected to play a crucial role in the emergence and regulation of RB: the striatal parvalbumin-immunoreactive interneuron (PVI) network. PVIs form a wide network of feed-forward inhibition onto medium spiny neurons and are crucial for the functional regulation of the striatal output networks. Using an animal model of compulsive self-grooming, we show that optogenetic stimulation of PVIs in the dorsomedial part of the striatum can effectively reduce RB to normal levels. Moreover, we identified an electrophysiological biomarker in the lateral orbitofrontal cortex (lOFC) preceding the emergence of self-grooming, which we used to trigger optogenetic stimulation of the PVIs in a predictive closed-loop approach that effectively reduced the initiation of grooming events. Finally, in line with our interest in closed-loop experimentation, we contributed to the neuroscience toolbox a series of technological developments that provide a framework useful for further hard real-time closed-loop approaches.
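The predictive closed-loop principle described here, stimulation triggered when a running biomarker estimate crosses a threshold, can be sketched generically (all parameters and names below are illustrative, not the thesis values):

```python
from collections import deque

class ClosedLoopTrigger:
    """Fire a stimulation command when the sliding-window mean of a
    biomarker signal exceeds a threshold, then hold off for a refractory
    period so one event is not stimulated repeatedly."""

    def __init__(self, threshold, window=8, refractory=20):
        self.threshold = threshold
        self.window = deque(maxlen=window)  # sliding window of samples
        self.refractory = refractory        # samples to wait after firing
        self.cooldown = 0

    def step(self, sample):
        """Feed one sample; return True when stimulation should be delivered."""
        self.window.append(sample)
        if self.cooldown > 0:
            self.cooldown -= 1
            return False
        if len(self.window) == self.window.maxlen:
            if sum(self.window) / len(self.window) > self.threshold:
                self.cooldown = self.refractory
                return True
        return False
```

A real implementation runs this loop under hard real-time constraints between acquisition and the stimulation hardware, which is what the framework contributed by the thesis addresses.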
APA, Harvard, Vancouver, ISO, and other styles
16

Chen, Xiujuan. "Computational Intelligence Based Classifier Fusion Models for Biomedical Classification Applications." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/26.

Full text
Abstract:
The generalization abilities of machine learning algorithms often depend on the algorithms' initialization, parameter settings, training sets, or feature selections. For instance, SVM classifier performance largely relies on whether the selected kernel functions are suitable for the real application data. To enhance the performance of individual classifiers, this dissertation proposes classifier fusion models that use computational intelligence techniques to combine different classifiers. The first fusion model, called T1FFSVM, combines multiple SVM classifiers by constructing a fuzzy logic system. T1FFSVM can be improved by tuning the fuzzy membership functions of its linguistic variables using genetic algorithms; the improved model is called GFFSVM. To better handle the uncertainties present in fuzzy membership functions and in classification data, T1FFSVM can also be improved by applying type-2 fuzzy logic to construct a type-2 fuzzy classifier fusion model (T2FFSVM). T1FFSVM, GFFSVM, and T2FFSVM use accuracy as the classifier performance measure. Since AUC (the area under an ROC curve) has been shown to be a better classifier performance metric, AUC-based classifier fusion models are also proposed in the dissertation as a comparison study. Experiments on biomedical datasets demonstrate promising performance of the proposed classifier fusion models compared with the individual component classifiers, as well as better performance than many existing classifier fusion methods. The dissertation also uses machine learning and classifier fusion methods to study an interesting phenomenon in the biology domain: how protein structures and sequences are related to each other.
The experiments show that protein segments with similar structures also share similar sequences, which adds new insights to the existing knowledge on the relation between protein sequences and structures: similar sequences share high structure similarity, but similar structures may not share high sequence similarity.
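The two ingredients this abstract combines, fusing the scores of several base classifiers and measuring the result with AUC, can be illustrated in a few lines. This is not the dissertation's fuzzy fusion model; the weighted average and the helper names below are stand-in assumptions:

```python
# Illustrative sketch: fuse per-sample scores from several classifiers by
# a weighted average, then evaluate with AUC via the rank (Mann-Whitney)
# formulation. A fuzzy-logic fusion would replace `fuse` with inference
# over membership functions.

def fuse(scores_per_classifier, weights):
    """Weighted-average fusion of per-sample scores from several classifiers."""
    n = len(scores_per_classifier[0])
    total_w = sum(weights)
    return [sum(w * s[i] for w, s in zip(weights, scores_per_classifier)) / total_w
            for i in range(n)]

def auc(scores, labels):
    """Area under the ROC curve: fraction of (pos, neg) pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

AUC's pairwise-ranking reading is exactly why it is threshold-independent, which is the property the abstract appeals to when preferring it over accuracy.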
17

Galicia, Auyón Jorge Armando. "Revisiting Data Partitioning for Scalable RDF Graph Processing Combining Graph Exploration and Fragmentation for RDF Processing Query Optimization for Large Scale Clustered RDF Data RDFPart-Suite: Bridging Physical and Logical RDF Partitioning. Reverse Partitioning for SPARQL Queries: Principles and Performance Analysis. Should We Be Afraid of Querying Billions of Triples in a Graph-Based Centralized System? EXGRAF: Exploration et Fragmentation de Graphes au Service du Traitement Scalable de Requêtes RDF." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2021. http://www.theses.fr/2021ESMA0001.

Full text
Abstract:
The Resource Description Framework (RDF) and SPARQL are very popular graph-based standards initially designed to represent and query information on the Web. The flexibility offered by RDF has motivated its use in other domains, and today RDF datasets are excellent sources of information: they gather billions of triples in Knowledge Graphs that must be stored and exploited efficiently. The first generation of RDF systems was built on traditional relational databases. Unfortunately, the performance of these systems degrades rapidly because the relational model is not suited to processing RDF data, which is inherently represented as a graph. Native and distributed RDF systems seek to overcome this limitation. The former mainly use indexing as an optimization strategy to speed up queries; the latter resort to data partitioning. In the relational model, the logical representation of the database is crucial for designing the partitioning. The logical layer, which defines the explicit schema of the database, offers a degree of comfort to designers: it lets them choose, manually or automatically via advisors, the tables and attributes to be partitioned, and it preserves the fundamental partitioning concepts, which remain constant regardless of the database management system. This design scheme is no longer valid for RDF databases, because the RDF model does not explicitly enforce a schema on the data. The logical layer is therefore inexistent, and data partitioning depends strongly on the physical implementation of the triples on disk.
This situation leads to different partitioning logics depending on the target system, which is quite different from the relational model's perspective. In this thesis, we promote the idea of performing data partitioning at the logical level in RDF databases. We therefore first process the RDF data graph to support partitioning based on logical entities. We then propose a framework for carrying out the partitioning methods, accompanied by data allocation and distribution procedures. Our framework was incorporated into a centralized RDF processing system (RDF_QDAG) and a distributed one (gStoreD). We conducted several experiments that confirmed the feasibility of integrating our framework into existing systems, improving their performance for certain queries. Finally, we designed a set of RDF data partitioning management tools, including a data definition language (DDL) and an automatic partitioning wizard
The Resource Description Framework (RDF) and SPARQL are very popular graph-based standards initially designed to represent and query information on the Web. The flexibility offered by RDF motivated its use in other domains, and today RDF datasets are great information sources. They gather billions of triples in Knowledge Graphs that must be stored and efficiently exploited. The first generation of RDF systems was built on top of traditional relational databases. Unfortunately, the performance of these systems degrades rapidly, as the relational model is not suitable for handling RDF data, which is inherently represented as a graph. Native and distributed RDF systems seek to overcome this limitation. The former mainly use indexing as an optimization strategy to speed up queries; distributed and parallel RDF systems resort to data partitioning. In the relational model, the logical representation of the database is crucial to designing data partitions. The logical layer defining the explicit schema of the database provides a degree of comfort to database designers: it lets them choose manually or automatically (through advisors) the tables and attributes to be partitioned, and it allows the core partitioning concepts to remain constant regardless of the database management system. This design scheme is no longer valid for RDF databases, essentially because the RDF model does not explicitly enforce a schema; RDF data is mostly implicitly structured. Thus, the logical layer is inexistent and data partitioning depends strongly on the physical implementation of the triples on disk. This situation leads to different partitioning logics depending on the target system, which is quite different from the relational model's perspective. In this thesis, we promote the novel idea of performing data partitioning at the logical level in RDF databases. Thereby, we first process the RDF data graph to support logical entity-based partitioning.
After this preparation, we present a partitioning framework built upon these logical structures, accompanied by data fragmentation, allocation, and distribution procedures. This framework was incorporated into a centralized (RDF_QDAG) and a distributed (gStoreD) triple store. We conducted several experiments that confirmed the feasibility of integrating our framework into existing systems, improving their performance for certain queries. Finally, we design a set of RDF data partitioning management tools, including a data definition language (DDL) and an automatic partitioning wizard
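The core move of this abstract, fragmenting RDF data along logical entities rather than physical storage, can be sketched minimally. The grouping rule below (fragment triples by the `rdf:type` declared for their subject) is one simple illustrative choice, not necessarily the thesis's actual method:

```python
# A minimal sketch of logical, entity-based partitioning of RDF triples:
# every subject is mapped to its declared class, and triples are then
# fragmented by that class, so each fragment corresponds to a logical
# entity instead of a physical storage layout. Illustrative only.

RDF_TYPE = "rdf:type"

def partition_by_entity(triples):
    """Fragment (s, p, o) triples by the class declared for each subject."""
    subject_class = {s: o for s, p, o in triples if p == RDF_TYPE}
    fragments = {}
    for s, p, o in triples:
        key = subject_class.get(s, "unknown")   # untyped subjects go together
        fragments.setdefault(key, []).append((s, p, o))
    return fragments
```

Allocation and distribution would then place each fragment on a node; the point of the sketch is that the fragmentation key is a logical notion (the entity's class) that survives any change of physical triple layout.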
18

Tsai, Chia-Chang, and 蔡佳昌. "Low-Power Circuit Design Combining the Techniques of Asynchronous Circuits and Adiabatic Logics." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/53284896780164490187.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Department of Electronic Engineering
97
This thesis proposes a novel low-power logic circuit style, called handshaking quasi-adiabatic logic (HQAL), which combines the advantages of asynchronous circuits and adiabatic logics. HQAL adopts dual-rail encoding and employs handshaking to transfer data between adjacent modules. Hence, HQAL has the advantages of asynchronous circuits: no clock-skew problem, no power dissipation due to the clock tree, and no dynamic power dissipation when there are no input data. The power line of the HQAL logic gates is controlled by the handshake control chain (HCC): an HQAL logic gate is not supplied with power when it has no input data. Only when an HQAL gate has acquired its input data is it supplied with power, after which it operates in a way similar to adiabatic logic. Hence, HQAL achieves low power dissipation. With the handshake control chain, an HQAL circuit avoids the problem of data-token overriding, which may occur in conventional asynchronous adiabatic logic (AAL) circuits. Simulation results showed that the HQAL implementation of a pipelined Sklansky adder achieves a 33.1% reduction in power dissipation compared to the CMOS implementation at a data rate of 700 MHz, and a 72.5% reduction at a data rate of 10 MHz. The HQAL implementation can also achieve up to a 95.6% reduction in static power dissipation, as the adiabatic logic blocks in HQAL are not powered and have negligible leakage power dissipation when they have no input.
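HQAL is a circuit-level technique, but the control flow it relies on (dual-rail encoding plus powering a stage only when it holds valid input data) can be mimicked in a toy software model. The encoding convention and function names below are illustrative assumptions, not the thesis's circuits:

```python
# Toy behavioral model of the HQAL control idea: a stage stays unpowered
# (returns None) until it receives a valid dual-rail token, then computes
# and re-encodes its output for the next stage. Illustrative only.

def dual_rail(bit):
    """Dual-rail encoding: logical 0 -> (1, 0), logical 1 -> (0, 1)."""
    return (0, 1) if bit else (1, 0)

def is_valid(code):
    """A dual-rail code carries a data token when exactly one rail is high."""
    return code in ((1, 0), (0, 1))

def stage(code, func):
    """Compute only for valid input; with no token, stay unpowered (None)."""
    if not is_valid(code):
        return None               # no input token: no power, no switching
    bit = code == (0, 1)
    return dual_rail(func(bit))
```

The (0, 0) "spacer" code between tokens is what lets the handshake chain distinguish "no data yet" from a genuine 0, which is the property the HCC exploits to gate power.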
19

Rudolph, Sebastian [Verfasser]. "Relational exploration : combining description logics and formal concept analysis for knowledge specification / von Sebastian Rudolph." 2006. http://d-nb.info/983709688/34.

Full text
20

XIE, HAN-GING, and 謝漢卿. "Combining logic minimization and PLA folding." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/88870185632016755310.

Full text
21

Mandys, Petr. "Combining temporal logic and behavior protocols." Master's thesis, 2006. http://www.nusl.cz/ntk/nusl-267256.

Full text
Abstract:
In this thesis we consider one of the weaknesses of temporal logic: the fact that temporal formulas specifying complex properties are hard to read. We introduce a new temporal logic, "BP-CTL", which originates from Computation Tree Logic (CTL) extended with operators partly taken from Behavior Protocols (BP) and partly newly defined. The text of the thesis is divided into several parts. First we introduce the reader to the context of the issue. Next we describe the new operators and show their usage in small examples. Then we formally define the resulting language (BP-CTL). In the next part we demonstrate the usability of BP-CTL and introduce the tool, called bpctl, for checking properties written in BP-CTL. Finally we evaluate and conclude our work. The text is extended with appendices including a detailed description of the formalisms used, mapping tables of the patterns collected in the Property Specification Patterns project for BP-CTL, and a bpctl user manual.
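Since BP-CTL builds on CTL, a minimal example of the kind of checking such a tool performs may help: the CTL operator EF p ("some execution eventually reaches a state where p holds") reduces to reachability over a Kripke structure. This sketch covers only plain CTL reachability; BP-CTL's additional operators are not modeled:

```python
# Check the CTL property EF prop on a small Kripke structure given as a
# transition map and a per-state labeling. EF reduces to graph reachability
# from the start state; a full CTL checker would add the other operators.

def check_EF(transitions, labels, prop, state):
    """True iff some path from `state` reaches a state labeled with `prop`."""
    seen, stack = set(), [state]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if prop in labels.get(s, set()):
            return True
        stack.extend(transitions.get(s, []))
    return False
```

The readability problem the thesis targets shows up exactly when such primitive operators must be nested deeply to express a realistic requirement, which is what BP-CTL's derived operators are meant to avoid.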
22

QIU, YONG-SHENG, and 邱永盛. "Combining logic minimization and bipartite folding for PLAs." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/91437709983535435183.

Full text
23

Feng, Ti Kan. "Combining Decomposition and Hybrid Algorithms for the Satellite Range Scheduling Problems." Thesis, 2012. http://hdl.handle.net/1807/32239.

Full text
Abstract:
The multiple-resource satellite range scheduling problem (MuRRSP) is a complex and difficult scheduling problem, in which a large number of task requests must be scheduled onto ground-station antennas in order to communicate with satellites. We first examined several exact algorithms previously implemented in the machine scheduling field: column generation and logic-based Benders decomposition. A new hybrid approach that combines column generation and logic-based Benders decomposition is proposed; the hybrid performed well when there is a large number of machines. Next, we presented a connection between the parallel machine scheduling problem and MuRRSP in order to solve MuRRSP with exact algorithms. Furthermore, we proposed a strengthened cut in the sub-problem of the logic-based Benders decomposition. The resulting algorithm proved very effective: Barbulescu's benchmark problems were solved and proved optimal with an average run-time of less than one hour. To the best of our knowledge, previous efforts to solve MuRRSP were all heuristic-based and no optimal schedules existed.
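To make the assignment structure of the problem concrete, here is a simple greedy baseline, not any of the dissertation's exact algorithms, that assigns each communication request (taken in start-time order) to the first free antenna. The request format and tie-breaking rule are illustrative assumptions:

```python
# Greedy baseline for a multiple-resource scheduling problem: each request
# is a (start, end) window; it is assigned to the first antenna that is
# free at its start time, and dropped otherwise. Exact methods such as
# logic-based Benders decomposition replace this myopic rule with a search
# that proves optimality.

def greedy_schedule(requests, num_antennas):
    """Return a list of ((start, end), antenna) assignments."""
    free_at = [0] * num_antennas          # time at which each antenna frees up
    assignment = []
    for start, end in sorted(requests):
        for a in range(num_antennas):
            if free_at[a] <= start:
                free_at[a] = end
                assignment.append(((start, end), a))
                break                     # unschedulable requests are dropped
    return assignment
```

The gap between what such a heuristic schedules and what an exact method proves feasible is precisely the gap the dissertation closes on Barbulescu's benchmarks.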
24

Farsiniamarj, Nasim. "Combining integer programming and tableau-based reasoning : a hybrid calculus for the description logic SHQ." Thesis, 2008. http://spectrum.library.concordia.ca/976382/1/MR63309.pdf.

Full text
Abstract:
Qualified cardinality restrictions are expressive language constructs that extend the basic description logic ALC with the ability to express numerical constraints about relationships. However, the well-known standard tableau algorithms perform poorly when dealing with cardinality restrictions, so an arithmetically informed approach seems inevitable. This thesis presents a hybrid tableau calculus for the description logic SHQ, which extends ALC with qualified cardinality restrictions, role hierarchies, and transitive roles. The hybrid calculus is based on the so-called atomic decomposition technique and combines arithmetic and logical reasoning. Its most prominent feature is that it reduces reasoning about qualified number restrictions to integer linear programming; owing to the nature of arithmetic reasoning, the calculus is therefore not affected by the size of the numbers occurring in cardinality restrictions. Furthermore, we give evidence of how this method of hybrid reasoning can improve performance by organizing the search space more efficiently. An empirical evaluation of our hybrid reasoner on a set of synthesized benchmarks featuring qualified number restrictions clearly demonstrates its superior performance: in comparison to other standard description logic reasoners, our approach shows an overall runtime improvement of several orders of magnitude.
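The reduction this abstract describes can be illustrated in miniature: after atomic decomposition, each qualified number restriction becomes a linear constraint over the cardinalities of disjoint partitions, and satisfiability becomes integer feasibility. In this toy rendition a brute-force search stands in for the ILP solver, and the constraint encoding is an illustrative assumption:

```python
# Toy integer-feasibility check in the spirit of atomic decomposition:
# variables are cardinalities of disjoint partitions; "at-least 2 R.C" and
# "at-most 3 R" become linear constraints over them. Brute force replaces
# a real ILP solver here, so keep the bound small.

from itertools import product

def feasible(num_vars, at_least, at_most, bound=10):
    """at_least/at_most: lists of (indices, k), i.e. sum over indices >=/<= k.
    Return a satisfying cardinality vector, or None."""
    for xs in product(range(bound + 1), repeat=num_vars):
        if all(sum(xs[i] for i in idx) >= k for idx, k in at_least) and \
           all(sum(xs[i] for i in idx) <= k for idx, k in at_most):
            return xs
    return None
```

With partitions x0 = |R ∩ C| and x1 = |R ∩ ¬C|, the pair of restrictions "≥2 R.C" and "≤3 R" becomes x0 ≥ 2 and x0 + x1 ≤ 3; note that scaling the constants k does not change the structure of the search, which is the size-independence property the abstract highlights (a real ILP solver does not enumerate as this sketch does).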
25

Nageeb, Meena. "The Effectiveness of the Hybrid Graphical Representation Method in Visually Combining and Communicating Logical and Spatial Relationships between Scheduled Activities." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-11042.

Full text
Abstract:
This research investigated the possibility of combining the visual advantages of two graphical schedule visualization methods, Linked Gantt Charts (LGC) and Flowline Graphs (FLG), derived from the activity-based and location-based scheduling systems respectively, to help resolve some of their shortcomings by capitalizing on their combined strengths. To accomplish this goal, a graphical representation system combining the two visualization methods was developed. The research then attempted to empirically validate the ability of the proposed tool to visually communicate and combine logical and spatial relationships between scheduled activities, compared to comprehending the same information from a stand-alone LGC or FLG. The accuracy and time of deciphering various details of a sample project schedule were used as parameters to evaluate the proposed scheduling visualization tool against the existing LGC and FLG systems. The Hybrid Graphical Representation (HGR) is the tool developed by this research to combine Linked Gantt Chart bars from the activity-based scheduling approach with flow-lines from the location-based scheduling approach. The HGR concept is founded on the observation that LGC and FLG share a common X-axis, Time; the only difference is that an LGC lists Activities on the Y-axis, while an FLG shows Locations on the Y-axis. This research proposed adding a third dimension to the FLG, listing the project Activities on a Z-axis. Viewing the HGR 3D graph from the top, the user observes the Gantt bars with Time on the X-axis and Activities on the Z-axis; observing the schedule from the front, the user sees the flow-lines of the location-based scheduling approach with Locations on the Y-axis and Time on the X-axis.
After conducting a series of online surveys measuring the time and accuracy of using a prototype HGR schedule, it was found that users were able to reap the benefits of both scheduling approaches (LGC and FLG) and to visually link and communicate information about the activities' logical and spatial relationships. However, participants took a relatively longer time to achieve that higher accuracy using the HGR tool.
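The HGR's two-views-of-one-cube idea maps naturally onto a data structure: each scheduled item carries an activity, a location, and a time span, and the Gantt (top) and flowline (front) views are just two groupings of the same records. The record format and function names below are illustrative assumptions, not the thesis's implementation:

```python
# Sketch of the HGR data model: records of (activity, location, start, end)
# share one time axis; grouping by activity yields the Gantt "top view",
# grouping by location yields the flowline "front view".

def gantt_view(records):
    """Activity -> list of (start, end): the top view of the HGR cube."""
    view = {}
    for activity, location, start, end in records:
        view.setdefault(activity, []).append((start, end))
    return view

def flowline_view(records):
    """Location -> list of (activity, start, end): the front view."""
    view = {}
    for activity, location, start, end in records:
        view.setdefault(location, []).append((activity, start, end))
    return view
```

Because both views are derived from the same records, the logical relationships (between activities) and spatial relationships (between locations) stay consistent by construction, which is the combination the surveys tested.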
