Dissertations / Theses on the topic 'Probabilistic logics'

Consult the top 50 dissertations / theses for your research on the topic 'Probabilistic logics.'

You can also download the full text of each publication as a PDF and read its abstract online whenever one is available in the metadata.

Browse dissertations and theses from a wide variety of disciplines and organise your bibliography correctly.

1

Potyka, Nico [Verfasser]. "Solving Reasoning Problems for Probabilistic Conditional Logics with Consistent and Inconsistent Information / Nico Potyka." Hagen : Fernuniversität Hagen, 2016. http://d-nb.info/1082048402/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Weidner, Thomas. "Probabilistic Logic, Probabilistic Regular Expressions, and Constraint Temporal Logic." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208732.

Abstract:
The classic theorems of Büchi and Kleene state the expressive equivalence of finite automata to monadic second-order logic and regular expressions, respectively. These fundamental results enjoy applications in nearly every field of theoretical computer science. Around the same time as Büchi and Kleene, Rabin investigated probabilistic finite automata. This equally well-established model has applications ranging from natural language processing to probabilistic model checking. Here, we extend Büchi's theorem and Kleene's theorem to the probabilistic setting. We obtain a probabilistic MSO logic by adding an expected second-order quantifier. In the scope of this quantifier, membership is determined by a Bernoulli process. This approach turns out to be universal and is applicable to finite and infinite words as well as to finite trees. In order to prove the expressive equivalence of this probabilistic MSO logic to probabilistic automata, we show a Nivat theorem, which decomposes a recognisable function into a regular language, homomorphisms, and a probability measure. For regular expressions, we build upon existing work to obtain probabilistic regular expressions on finite and infinite words. We show the expressive equivalence between these expressions and probabilistic Muller automata. To handle Muller acceptance conditions, we give a new construction from probabilistic regular expressions to Muller automata. Concerning finite trees, we define probabilistic regular tree expressions using a new iteration operator, called infinity-iteration. Again, we show that these expressions are expressively equivalent to probabilistic tree automata. On a second track of our research we investigate Constraint LTL over multidimensional data words with data values from the infinite tree. Such LTL formulas are evaluated over infinite words, where every position possesses several data values from the infinite tree. Within Constraint LTL one can compare these values from different positions. We show that the model checking problem for this logic is PSPACE-complete by investigating the emptiness problem of Constraint Büchi automata.
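As a concrete aside, the basic semantic object behind these equivalence results, the probability a Rabin-style probabilistic finite automaton assigns to a word, is computable as a product of stochastic matrices. A minimal sketch (the two-state automaton below is our own toy example, not one from the thesis):

```python
def acceptance_prob(initial, transitions, final, word):
    """Probability that a probabilistic finite automaton accepts `word`.

    initial:     list of initial-state probabilities
    transitions: maps each letter to a row-stochastic matrix
    final:       set of accepting state indices
    """
    dist = list(initial)
    for letter in word:
        m = transitions[letter]
        # Push the state distribution through the letter's matrix.
        dist = [sum(dist[i] * m[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return sum(p for i, p in enumerate(dist) if i in final)

# Toy automaton: on 'a', state 0 stays with probability 0.5 and moves to
# the accepting, absorbing state 1 with probability 0.5.
trans = {"a": [[0.5, 0.5], [0.0, 1.0]]}
p_aa = acceptance_prob([1.0, 0.0], trans, {1}, "aa")  # 0.5 + 0.25 = 0.75
```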
3

Barbosa, Fábio Daniel Moreira. "Probabilistic propositional logic." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22198.

Abstract:
Master's in Mathematics and Applications. The term Probabilistic Logic generally refers to any logic that incorporates probabilistic concepts into a formal logical system. In this dissertation, the main focus of study is a probabilistic logic (called Exogenous Probabilistic Propositional Logic, EPPL), which is based on Classical Propositional Logic. We introduce, for this probabilistic logic, its syntax, semantics and a Hilbert calculus, proving several classical results of Probability Theory in the context of EPPL. Moreover, two important properties of a logical system are studied: soundness and completeness. We prove the soundness of EPPL in the standard way, and weak completeness by means of a satisfiability algorithm for EPPL formulas. Concepts from other probabilistic logics (uncertainty and interval probabilities) and from Probability Theory (conditionals and independence) will also be considered in EPPL.
4

Klinov, Pavel. "Practical reasoning in probabilistic description logic." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/practical-reasoning-in-probabilistic-description-logic(6aff2ad0-dc76-44cf-909b-2134f580f29b).html.

Abstract:
Description Logics (DLs) form a family of languages which correspond to decidable fragments of First-Order Logic (FOL). They have been overwhelmingly successful for constructing ontologies, conceptual structures describing domain knowledge. Ontologies have proved valuable in a range of areas, most notably bioinformatics, chemistry, Health Care and Life Sciences, and the Semantic Web. One limitation of DLs, as fragments of FOL, is their restricted ability to cope with various forms of uncertainty. For example, medical knowledge often includes statistical relationships, e.g., findings or results of clinical trials. Currently it is maintained separately, e.g., in Bayesian networks or statistical models. This often hinders knowledge integration and reuse, leads to duplication and, consequently, inconsistencies. One answer to this issue is probabilistic logics, which allow for smooth integration of classical knowledge, i.e., knowledge expressible in standard FOL or its sub-languages, with uncertain knowledge. However, probabilistic logics have long been considered impractical because of discouraging computational properties, mostly due to the lack of simplifying assumptions, e.g., the independence assumptions that are central to Bayesian networks. In this thesis we demonstrate that deductive reasoning in a particular probabilistic DL, called P-SROIQ, can be computationally practical. We present a range of novel algorithms, in particular the probabilistic satisfiability procedure (PSAT) which is, to our knowledge, the first scalable PSAT algorithm for a non-propositional probabilistic logic. We perform an extensive performance and scalability evaluation on different synthetic and natural data sets to justify practicality. In addition, we study theoretical properties of P-SROIQ by formally translating it into a fragment of first-order logic of probability. That allows us to gain a better insight into certain important limitations of P-SROIQ. Finally, we investigate its applicability from the practical perspective, for instance, using it to extract all inconsistencies from a real rule-based medical expert system. We believe the thesis will be of interest to developers of probabilistic reasoners. Some of the algorithms, e.g., PSAT, could also be valuable to the Operations Research community since they are heavily based on mathematical programming. Finally, the theoretical analysis could be helpful for designers of future probabilistic logics.
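The probabilistic satisfiability problem that PSAT algorithms decide reduces to a linear feasibility question over possible worlds: do nonnegative world-probabilities summing to 1 exist that match every assessment? The exhaustive sketch below makes that reduction concrete for tiny instances; it enumerates candidate supports and bears no resemblance to the thesis's scalable column-generation algorithm:

```python
from fractions import Fraction
from itertools import combinations, product

def solve_nonneg(a, b):
    """Exact Gaussian elimination on a x = b (possibly overdetermined);
    returns a nonnegative solution with free variables set to 0, or None."""
    m, n = len(a), len(a[0])
    a = [row[:] + [b[i]] for i, row in enumerate(a)]
    r, piv = 0, []
    for c in range(n):
        p = next((i for i in range(r, m) if a[i][c] != 0), None)
        if p is None:
            continue
        a[r], a[p] = a[p], a[r]
        pivot = a[r][c]
        a[r] = [v / pivot for v in a[r]]
        for i in range(m):
            if i != r and a[i][c] != 0:
                f = a[i][c]
                a[i] = [u - f * v for u, v in zip(a[i], a[r])]
        piv.append(c)
        r += 1
    if any(a[i][n] != 0 for i in range(r, m)):
        return None                      # inconsistent system
    x = [Fraction(0)] * n
    for i, c in enumerate(piv):
        x[c] = a[i][n]
    return x if all(v >= 0 for v in x) else None

def psat(n_atoms, assessments):
    """assessments: list of (formula, probability), each formula a function
    from a world (tuple of booleans) to bool.  True iff some distribution
    over the 2^n_atoms worlds satisfies every P(formula) = probability."""
    worlds = list(product([False, True], repeat=n_atoms))
    rows = [[Fraction(int(f(w))) for w in worlds] for f, _ in assessments]
    rows.append([Fraction(1)] * len(worlds))          # normalisation row
    rhs = [Fraction(q) for _, q in assessments] + [Fraction(1)]
    # A feasible distribution exists iff one is supported on at most
    # len(rows) worlds (a vertex of the feasible polytope).
    for k in range(1, len(rows) + 1):
        for support in combinations(range(len(worlds)), k):
            cols = [[row[j] for j in support] for row in rows]
            if solve_nonneg(cols, rhs) is not None:
                return True
    return False

# P(rain) = 3/5 together with P(rain and wind) = 3/10 is satisfiable ...
sat = psat(2, [(lambda w: w[0], Fraction(3, 5)),
               (lambda w: w[0] and w[1], Fraction(3, 10))])
# ... but P(rain and wind) = 7/10 > P(rain) cannot be.
unsat = psat(2, [(lambda w: w[0], Fraction(3, 5)),
                 (lambda w: w[0] and w[1], Fraction(7, 10))])
```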
5

Chakrapani, Lakshmi Narasimhan. "Probabilistic boolean logic, arithmetic and architectures." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26706.

Abstract:
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Palem, Krishna V.; Committee Member: Lim, Sung Kyu; Committee Member: Loh, Gabriel H.; Committee Member: Mudge, Trevor; Committee Member: Yalamanchili, Sudhakar. Part of the SMARTech Electronic Thesis and Dissertation Collection.
6

Blakely, Scott. "Probabilistic Analysis for Reliable Logic Circuits." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1860.

Abstract:
Continued aggressive scaling of electronic technology poses obstacles for maintaining circuit reliability, so analysis of reliability is of increasing importance. Large numbers of inputs and gates, or correlations among failures, render such analysis computationally complex. This work presents an accurate framework for reliability analysis of logic circuits that inherently handles reconvergent fan-out without additional complexity. Combinational circuits are modeled stochastically as Discrete-Time Markov Chains, where the propagation of node logic levels and error probability distributions through the circuitry is used to determine error probabilities at nodes in the circuit. Model construction is scalable, as it is performed on a gate-by-gate basis. The stochastic nature of the model allows various properties of the circuit to be formally analyzed by means of steady-state properties. Formally verifying these properties against the model can circumvent strenuous simulations while exhaustively checking all possible scenarios for given properties. Small combinational circuits are used to explain model construction, properties are presented for analysis of the system, further example circuits are demonstrated, and the accuracy of the method is verified against an existing simulation method.
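As a back-of-the-envelope illustration of the kind of quantity such a framework computes: the thesis's Markov-chain model is more general and handles correlated signals and reconvergent fan-out, whereas the sketch below assumes independent inputs and an independent per-gate flip probability:

```python
def noisy_gate_p1(gate, p1_in1, p1_in2, eps):
    """P(output = 1) for a two-input gate with independent inputs
    (P(input_i = 1) given) whose output flips with probability eps."""
    p1 = 0.0
    for a in (0, 1):
        for b in (0, 1):
            # Probability of this particular input combination.
            p_ab = ((p1_in1 if a else 1 - p1_in1) *
                    (p1_in2 if b else 1 - p1_in2))
            ideal = gate(a, b)
            p1 += p_ab * ((1 - eps) if ideal else eps)
    return p1

def nand(a, b):
    return int(not (a and b))

# An ideal NAND with uniform random inputs outputs 1 w.p. 0.75; a 10%
# flip probability shifts this to 0.75 * 0.9 + 0.25 * 0.1 = 0.7.
p_out = noisy_gate_p1(nand, 0.5, 0.5, 0.1)
```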
7

Faria, Francisco Henrique Otte Vieira de. "Learning acyclic probabilistic logic programs from data." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-27022018-090821/.

Abstract:
To learn a probabilistic logic program is to find a set of probabilistic rules that best fits some data, in order to explain how attributes relate to one another and to predict the occurrence of new instantiations of these attributes. In this work, we focus on acyclic programs, because in this case the meaning of the program is quite transparent and easy to grasp. We propose that the learning process for an acyclic probabilistic logic program should be guided by a scoring function imported from the literature on Bayesian network learning. We suggest novel techniques that lead to orders-of-magnitude improvements over the current state of the art represented by the ProbLog package. In addition, we present novel techniques for learning the structure of acyclic probabilistic logic programs.
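For readers unfamiliar with the semantics that ProbLog-style programs assign to queries, the probability of a query in a small acyclic program can be computed by enumerating the total choices of the probabilistic facts. The alarm program below is a standard textbook example, not one taken from the thesis:

```python
from itertools import product

# ProbLog-style probabilistic facts with their probabilities.
FACTS = {"burglary": 0.1, "earthquake": 0.2}

def alarm(choice):
    """Acyclic rules:  alarm :- burglary.   alarm :- earthquake."""
    return choice["burglary"] or choice["earthquake"]

def query_prob(query):
    """Sum the weights of the total choices in which `query` is derivable."""
    names = list(FACTS)
    total = 0.0
    for bits in product([False, True], repeat=len(names)):
        choice = dict(zip(names, bits))
        weight = 1.0
        for name in names:
            weight *= FACTS[name] if choice[name] else 1 - FACTS[name]
        if query(choice):
            total += weight
    return total

p_alarm = query_prob(alarm)  # 1 - 0.9 * 0.8 = 0.28
```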
8

Misino, Eleonora. "Deep Generative Models with Probabilistic Logic Priors." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24058/.

Abstract:
Many different extensions of the VAE framework have been introduced in the past. However, the vast majority of them focused on purely sub-symbolic approaches that are not sufficient for solving generative tasks that require a form of reasoning. In this thesis, we propose the probabilistic logic VAE (PLVAE), a neuro-symbolic deep generative model that combines the representational power of VAEs with the reasoning ability of probabilistic logic programming. The strength of PLVAE resides in its probabilistic logic prior, which provides an interpretable structure to the latent space that can easily be changed in order to apply the model to different scenarios. We provide empirical results of our approach by training PLVAE on a base task and then using the same model to generalize to novel tasks that involve reasoning with the same set of symbols.
9

Weidner, Thomas [Verfasser], Manfred [Akademischer Betreuer] Droste, Manfred [Gutachter] Droste, and Benedikt [Gutachter] Bollig. "Probabilistic Logic, Probabilistic Regular Expressions, and Constraint Temporal Logic / Thomas Weidner ; Gutachter: Manfred Droste, Benedikt Bollig ; Betreuer: Manfred Droste." Leipzig : Universitätsbibliothek Leipzig, 2016. http://d-nb.info/1240627777/34.

10

Forst, Jan Frederik. "POLIS : a probabilistic summarisation logic for structured documents." Thesis, Queen Mary, University of London, 2009. http://qmro.qmul.ac.uk/xmlui/handle/123456789/467.

Abstract:
As the availability of structured documents, formatted in markup languages such as SGML, RDF, or XML, increases, retrieval systems increasingly focus on the retrieval of document elements rather than entire documents. Additionally, abstraction layers in the form of formalised retrieval logics have allowed developers to include search facilities in numerous applications without needing detailed knowledge of retrieval models. Although automatic document summarisation has been recognised as a useful tool for reducing the workload of information system users, very few such abstraction layers have been developed for the task of automatic document summarisation. This thesis describes the development of an abstraction logic for summarisation, called POLIS, which provides users (such as developers or knowledge engineers) with high-level access to summarisation facilities. Furthermore, POLIS allows users to exploit the hierarchical information provided by structured documents. The development of POLIS is carried out step by step. We start by defining a series of probabilistic summarisation models, which assign weights to document elements at a user-selected level; these are the summarisation models accessible through POLIS. The formal definition of POLIS is performed in three steps. We first provide a syntax for POLIS, through which users and knowledge engineers interact with the logic. This is followed by a definition of the logic's semantics. Finally, we provide details of an implementation of POLIS. The final chapters of this dissertation are concerned with the evaluation of POLIS, which is conducted in two stages. First, we evaluate the performance of the summarisation models by applying POLIS to two test collections, the DUC AQUAINT corpus and the INEX IEEE corpus. This is followed by application scenarios for POLIS, in which we discuss how POLIS can be used in specific IR tasks.
11

Wagner, Daniel. "Finite-state abstractions for probabilistic computation tree logic." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/6348.

Abstract:
Probabilistic Computation Tree Logic (PCTL) is the established temporal logic for probabilistic verification of discrete-time Markov chains. Probabilistic model checking is a technique that verifies or refutes whether a property specified in this logic holds in a Markov chain. But Markov chains are often infinite or too large for this technique to apply. A standard solution to this problem is to convert the Markov chain to an abstract model and to model check that abstract model. The problem this thesis therefore studies is whether, or when, such finite abstractions of Markov chains for model checking PCTL exist. This thesis makes the following contributions. We identify a sizeable fragment of PCTL for which 3-valued Markov chains can serve as finite abstractions; this fragment is maximal for those abstractions and subsumes many practically relevant specifications including, e.g., reachability. We also develop game-theoretic foundations for the semantics of PCTL over Markov chains by capturing the standard PCTL semantics via two-player games. These games, finally, inspire a notion of p-automata, which accept entire Markov chains. We show that p-automata subsume PCTL and Markov chains; that their languages of Markov chains have pleasant closure properties; and that the complexity of deciding acceptance matches that of probabilistic model checking for p-automata representing PCTL formulae. In addition, we offer a simulation between p-automata that under-approximates language containment. These results then allow us to show that p-automata comprise a solution to the problem studied in this thesis.
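The core computation behind PCTL model checking of a finite Markov chain, e.g. deciding a formula such as P≥0.5[F goal], is a reachability-probability fixed point. A minimal sketch, where the three-state chain is our own example rather than one from the thesis:

```python
def reach_prob(P, targets, iters=200):
    """Per-state probability of eventually reaching `targets` in a finite
    discrete-time Markov chain with transition matrix P, computed as the
    fixed point of: x = 1 on targets, x = P x elsewhere."""
    n = len(P)
    x = [1.0 if s in targets else 0.0 for s in range(n)]
    for _ in range(iters):
        x = [1.0 if s in targets else
             sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

# Three states: from 0 we reach the goal (state 1) w.p. 0.4, fail into
# state 2 w.p. 0.4, and retry w.p. 0.2; states 1 and 2 are absorbing.
P = [[0.2, 0.4, 0.4],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
x = reach_prob(P, {1})  # x[0] converges to 0.4 / (1 - 0.2) = 0.5
```

State 0 therefore satisfies P≥0.5[F goal], but only just.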
12

Maksimović, Petar. "Développement et vérification des logiques probabilistes et des cadres logiques." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00907854.

Abstract:
We present a Probabilistic Logic with Conditional Operators (LPCP), giving its syntax, semantics, and a sound and strongly complete axiomatisation that includes an infinitary inference rule. We prove that LPCP is decidable and extend it so that it can represent evidence, thereby obtaining the first propositional axiomatisation of reasoning about evidence. We encode the Probabilistic Logics LPPQ1 and LPPQ2 in the Coq proof assistant and formally verify their main properties: soundness, strong completeness, and non-compactness. Both logics extend Classical Logic with probability operators and feature an infinitary inference rule. LPPQ1 allows iteration of probability operators, whereas LPPQ2 does not. We formally justify the use of probabilistic SAT solvers for checking consistency-related questions. We present LFP, a Logical Framework with External Predicates, introducing a mechanism for locking and unlocking types and terms in LF that allows the use of external oracles. We prove that LFP satisfies all the main meta-theoretical properties and develop a corresponding canonical framework, which allows adequacy to be proved. We provide various encodings, the untyped λ-calculus with a call-by-value reduction strategy, Design-by-Contract, an imperative language with Hoare Logic, Modal Logics, and Non-Commutative Linear Logic, showing that in LFP one can easily encode side-conditions on the application of typing rules and achieve a separation between verification and computation, obtaining cleaner and more readable proofs.
13

Mio, Matteo. "Game semantics for probabilistic modal μ-calculi." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6223.

Abstract:
The probabilistic (or quantitative) modal μ-calculus is a fixed-point logic designed for expressing properties of probabilistic labeled transition systems (PLTS’s). Two semantics have been studied for this logic, both assigning to every process state a value in the interval [0, 1] representing the probability that the property expressed by the formula holds at the state. One semantics is denotational and the other is a game semantics, specified in terms of two-player stochastic games. The two semantics have been proved to coincide on all finite PLTS’s. A first contribution of the thesis is to extend this coincidence result to arbitrary PLTS’s. A shortcoming of the probabilistic μ-calculus is the lack of expressiveness required to encode other important temporal logics for PLTS’s such as Probabilistic Computation Tree Logic (PCTL). To address this limitation, we extend the logic with a new pair of operators: independent product and coproduct, and we show that the resulting logic can encode the qualitative fragment of PCTL. Moreover, a further extension of the logic, with the operation of truncated sum and its dual, is expressive enough to encode full PCTL. A major contribution of the thesis is the definition of appropriate game semantics for these extended probabilistic μ-calculi. This relies on the definition of a new class of games, called tree games, which generalize standard 2-player stochastic games. In tree games, a play can be split into concurrent subplays which continue their evolution independently. Surprisingly, this simple device supports the encoding of the whole class of imperfect-information games known as Blackwell games. Moreover, interesting open problems in game theory, such as qualitative determinacy for 2-player stochastic parity games, can be reformulated as determinacy problems for suitable classes of tree games. 
Our main technical result about tree games is a proof of determinacy for 2-player stochastic metaparity games, the class of tree games that we use to give game semantics to the extended probabilistic μ-calculi. In order to cope with measure-theoretic technicalities, the proof is carried out in ZFC set theory extended with Martin's Axiom at the first uncountable cardinal (MAℵ1). The final result of the thesis shows that the game semantics of the extended logics coincides with the denotational semantics for arbitrary PLTS's. However, in contrast to the earlier coincidence result, which is proved in ZFC, the proof of coincidence for the extended calculi is once again carried out in ZFC + MAℵ1.
14

Cizelj, Igor. "Vehicle control from temporal logic specifications with probabilistic satisfaction guarantees." Thesis, Boston University, 2014. https://hdl.handle.net/2144/10967.

Abstract:
Thesis (Ph.D.)--Boston University
Temporal logics, such as Linear Temporal Logic (LTL) and Computation Tree Logic (CTL), have become increasingly popular for specifying complex mission specifications in motion planning and control synthesis problems. This dissertation proposes and evaluates methods and algorithms for synthesizing control strategies for different vehicle models from temporal logic specifications. Complex vehicle models that involve systems of differential equations evolving over continuous domains are considered. The goal is to synthesize control strategies that maximize the probability that the behavior of the system, in the presence of sensing and actuation noise, satisfies a given temporal logic specification. The first part of this dissertation proposes an approach for designing a vehicle control strategy that maximizes the probability of accomplishing a motion specification given as a Probabilistic CTL (PCTL) formula. Two scenarios are examined. First, a threat-rich environment is considered when the motion of a vehicle in the environment is given as a finite transition system. Second, a noisy Dubins vehicle is considered. For both scenarios, the motion of the vehicle in the environment is modeled as a Markov Decision Process (MDP) and an approach for generating an optimal MDP control policy that maximizes the probability of satisfying the PCTL formula is introduced. The second part of this dissertation introduces a human-supervised control synthesis method for a noisy Dubins vehicle such that the expected time to satisfy a PCTL formula is minimized, while maintaining the satisfaction probability above a given probability threshold. A method for abstracting the motion of the vehicle in the environment in the form of an MDP is presented. An algorithm for synthesizing an optimal MDP control policy is proposed. 
If the probability threshold cannot be satisfied with the initial specification, the presented framework revises the specification until the supervisor is satisfied with the revised specification and the satisfaction probability is above the threshold. The third part of this dissertation focuses on the problem of stochastic control of a noisy differential-drive mobile robot such that the probability of satisfying a time-constrained specification, given as a Bounded LTL (BLTL) formula, is maximized. A method for mapping noisy sensor measurements to an MDP is introduced. Due to the size of the MDP, finding the exact solution is computationally too expensive. Correctness is traded for scalability, and an MDP control synthesis method based on Statistical Model Checking is introduced.
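The optimization at the heart of such synthesis, choosing per state the action that maximizes the probability of reaching a goal, can be sketched as value iteration over a toy MDP (our own example, not the dissertation's vehicle model):

```python
def max_reach_prob(mdp, targets, iters=500):
    """mdp[s] = {action: [(successor, probability), ...]}.  Returns, for
    each state, the maximum probability of eventually reaching `targets`
    (the Bellman fixed point an optimal control policy achieves)."""
    x = {s: (1.0 if s in targets else 0.0) for s in mdp}
    for _ in range(iters):
        x = {s: 1.0 if s in targets else
                max(sum(p * x[t] for t, p in succs)
                    for succs in mdp[s].values())
             for s in mdp}
    return x

# Toy vehicle: action "dash" reaches the goal (state 1) w.p. 0.8 but
# crashes (state 2) w.p. 0.2; action "retry" reaches the goal w.p. 0.5
# and otherwise stays at 0, so repeating it wins with probability 1.
mdp = {0: {"dash": [(1, 0.8), (2, 0.2)],
           "retry": [(1, 0.5), (0, 0.5)]},
       1: {"stay": [(1, 1.0)]},
       2: {"stay": [(2, 1.0)]}}
values = max_reach_prob(mdp, {1})  # values[0] converges to 1.0
```

Note that the maximizing policy picks "retry" even though "dash" looks better in one shot, precisely the kind of trade-off a probabilistic-satisfaction guarantee captures.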
15

Hinojosa, William. "Probabilistic fuzzy logic framework in reinforcement learning for decision making." Thesis, University of Salford, 2010. http://usir.salford.ac.uk/26716/.

Abstract:
This dissertation focuses on the problem of uncertainty handling during learning by agents dealing with stochastic environments by means of reinforcement learning. Most previous investigations in reinforcement learning have proposed algorithms to deal with learning performance issues while neglecting the uncertainty present in stochastic environments. Reinforcement learning is a valuable learning method when a system requires a selection of actions whose consequences emerge over long periods, for which input-output data are not available. In most combinations of fuzzy systems with reinforcement learning, the environment is considered deterministic. However, in many cases the consequence of an action may be uncertain or stochastic in nature. This work proposes a novel reinforcement learning approach that combines the universal function approximation capability of fuzzy systems with a probabilistic fuzzy logic framework, where information from the environment is not interpreted deterministically, as in classic approaches, but statistically, considering a probability distribution over long-term consequences. The generalized probabilistic fuzzy reinforcement learning (GPFRL) method presented in this dissertation is a modified version of the actor-critic learning architecture, in which learning is enhanced by the introduction of a probability measure into the learning structure, with an incremental gradient-descent weight-updating algorithm providing convergence. Experiments were performed on simulated and real environments based on a travel-planning spoken dialogue system. Experimental results provide evidence to support the following claims: first, GPFRL has shown robust performance when used in control optimization tasks; second, its learning speed outperforms most similar methods; third, GPFRL agents are feasible and promising for the design of adaptive-behaviour robotic systems.
16

Faix, Marvin. "Conception de machines probabilistes dédiées aux inférences bayésiennes." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM079/document.

Abstract:
The aim of this research is to design computers better suited to probabilistic reasoning. The focus of the research is on the processing of uncertain data and on the computation of probability distributions. For this, new machine architectures are presented. The concept they are designed on differs from the Von Neumann model, dispensing with fixed- and floating-point arithmetic altogether. These architectures could replace current processors in sensor processing and robotics applications. In this thesis, two types of probabilistic machines are presented. Their designs are radically different, but both are dedicated to Bayesian inference and use stochastic computing. The first deals with small-dimension inference problems and uses stochastic computing to perform the operations necessary to calculate the inference. This machine is based on the concept of a probabilistic bus and exhibits strong parallelism. The second machine can deal with intractable inference problems. It implements a particular MCMC method, the Gibbs algorithm, at the binary level. In this case, stochastic computing is used for sampling the distribution of interest. An important feature of this machine is its ability to circumvent the convergence problems generally attributed to stochastic computing. Finally, an extension of this second type of machine is presented: a generic and programmable machine designed to find an approximate solution to any inference problem.
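Stochastic computing, on which both machines rely, encodes a probability as the density of 1s in a random bitstream, so simple gates implement arithmetic; for instance, a single AND gate multiplies two independently encoded probabilities. A toy software sketch of the encoding, not the hardware described in the thesis:

```python
import random

def bitstream(p, n, rng):
    """Encode probability p as a random bitstream with 1-density p."""
    return [rng.random() < p for _ in range(n)]

# A single AND gate multiplies two independently encoded probabilities:
# P(a_i AND b_i = 1) = P(a_i = 1) * P(b_i = 1).
rng = random.Random(0)
n = 100_000
a = bitstream(0.5, n, rng)
b = bitstream(0.4, n, rng)
product_estimate = sum(x and y for x, y in zip(a, b)) / n  # ~ 0.5 * 0.4
```

The estimate is only as good as the stream is long (its standard error shrinks like 1/sqrt(n)), which is one source of the convergence issues the abstract mentions.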
17

Roberts, Lesley. "Towards a probabilistic semantics for natural language /." [St. Lucia, Qld.], 2003. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18482.pdf.

18

Myers, Catherine E. "Learning with delayed reinforcement in an exploratory probabilistic logic neural network." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46462.

19

Pandya, Rashmibala. "A multi-layered framework for higher order probabilistic reasoning." Thesis, University of Exeter, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Martiny, Karsten [Verfasser]. "PDT logic : a probabilistic doxastic temporal logic for reasoning about beliefs in multi-agent systems / Karsten Martiny." Lübeck : Zentrale Hochschulbibliothek Lübeck, 2018. http://d-nb.info/1152030132/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Kucik, Paul D. "Probabilistic modeling of insurgency /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Bona, Glauber De. "Measuring inconsistency in probabilistic knowledge bases." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04042016-045006/.

Full text
Abstract:
In standard probabilistic reasoning, performing inference from a knowledge base normally requires guaranteeing the consistency of that base. When we come across an inconsistent set of probabilistic assessments, we want to know where the inconsistency is, how severe it is, and how to correct it. Inconsistency measures have recently been put forward as a tool to address these issues in the Artificial Intelligence community. This work investigates the problem of measuring inconsistency in probabilistic knowledge bases. Basic rationality postulates have driven the formulation of inconsistency measures within classical propositional logic. In the probabilistic case, the quantitative character of probabilities yields an extra desirable property: inconsistency measures should be continuous. To meet this requirement, inconsistency in probabilistic knowledge bases has been measured via distance minimisation. In this thesis, we prove that the continuity postulate is incompatible with basic desirable properties inherited from classical logic. Since minimal inconsistent sets are the basis for some desiderata, we look for more suitable ways of localising the inconsistency in probabilistic logic, while analysing the underlying consolidation processes. The AGM theory of belief revision is extended to encompass consolidation via probability adjustment. The new forms of characterising inconsistency that we propose are employed to weaken some postulates, restoring the compatibility of the whole set of desirable properties. Investigations in Bayesian statistics and formal epistemology have been interested in measuring an agent's degree of incoherence. In these fields, probabilities are usually construed as an agent's degrees of belief, which determine her gambling behaviour. Incoherent agents hold inconsistent degrees of belief, which expose them to disadvantageous bet transactions, also known as Dutch books. Statisticians and philosophers suggest measuring an agent's incoherence through the guaranteed loss she is vulnerable to. We prove that these incoherence measures via Dutch books are equivalent to the inconsistency measures via distance minimisation from the AI community.
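The distance-minimisation idea in this abstract can be made concrete with a toy example (an editorial sketch, not the thesis's actual measure): an agent assigns P(A) = 0.7 and P(not A) = 0.4; coherence requires the two to sum to 1, and the L1 distance to the nearest coherent assessment quantifies the inconsistency.

```python
def incoherence(p_a, p_not_a, steps=1000):
    # L1 distance from (p_a, p_not_a) to the nearest coherent
    # assessment (q, 1 - q), found by grid search over q in [0, 1].
    best = float("inf")
    for i in range(steps + 1):
        q = i / steps
        best = min(best, abs(q - p_a) + abs((1 - q) - p_not_a))
    return best

print(incoherence(0.7, 0.4))  # approximately 0.1
```

A bookmaker selling the agent both bets at her stated prices collects 1.1 while paying out exactly 1, a Dutch book with guaranteed gain 0.1, matching the equivalence between Dutch-book and distance-based measures that the thesis proves.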
APA, Harvard, Vancouver, ISO, and other styles
23

Kane, Thomas Brett. "Reasoning with uncertainty using Nilsson's probabilistic logic and the maximum entropy formalism." Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ghahremani, Azghandi Nargess. "Petri nets, probability and event structures." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9936.

Full text
Abstract:
Models of true concurrency have gained a lot of interest over the last decades as models of concurrent or distributed systems that avoid the well-known state-space explosion problem of interleaving models. In this thesis, we study such models from two perspectives. Firstly, we study the relation between Petri nets and stable event structures. Petri nets can be considered one of the most general and perhaps most widespread models of true concurrency. Event structures, on the other hand, are simpler models of true concurrency with explicit causality and conflict relations. Stable event structures expand the class of event structures by allowing events to be enabled in more than one way. While the relation between Petri nets and event structures is well understood, the relation between Petri nets and stable event structures has not been studied explicitly. We define a new and more compact unfolding of safe Petri nets which is directly translatable to stable event structures. In addition, the notion of a complete finite prefix is defined for compact unfoldings, making the existing model checking algorithms applicable to them. We present algorithms for constructing the compact unfoldings and their complete finite prefixes. Secondly, we study probabilistic models of true concurrency. We extend the definition of probabilistic event structures given by Abbes and Benveniste to a newly defined class of stable event structures, namely, jump-free stable event structures arising from Petri nets (characterised and referred to as net-driven). This requires defining the fundamental concept of branching cells of probabilistic event structures for jump-free net-driven stable event structures, and by proving the existence of an isomorphism among the branching cells of these systems, we show that the latter benefit from the related results on the former models. We then move on to defining a probabilistic logic over probabilistic event structures (PESL). To the best of our knowledge, this is the first probabilistic logic of true concurrency. We show examples of the expressivity achieved by PESL, which in particular include properties related to synchronisation in the system. This is followed by the model checking algorithm for PESL for finite event structures. Finally, we present a logic over stable event structures (SEL), along with an account of its expressivity and its model checking algorithm for finite stable event structures.
APA, Harvard, Vancouver, ISO, and other styles
25

Dellaluce, Jason. "Enhancing symbolic AI ecosystems with Probabilistic Logic Programming: a Kotlin multi-platform case study." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23856/.

Full text
Abstract:
As Artificial Intelligence (AI) progressively conquers the software industry at a fast pace, the demand for more transparent and pervasive technologies increases accordingly. In this scenario, novel approaches to Logic Programming (LP) and symbolic AI have the potential to satisfy the requirements of modern software environments. However, traditional logic-based approaches often fail to match present-day planning and learning workflows, which natively deal with uncertainty. Accordingly, Probabilistic Logic Programming (PLP) is emerging as a modern research field that investigates the combination of LP with probability theory. Although research efforts at the state of the art demonstrate encouraging results, they are usually either developed as proofs of concept or bound to specific platforms, often with inconvenient constraints. In this dissertation, we introduce an elastic and platform-agnostic approach to PLP aimed at surpassing the usability and portability limitations of current proposals. We design our solution as an extension of the 2P-Kt symbolic AI ecosystem, thus endorsing the mission of the project and inheriting its multi-platform and multi-paradigm nature. Additionally, our proposal comprises an object-oriented, pure-Kotlin library for manipulating Binary Decision Diagrams (BDDs), which are notoriously relevant in the context of probabilistic computation. As a Kotlin multi-platform architecture, our BDD module aims to surpass the usability constraints of existing packages, which typically rely on low-level C/C++ bindings for performance reasons. Overall, our project explores novel directions towards more usable, portable, and accessible PLP technologies, which we expect to grow in popularity both in the research community and in the software industry over the next few years.
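The probabilistic computation that BDD libraries like the one described here make efficient is Shannon expansion; a naive sketch of it (illustrative only, with invented names, and exponential without the subproblem sharing a BDD provides) looks like this:

```python
def success_prob(f, probs):
    # P(f = True) for independent Bernoulli inputs, via Shannon expansion:
    # P(f) = p_i * P(f | x_i = 1) + (1 - p_i) * P(f | x_i = 0).
    def go(i, partial):
        if i == len(probs):
            return 1.0 if f(partial) else 0.0
        return (probs[i] * go(i + 1, partial + [True])
                + (1 - probs[i]) * go(i + 1, partial + [False]))
    return go(0, [])

# (x0 AND x1) OR x2, each input true with probability 0.5:
print(success_prob(lambda v: (v[0] and v[1]) or v[2], [0.5, 0.5, 0.5]))  # 0.625
```

A BDD caches the recursive calls on shared subfunctions, turning this exponential recursion into one linear in the diagram's size.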
APA, Harvard, Vancouver, ISO, and other styles
26

Sekar, Sanjana. "Logic Encryption Methods for Hardware Security." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1505124923353686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ceylan, Ismail Ilkan. "Query Answering in Probabilistic Data and Knowledge Bases." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-235238.

Full text
Abstract:
Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art for storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic limitations, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations not only lead to unwanted consequences but also put such systems on weak footing in important tasks, query answering being a very central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavour of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyse their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together some recent paradigms from logic, probabilistic inference, and database theory.
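The tuple-independence assumption criticised in this abstract is what makes some query probabilities a simple product computation; a minimal sketch (invented names, not the thesis's semantics):

```python
from math import prod

def prob_some_witness(witness_probs):
    # Under tuple-independence, P(at least one witness of the query
    # holds) = 1 - product of (1 - p_i) over the independent witnesses.
    return 1 - prod(1 - p for p in witness_probs)

# A fact is extracted with probability 0.9 by one tool and 0.8 by another:
print(prob_some_witness([0.9, 0.8]))  # about 0.98
```

Dropping the independence assumption, as the thesis argues for, means such closed-form products no longer apply and richer inference machinery is needed.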
APA, Harvard, Vancouver, ISO, and other styles
28

Maksimovic, Petar. "Développement et Vérification des Logiques Probabilistes et des Cadres Logiques." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00911547.

Full text
Abstract:
We present a Probabilistic Logic with Conditional Operators (LPCP): its syntax, semantics, and a sound and strongly complete axiomatisation including an infinitary inference rule. We prove that LPCP is decidable, and we extend it so that it can represent evidence, thereby creating the first propositional axiomatisation of evidence-based reasoning. We encode the Probabilistic Logics LPP1Q and LPP2Q in the Coq proof assistant and formally verify their principal properties: soundness, strong completeness, and non-compactness. Both logics extend Classical Logic with probability operators and feature an infinitary inference rule. LPP1Q allows iterations of probability operators, whereas LPP2Q does not. We formally justify the use of probabilistic SAT solvers for checking consistency-related questions. We present LFP, a Logical Framework with External Predicates, introducing a mechanism for locking and unlocking types and terms in LF that permits the use of external oracles. We prove that LFP satisfies all the main properties and develop a corresponding canonical framework, which allows adequacy to be proved. We provide various encodings: the untyped λ-calculus with the call-by-value reduction strategy, Design-by-Contract, an imperative language with Hoare Logic, Modal Logics, and Non-Commutative Linear Logic, showing that in LFP one can easily encode side conditions on the application of typing rules and achieve a separation between verification and computation, obtaining clearer and more readable proofs.
APA, Harvard, Vancouver, ISO, and other styles
29

Chakraborty, Souymodip [Verfasser], Joost-Pieter [Akademischer Betreuer] Katoen, and Lijun [Akademischer Betreuer] Zhang. "New results on probabilistic verification : automata, logic and satisfiability / Souymodip Chakraborty ; Joost-Pieter Katoen, Lijun Zhang." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/119418426X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Gao, Xiaoxu. "Exploring declarative rule-based probabilistic frameworks for link prediction in Knowledge Graphs." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210650.

Full text
Abstract:
The knowledge graph stores factual information from the web in the form of relationships between entities. The quality of a knowledge graph is determined by its completeness and accuracy. However, most current knowledge graphs miss facts or contain incorrect information. Current link prediction solutions suffer from poor scalability and high labour costs. This thesis proposes a declarative rule-based probabilistic framework to perform link prediction. The system incorporates a rule-mining model into hinge-loss Markov random fields to infer links. Moreover, three rule optimization strategies were developed to improve the quality of the rules. Compared with previous solutions, this work dramatically reduces manual costs and provides a more tractable model. Each proposed method has been evaluated with Average Precision or F-score on NELL and Freebase15k. The rule optimization strategy turns out to perform best: the MAP of the best model on NELL is 0.754, better than a state-of-the-art graphical model (0.306), and the F-score of the best model on Freebase15k is 0.709.
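The hinge-loss Markov random fields mentioned here score each ground rule by its "distance to satisfaction" under Łukasiewicz logic; a minimal sketch of that potential (illustrative predicate values, not the thesis's rule base):

```python
def luk_and(*truths):
    # Lukasiewicz conjunction over truth values in [0, 1].
    return max(0.0, sum(truths) - (len(truths) - 1))

def distance_to_satisfaction(body, head):
    # Hinge-loss potential of a ground rule body -> head: zero when the
    # head is at least as true as the body, linear in the gap otherwise.
    return max(0.0, body - head)

# e.g. similar(a,b)=0.9 AND link(b,c)=0.8 -> link(a,c)=0.3
body = luk_and(0.9, 0.8)
print(body, distance_to_satisfaction(body, 0.3))
```

Inference minimises the weighted sum of such hinge losses over all ground rules, which is a convex problem; that convexity is what makes the model tractable compared with discrete graphical models.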
APA, Harvard, Vancouver, ISO, and other styles
32

Al, Shekaili Dhahi. "Integrating Linked Data search results using statistical relational learning approaches." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/integrating-linked-data-search-results-using-statistical-relational-learning-approaches(3f77386b-a38a-4110-8ce1-bda6340e6f0b).html.

Full text
Abstract:
Linked Data (LD) follows the web in providing low barriers to publication, and in deploying web-scale keyword search as a central way of identifying relevant data. As in the web, searches initially identify results in broadly the form in which they were published, and the published form may be provided to the user as the result of a search. This will be satisfactory in some cases, but the diversity of publishers means that the results of the search may be obtained from many different sources, and described in many different ways. As such, there seems to be an opportunity to add value to search results by providing users with an integrated representation that brings together features from different sources. This involves an on-the-fly and automated data integration process being applied to search results, which raises the question as to what technologies might be most suitable for supporting the integration of LD search results. In this thesis we take the view that the problem of integrating LD search results is best approached by assimilating different forms of evidence that support the integration process. In particular, this dissertation shows how Statistical Relational Learning (SRL) formalisms (viz., Markov Logic Networks (MLN) and Probabilistic Soft Logic (PSL)) can be exploited to assimilate different sources of evidence in a principled way and to beneficial effect for users. Specifically, in this dissertation we consider syntactic evidence derived from LD search results and from matching algorithms, semantic evidence derived from LD vocabularies, and user evidence, in the form of feedback. This dissertation makes the following key contributions: (i) a characterisation of key features of LD search results that are relevant to their integration, and a description of some initial experiences in the use of MLN for interpreting search results; (ii) a PSL rule-base that models the uniform assimilation of diverse kinds of evidence; (iii) an empirical evaluation of how the contributed MLN and PSL approaches perform in terms of their ability to infer a structure for integrating LD search results; and (iv) concrete examples of how populating such inferred structures for presentation to the end user is beneficial, as well as guiding the collection of feedback whose assimilation further improves search results presentation.
APA, Harvard, Vancouver, ISO, and other styles
33

Morettin, Paolo. "Learning and Reasoning in Hybrid Structured Spaces." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/264203.

Full text
Abstract:
Many real-world AI applications involve reasoning on both continuous and discrete variables, while requiring some level of symbolic reasoning that can provide guarantees on the system's behaviour. Unfortunately, most existing probabilistic models do not efficiently support hard constraints or are limited to purely discrete or continuous scenarios. Weighted Model Integration (WMI) is a recent and general formalism that enables probabilistic modeling and inference in hybrid structured domains. A difference of WMI-based inference algorithms with respect to most alternatives is that probabilities are computed inside a structured support involving both logical and algebraic relationships between variables. While some progress has been made in the last years and the topic is increasingly gaining interest from the community, research in this area is at an early stage. These aspects motivate the study of hybrid and symbolic probabilistic models and the development of scalable inference procedures and effective learning algorithms in these domains. This PhD thesis embodies my effort in studying scalable reasoning and learning techniques in the context of WMI.
APA, Harvard, Vancouver, ISO, and other styles
34

Arruda, Alexandre Matos. "Abdução clássica e abdução probabilística: a busca pela explicação de dados reais." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-20102015-170210/.

Full text
Abstract:
The search for explanations of facts or phenomena is something that has always permeated human reasoning. Since antiquity, human beings have observed facts and, according to them and their present knowledge, created hypotheses that can explain them. A classic example is a medical consultation in which the doctor, after checking all the symptoms, discovers the disease and the means of treating it. This construction of explanations, given a set of evidence that indicates them, is called abduction. Traditional abduction methods assume that the goal datum has not yet been explained: given a background knowledge base $\Gamma$ and a goal datum $A$, we have $\Gamma \not\vdash A$. Classical methods generate a new datum $H$ such that, together with the background knowledge base $\Gamma$, we can infer $A$ ($\Gamma \cup H \vdash A$); some traditional methods use analytical tableaux to generate the formula $H$. Here, we deal with cut-based abduction, via KE-tableaux, which need not assume that the goal datum is not derived from the knowledge base; moreover, we deal with probabilistic logic, rediscovered by Nilsson, in which probabilities are assigned to logical formulas. An instance of probabilistic logic is consistent if there is a consistent probability distribution over the valuations; determining that distribution is known as the PSAT problem. The aim of this work is to define and establish what an abduction in Probabilistic Logic (PSAT abduction) is and, moreover, to provide abduction methods for PSAT: given a PSAT instance $\langle \Gamma, \Psi \rangle$ in atomic normal form and a formula $A$ such that there is a probability distribution $\pi$ satisfying $\langle \Gamma, \Psi \rangle$ with $\pi(A) = 0$, each method is able to generate a formula $H$ such that $\langle \Gamma \cup H, \Psi \rangle \,|\!\approx\, A$, where $\pi(A) > 0$ for every distribution $\pi$ satisfying $\langle \Gamma \cup H, \Psi \rangle$. We also demonstrate that some of the presented methods are correct and complete in generating formulas $H$ that satisfy the abduction conditions.
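PSAT asks whether a linear system over valuation probabilities is feasible; for two atoms this collapses to the classical Fréchet bounds, which a short sketch can check (invented function name, illustrative only):

```python
def psat_consistent(p_a, p_b, p_ab, tol=1e-9):
    # {P(a)=p_a, P(b)=p_b, P(a AND b)=p_ab} is satisfiable by some
    # distribution over the four valuations of a, b iff the Frechet
    # bounds hold: max(0, p_a + p_b - 1) <= p_ab <= min(p_a, p_b).
    lo = max(0.0, p_a + p_b - 1.0)
    hi = min(p_a, p_b)
    return lo - tol <= p_ab <= hi + tol

print(psat_consistent(0.6, 0.7, 0.3))  # True
print(psat_consistent(0.6, 0.7, 0.0))  # False: P(a AND b) must be >= 0.3
```

The second query mirrors the abduction setting above: if every satisfying distribution forces a formula's probability above zero, no distribution with $\pi(A) = 0$ exists for it.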
APA, Harvard, Vancouver, ISO, and other styles
35

Diebel, James Richard. "Bayesian image vectorization : the probabilistic inversion of vector image rasterization /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Palermo, Angela Giovanna. "Logique juridique et logique probabiliste à l'époque moderne." Thesis, Besançon, 2013. http://www.theses.fr/2013BESA1027/document.

Full text
Abstract:
Our research project analyses the close relations between legal logic and probabilistic reasoning in the constitution of the calculus of probabilities, from its origin in the seventeenth century to the Age of Enlightenment. The study of legal logic inevitably leads to examining the relations between logic and rhetoric, to rethinking rhetoric in light of its unavoidable logical role, and likewise to showing that any study of legal logic must pass through the study of the logic of argumentation. Against the thesis that reduces legal reasoning to mere rhetoric, I have shown that such reasoning answers to a requirement of truth, which demands rethinking the essential relation between logic and rhetoric in the legal field. The logic mobilised here is a logic of probability, appropriate to pragmatic rationality. It is thereby the relation between legal logic and probabilistic logic that is interrogated, both from a historical perspective and, above all, from the point of view of philosophy of science, since these elements constitute a good starting point for the question of the meaning of gnoseology and, more broadly, of the validity of gnoseological theories. And not only that: many contemporary philosophers of science have emphasised the role of humanistic metaphors and of the "human sciences" in the development of scientific theories. Herein lie the timeliness of these studies and the usefulness of these questions, which are interesting because they arise at the boundary between the philosophy of science and moral philosophy, thus breaking the old dualism that screened off the theory of knowledge for centuries and that still has its defenders among certain analytic philosophers. We have thus shown that legal logic and probabilistic logic can be considered entirely new gnoseological paradigms.
When I started to study the relationship between legal logic and probabilistic logic, I immediately realised that this relationship could not really be understood without investigating more specifically the link between logic and rhetoric included in it. A long philosophical tradition has accustomed us to consider legal logic as essentially tied to rhetoric, and the latter as completely detached from logic. With the word "rhetoric" we usually refer to the "art of speaking well". But the ρητορική τέχνη (rhetoriké téchne) that arose in the fifth century BC on the empirical grounds of the judicial art had, from its birth, a practical purpose: it wanted to be an instrument of persuasion, and the medium it used was the εικός (eikós), the plausible. One of the foundations of Greek logic is thus to be found on the empirical grounds of judicial logic. But even if rhetoric was born with practical rather than theoretical purposes, this fact requires a study of argumentation theory and its evidence, apart from the prejudice that, even if logic and rhetoric are both concerned with argument, logic should deal with correct arguments while rhetoric deals only with persuasive ones. Through a historical and logical analysis drawn from Aristotle, which comes to consider the positions of prominent contemporary scholars such as Giuliani, Taruffo, Capozzi, Cellucci, Spranzi, etc., I show that logic and rhetoric instead have a strong bond which should be rethought, both so as to better understand the essence of legal logic and because breaking the logic-rhetoric dualism can open much wider perspectives of reflection. In particular, I refer to the reflection on the relation between logic and morality which, in turn, would lead us to reflect on the opposition between mind and body. In fact, when we look at the history of logic, we realise that, since ancient times, there were no sharp and radical divisions between the logical and rhetorical fields, and that, even in modern times, it is possible to draw a line of continuity between the field of rigorous proof and the field of rhetorical demonstration, thanks to the recognisable theoretical role of metaphor.
APA, Harvard, Vancouver, ISO, and other styles
37

Fares, George E. "Probabilistic fault location in combinational logic networks by multistage binary tree classifier: algorithm development, implementation results and efficiency." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Spikes, Kyle Thomas. "Probabilistic seismic inversion based on rock-physics models for reservoir characterization /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ozdemir, Mustafa. "A Probabilistic Schedule Delay Analysis In Construction Projects By Using Fuzzy Logic Incorporated With Relative Importance Index (rii) Method." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612169/index.pdf.

Full text
Abstract:
The aim of this thesis is to propose a decision support tool for contractors before the bidding stage to quantify the probability of schedule delay in construction projects by using fuzzy logic incorporated with the relative importance index (RII) method. Eighty-three (83) different schedule delay factors were identified through a detailed literature review and interviews with experts from a leading Turkish construction company, then categorized into nine (9) groups and visualized by utilizing Ishikawa (fishbone) diagrams. The relative importance of the schedule delay factors was quantified by the relative importance index (RII) method, and the factors and groups were ranked according to their importance level for schedule delay. A schedule delay assessment model was proposed using fuzzy theory in order to determine a realistic time contingency by taking into account the delay factors that characterize construction projects. The assessment model was developed using the Fuzzy Logic Toolbox of the MATLAB software. The proposed methodology was tested in a real case study, and the probability of schedule delay was evaluated by the assessment model after the required inputs were entered into the software. According to the case study results, the factors and groups contributing most (and thus needing attention) to the probability of schedule delay were discussed. The assessment model results were found to be acceptable and adequate for the purpose of this thesis.
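The relative importance index mentioned in the abstract has a standard closed form in the construction-management literature, RII = ΣW / (A × N), where W are respondent ratings, A is the highest possible rating, and N is the number of respondents. A minimal sketch, with hypothetical factor names and ratings (not values from the thesis):

```python
# Relative Importance Index (RII) for survey-ranked delay factors:
# RII = sum(W) / (A * N), where W are ratings on a 1..A scale,
# A is the highest possible rating, and N is the number of respondents.

def rii(ratings, max_rating=5):
    """Return the RII of one delay factor from its survey ratings."""
    if not ratings:
        raise ValueError("at least one rating is required")
    return sum(ratings) / (max_rating * len(ratings))

# Hypothetical ratings for three delay factors from ten respondents:
factors = {
    "late material delivery": [5, 4, 5, 4, 5, 3, 4, 5, 4, 5],
    "design changes":         [3, 4, 3, 5, 4, 4, 3, 4, 3, 4],
    "adverse weather":        [2, 3, 2, 3, 2, 4, 3, 2, 3, 2],
}

ranked = sorted(factors, key=lambda f: rii(factors[f]), reverse=True)
for f in ranked:
    print(f"{f}: RII = {rii(factors[f]):.2f}")
```

Factors with RII closest to 1.0 are the ones ranked as most critical, which is how the thesis orders its 83 factors and 9 groups.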
APA, Harvard, Vancouver, ISO, and other styles
40

Hager, Johann. "The application of probabilistic logic to identify, quantify and mitigate the uncertainty inherent to a large surface mining budget." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/79708.

Full text
Abstract:
Mining is a hugely expensive process and, unlike manufacturing, is based on an ever-diminishing resource. It requires a continuous infusion of capital to sustain production. A myriad of factors, from the volatility of the markets to the surety that the minerals are really there, plague both management and investors. The budget tries to predict or forecast future profits and acts as a roadmap for all stakeholders. Unfortunately, most of the time the budget of a mine degenerates to the point of collapse, sometimes very soon into the new budget period. This problem plagues small and large mines indiscriminately. The budget is dictated in absolutes, and little or no variability is allowed. This thesis aims at developing a process to predict the probability of failure or success through the application of probabilistic logic to the simulation of the budget. To achieve this, a very detailed modelling tool is required. The model must replicate the actual mining process both in time and in actual spatial representation. Enabling technology was developed over a period of five years, primarily based on the Runge Software Suite. The use of activity-based costing enabled the budget to be simulated and expressed as a probability distribution. A Pareto analysis was done on the main cost drivers to extract the most important elements (or key drivers) that need to be manipulated. These distributions were mapped against real data and approximated with the three-parameter Weibull distribution. Simulation using Xeras® (Runge) proved to be impossible, owing to the time needed for setup and processing. The budget was instead described as an empirical function of the production tonnages, split according to the Pareto analyses. These functions were then used in Arena® to build a stochastic simulation model, with the individual distributions modelled to supply the stochastic drivers for the budget distribution.
Income, based on sales, was added to the model so that the net profit could be reflected as a distribution. This is analysed to determine the probability of meeting the budget. The underlying analysis of an open pit mining process clearly shows that there are primary variables that may be controlled to trigger major changes in the production process. The most important parameter is the hauling cycle, because the haul trucks are the nexus of the production operation. It is further shown that the budget is primarily influenced by either FTEs (full-time employees, i.e. bodies) or funds (Capex or Opex) or a combination of both. The model uses probabilistic logic and ultimately culminates in the decision of how much money is needed and where it should be applied. This ensures that the probability of achieving the budget is increased in a rational and demonstrable way. The logical question that arises is: "Can something be done to utilise this knowledge and change the behaviour of the operators?" This led to IOPA (Intelligent Operator Performance Analysis), where performance, or the lack thereof, is measured on a shift-by-shift basis, evaluated, and communicated through automated feedback to the supervisors and operators; it is being implemented, and early results and feedback are hugely positive. The last step is to prove where capital (or any additional money spent) applied to the budget will give the most benefit or have the biggest positive influence on its achievement. The strength of the model lies in the fact that it combines stochastic simulation, probability theory, financial budgeting and practical mine scheduling to predict (or describe) the event of budget achievement as a probability distribution. The main contribution is a new level of understanding of financial risk and/or constraints in the budget of a large (open pit) mine.
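The simulation pipeline the abstract describes (three-parameter Weibull fits for the key cost drivers feeding a stochastic net-profit distribution) can be sketched with a plain Monte Carlo loop. Every shape, scale and location value below is an invented placeholder, not a figure from the thesis:

```python
import random
import statistics

def weibull3(shape, scale, location):
    """Sample from a three-parameter (location-shifted) Weibull."""
    return location + random.weibullvariate(scale, shape)

random.seed(1)

# Hypothetical key drivers (arbitrary currency units), each approximated
# by a fitted three-parameter Weibull as in the thesis methodology:
def simulated_profit():
    revenue = weibull3(3.0, 40.0, 180.0)   # sales income
    hauling = weibull3(2.5, 25.0, 60.0)    # truck hauling cost (the key driver)
    labour  = weibull3(4.0, 10.0, 45.0)    # FTE cost
    other   = weibull3(2.0, 8.0, 30.0)     # remaining Pareto tail, aggregated
    return revenue - (hauling + labour + other)

runs = [simulated_profit() for _ in range(10_000)]
budget_target = 50.0  # hypothetical budgeted net profit
p_meet = sum(r >= budget_target for r in runs) / len(runs)
print(f"mean net profit: {statistics.mean(runs):.1f}")
print(f"P(meet budget): {p_meet:.2f}")
```

The point of the exercise is exactly the one the thesis makes: the budget comes out as a distribution, so "achieving the budget" becomes a probability rather than a yes/no absolute.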
Dissertation (MSc)--University of Pretoria, 2014.
Mining Engineering
MEng
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
41

Mahendiran, Aravindan. "Automated Vocabulary Building for Characterizing and Forecasting Elections using Social Media Analytics." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/25430.

Full text
Abstract:
Twitter has become a popular data source over the past decade and has garnered a significant amount of attention as a surrogate data source for many important forecasting problems. Strong correlations have been observed between Twitter indicators and real-world trends spanning elections, stock markets, book sales, and flu outbreaks. A key ingredient of all methods that use Twitter for forecasting is agreeing on a domain-specific vocabulary to track the pertinent tweets, which is typically provided by subject matter experts (SMEs). The language used on Twitter differs drastically from other forms of online discourse, such as news articles and blogs, and constantly evolves over time as users adopt popular hashtags to express their opinions. Thus, the vocabulary used by forecasting algorithms needs to be dynamic in nature and should capture emerging trends over time. This thesis proposes a novel unsupervised learning algorithm that builds a dynamic vocabulary using Probabilistic Soft Logic (PSL), a framework for probabilistic reasoning over relational domains. Using eight presidential elections from Latin America, we show how our query expansion methodology improves the performance of traditional election forecasting algorithms. Through this approach we demonstrate close to a two-fold increase in the number of tweets retrieved for predictions and a 36.90% reduction in prediction error.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
42

Wong, Vicky W. "Characterizing the parallel performance and soft error resilience of probabilistic inference algorithms /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Giusti, Giulia. "Sui Tipi Sessione, le Scelte Probabilistiche e il Tempo Polinomiale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24922/.

Full text
Abstract:
Cryptographic protocols are designed to guarantee secure communication over channels possibly controlled by malicious users. Given the growing size of communication networks and their dependence on cryptographic protocols, a high level of security is required of the latter. In the computational model of modern cryptography, security properties are defined rigorously, and protocols are verified against these properties by means of mathematical proofs. This thesis project is set in that scenario. Its goal is the development of process algebras capable of representing cryptographic protocols in the sense of the computational model. The process algebra under investigation requires a notion of probabilistic choice and computational-complexity constraints on adversaries. To this end, a session type system capable of verifying the security of protocols has been defined. The scalability problems of models for the automatic verification of protocols defined according to the computational approach underline the scientific relevance of this work; the root of these limitations lies in the simultaneous presence of nondeterminism and probabilistic choices. The system developed in this project turns out to be, on the one hand, permissive enough to allow the representation of standard protocols and of proofs by reduction and, on the other, as restrictive as possible so as to rule out nondeterminism and deadlock phenomena, which are generally absent from the computational model. Moreover, it enjoys important properties such as Subject Reduction and a polynomial bound on the computational complexity of processes.
APA, Harvard, Vancouver, ISO, and other styles
44

Vaccari, Giulio. "Dal Paradigma Funzionale a Quello Logico in Presenza di Scelte Probabilistiche: un Approccio Basato sulla Geometria dell'Interazione." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16726/.

Full text
Abstract:
This thesis describes the development of a piece of software that acts as a translator between two programming languages. The purpose of a translator is to transform a program written in a given language into a new program that is functionally equivalent to the original but written in a different language. The source language for the translation is the probabilistic lambda calculus. We study the programming paradigm on which it is based and then analyse the structure of the programs definable in the language. We then look at ProbLog, a language founded on the logic programming paradigm enriched with probabilistic constructs, which serves as the target language for the translator. Logic languages allow an approach to programming based on the definition of logical theories, in which new results are derived from propositions assumed true through a process of formal deduction. The main feature that distinguishes ProbLog from other logic languages lies in the possibility of defining propositions that are true with a given probability, allowing us to model domains in which facts and rules need no longer be true in an absolute sense. To carry out the translation process, the Geometry of Interaction is employed, a semantic structure for linear logic introduced by the logician Jean-Yves Girard. Intuitively it may seem hard to imagine how this could have found a place in the development of a translator, but we shall see that a lambda-calculus program is in fact intrinsically connected to this kind of logic. The Geometry of Interaction allows us to view functional programs from a new angle, shortening the distance that separates the functional paradigm from the logical one.
APA, Harvard, Vancouver, ISO, and other styles
45

Myers, Andrew T. "Testing and probabilistic simulation of ductile fracture initiation in structural steel components and weldments /." May be available electronically:, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Morais, Eduardo Menezes de. "Answer set programming probabilístico." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-20022013-001051/.

Full text
Abstract:
This dissertation introduces a technique called Probabilistic Answer Set Programming (PASP), which allows modelling complex theories and checking their consistency with respect to a set of statistical data. We propose resolution methods based on a reduction to the probabilistic satisfiability problem (PSAT) and a Turing reduction to ASP.
APA, Harvard, Vancouver, ISO, and other styles
47

Torres, Parra Jimena Cecilia. "A Perception Based Question-Answering Architecture Derived from Computing with Words." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1967797581&sid=1&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tothong, Polsak. "Probabilistic seismic demand analysis using advanced ground motion intensity measures, attenuation relationships, and near-fault effects /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kobayashi, H. "An application of probabilistic life-cycle cost analysis to the construction and maintenance of reinforced concrete bridges /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Baier, Christel, Marcus Daum, Benjamin Engel, Hermann Härtig, Joachim Klein, Sascha Klüppelholz, Steffen Märcker, Hendrik Tews, and Marcus Völp. "Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-121319.

Full text
Abstract:
Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when the number of processes increases. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size as the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example of a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.
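The pay-off of symmetry reduction is easy to quantify in a back-of-the-envelope way: for n indistinguishable processes with k local states each, the full product construction has k^n global states, while a counting abstraction only tracks how many processes occupy each local state, giving C(n + k - 1, k - 1) states. A small sketch (k = 5 is an illustrative value, not a figure from the paper):

```python
from math import comb

def full_states(n, k):
    """Global states when n distinguishable processes each hold one of k local states."""
    return k ** n

def symmetric_states(n, k):
    """States after symmetry reduction: only the multiset of local states matters,
    i.e. the number of weak compositions of n into k counters."""
    return comb(n + k - 1, k - 1)

# A test-and-test-and-set lock might give each process a handful of
# local states; compare the two constructions as n grows:
for n in (10, 100):
    print(n, full_states(n, 5), symmetric_states(n, 5))
```

The symmetric count grows only polynomially in n (degree k - 1), which is why models with a hundred or more symmetric processes remain tractable while the full product explodes.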
APA, Harvard, Vancouver, ISO, and other styles
