Journal articles on the topic "Propositional satisfiability problems"

Consult the top 50 scholarly journal articles on the topic "Propositional satisfiability problems".

1

Jeroslow, Robert G., and Jinchang Wang. "Solving propositional satisfiability problems". Annals of Mathematics and Artificial Intelligence 1, no. 1-4 (September 1990): 167–87. http://dx.doi.org/10.1007/bf01531077.

2

MEIER, ARNE, MICHAEL THOMAS, HERIBERT VOLLMER and MARTIN MUNDHENK. "THE COMPLEXITY OF SATISFIABILITY FOR FRAGMENTS OF CTL AND CTL⋆". International Journal of Foundations of Computer Science 20, no. 05 (October 2009): 901–18. http://dx.doi.org/10.1142/s0129054109006954.

Abstract:
The satisfiability problems for CTL and CTL⋆ are known to be EXPTIME-complete, resp. 2EXPTIME-complete (Fischer and Ladner (1979), Vardi and Stockmeyer (1985)). For fragments that use fewer temporal or propositional operators, the complexity may decrease. This paper undertakes a systematic study of satisfiability for CTL- and CTL⋆-formulae over restricted sets of propositional and temporal operators. We show that restricting the temporal operators yields satisfiability problems complete for 2EXPTIME, EXPTIME, PSPACE, and NP. Restricting the propositional operators either does not change the complexity (as determined by the temporal operators), or yields very low complexity like NC1, TC0, or NLOGTIME.

3

MOUHOUB, MALEK, and SAMIRA SADAOUI. "SOLVING INCREMENTAL SATISFIABILITY". International Journal on Artificial Intelligence Tools 16, no. 01 (February 2007): 139–47. http://dx.doi.org/10.1142/s0218213007003254.

Abstract:
The propositional satisfiability (SAT) problem is fundamental to the theory of NP-completeness. Indeed, using the concept of "polynomial-time reducibility" all NP-complete problems can be polynomially reduced to SAT. Thus, any new technique for satisfiability problems will lead to general approaches for thousands of hard combinatorial problems. In this paper, we introduce the incremental propositional satisfiability problem that consists of maintaining the satisfiability of a propositional formula anytime a conjunction of new clauses is added. More precisely, the goal here is to check whether a solution to a SAT problem continues to be a solution anytime a new set of clauses is added and if not, whether the solution can be modified efficiently to satisfy the old formula and the new clauses. We will study the applicability of systematic and approximation methods for solving incremental SAT problems. The systematic method is based on the branch and bound technique while the approximation methods rely on stochastic local search and genetic algorithms. Experimental tests, conducted on randomly generated SAT instances, demonstrate the efficiency in time of the approximation methods over the branch and bound algorithm. However, these approximation methods do not always guarantee the completeness of the solution returned. We show that a method we propose that uses non systematic search in a limited form together with branch and bound has the best compromise, in practice, between time and quality of the solution returned (success ratio).
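
For illustration only (not code from the paper): the incremental check described above amounts to testing whether the current model already satisfies the newly added clauses and, only if it does not, falling back to a solver. A minimal Python sketch, with clauses as lists of signed integers in DIMACS style and `repair` standing in for the branch-and-bound or local-search step:

```python
# Minimal sketch of an incremental SAT check.
# Clauses are lists of non-zero ints (DIMACS style): 3 means x3, -3 means NOT x3.
# The model is assumed to assign every variable occurring in the clauses.

def satisfies(model, clauses):
    """Return True if `model` satisfies every clause."""
    return all(
        any(model[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def incremental_check(model, old_clauses, new_clauses, repair):
    """Keep `model` if it already satisfies the new clauses;
    otherwise call a solver (`repair`) on the combined formula."""
    if satisfies(model, new_clauses):
        return model                           # old solution still works
    return repair(old_clauses + new_clauses)   # e.g. branch and bound or local search
```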

4

Aravantinos, V., R. Caferra and N. Peltier. "Decidability and Undecidability Results for Propositional Schemata". Journal of Artificial Intelligence Research 40 (March 22, 2011): 599–656. http://dx.doi.org/10.1613/jair.3351.

Abstract:
We define a logic of propositional formula schemata adding to the syntax of propositional logic indexed propositions and iterated connectives ranging over intervals parameterized by arithmetic variables. The satisfiability problem is shown to be undecidable for this new logic, but we introduce a very general class of schemata, called bound-linear, for which this problem becomes decidable. This result is obtained by reduction to a particular class of schemata called regular, for which we provide a sound and complete terminating proof procedure. This schemata calculus allows one to capture proof patterns corresponding to a large class of problems specified in propositional logic. We also show that the satisfiability problem becomes again undecidable for slight extensions of this class, thus demonstrating that bound-linear schemata represent a good compromise between expressivity and decidability.

5

Cameron, Chris, Rex Chen, Jason Hartford and Kevin Leyton-Brown. "Predicting Propositional Satisfiability via End-to-End Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3324–31. http://dx.doi.org/10.1609/aaai.v34i04.5733.

Abstract:
Strangely enough, it is possible to use machine learning models to predict the satisfiability status of hard SAT problems with accuracy considerably higher than random guessing. Existing methods have relied on extensive, manual feature engineering and computationally complex features (e.g., based on linear programming relaxations). We show for the first time that even better performance can be achieved by end-to-end learning methods — i.e., models that map directly from raw problem inputs to predictions and take only linear time to evaluate. Our work leverages deep network models which capture a key invariance exhibited by SAT problems: satisfiability status is unaffected by reordering variables and clauses. We showed that end-to-end learning with deep networks can outperform previous work on random 3-SAT problems at the solubility phase transition, where: (1) exactly 50% of problems are satisfiable; and (2) empirical runtimes of known solution methods scale exponentially with problem size (e.g., we achieved 84% prediction accuracy on 600-variable problems, which take hours to solve with state-of-the-art methods). We also showed that deep networks can generalize across problem sizes (e.g., a network trained only on 100-variable problems, which typically take about 10 ms to solve, achieved 81% accuracy on 600-variable problems).
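
The invariance the authors exploit can be checked directly on toy instances: renaming variables and reordering clauses never changes satisfiability. A brute-force illustration (exponential, so only for tiny formulas; this is not the paper's learning model):

```python
from itertools import product
import random

def brute_force_sat(clauses, n_vars):
    """Exhaustively test all 2^n assignments; True iff satisfiable."""
    for bits in product([False, True], repeat=n_vars):
        model = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def permute(clauses, n_vars, seed=0):
    """Randomly rename variables and reorder clauses and literals."""
    rng = random.Random(seed)
    names = list(range(1, n_vars + 1))
    rng.shuffle(names)
    rename = {i + 1: names[i] for i in range(n_vars)}
    new = [sorted(rename[abs(l)] * (1 if l > 0 else -1) for l in c) for c in clauses]
    rng.shuffle(new)
    return new

f = [[1, -2], [2, 3], [-1, -3]]
assert brute_force_sat(f, 3) == brute_force_sat(permute(f, 3), 3)  # status preserved
```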

6

Reith, Steffen, and Heribert Vollmer. "Optimal satisfiability for propositional calculi and constraint satisfaction problems". Information and Computation 186, no. 1 (October 2003): 1–19. http://dx.doi.org/10.1016/s0890-5401(03)00092-0.

7

MOUHOUB, MALEK. "SYSTEMATIC VERSUS LOCAL SEARCH AND GA TECHNIQUES FOR INCREMENTAL SAT". International Journal of Computational Intelligence and Applications 07, no. 01 (March 2008): 77–96. http://dx.doi.org/10.1142/s1469026808002193.

Abstract:
The propositional satisfiability (SAT) problem is fundamental to the theory of NP-completeness. Indeed, using the concept of "polynomial-time reducibility" all NP-complete problems can be polynomially reduced to SAT. Thus, any new technique for satisfiability problems will lead to general approaches for thousands of hard combinatorial problems. In this paper, we introduce the incremental propositional satisfiability problem that consists of maintaining the satisfiability of a propositional formula anytime a conjunction of new clauses is added. More precisely, the goal here is to check whether a solution to a SAT problem continues to be a solution anytime a new set of clauses is added and if not, whether the solution can be modified efficiently to satisfy the old formula and the new clauses. We will study the applicability of systematic and approximation methods for solving incremental SAT problems. The systematic method is based on the branch and bound technique, whereas the approximation methods rely on stochastic local search (SLS) and genetic algorithms (GAs). A comprehensive empirical study, conducted on a wide range of randomly generated consistent SAT instances, demonstrates the efficiency in time of the approximation methods over the branch and bound algorithm. However, these approximation methods do not guarantee the completeness of the solution returned. We show that a method we propose that uses nonsystematic search in a limited form together with branch and bound has the best compromise, in practice, between time and the success ratio (percentage of instances completely solved).

8

Balu, Radhakrishnan, Dale Shires and Raju Namburu. "A quantum algorithm for uniform sampling of models of propositional logic based on quantum probability". Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 16, no. 1 (May 17, 2016): 57–65. http://dx.doi.org/10.1177/1548512916648232.

Abstract:
We describe a class of quantum algorithms to generate models of propositional logic with equal probability. We consider quantum stochastic flows that are the quantum analogues of classical Markov chains and establish a relation between fixed points on the two flows. We construct chains inspired by von Neumann algorithms using uniform measures as fixed points to construct the corresponding irreversible quantum stochastic flows. We formulate sampling models of propositions in the framework of adiabatic quantum computing and solve the underlying satisfiability instances. Satisfiability formulation is an important and successful technique in modeling the decision theoretic problems in a classical context. We discuss some features of the proposed algorithms tested on an existing quantum annealer D-Wave II extending the simulation of decision theoretic problems to a quantum context.

9

Pinkas, Gadi. "Symmetric Neural Networks and Propositional Logic Satisfiability". Neural Computation 3, no. 2 (June 1991): 282–91. http://dx.doi.org/10.1162/neco.1991.3.2.282.

Abstract:
Connectionist networks with symmetric weights (like Hopfield networks and Boltzmann Machines) use gradient descent to find a minimum for quadratic energy functions. We show equivalence between the problem of satisfiability in propositional calculus and the problem of minimizing those energy functions. The equivalence is in the sense that for any satisfiable well-formed formula (WFF) we can find a quadratic function that describes it, such that the set of solutions that minimizes the function is equal to the set of truth assignments that satisfy the WFF. We also show that in the same sense every quadratic energy function describes some satisfiable WFF. Algorithms are given to transform any propositional WFF into an energy function that describes it and vice versa. High-order models that use sigma-pi units are shown to be equivalent to the standard quadratic models with additional hidden units. An algorithm to convert high-order networks to low-order ones is used to implement a satisfiability problem-solver on a connectionist network. The results give better understanding of the role of hidden units and of the limitations and capabilities of symmetric connectionist models. The techniques developed for the satisfiability problem may be applied to a wide range of other problems, such as associative memories, finding maximal consistent subsets, automatic deduction, and even nonmonotonic reasoning.
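
The clause-to-energy mapping behind this equivalence can be illustrated with a toy penalty construction: each clause contributes an energy term that is 0 exactly when the clause is satisfied, so the global minima coincide with the satisfying assignments. The sketch below uses a simple product penalty (which is higher-order for clauses of three or more literals; the paper's reduction of such terms to quadratic energies with hidden units is not reproduced here):

```python
# Energy of a CNF formula under a 0/1 assignment: each clause contributes
# the product of (1 - literal value), which is 0 iff the clause is satisfied.
def clause_energy(clause, assignment):
    term = 1
    for lit in clause:
        v = assignment[abs(lit)]
        term *= (1 - v) if lit > 0 else v
    return term

def energy(clauses, assignment):
    return sum(clause_energy(c, assignment) for c in clauses)

# (x1 OR x2) AND (NOT x1 OR x3): energy 0 exactly at satisfying assignments.
cnf = [[1, 2], [-1, 3]]
print(energy(cnf, {1: 1, 2: 0, 3: 1}))  # 0, satisfying assignment
print(energy(cnf, {1: 0, 2: 0, 3: 1}))  # 1, clause (x1 OR x2) violated
```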

10

Boudane, Abdelhamid, Saïd Jabbour, Lakhdar Sais and Yakoub Salhi. "SAT-Based Data Mining". International Journal on Artificial Intelligence Tools 27, no. 01 (February 2018): 1840002. http://dx.doi.org/10.1142/s021821301840002x.

Abstract:
Several propositional satisfiability (SAT) based encodings have been proposed to deal with various data mining problems including itemset and sequence mining problems. This research issue allows one to model data mining problems in a declarative way, while exploiting efficient SAT solving techniques. In this paper, we overview our contributions on the application of propositional satisfiability (SAT) to model and solve itemset mining tasks. We first present a SAT based encoding of the frequent closed itemset mining task as a propositional formula whose models correspond to the patterns to be mined. Secondly, we show that some data mining constraints can be avoided by reformulation. We illustrate this issue by reformulating the closeness constraint using the notion of minimal models. Finally, we also address the scalability issue, one of the most important challenges of this declarative framework. To this end, we propose a complete partition-based approach whose aim is to avoid encoding the whole database as a single formula. Using a partition on the set of items, our new approach leads to several propositional formulas of reasonable size. The experimental evaluation on several known datasets shows huge improvements in comparison to the direct approach without partitioning, while significantly reducing the performance gap with respect to specialized algorithms.

11

WOLFMAN, STEVEN A., and DANIEL S. WELD. "Combining linear programming and satisfiability solving for resource planning". Knowledge Engineering Review 16, no. 1 (March 2001): 85–99. http://dx.doi.org/10.1017/s0269888901000017.

Abstract:
Compilation to Boolean satisfiability has become a powerful paradigm for solving artificial intelligence problems. However, domains that require metric reasoning cannot be compiled efficiently to satisfiability even if they would otherwise benefit from compilation. We address this problem by combining techniques from the artificial intelligence and operations research communities. In particular, we introduce the LCNF (Linear Conjunctive Normal Form) representation that combines propositional logic with metric constraints. We present LPSAT (Linear Programming plus SATisfiability), an engine that solves LCNF problems by interleaving calls to an incremental Simplex algorithm with systematic satisfaction methods. We explore several techniques for enhancing LPSAT's efficiency and expressive power by adjusting the interaction between the satisfiability and linear programming components of LPSAT. Next, we describe a compiler that converts metric resource planning problems into LCNF for processing by LPSAT. Finally, the experimental section of the paper explores several optimisations to LPSAT, including learning from constraint failure and randomised cutoffs.

12

Mayer, Joachim, Ilse Mitterreiter and Franz Josef Radermacher. "Running time experiments on some algorithms for solving propositional satisfiability problems". Annals of Operations Research 55, no. 1 (February 1995): 139–78. http://dx.doi.org/10.1007/bf02031719.

13

Limón-Priego, Yensen, Ismael Everardo Bárcenas-Patiño, Edgard Iván Benítez-Guerrero, Guillermo Gilberto Molero-Castillo and Alejandro Velazquez-Mena. "Mu-Calculus Satisfiability with Arithmetic Constraints". Proceedings of the Institute for System Programming of the RAS 33, no. 2 (2021): 191–200. http://dx.doi.org/10.15514/ispras-2021-33(2)-12.

Abstract:
The propositional modal μ-calculus is a well-known specification language for labeled transition systems. In this work, we study an extension of this logic with converse modalities and Presburger arithmetic constraints, interpreted over tree models. We describe a satisfiability algorithm based on breadth-first construction of Fischer-Ladner models. An implementation together with several experiments is also reported. Furthermore, we also describe an application of the algorithm to solve static analysis problems over semi-structured data.

14

Cimatti, A., A. Griggio and R. Sebastiani. "Computing Small Unsatisfiable Cores in Satisfiability Modulo Theories". Journal of Artificial Intelligence Research 40 (April 14, 2011): 701–28. http://dx.doi.org/10.1613/jair.3196.

Abstract:
The problem of finding small unsatisfiable cores for SAT formulas has recently received a lot of interest, mostly for its applications in formal verification. However, propositional logic is often not expressive enough for representing many interesting verification problems, which can be more naturally addressed in the framework of Satisfiability Modulo Theories, SMT. Surprisingly, the problem of finding unsatisfiable cores in SMT has received very little attention in the literature. In this paper we present a novel approach to this problem, called the Lemma-Lifting approach. The main idea is to combine an SMT solver with an external propositional core extractor. The SMT solver produces the theory lemmas found during the search, dynamically lifting the suitable amount of theory information to the Boolean level. The core extractor is then called on the Boolean abstraction of the original SMT problem and of the theory lemmas. This results in an unsatisfiable core for the original SMT problem, once the remaining theory lemmas are removed. The approach is conceptually interesting, and has several advantages in practice. In fact, it is extremely simple to implement and to update, and it can be interfaced with every propositional core extractor in a plug-and-play manner, so as to benefit for free of all unsat-core reduction techniques which have been or will be made available. We have evaluated our algorithm with a very extensive empirical test on SMT-LIB benchmarks, which confirms the validity and potential of this approach.

15

Kho, L. C., Kasihmuddin M. S. M., Mansor M. A. and Sathasivam S. "Propositional Satisfiability Logic via Ant Colony Optimization in Hopfield Neural Network". Malaysian Journal of Mathematical Sciences 16, no. 1 (January 31, 2022): 37–53. http://dx.doi.org/10.47836/mjms.16.1.04.

Abstract:
Minimizing the cost function that corresponds to propositional logic is vital to ensure that the learning phase of HNN can occur optimally. In that regard, an optimal and non-biased algorithm is required to ensure HNN will always converge to a global solution. Ant Colony Optimization (ACO) is a population-based and nature-inspired algorithm to solve various combinatorial optimization problems. ACO simulates the behaviour of real ants that forage for food and the communication of ants through pheromone density. In this work, ACO will be used to minimize the cost function that corresponds to the logical rule in a Hopfield Neural Network. ACO will utilize pheromone density to find the optimal path that leads to a zero cost function without consuming more learning iterations. Performance for all learning models will be evaluated based on various performance metrics. Results collected from computer simulation imply that ACO outperformed the conventional learning model in minimizing the logical cost function.

16

Preto, Sandro Márcio da Silva. "Semantics modulo satisfiability with applications: function representation, probabilities and game theory". Bulletin of Symbolic Logic 28, no. 2 (June 2022): 264–65. http://dx.doi.org/10.1017/bsl.2022.2.

Abstract:
In the context of propositional logics, we apply semantics modulo satisfiability—a restricted semantics which comprehends only valuations that satisfy some specific set of formulas—with the aim to efficiently solve some computational tasks. Three possible such applications are developed. We begin by studying the possibility of implicitly representing rational McNaughton functions in Łukasiewicz Infinitely-valued Logic through semantics modulo satisfiability. We theoretically investigate some approaches to such representation concept, called representation modulo satisfiability, and describe a polynomial algorithm that builds representations in the newly introduced system. An implementation of the algorithm, test results and ways to randomly generate rational McNaughton functions for testing are presented. Moreover, we propose an application of such representations to the formal verification of properties of neural networks by means of the reasoning framework of Łukasiewicz Infinitely-valued Logic. Then, we move to the investigation of the satisfiability of joint probabilistic assignments to formulas of Łukasiewicz Infinitely-valued Logic, which is known to be an NP-complete problem. We provide an exact decision algorithm derived from the combination of linear algebraic methods with semantics modulo satisfiability. Also, we provide an implementation for such algorithm for which the phenomenon of phase transition is empirically detected. Lastly, we study the game theory situation of observable games, which are games that are known to reach a Nash equilibrium, however, an external observer does not know what is the exact profile of actions that occur in a specific instance; thus, such observer assigns subjective probabilities to players actions. We study the decision problem of determining if a set of these probabilistic constraints is coherent by reducing it to the problems of satisfiability of probabilistic assignments to logical formulas both in Classical Propositional Logic and Łukasiewicz Infinitely-valued Logic depending on whether only pure equilibria or also mixed equilibria are allowed. Such reductions rely upon the properties of semantics modulo satisfiability. We provide complexity and algorithmic discussion for the coherence problem and, also, for the problem of computing maximal and minimal probabilistic constraints on actions that preserves coherence. Abstract prepared by Sandro Márcio da Silva Preto. E-mail: spreto@ime.usp.br. URL: https://doi.org/10.11606/T.45.2021.tde-17062021-163257

17

SIMON, J. C., and O. DUBOIS. "NUMBER OF SOLUTIONS OF SATISFIABILITY INSTANCES—APPLICATIONS TO KNOWLEDGE BASES". International Journal of Pattern Recognition and Artificial Intelligence 03, no. 01 (March 1989): 53–65. http://dx.doi.org/10.1142/s0218001489000061.

Abstract:
In propositional logic (zero order) a system of logical rules may be put under the form of a conjunction of disjunctions, i.e. a “satisfiability” or SAT-problem. SAT is central to NP-complete problems. Any result obtained on SAT would have consequences for a lot of problems important in artificial intelligence. We deal with the question of the number N of solutions of SAT. Firstly, any system of SAT clauses may be transformed into a system of independent clauses by an exponential process; N may be computed exactly. Secondly, by a statistical approach, results are obtained showing that for a given SAT-instance, it should be possible to find an estimate of N with a margin of confidence in polynomial time. Thirdly, we demonstrate the usefulness of these ideas on large knowledge bases.
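
The quantity N studied here is what is now called the model count of a formula. A brute-force reference implementation for tiny instances makes the exponential baseline explicit (illustrative only, not the authors' statistical estimator):

```python
from itertools import product

def count_models(clauses, n_vars):
    """Exact number N of satisfying assignments, by exhaustive enumeration (O(2^n))."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        model = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

# (x1 OR x2) has 3 models over {x1, x2}.
print(count_models([[1, 2]], 2))  # 3
```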

18

Balyo, Tomáš, Roman Barták and Otakar Trunda. "REINFORCED ENCODING FOR PLANNING AS SAT". Acta Polytechnica CTU Proceedings 2, no. 2 (December 31, 2015): 1–7. http://dx.doi.org/10.14311/app.2015.1.0001.

Abstract:
Solving planning problems via translation to satisfiability (SAT) is one of the most successful approaches to automated planning. We propose a new encoding scheme, called Reinforced Encoding, which encodes a planning problem represented in the SAS+ formalism into SAT. The Reinforced Encoding is a combination of the transition-based SASE encoding with the classical propositional encoding. In our experiments we compare our new encoding to other known SAS+ based encodings. The results indicate that the Reinforced Encoding performs well on the benchmark problems of the 2011 International Planning Competition and can outperform all the other known encodings for several domains.

19

Giunchiglia, E., M. Narizzano and A. Tacchella. "Clause/Term Resolution and Learning in the Evaluation of Quantified Boolean Formulas". Journal of Artificial Intelligence Research 26 (August 17, 2006): 371–416. http://dx.doi.org/10.1613/jair.1959.

Abstract:
Resolution is the rule of inference at the basis of most procedures for automated reasoning. In these procedures, the input formula is first translated into an equisatisfiable formula in conjunctive normal form (CNF) and then represented as a set of clauses. Deduction starts by inferring new clauses by resolution, and goes on until the empty clause is generated or satisfiability of the set of clauses is proven, e.g., because no new clauses can be generated. In this paper, we restrict our attention to the problem of evaluating Quantified Boolean Formulas (QBFs). In this setting, the above outlined deduction process is known to be sound and complete if given a formula in CNF and if a form of resolution, called ``Q-resolution'', is used. We introduce Q-resolution on terms, to be used for formulas in disjunctive normal form. We show that the computation performed by most of the available procedures for QBFs --based on the Davis-Logemann-Loveland procedure (DLL) for propositional satisfiability-- corresponds to a tree in which Q-resolution on terms and clauses alternate. This poses the theoretical bases for the introduction of learning, corresponding to recording Q-resolution formulas associated with the nodes of the tree. We discuss the problems related to the introduction of learning in DLL based procedures, and present solutions extending state-of-the-art proposals coming from the literature on propositional satisfiability. Finally, we show that our DLL based solver extended with learning, performs significantly better on benchmarks used in the 2003 QBF solvers comparative evaluation.

20

Balyo, Tomáš, and Roman Barták. "No One SATPlan Encoding To Rule Them All". Proceedings of the International Symposium on Combinatorial Search 6, no. 1 (September 1, 2021): 146–50. http://dx.doi.org/10.1609/socs.v6i1.18371.

Abstract:
Solving planning problems via translation to propositional satisfiability (SAT) is one of the most successful approaches to automated planning. An important aspect of this approach is the encoding, i.e., the construction of a propositional formula from a given planning problem instance. Numerous encoding schemes have been proposed in the recent years each aiming to outperform the previous encodings on the majority of the benchmark problems. In this paper we take a different approach. Instead of trying to develop a new encoding that is better for all kinds of benchmarks we take recently developed specialized encoding schemes and design a method to automatically select the proper encoding for a given planning problem instance. In the paper we also examine ranking heuristics for the Relaxed Relaxed Exists-Step encoding, which plays an important role in our algorithm. Experiments show that our new approach significantly outperforms the state-of-the-art encoding schemes when compared on the benchmarks of the 2011 International Planning Competition.

21

Stephen, Shirly, and Torsten Hahmann. "Identifying Bottlenecks in Practical SAT-Based Model Finding for First-Order Logic Ontologies with Datasets". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10039–40. http://dx.doi.org/10.1609/aaai.v33i01.330110039.

Abstract:
Satisfiability of first-order logic (FOL) ontologies is typically verified by translation to propositional satisfiability (SAT) problems, which are then tackled by a SAT solver. Unfortunately, SAT solvers often experience scalability issues when reasoning with FOL ontologies and even moderately sized datasets. While SAT solvers have been found to capably handle complex axiomatizations, finding models of datasets gets considerably more complex and time-intensive as the number of clauses increases exponentially with the number of individuals and the axiomatic complexity. We identify FOL definitions as a specific bottleneck and demonstrate via experiments that the presence of many defined terms of the highest arity significantly slows down model finding. We also show that removing optional definitions and substituting these terms by their definiens leads to a reduction in the number of clauses, which makes SAT-based model finding practical for over 100 individuals in a FOL theory.

22

AMENDOLA, GIOVANNI, CARMINE DODARO and MARCO MARATEA. "Abstract Solvers for Computing Cautious Consequences of ASP programs". Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 740–56. http://dx.doi.org/10.1017/s1471068419000164.

Abstract:
Abstract solvers are a method to formally analyze algorithms that have been profitably used for describing, comparing and composing solving techniques in various fields such as Propositional Satisfiability (SAT), Quantified SAT, Satisfiability Modulo Theories, Answer Set Programming (ASP), and Constraint ASP. In this paper, we design, implement and test novel abstract solutions for cautious reasoning tasks in ASP. We show how to improve the current abstract solvers for cautious reasoning in ASP with new techniques borrowed from backbone computation in SAT, in order to design new solving algorithms. By doing so, we also formally show that the algorithms for solving cautious reasoning tasks in ASP are strongly related to those for computing backbones of Boolean formulas. We implement some of the new solutions in the ASP solver wasp and show that their performance is comparable to state-of-the-art solutions on the benchmark problems from the past ASP Competitions.
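
The backbone notion that the paper borrows from SAT is the set of literals fixed to the same value in every model. A naive enumeration-based illustration (real backbone extractors use repeated SAT calls rather than enumeration):

```python
from itertools import product

def backbone(clauses, n_vars):
    """Literals that take the same value in all models (empty set if no models; conventions vary)."""
    models = []
    for bits in product([False, True], repeat=n_vars):
        m = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(m[abs(l)] == (l > 0) for l in c) for c in clauses):
            models.append(m)
    if not models:
        return set()
    fixed = set()
    for v in range(1, n_vars + 1):
        vals = {m[v] for m in models}
        if len(vals) == 1:
            fixed.add(v if vals.pop() else -v)
    return fixed

# x1 is forced true: backbone of (x1) AND (x1 OR x2) is {1}.
print(backbone([[1], [1, 2]], 2))  # {1}
```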

23

BRAHA, DAN. "Design-as-satisfiability: A new approach to automated synthesis". Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, no. 5 (November 2001): 385–99. http://dx.doi.org/10.1017/s0890060401155022.

Abstract:
This article addresses computational synthesis systems that attempt to find a structural description that matches a set of initial functional requirements and design constraints with a finite sequence of production rules. It has been previously shown by the author that it is computationally difficult to identify a sequence of production rules that can lead to a satisficing design solution. As a result, computational synthesis, particularly with large volumes of selection information, requires effective design search procedures. Many computational synthesis systems utilize transformational search strategies. However, such search strategies are inefficient due to the combinatorial nature of the problem. In this article, the problem is approached using a completely different paradigm. The new approach encodes a design search problem as a Boolean (propositional) satisfiability problem, such that from every satisfying Boolean-valued truth assignment to the corresponding Boolean expression we efficiently can derive a solution to the original synthesis problem (along with its finite sequence of production rules). A major advantage of the proposed approach is the possibility of utilizing recently developed powerful randomized search algorithms for solving Boolean satisfiability problems, which considerably outperform the most widely used satisfiability algorithms. The new design-as-satisfiability technique provides a flexible framework for stating a variety of design constraints, and also represents properly the theory behind modern constraint-based design systems.

24

LIU, JIMING, XIAOLONG JIN and JING HAN. "DISTRIBUTED PROBLEM SOLVING WITHOUT COMMUNICATION — AN EXAMINATION OF COMPUTATIONALLY HARD SATISFIABILITY PROBLEMS". International Journal of Pattern Recognition and Artificial Intelligence 16, no. 08 (December 2002): 1041–64. http://dx.doi.org/10.1142/s0218001402002143.

Abstract:
In this paper, we extend and modify the ERA approach proposed in Ref. 13 to solve Propositional Satisfiability Problems (SATs). The new ERA approach involves a multiagent system where each agent only senses its local environment and applies some self-organizing rules for governing its movements. The environment, which is a two-dimensional cellular environment, records and updates the local values that are computed and affected according to the movements of individual agents. In solving a SAT with the ERA approach, we first divide variables into several groups, and represent each variable group with an agent whose possible positions correspond to the elements in a Cartesian product of variable domains, and then randomly place each agent onto one of its possible positions. Thereafter, the ERA system will keep on dispatching agents to choose their movements until an exact or approximate solution emerges. The experimental results on some benchmark SAT test-sets have shown that the ERA approach can obtain comparable results as well as stable performances for SAT problems. In particular, it can find approximate solutions for SAT problems in only a few steps. The real value of this approach is that it is a distributed asynchronous approach without any centralized control or evaluation, where the agents can cooperate to solve problems without explicit communication.

25

Richardson, Kyle, and Ashish Sabharwal. "Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11209–19. http://dx.doi.org/10.1609/aaai.v36i10.21371.

Abstract:
Investigating the reasoning abilities of transformer models, and discovering new challenging tasks for them, has been a topic of much interest. Recent studies have found these models to be surprisingly strong at performing deductive reasoning over formal logical theories expressed in natural language. A shortcoming of these studies, however, is that they do not take into account that logical theories, when sampled uniformly at random, do not necessarily lead to hard instances. We propose a new methodology for creating challenging algorithmic reasoning datasets that focus on natural language satisfiability (NLSat) problems. The key idea is to draw insights from empirical sampling of hard propositional SAT problems and from complexity-theoretic studies of language. This methodology allows us to distinguish easy from hard instances, and to systematically increase the complexity of existing reasoning benchmarks such as RuleTaker. We find that current transformers, given sufficient training data, are surprisingly robust at solving the resulting NLSat problems of substantially increased difficulty. They also exhibit some degree of scale-invariance—the ability to generalize to problems of larger size and scope. Our results, however, reveal important limitations too: careful sampling of training data is crucial for building models that generalize to larger problems, and transformer models’ limited scale-invariance suggests they are far from learning robust deductive reasoning algorithms.

26

Kurup, Unmesh, and Nicholas Cassimatis. "Integrating Constraint Satisfaction and Spatial Reasoning". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 5, 2010): 1536–41. http://dx.doi.org/10.1609/aaai.v24i1.7574.

Abstract:
Many problems in AI, including planning, logical reasoning and probabilistic inference, have been shown to reduce to (weighted) constraint satisfaction. While there are a number of approaches for solving such problems, the recent gains in efficiency of the satisfiability approach have made SAT solvers a popular choice. Modern propositional SAT solvers are efficient for a wide variety of problems. However, particularly in the case of spatial reasoning, conversion to propositional SAT can sometimes result in a large number of variables and/or clauses. Moreover, spatial reasoning problems can often be more efficiently solved if the agent is able to exploit the geometric nature of space to make better choices during search and backtracking. The result of these two drawbacks — larger problem sizes and inefficient search — is that even simple spatial constraint problems are often intractable in the SAT approach. In this paper we propose a spatial reasoning system that provides significant performance improvements in constraint satisfaction problems involving spatial predicates. The key to our approach is to integrate a diagrammatic representation with a DPLL-based backtracking algorithm that is specialized for spatial reasoning. The resulting integrated system can be applied to larger and more complex problems than current approaches and can be adopted to improve performance in a variety of problems ranging from planning to probabilistic inference

27

Hao and Liu. "Enhanced Membrane Computing Algorithm for SAT Problems Based on the Splitting Rule". Symmetry 11, no. 11 (November 15, 2019): 1412. http://dx.doi.org/10.3390/sym11111412.

Abstract:
The Boolean propositional satisfiability (SAT) problem is one of the most widely studied NP-complete problems and plays an outstanding role in many domains. Membrane computing is a branch of natural computing which has been proven to solve NP problems in polynomial time with a parallel computing mode. This paper proposes a new algorithm for the SAT problem which combines the traditional membrane computing algorithm for SAT with a classic simplification rule, the splitting rule, which can divide a clause set into two axisymmetric subsets, deal with them respectively and simultaneously, and obtain the solution of the original clause set from the symmetry of their solutions. The new algorithm is shown to be able to reduce the space complexity by distributing clauses with the splitting rule repeatedly, and also to reduce both time and space complexity by executing the one-literal rule and the pure-literal rule as many times as possible.
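
The splitting rule itself is the classical one: pick a variable, simplify the clause set under each of its two truth values, and accept if either branch is satisfiable. A plain sequential sketch (the membrane-computing algorithm of the paper evaluates the two symmetric branches in parallel inside separate membranes):

```python
def assign(clauses, lit):
    """Simplify a clause set under the assumption that `lit` is true.
    Returns None if an empty clause (contradiction) appears."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied, drop it
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None                   # empty clause: this branch fails
        out.append(reduced)
    return out

def split_sat(clauses):
    """Satisfiability via the splitting rule alone (no unit or pure-literal rules)."""
    if not clauses:
        return True
    x = abs(clauses[0][0])                # pick a branching variable
    for branch in (assign(clauses, x), assign(clauses, -x)):
        if branch is not None and split_sat(branch):
            return True
    return False

print(split_sat([[1, 2], [-1, 2], [-2]]))  # False: forces x2 both true and false
```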

28

TEO, JASON, and HUSSEIN A. ABBASS. "A TRUE ANNEALING APPROACH TO THE MARRIAGE IN HONEY-BEES OPTIMIZATION ALGORITHM". International Journal of Computational Intelligence and Applications 03, no. 02 (June 2003): 199–211. http://dx.doi.org/10.1142/s146902680300094x.

Abstract:
Marriage in Honey-Bees Optimization is a new swarm intelligence technique inspired by the marriage process of honey-bees. It has been shown to be very effective in solving the propositional satisfiability problem known as 3-SAT. The objective of this paper is to test a conventional annealing approach as the basis for determining the pool of drones. The modified algorithm is tested using a group of randomly generated hard 3-SAT problems to compare its behavior and efficiency against previous implementations. The overall performance of the MBO algorithm was found to have improved significantly using the proposed annealing function. Furthermore, a dramatic improvement was noted with the committee machine using this true annealing approach.

29

Littman, M. L., J. Goldsmith and M. Mundhenk. "The Computational Complexity of Probabilistic Planning". Journal of Artificial Intelligence Research 9 (August 1, 1998): 1–36. http://dx.doi.org/10.1613/jair.505.

Abstract:
We examine the computational complexity of testing and finding small plans in probabilistic planning domains with both flat and propositional representations. The complexity of plan evaluation and existence varies with the plan type sought; we examine totally ordered plans, acyclic plans, and looping plans, and partially ordered plans under three natural definitions of plan value. We show that problems of interest are complete for a variety of complexity classes: PL, P, NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. In the process of proving that certain planning problems are complete for NP^PP, we introduce a new basic NP^PP-complete problem, E-MAJSAT, which generalizes the standard Boolean satisfiability problem to computations involving probabilistic quantities; our results suggest that the development of good heuristics for E-MAJSAT could be important for the creation of efficient algorithms for a wide variety of problems.
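
E-MAJSAT, as introduced here, asks whether some assignment to a set of choice variables makes the formula true for a majority of assignments to the remaining chance variables. A brute-force decision procedure for tiny instances (illustrative only; a strict majority threshold is assumed, and conventions for the threshold vary):

```python
from itertools import product

def sat(clauses, model):
    return all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses)

def e_majsat(clauses, choice_vars, chance_vars):
    """Is there an assignment to choice_vars such that a strict majority
    of assignments to chance_vars satisfies the formula?"""
    for cbits in product([False, True], repeat=len(choice_vars)):
        fixed = dict(zip(choice_vars, cbits))
        good = 0
        for rbits in product([False, True], repeat=len(chance_vars)):
            model = {**fixed, **dict(zip(chance_vars, rbits))}
            good += sat(clauses, model)
        if 2 * good > 2 ** len(chance_vars):
            return True
    return False

# Choosing x1 = True makes a majority over x2 satisfy (x1) AND (x1 OR x2).
print(e_majsat([[1], [1, 2]], choice_vars=[1], chance_vars=[2]))  # True
```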

30

WU, HUAYUE, and PETER VAN BEEK. "PORTFOLIOS WITH DEADLINES FOR BACKTRACKING SEARCH". International Journal on Artificial Intelligence Tools 17, no. 05 (October 2008): 835–56. http://dx.doi.org/10.1142/s0218213008004187.

Abstract:
Backtracking search is often the method of choice for solving constraint satisfaction and propositional satisfiability problems. Previous studies have shown that portfolios of backtracking algorithms — a selection of one or more algorithms plus a schedule for executing the algorithms — can dramatically improve performance on some instances. In this paper, we consider a setting that often arises in practice where the instances to be solved arise over time, the instances all belong to some class of problem instances, and a limit or deadline is placed on the computational resources that can be consumed in solving any instance. For such a scenario, we present a simple scheme for learning a good portfolio of backtracking algorithms from a small sample of instances. We demonstrate the effectiveness of our approach through an extensive empirical evaluation using two testbeds: real-world instruction scheduling problems and the widely used quasigroup completion problems.

31

Grädel, Erich. "On the Restraining Power of Guards". Journal of Symbolic Logic 64, no. 4 (December 1999): 1719–42. http://dx.doi.org/10.2307/2586808.

Abstract:
Guarded fragments of first-order logic were recently introduced by Andréka, van Benthem and Németi; they consist of relational first-order formulae whose quantifiers are appropriately relativized by atoms. These fragments are interesting because they extend in a natural way many propositional modal logics, because they have useful model-theoretic properties and especially because they are decidable classes that avoid the usual syntactic restrictions (on the arity of relation symbols, the quantifier pattern or the number of variables) of almost all other known decidable fragments of first-order logic. Here, we investigate the computational complexity of these fragments. We prove that the satisfiability problems for the guarded fragment (GF) and the loosely guarded fragment (LGF) of first-order logic are complete for deterministic double exponential time. For the subfragments that have only a bounded number of variables or only relation symbols of bounded arity, satisfiability is EXPTIME-complete. We further establish a tree model property for both the guarded fragment and the loosely guarded fragment, and give a proof of the finite model property of the guarded fragment. It is also shown that some natural, modest extensions of the guarded fragments are undecidable.

32

GIUNCHIGLIA, FAUSTO, and PAVEL SHVAIKO. "Semantic matching". Knowledge Engineering Review 18, no. 3 (September 2003): 265–80. http://dx.doi.org/10.1017/s0269888904000074.

Abstract:
We think of match as an operator that takes two graph-like structures (e.g. database schemas or ontologies) and produces a mapping between elements of the two graphs that correspond semantically to each other. The goal of this paper is to propose a new approach to matching, called semantic matching. As its name indicates, in semantic matching the key intuition is to exploit the model-theoretic information, which is codified in the nodes and the structure of graphs. The contributions of this paper are (i) a rational reconstruction of the major matching problems and their articulation in terms of the more generic problem of matching graphs, (ii) the identification of semantic matching as a new approach for performing generic matching and (iii) a proposal for implementing semantic matching by testing propositional satisfiability.

33

Surynek, Pavel. "Preprocessing in Propositional Satisfiability Using Bounded (2, k)-Consistency on Regions with a Locally Difficult Constraint Setup". International Journal on Artificial Intelligence Tools 23, no. 01 (February 2014): 1350029. http://dx.doi.org/10.1142/s0218213013500292.

Abstract:
A new type of partially global consistency derived from (2, k)-consistency, called bounded (2, k)-consistency (B2C-consistency), is presented in this paper. It is designed for application in propositional satisfiability (SAT) as a building block for a preprocessing tool. Together with the new B2C-consistency, a special mechanism for selecting regions of the input SAT instance with a difficult constraint setup was also proposed. This mechanism is used to select suitable difficult sub-problems whose simplification through consistency can lead to a significant reduction in the effort needed to solve the instance. A new prototype preprocessing tool, preprocessSIGMA, which is based on the proposed techniques, was implemented. As a proof of the new concepts, a competitive experimental evaluation on a set of relatively difficult SAT instances was conducted. It showed that our prototype preprocessor is competitive with respect to the existing preprocessing tools LiVer, NiVer, HyPre, blocked clause elimination (BCE), and Shatter with saucy 3.0.

34

DOVIER, AGOSTINO. "Preface". Theory and Practice of Logic Programming 17, no. 4 (July 2017): 359–64. http://dx.doi.org/10.1017/s1471068417000199.

Abstract:
Magic squares, chess-like problems, cryptarithmetic puzzles, and similar classes of problems have been extensively used to challenge human reasoning capabilities. The Lo Shu magic square can be traced back to 650 B.C., the eight-queens problem was proposed in 1848 by the chess player Max Bazzel, and the TWO × TWO = THREE puzzle appeared in Strand Magazine in 1924. These puzzles are nowadays widely used in constraint programming courses. The first programming language provided with constraint modelling primitives (Sketchpad) was proposed by the Turing award winner Ivan Sutherland in his PhD thesis (1963). Logemann and Loveland, when implementing the Davis–Putnam procedure (Davis and Putnam 1960) for testing the satisfiability of a propositional formula (SAT), devised an algorithm (Davis–Putnam–Logemann–Loveland (DPLL)) that has become the core of all SAT and Answer Set Programming solvers (50 years later). It consists in choosing an unassigned variable, assigning it a value 0 or 1, propagating the chosen value (unit propagation), and proceeding with the alternative value if the original assignment leads to a contradiction (backtracking). Some years later Waltz (1975) introduced the notion of domain filtering (arc-consistency-based constraint propagation). With this idea the same DPLL scheme can be used for verifying the satisfiability of a constraint satisfaction problem, where the assignment is no longer 0/1 and unit propagation is replaced by constraint propagation. For a detailed history of these early achievements, we refer the reader to the works by Loveland et al. (2017), Jaffar and Maher (1994), and Freuder and Mackworth (2006).
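
The DPLL loop sketched in this preface (choose an unassigned variable, propagate unit clauses, backtrack on contradiction) fits in a few lines. A compact illustrative version with unit propagation only, no pure-literal rule or clause learning:

```python
def unit_propagate(clauses):
    """Repeatedly assign unit clauses; return (simplified clauses, assigned literals),
    or (None, assigned) if a conflict (empty clause) is derived."""
    assigned = []
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            return clauses, assigned
        lit = units[0]
        assigned.append(lit)
        new = []
        for c in clauses:
            if lit in c:
                continue                  # clause satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None, assigned     # conflict
            new.append(reduced)
        clauses = new

def dpll(clauses):
    clauses, _ = unit_propagate(clauses)
    if clauses is None:
        return False                      # backtrack
    if not clauses:
        return True                       # all clauses satisfied
    x = abs(clauses[0][0])                # choose an unassigned variable
    return dpll(clauses + [[x]]) or dpll(clauses + [[-x]])

print(dpll([[1, 2], [-1, 2], [-2]]))  # False
```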

35

Gent, I. P., and T. Walsh. "An Empirical Analysis of Search in GSAT". Journal of Artificial Intelligence Research 1 (September 1, 1993): 47–59. http://dx.doi.org/10.1613/jair.7.

Abstract:
We describe an extensive study of search in GSAT, an approximation procedure for propositional satisfiability. GSAT performs greedy hill-climbing on the number of satisfied clauses in a truth assignment. Our experiments provide a more complete picture of GSAT's search than previous accounts. We describe in detail the two phases of search: rapid hill-climbing followed by a long plateau search. We demonstrate that when applied to randomly generated 3SAT problems, there is a very simple scaling with problem size for both the mean number of satisfied clauses and the mean branching rate. Our results allow us to make detailed numerical conjectures about the length of the hill-climbing phase, the average gradient of this phase, and to conjecture that both the average score and average branching rate decay exponentially during plateau search. We end by showing how these results can be used to direct future theoretical analysis. This work provides a case study of how computer experiments can be used to improve understanding of the theoretical properties of algorithms.
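
GSAT's greedy hill-climbing step, flipping the variable whose flip satisfies the most clauses and restarting after a fixed number of flips, can be sketched compactly. A minimal illustrative version (parameter names and defaults are arbitrary, and the sideways-move bookkeeping analysed in the paper is omitted):

```python
import random

def num_satisfied(clauses, model):
    return sum(any(model[abs(l)] == (l > 0) for l in c) for c in clauses)

def gsat(clauses, n_vars, max_tries=10, max_flips=100, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):
        model = {v: rng.choice([False, True]) for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(clauses, model) == len(clauses):
                return model              # all clauses satisfied

            def score(v):                 # clauses satisfied after flipping v
                model[v] = not model[v]
                s = num_satisfied(clauses, model)
                model[v] = not model[v]
                return s

            best = max(range(1, n_vars + 1), key=score)
            model[best] = not model[best]  # greedy hill-climbing flip
    return None                            # incomplete: failure proves nothing

print(gsat([[1, 2], [-1, 2], [-2, 3]], 3))
```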

36

Qiu, Junming, Wenqing Li, Zhanhao Xiao, Quanlong Guan, Liangda Fang, Zhao-Rong Lai and Qian Dong. "Knowledge Compilation Meets Logical Separability". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5851–60. http://dx.doi.org/10.1609/aaai.v36i5.20529.

Abstract:
Knowledge compilation is an alternative solution to address demanding reasoning tasks with high complexity via converting knowledge bases into a suitable target language. Interestingly, the notion of logical separability, proposed by Levesque, offers a general explanation for the tractability of clausal entailment for two remarkable languages: decomposable negation normal form and prime implicates. It is interesting to explore what role logical separability actually plays in problem tractability. In this paper, we apply the notion of logical separability in three reasoning problems within the context of propositional logic: satisfiability check (CO), clausal entailment check (CE) and model counting (CT), contributing three corresponding polytime procedures. We provide three logical separability based properties: CO-logical separability, CE-logical separability and CT-logical separability. We then identify three novel normal forms: CO-LSNNF, CE-LSNNF and CT-LSNNF based on the above properties. Besides, we show that every normal form is the necessary and sufficient condition under which the corresponding procedure is correct. We finally integrate the above four normal forms into the knowledge compilation map.

37

Ishebabi, Harold, Philipp Mahr, Christophe Bobda, Martin Gebser and Torsten Schaub. "Answer Set versus Integer Linear Programming for Automatic Synthesis of Multiprocessor Systems from Real-Time Parallel Programs". International Journal of Reconfigurable Computing 2009 (2009): 1–11. http://dx.doi.org/10.1155/2009/863630.

Abstract:
An automated design approach for multiprocessor systems on FPGAs is presented which customizes architectures for parallel programs by simultaneously solving the problems of task mapping, resource allocation, and scheduling. The latter considers effects of fixed-priority preemptive scheduling in order to guarantee real-time requirements, hence covering a broad spectrum of embedded applications. Being inherently a combinatorial optimization problem, the design space is modeled using linear equations that capture high-level design parameters. A comparison of two methods for solving resulting problem instances is then given. The intent is to study how well recent advances in propositional satisfiability (SAT) and thus Answer Set Programming (ASP) can be exploited to automate the design of flexible multiprocessor systems. Integer Linear Programming (ILP) is taken as a baseline, where architectures for IEEE 802.11g and WCDMA baseband signal processing are synthesized. ASP-based synthesis used a few seconds in the solver, faster by three orders of magnitude compared to ILP-based synthesis, thereby showing a great potential for solving difficult instances of the automated synthesis problem.

38

Bienvenu, Meghyn, Camille Bourgaux and François Goasdoué. "Computing and Explaining Query Answers over Inconsistent DL-Lite Knowledge Bases". Journal of Artificial Intelligence Research 64 (March 10, 2019): 563–644. http://dx.doi.org/10.1613/jair.1.11395.

Abstract:
Several inconsistency-tolerant semantics have been introduced for querying inconsistent description logic knowledge bases. The first contribution of this paper is a practical approach for computing the query answers under three well-known such semantics, namely the AR, IAR and brave semantics, in the lightweight description logic DL-Lite_R. We show that query answering under the intractable AR semantics can be performed efficiently by using IAR and brave semantics as tractable approximations and encoding the AR entailment problem as a propositional satisfiability (SAT) problem. The second issue tackled in this work is explaining why a tuple is a (non-)answer to a query under these semantics. We define explanations for positive and negative answers under the brave, AR and IAR semantics. We then study the computational properties of explanations in DL-Lite_R. For each type of explanation, we analyze the data complexity of recognizing (preferred) explanations and deciding if a given assertion is relevant or necessary. We establish tight connections between intractable explanation problems and variants of SAT, enabling us to generate explanations by exploiting solvers for Boolean satisfaction and optimization problems. Finally, we empirically study the efficiency of our query answering and explanation framework using a benchmark we built upon the well-established LUBM benchmark.

39

Fichte, Johannes, and Stefan Szeider. "Backdoors to Normality for Disjunctive Logic Programs". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 320–27. http://dx.doi.org/10.1609/aaai.v27i1.8624.

Abstract:
Over the last two decades, propositional satisfiability (SAT) has become one of the most successful and widely applied techniques for the solution of NP-complete problems. The aim of this paper is to investigate theoretically how SAT can be utilized for the efficient solution of problems that are harder than NP or co-NP. In particular, we consider the fundamental reasoning problems in propositional disjunctive answer set programming (ASP), BRAVE REASONING and SKEPTICAL REASONING, which ask whether a given atom is contained in at least one or in all answer sets, respectively. Both problems are located at the second level of the Polynomial Hierarchy and thus assumed to be harder than NP or co-NP. One cannot transform these two reasoning problems into SAT in polynomial time, unless the Polynomial Hierarchy collapses. We show that certain structural aspects of disjunctive logic programs can be utilized to break through this complexity barrier, using new techniques from Parameterized Complexity. In particular, we exhibit transformations from BRAVE and SKEPTICAL REASONING to SAT that run in time $O(2^k n^2)$ where k is a structural parameter of the instance and n the input size. In other words, the reduction is fixed-parameter tractable for parameter k. As the parameter k we take the size of a smallest backdoor with respect to the class of normal (i.e., disjunction-free) programs. Such a backdoor is a set of atoms that when deleted makes the program normal. In consequence, the combinatorial explosion, which is expected when transforming a problem from the second level of the Polynomial Hierarchy to the first level, can now be confined to the parameter k, while the running time of the reduction is polynomial in the input size n, where the order of the polynomial is independent of k. We show that such a transformation is not possible if we consider backdoors with respect to tightness instead of normality. We think that our approach is applicable to many other hard combinatorial problems that lie beyond NP or co-NP, and thus significantly enlarge the applicability of SAT.
40

Kerschke, Pascal, Holger H. Hoos, Frank Neumann and Heike Trautmann. "Automated Algorithm Selection: Survey and Perspectives". Evolutionary Computation 27, no. 1 (March 2019): 3–45. http://dx.doi.org/10.1162/evco_a_00242.

Full text source
Abstract:
It has long been observed that for practically any computational problem that has been intensely studied, different instances are best solved using different algorithms. This is particularly pronounced for computationally hard problems, where in most cases, no single algorithm defines the state of the art; instead, there is a set of algorithms with complementary strengths. This performance complementarity can be exploited in various ways, one of which is based on the idea of selecting, from a set of given algorithms, for each problem instance to be solved the one expected to perform best. The task of automatically selecting an algorithm from a given set is known as the per-instance algorithm selection problem and has been intensely studied over the past 15 years, leading to major improvements in the state of the art in solving a growing number of discrete combinatorial problems, including propositional satisfiability and AI planning. Per-instance algorithm selection also shows much promise for boosting performance in solving continuous and mixed discrete/continuous optimisation problems. This survey provides an overview of research in automated algorithm selection, ranging from early and seminal works to recent and promising application areas. Different from earlier work, it covers applications to discrete and continuous problems, and discusses algorithm selection in context with conceptually related approaches, such as algorithm configuration, scheduling, or portfolio selection. Since informative and cheaply computable problem instance features provide the basis for effective per-instance algorithm selection systems, we also provide an overview of such features for discrete and continuous problems. Finally, we provide perspectives on future work in the area and discuss a number of open research challenges.
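As a concrete illustration of per-instance algorithm selection (not any specific system covered by the survey), the sketch below picks an algorithm by k-nearest-neighbour lookup over cheaply computed instance features; the feature vectors and runtime records are assumed inputs.

```python
import math

def select_algorithm(instance_features, training_data, k=3):
    """Per-instance algorithm selection via k-nearest neighbours:
    training_data is a list of (feature_vector, runtimes) pairs, where
    runtimes maps algorithm name -> observed runtime on that instance.
    Return the algorithm with the lowest average runtime among the k
    training instances closest to the new instance in feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    neighbours = sorted(training_data,
                        key=lambda entry: dist(entry[0], instance_features))[:k]
    algorithms = neighbours[0][1].keys()
    return min(algorithms,
               key=lambda alg: sum(runtimes[alg] for _, runtimes in neighbours) / k)

# Example with two candidate solvers and two-dimensional features:
data = [([0.1, 0.9], {"solverA": 2.0, "solverB": 9.0}),
        ([0.8, 0.2], {"solverA": 7.0, "solverB": 1.5}),
        ([0.2, 0.8], {"solverA": 2.5, "solverB": 8.0})]
print(select_algorithm([0.15, 0.85], data, k=2))  # -> "solverA"
```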
41

Fichte, Johannes. "Backdoors to Tractability of Answer-Set Programming". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (29.06.2013): 1662–63. http://dx.doi.org/10.1609/aaai.v27i1.8505.

Full text source
Abstract:
The practical results of answer-set programming indicate that classical complexity theory is insufficient as a theoretical framework to explain why modern answer-set programming solvers work fast on industrial applications. Complexity analysis by means of parameterized complexity theory seems promising, because we think that the reason for the gap between theory and practice is the presence of a "hidden structure" in real-world instances. The application of parameterized complexity theory to answer-set programming would give a crucial understanding of how solver heuristics work. This profound understanding can be used to improve the decision heuristics of modern solvers and yields new efficient algorithms for decision problems in the nonmonotonic setting. My research aims to explain the gap between theoretical upper bounds and the effort needed to solve real-world instances. I will further develop, by means of parameterized complexity, exact algorithms that work efficiently for real-world instances. The approach is based on backdoors, which are small sets of atoms that represent "clever reasoning shortcuts" through the search space. The concept of backdoors is widely used in the areas of propositional satisfiability and constraint satisfaction. I will show how this concept can be adapted to the nonmonotonic setting and how it can be utilized to improve common algorithms.
42

Klimek, Radosław, Katarzyna Grobler-Dębska and Edyta Kucharska. "System for automatic generation of logical formulas". MATEC Web of Conferences 252 (2019): 03005. http://dx.doi.org/10.1051/matecconf/201925203005.

Full text source
Abstract:
The satisfiability problem (SAT) is one of the classical and most important problems of theoretical computer science and has a direct bearing on numerous practical cases. It is one of the most prominent problems in artificial intelligence and has important applications in many fields, such as hardware and software verification, test-case generation, AI planning, scheduling, and data structures that allow efficient implementation of search-space pruning. In recent years, there has been a huge development in SAT solvers, especially CDCL-based (Conflict-Driven Clause Learning) solvers for propositional logic formulas. The goal of this paper is to design and implement a simple but effective system for random generation of long and complex logical formulas with a variety of difficulties encoded inside. The resulting logical formulas, i.e. problem instances, can be used for testing existing SAT solvers. The entire system is intended to be widely available as a web application in a client-server architecture. The proposed system enables generation of syntactically correct logical formulas with a random structure, encoded in a manner understandable to SAT solvers. Logical formulas can be presented in different formats. A number of parameters affect the form of generated instances, their complexity and physical dimensions. A randomness factor can be applied to every generated formula. The developed application is easy to modify and open to further extensions. The final part of the paper describes example tests of SAT solvers on logical formulas produced by the implemented generator.
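The generator itself is not reproduced in the abstract; as a minimal stand-in, the sketch below emits a uniform random k-SAT instance in the DIMACS CNF format that SAT solvers read. The parameter choices are illustrative, not those of the described system.

```python
import random

def random_ksat_dimacs(num_vars, num_clauses, k=3, seed=None):
    """Generate a uniform random k-SAT instance in DIMACS CNF format:
    each clause picks k distinct variables and negates each one with
    probability 1/2."""
    rng = random.Random(seed)
    lines = [f"p cnf {num_vars} {num_clauses}"]
    for _ in range(num_clauses):
        variables = rng.sample(range(1, num_vars + 1), k)
        clause = [v if rng.random() < 0.5 else -v for v in variables]
        lines.append(" ".join(map(str, clause)) + " 0")
    return "\n".join(lines)

if __name__ == "__main__":
    # 3-SAT near the satisfiability threshold (clause/variable ratio ~4.26)
    print(random_ksat_dimacs(num_vars=50, num_clauses=213, seed=42))
```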
43

Chang, Wenjing, Hengkai Zhang and Junwei Luo. "Predicting Propositional Satisfiability Based on Graph Attention Networks". International Journal of Computational Intelligence Systems 15, no. 1 (28.09.2022). http://dx.doi.org/10.1007/s44196-022-00139-9.

Full text source
Abstract:
Boolean satisfiability problems (SAT) have very rich generic and domain-specific structures. How to capture these structural features in the embedding space and feed them to deep learning models is an important factor influencing the use of neural networks to solve SAT problems. Graph neural networks have achieved good results, especially message-passing models, which capture the permutation-invariant structure of formulas well, whether building end-to-end models or improving heuristic algorithms for traditional solvers. We present the first framework for predicting the satisfiability of domain-specific SAT problems using graph attention networks, GAT-SAT. Our model can learn satisfiability features in a weakly supervised setting, i.e., in the absence of problem-specific feature engineering. We test the model by predicting the satisfiability of randomly generated SAT instances SR(N) and random 3-SAT problems. Experiments demonstrate that our model improves the prediction accuracy on random 3-SAT problems by 1–4% and significantly outperforms other graph neural network approaches on random SR(N). Compared to NeuroSAT, our model can almost always achieve the same or even higher accuracy with half the number of iterations. At the end of the paper, we also try to explain the role played by the graph attention mechanism in the model.
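The abstract does not spell out the graph encoding; a common choice in this line of work (an assumption here, in the spirit of NeuroSAT-style models) is a bipartite literal-clause graph, which the sketch below builds as an edge list that a graph attention network could consume.

```python
def cnf_to_literal_clause_graph(clauses, num_vars):
    """Build a bipartite graph: one node per literal (2 * num_vars) and
    one node per clause; an edge connects a literal to every clause that
    contains it.  Returned as an edge list suitable for a GNN library."""
    def lit_index(lit):
        # literals 1..n -> 0..n-1, literals -1..-n -> n..2n-1
        return lit - 1 if lit > 0 else num_vars - lit - 1

    edges = []
    for c_idx, clause in enumerate(clauses):
        clause_node = 2 * num_vars + c_idx
        for lit in clause:
            edges.append((lit_index(lit), clause_node))
    return edges

# Example: (x1 or not x2) and (x2 or x3)
print(cnf_to_literal_clause_graph([[1, -2], [2, 3]], num_vars=3))
```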
44

Dransfield, Michael R., Lengning Liu, Victor W. Marek and Mirosław Truszczyński. "Satisfiability and Computing van der Waerden Numbers". Electronic Journal of Combinatorics 11, no. 1 (16.06.2004). http://dx.doi.org/10.37236/1794.

Full text source
Abstract:
In this paper we bring together the areas of combinatorics and propositional satisfiability. Many combinatorial theorems establish, often constructively, the existence of positive integer functions, without actually providing their closed algebraic form or tight lower and upper bounds. The area of Ramsey theory is especially rich in such results. Using the problem of computing van der Waerden numbers as an example, we show that these problems can be represented by parameterized propositional theories in such a way that decisions concerning their satisfiability determine the numbers (function) in question. We show that by using general-purpose complete and local-search techniques for testing propositional satisfiability, this approach becomes effective — competitive with specialized approaches. By following it, we were able to obtain several new results pertaining to the problem of computing van der Waerden numbers. We also note that due to their properties, especially their structural simplicity and computational hardness, propositional theories that arise in this research can be of use in development, testing and benchmarking of SAT solvers.
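As a minimal, self-contained illustration of the kind of encoding described (not necessarily the exact theories used in the paper): with two colours and progressions of length 3, the clauses below are satisfiable exactly when [1..n] admits a 2-colouring with no monochromatic 3-term arithmetic progression, so the least n for which they become unsatisfiable is the van der Waerden number W(2,3) = 9.

```python
from itertools import product

def vdw_clauses(n, length=3):
    """Variables 1..n; x_i true means element i gets colour 1.  For every
    arithmetic progression of the given length inside [1..n], forbid it
    from being monochromatic in either colour."""
    clauses = []
    for start in range(1, n + 1):
        for step in range(1, n):
            ap = [start + j * step for j in range(length)]
            if ap[-1] > n:
                break
            clauses.append(ap)                # not all colour 0
            clauses.append([-v for v in ap])  # not all colour 1
    return clauses

def satisfiable(clauses, n):
    """Tiny brute-force check (fine for the small n used here)."""
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
               for bits in product([False, True], repeat=n))

# W(2,3) = 9: a valid colouring exists for n = 8 but not for n = 9.
print(satisfiable(vdw_clauses(8), 8), satisfiable(vdw_clauses(9), 9))  # True False
```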
45

Mahmood, Yasir, and Arne Meier. "Parameterised complexity of model checking and satisfiability in propositional dependence logic". Annals of Mathematics and Artificial Intelligence, 27.02.2021. http://dx.doi.org/10.1007/s10472-021-09730-w.

Full text source
Abstract:
Dependence Logic was introduced by Jouko Väänänen in 2007. We study a propositional variant of this logic (PDL) and investigate a variety of parameterisations with respect to central decision problems. The model checking problem (MC) of PDL is NP-complete (Ebbing and Lohmann, SOFSEM 2012). The subject of this research is to identify a list of parameterisations (formula-size, formula-depth, treewidth, team-size, number of variables) under which MC becomes fixed-parameter tractable. Furthermore, we show that the number of disjunctions or the arity of dependence atoms (dep-arity) as a parameter both yield a paraNP-completeness result. Then, we consider the satisfiability problem (SAT), which classically is known to be NP-complete as well (Lohmann and Vollmer, Studia Logica 2013). There we present a different picture: under team-size or dep-arity, SAT is paraNP-complete, whereas under all other mentioned parameters the problem is FPT. Finally, we introduce a variant of the satisfiability problem, asking for a team of a given size, and show for this problem an almost complete picture.
46

Bearden, Sean R. B., Yan Ru Pei and Massimiliano Di Ventra. "Efficient solution of Boolean satisfiability problems with digital memcomputing". Scientific Reports 10, no. 1 (12.11.2020). http://dx.doi.org/10.1038/s41598-020-76666-2.

Full text source
Abstract:
Boolean satisfiability is a propositional logic problem of interest in multiple fields, e.g., physics, mathematics, and computer science. Beyond a field of research, instances of the SAT problem, as it is known, require efficient solution methods in a variety of applications. It is the decision problem of determining whether a Boolean formula has a satisfying assignment, believed to require exponentially growing time for an algorithm to solve for the worst-case instances. Yet, the efficient solution of many classes of Boolean formulae eludes even the most successful algorithms, not only for the worst-case scenarios, but also for typical-case instances. Here, we introduce a memory-assisted physical system (a digital memcomputing machine) that, when its non-linear ordinary differential equations are integrated numerically, shows evidence for polynomially-bounded scalability while solving “hard” planted-solution instances of SAT, known to require exponential time to solve in the typical case for both complete and incomplete algorithms. Furthermore, we analytically demonstrate that the physical system can efficiently solve the SAT problem in continuous time, without the need to introduce chaos or an exponentially growing energy. The efficiency of the simulations is related to the collective dynamical properties of the original physical system that persist in the numerical integration to robustly guide the solution search even in the presence of numerical errors. We anticipate our results to broaden research directions in physics-inspired computing paradigms ranging from theory to application, from simulation to hardware implementation.
47

Fröhlich, Andreas, Armin Biere, Christoph Wintersteiger and Youssef Hamadi. "Stochastic Local Search for Satisfiability Modulo Theories". Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (16.02.2015). http://dx.doi.org/10.1609/aaai.v29i1.9372.

Full text source
Abstract:
Satisfiability Modulo Theories (SMT) is essential for many practical applications, e.g., in hardware and software verification, and increasingly also in other scientific areas like computational biology. A large number of applications in these areas benefit from bit-precise reasoning over finite-domain variables. Current approaches in this area translate a formula over bit-vectors to an equisatisfiable propositional formula, which is then given to a SAT solver. In this paper, we present a novel stochastic local search (SLS) algorithm to solve SMT problems, especially those in the theory of bit-vectors, directly on the theory level. We explain how several successful techniques used in modern SLS solvers for SAT can be lifted to the SMT level. Experimental results show that our approach can compete with state-of-the-art bit-vector solvers on many practical instances and, sometimes, outperform existing solvers. This offers interesting possibilities in combining our approach with existing techniques, and, moreover, new insights into the importance of exploiting problem structure in SLS solvers for SAT. Our approach is modular and, therefore, extensible to support other theories, potentially allowing SLS to become part of the more general SMT framework.
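The authors' algorithm works directly on bit-vector formulas; as a purely propositional analogue of the SLS techniques being lifted (an illustration, not their method), here is a minimal WalkSAT-style loop.

```python
import random

def walksat(clauses, num_vars, max_flips=100_000, noise=0.5, seed=0):
    """Minimal WalkSAT: start from a random assignment; while some clause
    is unsatisfied, pick one such clause and flip either a random variable
    from it (with probability `noise`) or the variable whose flip leaves
    the fewest clauses unsatisfied."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, num_vars + 1)}

    def sat(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                        # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < noise:
            var = abs(rng.choice(clause))        # random walk move
        else:
            def cost(v):                         # greedy move: unsatisfied
                assign[v] = not assign[v]        # clauses after flipping v ...
                bad = sum(not sat(c) for c in clauses)
                assign[v] = not assign[v]        # ... then undo the flip
                return bad
            var = min({abs(lit) for lit in clause}, key=cost)
        assign[var] = not assign[var]
    return None                                  # give up (incomplete method)

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(walksat([[1, -2], [2, 3], [-1, -3]], num_vars=3))
```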
48

Bienvenu, Meghyn, Camille Bourgaux and François Goasdoué. "Explaining Inconsistency-Tolerant Query Answering over Description Logic Knowledge Bases". Proceedings of the AAAI Conference on Artificial Intelligence 30, no. 1 (21.02.2016). http://dx.doi.org/10.1609/aaai.v30i1.10092.

Full text source
Abstract:
Several inconsistency-tolerant semantics have been introduced for querying inconsistent description logic knowledge bases. This paper addresses the problem of explaining why a tuple is a (non-)answer to a query under such semantics. We define explanations for positive and negative answers under the brave, AR and IAR semantics. We then study the computational properties of explanations in the lightweight description logic DL-Lite_R. For each type of explanation, we analyze the data complexity of recognizing (preferred) explanations and deciding if a given assertion is relevant or necessary. We establish tight connections between intractable explanation problems and variants of propositional satisfiability (SAT), enabling us to generate explanations by exploiting solvers for Boolean satisfaction and optimization problems. Finally, we empirically study the efficiency of our explanation framework using the well-established LUBM benchmark.
49

Lange, Martin. "Temporal Logics Beyond Regularity". BRICS Report Series 14, no. 13 (13.01.2007). http://dx.doi.org/10.7146/brics.v14i13.22178.

Full text source
Abstract:
Non-regular program correctness properties play an important role in the specification of unbounded buffers, recursive procedures, etc. This thesis surveys results about the relative expressive power and complexity of temporal logics which are capable of defining non-regular properties. In particular, it features Propositional Dynamic Logic of Context-Free Programs, Fixpoint Logic with Chop, the Modal Iteration Calculus, and Higher-Order Fixpoint Logic. Regarding expressive power we consider two classes of structures: arbitrary transition systems as well as finite words as a subclass of the former. The latter is meant to give an intuitive account of the logics' expressive powers by relating them to known language classes defined in terms of grammars or Turing Machines. Regarding the computational complexity of temporal logics beyond regularity we focus on their model checking problems since their satisfiability problems are all highly undecidable. Their model checking complexities range between polynomial time and non-elementary.
50

Semenov, Alexander, Oleg Zaikin, Ilya Otpuschennikov, Stepan Kochemazov and Alexey Ignatiev. "On Cryptographic Attacks Using Backdoors for SAT". Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (26.04.2018). http://dx.doi.org/10.1609/aaai.v32i1.12205.

Full text source
Abstract:
Propositional satisfiability (SAT) is at the nucleus of state-of-the-art approaches to a variety of computationally hard problems, one of which is cryptanalysis. Moreover, a number of practical applications of SAT can only be tackled efficiently by identifying and exploiting a subset of a formula's variables called a backdoor set (or simply a backdoor). This paper proposes a new class of backdoor sets for SAT used in the context of cryptographic attacks, namely guess-and-determine attacks. The idea is to identify the best set of backdoor variables subject to a statistically estimated hardness of the guess-and-determine attack using a SAT solver. Experimental results on weakened variants of renowned encryption algorithms exhibit the advantage of the proposed approach compared to the state of the art in terms of the estimated hardness of the resulting guess-and-determine attacks.
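The statistical estimate at the core of this approach can be sketched as follows: sample random assignments to a candidate backdoor set, time a SAT solver on each restricted formula, and scale the average by the 2^|B| possible guesses. The sketch assumes the python-sat (pysat) package and simplifies the paper's actual estimation procedure.

```python
import random
import time

from pysat.solvers import Glucose3  # assumes the python-sat (pysat) package

def estimate_attack_runtime(cnf, backdoor, samples=100, seed=0):
    """Monte-Carlo estimate of a guess-and-determine attack's total cost:
    average SAT-solving time under random guesses of the backdoor
    variables, scaled by the 2**len(backdoor) possible guesses."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        guess = [v if rng.random() < 0.5 else -v for v in backdoor]
        solver = Glucose3(bootstrap_with=cnf)  # cnf: list of clauses (lists of ints)
        start = time.perf_counter()
        solver.solve(assumptions=guess)        # solve with the guessed values fixed
        total += time.perf_counter() - start
        solver.delete()
    return (total / samples) * 2 ** len(backdoor)
```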