Journal articles on the topic 'Parallel satisfiability'


Consult the top 50 journal articles for your research on the topic 'Parallel satisfiability.'


1

Martins, Ruben, Vasco Manquinho, and Inês Lynce. "Parallel search for maximum satisfiability." AI Communications 25, no. 2 (2012): 75–95. http://dx.doi.org/10.3233/aic-2012-0517.

2

Martins, Ruben. "Parallel search for maximum satisfiability." Constraints 20, no. 4 (September 10, 2015): 469–70. http://dx.doi.org/10.1007/s10601-015-9207-9.

3

Haglin, David J. "Approximating Maximum 2-CNF Satisfiability." Parallel Processing Letters 02, no. 02n03 (September 1992): 181–87. http://dx.doi.org/10.1142/s0129626492000301.

Abstract:
A parallel approximation algorithm for the MAXIMUM 2-CNF SATISFIABILITY problem is presented. This algorithm runs in O(log^2(n + |F|)) parallel time on a CREW PRAM machine using O(n + |F|) processors, where n is the number of variables and |F| is the number of clauses. Performance guarantees are considered for three slightly differing definitions of this problem.
4

Feldman, Yulik, Nachum Dershowitz, and Ziyad Hanna. "Parallel Multithreaded Satisfiability Solver: Design and Implementation." Electronic Notes in Theoretical Computer Science 128, no. 3 (April 2005): 75–90. http://dx.doi.org/10.1016/j.entcs.2004.10.020.

5

Sadowski, Adrian. "A parallel pipelined naive method for testing satisfiability." PRZEGLĄD ELEKTROTECHNICZNY 1, no. 11 (November 5, 2015): 156–59. http://dx.doi.org/10.15199/48.2015.11.38.

6

Blochinger, Wolfgang, Carsten Sinz, and Wolfgang Küchlin. "Parallel propositional satisfiability checking with distributed dynamic learning." Parallel Computing 29, no. 7 (July 2003): 969–94. http://dx.doi.org/10.1016/s0167-8191(03)00068-1.

7

Wen-Zhang, Liu, Zhang Jing-Fu, and Long Gui-Lu. "A Parallel Quantum Algorithm for the Satisfiability Problem." Communications in Theoretical Physics 49, no. 3 (March 2008): 629–30. http://dx.doi.org/10.1088/0253-6102/49/3/22.

8

Cheng, Dan. "The New Democratic Revolution in Music during the Play Experience of the Ideological and Political Education Function." Applied Mechanics and Materials 556-562 (May 2014): 6602–5. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.6602.

Abstract:
After a detailed investigation of the maximum-term space of a clause set, the concept of a partial maximum-term space, into which the maximum-term space of the clause set is decomposed, is introduced. Building on the extension rule, this paper defines the satisfiability and unsatisfiability of a partial maximum-term space and gives an algorithm, PSER (Partial Semi-Extension Rule), for deciding the satisfiability of such a partial space. The TP problem is then decomposed into several mutually independent sub-problems, which can be solved with the given parallel computing method PPSER (Parallel Partial Semi-Extension Rule).
9

Czutro, Alexander, Ilia Polian, Matthew Lewis, Piet Engelke, Sudhakar M. Reddy, and Bernd Becker. "Thread-Parallel Integrated Test Pattern Generator Utilizing Satisfiability Analysis." International Journal of Parallel Programming 38, no. 3-4 (January 1, 2010): 185–202. http://dx.doi.org/10.1007/s10766-009-0124-7.

10

Head, Tom. "Photocomputing: Explorations with Transparency and Opacity." Parallel Processing Letters 17, no. 04 (December 2007): 339–47. http://dx.doi.org/10.1142/s0129626407003071.

Abstract:
We continue to search for methods of parallel computing using light. An algorithm for solving instances of the Boolean satisfiability problem is given and illustrated using a photocopying machine with plastic transparencies as medium. The algorithm solves satisfiability problems in linear time but requires the assumption that information can be stored with a density that is exponential in the number of variables in the problem instance. Consideration is given to situations in which this density limitation is not quite absolute.
11

Liao, Xiaojuan, Hui Zhang, Miyuki Koshimura, Rong Huang, Wenxin Yu, and Fagen Li. "Modeling and Solving Scheduling in Overloaded Situations with Weighted Partial MaxSAT." Mathematical Problems in Engineering 2021 (July 16, 2021): 1–17. http://dx.doi.org/10.1155/2021/9615463.

Abstract:
In real-time systems, where tasks have timing requirements, once the workload exceeds the system’s capacity, missed due dates may cause system overload. In this situation, finding an optimal schedule that minimizes the cumulative values of late tasks is critical in both theory and practice. Recently, formalizing scheduling problems as a class of generalized problems, such as Satisfiability Modulo Theory (SMT) and Maximum Satisfiability (MaxSAT), has been attracting considerable attention. Encouraged by the high efficiency of these satisfiability-based methods, this paper formulates the single-machine scheduling problem of minimizing the total weight of late tasks as a Weighted Partial Maximum (WPM) Satisfiability problem. In the formulation, scheduling features are encoded as rigidly enforced hard clauses and the scheduling objective is treated as a set of weighted soft ones. Then an off-the-shelf WPM solver is exploited to maximize the total weight of the satisfied soft clauses, provided that all the hard clauses are satisfied. Experimental results demonstrate that, compared with the existing satisfiability-based methods, the proposed method significantly improves the efficiency of identifying the optimal schedule. Moreover, we make minor changes to apply the WPM formulation to parallel-machine scheduling, showing that the proposed method is sufficiently flexible and well scalable.
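The hard/soft clause pattern described in this abstract can be sketched with a tiny Weighted Partial MaxSAT toy (hypothetical variables and weights, not the paper's actual encoding), using the PySAT library's RC2 solver:

    # Minimal Weighted Partial MaxSAT sketch (pip install python-sat).
    # Hypothetical toy: Boolean variable i means "task i finishes on time".
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    wcnf = WCNF()
    # Hard clause (must hold): tasks 1 and 2 compete for the same slot,
    # so they cannot both be on time.
    wcnf.append([-1, -2])
    # Soft clauses (may be violated at a cost): the weight is the value
    # lost when the corresponding task is late.
    wcnf.append([1], weight=4)
    wcnf.append([2], weight=6)
    wcnf.append([3], weight=1)

    with RC2(wcnf) as solver:
        model = solver.compute()   # e.g. [-1, 2, 3]: task 1 is late
        print("assignment:", model)
        print("weight of violated soft clauses:", solver.cost)

The solver satisfies every hard clause while minimizing the total weight of violated soft clauses, which mirrors maximizing the total weight of on-time tasks.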
12

Rintanen, Jussi, Keijo Heljanko, and Ilkka Niemelä. "Planning as satisfiability: parallel plans and algorithms for plan search." Artificial Intelligence 170, no. 12-13 (September 2006): 1031–80. http://dx.doi.org/10.1016/j.artint.2006.08.002.

13

Liao, Xiaojuan, Hui Zhang, Miyuki Koshimura, Rong Huang, and Fagen Li. "Solving Restricted Preemptive Scheduling on Parallel Machines with SAT and PMS." JUCS - Journal of Universal Computer Science 29, no. 8 (August 28, 2023): 911–37. http://dx.doi.org/10.3897/jucs.97743.

Abstract:
Restricted preemption plays a crucial role in reducing total completion time while controlling preemption overhead. A typical version of restricted preemptive models is k-restricted preemptive scheduling, where preemption is only allowed after a task has been continuously processed for at least k units of time. Though solving this problem of minimizing the makespan on parallel machines is NP-hard in general, it is of vital importance to obtain the optimal solution for small-sized problems, as well as for evaluation of heuristics. This paper proposes optimal strategies to the aforementioned problem. Motivated by the dramatic speed-up of Boolean Satisfiability (SAT) solvers, we make the first step towards a study of applying a SAT solver to the k-restricted scheduling problem. We set out to encode the scheduling problem into propositional Boolean logic and determine the optimal makespan by repeatedly calling an off-the-shelf SAT solver. Moreover, we move one step further by encoding the problem into Partial Maximum Satisfiability (PMS), which is an optimized version of SAT, so that the explicit successive calls of the solver can be eliminated. The optimal makespan of the problem and the performance of the proposed methods are studied experimentally. Furthermore, an existing heuristic algorithm is evaluated by the optimization methods.
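The outer loop of repeatedly calling an off-the-shelf SAT solver to pin down the optimal makespan can be illustrated with a deliberately simplified toy (unit-time, non-preemptive jobs on two identical machines; the paper's k-restricted preemptive encoding is considerably more involved):

    # Toy makespan optimization by repeated SAT calls (pip install python-sat).
    # Hypothetical simplification: 5 unit-time jobs, 2 identical machines.
    # Variable j means "job j runs on machine A"; its negation means machine B.
    from itertools import combinations
    from pysat.solvers import Glucose3

    N_JOBS = 5

    def feasible(makespan):
        """Can each machine receive at most `makespan` unit-time jobs?"""
        clauses = []
        for subset in combinations(range(1, N_JOBS + 1), makespan + 1):
            clauses.append([-j for j in subset])   # at most `makespan` jobs on A
            clauses.append(list(subset))           # at most `makespan` jobs on B
        with Glucose3(bootstrap_with=clauses) as sat:
            return sat.solve()

    makespan = N_JOBS                 # trivially feasible upper bound
    while makespan > 1 and feasible(makespan - 1):
        makespan -= 1
    print("optimal makespan:", makespan)   # 3 for five unit jobs on two machines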
14

Bagchi, Ansuman, Brigitte Servatius, and Weigeng Shi. "2-satisfiability and diagnosing faulty processors in massively parallel computing systems." Discrete Applied Mathematics 60, no. 1-3 (June 1995): 25–37. http://dx.doi.org/10.1016/0166-218x(94)00041-b.

15

Lopez Ramirez, Cristina, and Guillermo De Ita Luna. "Modelling the 3-coloring of Serial-Parallel Graphs via Incremental Satisfiability." IEEE Latin America Transactions 17, no. 04 (April 2019): 607–14. http://dx.doi.org/10.1109/tla.2019.8891885.

16

Sohn, Andrew. "Parallel Satisfiability Test with Synchronous Simulated Annealing on Distributed-Memory Multiprocessor." Journal of Parallel and Distributed Computing 36, no. 2 (August 1996): 195–204. http://dx.doi.org/10.1006/jpdc.1996.0100.

17

Bofill, Miquel, Joan Espasa, and Mateu Villaret. "A Semantic Notion of Interference for Planning Modulo Theories." Proceedings of the International Conference on Automated Planning and Scheduling 26 (March 30, 2016): 56–64. http://dx.doi.org/10.1609/icaps.v26i1.13734.

Abstract:
The aim of being able to reason about quantities, time, space and much more has been the main objective of the many efforts on the integration of propositional planning with extensions to handle different theories. Planning Modulo Theories (PMT) is an approximation inspired by Satisfiability Modulo Theories (SMT) that generalizes the integration of arbitrary theories with propositional planning. Parallel plans are crucial to reduce plan lengths and hence the time needed to reach a feasible plan in many approaches. Parallelization of actions relies on the notion of (non-)interference, which is usually determined syntactically at compile time. In this paper we present a general semantic notion of interference between actions in PMT. Along with its generality, this notion can be efficiently checked at compile time by means of satisfiability checks.
18

Martins, Ruben, Vasco Manquinho, and Inês Lynce. "Deterministic Parallel MaxSAT Solving." International Journal on Artificial Intelligence Tools 24, no. 03 (June 2015): 1550005. http://dx.doi.org/10.1142/s0218213015500050.

Abstract:
Multicore processors are becoming the dominant platform in modern days. As a result, parallel Maximum Satisfiability (MaxSAT) solvers have been developed to exploit this new architecture. However, parallel MaxSAT solvers suffer from non-deterministic behavior, i.e. several runs of the same solver can lead to different solutions. This is a clear downside for applications that require solving the same problem instance more than once. This paper presents the first deterministic parallel MaxSAT solver that ensures reproducibility of results. Experimental results show that the performance of the new deterministic solver is comparable to the corresponding non-deterministic version. Another advantage of using a deterministic solver is the fact that one can easily observe the gains coming from different techniques, since the non-determinism is removed from the solver. For example, sharing learned clauses in parallel MaxSAT is expected to help to further prune the search space and boost the performance of a parallel solver. Yet, so far it has not been made clear which learned clauses should be shared among the different threads. By using the deterministic solver, we present a comparison showing that sharing learned clauses improves the overall performance of parallel MaxSAT solvers.
19

Rankooh, Masood Feyzbakhsh, and Gholamreza Ghassem-Sani. "ITSAT: An Efficient SAT-Based Temporal Planner." Journal of Artificial Intelligence Research 53 (July 30, 2015): 541–632. http://dx.doi.org/10.1613/jair.4697.

Abstract:
Planning as satisfiability is known as an efficient approach to deal with many types of planning problems. However, this approach has not been competitive with the state-space based methods in temporal planning. This paper describes ITSAT as an efficient SAT-based (satisfiability based) temporal planner capable of temporally expressive planning. The novelty of ITSAT lies in the way it handles temporal constraints of given problems without getting involved in the difficulties of introducing continuous variables into the corresponding satisfiability problems. We also show how, as in SAT-based classical planning, carefully devised preprocessing and encoding schemata can considerably improve the efficiency of SAT-based temporal planning. We present two preprocessing methods for mutex relation extraction and action compression. We also show that the separation of causal and temporal reasoning enables us to employ compact encodings that are based on the concept of parallel execution semantics. Although such encodings have been shown to be quite effective in classical planning, ITSAT is the first temporal planner utilizing this type of encoding. Our empirical results show that not only does ITSAT outperform the state-of-the-art temporally expressive planners, it is also competitive with the fast temporal planners that cannot handle required concurrency.
20

Guo, Wensheng, Guowu Yang, Wei Wu, Lei He, and Mingyu Sun. "A Parallel Attractor Finding Algorithm Based on Boolean Satisfiability for Genetic Regulatory Networks." PLoS ONE 9, no. 4 (April 9, 2014): e94258. http://dx.doi.org/10.1371/journal.pone.0094258.

21

Liu, Shengcai, Ke Tang, and Xin Yao. "Automatic Construction of Parallel Portfolios via Explicit Instance Grouping." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1560–67. http://dx.doi.org/10.1609/aaai.v33i01.33011560.

Abstract:
Exploiting parallelism is becoming more and more important in designing efficient solvers for computationally hard problems. However, manually building parallel solvers typically requires considerable domain knowledge and plenty of human effort. As an alternative, automatic construction of parallel portfolios (ACPP) aims at automatically building effective parallel portfolios based on a given problem instance set and a given rich configuration space. One promising way to solve the ACPP problem is to explicitly group the instances into different subsets and promote a component solver to handle each of them. This paper investigates solving ACPP from this perspective, and especially studies how to obtain a good instance grouping. The experimental results on two widely studied problem domains, the boolean satisfiability problems (SAT) and the traveling salesman problems (TSP), showed that the parallel portfolios constructed by the proposed method could achieve consistently superior performances to the ones constructed by the state-of-the-art ACPP methods, and could even rival sophisticated hand-designed parallel solvers.
22

Zhang, Kairong, and Masahiro Nagamatu. "Using Attenuation Coefficient Generating Function in Parallel Execution of Neural Networks for Solving SAT." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 2 (March 20, 2005): 121–26. http://dx.doi.org/10.20965/jaciii.2005.p0121.

Abstract:
The satisfiability problem (SAT) is one of the most basic and important problems in computer science. We have proposed a recurrent analog neural network called Lagrange Programming neural network with Polarized High-order connections (LPPH) for the SAT, together with a method of parallel execution of LPPH. Experimental results demonstrate a high speedup ratio. Furthermore, this method is very easy to realize in hardware. LPPH dynamics has an important parameter, the attenuation coefficient, known to strongly affect LPPH execution speed, but determining a good value of the attenuation coefficient is difficult. Experimental results show that the parallel execution reduces this difficulty. In this paper we propose a method to assign different values of the attenuation coefficient to the LPPHs used in the parallel execution. The values are generated uniformly randomly or randomly using a probability density function.
23

Arbelaez, Alejandro, Charlotte Truchet, and Philippe Codognet. "Using sequential runtime distributions for the parallel speedup prediction of SAT local search." Theory and Practice of Logic Programming 13, no. 4-5 (July 2013): 625–39. http://dx.doi.org/10.1017/s1471068413000392.

Abstract:
This paper presents a detailed analysis of the scalability and parallelization of local search algorithms for the Satisfiability problem. We propose a framework to estimate the parallel performance of a given algorithm by analyzing the runtime behavior of its sequential version. Indeed, by approximating the runtime distribution of the sequential process with statistical methods, the runtime behavior of the parallel process can be predicted by a model based on order statistics. We apply this approach to study the parallel performance of two SAT local search solvers, namely Sparrow and CCASAT, and compare the predicted performances to the results of an actual experimentation on parallel hardware up to 384 cores. We show that the model is accurate and predicts performance close to the empirical data. Moreover, as we study different types of instances (random and crafted), we observe that the local search solvers exhibit different behaviors and that their runtime distributions can be approximated by two types of distributions: exponential (shifted and non-shifted) and lognormal.
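The order-statistics idea underlying this prediction can be reproduced in a few lines: the runtime of k independent parallel runs is the minimum of k draws from the sequential runtime distribution, so for a shifted exponential the expected parallel runtime is roughly shift + scale/k (the distribution and numbers below are assumptions for illustration, not data from the paper):

    # Order-statistics view of parallel local search (illustrative numbers only).
    import numpy as np

    rng = np.random.default_rng(0)
    shift, scale = 2.0, 50.0            # assumed shifted-exponential sequential runtimes
    n_trials, cores = 100_000, 64

    sequential = shift + rng.exponential(scale, size=(n_trials, cores))
    parallel = sequential.min(axis=1)   # first of `cores` independent runs to finish

    print("mean sequential runtime :", sequential[:, 0].mean())  # about shift + scale
    print("mean parallel runtime   :", parallel.mean())          # about shift + scale/cores
    print("order-statistics model  :", shift + scale / cores)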
24

Lück, Martin. "Quirky Quantifiers: Optimal Models and Complexity of Computation Tree Logic." International Journal of Foundations of Computer Science 29, no. 01 (January 2018): 17–61. http://dx.doi.org/10.1142/s0129054118500028.

Abstract:
The satisfiability problem of the branching time logic CTL is studied in terms of computational complexity. Tight upper and lower bounds are provided for each temporal operator fragment. In parallel, the minimal model size is studied with a suitable notion of minimality. Thirdly, flat CTL is investigated, i.e., formulas with very low temporal operator nesting depth. A sharp dichotomy is shown in terms of complexity and minimal models: Temporal depth one has low expressive power, while temporal depth two is equivalent to full CTL.
25

Alhazov, Artiom. "Minimal Parallelism and Number of Membrane Polarizations." Triangle, no. 6 (June 28, 2018): 1. http://dx.doi.org/10.17345/triangle6.1-17.

Abstract:
It is known that the satisfiability problem (SAT) can be efficiently solved by a uniform family of P systems with active membranes that have two polarizations working in a maximally parallel way. We study P systems with active membranes without non-elementary membrane division, working in a minimally parallel way. The main question we address is what number of polarizations is sufficient for an efficient computation depending on the types of rules used. In particular, we show that it is enough to have four polarizations, sequential evolution rules changing polarizations, polarizationless non-elementary membrane division rules and polarizationless rules of sending an object out. The same problem is solved with the standard evolution rules, rules of sending an object out and polarizationless non-elementary membrane division rules, with six polarizations. It is an open question whether these numbers are optimal.
26

Katsirelos, George, Ashish Sabharwal, Horst Samulowitz, and Laurent Simon. "Resolution and Parallelizability: Barriers to the Efficient Parallelization of SAT Solvers." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 481–88. http://dx.doi.org/10.1609/aaai.v27i1.8660.

Abstract:
Recent attempts to create versions of Satisfiability (SAT) solvers that exploit parallel hardware and information sharing have met with limited success. In fact, the most successful parallel solvers in recent competitions were based on portfolio approaches with little to no exchange of information between processors. This experience contradicts the apparent parallelizability of exploring a combinatorial search space. We present evidence that this discrepancy can be explained by studying SAT solvers through a proof complexity lens, as resolution refutation engines. Starting with the observation that a recently studied measure of resolution proofs, namely depth, provides a (weak) upper bound to the best possible speedup achievable by such solvers, we empirically show the existence of bottlenecks to parallelizability that resolution proofs typically generated by SAT solvers exhibit. Further, we propose a new measure of parallelizability based on the best-case makespan of an offline resource constrained scheduling problem. This measure explicitly accounts for a bounded number of parallel processors and appears to empirically correlate with parallel speedups observed in practice. Our findings suggest that efficient parallelization of SAT solvers is not simply a matter of designing the right clause sharing heuristics; even in the best case, it can be --- and indeed is --- hindered by the structure of the resolution proofs current SAT solvers typically produce.
27

Song, Bosheng, and Yuan Kong. "Solution to PSPACE-Complete Problem Using P Systems with Active Membranes with Time-Freeness." Mathematical Problems in Engineering 2019 (June 27, 2019): 1–8. http://dx.doi.org/10.1155/2019/5793234.

Abstract:
P systems with active membranes are powerful parallel natural computing models, which were inspired by cell structure and behavior. Inspired by the parallel processing of biological information and with the idealistic assumption that each rule is completed in exactly one time unit, P systems with active membranes are able to solve computationally hard problems in a feasible time. However, an important biological fact in living cells is that the execution time of a biochemical reaction cannot be accurately divided equally and completed in one time unit. In this work, we consider time as an important factor for the computation in P systems with active membranes and investigate the computational efficiency of such P systems. Specifically, we present a time-free semiuniform solution to the quantified Boolean satisfiability problem (QSAT problem, for short) in the framework of P systems with active membranes, where the correctness of the solution does not depend on the execution time of the rules used.
28

Rybakov, V. "Algorithm for Decision Procedure in Temporal Logic Treating Uncertainty, Plausibility, Knowledge and Interacting Agents." International Journal of Intelligent Information Technologies 6, no. 1 (January 2010): 31–45. http://dx.doi.org/10.4018/jiit.2010100903.

Abstract:
Our article studies the logic UIALTL, which is a combination of the linear temporal logic LTL, a multi-agent logic with an operation for passing knowledge via agents’ interaction, and a suggested logic based on an operation of logical uncertainty. The logical operations of UIALTL also include (together with operations from LTL) operations of strong and weak until, agents’ knowledge operations, an operation of knowledge via interaction, an operation of logical uncertainty, and operations for environmental and global knowledge. UIALTL is defined as the set of all formulas valid at all Kripke-Hintikka-like models NC. Any frame NC represents a possible unbounded (in time) computation with multi-processors (parallel computational units) and agents’ channels for connections between computational units. The main aim of our article is to determine possible ways for computing logical laws of UIALTL. The principal problems we deal with are the decidability and satisfiability problems for UIALTL. We find an algorithm which recognizes theorems of UIALTL (so we show that UIALTL is decidable) and solves the satisfiability problem for UIALTL. As an instrument we use reduction of formulas to rules in the reduced normal form and a technique to contract models NC to special non-UIALTL-models, and, then, verification of the validity of these rules in models of bounded size. The article uses standard results from non-classical logics based on Kripke-Hintikka models.
29

Domshlak, C., J. Hoffmann, and A. Sabharwal. "Friends or Foes? On Planning as Satisfiability and Abstract CNF Encodings." Journal of Artificial Intelligence Research 36 (December 21, 2009): 415–69. http://dx.doi.org/10.1613/jair.2817.

Abstract:
Planning as satisfiability, as implemented in, for instance, the SATPLAN tool, is a highly competitive method for finding parallel step-optimal plans. A bottleneck in this approach is to *prove the absence* of plans of a certain length. Specifically, if the optimal plan has N steps, then it is typically very costly to prove that there is no plan of length N-1. We pursue the idea of leading this proof within solution length preserving abstractions (over-approximations) of the original planning task. This is promising because the abstraction may have a much smaller state space; related methods are highly successful in model checking. In particular, we design a novel abstraction technique based on which one can, in several widely used planning benchmarks, construct abstractions that have exponentially smaller state spaces while preserving the length of an optimal plan. Surprisingly, the idea turns out to appear quite hopeless in the context of planning as satisfiability. Evaluating our idea empirically, we run experiments on almost all benchmarks of the international planning competitions up to IPC 2004, and find that even hand-made abstractions do not tend to improve the performance of SATPLAN. Exploring these findings from a theoretical point of view, we identify an interesting phenomenon that may cause this behavior. We compare various planning-graph based CNF encodings F of the original planning task with the CNF encodings F_abs of the abstracted planning task. We prove that, in many cases, the shortest resolution refutation for F_abs can never be shorter than that for F. This suggests a fundamental weakness of the approach, and motivates further investigation of the interplay between declarative transition-systems, over-approximating abstractions, and SAT encodings.
30

Ngoko, Yanik, Christophe Cérin, and Denis Trystram. "Solving Sat in a Distributed Cloud: A Portfolio Approach." International Journal of Applied Mathematics and Computer Science 29, no. 2 (June 1, 2019): 261–74. http://dx.doi.org/10.2478/amcs-2019-0019.

Abstract:
We introduce a new parallel and distributed algorithm for the solution of the satisfiability problem. It is based on an algorithm portfolio and is intended to be used for servicing requests in a distributed cloud. The core of our contribution is the modeling of the optimal resource sharing schedule in parallel executions and the proposition of heuristics for its approximation. For this purpose, we reformulate a computational problem introduced in a prior work. The main assumption is that it is possible to learn optimal resource sharing from traces collected on past executions on a representative set of instances. We show that the learning can be formalized as a set coverage problem. Then we propose to solve it by approximation and dynamic programming algorithms based on classical greedy algorithms for the maximum coverage problem. Finally, we conduct an experimental evaluation for comparing the performance of the various algorithms proposed. The results show that some algorithms become more competitive if we intend to determine the trade-off between their quality and the runtime required for their computation.
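The maximum-coverage subproblem mentioned in the abstract is classically approximated by a greedy algorithm, sketched below on invented data (the paper's actual formulation also models resource shares and runtimes):

    # Greedy maximum coverage: pick k solver configurations that together solve
    # as many training instances as possible.  Candidate sets are invented.
    def greedy_max_coverage(candidates, k):
        covered, chosen = set(), []
        for _ in range(k):
            best = max(candidates, key=lambda name: len(candidates[name] - covered))
            if not candidates[best] - covered:
                break                    # no candidate adds new instances
            chosen.append(best)
            covered |= candidates[best]
        return chosen, covered

    portfolio = {
        "cdcl-aggressive-restarts": {1, 2, 3, 4},
        "cdcl-long-runs":           {3, 4, 5},
        "local-search":             {5, 6, 7},
        "hybrid":                   {2, 6},
    }
    print(greedy_max_coverage(portfolio, k=2))
    # (['cdcl-aggressive-restarts', 'local-search'], {1, 2, 3, 4, 5, 6, 7})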
31

Богачкова, И. А., О. С. Заикин, С. Е. Кочемазов, И. В. Отпущенников, А. А. Семенов, and О. О. Хамисов. "Problems of search for collisions of cryptographic hash functions of the MD family as variants of Boolean satisfiability problem." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 1 (April 2, 2015): 61–77. http://dx.doi.org/10.26089/nummet.v16r107.

Abstract:
An implementation of the differential attacks on the cryptographic hash functions MD4 (Message Digest 4) and MD5 (Message Digest 5) by reducing the problems of search for collisions of these hash functions to the Boolean satisfiability problem (SAT) is considered. The novelty of the results obtained consists in more compact (compared to already known) SAT encodings for the algorithms considered and in the use of modern parallel and distributed SAT solvers in applications to the formulated SAT problems. Searching for single-block collisions of MD4 in this approach turned out to be very simple. In addition, several dozen double-block collisions of MD5 are found. In the process of the corresponding numerical experiments, a certain class of messages that produce the collisions is found: in particular, a set of pairs of colliding messages whose first 10 bytes are zero is constructed.
32

Ishebabi, Harold, Philipp Mahr, Christophe Bobda, Martin Gebser, and Torsten Schaub. "Answer Set versus Integer Linear Programming for Automatic Synthesis of Multiprocessor Systems from Real-Time Parallel Programs." International Journal of Reconfigurable Computing 2009 (2009): 1–11. http://dx.doi.org/10.1155/2009/863630.

Abstract:
An automated design approach for multiprocessor systems on FPGAs is presented which customizes architectures for parallel programs by simultaneously solving the problems of task mapping, resource allocation, and scheduling. The latter considers effects of fixed-priority preemptive scheduling in order to guarantee real-time requirements, hence covering a broad spectrum of embedded applications. Being inherently a combinatorial optimization problem, the design space is modeled using linear equations that capture high-level design parameters. A comparison of two methods for solving resulting problem instances is then given. The intent is to study how well recent advances in propositional satisfiability (SAT) and thus Answer Set Programming (ASP) can be exploited to automate the design of flexible multiprocessor systems. Integer Linear Programming (ILP) is taken as a baseline, where architectures for IEEE 802.11g and WCDMA baseband signal processing are synthesized. ASP-based synthesis used a few seconds in the solver, faster by three orders of magnitude compared to ILP-based synthesis, thereby showing a great potential for solving difficult instances of the automated synthesis problem.
33

Drechsler, Rolf, Görschwin Fey, and Sebastian Kinder. "An integrated approach for combining BDDs and SAT provers." Facta universitatis - series: Electronics and Energetics 20, no. 3 (2007): 415–36. http://dx.doi.org/10.2298/fuee0703415d.

Abstract:
Many formal verification tools today are based on Boolean proof techniques. The two most powerful approaches in this context are Binary Decision Diagrams (BDDs) and methods based on Boolean Satisfiability (SAT). Recent studies have shown that BDDs and SAT are orthogonal, i.e. there exist problems where BDDs work well, while SAT solvers fail and vice versa. Beside this, the techniques are very different in general. For example, SAT solvers try to find a single solution while BDDs represent all solutions in parallel. In this paper the first integrated approach is presented that combines BDDs and SAT within a single data structure. This hybrid approach combines the advantages of the two techniques, i.e. multiple solutions can be computed while the memory requirement remains small. Experimental results demonstrate the quality of the approach in comparison to BDDs and SAT solvers.
34

Hao and Liu. "Enhanced Membrane Computing Algorithm for SAT Problems Based on the Splitting Rule." Symmetry 11, no. 11 (November 15, 2019): 1412. http://dx.doi.org/10.3390/sym11111412.

Abstract:
The Boolean propositional satisfiability (SAT) problem is one of the most widely studied NP-complete problems and plays an outstanding role in many domains. Membrane computing is a branch of natural computing which has been proven able to solve NP problems in polynomial time with a parallel computing mode. This paper proposes a new algorithm for the SAT problem which combines the traditional membrane computing algorithm for SAT with a classic simplification rule, the splitting rule, which can divide a clause set into two axisymmetric subsets, deal with them respectively and simultaneously, and obtain the solution of the original clause set from the symmetry of their solutions. The new algorithm is shown to be able to reduce the space complexity by distributing clauses with the splitting rule repeatedly, and also to reduce both time and space complexity by executing the one-literal rule and the pure-literal rule as many times as possible.
35

INTERLANDI, MATTEO, and LETIZIA TANCA. "A datalog-based computational model for coordination-free, data-parallel systems." Theory and Practice of Logic Programming 18, no. 5-6 (September 2018): 874–927. http://dx.doi.org/10.1017/s147106841800042x.

Abstract:
Cloud computing refers to maximizing efficiency by sharing computational and storage resources, while data-parallel systems exploit the resources available in the cloud to perform parallel transformations over large amounts of data. In the same line, considerable emphasis has been recently given to two apparently disjoint research topics: data-parallel, and eventually consistent, distributed systems. Declarative networking has been recently proposed to ease the task of programming in the cloud, by allowing the programmer to express only the desired result and leave the implementation details to the responsibility of the run-time system. In this context, we deem it appropriate to propose a study on a logic-programming-based computational model for eventually consistent, data-parallel systems, the keystone of which is provided by the recent finding that the class of programs that can be computed in an eventually consistent, coordination-free way is that of monotonic programs. This principle is called Consistency and Logical Monotonicity (CALM) and has been proven by Ameloot et al. for distributed, asynchronous settings. We advocate that CALM should be employed as a basic theoretical tool also for data-parallel systems, wherein computation usually proceeds synchronously in rounds and where communication is assumed to be reliable. We deem this problem relevant and interesting, especially for what concerns parallel dataflow optimizations. Nowadays, we are in fact witnessing an increasing concern about understanding which properties distinguish synchronous from asynchronous parallel processing, and when the latter can replace the former. It is general opinion that coordination-freedom can be seen as a major discriminant factor. In this work, we make the case that the current form of CALM does not hold in general for data-parallel systems, and show how, using novel techniques, the satisfiability of the CALM principle can still be obtained although just for the subclass of programs called connected monotonic queries. We complete the study with considerations on the relationships between our model and the one employed by Ameloot et al., showing that our techniques subsume the latter when the synchronization constraints imposed on the system are loosened.
36

Behnke, Gregor, and Susanne Biundo. "X and more Parallelism. Integrating LTL-Next into SAT-based Planning with Trajectory Constraints while Allowing for even more Parallelism." Inteligencia Artificial 21, no. 62 (September 11, 2018): 75. http://dx.doi.org/10.4114/intartif.vol21iss62pp75-90.

Abstract:
Linear temporal logic (LTL) provides expressive means to specify temporally extended goals as well as preferences. Recent research has focussed on compilation techniques, i.e., methods to alter the domain ensuring that every solution adheres to the temporally extended goals. This requires either new actions or a construction that is exponential in the size of the formula. A translation into Boolean satisfiability (SAT), on the other hand, requires neither. So far only one such encoding exists, which is based on the parallel ∃-step encoding for classical planning. We show a connection between it and recently developed compilation techniques for LTL, which may be exploited in the future. The major drawback of the encoding is that it is limited to LTL without the X operator. We show how to integrate X and describe two new encodings, which allow for more parallelism than the original encoding. An empirical evaluation shows that the new encodings outperform the current state-of-the-art encoding.
37

Algazy, Kunbolat, Kairat Sakan, and Nursulu Kapalova. "Evaluation of the strength and performance of a new hashing algorithm based on a block cipher." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 3 (June 1, 2023): 3124. http://dx.doi.org/10.11591/ijece.v13i3.pp3124-3130.

Abstract:
The article evaluates the reliability of the new HBC-256 hashing algorithm. To study its cryptographic properties, the algorithm was implemented in software using the Python and C programming languages. Also, for the algebraic analysis of the HBC-256 algorithm, a system of Boolean equations was built for one round using the Transalg tool. The program code that implements the hashing algorithm was converted into a software program for generating equations. As a result, one round of the compression function was described in conjunctive normal form (CNF) using 82,533 equations and 16,609 variables. To search for a collision, the satisfiability (SAT) problem solver Lingeling was used, including a version with the possibility of parallel computing. It is shown that each new round doubles the number of equations and variables, and the time to find the solution will grow exponentially. Therefore, it is not possible to find solutions for the full HBC-256 hash function.
38

Shi, Kai, Huiqun Yu, Jianmei Guo, Guisheng Fan, Liqiong Chen, and Xingguang Yang. "A Parallel Framework of Combining Satisfiability Modulo Theory with Indicator-Based Evolutionary Algorithm for Configuring Large and Real Software Product Lines." International Journal of Software Engineering and Knowledge Engineering 29, no. 04 (April 2019): 489–513. http://dx.doi.org/10.1142/s0218194019500219.

Abstract:
Multi-objective evolutionary algorithms (MOEA) have been widely applied to software product lines (SPLs) for addressing the configuration optimization problems. For example, the state-of-the-art SMTIBEA algorithm extends the constraint expressiveness and supports richer constraints to better address these problems. However, it just works better than the competitor for four out of five SPLs in five objectives, and the convergence speed is not significantly increased for the largest Linux SPL from 5 to 30 min. To further improve the optimization efficiency, we propose a parallel framework, SMTPORT, which combines four corresponding SMTIBEA variants and performs these variants by utilizing parallelization techniques within the limited time budget. For case studies in the LVAT repository, we conduct a series of experiments on seven real-world and highly-constrained SPLs. Empirical results demonstrate that our approach significantly outperforms the state-of-the-art for all the seven SPLs in terms of a quality metric (Hypervolume) and a diversity indicator (Pareto Front Size).
39

Minaeva, Anna, and Zdeněk Hanzálek. "Survey on Periodic Scheduling for Time-triggered Hard Real-time Systems." ACM Computing Surveys 54, no. 1 (April 2021): 1–32. http://dx.doi.org/10.1145/3431232.

Abstract:
This survey covers the basic principles and related works addressing the time-triggered scheduling of periodic tasks with deadlines. The wide range of applications and the increasing complexity of modern real-time systems result in the continually growing interest in this topic. However, the articles in this field appear without systematic notation. To address it, we extend the three-field Graham notation to cover periodic scheduling. Moreover, we formally define three example periodic scheduling problems (PSPs) and provide straightforward implementations of these examples in the Satisfiability Modulo Theories formalism with source codes. Then, we present a summary of the complexity results containing existing polynomially solvable PSPs. We also provide an overview of simple state-of-the-art methods and tricks to solve the PSPs efficiently in terms of time. Next, we survey the existing works on PSP according to the resource environment: scheduling on a single resource, on parallel identical resources, and on dedicated resources. In the survey, we indicate which works propose solution methods for more general PSPs. Finally, we present related problems that are not periodic by nature to provide inspiration for the PSP solution.
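The SMT formulations the survey refers to can be sketched for a tiny time-triggered instance: two hypothetical periodic tasks sharing one exclusive resource, with offsets found by the Z3 solver (an illustration under simplifying assumptions, not one of the survey's reference implementations):

    # Minimal periodic scheduling sketch with Z3 (pip install z3-solver).
    from z3 import Int, Solver, Or, sat

    tasks = [(4, 1), (8, 2)]        # hypothetical (period, execution time); hyperperiod 8
    offsets = [Int(f"o{i}") for i in range(len(tasks))]

    s = Solver()
    jobs = []
    for (period, wcet), off in zip(tasks, offsets):
        s.add(0 <= off, off + wcet <= period)      # each job completes within its period
        for k in range(8 // period):               # enumerate all jobs in the hyperperiod
            jobs.append((off + k * period, wcet))

    for i in range(len(jobs)):                     # the resource is exclusive:
        for j in range(i + 1, len(jobs)):          # any two jobs must not overlap
            (si, ci), (sj, cj) = jobs[i], jobs[j]
            s.add(Or(si + ci <= sj, sj + cj <= si))

    if s.check() == sat:
        print({str(o): s.model()[o] for o in offsets})   # e.g. {'o0': 1, 'o1': 2}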
40

HOOS, HOLGER, ROLAND KAMINSKI, MARIUS LINDAUER, and TORSTEN SCHAUB. "aspeed: Solver scheduling via answer set programming." Theory and Practice of Logic Programming 15, no. 1 (February 17, 2014): 117–42. http://dx.doi.org/10.1017/s1471068414000015.

Abstract:
Although Boolean Constraint Technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances, and is known to depend strongly on algorithm parameters. This problem was addressed by means of a simple, yet effective approach using handmade, uniform, and unordered schedules of multiple solvers in ppfolio, which showed very impressive performance in the 2011 Satisfiability Testing (SAT) Competition. Inspired by this, we take advantage of the modeling and solving capacities of Answer Set Programming (ASP) to automatically determine more refined, that is, nonuniform and ordered solver schedules from the existing benchmarking data. We begin by formulating the determination of such schedules as multi-criteria optimization problems and provide corresponding ASP encodings. The resulting encodings are easily customizable for different settings, and the computation of optimum schedules can mostly be done in the blink of an eye, even when dealing with large runtime data sets stemming from many solvers on hundreds to thousands of instances. Also, the fact that our approach can be customized easily enabled us to swiftly adapt it to generate parallel schedules for multi-processor machines.
41

Muniyandi, Ravie Chandren, and Ali Maroosi. "A Representation of Membrane Computing with a Clustering Algorithm on the Graphical Processing Unit." Processes 8, no. 9 (September 22, 2020): 1199. http://dx.doi.org/10.3390/pr8091199.

Abstract:
Long-timescale simulations of biological processes such as photosynthesis, or attempts to solve NP-hard problems such as traveling salesman, knapsack, Hamiltonian path, and satisfiability using membrane systems without appropriate parallelization, can take hours or days. Graphics processing units (GPUs) deliver a massively parallel mechanism for general-purpose computations. Previous studies mapped one membrane to one thread block on the GPU. This is disadvantageous given that when the quantity of objects for each membrane is small, the quantity of active threads will also be small, thereby decreasing performance. Since each membrane is designated to one thread block, communication between thread blocks is needed to execute the communication between membranes. Communication between thread blocks is a time-consuming process. Previous approaches have also not addressed the issue of GPU occupancy. This study presents a classification algorithm to manage dependent objects and membranes based on the communication rate associated with the defined weighted network and assigns them to sub-matrices. Thus, dependent objects and membranes are allocated to the same threads and thread blocks, thereby decreasing communication between threads and thread blocks and allowing GPUs to maintain the highest occupancy possible. The experimental results indicate that for 48 objects per membrane, the algorithm facilitates a 93-fold increase in processing speed compared to a 1.6-fold increase with previous algorithms.
42

Tan, Daniel Bochen, Dolev Bluvstein, Mikhail D. Lukin, and Jason Cong. "Compiling Quantum Circuits for Dynamically Field-Programmable Neutral Atoms Array Processors." Quantum 8 (March 14, 2024): 1281. http://dx.doi.org/10.22331/q-2024-03-14-1281.

Abstract:
Dynamically field-programmable qubit arrays (DPQA) have recently emerged as a promising platform for quantum information processing. In DPQA, atomic qubits are selectively loaded into arrays of optical traps that can be reconfigured during the computation itself. Leveraging qubit transport and parallel, entangling quantum operations, different pairs of qubits, even those initially far away, can be entangled at different stages of the quantum program execution. Such reconfigurability and non-local connectivity present new challenges for compilation, especially in the layout synthesis step which places and routes the qubits and schedules the gates. In this paper, we consider a DPQA architecture that contains multiple arrays and supports 2D array movements, representing cutting-edge experimental platforms. Within this architecture, we discretize the state space and formulate layout synthesis as a satisfiability modulo theories problem, which can be solved by existing solvers optimally in terms of circuit depth. For a set of benchmark circuits generated by random graphs with complex connectivities, our compiler OLSQ-DPQA reduces the number of two-qubit entangling gates on small problem instances by 1.7x compared to optimal compilation results on a fixed planar architecture. To further improve scalability and practicality of the method, we introduce a greedy heuristic inspired by the iterative peeling approach in classical integrated circuit routing. Using a hybrid approach that combines the greedy and optimal methods, we demonstrate that our DPQA-based compiled circuits feature reduced scaling overhead compared to a fixed grid architecture, resulting in 5.1x fewer two-qubit gates for 90-qubit quantum circuits. These methods enable programmable, complex quantum circuits with neutral atom quantum computers, as well as informing both future compilers and future hardware choices.
43

Gütl, Christian. "Editorial." JUCS - Journal of Universal Computer Science 29, no. 8 (August 28, 2023): 836–37. http://dx.doi.org/10.3897/jucs.109658.

Abstract:
Dear Readers, It gives me great pleasure to announce the eighth regular issue of 2023. In this issue, five papers by 24 authors from nine countries cover various topical aspects of computer science. In an ongoing effort to further strengthen our journal, I would like to expand the editorial board: If you are a tenured associate professor or above with a strong publication record, you are welcome to apply to join our editorial board. We are also interested in high-quality proposals for special issues on new topics and trends. As always, I would like to thank all the authors for their sound research and the editorial board for their extremely valuable review effort and suggestions for improvements. These contributions, together with the generous support of the consortium members, sustain the quality of our journal. As we want to secure the financial support also for the years to come, we are looking for institutions and libraries to financially support our diamond open access journal as consortium members, who will then benefit from the research community, international visibility, and the opportunity to manage special issues and focused topics within the journal. Please think about the possibility of such financial participation of your institution; we would be very grateful for any kind of support. In this regular issue, I am very pleased to introduce the following 5 accepted articles: In a collaborative research effort between Jordan and Qatar, Ahmad Abusukhon, Ala Al-Fuqaha and Belal Hawashin present their technique for detecting underground water pipeline leakage based on an Internet of Things approach. In another collaboration between researchers from Croatia and Bosnia and Herzegovina, Ani Grubišić, Slavomir Stankov, Branko Žitko, Ines Šarić-Grgić, Angelina Gašpar, Emil Brajković, and Daniel Vasić describe and evaluate the performance of a semiautomatic authoring tool for knowledge extraction in the AC&NL Tutor and discuss strengths and weaknesses. Also in a collaboration between researchers from the United States and United Arab Emirates, Longhao Li, Taieb Znati, and Rami Melhem propose an energy-aware fault-tolerance model for silent error detection and mitigation in heterogeneous extreme-scale computing environments, referred to as diffReplication, which is associated with one replica that executes at the same rate as the main process, and one diffReplica that is executed at a fraction of the execution rate of the main process. In a collaboration between researchers from China and Japan, Xiaojuan Liao, Hui Zhang, Miyuki Koshimura, Rong Huang, and Fagen Li propose in their paper an optimized strategy for solving restricted preemptive scheduling on parallel machines applying Partial Maximum Satisfiability (PMS), an optimized version of a Boolean Satisfiability (SAT) solver. Finally, in a collaboration between researchers from South Korea and the USA, Jaeyoung Yang, Sooin Kim, Sangwoo Lee, Won-gyum Kim, Donghoon Kim, and Doosung Hwang aim in their paper to establish a binary classification method for distinguishing copyrighted and noncopyrighted images by introducing a deep hashing model for an image authentication system, which uses deep learning-based perceptual hashing. Enjoy Reading! Cordially, Christian Gütl, Managing Editor
44

Pantůčková, Kristýna, and Roman Barták. "Compilation-Based Approaches to Parallel Planning: An Empirical Comparison." International FLAIRS Conference Proceedings 34, no. 1 (April 18, 2021). http://dx.doi.org/10.32473/flairs.v34i1.128537.

Abstract:
Automated planning deals with finding a sequence of actions, a plan, to reach a goal. One of the possible approaches to automated planning is the compilation of a planning problem to a Boolean satisfiability problem or to a constraint satisfaction problem, which takes direct advantage of the advancements of satisfiability and constraint satisfaction solvers. This paper provides a comparison of three encodings proposed for the compilation of planning problems: Transition constraints for parallel planning (TCPP), the Relaxed Relaxed ∃-Step encoding, and the Reinforced Encoding. We implemented the encodings in the programming language Picat 2.8, suggested certain modifications, and compared the performance of the encodings on benchmarks from international planning competitions.
45

Bybee, Connor, Denis Kleyko, Dmitri E. Nikonov, Amir Khosrowshahi, Bruno A. Olshausen, and Friedrich T. Sommer. "Efficient optimization with higher-order ising machines." Nature Communications 14, no. 1 (September 27, 2023). http://dx.doi.org/10.1038/s41467-023-41214-9.

Abstract:
A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions although important classes of optimization problems, such as satisfiability problems, map more seamlessly to Ising networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve satisfiability problems more resource-efficiently in terms of the number of spin variables and their connections when compared to traditional second-order Ising machines. Further, our results show on a benchmark dataset of Boolean k-satisfiability problems that higher-order Ising machines implemented with coupled oscillators rapidly find solutions that are better than those found by second-order Ising machines, thus improving the current state-of-the-art for Ising machines.
46

Heule, Marijn. "Schur Number Five." Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (April 26, 2018). http://dx.doi.org/10.1609/aaai.v32i1.12209.

Abstract:
We present the solution of a century-old problem known as Schur Number Five: What is the largest (natural) number n such that there exists a five-coloring of the positive numbers up to n without a monochromatic solution of the equation a + b = c? We obtained the solution, n = 160, by encoding the problem into propositional logic and applying massively parallel satisfiability solving techniques on the resulting formula. We also constructed and validated a proof of the solution to increase trust in the correctness of the multi-CPU-year computations. The proof is two petabytes in size and was certified using a formally verified proof checker, demonstrating that any result by satisfiability solvers---no matter how large---can now be validated using highly trustworthy systems.
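The propositional encoding underlying such a computation can be sketched as follows (our illustrative generator, not Heule's actual tooling): variable v(i, c) states that number i receives color c, every number must receive a color, and no a, b, a+b may all share one:

    # Generate clauses asserting that 1..n can be 5-colored with no
    # monochromatic solution of a + b = c (simplified illustration only;
    # the real computation added symmetry breaking and heavy engineering).
    def schur_cnf(n, colors):
        var = lambda i, c: (i - 1) * colors + c + 1     # DIMACS-style variable ids
        clauses = []
        for i in range(1, n + 1):
            clauses.append([var(i, c) for c in range(colors)])   # i gets some color
        for a in range(1, n + 1):
            for b in range(a, n + 1):
                if a + b <= n:
                    for c in range(colors):
                        # a, b and a+b must not all carry color c
                        clauses.append([-var(a, c), -var(b, c), -var(a + b, c)])
        return clauses

    print(len(schur_cnf(160, 5)), "clauses for the n = 160, five-color instance")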
47

Bofill, Miquel, Joan Espasa, and Mateu Villaret. "Relaxing non-interference requirements in parallel plans." Logic Journal of the IGPL, August 1, 2019. http://dx.doi.org/10.1093/jigpal/jzz026.

Abstract:
The aim of being able to reason about quantities, time or space has been the main objective of the many efforts on the integration of propositional planning with extensions to handle different theories. Planning modulo theories (PMTs) are an approximation inspired by satisfiability modulo theories (SMTs) that generalize the integration of arbitrary theories with propositional planning. Parallel plans are crucial to reduce plan lengths and hence the time needed to reach a feasible plan in many approaches. Parallelization of actions relies on the notion of (non-)interference, which is usually determined syntactically at compile time. In this paper we define a semantic notion of interference between actions in PMT. Apart from being strictly stronger than any syntactic notion of interference, we show how semantic interference can be easily and efficiently checked by calling an off-the-shelf SMT solver at compile time, constituting a technique orthogonal to the solving method.
48

Yin, Zhixiang, Jing Yang, Qiang Zhang, Zhen Tang, Guoqiang Wang, and Zhongtuan Zheng. "DNA Computing Model for Satisfiability Problem Based on Hybridization Chain Reaction." International Journal of Pattern Recognition and Artificial Intelligence, October 12, 2020, 2159010. http://dx.doi.org/10.1142/s0218001421590102.

Abstract:
The satisfiability problem is a famous nondeterministic polynomial-time complete (NP-complete) problem, which has always been a hotspot in artificial intelligence. In this paper, by combining the advantages of DNA origami with hybridization chain reaction, a computing model was proposed to solve the satisfiability problem. For each clause in the given formula, a DNA origami device was devised. The device corresponding to the clause was capable of searching for assignments that satisfied the clause. When all devices completed the search in parallel, the intersection of the satisfying assignments found must satisfy all the clauses. Therefore, it was decided whether the given formula is satisfiable. The simulation results demonstrated that the proposed computing model was feasible. Our work showed the capability of DNA origami in architecting automatic computing devices. The paper proposed a novel method for designing functional nanoscale devices based on DNA origami.
49

Grimaldi, Andrea, Luis Sánchez-Tejerina, Navid Anjum Aadit, Stefano Chiappini, Mario Carpentieri, Kerem Camsari, and Giovanni Finocchio. "Spintronics-compatible Approach to Solving Maximum-Satisfiability Problems with Probabilistic Computing, Invertible Logic, and Parallel Tempering." Physical Review Applied 17, no. 2 (February 18, 2022). http://dx.doi.org/10.1103/physrevapplied.17.024052.

50

Bannach, Max, Malte Skambath, and Till Tantau. "On the Parallel Parameterized Complexity of MaxSAT Variants." Journal of Artificial Intelligence Research 78 (November 19, 2023). http://dx.doi.org/10.1613/jair.1.14748.

Abstract:
In the maximum satisfiability problem (max-sat) we are given a propositional formula in conjunctive normal form and have to find an assignment that satisfies as many clauses as possible. We study the parallel parameterized complexity of various versions of max-sat and provide the first constant-time algorithms parameterized either by the solution size or by the allowed excess relative to some guarantee. For the dual parameterized version where the parameter is the number of clauses we are allowed to leave unsatisfied, we present the first parallel algorithm for max-2sat (known as almost-2sat). The difficulty in solving almost-2sat in parallel comes from the fact that the iterative compression method, originally developed to prove that the problem is fixed-parameter tractable at all, is inherently sequential. We observe that a graph flow whose value is a parameter can be computed in parallel and develop a parallel algorithm for the vertex cover problem parameterized above the size of a given matching. Finally, we study the parallel complexity of max-sat parameterized by the vertex cover number, the treedepth, the feedback vertex set number, and the treewidth of the input’s incidence graph. While max-sat is fixed-parameter tractable for all of these parameters, we show that they allow different degrees of possible parallelization. For all four we develop dedicated parallel algorithms that are constructive, meaning that they output an optimal assignment – in contrast to results that can be obtained by parallel meta-theorems, which often only solve the decision version.