Doctoral dissertations on the topic "SAT solver"

Consult the 50 best doctoral dissertations on the topic "SAT solver".

Next to each entry in the bibliography there is an "Add to bibliography" button. Use it, and a bibliographic reference to the selected work is generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and assemble the corresponding bibliographies.

1

Izrael, Petr. "SAT Solver akcelerovaný pomocí GPU". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236336.

Abstract:
This thesis is concerned with the design and implementation of a complete SAT solver accelerated on the GPU. The architecture of modern graphics cards is described, as well as the CUDA platform and a list of common algorithms used for solving the Boolean satisfiability problem (the SAT problem). The presented solution is based on the 3-SAT DC algorithm, which belongs to the family of well-known DPLL-based algorithms. This work describes problems encountered during the design and implementation. The resulting application was then analyzed and optimized. The presented solver cannot compete with state-of-the-art solvers, but proves that it can be up to 21x faster than an equivalent sequential version. Unfortunately, the current implementation can only handle formulas of a limited size. Suggestions for further improvements are given in the final sections.
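The DPLL family of algorithms mentioned in this abstract can be illustrated with a minimal sequential sketch (a generic textbook DPLL in Python, not the thesis's GPU implementation; clauses use DIMACS-style signed integers):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: clauses are lists of non-zero ints (positive = variable,
    negative = its negation). Returns a satisfying assignment as {var: bool},
    or None if the formula is unsatisfiable."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                      # unit propagation to a fixpoint
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                var = abs(lit)
                if var in assignment:
                    if assignment[var] == (lit > 0):
                        satisfied = True
                        break
                else:
                    lits.append(lit)
            if satisfied:
                continue
            if not lits:                # all literals false: conflict
                return None
            if len(lits) == 1:          # unit clause forces an assignment
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified
    if not clauses:                     # every clause satisfied
        return assignment
    var = abs(clauses[0][0])            # branch on an unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

A GPU version, as in the thesis, would explore such branches in parallel rather than by sequential recursion.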
2

Kokotov, Daniel L. "PSolver : a distributed SAT solver framework". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86807.

Abstract:
Thesis (M.Eng. and S.B.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 129-130).
3

Pilz, Enrico. "Boolsche Gleichungssysteme, SAT Solver und Stromchiffren". [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:289-vts-65265.

4

Le, Piane Fabio. "Il teorema di Cook-Levin e i SAT-solver". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9026/.

Abstract:
This thesis deals with Boolean (propositional) satisfiability, also known as SAT: the problem of determining whether a Boolean formula is satisfiable. Satisfiable means that the variables can be assigned so that the formula evaluates to true; conversely, the formula is called unsatisfiable if no such assignment exists, i.e. if it expresses an identically false function. To this end, some preliminary tools are introduced that allow the question to be examined in depth, starting from the basic definition of a Turing machine, then covering complexity classes and reductions and the notion of NP-completeness, and finally proving that SAT is an NP-complete problem. A general definition of a SAT solver is then given, and two of the main algorithms used for this purpose are discussed.
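The notion of satisfiability in this abstract can be made concrete with a small illustrative sketch (not from the thesis): a brute-force check over all 2^n assignments, whose exponential cost is precisely what makes the NP-completeness result significant:

```python
from itertools import product

def is_satisfiable(clauses, num_vars):
    """Brute-force SAT check: try all 2**num_vars assignments over
    DIMACS-style clause lists (positive int = variable, negative = its
    negation). Exponential, so only viable for tiny formulas."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```

Practical SAT solvers avoid this enumeration with pruning techniques such as DPLL and CDCL, but the definition of satisfiability they decide is exactly the one checked here.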
5

Zhao, Yuting. "Answer set programming : SAT based solver and phase transition /". View abstract or full-text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20ZHAOY.

6

Pintjuk, Daniil. "Boosting SAT-solver Performance on FACT Instances with Automatic Parameter Tuning". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166552.

Abstract:
Previous work by Asketorp [2014] has shown that integer factorization with the best SAT solvers is orders of magnitude slower than with the general number field sieve (GNFS). However, only default configurations were considered for the SAT solvers in those tests, so this report explores what difference good configurations would have made. Our method used automatic parameter tuning with the iterated local search (ILS) algorithm. ILS was used to tune the SAT solver MiniSat, and the tuned configuration was compared with the default by timing MiniSat runs with the respective configurations. The results show that the performance gains are far from big enough to suggest that the SAT solvers tested by Asketorp [2014] would have come much closer to the performance of GNFS with tuned configurations. However, the gains achieved in this report are large enough to advocate automated parameter tuning of SAT solvers for specific sets of instances.
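The tuning approach described in this abstract can be sketched in miniature (a toy ILS-style loop in Python; `fake_runtime` and its parameter names are hypothetical stand-ins for timing MiniSat runs, not the thesis's actual setup):

```python
import random

def iterated_local_search(objective, initial, steps=200, seed=0):
    """Toy ILS-style tuner over a dict of numeric parameters: perturb one
    parameter at a time and keep the change only if the (to-be-minimised)
    objective improves."""
    rng = random.Random(seed)
    best = dict(initial)
    best_cost = objective(best)
    for _ in range(steps):
        candidate = dict(best)
        key = rng.choice(sorted(candidate))       # pick a parameter
        candidate[key] += rng.uniform(-0.1, 0.1)  # local perturbation
        cost = objective(candidate)
        if cost < best_cost:                      # accept improvements only
            best, best_cost = candidate, cost
    return best, best_cost

# Hypothetical stand-in for "run the solver with these parameters and time
# it"; the real objective in the thesis is MiniSat's runtime on instances.
def fake_runtime(params):
    return (params["restart"] - 0.7) ** 2 + (params["decay"] - 0.95) ** 2
```

In a real tuner the objective evaluation dominates the cost, since each probe means solving a batch of SAT instances.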
7

Subramanian, Rishi Bharadwaj. "FPGA Based Satisfiability Checking". University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1583154848438753.

8

Hoessen, Benoît. "Solving the Boolean satisfiability problem using the parallel paradigm". Thesis, Artois, 2014. http://www.theses.fr/2014ARTO0406/document.

Abstract:
This thesis presents different techniques for solving the Boolean satisfiability problem using parallel and distributed architectures. In order to provide a complete explanation, a careful presentation of the CDCL algorithm is given, followed by the state of the art in this domain. Two propositions are then made. The first is an improvement of a portfolio algorithm, allowing more data to be exchanged without losing efficiency. The second is a complete library, with its API, allowing distributed SAT solvers to be created easily.
9

Silveira, Jaime Kirch da. "Parallel SAT solvers and their application in automatic parallelization". Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/95373.

Abstract:
Since the slowdown in the growth of processor frequencies, a new tendency has arisen to allow software to take advantage of faster hardware: parallelization. However, unlike increasing processor frequency, parallelization requires a different kind of programming, parallel programming, which is usually harder than common sequential programming. In this context, automatic parallelization has arisen, allowing software to take advantage of parallelism without the need for parallel programming. We present here two proposals: SAT-PaDdlinG and RePaSAT. SAT-PaDdlinG is a parallel DPLL SAT solver on GPU, which allows RePaSAT to use this environment. RePaSAT is our proposal of a parallel machine that uses the SAT problem to automatically parallelize sequential code. Because the GPU provides a cheap, massively parallel environment, SAT-PaDdlinG aims at providing this massive parallelism at low cost to RePaSAT, as well as to any other tool or problem that uses SAT solvers.
10

Kibria, Raihan Hassnain. "Soft Computing Approaches to DPLL SAT Solver Optimization". Advisor: Hans Eveking. Darmstadt: Universitäts- und Landesbibliothek Darmstadt, 2011. http://d-nb.info/1105563952/34.

11

Fernandez, Davila Jorge Luis. "Planification cognitive basée sur la logique : de la théorie à l'implémentation". Electronic Thesis or Diss., Toulouse 3, 2022. http://thesesups.ups-tlse.fr/5491/.

Abstract:
In this thesis, we introduce a cognitive planning framework that can be used to endow artificial agents with the skills needed to represent and reason about other agents' mental states. Our cognitive planning framework is based on an NP-fragment of an epistemic logic with a semantics exploiting belief bases and whose satisfiability problem can be reduced to SAT. We detail the set of translations for the reduction of our fragment to SAT. In addition, we provide complexity results for checking satisfiability of formulas in our NP-fragment. We define a general architecture for the cognitive planning problem. Afterwards, we define two types of planning problem, informative and interrogative, and establish the complexity of finding a solution to the cognitive planning problem in both cases. Furthermore, we illustrate the potential of our framework for applications in human-machine interaction with two examples in which an artificial agent is expected to interact with a human agent through dialogue and to persuade the human to behave in a certain way. Moreover, we introduce a formalization of simple cognitive planning as a quantified Boolean formula (QBF) with an optimal number of quantifiers in the prefix. The model for cognitive planning was implemented. We describe how to represent and generate the belief base, and demonstrate how the machine performs the reasoning process to find a sequence of speech acts intended to induce a potential intention in the human agent. The implemented system has three main components: belief revision, cognitive planning, and the translator module. These modules work together to capture the human agent's beliefs during the human-machine interaction process and to generate a sequence of speech acts achieving a persuasive goal. Finally, we present an epistemic language to represent the beliefs and actions of an artificial player in the context of the cooperative board game Yokai. The game requires a combination of theory of mind (ToM), temporal reasoning and spatial reasoning for an artificial agent to play effectively. We show that the language properly accounts for these three dimensions and that its satisfiability problem is NP-complete. We implement the game and perform experiments to compare the level of cooperation between agents pursuing a common goal in two scenarios: when the game is played by a human and the artificial agent versus when two humans play together.
12

Baud-Berthier, Guillaume. "Encodage Efficace des Systèmes Critiques pour la Vérification Formelle par Model Checking à base de Solveurs SAT". Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0147/document.

Abstract:
The design of electronic circuits and safety-critical software systems, in the railway or avionics domains for instance, is usually associated with a formal verification process. More precisely, test methods for which it is hard to show completeness are combined with approaches that are complete by definition. Model checking is one of those approaches and is probably the most prevalent in industry. The reasons for its success are mainly two characteristics: (i) it is fully automatic, and (ii) it can produce a short execution trace of an undesired behavior, which is very helpful for designers fixing the issue. However, the increasing complexity of the systems to be verified is a real challenge for the scalability of existing techniques. To tackle this challenge, different model checking algorithms (e.g., symbolic model checking, interpolation), various complementary methods (e.g., abstraction, automatic generation of invariants) and multiple decision procedures (e.g., decision diagrams, SMT solvers) can be considered. In this thesis, we particularly focus on temporal induction, a model checking algorithm widely used in industry to check safety-critical systems. It is also the core algorithm of the tool developed within SafeRiver, the company in which this thesis was carried out. More precisely, temporal induction combines BMC (Bounded Model Checking), a very efficient bug-finding method, with k-induction, which adds a termination criterion to BMC when the system does not admit bugs. These two techniques generate propositional formulas whose satisfiability must be determined; to this end, we use a SAT solver as the decision procedure. The main contribution of this thesis aims to strengthen the collaboration between the SAT solver and the model checker. The proposed improvements mainly increase the interconnection of these two modules by exploiting the high-level structure of the problem. We have therefore defined several methods taking advantage of the symmetrical structure of the formulas. This structure emerges during the successive unfoldings of the transition relation, and allows us to duplicate clauses or even unroll the transitions in different directions (i.e., forward or backward). We also established a communication channel between the solver and the model checker, whose purpose is to: (i) simplify the model checker's representation using information inferred by the solver, and (ii) assist the solver during resolution with simplifications performed on the high-level representation. Another important contribution of this thesis is the empirical evaluation of the proposed algorithms on well-established benchmarks, achieved concretely via the implementation of a model checker taking AIG (And-Inverter Graph) models as input, with which we evaluated the effectiveness of our algorithms.
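The BMC idea summarized in this abstract can be sketched with a toy explicit-state version (illustrative only; real BMC encodes the k-step unrolling as a propositional formula and hands it to a SAT solver rather than enumerating states):

```python
def bmc(initial_states, transition, is_bad, k):
    """Toy bounded model checking by explicit unrolling: return the first
    depth <= k at which a bad state is reachable, else None. Real BMC asks
    a SAT solver for a satisfying assignment of the unrolled formula, which
    then serves as a counterexample trace."""
    frontier = set(initial_states)
    for depth in range(k + 1):
        if any(is_bad(s) for s in frontier):
            return depth
        frontier = {t for s in frontier for t in transition(s)}
    return None

# Hypothetical toy system: a 3-bit counter whose "bad" state is the value 6.
def step(s):
    return {(s + 1) % 8}
```

k-induction complements this by proving, when no bad state appears within k steps and the induction step holds, that none is reachable at any depth.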
13

SIVA, SUBRAMANYAN D. "APPLICATIONS OF SATISFIABILITY IN SYNTHESIS OF RECONFIGURABLE COMPUTERS". University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1022761893.

14

Serédi, Silvester. "Evoluční algoritmy v úloze booleovské splnitelnosti". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236224.

Abstract:
The goal of this Master's Thesis is finding a SAT solving heuristic by the application of an evolutionary algorithm. This thesis surveys various approaches used in SAT solving and some variants of evolutionary algorithms that are relevant to this topic. Afterwards, the implementation of a linear genetic programming system that searches for a suitable heuristic for SAT problem instances is described, together with the implementation of a custom SAT solver which exploits the output of the genetic program. Finally, the achieved results are summarized.
15

Guo, Long. "Résolution séquentielle et parallèle du problème de la satisfiabilité propositionnelle". Thesis, Artois, 2013. http://www.theses.fr/2013ARTO0408/document.

Abstract:
In this thesis, we deal with the sequential and parallel resolution of the SAT problem. Despite its complexity, SAT resolution is an excellent and competitive approach for solving combinatorial problems such as the formal verification of hardware and software, cryptography, planning and bioinformatics. Several contributions are made in this thesis. The first concerns the study and integration of the concepts of intensification and diversification in portfolio-type parallel SAT solvers. In our second contribution, we propose to dynamically adjust the configuration of a core in a portfolio parallel SAT solver when it is determined that another core is performing similar work. In the third contribution, we improve the strategy for reducing the learnt-clause database and build a portfolio of reduction strategies in the parallel solver. Finally, we present a new divide-and-conquer approach in which additional constraints are distributed to each core of the parallel solver and their consistency is verified during search.
16

Bousmar, Khadija. "Conception d'un solveur matériel spécifique pour la résolution rapide du problème SAT appliqué à l'évaluation du risque en génie industriel". Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0341.

Abstract:
In this thesis, we address a topic in the field of industrial engineering related to solving a fundamental decision problem in the theory of complexity and propositional satisfiability, called SAT. The latter is usually presented in a mathematical formalism allowing the modelling of complex problems, both academic and from the real world. These problems are expressed in Boolean form in order to check their feasibility. They relate to several application areas, such as hardware and software verification, telecommunications, medicine, and planning. The evolution and progress observed in recent years in problem solving using SAT have reinforced the conviction that this field can be even more promising for solving difficult (complex or NP-complete) problems and that more attention should be dedicated to it. It is with this in mind that we have applied it to purely industrial problems, in order to propose contributions in a new field of application. The objective of this thesis is to develop decision-support tools that can be used in industrial risk management. Although the SAT formalism is very powerful, in practice, when the targeted problems are large, the resolution tools prove to be less effective. Therefore, the aim of this research is to develop a fast hardware architecture (with an FPGA-targeted implementation) allowing a massive acceleration of the resolution, thanks to the high level of parallel processing of the hardware approach. In this thesis, two main aspects are studied and developed to solve a problem of industrial production resource management: on the one hand, the basic operating and resolution principles of a generic configurable SAT solver; on the other hand, methods adapted to the operating principle adopted for the hardware solver. Indeed, although it targets goals comparable to those of the software approach (optimizing the traversal of the search space), the hardware approach requires the development of specific resolution methods, which have been optimized for the target application area, industry. The hardware approach proved effective, with satisfactory results in terms of the number of variables handled and the resolution time on the problems tested.
17

Pham, Duc Nghia. "Modelling and Exploiting Structures in Solving Propositional Satisfiability Problems". Griffith University, Institute for Integrated and Intelligent Systems, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070216.143447.

Abstract:
Recent research has shown that it is often preferable to encode real-world problems as propositional satisfiability (SAT) problems and then solve them using a general purpose SAT solver. However, much of the valuable information and structure of these realistic problems is flattened out and hidden inside the corresponding Conjunctive Normal Form (CNF) encodings of the SAT domain. Recently, systematic SAT solvers have been progressively improved and are now able to solve many highly structured practical problems containing millions of clauses. In contrast, state-of-the-art Stochastic Local Search (SLS) solvers still have difficulty in solving structured problems, apparently because they are unable to exploit hidden structure as well as the systematic solvers. In this thesis, we study and evaluate different ways to effectively recognise, model and efficiently exploit useful structures hidden in realistic problems. A summary of the main contributions is as follows:

1. We first investigate an off-line processing phase that applies resolution-based pre-processors to input formulas before running SLS solvers on these problems. We report an extensive empirical examination of the impact of SAT pre-processing on the performance of contemporary SLS techniques. It emerges that while all the solvers examined do indeed benefit from pre-processing, the effects of different pre-processors are far from uniform across solvers and across problems. Our results suggest that SLS solvers need to be equipped with multiple pre-processors if they are ever to match the performance of systematic solvers on highly structured problems. [Part of this study was published at the AAAI-05 conference].

2. We then look at potential approaches to bridging the gap between SAT and constraint satisfaction problem (CSP) formalisms. One approach has been to develop a many-valued SAT formalism (MV-SAT) as an intermediate paradigm between SAT and CSP, and then to translate existing highly efficient SAT solvers to the MV-SAT domain. In this study, we follow a different route, developing SAT solvers that can automatically recognise CSP structure hidden in SAT encodings. This allows us to look more closely at how constraint weighting can be implemented in the SAT and CSP domains. Our experimental results show that a SAT-based mechanism to handle weights, together with a CSP-based method to instantiate variables, is superior to other combinations of SAT and CSP-based approaches. In addition, SLS solvers based on this many-valued weighting approach outperform other existing approaches to handle many-valued CSP structures. [Part of this study was published at the AAAI-05 conference].

3. Finally, we propose and evaluate six different schemes to encode temporal reasoning problems, in particular Interval Algebra (IA) networks, into SAT CNF formulas. We then empirically examine the performance of local search as well as systematic solvers on the new temporal SAT representations, in comparison with solvers that operate on native IA representations. Our empirical results show that zChaff (a state-of-the-art complete SAT solver), together with the best IA-to-SAT encoding scheme, can solve temporal problems significantly faster than existing IA solvers working on the equivalent native IA networks. [Part of this study was published at the CP-05 workshop].
Style APA, Harvard, Vancouver, ISO itp.
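The Stochastic Local Search solvers studied in this thesis descend from the WalkSAT family: start from a random assignment, then repeatedly pick an unsatisfied clause and flip one of its variables, mixing random-walk and greedy moves. A minimal, self-contained sketch of that loop (illustrative only, not the thesis's implementation; clauses are lists of signed integer literals, DIMACS-style):

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """Minimal WalkSAT-style local search. p is the noise probability:
    with probability p take a random-walk flip, otherwise a greedy one."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def lit_true(lit):
        return assign[abs(lit)] == (lit > 0)

    def unsat_clauses():
        return [c for c in clauses if not any(lit_true(l) for l in c)]

    for _ in range(max_flips):
        unsat = unsat_clauses()
        if not unsat:
            return assign                      # all clauses satisfied: a model
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))      # random-walk move
        else:
            # greedy move: flip the variable leaving the fewest unsatisfied clauses
            def score(v):
                assign[v] = not assign[v]
                n_unsat = len(unsat_clauses())
                assign[v] = not assign[v]
                return n_unsat
            var = min((abs(l) for l in clause), key=score)
        assign[var] = not assign[var]
    return None                                # gave up (incomplete method)

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = walksat([[1, 2], [-1, 3], [-2, -3]], 3)
```

Being incomplete, such a solver can report a model but never unsatisfiability, which is one reason the abstract contrasts SLS with systematic solvers on structured problems.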
18

Raimondi, Daniele. "Crittoanalisi Logica di DES". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/1895/.

Pełny tekst źródła
Streszczenie:
Cryptography has always played a primary role in the history of humankind, from its beginnings to the present day, and the period we live in is certainly no exception. Nowadays, many actions performed even just out of habit (banking operations, automatically unlocking a car, logging into Facebook, etc.) conceal the constant presence of sophisticated cryptographic systems. Precisely for this reason, it is important that the algorithms in use be certified in some way as reasonably secure, and that research in this field proceed steadily, both in terms of possible new exploits for breaking the algorithms in use and by introducing new and ever more complex security systems. This thesis proposes a possible implementation of a particular kind of cryptanalytic attack, introduced in 2000 by two researchers of the University "La Sapienza" of Rome and known as "Logical Cryptanalysis". The algorithm on which the work is centred is the Data Encryption Standard (DES), a tough cryptographic standard that fell into disuse in 1999 because of its reduced key size, although it remains algebraically unbroken to this day. The text is structured as follows: the first chapter gives a brief description of DES and its history, introducing the fundamental concepts used throughout the dissertation. The second chapter introduces Logical Cryptanalysis and provides a definition of it, touching on the mathematical concepts needed to understand the following chapters. Chapter 3 presents the first of the two software tools developed to make this cryptanalytic attack possible: a library for representing and manipulating logical formulas, written in Java.
The fourth and final chapter describes the program that, using the library described in Chapter 3, automatically produces a set of logical propositions semantically equivalent to DES, whose satisfiability check, carried out with dedicated tools (SAT solvers), amounts to a known-plaintext attack on that algorithm.
Style APA, Harvard, Vancouver, ISO itp.
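Turning a cipher circuit such as DES into a satisfiability-equivalent set of clauses is typically done with the Tseitin transformation, which introduces one propositional variable per wire and a few clauses per gate. A toy sketch for AND/XOR gates (illustrative representation, not the thesis's Java library; the brute-force check stands in for a SAT solver):

```python
def tseitin(gates):
    """Tseitin-encode a gate list into CNF. Each gate is (out, op, a, b),
    where out/a/b are positive integers naming wires. The clauses are
    satisfiable exactly when every output wire equals its gate's value."""
    clauses = []
    for out, op, a, b in gates:
        if op == "AND":    # out <-> (a AND b)
            clauses += [[-a, -b, out], [a, -out], [b, -out]]
        elif op == "XOR":  # out <-> (a XOR b)
            clauses += [[-a, -b, -out], [a, b, -out],
                        [a, -b, out], [-a, b, out]]
        else:
            raise ValueError(op)
    return clauses

def satisfiable(cnf, n_vars):
    """Tiny brute-force check; a real attack hands the CNF to a SAT solver."""
    return any(
        all(any(((m >> (abs(l) - 1)) & 1) == (l > 0) for l in c) for c in cnf)
        for m in range(2 ** n_vars))

gate = [(3, "AND", 1, 2)]                                      # x3 = x1 AND x2
consistent = satisfiable(tseitin(gate) + [[3], [1], [2]], 3)   # True
broken = satisfiable(tseitin(gate) + [[3], [-1]], 3)           # False: x3 can't be 1 if x1 is 0
```

In a known-plaintext attack the plaintext and ciphertext wires are fixed with unit clauses, as in the last two lines, and any model of the remaining formula reveals the key bits.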
19

Pham, Duc Nghia. "Modelling and Exploiting Structures in Solving Propositional Satisfiability Problems". Thesis, Griffith University, 2006. http://hdl.handle.net/10072/365503.

Pełny tekst źródła
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Full Text
Style APA, Harvard, Vancouver, ISO itp.
20

Mukherjee, Rajdeep. "Precise abstract interpretation of hardware designs". Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:680f0093-0405-4a0b-88dc-c4d7177d840f.

Pełny tekst źródła
Streszczenie:
This dissertation shows that the bounded property verification of hardware Register Transfer Level (RTL) designs can be efficiently performed by precise abstract interpretation of a software representation of the RTL. The first part of this dissertation presents a novel framework for RTL verification using native software analyzers. To this end, we first present a translation of the hardware circuit expressed in Verilog RTL into software in C called the software netlist. We then present the application of native software analyzers based on SAT/SMT-based decision procedures as well as abstraction-based techniques such as abstract interpretation for the formal verification of the software netlist design generated from the hardware RTL. In particular, we show that the path-based symbolic execution techniques, commonly used for automatic test case generation in systems software, are also effective for proving bounded safety as well as detecting bugs in the software netlist designs. Furthermore, by means of experiments, we show that abstract interpretation techniques, commonly used for static program analysis, can also be used for bounded as well as unbounded safety property verification of the software netlist designs. However, the analysis using abstract interpretation shows a high degree of imprecision on our benchmarks, which is handled by manually guiding the analysis with various trace partitioning directives. The second part of this dissertation presents a new theoretical framework and a practical instantiation for automatically refining the precision of abstract interpretation using Conflict Driven Clause Learning (CDCL)-style analysis. The theoretical contribution is the abstract interpretation framework that generalizes CDCL to precise safety verification for automatic transformer refinement called Abstract Conflict Driven Learning for Safety (ACDLS).
The practical contribution instantiates ACDLS over a template polyhedra abstract domain for bounded safety verification of the software netlist designs. We experimentally show that ACDLS is more efficient than a SAT-based analysis as well as sufficiently more precise than a commercial abstract interpreter.
Style APA, Harvard, Vancouver, ISO itp.
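Abstract interpretation, as used here on software netlists, propagates abstract values instead of concrete ones. A toy interval-domain sketch (illustrative only) shows where the imprecision mentioned in the abstract comes from: joining the abstractions of two program paths loses the information that the intermediate values never occur, which is exactly what trace partitioning directives recover:

```python
class Interval:
    """Toy interval abstract domain: a value is abstracted by [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def join(self, other):
        # least upper bound: the smallest interval covering both inputs
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def add(self, other):
        # abstract transformer for '+': add the bounds component-wise
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def contains(self, x):
        return self.lo <= x <= self.hi

# 'a' is [0,3] on one path and [10,12] on another; the join is [0,12],
# which spuriously admits 4..9 -- the imprecision trace partitioning avoids.
a = Interval(0, 3).join(Interval(10, 12))
y = a.add(Interval(1, 1))   # abstract result of 'y = a + 1': [1, 13]
```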
21

Asketorp, Jonatan. "Attacking RSA moduli with SAT solvers". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157352.

Pełny tekst źródła
Streszczenie:
This thesis aimed to explore how sequential boolean satisfiability solvers can be used on the integer factorisation problem. The integer factorisation problem is believed to be hard, and modern public key cryptography relies on that; notably, SSL/TLS and SSH support the use of RSA. However, it is not proven that integer factorisation is hard, and therefore it is of great importance to explore different attack avenues. The modulus in RSA is a semiprime, i.e. an integer that is the product of two primes. Hence, in this thesis an empirical study of the factorisation of semiprimes with a bit-length of up to 32 bits is carried out. Randomly selected semiprimes are factorised through six different reductions using three different solvers (Glucose, Lingeling and PicoSAT) and the results are compared to those of MSieve, an open-source integer factorisation program. As seen in the comparison, boolean satisfiability solvers cannot be used as a replacement for an integer factorisation solver. Additionally, comparisons between the different reductions and boolean satisfiability solvers show that the combination of solver and reduction greatly affects performance. The implication is that further explorations of the integer factorisation problem with boolean satisfiability solvers can greatly benefit either from avoiding an inadequate solver and reduction pair or from attempting to identify the outliers that signify an inadequate coupling.
Style APA, Harvard, Vancouver, ISO itp.
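Reductions of this kind encode a multiplier circuit as CNF so that any satisfying assignment reveals the factors. A toy version for 2-bit factors (the thesis goes up to 32-bit semiprimes with solvers such as Glucose; the brute-force "solver" below is only for illustration, and the variable numbering is an assumption of this sketch):

```python
import itertools

def and_gate(out, a, b):   # CNF for out <-> (a AND b)
    return [[-a, -b, out], [a, -out], [b, -out]]

def xor_gate(out, a, b):   # CNF for out <-> (a XOR b)
    return [[-a, -b, -out], [a, b, -out], [a, -b, out], [-a, b, out]]

def factor_cnf(n):
    """CNF asserting p * q == n for a 2-bit p (vars 1,2) and 2-bit q (vars 3,4),
    built from a tiny combinational multiplier. A real reduction would emit
    this in DIMACS form and hand it to a SAT solver."""
    p0, p1, q0, q1 = 1, 2, 3, 4
    a, b, c, d = 5, 6, 7, 8          # partial products p_i AND q_j
    r1, k1, r2, k2 = 9, 10, 11, 12   # sums and carries of the adder tree
    clauses = (and_gate(a, p0, q0) + and_gate(b, p1, q0) +
               and_gate(c, p0, q1) + and_gate(d, p1, q1) +
               xor_gate(r1, b, c) + and_gate(k1, b, c) +
               xor_gate(r2, d, k1) + and_gate(k2, d, k1))
    for i, var in enumerate([a, r1, r2, k2]):   # product bits, LSB first
        clauses.append([var] if (n >> i) & 1 else [-var])
    return clauses, 12

def solve_brute(clauses, n_vars):
    """Stand-in for a SAT solver: return any satisfying assignment, or None."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

clauses, n_vars = factor_cnf(9)
m = solve_brute(clauses, n_vars)
p = m[1] + 2 * m[2]   # decode the factors from the model
q = m[3] + 2 * m[4]   # any model must encode 3 * 3 = 9
```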
22

Lundén, Daniel, i Erik Forsblom. "Factoring integers with parallel SAT solvers". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166436.

Pełny tekst źródła
Streszczenie:
Factoring integers is a well-known problem that at present cannot be solved in polynomial time. Therefore, other approaches for solving factorization problems are of interest. One such approach is to reduce factorization to SAT and solve it with a dedicated SAT solver. In this study, parallel SAT solvers are examined in this context and evaluated in terms of speedup, efficiency and effectiveness versus sequential SAT solvers. The methodology used was an experimental approach where different parallel and sequential solvers were benchmarked on different reductions from integer factorization to SAT. The benchmarks led to the conclusion that parallel SAT solvers are not better suited for solving factorization problems than sequential solvers. The performance boosts achieved by the fastest parallel solver barely corresponded to the extra amount of parallel resources available to it over the fastest sequential solver.
Style APA, Harvard, Vancouver, ISO itp.
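The speedup and efficiency measures used in this kind of evaluation have the standard definitions; a small helper with hypothetical timing numbers:

```python
def speedup(t_sequential, t_parallel):
    """Classic parallel speedup: how many times faster the parallel run is."""
    return t_sequential / t_parallel

def efficiency(t_sequential, t_parallel, n_cores):
    """Speedup normalised by core count; 1.0 would mean perfect scaling."""
    return speedup(t_sequential, t_parallel) / n_cores

# e.g. a solver taking 120 s sequentially and 40 s on 8 cores
s = speedup(120.0, 40.0)        # 3.0
e = efficiency(120.0, 40.0, 8)  # 0.375, far from the ideal 1.0
```

An efficiency well below 1.0, as in this hypothetical run, is exactly the situation the abstract describes: the gain barely matches the extra parallel resources spent.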
23

Nelson, Max (Max M. ). "A new approach to parallel SAT solvers". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85456.

Pełny tekst źródła
Streszczenie:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 71-72).
We present a novel approach to solving SAT problems in parallel by partitioning the entire set of problem clauses into smaller pieces that can be solved by individual threads. We examine the complications that arise with this partitioning, including the idea of global variables, broadcasting global conflict clauses, and a protocol to ensure correctness. Along with this algorithm description, we provide the details of a C++ implementation, ParallelSAT, with a few specific optimizations. Finally, we demonstrate that this approach provides a significant speedup on a set of SAT problems related to program analysis.
by Max Nelson.
M. Eng.
Style APA, Harvard, Vancouver, ISO itp.
24

Procházka, Lukáš. "Redukce nedeterministických konečných automatů". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237032.

Pełny tekst źródła
Streszczenie:
The nondeterministic finite automaton is an important tool used for string processing in many different areas of programming. Reducing its size is important for increasing the effectiveness of programs. However, this problem is computationally hard, so new techniques need to be explored. This work describes the basics of finite automata and then introduces some methods for their reduction. Usable reduction algorithms are described in greater detail, then implemented and tested. Finally, the test results are evaluated.
Style APA, Harvard, Vancouver, ISO itp.
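One of the simplest reductions such a tool can start with is "trimming": removing states that are unreachable from the start state or that cannot reach an accepting state. A sketch under an assumed dictionary representation of the transition relation (the thesis covers stronger, simulation-based reductions):

```python
def trim_nfa(states, alphabet, delta, start, accepting):
    """Drop NFA states that are unreachable from the start state or cannot
    reach an accepting state. delta maps (state, symbol) -> set of states."""
    def closure(seed, step):
        seen, todo = set(seed), list(seed)
        while todo:
            s = todo.pop()
            for t in step(s):
                if t not in seen:
                    seen.add(t)
                    todo.append(t)
        return seen

    forward = lambda s: {t for a in alphabet for t in delta.get((s, a), set())}
    reachable = closure({start}, forward)
    backward = lambda s: {u for (u, a), ts in delta.items() if s in ts}
    productive = closure(set(accepting), backward)
    keep = reachable & productive
    new_delta = {(s, a): ts & keep for (s, a), ts in delta.items()
                 if s in keep and ts & keep}
    return keep, new_delta

states = {0, 1, 2, 3}        # 3 is unreachable, 2 cannot reach an accept state
delta = {(0, 'a'): {1, 2}, (1, 'b'): {1}, (3, 'a'): {1}}
keep, new_delta = trim_nfa(states, {'a', 'b'}, delta, 0, {1})
```

Trimming never changes the accepted language, which makes it a safe pre-pass before the harder equivalence-based reductions the thesis evaluates.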
25

Szczepanski, Nicolas. "SAT en Parallèle". Thesis, Artois, 2017. http://www.theses.fr/2017ARTO0403/document.

Pełny tekst źródła
Streszczenie:
This thesis deals with solving the boolean satisfiability problem (SAT) in a massively parallel setting. The SAT problem is widely used for solving combinatorial problems of prime importance, such as formal verification of hardware and software, bioinformatics, cryptography, planning and task scheduling. Several contributions are made in this thesis. They range from the design of algorithms based on the "portfolio" and "divide and conquer" approaches, to the adaptation of parallel programming models to SAT, in particular hybrid models (targeting architectures with both shared and distributed memory), along with improvements to the solving strategies. This work has led to several contributions at international conferences in the field, as well as to several (open-source) SAT solving tools that are competitive at the international level.
Style APA, Harvard, Vancouver, ISO itp.
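The "portfolio" approach mentioned in the abstract races differently configured solvers on the same formula and takes the first answer that comes back. A minimal sketch with threads and stand-in brute-force solvers (illustrative only, not the thesis's distributed implementation):

```python
import itertools
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def brute_force(clauses, n_vars, order):
    """Stand-in 'solver': enumerate assignments in a strategy-specific order."""
    for bits in order(itertools.product([False, True], repeat=n_vars)):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return ("SAT", assign)
    return ("UNSAT", None)

def portfolio(clauses, n_vars):
    """Portfolio parallelism: run differently configured solvers on the same
    formula and return the first answer that completes."""
    strategies = [lambda it: it, lambda it: reversed(list(it))]
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(brute_force, clauses, n_vars, s)
                   for s in strategies]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
status, model = portfolio([[1, 2], [-1, 2], [-2, 3]], 3)
```

Real portfolio solvers diversify much more aggressively (restarts, heuristics, clause sharing between workers), but the race-and-take-first structure is the same.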
26

Manthey, Norbert. "Towards Next Generation Sequential and Parallel SAT Solvers". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-158672.

Pełny tekst źródła
Streszczenie:
This thesis focuses on improving the SAT solving technology. The improvements focus on two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict driven clause learning algorithm is extended with the three techniques local look-ahead, local probing and all UIP learning that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When using these three techniques in the formula simplification tool Coprocessor before using Riss to solve a formula, the performance can be improved further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is the extended clause sharing level based clause tagging, which builds the basis for conflict driven node killing. The latter allows to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead as a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable for the multi-core architecture. Hence iterative partitioning is interesting for future parallel SAT solvers. 
The implemented solvers participated in international SAT competitions. In 2013 and 2014 Pcasso showed a good performance. Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt-Gödel-Medals. Hence, the introduced algorithms improved modern SAT solving technology.
Style APA, Harvard, Vancouver, ISO itp.
27

Minařík, Vojtěch. "Využití SAT solverů v úloze optimalizace kombinačních obvodů". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399701.

Pełny tekst źródła
Streszczenie:
This thesis focuses on the application of the SAT problem and its modifications in the area of evolutionary logic circuit design. The goal is to increase the speed of evaluating candidate circuits with the fitness function in cases where simulation fails. Using the SAT and #SAT problems makes the evolution of complex circuits with a high number of inputs significantly faster. The implemented solution is based on the #SAT problem. Two applications were implemented; they differ in the approach used to check circuit outputs for wrong values. The time complexity of the implemented algorithm depends on the logical complexity of the circuit, because it evaluates logic circuits through logical formulas and their satisfiability.
Style APA, Harvard, Vancouver, ISO itp.
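Using #SAT as a fitness function amounts to counting the inputs on which a candidate circuit disagrees with its specification; zero disagreements means functional equivalence. A brute-force sketch of that idea (the thesis uses a real #SAT approach on formulas; the circuits here are hypothetical examples):

```python
import itertools

def count_models(f, n_inputs):
    """Brute-force #SAT over a boolean function: number of satisfying inputs."""
    return sum(1 for bits in itertools.product([0, 1], repeat=n_inputs)
               if f(*bits))

def fitness_via_counting(candidate, reference, n_inputs):
    """Count the inputs where candidate and reference disagree; 0 means the
    candidate circuit is functionally equivalent to the specification."""
    return count_models(lambda *x: candidate(*x) != reference(*x), n_inputs)

reference = lambda a, b, c: (a and b) or c            # specification
candidate = lambda a, b, c: (a or c) and (b or c)     # distributed form
errors = fitness_via_counting(candidate, reference, 3)
```

Because the counting is done on the XOR of the two circuits, the cost grows with the logical complexity of the formula rather than with exhaustive simulation over all input vectors, which is the advantage the abstract points to for circuits with many inputs.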
28

Sidenmark, Ludwig, i Kjellberg Erik Villaman. "FACT- and SAT-solvers on different types of semiprimes". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166565.

Pełny tekst źródła
Streszczenie:
This thesis aims to cover how boolean satisfiability solvers can be used on integer factorization problems and to compare them with already established integer factorization solvers. The integer factorization problem is believed to be hard and is used in modern-day cryptographic algorithms such as RSA. This thesis also aims to explore how well different solvers can solve different types of semiprimes and whether there is any notable difference between them. The report covers three different boolean satisfiability solvers as well as three integer factorization solvers. The results show that boolean satisfiability solvers cannot be used as a replacement for already established integer factorization solvers. The thesis also shows that the type of semiprime does affect how fast the solvers are able to factorize it. The boolean satisfiability solvers had favorable results on asymmetrical semiprimes and unfavorable results on prime powers.
Style APA, Harvard, Vancouver, ISO itp.
29

Khudabukhsh, Ashiqur Rahman. "SATenstein : automatically building local search SAT solvers from components". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/13852.

Pełny tekst źródła
Streszczenie:
Designing high-performance solvers for computationally hard problems is a difficult and often time-consuming task. It is often the case that a new solver is created by augmenting an existing algorithm with a mechanism found in a different algorithm or by combining components from different algorithms. In this work, we demonstrate that this task can be automated in the context of stochastic local search (SLS) solvers for the propositional satisfiability problem (SAT). We first introduce a generalized, highly parameterized solver framework, dubbed SATenstein, that includes components drawn from or inspired by existing high-performance SLS algorithms for SAT. In SATenstein, we exposed several design elements in the form of parameters that control both the selection and the behavior of components. We also exposed some parameters that were hard-coded into the implementations of the algorithms we studied. By setting these parameters, SATenstein can be instantiated as a huge number of different solvers, including many known high-performance solvers and trillions of solvers never studied before. We used an automated algorithm configuration procedure to find instantiations of SATenstein that perform well on several well-known, challenging distributions of SAT instances. Overall, we consistently obtained significant improvements over the previous best-performing SLS algorithms, despite expending minimal manual effort.
Style APA, Harvard, Vancouver, ISO itp.
30

Oh, Chanseok. "Improving SAT Solvers by Exploiting Empirical Characteristics of CDCL". Thesis, New York University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10025676.

Pełny tekst źródła
Streszczenie:

The Boolean Satisfiability Problem (SAT) is a canonical decision problem originally shown to be NP-complete in Cook's seminal work on the theory of computational complexity. The SAT problem is one of several computational tasks identified by researchers as core problems in computer science. The existence of an efficient decision procedure for SAT would imply P = NP. However, numerous algorithms and techniques for solving the SAT problem have been proposed in various forms in practical settings. Highly efficient solvers are now actively being used, either directly or as a core engine of a larger system, to solve real-world problems that arise from many application domains. These state-of-the-art solvers use the Davis-Putnam-Logemann-Loveland (DPLL) algorithm extended with Conflict-Driven Clause Learning (CDCL). Due to the practical importance of SAT, building a fast SAT solver can have a huge impact on current and prospective applications. The ultimate contribution of this thesis is improving the state of the art of CDCL by understanding and exploiting the empirical characteristics of how CDCL works on real-world problems. The first part of the thesis shows empirically that most of the unsatisfiable real-world problems solvable by CDCL have a refutation proof with near-constant width for the great portion of the proof. Based on this observation, the thesis provides an unconventional perspective that CDCL solvers can solve real-world problems very efficiently and often more efficiently just by maintaining a small set of certain classes of learned clauses. The next part of the thesis focuses on understanding the inherently different natures of satisfiable and unsatisfiable problems and their implications on the empirical workings of CDCL. We examine the varying degree of roles and effects of crucial elements of CDCL based on the satisfiability status of a problem. 
Ultimately, we propose effective techniques to exploit the new insights about the different natures of proving satisfiability and unsatisfiability to improve the state of the art of CDCL. In the last part of the thesis, we present a reference solver that incorporates all the techniques described in the thesis. The design of the presented solver emphasizes minimality in implementation while guaranteeing state-of-the-art performance. Several versions of the reference solver have demonstrated top-notch performance, earning several medals in the annual SAT competitive events. The minimal spirit of the reference solver shows that a simple CDCL framework alone can still be made competitive with state-of-the-art solvers that implement sophisticated techniques outside the CDCL framework.

Style APA, Harvard, Vancouver, ISO itp.
31

Reis, Poliana Magalhães. "Análise da distribuição do número de operações de resolvedores SAT". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-21012013-220441/.

Pełny tekst źródła
Streszczenie:
In the study of computational complexity, two classes known as P and NP stand out. The question P = NP is one of the greatest unsolved problems in theoretical computer science and contemporary mathematics. The SAT problem was the first problem recognized as NP-complete; it consists of checking whether a given formula of classical propositional logic is satisfiable or not. Implementations of algorithms to solve SAT problems are known as SAT solvers. There are several applications in computer science that can be carried out with SAT solvers, as can other NP-complete problems that can be reduced to SAT, such as graph coloring problems, scheduling problems and planning problems. Among the most efficient algorithms for SAT solvers are Sato, Grasp, Chaff, MiniSat and Berkmin. The Chaff algorithm is based on the DPLL algorithm, which has existed for more than 40 years and is the most widely used strategy for SAT solvers. This dissertation presents a detailed study of the behavior of zChaff (a very efficient implementation of Chaff) in order to know what to expect from its executions in general.
Style APA, Harvard, Vancouver, ISO itp.
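The DPLL procedure underlying Chaff and zChaff alternates unit propagation with case splitting. A minimal recursive sketch (no watched literals, clause learning or branching heuristics, which are exactly what Chaff adds on top):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: unit propagation, then split on a variable.
    Clauses are lists of non-zero integers; returns a model dict or None."""
    assignment = dict(assignment or {})

    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:
                continue                     # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                  # empty clause: conflict
            out.append(reduced)
        return out

    while True:                              # unit propagation
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None
    if not clauses:
        return assignment                    # every clause satisfied
    var = abs(clauses[0][0])                 # branch (no heuristic here)
    for lit in (var, -var):
        sub = simplify(clauses, lit)
        if sub is not None:
            model = dpll(sub, {**assignment, abs(lit): lit > 0})
            if model is not None:
                return model
    return None

# (x1 or x2) and (not x1 or x2) and (not x2) has no model
unsat = dpll([[1, 2], [-1, 2], [-2]])
model = dpll([[1, 2], [-1, 3], [-2, -3]])    # a satisfying assignment
```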
32

Zamora, Carlos Enrique Jr. "HAMMING DISTANCE PLOT TECHNIQUES FOR SLS SAT SOLVERS: EXPLORING THE BEHAVIOR OF STATE-OF-THE-ART SLS SOLVERS ON RANDOM K-SAT PROBLEMS". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1554825851959352.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

Coquereau, Albin. "[ErgoFast] Amélioration de performances du solveur SMT Alt-Ergo grâce à l’intégration d’un solveur SAT efficace". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY007.

Pełny tekst źródła
Streszczenie:
Automatic solvers of the SMT (Satisfiability Modulo Theories) family are increasingly used in industry and academia. The reason for this success lies on the one hand in the expressiveness of their input languages (first-order logic with many built-in theories), and on the other hand in their ever-increasing efficiency. The speed of SMT solvers is mainly determined by the decision procedures they implement (SAT solving, Simplex, etc.), so the data structures used and the memory management mechanisms have an immediate impact on performance. Likewise, the programming language used and the code optimizations available in the compiler are very important. In the VALS team at LRI, we develop the SMT solver Alt-Ergo. This tool is written in OCaml and is mainly used to prove logical formulas produced by program verification frameworks such as Why3, Spark, Frama-C, and Atelier B. Its direct competitors are z3 (Microsoft), CVC4 (New York Univ. and Univ. of Iowa), and yices2 (SRI). Despite our efforts in the design and optimization of the decision procedures it implements, Alt-Ergo turns out to be slower than its competitors on certain benchmark suites. We identified three important causes. The first seems to be the data structures used in the solver: for safety reasons, most of Alt-Ergo is written in a purely functional programming style with persistent structures, whose efficiency is generally worse than that of imperative structures. The second seems to be memory management by the OCaml garbage collector, which, compared with manual management, causes many memory blocks to be moved and probably too many cache misses; since the gap between a cache access and a RAM access is on the order of 150 clock cycles, making maximal use of the cache is crucial for performance. The third seems to be the lack of optimizations in the OCaml compiler: indeed, we observed that the performance gap between Alt-Ergo and some of its competitors (written mainly in C or C++) shrank considerably when the latter were recompiled with a lower compiler optimization level.
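A minimal illustration of the first cause, written in Python rather than OCaml and unrelated to Alt-Ergo's actual code: a persistent-style update builds a fresh structure on every insertion (keeping every old version valid, which is what makes the style safe), while the imperative version mutates one structure in place. The persistent loop does O(n) copying per insert, O(n²) overall, versus amortised O(1) per insert for the mutable set.

```python
def persistent_inserts(n):
    """Persistent style: every update returns a new structure,
    leaving all previous versions intact (O(n) copy per insert)."""
    s = frozenset()
    versions = [s]
    for i in range(n):
        s = s | {i}          # builds a brand-new frozenset
        versions.append(s)   # every earlier version remains usable
    return s, versions

def imperative_inserts(n):
    """Imperative style: a single structure mutated in place (amortised O(1))."""
    s = set()
    for i in range(n):
        s.add(i)
    return s
```

The persistent version buys safety (old states can never be corrupted), the imperative one buys raw speed; that is precisely the trade-off the thesis describes.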
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Nain, Prerna. "Determining Optimal Arithmetic Circuits for Solving Linear Optimization Problems with SAT Solvers". Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10752239.

Full text source
Abstract:

As the fields of artificial intelligence, operations research, and computer science expand, the complexity of computing problems also increases, making such problems more difficult to solve. Broadly speaking, many such problems are constraint programming problems. Constraint programming is a paradigm that can be successfully applied to areas such as scheduling, vehicle routing, and many more. All these decision problems are NP-complete and thus hard to solve. In this thesis, we discuss two ways to solve them. One is to reduce the problem to a satisfiability problem and solve it using a SAT solver; MiniSat is the SAT solver used in this study. The other is to reduce the problem to a quantified Boolean formula (QBF) decision problem and solve it using the QBF backjumping algorithm. The advantage of this algorithm is that it is more expressive and makes the encoding much simpler. We also discuss how these algorithms accelerate problem solving when used with various arithmetic circuits. We use linear optimization problems as input for both algorithms. These linear constraint problems are translated to an arithmetic circuit composed of many adder and multiplier circuits. The addition circuit can be built from one of two arithmetic adders: the Ripple-Carry (RC) adder, which gives linear size, and the Carry-Look-Ahead (CLA) adder, which gives log-linear size. The multiplication circuit likewise has either linear or log-linear size, depending on the choice of adder, because multiplication involves the addition of partial products. The results show that the CLA adder outperforms the RC adder because the circuit optimization phase reduces the size of the circuit and makes it comparable to that of the RC adder.
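The size claims for the two adders follow from their gate counts once each gate is encoded into clauses. As a hedged sketch (a toy encoder plus an exhaustive stand-in for MiniSat, not the thesis's implementation), the following encodes an n-bit ripple-carry adder into CNF: each full adder contributes a constant number of clauses, giving the linear size mentioned above.

```python
from itertools import count, product

def full_adder_clauses(a, b, cin, s, cout):
    """CNF clauses forcing s = a XOR b XOR cin and cout = majority(a, b, cin).
    Variables are positive ints; a negative literal means negation (DIMACS style)."""
    lit = lambda var, bit: var if bit else -var
    clauses = []
    for x, y, z in product((0, 1), repeat=3):
        premise = [-lit(a, x), -lit(b, y), -lit(cin, z)]
        clauses.append(premise + [lit(s, x ^ y ^ z)])
        clauses.append(premise + [lit(cout, (x & y) | (x & z) | (y & z))])
    return clauses

def ripple_carry_cnf(n, fresh):
    """Chain n full adders; each adds O(1) clauses, so the CNF is linear in n."""
    A = [next(fresh) for _ in range(n)]
    B = [next(fresh) for _ in range(n)]
    S = [next(fresh) for _ in range(n + 1)]      # n sum bits plus the final carry
    carry = next(fresh)
    clauses = [[-carry]]                          # carry-in of the low bit is 0
    for i in range(n):
        cout = next(fresh)
        clauses += full_adder_clauses(A[i], B[i], carry, S[i], cout)
        carry = cout
    clauses += [[-S[n], carry], [S[n], -carry]]   # top sum bit equals final carry
    return A, B, S, clauses

def brute_force_model(num_vars, clauses):
    """Toy satisfiability check by exhaustive enumeration (stand-in for a SAT solver)."""
    for bits in product((False, True), repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None
```

Fixing the input bits with unit clauses and solving then recovers the sum, which is how linear constraints become SAT queries in this setting.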

Styles: APA, Harvard, Vancouver, ISO, etc.
35

GAZZARATA, GIORGIA. "Extensions and Experimental Evaluation of SAT-based solvers for the UAQ problem". Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1008022.

Full text source
Abstract:
Nowadays, most health organizations make use of Health Information Systems (HIS) to support their staff in providing patients with proper care. In this context, security and privacy are key to establishing trust between the actors involved in the healthcare process, including the patient. However, patients' privacy cannot jeopardize their safety: as a consequence, a compromise between the two must eventually be found. Privilege management and access control are necessary elements to provide security and privacy. In this thesis, we first present the main features that make Role Based Access Control suitable for permission management and access control in HIS. We then address the User Authorization Query (UAQ) problem for RBAC, namely the problem of determining the optimum set of roles to activate to provide the user with the requested permissions (if the user is authorized) while satisfying a set of Dynamic Mutually Exclusive Roles (DMER) constraints and achieving some optimization objective (least privilege versus availability). As a first contribution, we show how DMER can be used to support the enforcement of SoD. The UAQ problem is known to be NP-hard. Most of the techniques proposed in the literature to solve it have been experimentally evaluated by running them against different benchmark problems, but the adequacy of the latter is seldom discussed. In this thesis, we propose a methodology for evaluating existing benchmarks or designing new ones: the methodology leverages the asymptotic complexity analysis of the solving procedures provided in other works to foresee the benchmarks' complexity given the values of the most significant RBAC dimensions. First, we use our methodology to demonstrate that the state-of-the-art benchmarks are unsatisfactory. We then introduce UAQ-Solve, a tool that works both as a benchmark generator and as a UAQ solver leveraging existing complete PMAXSAT solvers.
By using UAQ-Solve, we apply our methodology to generate a novel suite of parametric benchmarks that allows for the systematic assessment of UAQ solvers over a number of relevant dimensions. These include problems for which no polynomial-time algorithm is known as well as problems for which polynomial-time algorithms do exist. We then execute UAQ-Solve over our benchmarks to compare the performance of different complete and incomplete PMAXSAT solvers.
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Moore, Neil C. A. "Improving the efficiency of learning CSP solvers". Thesis, University of St Andrews, 2011. http://hdl.handle.net/10023/2100.

Full text source
Abstract:
Backtracking CSP solvers provide a powerful framework for search and reasoning. The aim of constraint learning is to increase global reasoning power by learning new constraints to boost reasoning and hopefully reduce search effort. In this thesis constraint learning is developed in several ways to make it faster and more powerful. First, lazy explanation generation is introduced, where explanations are generated as needed rather than continuously during propagation. This technique is shown to be effective in substantially reducing the number of explanations generated, and consequently the time taken to complete a search, over a wide selection of benchmarks. Second, a series of experiments investigates constraint forgetting, where constraints are discarded so that the time and space costs of learning new constraints do not grow too large. A major empirical investigation into the overheads introduced by unbounded constraint learning in CSP is conducted; this is the first such study in either CSP or SAT. Two significant results are obtained. The first is that typically a small percentage of learnt constraints do most of the propagation; while this is conventional wisdom, it had not previously been the subject of empirical study. The second is that even constraints that do no effective propagation can incur significant time overheads. Moreover, the use of forgetting techniques from the literature is shown to significantly improve the performance of modern learning CSP solvers, contradicting some previous research. Finally, learning is generalised to use disjunctions of arbitrary constraints, where previously only disjunctions of assignments and disassignments had been used in practice (g-nogood learning). The implementation shows that major gains in expressivity are available; this is confirmed by a proof that the technique can save an exponential amount of search compared with g-nogood learning, and by experiments demonstrating its promise.
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Buck, Rebecca Arlene. "Integrating the Least-Cost Grade-Mix Solver into ROMI". Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/36339.

Full text source
Abstract:
Up to 70 percent of rough mill manufacturing expenses stem from raw material (lumber) cost. Rough mill costs can be reduced by optimizing the lumber grade or grades that are purchased; this solution is known as the least-cost lumber grade-mix solution. The least-cost lumber grade-mix solution has been a topic of great interest to both the secondary hardwood industry and academia, since even small changes in raw material cost can contribute to substantial reductions in rough mill expenses. A statistical model for finding the least-cost lumber grade-mix was developed using the rough mill simulator ROMI-RIP 2.0 and the statistical package SAS 8.2. The SAS 8.2-based model was validated by comparing its solutions to those of OPTIGRAMI 2.0, a least-cost lumber grade-mix solver that relies on linear modeling. The SAS 8.2-based solver found lower cost solutions in 9 of the 10 cutting bills tested. It was packaged with ROMI 3.0, an updated version of ROMI-RIP, and provided to industry free of charge by the USDA Forest Service. The USDA Forest Service also purchased a SAS server license to give users of the least-cost lumber grade-mix solver free access to SAS 8.2. However, industry users were reluctant to use the USDA Forest Service SAS server, since it required entering individual cost and yield data on a government computer, required internet access, and was limited to one user at a time. Thus, the goal of this research was to incorporate the least-cost lumber grade-mix solver into ROMI using the free, open-source statistical package R 2.7.2. An R 2.7.2-based least-cost lumber grade-mix solver was developed and validated by comparing its solutions to those of the updated SAS 9.2-based solver. No differences were found in the least-cost lumber grade-mix solutions from either solver. Thus, a new least-cost lumber grade-mix solver using the R 2.7.2 open-source statistical package was created. R 2.7.2 is installed on each personal computer on which the USDA Forest Service's ROMI rough mill simulation software is installed, so no external computing resources are needed when solving the least-cost lumber grade-mix problem.
Master of Science
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Nguyen, Thanh Hung [Verfasser], and Gerhard [Akademischer Betreuer] Pfister. "Combinations of Boolean Gröbner Bases and SAT Solvers / Thanh Hung Nguyen. Betreuer: Gerhard Pfister". Kaiserslautern : Technische Universität Kaiserslautern, 2014. http://d-nb.info/1064306241/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Krieger, Matthias. "Test generation and animation based on object-oriented specifications". Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00660427.

Full text source
Abstract:
The goal of this thesis is the development of support for test generation and animation based on object-oriented specifications. We aim particularly to take advantage of state-of-the-art satisfiability solving techniques by using an appropriate representation of object-oriented data. While automated test generation seeks a large set of data to execute an implementation on, animation performs computations that comply with a specification based on user-provided input data. Animation is a valuable technique for validating specifications. As a foundation of this work, we present clarifications and a partial formalization of the Object Constraint Language (OCL) as well as some extensions in order to allow for test generation and animation based on OCL specifications. For test generation, we have implemented several enhancements to HOL-TestGen, a tool built on top of the Isabelle theorem proving system that generates tests from specifications in Higher-Order Logic (HOL). We show how SMT solvers can be used to solve various types of constraints in HOL and present a modular approach to case splitting for deriving test cases. The latter facilitates the introduction of splitting rules that are tailored to object-oriented specifications. For animation, we implemented the tool OCLexec for animating OCL specifications. OCLexec generates, from operation contracts, corresponding Java implementations that call an SMT-based constraint solver at runtime.
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Swanson, Tayler John. "Properties of Mixing SAC Solder Alloys with Bismuthcontaining Solder Alloys for a Low Reflow Temperature Process". Thesis, Rochester Institute of Technology, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13812766.

Full text source
Abstract:

Establishing lower-temperature soldering processes for electronic assemblies, similar to the once common and still preferred eutectic tin-lead (SnPb) manufacturing processes that stay below 217 °C, has been the subject of extensive research. This work contributes data on mixed solder alloy assemblies that can be formed at lower process temperatures. There are many environmental and economic benefits to avoiding the reliability concerns of assembling electronics at the standard high peak temperatures of 230 °C to 260 °C. To reduce this temperature, bismuth-containing solder pastes are mixed with the standard high-temperature SAC solders. The materials evaluated are (in weight percentages) the 96.5Tin/3Silver/.5Copper (Sn/Ag/Cu) solder ball mixed with each of three solder pastes: the eutectic 58Bismuth/42Tin (58Bi/42Sn), 57Bi/42Sn/1Ag, and a proprietary alloy with a lower bismuth content and various micro-alloys, 40-58Bi/Sn/X (X representing proprietary micro-alloys or doping). In the assembly portion of this research the solder alloys were exposed to three different peak temperatures: 180 °C, 195 °C, and 205 °C. Another reflow profile attribute of focus was the time above 138 °C, the melting point of the eutectic Sn58Bi alloy. The ball-and-paste assembly portion of this research used times above melting of 120 s and 240 s to represent process extremes and verify their significance for improving mixing levels. These times above melting did not consistently improve the mixing levels and are therefore neither recommended nor required for mixed low-temperature solder assemblies. The results of this study suggest that the optimum reflow profile has a time above the melting point of at most 90 seconds for mixed solder alloy assemblies in “low” (< 200 °C) peak temperature reflow oven profiles. This attribute ensures a reflow window similar to that of eutectic SnPb processing. The second leg of this research was a component assembly of a large ball grid array at the same peak temperatures with a single time above 138 °C of 90 s. This “large” (> 20 mm a side) component is a SAC405 solder-balled BGA with dimensions of 42 × 28 × 0.8 mm. With any large component the temperature gradient across the component is a risk factor, and the results show significant differences in mixing from the center of the component to the edge due to an average 2.3 °C temperature difference during convection reflow. The average mixing levels recorded for Tpeak = 180 °C were 58Bi = 47%, 57Bi = 47%, and 40-58Bi = 44%; for Tpeak = 195 °C they were 58Bi = 69%, 57Bi = 77%, and 40-58Bi = 57%. The conclusions also match previous work identifying reflow peak temperature as a significant factor in mixing percentage. The goal of this work was to add to the knowledge of the electronics industry for a better understanding of the microstructure and mixing mechanisms of Bi/Sn/X-SAC solder joints for low-temperature reflow assembly processes.

Styles: APA, Harvard, Vancouver, ISO, etc.
42

Manthey, Norbert [Verfasser], Steffen [Akademischer Betreuer] Hölldobler and Armin [Akademischer Betreuer] Biere. "Towards Next Generation Sequential and Parallel SAT Solvers / Norbert Manthey. Gutachter: Steffen Hölldobler ; Armin Biere". Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://d-nb.info/1069092452/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Lafitte, Frédéric. "On the automated verification of symmetric-key cryptographic algorithms: an approach based on SAT-solvers". Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/257908.

Full text source
Abstract:
A cryptographic protocol is a structured exchange of messages protected by means of cryptographic algorithms. Computer security in general relies heavily on these protocols and algorithms; in turn, these rely absolutely on smaller components called primitives. As technology advances, computers have reached a cost and a degree of miniaturisation conducive to their proliferation throughout society in the form of software-controlled, network-enabled things. As these things find their way into environments where security is critical, their protection ultimately relies on primitives; if a primitive fails, all security solutions (protocols, policies, etc.) built on top of it are likely to offer no security at all. Lightweight symmetric-key primitives, in particular, will play a critical role. The security of protocols is frequently verified using formal and automated methods. Concerning algorithms and public-key primitives, formal proofs are often used, although they are somewhat error-prone and current efforts aim to automate them. Symmetric-key primitives, on the other hand, are still analysed in a rather ad-hoc manner. Since their security is only guaranteed by the test of time, they traditionally have a built-in security margin. Despite being paramount to the security of embedded devices, lightweight primitives appear to have a smaller security margin, and researchers would greatly benefit from automated tools in order to strengthen tests of time. In their seminal work back in 2000, Massacci and Marraro proposed to formulate primitives in propositional logic and to use SAT solvers to automatically verify their properties. At that time, SAT solvers were quite different from what they have become today; the continuous improvement of their performance makes them an even better choice as a verification back-end. Their performance improved so much that, starting around 2006, some cryptanalysts began to use them, mostly to speed up their attacks. This thesis introduces the framework CryptoSAT and shows its advantages for the purpose of verification: it allows a cryptologist to easily verify the properties of a symmetric-key primitive in a formal and automated way using SAT solvers, while abstracting away the underlying propositional logic. Its usefulness is demonstrated by obtaining formal answers to questions posed in the cryptanalysis literature concerning potential weaknesses in both the design and the implementation of certain primitives.
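The underlying idea, compiling a primitive into propositional logic so that a security question becomes a satisfiability query, fits in a few lines for a toy case. The sketch below is purely illustrative (it is not CryptoSAT and uses a brute-force check in place of a SAT solver): a 2-bit "cipher" c = p XOR k is encoded as CNF, a known plaintext/ciphertext pair is added as unit clauses, and key recovery becomes model enumeration.

```python
from itertools import product

def xor_clauses(x, y, z):
    """CNF for z <-> x XOR y, with integer (DIMACS-style) literals."""
    return [[-x, -y, -z], [x, y, -z], [-x, y, z], [x, -y, z]]

def all_models(num_vars, clauses):
    """Enumerate every satisfying assignment (toy stand-in for a SAT solver)."""
    for bits in product((False, True), repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            yield bits

# Variables: p0,p1 = 1,2 (plaintext), k0,k1 = 3,4 (key), c0,c1 = 5,6 (ciphertext).
cipher_cnf = xor_clauses(1, 3, 5) + xor_clauses(2, 4, 6)

# Known-plaintext "attack": fix p = (1,0) and c = (0,1) with unit clauses;
# every model of the formula then reveals a consistent key.
known = [[1], [-2], [-5], [6]]
models = list(all_models(6, cipher_cnf + known))
```

Here the unique model pins down k = (1, 1); for a real primitive the same pipeline produces millions of clauses, which is exactly where solver performance becomes decisive.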
Doctorate in Sciences
info:eu-repo/semantics/nonPublished
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Ishtaiwi, Abdelraouf. "Towards Effective Parameter-Free Clause Weighting Local Search for SAT". Thesis, Griffith University, 2008. http://hdl.handle.net/10072/366980.

Full text source
Abstract:
Recent research has shown that it is often preferable to encode real-world problems as propositional satisfiability (SAT) problems, and then solve them using general purpose SAT solvers. However, most SAT solvers require the tuning of parameters in order to obtain optimum performance. Tuning these parameters usually takes a considerable amount of time, and even to achieve average performance can require many runs with many different parameter settings. In this thesis we investigate various ways to improve the overall performance of local search solvers via new techniques that do not employ parameters and therefore take considerably less time for experimentation...
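Clause-weighting local search, the family of methods the thesis improves, can be sketched compactly. The toy solver below is a generic weighting scheme in the spirit of that family, not the thesis's actual parameter-free algorithm: it flips the variable that most reduces the total weight of unsatisfied clauses, and when stuck at a local minimum it bumps the weights of the clauses that remain unsatisfied, reshaping the landscape so the search can escape.

```python
import random

def weighted_local_search(num_vars, clauses, max_flips=10_000, seed=0):
    """Clause-weighting local search over a CNF (lists of integer literals).
    Returns a satisfying assignment as {var: bool}, or None if none is found."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
    weight = [1] * len(clauses)

    def weighted_cost():
        return sum(weight[i] for i, c in enumerate(clauses)
                   if not any(assign[abs(l)] == (l > 0) for l in c))

    for _ in range(max_flips):
        unsat = [i for i, c in enumerate(clauses)
                 if not any(assign[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return assign
        best_var, best_cost = None, weighted_cost()
        for v in range(1, num_vars + 1):     # evaluate every single flip
            assign[v] = not assign[v]
            c2 = weighted_cost()
            assign[v] = not assign[v]
            if c2 < best_cost:
                best_var, best_cost = v, c2
        if best_var is None:                 # local minimum: reweight
            for i in unsat:
                weight[i] += 1
        else:
            assign[best_var] = not assign[best_var]
    return None
```

Note that the scheme is incomplete (it can only return None on unsatisfiable input after exhausting `max_flips`), and that `max_flips` and the weight increment are exactly the kind of parameters the thesis seeks to eliminate.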
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Faculty of Engineering and Information Technology
Full Text
Styles: APA, Harvard, Vancouver, ISO, etc.
45

PALENA, MARCO. "Exploiting Boolean Satisfiability Solvers for High Performance Bit-Level Model Checking". Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2680997.

Full text source
Abstract:
Digital systems are nowadays ubiquitous and often comprise an extremely high level of complexity. Guaranteeing the correct behavior of such systems has become an ever more pressing need for manufacturers. Correctness of digital systems can be addressed by resorting to formal verification techniques such as model checking. A major issue of model checking techniques, however, is scalability. Recent performance advances in Boolean Satisfiability (SAT) solvers brought about a leap in the scalability of those techniques. SAT-based model checking algorithms, however, still suffer from non-negligible applicability issues when confronted with the complexity of many industrial-scale designs. In this dissertation we present several approaches to improve the performance and scalability of SAT-based model checking algorithms, focusing in particular on interpolation and IC3. Each of the presented approaches addresses scalability from a different perspective. The first approach focuses on the interaction between IC3 and the underlying SAT solver; IC3, in fact, proves to be highly sensitive to the way its SAT queries are handled, and we propose and compare different strategies for SAT solver management in IC3. The second approach consists of a novel interpolation-based algorithm that specifically targets the limited flexibility and incrementality of the original interpolation method. The proposed algorithm overcomes such limitations through the use of an incremental data structure for overapproximated reachability as well as techniques to better control the precision of the computed overapproximations. The third approach addresses interpolant compaction, proposing a novel SAT-based technique that relies on proof-based abstraction to achieve considerable compaction rates. Each of the proposed approaches was experimentally evaluated and proved to increase the scalability of SAT-based model checking algorithms, in particular when considered in the context of portfolio-based model checking tools.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Signati, Teresa. "Evaluating Coppersmith’s Criteria by way of SAT Solving". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16149/.

Full text source
Abstract:
S-boxes are the non-linear part of the DES cryptosystem. Over the years it has become clear that any kind of edit to the structure of the DES S-boxes increases the probability of successfully breaking the algorithm, which was very carefully designed. The reason why the S-boxes were built the way they were was clarified by Coppersmith years after the publication of the encryption algorithm. The aim of this thesis is to investigate Coppersmith's DES S-box design criteria and to evaluate them by way of SAT solving, in order to analyze the performance of SAT solvers on different versions of the DES algorithm in which the S-boxes respect only a sample of Coppersmith's design criteria. This aim is achieved thanks to the implementation of a Python tool, DESBoxGen. The main challenge in the design of DESBoxGen is finding a way to efficiently generate S-boxes satisfying certain criteria.
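One of the criteria Coppersmith later disclosed, and that any DESBoxGen-style generator must test, is an avalanche condition: changing a single input bit of an S-box must change at least two output bits. The check below is a plain brute-force sketch of that single criterion (not the thesis's tool), run here on the published DES S-box S1.

```python
# DES S-box S1 in the standard 4-row x 16-column form.
S1 = [
    [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
    [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
    [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
    [15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
]

def sbox_lookup(sbox, x):
    """6-bit input: bits 5 and 0 select the row, bits 4..1 the column."""
    row = ((x >> 4) & 0b10) | (x & 1)
    col = (x >> 1) & 0b1111
    return sbox[row][col]

def satisfies_avalanche(sbox):
    """True iff flipping any single input bit changes at least 2 output bits."""
    for x in range(64):
        for i in range(6):
            d = sbox_lookup(sbox, x) ^ sbox_lookup(sbox, x ^ (1 << i))
            if bin(d).count("1") < 2:
                return False
    return True
```

A trivially structured S-box (e.g. one whose output is just the column index) fails this check immediately, which is the kind of weakened variant the thesis feeds to SAT solvers.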
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Lonlac, Konlac Jerry Garvin. "Contributions à la résolution du problème de la Satisfiabilité Propositionnelle". Thesis, Artois, 2014. http://www.theses.fr/2014ARTO0404/document.

Full text source
Abstract:
In this thesis, we focus on the propositional satisfiability problem (SAT). This fundamental problem in complexity theory is now used in many application domains such as planning, bioinformatics, and hardware and software verification. Despite the enormous progress observed in recent years in practical SAT solving, there is still a strong demand for efficient algorithms that can help solve hard problems. Our contributions fit into this context and centre on two key components of SAT solvers: clause learning and variable branching heuristics. First, we propose a resolution method that exploits the hidden Boolean functions generally introduced during the CNF encoding phase to reduce the size of the clauses learned during search. Second, we propose a resolution approach based on an intensification principle that designates the variables on which the solver should branch in priority at each restart. This principle directs the search towards the most constrained Boolean sub-formula and takes advantage of the search effort already performed, avoiding exploring the same part of the search space several times. In a third contribution, we propose a new clause learning scheme that derives a particular class of Bi-Asserting clauses, and we show that exploiting them significantly improves the performance of state-of-the-art CDCL SAT solvers. Finally, we consider the main strategies used in the literature for managing the learned-clause database. Starting from two simple reduction strategies, deleting clauses at random and using clause size as the criterion for judging the quality of a learned clause, and motivated by the results obtained with them, we propose several new effective strategies that combine keeping short clauses (of size bounded by k) with randomly deleting clauses of size greater than k. These new strategies allow us to identify the clauses most relevant to the search process.
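The family of reduction strategies described above is easy to state in code. The sketch below is a hedged reconstruction of the general scheme, not the thesis's exact implementation: keep every learnt clause of size at most k, and retain only a fixed fraction of the longer ones, chosen at random.

```python
import random

def reduce_learnt_db(learnt, k=2, keep_ratio=0.5, rng=None):
    """Keep all clauses of size <= k; keep only `keep_ratio` of the longer
    ones, chosen uniformly at random. Clauses are lists of integer literals."""
    rng = rng or random.Random(0)
    short = [c for c in learnt if len(c) <= k]
    long_ = [c for c in learnt if len(c) > k]
    kept_long = rng.sample(long_, int(len(long_) * keep_ratio))
    return short + kept_long
```

In a CDCL solver such a reduction would run periodically; the point of the thesis's experiments is that protecting short clauses while pruning long ones at random already identifies most of the clauses that matter.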
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Chaoran. "Comparison of thermal fatigue reliability between SAC and SnPb solders under various stress range conditions /". View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?MECH%202009%20YANGC.

Full source text
APA, Harvard, Vancouver, ISO, and other styles
49

Le, Frioux Ludovic. "Towards more efficient parallel SAT solving". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS209.

Full source text
Abstract:
Solvers for the Boolean SATisfiability problem are used successfully in many contexts, thanks to their ability to solve enormous problems. Most SAT solvers have long been sequential. The emergence of multi-core machines opens new perspectives. There are many parallel SAT solvers, differing in their strategies, languages, etc. It is therefore difficult to compare the efficiency of different strategies fairly. Moreover, introducing new approaches requires a deep understanding of the implementation of existing solvers. We present Painless: a framework for implementing parallel SAT solvers on multi-core machines. Thanks to its modularity, it offers an implementation of the basic components and also lets users easily create their own parallel solvers based on new strategies. Painless has made it possible to build and test many solvers. We were able to mimic the behaviour of three state-of-the-art solvers while factoring out many parts. The efficiency of Painless was highlighted by the fact that these implementations are at least as efficient as the originals. Moreover, one of our solvers won the 2018 SAT Competition. Painless allowed us to conduct fair experiments with divide-and-conquer solvers and to bring to light original compositions of heuristics that perform better than those already known. In addition, we were able to create and test new strategies exploiting the modular structure of SAT formulas.
Boolean SATisfiability has been used successfully in many applicative contexts. This is due to the capability of modern SAT solvers to solve complex problems involving millions of variables. Most SAT solvers have long been sequential and based on the CDCL algorithm. The emergence of many-core machines opens new possibilities in this domain. There are numerous parallel SAT solvers that differ by their strategies, programming languages, etc. Hence, comparing the efficiency of the theoretical approaches in a fair way is a challenging task. Moreover, the introduction of a new approach needs a deep understanding of the existing solvers' implementations. We present Painless: a framework to build parallel SAT solvers for many-core environments. Thanks to its genericity and modularity, it provides implementations of the basics of parallel SAT solving. It also enables users to easily create their own parallel solvers based on new strategies. Painless allowed us to build and test existing strategies by using different chunks of solutions present in the literature. We were able to easily mimic the behaviour of three state-of-the-art solvers by factoring out many parts of their implementations. The efficiency of Painless was highlighted by the fact that these implementations are at least as efficient as the original ones. Moreover, one of our solvers won the SAT Competition 2018. Going further, Painless enabled us to conduct fair experiments in the context of divide-and-conquer solvers, and allowed us to highlight original compositions of strategies performing better than already known ones. Also, we were able to create and test new original strategies exploiting the modular structure of SAT instances.
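The portfolio style of parallel solving that frameworks like Painless support (several differently configured solvers racing on the same formula, first answer wins) can be illustrated with a toy backtracking solver. This is a minimal sketch, not the Painless API: the names `solve` and `portfolio`, the heuristic interface, and the absence of clause sharing and early cancellation are all simplifications.

```python
import threading

def solve(clauses, heuristic, assignment=None):
    """Toy backtracking SAT search (no clause learning), parameterized
    by a branching heuristic that picks the next variable to assign."""
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                      # clause already satisfied
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return None                   # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                 # every clause satisfied: model found
    var = heuristic(simplified)
    for value in (True, False):
        model = solve(simplified, heuristic, {**assignment, var: value})
        if model is not None:
            return model
    return None

def portfolio(clauses, heuristics):
    """Run one thread per heuristic on the same formula; the first thread
    to finish publishes its result (no early cancellation in this sketch)."""
    first_done = threading.Event()
    results = []
    def worker(h):
        model = solve(clauses, h)
        if not first_done.is_set():
            results.append(model)
            first_done.set()
    threads = [threading.Thread(target=worker, args=(h,)) for h in heuristics]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results[0]
```

A real portfolio solver would additionally share learned clauses between workers and cancel the losers as soon as one worker finishes; the point of a framework like Painless is that those sharing and sequencing strategies become pluggable components instead of being hard-wired.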
APA, Harvard, Vancouver, ISO, and other styles
50

Matras, Jan. "Aplikace reaktivních nanočástic do SAC pájecí pasty". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-377074.

Full source text
Abstract:
This work surveys reactive nanoparticles and their admixture into solder paste, which it also describes. It details the properties of the individual solder alloys, explains the formation of intermetallic layers during the soldering process, and examines their structure. It also covers the evaluation and methodology of testing the properties of solder pastes. In the practical part, individual tests are performed with the PF606 and PF610 solder pastes.
APA, Harvard, Vancouver, ISO, and other styles
