Doctoral dissertations on the topic "Constraint satisfaction problem formalisms"


Browse the top 50 doctoral dissertations on the topic "Constraint satisfaction problem formalisms".


1

Pham, Duc Nghia, and n/a. "Modelling and Exploiting Structures in Solving Propositional Satisfiability Problems". Griffith University. Institute for Integrated and Intelligent Systems, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070216.143447.

Full text of the source
Abstract:
Recent research has shown that it is often preferable to encode real-world problems as propositional satisfiability (SAT) problems and then solve using a general purpose SAT solver. However, much of the valuable information and structure of these realistic problems is flattened out and hidden inside the corresponding Conjunctive Normal Form (CNF) encodings of the SAT domain. Recently, systematic SAT solvers have been progressively improved and are now able to solve many highly structured practical problems containing millions of clauses. In contrast, state-of-the-art Stochastic Local Search (SLS) solvers still have difficulty in solving structured problems, apparently because they are unable to exploit hidden structure as well as the systematic solvers. In this thesis, we study and evaluate different ways to effectively recognise, model and efficiently exploit useful structures hidden in realistic problems. A summary of the main contributions is as follows: 1. We first investigate an off-line processing phase that applies resolution-based pre-processors to input formulas before running SLS solvers on these problems. We report an extensive empirical examination of the impact of SAT pre-processing on the performance of contemporary SLS techniques. It emerges that while all the solvers examined do indeed benefit from pre-processing, the effects of different pre-processors are far from uniform across solvers and across problems. Our results suggest that SLS solvers need to be equipped with multiple pre-processors if they are ever to match the performance of systematic solvers on highly structured problems. [Part of this study was published at the AAAI-05 conference]. 2. We then look at potential approaches to bridging the gap between SAT and constraint satisfaction problem (CSP) formalisms. One approach has been to develop a many-valued SAT formalism (MV-SAT) as an intermediate paradigm between SAT and CSP, and then to translate existing highly efficient SAT solvers to the MV-SAT domain. In this study, we follow a different route, developing SAT solvers that can automatically recognise CSP structure hidden in SAT encodings. This allows us to look more closely at how constraint weighting can be implemented in the SAT and CSP domains. Our experimental results show that a SAT-based mechanism to handle weights, together with a CSP-based method to instantiate variables, is superior to other combinations of SAT and CSP-based approaches. In addition, SLS solvers based on this many-valued weighting approach outperform other existing approaches to handle many-valued CSP structures. [Part of this study was published at the AAAI-05 conference]. 3. Finally, we propose and evaluate six different schemes to encode temporal reasoning problems, in particular the Interval Algebra (IA) networks, into SAT CNF formulas. We then empirically examine the performance of local search as well as systematic solvers on the new temporal SAT representations, in comparison with solvers that operate on native IA representations. Our empirical results show that zChaff (a state-of-the-art complete SAT solver) together with the best IA-to-SAT encoding scheme, can solve temporal problems significantly faster than existing IA solvers working on the equivalent native IA networks. [Part of this study was published at the CP-05 workshop].
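As a purely illustrative aside (not code from the thesis), the sketch below builds the standard direct CNF encoding of a small not-equal CSP and checks it with a toy brute-force stand-in for a real SAT solver; the function names and the triangle-colouring instance are invented for the example.

```python
from itertools import product

def direct_encode_neq(num_vars, domain_size, neq_pairs):
    """Direct encoding of a not-equal CSP into CNF clauses.

    Boolean variable 1 + v*domain_size + d means "CSP variable v = value d".
    """
    lit = lambda v, d: 1 + v * domain_size + d
    clauses = []
    for v in range(num_vars):
        # at-least-one value per CSP variable
        clauses.append([lit(v, d) for d in range(domain_size)])
        # at-most-one value, encoded pairwise
        for d1 in range(domain_size):
            for d2 in range(d1 + 1, domain_size):
                clauses.append([-lit(v, d1), -lit(v, d2)])
    for u, v in neq_pairs:  # conflict clauses: u and v may not share a value
        for d in range(domain_size):
            clauses.append([-lit(u, d), -lit(v, d)])
    return clauses

def brute_force_sat(clauses, n_bool):
    """Tiny stand-in for a SAT solver: try all 2^n_bool assignments."""
    for bits in product([False, True], repeat=n_bool):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[l] if l > 0 else not assign[-l] for l in clause)
               for clause in clauses):
            return assign
    return None

# 3-colouring a triangle is satisfiable
clauses = direct_encode_neq(3, 3, [(0, 1), (1, 2), (0, 2)])
print(brute_force_sat(clauses, 9) is not None)  # True
```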
2

Pham, Duc Nghia. "Modelling and Exploiting Structures in Solving Propositional Satisfiability Problems". Thesis, Griffith University, 2006. http://hdl.handle.net/10072/365503.

Full text of the source
Abstract:
Recent research has shown that it is often preferable to encode real-world problems as propositional satisfiability (SAT) problems and then solve using a general purpose SAT solver. However, much of the valuable information and structure of these realistic problems is flattened out and hidden inside the corresponding Conjunctive Normal Form (CNF) encodings of the SAT domain. Recently, systematic SAT solvers have been progressively improved and are now able to solve many highly structured practical problems containing millions of clauses. In contrast, state-of-the-art Stochastic Local Search (SLS) solvers still have difficulty in solving structured problems, apparently because they are unable to exploit hidden structure as well as the systematic solvers. In this thesis, we study and evaluate different ways to effectively recognise, model and efficiently exploit useful structures hidden in realistic problems. A summary of the main contributions is as follows: 1. We first investigate an off-line processing phase that applies resolution-based pre-processors to input formulas before running SLS solvers on these problems. We report an extensive empirical examination of the impact of SAT pre-processing on the performance of contemporary SLS techniques. It emerges that while all the solvers examined do indeed benefit from pre-processing, the effects of different pre-processors are far from uniform across solvers and across problems. Our results suggest that SLS solvers need to be equipped with multiple pre-processors if they are ever to match the performance of systematic solvers on highly structured problems. [Part of this study was published at the AAAI-05 conference]. 2. We then look at potential approaches to bridging the gap between SAT and constraint satisfaction problem (CSP) formalisms. One approach has been to develop a many-valued SAT formalism (MV-SAT) as an intermediate paradigm between SAT and CSP, and then to translate existing highly efficient SAT solvers to the MV-SAT domain. In this study, we follow a different route, developing SAT solvers that can automatically recognise CSP structure hidden in SAT encodings. This allows us to look more closely at how constraint weighting can be implemented in the SAT and CSP domains. Our experimental results show that a SAT-based mechanism to handle weights, together with a CSP-based method to instantiate variables, is superior to other combinations of SAT and CSP-based approaches. In addition, SLS solvers based on this many-valued weighting approach outperform other existing approaches to handle many-valued CSP structures. [Part of this study was published at the AAAI-05 conference]. 3. Finally, we propose and evaluate six different schemes to encode temporal reasoning problems, in particular the Interval Algebra (IA) networks, into SAT CNF formulas. We then empirically examine the performance of local search as well as systematic solvers on the new temporal SAT representations, in comparison with solvers that operate on native IA representations. Our empirical results show that zChaff (a state-of-the-art complete SAT solver) together with the best IA-to-SAT encoding scheme, can solve temporal problems significantly faster than existing IA solvers working on the equivalent native IA networks. [Part of this study was published at the CP-05 workshop].
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Full Text
3

Thebault, Patricia. "Formalisme CSP (Constraint Satisfaction Problem) et localisation de motifs structurés dans les textes génomiques". PhD thesis, Université Paul Sabatier - Toulouse III, 2004. http://tel.archives-ouvertes.fr/tel-00011452.

Full text of the source
Abstract:
The search for occurrences of RNA genes in genomic sequences is a problem whose importance has been renewed by the recent discovery of very many functional RNAs, often operating in interaction with other molecules.

The constraint network formalism is well suited to this problem, both for modelling it and for the algorithmic developments it makes possible.

After analysing and comparing existing tools, recast in the constraint network framework, we show how the joint use of constraint networks, the associated resolution techniques, and the algorithms and data structures of pattern matching makes it possible to model and efficiently search for structured motifs in interaction (involving several genomic texts simultaneously).
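To give a concrete, hedged flavour of constraint-based motif search (this is not the thesis's tool), the following Python sketch treats the positions of two boxes as variables and the pairing and distance requirements as binary constraints; the sequence, stem length and loop bounds are invented.

```python
from itertools import product

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(s):
    return "".join(COMPLEMENT[c] for c in reversed(s))

def find_helices(seq, stem_len, dmin, dmax):
    """Enumerate (i, j) positions of two stem_len-long boxes such that the
    second box is the reverse complement of the first (so they could pair
    into a helix) and the loop between them has a length in [dmin, dmax].
    Each position variable ranges over the text; pairing and distance act
    as binary constraints between the two variables.
    """
    n = len(seq)
    solutions = []
    for i, j in product(range(n - stem_len + 1), repeat=2):
        loop = j - (i + stem_len)
        if not (dmin <= loop <= dmax):            # distance constraint
            continue
        if seq[j:j + stem_len] == revcomp(seq[i:i + stem_len]):  # pairing
            solutions.append((i, j))
    return solutions

print(find_helices("GGACTTTTGTCC", 4, 3, 5))  # [(0, 8)]
```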
4

Fulla, Peter. "On the valued constraint satisfaction problem". Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:bb2491ef-d802-4c5d-a388-a042644a4b47.

Full text of the source
Abstract:
The Valued Constraint Satisfaction Problem (VCSP) is a framework which captures many natural decision and optimisation problems. An instance of the VCSP consists of a set of variables, which are to be assigned labels from a finite domain, and a collection of local constraints, each specified by a weighted relation mapping labellings of the variables in the constraint's scope to values. The objective is to minimise the total value from all constraints. The VCSP is commonly parameterised by a language, i.e. a set of weighted relations that are available for use in the constraints. Languages are classified according to the computational complexity of the VCSP as tractable, for which the problem can be solved in polynomial time, and intractable, for which the problem is NP-hard. The recently proved VCSP dichotomy theorem established a classification of all languages into these two categories. Additionally, various structural restrictions can be imposed to limit the set of admissible instances further, thus potentially changing the complexity of the VCSP. Our first contribution relates to the algebraic approach to the VCSP, which proved instrumental in recent advances in the field. We generalise the Galois connection between weighted relational clones and weighted clones so that it applies to infinite sets as well. Second, we study a structural restriction requiring that the incidence graph be planar. In this setting, we establish a complexity classification of conservative languages (i.e. languages containing all {0,1}-valued unary weighted relations) and a necessary tractability condition for Boolean languages (i.e. languages over a two-element domain). Third, we study the surjective variant of the VCSP, in which labellings are required to assign every domain element to at least one variable. We establish a complexity classification of Boolean languages, which encompasses a new tractable class of problems.
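The VCSP definition in this abstract can be made concrete with a tiny exhaustive solver. A minimal, purely illustrative sketch with invented weighted relations (float('inf') would encode a hard, forbidden combination):

```python
from itertools import product

def solve_vcsp(num_vars, domain, constraints):
    """Exhaustive solver for a tiny VCSP instance.

    Each constraint is (scope, weighted_relation), where scope is a tuple of
    variable indices and weighted_relation maps a tuple of labels to a value.
    Returns the minimum total value and a minimising assignment.
    """
    best, best_assignment = float("inf"), None
    for labels in product(domain, repeat=num_vars):
        value = sum(rel[tuple(labels[v] for v in scope)]
                    for scope, rel in constraints)
        if value < best:
            best, best_assignment = value, labels
    return best, best_assignment

# a soft not-equal constraint on (x0, x1) plus a unary preference on x0
neq = {(a, b): (1 if a == b else 0) for a in (0, 1) for b in (0, 1)}
prefer_one = {(0,): 2, (1,): 0}
print(solve_vcsp(2, (0, 1), [((0, 1), neq), ((0,), prefer_one)]))
# (0, (1, 0))
```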
5

Nightingale, Peter. "Consistency and the quantified constraint satisfaction problem /". St Andrews, 2007. http://hdl.handle.net/10023/759.

Full text of the source
6

Nightingale, Peter William. "Consistency and the Quantified Constraint Satisfaction Problem". Thesis, University of St Andrews, 2007. http://hdl.handle.net/10023/759.

Full text of the source
Abstract:
Constraint satisfaction is a very well studied and fundamental artificial intelligence technique. Various forms of knowledge can be represented with constraints, and reasoning techniques from disparate fields can be encapsulated within constraint reasoning algorithms. However, problems involving uncertainty, or which have an adversarial nature (for example, games), are difficult to express and solve in the classical constraint satisfaction problem. This thesis is concerned with an extension to the classical problem: the Quantified Constraint Satisfaction Problem (QCSP). QCSP has recently attracted interest. In QCSP, quantifiers are allowed, facilitating the expression of uncertainty. I examine whether QCSP is a useful formalism. This divides into two questions: whether QCSP can be solved efficiently; and whether realistic problems can be represented in QCSP. In attempting to answer these questions, the main contributions of this thesis are the following: - the definition of two new notions of consistency; - four new constraint propagation algorithms (with eight variants in total), along with empirical evaluations; - two novel schemes to implement the pure value rule, which is able to simplify QCSP instances; - a new optimization algorithm for QCSP; - the integration of these algorithms and techniques into a solver named Queso; - and the modelling of the Connect 4 game, and of faulty job shop scheduling, in QCSP. These are set in context by a thorough review of the QCSP literature.
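To make the role of quantifiers concrete, here is a minimal illustrative QCSP evaluator over the Boolean domain (it is not the Queso solver described above, and it ignores propagation entirely); the instances are invented.

```python
def solve_qcsp(prefix, constraints, assignment=()):
    """Evaluate a QCSP instance over the domain {0, 1} by recursion.

    prefix is a list of ('forall' | 'exists', variable_index) pairs in
    quantification order; constraints is a list of predicates over the full
    assignment tuple.  Returns True iff the quantified formula holds.
    """
    if len(assignment) == len(prefix):
        return all(c(assignment) for c in constraints)
    quantifier, _ = prefix[len(assignment)]
    branches = (solve_qcsp(prefix, constraints, assignment + (d,))
                for d in (0, 1))
    return all(branches) if quantifier == "forall" else any(branches)

# forall x0 exists x1 : x0 != x1   -- holds over {0, 1}
print(solve_qcsp([("forall", 0), ("exists", 1)],
                 [lambda a: a[0] != a[1]]))  # True
# exists x0 forall x1 : x0 != x1   -- does not hold
print(solve_qcsp([("exists", 0), ("forall", 1)],
                 [lambda a: a[0] != a[1]]))  # False
```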
7

Zhang, Lixi. "Solving the timetabling problem using constraint satisfaction programming". Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20051104.155838/index.html.

Full text of the source
8

Powell, Robert David. "Complexity classifications for the valued constraint satisfaction problem". Thesis, Durham University, 2016. http://etheses.dur.ac.uk/11485/.

Full text of the source
Abstract:
In a valued constraint satisfaction problem (VCSP), the goal is to find an assignment of values to variables that minimizes a given sum of functions. Each function in the sum depends on a subset of variables, takes values which are rational numbers or infinity, and is chosen from a fixed finite set of functions called a constraint language. We study how the computational complexity of this problem depends on the constraint language. We often consider the case where infinite values are disallowed, and refer to such constraint languages as being finite-valued. If we consider such finite-valued constraint languages, the case where we allow variables to take two values was classified by Cohen et al., who show that submodular functions essentially give rise to the only tractable case. Non-submodular functions can be used to express the NP-hard Max Cut problem. We consider the case where the variables can take three values, and identify a new infinite set of functions called skew bisubmodular functions which imply tractability. We prove that submodularity with respect to some total order and skew bisubmodularity give rise to the only tractable cases, and in all other cases Max Cut can be expressed. We also show that our characterisation of tractable cases is tight, that is, none of the conditions can be omitted. Thus, our results provide a new dichotomy theorem in constraint satisfaction research. We also negatively answer the question of whether multimorphisms can capture all necessary tractable constraint languages. We then study the VCSP as a homomorphism problem on digraphs. By adapting a proof designed for CSPs we show that each VCSP with a fixed finite constraint language is equivalent to one where the constraint language consists of one {0,infinity}-valued binary function (i.e. a digraph) and one finite-valued unary function. This latter problem is known as the Minimum Cost Homomorphism Problem for digraphs. We also show that our reduction preserves a number of useful algebraic properties of the constraint language. Finally, given a finite-valued constraint language, we consider the case where the variables of our VCSP are allowed to take four values. We prove that 1-defect chain multimorphisms, which are required in the four element dichotomy of Min CSP, are a special case of more general fractional polymorphisms we call {a,b}-1-defect fractional polymorphisms. We conclude with a conjecture for the four element case, and some interesting open problems which might lead to a tighter description of tractable finite-valued constraint languages on finite domains of any size.
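The submodularity condition central to this classification can be checked mechanically on small functions. A hedged sketch over the Boolean domain, assuming the usual componentwise meet/join order:

```python
from itertools import product

def is_submodular(f, domain, arity):
    """Check f : domain^arity -> values against the submodularity inequality
    f(meet(x, y)) + f(join(x, y)) <= f(x) + f(y), with meet/join taken as the
    componentwise min/max in the natural order on the domain."""
    for x in product(domain, repeat=arity):
        for y in product(domain, repeat=arity):
            meet = tuple(map(min, x, y))
            join = tuple(map(max, x, y))
            if f(meet) + f(join) > f(x) + f(y) + 1e-9:
                return False
    return True

# minimising the binary equality function over the edges of a graph is
# exactly Max Cut, and equality is indeed not submodular:
eq = lambda t: 1 if t[0] == t[1] else 0
neq = lambda t: 1 if t[0] != t[1] else 0
print(is_submodular(eq, (0, 1), 2))   # False
print(is_submodular(neq, (0, 1), 2))  # True
```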
9

Andersson, Jakob. "Automatic Invoice Data Extraction as a Constraint Satisfaction Problem". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411596.

Full text of the source
Abstract:
Invoice processing has traditionally been heavily dependent on manual labor, where the task is to identify and move certain information from an origin to a destination: a time-demanding task with a high interest in automation to reduce time of execution, fault risk and cost. With the ever-growing interest in automation and Artificial Intelligence (AI), this thesis explores the possibilities of automating the task of extracting and mapping information of interest by defining the problem as a Constraint Optimization Problem (COP) using numeric relations between present information. The problem is then solved by extracting the numerical values in a document and utilizing them as an input space where each combination of numeric values is tested using a backend solver. Several different models were defined, using different approaches and constraints on relations between possible existing fields. A solution to an invoice was considered correct if the total, tax, net and rounding amounts were estimated correctly. The final best achieved results were 84.30% correct and 8.77% incorrect solutions on a set of 1400 various types of invoices. The achieved results show a promising alternative route to proposed solutions using e.g. machine learning or other intelligent solutions using graphical or positional data. Since it only regards the numerical values present in each document, the proposed solution becomes decentralized and can therefore be implemented and run on any set of invoices without any pre-training phase.
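A rough sketch of the numeric-relation idea summarised above (not the thesis's implementation, which delegates to a backend constraint solver): enumerate ways of assigning extracted numbers to fields and keep those satisfying total = net + tax + rounding. The tolerance and the tax <= net filter are invented for the example.

```python
from itertools import permutations

def match_invoice_fields(numbers, tol=0.005):
    """Try every way to pick (total, net, tax, rounding) from the numeric
    values found on an invoice and keep the combinations satisfying the
    relation total == net + tax + rounding (rounding may also be absent).
    """
    candidates = []
    for total, net, tax in permutations(numbers, 3):
        for rounding in [0.0] + [x for x in numbers
                                 if x not in (total, net, tax)]:
            if abs(total - (net + tax + rounding)) <= tol and tax <= net:
                candidates.append({"total": total, "net": net,
                                   "tax": tax, "rounding": rounding})
    return candidates

# numbers as they might be extracted from one invoice, in no particular order
values = [125.0, 100.0, 25.0, 17.5, 2020.0]
for sol in match_invoice_fields(values):
    print(sol)
# {'total': 125.0, 'net': 100.0, 'tax': 25.0, 'rounding': 0.0}
```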
10

Magaji, Amina Sambo-Muhammad. "Combining search strategies for distributed constraint satisfaction". Thesis, Robert Gordon University, 2015. http://hdl.handle.net/10059/1374.

Full text of the source
Abstract:
Many real-life problems such as distributed meeting scheduling, mobile frequency allocation and resource allocation can be solved using multi-agent paradigms. Distributed constraint satisfaction problems (DisCSPs) are a framework for describing such problems in terms of related subproblems, called complex local problems (CLPs), which are dispersed over a number of locations, each with its own constraints on the values their variables can take. An agent knows the variables in its CLP plus the variables (and their current value) which are directly related to one of its own variables and the constraints relating them. It knows little about the rest of the problem. Thus, each CLP is solved by an agent which cooperates with other agents to solve the overall problem. Algorithms for solving DisCSPs can be classified as either systematic or local search, with the former being complete and the latter incomplete. The algorithms generally assume that each agent has only one variable, as they can solve DisCSPs with CLPs using "virtual" agents. However, in large DisCSPs where it is appropriate to trade completeness off against timeliness, systematic search algorithms can be expensive when compared to local search algorithms, which generally converge more quickly to a solution (if a solution is found). A major drawback of local search algorithms is getting stuck at local optima. Significant research has focused on heuristics which can be used in an attempt to either escape or avoid local optima. This thesis makes significant contributions to local search algorithms for DisCSPs. Firstly, we present a novel combination of heuristics in DynAPP (Dynamic Agent Prioritisation with Penalties), which is a distributed synchronous local search algorithm for solving DisCSPs having one variable per agent. DynAPP combines penalties on values and dynamic agent prioritisation heuristics to escape local optima. Secondly, we develop a divide and conquer approach that handles DisCSPs with CLPs by exploiting the structure of the problem. The divide and conquer approach prioritises the finding of variable instantiations which satisfy the constraints between agents, which are often more expensive to satisfy than constraints within an agent. The approach also exploits concurrency and combines the following search strategies: (i) both systematic and local searches; (ii) both centralised and distributed searches; and (iii) a modified compilation strategy. We also present an algorithm that implements the divide and conquer approach in Multi-DCA (Divide and Conquer Algorithm for Agents with CLPs). DynAPP and Multi-DCA were evaluated on several benchmark problems and compared to the leading algorithms for DisCSPs and DisCSPs with CLPs respectively. The results show that, in the region of difficult problems, combining search heuristics and exploiting problem structure in distributed constraint satisfaction achieve significant benefits (i.e., generally lower computational time and communication costs) over existing competing methods.
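To illustrate the general flavour of penalty-based local search, here is the classic breakout-style scheme in Python; it is not DynAPP itself, and it omits agents, distribution and prioritisation entirely. The instance is invented.

```python
import random

def penalty_local_search(num_vars, domain, constraints,
                         max_steps=10_000, seed=0):
    """Breakout-style local search: each constraint carries a penalty weight
    that is increased whenever the search is stuck at a local optimum, so
    repeatedly violated constraints become more expensive to leave unsatisfied.
    constraints is a list of predicates over the full assignment list.
    """
    rng = random.Random(seed)
    assignment = [rng.choice(domain) for _ in range(num_vars)]
    weights = [1] * len(constraints)

    def cost(a):
        return sum(w for w, c in zip(weights, constraints) if not c(a))

    for _ in range(max_steps):
        if cost(assignment) == 0:
            return assignment
        best_moves, best_delta = [], 0
        for v in range(num_vars):
            for d in domain:
                if d == assignment[v]:
                    continue
                trial = assignment[:]
                trial[v] = d
                delta = cost(trial) - cost(assignment)
                if delta < best_delta:
                    best_moves, best_delta = [(v, d)], delta
        if best_delta < 0:
            v, d = rng.choice(best_moves)
            assignment[v] = d
        else:  # local optimum: penalise currently violated constraints
            for i, c in enumerate(constraints):
                if not c(assignment):
                    weights[i] += 1
    return None

# 3-colour a triangle plus one pendant edge
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
cons = [lambda a, u=u, v=v: a[u] != a[v] for u, v in edges]
print(penalty_local_search(4, [0, 1, 2], cons))
# prints a conflict-free colouring such as [1, 2, 0, 1]
```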
11

Carbonnel, Clément. "Harnessing tractability in constraint satisfaction problems". Thesis, Toulouse, INPT, 2016. http://www.theses.fr/2016INPT0118/document.

Full text of the source
Abstract:
The Constraint Satisfaction Problem (CSP) is a fundamental NP-complete problem with many applications in artificial intelligence. This problem has enjoyed considerable scientific attention in the past decades due to its practical usefulness and the deep theoretical questions it relates to. However, there is a wide gap between practitioners, who develop solving techniques that are efficient for industrial instances but exponential in the worst case, and theorists who design sophisticated polynomial-time algorithms for restrictions of CSP defined by certain algebraic properties. In this thesis we attempt to bridge this gap by providing polynomial-time algorithms to test for membership in a selection of major tractable classes. Even if the instance does not belong to one of these classes, we investigate the possibility of efficiently decomposing a CSP instance into tractable subproblems through the lens of parameterized complexity. Finally, we propose a general framework to adapt the concept of kernelization, central to parameterized complexity but hitherto rarely used in practice, to the context of constraint reasoning. Preliminary experiments on this last contribution show promising results.
12

Wu, Yi. "The Approximability of Learning and Constraint Satisfaction Problems". Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/24.

Full text of the source
Abstract:
An α-approximation algorithm is an algorithm guaranteed to output a solution that is within an α ratio of the optimal solution. We are interested in the following question: given an NP-hard optimization problem, what is the best approximation guarantee that any polynomial time algorithm could achieve? We mostly focus on studying the approximability of two classes of NP-hard problems: Constraint Satisfaction Problems (CSPs) and Computational Learning Problems. For CSPs, we mainly study the approximability of MAX CUT, MAX 3-CSP, MAX 2-LINZ, VERTEX-PRICING, as well as several variants of the UNIQUE GAMES.
• The problem of MAX CUT is to find a partition of a graph so as to maximize the number of edges between the two partitions. Assuming the Unique Games Conjecture, we give a complete characterization of the approximation curve of the MAX CUT problem: for every optimum value of the instance, we show that a certain SDP algorithm with RPR2 rounding always achieves the optimal approximation curve.
• The input to a 3-CSP is a set of Boolean constraints such that each constraint contains at most 3 Boolean variables. The goal is to find an assignment to these variables to maximize the number of satisfied constraints. We are interested in the case when a 3-CSP is satisfiable, i.e., there does exist an assignment that satisfies every constraint. Assuming the d-to-1 conjecture (a variant of the Unique Games Conjecture), we prove that it is NP-hard to give a better than 5/8-approximation for the problem. Such a result matches an SDP algorithm by Zwick which gives a 5/8-approximation for satisfiable 3-CSP. In addition, our result also conditionally resolves a fundamental open problem in PCP theory on the optimal soundness for a 3-query nonadaptive PCP system for NP with perfect completeness.
• The problem of MAX 2-LINZ involves linear systems of integer equations; these equations are so simple that each equation contains at most 2 variables. The goal is to find an assignment to the variables so as to maximize the total number of satisfied equations. It is a natural generalization of the Unique Games Conjecture, which addresses the hardness of the same equation systems over finite fields. We show that, assuming the Unique Games Conjecture, for a MAX 2-LINZ instance, even if there exists a solution that satisfies 1−ε of the equations, it is NP-hard to find one that satisfies ε of the equations, for any ε > 0.
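For readers unfamiliar with approximation guarantees, the textbook greedy 1/2-approximation for MAX CUT is sketched below; the SDP-plus-rounding algorithms analysed in the thesis are far more sophisticated, and the instance is invented.

```python
def greedy_max_cut(n, edges):
    """Greedy 1/2-approximation for Max Cut: place each vertex on the side
    that cuts more of its already-placed neighbours.  This only illustrates
    what an approximation guarantee means; it is not the thesis's algorithm.
    """
    side = {}
    for v in range(n):
        incident = [(u, w) for (u, w) in edges if (u == v) != (w == v)]
        gain0 = sum(1 for u, w in incident
                    if side.get(u if u != v else w) == 1)
        gain1 = sum(1 for u, w in incident
                    if side.get(u if u != v else w) == 0)
        side[v] = 0 if gain0 >= gain1 else 1
    cut = sum(1 for u, w in edges if side[u] != side[w])
    return cut, side

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle plus a chord
print(greedy_max_cut(4, edges))  # cuts 4 of 5 edges, the optimum here
```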
13

Likeman, Martin. "A study of centralised and distributed problem solving applied to water supply scheduling". Thesis, University of Reading, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241628.

Full text of the source
14

Chang, Chih-Hui 1967. "An Adaptive Linearization Method for a Constraint Satisfaction Problem in Semiconductor Device Design Optimization". Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc500248/.

Full text of the source
Abstract:
Device optimization is a very important element in semiconductor technology advancement. Its objective is to find a design point for a semiconductor device so that the optimized design goal meets all specified constraints. As in other engineering fields, a nonlinear optimizer is often used for design optimization. One major drawback of using a nonlinear optimizer is that it can only partially explore the design space and return a locally optimal solution. This dissertation provides an adaptive optimization design methodology that allows the designer to explore the design space and obtain a globally optimal solution. One key element of our method is to quickly compute the set of all feasible solutions, also called the acceptability region. We describe a polytope-based representation for the acceptability region and an adaptive linearization technique for device performance model approximation. These efficiency enhancements have enabled significant speed-ups in estimating acceptability regions and allow acceptability regions to be estimated for a larger class of device design tasks. Our linearization technique also provides an efficient mechanism to guarantee the global accuracy of the computed acceptability region. To visualize the acceptability region, we study the orthogonal projection of high-dimensional convex polytopes and propose an output-sensitive algorithm for projecting polytopes into two dimensions.
15

Uppman, Hannes. "On Some Combinatorial Optimization Problems : Algorithms and Complexity". Doctoral thesis, Linköpings universitet, Programvara och system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-116859.

Full text of the source
Abstract:
This thesis is about the computational complexity of several classes of combinatorial optimization problems, all related to the constraint satisfaction problems. A constraint language consists of a domain and a set of relations on the domain. For each such language there is a constraint satisfaction problem (CSP). In this problem we are given a set of variables and a collection of constraints, each of which is constraining some variables with a relation in the language. The goal is to determine if domain values can be assigned to the variables in a way that satisfies all constraints. An important question is for which constraint languages the corresponding CSP can be solved in polynomial time. We study this kind of question for optimization problems related to the CSPs. The main focus is on extended minimum cost homomorphism problems. These are optimization versions of CSPs where instances come with an objective function given by a weighted sum of unary cost functions, and where the goal is not only to determine if a solution exists, but to find one of minimum cost. We prove a complete classification of the complexity for these problems on three-element domains. We also obtain a classification for the so-called conservative case. Another class of combinatorial optimization problems are the surjective maximum CSPs. These problems are variants of CSPs where a non-negative weight is attached to each constraint, and the objective is to find a surjective mapping of the variables to values that maximizes the weighted sum of satisfied constraints. The surjectivity requirement causes these problems to behave quite different from for example the minimum cost homomorphism problems, and many powerful techniques are not applicable. We prove a dichotomy for the complexity of the problems in this class on two-element domains. An essential ingredient in the proof is an algorithm that solves a generalized version of the minimum cut problem. This algorithm might be of independent interest. In a final part we study properties of NP-hard optimization problems. This is done with the aid of restricted forms of polynomial-time reductions that for example preserves solvability in sub-exponential time. Two classes of optimization problems similar to those discussed above are considered, and for both we obtain what may be called an easiest NP-hard problem. We also establish some connections to the exponential time hypothesis.
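The surjectivity requirement discussed above is easy to state in code. A brute-force, purely illustrative sketch with invented weighted constraints:

```python
from itertools import product

def surjective_max_csp(num_vars, domain, weighted_constraints):
    """Brute-force the surjective Max-CSP: maximise the weighted sum of
    satisfied constraints over assignments that use every domain value at
    least once.  Each constraint is (weight, predicate)."""
    best, best_a = None, None
    for a in product(domain, repeat=num_vars):
        if set(a) != set(domain):      # surjectivity requirement
            continue
        score = sum(w for w, pred in weighted_constraints if pred(a))
        if best is None or score > best:
            best, best_a = score, a
    return best, best_a

cons = [(2, lambda a: a[0] == a[1]), (1, lambda a: a[1] != a[2])]
print(surjective_max_csp(3, (0, 1), cons))  # (3, (0, 0, 1))
```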
16

Nguyen, Van-Hau. "SAT Encodings of Finite CSPs". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-162186.

Full text of the source
Abstract:
Boolean satisfiability (SAT) is the problem of determining whether there exists an assignment of the Boolean variables to the truth values such that a given Boolean formula evaluates to true. SAT was the first example of an NP-complete problem. Only two decades ago SAT was mainly considered as of a theoretical interest. Nowadays, the picture is very different. SAT solving becomes mature and is a successful approach for tackling a large number of applications, ranging from artificial intelligence to industrial hardware design and verification. SAT solving consists of encodings and solvers. In order to benefit from the tremendous advances in the development of solvers, one must first encode the original problems into SAT instances. These encodings should not only be easily generated, but should also be efficiently processed by SAT solvers. Furthermore, an increasing number of practical applications in computer science can be expressed as constraint satisfaction problems (CSPs). However, encoding a CSP to SAT is currently regarded as more of an art than a science, and choosing an appropriate encoding is considered as important as choosing an algorithm. Moreover, it is much easier and more efficient to benefit from highly optimized state-of-the-art SAT solvers than to develop specialized tools from scratch. Hence, finding appropriate SAT encodings of CSPs is one of the most fascinating challenges for solving problems by SAT. This thesis studies SAT encodings of CSPs and aims at: 1) conducting a comprehensively profound study of SAT encodings of CSPs by separately investigating encodings of CSP domains and constraints; 2) proposing new SAT encodings of CSP domains; 3) proposing new SAT encoding of the at-most-one constraint, which is essential for encoding CSP variables; 4) introducing the redundant encoding and the hybrid encoding that aim to benefit from both two efficient and common SAT encodings (i.e., the sparse and order encodings) by using the channeling constraint (a term used in Constraint Programming) for SAT; and 5) revealing interesting guidelines on how to choose an appropriate SAT encoding in the way that one can exploit the availability of many efficient SAT solvers to solve CSPs efficiently and effectively. Experiments show that the proposed encodings and guidelines improve the state-of-the-art SAT encodings of CSPs.
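Two standard clause generators for the at-most-one constraint mentioned in the abstract are sketched below: the pairwise encoding and the sequential (ladder) encoding from the literature. This is background illustration, not the thesis's new encoding.

```python
def amo_pairwise(lits):
    """At-most-one via pairwise conflicts: O(n^2) clauses, no new variables."""
    return [[-a, -b] for i, a in enumerate(lits) for b in lits[i + 1:]]

def amo_sequential(lits, next_aux):
    """At-most-one via the sequential (ladder) encoding: O(n) clauses using
    auxiliary 'prefix holds a true literal' variables.  next_aux is the first
    unused Boolean variable number; returns (clauses, new next_aux).
    """
    n = len(lits)
    if n <= 1:
        return [], next_aux
    s = list(range(next_aux, next_aux + n - 1))
    clauses = [[-lits[0], s[0]]]
    for i in range(1, n - 1):
        clauses += [[-lits[i], s[i]], [-s[i - 1], s[i]],
                    [-lits[i], -s[i - 1]]]
    clauses.append([-lits[n - 1], -s[n - 2]])
    return clauses, next_aux + n - 1

lits = [1, 2, 3, 4, 5]
print(len(amo_pairwise(lits)))                 # 10 clauses, quadratic in n
clauses, _ = amo_sequential(lits, next_aux=6)
print(len(clauses))                            # 11 clauses, but linear in n
```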
17

Jones, Charles H. "A Mathematical Model for Instrumentation Configuration". International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604273.

Full text of the source
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
This paper describes a model of how to configure settings on instrumentation. For any given instrument there may be hundreds of settings that can be set to various values. However, randomly selecting values for each setting is not likely to produce a valid configuration. By "valid" we mean a set of setting values that can be implemented by each instrument. The valid configurations must satisfy a set of dependency rules between the settings and other constraints. The formalization provided allows for identification of different sets of configuration settings under control by different systems and organizations. Similarly, different rule sets are identified. A primary application of this model is in the context of a multi-vendor system, especially when including vendors that maintain proprietary rules governing their systems. This thus leads to a discussion of an application programming interface (API) between different systems with different rules and settings.
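A minimal sketch of rule-based configuration checking in the spirit of the paper's model; the settings, ranges and dependency rules here are entirely hypothetical.

```python
def validate_config(settings, rules):
    """Check a candidate instrumentation configuration against a rule set.
    Each rule is (description, predicate); a configuration is valid when
    every predicate holds.  Returns the list of violated rule descriptions.
    """
    return [desc for desc, ok in rules if not ok(settings)]

# hypothetical settings and dependency rules for an encoder
config = {"bit_rate": 5_000_000, "filter": "on", "clock_source": "internal"}
rules = [
    ("bit_rate within hardware range",
     lambda s: 100_000 <= s["bit_rate"] <= 20_000_000),
    ("filter requires internal clock",
     lambda s: s["filter"] == "off" or s["clock_source"] == "internal"),
]
print(validate_config(config, rules))  # [] -> configuration is valid
```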
18

Zhao, Zhengyang. "Optimizing Task Sequence and Cell Layout for Dual Arm Robot Assembly Using Constraint Programming". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186593.

Full text of the source
Abstract:
Nowadays, assembly robots are increasingly used in the manufacturing industry to replace or collaborate with human labor. This is the goal of the dual arm assembly robot developed by ABB. With the rapid upgrading of consumer electronics products, the lifetime of an assembly line can be only a few months. However, even for experienced programmers, manually constructing a good enough assembly sequence is time consuming, and the quality of the generated assembly sequence is not guaranteed. Moreover, a good robot assembly sequence is important to the throughput of an assembly line. For dual arm robots, it is also important to obtain a balance between the two arms, as well as to handle scheduling conflicts and avoid collisions in a crowded environment. In this master thesis, a program is produced to automatically generate the optimal assembly sequence for a class of real-world assembly cases. The solution also takes the layout of the assembly cell into account, thus constructing the best combination of cell layout, workload balancing, task sequence and task scheduling. The program is implemented using Google OR-Tools – an open-source support library for combinatorial optimization. A customized search strategy is proposed and compared with the built-in search strategy of Google OR-Tools. The results show that the approach is effective for the studied problem case. It takes about 4 minutes to find the optimal solution and 32 minutes to prove its optimality. In addition, the results also show that the customized search strategy performs consistently well across different problem cases, and is more efficient than the built-in search strategy in many of them.
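Since the abstract names Google OR-Tools, a minimal dual-arm-flavoured scheduling sketch using the CP-SAT solver from current OR-Tools releases is shown below. The thesis predates CP-SAT and used the library's original CP solver, and the tasks, durations and precedences here are invented.

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
horizon = 20
starts, ends, intervals = {}, {}, {}
# two invented tasks per arm; each arm can run only one task at a time
tasks = {"left": [("pick_a", 3), ("place_a", 2)],
         "right": [("pick_b", 4), ("place_b", 2)]}
for arm_tasks in tasks.values():
    for name, dur in arm_tasks:
        starts[name] = model.NewIntVar(0, horizon, f"start_{name}")
        ends[name] = model.NewIntVar(0, horizon, f"end_{name}")
        intervals[name] = model.NewIntervalVar(
            starts[name], dur, ends[name], f"iv_{name}")
    model.AddNoOverlap([intervals[name] for name, _ in arm_tasks])
# each part must be picked before it is placed
model.Add(ends["pick_a"] <= starts["place_a"])
model.Add(ends["pick_b"] <= starts["place_b"])
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, list(ends.values()))
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print(solver.Value(makespan))  # 6
```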
19

Park, Vincent Se-jin. "AN EMPIRICAL STUDY OF DIFFERENT BRANCHING STRATEGIES FOR CONSTRAINT SATISFACTION PROBLEMS". Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1193.

Full text of the source
Abstract:
Many real life problems can be formulated as constraint satisfaction problems (CSPs). Backtracking search algorithms are usually employed to solve CSPs and in backtracking search the choice of branching strategies can be critical since they specify how a search algorithm can instantiate a variable and how a problem can be reduced into subproblems; that is, they define a search tree. In spite of the apparent importance of the branching strategy, there have been only a few empirical studies about different branching strategies and they all have been tested exclusively for numerical constraints. In this thesis, we employ the three most commonly used branching strategies in solving finite domain CSPs. These branching strategies are described as follows: first, a branching strategy with strong commitment assigns its variables in the early stage of the search as in k-Way branching; second, 2-Way branching guides a search by branching one side with assigning a variable and the other with eliminating the assigned value; third, the domain splitting strategy, based on the least commitment principle, branches by dividing a variable's domain rather than by assigning a single value to a variable. In our experiments, we compared the efficiency of different branching strategies in terms of their execution times and the number of choice points in solving finite domain CSPs. Interestingly, our experiments provide evidence that the choice of branching strategy for finite domain problems does not matter much in most cases--provided we are using an effective variable ordering heuristic--as domain splitting and 2-Way branching end up simulating k-Way branching. However, for an optimization problem with large domain size, the branching strategy with the least commitment principle can be more efficient than the other strategies. This empirical study will hopefully interest other practitioners to take different branching schemes into consideration in designing heuristics.
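The three branching strategies compared in the thesis can be summarised by the child subproblems each one generates from a variable and its current domain; a schematic sketch:

```python
def branches(strategy, var, domain):
    """Child subproblems generated for one variable under each strategy.
    A branch is (description, reduced_domain)."""
    if strategy == "k-way":             # strong commitment: one child per value
        return [(f"{var}={d}", [d]) for d in domain]
    if strategy == "2-way":             # assign one value, or exclude it
        d = domain[0]
        return [(f"{var}={d}", [d]), (f"{var}!={d}", domain[1:])]
    if strategy == "domain-splitting":  # least commitment: halve the domain
        mid = len(domain) // 2
        return [(f"{var} in {domain[:mid]}", domain[:mid]),
                (f"{var} in {domain[mid:]}", domain[mid:])]
    raise ValueError(strategy)

for s in ("k-way", "2-way", "domain-splitting"):
    print(s, [desc for desc, _ in branches(s, "x", [1, 2, 3, 4])])
```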
20

Banerjee, Bonny. "Spatial problem solving for diagrammatic reasoning". Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1194455860.

Full text of the source
21

Sadeg, Lamia. "Méthodes de décomposition pour la résolution des PCSP (Partial Constraint Satisfaction Problem) : application aux problèmes FAP et coloration de graphes". Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0314.

Full text of the source
Abstract:
The wide range of potential applications concerned by the resolution of Partial Constraint Satisfaction Problems (PCSP) justifies the growing interest of scientists in this class of problems. Solving a PCSP means searching for values to assign to the decision variables in order to maximize (or minimize) a predefined objective function. These problems are NP-hard, so there is neither an exact approach nor an efficient heuristic able to provide the optimal solution for large instances. In order to solve the difficult instances effectively, numerous approaches based on hybridization, learning or decomposition have been proposed. In the present work, we focus on the latter proposal, which consists in splitting the PCSP into several smaller-size PCSPs, and we propose some methods to solve the decomposed problem. Two broad axes are explored: resolution based on the decomposition and resolution guided by the decomposition. The former solves separately the difficult parts of the decomposed problem (cuts or clusters) and then combines the partial solutions obtained in order to achieve a global solution for the original problem. The latter aims at benefiting from the structure of the decomposed problem in order to accelerate its resolution while ensuring optimal or near-optimal solutions. All the proposed algorithms are tested and validated on the well-known benchmarks of PCSP problems such as the Frequency Assignment Problem (FAP) and the graph coloring problem.
22

Barco, Santa Andrés Felipe. "Constraint-based design : two-dimensional insulating panels configuration". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2016. http://www.theses.fr/2016EMAC0006/document.

Full text of the source
Abstract:
The research presented in this thesis falls within the problem of supporting the design of thermal insulating envelopes for the renovation of collective residential buildings. These insulating envelopes are composed of rectangular multi-functional panels, configurable and prefabricated in the factory. Their design is based on the following five characteristics. First, the number of panels needed to design an envelope and their size are not known at the beginning of the renovation (but their size is bounded). Second, because of manufacturing constraints, every window and every door present on the facade to be renovated must be inserted into one and only one panel. Third, panels are attached to specific areas of the facade strong enough to support their weight, called supporting areas. Fourth, neither holes (uncovered areas) nor overlapping between panels are allowed. Fifth, to ensure efficient thermal insulation while minimizing cost, envelopes should be composed of a minimum number of panels. In view of the complexity of this problem, we restrict our research to rectangular facades with rectangular joinery and supporting areas. Given the five stated characteristics and the assumption of rectangular elements (panels, facades, joinery, supporting areas), the envelope design is both a two-dimensional Cutting & Packing problem and a configuration one. This problem is formalized and treated as a constraint satisfaction problem, with the aim of supporting the design of such insulating structures. As such, the thesis presents two major contributions. Given the original features of the building renovation problem, its description and its formalization as a constraint satisfaction problem constitute the first contribution of this work. Constraint-based algorithmic solutions constitute the second. In particular, the thesis presents two manual and three automatic solutions for the design problem of insulating envelopes.
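A hedged sketch of the covering constraints listed in the abstract, checking one candidate panel layout: the anchoring rule is deliberately simplified to horizontal support strips, and all geometry is invented.

```python
def panels_valid(facade, panels, supports, openings):
    """Check the facade-covering constraints on one candidate layout, with
    rectangles given as (x, y, w, h).  Panels must tile the facade exactly
    (no holes, no overlaps), each opening must lie inside a single panel, and
    every panel must sit over some supporting strip (a simplified rule).
    """
    fx, fy, fw, fh = facade
    area = sum(w * h for _, _, w, h in panels)
    no_overlap = all(a[0] + a[2] <= b[0] or b[0] + b[2] <= a[0] or
                     a[1] + a[3] <= b[1] or b[1] + b[3] <= a[1]
                     for i, a in enumerate(panels) for b in panels[i + 1:])
    inside = lambda r, outer: (outer[0] <= r[0] and outer[1] <= r[1] and
                               r[0] + r[2] <= outer[0] + outer[2] and
                               r[1] + r[3] <= outer[1] + outer[3])
    covered = (area == fw * fh and no_overlap and
               all(inside(p, facade) for p in panels))
    one_panel = all(sum(inside(o, p) for p in panels) == 1 for o in openings)
    anchored = all(any(sx <= px and px + pw <= sx + sw
                       for sx, sw in supports) for px, _, pw, _ in panels)
    return covered and one_panel and anchored

facade = (0, 0, 10, 6)
panels = [(0, 0, 4, 6), (4, 0, 6, 6)]   # two full-height panels
supports = [(0, 10)]                    # one horizontal supporting strip
windows = [(1, 2, 2, 1), (6, 2, 2, 2)]
print(panels_valid(facade, panels, supports, windows))  # True
```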
23

Huang, Sangxia. "Hardness of Constraint Satisfaction and Hypergraph Coloring : Constructions of Probabilistically Checkable Proofs with Perfect Completeness". Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168576.

Full text of the source
Abstract:
A Probabilistically Checkable Proof (PCP) of a mathematical statement is a proof written in a special manner that allows for efficient probabilistic verification. The celebrated PCP Theorem states that for every family of statements in NP, there is a probabilistic verification procedure that checks the validity of a PCP proof by reading only 3 bits from it. This landmark theorem, and the works leading up to it, laid the foundation for many subsequent works in computational complexity theory, the most prominent among them being the study of inapproximability of combinatorial optimization problems. This thesis focuses on a broad class of combinatorial optimization problems called Constraint Satisfaction Problems (CSPs). In an instance of a CSP problem of arity k, we are given a set of variables taking values from some finite domain, and a set of constraints each involving a subset of at most k variables. The goal is to find an assignment that simultaneously satisfies as many constraints as possible. An alternative formulation of the goal that is commonly used is Gap-CSP, where the goal is to decide whether a CSP instance is satisfiable or far from satisfiable, where the exact meaning of being far from satisfiable varies depending on the problems. We first study Boolean CSPs, where the domain of the variables is {0,1}. The main question we study is the hardness of distinguishing satisfiable Boolean CSP instances from those for which no assignment satisfies more than some epsilon fraction of the constraints. Intuitively, as the arity increases, the CSP gets more complex and thus the hardness parameter epsilon should decrease. We show that for Boolean CSPs of arity k, it is NP-hard to distinguish satisfiable instances from those that are at most 2^{~O(k^{1/3})}/2^k-satisfiable. We also study coloring of graphs and hypergraphs. Given a graph or a hypergraph, a coloring is an assignment of colors to vertices, such that all edges or hyperedges are non-monochromatic. The gap problem is to distinguish instances that are colorable with a small number of colors, from those that require a large number of colors. For graphs, we prove that there exists a constant K_0>0, such that for any K >= K_0, it is NP-hard to distinguish K-colorable graphs from those that require 2^{Omega(K^{1/3})} colors. For hypergraphs, we prove that it is quasi-NP-hard to distinguish 2-colorable 8-uniform hypergraphs of size N from those that require 2^{(log N)^{1/4-o(1)}} colors. In terms of techniques, all these results are based on constructions of PCPs with perfect completeness, that is, PCPs where the probabilistic proof verification procedure always accepts a correct proof. Not only is this a very natural property for proofs, but it can also be an essential requirement in many applications. It has always been particularly challenging to construct PCPs with perfect completeness for NP statements due to limitations in techniques. Our improved hardness results build on and extend many of the current approaches. Our Boolean CSP result and Graph Coloring result were proved by adapting the Direct Sum of PCPs idea by Siu On Chan to the perfect completeness setting. Our proof for hypergraph coloring hardness improves and simplifies the recent work by Khot and Saket, in which they proposed the notion of superposition complexity of CSPs.
Ett probabilistiskt verifierbart bevis (eng: Probabilistically Checkable Proof, PCP) av en matematisk sats är ett bevis skrivet på ett speciellt sätt vilket möjliggör en effektiv probabilistisk verifiering. Den berömda PCP-satsen säger att för varje familj av påståenden i NP finns det en probabilistisk verifierare som kontrollerar om en PCP bevis är giltigt genom att läsa endast 3 bitar från det. Denna banbrytande sats, och arbetena som ledde fram till det, lade grunden för många senare arbeten inom komplexitetsteorin, framförallt inom studiet av approximerbarhet av kombinatoriska optimeringsproblem. I denna avhandling fokuserar vi på en bred klass av optimeringsproblem i form av villkorsuppfyllningsproblem (engelska ``Constraint Satisfaction Problems'' CSPs). En instans av ett CSP av aritet k ges av en mängd variabler som tar värden från någon ändlig domän, och ett antal villkor som vart och ett beror på en delmängd av högst k variabler. Målet är att hitta ett tilldelning av variablerna som samtidigt uppfyller så många som möjligt av villkoren. En alternativ formulering av målet som ofta används är Gap-CSP, där målet är att avgöra om en CSP-instans är satisfierbar eller långt ifrån satisfierbar, där den exakta innebörden av att vara ``långt ifrån satisfierbar'' varierar beroende på problemet.Först studerar vi booleska CSPer, där domänen är {0,1}. Den fråga vi studerar är svårigheten av att särskilja satisfierbara boolesk CSP-instanser från instanser där den bästa tilldelningen satisfierar högst en andel epsilon av villkoren. Intuitivt, när ariten ökar blir CSP mer komplexa och därmed bör svårighetsparametern epsilon avta med ökande aritet. Detta visar sig vara sant och ett första resultat är att för booleska CSP av aritet k är det NP-svårt att särskilja satisfierbara instanser från dem som är högst 2^{~O(k^{1/3})}/2^k-satisfierbara. Vidare studerar vi färgläggning av grafer och hypergrafer. Givet en graf eller en hypergraf, är en färgläggning en tilldelning av färger till noderna, så att ingen kant eller hyperkant är monokromatisk. Problemet vi analyserar är att särskilja instanser som är färgbara med ett litet antal färger från dem som behöver många färger. För grafer visar vi att det finns en konstant K_0>0, så att för alla K >= K_0 är det NP-svårt att särskilja grafer som är K-färgbara från dem som kräver minst 2^{Omega(K^{1/3})} färger. För hypergrafer visar vi att det är kvasi-NP-svårt att särskilja 2-färgbara 8-likformiga hypergrafer som har N noder från dem som kräv minst 2^{(log N)^{1/4-o(1)}} färger. Samtliga dessa resultat bygger på konstruktioner av PCPer med perfekt fullständighet. Det vill säga PCPer där verifieraren alltid accepterar ett korrekt bevis. Inte bara är detta en mycket naturlig egenskap för PCPer, men det kan också vara ett nödvändigt krav för vissa tillämpningar. Konstruktionen av PCPer med perfekt fullständighet för NP-påståenden ger tekniska komplikationer och kräver delvis utvecklande av nya metoder. Vårt booleska CSPer resultat och vårt Färgläggning resultat bevisas genom att anpassa ``Direktsumman-metoden'' introducerad av Siu On Chan till fallet med perfekt fullständighet. Vårt bevis för hypergraffärgningssvårighet förbättrar och förenklar ett färskt resultat av Khot och Saket, där de föreslog begreppet superpositionskomplexitet av CSP.


Style APA, Harvard, Vancouver, ISO itp.
24

Benharrat, Nassim. "Model-Based Testing of Timed Distributed Systems : A Constraint-Based Approach for Solving the Oracle Problem". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC021/document.

Pełny tekst źródła
Streszczenie:
Model-based testing of reactive systems is the process of checking if a System Under Test (SUT) conforms to its model. It consists of handling both test data generation and verdict computation by using models. We specify the behaviour of reactive systems using Timed Input Output Symbolic Transition Systems (TIOSTS), which are timed automata enriched with symbolic mechanisms to handle data. When TIOSTSs are used to test systems with a centralized interface, the user may completely order events occurring at this interface (i.e., inputs sent to the system and outputs produced from it). Interactions between the tester and the SUT are sequences of inputs and outputs named traces, separated by delays in the timed framework, to form so-called timed traces. Distributed systems are collections of communicating local components which interact with their environment at physically distributed interfaces. Interacting with such a distributed system requires exchanging values with it by means of several interfaces in the same testing process. Different events occurring at different interfaces cannot be ordered any more. This thesis focuses on conformance testing for distributed systems where a separate tester is placed at each localized interface and may only observe what happens at this interface. We assume that there is no global clock but only local clocks for each localized interface. The semantics of such systems can be seen as tuples of timed traces. We consider a framework for distributed testing from TIOSTS along with corresponding test hypotheses and a distributed conformance relation called dtioco. Global conformance can be tested in a distributed testing architecture using only local testers without any communication between them. We propose an algorithm to check the communication policy for a tuple of timed traces by formulating the verification of message passing as a Constraint Satisfaction Problem (CSP). Hence, we were able to implement the computation of test verdicts by orchestrating both localised off-line testing algorithms and the verification of the constraints defined by message passing, the latter being delegated to a constraint solver. Lastly, we validated our approach on a real case study of a telecommunications distributed system.
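As an illustration of the message-passing check described above, here is a deliberately simplified Python sketch (our toy model: it assumes directly comparable timestamps, whereas the thesis assumes only local clocks): it brute-forces a matching between observed sends and receives, which is the kind of search a constraint solver performs on the CSP formulation.

    # Minimal sketch, simplified relative to the thesis: check whether the sends
    # and receives observed in local timed traces can be explained by a consistent
    # message-passing matching. Each send must be paired with exactly one receive
    # of the same message, not earlier than the send.
    from itertools import permutations

    def communication_consistent(sends, receives):
        """sends/receives: lists of (message, timestamp). Brute-force CSP check."""
        if len(sends) != len(receives):
            return False
        for perm in permutations(range(len(receives))):
            ok = all(sends[i][0] == receives[j][0] and   # same message
                     sends[i][1] <= receives[j][1]       # send happens before receive
                     for i, j in enumerate(perm))
            if ok:
                return True
        return False

    sends    = [("m1", 0.5), ("m2", 1.0)]
    receives = [("m2", 1.7), ("m1", 0.9)]
    print(communication_consistent(sends, receives))  # True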
Style APA, Harvard, Vancouver, ISO itp.
25

Boyd, Adriane Amelia. "Detecting and Diagnosing Grammatical Errors for Beginning Learners of German: From Learner Corpus Annotation to Constraint Satisfaction Problems". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1325170396.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
26

Paris, Nicolas. "Intégration de techniques CSP pour la résolution du problème WCSP". Thesis, Artois, 2014. http://www.theses.fr/2014ARTO0405/document.

Pełny tekst źródła
Streszczenie:
This thesis is in the context of constraint programming (CP). Specifically, we are interested in the Weighted Constraint Satisfaction Problem (WCSP). Many approaches have been proposed to handle this optimization problem. The most effective methods use sophisticated soft local consistencies such as, for example, full directional arc consistency FDAC∗, existential directional arc consistency EDAC∗, etc. Established by equivalence-preserving transformations (cost transfer operations), the use of these consistencies generally allows both to accelerate the resolution by reducing the search space through the elimination of values and to compute lower bounds useful in practice. However, these methods reach their limits when the arity of constraints increases significantly. The techniques of the Constraint Satisfaction Problem (CSP) framework having proved efficient, we believe that the integration of CSP techniques can be very useful for solving WCSP instances. In this thesis, we first propose a filtering algorithm to enforce a soft version of generalized arc consistency (GAC∗) on soft table constraints of large arity. This approach combines the techniques of simple tabular reduction (STR), from the CSP framework, with cost transfer techniques. Our approach, which is proved polynomial, efficiently calculates for each value the minimum cost of the explicit and implicit tuples of soft table constraints. The minimum costs are then used to transfer costs in order to establish GAC∗. In a second step, we propose an alternative approach to the usual techniques for solving WCSPs. The principle is to solve a WCSP instance by solving a sequence of classical CSP instances obtained from this WCSP instance. Starting from a CSP instance in which all the constraints of the WCSP instance are hardened to the maximum, the next CSP instances correspond to a progressive relaxation of constraints determined by the extraction of minimal unsatisfiable cores (MUC) from the unsatisfiable networks of the sequence. Our experimental results show that our first approach is competitive with the state of the art, whereas the second one represents an alternative approach to the usual methods for solving WCSP instances.
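The core cost-transfer operation behind such soft consistencies can be sketched in a few lines of Python (an illustration with invented data, not the thesis's STR-based algorithm, which also handles implicit tuples efficiently): for each value, compute its minimum cost over the tuples of a soft table constraint, then project that cost onto a unary cost function.

    # Illustrative sketch of the basic cost-transfer (projection) step: move the
    # minimum cost supported by each (variable, value) out of the table into a
    # unary cost. Equivalence is preserved: for every tuple, residual table cost
    # plus the projected unary costs equals the original table cost.

    def project_min_costs(scope, domains, table):
        """table: dict mapping tuples (in scope order) to non-negative costs."""
        unary = {x: {v: 0 for v in domains[x]} for x in scope}
        for pos, x in enumerate(scope):
            for v in domains[x]:
                support = [c for t, c in table.items() if t[pos] == v]
                unary[x][v] = min(support) if support else float("inf")
            for t in table:                      # transfer: subtract projected cost
                table[t] -= unary[x][t[pos]]
        return unary, table

    domains = {"x": [0, 1], "y": [0, 1]}
    table = {(0, 0): 3, (0, 1): 1, (1, 0): 0, (1, 1): 2}
    unary, residual = project_min_costs(["x", "y"], domains, table)
    print(unary)     # x=0 receives unary cost 1, x=1 cost 0, then y projections
    print(residual)  # table costs after transfer (all still non-negative)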
Style APA, Harvard, Vancouver, ISO itp.
27

Tybon, Robert, i n/a. "Generating Solutions to the Jigsaw Puzzle Problem". Griffith University. School of Management, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20041101.085937.

Pełny tekst źródła
Streszczenie:
This thesis examines the problem of the automated re-assembly of jigsaw puzzles. The objectives of this research are as follows: to provide a clear statement of the jigsaw puzzle re-assembly problem; to find out which solution technique is best suited to this problem; to determine the level of sensitivity of the proposed solution technique when solving different variations of this problem; and to explore solution methods for solving incomplete jigsaw puzzles (puzzles with missing pieces). The jigsaw puzzle re-assembly problem has been investigated only intermittently in the research literature. This work presents an extensive examination of the suitability and efficiency of the standard solution techniques that can be applied to this problem. A detailed comparison between different solution methods, including Genetic Algorithms, Simulated Annealing, Tabu Search and Constraint Satisfaction Programming, shows that a constraint-based approach is the most efficient method of generating solutions to the jigsaw puzzle problem. The proposed re-assembly algorithm is successful. Consequently, it can be used in the development of automated solution generators for other problems in the same domain, thus creating new theoretical and applied directions in this field of research. One potential theoretical line of research concerns jigsaw puzzles that do not have a complete set of puzzle pieces. These incomplete puzzles represent a difficult aspect of this problem that is outlined but cannot be resolved in the current research. The computational experiments conducted in this thesis demonstrate that the proposed algorithm, optimised for re-assembling complete jigsaw puzzles, is not efficient when applied to puzzles with missing pieces. Further work was undertaken to modify the proposed algorithm to enable efficient re-assembly of incomplete jigsaw puzzles. Consequently, an original heuristic strategy, termed Empty Slot Prediction, was developed to support the proposed algorithm, and proved successful when applied to certain sub-classes of this problem. The results obtained indicate that no one algorithm can be used to solve the multitude of possible scenarios involved in the re-assembly of incomplete jigsaw puzzles. Other variations of the jigsaw puzzle problem that still remain unsolved are presented as avenues for future research. The solution of this problem involves a number of procedures with significant applications in other computer-related areas such as pattern recognition, feature and shape description, boundary-matching, and heuristic modelling. It also has more practical applications in robotic vision and the reconstruction of broken artefacts in archaeology.
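To make the constraint-based formulation concrete, here is a hypothetical Python sketch (our toy model, not the algorithm developed in the thesis): pieces are encoded by four edge codes, adjacency constraints require complementary codes, and plain backtracking fills the grid.

    # Toy edge-matching jigsaw as a CSP (rotations ignored for brevity): pieces
    # are (top, right, bottom, left) edge codes; +c mates -c, and 0 marks a flat
    # border edge. Backtracking fills the grid slot by slot.
    def solve(pieces, rows, cols, grid=None, used=None):
        grid = grid or {}
        used = used or set()
        slot = len(grid)
        if slot == rows * cols:
            return grid
        r, c = divmod(slot, cols)
        for i, (top, right, bottom, left) in enumerate(pieces):
            if i in used:
                continue
            if r == 0 and top != 0: continue                      # border constraints
            if c == 0 and left != 0: continue
            if r == rows - 1 and bottom != 0: continue
            if c == cols - 1 and right != 0: continue
            if r > 0 and grid[(r - 1, c)][2] != -top: continue    # mate piece above
            if c > 0 and grid[(r, c - 1)][1] != -left: continue   # mate piece left
            grid[(r, c)] = (top, right, bottom, left)
            used.add(i)
            result = solve(pieces, rows, cols, grid, used)
            if result:
                return result
            del grid[(r, c)]
            used.discard(i)
        return None

    # 2x2 puzzle: edges encoded so that +1 mates -1, +2 mates -2, etc.
    pieces = [(0, 1, 2, 0), (0, 0, 3, -1), (-2, 4, 0, 0), (-3, 0, 0, -4)]
    print(solve(pieces, 2, 2))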
Style APA, Harvard, Vancouver, ISO itp.
28

Tybon, Robert. "Generating Solutions to the Jigsaw Puzzle Problem". Thesis, Griffith University, 2004. http://hdl.handle.net/10072/366062.

Pełny tekst źródła
Streszczenie:
This thesis examines the problem of the automated re-assembly of jigsaw puzzles. The objectives of this research are as follows: to provide a clear statement of the jigsaw puzzle re-assembly problem; to find out which solution technique is best suited to this problem; to determine the level of sensitivity of the proposed solution technique when solving different variations of this problem; and to explore solution methods for solving incomplete jigsaw puzzles (puzzles with missing pieces). The jigsaw puzzle re-assembly problem has been investigated only intermittently in the research literature. This work presents an extensive examination of the suitability and efficiency of the standard solution techniques that can be applied to this problem. A detailed comparison between different solution methods, including Genetic Algorithms, Simulated Annealing, Tabu Search and Constraint Satisfaction Programming, shows that a constraint-based approach is the most efficient method of generating solutions to the jigsaw puzzle problem. The proposed re-assembly algorithm is successful. Consequently, it can be used in the development of automated solution generators for other problems in the same domain, thus creating new theoretical and applied directions in this field of research. One potential theoretical line of research concerns jigsaw puzzles that do not have a complete set of puzzle pieces. These incomplete puzzles represent a difficult aspect of this problem that is outlined but cannot be resolved in the current research. The computational experiments conducted in this thesis demonstrate that the proposed algorithm, optimised for re-assembling complete jigsaw puzzles, is not efficient when applied to puzzles with missing pieces. Further work was undertaken to modify the proposed algorithm to enable efficient re-assembly of incomplete jigsaw puzzles. Consequently, an original heuristic strategy, termed Empty Slot Prediction, was developed to support the proposed algorithm, and proved successful when applied to certain sub-classes of this problem. The results obtained indicate that no one algorithm can be used to solve the multitude of possible scenarios involved in the re-assembly of incomplete jigsaw puzzles. Other variations of the jigsaw puzzle problem that still remain unsolved are presented as avenues for future research. The solution of this problem involves a number of procedures with significant applications in other computer-related areas such as pattern recognition, feature and shape description, boundary-matching, and heuristic modelling. It also has more practical applications in robotic vision and the reconstruction of broken artefacts in archaeology.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Management
Faculty of Commerce and Management
Full Text
Style APA, Harvard, Vancouver, ISO itp.
29

Blet, Loïc. "Configuration automatique d’un solveur générique intégrant des techniques de décomposition arborescente pour la résolution de problèmes de satisfaction de contraintes". Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0085/document.

Pełny tekst źródła
Streszczenie:
Constraint programming integrates generic solving algorithms within declarative languages based on constraints: these languages allow us to describe combinatorial problems as a set of variables which have to take their values in domains while satisfying constraints. Numerous real-life problems can be modelled in such a way, for instance planning problems, scheduling problems, etc. These problems are NP-complete in the general case of finite domains. We introduce in this work a generic solving algorithm parameterized by: — a strategy for exploring the search space, to be chosen from the following six: chronological backtracking, conflict directed backjumping, conflict directed backjumping with reordering, dynamic backtracking, decision repair, and backtracking with tree decomposition; — a variable ordering heuristic, to be chosen from the following two: min-domain/ddeg and min-domain/wdeg; — a constraint propagation technique, to be chosen from the following two: forward checking and maintaining arc consistency. Thus, this algorithm leads to 24 different configurations; some corresponding to already known algorithms, others being new. These 24 configurations have been compared experimentally on a benchmark of more than a thousand instances, each configuration being executed several times to take into account the non-determinism of the executions, and a statistical test has been used to compare performances. This experimental evaluation allowed us to better understand the complementarity of the different solving mechanisms, with a focus on the ability to exploit the structure of the instances to speed up the solving process. We identify 13 complementary configurations such that every instance of our benchmark is well solved by at least one of the 13 configurations. A second contribution of this work is to introduce a selector able to choose automatically the best configuration of our generic solver for each new instance to be solved: we describe each instance by a set of features and we use machine learning techniques to build a model that chooses a configuration based on these features. Knowing that the learning process is generally harder when there are many configurations to choose from, we state the problem of choosing a subset of configurations that can be picked as a set covering problem and we compare two criteria: the first one aims to maximize the number of instances solved by at least one configuration and the second one aims to maximize the number of instances for which there is a good configuration available. We show experimentally that the second strategy generally obtains better results and that the selector obtains better performances than each of the 24 initial configurations.
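The subset-selection step of the second contribution can be illustrated with a small Python sketch (data invented; the thesis compares two set-covering criteria, of which this greedy routine approximates the first):

    # Greedy set covering over a solved-instance matrix: keep adding the
    # configuration that solves the most instances not yet covered.
    def greedy_cover(solved_by, budget):
        """solved_by: {config_name: set of instance ids it solves well}."""
        chosen, covered = [], set()
        for _ in range(budget):
            best = max(solved_by, key=lambda c: len(solved_by[c] - covered))
            gain = solved_by[best] - covered
            if not gain:
                break
            chosen.append(best)
            covered |= gain
        return chosen, covered

    solved_by = {
        "cbj+wdeg+mac": {1, 2, 3, 4},
        "btd+ddeg+fc":  {4, 5, 6},
        "dbt+wdeg+mac": {2, 6, 7},
    }
    print(greedy_cover(solved_by, budget=2))  # picks two complementary configs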
Style APA, Harvard, Vancouver, ISO itp.
30

Mairidan, Wushouer. "Pivot-Based Bilingual Dictionary Creation for Low-Resource Languages". 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199441.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Janečková, Jitka. "Použití programování s omezujícími podmínkami při řešení diskrétních úloh". Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-81927.

Pełny tekst źródła
Streszczenie:
Application of constraint programming (CP) is one of the possible ways of solving discrete problems. It can be used both for the search for a feasible solution and for optimization. CP offers a whole range of approaches either for finding a solution or for accelerating the process of its search -- from search algorithms and consistency techniques to propagation algorithms, which are basically only a combination of the two preceding methods. For optimization we most often use the branch and bound approach, which differs in some aspects from the method of the same name used in mathematical programming (MP). Comparison of CP and MP is interesting in many other aspects. With CP the formulation of problems is more flexible, which allows for the creation of often simpler and smaller models. On the other hand, its disadvantage is a limited scope of use: a constraint satisfaction (optimisation) problem, as the constraint programming problem is called, cannot contain any continuous variables. CP is suitable especially for problems with a lot of constraints, each involving only few variables, ideally only two. In the beginning, the thesis introduces the basic terms of constraint programming; then it describes algorithms and techniques used for solving discrete problems and compares CP with mathematical programming.
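As a concrete illustration of the search-plus-propagation combination described above, here is a minimal Python sketch (our example, not from the thesis) of backtracking combined with forward checking on binary constraints:

    # Backtracking + forward checking: after each assignment, prune the domains
    # of constrained neighbours; backtrack on a domain wipe-out.
    import copy

    def forward_check(domains, constraints, var, value):
        """Prune neighbours' domains; return pruned copy or None on wipe-out."""
        new = copy.deepcopy(domains)
        new[var] = [value]
        for (x, y), pred in constraints.items():
            if x == var:
                new[y] = [w for w in new[y] if pred(value, w)]
                if not new[y]:
                    return None
            elif y == var:
                new[x] = [w for w in new[x] if pred(w, value)]
                if not new[x]:
                    return None
        return new

    def solve(domains, constraints, assignment=None):
        assignment = assignment or {}
        if len(assignment) == len(domains):
            return assignment
        var = min((v for v in domains if v not in assignment),
                  key=lambda v: len(domains[v]))        # min-domain heuristic
        for value in domains[var]:
            pruned = forward_check(domains, constraints, var, value)
            if pruned is not None:
                result = solve(pruned, constraints, {**assignment, var: value})
                if result:
                    return result
        return None

    # Tiny graph colouring instance: a triangle needs three distinct colours.
    domains = {"a": [1, 2, 3], "b": [1, 2, 3], "c": [1, 2, 3]}
    neq = lambda u, w: u != w
    constraints = {("a", "b"): neq, ("b", "c"): neq, ("a", "c"): neq}
    print(solve(domains, constraints))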
Style APA, Harvard, Vancouver, ISO itp.
32

Srđan, Škrbić. "Upotreba fazi logike u relacionim bazama podataka". Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2009. http://dx.doi.org/10.2298/NS20090319SKRBIC.

Pełny tekst źródła
Streszczenie:
This doctoral dissertation belongs to the field of information systems, subfield information storage and retrieval management. The main subject of the dissertation is the modeling and implementation of a set of tools that allow the usage of fuzzy logic in relational database applications. In order to achieve that goal, at first, the relational data model is extended with elements of fuzzy set theory. After that, a fuzzy extension of the SQL query language, called PFSQL, is defined. An interpreter for that language is implemented as a part of the fuzzy JDBC driver. Beside the implementation of the interpreter, this fuzzy JDBC driver contains elements that allow simple usage of the offered mechanisms from the Java programming language. The set of tools is concluded with the implementation of the CASE tool for the development of fuzzy-relational data models. In addition, possibilities to use the PFSQL language on the middle tier of multi-tier systems are discussed.
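To illustrate the underlying idea of a fuzzy query language (this is not PFSQL's actual syntax or semantics, only a toy analogue in Python): a fuzzy predicate returns a truth degree in [0, 1], and a query keeps the rows whose degree reaches a threshold instead of applying a crisp true/false WHERE clause.

    # Toy fuzzy filtering; the membership function and threshold are invented.
    def trapezoid(a, b, c, d):
        """Membership function rising on [a,b], flat on [b,c], falling on [c,d]."""
        def mu(x):
            if x <= a or x >= d: return 0.0
            if b <= x <= c:      return 1.0
            return (x - a) / (b - a) if x < b else (d - x) / (d - c)
        return mu

    young = trapezoid(0, 18, 30, 45)   # "young" as a fuzzy set over age

    employees = [("ana", 24), ("boris", 38), ("cyril", 52)]
    threshold = 0.4
    result = [(name, round(young(age), 2)) for name, age in employees
              if young(age) >= threshold]
    print(result)  # [('ana', 1.0), ('boris', 0.47)]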
Style APA, Harvard, Vancouver, ISO itp.
33

Nordh, Gustav. "Complexity Dichotomies for CSP-related Problems". Doctoral thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8822.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
34

Maňák, Ondřej. "Automatická tvorba varhanní předehry k církevním písním". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237144.

Pełny tekst źródła
Streszczenie:
The focus of this master's thesis is the automatic creation of organ overtures for church songs, from both theoretical and practical points of view. An organ overture is a short introduction to a church song. Since it can be described by a finite set of rules, it is possible to use techniques for solving Constraint Satisfaction Problems. An effective way to develop such a system is the C++ programming language together with the Gecode library.
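A minimal sketch of this kind of rule-based model, written in plain Python rather than Gecode/C++ and with invented rules (the thesis's actual rule set is richer):

    # Toy harmonization CSP: pick MIDI pitches for alto, tenor and bass under a
    # given soprano note so that all tones belong to the chord, voices stay in
    # range, do not cross, and adjacent upper voices stay within an octave.
    from itertools import product

    def harmonize(soprano, chord_pcs, ranges):
        solutions = []
        for alto, tenor, bass in product(*[range(lo, hi + 1) for lo, hi in ranges]):
            voices = (soprano, alto, tenor, bass)
            if any(p % 12 not in chord_pcs for p in voices):   # chord membership
                continue
            if not (soprano > alto > tenor > bass):            # no voice crossing
                continue
            if soprano - alto > 12 or alto - tenor > 12:       # spacing limit
                continue
            solutions.append(voices)
        return solutions

    # C major chord (pitch classes C=0, E=4, G=7) under a soprano e'' (MIDI 76).
    sols = harmonize(76, {0, 4, 7}, ranges=[(55, 74), (48, 67), (40, 60)])
    print(len(sols), sols[0] if sols else None)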
Style APA, Harvard, Vancouver, ISO itp.
35

Karakoc, Erman. "Web Service Composition Under Resource Allocation Constraints". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608309/index.pdf.

Pełny tekst źródła
Streszczenie:
Web service composition is an inevitable aspect of web services technology, which solves complex problems by combining available basic services and ordering them to best suit the problem requirements. Automatic composition gives us the flexibility of selecting the best candidate services at composition time; this requires the user to define resource allocation constraints for selecting and composing candidate web services. Resource allocation constraints define restrictions on how to allocate resources, and scheduling under resource allocation constraints provides proper resource allocation to tasks. In this work, a web service composition system named CWSF (Composite Web Service Framework) is constructed for users to define a workflow system in which a rich set of constraints can be defined on web services. In contrast to many existing technologies and studies, CWSF provides a user-friendly environment for modeling web service composition systems. The output of the framework is a schedule of the web service composition which defines how and when web services are executed. In this work, a language, CWSL, is defined to describe the workflow, resource allocation constraints, selection and discovery rules of web services, and associated semantic information. An important property of the CWSF system is converting the web service composition problem into a constraint satisfaction problem to find the best solution that meets all the criteria defined by the user. Furthermore, CWSF has the ability to display other possible solutions, to provide users flexibility. This study also includes semantic matching and mapping facilities for service discovery.
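A toy Python sketch of the stated core idea, composition as constraint satisfaction (CWSF's language and scheduler are not reproduced here; service names and numbers are invented):

    # Choose one candidate service per abstract task so that a shared resource
    # budget is respected, then rank the feasible compositions by total cost.
    from itertools import product

    tasks = {
        "payment":  [("payA", 2, 10), ("payB", 1, 14)],   # (name, resource, cost)
        "shipping": [("shipA", 3, 8), ("shipB", 2, 12)],
        "invoice":  [("invA", 1, 5)],
    }
    RESOURCE_BUDGET = 5

    feasible = []
    for combo in product(*tasks.values()):
        if sum(c[1] for c in combo) <= RESOURCE_BUDGET:   # allocation constraint
            feasible.append((sum(c[2] for c in combo), [c[0] for c in combo]))

    for cost, names in sorted(feasible):                  # best composition first
        print(cost, names)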
Style APA, Harvard, Vancouver, ISO itp.
36

Cervoni, Laurent. "Méthodologies et techniques de résolution de problèmes avec contraintes. Application en programmation logique avec objets : cooxi". Rouen, 1994. http://www.theses.fr/1994ROUES032.

Pełny tekst źródła
Streszczenie:
This thesis is first of all an exploratory study of the techniques, methods and systems for solving problems with constraints, grounded in feedback from industrial experience. Faced with the emergence, in recent years, of new techniques and tools for solving highly combinatorial problems, this document sets out to situate more precisely the context of solving these complex problems by: reviewing the available approaches; collecting as many techniques as possible and comparing them; and clarifying the links between operations research and constraint programming. A methodology responding effectively to these complex (so-called constrained) problems is then suggested. Finally, the last part studies the prospects of several techniques through a host framework in logic programming with objects.
Style APA, Harvard, Vancouver, ISO itp.
37

Andrade, Tales Pinheiro de. "Interações gênicas usando redes booleanas limiarizadas modeladas como um problema de satisfação de restrições". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-05062012-150618/.

Pełny tekst źródła
Streszczenie:
The chemical reactions that result in gene expression are complex and not yet fully understood. It is known that genes send, receive and process information to form a complex network of communication, but the architecture and dynamics of these networks are not fully known. Thus, one major problem is to determine how genes are linked within the cell. This process of determining the relationship between genes is known as inference of genetic networks. One way to represent the relationship between genes is to use mathematical and computational models of genetic networks. In particular, one model of great interest is Boolean Networks (BN), in which genes can take two states, active or inactive, according to whether they are expressed or not. These states may vary over time, depending on how genes are related. Our interest is in studying a particular case of this model, known as thresholded Boolean networks, where only one class of Boolean functions is used to build the BNs. To infer thresholded Boolean networks, we use an algorithm that consists of two steps. First, we use the framework of the Constraint Satisfaction Problem (CSP) to infer sets of solutions consistent with a time series of a given set of genes. Then we analyze the dynamic behavior of the solutions found, filtering sets of solutions of interest for practical tests in the laboratory. Using the CSP framework, we constructed a solver, using the Gecode library, for the inference of consistent networks, taking as input a time series arising from microarray data. Then, by simulating the dynamics of a sample of the networks found in the previous step, we were able to determine some interesting constraints to filter the set of networks. We applied our method to three datasets: two artificial, and, for validation, a time series of an artificial network known from the literature. Thus we were able to infer sets of genetic networks of possible interest for laboratory tests.
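A brute-force Python stand-in for the CSP step described above (the thesis uses the Gecode solver; the weight range and thresholds below are our own toy choices):

    # Find, for one target gene, threshold functions
    # x_target(t+1) = [sum_i w_i * x_i(t) >= theta]
    # consistent with every transition observed in the time series.
    from itertools import product

    def consistent_rules(series, target, weights=(-1, 0, 1), thetas=(0, 1, 2)):
        n = len(series[0])
        rules = []
        for w in product(weights, repeat=n):
            for theta in thetas:
                ok = all((sum(wi * s for wi, s in zip(w, state)) >= theta)
                         == bool(nxt[target])
                         for state, nxt in zip(series, series[1:]))
                if ok:
                    rules.append((w, theta))
        return rules

    # Time series over 3 genes (each row is one observed global state).
    series = [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 1, 1)]
    print(consistent_rules(series, target=2)[:3])  # a few candidate rules for gene 2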
Style APA, Harvard, Vancouver, ISO itp.
38

Safadi, El Abed El. "Contribution à l'évaluation des risques liés au TMD (transport de matières dangereuses) en prenant en compte les incertitudes". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT059/document.

Pełny tekst źródła
Streszczenie:
When an accidental event occurs, the process of technological risk assessment, in particular the one related to Dangerous Goods Transportation (DGT), allows assessing the level of potential risk of the impacted areas in order to provide and quickly take prevention and protection actions (containment, evacuation...). The objective is to reduce and control the effects on people and the environment. The first issue of this work is to evaluate the risk level for areas subjected to dangerous goods transportation. The quantification of the intensity of the occurring events needed for this evaluation is based on effect models (analytical or computer code). Regarding the problem of dispersion of toxic products, these models mainly contain inputs linked to different databases, like exposure data and meteorological data. The second problem is related to the uncertainties affecting some model inputs. To determine the geographical danger zone where the estimated risk level is not acceptable, it is necessary to identify and take into consideration the uncertainties on the inputs in order to propagate them through the effect model and thus obtain a reliable evaluation of the risk level. The first phase of this work is to evaluate and propagate the uncertainty on the gas concentration induced by uncertain model inputs during its evaluation by dispersion models. Two approaches are used to model and propagate the uncertainties. The first one is the set-membership approach based on interval calculus, for analytical models. The second one is the probabilistic approach (Monte Carlo), which is more classical and used more frequently when the dispersion model is described by an analytic expression or defined by a computer code. The objective is to compare the two approaches to determine their advantages and disadvantages in terms of precision and computation time on the proposed problem. To determine the danger zones, two dispersion models (Gaussian and SLAB) are used to evaluate the risk intensity in the contaminated area. The risk mapping is achieved by using two methods: a probabilistic method (Monte Carlo), which consists in solving an inverse problem on the effect model, and a generic set-membership method that formulates the problem as a constraint satisfaction problem (CSP) and solves it with a set-membership inversion method. The second phase consists in establishing a general methodology to realize the risk mapping and to improve performance in terms of computation time and precision. This methodology is based on three steps: firstly, the analysis of the effect model used; secondly, the proposal of a new method for uncertainty propagation based on a mix between the probabilistic and set-membership approaches, which takes advantage of both and is suited to any type of spatial and static effect model; and finally, the realization of the risk mapping by inverting the effect models. The sensitivity analysis present in the first step is typically addressed to probabilistic models. The validity of using Sobol indices for interval models is discussed and a new, purely interval sensitivity index is proposed.
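A toy Python comparison of the two propagation approaches on an invented monotone effect model (not the Gaussian or SLAB models used in the thesis):

    # Toy effect model c = Q / (u * d^2) with interval inputs. Because c is
    # monotone in each input on these ranges, endpoint evaluation gives the exact
    # interval bounds; Monte Carlo gives an inner (empirical) envelope.
    import random

    def c(Q, u, d):
        return Q / (u * d * d)

    Q_iv, u_iv, d_iv = (90.0, 110.0), (2.0, 4.0), (100.0, 120.0)

    # Set-membership propagation: extremes are attained at interval endpoints.
    c_lo = c(Q_iv[0], u_iv[1], d_iv[1])
    c_hi = c(Q_iv[1], u_iv[0], d_iv[0])

    # Monte Carlo propagation: sample uniformly, keep the empirical envelope.
    samples = [c(random.uniform(*Q_iv), random.uniform(*u_iv), random.uniform(*d_iv))
               for _ in range(100_000)]
    print(f"interval   : [{c_lo:.2e}, {c_hi:.2e}]")
    print(f"monte carlo: [{min(samples):.2e}, {max(samples):.2e}]")  # inner bound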
Style APA, Harvard, Vancouver, ISO itp.
39

Atahary, Tanvir. "Acceleration of Cognitive Domain Ontologies". University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1460734067.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Anglada, Alexis. "Introduction de mécanisme de flexibilité dans la programmation par contraintes sur les domaines continus". Paris 6, 2005. http://www.theses.fr/2005PA066261.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Chadim, Petr. "Automatické jazzové aranžmá". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237040.

Pełny tekst źródła
Streszczenie:
This thesis focuses on arranging a melody accompanied by jazz chords. It deals with creating additional harmony voices using the Block Voicing method. The division into target notes and passing notes is made using constraint programming (CSP) techniques. Passing notes are reharmonized by a dominant seventh chord or by a parallel chord. A bass part is also created using CSP. The Gecode library is used to solve the CSP. The harmony voices are arranged in Four Part Close Voicing. The result of the application is a tool for the music arranger.
Style APA, Harvard, Vancouver, ISO itp.
42

Tran-Dang, Hoa. "3D Spatial Modeling of Stacked Containers based on Wireless Sensor Network : application to the physical internet". Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0049/document.

Pełny tekst źródła
Streszczenie:
The Physical Internet paradigm was introduced a few years ago to transform globally how physical objects are handled, stored and transported as part of sustainable logistics. One of the important characteristics of the Physical Internet is the encapsulation of goods in standardized modular containers. Although the Physical Internet rationalizes transport, it generates more frequent handling, particularly within PI-hubs, where the operations of routing, unloading and (re)loading containers require efficient organization and management. The multiplicity and diversity of operations (automated or not) to be carried out simultaneously can only be handled efficiently if there is perfect synchronization between the reality of the physical system and that of the information system. The proposals of this thesis address this problem and constitute a contribution to the concept of the Physical Internet. They aim to obtain, in real time, the spatial distribution (or layout) of the PI-containers when they are stacked in a higher-level container, a so-called composite container. To do this, we propose to exploit the intelligence and activeness of each PI-container, which is equipped with a wireless sensor node. Hence, a composite PI-container constitutes an ad-hoc network of sensor nodes. From the neighborhood relationships between these nodes, we show in this thesis that it is possible to construct the 3D spatial layout of the PI-containers and to check, at any time and any place, the effective compliance between the real composition and the data stored in the information system.
Style APA, Harvard, Vancouver, ISO itp.
43

Restrepo, Lopez Ricardo. "Topics in spatial and dynamical phase transitions of interacting particle systems". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42729.

Pełny tekst źródła
Streszczenie:
In this work we provide several improvements in the study of phase transitions of interacting particle systems:
- We determine a quantitative relation between non-extremality of the limiting Gibbs measure of a tree-based spin system, and the temporal mixing of the Glauber Dynamics over its finite projections. We define the concept of 'sensitivity' of a reconstruction scheme to establish such a relation. In particular, we focus on the independent sets model, determining a phase transition for the mixing time of the Glauber dynamics at the same location as the extremality threshold of the simple invariant Gibbs version of the model.
- We develop the technical analysis of the so-called spatial mixing conditions for interacting particle systems to account for the connectivity structure of the underlying graph. This analysis leads to improvements regarding the location of the uniqueness/non-uniqueness phase transition for the independent sets model over amenable graphs; among them, the elusive hard-square model in lattice statistics, which has received attention since Baxter's solution of the analogous hard-hexagon model in 1980.
- We build on the work of Montanari and Gerschenfeld to determine the existence of correlations for the coloring model in sparse random graphs. In particular, we prove that correlations exist above the 'clustering' threshold of such a model, thus providing further evidence for the conjectural algorithmic 'hardness' occurring at such a point.
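For the independent sets (hardcore) model mentioned above, the Glauber dynamics in question admits a very short implementation; the following Python sketch (our illustration, with a toy graph and parameters) performs the standard heat-bath update with fugacity lambda:

    # Glauber dynamics for the hardcore model: pick a vertex; if a neighbour is
    # occupied it must be empty, otherwise occupy it with probability
    # lambda / (1 + lambda). The state is always an independent set.
    import random

    def glauber_hardcore(adj, lam, steps, seed=0):
        rng = random.Random(seed)
        state = {v: 0 for v in adj}                 # start from the empty set
        for _ in range(steps):
            v = rng.choice(list(adj))
            if any(state[u] for u in adj[v]):
                state[v] = 0
            else:
                state[v] = 1 if rng.random() < lam / (1 + lam) else 0
        return state

    # 4-cycle: independent sets are subsets of {0, 2} or {1, 3}.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    sample = glauber_hardcore(adj, lam=1.0, steps=10_000)
    print(sample)  # one (approximate) sample from the hardcore distribution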
Style APA, Harvard, Vancouver, ISO itp.
44

Oliveira, Camila Nascimento de. "Uma investiga??o de algoritmos exatos e metaheur?sticos aplicados ao nonograma". Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18081.

Pełny tekst źródła
Streszczenie:
A Nonogram is a logical puzzle whose associated decision problem is NP-complete. It has applications in pattern recognition problems and data compression, among others. The puzzle consists in determining an assignment of colors to pixels distributed in an N × M matrix that satisfies line and column constraints. A Nonogram is encoded by vectors whose elements specify the number of pixels in each row and column of a figure without specifying their coordinates. This work presents exact and heuristic approaches to solve Nonograms. Depth-first search was one of the chosen exact approaches because it is a typical example of a brute-force search algorithm that is easy to implement. Another implemented exact approach was based on the Las Vegas algorithm, with which we investigate whether the randomness introduced by the Las Vegas-based algorithm is an advantage over depth-first search. The Nonogram is also transformed into a Constraint Satisfaction Problem. Three heuristic approaches are proposed: a Tabu Search and two memetic algorithms. A new way to calculate the objective function is also proposed. The approaches are applied to 234 instances, with sizes ranging from 5 x 5 to 100 x 100, including logical and random Nonograms.
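A core subroutine of many Nonogram solvers is the enumeration of all placements of one line clue; the following Python sketch (ours, not the thesis's code) generates exactly the assignments allowed by a row or column constraint:

    # Enumerate every way to place the blocks of one clue into a line of given
    # length: blocks appear in order, separated by at least one empty cell.
    def line_placements(clue, length):
        """Yield 0/1 tuples of `length` cells matching block sizes in `clue`."""
        if not clue:
            yield (0,) * length
            return
        first, rest = clue[0], clue[1:]
        # minimum space the remaining blocks need (each preceded by a gap)
        tail = sum(rest) + len(rest)
        for start in range(length - tail - first + 1):
            head = (0,) * start + (1,) * first
            if rest:
                for suffix in line_placements(rest, length - start - first - 1):
                    yield head + (0,) + suffix
            else:
                yield head + (0,) * (length - start - first)

    for row in line_placements((2, 1), 5):
        print(row)
    # (1, 1, 0, 1, 0), (1, 1, 0, 0, 1), (0, 1, 1, 0, 1)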
O Nonograma ? um jogo l?gico cujo problema de decis?o associado ? NP-completo. Ele possui aplica??o em problemas de identifica??o de padr?es e de compacta??o de dados, dentre outros. O jogo consiste em determinar uma aloca??o de cores em pixels distribu?dos em uma matriz N  M atendendo restri??es em linhas e colunas. Um Nonograma ? codificado atrav?s de vetores cujos elementos especificam o n?mero de pixels existentes em cada coluna e linha de uma figura, sem especificar suas coordenadas. Este trabalho apresenta abordagens exatas e heur?sticas para solucionar o Nonograma. A Busca em Profundidade foi uma das abordagens exatas escolhida, por ser um exemplo t?pico de algoritmo de for?a bruta de f?cil implementa??o. Outra abordagem exata implementada foi baseada no algoritmo Las Vegas, atrav?s do qual se pretende investigar se a aleatoriedade introduzida pelo algoritmo Las Vegas traria algum benef?cio em rela??o ? Busca em Profundidade. O Nonograma tamb?m ? transformado em um Problema de Satisfa??o de Restri??es. Tr?s abordagens heur?sticas s?o propostas: uma Busca Tabu e dois algoritmos Mem?tico. Uma nova abordagem para o c?lculo da fun??o objetivo ? proposta neste trabalho. As abordagens s?o testadas em 234 casos de teste de tamanho entre 5 x 5 e 100 x 100, incluindo Nonogramas l?gicos e aleat?rios
Style APA, Harvard, Vancouver, ISO itp.
45

"Integration of constraint programming and linear programming techniques for constraint satisfaction problem and general constrained optimization problem". 2001. http://library.cuhk.edu.hk/record=b5890598.

Pełny tekst źródła
Streszczenie:
Wong Siu Ham.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 131-138).
Abstracts in English and Chinese.
Abstract
Acknowledgments
Chapter 1. Introduction
1.1 Motivation for Integration
1.2 Thesis Overview
Chapter 2. Preliminaries
2.1 Constraint Programming
2.1.1 Constraint Satisfaction Problems (CSPs)
2.1.2 Satisfiability (SAT) Problems
2.1.3 Systematic Search
2.1.4 Local Search
2.2 Linear Programming
2.2.1 Linear Programming Problems
2.2.2 Simplex Method
2.2.3 Mixed Integer Programming Problems
Chapter 3. Integration of Constraint Programming and Linear Programming
3.1 Problem Definition
3.2 Related Works
3.2.1 Illustrating the Performances
3.2.2 Improving the Search
3.2.3 Improving the Representation
Chapter 4. A Scheme of Integration for Solving Constraint Satisfaction Problems
4.1 Integrated Algorithm
4.1.1 Overview of the Integrated Solver
4.1.2 The LP Engine
4.1.3 The CP Solver
4.1.4 Proof of Soundness and Completeness
4.1.5 Comparison with Previous Work
4.2 Benchmarking Results
4.2.1 Comparison with CLP Solvers
4.2.2 Magic Squares
4.2.3 Random CSPs
Chapter 5. A Scheme of Integration for Solving General Constrained Optimization Problems
5.1 Integrated Optimization Algorithm
5.1.1 Overview of the Integrated Optimizer
5.1.2 The CP Solver
5.1.3 The LP Engine
5.1.4 Proof of the Optimization
5.2 Benchmarking Results
5.2.1 Weighted Magic Square
5.2.2 Template Design Problem
5.2.3 Random GCOPs
Chapter 6. Conclusions and Future Work
6.1 Conclusions
6.2 Future Work
6.2.1 Detection of Implicit Equalities
6.2.2 Dynamic Variable Selection
6.2.3 Analysis of the Help of Linear Constraints
6.2.4 Local Search and Linear Programming
Appendix
Proof of Soundness and Completeness
Proof of the Optimization
Bibliography
Style APA, Harvard, Vancouver, ISO itp.
46

Carbonnel, Clément. "Harnessing tractability in constraint satisfaction problems". Phd thesis, 2016. http://oatao.univ-toulouse.fr/17821/7/clement_carbonnel.pdf.

Pełny tekst źródła
Streszczenie:
The Constraint Satisfaction Problem (CSP) is a fundamental NP-complete problem with many applications in artificial intelligence. This problem has enjoyed considerable scientific attention in the past decades due to its practical usefulness and the deep theoretical questions it relates to. However, there is a wide gap between practitioners, who develop solving techniques that are efficient for industrial instances but exponential in the worst case, and theorists who design sophisticated polynomial-time algorithms for restrictions of CSP defined by certain algebraic properties. In this thesis we attempt to bridge this gap by providing polynomial-time algorithms to test for membership in a selection of major tractable classes. Even if the instance does not belong to one of these classes, we investigate the possibility of decomposing efficiently a CSP instance into tractable subproblems through the lens of parameterized complexity. Finally, we propose a general framework to adapt the concept of kernelization, central to parameterized complexity but hitherto rarely used in practice, to the context of constraint reasoning. Preliminary experiments on this last contribution show promising results.
Style APA, Harvard, Vancouver, ISO itp.
47

Kuo, Hui-chun, i 郭卉君. "Solving the Distributed Constraint Satisfaction Problem for Cooperative Supply Chains Using Multi-agent Systems". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/09523951030156957435.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Sun Yat-sen University
Department of Information Management
Academic year 92 (2003–2004)
Facing a global and dynamic competitive environment, companies have to collaborate with other companies instead of struggling alone to optimize the performance of the supply chain. In a distributed supply chain structure, it is an important issue for companies to coordinate seamlessly to effectively fulfill customer orders. In this thesis, we seek to propose a fast and flexible method to solve order fulfillment scheduling conflicts among partners in a supply chain. Due to the risk of exposing trade secrets and the cost of gathering information, a centralized constraint satisfaction mechanism is infeasible for handling distributed scheduling problems in a real-world environment. Moreover, the distributed constraint satisfaction model just focuses on finding a globally executable order fulfillment schedule. Therefore, we propose an agent-based distributed coordination mechanism that integrates negotiation with a genetic algorithm. We chose the mold manufacturing industry as an example and conducted experiments to evaluate the performance of the proposed mechanism and to compare it with other benchmark methods proposed by researchers prior to this study. The experimental results indicate that the distributed coordination mechanism we propose is a feasible approach to solving order fulfillment scheduling conflicts in outsourcing activities in a supply chain.
APA, Harvard, Vancouver, ISO, and other styles
48

Chang, Yi-Luen, and 張逸倫. "An Efficient and Robust for Multi-Objective Constraint-Satisfaction Problem in Cognitive Radio Systems". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/57262602445534103957.

Full text source
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
Academic year 99 (ROC calendar)
With the rapid deployment of wireless applications and the development of new communication technologies, users need specific communication devices for different communication protocols, which is highly inconvenient. To support different wireless communication protocols, Mitola et al. proposed the concept of cognitive radio (CR). A CR adapts to changes in the wireless environment and tries to satisfy user demands by tuning radio parameters. However, tuning the radio parameters is time-consuming because it requires coordinating communication parameters across networking layers, so a failed reconfiguration incurs high overhead. For a CR system to make accurate decisions, the wireless environment must be modelled precisely by reliable methods. A CR system also needs a robust method for tuning the radio parameters, so as to reduce the probability of reconfiguring the system every time the environment changes. This thesis uses an artificial neural network (ANN) to model the environment dynamically, and uses Robust Light-weight Reasoning for Cognitive Radio (RoLR) to solve the multi-objective problem of satisfying user-given constraints. Compared with a method based on a genetic algorithm (GA), RoLR is more than twice as accurate, takes about 1/20 of the GA method's execution time, and is more robust: after the environment changes 12 times, 42% of the RoLR solutions remain feasible, versus only 2% of the GA solutions. Furthermore, RoLR needs only around 10 training samples for the ANN model to make 90% of the candidate solutions feasible.
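The following sketch illustrates one way such a constraint-satisfaction step could look: a predictor (standing in for the thesis's ANN environment model) scores candidate radio configurations, feasible ones are kept, and they are ranked by their slack from the constraint boundaries, on the intuition that larger slack survives more environment changes. All names and the ranking heuristic are assumptions for illustration, not RoLR itself.

```python
# Hypothetical sketch of selecting robust radio configurations.
# candidates:  list of parameter dicts (e.g. power, modulation, coding rate)
# predict:     environment model mapping a config to predicted metrics,
#              e.g. {'throughput': ..., 'ber': ...}
# constraints: {metric: (lo, hi)} user-given bounds on each metric

def feasible_configs(candidates, predict, constraints):
    feasible = []
    for cfg in candidates:
        metrics = predict(cfg)
        if all(lo <= metrics[m] <= hi for m, (lo, hi) in constraints.items()):
            feasible.append(cfg)

    # Prefer configurations far from every constraint boundary: they are
    # more likely to stay feasible as the environment drifts, so the radio
    # needs reconfiguring less often.
    def slack(cfg):
        metrics = predict(cfg)
        return min(min(metrics[m] - lo, hi - metrics[m])
                   for m, (lo, hi) in constraints.items())

    return sorted(feasible, key=slack, reverse=True)
```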
APA, Harvard, Vancouver, ISO, and other styles
49

Nguyen, Van-Hau. "SAT Encodings of Finite CSPs". Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A28571.

Full text source
Abstract:
Boolean satisfiability (SAT) is the problem of determining whether there exists an assignment of Boolean variables to truth values such that a given Boolean formula evaluates to true. SAT was the first example of an NP-complete problem. Only two decades ago, SAT was considered mainly of theoretical interest. Nowadays, the picture is very different: SAT solving has matured and is a successful approach for tackling a large number of applications, ranging from artificial intelligence to industrial hardware design and verification. SAT solving consists of encodings and solvers. In order to benefit from the tremendous advances in the development of solvers, one must first encode the original problems into SAT instances. These encodings should not only be easy to generate, but should also be processed efficiently by SAT solvers. Furthermore, an increasing number of practical applications in computer science can be expressed as constraint satisfaction problems (CSPs). However, encoding a CSP to SAT is currently regarded as more of an art than a science, and choosing an appropriate encoding is considered as important as choosing an algorithm. Moreover, it is much easier and more efficient to benefit from highly optimized state-of-the-art SAT solvers than to develop specialized tools from scratch. Hence, finding appropriate SAT encodings of CSPs is one of the most fascinating challenges in solving problems by SAT. This thesis studies SAT encodings of CSPs and aims at: 1) conducting a comprehensive study of SAT encodings of CSPs by separately investigating encodings of CSP domains and constraints; 2) proposing new SAT encodings of CSP domains; 3) proposing a new SAT encoding of the at-most-one constraint, which is essential for encoding CSP variables; 4) introducing the redundant encoding and the hybrid encoding, which aim to benefit from two efficient and common SAT encodings (the sparse and order encodings) by using the channeling constraint (a term from Constraint Programming) for SAT; and 5) revealing guidelines on how to choose an appropriate SAT encoding so that one can exploit the many efficient SAT solvers available to solve CSPs efficiently and effectively. Experiments show that the proposed encodings and guidelines improve on the state-of-the-art SAT encodings of CSPs.
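For concreteness, here is a sketch of two standard at-most-one (AMO) encodings from the SAT-encoding literature (the pairwise/binomial encoding and a Sinz-style sequential counter), not necessarily the encoding proposed in the thesis. Clause lists use the DIMACS integer-literal convention, and `next_aux` is an assumed callback that allocates fresh auxiliary variable numbers.

```python
from itertools import combinations

def amo_pairwise(xs):
    """Binomial encoding: for every pair of variables, not both true.
    No auxiliary variables, but O(n^2) clauses."""
    return [[-a, -b] for a, b in combinations(xs, 2)]

def amo_sequential(xs, next_aux):
    """Sinz-style sequential counter: s[i] means 'some x[j] with j <= i is
    true'. O(n) clauses and O(n) fresh variables from next_aux()."""
    n = len(xs)
    if n <= 1:
        return []
    s = [next_aux() for _ in range(n - 1)]
    clauses = [[-xs[0], s[0]]]                 # x1 -> s1
    for i in range(1, n - 1):
        clauses += [[-xs[i], s[i]],            # xi -> si
                    [-s[i - 1], s[i]],         # s(i-1) -> si
                    [-xs[i], -s[i - 1]]]       # xi -> not s(i-1)
    clauses.append([-xs[n - 1], -s[n - 2]])    # xn -> not s(n-1)
    return clauses
```

Even this small example exhibits the trade-off the thesis's guidelines speak to: the pairwise encoding needs no auxiliary variables but a quadratic number of clauses, while the sequential encoding stays linear in both clauses and fresh variables at the cost of a longer implication chain for unit propagation.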
APA, Harvard, Vancouver, ISO, and other styles
50

Ekpenyong, Olufisayo. "Parallel Pattern Search in Large, Partial-Order Data Sets on Multi-core Systems". Thesis, 2011. http://hdl.handle.net/10012/5740.

Full text source
Abstract:
Monitoring and debugging distributed systems is inherently a difficult problem. Events collected during the execution of distributed systems can enable developers to diagnose and fix faults. Process-time diagrams are normally used to view the relationships between the events and understand the interaction between processes over time. A major difficulty with analyzing these sets of events is that they are usually very large. Therefore, being able to search through the event-data sets can enable users to get to points of interest quickly and find out if patterns in the dataset represent the expected behaviour of the system. A lot of research work has been done to improve the search algorithm for finding event-patterns in large partial-order datasets. In this thesis, we improve on this work by parallelizing the search algorithm. This is useful as many computers these days have more than one core or processor. Therefore, it makes sense to exploit this available computing power as part of an effort to improve the speed of the algorithm. The search problem itself can be modeled as a Constraint Satisfaction Problem (CSP). We develop a simple and efficient way of generating tasks (to be executed by the cores) that guarantees that no two cores will ever repeat the same work-effort during the search. Our approach is generic and can be applied to any CSP consisting of a large domain space. We also implement an efficient dynamic work-stealing strategy that ensures the cores are kept busy throughout the execution of the parallel algorithm. We evaluate the efficiency and scalability of our algorithm through experiments and show that we can achieve efficiencies of up to 80% on a 24-core machine.
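A minimal sketch of the disjoint task-generation idea described above, under assumed data structures: fixing the first k variables of the ordering to each combination of their domain values splits the CSP search tree into non-overlapping subtrees, so no two cores can ever repeat the same work; the work-stealing layer is summarized in a comment.

```python
from itertools import product

def generate_tasks(domains, order, k):
    """domains: {var: list(values)}; order: variable ordering; k: prefix length.
    Yields one partial assignment per task. Tasks are pairwise disjoint and
    jointly cover the whole search space, so workers never duplicate effort."""
    prefix = order[:k]
    for values in product(*(domains[v] for v in prefix)):
        yield dict(zip(prefix, values))

# In a parallel search, each worker would pop tasks from its own deque and
# extend the partial assignment by backtracking over the remaining variables;
# an idle worker steals from the opposite end of another worker's deque
# (the standard work-stealing pattern), keeping all cores busy.
```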
APA, Harvard, Vancouver, ISO, and other styles