
Journal articles on the topic "Combinatorial algorithms on string"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Combinatorial algorithms on string".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Bernardini, Giulia, Huiping Chen, Alessio Conte, Roberto Grossi, Grigorios Loukides, Nadia Pisanti, Solon P. Pissis, Giovanna Rosone, and Michelle Sweering. "Combinatorial Algorithms for String Sanitization". ACM Transactions on Knowledge Discovery from Data 15, no. 1 (January 6, 2021): 1–34. http://dx.doi.org/10.1145/3418683.

Full text
2

TAKAOKA, TADAO, and STEPHEN VIOLICH. "FUSING LOOPLESS ALGORITHMS FOR COMBINATORIAL GENERATION". International Journal of Foundations of Computer Science 18, no. 02 (April 2007): 263–93. http://dx.doi.org/10.1142/s0129054107004681.

Full text
Abstract
Some combinatorial generation problems can be broken into subproblems for which loopless algorithms already exist. This article discusses means by which loopless algorithms can be fused to produce a new loopless algorithm that solves the original problem. It demonstrates this method with two new loopless algorithms. The first generates well-formed parenthesis strings containing two different types of parentheses. The second generates multiset permutations in linear space using only arrays; it is simpler and more efficient than the recent algorithm of Korsh & LaFollette.
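The objects generated by Takaoka and Violich's first algorithm, well-formed parenthesis strings with two types of parentheses, can be illustrated with a simple recursive generator. This is a plain backtracking sketch for intuition only, not the paper's loopless fused algorithm, and the function name is illustrative:

```python
def gen_two_type(n):
    """Generate all well-formed strings of n bracket pairs drawn
    from two bracket types: () and []."""
    pairs = [("(", ")"), ("[", "]")]
    results = []

    def rec(s, stack, used):
        # A string is complete when all n pairs are opened and closed.
        if used == n and not stack:
            results.append(s)
            return
        if used < n:
            # Open a new bracket of either type.
            for open_b, close_b in pairs:
                rec(s + open_b, stack + [close_b], used + 1)
        if stack:
            # Close the most recently opened bracket.
            rec(s + stack[-1], stack[:-1], used)

    rec("", [], 0)
    return results
```

For n pairs there are Catalan(n) · 2^n such strings, since each pair's type is chosen independently of the nesting shape; e.g. n = 2 yields 8 strings, among them "([])" and "()[]".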
3

Ambainis, Andris, and Ashley Montanaro. "Quantum algorithms for search with wildcards and combinatorial group testing". Quantum Information and Computation 14, no. 5&6 (May 2014): 439–53. http://dx.doi.org/10.26421/qic14.5-6-4.

Full text
Abstract
We consider two combinatorial problems. The first we call ``search with wildcards'': given an unknown $n$-bit string $x$, and the ability to check whether any subset of the bits of $x$ is equal to a provided query string, the goal is to output $x$. We give a nearly optimal $O(\sqrt{n} \log n)$ quantum query algorithm for search with wildcards, beating the classical lower bound of $\Omega(n)$ queries. Rather than using amplitude amplification or a quantum walk, our algorithm is ultimately based on the solution to a state discrimination problem. The second problem we consider is combinatorial group testing, which is the task of identifying a subset of at most $k$ special items out of a set of $n$ items, given the ability to make queries of the form ``does the set $S$ contain any special items?''\ for any subset $S$ of the $n$ items. We give a simple quantum algorithm which uses $O(k)$ queries to solve this problem, as compared with the classical lower bound of $\Omega(k \log(n/k))$ queries.
4

Moraglio, A., J. Togelius, and S. Silva. "Geometric Differential Evolution for Combinatorial and Programs Spaces". Evolutionary Computation 21, no. 4 (November 2013): 591–624. http://dx.doi.org/10.1162/evco_a_00099.

Full text
Abstract
Geometric differential evolution (GDE) is a recently introduced formal generalization of traditional differential evolution (DE) that can be used to derive specific differential evolution algorithms for both continuous and combinatorial spaces, retaining the same geometric interpretation of the dynamics of the DE search across representations. In this article, we first review the theory behind the GDE algorithm; we then use this framework to formally derive specific GDE algorithms for search spaces associated with binary strings, permutations, vectors of permutations, and genetic programs. The resulting algorithms are representation-specific differential evolution algorithms that search the target spaces by acting directly on their underlying representations. We present experimental results for each of the new algorithms on a number of well-known problems comprising NK-landscapes, TSP, and Sudoku, for binary strings, permutations, and vectors of permutations. We also present results for the regression, artificial ant, parity, and multiplexer problems within the genetic programming domain. Experiments show that, overall, the new DE algorithms are competitive with well-tuned standard search algorithms.
5

Nazeen, Sumaiya, M. Sohel Rahman, and Rezwana Reaz. "Indeterminate string inference algorithms". Journal of Discrete Algorithms 10 (January 2012): 23–34. http://dx.doi.org/10.1016/j.jda.2011.08.002.

Full text
6

Graf, Alessandra, David G. Harris, and Penny Haxell. "Algorithms for Weighted Independent Transversals and Strong Colouring". ACM Transactions on Algorithms 18, no. 1 (January 31, 2022): 1–16. http://dx.doi.org/10.1145/3474057.

Full text
Abstract
An independent transversal (IT) in a graph with a given vertex partition is an independent set consisting of one vertex in each partition class. Several sufficient conditions are known for the existence of an IT in a given graph and vertex partition, which have been used over the years to solve many combinatorial problems. Some of these IT existence theorems have algorithmic proofs, but there remains a gap between the best existential bounds and the bounds obtainable by efficient algorithms. Recently, Graf and Haxell (2018) described a new (deterministic) algorithm that asymptotically closes this gap, but there are limitations on its applicability. In this article, we develop a randomized algorithm that is much more widely applicable, and demonstrate its use by giving efficient algorithms for two problems concerning the strong chromatic number of graphs.
7

Hamad, Ibrahim Ismael, and Mohammad S. Hasan. "A Review: On using ACO Based Hybrid Algorithms for Path Planning of Multi-Mobile Robotics". International Journal of Interactive Mobile Technologies (iJIM) 14, no. 18 (November 10, 2020): 145. http://dx.doi.org/10.3991/ijim.v14i18.16371.

Full text
Abstract
Path planning for Multi-Mobile Robotic (MMR) systems is a recent combinatorial optimisation problem, and many studies have been published in the last decade to solve it, most of them focused on metaheuristic algorithms. This paper reviews articles on Ant Colony Optimisation (ACO) and its hybrid versions for solving the problem. Dorigo's original ACO algorithm uses exploration and exploitation phases to find the shortest route in a general combinatorial optimisation problem, without touching mapping, localisation, or perception. Due to the properties of MMR, adaptations have been made to ACO algorithms. This review surveys recent studies on upgrades, modifications, and applications of ACO algorithms in order to evaluate the different versions of ACO in the MMR domain. The evaluation considers solution quality, speed of convergence, robustness, scalability, flexibility of MMR and obstacle avoidance, and static and dynamic environment characteristics of the tasks.
8

Saud, Suhair, Halife Kodaz, and İsmail Babaoğlu. "Solving Travelling Salesman Problem by Using Optimization Algorithms". KnE Social Sciences 3, no. 1 (January 15, 2018): 17. http://dx.doi.org/10.18502/kss.v3i1.1394.

Full text
Abstract
This paper presents the performance of different types of optimization techniques used in artificial intelligence (AI): Ant Colony Optimization (ACO), Improved Particle Swarm Optimization with a new operator (IPSO), the Shuffled Frog Leaping Algorithm (SFLA), and a modified shuffled frog leaping algorithm using crossover and mutation operators. They were used to solve the traveling salesman problem (TSP), one of the popular and classical route planning problems, widely known in combinatorial optimization. Combinatorial optimization problems are usually simple to state but very difficult to solve. ACO, PSO, and SFLA are intelligent metaheuristic optimization algorithms with a strong ability to analyze optimization problems and find the optimal solution. They were tested on benchmark problems from TSPLIB and the test results were compared with each other.
Keywords: ant colony optimization, shuffled frog leaping algorithms, travelling salesman problem, improved particle swarm optimization
9

CLÉMENT, JULIEN, and LAURA GIAMBRUNO. "Representing prefix and border tables: results on enumeration". Mathematical Structures in Computer Science 27, no. 2 (May 22, 2015): 257–76. http://dx.doi.org/10.1017/s0960129515000146.

Full text
Abstract
For some text algorithms, the real measure for the complexity analysis is not the string itself but its structure, stored in its prefix table or, equivalently, its border table. In this paper, we define the combinatorial class of prefix lists, namely sequences of integers together with their size, and an injection ψ from the class of prefix tables to the class of prefix lists. We call a valid prefix list the image under ψ of a prefix table. In particular, we describe algorithms converting a prefix/border table to a prefix list, and inverse linear-time algorithms computing, from a prefix list L = ψ(P), two words having P as prefix table, over a minimal-size alphabet and a maximal-size alphabet respectively. We then give a new upper bound on the number of prefix tables for strings of length n (on any alphabet), which is of order (1 + φ)^n (with $\varphi=\frac{1+\sqrt{5}}{2}$ the golden mean), and also present a corresponding lower bound.
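The border table that Clément and Giambruno's prefix lists encode can be computed in linear time by the classical procedure (the failure function of Knuth–Morris–Pratt). This is the textbook construction for background, not the paper's ψ conversion:

```python
def border_array(s):
    """border_array(s)[i] = length of the longest proper border of
    s[0..i], i.e. the longest prefix of s[0..i] that is also its suffix."""
    n = len(s)
    b = [0] * n
    for i in range(1, n):
        k = b[i - 1]
        # Fall back through shorter borders until one can be extended.
        while k > 0 and s[i] != s[k]:
            k = b[k - 1]
        if s[i] == s[k]:
            k += 1
        b[i] = k
    return b
```

For example, border_array("abaab") is [0, 0, 1, 1, 2]: the whole string has border "ab" of length 2. Each character is compared O(1) amortized times, giving O(n) total.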
10

FONTAINE, MARC, STEFAN BURKHARDT, and JUHA KÄRKKÄINEN. "BDD-BASED ANALYSIS OF GAPPED q-GRAM FILTERS". International Journal of Foundations of Computer Science 16, no. 06 (December 2005): 1121–34. http://dx.doi.org/10.1142/s0129054105003698.

Full text
Abstract
Recently, there has been a surge of interest in gapped q-gram filters for approximate string matching. Important design parameters for filters are for example the value of q, the filter-threshold and in particular the shape (aka seed) of the filter. A good choice of parameters can improve the performance of a q-gram filter by orders of magnitude and optimizing these parameters is a nontrivial combinatorial problem. We describe a new method for analyzing gapped q-gram filters. This method is simple and generic. It applies to a variety of filters, overcomes many restrictions that are present in existing algorithms and can easily be extended to new filter variants. To implement our approach, we use an extended version of BDDs (Binary Decision Diagrams), a data structure that efficiently represents sets of bit-strings. In a second step, we define a new class of multi-shape filters and analyze these filters with the BDD-based approach. Experiments show that multi-shape filters can outperform the best single-shape filters, which are currently in use, in many aspects. The BDD-based algorithm is crucial for the design and analysis of these new and better multi-shape filters. Our results apply to the k-mismatches problem, i.e. approximate string matching with Hamming distance.
11

Bensouyad, Meriem, Nousseiba Guidoum, and Djamel-Eddine Saïdouni. "An Efficient Evolutionary Algorithm for Strict Strong Graph Coloring Problem". International Journal of Applied Evolutionary Computation 5, no. 2 (April 2014): 22–36. http://dx.doi.org/10.4018/ijaec.2014040102.

Full text
Abstract
A very promising approach to combinatorial optimization is evolutionary algorithms. As an application, this paper deals with the strict strong graph coloring problem defined by Haddad and Kheddouci (2009), where the authors proposed an exact polynomial-time algorithm for trees. The aim of this paper is to introduce a new evolutionary algorithm for solving this problem on general graphs. It combines an original crossover with a powerful correction operator. Experiments with this new approach are carried out on large DIMACS Challenge benchmark graphs. The results are very competitive with, and even better than, those of state-of-the-art algorithms. To the best of the authors' knowledge, this is the first time an evolutionary algorithm has been proposed to solve the strict strong graph coloring problem.
12

Alamro, Hayam, Mai Alzamel, Costas S. Iliopoulos, Solon P. Pissis, Wing-Kin Sung, and Steven Watts. "Efficient Identification of k-Closed Strings". International Journal of Foundations of Computer Science 31, no. 05 (August 2020): 595–610. http://dx.doi.org/10.1142/s0129054120500288.

Full text
Abstract
A closed string contains a proper factor occurring as both a prefix and a suffix but not elsewhere in the string. Closed strings were introduced by Fici (WORDS 2011) as objects of combinatorial interest. This paper addresses a new problem by extending the closed string problem to the k-closed string problem, in which approximation is permitted up to a number of Hamming distance errors set by the parameter k. We address the problem of deciding whether or not a given string of length n over an integer alphabet is k-closed, and of additionally specifying the border resulting in the string being k-closed. Specifically, we present an [Formula: see text]-time and [Formula: see text]-space algorithm to achieve this, along with the pseudocode of an implementation and proof-of-concept experimental results.
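For intuition about the definition, the exact (k = 0) case can be checked directly: a string is closed if some proper border occurs only as a prefix and a suffix. This quadratic brute-force sketch is for illustration only and is not the paper's efficient algorithm:

```python
def is_closed(w):
    """Naive check that w is closed: w has length <= 1, or some
    nonempty proper border of w occurs nowhere else in w."""
    n = len(w)
    if n <= 1:
        # Single characters (and the empty string) are closed by convention.
        return True
    # Try proper border lengths from longest to shortest; if the longest
    # border occurs internally, every shorter border does too.
    for length in range(n - 1, 0, -1):
        f = w[:length]
        if w.endswith(f):
            # f is a border; collect all its occurrence positions.
            occ = [i for i in range(n - length + 1) if w[i:i + length] == f]
            return occ == [0, n - length]
    return False
```

For example, "abaab" is closed (border "ab" occurs only at the ends), while "abaca" is not ("a" also occurs internally). The check costs O(n^2) time in the worst case.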
13

Alzamel, Mai, Lorraine A. K. Ayad, Giulia Bernardini, Roberto Grossi, Costas S. Iliopoulos, Nadia Pisanti, Solon P. Pissis, and Giovanna Rosone. "Comparing Degenerate Strings". Fundamenta Informaticae 175, no. 1-4 (September 28, 2020): 41–58. http://dx.doi.org/10.3233/fi-2020-1947.

Full text
Abstract
Uncertain sequences are compact representations of sets of similar strings. They highlight common segments by collapsing them, and explicitly represent varying segments by listing all possible options. A generalized degenerate string (GD string) is a type of uncertain sequence. Formally, a GD string Ŝ is a sequence of n sets of strings of total size N, where the ith set contains strings of the same length ki but this length can vary between different sets. We denote by W the sum of these lengths k0, k1, . . . , kn-1. Our main result is an 𝒪(N + M)-time algorithm for deciding whether two GD strings of total sizes N and M, respectively, over an integer alphabet, have a non-empty intersection. This result is based on a combinatorial result of independent interest: although the intersection of two GD strings can be exponential in the total size of the two strings, it can be represented in linear space. We then apply our string comparison tool to devise a simple algorithm for computing all palindromes in Ŝ in 𝒪(min{W, n2}N)-time. We complement this upper bound by showing a similar conditional lower bound for computing maximal palindromes in Ŝ. We also show that a result, which is essentially the same as our string comparison linear-time algorithm, can be obtained by employing an automata-based approach.
14

MOOSA, TANAEEM M., SUMAIYA NAZEEN, M. SOHEL RAHMAN, and REZWANA REAZ. "INFERRING STRINGS FROM COVER ARRAYS". Discrete Mathematics, Algorithms and Applications 05, no. 02 (June 2013): 1360005. http://dx.doi.org/10.1142/s1793830913600057.

Full text
Abstract
Covers, being one of the most popular forms of regularity in strings, have drawn much attention in the relevant literature. In this paper, we focus on the problem of linear-time inference of strings from cover arrays using the smallest alphabet. We present an algorithm that can reconstruct a string x over a binary alphabet whenever a valid cover array C is given as input. We have devised our algorithm using several interesting combinatorial properties of cover arrays, as well as an interesting relation between the border array and the cover array. Our algorithm runs in linear time. The fact that, from any valid cover array, we can infer a binary string x is, in itself, a fascinating result in stringology, and this work may be considered the final step for this particular problem area.
15

Lemström, Kjell, Gonzalo Navarro, and Yoan Pinzon. "Practical algorithms for transposition-invariant string-matching". Journal of Discrete Algorithms 3, no. 2-4 (June 2005): 267–92. http://dx.doi.org/10.1016/j.jda.2004.08.009.

Full text
16

de Andrade, Carlos Eduardo, Rodrigo Franco Toso, Mauricio G. C. Resende, and Flávio Keidi Miyazawa. "Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions". Evolutionary Computation 23, no. 2 (June 2015): 279–307. http://dx.doi.org/10.1162/evco_a_00138.

Full text
Abstract
In this paper we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that uses solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
17

Hyyrö, Heikki. "Bit-parallel approximate string matching algorithms with transposition". Journal of Discrete Algorithms 3, no. 2-4 (June 2005): 215–29. http://dx.doi.org/10.1016/j.jda.2004.08.006.

Full text
18

BRIMKOV, VALENTIN E. "OPTIMALLY FAST CRCW-PRAM TESTING 2D-ARRAYS FOR EXISTENCE OF REPETITIVE PATTERNS". International Journal of Pattern Recognition and Artificial Intelligence 15, no. 07 (November 2001): 1167–82. http://dx.doi.org/10.1142/s0218001401001349.

Full text
Abstract
In classical combinatorial string matching, repetitions and other regularities play a central role. Besides their theoretical importance, repetitions in strings have been found relevant to coding and automata theory, formal languages, data compression, and molecular biology. An important motivation for developing a 2D pattern matching theory is its relation to pattern recognition, image processing, computer vision, and multimedia. Repetitions in 2D arrays have been defined and classified recently [5]. In this paper we present an optimally fast CRCW-PRAM algorithm for testing whether a given n × n array contains repetitions of a certain type. The algorithm takes optimal O(log log n) time with [Formula: see text] processors.
19

Li, Xiu Ying, and Dong Ju Du. "Research on the Application of College Automated Course Scheduling System Based on Genetic Algorithm". Advanced Materials Research 760-762 (September 2013): 1782–85. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1782.

Full text
Abstract
A reasonable curriculum contributes to improving the quality of training and teaching for college students, so using computers, with their speed and processing power, to arrange the curriculum automatically is imperative. Automated course scheduling is a constrained, multi-objective, and intricate combinatorial optimization problem. Genetic algorithms, based on population search, are suitable for complex, nonlinear optimization problems that are difficult for traditional search methods to solve. This paper solves complex automated course scheduling using genetic algorithms.
20

Skobtsov, Yu A. "Modern Immunological Models and Their Applications". Herald of the Bauman Moscow State Technical University. Series Instrument Engineering, no. 3 (140) (September 2022): 61–77. http://dx.doi.org/10.18698/0236-3933-2022-3-61-77.

Full text
Abstract
The paper considers the main models and algorithms of artificial immune systems, which are related to the evolutionary computation paradigm and are used to search for potential solutions, each represented by an artificial lymphocyte. Like an individual in evolutionary computation, an artificial lymphocyte is most often encoded by a binary string or a vector of real numbers. Among the main models of artificial immune systems, the clonal selection algorithm is close to the evolution strategy of evolutionary computing, though it uses more powerful mutation operators and is applied mainly to numerical and combinatorial optimisation problems. The negative selection algorithm is based on the "friend or foe" recognition principle found in the immune system and is the most popular in applications. The paper presents two aspects of the algorithm: 1) the basic concept, that is, expanding the set of "friend" cells; 2) the goal, which is to learn to distinguish between "friend" and "foe" cells while only "friend" cell samples are available. We consider continuous and discrete network models representing regulated networks of molecules and cells. We note the advantages and disadvantages of these models and their applications in computer security, robotics, fraud and malfunction detection, data mining, text analysis, image recognition, bioinformatics, games, planning, etc.
21

Annamalai, Chinnaraji. "Application of Factorial and Binomial Identities in Information, Cybersecurity and Machine Learning". International Journal of Advanced Networking and Applications 14, no. 01 (2022): 5258–60. http://dx.doi.org/10.35444/ijana.2022.14103.

Full text
Abstract
This paper presents applications of the binomial and factorial identities and expansions used in artificial intelligence, machine learning, and cybersecurity. The factorial and binomial identities can serve as methodological advances for various algorithms and applications in information and computational science. Cybersecurity is the practice of protecting computing systems, communication networks, data, and programs from cyber-attacks. Its objective is to reduce the risk of cyber-attacks and protect against the unauthorized exploitation of systems and networks. For this purpose, we need strong cryptographic algorithms such as the RSA algorithm and Elliptic Curve Cryptography. In this connection, computing and combinatorial techniques based on factorials and binomial distributions are developed for researchers working in artificial intelligence and cybersecurity.
22

Adamczyk, Michał, Mai Alzamel, Panagiotis Charalampopoulos, and Jakub Radoszewski. "Palindromic Decompositions with Gaps and Errors". International Journal of Foundations of Computer Science 29, no. 08 (December 2018): 1311–29. http://dx.doi.org/10.1142/s0129054118430050.

Full text
Abstract
Identifying palindromes in sequences has been an interesting line of research in combinatorics on words and in computational biology, after the discovery of the relation of palindromes in the DNA sequence to the HIV virus. Efficient algorithms for the factorization of sequences into palindromes and maximal palindromes have been devised in recent years. We extend these studies by allowing gaps in decompositions and errors in palindromes, and also by imposing a lower bound on the length of acceptable palindromes. We first present an on-line algorithm for obtaining a palindromic decomposition of a string of length n with the minimal total gap length in time [Formula: see text] and space [Formula: see text], where g is the number of allowed gaps in the decomposition. We then consider a decomposition of the string into maximal k-palindromes (i.e. palindromes with k errors under the edit or Hamming distance) and g allowed gaps. We present an algorithm to obtain such a decomposition with the minimal total gap length in time [Formula: see text] and space [Formula: see text]. Finally, we provide an implementation of our algorithms.
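As a simplified baseline for the gapped, error-tolerant decompositions studied above, the classical minimal palindromic factorization (no gaps, no errors) can be computed with a quadratic dynamic program. This is a textbook sketch, not the authors' algorithm:

```python
def min_pal_factorization(s):
    """Minimum number of palindromic factors whose concatenation is s,
    via O(n^2) dynamic programming."""
    n = len(s)
    # pal[i][j] is True iff s[i..j] is a palindrome.
    pal = [[False] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(i, n):
            pal[i][j] = s[i] == s[j] and (j - i < 2 or pal[i + 1][j - 1])
    INF = float("inf")
    # dp[j] = minimum number of palindromes covering the prefix s[:j].
    dp = [0] + [INF] * n
    for j in range(1, n + 1):
        for i in range(j):
            if pal[i][j - 1]:
                dp[j] = min(dp[j], dp[i] + 1)
    return dp[n]
```

For example, "abacab" factors into "a" + "bacab", so the answer is 2; allowing gaps or errors, as in the paper, generalizes exactly this kind of recurrence.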
23

Badr, Elsayed, I. M. Selim, Hoda Mostafa, and Hala Attiya. "An Integer Linear Programming Model for Partially Ordered Sets". Journal of Mathematics 2022 (September 10, 2022): 1–9. http://dx.doi.org/10.1155/2022/7660174.

Full text
Abstract
Linear programming is an important approach used to represent a large class of combinatorial optimization problems. The simplex algorithm solves linear programming problems with exponential worst-case time complexity; fortunately, it solves real-world problems in polynomial time in practice. In this paper, a new Integer Linear Programming Model (ILPM) is proposed for partially ordered sets. Robert Dilworth's decomposition theorem is formulated via the ILPM, and its correctness is proved using the paradigm of strong duality in linear programming. Finally, the ILPM is run on fifteen benchmark partially ordered sets to find their width. The computational experiments show the validity of the proposed model.
24

Simroth, Axel, Denise Holfeld, and Renè Brunsch. "Job Shop Production Planning under Uncertainty: A Monte Carlo Rollout Approach". Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 3 (June 16, 2015): 175. http://dx.doi.org/10.17770/etr2015vol3.617.

Full text
Abstract
The Monte Carlo Rollout method (MCR) is a novel approach for approximately solving combinatorial optimization problems with uncertainties. It combines ideas from rollout algorithms for combinatorial optimization and from Monte Carlo Tree Search in game theory. In this paper, the results of an investigation applying the MCR to a scheduling problem are shown. The quality of the MCR method depends on the model parameters, search depth and search width, which are strongly linked to process parameters. These dependencies are analyzed through different simulations. The paper also deals with the question of whether the lookahead pathology occurs.
25

Dasygenis, Minas, and Kostas Stergiou. "Methods for Parallelizing Constraint Propagation through the Use of Strong Local Consistencies". International Journal on Artificial Intelligence Tools 27, no. 04 (June 2018): 1860002. http://dx.doi.org/10.1142/s0218213018600023.

Full text
Abstract
Constraint programming (CP) is a powerful paradigm for various types of hard combinatorial problems. Constraint propagation techniques, such as arc consistency (AC), are used within solvers to prune inconsistent values from the domains of the variables and narrow down the search space. Local consistencies stronger than AC have the potential to prune the search space even more, but they are not widely used because they incur a high run time penalty in cases where they are unsuccessful. All constraint propagation techniques are sequential by nature, and thus they cannot be scaled up to modern multicore machines. For this reason, research on parallelizing constraint propagation is very limited. Contributing towards this direction, we exploit the parallelization possibilities of modern CPUs in tandem with strong local propagation methods in a novel way. Instead of trying to parallelize constraint propagation algorithms, we propose two search algorithms that apply different propagation methods in parallel. Both algorithms consist of a master search process, which is a typical CP solver, and a number of slave processes, with each one implementing a strong propagation method. The first algorithm runs the different propagators synchronously at each node of the search tree explored in the master process, while the second one can run them asynchronously at different nodes of the search tree. Preliminary experimental results on well-established benchmarks display the promise of our research by illustrating that our algorithms have execution times equal to those of serial solvers, in the worst case, while being faster in most cases.
26

Kim, Yong-Hyuk, Alberto Moraglio, Ahmed Kattan, and Yourim Yoon. "Geometric Generalisation of Surrogate Model-Based Optimisation to Combinatorial and Program Spaces". Mathematical Problems in Engineering 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/184540.

Full text
Abstract
Surrogate models (SMs) can profitably be employed, often in conjunction with evolutionary algorithms, in optimisation in which it is expensive to test candidate solutions. The spatial intuition behind SMs makes them naturally suited to continuous problems, and the only combinatorial problems that have been previously addressed are those with solutions that can be encoded as integer vectors. We show how radial basis functions can provide a generalised SM for combinatorial problems which have a geometric solution representation, through the conversion of that representation to a different metric space. This approach allows an SM to be cast in a natural way for the problem at hand, without ad hoc adaptation to a specific representation. We test this adaptation process on problems involving binary strings, permutations, and tree-based genetic programs.
27

Halappanavar, Mahantesh, John Feo, Oreste Villa, Antonino Tumeo, and Alex Pothen. "Approximate weighted matching on emerging manycore and multithreaded architectures". International Journal of High Performance Computing Applications 26, no. 4 (August 9, 2012): 413–30. http://dx.doi.org/10.1177/1094342012452893.

Full text
Abstract
Graph matching is a prototypical combinatorial problem with many applications in high-performance scientific computing. Optimal algorithms for computing matchings are challenging to parallelize. Approximation algorithms are amenable to parallelization and are therefore important to compute matchings for large-scale problems. Approximation algorithms also generate nearly optimal solutions that are sufficient for many applications. In this paper we present multithreaded algorithms for computing half-approximate weighted matching on state-of-the-art multicore (Intel Nehalem and AMD Magny-Cours), manycore (Nvidia Tesla and Nvidia Fermi), and massively multithreaded (Cray XMT) platforms. We provide two implementations: the first uses shared work queues and is suited for all platforms; and the second implementation, based on dataflow principles, exploits special features available on the Cray XMT. Using a carefully chosen dataset that exhibits characteristics from a wide range of applications, we show scalable performance across different platforms. In particular, for one instance of the input, an R-MAT graph (RMAT-G), we show speedups of about [Formula: see text] on [Formula: see text] cores of an AMD Magny-Cours, [Formula: see text] on [Formula: see text] cores of Intel Nehalem, [Formula: see text] on Nvidia Tesla and [Formula: see text] on Nvidia Fermi relative to one core of Intel Nehalem, and [Formula: see text] on [Formula: see text] processors of Cray XMT. We demonstrate strong as well as weak scaling for graphs with up to a billion edges using up to 12,800 threads. We avoid excessive fine-tuning for each platform and retain the basic structure of the algorithm uniformly across platforms. An exception is the dataflow algorithm designed specifically for the Cray XMT. To the best of the authors' knowledge, this is the first such large-scale study of the half-approximate weighted matching problem on multithreaded platforms. 
Driven by the critical enabling role of combinatorial algorithms such as matching in scientific computing and the emergence of informatics applications, there is a growing demand to support irregular computations on current and future computing platforms. In this context, we evaluate the capability of emerging multithreaded platforms to tolerate latency induced by irregular memory access patterns, and to support fine-grained parallelism via light-weight synchronization mechanisms. By contrasting the architectural features of these platforms against the Cray XMT, which is specifically designed to support irregular memory-intensive applications, we delineate the impact of these choices on performance.
28

Fotakis, Dimitris, Piotr Krysta y Carmine Ventre. "The Power of Verification for Greedy Mechanism Design". Journal of Artificial Intelligence Research 62 (4 de julio de 2018): 459–88. http://dx.doi.org/10.1613/jair.1.11215.

Resumen
Greedy algorithms are known to provide, in polynomial time, near-optimal approximation guarantees for Combinatorial Auctions (CAs) with multidimensional bidders. It is known that truthful greedy-like mechanisms for CAs with multi-minded bidders do not achieve good approximation guarantees. In this work, we seek a deeper understanding of greedy mechanism design and investigate under which general assumptions we can have efficient and truthful greedy mechanisms for CAs. Towards this goal, we use the framework of priority algorithms and weak and strong verification, where the bidders are not allowed to overbid on their winning set or on any subset of this set, respectively. We provide a complete characterization of the power of weak verification showing that it is sufficient and necessary for any greedy fixed priority algorithm to become truthful with the use of money or not, depending on the ordering of the bids. Moreover, we show that strong verification is sufficient and necessary to obtain a 2-approximate truthful mechanism with money, based on a known greedy algorithm, for the problem of submodular CAs in finite bidding domains. Our proof is based on an interesting structural analysis of the strongly connected components of the declaration graph.
29

Feldman, A., G. Provan y A. Van Gemund. "Approximate Model-Based Diagnosis Using Greedy Stochastic Search". Journal of Artificial Intelligence Research 38 (27 de julio de 2010): 371–413. http://dx.doi.org/10.1613/jair.3025.

Resumen
We propose a StochAstic Fault diagnosis AlgoRIthm, called SAFARI, which trades off guarantees of computing minimal diagnoses for computational efficiency. We empirically demonstrate, using the 74XXX and ISCAS-85 suites of benchmark combinatorial circuits, that SAFARI achieves several orders-of-magnitude speedup over two well-known deterministic algorithms, CDA* and HA*, for multiple-fault diagnoses; further, SAFARI can compute a range of multiple-fault diagnoses that CDA* and HA* cannot. We also prove that SAFARI is optimal for a range of propositional fault models, such as the widely-used weak-fault models (models with ignorance of abnormal behavior). We discuss the optimality of SAFARI in a class of strong-fault circuit models with stuck-at failure modes. By modeling the algorithm itself as a Markov chain, we provide exact bounds on the minimality of the diagnosis computed. SAFARI also displays strong anytime behavior, and will return a diagnosis after any non-trivial inference time.
30

Caiafa, Cesar F. y Andrzej Cichocki. "Estimation of Sparse Nonnegative Sources from Noisy Overcomplete Mixtures Using MAP". Neural Computation 21, n.º 12 (diciembre de 2009): 3487–518. http://dx.doi.org/10.1162/neco.2009.08-08-846.

Resumen
In this letter, we propose a new algorithm for estimating sparse nonnegative sources from a set of noisy linear mixtures. In particular, we consider difficult situations with high noise levels and more sources than sensors (underdetermined case). We show that when sources are very sparse in time and overlapped at some locations, they can be recovered even with very low signal-to-noise ratio, and by using many fewer sensors than sources. A theoretical analysis based on Bayesian estimation tools is included showing strong connections with algorithms in related areas of research such as ICA, NMF, FOCUSS, and sparse representation of data with overcomplete dictionaries. Our algorithm uses a Bayesian approach by modeling sparse signals through mixed-state random variables. This new model for priors imposes ℓ0 norm-based sparsity. We start our analysis for the case of nonoverlapped sources (1-sparse), which allows us to simplify the search of the posterior maximum avoiding a combinatorial search. General algorithms for overlapped cases, such as 2-sparse and k-sparse sources, are derived by using the algorithm for 1-sparse signals recursively. Additionally, a combination of our MAP algorithm with the NN-KSVD algorithm is proposed for estimating the mixing matrix and the sources simultaneously in a real blind fashion. A complete set of simulation results is included showing the performance of our algorithm.
31

Funcke, Lena, Tobias Hartung, Beate Heinemann, Karl Jansen, Annabel Kropf, Stefan Kühn, Federico Meloni, David Spataro, Cenk Tüysüz y Yee Chinn Yap. "Studying quantum algorithms for particle track reconstruction in the LUXE experiment". Journal of Physics: Conference Series 2438, n.º 1 (1 de febrero de 2023): 012127. http://dx.doi.org/10.1088/1742-6596/2438/1/012127.

Resumen
The LUXE experiment (LASER Und XFEL Experiment) is a new experiment in planning at DESY Hamburg, which will study Quantum Electrodynamics (QED) at the strong-field frontier. In this regime, QED is non-perturbative. This manifests itself in the creation of physical electron-positron pairs from the QED vacuum. LUXE intends to measure the positron production rate in this unprecedented regime by using, among others, a silicon tracking detector. The large number of expected positrons traversing the sensitive detector layers results in an extremely challenging combinatorial problem, which can become computationally very hard for classical computers. This paper presents a preliminary study to explore the potential of quantum computers to solve this problem and to reconstruct the positron trajectories from the detector energy deposits. The reconstruction problem is formulated in terms of a quadratic unconstrained binary optimisation. Finally, the results from the quantum simulations are discussed and compared with traditional classical track reconstruction algorithms.
32

Djurdjev, Mica, Robert Cep, Dejan Lukic, Aco Antic, Branislav Popovic y Mijodrag Milosevic. "A Genetic Crow Search Algorithm for Optimization of Operation Sequencing in Process Planning". Applied Sciences 11, n.º 5 (24 de febrero de 2021): 1981. http://dx.doi.org/10.3390/app11051981.

Resumen
Computer-aided process planning represents the main link between computer-aided design and computer-aided manufacturing. One of the crucial tasks in computer-aided process planning is the operation sequencing problem. In order to find the optimal process plan, the operation sequencing problem is formulated as an NP-hard combinatorial problem. To solve this problem, a novel genetic crow search approach (GCSA) is proposed in this paper. The traditional CSA is improved by employing genetic strategies such as tournament selection, three-string crossover, shift and resource mutation. Moreover, adaptive crossover and mutation probability coefficients were introduced to improve the local and global search abilities of the GCSA. An operation precedence graph is adopted to represent precedence relationships among features, and vector representation is used to manipulate the data in the Matlab environment. A new nearest-mechanism strategy is added to ensure that the elements of the machine, tool and tool approach direction (TAD) vectors are integer values. A repair strategy to handle precedence constraints is applied after the initialization and shift mutation steps. Minimization of total production cost is used as the optimization criterion to evaluate process plans. To verify the performance of the GCSA, two case studies with different dimensions are carried out and comparisons with traditional and some modern algorithms from the literature are discussed. The results show that the GCSA performs well for the operation sequencing problem in computer-aided process planning.
33

Wang, Zhaocai, Xiaoguang Bao y Tunhua Wu. "A Parallel Bioinspired Algorithm for Chinese Postman Problem Based on Molecular Computing". Computational Intelligence and Neuroscience 2021 (15 de enero de 2021): 1–13. http://dx.doi.org/10.1155/2021/8814947.

Resumen
The Chinese postman problem is a classic resource allocation and scheduling problem, which has been widely used in practice. As a classical nondeterministic polynomial problem, finding an efficient algorithm for it has long been a research goal. In this paper, a new bioinspired algorithm based on molecular computation is proposed to solve the Chinese postman problem; it has the advantages of high computational efficiency, large storage capacity, and strong parallel computing ability. In the calculation, DNA strands are used to properly represent the vertices, edges, and corresponding weights, and then all possible path combinations are effectively generated through biochemical reactions. The feasible solution space is obtained by deleting the infeasible solution strands, and the optimal solution is then extracted algorithmically. The computational complexity and feasibility of the DNA algorithm are then proved. By comparison, it is found that the computational complexity of the DNA algorithm is significantly better than that of previous algorithms. The correctness of the algorithm is verified by simulation experiments. With the maturity of biological operation technology, this algorithm has broad application potential for solving large-scale combinatorial optimization problems.
34

Tutkun, Nedim, Alessandro Burgio, Michal Jasinski, Zbigniew Leonowicz y Elzbieta Jasinska. "Intelligent Scheduling of Smart Home Appliances Based on Demand Response Considering the Cost and Peak-to-Average Ratio in Residential Homes". Energies 14, n.º 24 (17 de diciembre de 2021): 8510. http://dx.doi.org/10.3390/en14248510.

Resumen
With recent developments, smart grids have given residential customers the opportunity to schedule smart home appliances’ operation times so as to simultaneously reduce both the electricity bill and the peak-to-average ratio (PAR) based on demand response, as well as to increase user comfort. This multi-objective combinatorial optimization problem involves constraints and the consumer’s preferences, and solving it is a difficult task. A limited number of investigations have been carried out so far to solve the indicated problems using metaheuristic techniques like particle swarm optimization, mixed-integer linear programming, and the grey wolf and crow search optimization algorithms. Due to the on/off control of smart home appliances, binary-coded genetic algorithms are a well-fitted approach to obtain an optimal solution. The novelty of this work is that the on/off state of each smart home appliance is represented with a binary string which undergoes crossover and mutation operations during the genetic process. Because special binary numbers represent interruptible and uninterruptible smart home appliances, new types of crossover and mutation were developed to find the most convenient solutions to the problem. Although a few works have been carried out using genetic algorithms, the proposed approach is rather distinct from those employed in them. The designed genetic software is run at least ten times, and the best-fitting result is taken as the optimal solution to the indicated problem; to verify convergence, the fitness against the generation is plotted in each run. The simulation results are significantly encouraging and meaningful to residential customers and utilities for the achievement of the goal, and they are feasible for wide-ranging applications of home energy management systems.
35

Bansal, Sulabh y C. Patvardhan. "An Improved Generalized Quantum-Inspired Evolutionary Algorithm for Multiple Knapsack Problem". International Journal of Applied Evolutionary Computation 9, n.º 1 (enero de 2018): 17–51. http://dx.doi.org/10.4018/ijaec.2018010102.

Resumen
This article describes how the 0/1 Multiple Knapsack Problem (MKP), a generalization of the popular 0/1 Knapsack Problem, is NP-hard and harder than the simple Knapsack Problem. The solution of the MKP involves two levels of choice – one for selecting an item to be placed and the other for selecting the knapsack in which it is to be placed. Quantum Inspired Evolutionary Algorithms (QIEAs), a subclass of evolutionary algorithms, have been shown to be effective in solving difficult problems, particularly NP-hard combinatorial optimization problems. QIEAs provide a general framework which needs to be customized according to the requirements of a given problem to obtain good solutions in reasonable time. An existing QIEA for the MKP (QIEA-MKP) is based on a representation where a Q-bit collapses into a binary number. But decimal numbers are required to identify the knapsack where an item is placed. The implementation based on such a representation suffers from the overhead of frequent conversion from binary numbers to decimal numbers and vice versa. The generalized QIEA (GQIEA) is based on a representation where a Q-bit can collapse into an integer, so no conversion between binary and decimal is required. A set of carefully selected features has been incorporated in the proposed GQIEA-MKP to obtain better solutions in less time. A comparison with QIEA-MKP shows that GQIEA-MKP outperforms it in providing better solutions in less time for large-sized MKPs. The proposed generalization can be used to advantage in other combinatorial optimization problems with integer strings as solutions.
36

Lee, Hyun y Chunghun Ha. "Sustainable Integrated Process Planning and Scheduling Optimization Using a Genetic Algorithm with an Integrated Chromosome Representation". Sustainability 11, n.º 2 (18 de enero de 2019): 502. http://dx.doi.org/10.3390/su11020502.

Resumen
This paper proposes a genetic algorithm (GA) to find the pseudo-optimum of integrated process planning and scheduling (IPPS) problems. IPPS is a combinatorial optimization problem of the NP-complete class that aims to solve both process planning and scheduling simultaneously. The complexity of IPPS is very high because it reflects various flexibilities and constraints under flexible manufacturing environments. To cope with it, existing metaheuristics for IPPS have excluded some flexibilities and constraints from consideration or have built a complex structured algorithm. Particularly, GAs have been forced to construct multiple chromosomes to account for various flexibilities, which complicates algorithm procedures and degrades performance. The proposed new integrated chromosome representation makes it possible to incorporate various flexibilities into a single string. This enables the adaptation of a simple and typical GA procedure and previously developed genetic operators. Experiments on a set of benchmark problems showed that the proposed GA improved makespan by an average of 17% against the recently developed metaheuristics for IPPS in much shorter computation times.
37

RAVIKUMAR, BALA. "THE BENFORD-NEWCOMB DISTRIBUTION AND UNAMBIGUOUS CONTEXT-FREE LANGUAGES". International Journal of Foundations of Computer Science 19, n.º 03 (junio de 2008): 717–27. http://dx.doi.org/10.1142/s0129054108005905.

Resumen
For a string w ∈ {0, 1, 2, …, d−1}*, let val_d(w) denote the integer whose base-d representation is the string w, and let MSD_d(x) denote the most significant (leading) digit of a positive integer x when x is written as a base-d integer. Hirvensalo and Karhumäki [9] studied the problem of computing the leading digit in the ternary representation of 2^x and showed that this problem has a polynomial time algorithm. In [16], some applications are presented for the problem of computing the leading digit of A^B for given inputs A and B. In this paper, we study this problem from a formal language point of view. Formally, we consider the language L_{b,d,j} = {w | w ∈ {0, 1, 2, …, d−1}*, 1 ≤ j ≤ 9, MSD_b(d^{val_b(w)}) = j} (and some related classes of languages) and address the question of whether this and some related languages are context-free. Standard pumping lemma proofs seem to be very difficult for these languages. We present a unified and simple combinatorial technique that shows that these languages are not unambiguous context-free languages. The Benford–Newcomb distribution plays a central role in our proofs.
38

Wu, C.-Y. y K.-Y. Tseng. "Stress-based binary differential evolution for topology optimization of structures". Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 224, n.º 2 (1 de febrero de 2010): 443–57. http://dx.doi.org/10.1243/09544062jmes1764.

Resumen
Differential evolution (DE) is a heuristic optimization method used to solve many optimization problems in real-valued search spaces. It has the advantage of incorporating a relatively simple and efficient form of mutation and crossover. However, the operators of DE are based on floating-point representation only, which makes DE difficult to use for solving combinatorial optimization problems. In this article, a modified binary DE is developed using binary bit-string frameworks with a logical-operation binary mutation mechanism. Further, a new stress-based binary mutation mechanism is also proposed to drive the binary DE search towards the optimal topology of the structure with higher performance and fewer objective function evaluations. The numerical results show that the proposed algorithm using stress-based binary mutation has high capability and efficiency in topology optimization of structures.
39

López, Mario Rossainz, Ivo H. Pineda-Torres, Ivan Olmos Pineda y José Arturo Olvera López. "Parallel Object Compositions for the Search of Sequences DNA Strings in the Construction of Gnomes". International Journal of Privacy and Health Information Management 7, n.º 1 (enero de 2019): 18–44. http://dx.doi.org/10.4018/ijphim.2019010102.

Resumen
Within an environment of parallel objects, a structured parallel programming approach combined with the object-orientation paradigm yields a programming method based on high-level parallel compositions (HLPCs) to solve two combinatorial optimization problems: grouping fragments of DNA sequences, and the parallel exhaustive search (PES) of RNA strings that supports the sequencing and assembly of DNA. The pipeline and farm models are presented as HLPCs under the object-orientation paradigm, and with them the creation of a new HLPC is proposed that combines and uses the previous ones to solve the cited problems. Each proposed HLPC contains a set of predefined synchronization constraints between processes, as well as the use of synchronous, asynchronous, and asynchronous-future modes of communication. This article shows the algorithms that solve the problems, their design and implementation as HLPCs, and the performance metrics of their parallel execution using multicores and a video accelerator card.
40

Rodriguez-Fernandez, Angel E., Bernardo Gonzalez-Torres, Ricardo Menchaca-Mendez y Peter F. Stadler. "Clustering Improves the Goemans–Williamson Approximation for the Max-Cut Problem". Computation 8, n.º 3 (26 de agosto de 2020): 75. http://dx.doi.org/10.3390/computation8030075.

Resumen
MAX-CUT is one of the well-studied NP-hard combinatorial optimization problems. It can be formulated as an Integer Quadratic Programming problem and admits a simple relaxation obtained by replacing the integer “spin” variables x_i by unit vectors v_i. The Goemans–Williamson rounding algorithm assigns the solution vectors of the relaxed quadratic program to a corresponding integer spin depending on the sign of the scalar product v_i · r with a random vector r. Here, we investigate whether better graph cuts can be obtained by instead using a more sophisticated clustering algorithm. We answer this question affirmatively. Different initializations of k-means and k-medoids clustering produce better cuts for the graph instances of the most well known benchmark for MAX-CUT. In particular, we found a strong correlation between cluster quality and cut weights during the evolution of the clustering algorithms. Finally, since in general the maximal cut weight of a graph is not known beforehand, we derived instance-specific lower bounds for the approximation ratio, which indicate how close a solution is to the global optimum for a particular instance. For the graphs in our benchmark, the instance-specific lower bounds significantly exceed the Goemans–Williamson guarantee.
41

Grigoreva, Natalia S. "3/2-approximation algorithm for a single machine scheduling problem". Vestnik of Saint Petersburg University. Applied Mathematics. Computer Science. Control Processes 17, n.º 3 (2021): 240–53. http://dx.doi.org/10.21638/11701/spbu10.2021.302.

Resumen
The problem of minimizing the maximum delivery time while scheduling tasks on a single processor is a classical combinatorial optimization problem. Each task u_i must be processed without interruption for t(u_i) time units on the machine, which can process at most one task at a time. Each task u_i has a release time r(u_i), when the task is ready for processing, and a delivery time q(u_i). Its delivery begins immediately after processing has been completed. The objective is to minimize the time by which all jobs are delivered. In the Graham notation this problem is denoted by 1|r_i, q_i|C_max; it has many applications and it is NP-hard in the strong sense. The problem is useful in solving flowshop and jobshop scheduling problems. The goal of this article is to propose a new 3/2-approximation algorithm, which runs in O(n log n) time, for the scheduling problem 1|r_i, q_i|C_max. An example is provided which shows that the bound of 3/2 is tight. To compare the effectiveness of the proposed algorithms, randomly generated problems of up to 5000 tasks were tested.
42

Zhang, Hao, Lihua Dou, Bin Xin, Ruowei Zhang y Qing Wang. "Reconnaissance and Confirmation Task Planning of Multiple Fixed-Wing UAVs with Specific Payloads: A Comparison Study". Journal of Advanced Computational Intelligence and Intelligent Informatics 26, n.º 4 (20 de julio de 2022): 570–80. http://dx.doi.org/10.20965/jaciii.2022.p0570.

Resumen
In this study, the reconnaissance and confirmation task planning of multiple fixed-wing unmanned aerial vehicles (UAV) with specific payloads, which is an NP-hard problem with strong constraints and mixed variables, is decomposed into two subproblems, task allocation with “payload-target” matching constraints, and fast path planning of the UAV group, for which two mathematical models are respectively established. A bi-layer collaborative solution framework is also proposed. The outer layer optimizes the allocation scheme between the UAVs and targets, whereas the inner layer generates the UAV path and evaluates the outer scheme. In the outer layer, a unified encoding based on the grouping and pairing relationship between UAVs and targets is proposed. The corresponding combinatorial mutation operators are then designed for the representative NSGA-II, MOEA/D-AWA, and DMOEA-ϵC algorithms. In the inner layer, an efficient heuristic algorithm is used to solve the path planning of each UAV group. The simulation results verify the effectiveness of the cooperative bi-layer solution scheme and the combined mutation operators. At the same time, compared with the NSGA-II and MOEA/D-AWA, DMOEA-ϵC can obtain a significantly better Pareto front and can weigh the assigned number of UAVs and the total task completion time to generate more diversified reconnaissance confirmation execution schemes.
43

Torshin, I. Yu, O. A. Gromova, T. R. Grishina y V. A. Semenov. "The positive and negative effects of using transdermal nonsteroidal anti-inflammatory drugs: chemoreactome analysis". Neurology, Neuropsychiatry, Psychosomatics 12, n.º 5 (25 de octubre de 2020): 123–29. http://dx.doi.org/10.14412/2074-2711-2020-5-123-129.

Resumen
Transdermal nonsteroidal anti-inflammatory drugs (NSAIDs) are actively used for mild and moderate pain syndrome, muscle contusions and sprains, sports injuries, and the widest range of musculoskeletal diseases. The transdermal administration of NSAIDs aims to create sufficiently high drug concentrations in the lesion focus, provided that the side effects associated with its systemic action are maximally reduced. Objective: to comparatively simulate the effects of transdermal NSAIDs. Material and methods. Chemoreactome profiling of six NSAIDs (meloxicam, diclofenac, ibuprofen, ketoprofen, nimesulide, and dexketoprofen) was performed. The pharmacological capabilities of molecules were analyzed within the framework of a chemoreactome methodology, by comparing the chemical structure of the studied molecule with those of millions of other molecules, the pharmacological properties of which had already been studied in experimental and clinical studies. Training of the artificial intelligence algorithms based on the big data available in the databases PubChem/PHARMGKB, HMDB, STRING, and others was done with multi-level training quality control in the cross-validation framework according to the combinatorial theory of solvability and the theory of feature value classification. Results and discussion. Meloxicam, versus other NSAIDs, accumulates primarily in the muscles and skin and, to a much lesser extent, in heart tissues, lymphocytes, gonads, and cartilage. This drug showed the greatest dose-dependent decongestant effect in the model of edema induced by croton oil. Analysis of the systemic effects of NSAIDs indicated that meloxicam might affect the metabolism of vitamins A, D, PP, and B6 to a lesser extent than other NSAIDs. Conclusion. The chemoreactome analysis has demonstrated that meloxicam as a gel, causing minimal side effects, can be used effectively and over long periods.
44

Rubio, Fernando y Ismael Rodríguez. "Water-Based Metaheuristics: How Water Dynamics Can Help Us to Solve NP-Hard Problems". Complexity 2019 (2 de abril de 2019): 1–13. http://dx.doi.org/10.1155/2019/4034258.

Resumen
Many water-based optimization metaheuristics have been introduced during the last decade, both for combinatorial and for continuous optimization. Despite the strong similarities of these methods in terms of their underlying natural metaphors (most of them emulate, in some way or another, how drops collaboratively form paths down to the sea), in general the resulting algorithms are quite different in terms of their searching approach or their solution construction approach. For instance, each entity may represent a solution by itself or, alternatively, entities may construct solutions by modifying the landscape while moving. A researcher or practitioner could assume that the degree of similarity between two water-based metaheuristics heavily depends on the similarity of the natural water mechanics they emulate, but this is not the case. In order to bring some clarity to this mosaic of apparently related metaheuristics, in this paper we introduce them, explain their mechanics, and highlight their differences.
45

DEVRIENDT, JO, BART BOGAERTS, MAURICE BRUYNOOGHE y MARC DENECKER. "On local domain symmetry for model expansion". Theory and Practice of Logic Programming 16, n.º 5-6 (septiembre de 2016): 636–52. http://dx.doi.org/10.1017/s1471068416000508.

Resumen
Symmetry in combinatorial problems is an extensively studied topic. We continue this research in the context of model expansion problems, with the aim of automating the workflow of detecting and breaking symmetry. We focus on local domain symmetry, which is induced by permutations of domain elements, and which can be detected on a first-order level. As such, our work is a continuation of the symmetry exploitation techniques of model generation systems, while it differs from more recent symmetry breaking techniques in answer set programming which detect symmetry on ground programs. Our main contributions are sufficient conditions for symmetry of model expansion problems, the identification of local domain interchangeability, which can often be broken completely, and efficient symmetry detection algorithms for both local domain interchangeability as well as local domain symmetry in general. Our approach is implemented in the model expansion system IDP, and we present experimental results showcasing the strong and weak points of our approach compared to sbass, a symmetry breaking technique for answer set programming.
46

Farhi, Edward, Jeffrey Goldstone, Sam Gutmann y Leo Zhou. "The Quantum Approximate Optimization Algorithm and the Sherrington-Kirkpatrick Model at Infinite Size". Quantum 6 (7 de julio de 2022): 759. http://dx.doi.org/10.22331/q-2022-07-07-759.

Resumen
The Quantum Approximate Optimization Algorithm (QAOA) is a general-purpose algorithm for combinatorial optimization problems whose performance can only improve with the number of layers p. While QAOA holds promise as an algorithm that can be run on near-term quantum computers, its computational power has not been fully explored. In this work, we study the QAOA applied to the Sherrington-Kirkpatrick (SK) model, which can be understood as energy minimization of n spins with all-to-all random signed couplings. There is a recent classical algorithm by Montanari that, assuming a widely believed conjecture, can efficiently find an approximate solution for a typical instance of the SK model to within (1 − ε) times the ground state energy. We hope to match its performance with the QAOA. Our main result is a novel technique that allows us to evaluate the typical-instance energy of the QAOA applied to the SK model. We produce a formula for the expected value of the energy, as a function of the 2p QAOA parameters, in the infinite size limit that can be evaluated on a computer with O(16^p) complexity. We evaluate the formula up to p = 12, and find that the QAOA at p = 11 outperforms the standard semidefinite programming algorithm. Moreover, we show concentration: with probability tending to one as n → ∞, measurements of the QAOA will produce strings whose energies concentrate at our calculated value. As an algorithm running on a quantum computer, there is no need to search for optimal parameters on an instance-by-instance basis since we can determine them in advance. What we have here is a new framework for analyzing the QAOA, and our techniques can be of broad interest for evaluating its performance on more general problems where classical algorithms may fail.
APA, Harvard, Vancouver, ISO, and other styles
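The SK cost function described in the abstract is simple to state: n spins z_i = ±1 with random couplings J_ij, and energy C(z) = n^(-1/2) Σ_{i<j} J_ij z_i z_j. A minimal Python sketch; the instance, and the 1/√n normalization convention, are illustrative assumptions rather than taken from the paper:

```python
def sk_energy(J, z):
    """Energy of spin configuration z (entries +1/-1) under SK couplings J:
    C(z) = n**-0.5 * sum_{i<j} J[i][j] * z[i] * z[j].
    The 1/sqrt(n) scaling is one common convention (an assumption here)."""
    n = len(z)
    total = sum(J[i][j] * z[i] * z[j]
                for i in range(n) for j in range(i + 1, n))
    return total / n ** 0.5

# Tiny fixed instance (illustrative; true SK couplings are random +/-1 or Gaussian).
J = {0: {1: 1, 2: -1}, 1: {2: 1}}

# Brute-force the ground state over all 2^3 spin configurations.
best = min(([s0, s1, s2]
            for s0 in (-1, 1) for s1 in (-1, 1) for s2 in (-1, 1)),
           key=lambda z: sk_energy(J, z))
```

The paper's contribution is precisely to avoid this exponential brute force: its formula evaluates the typical-instance QAOA energy directly in the n→∞ limit.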
47

Chistikov, Dmitry, Rupak Majumdar, and Philipp Schepper. "Subcubic certificates for CFL reachability". Proceedings of the ACM on Programming Languages 6, POPL (January 16, 2022): 1–29. http://dx.doi.org/10.1145/3498702.

Full text
Abstract
Many problems in interprocedural program analysis can be modeled as the context-free language (CFL) reachability problem on graphs and can be solved in cubic time. Despite years of efforts, there are no known truly sub-cubic algorithms for this problem. We study the related certification task: given an instance of CFL reachability, are there small and efficiently checkable certificates for the existence and for the non-existence of a path? We show that, in both scenarios, there exist succinct certificates (O(n^2) in the size of the problem) and these certificates can be checked in subcubic (matrix multiplication) time. The certificates are based on grammar-based compression of paths (for reachability) and on invariants represented as matrix inequalities (for non-reachability). Thus, CFL reachability lies in nondeterministic and co-nondeterministic subcubic time. A natural question is whether faster algorithms for CFL reachability will lead to faster algorithms for combinatorial problems such as Boolean satisfiability (SAT). As a consequence of our certification results, we show that there cannot be a fine-grained reduction from SAT to CFL reachability for a conditional lower bound stronger than n^ω, unless the nondeterministic strong exponential time hypothesis (NSETH) fails. In a nutshell, reductions from SAT are unlikely to explain the cubic bottleneck for CFL reachability. Our results extend to related subcubic equivalent problems: pushdown reachability and 2NPDA recognition; as well as to all-pairs CFL reachability. For example, we describe succinct certificates for pushdown non-reachability (inductive invariants) and observe that they can be checked in matrix multiplication time. We also extract a new hardest 2NPDA language, capturing the “hard core” of all these problems.
APA, Harvard, Vancouver, ISO, and other styles
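The cubic baseline the abstract refers to is the classic saturation procedure: with the grammar in normal form (rules A → a and A → B C), keep deriving a new A-labeled edge (u, w) whenever a B-edge (u, v) and a C-edge (v, w) are known. A minimal worklist sketch; the names and the example grammar are illustrative, and for brevity it omits the endpoint indexing that makes the tuned version run in O(n^3):

```python
from collections import deque

def cfl_reachable(edges, unary, binary, start):
    """Saturation-based CFL reachability (worklist sketch).
    edges:  iterable of (u, terminal, v) labeled graph edges
    unary:  dict terminal -> set of nonterminals A with rule A -> terminal
    binary: list of rules (A, B, C) meaning A -> B C
    Returns all pairs (u, v) connected by a path deriving `start`."""
    facts = set()            # known (nonterminal, u, v) reachability facts
    work = deque()

    def add(fact):
        if fact not in facts:
            facts.add(fact)
            work.append(fact)

    for u, a, v in edges:
        for A in unary.get(a, ()):
            add((A, u, v))

    while work:
        X, u, v = work.popleft()
        for A, B, C in binary:
            if X == B:       # combine (B, u, v) with any known (C, v, w)
                for Y, u2, v2 in list(facts):
                    if Y == C and u2 == v:
                        add((A, u, v2))
            if X == C:       # combine any known (B, w, u) with (C, u, v)
                for Y, u2, v2 in list(facts):
                    if Y == B and v2 == u:
                        add((A, u2, v))
    return {(u, v) for A, u, v in facts if A == start}
```

For example, with the grammar S → A B | A X, X → S B, A → a, B → b (deriving a^k b^k) on the path a·a·b·b, the S-reachable pairs are exactly the two balanced spans.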
48

Qian, Chao, Yang Yu, and Zhi-Hua Zhou. "Analyzing Evolutionary Optimization in Noisy Environments". Evolutionary Computation 26, no. 1 (March 2018): 1–41. http://dx.doi.org/10.1162/evco_a_00170.

Full text
Abstract
Many optimization tasks must be handled in noisy environments, where the exact evaluation of a solution cannot be obtained, only a noisy one. For optimization of noisy tasks, evolutionary algorithms (EAs), a type of stochastic metaheuristic search algorithm, have been widely and successfully applied. Previous work mainly focuses on the empirical study and design of EAs for optimization under noisy conditions, while the theoretical understanding is largely insufficient. In this study, we first investigate how noisy fitness can affect the running time of EAs. Two kinds of noise-helpful problems are identified, on which the EAs will run faster in the presence of noise, and thus the noise should not be handled. Second, on a representative noise-harmful problem in which the noise has a strong negative effect, we examine two commonly employed mechanisms dealing with noise in EAs: reevaluation and threshold selection. The analysis discloses that using these two strategies simultaneously is effective for the one-bit noise but ineffective for the asymmetric one-bit noise. Smooth threshold selection is then proposed, which can be proved to be an effective strategy to further improve the noise tolerance ability in the problem. We then complement the theoretical analysis by experiments on both synthetic problems and two combinatorial problems, the minimum spanning tree and the maximum matching. The experimental results agree with the theoretical findings and also show that the proposed smooth threshold selection can deal with the noise better.
APA, Harvard, Vancouver, ISO, and other styles
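The two mechanisms named in the abstract can be made concrete: reevaluation draws fresh noisy fitness samples for both parent and offspring every generation, and threshold selection accepts the offspring only if it appears better by at least a margin τ. A minimal (1+1)-EA sketch on OneMax with one-bit noise, as an illustration of the mechanisms rather than the paper's exact algorithm or parameters:

```python
import random

def onemax(x):
    return sum(x)

def noisy_onemax(x, rng, noise_prob):
    """One-bit noise: with probability noise_prob, evaluate a copy of x
    with one uniformly chosen bit flipped."""
    if rng.random() < noise_prob:
        y = list(x)
        i = rng.randrange(len(y))
        y[i] = 1 - y[i]
        return onemax(y)
    return onemax(x)

def one_plus_one_ea(n, iters, noise_prob, threshold, seed=0):
    """(1+1)-EA with reevaluation and threshold selection (a sketch).
    Both parent and offspring get fresh noisy evaluations each generation;
    the offspring replaces the parent only if it looks better by at
    least `threshold`."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        # standard bit-flip mutation with rate 1/n
        child = [1 - b if rng.random() < 1.0 / n else b for b in parent]
        fp = noisy_onemax(parent, rng, noise_prob)   # reevaluation
        fc = noisy_onemax(child, rng, noise_prob)
        if fc >= fp + threshold:                     # threshold selection
            parent = child
    return parent
```

With the noise switched off this reduces to a plain elitist (1+1)-EA; with one-bit noise, the threshold prevents the parent from being replaced by a spuriously better-looking offspring.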
49

Sutton, Andrew M., Francisco Chicano, and L. Darrell Whitley. "Fitness Function Distributions over Generalized Search Neighborhoods in the q-ary Hypercube". Evolutionary Computation 21, no. 4 (November 2013): 561–90. http://dx.doi.org/10.1162/evco_a_00098.

Full text
Abstract
The frequency distribution of a fitness function over regions of its domain is an important quantity for understanding the behavior of algorithms that employ randomized sampling to search the function. In general, exactly characterizing this distribution is at least as hard as the search problem, since the solutions typically live in the tails of the distribution. However, in some cases it is possible to efficiently retrieve a collection of quantities (called moments) that describe the distribution. In this paper, we consider functions of bounded epistasis that are defined over length-n strings from a finite alphabet of cardinality q. Many problems in combinatorial optimization can be specified as search problems over functions of this type. Employing Fourier analysis of functions over finite groups, we derive an efficient method for computing the exact moments of the frequency distribution of fitness functions over Hamming regions of the q-ary hypercube. We then use this approach to derive equations that describe the expected fitness of the offspring of any point undergoing uniform mutation. The results we present provide insight into the statistical structure of the fitness function for a number of combinatorial problems. For the graph coloring problem, we apply our results to efficiently compute the average number of constraint violations that lie within a certain number of steps of any coloring. We derive an expression for the mutation rate that maximizes the expected fitness of an offspring at each fitness level. We also apply the results to the slightly more complex frequency assignment problem, a relevant application in the domain of the telecommunications industry. As with the graph coloring problem, we provide formulas for the average value of the fitness function in Hamming regions around a solution and the expectation-optimal mutation rate.
APA, Harvard, Vancouver, ISO, and other styles
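The graph coloring application has a simple zeroth-order instance: a uniformly random q-coloring makes each edge monochromatic with probability 1/q, so the mean violation count over the entire q-ary hypercube is |E|/q. A small sketch verifying this by exhaustive enumeration on a triangle (the example graph is illustrative):

```python
from itertools import product

def violations(coloring, edges):
    """Number of edges whose endpoints share a color (the fitness)."""
    return sum(coloring[u] == coloring[v] for u, v in edges)

def average_violations(n, q, edges):
    """Exact mean of the violation count over all q**n colorings
    of n vertices (feasible only for tiny instances)."""
    total = sum(violations(c, edges) for c in product(range(q), repeat=n))
    return total / q ** n

# Triangle: |E| = 3, so the mean should be 3/q for any q.
edges = [(0, 1), (1, 2), (0, 2)]
```

The paper's moment machinery recovers such averages (and higher moments) over Hamming regions around a given point without enumeration.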
50

Koutris, Paraschos, and Shaleen Deep. "The Fine-Grained Complexity of CFL Reachability". Proceedings of the ACM on Programming Languages 7, POPL (January 9, 2023): 1713–39. http://dx.doi.org/10.1145/3571252.

Full text
Abstract
Many problems in static program analysis can be modeled as the context-free language (CFL) reachability problem on directed labeled graphs. The CFL reachability problem can be generally solved in time O(n^3), where n is the number of vertices in the graph, with some specific cases that can be solved faster. In this work, we ask the following question: given a specific CFL, what is the exact exponent in the monomial of the running time? In other words, for which cases do we have linear, quadratic or cubic algorithms, and are there problems with intermediate runtimes? This question is inspired by recent efforts to classify classic problems in terms of their exact polynomial complexity, known as fine-grained complexity. Although recent efforts have shown some conditional lower bounds (mostly for the class of combinatorial algorithms), a general picture of the fine-grained complexity landscape for CFL reachability is missing. Our main contribution is lower bound results that pinpoint the exact running time of several classes of CFLs or specific CFLs under widely believed lower bound conjectures (e.g., Boolean Matrix Multiplication, k-Clique, APSP, 3SUM). We particularly focus on the family of Dyck-k languages (which are strings with well-matched parentheses), a fundamental class of CFL reachability problems. Remarkably, we are able to show a Ω(n^2.5) lower bound for Dyck-2 reachability, which to the best of our knowledge is the first super-quadratic lower bound that applies to all algorithms, and shows that CFL reachability is strictly harder than Boolean Matrix Multiplication. We also present new lower bounds for the case of sparse input graphs where the number of edges m is the input parameter, a common setting in the database literature. For this setting, we show a cubic lower bound for Andersen’s Pointer Analysis which significantly strengthens prior known results.
APA, Harvard, Vancouver, ISO, and other styles
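As strings, the Dyck-k languages central to the lower bound above are easy to recognize: a stack decides membership in linear time (the hardness lies in the graph reachability formulation, not in string recognition). A minimal Dyck-2 recognizer sketch, using '(' ')' and '[' ']' as the two parenthesis types:

```python
def is_dyck2(s):
    """Linear-time membership test for Dyck-2: well-matched strings
    over two parenthesis types."""
    match = {')': '(', ']': '['}
    stack = []
    for ch in s:
        if ch in '([':
            stack.append(ch)
        elif ch in match:
            if not stack or stack.pop() != match[ch]:
                return False           # wrong type, or nothing to close
        else:
            return False               # symbol outside the alphabet
    return not stack                   # everything opened must be closed
```

Note that "([)]" is rejected: the two types must nest, not merely balance independently, which is what makes Dyck-2 reachability strictly harder than the one-type case.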