Journal articles on the topic 'Local search heuristics'

Consult the top 50 journal articles for your research on the topic 'Local search heuristics.'


1

Veerapaneni, Rishi, Muhammad Suhail Saleem, and Maxim Likhachev. "Learning Local Heuristics for Search-Based Navigation Planning." Proceedings of the International Conference on Automated Planning and Scheduling 33, no. 1 (July 1, 2023): 634–38. http://dx.doi.org/10.1609/icaps.v33i1.27245.

Abstract:
Graph search planning algorithms for navigation typically rely heavily on heuristics to plan paths efficiently. As a result, while such approaches require no training phase and can directly plan long-horizon paths, they often require careful hand-designing of informative heuristic functions. Recent works have started bypassing hand-designed heuristics by using machine learning to learn heuristic functions that guide the search algorithm. While these methods can learn complex heuristic functions from raw input, they i) require significant training and ii) do not generalize well to new maps and longer-horizon paths. Our contribution is showing that instead of learning a global heuristic estimate, we can define and learn local heuristics, which results in a significantly smaller learning problem and improves generalization. We show that using such local heuristics can reduce node expansions by 2-20x while maintaining bounded suboptimality, that they are easy to train, and that they generalize to new maps and long-horizon plans.
2

Wilt, Christopher, and Wheeler Ruml. "Speedy Versus Greedy Search." Proceedings of the International Symposium on Combinatorial Search 5, no. 1 (September 1, 2021): 184–92. http://dx.doi.org/10.1609/socs.v5i1.18320.

Abstract:
In work on satisficing search, there has been substantial attention devoted to how to solve problems associated with local minima or plateaus in the heuristic function. One technique that has been shown to be quite promising is using an alternative heuristic function that does not estimate cost-to-go, but rather estimates distance-to-go. Empirical results generally favor using the distance-to-go heuristic over the cost-to-go heuristic, but there is currently little beyond intuition to explain the difference. We begin by empirically showing that the success of the distance-to-go heuristic appears related to its having smaller local minima. We then discuss a reasonable theoretical model of heuristics and show that, under this model, the expected size of local minima is higher for a cost-to-go heuristic than a distance-to-go heuristic, offering a possible explanation as to why distance-to-go heuristics tend to outperform cost-to-go heuristics.
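The two heuristic families the abstract contrasts plug into the same greedy best-first loop; only the evaluation function h changes. As a rough illustration (the grid domain and the Manhattan distance-to-go heuristic below are my own choices, not from the paper):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the node with smallest h.
    Returns the path found and the number of node expansions."""
    frontier = [(h(start), start)]
    parent = {start: None}
    expansions = 0
    while frontier:
        _, node = heapq.heappop(frontier)
        expansions += 1
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], expansions
        for nxt in neighbors(node):
            if nxt not in parent:        # first discovery wins
                parent[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None, expansions

# Illustrative 5x5 4-connected grid; swapping h between a distance-to-go
# estimate (steps remaining) and a cost-to-go estimate is the only change
# the compared heuristics require.
GOAL = (4, 4)

def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

h_dist = lambda p: abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])  # distance-to-go

path, n_expanded = greedy_best_first((0, 0), GOAL, neighbors, h_dist)
```

On a uniform-cost grid the two heuristics coincide; the paper's point is about domains where edge costs vary, so distance-to-go and cost-to-go rank nodes differently.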
3

HASEGAWA, Manabu. "Potentially Local Search in Local Search Heuristics." Proceedings of The Computational Mechanics Conference 2004.17 (2004): 403–4. http://dx.doi.org/10.1299/jsmecmd.2004.17.403.

4

Soria-Alcaraz, Jorge A., Gabriela Ochoa, Andres Espinal, Marco A. Sotelo-Figueroa, Manuel Ornelas-Rodriguez, and Horacio Rostro-Gonzalez. "A Methodology for Classifying Search Operators as Intensification or Diversification Heuristics." Complexity 2020 (February 13, 2020): 1–10. http://dx.doi.org/10.1155/2020/2871835.

Abstract:
Selection hyper-heuristics are generic search tools that dynamically choose, from a given pool, the most promising operator (low-level heuristic) to apply at each iteration of the search process. The performance of these methods depends on the quality of the heuristic pool. Two types of heuristics can be part of the pool: diversification heuristics, which help to escape from local optima, and intensification heuristics, which effectively exploit promising regions in the vicinity of good solutions. An effective search strategy needs a balance between these two types of heuristic. However, it is not straightforward to categorize an operator as an intensification or a diversification heuristic in complex domains. Therefore, we propose an automated methodology for this classification. This brings methodological rigor to the configuration of an iterated local search hyper-heuristic featuring diversification and intensification stages. The methodology considers the empirical ranking of the heuristics based on an estimation of their capacity to either diversify or intensify the search. We incorporate the proposed approach into a state-of-the-art hyper-heuristic solving two domains: course timetabling and vehicle routing. Our results indicate improved performance, including new best-known solutions for the course timetabling problem.
5

Adubi, Stephen A., Olufunke O. Oladipupo, and Oludayo O. Olugbara. "Evolutionary Algorithm-Based Iterated Local Search Hyper-Heuristic for Combinatorial Optimization Problems." Algorithms 15, no. 11 (October 31, 2022): 405. http://dx.doi.org/10.3390/a15110405.

Abstract:
Hyper-heuristics are widely used for solving numerous complex computational search problems because of their intrinsic capability to generalize across problem domains. The fair-share iterated local search is one of the most successful hyper-heuristics for cross-domain search with outstanding performances on six problem domains. However, it has recorded low performances on three supplementary problems, namely knapsack, quadratic assignment, and maximum-cut problems, which undermines its credibility across problem domains. The purpose of this study was to design an evolutionary algorithm-based iterated local search (EA-ILS) hyper-heuristic that applies a novel mutation operator to control the selection of perturbative low-level heuristics in searching for optimal sequences for performance improvement. The algorithm was compared to existing ones in the hyper-heuristics flexible (HyFlex) framework to demonstrate its performance across the problem domains of knapsack, quadratic assignment, and maximum cut. The comparative results have shown that the EA-ILS hyper-heuristic can obtain the best median objective function values on 22 out of 30 instances in the HyFlex framework. Moreover, it has achieved superiority in its generalization capability when compared to the reported top-performing hyper-heuristic algorithms.
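Both the fair-share method and EA-ILS build on the basic iterated-local-search loop: descend to a local optimum, perturb, re-optimise, keep the better solution. A generic skeleton of that loop is sketched below; the bit-string toy domain and all names are illustrative, not from the paper:

```python
import random

def iterated_local_search(init, local_search, perturb, cost, iters=100, seed=0):
    """Generic iterated local search: descend, perturb, re-descend, and
    keep the better of the incumbent and the new local optimum."""
    rng = random.Random(seed)
    best = local_search(init)
    for _ in range(iters):
        candidate = local_search(perturb(best, rng))
        if cost(candidate) <= cost(best):
            best = candidate
    return best

# Toy domain: minimise the number of ones in a bit string (illustrative).
cost = lambda s: sum(s)

def local_search(s):
    # First-improvement bit-flip descent: flipping any 1 to 0 improves cost.
    s = list(s)
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            if s[i] == 1:
                s[i] = 0
                improved = True
    return s

def perturb(s, rng):
    # A single random bit flip serves as the perturbation step.
    s = list(s)
    s[rng.randrange(len(s))] ^= 1
    return s

best = iterated_local_search([1] * 8, local_search, perturb, cost, iters=10)
```

In a hyper-heuristic setting such as EA-ILS, the `perturb` step is itself selected from a pool of low-level heuristics rather than fixed.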
6

Davidov, D., and S. Markovitch. "Multiple-Goal Heuristic Search." Journal of Artificial Intelligence Research 26 (August 25, 2006): 417–51. http://dx.doi.org/10.1613/jair.1940.

Abstract:
This paper presents a new framework for anytime heuristic search where the task is to achieve as many goals as possible within the allocated resources. We show the inadequacy of traditional distance-estimation heuristics for tasks of this type and present alternative heuristics that are more appropriate for multiple-goal search. In particular, we introduce the marginal-utility heuristic, which estimates the cost and the benefit of exploring a subtree below a search node. We developed two methods for online learning of the marginal-utility heuristic. One is based on local similarity of the partial marginal utility of sibling nodes, and the other generalizes marginal-utility over the state feature space. We apply our adaptive and non-adaptive multiple-goal search algorithms to several problems, including focused crawling, and show their superiority over existing methods.
7

Abdul-Razaq, Tariq, Hanan Chachan, and Faez Ali. "Modified Heuristics for Scheduling in Flow Shop to Minimize Makespan." Journal of Al-Rafidain University College For Sciences (Print ISSN: 1681-6870, Online ISSN: 2790-2293), no. 2 (October 19, 2021): 1–20. http://dx.doi.org/10.55562/jrucs.v30i2.361.

Abstract:
The NP-completeness of flow shop scheduling problems has been discussed for many years; hence, many heuristics have been proposed to obtain solutions of good quality with small computational effort. The CDS (Campbell et al.) and NEH (Nawaz, Enscore and Ham) heuristics are efficient even compared with meta-heuristics such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA). This paper discusses some existing methods and suggests new developments of flow shop scheduling methods for minimizing makespan. Our main objective is, on the one hand, to improve efficient heuristics that outperform existing heuristics from the literature and yield solutions within a short time, such as the Simple Heuristic Methods (SHM) and the First Heuristic Decreasing Arrange (DR); on the other hand, we apply two local search methods, the GA and PSO algorithms, to flow shop problems. An experimental analysis of the performance of the proposed heuristics and local search methods against the relatively efficient existing heuristics is given.
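The NEH heuristic named above is concrete enough to sketch: order jobs by decreasing total processing time, then insert each job at the position in the partial sequence that minimises makespan. The instance data below is illustrative, not from the paper:

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine for permutation
    seq, where p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    comp = [0.0] * m                  # completion time of the last job per machine
    for j in seq:
        prev = 0.0
        for k in range(m):
            prev = max(comp[k], prev) + p[j][k]
            comp[k] = prev
    return comp[-1]

def neh(p):
    """NEH constructive heuristic (Nawaz, Enscore and Ham)."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # longest total time first
    seq = [jobs[0]]
    for j in jobs[1:]:
        # Try every insertion position; keep the sequence with smallest makespan.
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

# Small illustrative instance: 4 jobs, 3 machines.
p = [[3, 4, 6], [5, 4, 2], [1, 2, 3], [6, 5, 4]]
order = neh(p)
```

The makespan recurrence above is the standard one for permutation flow shops: a job starts on machine k only after it finishes on machine k-1 and machine k is free.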
8

Sobrino, D. R. Delgado, Oliver Moravčik, D. Caganová, and P. Kostal. "Hybrid Iterative Local Search Heuristic with a Multiple Criteria Approach for the Vehicle Routing Problem." Advanced Materials Research 383-390 (November 2011): 4560–67. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.4560.

Abstract:
This paper presents a Hybrid Iterative Local Search Heuristic and its framework, whose aim is to help escape from local optima once a construction heuristic for the VRP has become trapped. The approach was mainly inspired by basic and modified versions of successfully applied heuristics such as Variable Neighborhood Search (VNS) and Granular Local Search (GLS). Unlike most of the local search heuristics reviewed, which consider a single decision criterion, multiple optimization criteria are considered throughout the local search, and a Multiple Criteria Threshold is proposed that defines which arcs must be included in the candidate list to explore; this considerably reduces the search area and has a major impact on client satisfaction. The proposal is supported by a literature review that takes into account gaps and achievements in the state of the art and practice.
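For context, the VNS scheme the authors draw on can be sketched in its reduced form (shaking across neighbourhoods of growing size, without the inner descent step of full VNS); the continuous toy domain and all names below are mine, not the paper's:

```python
import random

def reduced_vns(x0, neighbourhoods, cost, iters=500, seed=0):
    """Reduced-VNS sketch: draw a random neighbour from neighbourhood k
    (shaking); on improvement re-centre and return to the first, smallest
    neighbourhood, otherwise move on to the next, larger one."""
    rng = random.Random(seed)
    best = x0
    for _ in range(iters):
        k = 0
        while k < len(neighbourhoods):
            candidate = neighbourhoods[k](best, rng)   # shaking step
            if cost(candidate) < cost(best):
                best, k = candidate, 0                 # back to N_1
            else:
                k += 1
    return best

# Toy continuous domain: minimise x**2 with two jump sizes (illustrative).
cost = lambda x: x * x
small = lambda x, rng: x + rng.uniform(-0.1, 0.1)
large = lambda x, rng: x + rng.uniform(-1.0, 1.0)
best = reduced_vns(5.0, [small, large], cost)
```

The systematic change of neighbourhood size is what lets VNS-style methods step over local optima that any single fixed neighbourhood would be stuck in.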
9

Nakhost, Hootan, Jörg Hoffmann, and Martin Müller. "Improving Local Search for Resource-Constrained Planning." Proceedings of the International Symposium on Combinatorial Search 1, no. 1 (August 25, 2010): 81–82. http://dx.doi.org/10.1609/socs.v1i1.18166.

Abstract:
A ubiquitous feature of planning problems — problems involving the automatic generation of action sequences for attaining a given goal — is the need to economize limited resources such as fuel or money. While heuristic search, mostly based on standard algorithms such as A*, is currently the superior method for most varieties of planning, its ability to solve critically resource-constrained problems is limited: current planning heuristics are bad at dealing with this kind of structure. To address this, one can try to devise better heuristics. An alternative approach is to change the nature of the search instead. Local search has received some attention in planning, but not with a specific focus on how to deal with limited resources. We herein begin to fill this gap. We highlight the limitations of previous methods, and we devise a new improvement (smart restarts) to the local search method of a previously proposed planner (Arvand). Systematic experiments show how performance depends on problem structure and search parameters. In particular, we show that our new method can outperform previous planners by a large margin.
10

Lissovoi, Andrei, Pietro S. Oliveto, and John Alasdair Warwicker. "Simple Hyper-Heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes." Evolutionary Computation 28, no. 3 (September 2020): 437–61. http://dx.doi.org/10.1162/evco_a_00258.

Abstract:
Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the most simple HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the “simple” Random Gradient HH so success can be measured over a fixed period of time [Formula: see text], instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. 
In particular, with access to [Formula: see text] low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the [Formula: see text] heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to [Formula: see text]) and shed some light on the best choices for the parameter [Formula: see text] in various situations.
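The mechanism analysed above can be sketched roughly: the low-level heuristics are randomised local search variants RLS_k that flip k bits, and the hyper-heuristic keeps the current k as long as it produces an improvement within a fixed trial period, re-selecting k at random otherwise. The code below is my simplified reading of that Generalised Random Gradient idea on LeadingOnes, not the paper's exact algorithm; parameter names are illustrative:

```python
import random

def leading_ones(x):
    """LEADINGONES: number of consecutive ones at the start of the string."""
    count = 0
    for b in x:
        if b != 1:
            break
        count += 1
    return count

def rls_k(x, k, rng):
    """Low-level heuristic RLS_k: flip k distinct, randomly chosen bits."""
    y = list(x)
    for i in rng.sample(range(len(y)), k):
        y[i] ^= 1
    return y

def generalised_random_gradient(n=30, ks=(1, 2), period=20, seed=1):
    """Keep the current neighbourhood size k while it yields an improvement
    within `period` trials; otherwise pick a new k uniformly at random."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    k = rng.choice(ks)
    while leading_ones(x) < n:
        improved = False
        for _ in range(period):
            y = rls_k(x, k, rng)
            if leading_ones(y) > leading_ones(x):   # elitist acceptance
                x, improved = y, True
                break
        if not improved:
            k = rng.choice(ks)                      # random re-selection of k
    return x

x = generalised_random_gradient(n=12, period=50)
```

The paper's contribution is proving that measuring success over a period, rather than a single step, is what allows this scheme to learn the optimal k.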
11

Burke, Edmund K., Matthew R. Hyde, and Graham Kendall. "Grammatical Evolution of Local Search Heuristics." IEEE Transactions on Evolutionary Computation 16, no. 3 (June 2012): 406–17. http://dx.doi.org/10.1109/tevc.2011.2160401.

12

Pesant, G., C. Quimper, and A. Zanarini. "Counting-Based Search: Branching Heuristics for Constraint Satisfaction Problems." Journal of Artificial Intelligence Research 43 (February 26, 2012): 173–210. http://dx.doi.org/10.1613/jair.3463.

Abstract:
Designing a search heuristic for constraint programming that is reliable across problem domains has been an important research topic in recent years. This paper concentrates on one family of candidates: counting-based search. Such heuristics seek to make branching decisions that preserve most of the solutions by determining what proportion of solutions to each individual constraint agree with that decision. Whereas most generic search heuristics in constraint programming rely on local information at the level of the individual variable, our search heuristics are based on more global information at the constraint level. We design several algorithms that are used to count the number of solutions to specific families of constraints and propose some search heuristics exploiting such information. The experimental part of the paper considers eight problem domains ranging from well-established benchmark puzzles to rostering and sport scheduling. An initial empirical analysis identifies heuristic maxSD as a robust candidate among our proposals. We then evaluate the latter against the state of the art, including the latest generic search heuristics, restarts, and discrepancy-based tree traversals. Experimental results show that counting-based search generally outperforms other generic heuristics.
13

Chatterjee, Ishani, Maxim Likhachev, Ashwin Khadke, and Manuela Veloso. "Speeding Up Search-Based Motion Planning via Conservative Heuristics." Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 674–79. http://dx.doi.org/10.1609/icaps.v29i1.3535.

Abstract:
Weighted A* search (wA*) is a popular tool for robot motion planning. Its efficiency, however, depends on the quality of the heuristic function used. In fact, it has been shown that the correlation between the heuristic function and the true cost-to-goal significantly affects the efficiency of the search when used with a large weight on the heuristics. Motivated by this observation, we investigate the problem of computing heuristics that explicitly aim to minimize the amount of search effort in finding a feasible plan. The key observation we exploit is that while a heuristic tries to guide the search along what looks like an optimal path towards the goal, there are other paths that are clearly sub-optimal yet are much easier to compute. For example, in motion planning domains like footstep planning for humanoids, a heuristic that guides the search along a path away from obstacles is less likely to encounter local minima than a heuristic that guides the search along an optimal but close-to-obstacles path. We utilize this observation to define the concept of conservative heuristics and propose a simple algorithm for computing such a heuristic function. Experimental analysis on (1) humanoid footstep planning (simulation), (2) path planning for a UAV (simulation), and a real-world footstep-planning experiment on a NAO robot shows the utility of the approach.
14

Lissovoi, Andrei, Pietro S. Oliveto, and John Alasdair Warwicker. "On the Time Complexity of Algorithm Selection Hyper-Heuristics for Multimodal Optimisation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2322–29. http://dx.doi.org/10.1609/aaai.v33i01.33012322.

Abstract:
Selection hyper-heuristics are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently selection hyper-heuristics choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise a standard unimodal benchmark function from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper we extend our understanding to the domain of multimodal optimisation by considering a hyper-heuristic from the literature that can switch between elitist and non-elitist heuristics during the run. We first identify the range of parameters that allow the hyper-heuristic to hillclimb efficiently and prove that it can optimise a standard hillclimbing benchmark function in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where the hyper-heuristic is efficient by swiftly escaping local optima and ones where it is not. For a function class called CLIFFd where a new gradient of increasing fitness can be identified after escaping local optima, the hyper-heuristic is extremely efficient while a wide range of established elitist and non-elitist algorithms are not, including the well-studied Metropolis algorithm. We complete the picture with an analysis of another standard benchmark function called JUMPd as an example to highlight problem characteristics where the hyper-heuristic is inefficient. Yet, it still outperforms the well-established non-elitist Metropolis algorithm.
15

Lam, William, Kalev Kask, Javier Larrosa, and Rina Dechter. "Residual-Guided Look-Ahead in AND/OR Search for Graphical Models." Journal of Artificial Intelligence Research 60 (October 20, 2017): 287–346. http://dx.doi.org/10.1613/jair.5475.

Abstract:
We introduce the concept of local bucket error for the mini-bucket heuristics and show how it can be used to improve the power of AND/OR search for combinatorial optimization tasks in graphical models (e.g. MAP/MPE or weighted CSPs). The local bucket error illuminates how the heuristic errors are distributed in the search space, guided by the mini-bucket heuristic. We present and analyze methods for compiling the local bucket-errors (exactly and approximately) and show that they can be used to yield an effective tool for balancing look-ahead overhead during search. This can be especially instrumental when memory is restricted, accommodating the generation of only weak compiled heuristics. We illustrate the impact of the proposed schemes in an extensive empirical evaluation for both finding exact solutions and anytime suboptimal solutions.
16

Müller, Felipe Martins, and Iaê Santos Bonilha. "Hyper-Heuristic Based on ACO and Local Search for Dynamic Optimization Problems." Algorithms 15, no. 1 (December 24, 2021): 9. http://dx.doi.org/10.3390/a15010009.

Abstract:
Hyper-heuristics comprise a set of approaches that are motivated (at least in part) by the objective of intelligently combining heuristic methods to solve hard optimization problems. Ant colony optimization (ACO) algorithms have been proven to deal with Dynamic Optimization Problems (DOPs) properly. Despite the good results obtained by the integration of local search operators with ACO, little has been done to tackle DOPs. In this research, one of the most reliable ACO schemes, the MAX-MIN Ant System (MMAS), has been integrated with advanced and effective local search operators, resulting in an innovative hyper-heuristic. The local search operators are the Lin–Kernighan (LK) and the Unstringing and Stringing (US) heuristics, chosen to improve the solution obtained by ACO. The proposed method aims to combine the adaptation capabilities of ACO for DOPs with the good performance of the local search operators, which are selected adaptively based on their performance, in this way creating a hyper-heuristic. The travelling salesman problem (TSP) was the base problem used to generate both symmetric and asymmetric dynamic test cases. Experiments have shown that the MMAS provides good initial solutions to the local search operators and that the hyper-heuristic is a robust and effective method for the vast majority of test cases.
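The LK and US operators themselves are involved; as a much simpler stand-in, 2-opt illustrates the role such improvement operators play on a tour handed over by the constructive/ACO phase. The instance below is illustrative, and this is not one of the paper's operators:

```python
import math

def tour_length(tour, pts):
    """Total length of the closed tour through the given 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """First-improvement 2-opt: reverse a segment whenever doing so
    replaces two tour edges by a shorter pair."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            # Skip the pair (first edge, last edge): they share city tour[0].
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-9:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Illustrative instance: unit square with a deliberately crossing start tour.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
tour = two_opt([0, 1, 2, 3], pts)
```

In an MMAS-style hybrid, each ant's tour would be passed through such an operator before the pheromone update.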
17

Kaur, Jasmeet, Ishani Chatterjee, and Maxim Likhachev. "Speeding Up Search-Based Motion Planning using Expansion Delay Heuristics." Proceedings of the International Conference on Automated Planning and Scheduling 31 (May 17, 2021): 528–32. http://dx.doi.org/10.1609/icaps.v31i1.16000.

Abstract:
Suboptimal search algorithms are a popular way to find solutions to planning problems faster by trading off solution optimality for search time. This is often achieved with the help of inadmissible heuristics. Prior work has explored ways to learn such inadmissible heuristics. However, it has focused on learning the heuristic value as an estimate of the cost to reach a goal. In this paper, we present a different approach that computes inadmissible heuristics by learning Expansion Delay for transitions in the state space. Expansion Delay is defined as the number of states expanded during the search between two consecutive states. Expansion Delay can be used as a measure of the depth of local minima regions, i.e., regions where the heuristic(s) are weakly correlated with the true cost-to-goal (Vats, Narayanan and Likhachev 2017). Our key idea is to learn this measure in order to guide the search such that it reduces the total Expansion Delay for reaching the goal and hence avoids local minima regions in the state space. We analyze our method on 3D (x, y, theta) planning and humanoid footstep planning. We find that the heuristics computed using our technique result in finding feasible plans faster.
18

Fukunaga, Alex S. "Automated Discovery of Local Search Heuristics for Satisfiability Testing." Evolutionary Computation 16, no. 1 (March 2008): 31–61. http://dx.doi.org/10.1162/evco.2008.16.1.31.

Abstract:
The development of successful metaheuristic algorithms such as local search for a difficult problem such as satisfiability testing (SAT) is a challenging task. We investigate an evolutionary approach to automating the discovery of new local search heuristics for SAT. We show that several well-known SAT local search algorithms such as Walksat and Novelty are composite heuristics that are derived from novel combinations of a set of building blocks. Based on this observation, we developed CLASS, a genetic programming system that uses a simple composition operator to automatically discover SAT local search heuristics. New heuristics discovered by CLASS are shown to be competitive with the best Walksat variants, including Novelty+. Evolutionary algorithms have previously been applied to directly evolve a solution for a particular SAT instance. We show that the heuristics discovered by CLASS are also competitive with these previous, direct evolutionary approaches for SAT. We also analyze the local search behavior of the learned heuristics using the depth, mobility, and coverage metrics proposed by Schuurmans and Southey.
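Walksat, one of the composites the paper decomposes into building blocks, can itself be sketched in a few lines. The instance and parameter choices below are illustrative, and this minimal loop is a simplification rather than the CLASS system's exact primitives:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """Minimal WalkSAT-style local search. Clauses are lists of non-zero
    ints in DIMACS style: v means variable v true, -v means v false."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                    # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))    # random-walk move
        else:
            # Greedy move: flip the variable whose flip leaves the fewest
            # clauses unsatisfied (recomputed naively here for clarity).
            def unsat_after_flip(v):
                assign[v] = not assign[v]
                count = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return count
            var = min((abs(l) for l in clause), key=unsat_after_flip)
        assign[var] = not assign[var]
    return None

# Tiny satisfiable 3-SAT instance (illustrative); its unique model is all-true.
clauses = [[1, 2, 3], [-1, 2], [-2, 3], [-3, 1], [1, -2, -3]]
model = walksat(clauses, 3)
```

The interesting part, and the one CLASS evolves, is precisely the variable-selection rule inside the `else` branch.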
19

Aine, Sandip, Charupriya Sharma, and Maxim Likhachev. "Learning to Search More Efficiently from Experience: A Multi-heuristic Approach." Proceedings of the International Symposium on Combinatorial Search 6, no. 1 (September 1, 2021): 141–45. http://dx.doi.org/10.1609/socs.v6i1.18376.

Abstract:
Learning from experience can significantly improve the performance of search based planners, especially for challenging problems like high-dimensional planning. Experience Graph (E-Graph) is a recently developed framework that encodes experiences, obtained from solving instances in the past, into a single bounded-admissible heuristic, and uses it to guide the search. While the E-Graph approach was shown to be very useful for repetitive problems, it suffers from two issues. First, computing the E-Graph heuristic is time consuming as it maintains the bounded admissibility constraints. Second, a single heuristic can get stuck in a local minimum, and thereby, degrade the performance. In this work, we present an alternative approach to improving the runtime of search from experience, based on a recently developed search algorithm Multi-heuristic A* (MHA*). This framework provides an improvement over the E-Graph planner for two reasons: a) MHA* uses multiple heuristics simultaneously to explore the search space, which reduces the probability of getting stuck in a local minimum, and b) the heuristics in MHA* can be arbitrarily inadmissible, which makes it very easy to compute them. The paper describes the framework, explains how to compute these (inadmissible) heuristics through offline and online processing and presents experimental analysis on two domains, motion planning for a 6D planar arm and large sliding tile puzzles.
20

Agur, Zvia, Refael Hassin, and Sigal Levy. "Optimizing Chemotherapy Scheduling Using Local Search Heuristics." Operations Research 54, no. 5 (October 2006): 829–46. http://dx.doi.org/10.1287/opre.1060.0320.

21

Martin, Olivier C., and Steve W. Otto. "Combining simulated annealing with local search heuristics." Annals of Operations Research 63, no. 1 (February 1996): 57–75. http://dx.doi.org/10.1007/bf02601639.

22

Cabrera-Guerrero, Guillermo, Carolina Lagos, Carolina Castañeda, Franklin Johnson, Fernando Paredes, and Enrique Cabrera. "Parameter Tuning for Local-Search-Based Matheuristic Methods." Complexity 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/1702506.

Abstract:
Algorithms that aim to solve optimisation problems by combining heuristics and mathematical programming have attracted researchers’ attention. These methods, also known as matheuristics, have been shown to perform especially well for large, complex optimisation problems that include both integer and continuous decision variables. One common strategy used by matheuristic methods to solve such optimisation problems is to divide the main optimisation problem into several subproblems. While heuristics are used to seek promising subproblems, exact methods are used to solve them to optimality. In general, both mixed integer (non)linear programming problems and combinatorial optimisation problems can be addressed using this strategy. Besides the parameters researchers need to adjust when using heuristic methods, additional parameters arise when using matheuristic methods. In this paper we focus on one particular parameter, which determines the size of the subproblem. We show how matheuristic performance varies as this parameter is modified. We considered a well-known NP-hard combinatorial optimisation problem, namely the capacitated facility location problem, for our experiments. Based on the obtained results, we discuss the effects of adjusting the size of the subproblems that are generated when using matheuristic methods such as the one considered in this paper.
23

Shatabda, Swakkhar, M. A. Newton, and Abdul Sattar. "Mixed Heuristic Local Search for Protein Structure Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 876–82. http://dx.doi.org/10.1609/aaai.v27i1.8602.

Abstract:
Protein structure prediction is an unsolved problem in computational biology. One great difficulty is due to the unknown factors in the actual energy function. Moreover, the energy models available are often not very informative particularly when spatially similar structures are compared during search. We introduce several novel heuristics to augment the energy model and present a new local search algorithm that exploits these heuristics in a mixed fashion. Although the heuristics individually are weaker in performance than the energy function, their combination interestingly produces stronger results. For standard benchmark proteins on the face centered cubic lattice and a realistic 20x20 energy model, we obtain structures with significantly lower energy than those obtained by the state-of-the-art algorithms. We also report results for these proteins using the same energy model on the cubic lattice.
APA, Harvard, Vancouver, ISO, and other styles
24

Cai, Shaowei, Jinkun Lin, and Chuan Luo. "Finding A Small Vertex Cover in Massive Sparse Graphs: Construct, Local Search, and Preprocess." Journal of Artificial Intelligence Research 59 (July 31, 2017): 463–94. http://dx.doi.org/10.1613/jair.5443.

Full text
Abstract:
The problem of finding a minimum vertex cover (MinVC) in a graph is a well known NP-hard combinatorial optimization problem of great importance in theory and practice. Due to its NP-hardness, there has been much interest in developing heuristic algorithms for finding a small vertex cover in reasonable time. Previously, heuristic algorithms for MinVC have focused on solving graphs of relatively small size, and they are not suitable for solving massive graphs as they usually have high-complexity heuristics. This paper explores techniques for solving MinVC in very large scale real-world graphs, including a construction algorithm, a local search algorithm and a preprocessing algorithm. Both the construction and search algorithms are based on low-complexity heuristics, and we combine them to develop a heuristic algorithm for MinVC called FastVC. Experimental results on a broad range of real-world massive graphs show that our algorithms are very fast and have better performance than previous heuristic algorithms for MinVC. We also develop a preprocessing algorithm to simplify graphs for MinVC algorithms. By applying the preprocessing algorithm to local search algorithms, we obtain two efficient MinVC solvers called NuMVC2+p and FastVC2+p, which show further improvement on the massive graphs.
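The construct-then-search pattern described above can be illustrated with a deliberately simple sketch (this is not the paper's actual FastVC heuristics): a greedy edge-based construction followed by a pruning pass that drops redundant cover vertices.

```python
# Toy sketch of the construct-then-search pattern for vertex cover.
# Vertices are ints, edges are (u, v) pairs.
def construct_cover(edges):
    """Greedy construction: take both endpoints of any uncovered edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

def prune_cover(cover, edges):
    """Search-style pruning: drop any vertex whose incident edges are
    all still covered by the remaining vertices."""
    cover = set(cover)
    for v in sorted(cover):
        rest = cover - {v}
        if all(u in rest or w in rest for u, w in edges if v in (u, w)):
            cover.discard(v)
    return cover

def is_cover(cover, edges):
    return all(u in cover or v in cover for u, v in edges)
```

On a path graph 0-1-2-3-4 the construction yields {0, 1, 2, 3} and pruning reduces it to the minimum cover {1, 3}.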
APA, Harvard, Vancouver, ISO, and other styles
25

Torreño, Alejandro, Óscar Sapena, and Eva Onaindia. "Global Heuristics for Distributed Cooperative Multi-Agent Planning." Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 225–33. http://dx.doi.org/10.1609/icaps.v25i1.13701.

Full text
Abstract:
Almost every planner needs good heuristics to be efficient. Heuristic planning has experienced an impressive progress over the last years thanks to the emergence of more and more powerful estimators. However, this progress has not been translated to multi-agent planning (MAP) due to the difficulty of applying classical heuristics in distributed environments. The application of local search heuristics in each agent has been the most widely adopted approach in MAP but there exist some recent attempts to use global heuristics. In this paper we show that the success of global heuristics in MAP depends on a proper selection of heuristics for a distributed environment as well as on their adequate combination.
APA, Harvard, Vancouver, ISO, and other styles
26

Tsyganov, Andrey V. "Local Search Heuristics for NFA State Minimization Problem." International Journal of Communications, Network and System Sciences 05, no. 09 (2012): 638–43. http://dx.doi.org/10.4236/ijcns.2012.529074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Liu, Xiaoyuan, Hayato Ushijima-Mwesigwa, Avradip Mandal, Sarvagya Upadhyay, Ilya Safro, and Arnab Roy. "Leveraging special-purpose hardware for local search heuristics." Computational Optimization and Applications 82, no. 1 (February 21, 2022): 1–29. http://dx.doi.org/10.1007/s10589-022-00354-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kaljun, David, and Janez Žerovnik. "On local search based heuristics for optimization problems." Croatian Operational Research Review 5, no. 2 (December 30, 2014): 317–27. http://dx.doi.org/10.17535/crorr.2014.0016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Galligari, Alessandro, Mirko Maischberger, and Fabio Schoen. "Local search heuristics for the zone planning problem." Optimization Letters 11, no. 1 (July 30, 2016): 195–207. http://dx.doi.org/10.1007/s11590-016-1069-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Karapetyan, Daniel, and Gregory Gutin. "Local search heuristics for the multidimensional assignment problem." Journal of Heuristics 17, no. 3 (April 21, 2010): 201–49. http://dx.doi.org/10.1007/s10732-010-9133-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Jiang, He, Junying Qiu, and Jifeng Xuan. "A Hyper-Heuristic Using GRASP with Path-Relinking." Journal of Information Technology Research 4, no. 2 (April 2011): 31–42. http://dx.doi.org/10.4018/jitr.2011040103.

Full text
Abstract:
The goal of hyper-heuristics is to design and choose heuristics to solve complex problems. The primary motivation behind hyper-heuristics is to generalize the solving ability of the heuristics. In this paper, the authors propose a Hyper-heuristic using GRASP with Path-Relinking (HyGrasPr). HyGrasPr generates heuristic sequences to produce solutions within an iterative procedure. The procedure of HyGrasPr consists of three phases, namely the construction phase, the local search phase, and the path-relinking phase. To show the performance of HyGrasPr, the authors use the nurse rostering problem as a case study. The authors use an existing simulated annealing based hyper-heuristic (SAHH) as a baseline. The experimental results indicate that HyGrasPr can achieve better solutions than SAHH within the same running time and that the path-relinking phase is effective for the framework of HyGrasPr.
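The three phases of HyGrasPr's procedure (construction, local search, path-relinking) follow the classical GRASP-with-path-relinking template. A minimal illustrative sketch of that template on a toy TSP instance (six points on a circle; all data and parameter choices are invented for the example) might look like:

```python
import itertools
import math
import random

# Six illustrative cities on a unit circle; the optimal tour is the hexagon.
pts = [(math.cos(2 * math.pi * i / 6), math.sin(2 * math.pi * i / 6))
       for i in range(6)]

def dist(a, b):
    return math.dist(pts[a], pts[b])

def length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

def construct(rng, alpha=3):
    """Construction phase: extend the tour with one of the alpha nearest
    unvisited cities (the restricted candidate list)."""
    tour = [rng.randrange(len(pts))]
    while len(tour) < len(pts):
        cand = sorted((c for c in range(len(pts)) if c not in tour),
                      key=lambda c: dist(tour[-1], c))[:alpha]
        tour.append(rng.choice(cand))
    return tour

def two_opt(tour):
    """Local search phase: reverse segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if length(new) < length(tour) - 1e-12:
                tour, improved = new, True
    return tour

def path_relink(tour, guide):
    """Path-relinking phase: walk from tour toward the guiding elite
    solution by swaps, keeping the best intermediate tour visited."""
    best, cur = tour, list(tour)
    for i in range(len(guide)):
        if cur[i] != guide[i]:
            j = cur.index(guide[i])
            cur[i], cur[j] = cur[j], cur[i]
            if length(cur) < length(best):
                best = list(cur)
    return best

def grasp(iters=5, seed=0):
    rng = random.Random(seed)
    elite = None
    for _ in range(iters):
        sol = two_opt(construct(rng))
        if elite is not None:
            sol = min(sol, path_relink(sol, elite), key=length)
        if elite is None or length(sol) < length(elite):
            elite = sol
    return elite
```

HyGrasPr applies this template to heuristic sequences rather than tours, but the phase structure is the same.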
APA, Harvard, Vancouver, ISO, and other styles
32

Sheta, Alaa F., Hossam Faris, and Ibrahim Aljarah. "Estimating ARMA Model Parameters of an Industrial Process Using Meta-Heuristic Search Algorithms." International Journal of Engineering & Technology 7, no. 3.10 (July 19, 2018): 187. http://dx.doi.org/10.14419/ijet.v7i3.10.14357.

Full text
Abstract:
This paper addresses the parameter estimation problem for a manufacturing process based on the Auto-Regressive Moving Average (ARMA) model. Accurate estimation of the ARMA model’s parameters helps to reduce production costs, provide better product quality, and increase productivity and profit. Meta-heuristic algorithms are approximate techniques that have been successfully used to search for an optimal solution in complex search spaces. Unlike traditional parameter estimation techniques, which tend to get stuck in local optima, meta-heuristic algorithms can converge to a globally optimal solution. A comparison between meta-heuristic algorithms — Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Accelerated PSO, Cuckoo Search, Krill Herd, and the Firefly algorithm — is provided to handle the parameter estimation problem for an industrial winding process. The developed ARMA meta-heuristic models for a winding machine are evaluated based on different evaluation metrics. The results reveal that meta-heuristics can provide outstanding modeling performance.
APA, Harvard, Vancouver, ISO, and other styles
33

CASEAU, YVES, FRANÇOIS LABURTHE, CLAUDE LE PAPE, and BENOÎT ROTTEMBOURG. "Combining local and global search in a constraint programming environment." Knowledge Engineering Review 16, no. 1 (March 2001): 41–68. http://dx.doi.org/10.1017/s0269888901000078.

Full text
Abstract:
This paper presents several case studies which illustrate how constraint programming can benefit from the combination of global and local search techniques, offering a flexible and efficient platform for the design of combinatorial optimisation applications. For job-shop scheduling, we relate experiments with local search procedures that use global search to intensively explore a given neighbourhood, in the spirit of “shuffle” methods. For preemptive job-shop scheduling, two basic search strategies, Depth-First Search and Limited Discrepancy Search, are compared. For vehicle routing we report an Incremental Local Optimisation heuristic, combined with Limited Discrepancy Search. Finally, we show how ad hoc algebras can considerably enhance the design of heuristics based on local and global search within a constraint-programming environment. Experiments on vehicle routing show how such a language for “search and insert” control can enable automated tuning and discovery of new strategies adapted to the typology of the problem instances at stake.
APA, Harvard, Vancouver, ISO, and other styles
34

Sholekar, Shahed, Mehdi Seifbarghy, and Davar Pishva. "Innovative local search heuristics for uncapacitated facility location problem." International Journal of Industrial and Systems Engineering 42, no. 2 (2022): 172. http://dx.doi.org/10.1504/ijise.2022.126039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lim, Heng Siong, and B. Venkatesh. "An efficient local search heuristics for asynchronous multiuser detection." IEEE Communications Letters 7, no. 7 (July 2003): 299–301. http://dx.doi.org/10.1109/lcomm.2003.813799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Mateus, Catarina, Inês Marques, and M. Eugénia Captivo. "Local search heuristics for a surgical case assignment problem." Operations Research for Health Care 17 (June 2018): 71–81. http://dx.doi.org/10.1016/j.orhc.2017.04.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Jussien, Narendra, and Olivier Lhomme. "Local search with constraint propagation and conflict-based heuristics." Artificial Intelligence 139, no. 1 (July 2002): 21–45. http://dx.doi.org/10.1016/s0004-3702(02)00221-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Brucker, Peter, Johann Hurink, and Frank Werner. "Improving local search heuristics for some scheduling problems—I." Discrete Applied Mathematics 65, no. 1-3 (March 1996): 97–122. http://dx.doi.org/10.1016/0166-218x(95)00030-u.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kincaid, Rex K. "Solving the damper placement problem via local search heuristics." OR Spektrum 17, no. 2-3 (June 1995): 149–58. http://dx.doi.org/10.1007/bf01719259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Boros, Endre, Peter L. Hammer, and Gabriel Tavares. "Local search heuristics for Quadratic Unconstrained Binary Optimization (QUBO)." Journal of Heuristics 13, no. 2 (February 21, 2007): 99–132. http://dx.doi.org/10.1007/s10732-007-9009-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Evans, James R. "Structural analysis of local search heuristics in combinatorial optimization." Computers & Operations Research 14, no. 6 (January 1987): 465–77. http://dx.doi.org/10.1016/0305-0548(87)90043-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Halper, Russell, S. Raghavan, and Mustafa Sahin. "Local search heuristics for the mobile facility location problem." Computers & Operations Research 62 (October 2015): 210–23. http://dx.doi.org/10.1016/j.cor.2014.09.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kamboj, Vikram Kumar, Sobhit Saxena, and Kamalpreet Sandhu. "Optimal Selection of Gear Ratio for Hybrid Electric Vehicles Using Modern Meta-Heuristics Search Algorithm." E3S Web of Conferences 87 (2019): 01006. http://dx.doi.org/10.1051/e3sconf/20198701006.

Full text
Abstract:
The gear train design problem is an important design problem for machine tool manufacturers. Recent work on gear train improvement has focused on multi-shaft gear trains of the speed-change kind, where the major goals are to maximize the range of operating speeds and to minimize the number of gears and spindles. In the proposed research, a hybrid meta-heuristic search algorithm is presented to design and optimize multi-spindle gear trains. The objective of the research is to optimize gear trains on the basis of minimum overall centre distance, minimum overall size, minimum gear volume, or other desirable criteria, such as maximum contact or overlap ratios. The proposed hybrid meta-heuristic search algorithm is inspired by the Canis lupus family of grey wolves; the exploitation capability of the existing grey wolf optimizer is further enhanced by a pattern search algorithm, a derivative-free, direct search optimization method suitable for non-differentiable, discontinuous search spaces that does not require gradients and has good exploitation capability in local search spaces. The effectiveness of the proposed algorithm has been tested on various mechanical and civil design problems, including a gear train design problem involving four different gears, and the experimental results are compared with other recently reported heuristic and meta-heuristic search algorithms. The proposed algorithm demonstrates its effectiveness in the field of nature-inspired meta-heuristic algorithms for engineering design problems for hybrid electric vehicles.
APA, Harvard, Vancouver, ISO, and other styles
44

Lei, Yu, Maoguo Gong, Licheng Jiao, and Yi Zuo. "A memetic algorithm based on hyper-heuristics for examination timetabling problems." International Journal of Intelligent Computing and Cybernetics 8, no. 2 (June 8, 2015): 139–51. http://dx.doi.org/10.1108/ijicc-02-2015-0005.

Full text
Abstract:
Purpose – The examination timetabling problem is an NP-hard problem. A large number of approaches for this problem have been developed to find more appropriate search strategies. Hyper-heuristics are a representative class of such methods. In hyper-heuristics, a high-level search is executed to construct heuristic lists using traditional methods (such as Tabu search, variable neighborhoods and so on). The purpose of this paper is to apply an evolutionary strategy instead of traditional methods for the high-level search to improve the capability of global search. Design/methodology/approach – This paper combines hyper-heuristics with an evolutionary strategy to solve examination timetabling problems. First, four graph coloring heuristics are employed to construct heuristic lists. Within the evolutionary algorithm framework, iterative initialization is utilized to improve the number of feasible solutions in the population; meanwhile, crossover and mutation operators are applied to find potential heuristic lists in the heuristic space (high-level search). At last, two local search methods are combined to optimize the feasible solutions in the solution space (low-level search). Findings – Experimental results demonstrate that the proposed approach obtains competitive results and outperforms the compared approaches on some benchmark instances. Originality/value – The contribution of this paper is the development of a framework which combines an evolutionary algorithm and hyper-heuristics for examination timetabling problems.
APA, Harvard, Vancouver, ISO, and other styles
45

Abdi Oskouie, Mina, and Vadim Bulitko. "Robustness of Real-Time Heuristic Search Algorithms to Read/Write Error in Externally Stored Heuristics." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 13, no. 1 (June 25, 2021): 137–43. http://dx.doi.org/10.1609/aiide.v13i1.12943.

Full text
Abstract:
Real-time heuristic search algorithms follow the agent-centered search paradigm wherein the agent has access only to information local to the agent’s current position in the environment. This allows agents with constant-bounded computational faculties (e.g., memory) to take on search problems of progressively increasing sizes. As the agent’s memory does not scale with the size of the search problem, the heuristic must necessarily be stored externally, in the environment. Storing the heuristic in the environment brings the extra challenge of read/write errors. In video games, introducing error artificially to the heuristics can make the non-player characters (NPC) behave more naturally. In this paper, we evaluate effects of such errors on real-time heuristic search algorithms. In particular, we empirically study the effects of heuristic read redundancy on algorithm performance and compare its effects to the existing technique of using weights in heuristic learning. Finally, we evaluate a recently proposed technique of correcting the heuristic with a one-step error term in the presence of read/write error.
APA, Harvard, Vancouver, ISO, and other styles
46

Trojanowski, Krzysztof, and Artur Mikitiuk. "Wireless Sensor Network Coverage Optimization: Comparison of Local Search-Based Heuristics." Applied Computational Intelligence and Soft Computing 2022 (November 12, 2022): 1–21. http://dx.doi.org/10.1155/2022/3745358.

Full text
Abstract:
The Maximum Lifetime Coverage Problem (MLCP) requires heuristic optimization methods due to its complexity. A real-world problem model determines how a solution is represented and the operators applied in these heuristics. Our paper describes adapting a local search scheme and its operators to MLCP optimization. The operators originate from three local search algorithms we proposed earlier: LSHMA, LSCAIA, and LSRFTA. Two steps of the LS scheme’s main loop can be executed in three different ways each. Hence, nine versions of the LS approach can be obtained. In experimental research, we verified their effectiveness. Test cases come from three benchmarks: SCP1, proposed and used in our earlier research on the three LS algorithms mentioned above, and two others found in the literature. The results obtained with SCP1 showed that the algorithm based on the hypergraph model approach (HMA) is the most effective. The remaining results of the other algorithms divide them into two groups: effective ones and weak ones. However, other benchmarks showed that the more redundant the coverage of points of interest (POIs) by sensors, the more effective the perturbation method from the approach inspired by cellular automata (CAIA). The findings expose the strengths and weaknesses of the problem-specific steps applied in the LS algorithms.
APA, Harvard, Vancouver, ISO, and other styles
47

Ma, Zhenfang, Kaizhou Gao, Hui Yu, and Naiqi Wu. "Solving Heterogeneous USV Scheduling Problems by Problem-Specific Knowledge Based Meta-Heuristics with Q-Learning." Mathematics 12, no. 2 (January 19, 2024): 339. http://dx.doi.org/10.3390/math12020339.

Full text
Abstract:
This study focuses on the scheduling problem of heterogeneous unmanned surface vehicles (USVs) with obstacle avoidance pretreatment. The goal is to minimize the overall maximum completion time of the USVs. First, we develop a mathematical model for the problem. Second, with obstacles, an A* algorithm is employed to generate a path between two points where tasks need to be performed. Third, three meta-heuristics, i.e., simulated annealing (SA), genetic algorithm (GA), and harmony search (HS), are employed and improved to solve the problems. Based on problem-specific knowledge, nine local search operators are designed to improve the performance of the proposed algorithms. In each iteration, three Q-learning strategies are used to select high-quality local search operators. We aim to improve the performance of the meta-heuristics by using Q-learning-based local search operators. Finally, 13 instances with different scales are adopted to validate the effectiveness of the proposed strategies. We compare against classical meta-heuristics and existing meta-heuristics from the literature; the proposed meta-heuristics with Q-learning are overall better than the compared ones. The results and comparisons show that HS with the second Q-learning strategy, HS + QL2, exhibits the strongest competitiveness (the smallest mean rank value, 1.00) among the 15 algorithms.
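The idea of letting Q-learning pick among local search operators can be sketched in its simplest stateless (bandit-style) form; the operators, reward definition, and parameter values below are illustrative assumptions, not the paper's actual design:

```python
import random

def flip_k(k):
    """Illustrative local-search operator: flip k distinct random bits."""
    def op(x, rng):
        y = list(x)
        for i in rng.sample(range(len(x)), k):
            y[i] ^= 1
        return y
    return op

def q_learning_search(f, x0, operators, episodes=2000,
                      alpha=0.5, eps=0.2, seed=0):
    """Stateless Q-learning over local-search operators: each operator is
    an action, its reward is the fitness improvement it yields, and actions
    are chosen epsilon-greedily from the running Q-values."""
    rng = random.Random(seed)
    q = [0.0] * len(operators)
    x, fx = list(x0), f(x0)
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(len(operators))               # explore
        else:
            a = max(range(len(operators)), key=q.__getitem__)  # exploit
        y = operators[a](x, rng)
        fy = f(y)
        q[a] += alpha * (max(0.0, fx - fy) - q[a])   # reward = improvement
        if fy < fx:                                  # elitist acceptance
            x, fx = y, fy
    return x, fx
```

Over time the Q-values steer selection toward whichever operator has recently been paying off, which is the mechanism the paper exploits with its nine problem-specific operators.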
APA, Harvard, Vancouver, ISO, and other styles
48

Lissovoi, Andrei, Pietro Oliveto, and John Alasdair Warwicker. "How the Duration of the Learning Period Affects the Performance of Random Gradient Selection Hyper-Heuristics." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2376–83. http://dx.doi.org/10.1609/aaai.v34i03.5617.

Full text
Abstract:
Recent analyses have shown that a random gradient hyper-heuristic (HH) using randomised local search (RLSk) low-level heuristics with different neighbourhood sizes k can optimise the unimodal benchmark function LeadingOnes in the best expected time achievable with the available heuristics, if sufficiently long learning periods τ are employed. In this paper, we examine the impact of the learning period on the performance of the hyper-heuristic for standard unimodal benchmark functions with different characteristics: Ridge, where the HH has to learn that RLS1 is always the best low-level heuristic, and OneMax, where different low-level heuristics are preferable in different areas of the search space. We rigorously prove that super-linear learning periods τ are required for the HH to achieve optimal expected runtime for Ridge. Conversely, a sub-logarithmic learning period is the best static choice for OneMax, while using super-linear values for τ increases the expected runtime above the asymptotic unary unbiased black box complexity of the problem. We prove that a random gradient HH which automatically adapts the learning period throughout the run has optimal asymptotic expected runtime for both OneMax and Ridge. Additionally, we show experimentally that it outperforms any static learning period for realistic problem sizes.
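A random gradient selection hyper-heuristic over RLSk low-level heuristics, of the kind analysed in the paper, can be sketched roughly as follows (the parameter values and the uniform re-selection policy are illustrative choices):

```python
import random

def leading_ones(x):
    """LeadingOnes: the number of consecutive 1-bits at the start of x."""
    n = 0
    for bit in x:
        if bit == 0:
            break
        n += 1
    return n

def rls_k(x, k, rng):
    """Low-level heuristic RLS_k: flip exactly k distinct random bits."""
    y = list(x)
    for i in rng.sample(range(len(x)), k):
        y[i] ^= 1
    return y

def random_gradient_hh(n=30, ks=(1, 2), tau=50, max_evals=100_000, seed=1):
    """Random gradient selection: keep the current RLS_k while it keeps
    producing an improvement within the learning period tau; otherwise
    pick a new neighbourhood size uniformly at random."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    k = rng.choice(ks)
    fails = 0
    for _ in range(max_evals):
        if leading_ones(x) == n:
            break
        y = rls_k(x, k, rng)
        if leading_ones(y) > leading_ones(x):
            x, fails = y, 0                   # success: keep this operator
        else:
            fails += 1
            if fails >= tau:                  # learning period expired
                k, fails = rng.choice(ks), 0
    return x
```

The paper's question is precisely how the choice of tau in this loop affects the expected runtime on functions such as LeadingOnes, OneMax and Ridge.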
APA, Harvard, Vancouver, ISO, and other styles
49

Vasquez, Michel, and Nicolas Zufferey. "Consistent Neighborhood Search for Combinatorial Optimization." ISRN Computational Mathematics 2012 (September 13, 2012): 1–12. http://dx.doi.org/10.5402/2012/671423.

Full text
Abstract:
Many optimization problems (from academia or industry) require the use of a local search to find a satisfactory solution in a reasonable amount of time, even if optimality is not guaranteed. Usually, local search algorithms operate in a search space which contains complete solutions (feasible or not) to the problem. In contrast, in Consistent Neighborhood Search (CNS), after each variable assignment, the conflicting variables are deleted to keep the partial solution feasible, and the search can stop when all the variables have a value. In this paper, we formally propose a new heuristic solution method, CNS, whose search behavior lies between exhaustive tree search and local search working with complete solutions. We then discuss, with a unified view, the great success of some existing heuristics, which can be considered within the CNS framework, in various fields: graph coloring, frequency assignment in telecommunication networks, vehicle fleet management with maintenance constraints, and satellite range scheduling. Moreover, some lessons are given in order to provide guidelines for the adaptation of CNS to other problems.
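The core CNS mechanism (keep the partial assignment conflict-free by deleting conflicting variables after each assignment, and stop once every variable has a value) can be sketched for graph colouring; the instance, the vertex-selection rule, and the colour-selection rule below are illustrative:

```python
import random

def cns_coloring(adj, colors, max_steps=10_000, seed=0):
    """CNS-style colouring: the partial assignment is always conflict-free.
    Colouring a vertex uncolours any conflicting neighbours, and the search
    stops as soon as every vertex has a colour."""
    rng = random.Random(seed)
    assign = {}                               # feasible partial solution
    for _ in range(max_steps):
        uncoloured = [v for v in adj if v not in assign]
        if not uncoloured:
            return assign                     # complete => proper colouring
        v = rng.choice(uncoloured)

        def conflicts(col):
            return [u for u in adj[v] if assign.get(u) == col]

        col = min(colors, key=lambda c: len(conflicts(c)))
        for u in conflicts(col):
            del assign[u]                     # keep the partial solution feasible
        assign[v] = col
    return None                               # gave up within the step budget
```

With three colours on a 5-cycle the search completes immediately; with only two colours no proper colouring exists, so the assignment can never become complete and the step budget runs out.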
APA, Harvard, Vancouver, ISO, and other styles
50

Thomson, Sarah L., Gabriela Ochoa, Sébastien Verel, and Nadarajen Veerapen. "Inferring Future Landscapes: Sampling the Local Optima Level." Evolutionary Computation 28, no. 4 (December 2020): 621–41. http://dx.doi.org/10.1162/evco_a_00271.

Full text
Abstract:
Connection patterns among Local Optima Networks (LONs) can inform heuristic design for optimisation. LON research has predominantly required complete enumeration of a fitness landscape, thereby restricting analysis to problems diminutive in size compared to real-life situations. LON sampling algorithms are therefore important. In this article, we study LON construction algorithms for the Quadratic Assignment Problem (QAP). Using machine learning, we use estimated LON features to predict search performance for competitive heuristics used in the QAP domain. The results show that by using random forest regression, LON construction algorithms produce fitness landscape features which can explain almost all search variance. We find that LON samples better relate to search than enumerated LONs do. The importance of fitness levels of sampled LONs in search predictions is crystallised. Features from LONs produced by different algorithms are combined in predictions for the first time, with promising results for this “super-sampling”: a model to predict tabu search success explained 99% of variance. Arguments are made for the use-case of each LON algorithm and for combining the exploitative process of one with the exploratory optimisation of the other.
APA, Harvard, Vancouver, ISO, and other styles
