Journal articles on the topic 'Greedy heuristic algorithms'

Consult the top 50 journal articles for your research on the topic 'Greedy heuristic algorithms.'

1

Wilt, Christopher, and Wheeler Ruml. "Building a Heuristic for Greedy Search." Proceedings of the International Symposium on Combinatorial Search 6, no. 1 (September 1, 2021): 131–40. http://dx.doi.org/10.1609/socs.v6i1.18352.

Full text
Abstract:
Suboptimal heuristic search algorithms such as greedy best-first search allow us to find solutions when constraints of either time, memory, or both prevent the application of optimal algorithms such as A*. Guidelines for building an effective heuristic for A* are well established in the literature, but we show that if those rules are applied for greedy best-first search, performance can actually degrade. Observing what went wrong for greedy best-first search leads us to a quantitative metric appropriate for greedy heuristics, called Goal Distance Rank Correlation (GDRC). We demonstrate that GDRC can be used to build effective heuristics for greedy best-first search automatically.
APA, Harvard, Vancouver, ISO, and other styles
2

Wilt, Christopher, and Wheeler Ruml. "Effective Heuristics for Suboptimal Best-First Search." Journal of Artificial Intelligence Research 57 (October 31, 2016): 273–306. http://dx.doi.org/10.1613/jair.5036.

Abstract:
Suboptimal heuristic search algorithms such as weighted A* and greedy best-first search are widely used to solve problems for which guaranteed optimal solutions are too expensive to obtain. These algorithms crucially rely on a heuristic function to guide their search. However, most research on building heuristics addresses optimal solving. In this paper, we illustrate how established wisdom for constructing heuristics for optimal search can fail when considering suboptimal search. We consider the behavior of greedy best-first search in detail and we test several hypotheses for predicting when a heuristic will be effective for it. Our results suggest that a predictive characteristic is a heuristic's goal distance rank correlation (GDRC), a robust measure of whether it orders nodes according to distance to a goal. We demonstrate that GDRC can be used to automatically construct abstraction-based heuristics for greedy best-first search that are more effective than those built by methods oriented toward optimal search. These results reinforce the point that suboptimal search deserves sustained attention and specialized methods of its own.
3

Hignasari, L. Virginayoga. "Komparasi Algoritma Cheapest Insertion Heuristic (CIH) Dan Greedy Dalam Optimasi Rute Pendistribusian Barang." Jurnal Ilmiah Vastuwidya 2, no. 2 (June 16, 2020): 31–39. http://dx.doi.org/10.47532/jiv.v2i2.87.

Abstract:
This study aimed to compare algorithms that can effectively provide better solutions to the problem of determining the shortest route in the distribution of goods. The research was qualitative. The object of study was the delivery route of a business engaged in printing and convection. The algorithms compared were the Cheapest Insertion Heuristic (CIH) and the Greedy algorithm. Both algorithms have advantages and disadvantages in finding the shortest route. The analysis showed that CIH and the Greedy algorithm provide almost the same optimization results; they differ only in the order in which the locations are visited. The strength of the Greedy algorithm is that its calculation steps are simpler than those of CIH, while its disadvantage is that it is unsuitable for finding the shortest route when the number of places visited is relatively large. The advantage of CIH is that it remains stable for a relatively large number of places visited, while its drawback is a more complicated calculation procedure that takes relatively longer than the Greedy algorithm.
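The "Greedy" route construction compared in this study is commonly implemented as a nearest-neighbour rule: from the current location, always travel to the closest unvisited one. A minimal sketch with a made-up distance matrix, not data from the study:

```python
def greedy_route(dist, start=0):
    """Nearest-neighbour tour: always visit the closest unvisited node."""
    n = len(dist)
    route, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        route.append(nxt)
        unvisited.remove(nxt)
    route.append(start)  # return to the depot
    return route

# illustrative symmetric distance matrix (depot = node 0)
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(greedy_route(dist))  # [0, 1, 3, 2, 0]
```

Cheapest Insertion instead grows a tour by inserting, at each step, the city whose insertion increases the tour length the least; this costs more computation but tends to stay stable on larger instances, matching the trade-off the abstract reports.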
4

Tian, Heng, Fuhai Duan, Yong Sang, and Liang Fan. "Novel algorithms for sequential fault diagnosis based on greedy method." Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 234, no. 6 (May 2, 2020): 779–92. http://dx.doi.org/10.1177/1748006x20914498.

Abstract:
Test sequencing for binary systems is a nondeterministic polynomial-complete problem, where greedy algorithms have been proposed to find the solution. The traditional greedy algorithms only extract a single kind of information from the D-matrix to search the optimal test sequence, so their application scope is limited. In this study, two novel greedy algorithms that combine the weight index for fault detection with the information entropy are introduced for this problem, which are defined as the Mix1 algorithm and the Mix2 algorithm. First, the application scope for the traditional greedy algorithms is demonstrated in detail by stochastic simulation experiments. Second, two new heuristic formulas are presented, and their scale factors are determined. Third, an example is used to show how the two new algorithms work, and four real-world D-matrices are employed to validate their universality and stability. Finally, the application scope of the Mix1 and Mix2 algorithms is determined based on stochastic simulation experiments, and the two greedy algorithms are also used to improve a multistep look-ahead heuristic algorithm. The Mix1 and Mix2 algorithms can obtain good results in a reasonable time and have a wide application scope, which also can be used to improve the multistep look-ahead heuristic algorithm.
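The information-entropy criterion mentioned in the abstract can be sketched for a binary D-matrix (rows are faults, columns are tests): a greedy step picks the test whose pass/fail outcome splits the surviving fault set most evenly. This toy version shows only the entropy part; the paper's Mix1 and Mix2 algorithms blend it with a fault-detection weight index whose exact form is not reproduced here:

```python
import math

def split_entropy(faults, dmatrix, test):
    """Entropy of the pass/fail partition a test induces on a fault set."""
    detected = sum(dmatrix[f][test] for f in faults)
    n = len(faults)
    if detected in (0, n):
        return 0.0  # the test gives no information on this fault set
    p = detected / n
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_test(faults, dmatrix, tests):
    """Greedy step: choose the most informative remaining test."""
    return max(tests, key=lambda t: split_entropy(faults, dmatrix, t))

# illustrative D-matrix: 4 equally likely faults, 3 tests
dmatrix = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 1],
    [0, 0, 0],
]
print(best_test({0, 1, 2, 3}, dmatrix, [0, 1, 2]))  # test 0 splits 2/2
```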
5

Panggabean, Jonas Franky R. "Hybrid Ant Colony Optimization-Genetics Algorithm to Minimize Makespan Flow Shop Scheduling." International Journal of Engineering & Technology 7, no. 2.2 (March 5, 2018): 40. http://dx.doi.org/10.14419/ijet.v7i2.2.11868.

Abstract:
Flow shop scheduling is a scheduling model in which all jobs to be processed flow in the same product direction/path; in other words, the jobs share a common routing. Scheduling problems arise when n jobs must be processed on m machines and it must be specified which job is done first and how jobs are allocated to the machines to obtain a scheduled production process. This paper presents a hybrid Ant Colony Optimization-Genetic Algorithm (ACO-GA) for flow shop scheduling problems, with the aim of minimizing the maximum completion time of all jobs (makespan). To validate the proposed algorithm, computational tests were performed using a sample of 60 datasets from the Taillard benchmark. The ACO-GA algorithm is compared with two constructive heuristics from the literature, the NEH heuristic and the Stochastic Greedy heuristic (SG), studied by Zini, H. and ElBernoussi, S. (2016). The experimental results show that, on average, for dataset sizes from 20 x 5 to 50 x 10, the ACO-GA algorithm obtains a smaller makespan than the other two algorithms, but for large datasets it yields a greater makespan than both, with a difference of 1.4 units of time.
6

Seeja, K. R. "HybridHAM: A Novel Hybrid Heuristic for Finding Hamiltonian Cycle." Journal of Optimization 2018 (October 16, 2018): 1–10. http://dx.doi.org/10.1155/2018/9328103.

Abstract:
The Hamiltonian Cycle Problem is one of the most explored combinatorial problems. Since the problem is NP-complete, heuristic approaches are found to be more practical than exponential-time exact algorithms. This paper presents an efficient hybrid heuristic that sits in between the complex reliable approaches and the simple faster approaches. The proposed algorithm is a combination of greedy, rotational transformation and unreachable vertex heuristics that works in three phases. In the first phase, an initial path is created by using greedy depth-first search. This initial path is then extended to a Hamiltonian path in the second phase by using rotational transformation and greedy depth-first search. The third phase converts the Hamiltonian path into a Hamiltonian cycle by using rotational transformation. The proposed approach could find Hamiltonian cycles for a set of hard graphs collected from the literature, all the Hamiltonian instances (1000 to 5000 vertices) given in TSPLIB, and some instances of the FHCP Challenge Set. Moreover, the algorithm has O(n³) worst-case time complexity. The performance of the algorithm has been compared with the state-of-the-art algorithms, and it was found that HybridHAM outperforms them in terms of running time.
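The rotational transformation used in the second and third phases can be sketched as follows: if the current path's end vertex has a neighbour already inside the path, reversing the segment after that neighbour yields a new path over the same vertices with a different end vertex, which may in turn have unvisited neighbours. The graph below is illustrative, not one of the paper's benchmark instances:

```python
def rotate(path, adj):
    """All paths obtainable from `path` by one rotational transformation."""
    end = path[-1]
    results = []
    # candidates: path vertices other than the end and its predecessor
    for i, v in enumerate(path[:-2]):
        if v in adj[end]:  # edge (end, v) exists: reverse the tail after v
            results.append(path[:i + 1] + path[i + 1:][::-1])
    return results

# a 4-cycle: 0-1-2-3-0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
# the path 0-1-2-3 ends at 3; rotating on edge (3, 0) gives 0-3-2-1,
# whose new end vertex 1 may then be extended differently
print(rotate([0, 1, 2, 3], adj))  # [[0, 3, 2, 1]]
```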
7

Selvi, V. "Clustering Analysis of Greedy Heuristic Method in Zero_One Knapsack Problem." International Journal of Emerging Research in Management and Technology 6, no. 7 (June 29, 2018): 39. http://dx.doi.org/10.23956/ijermt.v6i7.181.

Abstract:
The knapsack problem is a well-understood class of optimization problems in which one tries to maximize the profit of the items placed in a knapsack without exceeding its capacity. The knapsack problem can be solved by several algorithms, such as greedy, dynamic programming, branch and bound (B&B), etc. The solution to the zero-one knapsack problem (KP) can be viewed as the result of a sequence of decisions. Clustering is the process of grouping elements with equal priority, and different clustering applications can be built on it. In this paper we introduce a greedy heuristic algorithm for solving the zero-one knapsack problem. We exhibit a comparative investigation of the greedy, dynamic programming, B&B and genetic algorithms with regard to their time complexity and the required programming effort, and compare the total value obtained by each of them. Greedy and genetic algorithms can be used to solve the 0-1 knapsack problem within a reasonable time complexity; the worst-case time complexity (Big-O) reported for both algorithms is O(N). Using the greedy method, the algorithm can produce high-quality clusters while reducing time, and the best partitioning avoids the memory confinement problem during the process.
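The greedy heuristic for the 0-1 knapsack problem discussed here is usually implemented by sorting items by value-to-weight ratio and taking each item that still fits. A minimal sketch with made-up items:

```python
def greedy_knapsack(items, capacity):
    """items: list of (value, weight) pairs; pick by value/weight ratio."""
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    total_value, total_weight, chosen = 0, 0, []
    for i in order:
        value, weight = items[i]
        if total_weight + weight <= capacity:  # take the item if it fits
            chosen.append(i)
            total_weight += weight
            total_value += value
    return total_value, sorted(chosen)

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight), illustrative
print(greedy_knapsack(items, 50))  # (160, [0, 1])
```

On this instance the greedy answer (160) falls short of the optimum of 220 from taking items 1 and 2, which is why the paper compares the heuristic against exact methods such as dynamic programming and B&B.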
8

Silva Arantes, Jesimar da, Márcio da Silva Arantes, Claudio Fabiano Motta Toledo, Onofre Trindade Júnior, and Brian Charles Williams. "Heuristic and Genetic Algorithm Approaches for UAV Path Planning under Critical Situation." International Journal on Artificial Intelligence Tools 26, no. 01 (February 2017): 1760008. http://dx.doi.org/10.1142/s0218213017600089.

Abstract:
The present paper applies heuristic and genetic algorithm approaches to the path planning problem for Unmanned Aerial Vehicles (UAVs) during an emergency landing, without putting people and properties at risk. The path re-planning can be caused by critical situations such as equipment failures or extreme environmental events, which lead the current UAV mission to be aborted by executing an emergency landing. The path planning problem is introduced through a mathematical formulation, where all problem constraints are properly described. Planner algorithms must define a new path to land the UAV while respecting the problem constraints. Three path planning approaches are introduced: a greedy heuristic, a genetic algorithm and a multi-population genetic algorithm. The greedy heuristic aims to quickly find feasible paths, while the genetic algorithms are able to return better-quality solutions within a reasonable computational time. These methods are evaluated over a large set of scenarios with different levels of difficulty. Simulations are also conducted using the FlightGear simulator, where the UAV's behaviour is evaluated for different wind velocities and wind directions. Statistical analysis reveals that combining the greedy heuristic with the genetic algorithms is a good strategy for this problem.
9

K, Gayathri Devi. "A Hybrid Firefly Algorithm Approach for Job Shop Scheduling Problem." International Journal for Research in Applied Science and Engineering Technology 9, no. 12 (December 31, 2021): 1436–44. http://dx.doi.org/10.22214/ijraset.2021.39536.

Abstract:
Job shop scheduling has always been one of the most sought-after research problems in combinatorial optimization. Job Shop Scheduling Problems (JSSP) are categorized as NP-hard. In recent years, meta-heuristic algorithms have proved effective at solving hard NP problems. The Firefly Algorithm is one such meta-heuristic technique, inspired by the flashing behaviour of fireflies. Its potential can be enhanced further by hybridizing it with other known evolutionary algorithms, thereby obtaining improved results in less computational time. In this paper we propose a new hybrid technique, christened HyFA, by hybridizing the Firefly Algorithm (FA) with simulated annealing (SA) and a greedy heuristics approach (GHA). The hybrid technique has the advantages of all three algorithms, combined in such a way that a quicker and better optimal solution is obtained. The proposed HyFA is coded in Matlab with the objective of minimizing the makespan (Cm). The novel hybrid technique is then used to evaluate the 1-25 Lawrence problems taken from the literature. The results show the proposed technique is more effective not only in obtaining optimal results but also in significantly reducing computational time. Keywords: Scheduling, Optimisation, Job shop scheduling, Meta-heuristics, Firefly, Simulated Annealing, Greedy heuristics Approach.
10

Kazakovtsev, Lev, Dmitry Stashkov, Mikhail Gudyma, and Vladimir Kazakovtsev. "Algorithms with greedy heuristic procedures for mixture probability distribution separation." Yugoslav Journal of Operations Research 29, no. 1 (2019): 51–67. http://dx.doi.org/10.2298/yjor171107030k.

Abstract:
For clustering problems based on the model of mixture probability distribution separation, we propose new Variable Neighbourhood Search (VNS) algorithms and evolutionary genetic algorithms (GA) with greedy agglomerative heuristic procedures and compare them with known algorithms. The new genetic algorithms implement a global search strategy using a special crossover operator based on greedy agglomerative heuristic procedures in combination with the EM (Expectation Maximization) algorithm. In our new VNS algorithms, this combination is used for forming randomized neighbourhoods to search for better solutions. The results of computational experiments on classical data sets and on tests of production batches of semiconductor devices shipped for the space industry demonstrate that the new algorithms obtain better results, with higher values of the log-likelihood objective function, in comparison with the EM algorithm and its modifications.
11

Shkaberina, Guzel, Leonid Verenev, Elena Tovbis, Natalia Rezova, and Lev Kazakovtsev. "Clustering Algorithm with a Greedy Agglomerative Heuristic and Special Distance Measures." Algorithms 15, no. 6 (June 1, 2022): 191. http://dx.doi.org/10.3390/a15060191.

Abstract:
Automatic grouping (clustering) involves dividing a set of objects into subsets (groups) so that the objects from one subset are more similar to each other than to the objects from other subsets according to some criterion. Kohonen neural networks are a class of artificial neural networks whose main element is a layer of adaptive linear adders operating on the "winner takes all" principle. One of the advantages of Kohonen networks is their ability to perform online clustering. Greedy agglomerative procedures in clustering consistently improve the result in some neighborhood of a known solution, choosing as the next solution the option that provides the least increase in the objective function. Algorithms using the greedy agglomerative heuristic demonstrate precise and stable results for a k-means model. In our study, we propose a greedy agglomerative heuristic algorithm based on a Kohonen neural network with distance measure variations to cluster industrial products. Computational experiments demonstrate the comparative efficiency and accuracy of using the greedy agglomerative heuristic in the problem of grouping industrial products into homogeneous production batches.
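A greedy agglomerative step of the kind used in these algorithms can be sketched for a one-dimensional k-means model: start from an oversized centroid set (for example, the union of two known solutions) and repeatedly drop the centroid whose removal increases the total within-cluster squared distance the least, until k centroids remain. The data and candidate centroids below are toy values, and the paper's Kohonen-network component and special distance measures are not reproduced:

```python
def sse(points, centroids):
    """Sum of squared distances from each point to its nearest centroid."""
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

def greedy_agglomerative(points, centroids, k):
    """Drop centroids one by one, always the cheapest removal, down to k."""
    centroids = list(centroids)
    while len(centroids) > k:
        best = min(range(len(centroids)),
                   key=lambda i: sse(points, centroids[:i] + centroids[i + 1:]))
        del centroids[best]
    return centroids

points = [0, 1, 2, 50, 51, 52]   # two obvious groups
candidates = [0, 2, 50, 52]      # e.g. the union of two parent solutions
print(greedy_agglomerative(points, candidates, 2))  # [2, 52]
```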
12

Lagodimos, A. G., and V. Leopoulos. "Greedy heuristic algorithms for manpower shift planning." International Journal of Production Economics 68, no. 1 (October 2000): 95–106. http://dx.doi.org/10.1016/s0925-5273(99)00099-7.

13

Laube, Uli, and Maik Weinard. "CONDITIONAL INEQUALITIES AND THE SHORTEST COMMON SUPERSTRING PROBLEM." International Journal of Foundations of Computer Science 16, no. 06 (December 2005): 1219–30. http://dx.doi.org/10.1142/s0129054105003777.

Abstract:
We investigate the shortest common superstring problem (SCSSP). As SCSSP is APX-complete, it cannot be approximated within an arbitrarily small performance ratio. One heuristic that is widely used is the notorious greedy heuristic. It is known that the performance ratio of this heuristic is at least 2 and no worse than 4. It is conjectured that the greedy heuristic's performance ratio is in fact 2 (the greedy conjecture). Even the best algorithms introduced for SCSSP can only guarantee an upper bound of 2.5. In [11] an even stronger version of the greedy conjecture is proven for a restricted class of orders in which strings are merged. We extend these results by broadening the class for which this stronger version can be established. We also show that the Triple inequality, introduced in [11] and crucial for their results, is inherently insufficient to carry the proof of the greedy conjecture in the general case. Finally, we describe how linear programming can be used to support research along this line.
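The greedy heuristic analysed here repeatedly merges the pair of strings with the largest suffix-prefix overlap until a single superstring remains. A minimal sketch on illustrative strings:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_superstring(strings):
    strings = list(strings)
    while len(strings) > 1:
        # pick the ordered pair with the largest overlap and merge it
        i, j = max(((i, j) for i in range(len(strings))
                    for j in range(len(strings)) if i != j),
                   key=lambda ij: overlap(strings[ij[0]], strings[ij[1]]))
        merged = strings[i] + strings[j][overlap(strings[i], strings[j]):]
        strings = [s for k, s in enumerate(strings) if k not in (i, j)]
        strings.append(merged)
    return strings[0]

print(greedy_superstring(["abcde", "cdefg", "efghi"]))  # abcdefghi
```

The greedy conjecture concerns exactly this procedure: the superstring it returns is conjectured to be at most twice the length of the shortest one.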
14

Stern, Roni, Rami Puzis, and Ariel Felner. "Potential Search: A New Greedy Anytime Heuristic Search." Proceedings of the International Symposium on Combinatorial Search 1, no. 1 (August 25, 2010): 119–20. http://dx.doi.org/10.1609/socs.v1i1.18177.

Abstract:
In this paper we explore a novel approach for anytime heuristic search, in which the node that is most probable to improve the incumbent solution is expanded first. This is especially suited for the "anytime aspect" of anytime algorithms: the possibility that the algorithm will be halted at any time throughout the search. The potential of a node to improve the incumbent solution is estimated by a custom cost function, resulting in Potential Search, an anytime best-first search. Experimental results on the 15-puzzle and on the key player problem in communication networks (KPP-COM) show that this approach is competitive with state-of-the-art anytime heuristic search algorithms, and is more robust.
15

Azriati Mat, Nur, Aida Mauziah Benjamin, and Syariza Abdul-Rahman. "Efficiency of Heuristic Algorithms in Solving Waste Collection Vehicle Routing Problem: A Case Study." Journal of Social Sciences Research, SPI6 (December 25, 2018): 695–700. http://dx.doi.org/10.32861/jssr.spi6.695.700.

Abstract:
This paper investigated the efficiency of six heuristic algorithms from prior studies in solving the waste collection problem, namely: (i) Nearest Greedy (NG), (ii) Further from Depot (FFD), (iii) Different Initial Customer (DIC), (iv) Savings Approach, (v) Sweep Algorithm, and (vi) Different Initial Customer based on Sweep Algorithm. These heuristics have been employed to solve several routing problems in past studies, but their performance has never been compared. Hence, this paper examined the efficiency of these heuristics by testing them on a real case study of a waste collection problem in a district in the north of Peninsular Malaysia. The solutions obtained from these heuristics were compared with the solutions implemented by the waste collection company, especially in terms of the total distance travelled. The computational results showed that DIC generated the best solutions compared to the other heuristics, with a 12% reduction of the total travel distance.
16

Seipp, Jendrik, Thomas Keller, and Malte Helmert. "A Comparison of Cost Partitioning Algorithms for Optimal Classical Planning." Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 259–68. http://dx.doi.org/10.1609/icaps.v27i1.13823.

Abstract:
Cost partitioning is a general and principled approach for constructing additive admissible heuristics for state-space search. Cost partitioning approaches for optimal classical planning include optimal cost partitioning, uniform cost partitioning, zero-one cost partitioning, saturated cost partitioning, post-hoc optimization and the canonical heuristic for pattern databases. We compare these algorithms theoretically, showing that saturated cost partitioning dominates greedy zero-one cost partitioning. As a side effect of our analysis, we obtain a new cost partitioning algorithm dominating uniform cost partitioning. We also evaluate these algorithms experimentally on pattern databases, Cartesian abstractions and landmark heuristics, showing that saturated cost partitioning is usually the method of choice on the IPC benchmark suite.
17

Kazakovtsev, Lev, Ivan Rozhnov, Guzel Shkaberina, and Viktor Orlov. "K-Means Genetic Algorithms with Greedy Genetic Operators." Mathematical Problems in Engineering 2020 (November 27, 2020): 1–16. http://dx.doi.org/10.1155/2020/8839763.

Abstract:
The k-means problem is one of the most popular models of cluster analysis. The problem is NP-hard, and modern literature offers many competing heuristic approaches. Sometimes practical problems require obtaining such a result (albeit not exact), within the framework of the k-means model, which would be difficult to improve by known methods without a significant increase in the computation time or computational resources. In such cases, genetic algorithms with a greedy agglomerative heuristic crossover operator might be a good choice. However, their computational complexity makes it difficult to use them for large-scale problems. The crossover operator, which includes the k-means procedure and takes the absolute majority of the computation time, is essential for such algorithms, and other genetic operators such as mutation are usually eliminated or simplified. The importance of maintaining population diversity, in particular with the use of a mutation operator, grows with the data volume and with available computing resources such as graphics processing units (GPUs). In this article, we propose a new greedy heuristic mutation operator for such algorithms and investigate the influence of new and well-known mutation operators on the objective function value achieved by the genetic algorithms for large-scale k-means problems. Our computational experiments demonstrate the ability of the new mutation operator, as well as the mechanism for organizing subpopulations, to improve the result of the algorithm.
18

Montlevich, V. M., and A. N. Ismailova. "EMPIRICAL ANALYSIS OF APPROXIMATE ALGORITHMS FOR INTEGER PROGRAMMING, BASED UPON THE IDEA OF GREEDY CHOICE." Vestnik of Samara University. Natural Science Series 20, no. 3 (May 31, 2017): 115–20. http://dx.doi.org/10.18287/2541-7525-2014-20-3-115-120.

Abstract:
The article contains the results of empirical studies of heuristic algorithms for integer programming based upon the idea of greedy choice. On the basis of numerous computer experiments, estimates of the average level of inaccuracy of the approximate solutions are presented.
19

Etzion, T., and P. R. J. Ostergard. "Greedy and heuristic algorithms for codes and colorings." IEEE Transactions on Information Theory 44, no. 1 (1998): 382–88. http://dx.doi.org/10.1109/18.651069.

20

Kazakovtsev, L. A., I. P. Rozhnov, E. A. Popov, M. V. Karaseva, and A. A. Stupina. "Parallel implementation of the greedy heuristic clustering algorithms." IOP Conference Series: Materials Science and Engineering 537 (June 17, 2019): 022052. http://dx.doi.org/10.1088/1757-899x/537/2/022052.

21

Jiang, Qingye, Guojie Song, Cong Gao, Yu Wang, Wenjun Si, and Kunqing Xie. "Simulated Annealing Based Influence Maximization in Social Networks." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 127–32. http://dx.doi.org/10.1609/aaai.v25i1.7838.

Abstract:
The problem of influence maximization, i.e., mining the top-k influential nodes from a social network such that the spread of influence in the network is maximized, is NP-hard. Most of the existing algorithms for the problem are based on the greedy algorithm. Although the greedy algorithm can achieve a good approximation, it is computationally expensive. In this paper, we propose a totally different approach based on Simulated Annealing (SA) for the influence maximization problem. This is the first SA-based algorithm for the problem. Additionally, we propose two heuristic methods to accelerate the convergence process of SA, and a new method of computing influence to speed up the proposed algorithm. Experimental results on four real networks show that the proposed algorithms run faster than the state-of-the-art greedy algorithm by 2-3 orders of magnitude while being able to improve the accuracy of the greedy algorithm.
22

Wu, Lin, Cai Li Fang, and Cao Kun Yan. "A Novel Reliability-Driven Heuristic for Grid Task Scheduling." Applied Mechanics and Materials 291-294 (February 2013): 2895–98. http://dx.doi.org/10.4028/www.scientific.net/amm.291-294.2895.

Abstract:
Under a dynamic, unreliable grid environment, a novel heuristic based on the average reliability of a task is proposed, which takes into account the failure conditions and dynamic workload of the Grid. On the basis of the proposed heuristic strategy, this paper improves two classic scheduling algorithms, Min-min and Sufferage, to increase the execution reliability of grid tasks. Moreover, a new greedy algorithm, GR, is proposed. The simulation results indicate that the algorithms based on the average-reliability heuristic are better than Min-min and Sufferage in terms of task completion ratio by 10%, and can guarantee the user's deadline efficiently.
23

Chen, Wei, Wei Lu, and Ning Zhang. "Time-Critical Influence Maximization in Social Networks with Time-Delayed Diffusion Process." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 591–98. http://dx.doi.org/10.1609/aaai.v26i1.8204.

Abstract:
Influence maximization is a problem of finding a small set of highly influential users in a social network such that the spread of influence under certain propagation models is maximized. In this paper, we consider time-critical influence maximization, in which one wants to maximize influence spread within a given deadline. Since timing is considered in the optimization, we also extend the Independent Cascade (IC) model to incorporate the time delay aspect of influence diffusion in social networks. We show that time-critical influence maximization under the time-delayed IC model maintains desired properties such as submodularity, which allows a greedy algorithm to achieve an approximation ratio of 1-1/e, to circumvent the NP-hardness of the problem. To overcome the inefficiency of the approximation algorithm, we design two heuristic algorithms: the first one is based on a dynamic programming procedure that computes exact influence in tree structures, while the second one converts the problem to one in the original IC model and then applies existing fast heuristics to it. Our simulation results demonstrate that our heuristics achieve the same level of influence spread as the greedy algorithm while running a few orders of magnitude faster, and they also outperform existing algorithms that disregard the deadline constraint and delays in diffusion.
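The greedy baseline that both the approximation guarantee and the fast heuristics are measured against adds, at each step, the node with the largest estimated marginal gain in expected spread, where the spread under the Independent Cascade model is estimated by Monte Carlo simulation. The graph, propagation probability, and simulation count below are illustrative, and the deadline/delay extensions of the paper are not modelled:

```python
import random

def ic_spread(graph, seeds, p, runs=200, rng=random):
    """Monte Carlo estimate of expected spread under Independent Cascade."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)  # each live edge fires once, w.p. p
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

def greedy_im(graph, k, p=0.1, rng=random):
    """Add the node with the largest estimated marginal gain, k times."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    seeds = set()
    for _ in range(k):
        best = max(nodes - seeds,
                   key=lambda v: ic_spread(graph, seeds | {v}, p, rng=rng))
        seeds.add(best)
    return seeds

graph = {0: [1, 2, 3], 1: [4], 2: [4], 3: [5], 4: [], 5: []}  # toy DAG
random.seed(0)
print(greedy_im(graph, 1))  # node 0 reaches the most others
```

Because submodularity of the spread function holds, this greedy procedure achieves the 1-1/e approximation ratio the abstract cites; the repeated Monte Carlo estimation is exactly the cost the paper's heuristics avoid.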
24

Zhang, Yongfei, Jun Wu, Liming Zhang, Peng Zhao, Junping Zhou, and Minghao Yin. "An Efficient Heuristic Algorithm for Solving Connected Vertex Cover Problem." Mathematical Problems in Engineering 2018 (September 6, 2018): 1–10. http://dx.doi.org/10.1155/2018/3935804.

Abstract:
The connected vertex cover (CVC) problem is a variant of the vertex cover problem with many important applications, such as wireless network design, routing, and wavelength assignment. A good algorithm for the problem can help us improve engineering efficiency and reduce costs and resource consumption in industrial applications. In this work, we present an efficient algorithm, GRASP-CVC (Greedy Randomized Adaptive Search Procedure for Connected Vertex Cover), for CVC in general graphs. The algorithm has two main phases, i.e., a construction phase and a local search phase. In the construction phase, to construct a high-quality feasible initial solution, we design a greedy function and a restricted candidate list. In the local search phase, the configuration checking strategy is adopted to reduce the cycling problem. The experimental results demonstrate that GRASP-CVC is better than the other comparison algorithms in terms of effectiveness and efficiency.
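A GRASP construction phase of the kind the abstract describes can be sketched generically: score candidates with a greedy function (here, the number of uncovered edges a vertex touches), keep the best-scoring candidates in a restricted candidate list (RCL), and pick from the RCL at random. This sketch covers plain vertex cover only; the paper's connectivity handling and exact greedy function are not reproduced:

```python
import random

def grasp_cover(edges, alpha=0.5, rng=random):
    """One GRASP construction: greedy scores + RCL + randomized choice."""
    uncovered = set(edges)
    cover = set()
    while uncovered:
        score = {}  # greedy function: uncovered edges touching each vertex
        for u, v in uncovered:
            score[u] = score.get(u, 0) + 1
            score[v] = score.get(v, 0) + 1
        hi, lo = max(score.values()), min(score.values())
        threshold = hi - alpha * (hi - lo)
        rcl = [v for v, s in score.items() if s >= threshold]
        pick = rng.choice(rcl)  # randomized, so repeated runs diversify
        cover.add(pick)
        uncovered = {e for e in uncovered if pick not in e}
    return cover

edges = [(0, 1), (0, 2), (0, 3), (1, 2)]  # illustrative graph
random.seed(1)
print(grasp_cover(edges))
```

Repeating this construction with different random picks, each followed by local search, is what makes GRASP explore more of the solution space than a single deterministic greedy run.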
25

Pazis, Jason, and Ronald Parr. "PAC Optimal Exploration in Continuous Space Markov Decision Processes." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 774–81. http://dx.doi.org/10.1609/aaai.v27i1.8678.

Abstract:
Current exploration algorithms can be classified in two broad categories: Heuristic, and PAC optimal. While numerous researchers have used heuristic approaches such as epsilon-greedy exploration successfully, such approaches lack formal, finite sample guarantees and may need a significant amount of fine-tuning to produce good results. PAC optimal exploration algorithms, on the other hand, offer strong theoretical guarantees but are inapplicable in domains of realistic size. The goal of this paper is to bridge the gap between theory and practice, by introducing C-PACE, an algorithm which offers strong theoretical guarantees and can be applied to interesting, continuous space problems.
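The epsilon-greedy exploration the abstract refers to is the simplest heuristic of its class: with probability epsilon take a random action, otherwise act greedily on the current value estimates. A minimal bandit-style sketch with illustrative values:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Explore with probability epsilon, otherwise exploit the best estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit

random.seed(0)
q = [0.1, 0.5, 0.3]  # illustrative action-value estimates
picks = [epsilon_greedy(q, 0.1) for _ in range(1000)]
print(picks.count(1) / 1000)  # mostly the greedy arm, roughly 0.93
```

The 0.1 here is exactly the kind of tuning knob the abstract criticizes: it carries no finite-sample guarantee, which is the gap PAC-optimal methods such as C-PACE aim to close.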
APA, Harvard, Vancouver, ISO, and other styles
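The epsilon-greedy exploration heuristic mentioned in the abstract above is simple to state in code. A minimal sketch (the function name and list-based Q-value representation are assumptions for illustration):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon take a uniformly random action (explore);
    otherwise take the action with the highest estimated value (exploit)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)
```

As the abstract notes, the epsilon parameter has no finite-sample guarantee attached to it and typically needs tuning (or a decay schedule) per domain.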
26

Mandal, Ranjit Kr, Pinaki Mukherjee, and Mousumi Maitra. "Solving Travelling Salesman Problem using Artificial Immune System Optimization (AISO)." Journal of Scientific Research 66, no. 04 (2022): 114–20. http://dx.doi.org/10.37398/jsr.2022.660416.

Full text
Abstract:
Travelling Salesman Problem (TSP) is a typical NP-complete combinatorial optimization problem with various applications. In this paper, a nature-inspired meta-heuristic optimization algorithm named Artificial Immune System Optimization (AISO) is proposed for solving TSP. There are other approaches for solving this problem, namely the Greedy method, Branch and Bound (B&B), and Dynamic Programming (DP), but they are not very efficient. The time complexity of the Greedy approach is O(n²). However, the Greedy method does not always converge to an optimum solution, whereas B&B increases the search space exponentially and DP finds the optimal solution in O(n²·2ⁿ) time. Population-based meta-heuristic optimization algorithms such as Artificial Immune System Optimization (AISO) and Genetic Algorithm (GA) provide a way to find a solution to the TSP in linear time complexity. The proposed algorithm finds the best cell (optimum solution) using a Survivor Selection (SS) operator, which reduces the search space while ensuring that effective information is not lost. Dataset, results, and convergence graphs are presented, and the accuracy of the analysis is briefly discussed.
APA, Harvard, Vancouver, ISO, and other styles
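The O(n²) Greedy method the abstract refers to is usually the nearest-neighbour construction: start at a city and repeatedly visit the closest unvisited city. A minimal sketch (illustrative; the fixed start at city 0 and the distance-matrix input are assumptions), which also shows why it carries no optimality guarantee:

```python
def greedy_tsp(dist):
    """Nearest-neighbour greedy tour over a symmetric distance matrix.
    O(n^2) time; returns a tour (city order), not necessarily optimal."""
    n = len(dist)
    tour, visited = [0], {0}
    while len(tour) < n:
        last = tour[-1]
        # Pick the closest city not yet visited.
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[last][j])
        tour.append(nxt)
        visited.add(nxt)
    return tour
```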
27

Ji, Jie, Guohua Wu, Jinguo Shuai, Zhen Zhang, Zhen Wang, and Yizhi Ren. "Heuristic Approaches for Enhancing the Privacy of the Leader in IoT Networks." Sensors 19, no. 18 (September 9, 2019): 3886. http://dx.doi.org/10.3390/s19183886.

Full text
Abstract:
The privacy and security of the Internet of Things (IoT) are emerging as popular issues in the IoT. At present, several pieces of research exist on network analysis of IoT networks, and malicious network analysis may threaten the privacy and security of the leader in such networks. With this in mind, we focus on how to avoid malicious network analysis by modifying the topology of the IoT network, choosing closeness centrality as the network analysis tool. This paper makes three key contributions toward this problem: (1) an optimization problem of removing k edges to minimize (maximize) the closeness value (rank) of the leader; (2) a greedy (greedy and simulated annealing) algorithm to solve the closeness value (rank) case of the proposed optimization problem in polynomial time; and (3) UpdateCloseness (FastTopRank), an algorithm for computing the closeness value (rank) efficiently. Experimental results prove the efficiency of our pruning algorithms and show that our heuristic algorithms can obtain accurate solutions compared with the optimal solution (the approximation ratio in the worst case is 0.85) and outperform the solutions obtained by other baseline algorithms (e.g., choosing the k edges with the highest degree sum).
APA, Harvard, Vancouver, ISO, and other styles
28

Aydın, Nevin. "Heuristic Clustering Algorithms in Ad hoc Networks." EMAJ: Emerging Markets Journal 3, no. 3 (March 5, 2014): 77–80. http://dx.doi.org/10.5195/emaj.2014.39.

Full text
Abstract:
Clustering divides the geographical region to be covered into small zones, each of which can be handled by a powerful node called a clusterhead. Clusterheads have a direct communication link with each of their members, whereas the member nodes of a cluster must go through the clusterhead to communicate with each other. Since choosing clusterheads optimally is an NP-hard problem, existing solutions to this problem are based on heuristic (mostly greedy) approaches. In this paper, we present three well-known heuristic clustering algorithms: the Lowest-ID, the Highest-Degree, and the Node-Weight.
APA, Harvard, Vancouver, ISO, and other styles
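Of the three heuristics named above, the Highest-Degree rule is the easiest to sketch: repeatedly promote the uncovered node with the most uncovered neighbours to clusterhead, and assign its neighbours to that cluster. An illustrative reconstruction (the adjacency-as-dict-of-sets representation and function name are assumptions, and real distributed implementations elect heads by local message exchange rather than a global loop):

```python
def highest_degree_clusterheads(adj):
    """Highest-Degree clustering sketch.

    adj: dict mapping each node to the set of its neighbours.
    Returns the list of chosen clusterheads."""
    uncovered = set(adj)
    heads = []
    while uncovered:
        # Greedy step: the uncovered node dominating the most uncovered nodes.
        head = max(uncovered, key=lambda v: len(adj[v] & uncovered))
        heads.append(head)
        uncovered -= {head} | adj[head]  # head covers itself and its neighbours
    return heads
```

Lowest-ID follows the same loop but picks `min(uncovered)` instead of the highest-degree node.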
29

Khanna, Munish, Naresh Chauhan, and Dilip Kumar Sharma. "Search for Prioritized Test Cases during Web Application Testing." International Journal of Applied Metaheuristic Computing 10, no. 2 (April 2019): 1–26. http://dx.doi.org/10.4018/ijamc.2019040101.

Full text
Abstract:
Regression testing of evolving software is a critical constituent of the software development process. Due to resource constraints, test case prioritization is one of the strategies followed in regression testing, during which the test case that best satisfies the predefined objectives, as the tester perceives, is executed the earliest. In this study, all experiments were performed on three web applications consisting of 65 to 100 pages with lines of code ranging from 5,000 to 7,000. Various state-of-the-art approaches, such as heuristic, Greedy, and meta-heuristic approaches, were applied to identify the prioritized test sequence that maximizes the average percentage of fault detection. The performance of these algorithms was compared using different parameters, and it was concluded that the Artificial Bee Colony algorithm performs best. Two novel greedy algorithms are also proposed in the study, whose goal is to smartly manage the state of a tie, where a tie is the condition that all the test cases participating in it are of equal significance in achieving the objective. It has also been validated that the performance of these novel proposed algorithms is better than that of the traditionally followed greedy approach most of the time.
APA, Harvard, Vancouver, ISO, and other styles
30

Kask, Kalev, Andrew Gelfand, Lars Otten, and Rina Dechter. "Pushing the Power of Stochastic Greedy Ordering Schemes for Inference in Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 54–60. http://dx.doi.org/10.1609/aaai.v25i1.7828.

Full text
Abstract:
We study iterative randomized greedy algorithms for generating (elimination) orderings with small induced width and state space size — two parameters known to bound the complexity of inference in graphical models. We propose and implement the Iterative Greedy Variable Ordering (IGVO) algorithm, a new variant within this algorithm class. An empirical evaluation using different ranking functions and conditions of randomness, demonstrates that IGVO finds significantly better orderings than standard greedy ordering implementations when evaluated within an anytime framework. Additional order of magnitude improvements are demonstrated on a multi-core system, thus further expanding the set of solvable graphical models. The experiments also confirm the superiority of the MinFill heuristic within the iterative scheme.
APA, Harvard, Vancouver, ISO, and other styles
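The MinFill heuristic that the abstract reports as superior within the iterative scheme can be sketched as a plain greedy elimination ordering. This is an illustrative reconstruction, not the IGVO implementation: the randomized restarts, tie-breaking, and anytime wrapper of the paper are omitted.

```python
def min_fill_ordering(adj):
    """Greedy MinFill elimination ordering sketch.

    adj: dict mapping each vertex to the set of its neighbours.
    Repeatedly eliminates the vertex whose elimination adds the fewest
    fill-in edges, connecting its remaining neighbours into a clique."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy

    def fill_in(v):
        # Number of missing edges among v's current neighbours.
        ns = list(adj[v])
        return sum(1 for i in range(len(ns)) for j in range(i + 1, len(ns))
                   if ns[j] not in adj[ns[i]])

    order = []
    while adj:
        v = min(adj, key=fill_in)
        # Add the fill-in edges among v's neighbours.
        for a in adj[v]:
            for b in adj[v]:
                if a != b:
                    adj[a].add(b)
        for u in adj[v]:
            adj[u].discard(v)
        order.append(v)
        del adj[v]
    return order
```

On a chordal graph this finds a perfect (zero-fill) ordering; the iterative randomized variants in the paper perturb the greedy choice to escape poor orderings.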
31

Basmassi, M. A., L. Benameur, and J. A. Chentoufi. "A NOVEL GREEDY GENETIC ALGORITHM TO SOLVE COMBINATORIAL OPTIMIZATION PROBLEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-4/W3-2020 (November 23, 2020): 117–20. http://dx.doi.org/10.5194/isprs-archives-xliv-4-w3-2020-117-2020.

Full text
Abstract:
In this paper, a modified genetic algorithm based on a greedy sequential algorithm is presented to solve a combinatorial optimization problem. The proposed algorithm is a hybrid of a heuristic and a computational intelligence algorithm, in which the greedy sequential algorithm is used as an operator inside the genetic algorithm, like crossover and mutation. The greedy sequential function is used to repair infeasible solutions after crossover and mutation, which increases the rate of convergence and upgrades the population by improving the quality of chromosomes toward the chromatic number. Experiments on a set of 6 well-known DIMACS benchmark instances of the graph coloring problem show that the proposed algorithm achieves competitive results in comparison with three state-of-the-art algorithms in terms of both success rate and solution quality.
APA, Harvard, Vancouver, ISO, and other styles
32

Wilt, Christopher, Jordan Thayer, and Wheeler Ruml. "A Comparison of Greedy Search Algorithms." Proceedings of the International Symposium on Combinatorial Search 1, no. 1 (August 25, 2010): 129–36. http://dx.doi.org/10.1609/socs.v1i1.18182.

Full text
Abstract:
We discuss the relationships between three approaches to greedy heuristic search: best-first, hill-climbing, and beam search. We consider the design decisions within each family and point out their oft-overlooked similarities. We consider the following best-first searches: weighted A*, greedy search, A*eps, window A* and multi-state commitment k-weighted A*. For hill-climbing algorithms, we consider enforced hill-climbing and LSS-LRTA*. We also consider a variety of beam searches, including BULB and beam-stack search. We show how to best configure beam search in order to maximize robustness. An empirical analysis on six standard benchmarks reveals that beam search and best-first search have remarkably similar performance, and outperform hill-climbing approaches in terms of both time to solution and solution quality. Of these, beam search is preferable for very large problems and best-first search is better on problems where the goal cannot be reached from all states.
APA, Harvard, Vancouver, ISO, and other styles
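The greedy (best-first) search compared in the paper above orders its open list purely by the heuristic value h(n), ignoring the path cost g(n) that A* would add. A minimal sketch of that core loop (illustrative; the function signature and integer-state example are assumptions):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the open node with the
    smallest h(n). States must be hashable and comparable (for the heap)."""
    open_list = [(h(start), start)]
    parent = {start: None}
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:
            # Reconstruct the path back to the start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in parent:
                parent[nxt] = node
                heapq.heappush(open_list, (h(nxt), nxt))
    return None  # goal unreachable
```

Replacing the priority `h(nxt)` with `g + h(nxt)` for a tracked path cost g turns this into A*; truncating the open list to a fixed width after each expansion turns it into a basic beam search, which is one way to see the similarity the paper analyzes.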
33

Lesh, N., and M. Mitzenmacher. "BubbleSearch: A simple heuristic for improving priority-based greedy algorithms." Information Processing Letters 97, no. 4 (February 2006): 161–69. http://dx.doi.org/10.1016/j.ipl.2005.08.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Fadlallah, Ghassan, Djamal Rebaine, and Hamid Mcheick. "A Greedy Scheduling Approach for Peripheral Mobile Intelligent Systems." IoT 2, no. 2 (April 30, 2021): 249–74. http://dx.doi.org/10.3390/iot2020014.

Full text
Abstract:
Smart, pervasive devices have recently experienced accelerated technological development in the fields of hardware, software, and wireless connections. The promotion of various kinds of collaborative mobile computing requires an upgrade in network connectivity with wireless technologies, as well as enhanced peer-to-peer communication. Mobile computing also requires appropriate scheduling methods to speed up the implementation and processing of various computing applications by better managing network resources. Scheduling techniques are relevant to the modern architectural models that support the IoT paradigm, particularly smart collaborative mobile computing architectures at the network periphery. In this regard, load-balancing techniques have also become necessary to exploit all the available capabilities and thus increase the speed of implementation. However, since the scheduling and load-balancing problem we addressed in this study is known to be NP-hard, a heuristic approach is well justified. We thus designed and validated a greedy scheduling and load-balancing algorithm to improve the utilization of resources. We conducted a comparison study with the longest cloudlet fastest processing (LCFP), shortest cloudlet fastest processing (SCFP), and Min-Min heuristic algorithms. The choice of these three algorithms is based on the efficiency and simplicity of their mechanisms, as reported in the literature, for allocating tasks to devices. The simulation we conducted showed the superiority of our approach over those algorithms with respect to the overall completion time criterion.
APA, Harvard, Vancouver, ISO, and other styles
35

Prihozhy, A. A. "Exact and greedy algorithms of allocating experts to maximum set of programmer teams." «System analysis and applied information science», no. 1 (June 8, 2022): 40–46. http://dx.doi.org/10.21122/2309-4923-2022-1-40-46.

Full text
Abstract:
The allocation of experts to programmer teams that meet the constraints on professional competences (programming technologies, languages, and tools) an IT project specifies is a hard combinatorial problem. This paper solves the problem of forming the maximum number of teams whose experts meet all the constraints within each team. It develops and compares two algorithms: a heuristic greedy one and an exact optimal one. The greedy algorithm iteratively solves the set cover problem on a matrix of expert competences as long as it can create the next workable team from the remaining experts. The paper proves that the greedy allocation algorithm is not exact even if the set cover algorithm is exact. We call the allocation algorithm double greedy if the set cover algorithm is greedy. The exact algorithm we propose finds the optimal solution in three steps: generating the set of all non-redundant teams, producing a graph of team independency, and searching for a maximum clique in that graph. The algorithm for generating the non-redundant teams traverses a search tree constructed so as to guarantee the creation of all non-redundant teams while absorbing all redundant teams. The edges of the non-redundant team independency graph connect teams that have no common expert. The maximum clique search algorithm we propose accounts for the features of the problem and the graph. Experimental results show that the exact algorithm serves as a reference, while the double-greedy algorithm is very fast and can yield suboptimal solutions for large allocation problems.
APA, Harvard, Vancouver, ISO, and other styles
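The per-team step described above builds on the classic greedy set-cover heuristic: repeatedly pick the set covering the most still-uncovered elements. A minimal sketch of that building block (illustrative; the function name and the plain-set representation of the competence matrix are assumptions):

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: returns indices of chosen subsets, or None if
    the universe cannot be covered at all."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Greedy step: the subset covering the most uncovered elements.
        best = max(range(len(subsets)),
                   key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            return None  # no subset makes progress
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

In the paper's setting the universe would be a team's required competences and each subset an expert's competences; as the paper proves, even an exact inner cover step does not make the outer team-by-team allocation optimal.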
36

Kralev, Velin, and Radoslava Kraleva. "A comparative analysis between two heuristic algorithms for the graph vertex coloring problem." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 3 (June 1, 2023): 2981. http://dx.doi.org/10.11591/ijece.v13i3.pp2981-2989.

Full text
Abstract:
This study focuses on two heuristic algorithms for the graph vertex coloring problem: the sequential (greedy) coloring algorithm (SCA) and the Welsh–Powell algorithm (WPA). The code of the algorithms is presented and discussed, as are the methodology and conditions of the experiments. The execution time of the algorithms was calculated as the average of four different runs of the algorithms on all analyzed graphs, taking into consideration the multitasking mode of the operating system. In the graphs with fewer than 600 vertices, both algorithms generated the same solutions in 90% of cases; in only 10% of cases did the WPA generate better solutions. However, in the graphs with more than 1,000 vertices, the WPA generates better solutions in 35% of cases. The results show that the difference in the execution time of the algorithms across all graphs is acceptable, but the quality of the solutions generated by the WPA is better than that of the SCA in more than 20% of cases. The results also show that solution quality is not related to the number of iterations performed by the algorithms.
APA, Harvard, Vancouver, ISO, and other styles
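The two heuristics compared above differ only in the order in which vertices are colored: SCA takes them as given, while Welsh–Powell visits them in non-increasing degree order, giving each vertex the smallest color unused by its neighbours. A sketch of the Welsh–Powell variant (illustrative; the dict-of-sets adjacency representation is an assumption, not the authors' code):

```python
def welsh_powell(adj):
    """Welsh-Powell greedy coloring.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns a dict vertex -> color (colors are 0, 1, 2, ...)."""
    # Visit vertices in non-increasing degree order.
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:  # smallest color not used by a neighbour
            c += 1
        colour[v] = c
    return colour
```

Dropping the `sorted(...)` line and iterating `adj` directly gives the plain sequential (SCA) variant, which is why the two often produce identical colorings on small graphs.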
37

Sheibani, Kaveh. "An Incorporation of the Fuzzy Greedy Search Heuristic With Evolutionary Approaches for Combinatorial Optimization in Operations Management." International Journal of Applied Evolutionary Computation 8, no. 2 (April 2017): 58–72. http://dx.doi.org/10.4018/ijaec.2017040104.

Full text
Abstract:
Although greedy algorithms are important, nowadays it is well assumed that the solutions they obtain can be used as a starting point for more sophisticated methods. This paper describes an evolutionary approach which is based on genetic algorithms (GA). A constructive heuristic, so-called fuzzy greedy search (FGS) is employed to generate an initial population for the proposed GA. The effectiveness and efficiency of the proposed hybrid method are demonstrated on permutation flow-shop scheduling as one of the most widely studied hard combinatorial optimization problems in the area of operational research.
APA, Harvard, Vancouver, ISO, and other styles
38

Qiu, Liqing, Shuang Zhang, Chunmei Gu, and Xiangbo Tian. "Scalable Influence Maximization Meets Efficiency and Effectiveness in Large-Scale Social Networks." International Journal of Software Engineering and Knowledge Engineering 30, no. 08 (August 2020): 1079–96. http://dx.doi.org/10.1142/s0218194020400161.

Full text
Abstract:
Influence maximization is a problem that aims to select the top-k influential nodes to maximize the spread of influence in social networks. The classical greedy-based algorithms and their improvements are relatively slow or not scalable, while heuristic algorithms are fast but their accuracy is unacceptable. Some algorithms improve accuracy and efficiency at the cost of a large amount of memory. To overcome these shortcomings, this paper proposes a fast and scalable algorithm for influence maximization, called K-paths, which utilizes an influence tree to estimate the influence spread. Extensive experiments demonstrate that the K-paths algorithm outperforms the comparison algorithms in terms of efficiency while keeping competitive accuracy.
APA, Harvard, Vancouver, ISO, and other styles
39

Et.al, C. Jayasri. "A Novel Swarm Intelligence Optimized Spectrum Sensing Approach For Cognitive Radio Network." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 10, 2021): 136–43. http://dx.doi.org/10.17762/turcomat.v12i6.1278.

Full text
Abstract:
Spectrum sensing techniques have been employed to detect spectrum holes so that secondary users can transmit data without interfering with the transmissions of the primary user. Cognitive Radio (CR) is the technique that uses the entire spectrum efficiently, and spectrum sensing is its primary component. Other factors, such as cognition capabilities and sensing awareness, are also considered important. Different heuristic algorithms have been developed for solving numeric optimization problems. As the problem has been established as NP-hard, a heuristic solution with low computational complexity is essential. A greedy algorithm is used for optimizing spectrum sharing. Particle Swarm Optimization (PSO) remains an efficient and popular algorithm due to its low need for tuning parameters, high accuracy, low processing time, fast convergence, and simplicity. In this work, PSO is proposed for spectrum sensing, and it shows better performance than the Greedy Algorithm used for CR network spectrum sensing.
APA, Harvard, Vancouver, ISO, and other styles
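The PSO loop the abstract relies on can be sketched on a toy one-dimensional objective. This is purely illustrative: the function name, parameter values, and quadratic objective are assumptions, not the paper's spectrum-sensing formulation.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal 1-D particle swarm optimisation sketch.

    Each particle keeps a velocity, its personal best, and is pulled
    toward both its personal best and the global best position."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            # Inertia + cognitive pull (pbest) + social pull (gbest).
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)  # clamp to the search range
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i], fx
    return gbest, gbest_f
```

The small number of tunables (w, c1, c2) is exactly the "low need for tuning parameters" the abstract credits PSO with, compared with many other meta-heuristics.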
40

Thayer, Jordan, Austin Dionne, and Wheeler Ruml. "Learning Inadmissible Heuristics During Search." Proceedings of the International Conference on Automated Planning and Scheduling 21 (March 22, 2011): 250–57. http://dx.doi.org/10.1609/icaps.v21i1.13474.

Full text
Abstract:
Suboptimal search algorithms offer shorter solving times by sacrificing guaranteed solution optimality. While optimal search algorithms like A* and IDA* require admissible heuristics, suboptimal search algorithms need not constrain their guidance in this way. Previous work has explored using off-line training to transform admissible heuristics into more effective inadmissible ones. In this paper we demonstrate that this transformation can be performed on-line, during search. In addition to not requiring training instances and extensive pre-computation, an on-line approach allows the learned heuristic to be tailored to a specific problem instance. We evaluate our techniques in four different benchmark domains using both greedy best-first search and bounded suboptimal search. We find that heuristics learned on-line result in both faster search and better solutions while relying only on information readily available in any best-first search.
APA, Harvard, Vancouver, ISO, and other styles
41

Namatevs, Ivars, and Ludmila Aleksejeva. "Decision Algorithm for Heuristic Donor-Recipient Matching." MENDEL 23, no. 1 (June 1, 2017): 33–40. http://dx.doi.org/10.13164/mendel.2017.1.033.

Full text
Abstract:
This paper introduces the application of the artificial intelligence paradigm to precision medicine in renal transplantation. Matching the optimal donor-recipient pair in kidney transplantation at the Latvian Transplant Centre (LTC) has been constrained by the lack of prediction models and algorithms. Consequently, LTC seeks a practical intelligent computing solution to assist clinical decision-makers during their search for the optimal donor-recipient match. Therefore, an advanced decision algorithm has been created that optimizes both the donor and recipient profiles, prioritizes the importance of the features, and is based on a greedy algorithm approach. The strength of the proposed algorithm lies in the identification of suitable donors for a specific recipient based on the evaluation of criteria by a points principle. An experimental study demonstrates that the decision algorithm for heuristic donor-recipient matching, integrated in a machine learning approach, improves the ability to allocate kidneys optimally at LTC. It is an important step towards personalized medicine in clinical settings.
APA, Harvard, Vancouver, ISO, and other styles
42

Bonet, B., and H. Geffner. "mGPT: A Probabilistic Planner Based on Heuristic Search." Journal of Artificial Intelligence Research 24 (December 28, 2005): 933–44. http://dx.doi.org/10.1613/jair.1688.

Full text
Abstract:
We describe the version of the GPT planner used in the probabilistic track of the 4th International Planning Competition (IPC-4). This version, called mGPT, solves Markov Decision Processes specified in the PPDDL language by extracting and using different classes of lower bounds along with various heuristic-search algorithms. The lower bounds are extracted from deterministic relaxations where the alternative probabilistic effects of an action are mapped into different, independent, deterministic actions. The heuristic-search algorithms use these lower bounds for focusing the updates and delivering a consistent value function over all states reachable from the initial state and the greedy policy.
APA, Harvard, Vancouver, ISO, and other styles
43

Ábrahám, Gyula, György Dósa, Tibor Dulai, Zsolt Tuza, and Ágnes Werner-Stark. "Efficient Pre-Solve Algorithms for the Schwerin and Falkenauer_U Bin Packing Benchmark Problems for Getting Optimal Solutions with High Probability." Mathematics 9, no. 13 (July 1, 2021): 1540. http://dx.doi.org/10.3390/math9131540.

Full text
Abstract:
Bin Packing is one of the research areas of Operations Research with many industrial applications as well as rich theoretical impact. In this article, the authors deal with Bin Packing on the practical side: they consider two Bin Packing benchmark classes. These benchmark problems are often used to check the usefulness and efficiency of algorithms. The problem is well known to be NP-hard. Instead of introducing some exact, heuristic, or approximation method (as usual), the problem is attacked here with greedy algorithms. These algorithms are very fast; on the other hand, they are not always able to provide optimal solutions. Nevertheless, they can be considered pre-processing algorithms for solving the problem. It is shown that almost all problems in the two considered benchmark classes are, in fact, easy to solve. In the case of the Schwerin class, which contains 200 instances, all instances are solved optimally by the greedy algorithm in a very short time. The Falkenauer U class is a little harder, but here still more than 91% of the instances are solved optimally, very fast, with the help of another greedy algorithm. Based on the above facts, the main contribution of the paper is to show that pre-processing is very useful for solving such kinds of problems.
APA, Harvard, Vancouver, ISO, and other styles
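A representative fast greedy rule for bin packing of the kind discussed above is first-fit decreasing: sort the items largest first and put each into the first bin with enough room. This sketch is illustrative only; the paper's pre-solve algorithms for the Schwerin and Falkenauer_U classes are tailored variants, not necessarily this rule.

```python
def first_fit_decreasing(items, capacity):
    """First-fit-decreasing bin packing sketch.

    Returns a list of bins, each a list of the item sizes packed into it.
    Fast, but not guaranteed optimal."""
    bins = []      # remaining free capacity per open bin
    packing = []   # item lists per open bin
    for item in sorted(items, reverse=True):
        for i, free in enumerate(bins):
            if item <= free:       # first bin with enough room
                bins[i] -= item
                packing[i].append(item)
                break
        else:                      # no bin fits: open a new one
            bins.append(capacity - item)
            packing.append([item])
    return packing
```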
44

Bereta, Michał. "Monte Carlo Tree Search Algorithm for the Euclidean Steiner Tree Problem." Journal of Telecommunications and Information Technology 4 (December 20, 2017): 71–81. http://dx.doi.org/10.26636/jtit.2017.122017.

Full text
Abstract:
This study is concerned with a novel Monte Carlo Tree Search algorithm for the problem of the minimal Euclidean Steiner tree on a plane. Given p points (terminals) on a plane, the goal is to find a connection between all the points so that the total sum of the lengths of edges is as low as possible, while the addition of extra points (Steiner points) is allowed. Finding the minimum Steiner tree is known to be NP-hard. While exact algorithms exist for this problem in 2D, their efficiency decreases as the number of terminals grows. A novel algorithm based on Upper Confidence Bound for Trees is proposed and adapted to the specific characteristics of Steiner trees. A simple heuristic for fast generation of feasible solutions based on Fermat points is proposed, together with a correction procedure. By combining Monte Carlo Tree Search and the proposed heuristics, the proposed algorithm is shown to work better than both the greedy heuristic and pure Monte Carlo simulations. Results of numerical experiments for randomly generated and benchmark library problems (from OR-Lib) are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
45

Imai, Tatsuya, and Akihiro Kishimoto. "A Novel Technique for Avoiding Plateaus of Greedy Best-First Search in Satisficing Planning." Proceedings of the International Symposium on Combinatorial Search 2, no. 1 (August 19, 2021): 197–98. http://dx.doi.org/10.1609/socs.v2i1.18208.

Full text
Abstract:
Heuristic functions play an important role in drastically improving the performance of satisficing planners based on greedy best-first search (GBFS). While automatic generation of heuristic functions (e.g., (Hoffmann and Nebel 2001; Helmert 2006)) enables state-of-the-art satisficing planners to solve very complicated planning problems, including benchmarks in the International Planning Competitions, accurate evaluation of nodes remains a challenging task. Although GBFS is fundamental and powerful in planning, it has an essential drawback when heuristic functions return inaccurate estimates. Assume that a heuristic function underestimates the difficulty of unpromising nodes. Then, since GBFS must expand nodes with small heuristic values first, it spends most of its time searching only unpromising areas and delays moving to the promising part. Previous work tackles this issue by adding diversity to search, i.e., the ability to simultaneously explore different parts of the search space to bypass large errors in heuristic functions. Several algorithms combined with diversity (e.g., K-best-first search (KBFS) in (Felner, Kraus, and Korf 2003)) have been empirically shown to be superior to naive best-first search algorithms. However, they still have limited diversity, since they do not immediately expand nodes mistakenly evaluated as very unpromising. This paper presents a new technique called diverse best-first search (DBFS), which incorporates diversity into search in a different way than previous search-based approaches. We show empirical results clearly demonstrating that DBFS is effective in satisficing planning.
APA, Harvard, Vancouver, ISO, and other styles
46

Song, Heping, and Guoli Wang. "Sparse Signal Recovery via ECME Thresholding Pursuits." Mathematical Problems in Engineering 2012 (2012): 1–22. http://dx.doi.org/10.1155/2012/478931.

Full text
Abstract:
The emerging theory of compressive sensing (CS) provides a new sparse signal processing paradigm for reconstructing sparse signals from undersampled linear measurements. Recently, numerous algorithms have been developed to solve convex optimization problems for CS sparse signal recovery. However, in certain circumstances, greedy algorithms exhibit superior performance to convex methods. This paper is a follow-up to the recent paper of Wang and Yin (2010), who refine BP reconstructions via iterative support detection (ISD). The heuristic idea of ISD is applied here to greedy algorithms. We develop two approaches for accelerating the ECME iteration. The described algorithms, named ECME thresholding pursuits (EMTP), introduce two greedy strategies in which each iteration detects a support set I by thresholding the result of the ECME iteration and estimates the reconstructed signal by solving a truncated least-squares problem on the support set I. Two effective support detection strategies are devised for sparse signals whose nonzero components have a fast-decaying distribution. Experimental studies are presented to demonstrate that EMTP offers an appealing alternative to state-of-the-art algorithms for sparse signal recovery.
APA, Harvard, Vancouver, ISO, and other styles
47

Richter, Silvia, Jordan Thayer, and Wheeler Ruml. "The Joy of Forgetting: Faster Anytime Search via Restarting." Proceedings of the International Conference on Automated Planning and Scheduling 20 (May 25, 2021): 137–44. http://dx.doi.org/10.1609/icaps.v20i1.13412.

Full text
Abstract:
Anytime search algorithms solve optimisation problems by quickly finding a (usually suboptimal) first solution and then finding improved solutions when given additional time. To deliver an initial solution quickly, they are typically greedy with respect to the heuristic cost-to-go estimate h. In this paper, we show that this low-h bias can cause poor performance if the greedy search makes early mistakes. Building on this observation, we present a new anytime approach that restarts the search from the initial state every time a new solution is found. We demonstrate the utility of our method via experiments in PDDL planning as well as other domains, and show that it is particularly useful for problems where the heuristic has systematic errors.
APA, Harvard, Vancouver, ISO, and other styles
48

Łapczyńska, Dagmara, Konrad Łapczyński, Anna Burduk, and Jose Machado. "Solving the problem of scheduling the production process based on heuristic algorithms." JUCS - Journal of Universal Computer Science 28, no. 3 (March 28, 2022): 292–310. http://dx.doi.org/10.3897/jucs.80750.

Full text
Abstract:
The paper deals with the production scheduling process, which is problematic and requires considering various factors while making decisions. Due to the specificity of the production system analysed in the practical example, the production scheduling problem was classified as a Job-shop Scheduling Problem (JSP). The production scheduling process, especially in the case of JSP, involves analysing a variety of data simultaneously and is a well-known NP-hard problem. The research was performed in partnership with a company from the automotive industry. The production scheduling process is a task usually performed by process engineers; thus, it can often be affected by mistakes of human nature, e.g. habits and differences in the experience and knowledge (know-how) of engineers. The usage of heuristic algorithms was proposed as the solution. The chosen methods are genetic and greedy algorithms, as both are suitable for resolving a problem that requires analysing a lot of data. The paper presents both the practical and theoretical aspects of the usefulness and effectiveness of genetic and greedy algorithms in a production scheduling process.
APA, Harvard, Vancouver, ISO, and other styles
49

Akbari, Abbas, Ahmad Khonsari, and Seyed Mohammad Ghoreyshi. "Thermal-Aware Virtual Machine Allocation for Heterogeneous Cloud Data Centers." Energies 13, no. 11 (June 4, 2020): 2880. http://dx.doi.org/10.3390/en13112880.

Full text
Abstract:
In recent years, a large and growing body of literature has addressed the energy-efficient resource management problem in data centers. Because cooling costs still constitute the major portion of the total data center energy cost, thermal-aware resource management techniques have been employed to achieve additional energy savings. In this paper, we formulate the problem of minimizing the total energy consumption of a heterogeneous data center (MITEC) as a non-linear integer optimization problem. We consider both computing and cooling energy consumption and provide a thermal-aware Virtual Machine (VM) allocation heuristic based on the genetic algorithm. Experimental results show that, using the proposed formulation, up to 30% energy savings are achieved compared to thermal-aware greedy algorithms and power-aware VM allocation heuristics.
APA, Harvard, Vancouver, ISO, and other styles
50

Chug, Anuradha, and Sandhya Tarwani. "Identifying the Optimal Refactoring Dependencies Using Heuristic Search Algorithms to Maximize Maintainability." International Journal of Software Engineering and Knowledge Engineering 31, no. 06 (June 2021): 803–35. http://dx.doi.org/10.1142/s0218194021500248.

Full text
Abstract:
Bad smells represent imperfection in the design of the software system and trigger the urge to refactor the source code. The quality of object-oriented software has always been a major concern for the developer team, and refactoring techniques help them focus on this aspect by transforming the code in a way that preserves the behavior of the software. Rigorous research has been done in this field to improve the quality of software using various techniques. However, one issue remains unsettled: the overhead effort required to refactor the code so as to yield the maximum maintainability value. In this paper, a quantitative evaluation method is proposed to improve the maintainability value by identifying the optimal refactoring dependencies in advance with the help of various meta-heuristic algorithms, including A*, AO*, Hill-Climbing and Greedy approaches. A comparison has been done between the maintainability values of the software used, before and after applying the proposed methodology. The results of this study show that the Greedy algorithm is the most promising of all the algorithms in determining the optimal refactoring sequence, resulting in 18.56% and 9.90% improvements in the maintainability values of the jTDS and ArtOfIllusion projects, respectively. Further, this study would be beneficial for the software maintenance team, as refactoring sequences will be available beforehand, helping the team maintain the software more easily and enhance its maintainability. The proposed methodology will help the maintenance team focus on a limited portion of the software due to the prioritization of classes, in turn helping them complete their work within budget and time constraints.
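The greedy strategy this abstract reports as most promising can be sketched as repeatedly applying the candidate refactoring with the highest estimated maintainability gain whose prerequisites are already satisfied; the refactoring names, gain values, and dependency model below are hypothetical, not taken from the study:

```python
# Hypothetical greedy sequencing of refactorings under dependencies.
def greedy_refactoring_sequence(candidates):
    """candidates: dict name -> (estimated gain, set of prerequisite names)."""
    done, sequence = set(), []
    while len(done) < len(candidates):
        # a refactoring is ready once all its prerequisites have been applied
        ready = [n for n, (_, deps) in candidates.items()
                 if n not in done and deps <= done]
        if not ready:
            break  # remaining candidates have unsatisfiable dependencies
        best = max(ready, key=lambda n: candidates[n][0])
        sequence.append(best)
        done.add(best)
    return sequence

candidates = {
    "extract_method": (4.0, set()),
    "move_method": (2.5, {"extract_method"}),
    "inline_temp": (1.0, set()),
}
print(greedy_refactoring_sequence(candidates))
# ['extract_method', 'move_method', 'inline_temp']
```

Unlike A* or AO*, which search over sequences with a heuristic estimate of the remaining gain, this greedy pass commits to the locally best step at each iteration, which keeps the cost linear in the number of candidates per step.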
APA, Harvard, Vancouver, ISO, and other styles
