Journal articles on the topic 'Randomised search algorithms'


1

Jansen, Thomas, and Christine Zarges. "Analysis of Randomised Search Heuristics for Dynamic Optimisation." Evolutionary Computation 23, no. 4 (December 2015): 513–41. http://dx.doi.org/10.1162/evco_a_00164.

Abstract:
Dynamic optimisation is an area of application where randomised search heuristics like evolutionary algorithms and artificial immune systems are often successful. The theoretical foundation of this important topic suffers from a lack of a generally accepted analytical framework as well as a lack of widely accepted example problems. This article tackles both problems by discussing necessary conditions for useful and practically relevant theoretical analysis as well as introducing a concrete family of dynamic example problems that draws inspiration from a well-known static example problem and exhibits a bi-stable dynamic. After the stage has been set this way, the framework is made concrete by presenting the results of thorough theoretical and statistical analysis for mutation-based evolutionary algorithms and artificial immune systems.
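As a concrete illustration of the setting described in the abstract, the sketch below runs a (1+1) evolutionary algorithm against a OneMax-style target that changes periodically. This is a generic dynamic-optimisation toy, not the bi-stable problem family introduced in the article; the problem size, change schedule and mutation rate are arbitrary assumptions.

```python
import random

def one_max_match(x, target):
    """Fitness = number of bits agreeing with the (possibly moving) target."""
    return sum(xi == ti for xi, ti in zip(x, target))

def one_plus_one_ea_dynamic(n=30, generations=300, change_every=50, seed=0):
    """(1+1) EA with standard bit mutation on a dynamic OneMax-style problem:
    every change_every generations a few target bits flip."""
    rng = random.Random(seed)
    target = [1] * n                       # optimum of the current environment
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = one_max_match(x, target)
    for t in range(1, generations + 1):
        if t % change_every == 0:          # environment change
            for i in rng.sample(range(n), 3):
                target[i] ^= 1
            fx = one_max_match(x, target)  # re-evaluate parent in new environment
        y = [xi ^ (rng.random() < 1 / n) for xi in x]  # standard bit mutation
        fy = one_max_match(y, target)
        if fy >= fx:                       # elitist acceptance
            x, fx = y, fy
    return fx

print(one_plus_one_ea_dynamic())
```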
2

Xiao, Peng, Soumitra Pal, and Sanguthevar Rajasekaran. "Randomised sequential and parallel algorithms for efficient quorum planted motif search." International Journal of Data Mining and Bioinformatics 18, no. 2 (2017): 105. http://dx.doi.org/10.1504/ijdmb.2017.086457.

3

Xiao, Peng, Soumitra Pal, and Sanguthevar Rajasekaran. "Randomised sequential and parallel algorithms for efficient quorum planted motif search." International Journal of Data Mining and Bioinformatics 18, no. 2 (2017): 105. http://dx.doi.org/10.1504/ijdmb.2017.10007475.

4

Lissovoi, Andrei, Pietro S. Oliveto, and John Alasdair Warwicker. "On the Time Complexity of Algorithm Selection Hyper-Heuristics for Multimodal Optimisation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2322–29. http://dx.doi.org/10.1609/aaai.v33i01.33012322.

Abstract:
Selection hyper-heuristics are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently, selection hyper-heuristics choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise a standard unimodal benchmark function from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper we extend our understanding to the domain of multimodal optimisation by considering a hyper-heuristic from the literature that can switch between elitist and non-elitist heuristics during the run. We first identify the range of parameters that allow the hyper-heuristic to hillclimb efficiently and prove that it can optimise a standard hillclimbing benchmark function in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where the hyper-heuristic is efficient by swiftly escaping local optima and ones where it is not. For a function class called CLIFFd, where a new gradient of increasing fitness can be identified after escaping local optima, the hyper-heuristic is extremely efficient, while a wide range of established elitist and non-elitist algorithms are not, including the well-studied Metropolis algorithm. We complete the picture with an analysis of another standard benchmark function called JUMPd as an example to highlight problem characteristics where the hyper-heuristic is inefficient. Yet, it still outperforms the well-established non-elitist Metropolis algorithm.
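A heavily simplified sketch of the idea of mixing elitist and non-elitist acceptance on a CLIFF-type function: each step proposes a one-bit flip and accepts it either elitistically or unconditionally, the latter with probability p. The function below is the standard CLIFF_d benchmark from the runtime-analysis literature; p, the problem sizes and the acceptance scheme are illustrative assumptions, not the exact operator set analysed in the paper.

```python
import random

def cliff(x, d):
    """CLIFF_d: OneMax with a fitness penalty of d - 1/2 once more than
    n - d bits are set, creating a local optimum that purely elitist
    one-bit-flip search cannot leave."""
    ones = sum(x)
    n = len(x)
    return ones if ones <= n - d else ones - d + 0.5

def move_acceptance_hh(n=20, d=3, p=0.1, max_iters=200000, seed=1):
    """Each step flips one uniformly chosen bit, then accepts the move
    elitistically, or unconditionally with probability p (non-elitist)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = cliff(x, d)
    for it in range(max_iters):
        if sum(x) == n:
            return it                      # global optimum (all ones) found
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1                          # one-bit flip (randomised local search)
        fy = cliff(y, d)
        if rng.random() < p or fy >= fx:   # non-elitist vs. elitist acceptance
            x, fx = y, fy
    return max_iters

print(move_acceptance_hh())
```

The occasional unconditional acceptance is what lets the process step down the cliff and then climb the new gradient on the far side.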
5

Lissovoi, Andrei, Pietro S. Oliveto, and John Alasdair Warwicker. "Simple Hyper-Heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes." Evolutionary Computation 28, no. 3 (September 2020): 437–61. http://dx.doi.org/10.1162/evco_a_00258.

Abstract:
Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics from a set of low-level heuristics during the optimisation process. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the “simple” Random Gradient HH so that success can be measured over a fixed period of time [Formula: see text], instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases.
In particular, with access to [Formula: see text] low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the [Formula: see text] heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to [Formula: see text]) and shed some light on the best choices for the parameter [Formula: see text] in various situations.
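The Generalised Random Gradient mechanism described above (keep exploiting the current low-level heuristic as long as it yields an improvement within a period of tau steps, otherwise select a new one uniformly at random) can be sketched as follows for LeadingOnes with two RLS neighbourhood sizes. The value of tau, the neighbourhood sizes and the problem size are illustrative choices, not the values analysed in the article.

```python
import random

def leading_ones(x):
    """LeadingOnes: number of consecutive one-bits counted from the left."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def rls_k(x, k, rng):
    """Low-level heuristic RLS_k: flip k distinct, uniformly chosen bits."""
    y = x[:]
    for i in rng.sample(range(len(x)), k):
        y[i] ^= 1
    return y

def generalised_random_gradient(n=50, ks=(1, 2), tau=500, max_steps=10**6, seed=2):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    k = rng.choice(ks)
    steps = 0
    since_success = 0
    while fx < n and steps < max_steps:
        if since_success >= tau:       # no success within the period: re-select
            k = rng.choice(ks)
            since_success = 0
        y = rls_k(x, k, rng)
        fy = leading_ones(y)
        steps += 1
        since_success += 1
        if fy > fx:
            x, fx = y, fy
            since_success = 0          # success: keep exploiting this heuristic
        elif fy == fx:
            x = y                      # accept equal-fitness moves
    return steps

print(generalised_random_gradient())
```

Note that RLS_2 can never improve when only the last bit is wrong, so the period mechanism is what eventually hands control back to RLS_1.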
6

Corus, Dogan, and Pietro S. Oliveto. "On the Benefits of Populations for the Exploitation Speed of Standard Steady-State Genetic Algorithms." Algorithmica 82, no. 12 (July 15, 2020): 3676–706. http://dx.doi.org/10.1007/s00453-020-00743-1.

Abstract:
It is generally accepted that populations are useful for the global exploration of multi-modal optimisation problems. Indeed, several theoretical results are available showing such advantages over single-trajectory search heuristics. In this paper we provide evidence that evolving populations via crossover and mutation may also benefit the optimisation time for hillclimbing unimodal functions. In particular, we prove bounds on the expected runtime of the standard (μ+1) GA for OneMax that are lower than its unary black box complexity and decrease in the leading constant with the population size up to μ = o(√(log n)). Our analysis suggests that the optimal mutation strategy is to flip two bits most of the time. To achieve the results we provide two interesting contributions to the theory of randomised search heuristics: (1) a novel application of drift analysis which compares absorption times of different Markov chains without defining an explicit potential function; (2) the inversion of fundamental matrices to calculate the absorption times of the Markov chains. The latter strategy was previously proposed in the literature, but to the best of our knowledge this is the first time it has been used to show non-trivial bounds on expected runtimes.
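A minimal sketch of a standard steady-state (μ+1) GA on OneMax, the algorithm family analysed above: each generation creates one offspring by uniform crossover of two uniformly chosen parents (with probability pc) followed by standard bit mutation, then deletes a worst individual. This is a generic textbook variant; the population size, crossover probability and evaluation cap are assumptions, not the parameterisation used in the paper's proofs.

```python
import random

def one_max(x):
    """OneMax: number of one-bits."""
    return sum(x)

def mu_plus_one_ga(n=30, mu=5, pc=0.9, max_evals=100000, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    evals = mu
    while max(one_max(x) for x in pop) < n and evals < max_evals:
        if rng.random() < pc:                       # uniform crossover ...
            p1, p2 = rng.choice(pop), rng.choice(pop)
            child = [rng.choice((a, b)) for a, b in zip(p1, p2)]
        else:                                       # ... or clone a parent
            child = rng.choice(pop)[:]
        child = [bit ^ (rng.random() < 1 / n) for bit in child]  # bit mutation
        evals += 1
        pop.append(child)
        pop.remove(min(pop, key=one_max))           # (mu+1): delete a worst
    return evals

print(mu_plus_one_ga())
```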
7

Serraino, Giuseppe Filiberto, and Gavin J. Murphy. "Effects of cerebral near-infrared spectroscopy on the outcome of patients undergoing cardiac surgery: a systematic review of randomised trials." BMJ Open 7, no. 9 (September 2017): e016613. http://dx.doi.org/10.1136/bmjopen-2017-016613.

Abstract:
Objectives: Goal-directed optimisation of cerebral oxygenation using near-infrared spectroscopy (NIRS) during cardiopulmonary bypass is widely used. We tested the hypotheses that the use of NIRS cerebral oximetry results in reductions in cerebral injury (neurocognitive function, serum biomarkers), injury to other organs including the heart and brain, transfusion rates, mortality and resource use. Design: Systematic review and meta-analysis. Setting: Tertiary cardiac surgery centres in North America, Europe and Asia. Participants: A search of the Cochrane Central Register of Controlled Trials, ClinicalTrials.gov, Medline, Embase, and the Cumulative Index to Nursing and Allied Health Literature Plus from inception to November 2016 identified 10 randomised trials, enrolling a total of 1466 patients, all in adult cardiac surgery. Interventions: NIRS-based algorithms designed to optimise cerebral oxygenation versus standard care (non-NIRS-based) protocols in cardiac surgery patients during cardiopulmonary bypass. Outcome measures: Mortality, organ injury affecting the brain, heart and kidneys, red cell transfusion and resource use. Results: Two of the 10 trials identified in the literature search were considered at low risk of bias. Random-effects meta-analysis demonstrated similar mortality (risk ratio (RR) 0.76, 95% CI 0.30 to 1.96), major morbidity including stroke (RR 1.08, 95% CI 0.40 to 2.91), red cell transfusion and resource use in NIRS-treated patients and controls, with little or no heterogeneity. Grades of Recommendation, Assessment, Development and Evaluation of the quality of the evidence was low or very low for all of the outcomes assessed. Conclusions: The results of this systematic review did not support the hypotheses that cerebral NIRS-based algorithms have clinical benefits in cardiac surgery. Trial registration number: PROSPERO CRD42015027696.
8

Hassan, N., R. Slight, D. Weiand, A. Vellinga, G. Morgan, F. Aboushareb, and S. P. Slight. "Predicting infection and sepsis; what predictors have been used to train machine learning algorithms? A systematic review." International Journal of Pharmacy Practice 29, Supplement_1 (March 26, 2021): i18. http://dx.doi.org/10.1093/ijpp/riab016.022.

Abstract:
Introduction: Sepsis is a life-threatening condition that is associated with increased mortality. Artificial intelligence tools can inform clinical decision making by flagging patients who may be at risk of developing infection and subsequent sepsis and assist clinicians with their care management. Aim: To identify the optimal set of predictors used to train machine learning algorithms to predict the likelihood of an infection and subsequent sepsis and inform clinical decision making. Methods: This systematic review was registered in the PROSPERO database (CRD42020158685). We searched 3 large databases: Medline, Cumulative Index of Nursing and Allied Health Literature, and Embase, using appropriate search terms. We included quantitative primary research studies that focused on sepsis prediction associated with bacterial infection in the adult population (>18 years) in all care settings, which included data on predictors to develop machine learning algorithms. The timeframe of the search was 1st January 2000 till 25th November 2019. Data extraction was performed using a data extraction sheet, and a narrative synthesis of eligible studies was undertaken. Narrative analysis was used to arrange the data into key areas, and compare and contrast between the content of included studies. Quality assessment was performed using the Newcastle-Ottawa Quality Assessment scale, which was used to evaluate the quality of non-randomised studies. Bias was not assessed due to the non-randomised nature of the included studies. Results: Fifteen articles met our inclusion criteria (Figure 1). We identified 194 predictors that were used to train machine learning algorithms to predict infection and subsequent sepsis, with 13 predictors used on average across all included studies. The most significant predictors included age, gender, smoking, alcohol intake, heart rate, blood pressure, lactate level, cardiovascular disease, endocrine disease, cancer, chronic kidney disease (eGFR<60ml/min), white blood cell count, liver dysfunction, surgical approach (open or minimally invasive), and pre-operative haematocrit <30%. These predictors were used for the development of all the algorithms in the fifteen articles. All included studies used artificial intelligence techniques to predict the likelihood of sepsis, with average sensitivity 77.5±19.27 and average specificity 69.45±21.25. Conclusion: The type of predictors used was found to influence the predictive power and predictive timeframe of the developed machine learning algorithm. Two strengths of our review were that we included studies published since the first definition of sepsis was published in 2001, and identified factors that can improve the predictive ability of algorithms. However, we note that the included studies had some limitations, with three studies not validating the models that they developed, and many tools limited by either their reduced specificity or sensitivity or both. This work has important implications for practice, as predicting the likelihood of sepsis can help inform the management of patients and concentrate finite resources on those patients who are most at risk. Producing a set of predictors can also guide future studies in developing more sensitive and specific algorithms with an increased predictive time window to allow for preventive clinical measures.
9

Oliva, Antonio, Gerardo Altamura, Mario Cesare Nurchis, Massimo Zedda, Giorgio Sessa, Francesca Cazzato, Giovanni Aulino, et al. "Assessing the potentiality of algorithms and artificial intelligence adoption to disrupt patient primary care with a safer and faster medication management: a systematic review protocol." BMJ Open 12, no. 5 (May 2022): e057399. http://dx.doi.org/10.1136/bmjopen-2021-057399.

Abstract:
Introduction: In primary care, almost 75% of outpatient visits by family doctors and general practitioners involve continuation or initiation of drug therapy. Due to the enormous amount of drugs used by outpatients in unmonitored situations, the potential risk of adverse events due to an error in the use or prescription of drugs is much higher than in a hospital setting. Artificial intelligence (AI) applications can help healthcare professionals to take charge of patient safety by improving error detection, patient stratification and drug management. The aim is to investigate the impact of AI algorithms on drug management in primary care settings and to compare AI or algorithms with standard clinical practice to define the medication fields where technological support could lead to better results. Methods and analysis: A systematic review and meta-analysis of the literature will be conducted, querying PubMed, Cochrane and ISI Web of Science from inception to December 2021. The primary outcome will be the reduction of medication errors obtained by AI application. The search strategy and the study selection will be conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses and the population, intervention, comparator and outcome framework. The quality of included studies will be appraised adopting the quality assessment tool for observational cohort and cross-sectional studies for non-randomised controlled trials, as well as the quality assessment of controlled intervention studies of the National Institutes of Health for randomised controlled trials. Ethics and dissemination: Formal ethical approval is not required since no human beings are involved. The results will be disseminated widely through peer-reviewed publications.
10

Tett, Simon F. B., Kuniko Yamazaki, Michael J. Mineter, Coralia Cartis, and Nathan Eizenberg. "Calibrating climate models using inverse methods: case studies with HadAM3, HadAM3P and HadCM3." Geoscientific Model Development 10, no. 9 (September 28, 2017): 3567–89. http://dx.doi.org/10.5194/gmd-10-3567-2017.

Abstract:
Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss–Newton line-search algorithm: (1) a standard Gauss–Newton algorithm in which, in each iteration, all parameters were perturbed and (2) a randomised block-coordinate variant in which, in each iteration, a random sub-set of parameters was perturbed. The cost function to be minimised used multiple large-scale multi-annual average observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (third Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with 7 and 14 parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. For the 14-parameter cases several failed to converge, with the random variant in which 6 parameters were perturbed being most successful. Multiple sets of parameter values were found that produced multiple models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (third Hadley Centre Coupled model) state, resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss–Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm, with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest the computational cost for the Gauss–Newton algorithm scales between P and P², where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested. Algorithms in which seven parameters were perturbed and three out of seven parameters randomly perturbed produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which 6 out of 13 parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration using atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.
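The randomised block-coordinate idea (perturb only a random subset of parameters per iteration) can be sketched with blocks of size one on a toy least-squares problem. The quadratic model, sample points and iteration budget below are stand-in assumptions for the far more expensive climate-model cost function in the paper.

```python
import random

def residuals(p, data):
    """Toy quadratic model: r_i = y_i - (p0 + p1*t_i + p2*t_i^2)."""
    return [y - (p[0] + p[1] * t + p[2] * t * t) for t, y in data]

def randomised_coordinate_gauss_newton(p0, data, n_iters=1000, h=1e-6, seed=4):
    """Each iteration perturbs one randomly chosen parameter (finite
    difference) and takes a Gauss-Newton step along that coordinate."""
    rng = random.Random(seed)
    p = list(p0)
    for _ in range(n_iters):
        j = rng.randrange(len(p))
        r = residuals(p, data)
        q = p[:]
        q[j] += h                                    # perturb parameter j only
        rq = residuals(q, data)
        col = [(b - a) / h for a, b in zip(r, rq)]   # Jacobian column j
        denom = sum(c * c for c in col)
        if denom > 0:
            # Gauss-Newton step restricted to coordinate j:
            # minimise ||r + col * d||^2  =>  d = -(col . r) / (col . col)
            p[j] -= sum(c * ri for c, ri in zip(col, r)) / denom
    return p

# recover y = 1 + 2t + 3t^2 from noiseless samples
data = [(-0.9 + 0.2 * k, 1 + 2 * (-0.9 + 0.2 * k) + 3 * (-0.9 + 0.2 * k) ** 2)
        for k in range(10)]
fit = randomised_coordinate_gauss_newton([0.0, 0.0, 0.0], data)
print([round(v, 2) for v in fit])
```

The attraction of the block variant in the paper is that a smaller block means fewer model perturbation runs per iteration, at the price of more iterations.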
11

Wilson, GDF. "A phylogenetic analysis of the isopod family Janiridae (Crustacea)." Invertebrate Systematics 8, no. 3 (1994): 749. http://dx.doi.org/10.1071/it9940749.

Abstract:
A phylogeny of the isopod family Janiridae and genera from presumptive outgroups, Acanthaspidiidae, Joeropsididae and Microparasellidae is estimated. Characters were gathered from the published literature, and assembled into a data matrix for cladistic analysis. The data, when evaluated with heuristic search algorithms, yielded eight most-parsimonious trees, none of which supported the monophyly of the Janiridae. To evaluate the impact of homoplasy, characters with a rescaled consistency less than 0.1 were deleted, resulting in four somewhat different trees that were non-monophyletic for the janirids. With the smaller data set, trees supporting janirid monophyly were 10 steps longer. A permutation tail probability test found substantially more hierarchical information in the janirid data set than in randomised data. Internal topologies of the shortest trees were evaluated as hypotheses for new family-level groups, although new family-level classifications cannot be proposed at this time owing to insufficient evidence. The Janiridae therefore cannot be considered monophyletic.
12

Camungao, Ricardo Q. "Slim-Tree Clustering Large Application based on Randomized Search (STCLARANS) Algorithm Simulator." Journal of Advanced Research in Dynamical and Control Systems 11, no. 12-SPECIAL ISSUE (December 31, 2019): 1253–58. http://dx.doi.org/10.5373/jardcs/v11sp12/20193333.

13

Mishra, Abhishek, and Pramod Kumar Mishra. "A Randomized Scheduling Algorithm for Multiprocessor Environments Using Local Search." Parallel Processing Letters 26, no. 01 (March 2016): 1650002. http://dx.doi.org/10.1142/s012962641650002x.

Abstract:
The LOCAL(A, B) randomized task scheduling algorithm is proposed for fully connected multiprocessors. It combines two given task scheduling algorithms (A and B) using local neighborhood search to give a hybrid of the two. The objective is to show that this type of hybridization can give much better performance results in terms of parallel execution times. Two task scheduling algorithms are selected, DSC (Dominant Sequence Clustering) as algorithm A and CPPS (Cluster Pair Priority Scheduling) as algorithm B, and a hybrid is created: the LOCAL(DSC, CPPS), or simply the LOCAL, task scheduling algorithm. The LOCAL task scheduling algorithm has time complexity O(|V||E|(|V|+|E|)), where V is the set of vertices and E is the set of edges in the task graph. The LOCAL task scheduling algorithm is compared with six other algorithms: CPPS, DCCL (Dynamic Computation Communication Load), DSC, EZ (Edge Zeroing), LC (Linear Clustering), and RDCC (Randomized Dynamic Computation Communication). Performance evaluation shows that the LOCAL task scheduling algorithm gives up to 80.47% improvement in NSL (Normalized Schedule Length) over the other algorithms.
14

Cresswell, Kathrin, Margaret Callaghan, Sheraz Khan, Zakariya Sheikh, Hajar Mozaffar, and Aziz Sheikh. "Investigating the use of data-driven artificial intelligence in computerised decision support systems for health and social care: A systematic review." Health Informatics Journal 26, no. 3 (January 22, 2020): 2138–47. http://dx.doi.org/10.1177/1460458219900452.

Abstract:
There is growing interest in the potential of artificial intelligence to support decision-making in health and social care settings. There is, however, currently limited evidence of the effectiveness of these systems. The aim of this study was to investigate the effectiveness of artificial intelligence-based computerised decision support systems in health and social care settings. We conducted a systematic literature review to identify relevant randomised controlled trials conducted between 2013 and 2018. We searched the following databases: MEDLINE, EMBASE, CINAHL, PsycINFO, Web of Science, Cochrane Library, ASSIA, Emerald, Health Business Fulltext Elite, ProQuest Public Health, Social Care Online, and grey literature sources. Search terms were conceptualised into three groups: artificial intelligence-related terms, computerised decision support-related terms, and terms relating to health and social care. Terms within groups were combined using the Boolean operator OR, and groups were combined using the Boolean operator AND. Two reviewers independently screened studies against the eligibility criteria, and two independent reviewers extracted data on eligible studies onto a customised sheet. We assessed the quality of studies using the Critical Appraisal Skills Programme checklist for randomised controlled trials. We then conducted a narrative synthesis. We identified 68 hits, of which five studies satisfied the inclusion criteria. These studies varied substantially in relation to quality, settings, outcomes, and technologies. None of the studies was conducted in social care settings, and three randomised controlled trials showed no difference in patient outcomes. Of these, one investigated the use of Bayesian triage algorithms on forced expiratory volume in 1 second (FEV1) and health-related quality of life in lung transplant patients. Another investigated the effect of image pattern recognition on neonatal development outcomes in pregnant women, and another investigated the effect of the Kalman filter technique for warfarin dosing suggestions on time in therapeutic range. The remaining two randomised controlled trials, investigating computer vision and neural networks on medication adherence and the impact of learning algorithms on assessment time of patients with gestational diabetes, showed statistically significant and clinically important differences from the control groups receiving standard care. However, these studies tended to be of low quality, lacking detailed descriptions of methods, and only one study used a double-blind design. Although the evidence of effectiveness of data-driven artificial intelligence to support decision-making in health and social care settings is limited, this work provides important insights on how a meaningful evidence base in this emerging field needs to be developed going forward. It is unlikely that any single overall message surrounding effectiveness will emerge; rather, the effectiveness of interventions is likely to be context-specific, and this calls for the inclusion of a range of study designs to investigate mechanisms of action.
15

Gelly, Sylvain, Sylvie Ruette, and Olivier Teytaud. "Comparison-Based Algorithms Are Robust and Randomized Algorithms Are Anytime." Evolutionary Computation 15, no. 4 (December 2007): 411–34. http://dx.doi.org/10.1162/evco.2007.15.4.411.

Abstract:
Randomized search heuristics (e.g., evolutionary algorithms, simulated annealing, etc.) are very appealing to practitioners: they are easy to implement and usually provide good performance. The theoretical analysis of these algorithms usually focuses on convergence rates. This paper presents a mathematical study of randomized search heuristics which use a comparison-based selection mechanism. The two main results are that comparison-based algorithms are the best algorithms for some robustness criteria and that introducing randomness in the choice of offspring improves the anytime behavior of the algorithm. An original Estimation of Distribution Algorithm combining both results is proposed and successfully evaluated in experiments.
16

Wilson, William G., William G. Laidlaw, and Kris Vasudevan. "Residual statics estimation using the genetic algorithm." GEOPHYSICS 59, no. 5 (May 1994): 766–74. http://dx.doi.org/10.1190/1.1443634.

Abstract:
An optimization problem as complex as residual statics estimation in seismic image processing requires novel techniques. One interesting technique, the genetic algorithm, is based loosely on the optimization process forming the basis of biological evolution. The objective of this paper is to examine this algorithm’s applicability to residual statics estimation and present three new ingredients that help the algorithm successfully resolve residual statics. These three ingredients include (1) breaking the population into subpopulations with restricted breeding between the subpopulations, (2) localizing the search, to varying degrees, about the uncorrected input stack, and (3) modifying the optimization function to take account of CDP‐dependent structural features. Introducing subpopulations has the effect of enhancing the search when the volume of phase space being searched is large and limited information is given about where the algorithm should concentrate its efforts. Subpopulations work well initially, but as the performance increases, the search efficiency decreases. Thus, search efficiency is dependent on both the subpopulation size and the present performance of the subpopulation. The greediness of genetic algorithms in their rapid acceptance of a local minimum can be compensated for through a more cautious and user‐controlled exploration of the phase space. Specifically, genetic algorithms can be "fed" the uncorrected input stack as a way of biasing the search in the region of phase space that contains the rough event images observable in most uncorrected seismic stacks. We compare three types of starting populations: (1) a randomized population, (2) a biased start with a randomized population save one individual representing the input stack, and (3) a biased start restricted to a slowly expanding (generation‐dependent) volume of phase space.
Efficient searches also require an optimization function that places the perfectly corrected seismic image at the function’s global maximum. Constructing such a function is nontrivial, and we implement a seismic data set that allows us to examine the genetic algorithm’s sensitivity to inappropriate optimization functions.
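Ingredient (1), subpopulations with restricted breeding, corresponds to what is often called an island model. Below is a minimal island-model GA sketch on a OneMax stand-in objective (the paper's actual objective measures the quality of the corrected seismic stack); all sizes, rates and the migration schedule are illustrative assumptions.

```python
import random

def fitness(x):
    """Toy stand-in objective (OneMax); the paper optimises a stack-quality measure."""
    return sum(x)

def island_ga(n=24, islands=4, pop_size=10, generations=60,
              migrate_every=15, seed=5):
    rng = random.Random(seed)
    pops = [[[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
            for _ in range(islands)]
    for gen in range(1, generations + 1):
        for pop in pops:
            # breeding is restricted to members of the same subpopulation
            parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
            children = []
            while len(children) < pop_size:
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, n)
                child = a[:cut] + b[cut:]                # one-point crossover
                child = [bit ^ (rng.random() < 1 / n) for bit in child]
                children.append(child)
            pop[:] = children
        if gen % migrate_every == 0:                     # rare inter-island exchange
            for i, pop in enumerate(pops):
                best = max(pop, key=fitness)
                pops[(i + 1) % islands].append(best[:])
                pops[(i + 1) % islands].pop(0)           # keep population size fixed
    return max(fitness(x) for pop in pops for x in pop)

print(island_ga())
```

Restricting breeding keeps the islands exploring different regions; the occasional migration spreads good building blocks without collapsing diversity immediately.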
17

Kazakovtsev, Lev, Dmitry Stashkov, Mikhail Gudyma, and Vladimir Kazakovtsev. "Algorithms with greedy heuristic procedures for mixture probability distribution separation." Yugoslav Journal of Operations Research 29, no. 1 (2019): 51–67. http://dx.doi.org/10.2298/yjor171107030k.

Abstract:
For clustering problems based on the model of mixture probability distribution separation, we propose new Variable Neighbourhood Search (VNS) algorithms and evolutionary genetic algorithms (GAs) with greedy agglomerative heuristic procedures and compare them with known algorithms. The new genetic algorithms implement a global search strategy with the use of a special crossover operator based on greedy agglomerative heuristic procedures in combination with the EM (Expectation Maximization) algorithm. In our new VNS algorithms, this combination is used for forming randomized neighbourhoods to search for better solutions. The results of computational experiments on classical data sets and tests on production batches of semiconductor devices shipped for the space industry demonstrate that the new algorithms obtain better results, i.e., higher values of the log-likelihood objective function, in comparison with the EM algorithm and its modifications.
18

Litvinas, Linas. "A hybrid of Bayesian-based global search with Hooke–Jeeves local refinement for multi-objective optimization problems." Nonlinear Analysis: Modelling and Control 27 (March 28, 2022): 1–22. http://dx.doi.org/10.15388/namc.2022.27.26558.

Full text
Abstract:
The proposed multi-objective optimization algorithm hybridizes random global search with a local refinement algorithm. The global search algorithm mimics the Bayesian multi-objective optimization algorithm. The site at which the proposed algorithm next evaluates the objective functions is selected by randomized simulation of the bi-objective selection performed by the Bayesian-based algorithm. The advantage of the new algorithm is that it avoids the inner complexity of Bayesian algorithms. A version of the Hooke–Jeeves algorithm is adapted for the local refinement of the approximation of the Pareto front. The developed hybrid algorithm is tested under conditions previously applied to test other Bayesian algorithms so that performance could be compared. Further experiments were performed to assess the efficiency of the proposed algorithm under conditions where previous versions of Bayesian algorithms were not appropriate because of the number of objectives and/or the dimensionality of the decision space.
APA, Harvard, Vancouver, ISO, and other styles
19

Sung, Chi Wan, and Shiu Yin Yuen. "Analysis of (1+1) Evolutionary Algorithm and Randomized Local Search with Memory." Evolutionary Computation 19, no. 2 (June 2011): 287–323. http://dx.doi.org/10.1162/evco_a_00029.

Full text
Abstract:
This paper considers the scenario of the (1+1) evolutionary algorithm (EA) and randomized local search (RLS) with memory. Previously explored solutions are stored in memory until an improvement in fitness is obtained; then the stored information is discarded. This results in two new algorithms: (1+1) EA-m (with a raw list and hash table option) and RLS-m+ (and RLS-m if the function is a priori known to be unimodal). These two algorithms can be regarded as very simple forms of tabu search. Rigorous theoretical analysis of the expected time to find the globally optimal solutions for these algorithms is conducted for both unimodal and multimodal functions. A unified mathematical framework, involving the new concept of spatially invariant neighborhood, is proposed. Under this framework, both (1+1) EA with standard uniform mutation and RLS can be considered as particular instances and in the most general cases, all functions can be considered to be unimodal. Under this framework, it is found that for unimodal functions, the improvement by memory assistance is always positive but at most by one half. For multimodal functions, the improvement is significant; for functions with gaps and another hard function, the order of growth is reduced; for at least one example function, the order can change from exponential to polynomial. Empirical results, with a reasonable fitness evaluation time assumption, verify that (1+1) EA-m and RLS-m+ are superior to their conventional counterparts. Both new algorithms are promising for use in a memetic algorithm. In particular, RLS-m+ makes the previously impractical RLS practical, and surprisingly, does not require any extra memory in actual implementation.
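The memory mechanism described above is simple enough to sketch. Below is a minimal, illustrative version of randomized local search with memory on bit strings, not the paper's exact RLS-m+: the memory here simply records which single-bit flips were already rejected from the current point and is discarded on improvement, as the abstract describes. The OneMax test function (`sum`) is our assumption:

```python
import random

def rls_with_memory(f, n, max_evals=50_000, seed=1):
    """Sketch of RLS with memory: flip one uniformly random bit per step;
    remember already-tried (and rejected) neighbours until an improvement
    is found, then discard the stored information."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    tried = set()          # bit positions already tried from the current point
    evals = 1
    while evals < max_evals and len(tried) < n:
        i = rng.choice([j for j in range(n) if j not in tried])
        y = x[:]
        y[i] ^= 1          # flip exactly one bit
        fy = f(y)
        evals += 1
        if fy > fx:
            x, fx = y, fy
            tried.clear()  # improvement found: discard the memory
        else:
            tried.add(i)   # remember this neighbour was unsuccessful
    return x
```

For a unimodal function such as OneMax, the memory guarantees that each neighbour is evaluated at most once between improvements, which is the source of the savings analysed in the paper.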
APA, Harvard, Vancouver, ISO, and other styles
20

Arunkumar, S., and T. Chockalingam. "Genetic search algorithms and their randomized operators." Computers & Mathematics with Applications 25, no. 5 (March 1993): 91–100. http://dx.doi.org/10.1016/0898-1221(93)90202-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Mühlenthaler, Moritz, and Alexander Raß. "Runtime analysis of discrete particle swarm optimization algorithms: A survey." it - Information Technology 61, no. 4 (August 27, 2019): 177–85. http://dx.doi.org/10.1515/itit-2019-0009.

Full text
Abstract:
Abstract A discrete particle swarm optimization (PSO) algorithm is a randomized search heuristic for discrete optimization problems. A fundamental question about randomized search heuristics is how long it takes, in expectation, until an optimal solution is found. We give an overview of recent developments related to this question for discrete PSO algorithms. In particular, we give a comparison of known upper and lower bounds of expected runtimes and briefly discuss the techniques used to obtain these bounds.
APA, Harvard, Vancouver, ISO, and other styles
22

Gelfand, Andrew, Kalev Kask, and Rina Dechter. "Stopping Rules for Randomized Greedy Triangulation Schemes." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1043–48. http://dx.doi.org/10.1609/aaai.v25i1.8021.

Full text
Abstract:
Many algorithms for performing inference in graphical models have complexity that is exponential in the treewidth — a parameter of the underlying graph structure. Computing the (minimal) treewidth is NP-complete, so stochastic algorithms are sometimes used to find low-width tree decompositions. A common approach for finding good decompositions is iteratively executing a greedy triangulation algorithm (e.g. min-fill) with randomized tie-breaking. However, utilizing a stochastic algorithm as part of the inference task introduces a new problem — namely, deciding how long the stochastic algorithm should be allowed to execute before performing inference on the best tree decomposition found so far. We refer to this dilemma as the Stopping Problem and formalize it in terms of the total time needed to answer a probabilistic query. We propose a rule for discontinuing the search for improved decompositions and demonstrate the benefit (in terms of time saved) of applying this rule to Bayes and Markov network instances.
APA, Harvard, Vancouver, ISO, and other styles
23

Sørensen, Anders, Karsten Juhl Jørgensen, and Klaus Munkholm. "Clinical practice guideline recommendations on tapering and discontinuing antidepressants for depression: a systematic review." Therapeutic Advances in Psychopharmacology 12 (January 2022): 204512532110676. http://dx.doi.org/10.1177/20451253211067656.

Full text
Abstract:
Background: Tapering and discontinuing antidepressants are important aspects of the management of patients with depression and should therefore be considered in clinical practice guidelines. Objectives: We aimed to assess the extent and content, and appraise the quality, of guidance on tapering and discontinuing antidepressants in major clinical practice guidelines on depression. Methods: Systematic review of clinical practice guidelines on depression issued by national health authorities and major national or international professional organisations in the United Kingdom, the United States, Canada, Australia, Singapore, Ireland and New Zealand (PROSPERO CRD42020220682). We searched PubMed, 14 guideline registries and the websites of relevant organisations (last search 25 May 2021). The clinical practice guidelines were assessed for recommendations and information relevant to tapering and discontinuing antidepressants. The quality of the clinical practice guidelines as they pertained to tapering and discontinuation was assessed using the AGREE II tool. Results: Of the 21 included clinical practice guidelines, 15 (71%) recommended that antidepressants are tapered gradually or slowly, but none provided guidance on dose reductions, how to distinguish withdrawal symptoms from relapse or how to manage withdrawal symptoms. Psychological challenges were not addressed in any clinical practice guideline, and the treatment algorithms and flow charts did not include discontinuation. The quality of the clinical practice guidelines was overall low. Conclusion: Current major clinical practice guidelines provide little support for clinicians wishing to help patients discontinue or taper antidepressants in terms of mitigating and managing withdrawal symptoms. Patients who have deteriorated upon following current guidance on tapering and discontinuing antidepressants thus cannot be concluded to have experienced a relapse. 
Better guidance requires better randomised trials investigating interventions for discontinuing or tapering antidepressants.
APA, Harvard, Vancouver, ISO, and other styles
24

Odili, Julius Beneoluchi, A. Noraziah, and M. Zarina. "A Comparative Performance Analysis of Computational Intelligence Techniques to Solve the Asymmetric Travelling Salesman Problem." Computational Intelligence and Neuroscience 2021 (April 20, 2021): 1–13. http://dx.doi.org/10.1155/2021/6625438.

Full text
Abstract:
This paper presents a comparative performance analysis of some metaheuristics such as the African Buffalo Optimization algorithm (ABO), Improved Extremal Optimization (IEO), Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO), Max-Min Ant System (MMAS), Cooperative Genetic Ant System (CGAS), and the heuristic, Randomized Insertion Algorithm (RAI) to solve the asymmetric Travelling Salesman Problem (ATSP). Quite unlike the symmetric Travelling Salesman Problem, there is a paucity of research studies on the asymmetric counterpart. This is quite disturbing because most real-life applications are actually asymmetric in nature. These six algorithms were chosen for their performance comparison because they have posted some of the best results in literature and they employ different search schemes in attempting solutions to the ATSP. The comparative algorithms in this study employ different techniques in their search for solutions to ATSP: the African Buffalo Optimization employs the modified Karp–Steele mechanism, Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO) employs the path construction with patching technique, Cooperative Genetic Ant System uses natural selection and ordering; Randomized Insertion Algorithm uses the random insertion approach, and the Improved Extremal Optimization uses the grid search strategy. After a number of experiments on the popular but difficult 15 out of the 19 ATSP instances in TSPLIB, the results show that the African Buffalo Optimization algorithm slightly outperformed the other algorithms in obtaining the optimal results and at a much faster speed.
APA, Harvard, Vancouver, ISO, and other styles
25

Hui, L. C. K. "Randomized Competitive Algorithms for Successful and Unsuccessful Search." Computer Journal 39, no. 5 (May 1, 1996): 427–38. http://dx.doi.org/10.1093/comjnl/39.5.427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Fränti, P., and J. Kivijärvi. "Randomised Local Search Algorithm for the Clustering Problem." Pattern Analysis & Applications 3, no. 4 (December 1, 2000): 358–69. http://dx.doi.org/10.1007/s100440070007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kazakovtsev, Lev, Ivan Rozhnov, Aleksey Popov, and Elena Tovbis. "Self-Adjusting Variable Neighborhood Search Algorithm for Near-Optimal k-Means Clustering." Computation 8, no. 4 (November 5, 2020): 90. http://dx.doi.org/10.3390/computation8040090.

Full text
Abstract:
The k-means problem is one of the most popular models in cluster analysis that minimizes the sum of the squared distances from clustered objects to the sought cluster centers (centroids). The simplicity of its algorithmic implementation encourages researchers to apply it in a variety of engineering and scientific branches. Nevertheless, the problem is proven to be NP-hard which makes exact algorithms inapplicable for large scale problems, and the simplest and most popular algorithms result in very poor values of the squared distances sum. If a problem must be solved within a limited time with the maximum accuracy, which would be difficult to improve using known methods without increasing computational costs, the variable neighborhood search (VNS) algorithms, which search in randomized neighborhoods formed by the application of greedy agglomerative procedures, are competitive. In this article, we investigate the influence of the most important parameter of such neighborhoods on the computational efficiency and propose a new VNS-based algorithm (solver), implemented on the graphics processing unit (GPU), which adjusts this parameter. Benchmarking on data sets composed of up to millions of objects demonstrates the advantage of the new algorithm in comparison with known local search algorithms, within a fixed time, allowing for online computation.
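A toy version of the kind of greedy agglomerative procedure from which such neighbourhoods are formed can be sketched as follows. This is purely illustrative (two dimensions, centers restricted to data points, brute-force SSE evaluation); the paper's GPU-based, self-adjusting solver is far more elaborate. The idea: start from more than k centers and repeatedly drop the center whose removal hurts the k-means objective least.

```python
import random

def sse(points, centers):
    """Sum of squared distances from each point to its nearest center
    (the k-means objective)."""
    return sum(min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points)

def greedy_agglomerative(points, k, extra=3, seed=2):
    """Toy greedy agglomerative heuristic: begin with k + extra random
    centers drawn from the data, then greedily eliminate one center at a
    time, always the one whose removal increases the SSE least."""
    rng = random.Random(seed)
    centers = rng.sample(points, k + extra)
    while len(centers) > k:
        centers = min((centers[:i] + centers[i + 1:]
                       for i in range(len(centers))),
                      key=lambda cs: sse(points, cs))
    return centers
```

Because removing the last center of a well-separated cluster is very costly, the surviving k centers tend to spread over the clusters, which is what makes such procedures useful building blocks for VNS neighbourhoods.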
APA, Harvard, Vancouver, ISO, and other styles
28

CHAN, TIMOTHY M. "ON ENUMERATING AND SELECTING DISTANCES." International Journal of Computational Geometry & Applications 11, no. 03 (June 2001): 291–304. http://dx.doi.org/10.1142/s0218195901000511.

Full text
Abstract:
Given an n-point set, the problems of enumerating the k closest pairs and selecting the k-th smallest distance are revisited. For the enumeration problem, we give simpler randomized and deterministic algorithms with O(n log n + k) running time in any fixed-dimensional Euclidean space. For the selection problem, we give a randomized algorithm with running time O(n log n + n^(2/3) k^(1/3) log^(5/3) n) in the Euclidean plane. We also describe output-sensitive results for halfspace range counting that are of use in more general distance selection problems. None of our algorithms requires parametric search.
APA, Harvard, Vancouver, ISO, and other styles
29

Hadka, David, and Patrick Reed. "Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework." Evolutionary Computation 21, no. 2 (May 2013): 231–59. http://dx.doi.org/10.1162/evco_a_00075.

Full text
Abstract:
This study introduces the Borg multi-objective evolutionary algorithm (MOEA) for many-objective, multimodal optimization. The Borg MOEA combines [Formula: see text]-dominance, a measure of convergence speed named [Formula: see text]-progress, randomized restarts, and auto-adaptive multioperator recombination into a unified optimization framework. A comparative study on 33 instances of 18 test problems from the DTLZ, WFG, and CEC 2009 test suites demonstrates that Borg meets or exceeds six state-of-the-art MOEAs on the majority of the tested problems. The performance for each test problem is evaluated using a 1,000-point Latin hypercube sampling of each algorithm's feasible parameterization space. The statistical performance of every sampled MOEA parameterization is evaluated using 50 replicate random seed trials. The Borg MOEA is not a single algorithm; instead it represents a class of algorithms whose operators are adaptively selected based on the problem. The adaptive discovery of key operators is of particular importance for benchmarking how variation operators enhance search for complex many-objective problems.
APA, Harvard, Vancouver, ISO, and other styles
30

DAVILA, JAIME, and SANGUTHEVAR RAJASEKARAN. "RANDOMIZED SORTING ON THE POPS NETWORK." International Journal of Foundations of Computer Science 16, no. 01 (February 2005): 105–16. http://dx.doi.org/10.1142/s0129054105002899.

Full text
Abstract:
Partitioned Optical Passive Stars (POPS) network has been presented recently as a desirable model of parallel computation. Several papers have been published that address fundamental problems on this architecture. The algorithms presented in this paper are valid for POPS(d,g) where d>g and use randomization. We present an algorithm that solves the problem of sparse enumeration sorting of d∊ keys in [Formula: see text] time and hence performs better than previous algorithms. We also present algorithms that allow us to do stable sorting of integers in the range [1, log n] and [Formula: see text] in [Formula: see text] time. When g=n∊, for any constant 0<∊<½ this allows us to do sorting of integers in the range [1,n] in [Formula: see text] time. We finally use these algorithms to solve the problem of multiple binary search in the case where we have d∊ keys in [Formula: see text] time and in the case where we have integer keys in the range [1,n] in [Formula: see text] time, when g=n∊, for some constant 0<∊<½.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhang, Yongfei, Jun Wu, Liming Zhang, Peng Zhao, Junping Zhou, and Minghao Yin. "An Efficient Heuristic Algorithm for Solving Connected Vertex Cover Problem." Mathematical Problems in Engineering 2018 (September 6, 2018): 1–10. http://dx.doi.org/10.1155/2018/3935804.

Full text
Abstract:
The connected vertex cover (CVC) problem, a variant of the vertex cover problem, has many important applications, such as wireless network design, routing, and wavelength assignment. A good algorithm for the problem can help us improve engineering efficiency, cost savings, and resource consumption in industrial applications. In this work, we present an efficient algorithm, GRASP-CVC (Greedy Randomized Adaptive Search Procedure for Connected Vertex Cover), for CVC in general graphs. The algorithm has two main phases, i.e., a construction phase and a local search phase. In the construction phase, to construct a high-quality feasible initial solution, we design a greedy function and a restricted candidate list. In the local search phase, the configuration checking strategy is adopted to reduce cycling. The experimental results demonstrate that GRASP-CVC is better than other comparison algorithms in terms of effectiveness and efficiency.
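The construction phase of a GRASP can be illustrated with a small sketch for the plain (not connected) vertex cover problem; the greedy function and restricted candidate list (RCL) below are generic illustrations, not the paper's CVC-specific ones. The greedy value of a vertex is the number of still-uncovered edges it would cover, and the RCL keeps the near-best vertices, from which one is picked at random:

```python
import random

def grasp_construct(edges, alpha=0.3, seed=3):
    """Sketch of a GRASP construction phase for vertex cover: repeatedly
    build an RCL of vertices whose greedy value is within alpha of the
    best, and add a random RCL member to the cover."""
    rng = random.Random(seed)
    uncovered = {frozenset(e) for e in edges}
    cover = set()
    while uncovered:
        gain = {}                      # greedy value per candidate vertex
        for e in uncovered:
            for v in e:
                gain[v] = gain.get(v, 0) + 1
        best = max(gain.values())
        threshold = (1 - alpha) * best  # RCL cut-off
        rcl = [v for v, g in gain.items() if g >= threshold]
        v = rng.choice(rcl)            # randomized greedy choice
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover
```

Running the construction repeatedly with different seeds yields diverse feasible starting solutions for the subsequent local search phase.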
APA, Harvard, Vancouver, ISO, and other styles
32

Babaie-Kafaki, Saman, and Saeed Rezaee. "A randomized adaptive trust region line search method." An International Journal of Optimization and Control: Theories & Applications (IJOCTA) 10, no. 2 (July 27, 2020): 259–63. http://dx.doi.org/10.11121/ijocta.01.2020.00900.

Full text
Abstract:
Hybridizing the trust region, line search and simulated annealing methods, we develop a heuristic algorithm for solving unconstrained optimization problems. We make some numerical experiments on a set of CUTEr test problems to investigate efficiency of the suggested algorithm. The results show that the algorithm is practically promising.
APA, Harvard, Vancouver, ISO, and other styles
33

Van Haaren, Jan, and Jesse Davis. "Markov Network Structure Learning: A Randomized Feature Generation Approach." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1148–54. http://dx.doi.org/10.1609/aaai.v26i1.8315.

Full text
Abstract:
The structure of a Markov network is typically learned in one of two ways. The first approach is to treat this task as a global search problem. However, these algorithms are slow as they require running the expensive operation of weight (i.e., parameter) learning many times. The second approach involves learning a set of local models and then combining them into a global model. However, it can be computationally expensive to learn the local models for datasets that contain a large number of variables and/or examples. This paper pursues a third approach that views Markov network structure learning as a feature generation problem. The algorithm combines a data-driven, specific-to-general search strategy with randomization to quickly generate a large set of candidate features that all have support in the data. It uses weight learning, with L1 regularization, to select a subset of generated features to include in the model. In a large empirical study, we find that our algorithm matches the accuracy of other state-of-the-art methods while exhibiting a much faster run time.
APA, Harvard, Vancouver, ISO, and other styles
34

Choi, Jung Chan, Zhongqiang Liu, Suzanne Lacasse, and Elin Skurtveit. "Leak-Off Pressure Using Weakly Correlated Geospatial Information and Machine Learning Algorithms." Geosciences 11, no. 4 (April 19, 2021): 181. http://dx.doi.org/10.3390/geosciences11040181.

Full text
Abstract:
Leak-off pressure (LOP) is a key parameter to determine the allowable weight of drilling mud in a well and the in situ horizontal stress. The LOP test is run in situ and is frequently used by the petroleum industry. If the well pressure exceeds the LOP, wellbore instability may occur, with hydraulic fracturing and large mud losses in the formation. A reliable prediction of LOP is required to ensure safe and economical drilling operations. The prediction of LOP is challenging because it is affected by the usually complex earlier geological loading history, and the values of LOP and their measurements can vary significantly geospatially. This paper investigates the ability of machine learning algorithms to predict leak-off pressure on the basis of geospatial information of LOP measurements. About 3000 LOP test data were collected from 1800 exploration wells offshore Norway. Three machine learning algorithms (the deep neural network (DNN), random forest (RF), and support vector machine (SVM) algorithms) optimized by three hyperparameter search methods (grid search, randomized search and Bayesian search) were compared with multivariate regression analysis. The Bayesian search method needed fewer iterations than grid search to find an optimal combination of hyperparameters. The three machine learning algorithms showed better performance than the multivariate linear regression when the features of the geospatial inputs were properly scaled. The RF algorithm gave the most promising results regardless of data scaling. If the data were not scaled, the DNN and SVM algorithms, even with optimized parameters, did not provide significantly improved test scores compared to the multivariate regression analysis. The analyses also showed that when the number of data points in a geographical setting is much smaller than that of other geographical areas, the prediction accuracy decreases significantly.
APA, Harvard, Vancouver, ISO, and other styles
35

Bang, Ban Ha. "EFFICIENT METAHEURISTIC ALGORITHMS FOR THE MULTI-STRIPE TRAVELLING SALESMAN PROBLEM." Journal of Computer Science and Cybernetics 36, no. 3 (August 18, 2020): 233–50. http://dx.doi.org/10.15625/1813-9663/36/3/14770.

Full text
Abstract:
The Multi-stripe Travelling Salesman Problem (Ms-TSP) is an extension of the Travelling Salesman Problem (TSP). In the q-stripe TSP with q ≥ 1, the objective function sums the costs for travelling from one customer to each of the next q customers along the tour. The resulting q-stripe TSP generalizes the TSP and forms a special case of the Quadratic Assignment Problem. To solve medium and large size instances, a metaheuristic algorithm is proposed. The proposed algorithm has two main components: a construction phase and an improvement phase. The construction phase generates a solution using the Greedy Randomized Adaptive Search Procedure (GRASP), while the improvement phase refines the solution with several variants of Variable Neighborhood Search, both coupled with a shaking technique to escape from local optima. In addition, adaptive memory is integrated into our algorithms to balance diversification and intensification. To show the efficiency of our proposed metaheuristic algorithms, we experiment extensively on benchmark instances. The results indicate that the developed algorithms can produce efficient and effective solutions at a reasonable computation time.
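The q-stripe objective described above is compact enough to state directly in code. The sketch below evaluates it for a cyclic tour given a distance matrix; with q = 1 it reduces to the ordinary TSP tour length:

```python
def q_stripe_cost(tour, dist, q):
    """q-stripe TSP objective: for each position in the cyclic tour, sum
    the costs from that customer to each of its next q successors."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + s) % n]]
               for i in range(n)
               for s in range(1, q + 1))
```

Such an evaluator is what any GRASP construction or VNS move would call to score candidate tours.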
APA, Harvard, Vancouver, ISO, and other styles
36

Lonklang, Aphilak, and János Botzheim. "Improved Rapidly Exploring Random Tree with Bacterial Mutation and Node Deletion for Offline Path Planning of Mobile Robot." Electronics 11, no. 9 (May 3, 2022): 1459. http://dx.doi.org/10.3390/electronics11091459.

Full text
Abstract:
The path-planning algorithm aims to find the optimal path between the starting and goal points without collision. One of the most popular algorithms is the optimized Rapidly exploring Random Tree (RRT*). The strength of the RRT* algorithm is that it produces a collision-free path, which is the main reason why RRT-based algorithms are used in path planning for mobile robots. The RRT* algorithm generally creates randomly sampled nodes that extend tree branches toward the goal point. The weakness of the RRT* algorithm lies in this random process, when the randomized nodes fall into obstacle regions. The proposed algorithm generates a new random environment by removing the obstacle regions from the global environment. The objective is to minimize the number of unusable nodes produced by the randomizing process. The results show better performance in computational time and overall path length. Bacterial mutation and local search algorithms are combined at post-processing to obtain a better path length and reduce the number of nodes. The proposed algorithm is tested in simulation.
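The key idea of removing obstacle regions from the sampling environment can be sketched in a few lines. The grid decomposition below is our simplification (the paper works in a continuous environment): free cells are enumerated once, so every random sample is usable and none has to be rejected for landing inside an obstacle.

```python
import random

def free_cells(width, height, obstacles):
    """Precompute the obstacle-free cells of a grid environment once, so
    that random sampling is restricted to free space."""
    blocked = set(obstacles)
    return [(x, y) for x in range(width) for y in range(height)
            if (x, y) not in blocked]

def sample_free(cells, rng):
    """Every sample comes from free space; no rejection step is needed."""
    return rng.choice(cells)

rng = random.Random(4)
cells = free_cells(10, 10, obstacles={(5, 5), (5, 6), (6, 5)})
samples = [sample_free(cells, rng) for _ in range(100)]
```

Compared with naive rejection sampling, this trades a one-off preprocessing pass for a sampler that never wastes a draw, mirroring the paper's motivation of minimizing unusable nodes.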
APA, Harvard, Vancouver, ISO, and other styles
37

Qiang, Ning, and Feng Ju Kang. "A Hybrid Particle Swarm Optimization for Solving Vehicle Routing Problem with Stochastic Demands." Advanced Materials Research 971-973 (June 2014): 1467–72. http://dx.doi.org/10.4028/www.scientific.net/amr.971-973.1467.

Full text
Abstract:
As one of the most popular supply chain management problems, the Vehicle Routing Problem (VRP) has been thoroughly studied in the last decades. Most of these studies focus on the deterministic problem where the customer demands are known in advance, but the Vehicle Routing Problem with Stochastic Demands (VRPSD) has not received enough consideration. In the VRPSD, the vehicle does not know the customer demands until it arrives at them. This paper uses a hybrid algorithm for solving the VRPSD; the hybrid algorithm is based on Particle Swarm Optimization (PSO) and combines a Greedy Randomized Adaptive Search Procedure (GRASP) with Variable Neighborhood Search (VNS). A real number encoding method is designed to build a suitable mapping between solutions of the problem and particles in PSO. A number of computational studies, along with comparisons with other existing algorithms, show that the proposed hybrid algorithm is a feasible and effective approach for the Vehicle Routing Problem with Stochastic Demands.
APA, Harvard, Vancouver, ISO, and other styles
38

Pan, Jia, Christian Lauterbach, and Dinesh Manocha. "g-Planner: Real-time Motion Planning and Global Navigation using GPUs." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1245–51. http://dx.doi.org/10.1609/aaai.v24i1.7732.

Full text
Abstract:
We present novel randomized algorithms for solving global motion planning problems that exploit the computational capabilities of many-core GPUs. Our approach uses thread and data parallelism to achieve high performance for all components of sample-based algorithms, including random sampling, nearest neighbor computation, local planning, collision queries and graph search. The approach can efficiently solve both the multi-query and single-query versions of the problem and obtain considerable speedups over prior CPU-based algorithms. We demonstrate the efficiency of our algorithms by applying them to a number of 6DOF planning benchmarks in 3D environments. Overall, this is the first algorithm that can perform real-time motion planning and global navigation using commodity hardware.
APA, Harvard, Vancouver, ISO, and other styles
39

Ngo, Son Tung, Jafreezal Jaafar, Aziz Abdul Izzatdin, Giang Truong Tong, and Anh Ngoc Bui. "Some metaheuristic algorithms for solving multiple cross-functional team selection problems." PeerJ Computer Science 8 (August 9, 2022): e1063. http://dx.doi.org/10.7717/peerj-cs.1063.

Full text
Abstract:
We can find solutions to the team selection problem in many different areas. The problem solver needs to scan across a large array of available solutions during their search. This problem belongs to a class of combinatorial and NP-Hard problems that requires an efficient search algorithm to maintain the quality of solutions and a reasonable execution time. The team selection problem has become more complicated in order to achieve multiple goals in its decision-making process. This study introduces a multiple cross-functional team (CFT) selection model with different skill requirements for candidates who meet the maximum required skills in both deep and wide aspects. We introduced a method that combines a compromise programming (CP) approach and metaheuristic algorithms, including the genetic algorithm (GA) and ant colony optimization (ACO), to solve the proposed optimization problem. We compared the developed algorithms with the MIQP-CPLEX solver on 500 programming contestants with 37 skills and several randomized distribution datasets. Our experimental results show that the proposed algorithms outperformed CPLEX across several assessment aspects, including solution quality and execution time. The developed method also demonstrated the effectiveness of the multi-criteria decision-making process when compared with the multi-objective evolutionary algorithm (MOEA).
APA, Harvard, Vancouver, ISO, and other styles
40

Morgan, R., and M. Gallagher. "Using Landscape Topology to Compare Continuous Metaheuristics: A Framework and Case Study on EDAs and Ridge Structure." Evolutionary Computation 20, no. 2 (June 2012): 277–99. http://dx.doi.org/10.1162/evco_a_00070.

Full text
Abstract:
In this paper we extend a previously proposed randomized landscape generator in combination with a comparative experimental methodology to study the behavior of continuous metaheuristic optimization algorithms. In particular, we generate two-dimensional landscapes with parameterized, linear ridge structure, and perform pairwise comparisons of algorithms to gain insight into what kind of problems are easy and difficult for one algorithm instance relative to another. We apply this methodology to investigate the specific issue of explicit dependency modeling in simple continuous estimation of distribution algorithms. Experimental results reveal specific examples of landscapes (with certain identifiable features) where dependency modeling is useful, harmful, or has little impact on mean algorithm performance. Heat maps are used to compare algorithm performance over a large number of landscape instances and algorithm trials. Finally, we perform a meta-search in the landscape parameter space to find landscapes which maximize the performance between algorithms. The results are related to some previous intuition about the behavior of these algorithms, but at the same time lead to new insights into the relationship between dependency modeling in EDAs and the structure of the problem landscape. The landscape generator and overall methodology are quite general and extendable and can be used to examine specific features of other algorithms.
APA, Harvard, Vancouver, ISO, and other styles
41

DEVILLERS, OLIVIER, and PHILIPPE GUIGUE. "THE SHUFFLING BUFFER." International Journal of Computational Geometry & Applications 11, no. 05 (October 2001): 555–72. http://dx.doi.org/10.1142/s021819590100064x.

Full text
Abstract:
The complexity of randomized incremental algorithms is analyzed under the assumption of a random order of the input. To guarantee this hypothesis, the n data items have to be known in advance so that they can be shuffled, which contradicts the on-line nature of the algorithm. We present the shuffling buffer technique to introduce sufficient randomness to guarantee an improvement on the worst-case complexity while knowing only k data items in advance. Typically, an algorithm with O(n²) worst-case complexity and O(n) or O(n log n) randomized complexity has an [Formula: see text] complexity for the shuffling buffer. We illustrate this with binary search trees, the number of Delaunay triangles or the number of trapezoids in a trapezoidal map created during an incremental construction.
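The buffer itself is a small device and can be sketched directly (an illustration of the idea, not the paper's analysis): hold up to k items from the on-line stream and, each time the buffer is full, emit a uniformly random buffered item. The output is a partially shuffled permutation of the input obtained without ever knowing all n items in advance.

```python
import random

def shuffling_buffer(stream, k, seed=5):
    """Yield the items of `stream` in a partially randomized order using a
    buffer of size k: whenever the buffer fills, emit a uniformly random
    buffered item; flush the remainder in random order at the end."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) == k:
            yield buf.pop(rng.randrange(k))  # random eviction
    rng.shuffle(buf)                         # flush the last < k items
    yield from buf
```

Note that an item can be delayed arbitrarily long but can move forward by at most k - 1 positions, which is the source of the "sufficient randomness" traded against the buffer size.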
APA, Harvard, Vancouver, ISO, and other styles
42

Hu, Ta-Yin, and Li-Wen Chen. "Traffic Signal Optimization with Greedy Randomized Tabu Search Algorithm." Journal of Transportation Engineering 138, no. 8 (August 2012): 1040–50. http://dx.doi.org/10.1061/(asce)te.1943-5436.0000404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

LENGLER, J., and A. STEGER. "Drift Analysis and Evolutionary Algorithms Revisited." Combinatorics, Probability and Computing 27, no. 4 (June 20, 2018): 643–66. http://dx.doi.org/10.1017/s0963548318000275.

Full text
Abstract:
One of the easiest randomized greedy optimization algorithms is the following evolutionary algorithm which aims at maximizing a function f: {0,1}^n → ℝ. The algorithm starts with a random search point ξ ∈ {0,1}^n, and in each round it flips each bit of ξ with probability c/n independently at random, where c > 0 is a fixed constant. The thus created offspring ξ' replaces ξ if and only if f(ξ') ≥ f(ξ). The analysis of the runtime of this simple algorithm for monotone and for linear functions turned out to be highly non-trivial. In this paper we review known results and provide new and self-contained proofs of partly stronger results.
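The algorithm in the abstract is short enough to sketch directly: flip each bit independently with probability c/n, and keep the offspring iff its fitness is at least as good. Running it on OneMax (the number of one-bits, a standard benchmark and our assumption here, since the abstract leaves f abstract):

```python
import random

def one_plus_one_ea(f, n, c=1.0, rounds=100_000, seed=0):
    """(1+1) EA sketch: each round, flip every bit of x independently with
    probability c/n and accept the offspring iff f does not decrease."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(rounds):
        y = [b ^ 1 if rng.random() < c / n else b for b in x]
        if f(y) >= f(x):   # elitist acceptance
            x = y
    return x

best = one_plus_one_ea(sum, 20)  # OneMax: f(x) = number of ones
```

For OneMax with n = 20 the expected optimization time is on the order of e·n·ln n, a few hundred rounds, so the budget above is generous.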
APA, Harvard, Vancouver, ISO, and other styles
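The evolutionary algorithm analyzed in the abstract above is concrete enough to sketch directly: start from a random bit string, flip each bit independently with probability c/n, and keep the offspring iff the fitness does not decrease. This is a minimal (1+1)-EA sketch under those assumptions; the round budget and function names are ours.

```python
import random

def one_plus_one_ea(f, n, c=1.0, rounds=10000, rng=random):
    """(1+1) EA maximizing f over {0,1}^n: flip each bit with
    probability c/n, accept the offspring iff f does not decrease."""
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(rounds):
        y = [b ^ (rng.random() < c / n) for b in x]  # per-bit flips
        if f(y) >= f(x):
            x = y
    return x
```

For a monotone example such as OneMax (`f = sum`), the loop reaches the all-ones optimum in roughly e·n·ln n expected rounds, which is the kind of runtime statement the paper's drift analysis makes rigorous.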
44

WANG, LUSHENG, and LIANG DONG. "RANDOMIZED ALGORITHMS FOR MOTIF DETECTION." Journal of Bioinformatics and Computational Biology 03, no. 05 (October 2005): 1039–52. http://dx.doi.org/10.1142/s0219720005001508.

Full text
Abstract:
Motivation: Motif detection for DNA sequences has many important applications in biological studies, e.g. locating binding sites and regulatory signals, designing genetic probes, etc. In this paper, we propose a randomized algorithm, design an improved EM algorithm, and combine them to form a software tool. Results: (1) We design a randomized algorithm for the consensus pattern problem. We show that, with high probability, our randomized algorithm finds a pattern in polynomial time with cost error at most ∊ × l for each string, where l is the length of the motif and ∊ can be any positive number given by the user. (2) We design an improved EM algorithm that outperforms the original EM algorithm. (3) We develop a software tool, MotifDetector, that uses our randomized algorithm to find good seeds and uses the improved EM algorithm to do local search. We compare MotifDetector with Buhler and Tompa's PROJECTION, which is considered to be the best known software for motif detection. Simulations show that MotifDetector is slower than PROJECTION when the pattern length is relatively small, and outperforms PROJECTION when the pattern length becomes large. Availability: It is available for free at , subject to copyright restrictions.
APA, Harvard, Vancouver, ISO, and other styles
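The pipeline in the abstract above — random seeds refined by local search under the consensus-pattern cost — can be illustrated with a deliberately simplified sketch. This is not MotifDetector's algorithm; the majority-vote refinement step, restart count, and function names are all our assumptions.

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def best_occurrence(seq, pattern):
    """l-mer of `seq` closest to `pattern` in Hamming distance."""
    l = len(pattern)
    return min((seq[i:i + l] for i in range(len(seq) - l + 1)),
               key=lambda w: hamming(w, pattern))

def consensus_cost(seqs, pattern):
    return sum(hamming(best_occurrence(s, pattern), pattern) for s in seqs)

def randomized_consensus(seqs, l, restarts=20, rng=random):
    """Seed with a random l-mer, then refine by column-wise majority
    vote over the best occurrences (an EM-like local step)."""
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        s = rng.choice(seqs)
        i = rng.randrange(len(s) - l + 1)
        pattern = s[i:i + l]
        for _ in range(10):  # a few refinement rounds
            occs = [best_occurrence(t, pattern) for t in seqs]
            pattern = "".join(max("ACGT", key=col.count)
                              for col in zip(*occs))
        cost = consensus_cost(seqs, pattern)
        if cost < best_cost:
            best, best_cost = pattern, cost
    return best, best_cost
```

The restarts play the role of the paper's randomized seeding, and the inner loop stands in for the improved EM local search.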
45

Babaie-Kafaki, Saman, and Saeed Rezaee. "A randomized nonmonotone adaptive trust region method based on the simulated annealing strategy for unconstrained optimization." International Journal of Intelligent Computing and Cybernetics 12, no. 3 (August 12, 2019): 389–99. http://dx.doi.org/10.1108/ijicc-12-2018-0178.

Full text
Abstract:
Purpose: The purpose of this paper is to employ stochastic techniques to increase the efficiency of classical algorithms for solving nonlinear optimization problems. Design/methodology/approach: The well-known simulated annealing strategy is employed to search successive neighborhoods of the classical trust region (TR) algorithm. Findings: An adaptive formula for computing the TR radius is suggested, based on an eigenvalue analysis conducted on the memoryless Broyden-Fletcher-Goldfarb-Shanno updating formula. Also, a (heuristic) randomized adaptive TR algorithm is developed for solving unconstrained optimization problems. Results of computational experiments on a set of CUTEr test problems show that the proposed randomization scheme can enhance the efficiency of TR methods. Practical implications: The algorithm can be effectively used for solving optimization problems that appear in engineering, economics, management, industry and other areas. Originality/value: The proposed randomization scheme improves the computational cost of the classical TR algorithm. In particular, the suggested algorithm avoids resolving the TR subproblems many times.
APA, Harvard, Vancouver, ISO, and other styles
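The nonmonotone, simulated-annealing-style acceptance that the abstract above builds on can be shown in isolation with a toy minimizer: improvements are always accepted, and uphill steps are accepted with probability exp(-Δ/T) under a cooling temperature. This is a generic SA sketch under our own step, temperature, and cooling assumptions, not the paper's trust-region algorithm.

```python
import math
import random

def sa_random_search(f, x0, step=0.5, t0=1.0, cooling=0.95,
                     iters=200, rng=random):
    """Minimal 1-D simulated annealing: Metropolis acceptance with a
    geometrically cooling temperature (illustrative sketch)."""
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = x + rng.uniform(-step, step)       # random trial step
        fy = f(y)
        # nonmonotone acceptance: uphill moves pass with prob exp(-Δ/t)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                           # cool the temperature
    return best, fbest
```

Early on, the high temperature lets the search escape poor regions; as t shrinks, the acceptance rule collapses to plain descent, which is the behavior the randomized TR scheme exploits between subproblem solves.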
46

Xiao, Xue, Qing Hong Wu, and Ying Zhang. "Recognition of Paper Currency Research Based on AGA-BP Neural Network." Advanced Materials Research 989-994 (July 2014): 3968–72. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.3968.

Full text
Abstract:
The genetic algorithm is a randomized search method modeled on the laws of biological evolution, with inherent implicit global parallelism and good optimization ability. This paper presents a method that uses an adaptive genetic algorithm to optimize a BP neural network, namely optimizing the structure, weights and thresholds of the BP neural network, to achieve banknote recognition. Experimental results show that the BP neural network controller optimized by the genetic algorithm achieves recognition accurately and quickly; compared to a traditional BP neural network, banknote recognition accuracy is greatly improved, as are the network's adaptive capacity and generalization ability.
APA, Harvard, Vancouver, ISO, and other styles
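The core idea in the abstract above — using a GA to search a network's weight vector instead of (or before) gradient training — can be sketched with a toy real-coded GA over a generic fitness function. This is a hypothetical illustration, not the paper's AGA-BP method; the selection scheme, mutation parameters, and names are our assumptions.

```python
import random

def genetic_optimize(fitness, dim, pop_size=30, gens=100,
                     mut_rate=0.1, rng=random):
    """Toy real-coded GA minimizing `fitness` over R^dim, e.g. a
    neural network's flattened weight/threshold vector."""
    pop = [[rng.uniform(-1, 1) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # uniform crossover followed by Gaussian mutation
            child = [x if rng.random() < 0.5 else y
                     for x, y in zip(a, b)]
            child = [w + rng.gauss(0, 0.3) if rng.random() < mut_rate
                     else w for w in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

In the paper's setting, `fitness` would be the BP network's error on the training set, and the evolved vector would seed the weights and thresholds before conventional backpropagation.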
47

Karp, Richard M., and Yanjun Zhang. "Randomized parallel algorithms for backtrack search and branch-and-bound computation." Journal of the ACM 40, no. 3 (July 1993): 765–89. http://dx.doi.org/10.1145/174130.174145.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Neumann, Frank, and Ingo Wegener. "Randomized local search, evolutionary algorithms, and the minimum spanning tree problem." Theoretical Computer Science 378, no. 1 (June 2007): 32–40. http://dx.doi.org/10.1016/j.tcs.2006.11.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Buaba, Ruben, Abdollah Homaifar, William Hendrix, Seung Son, Wei-keng Liao, and Alok Choudhary. "Randomized Algorithm for Approximate Nearest Neighbor Search in High Dimensions." Journal of Pattern Recognition Research 9, no. 1 (2014): 111–22. http://dx.doi.org/10.13176/11.599.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Raja Balachandar, S., and K. Kannan. "Randomized gravitational emulation search algorithm for symmetric traveling salesman problem." Applied Mathematics and Computation 192, no. 2 (September 2007): 413–21. http://dx.doi.org/10.1016/j.amc.2007.03.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
