A selection of scholarly literature on the topic "Randomised search algorithms"


Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Randomised search algorithms".


Journal articles on the topic "Randomised search algorithms"

1

Jansen, Thomas, and Christine Zarges. "Analysis of Randomised Search Heuristics for Dynamic Optimisation." Evolutionary Computation 23, no. 4 (December 2015): 513–41. http://dx.doi.org/10.1162/evco_a_00164.

Abstract:
Dynamic optimisation is an area of application where randomised search heuristics like evolutionary algorithms and artificial immune systems are often successful. The theoretical foundation of this important topic suffers from a lack of a generally accepted analytical framework as well as a lack of widely accepted example problems. This article tackles both problems by discussing necessary conditions for useful and practically relevant theoretical analysis as well as introducing a concrete family of dynamic example problems that draws inspiration from a well-known static example problem and exhibits a bi-stable dynamic. After the stage has been set this way, the framework is made concrete by presenting the results of thorough theoretical and statistical analysis for mutation-based evolutionary algorithms and artificial immune systems.
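
For orientation, the kind of mutation-based randomised search heuristic analysed in this line of work is exemplified by the classic (1+1) EA. The following minimal Python sketch is our own illustration, not code from the article, and is shown on the static OneMax problem rather than a dynamic one:

```python
import random

def one_max(x):
    """Fitness: number of ones in the bit string."""
    return sum(x)

def one_plus_one_ea(n, max_iters=100_000, seed=0):
    """Minimal (1+1) EA: standard bit mutation (rate 1/n), elitist acceptance."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = one_max(x)
    for t in range(max_iters):
        # Flip each bit independently with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = one_max(y)
        if fy >= fx:  # keep the offspring unless it is strictly worse
            x, fx = y, fy
        if fx == n:
            return t + 1  # iterations until the optimum was found
    return max_iters

if __name__ == "__main__":
    print(one_plus_one_ea(50))
```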
2

Xiao, Peng, Soumitra Pal, and Sanguthevar Rajasekaran. "Randomised sequential and parallel algorithms for efficient quorum planted motif search." International Journal of Data Mining and Bioinformatics 18, no. 2 (2017): 105. http://dx.doi.org/10.1504/ijdmb.2017.086457.

3

Xiao, Peng, Soumitra Pal, and Sanguthevar Rajasekaran. "Randomised sequential and parallel algorithms for efficient quorum planted motif search." International Journal of Data Mining and Bioinformatics 18, no. 2 (2017): 105. http://dx.doi.org/10.1504/ijdmb.2017.10007475.

4

Lissovoi, Andrei, Pietro S. Oliveto, and John Alasdair Warwicker. "On the Time Complexity of Algorithm Selection Hyper-Heuristics for Multimodal Optimisation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2322–29. http://dx.doi.org/10.1609/aaai.v33i01.33012322.

Abstract:
Selection hyper-heuristics are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently, selection hyper-heuristics choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise a standard unimodal benchmark function from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper we extend our understanding to the domain of multimodal optimisation by considering a hyper-heuristic from the literature that can switch between elitist and non-elitist heuristics during the run. We first identify the range of parameters that allow the hyper-heuristic to hillclimb efficiently and prove that it can optimise a standard hillclimbing benchmark function in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where the hyper-heuristic is efficient by swiftly escaping local optima and ones where it is not. For a function class called CLIFFd, where a new gradient of increasing fitness can be identified after escaping local optima, the hyper-heuristic is extremely efficient, while a wide range of established elitist and non-elitist algorithms are not, including the well-studied Metropolis algorithm. We complete the picture with an analysis of another standard benchmark function called JUMPd as an example to highlight problem characteristics where the hyper-heuristic is inefficient. Yet, it still outperforms the well-established non-elitist Metropolis algorithm.
5

Lissovoi, Andrei, Pietro S. Oliveto, and John Alasdair Warwicker. "Simple Hyper-Heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes." Evolutionary Computation 28, no. 3 (September 2020): 437–61. http://dx.doi.org/10.1162/evco_a_00258.

Abstract:
Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the most simple HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the “simple” Random Gradient HH so success can be measured over a fixed period of time [Formula: see text], instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to [Formula: see text] low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the [Formula: see text] heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to [Formula: see text]) and shed some light on the best choices for the parameter [Formula: see text] in various situations.
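
A minimal sketch of the mechanism as we read it from the abstract: choose a low-level heuristic (Randomised Local Search flipping k bits) uniformly at random, run it for a fixed period of tau steps (the abstract's "[Formula: see text]" period), and retain it for a further period whenever it produced an improvement. The names tau and ks are our own; the paper's exact formulation may differ.

```python
import random

def leading_ones(x):
    """Fitness: length of the longest prefix of ones."""
    c = 0
    for b in x:
        if b == 0:
            break
        c += 1
    return c

def rls_k(x, k, rng):
    """Randomised Local Search: flip k distinct, uniformly chosen bits."""
    y = x[:]
    for i in rng.sample(range(len(x)), k):
        y[i] ^= 1
    return y

def generalised_random_gradient(n, ks=(1, 2), tau=50, seed=0):
    """Sketch of the Generalised Random Gradient idea: keep exploiting
    the currently chosen heuristic for another tau-step period whenever
    the last period produced at least one improvement."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, steps = leading_ones(x), 0
    while fx < n:
        k = rng.choice(ks)            # select a low-level heuristic at random
        improved = True
        while improved and fx < n:    # success measured over a period, not a step
            improved = False
            for _ in range(tau):
                y = rls_k(x, k, rng)
                fy = leading_ones(y)
                steps += 1
                if fy > fx:           # elitist: accept strict improvements only
                    x, fx, improved = y, fy, True
    return steps

if __name__ == "__main__":
    print(generalised_random_gradient(100))
```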
6

Corus, Dogan, and Pietro S. Oliveto. "On the Benefits of Populations for the Exploitation Speed of Standard Steady-State Genetic Algorithms." Algorithmica 82, no. 12 (July 15, 2020): 3676–706. http://dx.doi.org/10.1007/s00453-020-00743-1.

Abstract:
It is generally accepted that populations are useful for the global exploration of multi-modal optimisation problems. Indeed, several theoretical results are available showing such advantages over single-trajectory search heuristics. In this paper we provide evidence that evolving populations via crossover and mutation may also benefit the optimisation time for hillclimbing unimodal functions. In particular, we prove bounds on the expected runtime of the standard (μ+1) GA for OneMax that are lower than its unary black box complexity and decrease in the leading constant with the population size up to μ = o(√(log n)). Our analysis suggests that the optimal mutation strategy is to flip two bits most of the time. To achieve the results we provide two interesting contributions to the theory of randomised search heuristics: (1) a novel application of drift analysis which compares absorption times of different Markov chains without defining an explicit potential function; (2) the inversion of fundamental matrices to calculate the absorption times of the Markov chains. The latter strategy was previously proposed in the literature, but to the best of our knowledge this is the first time it has been used to show non-trivial bounds on expected runtimes.
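
To make the setting concrete, here is a toy steady-state (μ+1) GA on OneMax: one offspring per generation via uniform crossover of two random parents followed by standard bit mutation, with a worst individual then discarded. This is an illustrative sketch under our own parameter choices, not the precise algorithm or mutation strategy analysed in the paper.

```python
import random

def one_max(x):
    return sum(x)

def mu_plus_one_ga(n, mu=5, chi=1.0, seed=0, max_iters=200_000):
    """Toy steady-state (mu+1) GA on OneMax: uniform crossover of two
    random parents, then standard bit mutation with rate chi/n."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for t in range(max_iters):
        p1, p2 = rng.sample(pop, 2)
        child = [rng.choice(pair) for pair in zip(p1, p2)]     # uniform crossover
        child = [b ^ (rng.random() < chi / n) for b in child]  # bit mutation
        pop.append(child)
        pop.remove(min(pop, key=one_max))  # drop a worst individual
        if one_max(max(pop, key=one_max)) == n:
            return t + 1
    return max_iters

if __name__ == "__main__":
    print(mu_plus_one_ga(50))
```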
7

Serraino, Giuseppe Filiberto, and Gavin J. Murphy. "Effects of cerebral near-infrared spectroscopy on the outcome of patients undergoing cardiac surgery: a systematic review of randomised trials." BMJ Open 7, no. 9 (September 2017): e016613. http://dx.doi.org/10.1136/bmjopen-2017-016613.

Abstract:
Objectives: Goal-directed optimisation of cerebral oxygenation using near-infrared spectroscopy (NIRS) during cardiopulmonary bypass is widely used. We tested the hypotheses that the use of NIRS cerebral oximetry results in reductions in cerebral injury (neurocognitive function, serum biomarkers), injury to other organs including the heart and brain, transfusion rates, mortality and resource use. Design: Systematic review and meta-analysis. Setting: Tertiary cardiac surgery centres in North America, Europe and Asia. Participants: A search of the Cochrane Central Register of Controlled Trials, ClinicalTrials.gov, Medline, Embase, and the Cumulative Index to Nursing and Allied Health Literature Plus from inception to November 2016 identified 10 randomised trials, enrolling a total of 1466 patients, all in adult cardiac surgery. Interventions: NIRS-based algorithms designed to optimise cerebral oxygenation versus standard care (non-NIRS-based) protocols in cardiac surgery patients during cardiopulmonary bypass. Outcome measures: Mortality, organ injury affecting the brain, heart and kidneys, red cell transfusion and resource use. Results: Two of the 10 trials identified in the literature search were considered at low risk of bias. Random-effects meta-analysis demonstrated similar mortality (risk ratio (RR) 0.76, 95% CI 0.30 to 1.96), major morbidity including stroke (RR 1.08, 95% CI 0.40 to 2.91), red cell transfusion and resource use in NIRS-treated patients and controls, with little or no heterogeneity. Grades of Recommendation, Assessment, Development and Evaluation of the quality of the evidence was low or very low for all of the outcomes assessed. Conclusions: The results of this systematic review did not support the hypotheses that cerebral NIRS-based algorithms have clinical benefits in cardiac surgery. Trial registration number: PROSPERO CRD42015027696.
8

Hassan, N., R. Slight, D. Weiand, A. Vellinga, G. Morgan, F. Aboushareb, and S. P. Slight. "Predicting infection and sepsis; what predictors have been used to train machine learning algorithms? A systematic review." International Journal of Pharmacy Practice 29, Supplement_1 (March 26, 2021): i18. http://dx.doi.org/10.1093/ijpp/riab016.022.

Abstract:
Introduction: Sepsis is a life-threatening condition that is associated with increased mortality. Artificial intelligence tools can inform clinical decision making by flagging patients who may be at risk of developing infection and subsequent sepsis and assist clinicians with their care management. Aim: To identify the optimal set of predictors used to train machine learning algorithms to predict the likelihood of an infection and subsequent sepsis and inform clinical decision making. Methods: This systematic review was registered in the PROSPERO database (CRD42020158685). We searched 3 large databases: Medline, Cumulative Index of Nursing and Allied Health Literature, and Embase, using appropriate search terms. We included quantitative primary research studies that focused on sepsis prediction associated with bacterial infection in the adult population (>18 years) in all care settings, which included data on predictors to develop machine learning algorithms. The timeframe of the search was from 1 January 2000 to 25 November 2019. Data extraction was performed using a data extraction sheet, and a narrative synthesis of eligible studies was undertaken. Narrative analysis was used to arrange the data into key areas, and to compare and contrast the content of included studies. Quality assessment was performed using the Newcastle-Ottawa Quality Assessment scale, which was used to evaluate the quality of non-randomized studies. Bias was not assessed due to the non-randomised nature of the included studies. Results: Fifteen articles met our inclusion criteria (Figure 1). We identified 194 predictors that were used to train machine learning algorithms to predict infection and subsequent sepsis, with 13 predictors used on average across all included studies. The most significant predictors included age, gender, smoking, alcohol intake, heart rate, blood pressure, lactate level, cardiovascular disease, endocrine disease, cancer, chronic kidney disease (eGFR<60ml/min), white blood cell count, liver dysfunction, surgical approach (open or minimally invasive), and pre-operative haematocrit <30%. These predictors were used for the development of all the algorithms in the fifteen articles. All included studies used artificial intelligence techniques to predict the likelihood of sepsis, with average sensitivity 77.5±19.27 and average specificity 69.45±21.25. Conclusion: The type of predictors used was found to influence the predictive power and predictive timeframe of the developed machine learning algorithm. Two strengths of our review were that we included studies published since the first definition of sepsis was published in 2001, and identified factors that can improve the predictive ability of algorithms. However, we note that the included studies had some limitations, with three studies not validating the models that they developed, and many tools limited by either their reduced specificity or sensitivity or both. This work has important implications for practice, as predicting the likelihood of sepsis can help inform the management of patients and concentrate finite resources on those patients who are most at risk. Producing a set of predictors can also guide future studies in developing more sensitive and specific algorithms with an increased predictive time window to allow for preventive clinical measures.
9

Oliva, Antonio, Gerardo Altamura, Mario Cesare Nurchis, Massimo Zedda, Giorgio Sessa, Francesca Cazzato, Giovanni Aulino, et al. "Assessing the potentiality of algorithms and artificial intelligence adoption to disrupt patient primary care with a safer and faster medication management: a systematic review protocol." BMJ Open 12, no. 5 (May 2022): e057399. http://dx.doi.org/10.1136/bmjopen-2021-057399.

Abstract:
Introduction: In primary care, almost 75% of outpatient visits by family doctors and general practitioners involve continuation or initiation of drug therapy. Due to the enormous amount of drugs used by outpatients in unmonitored situations, the potential risk of adverse events due to an error in the use or prescription of drugs is much higher than in a hospital setting. Artificial intelligence (AI) application can help healthcare professionals to take charge of patient safety by improving error detection, patient stratification and drug management. The aim is to investigate the impact of AI algorithms on drug management in primary care settings and to compare AI or algorithms with standard clinical practice to define the medication fields where technological support could lead to better results. Methods and analysis: A systematic review and meta-analysis of the literature will be conducted querying PubMed, Cochrane and ISI Web of Science from inception to December 2021. The primary outcome will be the reduction of medication errors obtained by AI application. The search strategy and the study selection will be conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses and the population, intervention, comparator and outcome framework. The quality of included studies will be appraised adopting the quality assessment tool for observational cohort and cross-sectional studies for non-randomised controlled trials, as well as the quality assessment of controlled intervention studies of the National Institutes of Health for randomised controlled trials. Ethics and dissemination: Formal ethical approval is not required since no human beings are involved. The results will be disseminated widely through peer-reviewed publications.
10

Tett, Simon F. B., Kuniko Yamazaki, Michael J. Mineter, Coralia Cartis, and Nathan Eizenberg. "Calibrating climate models using inverse methods: case studies with HadAM3, HadAM3P and HadCM3." Geoscientific Model Development 10, no. 9 (September 28, 2017): 3567–89. http://dx.doi.org/10.5194/gmd-10-3567-2017.

Abstract:
Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss–Newton line-search algorithm: (1) a standard Gauss–Newton algorithm in which, in each iteration, all parameters were perturbed and (2) a randomised block-coordinate variant in which, in each iteration, a random sub-set of parameters was perturbed. The cost function to be minimised used multiple large-scale multi-annual average observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (third Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with 7 and 14 parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. For the 14-parameter cases several failed to converge, with the random variant in which 6 parameters were perturbed being most successful. Multiple sets of parameter values were found that produced multiple models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (3rd Hadley Centre Coupled model) state, resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss–Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm, with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest the computational cost for the Gauss–Newton algorithm scales between P and P², where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested. Algorithms in which seven parameters were perturbed and three out of seven parameters randomly perturbed produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which 6 out of 13 parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration using atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.
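
The randomised block-coordinate idea can be pictured as follows: in each iteration, build a finite-difference Jacobian over a random subset of the parameters only, then take a damped Gauss–Newton step in those coordinates. The sketch below applies this to a toy least-squares fit; it is our schematic reading of the approach, not the authors' calibration code, and the model, damping factor, and block size are illustrative assumptions.

```python
import random
import numpy as np

def residuals(p, xs, ys):
    """Toy model: fit y = p0 * exp(-p1 * x) + p2 by least squares."""
    return p[0] * np.exp(-p[1] * xs) + p[2] - ys

def block_gauss_newton(p, xs, ys, block_size=2, iters=50, h=1e-6, seed=0):
    """Randomised block-coordinate Gauss-Newton: each iteration perturbs
    only a random subset of parameters to build a reduced Jacobian."""
    rng = random.Random(seed)
    p = np.asarray(p, dtype=float)
    for _ in range(iters):
        idx = rng.sample(range(len(p)), block_size)  # random parameter block
        r = residuals(p, xs, ys)
        J = np.empty((len(r), len(idx)))
        for j, i in enumerate(idx):  # finite-difference Jacobian columns
            q = p.copy()
            q[i] += h
            J[:, j] = (residuals(q, xs, ys) - r) / h
        # Damped Gauss-Newton step restricted to the chosen block.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p[idx] += 0.5 * step
    return p

if __name__ == "__main__":
    xs = np.linspace(0.0, 4.0, 40)
    true_p = np.array([2.0, 1.3, 0.5])
    ys = residuals(true_p, xs, np.zeros_like(xs))  # noiseless synthetic data
    print(block_gauss_newton([1.0, 1.0, 0.0], xs, ys))
```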

Dissertations on the topic "Randomised search algorithms"

1

Gutierrez, Soto Claudio. "Exploring the reuse of past search results in information retrieval." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30034/document.

Abstract:
Past searches are a useful source of information for new users (new queries). Owing to the lack of ad hoc IR collections, to date there has been little interest in the IR community in reusing past search results. Indeed, most existing IR collections are composed of independent queries. These collections are not appropriate for evaluating approaches rooted in past queries because they do not contain similar queries or do not provide relevance judgments. Therefore, there is no easy way to evaluate such approaches. In addition, elaborating such collections is difficult due to the cost and time needed, so a feasible alternative is to simulate them. Besides, relevant documents from similar past queries can be used to answer a new query, and this principle can benefit from clustering past searches according to their similarities. Thus, in this thesis a framework to simulate collections for approaches based on past search results is implemented and evaluated. Four randomized algorithms for reusing past search results to improve precision are then proposed and evaluated. Finally, a new measure in the clustering context is proposed.
2

Borenstein, Yossi. "Problem hardness for randomized search heuristics with comparison-based selection : a focus on evolutionary algorithms." Thesis, University of Essex, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446546.

3

Karlsson, Albin. "Evaluation of the Complexity of Procedurally Generated Maze Algorithms." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16839.

Abstract:
Background. Procedural Content Generation (PCG) in video games can be used as a tool for efficiently producing large varieties of new content using less manpower, making it ideal for smaller teams of developers who want to compete with games made by larger teams. One particular facet of PCG is the generation of mazes. Designers who want their game to feature mazes also need to know how to evaluate maze complexity, in order to know which maze best fits the difficulty curve. Objectives. This project aims to investigate the difference in complexity between the maze generation algorithms recursive backtracker (RecBack), Prim's algorithm (Prims), and recursive division (RecDiv), in terms of completion time, when solved using a depth-first-search (DFS) algorithm. In order to understand which parameters affect completion time/complexity, it investigates possible connections between completion time and the distribution of branching paths, the distribution of corridors, and the length of the path traversed by DFS. Methods. The main methodology was an implementation in the form of a C# application, which randomly generated 100 mazes for each algorithm at five different maze grid resolutions (16x16, 32x32, 64x64, 128x128, 256x256). Each of the generated mazes was solved using a DFS algorithm, whose traversed nodes, solving path, and completion time were recorded. Additionally, branch distribution and corridor distribution data were gathered for each generated maze. Results. Mazes generated by Prims had the lowest complexity (shortest completion time), the shortest solving path, the lowest number of traversed nodes, and the lowest proportion of 2-branches, but the highest proportion of all other branch types; they also had the highest proportion of 4-6 length paths, but the lowest proportion of 2 and 3 length paths. Mazes generated by RecDiv had intermediate complexity, solving path, traversed nodes, and proportions of all branch types, with the highest proportion of 2-length paths but the lowest proportion of 4-6 length paths. Finally, mazes generated by RecBack showed the opposite statistics to Prims: the highest complexity, the longest solving path, the highest number of traversed nodes, the highest proportion of 2-branches but the lowest proportion of all other branch types, and the highest proportion of 3-length paths but the lowest of 2-length paths. Conclusions. Prims had the lowest complexity, RecDiv intermediate complexity, and RecBack the highest complexity. Increased solving path length, more traversed nodes, and increased proportions of 2-branches all appear to correlate with increased complexity. However, the corridor distribution results are too small and diverse to discern a pattern affecting completion time by just observing the data.
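
As an illustration of the setup, here is a compact Python sketch of one of the three generators, the recursive backtracker, together with a DFS solver that reports how many cells it visited. The thesis used a C# implementation and measured wall-clock completion time, so this is only a schematic analogue.

```python
import random

def recursive_backtracker(w, h, seed=0):
    """Generate a maze on a w x h cell grid with the (iterative) recursive
    backtracker: carve towards a random unvisited neighbour, backtracking
    when stuck. Returns the set of open walls as frozensets of two cells."""
    rng = random.Random(seed)
    start = (0, 0)
    stack, visited, open_walls = [start], {start}, set()
    while stack:
        x, y = stack[-1]
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h
                and (x + dx, y + dy) not in visited]
        if nbrs:
            nxt = rng.choice(nbrs)
            open_walls.add(frozenset(((x, y), nxt)))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()  # dead end: backtrack
    return open_walls

def dfs_solve(open_walls, w, h):
    """Solve from (0,0) to (w-1,h-1) by DFS; return the visited-cell count."""
    stack, seen = [(0, 0)], {(0, 0)}
    while stack:
        x, y = stack.pop()
        if (x, y) == (w - 1, h - 1):
            return len(seen)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if frozenset(((x, y), nxt)) in open_walls and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen)

if __name__ == "__main__":
    walls = recursive_backtracker(32, 32)
    print(dfs_solve(walls, 32, 32))
```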
4

Björklund, Henrik. "Combinatorial Optimization for Infinite Games on Graphs." Doctoral thesis, Uppsala University, Department of Information Technology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4751.

Abstract:

Games on graphs have become an indispensable tool in modern computer science. They provide powerful and expressive models for numerous phenomena and are extensively used in computer-aided verification, automata theory, logic, complexity theory, computational biology, etc.

The infinite games on finite graphs we study in this thesis have their primary applications in verification, but are also of fundamental importance from the complexity-theoretic point of view. They include parity, mean payoff, and simple stochastic games.

We focus on solving graph games by using iterative strategy improvement and methods from linear programming and combinatorial optimization. To this end we consider old strategy evaluation functions, construct new ones, and show how all of them, due to their structural similarities, fit into a unifying combinatorial framework. This allows us to employ randomized optimization methods from combinatorial linear programming to solve the games in expected subexponential time.

We introduce and study the concept of a controlled optimization problem, capturing the essential features of many graph games, and provide sufficient conditions for solvability of such problems in expected subexponential time.

The discrete strategy evaluation function for mean payoff games, which we derive from the new controlled longest-shortest path problem, leads to improvement algorithms that are considerably more efficient than the previously known ones, and also improves the efficiency of algorithms for parity games.

We also define the controlled linear programming problem, and show how the games are translated into this setting. Subclasses of the problem, more general than the games considered, are shown to belong to NP ∩ coNP, or even to be solvable by subexponential algorithms.

Finally, we take the first steps in investigating the fixed-parameter complexity of parity, Rabin, Streett, and Muller games.

5

Pourhassan, Mojgan. "Parameterised complexity analysis of evolutionary algorithms for combinatorial optimization problems." Thesis, 2017. http://hdl.handle.net/2440/109799.

Abstract:
Evolutionary algorithms are general problem solvers that have been successfully used in solving combinatorial optimization problems. However, due to the great amount of randomness in these algorithms, theoretical understanding of them is quite challenging. In this thesis we analyse the parameterized complexity of evolutionary algorithms on combinatorial optimization problems. Studying the parameterized complexity of these algorithms can help us understand how different parameters of problems influence the runtime behaviour of the algorithm and consequently lead us in finding better performing algorithms. We focus on two NP-hard combinatorial optimization problems; the generalized travelling salesman problem (GTSP) and the vertex cover problem (VCP). For solving the GTSP, two hierarchical approaches with different neighbourhood structures have been proposed in the literature. In this thesis, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective and complementary abilities of the two approaches are pointed out by presenting instances where they mutually outperform each other. After investigating the runtime behaviour of the mentioned randomised algorithms on GTSP, we turn our attention to the VCP. Evolutionary multi-objective optimization for the classical vertex cover problem has been previously analysed in the context of parameterized complexity analysis. We extend the analysis to the weighted version of the problem. We also examine a dynamic version of the classical problem and analyse evolutionary algorithms with respect to their ability to maintain a 2-approximation. Inspired by the concept of duality, an edge-based evolutionary algorithm for solving the VCP has been introduced in the literature. Here we show that this edge-based EA is able to maintain a 2-approximation solution in the dynamic setting. Moreover, using the dual form of the problem, we extend the edge-based approach to the weighted vertex cover problem.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2017.
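
The classical fact behind the edge-based approach is that the endpoints of any maximal matching form a 2-approximate vertex cover, which is the invariant the edge-based EA is shown to maintain. Below is a minimal sketch of that invariant (our own illustration, not the thesis's evolutionary algorithm):

```python
def matching_vertex_cover(edges):
    """Return a 2-approximate vertex cover: greedily build a maximal
    matching and take both endpoints of every matched edge. Any cover
    must contain at least one endpoint of each matched edge, so this
    cover has at most twice the optimal size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still unmatched
            cover.update((u, v))
    return cover

if __name__ == "__main__":
    # Path 0-1-2-3-4: optimum cover is {1, 3}; the sketch returns {0,1,2,3}.
    print(sorted(matching_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 4)])))
```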
6

Chisholm, Michael. "Learning classification rules by randomized iterative local search /." 1999. http://hdl.handle.net/1957/11732.


Books on the topic "Randomised search algorithms"

1

Auger, Anne, and Benjamin Doerr, eds. Theory of Randomized Search Heuristics: Foundations and Recent Developments. World Scientific Publishing Company, 2011.


Book chapters on the topic "Randomised search algorithms"

1

Millard, Alan G., David R. White, and John A. Clark. "Searching for Pareto-optimal Randomised Algorithms." In Search Based Software Engineering, 183–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33119-0_14.

2

Garg, Pramika, Avish Jha, and Amogh Shukla. "Randomised Analysis of Backtracking-based Search Algorithms in Elucidating Sudoku Puzzles Using a Dual Serial/Parallel Approach." In Inventive Computation and Information Technologies, 281–95. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6723-7_21.

3

Duch, Amalia, Vladimir Estivill-Castro, and Conrado Martínez. "Randomized K-Dimensional Binary Search Trees." In Algorithms and Computation, 198–209. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-49381-6_22.

4

Taillard, Éric D. "Randomized Methods." In Design of Heuristic Algorithms for Hard Optimization, 155–70. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13714-3_7.

Abstract:
This chapter is devoted to random and memory-free methods that repeatedly construct solutions or modify them. Among the most popular techniques are simulated annealing, threshold accepting, great deluge and demon algorithms, noising methods, late acceptance hill climbing, variable neighborhood search, and GRASP.
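
Among the techniques the chapter surveys, simulated annealing is the archetypal randomized method. A minimal generic sketch for maximisation follows (our illustration with an arbitrary geometric cooling schedule, not code from the chapter):

```python
import math
import random

def simulated_annealing(f, x0, neighbour, t0=1.0, alpha=0.995,
                        iters=10_000, seed=0):
    """Generic simulated annealing for maximisation: always accept
    improvements; accept a worsening of size d with probability
    exp(-d / t); cool the temperature geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbour(x, rng)
        fy = f(y)
        if fy >= fx or rng.random() < math.exp((fy - fx) / t):
            x, fx = y, fy
            if fx > fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling
    return best, fbest

def flip_one_bit(x, rng):
    """Neighbour: flip exactly one uniformly chosen bit."""
    j = rng.randrange(len(x))
    return [b ^ (i == j) for i, b in enumerate(x)]

if __name__ == "__main__":
    one_max = lambda x: sum(x)  # toy objective
    print(simulated_annealing(one_max, [0] * 30, flip_one_bit)[1])
```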
5

Fister, Iztok, Xin-She Yang, Janez Brest, and Iztok Fister. "On the Randomized Firefly Algorithm." In Cuckoo Search and Firefly Algorithm, 27–48. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02141-6_2.

6

Jansen, Thomas. "Evolutionary Algorithms and Other Randomized Search Heuristics." In Analyzing Evolutionary Algorithms, 7–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-17339-4_2.

7

Clementi, A., L. Kučera, and J. D. P. Rolim. "A Randomized Parallel Search Strategy." In Parallel Algorithms for Irregular Problems: State of the Art, 213–27. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4757-6130-6_11.

8

Lehre, Per Kristian, and Carsten Witt. "Concentrated Hitting Times of Randomized Search Heuristics with Variable Drift." In Algorithms and Computation, 686–97. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13075-0_54.

9

Hui, Lucas Chi Kwong, and Charles U. Martel. "Randomized competitive algorithms for successful and unsuccessful search on self-adjusting linear lists." In Algorithms and Computation, 426–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-57568-5_274.

10

Watanabe, Osamu, Takeshi Sawai, and Hayato Takahashi. "Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem." In Stochastic Algorithms: Foundations and Applications, 50–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39816-5_5.


Conference papers on the topic "Randomised search algorithms"

1

Liu, Mingmou, Xiaoyin Pan, and Yitong Yin. "Randomized Approximate Nearest Neighbor Search with Limited Adaptivity." In SPAA '16: 28th ACM Symposium on Parallelism in Algorithms and Architectures. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2935764.2935776.

2

El-Shaer, Ahmed H., and Masayoshi Tomizuka. "Multi-Objective H2/H∞ Static Output Feedback Control: A Hybrid LMI/Genetic Algorithm Approach." In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4126.

Abstract:
This paper is concerned with the synthesis of static output feedback (SOF) control satisfying multi-objective H2/H∞ performance for LTI systems. Given an initially stabilizing SOF gain Ko, an estimate of the set of stabilizing gains is obtained using Monte Carlo based randomized search. A synthesis algorithm which combines both linear matrix inequalities (LMIs) and genetic algorithms (GA) is proposed. To improve the efficiency of this hybrid algorithm, the GA employs a projection-like operation so that the solution candidates lie in the interior of the set of stabilizing gains in a neighborhood of Ko.
3

Merlin, Jerlin C., and Hieu Dinh. "Poster: Randomized algorithms for planted Motif Search." In 2012 IEEE 2nd International Conference on Computational Advances in Bio and Medical Sciences (ICCABS). IEEE, 2012. http://dx.doi.org/10.1109/iccabs.2012.6182654.

4

Lima, Rafael Zuolo Coppini. "Dimension Reduction for Projective Clustering." In Encontro de Teoria da Computação. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/etc.2022.223040.

Abstract:
The high dimensionality of data may be a barrier to algorithmic efficiency, mainly because of the well-known "curse of dimensionality" that imposes exponential time and/or memory complexity on algorithms. It is natural, then, to search for ways to break this curse by relaxing the problem into approximate versions and by finding good ways to reduce the dimension of the data. Our aim is to integrate and state slightly stronger results on approximation via dimension reduction for clustering under the l_2^2 metric. The dimension reduction is achieved by combining randomized techniques (the Johnson-Lindenstrauss Lemma) and deterministic techniques (the singular value decomposition).
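
The randomised half of that combination, the Johnson-Lindenstrauss transform, can be sketched in a few lines: project the points onto a random Gaussian matrix with k = O(log n / eps²) columns, and pairwise squared distances are preserved up to a (1 ± eps) factor with high probability. The constant 8 below is an illustrative assumption, not a tuned bound:

```python
import numpy as np

def jl_project(points, eps=0.3, seed=0):
    """Johnson-Lindenstrauss sketch: random Gaussian projection to
    k = O(log n / eps^2) dimensions, approximately preserving pairwise
    squared l2 distances within a (1 +/- eps) factor."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    k = max(1, int(8 * np.log(n) / eps ** 2))  # illustrative constant
    proj = rng.normal(size=(d, k)) / np.sqrt(k)
    return points @ proj

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(100, 1000))
    low = jl_project(pts)
    # Compare one pairwise squared distance before and after projection.
    d_hi = np.sum((pts[0] - pts[1]) ** 2)
    d_lo = np.sum((low[0] - low[1]) ** 2)
    print(d_lo / d_hi)  # should be close to 1
```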
5

Vieira, Luciano T., Beatriz de S. L. P. de Lima, Alexandre G. Evsukoff, and Breno P. Jacob. "Application of Genetic Algorithms to the Synthesis of Riser Configurations." In ASME 2003 22nd International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2003. http://dx.doi.org/10.1115/omae2003-37231.

Abstract:
The purpose of this work is to describe the application of genetic algorithms in the search for the best configuration of catenary riser systems in deep waters. In particular, an optimization methodology based on genetic algorithms is implemented in a computer program in order to seek an optimum geometric configuration for a steel catenary riser in a lazy-wave configuration. This problem is characterized by a very large space of possible solutions; using traditional methods is exhaustive work, since a large number of variables and parameters define this type of system. Genetic algorithms are more robust than the more commonly used optimization techniques. They use random choice as a tool to guide a search toward regions of the search space with likely improvements. Differences such as the coding of the parameter set, the search from a population of points, and the use of objective functions and randomized operators contribute to the robustness of a genetic algorithm and result in advantages over traditional techniques. The implemented methodology takes as its baseline one or more criteria established by the experience of the offshore engineer. An intelligent methodology oriented specifically to the optimization and synthesis of riser configurations not only facilitates the work of manipulating a huge mass of data, but also assures the best alternative among all the possible ones, searching a much larger space of possible solutions than classical methods.
6

Franti, Pasi, Marko Tuononen, and Olli Virmajoki. "Deterministic and randomized local search algorithms for clustering." In 2008 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2008. http://dx.doi.org/10.1109/icme.2008.4607565.

7

Kumar, Ashok, Suresh Chandrasekaran, A. Chockalingam, and B. Sundar Rajan. "Near-Optimal Large-MIMO Detection Using Randomized MCMC and Randomized Search Algorithms." In ICC 2011 - 2011 IEEE International Conference on Communications. IEEE, 2011. http://dx.doi.org/10.1109/icc.2011.5963229.

8

Júnior, Darci José Mendes, Luciana Brugiolo Gonçalves, and Stênio Sã R. F. Soares. "An ILS algorithm with RVND for the green vehicle routing problems with time-varying speeds." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4452.

Abstract:
The environmental impacts of human action have led several countries to create stricter laws and tax incentives to reduce this damage. As a result, green logistics has been increasingly pursued to meet the requirements and needs of more sustainable development. This work presents an ILS (Iterated Local Search) algorithm combined with RVND (Random Variable Neighborhood Descent) and compares it with a GRASP (Greedy Randomized Adaptive Search Procedure) algorithm, where each has two variants: minimize distance and minimize emissions. The results show the effectiveness of the ILS approach and that heuristics minimizing the total distance covered do not yield good solutions in terms of sustainability.
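
A schematic of the ILS + RVND combination on a toy travelling-salesman instance follows (our sketch of the generic scheme, not the paper's routing-specific implementation; the perturbation and neighbourhoods are simplified):

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def swap(tour, i, j):
    cand = tour[:]
    cand[i], cand[j] = cand[j], cand[i]
    return cand

def two_opt(tour, i, j):
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def first_improvement(tour, dist, move):
    """Return an improving neighbour under `move`, or None."""
    n, base = len(tour), tour_length(tour, dist)
    for i in range(n - 1):
        for j in range(i + 1, n):
            cand = move(tour, i, j)
            if tour_length(cand, dist) < base:
                return cand
    return None

def rvnd(tour, dist, rng):
    """Random Variable Neighbourhood Descent: try neighbourhoods in
    random order, restarting the list after every improvement."""
    moves = [swap, two_opt]
    while True:
        rng.shuffle(moves)
        for move in moves:
            better = first_improvement(tour, dist, move)
            if better is not None:
                tour = better
                break
        else:
            return tour  # local optimum w.r.t. both neighbourhoods

def ils_rvnd(dist, iters=30, seed=0):
    """Iterated Local Search: descend, perturb (one random segment
    reversal), re-descend, and keep the better of the two tours."""
    rng = random.Random(seed)
    n = len(dist)
    tour = rvnd(list(range(n)), dist, rng)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = rvnd(two_opt(tour, i, j), dist, rng)  # perturb + descend
        if tour_length(cand, dist) < tour_length(tour, dist):
            tour = cand
    return tour

if __name__ == "__main__":
    rng = random.Random(1)
    pts = [(rng.random(), rng.random()) for _ in range(15)]
    dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
             for bx, by in pts] for ax, ay in pts]
    best = ils_rvnd(dist)
    print(round(tour_length(best, dist), 3))
```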
9

Cohen, Eldan, Richard Valenzano, and Sheila McIlraith. "Type-WA*: Using Exploration in Bounded Suboptimal Planning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/557.

Abstract:
Previous work on satisficing planning using greedy best-first search (GBFS) has shown that non-greedy, randomized exploration can help escape uninformative heuristic regions and solve hard problems faster. Despite their success when used with GBFS, such exploration techniques cannot be directly applied to bounded suboptimal algorithms like Weighted A* (WA*) without losing the solution-quality guarantees. In this work, we present Type-WA*, a novel bounded suboptimal planning algorithm that augments WA* with type-based exploration while still satisfying WA*'s theoretical solution-quality guarantee. Our empirical analysis shows that Type-WA* significantly increases the number of solved problems, when used in conjunction with each of three popular heuristics. Our analysis also provides insight into the runtime vs. solution cost trade-off.
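
For reference, the baseline that Type-WA* augments is plain Weighted A*, which expands nodes by f = g + w·h and, for an admissible heuristic, returns solutions costing at most w times the optimum. A minimal sketch of that baseline follows (not of Type-WA*'s type-based exploration):

```python
import heapq

def weighted_a_star(start, goal, neighbours, h, w=1.5):
    """Plain Weighted A*: expand by f = g + w*h. With admissible h, the
    returned solution costs at most w times the optimum."""
    frontier = [(w * h(start), 0.0, start)]
    g = {start: 0.0}
    parent = {start: None}
    while frontier:
        _, gs, s = heapq.heappop(frontier)
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1], gs
        if gs > g[s]:
            continue  # stale queue entry
        for t, cost in neighbours(s):
            gt = gs + cost
            if gt < g.get(t, float("inf")):
                g[t], parent[t] = gt, s
                heapq.heappush(frontier, (gt + w * h(t), gt, t))
    return None, float("inf")

if __name__ == "__main__":
    # 4-connected grid with a Manhattan-distance heuristic.
    W = H = 10
    def nbrs(p):
        x, y = p
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < W and 0 <= y + dy < H:
                yield (x + dx, y + dy), 1.0
    man = lambda p: abs(p[0] - 9) + abs(p[1] - 9)
    path, cost = weighted_a_star((0, 0), (9, 9), nbrs, man)
    print(cost)
```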
10

Nath, Ravindra, and Renu Jain. "Using Randomized Search Algorithms to Estimate HMM Learning Parameters." In 2009 IEEE International Advance Computing Conference (IACC 2009). IEEE, 2009. http://dx.doi.org/10.1109/iadcc.2009.4808999.


Organization reports on the topic "Randomised search algorithms"

1

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Cancer Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria was defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and have quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. 
These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective. Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk predication models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and, iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regards to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with general population, suggest those who participate in screening trials may already be motivated to quit. Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia: 1. Identifying the high-risk population: recruitment, eligibility, selection and referral 2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making 3. Components necessary for health services to deliver a screening program: a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants c. Monitoring and evaluation phase: e.g. 
monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program 4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening 5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions. Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regards to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9) Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules. Gaps in the evidence There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. 
The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.