Selected scientific literature on the topic "Algorithms"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources related to the topic "Algorithms".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the abstract (summary) of the work online, if it is available in the metadata.

Journal articles on the topic "Algorithms"

1

Sun, Yuqin, Songlei Wang, Dongmei Huang, Yuan Sun, Anduo Hu, and Jinzhong Sun. "A multiple hierarchical clustering ensemble algorithm to recognize clusters arbitrarily shaped." Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1211–28. http://dx.doi.org/10.3233/ida-216112.

Abstract (summary):
As a research hotspot in ensemble learning, clustering ensembles obtain robust and highly accurate algorithms by integrating multiple base clustering algorithms. Most existing clustering ensemble algorithms use linear clustering algorithms as the base clusterings. As a typical unsupervised learning technique, clustering has difficulty properly defining the accuracy of its findings, which makes it hard to significantly enhance the performance of the final algorithm. In this article, the AGglomerative NESting (AGNES) method is used to build the base clusterings, and an integration strategy for combining multiple AGglomerative NESting clusterings is proposed. The algorithm has three main steps: evaluating the credibility of labels, producing multiple base clusterings, and constructing the relation among clusters. The proposed algorithm preserves the original advantages of AGglomerative NESting and further compensates for its inability to identify arbitrarily shaped clusters. Comparing its clustering performance with that of existing clustering algorithms on different datasets establishes the proposed algorithm's superiority in terms of clustering performance.
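To make the co-association idea behind such clustering ensembles concrete, here is a minimal sketch (not the paper's algorithm) that builds several AGNES-style agglomerative base clusterings with different linkage criteria, accumulates a co-association matrix, and extracts a consensus clustering from it; the toy dataset, the number of clusters, and the linkage choices are assumptions made for illustration.

```python
# Minimal clustering-ensemble sketch: AGNES-style base clusterings combined
# through a co-association matrix. Illustrative only; not the paper's algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])  # toy data

# Base clusterings: agglomerative clustering with different linkage criteria.
base_labels = []
for method in ("single", "complete", "average"):
    Z = linkage(X, method=method)
    base_labels.append(fcluster(Z, t=2, criterion="maxclust"))

# Co-association matrix: fraction of base clusterings that put i and j together.
n = len(X)
coassoc = np.zeros((n, n))
for labels in base_labels:
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= len(base_labels)

# Consensus clustering: hierarchical clustering of the co-association distances.
consensus = fcluster(linkage(squareform(1.0 - coassoc, checks=False), method="average"),
                     t=2, criterion="maxclust")
print(consensus)
```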
2

Moret, Bernard M. E., and Henry D. D. Shapiro. "Algorithms and Experiments: The New (and Old) Methodology." JUCS - Journal of Universal Computer Science 7, no. 5 (May 28, 2001): 434–46. https://doi.org/10.3217/jucs-007-05-0434.

Abstract (summary):
The last twenty years have seen enormous progress in the design of algorithms, but little of it has been put into practice. Because many recently developed algorithms are hard to characterize theoretically and have large running-time coefficients, the gap between theory and practice has widened over these years. Experimentation is indispensable in the assessment of heuristics for hard problems, in the characterization of the asymptotic behavior of complex algorithms, and in the comparison of competing designs for tractable problems. Implementation, although perhaps not rigorous experimentation, was characteristic of early work in algorithms and data structures. Donald Knuth has throughout insisted on testing every algorithm and conducting analyses that can predict behavior on actual data; more recently, Jon Bentley has vividly illustrated the difficulty of implementation and the value of testing. Numerical analysts have long understood the need for standardized test suites to ensure the robustness, precision, and efficiency of numerical libraries. It is only recently, however, that the algorithms community has shown signs of returning to implementation and testing as an integral part of algorithm development. The emerging disciplines of experimental algorithmics and algorithm engineering have revived and are extending many of the approaches used by computing pioneers such as Floyd and Knuth, and are placing many of Bentley's observations on a formal basis. We reflect on these issues, looking back at the last thirty years of algorithm development and forward to new challenges: designing cache-aware algorithms, algorithms for mixed models of computation, algorithms for external memory, and algorithms for scientific research.
3

Gościniak, Ireneusz, and Krzysztof Gdawiec. "Visual Analysis of Dynamics Behaviour of an Iterative Method Depending on Selected Parameters and Modifications." Entropy 22, no. 7 (July 2, 2020): 734. http://dx.doi.org/10.3390/e22070734.

Abstract (summary):
A huge group of algorithms described in the literature iteratively find solutions of a given equation, and most of them require tuning. The article presents root-finding algorithms based on the Newton–Raphson method that iteratively find the solutions and likewise require tuning. The modification of the algorithm incorporates the best position of a particle, similarly to particle swarm optimisation algorithms. The proposed approach allows visualising the impact of the algorithm's elements on its complex behaviour. Moreover, instead of the standard Picard iteration, various feedback iteration processes are used in this research. The presented examples and the accompanying discussion of the algorithm's operation help in understanding the influence of the proposed modifications on the algorithm's behaviour. Understanding the impact of the proposed modification on the algorithm's operation can be helpful when applying it to other algorithms. The obtained images also have potential artistic applications.
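As a point of reference for the iteration schemes mentioned above, the following sketch contrasts the plain Picard iteration of the Newton-Raphson map with a Mann-type feedback iteration that blends the previous point with the Newton step; the test function and the blending weight alpha are illustrative assumptions, not the authors' setup.

```python
# Newton-Raphson map iterated two ways: plain Picard iteration vs. a Mann-type
# feedback iteration that blends the previous point with the Newton step.
def f(x):      # example function with root at x = 2**(1/3)
    return x**3 - 2.0

def df(x):
    return 3.0 * x**2

def newton_map(x):
    return x - f(x) / df(x)

def picard(x0, n=20):
    x = x0
    for _ in range(n):
        x = newton_map(x)                                 # x_{k+1} = N(x_k)
    return x

def mann(x0, alpha=0.7, n=20):
    x = x0
    for _ in range(n):
        x = (1 - alpha) * x + alpha * newton_map(x)       # x_{k+1} = (1-a) x_k + a N(x_k)
    return x

print(picard(1.5), mann(1.5))   # both approach 2**(1/3) ≈ 1.2599
```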
4

Nico, Nico, Novrido Charibaldi, and Yuli Fauziah. "Comparison of Memetic Algorithm and Genetic Algorithm on Nurse Picket Scheduling at Public Health Center." International Journal of Artificial Intelligence & Robotics (IJAIR) 4, no. 1 (May 30, 2022): 9–23. http://dx.doi.org/10.25139/ijair.v4i1.4323.

Abstract (summary):
One of the most significant aspects of the working world is the picket schedule. It is difficult for the scheduler to produce one, since the picket schedule frequently raises many issues: schedule clashes, requests for leave, and traded shifts. Evolutionary algorithms have been successful in solving a wide variety of scheduling problems, but they are very susceptible to data convergence, and no one has discussed the point from which the data begin to converge when making schedules with evolutionary algorithms. The best evolutionary algorithms for scheduling are genetic algorithms and memetic algorithms, yet neither is guaranteed to offer the optimum outcome in every situation. It is therefore necessary to compare the genetic algorithm and the memetic algorithm to determine which one is suitable for the nurse picket schedule. The results of this study show that the memetic algorithm is better than the genetic algorithm at making picket schedules. The memetic algorithm with a population of 10000 and 5000 generations does not produce convergent data, whereas for the genetic algorithm the data begin to converge at a population of 5000 and 50 generations. For accuracy, the memetic algorithm violates only 24 of the 124 existing constraints (80.645%), while the genetic algorithm violates 27 of the 124 constraints (78.225%). The average runtime needed to generate optimal data with the memetic algorithm is 20.935592 seconds; the genetic algorithm takes longer, 53.951508 seconds.
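The difference between the two families can be illustrated on a toy binary problem (hypothetical, not the paper's nurse-scheduling model): a memetic algorithm is essentially a genetic algorithm whose offspring are additionally refined by a local search step.

```python
# Toy genetic vs. memetic loop on a binary string (maximize the number of ones).
# Illustrative sketch only; the paper's constraints and encoding are not modeled.
import random

N, POP, GENS = 30, 20, 50
fitness = lambda ind: sum(ind)

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def mutate(ind, p=0.05):
    return [1 - g if random.random() < p else g for g in ind]

def local_search(ind):
    # hill-climbing refinement: the extra step that makes the algorithm "memetic"
    for i in range(N):
        flipped = ind[:i] + [1 - ind[i]] + ind[i + 1:]
        if fitness(flipped) > fitness(ind):
            ind = flipped
    return ind

def evolve(memetic=False):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP)]
        if memetic:
            children = [local_search(c) for c in children]
        pop = children
    return max(map(fitness, pop))

print("GA:", evolve(False), "Memetic:", evolve(True))
```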
5

Omar, Hoger K., Kamal H. Jihad, and Shalau F. Hussein. "Comparative analysis of the essential CPU scheduling algorithms." Bulletin of Electrical Engineering and Informatics 10, no. 5 (October 1, 2021): 2742–50. http://dx.doi.org/10.11591/eei.v10i5.2812.

Abstract (summary):
CPU scheduling algorithms have a significant function in multiprogramming operating systems. When CPU scheduling is effective, a high rate of computation can be carried out correctly and the system remains stable. Moreover, CPU scheduling algorithms are the main service in operating systems that achieves maximum utilization of the CPU. This paper compares the characteristics of the CPU scheduling algorithms to determine which one is the best algorithm for gaining higher CPU utilization. The comparison covers ten scheduling algorithms and presents different parameters, such as performance, algorithm complexity, algorithm problems, average waiting times, advantages and disadvantages, allocation method, etc. The main purpose of the article is to analyze the CPU scheduler in a way that suits the scheduling goals, and to show, through their full properties, which algorithm type is most suitable for a particular situation.
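As a small worked example of one of the compared parameters, the average waiting time under first-come-first-served (FCFS) versus shortest-job-first (SJF) can be computed as follows for processes that all arrive at time zero; the burst times are made up for illustration.

```python
# Average waiting time under FCFS vs. SJF for processes arriving at time 0.
bursts = [24, 3, 3]   # example CPU burst times

def avg_waiting(order):
    waiting, elapsed = [], 0
    for b in order:
        waiting.append(elapsed)   # a process waits for everything scheduled before it
        elapsed += b
    return sum(waiting) / len(waiting)

print("FCFS:", avg_waiting(bursts))          # waits 0, 24, 27 -> average 17.0
print("SJF: ", avg_waiting(sorted(bursts)))  # waits 0, 3, 6   -> average 3.0
```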
6

Hairol Anuar, Siti Haryanti, Zuraida Abal Abas, Norhazwani Mohd Yunos, Nurul Hafizah Mohd Zaki, Nurul Akmal Hashim, Mohd Fariddudin Mokhtar, Siti Azirah Asmai, Zaheera Zainal Abidin, and Ahmad Fadzli Nizam. "Comparison between Louvain and Leiden Algorithm for Network Structure: A Review." Journal of Physics: Conference Series 2129, no. 1 (December 1, 2021): 012028. http://dx.doi.org/10.1088/1742-6596/2129/1/012028.

Abstract (summary):
Real-world networks tend to be large and complex. Community detection algorithms are a way of understanding such network structures. Many algorithms exist for performing community detection, each with its own advantages and disadvantages for different types and scales of complex network. Experiments have shown that the Louvain algorithm can yield badly connected, and even disconnected, communities when run iteratively. In this paper, two algorithms based on the agglomerative method, Louvain and Leiden, are introduced and reviewed, and their concepts and benefits are summarized in detail by comparison. The Leiden algorithm emerges as the more recent and faster of the two. In the future, this comparison can help in choosing the best community detection algorithm, even though these algorithms use different definitions of community.
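The modularity-based local moving phase that both Louvain and Leiden build on can be sketched as follows; this is a deliberately brute-force illustration on a toy graph, not an implementation of either algorithm.

```python
# Brute-force sketch of modularity-based local moving on a toy graph.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]  # two triangles joined by one edge
nodes = sorted({v for e in edges for v in e})
degree = {v: sum(v in e for e in edges) for v in nodes}
m = len(edges)

def modularity(part):
    q = 0.0
    for c in set(part.values()):
        members = {v for v in nodes if part[v] == c}
        intra = sum(u in members and v in members for u, v in edges)
        dtot = sum(degree[v] for v in members)
        q += intra / m - (dtot / (2 * m)) ** 2
    return q

part = {v: v for v in nodes}          # start with every node in its own community
improved = True
while improved:
    improved = False
    for v in nodes:
        neigh_comms = {part[u] for a, b in edges for u in (a, b) if v in (a, b) and u != v}
        best = max(neigh_comms, key=lambda c: modularity({**part, v: c}))
        if modularity({**part, v: best}) > modularity(part):
            part[v] = best
            improved = True

print(part)   # the two triangles end up in separate communities
```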
7

Gangavane, Ms H. N. "A Comparison of ABK-Means Algorithm with Traditional Algorithms." International Journal of Trend in Scientific Research and Development 1, no. 4 (June 30, 2017): 614–21. http://dx.doi.org/10.31142/ijtsrd2197.

8

Jay, Kishore Sahani, and Kumar Yadav Arvind. "The Bees Algorithms in Optimization: An Overview." MATHEMATICS EDUCATION LV, no. 3, September 2021 (September 30, 2021): 20–28. https://doi.org/10.5281/zenodo.7275730.

Abstract (summary):
Metaheuristic algorithms have become powerful tools for modeling and optimization. In this article, we provide an overview of bee algorithms and their applications. We briefly introduce algorithms such as the bees algorithm, the virtual bee algorithm, the artificial bee algorithm, and the bee mating algorithm, summarize their main characteristics, and outline some of their recent applications.
9

Belazi, Akram, Héctor Migallón, Daniel Gónzalez-Sánchez, Jorge Gónzalez-García, Antonio Jimeno-Morenilla, and José-Luis Sánchez-Romero. "Enhanced Parallel Sine Cosine Algorithm for Constrained and Unconstrained Optimization." Mathematics 10, no. 7 (April 3, 2022): 1166. http://dx.doi.org/10.3390/math10071166.

Abstract (summary):
The sine cosine algorithm's main idea is a sine- and cosine-based vacillation outwards or towards the best solution. The first main contribution of this paper is an enhanced version of the SCA algorithm, called the ESCA algorithm. The supremacy of the proposed algorithm over a set of state-of-the-art algorithms in terms of solution accuracy and convergence speed is demonstrated by experimental tests. When these algorithms are transferred to the business sector, they must meet time requirements that depend on the industrial process; if these temporal requirements are not met, an efficient solution is to speed them up by designing parallel algorithms. The second major contribution of this work is the design of several parallel algorithms that efficiently exploit current multicore processor architectures. First, one-level synchronous and asynchronous parallel ESCA algorithms are designed. They have two advantages: they retain the proposed algorithm's behavior and they provide excellent parallel performance by combining coarse-grained with fine-grained parallelism. The parallel scalability of the proposed algorithms is further improved by employing a two-level parallel strategy. The experimental results suggest that the one-level parallel ESCA algorithms reduce the computing time, on average, by 87.4% and 90.8%, respectively, using 12 physical processing cores. The two-level parallel algorithms provide extra reductions of the computing time of 91.4%, 93.1%, and 94.5% with 16, 20, and 24 processing cores, including physical and logical cores. The comparison analysis is carried out on 30 unconstrained benchmark functions and three challenging engineering design problems. The experimental outcomes show that the proposed ESCA algorithm behaves outstandingly well in terms of exploration and exploitation behaviors, local optima avoidance, and convergence speed toward the optimum. The overall performance of the proposed algorithm is statistically validated using three non-parametric statistical tests, namely the Friedman, Friedman aligned, and Quade tests.
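For context, the basic sine cosine update step that the SCA family (and hence ESCA) builds on looks roughly like the following single-objective sketch; the test function, bounds, and the parameter a are illustrative assumptions, and none of the paper's enhancements or parallelization is reproduced.

```python
# Minimal sine cosine algorithm (SCA) sketch: candidates oscillate around the
# best solution found so far using sine/cosine terms with a shrinking amplitude.
import numpy as np

def sca(obj, dim=5, pop=20, iters=200, a=2.0, lo=-10.0, hi=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    best = X[np.argmin([obj(x) for x in X])].copy()
    for t in range(iters):
        r1 = a - t * a / iters                      # amplitude decreases linearly
        for i in range(pop):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(0, 1, dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lo, hi)
            if obj(X[i]) < obj(best):
                best = X[i].copy()
    return best

print(sca(lambda x: np.sum(x ** 2)))   # sphere function; optimum at the origin
```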
10

Luan, Yuxuan, Junjiang He, Jingmin Yang, Xiaolong Lan, and Geying Yang. "Uniformity-Comprehensive Multiobjective Optimization Evolutionary Algorithm Based on Machine Learning." International Journal of Intelligent Systems 2023 (November 10, 2023): 1–21. http://dx.doi.org/10.1155/2023/1666735.

Abstract (summary):
When solving real-world optimization problems, the uniformity of Pareto fronts is an essential concern in multiobjective optimization problems (MOPs). However, it is a common challenge for many existing multiobjective optimization algorithms because of the skewed distribution of solutions and biases towards specific objective functions. This paper proposes a uniformity-comprehensive multiobjective optimization evolutionary algorithm based on machine learning to address this limitation. Our algorithm uses uniform initialization and a self-organizing map (SOM) to enhance population diversity and uniformity. We track the IGD value and use K-means and CNN refinement with crossover and mutation techniques during the evolutionary stages. The algorithm's superiority in uniformity and objective-function balance was verified through comparative analysis with 13 other algorithms, including eight traditional multiobjective optimization algorithms, three machine-learning-based enhanced multiobjective optimization algorithms, and two algorithms with objective initialization improvements. These comprehensive experiments show that our algorithm outperforms the other existing algorithms in these areas.

Theses on the topic "Algorithms"

1

Yarmolskyy, Oleksandr. "Využití distribuovaných a stochastických algoritmů v síti." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-370918.

Abstract (summary):
This thesis deals with distributed and stochastic algorithms, including testing their convergence in networks. The theoretical part briefly describes the above-mentioned algorithms, including their classification, problems, advantages, and disadvantages. Two distributed algorithms and two stochastic algorithms are then chosen. The practical part compares their speed of convergence on various network topologies in Matlab.
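A standard example of a distributed algorithm whose convergence speed depends on the network topology is average consensus, sketched below on an assumed ring topology (in Python rather than the thesis's Matlab).

```python
# Distributed average consensus: each node repeatedly averages with its
# neighbours; on a connected graph the values converge to the global mean.
import numpy as np

n = 8
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring topology
x = np.arange(n, dtype=float)                                     # initial node values
eps = 0.3                                                         # step size < 1 / max degree

for _ in range(200):
    x = x + eps * np.array([sum(x[j] - x[i] for j in neighbours[i]) for i in range(n)])

print(x, x.mean())   # all entries approach the initial mean 3.5
```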
2

Crocomo, Márcio Kassouf. "Algoritmo de otimização bayesiano com detecção de comunidades." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-23012013-160605/.

Abstract (summary):
Estimation of Distribution Algorithms (EDAs) represent a research area that is showing promising results, especially in dealing with complex, large-scale problems. In this context, the Bayesian Optimization Algorithm (BOA) uses a multivariate model (represented by a Bayesian network) to generate new solutions at each iteration. Based on BOA and on the study of community detection algorithms (used to improve the constructed multivariate models), two new algorithms are proposed, named CD-BOA and StrOp. Both have significant advantages over BOA. CD-BOA is more flexible, being more robust to variations in the input parameters, which makes it easier to deal with a greater diversity of real-world problems. Unlike CD-BOA and BOA, StrOp shows that detecting communities in a Bayesian network can model decomposable problems more adequately, restructuring them into simpler subproblems that can be solved by a greedy search; the resulting solution to the original problem may be optimal in the case of perfectly decomposable problems, or a fair approximation otherwise. A new resampling technique for EDAs (called REDA) is also proposed. This technique yields more representative probabilistic models, significantly improving the performance of CD-BOA and StrOp. In general, it is shown that, for the cases tested, CD-BOA and StrOp require less running time than BOA, a result established both experimentally and by analysis of the computational complexity of the algorithms. The main features of these algorithms are evaluated on different problems, thus mapping their contributions to the field of Evolutionary Computation.
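For readers unfamiliar with estimation of distribution algorithms, the sketch below shows the simplest univariate variant on a toy problem; BOA and the proposed CD-BOA/StrOp replace this independent-bit model with a Bayesian network, which is not reproduced here.

```python
# Univariate EDA sketch: fit independent bit probabilities to the selected
# individuals and sample the next population from them.
import numpy as np

rng = np.random.default_rng(1)
n_bits, pop_size, n_sel, gens = 40, 100, 30, 60
fitness = lambda pop: pop.sum(axis=1)            # toy objective: maximize the number of ones

p = np.full(n_bits, 0.5)                         # probability model
for _ in range(gens):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)
    best = pop[np.argsort(fitness(pop))[-n_sel:]]          # truncation selection
    p = best.mean(axis=0).clip(0.05, 0.95)                 # re-estimate, keep some diversity

print(fitness(pop).max())   # approaches n_bits
```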
3

Davidsdottir, Agnes. "Algorithms, Turing machines and algorithmic undecidability." Thesis, Uppsala universitet, Algebra och geometri, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-441282.

4

Tikhonenko, Dmitrii. "Managing algorithmic drivers in a blocked-lane scenario." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/670829.

Abstract (summary):
Due to the emergence of new technologies, algorithm-assisted drivers are close to becoming a reality. This thesis discusses different aspects of managing such drivers in a blocked-lane scenario. The first chapter presents an algorithm for the optimal merging of self-interested drivers. Since the optimal policy can include undesirable velocity oscillations, we propose measures for a central planner to eradicate them, and we test the efficiency of our algorithm against popular heuristic policies. In the second chapter, a mechanism for positional bidding by the drivers is developed. It allows drivers with heterogeneous time valuations to trade highway positions, resulting in a socially beneficial outcome. The final chapter presents a deep learning policy for centralized clearing of the bottleneck in the shortest time. It is fast enough to allow future operational applications, and its training set consists of globally optimal merging policies.
5

Sauerland, Volkmar [Verfasser]. "Algorithm Engineering for some Complex Practise Problems : Exact Algorithms, Heuristics and Hybrid Evolutionary Algorithms / Volkmar Sauerland." Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/1026442745/34.

6

Aguiar, Marilton Sanchotene de. "Análise formal da complexidade de algoritmos genéticos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1998. http://hdl.handle.net/10183/25941.

Abstract (summary):
The objective of this work is to study the viability of treating optimization problems that are considered intractable by means of Genetic Algorithms, developing criteria for the qualitative evaluation of a Genetic Algorithm. Within this theme, the areas covered are complexity, classes of problems, analysis and development of algorithms, and Genetic Algorithms, the last being the central object of study. As a product of this study, a method for developing Genetic Algorithms is proposed, drawing on the formal study of types of problems, the development of approximation algorithms, and complexity analysis. The fact that a problem is theoretically solvable by a computer is not enough for it to be solvable in practice. A problem is called tractable if, in the worst case, it admits a reasonably efficient algorithm, and an algorithm is said to be reasonably efficient when there exists a polynomial p such that for any input of size n the algorithm terminates in at most p(n) steps [SZW 84]. Since a polynomial can be of very high order, an algorithm of polynomial complexity can still be very inefficient. The premise of Genetic Algorithms is that one can find approximate solutions to problems of great computational complexity by means of a process of simulated evolution [LAG 96]. A quality-aware method for developing Genetic Algorithms is therefore proposed, again building on the formal study of types of problems, the development of approximation algorithms, and complexity analysis. The axiom set has the purpose of giving the semantics of the algorithm; in other words, it formally defines the operation of the algorithm, more specifically of its functions and procedures. This gives the algorithm designer greater confidence during development, because to prove the correctness of a Genetic Algorithm that satisfies this model it is only necessary to prove that the procedures satisfy the axioms. To judge the quality of an approximation algorithm, two factors are relevant: accuracy and complexity. This work identifies the important points for studying the complexity of a Genetic Algorithm. Unfortunately, these are conflicting factors: the greater the accuracy, the worse (higher) the complexity, and vice versa. Thus, a study of the quality of a Genetic Algorithm, viewed as an approximation algorithm, is only complete when both factors are considered. Nevertheless, this work provides a significant step towards studying the viability of treating optimization problems via Genetic Algorithms.
7

Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.

Abstract (summary):
Thesis (Ph.D.), Aerospace Engineering, Georgia Institute of Technology, 2010. Committee Chair: Mavris, Dimitri; Committee Members: Beeson, Don; Duncan, Scott; German, Brian; Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
8

Violich, Stephen Scott. "Fusing Loopless Algorithms for Combinatorial Generation." Thesis, University of Canterbury. Computer Science and Software Engineering, 2006. http://hdl.handle.net/10092/1075.

Abstract (summary):
Loopless algorithms are an interesting challenge in the field of combinatorial generation. These algorithms must generate each combinatorial object from its predecessor in no more than a constant number of instructions, thus achieving theoretically minimal time complexity. This constraint rules out powerful programming techniques such as iteration and recursion, which makes loopless algorithms harder to develop and less intuitive than other algorithms. This thesis discusses a divide-and-conquer approach by which loopless algorithms can be developed more easily and intuitively: fusing loopless algorithms. If a combinatorial generation problem can be divided into subproblems, it may be possible to conquer it looplessly by fusing loopless algorithms for its subproblems. A key advantage of this approach is that it allows existing loopless algorithms to be reused. The approach is not novel, but it has not been generalised before. This thesis presents a general framework for fusing loopless algorithms and discusses its implications. It then applies this approach to two combinatorial generation problems and presents two new loopless algorithms. The first new algorithm, MIXPAR, looplessly generates well-formed parenthesis strings comprising two types of parentheses; it is the first loopless algorithm for generating these objects. The second new algorithm, MULTPERM, generates multiset permutations in linear space using only arrays, a benchmark recently set by Korsh and LaFollette (2004). Algorithm MULTPERM is evaluated against Korsh and LaFollette's algorithm and shown to be simpler and more efficient in both space and time.
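To make the "constant number of instructions per object" constraint concrete, here is a classic focus-pointer loopless generator of binary strings in Gray-code order; it is unrelated to MIXPAR and MULTPERM, but it illustrates the style of bookkeeping that loopless algorithms rely on.

```python
# Loopless generation of all n-bit strings in Gray-code order: each successive
# string is produced with O(1) work using "focus pointers".
def gray(n):
    a = [0] * n               # current bit string
    f = list(range(n + 1))    # focus pointers
    while True:
        yield tuple(a)
        j = f[0]
        f[0] = 0
        if j == n:
            return
        f[j] = f[j + 1]
        f[j + 1] = j + 1
        a[j] ^= 1             # flip exactly one bit per step

print(list(gray(3)))   # 8 tuples, successive tuples differ in exactly one position
```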
9

Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.

Abstract (summary):
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k × k submatrix is 2(1 + o(1))√(log n / k) with high probability (w.h.p.) when k = O(log n / log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n / k) w.h.p. when k is constant and n grows, namely approximately √2 times smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1))√(2 log n / k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n / k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows. To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n / k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* = 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n / k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges.
We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c. We observe that every maximum-size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem. We solve this underlying large deviation problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds both on the sample complexity and computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm is two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
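The alternating greedy idea behind the Largest Average Submatrix (LAS) heuristic discussed above can be sketched as follows; the matrix size, k, and the iteration cap are assumptions made for the example.

```python
# Alternating greedy sketch of the Largest Average Submatrix heuristic:
# alternately pick the k rows and k columns with the largest sums until fixed.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5
A = rng.standard_normal((n, n))

cols = rng.choice(n, k, replace=False)
for _ in range(100):
    rows = np.argsort(A[:, cols].sum(axis=1))[-k:]      # best rows given the columns
    new_cols = np.argsort(A[rows, :].sum(axis=0))[-k:]  # best columns given the rows
    if set(new_cols) == set(cols):
        break
    cols = new_cols

print(A[np.ix_(rows, cols)].mean())   # compare with sqrt(2 * log(n) / k)
```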
10

Astete-Morales, Sandra. "Contributions to Convergence Analysis of Noisy Optimization Algorithms." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS327/document.

Abstract (summary):
This thesis presents contributions to the analysis of algorithms for noisy functions. It establishes convergence rates for linesearch algorithms as well as for random search algorithms. We prove, in terms of Simple Regret and Cumulative Regret, that a Hessian-based algorithm can reach the same results as some optimal algorithms in the literature when its parameters are tuned correctly. We also analyse the convergence order of Evolution Strategies when solving noisy functions, deducing log-log convergence, and we give a lower bound for the convergence rate of Evolution Strategies. We extend the work on reevaluation mechanisms by applying them to a discrete setting. Finally, we analyse the performance measure itself and prove that the use of an erroneous performance measure can lead to misleading results when different optimization methods are evaluated.

Books on the topic "Algorithms"

1

Smith, Jeffrey Dean. Design and analysis of algorithms. Boston: PWS-KENT Pub. Co., 1989.

2

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1991.

3

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1988.

4

Jamieson, Leah H., Dennis B. Gannon, and Robert J. Douglass, eds. The characteristics of parallel algorithms. Cambridge, Mass: MIT Press, 1987.

5

Chang, Lin, William D. Chey, John Kellow, Jan Tack, and William E. Whitehead, eds. Rome IV Diagnostic Algorithms. Raleigh, NC USA: The Rome Foundation, 2016. http://dx.doi.org/10.24890/algorithms.

6

Müller-Hannemann, Matthias, and Stefan Schirra. Algorithm engineering: Bridging the gap between algorithm theory and practice. Berlin: Springer, 2010.

7

Ebers, Martin, and Marta Cantero Gamito, eds. Algorithmic Governance and Governance of Algorithms. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-50559-2.

8

Brent, R. P. Algorithms for minimization without derivatives. Mineola, N.Y: Dover Publications, 2002.

9

Wilf, Herbert S. Algorithms and complexity. Englewood Cliffs, N.J: Prentice-Hall, 1986.

10

Asano, Tetsuo, Toshihide Ibaraki, Hiroshi Imai, and Takao Nishizeki, eds. Algorithms. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52921-7.


Book chapters on the topic "Algorithms"

1

Taillard, Éric D. "Elements of Graphs and Complexity Theory." In Design of Heuristic Algorithms for Hard Optimization, 3–29. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13714-3_1.

Abstract (summary):
This chapter recalls some elements and definitions of graph theory and complexity theory. On the one hand, basic courses on algorithms very often include graph algorithms, and some of these algorithms have simply been transposed to solve difficult optimization problems heuristically. On the other hand, it is important to be able to determine whether a problem falls into the category of difficult problems; indeed, one will not develop a heuristic algorithm if there is an efficient algorithm that finds an exact solution. Another objective of this chapter is to make the book self-contained.
2

Lalov, Boyan. "Algorithms Creating Algorithms." In Artificial Neural Networks – ICANN 2010, 303–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15825-4_39.

3

Leiserson, Charles E. "Algorithmic analysis of multithreaded algorithms." In Algorithms and Computation, 132. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63890-3_15.

4

Matzner, Tobias. "Algorithms and humans." In Algorithms, 150–61. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003299851-10.

5

Matzner, Tobias. "Theorizing complimentary abstractions." In Algorithms, 29–50. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003299851-4.

6

Matzner, Tobias. "Algorithms and subjects." In Algorithms, 131–49. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003299851-9.

7

Matzner, Tobias. "Algorithms as relational from the beginning." In Algorithms, 15–28. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003299851-3.

8

Matzner, Tobias. "Algorithms and data." In Algorithms, 107–30. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003299851-8.

9

Matzner, Tobias. "Algorithms and material conditions." In Algorithms, 53–79. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003299851-6.

10

Matzner, Tobias. "Conclusion." In Algorithms, 162–63. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003299851-11.


Conference proceedings on the topic "Algorithms"

1

Anjum, Ishraq Md, Davorin Peceli, Francesco Capuano, and Bedrich Rus. "High-Power Laser Pulse Shape Optimization with Hybrid Stochastic Optimization Algorithms." In Frontiers in Optics, JD4A.55. Washington, D.C.: Optica Publishing Group, 2024. https://doi.org/10.1364/fio.2024.jd4a.55.

Abstract (summary):
We evaluate five optimization algorithms for laser pulse temporal shape optimization, using a semi-physical model of a high-power laser. Hybrid algorithms that combine Differential Evolution and Bayesian optimization for exploration with Nelder-Mead exploitation exhibit superior performance.
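The exploration-plus-exploitation pattern described in the abstract can be sketched generically as follows, with plain random sampling standing in for the Differential Evolution / Bayesian exploration stage and SciPy's Nelder-Mead used for local exploitation; the objective function is a placeholder, not the laser pulse-shape model.

```python
# Hybrid sketch: global exploration by random sampling, then Nelder-Mead
# exploitation started from the best explored point.
import numpy as np
from scipy.optimize import minimize

def objective(x):                      # placeholder objective (Rosenbrock function)
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

rng = np.random.default_rng(0)
samples = rng.uniform(-2, 2, size=(200, 4))              # exploration stage
x0 = samples[np.argmin([objective(x) for x in samples])]

result = minimize(objective, x0, method="Nelder-Mead")   # exploitation stage
print(result.x, result.fun)
```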
2

Kardasz, Piotr, Krzysztof Kolebski, and Piotr Jednaszewski. "UTILIZING ALGORITHMS FOR IMAGING MEDICAL RESULTS FROM CT, MRI, AND PET INTO VIRTUAL REALITY." In 24th SGEM International Multidisciplinary Scientific GeoConference 2024, 89–96. STEF92 Technology, 2024. https://doi.org/10.5593/sgem2024/2.1/s07.12.

Abstract (summary):
There is growing interest in the field of medical imaging in understanding the impact of algorithm parameters on the quality and interpretability of the obtained images. In the developed software VRMed3D, aimed at educating students with medical images in virtual reality technology, Gaussian noise was used to identify the most effective edge detection algorithms. The goal was an algorithm that provides the best quality of generated images while keeping the waiting time as short as possible. The aim of this study is to present the results of tests on how changing the input parameters of the algorithms affects the outcome. The article discusses the current state of research and then describes the analysis of the algorithms used: Gaussian noise, grouping of neighboring illumination values, noise removal, and edge detection. Finally, the effects of the algorithm configurations on the final outcome are discussed.
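A minimal version of the noise-then-edge-detection experiment described above might look like the following NumPy/SciPy sketch; the synthetic image and the noise level are assumptions, and the VRMed3D pipeline itself is not reproduced.

```python
# Add Gaussian noise to a synthetic image and detect edges with a Sobel filter.
import numpy as np
from scipy import ndimage

image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                         # bright square on a dark background
noisy = image + np.random.default_rng(0).normal(0, 0.1, image.shape)

gx = ndimage.sobel(noisy, axis=0)                 # gradient across rows
gy = ndimage.sobel(noisy, axis=1)                 # gradient across columns
edges = np.hypot(gx, gy)

print(edges.max(), edges[32, 16])                 # strong response along the square's border
```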
3

Sambasivan, Lokesh Kumar, Joydeb Mukherjee, and Dinkar Mylaraswamy. "Benchmarking Diagnostic Algorithms." In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-28194.

Abstract (summary):
The problem of fault diagnosis has gained considerable significance in a cost-conscious aerospace industry, and this has resulted in the development of novel methods as well as novel approaches. Following a cost-benefit analysis, wherein a diagnostic algorithm has to buy its way onto an aircraft, the practitioner is faced with the difficult problem of choosing the best algorithm from among many candidates. Despite the appeal of multiple-algorithm solutions, practical applications are limited by engineering resources: the practitioner is forced to pick a few among many. This paper addresses this important engineering decision-making step. Our approach to evaluating diagnostic algorithms is based on two key elements, non-recurring engineering cost and recurring engineering cost, and we define metrics corresponding to each of these criteria. Since development data is a major cost element, non-recurring engineering cost is derived using a metric that measures how well an algorithm has used this data. Recurring cost is measured with respect to the algorithm's robustness and hence the cost associated with sustaining it. Further, we outline procedures for calculating these metrics while making minimal assumptions about algorithm internals, allowing the practitioner to evaluate both in-house and third-party algorithms. The utility of this benchmarking procedure is illustrated using two sets of examples: a standard vowel recognition problem and a problem related to gas turbine diagnosis. For each of these problems, we evaluate a series of candidate algorithms and illustrate the utility of the proposed approach for filtering out weak ones. Concluding sections discuss the use of these procedures for exiting technical feasibility and entering engineering feasibility on the technology readiness level (TRL) scale.
4

Perju, Veaceslav, and Dorian Saranciuc. "Evaluation of the Multi-Algorithms Targets Recognition Systems." In 12th International Conference on Electronics, Communications and Computing. Technical University of Moldova, 2022. http://dx.doi.org/10.52326/ic-ecco.2022/cs.05.

Abstract (summary):
This paper presents the results of an evaluation of new classes of target recognition systems: multi-algorithm unimodal systems and multi-algorithm multimodal systems. The structures and the graphs of the systems are described. Mathematical descriptions and formulas are presented for evaluating a system's cost as a function of the algorithms' recognition probabilities and of the relation between the cost of the algorithms' software and the cost of the system's hardware. An approach is proposed for determining the cost of a system given a threshold level for the system's recognition probability. The relation of the system's cost to the system's recognition probability is evaluated for different values of the algorithms' recognition probability, and the target recognition systems are rated on the basis of their recognition probabilities and costs.
5

Kunz, Philipp, Ilche Georgievski, and Marco Aiello. "Towards a Framework for Learning of Algorithms: The Case of Learned Comparison Sorts." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/481.

Abstract (summary):
Designing algorithms is cumbersome and error-prone. This, among other things, has increasingly led to efforts to extend or even replace the design of algorithms with machine learning models. While previous research has demonstrated that some machine learning models possess Turing completeness, the findings are largely theoretical, and solutions for specific algorithmic tasks remain unclear. With this in mind, we investigate the feasibility of learning representations of classical algorithms from data on their execution, enabling their application to different inputs. We propose a novel and general framework for algorithm learning, consisting of a model of computation that facilitates algorithm analysis across various levels of abstraction. We formalize the problem of learning an algorithm using an algebraic approach for graph traversal. We apply this framework to comparison sorts and evaluate the inferred machine learning models' performance, demonstrating the applicability of the approach in terms of accuracy and sensitivity.
6

Badi, Manjulata, Sheila Mahapatra, and Saurav Raj. "A SOLUTION OF RPM FOR ACTIVE AND REACTIVE LOADING COMPENSATION." In TOPICS IN INTELLIGENT COMPUTING AND INDUSTRY DESIGN (ICID). Volkson Press, 2022. http://dx.doi.org/10.26480/icpesd.02.2022.135.140.

Abstract (summary):
A solution for reactive power management that considers both active and reactive power loading is obtained with the Butterfly Optimization Algorithm (BOA) hybridized with Grey Wolf Optimization to form the BOA-GWO algorithm. The objective of the research is to optimize operating cost and power loss for the optimum reactive power solution with the proposed combined algorithm; a comparative account with other algorithms shows the proposed algorithm's superiority. The best result obtained is a real power cost reduction of 3.5% and a percentage loss of 3.493%, validated against the state-of-the-art literature. The proposed technique delivers superior results when compared with other established algorithms in the literature.
7

McCormick, N. J., and Z. Tao. "Algorithms for bioluminescence estimation." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.thff3.

Abstract (summary):
Two algorithms are compared for estimating the spatial dependence of bioluminescence using in situ measurements of the irradiance and scalar irradiance. The conservation algorithm is based on the conservation principle of radiative transfer, but it requires a separate estimate of the difficult-to-measure absorption coefficient; the two-stream algorithm incorporates additional equations to estimate the absorption coefficient and the single-scattering albedo. The algorithms have been numerically tested with measurement fluctuations simulated by a random sampling approach. The effectiveness of the algorithms depends on the strength of the bioluminescent source compared to the background sea-surface illumination, among other factors.
8

Caizergues, Emma, François Durand, and Fabien Mathieu. "Anytime Sorting Algorithms." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/785.

Abstract (summary):
This paper addresses the anytime sorting problem: developing algorithms that provide tentative estimates of the sorted list at each execution step. Comparisons are treated as steps, and Spearman's footrule metric evaluates estimation accuracy. We propose a general approach for making any sorting algorithm anytime and introduce two new algorithms, multizip sort and Corsort. Simulations showcase the superior performance of both algorithms compared with existing methods. Multizip sort keeps a low global complexity, while Corsort produces intermediate estimates surpassing previous algorithms.
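Spearman's footrule, the accuracy measure used above, is simply the sum of absolute rank displacements between a tentative ordering and the true order, as in this small sketch.

```python
# Spearman's footrule distance between a tentative ordering and the true order.
def footrule(estimate, truth):
    pos = {item: i for i, item in enumerate(truth)}
    return sum(abs(i - pos[item]) for i, item in enumerate(estimate))

print(footrule([3, 1, 2, 4], [1, 2, 3, 4]))   # 2 + 1 + 1 + 0 = 4
```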
9

AbuKwaik, Mohammad, Yousef Sardahi, and Ahmad M. Alshorman. "Optimizing Control Systems: A Comparative Analysis of 77 Multi-Objective Evolutionary Algorithms." In ASME 2024 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2024. https://doi.org/10.1115/imece2024-140634.

Abstract (summary):
Multi-objective optimization is integral to control systems, particularly in engineering, where multiple objectives must be satisfied concurrently. This study pioneers a comprehensive comparison of 77 evolutionary algorithms for optimizing a second-order system under a proportional-integral-derivative (PID) controller. We emphasize the challenges of balancing conflicting metrics such as peak time and percentage overshoot, and how PID controller tuning is crucial in achieving this balance. To assess the algorithms' performance, we use four performance metrics: Generational Distance (GD), Spacing, Spread, and computation time. The optimization results are analyzed in two ways. First, after 1 million iterations, outcomes are visually represented through discrete charts for each algorithm, detailing their performance metrics. The Multiple single objective Pareto sampling II (MSOPSII) algorithm demonstrates the most effective distribution of the solution set and maintains uniformity well. The Indicator-Based Evolutionary Algorithm (IBEA) excels in the Spacing metric, while the Speed-constrained Multi-objective Particle Swarm Optimization (SMPSO) algorithm exhibits superior convergence. Second, algorithms with optimal performance are evaluated by considering multiple metrics simultaneously. The theta-Dominance-based Evolutionary Algorithm (tDEA) leads in combined convergence and spread, Multi-Tasking Constrained Multi-objective Optimization (MTCMO) stands out for convergence and spacing, and MSOPSII and IBEA perform best in spread and spacing. Across all three metrics, MOEA/D with Adaptive Weight Adjustment (MOEADAWA) emerges as the top performer. These outcomes not only advance our understanding of algorithmic efficiency in multi-objective scenarios but also provide valuable insights for applying these algorithms to practical control system challenges. This research fills a critical gap in the existing literature, offering an in-depth evaluation of a wide range of algorithms in the context of control systems.
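Of the metrics listed above, Generational Distance is the simplest to state: the average distance from each obtained solution to its nearest point on a reference front. A small sketch follows, using the common p = 2 aggregation as an assumption; the reference and candidate fronts are made up for illustration.

```python
# Generational Distance: how far, on average, the obtained front lies from a
# reference Pareto front (lower is better).
import numpy as np

def generational_distance(obtained, reference, p=2):
    # distance from each obtained point to its nearest reference point
    d = np.min(np.linalg.norm(obtained[:, None, :] - reference[None, :, :], axis=2), axis=1)
    return (np.sum(d ** p) ** (1.0 / p)) / len(obtained)

reference = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])     # assumed true front
obtained = np.array([[0.1, 1.0], [0.6, 0.55], [1.0, 0.1]])     # a candidate front
print(generational_distance(obtained, reference))
```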
Gli stili APA, Harvard, Vancouver, ISO e altri
10

Fadhil, Heba. "Metaheuristic Algorithms in Optimization and its Application: A Review." In The 3rd International Conference On Engineering And Innovative Technology. Salahaddin University-Erbil, 2025. https://doi.org/10.31972/iceit2024.013.

Testo completo
Abstract (sommario):
Metaheuristic algorithms are a flexible, intelligent approach developed for solving diverse optimization problems. The number of potential solutions for such problems is often too large to be analyzed properly using standard procedures; these algorithms are therefore highly flexible and useful in many cases where different types of optimization must be handled accurately. Metaheuristics take inspiration from natural processes such as evolution or animal behavior, which gives them strength without tying them to a single problem area. Commonly used metaheuristics include the Genetic Algorithm (GA), Simulated Annealing (SA), Evolutionary Algorithms (EA), Tabu Search (TS), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Ant Colony Optimization (ACO), and the Cuckoo Search Approach (CSA). All of them start from an initial set of candidate solutions and employ heuristics to move from one set of solutions to the next. The objective of this paper is to thoroughly analyze different metaheuristic algorithms: their principles, their mechanisms, and the areas where they are applied. The paper provides a qualitative analysis of their performance in diverse settings, underscoring their strong suits as well as their weaknesses. It also discusses specific examples of how metaheuristic algorithms find application in fields including, but not limited to, engineering, computer science, economics, and healthcare, showing not only how effective individual algorithms can be under differing scenarios but also pointing out areas toward which further research efforts should be directed.
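
The shared template the abstract describes, start from candidate solutions and use heuristics to move between them, can be made concrete with a generic simulated annealing loop. The sketch below is illustrative only; the objective, neighborhood move, and cooling schedule are arbitrary placeholders:

```python
import math
import random

def simulated_annealing(objective, initial, neighbor, t0=1.0, cooling=0.995, steps=10_000):
    """Generic SA loop: accept worse moves with probability exp(-delta / T)."""
    current, best = initial, initial
    f_current = f_best = objective(initial)
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        f_candidate = objective(candidate)
        delta = f_candidate - f_current
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-12)):
            current, f_current = candidate, f_candidate
            if f_current < f_best:
                best, f_best = current, f_current
        t *= cooling  # geometric cooling schedule
    return best, f_best

# Example: minimize a simple 1-D function
best_x, best_f = simulated_annealing(
    objective=lambda x: (x - 3.0) ** 2 + math.sin(5 * x),
    initial=0.0,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
)
```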
Gli stili APA, Harvard, Vancouver, ISO e altri

Rapporti di organizzazioni sul tema "Algorithms"

1

Marty, Frédéric, and Thierry Warin. Deciphering Algorithmic Collusion: Insights from Bandit Algorithms and Implications for Antitrust Enforcement. CIRANO, December 2023. http://dx.doi.org/10.54932/iwpg7510.

Testo completo
Abstract (sommario):
This paper examines algorithmic collusion from legal and economic perspectives, highlighting the growing role of algorithms in digital markets and their potential for anti-competitive behavior. Using bandit algorithms, traditionally applied in uncertain decision-making contexts, as a model, we illuminate the dynamics of implicit collusion without overt communication. Legally, the challenge is discerning and classifying these algorithmic signals, especially as unilateral communications. Economically, distinguishing between rational pricing and collusive patterns becomes intricate with algorithm-driven decisions. The paper emphasizes the imperative for competition authorities to identify unusual market behaviors, hinting at shifting the burden of proof to firms with algorithmic pricing. Balancing algorithmic transparency and collusion prevention is crucial. While regulations might address these concerns, they could hinder algorithmic development. As this form of collusion becomes central in antitrust, understanding through models like bandit algorithms is vital, since the latter may converge faster towards an anticompetitive equilibrium.
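
A highly stylized illustration of the mechanism (not the authors' model) is two independent epsilon-greedy bandits that each treat a small grid of prices as arms and learn only from their own realized profits; in repeated play such learners may settle on supra-competitive prices without any communication. The price grid, cost, and demand function below are invented for the example:

```python
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]   # hypothetical price grid (arms)
COST = 0.5

def profits(p1, p2):
    """Toy linear-demand duopoly: demand falls in own price, rises in rival's price."""
    d1 = max(0.0, 2.0 - p1 + 0.5 * p2)
    d2 = max(0.0, 2.0 - p2 + 0.5 * p1)
    return (p1 - COST) * d1, (p2 - COST) * d2

class EpsilonGreedyBandit:
    def __init__(self, n_arms, eps=0.1):
        self.eps, self.counts, self.values = eps, [0] * n_arms, [0.0] * n_arms

    def choose(self):
        if random.random() < self.eps:
            return random.randrange(len(self.values))   # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

firm_a, firm_b = EpsilonGreedyBandit(len(PRICES)), EpsilonGreedyBandit(len(PRICES))
for _ in range(50_000):
    a, b = firm_a.choose(), firm_b.choose()
    r_a, r_b = profits(PRICES[a], PRICES[b])
    firm_a.update(a, r_a)
    firm_b.update(b, r_b)

# Inspect which prices each learner has come to prefer:
print(PRICES[firm_a.values.index(max(firm_a.values))],
      PRICES[firm_b.values.index(max(firm_b.values))])
```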
Gli stili APA, Harvard, Vancouver, ISO e altri
2

Tipton, Kelley, Brian F. Leas, Emilia Flores, Christopher Jepson, Jaya Aysola, Jordana Cohen, Michael Harhay, et al. Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare. Agency for Healthcare Research and Quality (AHRQ), December 2023. http://dx.doi.org/10.23970/ahrqepccer268.

Testo completo
Abstract (sommario):
Objectives. To examine the evidence on whether and how healthcare algorithms (including algorithm-informed decision tools) exacerbate, perpetuate, or reduce racial and ethnic disparities in access to healthcare, quality of care, and health outcomes, and examine strategies that mitigate racial and ethnic bias in the development and use of algorithms. Data sources. We searched published and grey literature for relevant studies published between January 2011 and February 2023. Based on expert guidance, we determined that earlier articles are unlikely to reflect current algorithms. We also hand-searched reference lists of relevant studies and reviewed suggestions from experts and stakeholders. Review methods. Searches identified 11,500 unique records. Using predefined criteria and dual review, we screened and selected studies to assess one or both Key Questions (KQs): (1) the effect of algorithms on racial and ethnic disparities in health and healthcare outcomes and (2) the effect of strategies or approaches to mitigate racial and ethnic bias in the development, validation, dissemination, and implementation of algorithms. Outcomes of interest included access to healthcare, quality of care, and health outcomes. We assessed studies’ methodologic risk of bias (ROB) using the ROBINS-I tool and piloted an appraisal supplement to assess racial and ethnic equity-related ROB. We completed a narrative synthesis and cataloged study characteristics and outcome data. We also examined four Contextual Questions (CQs) designed to explore the context and capture insights on practical aspects of potential algorithmic bias. CQ 1 examines the problem’s scope within healthcare. CQ 2 describes recently emerging standards and guidance on how racial and ethnic bias can be prevented or mitigated during algorithm development and deployment. CQ 3 explores stakeholder awareness and perspectives about the interaction of algorithms and racial and ethnic disparities in health and healthcare. We addressed these CQs through supplemental literature reviews and conversations with experts and key stakeholders. For CQ 4, we conducted an in-depth analysis of a sample of six algorithms that have not been widely evaluated before in the published literature to better understand how their design and implementation might contribute to disparities. Results. Fifty-eight studies met inclusion criteria, of which three were included for both KQs. One study was a randomized controlled trial, and all others used cohort, pre-post, or modeling approaches. The studies included numerous types of clinical assessments: need for intensive care or high-risk care management; measurement of kidney or lung function; suitability for kidney or lung transplant; risk of cardiovascular disease, stroke, lung cancer, prostate cancer, postpartum depression, or opioid misuse; and warfarin dosing. We found evidence suggesting that algorithms may: (a) reduce disparities (i.e., revised Kidney Allocation System, prostate cancer screening tools); (b) perpetuate or exacerbate disparities (e.g., estimated glomerular filtration rate [eGFR] for kidney function measurement, cardiovascular disease risk assessments); and/or (c) have no effect on racial or ethnic disparities. Algorithms for which mitigation strategies were identified are included in KQ 2. 
We identified six types of strategies often used to mitigate the potential of algorithms to contribute to disparities: removing an input variable; replacing a variable; adding one or more variables; changing or diversifying the racial and ethnic composition of the patient population used to train or validate a model; creating separate algorithms or thresholds for different populations; and modifying the statistical or analytic techniques used by an algorithm. Most mitigation efforts improved proximal outcomes (e.g., algorithmic calibration) for targeted populations, but it is more challenging to infer or extrapolate effects on longer term outcomes, such as racial and ethnic disparities. The scope of racial and ethnic bias related to algorithms and their application is difficult to quantify, but it clearly extends across the spectrum of medicine. Regulatory, professional, and corporate stakeholders are undertaking numerous efforts to develop standards for algorithms, often emphasizing the need for transparency, accountability, and representativeness. Conclusions. Algorithms have been shown to potentially perpetuate, exacerbate, and sometimes reduce racial and ethnic disparities. Disparities were reduced when race and ethnicity were incorporated into an algorithm to intentionally tackle known racial and ethnic disparities in resource allocation (e.g., kidney transplant allocation) or disparities in care (e.g., prostate cancer screening that historically led to Black men receiving more low-yield biopsies). It is important to note that in such cases the rationale for using race and ethnicity was clearly delineated and did not conflate race and ethnicity with ancestry and/or genetic predisposition. However, when algorithms include race and ethnicity without clear rationale, they may perpetuate the incorrect notion that race is a biologic construct and contribute to disparities. Finally, some algorithms may reduce or perpetuate disparities without containing race and ethnicity as an input. Several modeling studies showed that applying algorithms out of context of original development (e.g., illness severity scores used for crisis standards of care) could perpetuate or exacerbate disparities. On the other hand, algorithms may also reduce disparities by standardizing care and reducing opportunities for implicit bias (e.g., Lung Allocation Score for lung transplantation). Several mitigation strategies have been shown to potentially reduce the contribution of algorithms to racial and ethnic disparities. Results of mitigation efforts are highly context specific, relating to unique combinations of algorithm, clinical condition, population, setting, and outcomes. Important future steps include increasing transparency in algorithm development and implementation, increasing diversity of research and leadership teams, engaging diverse patient and community groups in the development to implementation lifecycle, promoting stakeholder awareness (including patients) of potential algorithmic risk, and investing in further research to assess the real-world effect of algorithms on racial and ethnic disparities before widespread implementation.
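
For the first mitigation strategy listed above, removing an input variable, the generic workflow is to retrain the model without the variable and re-check calibration within each subgroup. The sketch below is a hypothetical, simplified illustration on synthetic data using scikit-learn; it does not correspond to any of the clinical algorithms reviewed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)                    # hypothetical subgroup indicator
clinical = rng.normal(size=(n, 3))               # hypothetical clinical features
risk = 1 / (1 + np.exp(-(clinical @ [1.0, 0.5, -0.8])))
y = rng.binomial(1, risk)                        # outcome depends only on clinical features

def subgroup_calibration(model, X):
    """Mean predicted risk vs. observed event rate, per subgroup."""
    pred = model.predict_proba(X)[:, 1]
    for g in (0, 1):
        m = group == g
        print(f"group {g}: predicted {pred[m].mean():.3f}, observed {y[m].mean():.3f}")

X_with = np.column_stack([clinical, group])      # model that uses the group variable
X_without = clinical                             # mitigation: drop the variable

for X in (X_with, X_without):
    model = LogisticRegression(max_iter=1000).fit(X, y)
    subgroup_calibration(model, X)
```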
Gli stili APA, Harvard, Vancouver, ISO e altri
3

Johansen, Richard A., Christina L. Saltus, Molly K. Reif, and Kaytee L. Pokrzywinski. A Review of Empirical Algorithms for the Detection and Quantification of Harmful Algal Blooms Using Satellite-Borne Remote Sensing. U.S. Army Engineer Research and Development Center, June 2022. http://dx.doi.org/10.21079/11681/44523.

Testo completo
Abstract (sommario):
Harmful Algal Blooms (HABs) continue to be a global concern, especially since predicting bloom events, including their intensity, extent, and geographic location, remains difficult. However, remote sensing platforms are useful tools for monitoring HABs across space and time. The main objective of this review was to explore the scientific literature to develop a near-comprehensive list of spectrally derived empirical algorithms for satellite imagers commonly utilized for the detection and quantification of HABs and water quality indicators. The review identified 29 WorldView-2 MSI algorithms, 25 Sentinel-2 MSI algorithms, 32 Landsat-8 OLI algorithms, 9 MODIS algorithms, and 64 MERIS/Sentinel-3 OLCI algorithms. It also revealed that most empirical algorithms fell into one of the following general formulas: the two-band difference algorithm (2BDA), the three-band difference algorithm (3BDA), the normalized-difference chlorophyll index (NDCI), or the cyanobacterial index (CI). New empirical algorithm development appears to be constrained, at least in part, by the limited number of HAB-associated spectral features detectable with currently operational imagers. However, these algorithms provide a foundation for future algorithm development as new sensors, technologies, and platforms emerge.
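
The general formulas named here are simple band arithmetic; for instance, the normalized-difference chlorophyll index contrasts reflectance near 708 nm with reflectance near 665 nm, and a two-band difference algorithm differences two diagnostic bands. The sketch below uses hypothetical reflectance values, and the band choices should be read as indicative since they vary by sensor:

```python
import numpy as np

def ndci(r_708, r_665):
    """Normalized-Difference Chlorophyll Index: (R708 - R665) / (R708 + R665)."""
    r_708, r_665 = np.asarray(r_708, float), np.asarray(r_665, float)
    return (r_708 - r_665) / (r_708 + r_665)

def two_band_difference(band_a, band_b):
    """Generic 2BDA form: simple difference of two diagnostic bands."""
    return np.asarray(band_a, float) - np.asarray(band_b, float)

# Hypothetical remote-sensing reflectance for a handful of pixels
r665 = np.array([0.010, 0.012, 0.008])
r708 = np.array([0.014, 0.011, 0.009])
print(ndci(r708, r665))
print(two_band_difference(r708, r665))
```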
Gli stili APA, Harvard, Vancouver, ISO e altri
4

Baader, Franz, and Rafael Peñaloza. Axiom Pinpointing in General Tableaux. Aachen University of Technology, 2007. http://dx.doi.org/10.25368/2022.159.

Testo completo
Abstract (sommario):
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableaux algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
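
Independent of the tableau machinery, the pinpointing task itself can be stated operationally: find the subset-minimal sets of axioms that still entail the consequence. The brute-force sketch below (exponential, and nothing like the glass-box tableau extension developed in the report) assumes a black-box entailment checker and a toy Horn-rule knowledge base:

```python
from itertools import combinations

def minimal_axiom_sets(axioms, entails):
    """Return all subset-minimal sets S of axioms with entails(S) == True.

    `entails` is a black-box decision procedure (e.g. a reasoner call).
    Enumerating subsets by increasing size keeps the minimality check trivial.
    """
    axioms = list(axioms)
    minas = []
    for size in range(1, len(axioms) + 1):
        for subset in combinations(axioms, size):
            s = set(subset)
            if any(m <= s for m in minas):
                continue            # a smaller witness already covers this subset
            if entails(s):
                minas.append(s)
    return minas

# Toy example: does the knowledge base entail "c"?
# Axioms encoded as Horn rules (premises, conclusion); facts have no premises.
kb = [((), "a"), (("a",), "b"), (("b",), "c"), ((), "c")]

def entails_c(subset):
    facts, changed = set(), True
    while changed:
        changed = False
        for premises, concl in subset:
            if concl not in facts and all(p in facts for p in premises):
                facts.add(concl)
                changed = True
    return "c" in facts

print(minimal_axiom_sets(kb, entails_c))
```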
Gli stili APA, Harvard, Vancouver, ISO e altri
5

Baader, Franz, and Rafael Peñaloza. Pinpointing in Terminating Forest Tableaux. Technische Universität Dresden, 2008. http://dx.doi.org/10.25368/2022.166.

Testo completo
Abstract (sommario):
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableau algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
Gli stili APA, Harvard, Vancouver, ISO e altri
6

Arthur, Jennifer Ann. Genetic Algorithms. Office of Scientific and Technical Information (OSTI), August 2017. http://dx.doi.org/10.2172/1375151.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
7

Sahni, Sartaj. Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, June 1999. http://dx.doi.org/10.21236/ada369856.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
8

Vazirani, Umesh. Quantum Algorithms. Fort Belvoir, VA: Defense Technical Information Center, January 2013. http://dx.doi.org/10.21236/ada579025.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
9

Baader, Franz, and Rafael Peñaloza. Blocking and Pinpointing in Forest Tableaux. Technische Universität Dresden, 2008. http://dx.doi.org/10.25368/2022.165.

Testo completo
Abstract (sommario):
Axiom pinpointing has been introduced in description logics (DLs) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question. Several pinpointing algorithms have been described as extensions of the standard tableau-based reasoning algorithms for deciding consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL, using specific traits of that algorithm. In the past, we have developed a general approach for extending tableau-based algorithms into pinpointing algorithms. In this paper we explore some issues of termination of general tableaux and their pinpointing extensions. We also define a subclass of tableaux that allows the use of so-called blocking conditions, which stop the execution of the algorithm once a certain pattern is found, and adapt the pinpointing extensions accordingly, guaranteeing their correctness and termination.
Gli stili APA, Harvard, Vancouver, ISO e altri
10

Cordeiro de Amorim, Renato. A survey on feature weighting based K-Means algorithms. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ser.v1i2.79.

Testo completo
Abstract (sommario):
In a real-world data set there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature weighting mechanisms based on K-Means.
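
To make the idea concrete, the classic feature-weighted K-Means variants alternate three steps: assign points with weighted distances, update centroids, then recompute feature weights from each feature's within-cluster dispersion, so that noisy features are down-weighted. The sketch below follows that general scheme with a weight exponent beta; it is an illustrative simplification rather than any single surveyed algorithm:

```python
import numpy as np

def feature_weighted_kmeans(X, k, beta=2.0, n_iter=50, seed=0):
    """Toy feature-weighted K-Means: weights w_j reflect feature relevance."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)]
    w = np.full(d, 1.0 / d)                       # start with uniform feature weights
    for _ in range(n_iter):
        # 1) assign points using weighted squared Euclidean distance
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2 * w ** beta).sum(axis=2)
        labels = dist.argmin(axis=1)
        # 2) update centroids
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
        # 3) update weights from per-feature within-cluster dispersion D_j
        D = np.array([((X[labels == c] - centers[c]) ** 2).sum(axis=0)
                      for c in range(k)]).sum(axis=0)
        D = np.maximum(D, 1e-12)
        w = (1.0 / D) ** (1.0 / (beta - 1.0))
        w /= w.sum()
    return labels, centers, w

# A relevant feature (two separated blobs) plus a pure-noise feature:
rng = np.random.default_rng(1)
X = np.column_stack([np.r_[rng.normal(0, 1, 100), rng.normal(6, 1, 100)],
                     rng.normal(0, 1, 200)])
labels, centers, weights = feature_weighted_kmeans(X, k=2)
print(weights)   # the informative first feature should receive the larger weight
```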
Gli stili APA, Harvard, Vancouver, ISO e altri
