Selected scholarly literature on the topic "Algorithms"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Algorithms".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract online, if it is available in the metadata.

Journal articles on the topic "Algorithms"

1

Sun, Yuqin, Songlei Wang, Dongmei Huang, Yuan Sun, Anduo Hu, and Jinzhong Sun. "A multiple hierarchical clustering ensemble algorithm to recognize clusters arbitrarily shaped". Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1211–28. http://dx.doi.org/10.3233/ida-216112.

Full text
Abstract:
As a research hotspot in ensemble learning, clustering ensemble obtains robust and highly accurate algorithms by integrating multiple basic clustering algorithms. Most of the existing clustering ensemble algorithms take linear clustering algorithms as the base clusterings. As a typical unsupervised learning technique, clustering algorithms have difficulty properly defining the accuracy of their findings, which makes it hard to significantly enhance the performance of the final algorithm. In this article, the AGglomerative NESting (AGNES) method is used to build the base clusterings, and a strategy for integrating multiple AGglomerative NESting clusterings is proposed. The algorithm has three main steps: evaluating the credibility of labels, producing multiple base clusterings, and constructing the relation among clusters. The proposed algorithm builds on the original advantages of AGglomerative NESting and further compensates for its inability to identify arbitrarily shaped clusters. Comparing the proposed algorithm's clustering performance to that of existing clustering algorithms on different datasets establishes its superiority in terms of clustering performance.
APA, Harvard, Vancouver, ISO, and other styles
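
As a concrete point of reference for the entry above, the following Python sketch shows plain bottom-up agglomerative (AGNES-style) clustering with single linkage on synthetic data. It is only a minimal illustration of the base-clustering idea; the ensemble integration strategy proposed in the paper is not reproduced, and the data and parameters are invented for the example.

import numpy as np

def agnes(points, n_clusters):
    """Merge the two closest clusters (single linkage) until n_clusters remain."""
    clusters = [{i} for i in range(len(points))]
    def linkage(a, b):
        return min(np.linalg.norm(points[i] - points[j]) for i in a for j in b)
    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
                   key=lambda pair: linkage(clusters[pair[0]], clusters[pair[1]]))
        clusters[i] |= clusters[j]   # merge cluster j into cluster i
        del clusters[j]
    return clusters

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
print([len(c) for c in agnes(data, 2)])   # expected: two clusters of 20 points each
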
2

Gościniak, Ireneusz, and Krzysztof Gdawiec. "Visual Analysis of Dynamics Behaviour of an Iterative Method Depending on Selected Parameters and Modifications". Entropy 22, no. 7 (July 2, 2020): 734. http://dx.doi.org/10.3390/e22070734.

Full text
Abstract:
There is a huge group of algorithms described in the literature that iteratively find solutions of a given equation, and most of them require tuning. The article presents root-finding algorithms that are based on the Newton–Raphson method, iteratively find the solutions, and require tuning. The modification of the algorithm uses the best position of a particle, similarly to particle swarm optimisation algorithms. The proposed approach allows visualising the impact of the algorithm's elements on the complex behaviour of the algorithm. Moreover, instead of the standard Picard iteration, various feedback iteration processes are used in this research. The presented examples and the discussion of the algorithm's operation make it possible to understand the influence of the proposed modifications on the algorithm's behaviour. Understanding the impact of the proposed modification on the algorithm's operation can be helpful for using it in other algorithms. The obtained images also have potential artistic applications.
APA, Harvard, Vancouver, ISO, and other styles
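
For readers unfamiliar with the underlying iteration, here is a minimal Newton–Raphson root-finding sketch in Python. It shows only the classical method that the entry builds on; the swarm-inspired best-position modification and the alternative feedback iterations studied in the paper are not included, and the example function is an arbitrary choice.

def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    """Iterate x <- x - f(x)/df(x) until |f(x)| is below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x

# Example: find a cube root of unity for f(z) = z^3 - 1 from a complex starting point.
root = newton_raphson(lambda z: z**3 - 1, lambda z: 3 * z**2, 0.5 + 0.5j)
print(root)
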
3

Gangavane, Ms H. N. "A Comparison of ABK-Means Algorithm with Traditional Algorithms". International Journal of Trend in Scientific Research and Development Volume-1, Issue-4 (June 30, 2017): 614–21. http://dx.doi.org/10.31142/ijtsrd2197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nico, Nico, Novrido Charibaldi, and Yuli Fauziah. "Comparison of Memetic Algorithm and Genetic Algorithm on Nurse Picket Scheduling at Public Health Center". International Journal of Artificial Intelligence & Robotics (IJAIR) 4, no. 1 (May 30, 2022): 9–23. http://dx.doi.org/10.25139/ijair.v4i1.4323.

Full text
Abstract:
One of the most significant aspects of the working world is the concept of a picket schedule. It is difficult for the scheduler to put one together, since there are frequently many issues with the picket schedule, including schedule clashes, requests for leave, and traded shifts. Evolutionary algorithms have been successful in solving a wide variety of scheduling issues, but they are very susceptible to data convergence, and no one has discussed at what point the data converge when making schedules with evolutionary algorithms. The best evolutionary algorithms for scheduling are genetic algorithms and memetic algorithms, yet using either of them may not always offer the optimal outcome in every situation. Therefore, it is necessary to compare the genetic algorithm and the memetic algorithm to determine which one is suitable for the nurse picket schedule. From the results of this study, the memetic algorithm is better than the genetic algorithm at making picket schedules. The memetic algorithm with a population of 10000 and 5000 generations does not produce convergent data, while for the genetic algorithm data convergence starts when the population is 5000 and the generation is 50. For accuracy, the memetic algorithm violates only 24 of the 124 existing constraints (80.645%), whereas the genetic algorithm violates 27 of the 124 constraints (78.225%). The average runtime needed to generate optimal data using the memetic algorithm is 20.935592 seconds; the genetic algorithm takes longer, as much as 53.951508 seconds.
APA, Harvard, Vancouver, ISO, and other styles
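
The sketch below is a bare-bones genetic algorithm for a toy shift-assignment problem, with fitness measured as the number of violated constraints, loosely mirroring how the paper scores schedules. The encoding, constraints, and parameters are illustrative assumptions, not the authors' setup, and the memetic variant (which would add a local search step after mutation) is not shown.

import random

NURSES, DAYS = 6, 7   # each nurse gets one of 3 shifts per day: 0=off, 1=day, 2=night

def violations(schedule):
    v = 0
    for d in range(DAYS):
        day_shift = sum(1 for n in range(NURSES) if schedule[n][d] == 1)
        night_shift = sum(1 for n in range(NURSES) if schedule[n][d] == 2)
        v += day_shift < 2       # want at least two nurses on the day shift
        v += night_shift < 1     # want at least one nurse on the night shift
    for n in range(NURSES):
        v += sum(1 for d in range(DAYS) if schedule[n][d] != 0) > 5   # at most 5 working days
    return v

def random_schedule():
    return [[random.randint(0, 2) for _ in range(DAYS)] for _ in range(NURSES)]

def crossover(a, b):
    cut = random.randrange(1, NURSES)
    return [row[:] for row in a[:cut]] + [row[:] for row in b[cut:]]

def mutate(s, rate=0.05):
    for n in range(NURSES):
        for d in range(DAYS):
            if random.random() < rate:
                s[n][d] = random.randint(0, 2)

random.seed(0)
population = [random_schedule() for _ in range(50)]
for generation in range(200):
    population.sort(key=violations)
    if violations(population[0]) == 0:
        break
    parents = population[:20]
    children = []
    while len(children) < 30:
        child = crossover(*random.sample(parents, 2))
        mutate(child)
        children.append(child)
    population = parents + children
print("best schedule violates", violations(population[0]), "constraints")
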
5

Omar, Hoger K., Kamal H. Jihad, and Shalau F. Hussein. "Comparative analysis of the essential CPU scheduling algorithms". Bulletin of Electrical Engineering and Informatics 10, no. 5 (October 1, 2021): 2742–50. http://dx.doi.org/10.11591/eei.v10i5.2812.

Full text
Abstract:
CPU scheduling algorithms have a significant function in multiprogramming operating systems. When CPU scheduling is effective, a high rate of computation can be carried out correctly and the system remains in a stable state. Moreover, CPU scheduling algorithms are the main service in operating systems that achieves maximum utilization of the CPU. This paper aims to compare the characteristics of the CPU scheduling algorithms and to determine which one is the best algorithm for gaining higher CPU utilization. The comparison has been done between ten scheduling algorithms, presenting different parameters such as performance, algorithm complexity, algorithm problems, average waiting times, advantages and disadvantages, allocation method, etc. The main purpose of the article is to analyze the CPU scheduler in a way that suits the scheduling goals and, by showing its full properties, to identify the algorithm type that is most suitable for a particular situation.
APA, Harvard, Vancouver, ISO, and other styles
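
As a small illustration of why the choice of scheduling algorithm matters, the following Python sketch computes the average waiting time of the same CPU bursts under first-come-first-served (FCFS) and non-preemptive shortest-job-first (SJF) ordering. The burst values are the classic textbook example and are not taken from the paper.

def avg_waiting_time(bursts):
    waiting, elapsed = 0, 0
    for burst in bursts:
        waiting += elapsed   # each process waits for everything scheduled before it
        elapsed += burst
    return waiting / len(bursts)

bursts = [24, 3, 3]                               # CPU burst lengths, all arriving at time 0
print("FCFS:", avg_waiting_time(bursts))          # 17.0
print("SJF: ", avg_waiting_time(sorted(bursts)))  # 3.0
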
6

Hairol Anuar, Siti Haryanti, Zuraida Abal Abas, Norhazwani Mohd Yunos, Nurul Hafizah Mohd Zaki, Nurul Akmal Hashim, Mohd Fariddudin Mokhtar, Siti Azirah Asmai, Zaheera Zainal Abidin, and Ahmad Fadzli Nizam. "Comparison between Louvain and Leiden Algorithm for Network Structure: A Review". Journal of Physics: Conference Series 2129, no. 1 (December 1, 2021): 012028. http://dx.doi.org/10.1088/1742-6596/2129/1/012028.

Full text
Abstract:
Real networks are often large and complex, and community detection algorithms are a way to understand that kind of network structure. There are many algorithms for performing community detection, and each of them has its own advantages and disadvantages for different types and scales of complex network. Experiments with the Louvain algorithm have shown that it can produce badly connected, or even disconnected, communities when it is run iteratively. In this paper, two algorithms based on the agglomerative method (Louvain and Leiden) are introduced and reviewed, and their concepts and benefits are summarized in detail by comparison. The Leiden algorithm turns out to be the more recent and faster algorithm of the two. In the future, this comparison can help in choosing the best community detection algorithm, even though these algorithms have different definitions of community.
APA, Harvard, Vancouver, ISO, and other styles
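
Both Louvain and Leiden greedily optimise modularity, so a minimal sketch of that quantity may help. The Python code below computes Newman modularity for a hard-coded two-community split of a small synthetic graph; it is not an implementation of either algorithm.

import networkx as nx

def modularity(G, communities):
    m = G.number_of_edges()
    deg = dict(G.degree())
    Q = 0.0
    for community in communities:
        for u in community:
            for v in community:
                a_uv = 1.0 if G.has_edge(u, v) else 0.0
                Q += a_uv - deg[u] * deg[v] / (2 * m)
    return Q / (2 * m)

G = nx.barbell_graph(5, 0)                      # two 5-node cliques joined by one edge
parts = [set(range(5)), set(range(5, 10))]      # the obvious two-community split
print(round(modularity(G, parts), 3))           # clearly positive for this split
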
7

Agapie, Alexandru. "Theoretical Analysis of Mutation-Adaptive Evolutionary Algorithms". Evolutionary Computation 9, no. 2 (June 2001): 127–46. http://dx.doi.org/10.1162/106365601750190370.

Full text
Abstract:
Adaptive evolutionary algorithms require more sophisticated modeling than their static-parameter counterparts. Taking into account the current population is not enough when implementing parameter-adaptation rules based on success rates (evolution strategies) or on premature convergence (genetic algorithms). Instead of Markov chains, we use random systems with complete connections, which account for the complete, rather than only the recent, history of the algorithm's evolution. Under the new paradigm, we analyze the convergence of several mutation-adaptive algorithms: a binary genetic algorithm, the 1/5 success rule evolution strategy, a continuous (1+1) evolutionary algorithm, and a dynamic (1+1) evolutionary algorithm.
APA, Harvard, Vancouver, ISO, and other styles
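
The following Python sketch shows a (1+1) evolution strategy with the classical 1/5 success rule for step-size adaptation, one of the mutation-adaptive schemes the entry analyses. The benchmark function, adaptation interval, and scaling constants are illustrative choices, not those of the paper.

import random

def sphere(x):
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim=10, iterations=2000):
    x = [random.uniform(-5, 5) for _ in range(dim)]
    sigma, successes = 1.0, 0
    for t in range(1, iterations + 1):
        child = [xi + sigma * random.gauss(0, 1) for xi in x]
        if sphere(child) <= sphere(x):      # offspring replaces parent only if not worse
            x, successes = child, successes + 1
        if t % 50 == 0:                     # every 50 steps, adapt sigma by the 1/5 rule
            rate = successes / 50
            sigma *= 1.5 if rate > 1 / 5 else 0.82
            successes = 0
    return sphere(x)

random.seed(1)
print(one_plus_one_es())   # final objective value close to zero
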
8

Belazi, Akram, Héctor Migallón, Daniel Gónzalez-Sánchez, Jorge Gónzalez-García, Antonio Jimeno-Morenilla, and José-Luis Sánchez-Romero. "Enhanced Parallel Sine Cosine Algorithm for Constrained and Unconstrained Optimization". Mathematics 10, no. 7 (April 3, 2022): 1166. http://dx.doi.org/10.3390/math10071166.

Full text
Abstract:
The main idea of the sine cosine algorithm (SCA) is a sine- and cosine-based oscillation away from, or towards, the best solution. The first main contribution of this paper is an enhanced version of the SCA algorithm, called the ESCA algorithm. The supremacy of the proposed algorithm over a set of state-of-the-art algorithms in terms of solution accuracy and convergence speed is demonstrated by experimental tests. When these algorithms are transferred to the business sector, they must meet time requirements dependent on the industrial process. If these temporal requirements are not met, an efficient solution is to speed them up by designing parallel algorithms. The second major contribution of this work is the design of several parallel algorithms for efficiently exploiting current multicore processor architectures. First, one-level synchronous and asynchronous parallel ESCA algorithms are designed. They have two merits: they retain the proposed algorithm's behavior and they provide excellent parallel performance by combining coarse-grained with fine-grained parallelism. Moreover, the parallel scalability of the proposed algorithms is further improved by employing a two-level parallel strategy. Indeed, the experimental results suggest that the one-level parallel ESCA algorithms reduce the computing time, on average, by 87.4% and 90.8%, respectively, using 12 physical processing cores. The two-level parallel algorithms provide extra reductions of the computing time by 91.4%, 93.1%, and 94.5% with 16, 20, and 24 processing cores, including physical and logical cores. The comparison analysis is carried out on 30 unconstrained benchmark functions and three challenging engineering design problems. The experimental outcomes show that the proposed ESCA algorithm behaves outstandingly well in terms of exploration and exploitation behaviors, local optima avoidance, and convergence speed toward the optimum. The overall performance of the proposed algorithm is statistically validated using three non-parametric statistical tests, namely the Friedman, Friedman aligned, and Quade tests.
APA, Harvard, Vancouver, ISO, and other styles
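
A minimal sketch of the core sine cosine algorithm (SCA) update may make the entry easier to follow. The Python code below implements the standard sine/cosine position update towards the best solution found so far; the enhanced ESCA variant and its parallel versions described in the paper are not reproduced, and all parameter values are illustrative.

import math, random

def sca(objective, dim=10, agents=20, max_iter=500, lower=-10.0, upper=10.0, a=2.0):
    X = [[random.uniform(lower, upper) for _ in range(dim)] for _ in range(agents)]
    best = min(X, key=objective)[:]          # copy of the best position found so far
    best_val = objective(best)
    for t in range(max_iter):
        r1 = a - t * a / max_iter            # oscillation amplitude shrinks linearly
        for x in X:
            for j in range(dim):
                r2 = random.uniform(0, 2 * math.pi)
                r3 = random.uniform(0, 2)
                step = r1 * (math.sin(r2) if random.random() < 0.5 else math.cos(r2))
                x[j] += step * abs(r3 * best[j] - x[j])
                x[j] = min(max(x[j], lower), upper)   # clamp to the search bounds
        candidate = min(X, key=objective)
        if objective(candidate) < best_val:
            best, best_val = candidate[:], objective(candidate)
    return best_val

random.seed(0)
print(sca(lambda x: sum(v * v for v in x)))   # small value on the sphere function
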
9

Luan, Yuxuan, Junjiang He, Jingmin Yang, Xiaolong Lan, and Geying Yang. "Uniformity-Comprehensive Multiobjective Optimization Evolutionary Algorithm Based on Machine Learning". International Journal of Intelligent Systems 2023 (November 10, 2023): 1–21. http://dx.doi.org/10.1155/2023/1666735.

Full text
Abstract:
When solving real-world optimization problems, the uniformity of Pareto fronts is an essential strategy in multiobjective optimization problems (MOPs). However, it is a common challenge for many existing multiobjective optimization algorithms, due to the skewed distribution of solutions and biases towards specific objective functions. This paper proposes a uniformity-comprehensive multiobjective optimization evolutionary algorithm based on machine learning to address this limitation. Our algorithm utilizes uniform initialization and a self-organizing map (SOM) to enhance population diversity and uniformity. We track the IGD value and use K-means and CNN refinement with crossover and mutation techniques during the evolutionary stages. The superiority of our algorithm in uniformity and objective-function balance was verified through comparative analysis with 13 other algorithms, including eight traditional multiobjective optimization algorithms, three machine-learning-based enhanced multiobjective optimization algorithms, and two algorithms with objective initialization improvements. Based on these comprehensive experiments, our algorithm outperforms the other existing algorithms in these areas.
APA, Harvard, Vancouver, ISO, and other styles
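
Since the entry mentions tracking the IGD value, here is a minimal Python sketch of the inverted generational distance: the average distance from each reference Pareto point to its nearest point in the approximation. The reference front and approximation below are made-up toy data.

import math

def igd(reference_front, approximation):
    """Average distance from each reference point to its nearest approximation point."""
    return sum(min(math.dist(r, a) for a in approximation) for r in reference_front) / len(reference_front)

reference = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx = [(0.1, 0.9), (0.9, 0.1)]
print(round(igd(reference, approx), 4))   # smaller values mean better coverage of the front
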
10

SHAH, I. "DIRECT ALGORITHMS FOR FINDING MINIMAL UNSATISFIABLE SUBSETS IN OVER-CONSTRAINED CSPs". International Journal on Artificial Intelligence Tools 20, n. 01 (febbraio 2011): 53–91. http://dx.doi.org/10.1142/s0218213011000036.

Testo completo
Abstract (sommario):
In many situations, an explanation of the reasons behind inconsistency in an overconstrained CSP is required. This explanation can be given in terms of minimal unsatisfiable subsets (MUSes) of constraints. This paper presents algorithms for finding minimal unsatisfiable subsets (MUSes) of constraints in overconstrained CSPs with finite domains and binary constraints. The approach followed is to generate subsets in the subset space, test them for consistency and record the inconsistent subsets found. We present three algorithms as variations of this basic approach. Each algorithm generates subsets in the subset space in a different order and curtails search by employing various search pruning mechanisms. The proposed algorithms are anytime algorithms: a time limit can be set on an algorithm's search and the algorithm can be made to find a subset of MUSes. Experimental evaluation of the proposed algorithms demonstrates that they perform two to three orders of magnitude better than the existing indirect algorithms. Furthermore, the algorithms are able to find MUSes in large CSP benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
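
As a rough illustration of MUS extraction (though not of the specific subset-space algorithms proposed in the paper), the following Python sketch shrinks an over-constrained toy CSP to one minimal unsatisfiable subset by deletion: each constraint is dropped in turn and left out only if the remainder stays unsatisfiable. Constraints are modelled as predicates and unsatisfiability is checked by brute-force enumeration, which is only feasible for tiny domains.

from itertools import product

def satisfiable(constraints, domains):
    return any(all(c(assignment) for c in constraints)
               for assignment in product(*domains))

def shrink_to_mus(constraints, domains):
    mus = list(constraints)
    for c in list(mus):
        rest = [d for d in mus if d is not c]
        if not satisfiable(rest, domains):   # c is not needed to keep the conflict
            mus = rest
    return mus

# Toy over-constrained CSP on variables x, y with domain {0, 1}.
domains = [(0, 1), (0, 1)]
cs = [lambda a: a[0] != a[1],     # x != y
      lambda a: a[0] == a[1],     # x == y   (conflicts with the first)
      lambda a: a[0] == 0]        # x == 0   (irrelevant to the conflict)
print(len(shrink_to_mus(cs, domains)))   # 2: the first two constraints form a MUS
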

Theses on the topic "Algorithms"

1

Yarmolskyy, Oleksandr. "Využití distribuovaných a stochastických algoritmů v síti". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-370918.

Full text
Abstract:
This thesis deals with distributed and stochastic algorithms, including testing their convergence in networks. The theoretical part briefly describes the above-mentioned algorithms, including their classification, problems, advantages and disadvantages. Furthermore, two distributed algorithms and two stochastic algorithms are chosen. The practical part compares their speed of convergence on various network topologies in Matlab.
APA, Harvard, Vancouver, ISO, and other styles
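
A distributed averaging (consensus) iteration is a standard example of the kind of algorithm whose convergence on different topologies such a thesis can test. The Python sketch below runs it on a small ring network; the topology, step size, and values are illustrative assumptions.

import numpy as np

def consensus(values, neighbours, steps=100, alpha=0.3):
    x = np.array(values, dtype=float)
    for _ in range(steps):
        new_x = x.copy()
        for i, nbrs in enumerate(neighbours):
            # each node moves towards the average of its neighbours' current values
            new_x[i] += alpha * sum(x[j] - x[i] for j in nbrs) / len(nbrs)
        x = new_x
    return x

ring = [(4, 1), (0, 2), (1, 3), (2, 4), (3, 0)]    # 5-node ring topology
print(consensus([1.0, 5.0, 3.0, 7.0, 9.0], ring))  # all entries approach the mean 5.0
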
2

Crocomo, Márcio Kassouf. "Algoritmo de otimização bayesiano com detecção de comunidades". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-23012013-160605/.

Full text
Abstract:
ALGORITMOS de Estimação de Distribuição (EDAs) compõem uma frente de pesquisa em Computação Evolutiva que tem apresentado resultados promissores para lidar com problemas complexos de larga escala. Nesse contexto, destaca-se o Algoritmo de Otimização Bayesiano (BOA) que usa um modelo probabilístico multivariado (representado por uma rede Bayesiana) para gerar novas soluções a cada iteração. Baseado no BOA e na investigação de algoritmos de detecção de estrutura de comunidades (para melhorar os modelos multivariados construídos), propõe-se dois novos algoritmos denominados CD-BOA e StrOp. Mostra-se que ambos apresentam vantagens significativas em relação ao BOA. O CD-BOA mostra-se mais flexível que o BOA, ao apresentar uma maior robustez a variações dos valores de parâmetros de entrada, facilitando o tratamento de uma maior diversidade de problemas do mundo real. Diferentemente do CD-BOA e BOA, o StrOp mostra que a detecção de comunidades a partir de uma rede Bayesiana pode modelar mais adequadamente problemas decomponíveis, reestruturando-os em subproblemas mais simples, que podem ser resolvidos por uma busca gulosa, resultando em uma solução para o problema original que pode ser ótima no caso de problemas perfeitamente decomponíveis, ou uma aproximação, caso contrário. Também é proposta uma nova técnica de reamostragens para EDAs (denominada REDA). Essa técnica possibilita a obtenção de modelos probabilísticos mais representativos, aumentando significativamente o desempenho do CD-BOA e StrOp. De uma forma geral, é demonstrado que, para os casos testados, CD-BOA e StrOp necessitam de um menor tempo de execução do que o BOA. Tal comprovação é feita tanto experimentalmente quanto por análise das complexidades dos algoritmos. As características principais desses algoritmos são avaliadas para a resolução de diferentes problemas, mapeando assim suas contribuições para a área de Computação Evolutiva
ESTIMATION of Distribution Algorithms represent a research area which is showing promising results, especially in dealing with complex large scale problems. In this context, the Bayesian Optimization Algorithm (BOA) uses a multivariate model (represented by a Bayesian network) to find new solutions at each iteration. Based on BOA and in the study of community detection algorithms (to improve the constructed multivariate models), two new algorithms are proposed, named CD-BOA and StrOp. This paper indicates that both algorithms have significant advantages when compared to BOA. The CD-BOA is shown to be more flexible, being more robust when using different input parameters, what makes it easier to deal with a greater diversity of real-world problems. Unlike CD-BOA and BOA, StrOp shows that the detection of communities on a Bayesian network more adequately models decomposable problems, resulting in simpler subproblems that can be solved by a greedy search, resulting in a solution to the original problem which may be optimal in the case of perfectly decomposable problems, or a fair approximation if not. Another proposal is a new resampling technique for EDAs (called REDA). This technique results in multivariate models that are more representative, significantly improving the performance of CD-BOA and StrOp. In general, it is shown that, for the scenarios tested, CD-BOA and StrOp require lower running time than BOA. This indication is done experimentally and by the analysis of the computational complexity of the algorithms. The main features of these algorithms are evaluated for solving various problems, thus identifying their contributions to the field of Evolutionary Computation
APA, Harvard, Vancouver, ISO, and other styles
3

Davidsdottir, Agnes. "Algorithms, Turing machines and algorithmic undecidability". Thesis, Uppsala universitet, Algebra och geometri, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-441282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tikhonenko, Dmitrii. "Managing algorithmic drivers in a blocked-lane scenario". Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/670829.

Full text
Abstract:
Due to the emergence of new technologies, algorithm-assisted drivers are close to becoming a reality. In this thesis, different aspects of managing such drivers in a blocked-lane scenario are discussed. The first chapter presents an algorithm for the optimal merging of self-interested drivers. The optimal policy can include undesirable velocity oscillations. We propose measures for a central planner to eradicate them, and we test the efficiency of our algorithm versus popular heuristic policies. In the second chapter, a mechanism for positional bidding of the drivers is developed. It allows trading of highway positions of the drivers with heterogeneous time valuations, resulting in a socially beneficial outcome. The final chapter presents a deep learning policy for centralized clearing of the bottleneck in the shortest time. Its use is fast enough to allow future operational applications, and a training set consists of globally optimal merging policies.
En aquesta tesi, es discuteixen diferents aspectes de la gestió els conductors assistits per algoritmes en un escenari de carril bloquejat. El primer capítol presenta un algorisme de la gestió òptima dels conductors egoistes. La política òptima pot incloure oscil.lacions de velocitat no desitjades. Proposem mesures per a un planificador central per erradicar-les i comprovem l’eficiència del nostre algoritme enfront de les polítiques heurístiques populars. En el segon capítol, es desenvolupa un mecanisme per a la licitació posicional dels conductors. Permet negociar posicions per carretera dels conductors amb valoracions de temps heterogènies, donant lloc a un resultat socialment beneficiós. El capítol final presenta una política d’aprenentatge profund per a l’aclariment centralitzat del coll d’ampolla en el menor temps possible. El seu ús és prou ràpid per permetre futures aplicacions operatives, i un conjunt de formació consisteix en polítiques de fusió `òptimes a nivell mundial.
APA, Harvard, Vancouver, ISO, and other styles
5

Sauerland, Volkmar [Verfasser]. "Algorithm Engineering for some Complex Practise Problems : Exact Algorithms, Heuristics and Hybrid Evolutionary Algorithms / Volkmar Sauerland". Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/1026442745/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Aguiar, Marilton Sanchotene de. "Análise formal da complexidade de algoritmos genéticos". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1998. http://hdl.handle.net/10183/25941.

Full text
Abstract:
O objetivo do trabalho é estudar a viabilidade de tratar problemas de otimização, considerados intratáveis, através de Algoritmos Genéticos, desenvolvendo critérios para a avaliação qualitativa de um Algoritmo Genético. Dentro deste tema, abordam-se estudos sobre complexidade, classes de problemas, análise e desenvolvimento de algoritmos e Algoritmos Genéticos, este ultimo sendo objeto central do estudo. Como produto do estudo deste tema, é proposto um método de desenvolvimento de Algoritmos Genéticos, utilizando todo o estudo formal de tipos de problemas, desenvolvimento de algoritmos aproximativos e análise da complexidade. O fato de um problema ser teoricamente resolvível por um computador não é suficiente para o problema ser na prática resolvível. Um problema é denominado tratável se no pior caso possui um algoritmo razoavelmente eficiente. E um algoritmo é dito razoavelmente eficiente quando existe um polinômio p tal que para qualquer entrada de tamanho n o algoritmo termina com no máximo p(n) passos [SZW 84]. Já que um polinômio pode ser de ordem bem alta, então um algoritmo de complexidade polinomial pode ser muito ineficiente. Genéticos é que se pode encontrar soluções aproximadas de problemas de grande complexidade computacional mediante um processo de evolução simulada[LAG 96]. Como produto do estudo deste tema, é proposto um método de desenvolvimento de Algoritmos Genéticos com a consciência de qualidade, utilizando todo o estudo formal de tipos de problemas, desenvolvimento de algoritmos aproximativos e análise da complexidade. Uma axiomatização tem o propósito de dar a semântica do algoritmo, ou seja, ela define, formalmente, o funcionamento do algoritmo, mais especificamente das funções e procedimentos do algoritmo. E isto, possibilita ao projetista de algoritmos uma maior segurança no desenvolvimento, porque para provar a correção de um Algoritmo Genético que satisfaça esse modelo só é necessário provar que os procedimentos satisfazem os axiomas. Para ter-se consciência da qualidade de um algoritmo aproximativo, dois fatores são relevantes: a exatidão e a complexidade. Este trabalho levanta os pontos importantes para o estudo da complexidade de um Algoritmo Genético. Infelizmente, são fatores conflitantes, pois quanto maior a exatidão, pior ( mais alta) é a complexidade, e vice-versa. Assim, um estudo da qualidade de um Algoritmo Genético, considerado um algoritmo aproximativo, só estaria completa com a consideração destes dois fatores. Mas, este trabalho proporciona um grande passo em direção do estudo da viabilidade do tratamento de problemas de otimização via Algoritmos Genéticos.
The objective of this work is to study the viability of treating optimization problems that are considered intractable through Genetic Algorithms, developing criteria for the qualitative evaluation of a Genetic Algorithm. Within this theme, the areas covered are complexity, classes of problems, analysis and development of algorithms, and Genetic Algorithms, the last being the central object of the study. As a product of this study, a method for developing Genetic Algorithms is proposed, using the whole formal study of types of problems, the development of approximation algorithms, and complexity analysis. The fact that a problem is theoretically solvable by a computer is not enough for it to be solvable in practice. A problem is called tractable if in the worst case it possesses a reasonably efficient algorithm, and an algorithm is said to be reasonably efficient when there exists a polynomial p such that for any input of size n the algorithm terminates in at most p(n) steps [SZW 84]. Since a polynomial can be of very high order, an algorithm of polynomial complexity can still be very inefficient. The premise of Genetic Algorithms is that one can find approximate solutions to problems of great computational complexity by means of a process of simulated evolution [LAG 96]. As a product of this study, a method for developing Genetic Algorithms with quality awareness is proposed, using the whole formal study of types of problems, the development of approximation algorithms, and complexity analysis. The axiom set has the purpose of giving the semantics of the algorithm; in other words, it formally defines the operation of the algorithm, more specifically of its functions and procedures. This gives the algorithm designer greater confidence during development, because in order to prove the correctness of a Genetic Algorithm that satisfies this model it is only necessary to prove that the procedures satisfy the axioms. To be aware of the quality of an approximation algorithm, two factors are relevant: accuracy and complexity. This work raises the important points for the study of the complexity of a Genetic Algorithm. Unfortunately, these are conflicting factors, because the greater the accuracy, the worse (higher) the complexity, and vice versa. Thus, a study of the quality of a Genetic Algorithm, considered an approximation algorithm, would only be complete with the consideration of these two factors. Nevertheless, this work provides a great step towards studying the viability of treating optimization problems via Genetic Algorithms.
APA, Harvard, Vancouver, ISO, and other styles
7

Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.

Full text
Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Mavris, Dimitri; Committee Member: Beeson, Don; Committee Member: Duncan, Scott; Committee Member: German, Brian; Committee Member: Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
8

Violich, Stephen Scott. "Fusing Loopless Algorithms for Combinatorial Generation". Thesis, University of Canterbury. Computer Science and Software Engineering, 2006. http://hdl.handle.net/10092/1075.

Full text
Abstract:
Loopless algorithms are an interesting challenge in the field of combinatorial generation. These algorithms must generate each combinatorial object from its predecessor in no more than a constant number of instructions, thus achieving theoretically minimal time complexity. This constraint rules out powerful programming techniques such as iteration and recursion, which makes loopless algorithms harder to develop and less intuitive than other algorithms. This thesis discusses a divide-and-conquer approach by which loopless algorithms can be developed more easily and intuitively: fusing loopless algorithms. If a combinatorial generation problem can be divided into subproblems, it may be possible to conquer it looplessly by fusing loopless algorithms for its subproblems. A key advantage of this approach is that it allows existing loopless algorithms to be reused. This approach is not novel, but it has not been generalised before. This thesis presents a general framework for fusing loopless algorithms and discusses its implications. It then applies this approach to two combinatorial generation problems and presents two new loopless algorithms. The first new algorithm, MIXPAR, looplessly generates well-formed parenthesis strings comprising two types of parentheses. It is the first loopless algorithm for generating these objects. The second new algorithm, MULTPERM, generates multiset permutations in linear space using only arrays, a benchmark recently set by Korsh and LaFollette (2004). Algorithm MULTPERM is evaluated against Korsh and LaFollette's algorithm, and shown to be simpler and more efficient in both space and time.
APA, Harvard, Vancouver, ISO, and other styles
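
To make the notion of looplessness concrete, the sketch below is a loopless binary Gray code generator using the well-known focus-pointer technique (as in Bitner, Ehrlich, and Reingold): each successive bit string is obtained from its predecessor with a constant number of operations. It is unrelated to the MIXPAR and MULTPERM algorithms of the thesis and is shown only as a classic example of the paradigm.

def gray_codes(n):
    a = [0] * n                  # current bit string
    f = list(range(n + 1))       # focus pointers
    yield tuple(a)               # the tuple copy is only for collecting output
    while True:
        j = f[0]
        f[0] = 0
        if j == n:
            return
        f[j] = f[j + 1]
        f[j + 1] = j + 1
        a[j] ^= 1                # flip exactly one bit per step, found in O(1)
        yield tuple(a)

print(list(gray_codes(3)))       # 8 strings; consecutive ones differ in a single bit
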
9

Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries, which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k × k submatrix is 2(1 + o(1))√(log n/k) with high probability (w.h.p.) when k = O(log n/ log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n/k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1))√(2 log n/k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor √2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n/k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows. To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n/k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* := 5√2/(3√3) ≈ 1.3608… ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of the algorithmic hardness: no polynomial time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n/k), when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c.
We observe that every maximum size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem. We solve this underlying large deviation problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M and for the case when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds both on the sample complexity and computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm is two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
by Quan Li.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
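
For the Large Submatrix Selection part of the entry, the following Python sketch implements an LAS-style alternating greedy search: repeatedly pick the k rows with the largest sums over the current columns, then the k columns with the largest sums over those rows, until the selection stops changing. The matrix size, k, and stopping rule are illustrative, and this is not claimed to reproduce the thesis's analysis.

import numpy as np

def las(A, k, rng):
    n = A.shape[0]
    cols = rng.choice(n, size=k, replace=False)
    rows = np.array([], dtype=int)
    while True:
        new_rows = np.argsort(A[:, cols].sum(axis=1))[-k:]   # k rows with largest sums
        new_cols = np.argsort(A[new_rows, :].sum(axis=0))[-k:]  # then k best columns
        if set(new_rows) == set(rows) and set(new_cols) == set(cols):
            break
        rows, cols = new_rows, new_cols
    return A[np.ix_(rows, cols)].mean()

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
print(las(A, k=3, rng=rng))   # noticeably above 0, the average of a random 3x3 block
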
10

Astete Morales, Sandra. "Contributions to Convergence Analysis of Noisy Optimization Algorithms". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS327/document.

Full text
Abstract:
Cette thèse montre des contributions à l'analyse d'algorithmes pour l'optimisation de fonctions bruitées. Les taux de convergences (regret simple et regret cumulatif) sont analysés pour les algorithmes de recherche linéaire ainsi que pour les algorithmes de recherche aléatoires. Nous prouvons que les algorithmes basé sur la matrice hessienne peuvent atteindre le même résultat que certaines algorithmes optimaux, lorsque les paramètres sont bien choisis. De plus, nous analysons l'ordre de convergence des stratégies évolutionnistes pour des fonctions bruitées. Nous déduisons une convergence log-log. Nous prouvons aussi une borne basse pour le taux de convergence de stratégies évolutionnistes. Nous étendons le travail effectué sur les mécanismes de réévaluations en les appliquant au cas discret. Finalement, nous analysons la mesure de performance en elle-même et prouvons que l'utilisation d'une mauvaise mesure de performance peut mener à des résultats trompeurs lorsque différentes méthodes d'optimisation sont évaluées
This thesis presents contributions to the analysis of algorithms for noisy functions. It presents convergence rates for line-search algorithms as well as for random search algorithms. We prove, in terms of Simple Regret and Cumulative Regret, that a Hessian-based algorithm can reach the same results as some optimal algorithms in the literature when its parameters are tuned correctly. On the other hand, we analyse the convergence order of Evolution Strategies when solving noisy functions and deduce log-log convergence. We also give a lower bound for the convergence rate of Evolution Strategies. We extend the work on re-evaluation by applying it to a discrete setting. Finally, we analyse the performance measure itself and prove that the use of an erroneous performance measure can lead to misleading results in the evaluation of different methods.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Algorithms"

1

Smith, Jeffrey Dean. Design and analysis of algorithms. Boston: PWS-KENT Pub. Co., 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Designing efficient algorithms for parallel computers. New York: McGraw-Hill, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Designing efficient algorithms for parallel computers. Maidenhead: McGraw, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chang, Lin, William D. Chey, John Kellow, Jan Tack, and William E. Whitehead, eds. Rome IV Diagnostic Algorithms. Raleigh, NC USA: The Rome Foundation, 2016. http://dx.doi.org/10.24890/algorithms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Griffiths, P. 1947 Oct. 29-, and Hill, I. D. 1926-, eds. Applied statistics algorithms. Chichester: Published by E. Horwood for the Royal Statistical Society, London, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rapanotti, L. Algorithm engineering for integral and dynamic problems. Amsterdam: Gordon & Breach, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass.: Addison-Wesley Pub. Co., 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass.: Addison-Wesley Pub. Co., 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Van Gelder, Allen, ed. Computer algorithms: Introduction to design and analysis. 3rd ed. Delhi: Pearson Education, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ebers, Martin, and Marta Cantero Gamito, eds. Algorithmic Governance and Governance of Algorithms. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-50559-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Algorithms"

1

Taillard, Éric D. "Elements of Graphs and Complexity Theory". In Design of Heuristic Algorithms for Hard Optimization, 3–29. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13714-3_1.

Full text
Abstract:
This chapter recalls some elements and definitions from graph theory and complexity theory. On the one hand, basic algorithmic courses very often include graph algorithms, and some of these algorithms have simply been transposed to solve difficult optimization problems in a heuristic way. On the other hand, it is important to be able to determine whether a problem falls into the category of difficult problems: one will not develop a heuristic algorithm if there is an efficient algorithm for finding an exact solution. Another objective of this chapter is to make the book self-contained.
APA, Harvard, Vancouver, ISO, and other styles
2

Lalov, Boyan. "Algorithms Creating Algorithms". In Artificial Neural Networks – ICANN 2010, 303–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15825-4_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Leiserson, Charles E. "Algorithmic analysis of multithreaded algorithms". In Algorithms and Computation, 132. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63890-3_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bez, Helmut, and Tony Croft. "Quantum algorithms 2: Simon's algorithm". In Quantum Computation, 333–42. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003264569-23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Galil, Zvi. "Recent progress in string algorithms". In Algorithms, 1. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52921-7_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pippenger, Nicholas. "Selection networks". In Algorithms, 2–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52921-7_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nagamochi, Hiroshi, and Toshihide Ibaraki. "Computing edge-connectivity in multiple and capacitated graphs". In Algorithms, 12–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52921-7_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Imai, Hiroshi, and Kazuo Iwano. "Efficient sequential and parallel algorithms for planar minimum cost flow". In Algorithms, 21–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52921-7_52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Watanabe, Osamu, and Seinosuke Toda. "Structural analyses on the complexity of inverting functions". In Algorithms, 31–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52921-7_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Allender, Eric. "Oracles versus proof techniques that do not relativize". In Algorithms, 39–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52921-7_54.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference proceedings on the topic "Algorithms"

1

Sambasivan, Lokesh Kumar, Joydeb Mukherjee, and Dinkar Mylaraswamy. "Benchmarking Diagnostic Algorithms". In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-28194.

Full text
Abstract:
The problem of fault diagnosis has gained considerable significance in a cost-conscious aerospace industry. This has resulted in the development of novel methods as well as novel approaches. Following a cost-benefit analysis, wherein a diagnostic algorithm has to buy its way onto an aircraft, the practitioner is faced with the difficult problem of choosing the best algorithm from among many candidates. Despite the appeal of multiple-algorithm solutions, practical applications are limited by engineering resources: the practitioner is forced to pick a few among many. Specifically, this paper addresses this important engineering decision-making step. Our approach to evaluating diagnostic algorithms is based on two key elements: non-recurring engineering cost and recurring engineering cost. Corresponding to each of these criteria, we define metrics. Since development data is a major cost element, non-recurring engineering cost is derived using a metric that measures how well an algorithm has used this data. Recurring cost is measured with respect to the algorithm's robustness and hence the cost associated with sustaining it. Further, we outline procedures for calculating these metrics, making minimal assumptions regarding algorithm internals, allowing the practitioner to evaluate both in-house and third-party algorithms. The utility of this benchmarking procedure is illustrated using two sets of examples. One of them is a standard vowel recognition problem, while the second one is related to gas turbine diagnosis. For each of these problems, we evaluate a series of candidate algorithms and illustrate the utility of the proposed approach to filter out weak ones. Concluding sections discuss the use of these procedures for exiting technical feasibility and entering engineering feasibility on the technology readiness level (TRL).
APA, Harvard, Vancouver, ISO, and other styles
2

Perju, Veaceslav, and Dorian Saranciuc. "Evaluation of the Multi-Algorithms Targets Recognition Systems". In 12th International Conference on Electronics, Communications and Computing. Technical University of Moldova, 2022. http://dx.doi.org/10.52326/ic-ecco.2022/cs.05.

Full text
Abstract:
This paper presents the evaluation results for new classes of target recognition systems: multi-algorithm unimodal systems and multi-algorithm multimodal systems. The structures and graphs of the systems are described. Mathematical descriptions and formulas are presented for evaluating a system's cost as a function of the algorithms' recognition probability and of the relation between the cost of the algorithms' software and the system's hardware. An approach is proposed to determine the cost of a system for an established threshold level of the system's recognition probability. The relation of the system's cost to the system's recognition probability for different values of the algorithms' recognition probability is evaluated, as well as the rating of the target recognition systems based on their recognition probabilities and costs.
APA, Harvard, Vancouver, ISO, and other styles
3

McCormick, N. J., and Z. Tao. "Algorithms for bioluminescence estimation". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.thff3.

Full text
Abstract:
Two algorithms are compared for estimating the spatial dependence of bioluminescence using in situ measurements of the irradiance and scalar irradiance. The conservation algorithm is based on the conservation principle of radiative transfer, but it requires a separate estimation of the difficult-to-measure absorption coefficient; the two-stream algorithm incorporates additional equations to estimate the absorption coefficient and the albedo of single scattering. The algorithms have been numerically tested with fluctuations in measurements simulated by a random sampling approach. The effectiveness of the algorithms depends upon the strength of the bioluminescent source compared to the background sea-surface illumination and other factors.
APA, Harvard, Vancouver, ISO, and other styles
4

Badi, Manjulata, Sheila Mahapatra, and Saurav Raj. "A SOLUTION OF RPM FOR ACTIVE AND REACTIVE LOADING COMPENSATION". In TOPICS IN INTELLIGENT COMPUTING AND INDUSTRY DESIGN (ICID). Volkson Press, 2022. http://dx.doi.org/10.26480/icpesd.02.2022.135.140.

Full text
Abstract:
A solution for Reactive Power Management (RPM) considering both active and reactive power loading is obtained with the Butterfly Optimization Algorithm (BOA) hybridized into the combined Butterfly Optimization Algorithm-Grey Wolf Optimization (BOA-GWO) algorithm. The objective of the research is to optimize the operating cost and power loss of the optimal reactive power solution with the proposed combined algorithm, and a comparative account with other algorithms shows the proposed algorithm's superiority. The best result obtained is a real power cost reduction of 3.5% and a percentage loss of 3.493%, validated against the state-of-the-art literature. The proposed technique delivers superior results when compared with other established algorithms in the literature.
APA, Harvard, Vancouver, ISO, and other styles
5

Showalter, Mark, Dennis Hong, and Daniel Larimer. "Development and Comparison of Gait Generation Algorithms for Hexapedal Robots Based on Kinematics With Considerations for Workspace". In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49616.

Full text
Abstract:
This paper explores the interdependence of the walking algorithm and limb workspace for the Multi-Appendage Robotic System (MARS). While MARS is a hexapedal robot, the tasks of defining the workspace and walking algorithm for all six limbs can be abstracted to a single limb using the constraint of a tripedal statically stable gait. Thus, by understanding the behavior of an individual limb, two walking algorithms have been developed which allow MARS to walk on level terrain. Both algorithms are adaptive in that they continuously update based on control inputs. The difference between the two algorithms is that they were developed for different limb workspaces. The simpler algorithm, developed for a 2D workspace, was implemented, resulting in smooth gait generation with near-instantaneous response to control input. This accomplishment demonstrates the feasibility of implementing a more sophisticated algorithm which allows for inputs of x and y velocity, walking height, yaw, pitch, and roll. This algorithm uses a 3D workspace developed to afford near maximum step length.
APA, Harvard, Vancouver, ISO, and other styles
6

Jesus, Alexandre D., Arnaud Liefooghe, Bilel Derbel, and Luís Paquete. "Algorithm selection of anytime algorithms". In GECCO '20: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3377930.3390185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Maiti, Arnab, and Palash Dey. "Parameterized Algorithms for Kidney Exchange". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/58.

Full text
Abstract:
In kidney exchange programs, multiple patient-donor pairs each of whom are otherwise incompatible, exchange their donors to receive compatible kidneys. The Kidney Exchange problem is typically modelled as a directed graph where every vertex is either an altruistic donor or a pair of patient and donor; directed edges are added from a donor to its compatible patients. The computational task is to find if there exists a collection of disjoint cycles and paths starting from altruistic donor vertices of length at most l_c and l_p respectively that covers at least some specific number t of non-altruistic vertices (patients). We study parameterized algorithms for the kidney exchange problem in this paper. Specifically, we design FPT algorithms parameterized by each of the following parameters: (1) the number of patients who receive kidney, (2) treewidth of the input graph + max{l_p, l_c}, and (3) the number of vertex types in the input graph when l_p <= l_c. We also present interesting algorithmic and hardness results on the kernelization complexity of the problem. Finally, we present an approximation algorithm for an important special case of Kidney Exchange.
APA, Harvard, Vancouver, ISO, and other styles
8

Bulavintsev, Vadim, and Dmitry Zhdanov. "Method for Adaptation of Algorithms to GPU Architecture". In 31th International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-930-941.

Full text
Abstract:
We propose a generalized method for adapting and optimizing algorithms for efficient execution on modern graphics processing units (GPUs). The method consists of several steps. First, build a control flow graph (CFG) of the algorithm. Next, transform the CFG into a tree of loops and merge non-parallelizable loops into parallelizable ones. Finally, map the resulting loop tree to the tree of GPU computational units, unrolling the algorithm's loops as necessary for the match. The mapping should be performed bottom-up, from the lowest GPU architecture levels to the highest ones, to minimize off-chip memory access and maximize register file usage. The method provides the programmer with a convenient and robust mental framework and strategy for GPU code optimization. We demonstrate the method by adapting to a GPU the DPLL backtracking search algorithm for solving the Boolean satisfiability problem (SAT). The resulting GPU version of DPLL outperforms the CPU version in raw tree search performance sixfold for regular Boolean satisfiability problems and twofold for irregular ones.
APA, Harvard, Vancouver, ISO, and other styles
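
Since the entry uses DPLL as its running example, here is a minimal recursive CPU-side DPLL sketch in Python with just unit propagation and branching; the GPU mapping described in the paper is not shown, and the clause encoding (positive/negative integers for literals) is a common convention chosen for the example.

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                                         # unit propagation to a fixed point
        changed = False
        remaining = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                                   # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                                # conflict: clause falsified
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
            remaining.append(clause)
        clauses = remaining
    if not clauses:
        return assignment                                  # every clause satisfied
    var = next(abs(l) for c in clauses for l in c if abs(l) not in assignment)
    for value in (True, False):                            # branch on an unassigned variable
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))                   # prints a satisfying assignment
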
9

Sun, Tao, Dongsheng Li, Zhe Quan, Hao Jiang, Shengguo Li, and Yong Dou. "Heavy-ball Algorithms Always Escape Saddle Points". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/488.

Full text
Abstract:
Nonconvex optimization algorithms with random initialization have attracted increasing attention recently. It has been shown that many first-order methods always avoid saddle points with random starting points. In this paper, we answer a question: can nonconvex heavy-ball algorithms with random initialization avoid saddle points? The answer is yes. Directly using the existing proof technique for heavy-ball algorithms is hard because each iteration of the heavy-ball algorithm involves both the current and the previous point, so it is impossible to formulate the algorithm as an iteration of the form x_{k+1} = g(x_k) under some mapping g. To this end, we design a new mapping on a new space. With some transformations, the heavy-ball algorithm can be interpreted as iterations of this mapping. Theoretically, we prove that heavy-ball gradient descent enjoys a larger stepsize than gradient descent for escaping saddle points. The heavy-ball proximal point algorithm is also considered; we prove that this algorithm, too, can always escape saddle points.
APA, Harvard, Vancouver, ISO, and other styles
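
A minimal heavy-ball (momentum) gradient descent sketch may help fix the iteration the entry refers to: the update x_{k+1} = x_k - lr*grad(x_k) + beta*(x_k - x_{k-1}) depends on both the current and the previous point. The quadratic objective, stepsize, and momentum value below are illustrative choices only.

import numpy as np

def heavy_ball(grad, x0, lr=0.1, beta=0.7, iters=200):
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        # gradient step plus a momentum term that remembers the previous displacement
        x_next = x - lr * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

A = np.diag([1.0, 10.0])                       # ill-conditioned quadratic f(x) = x^T A x / 2
grad = lambda x: A @ x
print(heavy_ball(grad, np.array([5.0, 5.0])))  # both coordinates approach 0
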
10

Ghosh, Anjan, and Paparao Palacharla. "Efficient optical preprocessing using split-step algorithms". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.tudd2.

Full text
Abstract:
A new and efficient split-step polynomial preconditioning algorithm is discussed. In this algorithm, basic polynomial preconditioning iterations are repeated several times, each time with an improved matrix. With minor modifications this algorithm can be used for matrix inversion and condition number estimation. The split-step algorithms consist of two nested iterative loops involving matrix multiplications and thus are attractive for parallel optical processors. The details of realizing split-step algorithms on three different optical matrix multiplier architectures are outlined. The condition number after m outer-loop iterations is given by 1/{1 − [1 − 1/C(A)]^(pm)}, where p is the number of inner-loop iterations and C(A) is the initial condition number [1]. This strategy reduces the condition number approximately at the rate O(p^(−m)), much faster than other preconditioning algorithms. The results of numerical experiments on the split-step algorithm applied to a case study of an optimum phased-array antenna design problem show that this new preconditioning algorithm is a viable tool for improving the performance of analog optical processors. Preliminary analyses of the split-step algorithm demonstrate its robustness with regard to the spatial errors and noise present in most optical processors. Using these split-step algorithms, an interconnected optical processor capable of computing solutions with a specified accuracy can be designed.
APA, Harvard, Vancouver, ISO, and other styles

Rapporti di organizzazioni sul tema "Algorithms"

1

Marty, Frédéric, e Thierry Warin. Deciphering Algorithmic Collusion: Insights from Bandit Algorithms and Implications for Antitrust Enforcement. CIRANO, dicembre 2023. http://dx.doi.org/10.54932/iwpg7510.

Testo completo
Abstract (sommario):
This paper examines algorithmic collusion from legal and economic perspectives, highlighting the growing role of algorithms in digital markets and their potential for anti-competitive behavior. Using bandit algorithms, traditionally applied to decision-making under uncertainty, as a model, we illuminate the dynamics of implicit collusion without overt communication. Legally, the challenge is discerning and classifying these algorithmic signals, especially as unilateral communications. Economically, distinguishing between rational pricing and collusive patterns becomes intricate when decisions are algorithm-driven. The paper emphasizes the imperative for competition authorities to identify unusual market behaviors, hinting at shifting the burden of proof to firms that use algorithmic pricing. Balancing algorithmic transparency and collusion prevention is crucial. While regulations might address these concerns, they could also hinder algorithmic development. As this form of collusion becomes central in antitrust, understanding it through models like bandit algorithms is vital, since such algorithms may converge quickly towards an anticompetitive equilibrium.
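To make the modelling idea concrete, the toy below has two independent epsilon-greedy bandit pricers repeatedly setting a price in a symmetric duopoly; the demand split, unit cost, and parameters are invented for the example. Without any communication the pair typically settles well above marginal cost; whether and how fast such dynamics reach genuinely collusive levels is precisely what models of this kind are used to study.

```python
import random

random.seed(0)
PRICES = [1.0, 1.5, 2.0, 2.5]            # marginal cost is 1.0, so 1.0 is "competitive"

def profit(own, rival):
    # Toy demand: the cheaper firm captures most of the market.
    share = 0.5 if own == rival else (0.8 if own < rival else 0.2)
    return (own - 1.0) * share

def run(episodes=50000, eps=0.05):
    q = [[0.0] * len(PRICES) for _ in range(2)]
    counts = [[0] * len(PRICES) for _ in range(2)]
    for _ in range(episodes):
        acts = [random.randrange(len(PRICES)) if random.random() < eps
                else max(range(len(PRICES)), key=lambda a: q[i][a])
                for i in range(2)]
        rewards = [profit(PRICES[acts[0]], PRICES[acts[1]]),
                   profit(PRICES[acts[1]], PRICES[acts[0]])]
        for i in range(2):
            counts[i][acts[i]] += 1
            q[i][acts[i]] += (rewards[i] - q[i][acts[i]]) / counts[i][acts[i]]
    return [PRICES[max(range(len(PRICES)), key=lambda a: q[i][a])] for i in range(2)]

print(run())   # the prices each bandit ends up favouring, typically well above cost
```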
Gli stili APA, Harvard, Vancouver, ISO e altri
2

Tipton, Kelley, Brian F. Leas, Emilia Flores, Christopher Jepson, Jaya Aysola, Jordana Cohen, Michael Harhay et al. Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare. Agency for Healthcare Research and Quality (AHRQ), dicembre 2023. http://dx.doi.org/10.23970/ahrqepccer268.

Testo completo
Abstract (sommario):
Objectives. To examine the evidence on whether and how healthcare algorithms (including algorithm-informed decision tools) exacerbate, perpetuate, or reduce racial and ethnic disparities in access to healthcare, quality of care, and health outcomes, and examine strategies that mitigate racial and ethnic bias in the development and use of algorithms. Data sources. We searched published and grey literature for relevant studies published between January 2011 and February 2023. Based on expert guidance, we determined that earlier articles are unlikely to reflect current algorithms. We also hand-searched reference lists of relevant studies and reviewed suggestions from experts and stakeholders. Review methods. Searches identified 11,500 unique records. Using predefined criteria and dual review, we screened and selected studies to assess one or both Key Questions (KQs): (1) the effect of algorithms on racial and ethnic disparities in health and healthcare outcomes and (2) the effect of strategies or approaches to mitigate racial and ethnic bias in the development, validation, dissemination, and implementation of algorithms. Outcomes of interest included access to healthcare, quality of care, and health outcomes. We assessed studies’ methodologic risk of bias (ROB) using the ROBINS-I tool and piloted an appraisal supplement to assess racial and ethnic equity-related ROB. We completed a narrative synthesis and cataloged study characteristics and outcome data. We also examined four Contextual Questions (CQs) designed to explore the context and capture insights on practical aspects of potential algorithmic bias. CQ 1 examines the problem’s scope within healthcare. CQ 2 describes recently emerging standards and guidance on how racial and ethnic bias can be prevented or mitigated during algorithm development and deployment. CQ 3 explores stakeholder awareness and perspectives about the interaction of algorithms and racial and ethnic disparities in health and healthcare. We addressed these CQs through supplemental literature reviews and conversations with experts and key stakeholders. For CQ 4, we conducted an in-depth analysis of a sample of six algorithms that have not been widely evaluated before in the published literature to better understand how their design and implementation might contribute to disparities. Results. Fifty-eight studies met inclusion criteria, of which three were included for both KQs. One study was a randomized controlled trial, and all others used cohort, pre-post, or modeling approaches. The studies included numerous types of clinical assessments: need for intensive care or high-risk care management; measurement of kidney or lung function; suitability for kidney or lung transplant; risk of cardiovascular disease, stroke, lung cancer, prostate cancer, postpartum depression, or opioid misuse; and warfarin dosing. We found evidence suggesting that algorithms may: (a) reduce disparities (i.e., revised Kidney Allocation System, prostate cancer screening tools); (b) perpetuate or exacerbate disparities (e.g., estimated glomerular filtration rate [eGFR] for kidney function measurement, cardiovascular disease risk assessments); and/or (c) have no effect on racial or ethnic disparities. Algorithms for which mitigation strategies were identified are included in KQ 2. 
We identified six types of strategies often used to mitigate the potential of algorithms to contribute to disparities: removing an input variable; replacing a variable; adding one or more variables; changing or diversifying the racial and ethnic composition of the patient population used to train or validate a model; creating separate algorithms or thresholds for different populations; and modifying the statistical or analytic techniques used by an algorithm. Most mitigation efforts improved proximal outcomes (e.g., algorithmic calibration) for targeted populations, but it is more challenging to infer or extrapolate effects on longer term outcomes, such as racial and ethnic disparities. The scope of racial and ethnic bias related to algorithms and their application is difficult to quantify, but it clearly extends across the spectrum of medicine. Regulatory, professional, and corporate stakeholders are undertaking numerous efforts to develop standards for algorithms, often emphasizing the need for transparency, accountability, and representativeness. Conclusions. Algorithms have been shown to potentially perpetuate, exacerbate, and sometimes reduce racial and ethnic disparities. Disparities were reduced when race and ethnicity were incorporated into an algorithm to intentionally tackle known racial and ethnic disparities in resource allocation (e.g., kidney transplant allocation) or disparities in care (e.g., prostate cancer screening that historically led to Black men receiving more low-yield biopsies). It is important to note that in such cases the rationale for using race and ethnicity was clearly delineated and did not conflate race and ethnicity with ancestry and/or genetic predisposition. However, when algorithms include race and ethnicity without clear rationale, they may perpetuate the incorrect notion that race is a biologic construct and contribute to disparities. Finally, some algorithms may reduce or perpetuate disparities without containing race and ethnicity as an input. Several modeling studies showed that applying algorithms out of context of original development (e.g., illness severity scores used for crisis standards of care) could perpetuate or exacerbate disparities. On the other hand, algorithms may also reduce disparities by standardizing care and reducing opportunities for implicit bias (e.g., Lung Allocation Score for lung transplantation). Several mitigation strategies have been shown to potentially reduce the contribution of algorithms to racial and ethnic disparities. Results of mitigation efforts are highly context specific, relating to unique combinations of algorithm, clinical condition, population, setting, and outcomes. Important future steps include increasing transparency in algorithm development and implementation, increasing diversity of research and leadership teams, engaging diverse patient and community groups in the development to implementation lifecycle, promoting stakeholder awareness (including patients) of potential algorithmic risk, and investing in further research to assess the real-world effect of algorithms on racial and ethnic disparities before widespread implementation.
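As a deliberately artificial illustration (not a clinical tool), the sketch below encodes two of the six strategy types listed above: removing an input variable from a risk score, and applying group-specific decision thresholds. Every variable name, coefficient, and threshold is invented for the example.

```python
# Toy illustration of two mitigation strategy types; all names, coefficients, and
# thresholds are invented and have no clinical meaning.
def risk_score(p, use_group_term=True):
    score = 0.02 * p["age"] + 0.5 * p["biomarker"]
    if use_group_term:
        score += 0.3 * p["group_indicator"]  # the input variable a developer might remove
    return score

def flag_high_risk(p, thresholds):
    # Strategy "remove an input variable": score without the group term.
    score = risk_score(p, use_group_term=False)
    # Strategy "separate thresholds for different populations": per-group cut-offs
    # calibrated (elsewhere) so the flag performs comparably across groups.
    return score >= thresholds[p["group"]]

patient = {"age": 60, "biomarker": 2.1, "group": "B", "group_indicator": 1}
print(flag_high_risk(patient, thresholds={"A": 2.0, "B": 1.9}))  # True
```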
Gli stili APA, Harvard, Vancouver, ISO e altri
3

Johansen, Richard A., Christina L. Saltus, Molly K. Reif e Kaytee L. Pokrzywinski. A Review of Empirical Algorithms for the Detection and Quantification of Harmful Algal Blooms Using Satellite-Borne Remote Sensing. U.S. Army Engineer Research and Development Center, giugno 2022. http://dx.doi.org/10.21079/11681/44523.

Testo completo
Abstract (sommario):
Harmful Algal Blooms (HABs) continue to be a global concern, especially since predicting bloom events, including their intensity, extent, and geographic location, remains difficult. However, remote sensing platforms are useful tools for monitoring HABs across space and time. The main objective of this review was to explore the scientific literature and develop a near-comprehensive list of spectrally derived empirical algorithms for satellite imagers commonly utilized for the detection and quantification of HABs and water quality indicators. This review identified 29 WorldView-2 MSI algorithms, 25 Sentinel-2 MSI algorithms, 32 Landsat-8 OLI algorithms, 9 MODIS algorithms, and 64 MERIS/Sentinel-3 OLCI algorithms. It also revealed that most empirical algorithms fall into one of the following general forms: two-band difference algorithm (2BDA), three-band difference algorithm (3BDA), normalized-difference chlorophyll index (NDCI), or cyanobacterial index (CI). New empirical algorithm development appears to be constrained, at least in part, by the limited number of HAB-associated spectral features detectable with currently operational imagers. However, these algorithms provide a foundation for future algorithm development as new sensors, technologies, and platforms emerge.
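For concreteness, the snippet below computes the two simplest forms named above, a two-band ratio (2BDA) and the NDCI, on small reflectance arrays. The red (~665 nm) and red-edge (~705 nm) band choices follow the usual NDCI formulation, but exact bands and coefficients vary by sensor and study, so treat them as assumptions.

```python
import numpy as np

def two_band_ratio(red_edge, red):
    """2BDA-style chlorophyll-a proxy: simple red-edge / red band ratio."""
    return red_edge / red

def ndci(red_edge, red):
    """Normalized Difference Chlorophyll Index: (RE - R) / (RE + R)."""
    return (red_edge - red) / (red_edge + red)

# Fake 2x2 reflectance rasters standing in for Sentinel-2 B4 (~665 nm) and B5 (~705 nm).
b4 = np.array([[0.030, 0.025], [0.040, 0.020]])
b5 = np.array([[0.045, 0.024], [0.080, 0.019]])
print(ndci(b5, b4))            # higher values flag likely bloom pixels
print(two_band_ratio(b5, b4))
```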
Gli stili APA, Harvard, Vancouver, ISO e altri
4

Baader, Franz, e Rafael Peñaloza. Axiom Pinpointing in General Tableaux. Aachen University of Technology, 2007. http://dx.doi.org/10.25368/2022.159.

Testo completo
Abstract (sommario):
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableaux algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
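The report develops a glass-box approach that instruments the tableau procedure itself. As a complementary illustration only, the sketch below computes one minimal axiom set (MinA) for a consequence with the standard black-box linear-deletion method, treating the reasoner as an oracle; the toy entailment check is an invented stand-in for a DL reasoner.

```python
def minimal_axiom_set(axioms, goal, entails):
    """Return a subset-minimal set of axioms that still entails the goal."""
    assert entails(axioms, goal), "the full knowledge base must entail the goal"
    kept = list(axioms)
    for ax in list(axioms):
        trial = [a for a in kept if a != ax]
        if entails(trial, goal):          # axiom not needed for this consequence
            kept = trial
    return kept

# Toy "reasoner": axioms are implications (p, q) read as p -> q; the goal (s, t)
# holds if t is reachable from s.
def entails(axioms, goal):
    src, dst = goal
    reachable, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        for p, q in axioms:
            if p == node and q not in reachable:
                reachable.add(q)
                frontier.append(q)
    return dst in reachable

kb = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("X", "Y")]
print(minimal_axiom_set(kb, ("A", "D"), entails))   # [('A', 'C'), ('C', 'D')]
```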
Gli stili APA, Harvard, Vancouver, ISO e altri
5

Baader, Franz, e Rafael Peñaloza. Pinpointing in Terminating Forest Tableaux. Technische Universität Dresden, 2008. http://dx.doi.org/10.25368/2022.166.

Testo completo
Abstract (sommario):
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableau algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
Gli stili APA, Harvard, Vancouver, ISO e altri
6

Arthur, Jennifer Ann. Genetic Algorithms. Office of Scientific and Technical Information (OSTI), agosto 2017. http://dx.doi.org/10.2172/1375151.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
7

Sahni, Sartaj. Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, giugno 1999. http://dx.doi.org/10.21236/ada369856.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
8

Vazirani, Umesh. Quantum Algorithms. Fort Belvoir, VA: Defense Technical Information Center, gennaio 2013. http://dx.doi.org/10.21236/ada579025.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
9

Housley, R. Guidelines for Cryptographic Algorithm Agility and Selecting Mandatory-to-Implement Algorithms. RFC Editor, novembre 2015. http://dx.doi.org/10.17487/rfc7696.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
10

Cordeiro de Amorim, Renato. A survey on feature weighting based K-Means algorithms. Web of Open Science, dicembre 2020. http://dx.doi.org/10.37686/ser.v1i2.79.

Testo completo
Abstract (sommario):
In a real-world data set there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means-based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature weighting mechanisms based on K-Means.
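One classic feature-weighting scheme in this line of work (in the spirit of W-k-means) alternates the usual K-Means steps with a weight update driven by each feature's within-cluster dispersion. The sketch below is a minimal version of that idea; the exponent beta, the initialisation, and the synthetic data are assumptions chosen for the example.

```python
import numpy as np

def weighted_kmeans(X, k, beta=2.0, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)]
    w = np.full(d, 1.0 / d)                              # feature weights, sum to 1
    for _ in range(iters):
        # Assignment step with feature-weighted squared Euclidean distance.
        dist = (((X[:, None, :] - centers[None, :, :]) ** 2) * w ** beta).sum(-1)
        labels = dist.argmin(1)
        # Centre update (keep the old centre if a cluster empties out).
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
        # Weight update: low within-cluster dispersion -> high weight.
        D = sum(((X[labels == j] - centers[j]) ** 2).sum(0) for j in range(k)) + 1e-12
        w = D ** (-1.0 / (beta - 1))
        w /= w.sum()
    return labels, centers, w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0, 0], [1, 1, 6], (50, 3)),    # clusters separated in
               rng.normal([5, 5, 0], [1, 1, 6], (50, 3))])   # features 1-2; feature 3 is noise
labels, centers, w = weighted_kmeans(X, 2)
print(w)   # the noisy, uninformative third feature receives a small weight
```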
Gli stili APA, Harvard, Vancouver, ISO e altri