Academic literature on the topic 'Algorithems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Algorithems"

1

Tian, Xin Cheng, and Xiao Hong Deng. "Trajectory Interpolation in CNC Grinding of Indexable Inserts." Advanced Materials Research 97-101 (March 2010): 2007–10. http://dx.doi.org/10.4028/www.scientific.net/amr.97-101.2007.

Full text
Abstract:
Most of the commonly adopted interpolation algorithms are for curve machining on CNC machine tools with a Cartesian coordinate configuration. For a CNC machine tool with a non-Cartesian configuration, a new trajectory interpolation algorithm must be developed to machine complex part surfaces. Based on an analysis of the geometric characteristics and the CNC grinding principle of indexable inserts, this paper proposes two interpolation algorithms to grind the nose surface of indexable inserts with constant or variable back angle. A precision analysis of the algorithms is also presented.
APA, Harvard, Vancouver, ISO, and other styles
2

Morsy, H., M. El-khatib, W. Hussein, and M. Mahgoub. "INVESTIGATION OF MODERN CONTROL ALGORITHEMS IN MECHATRONIC SYSTEM." International Conference on Applied Mechanics and Mechanical Engineering 15, no. 15 (May 1, 2012): 1–20. http://dx.doi.org/10.21608/amme.2012.37057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ye, Jun. "Applying Immune Algorithems to the Calculation of Sound Insulation of Walls." Applied Mechanics and Materials 584-586 (July 2014): 1853–57. http://dx.doi.org/10.4028/www.scientific.net/amm.584-586.1853.

Full text
Abstract:
Building walls play a key role in noise isolation. Since walls contain many openings for construction equipment, pipes, and lines, determining the maximum permissible area of wall openings for a given expected sound insulation is an important problem. A calculation model is established with an immune algorithm: the expected sound insulation is defined as the objective function, and the areal density, thickness, and Young's modulus of a monolayer wall are defined as bounded variables. The global maximum of the objective function is obtained with a MATLAB program, which in turn determines the materials, thickness, and construction details that achieve the required sound insulation.
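The paper's MATLAB model is not reproduced here, but the general shape of an immune (clonal-selection) optimiser over bounded variables can be sketched. The objective below is a hypothetical stand-in, not the authors' sound-insulation function, and all parameters are illustrative:

```python
import random

def clonal_selection(objective, bounds, pop=20, clones=5, gens=60, seed=0):
    """Minimal clonal-selection (immune) optimiser for a bounded maximisation
    problem: a sketch of the class of immune algorithm the abstract describes."""
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=objective, reverse=True)
        elite = population[: pop // 2]
        offspring = []
        for rank, antibody in enumerate(elite):
            # Better antibodies are cloned with smaller mutations (hypermutation).
            step = 0.1 * (rank + 1) / len(elite)
            for _ in range(clones):
                clone = [min(hi, max(lo, x + rng.gauss(0, step * (hi - lo))))
                         for x, (lo, hi) in zip(antibody, bounds)]
                offspring.append(clone)
        population = sorted(elite + offspring, key=objective, reverse=True)[:pop]
    return max(population, key=objective)

# Toy stand-in for an insulation objective with its maximum at (2.0, 0.5).
best = clonal_selection(lambda v: -(v[0] - 2.0) ** 2 - (v[1] - 0.5) ** 2,
                        [(0.0, 5.0), (0.0, 1.0)])
```

In the paper the bounded variables would be the wall's areal density, thickness, and Young's modulus rather than this toy pair.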
APA, Harvard, Vancouver, ISO, and other styles
4

Murtuza, Syed. "A Concis Presentation of Supiervised Learing Algorithems for Feedforward Neural Netwoks." IFAC Proceedings Volumes 27, no. 9 (August 1994): 91–94. http://dx.doi.org/10.1016/s1474-6670(17)45902-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

LIU, Yanfang. "Application of integration algorithems for elasto-plasticity constitutive model for anisotropic sheet materials." Chinese Journal of Mechanical Engineering (English Edition) 19, no. 04 (2006): 554. http://dx.doi.org/10.3901/cjme.2006.04.554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Duvvada, Rajeswara Rao, Shaik Ayesha Fathima, and Shaik Noorjahan. "A Comprehensive Analysis and Design of Land Cover Usage from Satellite Images using Machine Learning Algorithems." Journal of Visual Language and Computing 2022, no. 1 (July 5, 2022): 25–35. http://dx.doi.org/10.18293/jvlc2022-n1-014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Devi, Kapila, and Saroj Ratnoo. "Cluster analysis of socio-economic factors and academic performance of school students." Indonesian Journal of Electrical Engineering and Computer Science 31, no. 3 (September 1, 2023): 1568. http://dx.doi.org/10.11591/ijeecs.v31.i3.pp1568-1577.

Full text
Abstract:
The objective of this paper is to examine students' academic performance vis-a-vis socio-economic factors using cluster analysis. Grades obtained in the 10th class are taken as the measure of academic performance, and variables such as gender, caste, and parental education and occupation are considered socio-economic indicators. Three clustering algorithms are employed; K-medoids performs best in validation, forming groupings with high intra-cluster homogeneity and inter-cluster heterogeneity. The cluster analysis yields two interesting groups of students: one dominated by students of the general category and the other by students of the scheduled caste category. Appropriate statistical tests are then applied to determine the factors that differ significantly between the two clusters. The analysis shows that caste, parents' education and occupation, and family income differentiate the two groups. However, no significant difference could be established between the academic performance of the two groups at the 5% significance level. The research carried out in this paper may inform policies to bridge the gap in educational attainment for students from deprived sections of society.
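As a rough illustration of the K-medoids step the abstract relies on (a generic sketch, not the authors' implementation or data), a naive PAM-style loop on toy points:

```python
import random

def k_medoids(points, k, iters=50, seed=0):
    """Naive k-medoids sketch: medoids are actual data points, which is part of
    what makes the method robust for mixed socio-economic indicators."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    medoids = rng.sample(points, k)
    for _ in range(iters):
        # Assign every point to its nearest medoid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, medoids[i]))].append(p)
        # Each cluster's new medoid minimises the total distance within it.
        new = [min(c, key=lambda m: sum(dist(m, p) for p in c)) if c else medoids[i]
               for i, c in enumerate(clusters)]
        if new == medoids:
            break
        medoids = new
    return medoids

# Two well-separated toy groups stand in for the student clusters.
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
meds = k_medoids(pts, 2)
```

A real study would use a mixed-type distance (e.g. Gower) for categorical variables such as caste and occupation; squared Euclidean distance here is a simplifying assumption.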
APA, Harvard, Vancouver, ISO, and other styles
8

Gangavane, Ms H. N. "A Comparison of ABK-Means Algorithm with Traditional Algorithms." International Journal of Trend in Scientific Research and Development Volume-1, Issue-4 (June 30, 2017): 614–21. http://dx.doi.org/10.31142/ijtsrd2197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gościniak, Ireneusz, and Krzysztof Gdawiec. "Visual Analysis of Dynamics Behaviour of an Iterative Method Depending on Selected Parameters and Modifications." Entropy 22, no. 7 (July 2, 2020): 734. http://dx.doi.org/10.3390/e22070734.

Full text
Abstract:
A large group of algorithms described in the literature iteratively find solutions of a given equation, and most of them require tuning. The article presents root-finding algorithms based on the Newton–Raphson method that likewise iteratively find solutions and require tuning. The modification of the algorithm incorporates the best position of a particle, similarly to particle swarm optimisation algorithms. The proposed approach allows visualising the impact of the algorithm's elements on its complex behaviour. Moreover, instead of the standard Picard iteration, various feedback iteration processes are used in this research. The presented examples and the accompanying discussion help the reader understand the influence of the proposed modifications on the algorithm's behaviour, which can in turn be helpful when applying them to other algorithms. The obtained images also have potential artistic applications.
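A minimal sketch of two of the ingredients discussed: the Newton–Raphson step and a Mann-type feedback iteration replacing the plain Picard iteration (alpha = 1 recovers Picard). The test function z³ − 1 and the starting point are illustrative assumptions, not taken from the article:

```python
def newton_step(z):
    # One Newton-Raphson step for f(z) = z**3 - 1 in the complex plane.
    return z - (z ** 3 - 1) / (3 * z ** 2)

def iterate(z, alpha=1.0, tol=1e-10, max_iter=100):
    """Picard iteration when alpha = 1; otherwise a Mann-type feedback
    iteration z_{n+1} = (1 - alpha) * z_n + alpha * T(z_n)."""
    for n in range(max_iter):
        z_next = (1 - alpha) * z + alpha * newton_step(z)
        if abs(z_next - z) < tol:
            return z_next, n
        z = z_next
    return z, max_iter

root, _ = iterate(complex(-1, 1))
```

Colouring each starting point by which root it reaches, and how fast, produces the polynomiographic images the article studies.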
APA, Harvard, Vancouver, ISO, and other styles
10

Sun, Yuqin, Songlei Wang, Dongmei Huang, Yuan Sun, Anduo Hu, and Jinzhong Sun. "A multiple hierarchical clustering ensemble algorithm to recognize clusters arbitrarily shaped." Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1211–28. http://dx.doi.org/10.3233/ida-216112.

Full text
Abstract:
As a research hotspot in ensemble learning, clustering ensembles obtain robust and highly accurate algorithms by integrating multiple base clustering algorithms. Most existing clustering ensemble algorithms use linear clustering algorithms as the base clusterings. As a typical unsupervised learning technique, clustering has difficulty properly defining the accuracy of its findings, which makes it hard to significantly enhance the performance of the final algorithm. In this article, the AGglomerative NESting method is used to build base clusters, and an integration strategy for combining multiple AGglomerative NESting clusterings is proposed. The algorithm has three main steps: evaluating the credibility of labels, producing multiple base clusterings, and constructing the relations among clusters. The proposed algorithm builds on the original advantages of AGglomerative NESting and further compensates for its inability to identify arbitrarily shaped clusters. Comparisons with existing clustering algorithms on different datasets establish the proposed algorithm's superior clustering performance.
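The AGglomerative NESting (AGNES) base step can be sketched as plain bottom-up clustering with single linkage. This is a generic textbook illustration of the base clusterer, not the ensemble algorithm the article proposes:

```python
def agnes(points, k):
    """Bottom-up AGglomerative NESting sketch with single linkage: start from
    singleton clusters and repeatedly merge the two closest clusters."""
    clusters = [[p] for p in points]
    d2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    link = lambda c1, c2: min(d2(a, b) for a in c1 for b in c2)
    while len(clusters) > k:
        # Find the pair of clusters with the smallest single-linkage distance.
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

groups = agnes([(0, 0), (0, 1), (5, 5), (5, 6)], 2)
```

The article's contribution sits on top of this: running many such base clusterings and integrating them through label credibility and inter-cluster relations.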
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Algorithems"

1

Saadane, Sofiane. "Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30203/document.

Full text
Abstract:
In this thesis, we study several stochastic algorithms, and we open the manuscript with general background and historical results to frame our work. We first study a bandit algorithm due to Narendra and Shapiro, whose objective is to determine, among several sources, which one is the most profitable for the user without spending too much time testing the underperforming ones. Our goal is first to understand the structural weaknesses of this algorithm and then to propose an optimal procedure for the regret, the standard measure of a bandit algorithm's performance. We propose an algorithm called over-penalized NS that achieves a minimax-optimal regret bound through a fine study of the underlying stochastic algorithm. A second contribution gives rates of convergence for the process appearing in the study of the convergence in law of the over-penalized NS algorithm. The particularity of this algorithm is that it converges in law not to a diffusion, as most stochastic algorithms do, but to a non-diffusive jump process, which makes the study of convergence to equilibrium more technical; we employ a coupling technique to study it.
The second part of the thesis concerns the optimisation of a function by means of a stochastic algorithm. We study a stochastic version of the deterministic heavy-ball method with friction, whose particularity is a dynamics that averages over the entire past of its trajectory. The procedure relies on a so-called memory function which, depending on the form it takes, produces interesting behaviours; two types of memory turn out to be relevant, exponential and polynomial. We first establish convergence results in the general case where the function to be minimised is non-convex. For strongly convex functions, we obtain convergence rates that are optimal in a sense we make precise. Finally, this part ends with a convergence-in-law result for the suitably renormalised process in the exponential-memory case. The third part concerns McKean-Vlasov processes, which were introduced by Anatoly Vlasov and first studied by Henry McKean to model the distribution of plasma. Our objective is to propose a stochastic algorithm capable of approximating the invariant measure of the process. Methods for approximating an invariant measure are known for diffusions and some other processes, but the McKean-Vlasov process is not a linear diffusion: like the heavy-ball process, it has memory. We therefore develop an alternative method, introducing the notion of asymptotic pseudo-trajectories in order to obtain an efficient procedure.
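The heavy-ball dynamics with exponential memory discussed in the second part can be sketched, in a deterministic toy form, as a gradient iteration whose velocity is an exponentially weighted average of past gradients. The quadratic objective and all parameters are hypothetical, chosen only for illustration:

```python
def heavy_ball(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Heavy-ball sketch with exponential memory: the velocity is an
    exponentially weighted average of all past gradients."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + (1 - beta) * grad(x)   # exponential memory of the past
        x = x - lr * v
    return x

# Minimise the strongly convex f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_star = heavy_ball(lambda x: 2 * (x - 3), x0=0.0)
```

The thesis studies the stochastic counterpart of this scheme, where the gradient is observed with noise, and also a polynomial weighting of the past instead of the exponential one used here.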
APA, Harvard, Vancouver, ISO, and other styles
2

Corbineau, Marie-Caroline. "Proximal and interior point optimization strategies in image recovery." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC085/document.

Full text
Abstract:
Inverse problems in image processing can be solved by diverse techniques, such as classical variational methods, recent deep learning approaches, or Bayesian strategies. Although they rely on different principles, these methods all require efficient optimization algorithms. The proximity operator is a crucial tool in many iterative solvers for nonsmooth optimization problems. In this thesis, we illustrate the versatility of proximal algorithms by incorporating them within each of the aforementioned resolution methods. First, we consider a variational formulation including a set of constraints and a composite objective function. We present PIPA, a novel proximal interior point algorithm for solving the considered optimization problem. This algorithm includes variable metrics for acceleration purposes. We derive convergence guarantees for PIPA and show in numerical experiments that it compares favorably with state-of-the-art algorithms in two challenging image processing applications. In a second part, we investigate a neural network architecture called iRestNet, obtained by unfolding a proximal interior point algorithm over a fixed number of iterations. iRestNet requires the expression of the proximity operator of the logarithmic barrier and of its first derivatives, which we provide for three useful types of constraints. We then derive conditions under which this optimization-inspired architecture is robust to an input perturbation. We conduct several image deblurring experiments in which iRestNet performs well with respect to a variational approach and to state-of-the-art deep learning methods. The last part of this thesis focuses on a stochastic sampling method for solving inverse problems in a Bayesian setting. We present an accelerated proximal unadjusted Langevin algorithm called PP-ULA. This scheme is incorporated into a hybrid Gibbs sampler used to perform joint deconvolution and segmentation of ultrasound images. PP-ULA employs the majorize-minimize principle to address non-log-concave priors. As shown in numerical experiments, PP-ULA leads to a significant reduction in execution time and to very satisfactory deconvolution and segmentation results on both simulated and real ultrasound data.
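The proximity operator at the core of such methods can be illustrated with the classic soft-thresholding operator and a forward-backward (proximal gradient) loop. This is a generic textbook sketch, not PIPA, iRestNet, or PP-ULA; the problem instance is an assumption for illustration:

```python
def prox_l1(v, t):
    """Proximity operator of x -> t * ||x||_1, i.e. componentwise
    soft-thresholding: the standard nonsmooth building block."""
    return [x - t if x > t else x + t if x < -t else 0.0 for x in v]

def proximal_gradient(b, lam, lr=0.5, steps=100):
    """Minimise 0.5 * ||x - b||^2 + lam * ||x||_1 by forward-backward
    splitting: a gradient step on the smooth term, then the prox of the
    nonsmooth term."""
    x = [0.0] * len(b)
    for _ in range(steps):
        grad = [xi - bi for xi, bi in zip(x, b)]
        x = prox_l1([xi - lr * gi for xi, gi in zip(x, grad)], lr * lam)
    return x

# For this separable problem the minimiser is soft-thresholding of b itself.
x_hat = proximal_gradient(b=[3.0, -0.5, 0.2], lam=1.0)
```

Interior point variants such as PIPA replace the L1 prox with the proximity operator of a logarithmic barrier enforcing the constraints.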
APA, Harvard, Vancouver, ISO, and other styles
3

Harris, Steven C. "A genetic algorithm for robust simulation optimization." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178645751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Alkindy, Bassam. "Combining approaches for predicting genomic evolution." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2012/document.

Full text
Abstract:
In bioinformatics, understanding how DNA molecules have evolved over time remains an open and complex problem. Algorithms have been proposed to solve it, but they are limited either to the evolution of a single character (for example, a specific nucleotide) or, conversely, focus on large nuclear genomes (several billion base pairs) that have undergone multiple recombination events; the problem is NP-complete when the set of all possible operations on these sequences is considered, and no general solution currently exists. In this thesis, we tackle the reconstruction of ancestral DNA sequences by focusing on nucleotide chains of intermediate size that have experienced relatively little recombination over time: chloroplast genomes. We show that at this scale the ancestral reconstruction problem can be solved, even when considering the set of all complete chloroplast genomes currently available. We focus specifically on ancestral gene order and content, as well as on the technical problems this reconstruction raises in the case of chloroplasts. We show how to obtain a prediction of the coding sequences of sufficient quality to allow the reconstruction, and then how to obtain a phylogenetic tree in agreement with the largest possible number of genes, on which we can then base our reconstruction back in time, the latter step being finalized. These methods, which combine already available tools (whose quality we assessed) with high-performance computing, artificial intelligence, and biostatistics, were applied to a collection of more than 450 chloroplast genomes.
APA, Harvard, Vancouver, ISO, and other styles
5

Astete, morales Sandra. "Contributions to Convergence Analysis of Noisy Optimization Algorithms." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS327/document.

Full text
Abstract:
This thesis presents contributions to the analysis of algorithms for the optimisation of noisy functions. Convergence rates, in terms of simple regret and cumulative regret, are analysed for line-search algorithms as well as for random-search algorithms. We prove that a Hessian-based algorithm can reach the same results as some optimal algorithms in the literature when its parameters are tuned correctly. We also analyse the convergence order of evolution strategies when solving noisy functions and deduce log-log convergence, together with a lower bound for the convergence rate of evolution strategies. We extend the work on revaluation mechanisms by applying them to a discrete setting. Finally, we analyse the performance measure itself and prove that the use of an erroneous performance measure can lead to misleading results when different optimisation methods are evaluated.
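A toy version of the revaluation idea for noisy optimisation: a (1+1) evolution strategy that averages repeated evaluations to tame the noise and adapts its step size with a 1/5th-success-style rule. The noisy sphere objective and every parameter here are illustrative assumptions, not the thesis's setting:

```python
import random

def one_plus_one_es(noisy_f, x0, sigma=1.0, evals=30, steps=300, seed=1):
    """(1+1) evolution strategy on a noisy objective, with a simple
    revaluation scheme: each point is evaluated `evals` times and averaged."""
    rng = random.Random(seed)
    average = lambda x: sum(noisy_f(x, rng) for _ in range(evals)) / evals
    x = x0
    for _ in range(steps):
        y = x + rng.gauss(0, sigma)
        if average(y) < average(x):   # re-evaluate both before comparing
            x = y
            sigma *= 1.5              # success: widen the search
        else:
            sigma *= 0.9              # failure: contract
    return x

# Noisy sphere: f(x) = x^2 plus additive Gaussian noise.
x_best = one_plus_one_es(lambda x, rng: x * x + rng.gauss(0, 0.1), x0=5.0)
```

Averaging reduces the noise standard deviation by a factor of sqrt(evals), which is what allows the comparison-based selection to keep making progress near the optimum.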
APA, Harvard, Vancouver, ISO, and other styles
6

Glaudin, Lilian. "Stratégies multicouche, avec mémoire, et à métrique variable en méthodes de point fixe pour l'éclatement d'opérateurs monotones et l'optimisation." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS119.

Full text
Abstract:
Several apparently unrelated strategies coexist for implementing algorithms that solve monotone inclusions in Hilbert spaces. We propose a synthetic framework for fixed-point construction that captures various algorithmic approaches, clarifies and generalises their asymptotic behaviour, and allows the design of new iterative schemes for nonlinear analysis and convex optimisation. Our methodology, anchored on a model of compositions of averaged quasi-nonexpansive operators, allows us to advance the theory of fixed-point algorithms on several fronts and to impact their fields of application. Numerical examples are provided in the context of image restoration, where we propose a new viewpoint on the formulation of variational problems.
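The basic averaged fixed-point construction that such frameworks generalise is the Krasnosel'skii–Mann iteration, sketched here on a scalar nonexpansive map (the example map cos is an assumption for illustration):

```python
import math

def krasnoselskii_mann(T, x0, lam=0.5, iters=200):
    """Krasnosel'skii-Mann iteration x_{n+1} = (1 - lam) x_n + lam T(x_n):
    the averaged-operator fixed-point scheme underlying many splitting methods."""
    x = x0
    for _ in range(iters):
        x = (1 - lam) * x + lam * T(x)
    return x

# Fixed point of the nonexpansive map T(x) = cos(x), near 0.739085.
fp = krasnoselskii_mann(math.cos, 0.0)
```

In the thesis's setting, T would be a composition of averaged quasi-nonexpansive operators on a Hilbert space rather than a scalar map.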
APA, Harvard, Vancouver, ISO, and other styles
7

Fontaine, Allyx. "Analyses et preuves formelles d'algorithmes distribués probabilistes." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0091/document.

Full text
Abstract:
Probabilistic algorithms are simple to formulate, but their analysis can become very complex, especially in the field of distributed computing. We present algorithms, optimal in terms of bit complexity, that solve the MIS and maximal matching problems in rings and follow the same scheme. We develop a method that unifies the bit-complexity lower-bound results for the MIS, maximal matching, and colouring problems. The complexity of these analyses, which can easily lead to errors, together with the existence of many models depending on implicit assumptions, motivated us to formally model the probabilistic distributed algorithms corresponding to our model (message passing, anonymous, and synchronous), with the aim of formally proving the properties related to their analysis. For this purpose, we develop a library, called RDA, based on the Coq proof assistant.
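A Luby-style randomized MIS computation on an anonymous ring, written as a centralized simulation: each round, active nodes draw random values, local minima join the MIS, and winners silence their neighbours. This is a hedged stand-in for (not a reproduction of) the bit-optimal message-passing algorithms the thesis analyses:

```python
import random

def ring_mis(n, seed=3):
    """Randomized MIS sketch on an n-node ring. Each round, every active node
    draws a random value; nodes smaller than both active neighbours join the
    MIS, and they and their neighbours leave the active set."""
    rng = random.Random(seed)
    active = set(range(n))
    mis = set()
    while active:
        vals = {v: rng.random() for v in active}
        winners = {v for v in active
                   if all(vals.get((v + d) % n, float("inf")) > vals[v]
                          for d in (-1, 1))}
        mis |= winners
        for v in winners:
            active.discard(v)
            active.discard((v - 1) % n)
            active.discard((v + 1) % n)
    return mis

independent_set = ring_mis(12)
```

Each round removes at least the globally smallest active value and its neighbours, so the loop terminates; the bit-complexity analyses in the thesis quantify how few random bits per node suffice.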
APA, Harvard, Vancouver, ISO, and other styles
8

Dementiev, Roman. "Algorithm engineering for large data sets hardware, software, algorithms." Saarbrücken VDM, Müller, 2006. http://d-nb.info/986494429/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dementiev, Roman. "Algorithm engineering for large data sets : hardware, software, algorithms /." Saarbrücken : VDM-Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3029033&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms on the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR factorization. We show that, if the coefficients of the input polynomials are allowed to be arbitrary computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP over all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[θ], and Q[θ], where θ is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
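The exact (infinite-precision) Euclidean algorithm for polynomial GCD that the thesis stabilises can be sketched with exact rational arithmetic; the floating-point "stabilized" variant would replace `Fraction` with finite-precision numbers and zero-rewriting:

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Divide polynomials given as coefficient lists, highest degree first."""
    a, q = a[:], []
    while len(a) >= len(b):
        c = a[0] / b[0]
        q.append(c)
        for i in range(len(b)):
            a[i] -= c * b[i]
        a.pop(0)  # the leading coefficient is now exactly zero
    return q, a

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials with exact rational arithmetic:
    the 'exact' algorithm whose stabilized counterpart the thesis studies."""
    while any(c != 0 for c in b):
        _, r = poly_divmod(a, b)
        while r and r[0] == 0:   # strip exact leading zeros of the remainder
            r.pop(0)
        a, b = b, r
    return [c / a[0] for c in a]  # normalise to a monic polynomial

# gcd(x^2 - 1, x^2 - 2x + 1) = x - 1
g = poly_gcd([Fraction(1), Fraction(0), Fraction(-1)],
             [Fraction(1), Fraction(-2), Fraction(1)])
```

The subtlety the MCP captures is the line that strips leading zeros: in floating point, a coefficient that should be exactly zero is merely tiny, and only at sufficient precision does the stabilized run make the same strip/keep decisions as this exact one.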
by Pramook Khungurn.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
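The thesis abstract above concerns the Euclidean algorithm for univariate polynomial GCD under exact versus floating-point arithmetic. As a point of reference only (this is the textbook exact algorithm, not the thesis's stabilized variant; the coefficient-list representation is our own choice), a sketch in exact rational arithmetic:

```python
from fractions import Fraction

def strip(p):
    """Drop leading zero coefficients (highest-degree-first representation)."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_mod(a, b):
    """Remainder of polynomial division of a by b (b must be nonzero)."""
    a, b = strip(list(a)), strip(list(b))
    while len(a) >= len(b) and any(c != 0 for c in a):
        factor = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= factor * b[i]
        a.pop(0)  # the leading coefficient is now exactly zero
    return strip(a)

def poly_gcd(a, b):
    """Euclidean algorithm for univariate polynomial GCD, normalized to be monic."""
    a, b = strip(list(a)), strip(list(b))
    while any(c != 0 for c in b):
        a, b = b, poly_mod(a, b)
    return [c / a[0] for c in a]

# gcd of (x-1)(x+2) = x^2 + x - 2 and (x-1)(x-3) = x^2 - 4x + 3 is x - 1.
g = poly_gcd([Fraction(1), Fraction(1), Fraction(-2)],
             [Fraction(1), Fraction(-4), Fraction(3)])
```

Replacing `Fraction` with `float` makes the `c != 0` tests unreliable near cancellation, which is exactly the failure mode that stabilization and the minimum converging precision are meant to address.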

Books on the topic "Algorithems"

1

Smith, Jeffrey Dean. Design and analysis of algorithms. Boston: PWS-KENT Pub. Co., 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Berlinski, David. The advent of the algorithm: The 300 year journey from an idea to the computer. San Diego, Calif: Harcourt, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Berlinski, David. The advent of the algorithm: The idea that rules the world. New York: Harcourt, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Müller-Hannemann, Matthias, and Stefan Schirra. Algorithm engineering: Bridging the gap between algorithm theory and practice. Berlin: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Van Gelder, Allen, ed. Computer algorithms: Introduction to design and analysis. 3rd ed. Delhi: Pearson Education, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Algorithm engineering for integral and dynamic problems. Amsterdam: Gordon & Breach, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chambers, Lance, ed. The practical handbook of genetic algorithms: Applications. 2nd ed. Boca Raton, Fla: Chapman & Hall/CRC, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jamieson, Leah H., Dennis B. Gannon, and Robert J. Douglass, eds. The characteristics of parallel algorithms. Cambridge, Mass: MIT Press, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Algorithems"

1

Bez, Helmut, and Tony Croft. "Quantum algorithms 2: Simon's algorithm." In Quantum Computation, 333–42. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003264569-23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hendrix, Eligius M. T., and Ana Maria A. C. Rocha. "On Local Convergence of Stochastic Global Optimization Algorithms." In Computational Science and Its Applications – ICCSA 2021, 456–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86976-2_31.

Full text
Abstract:
In engineering optimization with continuous variables, the use of Stochastic Global Optimization (SGO) algorithms is popular due to the easy availability of codes. All algorithms have a global and local search character, where the global behaviour tries to avoid getting trapped in local optima and the local behaviour intends to reach the lowest objective function values. As the algorithm parameter set includes a final convergence criterion, the algorithm might be running for a while around a reached minimum point. Our question deals with the local search behaviour after the algorithm reached the final stage. How fast do practical SGO algorithms actually converge to the minimum point? To investigate this question, we run implementations of well-known SGO algorithms in a final local phase stage.
APA, Harvard, Vancouver, ISO, and other styles
3

Bansal, Jagdish Chand, Prathu Bajpai, Anjali Rawat, and Atulya K. Nagar. "Conclusion and Further Research Directions." In Sine Cosine Algorithm for Optimization, 105–6. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-9722-8_6.

Full text
Abstract:
The increasing complexity of real-world optimization problems demands fast, robust, and efficient meta-heuristic algorithms. These intelligent techniques are gaining popularity day by day among researchers from various disciplines of science and engineering. The sine cosine algorithm is a simple population-based stochastic approach for handling different optimization problems. In this work, we have discussed the basic sine cosine algorithm for continuous optimization problems, the multi-objective sine cosine algorithm for handling multi-objective optimization problems, and the discrete (or binary) versions of the sine cosine algorithm for discrete optimization problems. The sine cosine algorithm (SCA) has reportedly shown competitive results when compared to other meta-heuristic algorithms. The easy implementation and small number of parameters make the SCA a recommended choice for performing various optimization tasks. In the present chapter, we have studied different modifications and strategies for the advancement of the sine cosine algorithm. The incorporation of concepts like opposition-based learning, quantum simulation, and hybridization with other meta-heuristic algorithms has increased the efficiency and robustness of the SCA, and these techniques have also widened the application spectrum of the sine cosine algorithm.
APA, Harvard, Vancouver, ISO, and other styles
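The chapter above surveys the sine cosine algorithm (SCA). As an illustration of the basic update rule commonly stated in the SCA literature (the parameter names r1–r3, the linear decay schedule, and the sphere test function are assumptions, not taken from the chapter), a minimal sketch:

```python
import math
import random

def sca_minimize(f, dim, lo, hi, pop=20, iters=200, a=2.0, seed=0):
    """Minimal sine cosine algorithm sketch for continuous minimization."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)[:]
    best_val = f(best)
    for t in range(iters):
        r1 = a - t * a / iters                    # exploration amplitude decays to ~0
        for x in X:
            for j in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                wave = math.sin(r2) if rng.random() < 0.5 else math.cos(r2)
                # Move each coordinate toward (or around) the best solution so far.
                x[j] += r1 * wave * abs(r3 * best[j] - x[j])
                x[j] = min(max(x[j], lo), hi)     # clamp to the search box
            val = f(x)
            if val < best_val:
                best, best_val = x[:], val
    return best, best_val

sphere = lambda v: sum(c * c for c in v)
best, val = sca_minimize(sphere, 2, -5.0, 5.0)
```

The decaying r1 is what shifts the population from exploration (large oscillations) to exploitation (small steps around the incumbent), which is the balance the chapter discusses.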
4

Ludes-Adamy, Peter, and Marcus Schütte. "Immer der Reihe nach – Algorithmen in mathematisch-informatischen Lernumgebungen für den Primarbereich." In Informatisch-algorithmische Grundbildung im Mathematikunterricht der Primarstufe, 3–21. WTM-Verlag Münster, 2022. http://dx.doi.org/10.37626/ga9783959872126.0.01.

Full text
Abstract:
Children in primary school usually do not learn about algorithms explicitly as such, but first apply procedures with particular ordered working steps. In the context of the dissertation of Peter Ludes-Adamy, primary school children participated in a pilot study and two further survey waves in which they worked on learning environments containing mathematics and computer science tasks, including algorithms. The article provides insight into this qualitative study and tries to answer the question of how children can identify and describe algorithms in the field of mathematics familiar to them. In this regard, primary school children’s collectively constructed framings of the concept of algorithms are reconstructed.
APA, Harvard, Vancouver, ISO, and other styles
5

Das, Sahana, Kaushik Roy, and Chanchal Kumar Saha. "A Linear Time Series Analysis of Fetal Heart Rate to Detect the Variability." In Handbook of Research on Recent Developments in Intelligent Communication Application, 471–95. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1785-6.ch018.

Full text
Abstract:
Real time analysis and interpretation of fetal heart rate (FHR) is the challenge posed to every clinician. Different algorithms had been developed, tried and subsequently incorporated into Cardiotocograph (CTG) machines for automated diagnosis. Feature extraction and accurate detection of baseline and its variability has been the focus of this chapter. Algorithms by Dawes and Redman and Ayres-de-Campos have been discussed in this chapter. The authors are pleased to propose an algorithm for extracting the variability of fetal heart. The algorithm's accuracy and degree of agreement with clinician's diagnosis had been established by various statistical methods. This algorithm has been compared with an algorithm proposed by Nidhal and the new algorithm is found to be better at detecting variability in both ante-partum and intra-partum period.
APA, Harvard, Vancouver, ISO, and other styles
6

Manoj, Suvvala, and Rajendran T. "Higher Prediction Accuracy on Parkinson Disease Patients Using Random Forest Algorithm over DTA." In Advances in Parallel Computing Algorithms, Tools and Paradigms. IOS Press, 2022. http://dx.doi.org/10.3233/apc220081.

Full text
Abstract:
Improving the prediction accuracy for Parkinson’s patients is achieved by applying Innovative Parkinson’s disease prediction utilizing classifiers that use machine learning methods and evaluating their performance. In this proposed work, the Innovative Parkinson’s disease prediction has been carried out using the Random Forest algorithm and the Decision Tree algorithm. It was tested over a dataset consisting of 757 records. Both algorithms were subjected to a programming experiment in which N=10 iterations were used to discover the symptoms of Innovative Parkinson’s disease prediction and their accurate analysis. The statistical power (G-power) of the test is around 80%. From the implemented experiment, by performing the independent-sample t-test, the Random Forest algorithm’s Parkinson’s disease prediction accuracy is significantly (p = 0.028) better than the Decision Tree algorithm’s. The accuracy of Innovative Parkinson’s disease prediction was compared between the two algorithms, and the Random Forest algorithm’s accuracy appears to be higher at 93% than the Decision Tree algorithm’s accuracy of 91%. This research will use the most up-to-date Machine Learning Classifiers to create an innovative Parkinson’s disease prediction technique for the early detection of Parkinson’s disease and other related issues.
APA, Harvard, Vancouver, ISO, and other styles
7

Bouarara, Hadj Ahmed. "A Survey of Computational Intelligence Algorithms and Their Applications." In Handbook of Research on Soft Computing and Nature-Inspired Algorithms, 133–76. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2128-0.ch005.

Full text
Abstract:
This chapter subscribes in the framework of an analytical study about the computational intelligence algorithms. These algorithms are numerous and can be classified in two great families: evolutionary algorithms (genetic algorithms, genetic programming, evolutionary strategy, differential evolutionary, paddy field algorithm) and swarm optimization algorithms (particle swarm optimisation PSO, ant colony optimization (ACO), bacteria foraging optimisation, wolf colony algorithm, fireworks algorithm, bat algorithm, cockroaches colony algorithm, social spiders algorithm, cuckoo search algorithm, wasp swarm optimisation, mosquito optimisation algorithm). We have detailed each algorithm following a structured organization (the origin of the algorithm, the inspiration source, the summary, and the general process). This paper is the fruit of many years of research in the form of synthesis which groups the contributions proposed by various researchers in this field. It can be the starting point for the designing and modelling new algorithms or improving existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
8

Bouarara, Hadj Ahmed. "A Survey of Computational Intelligence Algorithms and Their Applications." In Robotic Systems, 1886–929. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1754-3.ch090.

Full text
Abstract:
This chapter subscribes in the framework of an analytical study about the computational intelligence algorithms. These algorithms are numerous and can be classified in two great families: evolutionary algorithms (genetic algorithms, genetic programming, evolutionary strategy, differential evolutionary, paddy field algorithm) and swarm optimization algorithms (particle swarm optimisation PSO, ant colony optimization (ACO), bacteria foraging optimisation, wolf colony algorithm, fireworks algorithm, bat algorithm, cockroaches colony algorithm, social spiders algorithm, cuckoo search algorithm, wasp swarm optimisation, mosquito optimisation algorithm). We have detailed each algorithm following a structured organization (the origin of the algorithm, the inspiration source, the summary, and the general process). This paper is the fruit of many years of research in the form of synthesis which groups the contributions proposed by various researchers in this field. It can be the starting point for the designing and modelling new algorithms or improving existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
9

Roy, Provas Kumar. "New Efficient Evolutionary Algorithm Applied to Optimal Reactive Power Dispatch." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 321–39. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6252-0.ch016.

Full text
Abstract:
Evolutionary Algorithms (EAs) are well-known optimization techniques to deal with nonlinear and complex optimization problems. However, most of these population-based algorithms are computationally expensive due to the slow nature of the evolutionary process. To overcome this drawback and to improve the convergence rate, this chapter employs Quasi-Opposition-Based Learning (QOBL) in conventional Biogeography-Based Optimization (BBO) technique. The proposed Quasi-Oppositional BBO (QOBBO) is comprehensively developed and successfully applied for solving the Optimal Reactive Power Dispatch (ORPD) problem by minimizing the transmission loss when both equality and inequality constraints are satisfied. The proposed QOBBO algorithm's performance is studied with comparisons of Canonical Genetic Algorithm (CGA), five versions of Particle Swarm Optimization (PSO), Local Search-Based Self-Adaptive Differential Evolution (L-SADE), Seeker Optimization Algorithm (SOA), and BBO on the IEEE 30-bus, IEEE 57-bus, and IEEE 118-bus power systems. The simulation results show that the proposed QOBBO approach performed better than the other listed algorithms and can be efficiently used to solve small-, medium-, and large-scale ORPD problems.
APA, Harvard, Vancouver, ISO, and other styles
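The chapter above combines quasi-opposition-based learning (QOBL) with biogeography-based optimization. A minimal sketch of the QOBL sampling step as it is usually stated in the general QOBL literature (the interval-centre formulation here is an assumption, not taken from this chapter):

```python
import random

def quasi_opposite(x, a, b, rng):
    """Quasi-opposition-based learning step: for x in [a, b], sample a point
    between the interval centre and the opposite point a + b - x."""
    centre = (a + b) / 2.0
    opposite = a + b - x       # classical opposition-based learning point
    lo, hi = sorted((centre, opposite))
    return rng.uniform(lo, hi)

rng = random.Random(0)
q = quasi_opposite(1.0, 0.0, 10.0, rng)  # centre is 5, opposite point is 9
```

In a QOBBO-style loop, a quasi-opposite candidate would typically replace the original solution only when it evaluates better, which is the mechanism credited with improving the convergence rate.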
10

Youcef, Bouras. "Research Information." In Advanced Deep Learning Applications in Big Data Analytics, 218–72. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-2791-7.ch011.

Full text
Abstract:
This chapter describes the framework of an analytical study around the computational intelligence algorithms, which are prompted by natural mechanisms and complex biological phenomena. These algorithms are numerous and can be classified in two great families: firstly the family of evolutionary algorithms (EA) such as genetic algorithms (GAs), genetic programming (GP), evolutionary strategy (ES), differential evolutionary (DE), paddy field algorithm (PFA); secondly, the swarm intelligence algorithms (SIA) such as particle swarm optimisation (PSO), ant colony optimization (ACO), bacteria foraging optimisation (BFO), wolf colony algorithm (WCA), fireworks algorithm (FA), bat algorithm (BA), cockroaches algorithm (CA), social spiders algorithm (SSA), cuckoo search algorithm (CSA), wasp swarm optimisation (WSO), mosquito optimisation algorithm (MOA). The authors have detailed the functioning of each algorithm following a structured organization (the descent of the algorithm, the inspiration source, the summary, and the general process) that offers for readers a thorough understanding. This study is the fruit of many years of research in the form of synthesis, which groups the contributions offered by several researchers in the meta-heuristic field. It can be the beginning point for planning and modelling new algorithms or improving existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Algorithems"

1

Shoukang, Qin. "The Weighted Vector Algorithems in One Person and One Criterion." In The International Symposium on the Analytic Hierarchy Process. Creative Decisions Foundation, 1999. http://dx.doi.org/10.13033/isahp.y1999.077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jia, Haitao, and Linjie Luo. "Applying Curriculum Learning on Path-based Knowledge Graph Reasoning Algorithems." In 2021 3rd International Conference on Natural Language Processing (ICNLP). IEEE, 2021. http://dx.doi.org/10.1109/icnlp52887.2021.00019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kanungo, T., M. Y. Jaisimha, J. Palmer, and R. M. Haralick. "Methodology for analyzing the performance of detection tasks." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1992. http://dx.doi.org/10.1364/oam.1992.fcc3.

Full text
Abstract:
There has been increasing interest in quantitative performance evaluation of computer vision algorithms. The usual method is to vary parameters of the input images or parameters of the algorithms and then construct operating curves that relate the probability of misdetection and false alarm for each parameter setting. Such an analysis does not integrate the performance of the numerous operating curves. In this paper we outline a methodology for summarizing many operating curves into a few performance curves. This methodology is adapted from the human psychophysics literature and is general to any detection algorithm. We demonstrated the methodology by comparing the performance of two line detection algorithms. The task was to detect the presence or absence of a vertical edge in the middle of an image containing a grating mask and additive Gaussian noise. We compared the Burns line finder and an algorithm using the facet edge detector and the Hough transform. To determine each algorithm's performance curve, we estimated the contrast necessary for an unbiased 75% correct detection as a function of the orientation of the grating mask. These functions were further characterized in terms of the algorithm's orientation selectivity and overall performance. An algorithm with the best overall performance need not have the best orientation selectivity. These performance curves can be used to optimize the design of algorithms.
APA, Harvard, Vancouver, ISO, and other styles
4

Showalter, Mark, Dennis Hong, and Daniel Larimer. "Development and Comparison of Gait Generation Algorithms for Hexapedal Robots Based on Kinematics With Considerations for Workspace." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49616.

Full text
Abstract:
This paper explores the interdependence of walking algorithm and limb workspace for the Multi-Appendage Robotic System (MARS). While MARS is a hexapedal robot, the tasks of defining the workspace and walking algorithm for all six limbs can be abstracted to a single limb using the constraint of a tripedal statically stable gait. Thus, by understanding the behavior of an individual limb, two walking algorithms have been developed which allow MARS to walk on level terrain. Both algorithms are adaptive in that they continuously update based on control inputs. The difference between the two algorithms is that they were developed for different limb workspaces. The simpler algorithm, developed for a 2D workspace, was implemented, resulting in smooth gait generation with near instantaneous response to control input. This accomplishment demonstrates the feasibility of implementing a more sophisticated algorithm which allows for inputs of x and y velocity, walking height, yaw, pitch, and roll. This algorithm uses a 3D workspace developed to afford near maximum step length.
APA, Harvard, Vancouver, ISO, and other styles
5

Schmidt, David P., and Christopher J. Rutland. "Reducing Grid Dependency in Droplet Collision Modeling." In ASME 2001 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/2001-ice-395.

Full text
Abstract:
A faster, more accurate replacement for existing collision algorithms has been developed. The method, called the NTC algorithm, is not grid dependent and is much faster than older algorithms. Calculations with sixty thousand parcels required only a few CPU minutes. However, there is a significant need to develop mesh-independent momentum coupling between the gas and spray, so that the collision algorithm’s full accuracy can be realized.
APA, Harvard, Vancouver, ISO, and other styles
6

Perju, Veaceslav, and Dorian Saranciuc. "Evaluation of the Multi-Algorithms Targets Recognition Systems." In 12th International Conference on Electronics, Communications and Computing. Technical University of Moldova, 2022. http://dx.doi.org/10.52326/ic-ecco.2022/cs.05.

Full text
Abstract:
This paper presents the evaluation results for new classes of target recognition systems – multi-algorithms unimodal systems and multi-algorithms multimodal systems. The structures and the graphs of the systems are described. The mathematical descriptions and the formulas for evaluating a system’s cost, depending on the algorithms’ recognition probabilities and the relation between the costs of the algorithms’ software and the system’s hardware, are presented. An approach to determine the cost of a system for an established threshold level of the system’s recognition probability is proposed. The relation of the system’s cost to the system’s recognition probability for different values of the algorithms’ recognition probability is evaluated, as well as the rating of the target recognition systems based on their recognition probabilities and costs.
APA, Harvard, Vancouver, ISO, and other styles
7

Chandrasekaran, Arvind. "Neural Networks." In 8th International Conference on Software Engineering. Academy & Industry Research Collaboration, 2023. http://dx.doi.org/10.5121/csit.2023.131208.

Full text
Abstract:
This review of neural networks covers their categories, explaining the organization of algorithm techniques required to improve generalization performance and Feedforward Neural Network (FNN) learning speed. It examines changes in research trends under six categories of optimization algorithms, covering learning-rate schemes, gradient-free learning algorithms, and metaheuristic algorithms collectively, and recommends new research directions to help researchers apply these algorithms to complex problems in engineering, management, and the health sciences. FNN has gained research attention for supporting informed decision-making. The literature survey focuses on optimization technology and learning algorithms. The optimization techniques and the FNN learning algorithms identified are segregated into six categories based on the mathematical model, problem identification, proposed solution, and technical reasoning. FNN contributions rapidly increase the ability to make informed decisions reliably.
APA, Harvard, Vancouver, ISO, and other styles
8

Du Pont, Bryony L., and Jonathan Cagan. "An Extended Pattern Search Approach to Wind Farm Layout Optimization." In ASME 2010 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/detc2010-28748.

Full text
Abstract:
An extended pattern search approach is presented for optimizing the placement of wind turbines on a wind farm. The algorithm will develop a two-dimensional layout for a given number of turbines, employing an objective function that minimizes costs while maximizing the total power production of the farm. The farm cost is developed using an established simplified model that is a function of the number of turbines. The power development of the farm is estimated using an established simplified wake model, which accounts for the aerodynamic effects of turbine blades on downstream wind speed, to which the power output is directly proportional. The interaction of the turbulent wakes developed by turbines in close proximity largely determines the power capability of the farm. As pattern search algorithms are deterministic, multiple extensions are presented to aid escaping local optima by infusing stochastic characteristics into the algorithm. This stochasticity improves the algorithm’s performance, yielding better results than purely deterministic search methods. Three test cases are presented: a) constant, unidirectional wind, b) constant, multidirectional wind, and c) varying, multidirectional wind. Resulting layouts developed by this extended pattern search algorithm develop more power than previously explored algorithms with the same evaluation models and objective functions. In addition, the algorithm’s layouts motivate a heuristic that yields the best layouts found to date.
APA, Harvard, Vancouver, ISO, and other styles
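The paper above extends pattern search with stochastic elements for wind-farm layout. As a generic illustration of the underlying deterministic method with one simple stochastic extension (a randomized polling order; the quadratic objective and all parameters here are placeholders, not the paper's wake and cost models):

```python
import random

def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, seed=0):
    """Compass-style pattern search with a randomized poll order, a simple
    stand-in for the 'stochastic extension' idea: minimize f from x0."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        dims = list(range(len(x)))
        rng.shuffle(dims)                 # stochastic polling order
        for i in dims:
            for sgn in (+1.0, -1.0):
                cand = x[:]
                cand[i] += sgn * step     # poll one pattern point
                fc = f(cand)
                if fc < fx:
                    x, fx = cand, fc
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= shrink                # contract the pattern and retry
    return x, fx

quad = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
x, fx = pattern_search(quad, [0.0, 0.0])
```

The randomized poll order changes which improving move is taken first; richer extensions (random restarts, probabilistic acceptance) inject more stochasticity to escape local optima, which is the behavior the paper exploits.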
9

Guan, Yue, Qifan Zhang, and Panagiotis Tsiotras. "Learning Nash Equilibria in Zero-Sum Stochastic Games via Entropy-Regularized Policy Approximation." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/339.

Full text
Abstract:
We explore the use of policy approximations to reduce the computational cost of learning Nash equilibria in zero-sum stochastic games. We propose a new Q-learning type algorithm that uses a sequence of entropy-regularized soft policies to approximate the Nash policy during the Q-function updates. We prove that under certain conditions, by updating the entropy regularization, the algorithm converges to a Nash equilibrium. We also demonstrate the proposed algorithm's ability to transfer previous training experiences, enabling the agents to adapt quickly to new environments. We provide a dynamic hyper-parameter scheduling scheme to further expedite convergence. Empirical results applied to a number of stochastic games verify that the proposed algorithm converges to the Nash equilibrium, while exhibiting a major speed-up over existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
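The entropy-regularized soft policies mentioned in the abstract above are, in the standard formulation, Boltzmann (softmax) distributions over action values; a minimal sketch (notation assumed from the general literature, not taken from the paper):

```python
import math

def soft_policy(q_values, beta):
    """Entropy-regularized (softmax/Boltzmann) policy over action values.
    Larger beta approaches the greedy policy; beta = 0 gives the uniform policy."""
    m = max(q_values)                              # subtract max for numerical stability
    w = [math.exp(beta * (q - m)) for q in q_values]
    z = sum(w)
    return [wi / z for wi in w]

p = soft_policy([1.0, 2.0, 3.0], beta=1.0)
```

Annealing beta upward over training is one common way such a sequence of soft policies can approach a (Nash) equilibrium policy while keeping the intermediate updates smooth.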
10

Guo, Lei, Lijian Zhou, Shaohui Jia, Li Yi, Haichong Yu, and Xiaoming Han. "An Automatic Segmentation Algorithm Used in Pipeline Integrity Alignment Sheet Design." In 2010 8th International Pipeline Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/ipc2010-31036.

Full text
Abstract:
Pipeline segmentation design is the first step in designing an alignment sheet. In this step, several rectangular boxes are used to cover the pipeline, and each box becomes the basic unit of alignment sheet design. After studying various pipeline alignment sheet mapping technologies, the authors found that the traditional manual design method, which takes advantage of designers’ subjectivity, causes low work efficiency. By reviewing and studying existing works at home and abroad, the authors believed it possible and feasible to develop an automatic segmentation algorithm based on existing curve simplification algorithms to improve the efficiency of pipeline section design and alignment sheet mapping. Based on several classical curve simplification algorithms, the authors proposed the automatic segmentation algorithm, which automatically adjusts the location of rectangular boxes according to the number of pipeline/circle intersection points and pipeline/rectangular box intersection points. Finally, by comparing time and results with the traditional manual method, the authors proved the algorithm’s effectiveness and feasibility.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Algorithems"

1

Gubaydullina, Zulian, Jan René Judek, Marco Lorenz, and Markus Spiwoks. Gestaltungswille und Algorithm Aversion – Die Auswirkungen der Einflussnahme im Prozess der algorithmischen Entscheidungsfindung auf die Algorithm Aversion. Sonderforschungsgruppe Institutionenanalyse, June 2021. http://dx.doi.org/10.46850/sofia.9783941627925.

Full text
Abstract:
Although algorithms deliver more precise forecasts than humans in many application areas, decision makers often refuse to rely on them. In an economic experiment, we examine whether the extent of this phenomenon, known as "algorithm aversion", can be reduced by giving decision makers influence over the design of the algorithm (influence on the algorithmic input). In addition, we replicate the study by Dietvorst, Simmons & Massey (2018), which shows that algorithm aversion decreases markedly if subjects can modify the algorithm's results at the end, even if only by a few percent (influence on the algorithmic output). The present study confirms that algorithm aversion decreases significantly when there is influence on the algorithmic output. Influence on the algorithmic input, however, appears only partly suitable for reducing algorithm aversion. The limited possibility of modifying the algorithmic output reduces algorithm aversion more effectively than the possibility of influencing the algorithmic input.
APA, Harvard, Vancouver, ISO, and other styles
2

Judek, Jan René. Die Bereitschaft zur Nutzung von Algorithmen variiert mit der sozialen Information über die schwache vs. starke Akzeptanz: Eine experimentelle Studie zur Algorithm Aversion. Sonderforschungsgruppe Institutionenanalyse, 2022. http://dx.doi.org/10.46850/sofia.9783947850037.

Full text
Abstract:
In a wide range of contexts, decision-making is increasingly supported by algorithms. However, the phenomenon of algorithm aversion stands in the way of realizing the technological potential that algorithms bring. Economic agents tend to align their decisions with the decisions of other economic agents. In an experimental approach, we therefore examine the willingness to use an algorithm when making stock price forecasts, given information about the algorithm's previous usage rate. It turns out that decision makers use an algorithm more often when the majority of previously deciding economic agents have also used it. The willingness to use an algorithm varies with the social information about previous weak versus strong acceptance. In addition, economic agents' affinity for technology interaction shows an influence on decision behavior.
APA, Harvard, Vancouver, ISO, and other styles
3

Johansen, Richard A., Christina L. Saltus, Molly K. Reif, and Kaytee L. Pokrzywinski. A Review of Empirical Algorithms for the Detection and Quantification of Harmful Algal Blooms Using Satellite-Borne Remote Sensing. U.S. Army Engineer Research and Development Center, June 2022. http://dx.doi.org/10.21079/11681/44523.

Full text
Abstract:
Harmful Algal Blooms (HABs) continue to be a global concern, especially since predicting bloom events, including their intensity, extent, and geographic location, remains difficult. However, remote sensing platforms are useful tools for monitoring HABs across space and time. The main objective of this review was to explore the scientific literature to develop a near-comprehensive list of spectrally derived empirical algorithms for satellite imagers commonly utilized for the detection and quantification of HABs and water quality indicators. This review identified 29 WorldView-2 MSI algorithms, 25 Sentinel-2 MSI algorithms, 32 Landsat-8 OLI algorithms, 9 MODIS algorithms, and 64 MERIS/Sentinel-3 OLCI algorithms. This review also revealed that most empirical-based algorithms fall into one of the following general formulas: two-band difference algorithm (2BDA), three-band difference algorithm (3BDA), normalized-difference chlorophyll index (NDCI), or the cyanobacterial index (CI). New empirical algorithm development appears to be constrained, at least in part, by the limited number of HAB-associated spectral features detectable in currently operational imagers. However, these algorithms provide a foundation for future algorithm development as new sensors, technologies, and platforms emerge.
4

Baader, Franz, and Rafael Peñaloza. Axiom Pinpointing in General Tableaux. Aachen University of Technology, 2007. http://dx.doi.org/10.25368/2022.159.

Full text
Abstract:
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableaux algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
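For contrast with the glass-box tableau extensions this paper develops, a naive black-box approach to pinpointing treats the reasoner as an oracle and shrinks the knowledge base by trial deletion. The sketch below (function names and the toy entailment oracle are illustrative assumptions, not from the paper) computes one minimal subset that still has the consequence:

```python
def minimal_subset(axioms, entails):
    """Shrink `axioms` to one minimal subset that still yields the
    consequence, where `entails(S)` is any black-box decision procedure
    returning True iff the consequence follows from axiom set S.
    Naive deletion-based pinpointing, not the paper's tableau extension.
    """
    current = list(axioms)
    for ax in list(current):
        trial = [a for a in current if a != ax]
        if entails(trial):
            current = trial  # the axiom was not needed; drop it for good
    return current
```

With a toy oracle such as `entails = lambda S: {'a', 'b'} <= set(S)`, the call `minimal_subset(['a', 'b', 'c', 'd'], entails)` shrinks to `['a', 'b']`. Each deletion test costs one full reasoner call, which is one motivation for the glass-box extensions studied by the authors.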
5

Baader, Franz, and Rafael Peñaloza. Pinpointing in Terminating Forest Tableaux. Technische Universität Dresden, 2008. http://dx.doi.org/10.25368/2022.166.

Full text
Abstract:
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableau algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
6

Filiz, Ibrahim, Jan René Judek, Marco Lorenz, and Markus Spiwoks. Die Tragik der Algorithm Aversion. Sonderforschungsgruppe Institutionenanalyse, 2021. http://dx.doi.org/10.46850/sofia.9783941627888.

Full text
Abstract:
Algorithms already handle many tasks more reliably than human experts. Nevertheless, some economic agents display a dismissive attitude towards algorithms (algorithm aversion). In some decision situations an error can have serious consequences, in others it cannot. In a framing experiment, we examine the relationship between the gravity of the decision situation on the one hand and the frequency of algorithm aversion on the other. It turns out that the more serious the possible consequences of a decision are, the more frequently algorithm aversion occurs. Especially for particularly important decisions, algorithm aversion thus reduces the probability of success. This can be described as the tragedy of algorithm aversion.
7

Champlin, Craig, and John P. H. Steele. DTPH56-14H-CAP06 Pipeline Assessment through 4-Dimensional Anomaly Detection and Characterization. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), December 2016. http://dx.doi.org/10.55274/r0011766.

Full text
Abstract:
The team intended to develop two algorithms for matching anomalies across coincident internal pipeline inspections to assess corrosion growth rates. The first algorithm would match boxed anomalies; the second would match raw signals. The goal of each algorithm is slightly different: the boxed algorithm performs a complete mapping of individually called-out anomalies from one inspection to the next, while the raw signal algorithm velocity-corrects and aligns raw inspection signals.
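The raw-signal alignment step can be illustrated generically: a common way to line up two one-dimensional inspection traces is to estimate the sample lag between them by cross-correlation. This is a sketch of that generic technique only, not the report's actual velocity-correction algorithm; the function names and the use of `numpy.correlate` are assumptions of this sketch.

```python
import numpy as np

def estimate_shift(ref: np.ndarray, sig: np.ndarray) -> int:
    """Estimate the integer sample lag that best aligns `sig` to `ref`,
    via full cross-correlation of the mean-subtracted traces."""
    ref = ref - ref.mean()
    sig = sig - sig.mean()
    corr = np.correlate(sig, ref, mode="full")
    # index len(ref)-1 of the 'full' output corresponds to zero lag
    return int(np.argmax(corr)) - (len(ref) - 1)

def align(ref: np.ndarray, sig: np.ndarray) -> np.ndarray:
    """Shift `sig` by the estimated lag so its features line up with `ref`."""
    return np.roll(sig, -estimate_shift(ref, sig))
```

In practice the wrap-around of `np.roll` would be replaced by padding or trimming at the trace ends, and a real pipeline-inspection matcher must also handle non-uniform (velocity-dependent) stretching, not just a constant offset.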
8

Baader, Franz, and Rafael Peñaloza. Blocking and Pinpointing in Forest Tableaux. Technische Universität Dresden, 2008. http://dx.doi.org/10.25368/2022.165.

Full text
Abstract:
Axiom pinpointing has been introduced in description logics (DLs) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question. Several pinpointing algorithms have been described as extensions of the standard tableau-based reasoning algorithms for deciding consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL, exploiting specific traits of each. In the past, we have developed a general approach for extending tableau-based algorithms into pinpointing algorithms. In this paper we explore some issues of termination of general tableaux and their pinpointing extensions. We also define a subclass of tableaux that allows the use of so-called blocking conditions, which stop the execution of the algorithm once a pattern is found, and adapt the pinpointing extensions accordingly, guaranteeing their correctness and termination.
9

Lorenz, Markus. Auswirkungen des Decoy-Effekts auf die Algorithm Aversion. Sonderforschungsgruppe Institutionenanalyse, 2022. http://dx.doi.org/10.46850/sofia.9783947850013.

Full text
Abstract:
Limitations in the human decision-making process restrict the technological potential of algorithms, a phenomenon referred to as "algorithm aversion". This study uses a laboratory experiment to investigate whether a phenomenon known since 1982 as the "decoy effect" is suitable for reducing algorithm aversion. For numerous analogue products, such as cars, drinks or newspaper subscriptions, the decoy effect is known to have a strong influence on human decision-making behaviour. Surprisingly, the decisions between forecasts by humans and robo-advisors (algorithms) investigated in this study are not influenced by the decoy effect at all. This is true both a priori and after observing forecast errors.
10

Baader, Franz, Jan Hladik, and Rafael Peñaloza. PSpace Automata with Blocking for Description Logics. Aachen University of Technology, 2006. http://dx.doi.org/10.25368/2022.157.

Full text
Abstract:
In Description Logics (DLs), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as satisfiability of concepts. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSpace-complete logics, it is often very hard to design optimal tableau-based algorithms for ExpTime-complete DLs. In contrast, the automata-based approach is usually well-suited to prove ExpTime upper-bounds, but its direct application will usually also yield an ExpTime algorithm for a PSpace-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSpace algorithm. We illustrate the usefulness of this approach by proving a new PSpace upper-bound for satisfiability of concepts w.r.t. acyclic terminologies in the DL SI, which extends the basic DL ALC with transitive and inverse roles.