To see the other types of publications on this topic, follow the link: Algorithm.

Dissertations / Theses on the topic 'Algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Yarmolskyy, Oleksandr. "Využití distribuovaných a stochastických algoritmů v síti." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-370918.

Full text
Abstract:
This thesis deals with distributed and stochastic algorithms, including testing their convergence in networks. The theoretical part briefly describes the above-mentioned algorithms, including their classification, problems, advantages and disadvantages. Furthermore, two distributed algorithms and two stochastic algorithms are chosen. The practical part compares their speed of convergence on various network topologies in Matlab.
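The abstract does not name the specific algorithms compared, so the following is only an illustrative sketch of a classic distributed algorithm of this kind: synchronous average consensus with Metropolis weights, whose convergence speed depends on the network topology. The function and variable names are hypothetical, and the thesis itself used Matlab rather than Python.

```python
import numpy as np

def average_consensus(x0, A, num_iters=100):
    """Synchronous distributed averaging on a graph.

    x0 : initial values held by the nodes (1-D array)
    A  : adjacency matrix of the (undirected, connected) network
    Each node repeatedly replaces its value by a weighted average of its own
    value and its neighbours' values (Metropolis weights).
    """
    n = len(x0)
    deg = A.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    for _ in range(num_iters):
        x = W @ x          # one communication round with the neighbours
        history.append(x.copy())
    return x, history

# Ring topology on 5 nodes: the convergence speed depends on the graph.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]])
x_final, hist = average_consensus([1.0, 2.0, 3.0, 4.0, 5.0], A)
print(x_final)   # all entries approach the mean, 3.0
```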
APA, Harvard, Vancouver, ISO, and other styles
2

Harris, Steven C. "A genetic algorithm for robust simulation optimization." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178645751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.

Full text
Abstract:

Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing, the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time, expectations of quick progress in the quantum computing project dominated the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems, such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers, led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, it is clear that there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers became an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms would help to solve NP problems in polynomial time on classical computers; however, this is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. One of the most important problems in "quantum computer science" is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulation on a classical computer. Concretely, we shall represent Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes in the Mathematica symbolic language. We shall see that the same framework can be used for all these algorithms. This framework will contain the characteristic properties of the symbolic language representation of quantum computing, and it will be straightforward to include this framework in future algorithms.

APA, Harvard, Vancouver, ISO, and other styles
4

Maciel, Cristiano Baptista Faria. "A memetic algorithm for logistics network design problems." Master's thesis, Instituto Superior de Economia e Gestão, 2014. http://hdl.handle.net/10400.5/8601.

Full text
Abstract:
Mestrado em Decisão Económica e Empresarial
This thesis describes a memetic algorithm applied to the design of a three-echelon logistics network over multiple periods with transportation mode selection and outsourcing. The memetic algorithm can be applied to an existing supply chain in order to obtain an optimized configuration or, if required, it can be used to define a new logistics network. In addition, production can be outsourced and direct shipments of products to customer zones are possible. In this problem, the capacity of an existing or new facility can be expanded over the time horizon; in that case, the facility cannot be closed afterwards. Existing facilities, once closed, cannot be reopened, and new facilities, once opened, cannot be closed. The heuristic is able to determine the number and locations of facilities (i.e. plants and warehouses), the capacity levels, as well as the flow of products throughout the supply chain.
APA, Harvard, Vancouver, ISO, and other styles
5

Dementiev, Roman. "Algorithm engineering for large data sets hardware, software, algorithms." Saarbrücken VDM, Müller, 2006. http://d-nb.info/986494429/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dementiev, Roman. "Algorithm engineering for large data sets : hardware, software, algorithms /." Saarbrücken : VDM-Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3029033&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms on the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR factorization. We show that, if the coefficients of the input polynomials are allowed to be any computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP of all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[α], and Q[α], where α is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as that of the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as that of the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
by Pramook Khungurn.
M.Eng.
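As a rough companion to the abstract above, here is a minimal sketch of the Euclidean algorithm for univariate polynomial GCD in floating-point arithmetic, with a zero-threshold standing in for the exact zero test; it is not the stabilized algorithm of Shirayanagi and Sweedler, and the function name and tolerance are illustrative assumptions.

```python
import numpy as np

def poly_gcd_euclid(a, b, tol=1e-12):
    """Euclidean algorithm for univariate polynomials with float coefficients.

    a, b : coefficient arrays, highest degree first (NumPy convention).
    Coefficients whose magnitude falls below `tol` are truncated to zero,
    mimicking the zero test that a floating-point variant must perform;
    the result therefore depends on the working precision and on `tol`.
    """
    a = np.trim_zeros(np.asarray(a, float), 'f')
    b = np.trim_zeros(np.asarray(b, float), 'f')
    while b.size > 0:
        _, r = np.polydiv(a, b)          # remainder of polynomial division
        r[np.abs(r) < tol] = 0.0         # threshold tiny coefficients
        r = np.trim_zeros(r, 'f')        # drop leading (near-)zero terms
        a, b = b, r
    return a / a[0] if a.size else a     # make the GCD monic

# (x-1)(x-2) and (x-1)(x-3) share the factor (x-1):
print(poly_gcd_euclid([1, -3, 2], [1, -4, 3]))   # ~ [1, -1]
```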
APA, Harvard, Vancouver, ISO, and other styles
8

Johansson, Björn, and Emil Österberg. "Algorithms for Large Matrix Multiplications : Assessment of Strassen's Algorithm." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230742.

Full text
Abstract:
Strassen's algorithm was one of the breakthroughs in matrix analysis in 1968. This report presents the theory behind Volker Strassen's algorithm for matrix multiplication together with theories about floating-point precision. It discusses the benefits of using this algorithm compared to naive matrix multiplication and its implications, and how its performance compares to the naive algorithm. Strassen's algorithm is also assessed on how the output differs between precisions as the matrices grow larger, and on how the theoretical complexity of the algorithm differs from the complexity achieved in practice. The studies found that Strassen's algorithm outperformed naive matrix multiplication at matrix sizes 1024×1024 and above. The achieved complexity was slightly higher than Volker Strassen's theoretical one. The optimal precision in this case was double precision, Float64. How the algorithm is implemented in code matters for its performance. A number of techniques need to be considered in order to improve Strassen's algorithm: optimizing its termination criterion, the manner in which the matrices are padded to make them more suitable for recursive application, and the way it is implemented, e.g. with parallel computing. Even though it could be shown that Strassen's algorithm outperforms the naive algorithm beyond a certain matrix size, it is still not the most efficient one, as demonstrated, for example, by Strassen-Winograd. One also needs to be careful about how the sub-matrices are allocated, so as not to use unnecessary memory. For further reading, one can study cache-oblivious and cache-aware algorithms.
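For readers unfamiliar with the algorithm being assessed, the following is a minimal sketch of Strassen's recursive multiplication, assuming square matrices whose size is a power of two and using a naive-multiplication cutoff as the termination criterion mentioned above; it is not the authors' implementation.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's multiplication for square matrices whose size is a power of
    two (a common simplifying assumption; general sizes are usually handled by
    padding). Below `cutoff` the naive product is used."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven recursive products that replace the naive eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))   # True, up to floating-point error
```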
APA, Harvard, Vancouver, ISO, and other styles
9

Čápek, Pavel. "Srovnání nástrojů pro animaci algoritmů." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-192639.

Full text
Abstract:
The diploma thesis focuses on software tools that enable algorithm animation. The theoretical section introduces different ways of presenting algorithms and then describes the field of algorithm animation: its history, development and current state. The last part of the theoretical section shows possibilities for using algorithm animation in teaching. The practical section of the thesis focuses on a comparison of selected software tools. The selected tools are evaluated against several criteria, and the applications are then compared using multi-criteria decision-making methods. The main goal of this thesis is to compare the selected software tools; a partial goal is to present the advantages of using such applications compared to writing the algorithm in text form.
APA, Harvard, Vancouver, ISO, and other styles
10

Rafique, Abid. "Communication optimization in iterative numerical algorithms : an algorithm-architecture interaction." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/17837.

Full text
Abstract:
Trading communication for redundant computation can increase the silicon efficiency of common hardware accelerators like FPGAs and GPUs when accelerating sparse iterative numerical algorithms. While iterative numerical algorithms are extensively used in solving large-scale sparse linear systems of equations and eigenvalue problems, they are challenging to accelerate because they spend most of their time in communication-bound operations, like sparse matrix-vector multiply (SpMV) and vector-vector operations. Communication is used in a general sense to mean moving the matrix and the vectors within the custom memory hierarchy of the FPGA and between processors in the GPU; its cost is much higher than that of performing the actual computation, for technological reasons. Additionally, the dependency between the operations hinders overlapping computation with communication. As a result, although GPUs and FPGAs offer large peak floating-point performance, their sustained performance is nonetheless very low due to high communication costs, leading to poor silicon efficiency. In this thesis, we provide a systematic study of how to minimize the communication cost and thereby increase the silicon efficiency. For small-to-medium datasets, we exploit the large on-chip memory of the FPGA to load the matrix only once and then use explicit blocking to perform all iterations at the communication cost of a single iteration. For large sparse datasets, it is now a well-known idea to unroll k iterations using a matrix powers kernel, which replaces SpMV, and two additional kernels, TSQR and BGS, which replace the vector-vector operations. While this approach can provide a Θ(k) reduction in the communication cost, the extent of the unrolling depends on the growth in redundant computation, the underlying architecture and the memory model. In this work, we show how to select the unroll factor k in an architecture-agnostic manner to provide a communication-computation tradeoff on FPGA and GPU. To this end, we exploit the inverse memory hierarchy of the GPU to map the matrix powers kernel and present a new algorithm for the FPGA which matches its strengths to reduce redundant computation, allowing large k and hence higher speedups. We provide predictive models of the matrix powers kernel to understand the communication-computation tradeoff on GPU and FPGA. We highlight the extremely low efficiency of the GPU in TSQR due to off-chip sharing of data across different building blocks and show how we can use the on-chip memory of the FPGA to eliminate this off-chip access and hence achieve better efficiency. Finally, we demonstrate how to compose all the kernels by using a unified architecture and exploit the on-chip memory of the FPGA to share data across these kernels. Using the Lanczos iteration as a case study to solve the symmetric extremal eigenvalue problem, we show that the efficiency of FPGAs can be increased from 1.8% to 38% for small-to-medium-scale dense matrices and up to 7.8% for large-scale structured banded matrices. We show that although the GPU shows better efficiency for certain kernels, like the matrix powers kernel, its overall efficiency is even lower due to the increase in communication cost when sharing data across different kernels through off-chip memory. As the Lanczos iteration is at the heart of all modern iterative numerical algorithms, our results are applicable to a broad class of iterative numerical algorithms.
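To make the term concrete, here is a reference (non-communication-avoiding) definition of what a matrix powers kernel computes, namely the Krylov vectors x, Ax, ..., A^k x. The actual kernels studied in the thesis avoid rereading the matrix k times, which this sketch does not attempt; the names and the random test matrix are illustrative only.

```python
import numpy as np
import scipy.sparse as sp

def matrix_powers(A, x, k):
    """Compute the Krylov basis [x, Ax, A^2 x, ..., A^k x].

    A communication-avoiding matrix powers kernel produces these k+1 vectors
    while reading the matrix (and exchanging halo data) roughly once instead
    of k times; this reference version simply applies A repeatedly and is
    meant only to define the kernel's output.
    """
    V = [x]
    for _ in range(k):
        V.append(A @ V[-1])
    return np.column_stack(V)

A = sp.random(1000, 1000, density=0.01, format='csr') + sp.eye(1000)
x = np.ones(1000)
V = matrix_powers(A, x, k=4)
print(V.shape)   # (1000, 5)
```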
APA, Harvard, Vancouver, ISO, and other styles
11

Saadane, Sofiane. "Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30203/document.

Full text
Abstract:
In this thesis, we study several stochastic algorithms with different purposes, and this is why we start this manuscript by giving historical results to define the framework of our work. Then, we study a bandit algorithm due to the work of Narendra and Shapiro, whose objective was to determine, among a choice of several sources, which one is the most profitable without spending too much time on the wrong ones. Our goal is to understand the weaknesses of this algorithm in order to propose an optimal procedure for a quantity measuring the performance of a bandit algorithm, the regret. In our results, we propose an algorithm called over-penalized NS which allows us to obtain a minimax-optimal regret bound. A second piece of work is to understand the convergence in law of this process. The particularity of the algorithm is that it converges in law toward a non-diffusive process, which makes the study more intricate than the standard case. We use coupling techniques to study this process and propose rates of convergence. The second part of this thesis falls within the scope of optimizing a function using a stochastic algorithm. We study a stochastic version of the so-called heavy ball method with friction. The particularity of the algorithm is that its dynamics is based on the whole past of the trajectory. The procedure relies on a memory term which dictates the behavior of the procedure through the form it takes. In our framework, two types of memory are investigated: polynomial and exponential. We start with general convergence results in the non-convex case. In the case of strongly convex functions, we provide upper bounds on the rate of convergence. Finally, a convergence in law result is given in the case of exponential memory. The third part is about the McKean-Vlasov equations, which were first introduced by Anatoly Vlasov and first studied by Henry McKean in order to model the distribution function of plasma. Our objective is to propose a stochastic algorithm to approximate the invariant distribution of the McKean-Vlasov process. Methods are known in the case of diffusion processes (and some more general processes), but the particularity of the McKean-Vlasov process is that it is strongly non-linear. Thus, we have to develop an alternative approach. We introduce the notion of asymptotic pseudotrajectory in order to obtain an efficient procedure.
APA, Harvard, Vancouver, ISO, and other styles
12

Glaudin, Lilian. "Stratégies multicouche, avec mémoire, et à métrique variable en méthodes de point fixe pour l'éclatement d'opérateurs monotones et l'optimisation." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS119.

Full text
Abstract:
Several apparently unrelated strategies coexist to implement algorithms for solving monotone inclusions in Hilbert spaces. We propose a synthetic framework for fixed point construction which makes it possible to capture various algorithmic approaches, clarify and generalize their asymptotic behavior, and design new iterative schemes for nonlinear analysis and convex optimization. Our methodology, which is anchored on an averaged quasinonexpansive operator composition model, allows us to advance the theory of fixed point algorithms on several fronts, and to impact their application fields. Numerical examples are provided in the context of image restoration, where we propose a new viewpoint on the formulation of variational problems.
APA, Harvard, Vancouver, ISO, and other styles
13

Fontaine, Allyx. "Analyses et preuves formelles d'algorithmes distribués probabilistes." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0091/document.

Full text
Abstract:
Probabilistic algorithms are simple to formulate. However, their analysis can become very complex, especially in the field of distributed computing. We present algorithms, optimal in terms of bit complexity and solving the problems of MIS and maximal matching in rings, that follow the same scheme. We develop a method that unifies the bit complexity lower bound results for the MIS, maximal matching and coloring problems. The complexity of these analyses, which can easily lead to errors, together with the existence of many models depending on implicit assumptions, motivated us to formally model the probabilistic distributed algorithms corresponding to our model (message passing, anonymous and synchronous). Our aim is to formally prove the properties related to their analysis. For this purpose, we develop a library, called RDA, based on the Coq proof assistant.
APA, Harvard, Vancouver, ISO, and other styles
14

Pelikan, Martin. "Hierarchical Bayesian optimization algorithm : toward a new generation of evolutionary algorithms /." Berlin [u.a.] : Springer, 2005. http://www.loc.gov/catdir/toc/fy053/2004116659.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kouchinsky, Alan J. "Determination of smoke algoritm [i.e. algorithm] activation for video image detection." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7223.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Dept of Fire Protection Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
16

Mirzazadeh, Mehdi. "Adaptive Comparison-Based Algorithms for Evaluating Set Queries." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1147.

Full text
Abstract:
In this thesis we study a problem that arises in answering boolean queries submitted to a search engine. Usually a search engine stores the set of IDs of documents containing each word in a pre-computed sorted order, and to evaluate a query like "computer AND science" the search engine has to compute the intersection of the sets of documents containing the words "computer" and "science". More complex queries result in more complex set expressions. In this thesis we consider the problem of evaluating a set expression with union and intersection as operators and ordered sets as operands. We explore properties of comparison-based algorithms for the problem. A proof of a set expression is the set of comparisons that a comparison-based algorithm performs before it can determine the result of the expression. We discuss the properties of proofs of set expressions and, based on how complex the smallest proofs of a set expression E are, we define a measure of how difficult E is to compute. Then, we design an algorithm that is adaptive to the difficulty of the input expression, and we show that the running time of the algorithm is roughly proportional to the difficulty of the input expression, where the factor is roughly logarithmic in the number of operands of the input expression.
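The thesis's adaptive algorithm handles general union/intersection expressions; as a simpler illustration of the adaptivity idea for a single intersection, the sketch below intersects two sorted ID lists using galloping (doubling) search, so the number of comparisons depends on how interleaved the inputs are rather than always being proportional to their total length. Names and details are illustrative, not taken from the thesis.

```python
from bisect import bisect_left

def gallop_search(a, lo, target):
    """Return the first index >= lo with a[index] >= target, using doubling
    ('galloping') steps followed by binary search inside the located block."""
    step, hi = 1, lo
    while hi < len(a) and a[hi] < target:
        lo = hi + 1
        hi += step
        step *= 2
    return bisect_left(a, target, lo, min(hi, len(a)))

def adaptive_intersection(a, b):
    """Intersect two sorted lists; the comparison count adapts to the input."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i = gallop_search(a, i, b[j])
        else:
            j = gallop_search(b, j, a[i])
    return out

print(adaptive_intersection([1, 5, 9, 120, 121, 122], [120, 122, 500]))  # [120, 122]
```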
APA, Harvard, Vancouver, ISO, and other styles
17

Dutta, Himanshu Shekhar. "Survey of Approximation Algorithms for Set Cover Problem." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12118/.

Full text
Abstract:
In this thesis, I survey 11 approximation algorithms for the unweighted set cover problem. I have also implemented three of the algorithms and created a software library that stores the code I have written. The algorithms I survey are: 1. Johnson's standard greedy; 2. f-frequency greedy; 3. Goldschmidt, Hochbaum and Yu's modified greedy; 4. Halldorsson's local optimization; 5. Duh and Fürer's semi-local optimization; 6. Asaf Levin's improvement to Duh and Fürer; 7. simple rounding; 8. randomized rounding; 9. LP duality; 10. the primal-dual schema; and 11. the network flow technique. Most of the algorithms surveyed are refinements of the standard greedy algorithm.
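As an illustration of the first algorithm in the list, Johnson's standard greedy heuristic for unweighted set cover can be sketched as follows; this is a textbook version, not the author's library code, and the example universe and subsets are made up.

```python
def greedy_set_cover(universe, subsets):
    """Johnson's standard greedy: repeatedly pick the subset covering the most
    still-uncovered elements. Gives an H(n) ~ ln(n) approximation guarantee."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(range(len(subsets)), key=lambda i: len(uncovered & subsets[i]))
        if not uncovered & subsets[best]:
            raise ValueError("the subsets do not cover the universe")
        cover.append(best)
        uncovered -= subsets[best]
    return cover

universe = range(1, 11)
subsets = [set(range(1, 7)), set(range(5, 11)), {1, 2, 3}, {8, 9, 10}]
print(greedy_set_cover(universe, subsets))   # [0, 1]
```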
APA, Harvard, Vancouver, ISO, and other styles
18

Violich, Stephen Scott. "Fusing Loopless Algorithms for Combinatorial Generation." Thesis, University of Canterbury. Computer Science and Software Engineering, 2006. http://hdl.handle.net/10092/1075.

Full text
Abstract:
Loopless algorithms are an interesting challenge in the field of combinatorial generation. These algorithms must generate each combinatorial object from its predecessor in no more than a constant number of instructions, thus achieving theoretically minimal time complexity. This constraint rules out powerful programming techniques such as iteration and recursion, which makes loopless algorithms harder to develop and less intuitive than other algorithms. This thesis discusses a divide-and-conquer approach by which loopless algorithms can be developed more easily and intuitively: fusing loopless algorithms. If a combinatorial generation problem can be divided into subproblems, it may be possible to conquer it looplessly by fusing loopless algorithms for its subproblems. A key advantage of this approach is that it allows existing loopless algorithms to be reused. This approach is not novel, but it has not been generalised before. This thesis presents a general framework for fusing loopless algorithms, and discusses its implications. It then applies this approach to two combinatorial generation problems and presents two new loopless algorithms. The first new algorithm, MIXPAR, looplessly generates well-formed parenthesis strings comprising two types of parentheses. It is the first loopless algorithm for generating these objects. The second new algorithm, MULTPERM, generates multiset permutations in linear space using only arrays, a benchmark recently set by Korsh and LaFollette (2004). Algorithm MULTPERM is evaluated against Korsh and LaFollette's algorithm, and shown to be simpler and more efficient in both space and time.
APA, Harvard, Vancouver, ISO, and other styles
19

Lin, Han-Hsuan. "Topics in quantum algorithms : adiabatic algorithm, quantum money, and bomb query complexity." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99300.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 111-115).
In this thesis, I present three results on quantum algorithms and their complexity. The first one is a numerical study of the quantum adiabatic algorithm (QAA). We tested the performance of the QAA on random instances of MAX 2-SAT on 20 qubits and showed 3 strategies that improved the QAA's performance, including a counterintuitive strategy of decreasing the overall evolution time. The second result is a security proof for the quantum money by knots proposed by Farhi et al. We proved that quantum money by knots cannot be cloned in a black-box way unless graph isomorphism is efficiently solvable by a quantum computer. Lastly, we defined a modified quantum query model, which we call bomb query complexity B(f), inspired by the Elitzur-Vaidman bomb-testing problem. We completely characterized bomb query complexity by showing that B(f) = Θ(Q(f)^2). This result implies a new method for finding upper bounds on quantum query complexity, which we applied to the maximum bipartite matching problem to obtain an algorithm with O(n^1.75) quantum query complexity, improving on the best known trivial O(n^2) upper bound.
by Han-Hsuan Lin.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Sauerland, Volkmar [Verfasser]. "Algorithm Engineering for some Complex Practise Problems : Exact Algorithms, Heuristics and Hybrid Evolutionary Algorithms / Volkmar Sauerland." Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/1026442745/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ramage, Stephen Edward Andrew. "Advances in meta-algorithmic software libraries for distributed automated algorithm configuration." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52809.

Full text
Abstract:
A meta-algorithmic procedure is a computer procedure that operates upon another algorithm and its associated design space to produce another algorithm with desirable properties (e.g., faster runtime, better solution quality, ...; see e.g., Hoos [2008]). Many meta-algorithmic procedures have runtimes that are dominated by the runtime of the algorithm being operated on. This holds in particular for automatic algorithm configurators, such as ParamILS, SMAC, and GGA, which serve to optimize the design (expressed through user settable parameters) of an algorithm under certain use cases. Consequently, one can gain improved performance of the meta-algorithm if evaluations of the algorithm under study can be done in parallel. In this thesis, we explore a distributed version of the automatic configurator, SMAC, called pSMAC, and the library, AEATK, that it was built upon, which has proved general and versatile enough to support many other meta-algorithmic procedures.
Science, Faculty of
Computer Science, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
22

Liakhovitch, Evgueni. "Genetic algorithm using restricted sequence alignments." Ohio : Ohio University, 2000. http://www.ohiolink.edu/etd/view.cgi?ohiou1172598174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Kang, Seunghwa. "On the design of architecture-aware algorithms for emerging applications." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39503.

Full text
Abstract:
This dissertation maps various kernels and applications to a spectrum of programming models and architectures and also presents architecture-aware algorithms for different systems. The kernels and applications discussed in this dissertation have widely varying computational characteristics. For example, we consider both dense numerical computations and sparse graph algorithms. This dissertation also covers emerging applications from image processing, complex network analysis, and computational biology. We map these problems to diverse multicore processors and manycore accelerators. We also use new programming models (such as Transactional Memory, MapReduce, and Intel TBB) to address the performance and productivity challenges in the problems. Our experiences highlight the importance of mapping applications to appropriate programming models and architectures. We also find several limitations of current system software and architectures and directions to improve those. The discussion focuses on system software and architectural support for nested irregular parallelism, Transactional Memory, and hybrid data transfer mechanisms. We believe that the complexity of parallel programming can be significantly reduced via collaborative efforts among researchers and practitioners from different domains. This dissertation participates in the efforts by providing benchmarks and suggestions to improve system software and architectures.
APA, Harvard, Vancouver, ISO, and other styles
24

Kim, Yong Joo. "Block Lanczos algorithm." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Hess, Tylor (Tylor Joseph). "Algorithm deployment platform." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104283.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 75-81).
Algorithm users, such as researchers, clinicians, engineers, and scientists, want to run advanced, custom, new research algorithms. For example, doctors want to run algorithms developed by researchers for clinical applications. These algorithm users see an algorithm as a black box. They want to input data and get results without having to understand the intricacies of algorithm implementation and without having to download, install, configure, and debug complex software. We refer to these algorithm users as black-box users. Researchers and developers create the algorithms; therefore they understand the algorithms' inner workings. We refer to these algorithm developers as glass-box users. There is a need for a platform or technology that allows algorithm developers to efficiently deploy algorithms. We propose that the best way to do this is as a web application. Therefore, there is a need to deploy algorithms as web applications without having to learn web development. We developed a web application that enables algorithm users to run developers' algorithms on data stored locally or in cloud storage services. To deploy algorithms as web applications, developers upload their algorithms to cloud computing services. The developer has the option to create an object native to the language in which the algorithm was developed. The platform turns this object into HTML displayed to the algorithm users, so developers can deploy algorithms as web applications without having to learn web development, which is beneficial, since algorithms are often not developed in web-friendly languages. In addition, our platform allows developers to turn the computers that they developed their algorithms on into cloud computing resources, instead of leveraging existing cloud computing services. Using the developer's computer instead of existing cloud computing services is beneficial because their computers were already configured with the appropriate operating system, installed programs, licensed software, etc. to run the algorithms. We evaluated our design with three in-depth interviews, a twenty-one-person focus group, and a survey of six users, who estimated that our platform would significantly reduce deployment time.
by Tylor Hess.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Nicholson, Lori Eileen. "Quantum Algorithm Animator." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/262.

Full text
Abstract:
The design and development of quantum algorithms present a challenge, especially for inexperienced computer science students. Despite the numerous concepts shared with classical computer science, quantum computation is still considered a branch of theoretical physics not commonly used by computer scientists. Experimental research into the development of a quantum computer makes the use of quantum mechanics in organizing computation more attractive; however, the physical realization of a working quantum computer may still be decades away. This study introduces quantum computing to computer science students using a quantum algorithm animator called QuAL. QuAL's design uses features common to classical algorithm animators, guided by an exploratory study but refined to animate the esoteric and interesting aspects of quantum algorithms. In addition, this study investigates the potential for the animation of a quantum sorting algorithm to help novice computer science students understand the formidable concepts of quantum computing. The animations focus on the concepts required to understand enough about quantum algorithms to entice student interest and promote the integration of quantum computational concepts into computer science applications and curricula. The experimental case study showed no significant improvement in student learning when using QuAL's initial prototype. Possible reasons include the animator's presentation of concepts and the study's pedagogical framework, such as the choice of algorithm (Wallace and Narayanan's sorting algorithm), the design of pre- and post-tests, and the study's small size (20 students) and brief duration (2 hours). Nonetheless, the animation system was well received by students. Future work includes enhancing this animation tool for illustrating elusive concepts in quantum computing.
APA, Harvard, Vancouver, ISO, and other styles
27

Bailey, James Patrick. "Octanary branching algorithm." Thesis, Kansas State University, 2012. http://hdl.handle.net/2097/13801.

Full text
Abstract:
Master of Science
Department of Industrial and Manufacturing Systems Engineering
Todd Easton
Integer programs (IPs) are a class of discrete optimization problems that have been used commercially to improve various systems. IPs are often used to reach an optimal financial objective subject to constraints based upon resources, operations and other restrictions. While incredibly beneficial, IPs have been shown to be NP-complete, and many IPs remain unsolvable in practice. Traditionally, Branch and Bound (BB) has been used to solve IPs. BB is an iterative algorithm that enumerates all potential integer solutions of a given IP. BB can guarantee an optimal solution, if one exists, in finite time. However, BB can require an exponential number of nodes to be evaluated before terminating; as a result, the memory of a computer using BB can be exceeded, or it can take an excessively long time to find the solution. This thesis introduces a modified BB scheme called the Octanary Branching Algorithm (OBA). OBA introduces eight children in each iteration to partition the feasible region of the linear relaxation of the IP more effectively. OBA also introduces equality constraints in four of the children in order to reduce the dimension of the remaining nodes. OBA can guarantee an optimal solution, if one exists, in finite time. In addition, OBA has been shown to have some theoretical improvements over traditional BB. During computational tests, OBA was able to find the first, second and third integer solution with 64.8%, 27.9% and 29.3% fewer nodes evaluated, respectively, than CPLEX. These integer solutions were 44.9%, 54.7% and 58.2% closer to the optimal solution, respectively, when compared to CPLEX. It is recommended that commercial solvers incorporate OBA in the initialization and random diving phases of BB.
APA, Harvard, Vancouver, ISO, and other styles
28

Wladis, Simon. "Simulating Grover's Algorithm." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297556.

Full text
Abstract:
The purpose of this paper is to implement and simulate Grover's algorithm on one and several qubits on a classical computer. The theory behind the algorithm and its components is described in detail. The paper provides a proof of concept for one of the most remarkable results in the theory of quantum computation. I have constructed a library in Python to simulate the gates used in the algorithm, which can be used with an arbitrary number of qubits. The results of the simulations are intended to demonstrate the characteristics of the algorithm and its advantages compared to classical search.
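Along the lines described in the abstract, a state-vector simulation of Grover's algorithm for a single marked item can be written in a few lines of Python with NumPy. This sketch is not the author's library; the qubit count and marked index below are arbitrary.

```python
import numpy as np

def grover(n_qubits, marked):
    """Simulate Grover's search over N = 2**n_qubits basis states with the
    state vector stored as a dense NumPy array (feasible only for small n)."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))          # uniform superposition H|0...0>
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                      # oracle: flip the marked amplitude
        mean = state.mean()
        state = 2 * mean - state                 # diffusion: inversion about the mean
    return state

n = 8                                            # 256 basis states
amplitudes = grover(n, marked=42)
probs = amplitudes ** 2
print(probs[42], probs.sum())                    # ~1.0 for the marked item, total 1.0
```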
APA, Harvard, Vancouver, ISO, and other styles
29

Vin, Emmanuelle. "Genetic algorithm applied to generalized cell formation problems." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210160.

Full text
Abstract:
The objective of cellular manufacturing is to simplify the management of manufacturing industries. By regrouping the production of different parts into clusters, the management of manufacturing is reduced to managing several small entities. One of the most important problems in cellular manufacturing is the design of these entities, called cells. These cells represent clusters of machines that can be dedicated to the production of one or several parts. The ideal design of a cellular manufacturing system makes these cells totally independent from one another, i.e. each part is dedicated to only one cell (if it can be produced completely inside this cell). The reality is a little more complex: once the cells are created, some traffic still exists between them. This traffic corresponds to the transfer of a part between two machines belonging to different cells. The final objective is to reduce this traffic between the cells (called inter-cellular traffic).

Different methods exist to form these cells and dedicate them to parts. To create independent cells, a choice can be made between different ways of producing each part. Two interdependent problems must be solved:

• the allocation of each operation to a machine: each part is defined by one or several sequences of operations, and each operation can be performed by a set of machines; a final sequence of machines must be chosen to produce each part;

• the grouping of the machines into cells, which produces traffic inside and outside the cells.

Depending on the solution to the first problem, different clusters will be created to minimise the inter-cellular traffic.

In this thesis, an original method based on the grouping genetic algorithm (Gga) is proposed to solve these two interdependent problems simultaneously. The efficiency of the method is highlighted in comparison with methods based on two integrated algorithms or heuristics. Indeed, to form these cells of machines together with the allocation of operations to the machines, the methods used to solve large-scale problems are generally composed of two nested algorithms, where the main one calls the secondary one to complete the first part of the solution. The application domain goes beyond the manufacturing industry and can, for example, be applied to the design of electronic systems, as explained in the future research.
Doctorat en Sciences de l'ingénieur
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
30

Pieterse, Vreda. "Topic Maps for Specifying Algorithm Taxonomies : a case Study using Transitive Closure Algorithms." Thesis, University of Pretoria, 2016. http://hdl.handle.net/2263/59307.

Full text
Abstract:
The need for storing and retrieving knowledge about algorithms is addressed by creating a specialised information management scheme. This scheme is operationalised in terms of a topic map of algorithms. Metadata are specified for the adequate and precise description of algorithms. The specification describes both the data elements (called attributes) that are relevant to algorithms as well as the relationship of attributes to one another. In addition, a process is formalised for gathering data about algorithms and capturing it in the proposed topic map. The proposed process model and representation scheme are then illustrated by applying them to gather and represent information about transitive closure algorithms. To ensure that this thesis is self-contained, several themes about transitive closures are covered comprehensively. These include the mathematical domain-specific knowledge about transitive closures, methods for calculating the transitive closure of binary relations and techniques that can be applied in transitive closure algorithms. The work presented in this thesis has a multidisciplinary character. It contributes to the domains of formal aspects, algorithms, mathematical sciences, information sciences and software engineering. It has a strong formal foundation. The confirmation of the correctness of algorithms as well as reasoning regarding the complexity of algorithms are key aspects of this thesis. The content of this thesis revolves around algorithms: their attributes; how they relate to one another; and how new versions of the algorithms may be discovered. The introduction of new mathematical concepts and notational elements as well as new rigorous proofs contained in the thesis, extend the mathematical science domain. The main problem addressed in this thesis is an information management need. The technology, namely topic maps, used here to address the problem originated in the information science domain. It is applied in a new context that ultimately has the potential to lead to the automation of aspects of software implementation. This influences the traditional software engineering life cycle and quality of software products.
Thesis (PhD)--University of Pretoria, 2016.
Computer Science
PhD
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
31

Majewsky, Stefan. "Training of Hidden Markov models as an instance of the expectation maximization algorithm." Bachelor's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-226903.

Full text
Abstract:
In Natural Language Processing (NLP), speech and text are parsed and generated with language models and parser models, and translated with translation models. Each model contains a set of numerical parameters which are found by applying a suitable training algorithm to a set of training data. Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the Hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm introduced by [BSV15], from which follows that all statements about the generic EM algorithm also apply to the Baum-Welch algorithm, especially its correctness and convergence properties.
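For reference, the Baum-Welch updates have the familiar EM shape: an E-step that computes posterior state and transition probabilities from the forward and backward variables, and an M-step that re-estimates the parameters. The formulas below are the standard textbook form for a single observation sequence o_1..o_T with discrete emissions, not the generic formulation of [BSV15].

```latex
% E-step: posteriors from the forward/backward variables \alpha_t(i), \beta_t(i),
% computed under the current parameters (\pi, A = (a_{ij}), B = (b_j(\cdot))).
\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_k \alpha_t(k)\,\beta_t(k)},
\qquad
\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}
                  {\sum_k \sum_l \alpha_t(k)\, a_{kl}\, b_l(o_{t+1})\, \beta_{t+1}(l)}

% M-step: re-estimation maximizing the expected complete-data log-likelihood.
\hat{\pi}_i = \gamma_1(i),
\qquad
\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)},
\qquad
\hat{b}_j(v) = \frac{\sum_{t=1}^{T} \mathbf{1}[o_t = v]\,\gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}
```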
APA, Harvard, Vancouver, ISO, and other styles
32

Schröder, Anna Marie. "Unboxing The Algorithm : Understandability And Algorithmic Experience In Intelligent Music Recommendation Systems." Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43841.

Full text
Abstract:
After decades of black-boxing the existence of algorithms in technologies of daily need, users lack confidence in handling them. This thesis study investigates the use situation of intelligent music recommendation systems and explores how understandability as a principle drawn from sociology, design, and computing can enhance the algorithmic experience. In a Research-Through-Design approach, the project conducted focus user sessions and an expert interview to explore first-hand insights. The analysis showed that users had limited mental models so far but brought curiosity to learn. Explorative prototyping revealed that explanations could improve the algorithmic experience in music recommendation systems. Users could comprehend information the best when it was easy to access and digest, directly related to user behavior, and gave control to correct the algorithm. Concluding, trusting users with more transparent handling of algorithmic workings might make authentic recommendations from intelligent systems applicable in the long run.
APA, Harvard, Vancouver, ISO, and other styles
33

Lawrence, Andrea Williams. "Empirical studies of the value of algorithm animation in algorithm understanding." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/9213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Nan. "A Framework of Transforming Vertex Deletion Algorithm to Edge Deletion Algorithm." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504878748832156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Pochet, Juliette. "Evaluation de performance d’une ligne ferroviaire suburbaine partiellement équipée d’un automatisme CBTC." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC005.

Full text
Abstract:
In high-density areas, the demand for railway transportation is continuously increasing, and operating companies are turning to advanced signalling and train control systems, such as Communication Based Train Control (CBTC) systems previously deployed on underground systems only. CBTC systems operate trains under automatic pilot and increase line capacity without expensive modification of the infrastructure. They can also include a supervision module in charge of adapting train behaviour to operating objectives and to disturbances, thereby increasing line robustness. In the literature on real-time traffic management, various rescheduling methods have been proposed, on the one hand for underground systems and on the other hand for heavy railway systems. Building on the state of the art in both fields, the work presented in this thesis contributes to adapting the supervision and rescheduling functions of CBTC systems to the operation of suburban railway lines. Our approach starts with the design of the functional architecture of a supervision module for a standard CBTC system. We then propose a rescheduling method based on a model predictive control strategy and a multi-objective optimization of the commands sent to automatic trains. Evaluating precisely the performance of a suburban railway line equipped with a CBTC system requires a suitable microscopic simulation tool. We present the SNCF tool named SIMONE, which allows a functionally and dynamically realistic simulation of a railway system including a CBTC system; as part of this thesis, we took part, with the SNCF team, in the specification, design, and implementation of this tool. Finally, using SIMONE, the proposed rescheduling method was tested on scenarios involving disturbances and compared to an individual rescheduling method based on a simple heuristic. The multi-objective method provides good solutions to the rescheduling problem, in most cases more satisfactory than those of the individual method, with an acceptable computation time. The manuscript ends with perspectives for future research.
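As an illustration of the kind of multi-objective, predictive rescheduling the abstract describes, the following minimal Python sketch evaluates candidate run-time adjustments for a handful of trains over a short horizon and keeps the cheapest plan under a weighted sum of objectives. It is a hypothetical toy, not the thesis's method nor the SIMONE tool; the prediction model, objectives, and weights are all assumptions.

from itertools import product

def predicted_delay(delay_s, factor):
    # Toy prediction: running 5 % faster (factor = 0.95) recovers up to 30 s of delay.
    return max(0.0, delay_s + 600.0 * (factor - 1.0))

def plan_cost(factors, delays, w_delay=1.0, w_effort=100.0):
    residual = sum(predicted_delay(d, f) for d, f in zip(delays, factors))  # objective 1: residual delay
    effort = sum((f - 1.0) ** 2 for f in factors)                           # objective 2: energy/comfort proxy
    return w_delay * residual + w_effort * effort

def reschedule(delays, candidates=(0.95, 1.0)):
    # Enumerate candidate run-time factors for each train and keep the cheapest plan.
    best = min(product(candidates, repeat=len(delays)),
               key=lambda plan: plan_cost(plan, delays))
    return best, plan_cost(best, delays)

print(reschedule([60.0, 20.0, 0.0]))  # current delays in seconds for three trains

A real CBTC supervision module would of course predict delays with a microscopic traffic model and search the command space far more cleverly; the enumeration above only conveys the "predict, score, select" loop.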
APA, Harvard, Vancouver, ISO, and other styles
36

Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.

Full text
Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Mavris, Dimitri; Committee Member: Beeson, Don; Committee Member: Duncan, Scott; Committee Member: German, Brian; Committee Member: Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
37

Sehovic, Mirsad, and Markus Carlsson. "Nåbarhetstestning i en baneditor : En undersökning i hur nåbarhetstester kan implementeras i en baneditor samt funktionens potential i att ersätta manuell testning." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-36394.

Full text
Abstract:
This study examines whether it is possible to implement reachability testing in a map editor designed for 2D platform games. The purpose of the test is to replace manual testing, that is, to spare the level designer from having to play through the map to verify that every supposedly reachable position can actually be reached. A simple map editor is created as a test platform, and a comparative study of several alternative algorithms is carried out to determine which is best suited for reachability testing in a map editor. The comparison shows that A* (A star) was the most suitable algorithm for this function. Whether automatic testing can fully replace manual testing is open for debate, but the results point to an increase in time efficiency when it comes to level design.
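For readers unfamiliar with A*, the sketch below shows a minimal grid-based reachability check in Python. It is a hypothetical illustration, not code from the thesis: real reachability testing for a 2D platformer would also have to model jump arcs and gravity in the neighbour expansion.

import heapq

def astar_reachable(grid, start, goal):
    """grid: list of strings, '#' = solid tile, '.' = free; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance, admissible for 4-neighbour moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, (r, c) = heapq.heappop(open_heap)
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return False

level = ["....#....",
         ".##.#.##.",
         "....#....",
         ".#######.",
         "........."]
print(astar_reachable(level, (0, 0), (0, 8)))  # True: a path exists around the walls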
APA, Harvard, Vancouver, ISO, and other styles
38

Kaur, Harpreet. "Algorithms for solving the Rubik's cube : A study of how to solve the Rubik's cube using two famous approaches: The Thistlewaite's algorithm and IDA* algorithm." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168427.

Full text
Abstract:
There are different computational algorithms for solving the Rubik's cube, such as Thistlewaite's algorithm, Kociemba's algorithm and the IDA* algorithm. This thesis evaluates the efficiency of two of these algorithms by analyzing time, performance and the number of moves required to solve the Rubik's cube. The results show that Thistlewaite's algorithm is less efficient than the IDA* algorithm in terms of time and performance. The paper attempts to answer which algorithm is more efficient for solving the Rubik's cube. It is important to mention that, due to limited data, this report could not prove which algorithm is most efficient for solving the whole cube; instead, literature studies and other authors are used to argue that Korf's IDA* algorithm is the more efficient one.
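The IDA* strategy mentioned above (iterative deepening on the bound f = g + h) can be captured in a short, generic skeleton. The sketch below is a hypothetical illustration on a toy line graph, not the thesis's cube solver; for the Rubik's cube, the successor function and heuristic (for example pattern databases) would be far more elaborate.

def ida_star(start, is_goal, successors, h):
    """Iterative-deepening A*: depth-first search bounded by f = g + h, bound raised each pass."""
    bound = h(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                     # report the smallest f that exceeded the bound
        if is_goal(node):
            return True
        minimum = float("inf")
        for succ, cost in successors(node):
            if succ in path:             # avoid trivial cycles
                continue
            path.append(succ)
            result = search(g + cost, bound)
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0, bound)
        if result is True:
            return path
        if result == float("inf"):
            return None                  # goal unreachable
        bound = result                   # next pass with a slightly larger bound

# Toy usage: shortest path on the line graph 0 - 1 - 2 - 3, heuristic = distance to 3.
def neighbours(n):
    return [(m, 1) for m in (n - 1, n + 1) if 0 <= m <= 3]

print(ida_star(0, lambda n: n == 3, neighbours, lambda n: 3 - n))  # [0, 1, 2, 3]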
APA, Harvard, Vancouver, ISO, and other styles
39

Corbineau, Marie-Caroline. "Proximal and interior point optimization strategies in image recovery." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC085/document.

Full text
Abstract:
Inverse problems in image processing can be solved by diverse techniques, such as classical variational methods, recent deep learning approaches, or Bayesian strategies. Although relying on different principles, these methods all require efficient optimization algorithms. The proximity operator is a crucial tool in many iterative solvers for nonsmooth optimization problems. In this thesis, we illustrate the versatility of proximal algorithms by incorporating them within each of the aforementioned resolution methods. First, we consider a variational formulation including a set of constraints and a composite objective function. We present PIPA, a novel proximal interior point algorithm for solving the considered optimization problem. This algorithm includes variable metrics for acceleration purposes. We derive convergence guarantees for PIPA and show in numerical experiments that it compares favorably with state-of-the-art algorithms in two challenging image processing applications. In a second part, we investigate a neural network architecture called iRestNet, obtained by unfolding a proximal interior point algorithm over a fixed number of iterations. iRestNet requires the expression of the logarithmic barrier proximity operator and of its first derivatives, which we provide for three useful types of constraints. We then derive conditions under which this optimization-inspired architecture is robust to an input perturbation. We conduct several image deblurring experiments in which iRestNet performs well with respect to a variational approach and to state-of-the-art deep learning methods. The last part of this thesis focuses on a stochastic sampling method for solving inverse problems in a Bayesian setting. We present an accelerated proximal unadjusted Langevin algorithm called PP-ULA. This scheme is incorporated into a hybrid Gibbs sampler used to perform joint deconvolution and segmentation of ultrasound images. PP-ULA employs the majorize-minimize principle to address non-log-concave priors. As shown in numerical experiments, PP-ULA leads to a significant time reduction and to very satisfactory deconvolution and segmentation results on both simulated and real ultrasound data.
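To make the role of the proximity operator concrete, the sketch below shows a plain proximal gradient (ISTA) iteration for a composite objective with an l1 penalty, whose proximity operator is soft-thresholding. This is only a generic illustration with synthetic data, not the PIPA, iRestNet, or PP-ULA schemes developed in the thesis.

import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Minimise 0.5 * ||A x - y||^2 + lam * ||x||_1 by alternating a gradient step
    # on the smooth term with the proximal step on the nonsmooth term.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, with L the gradient's Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                     # gradient of the smooth data-fit term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true
print(np.round(ista(A, y, lam=0.1), 2)[[3, 17, 42]])  # approximately recovers the sparse coefficients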
APA, Harvard, Vancouver, ISO, and other styles
40

Legay, Sylvain. "Quelques problèmes d'algorithmique et combinatoires en théorie des grapphes." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS030/document.

Full text
Abstract:
This thesis is about graph theory. Formally, a graph is a set of vertices and a set of edges, which are pairs of vertices linking the vertices. This thesis deals with various decision and minimization problems linked to the notion of graph and, for each of these problems, tries to determine its complexity class or to give an algorithm. The first chapter is about the problem of finding the smallest connected tropical subgraph of a vertex-colored graph, that is, the smallest connected subgraph containing every color. The second chapter is about tropical homomorphism problems, a generalization of graph coloring problems; a link between these problems and several other classes of homomorphism problems is established, in particular with the class of Constraint Satisfaction Problems. The third chapter is about two distant variants of the domination problem, namely the global alliance problems in a weighted graph and the safe set problem. The fourth chapter is about the problem of finding a star tree-decomposition, which is a tree-decomposition where the radius of each bag is 1. Finally, the fifth chapter is about a variant of the problem of deciding the asymptotic behavior of the iterated biclique graph.
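A smallest connected tropical subgraph can be found by brute force on very small instances, which makes the definition concrete even though it says nothing about the complexity results of the thesis. The sketch below enumerates vertex subsets by increasing size and returns the first one that is connected and contains every colour; the example graph and colouring are hypothetical.

from itertools import combinations

def is_connected(vertices, adj):
    vertices = set(vertices)
    stack, seen = [next(iter(vertices))], set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend((adj[v] & vertices) - seen)     # only walk inside the chosen subset
    return seen == vertices

def smallest_tropical_subgraph(adj, colour):
    all_colours = set(colour.values())
    for k in range(1, len(adj) + 1):                 # increasing subset size
        for subset in combinations(adj, k):
            if {colour[v] for v in subset} == all_colours and is_connected(subset, adj):
                return set(subset)
    return None

adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3}}
colour = {0: "r", 1: "g", 2: "b", 3: "r", 4: "b"}
print(smallest_tropical_subgraph(adj, colour))       # e.g. {0, 1, 2}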
APA, Harvard, Vancouver, ISO, and other styles
41

Komínek, Jan. "Heuristické algoritmy pro optimalizaci." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-230306.

Full text
Abstract:
This diploma thesis deals with genetic algorithms and their properties. Particular emphasis is placed on determining the influence of mutation and population size. In the second part of the thesis, genetic algorithms are applied to inverse heat conduction problems (IHCP). Several different approaches and coding methods were tested. The properties of the genetic algorithms were improved by defining two new genetic operators, manipulation and sorting. The reported theoretical findings were tested on real data from an inverse heat conduction problem. A C++ library for easy implementation of GA for solving general optimization problems was created and is described in the last chapter.
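The following minimal Python sketch shows the structure of a basic genetic algorithm with the two parameters the thesis studies, mutation rate and population size. It is a generic illustration on a toy "count the ones" fitness, not the C++ library described in the abstract.

import random

def evolve(n_bits=32, pop_size=40, mutation_rate=0.02, generations=100, seed=1):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)                   # toy objective: maximise the number of 1s
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                                  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "ones out of 32")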
APA, Harvard, Vancouver, ISO, and other styles
42

Staicu, Laurian. "Multiple query points parallel search algorithm (Comb algorithm) for multimedia database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59340.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Levitt, Nicholas D. (Nicholas David). "The Kooshball algorithm--a ray tracing region growing algorithm for medical data." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/31053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Norrod, Forrest Eugene. "The E-algorithm: an automatic test generation algorithm for hardware description languages." Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/43260.

Full text
Abstract:
Traditional test generation techniques for digital circuits have been rendered inadequate by the increasing levels of integration achieved by VLSI technology. This thesis presents a test generation algorithm, the E-algorithm, that generates tests for circuits described using the VHDL Hardware Description Language. A fault model has been developed that addresses data path faults, faults in control structures, and faults in functional operators. The E-algorithm is able to generate tests for all modeled fault types, and handles a wide variety of circuit types, including sequential circuits. The algorithm has been implemented; preliminary results are given.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
45

Janagam, Anirudh, and Saddam Hossen. "Analysis of Network Intrusion Detection System with Machine Learning Algorithms (Deep Reinforcement Learning Algorithm)." Thesis, Blekinge Tekniska Högskola, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Gargulák, David. "Animace algoritmů v prostředí Silverlight." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236635.

Full text
Abstract:
The goal of this work was to create a program for the animation of algorithms in Silverlight. The Silverlight module was developed on the .NET platform using the C# programming language. This work also contains basic information about the Silverlight module and the similar Flash platform.
APA, Harvard, Vancouver, ISO, and other styles
47

Jannesson, Johan. "Seat heating smart algorithm." Thesis, University West, Department of Technology, Mathematics and Computer Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-338.

Full text
Abstract:

The goal of this project was to build a model and a controller for the seat heater and steering wheel heater in SAAB cars. SAAB manufactures two car models, the 9-3 and the 9-5. The aim is to control the seat heater in both car models without any temperature sensor in the seat, in order to reduce cost. Several tests have been carried out, both in climate chambers and during road tests. These tests eventually led to a mathematical model of the temperature dependence, and this model has been used to design an open-loop controller for the seat heater.
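A sensorless, open-loop seat heater controller can be sketched from an assumed first-order thermal model; the code below is purely illustrative and not SAAB's controller. The constants K and TAU stand in for values that would have to be identified from climate-chamber and road tests, and the quasi-static duty-cycle formula is an assumption.

import math

K = 45.0         # assumed steady-state temperature rise at full power [deg C]
TAU = 180.0      # assumed thermal time constant [s]
T_TARGET = 37.0  # assumed target seat surface temperature [deg C]

def duty_cycle(t_seconds, t_ambient):
    """Open-loop duty cycle at time t so the modelled temperature settles near T_TARGET."""
    rise_needed = max(0.0, T_TARGET - t_ambient)
    warmup = 1.0 - math.exp(-t_seconds / TAU)        # fraction of the final rise reached at time t
    if warmup < 1e-6:
        return 1.0                                   # full power right after start-up
    return min(1.0, rise_needed / (K * warmup))

for t in (0, 60, 180, 600):                          # duty cycle over time at 20 deg C ambient
    print(t, round(duty_cycle(t, t_ambient=20.0), 2))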

APA, Harvard, Vancouver, ISO, and other styles
48

Uzor, Chigozirim. "Compact dynamic optimisation algorithm." Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/13056.

Full text
Abstract:
In recent years, the field of evolutionary dynamic optimisation has seen a significant increase in scientific developments and contributions, as a result of its relevance to solving academic and real-world problems. Several techniques such as hyper-mutation, hyper-learning, hyper-selection, change detection and many more have been developed specifically for solving dynamic optimisation problems. However, the complex structure of algorithms employing these techniques makes them unsuitable for real-world, real-time dynamic optimisation problems on embedded systems with limited memory. The work presented in this thesis focuses on a compact approach as an alternative to population-based optimisation algorithms, suitable for solving real-time dynamic optimisation problems. Specifically, a novel compact dynamic optimisation algorithm suitable for embedded systems with limited memory is presented. Three novel dynamic approaches that augment and enhance the evolving properties of the compact genetic algorithm in dynamic environments are introduced. These are (1) a change detection scheme that measures the degree of dynamic change, (2) mutation schemes in which the mutation rate is directly linked to the detected degree of change, and (3) a change trend scheme that monitors the change pattern exhibited by the system. The novel compact dynamic optimisation algorithm was applied to two differing dynamic optimisation problems: tuning a controller for a physical target system in a dynamic environment, and solving a dynamic optimisation problem generated by an artificial dynamic environment generator. The novel compact dynamic optimisation algorithm was compared to several existing dynamic optimisation techniques. Through a series of experiments, it was shown that maintaining diversity at the population level is more efficient than maintaining diversity at the individual level. Among the five variants of the novel compact dynamic optimisation algorithm, the third variant showed the best performance in terms of response to dynamic changes and solution quality. Furthermore, it was demonstrated that information transfer based on dynamic change patterns can effectively reduce the exploration/exploitation dilemma in a dynamic environment.
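The compact genetic algorithm that the thesis extends replaces the population with a probability vector, which is what makes it attractive for memory-limited embedded systems. The sketch below shows the basic cGA on a toy problem; the thesis's dynamic extensions (change detection, change-linked mutation, change-trend monitoring) are only hinted at in a comment and are not implemented here.

import random

def compact_ga(fitness, n_bits, virtual_pop=50, max_evals=20000, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits                               # probability that each bit equals 1
    sample = lambda: [1 if rng.random() < pi else 0 for pi in p]
    evals = 0
    while evals < max_evals and not all(pi in (0.0, 1.0) for pi in p):
        a, b = sample(), sample()
        evals += 2
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:                # shift probability towards the winner's bit
                step = 1.0 / virtual_pop
                p[i] = min(1.0, p[i] + step) if winner[i] == 1 else max(0.0, p[i] - step)
        # A dynamic variant would, for example, re-inflate p towards 0.5 when a change in
        # the environment is detected, by an amount tied to the measured degree of change.
    return [1 if pi > 0.5 else 0 for pi in p]

onemax = lambda ind: sum(ind)                        # toy fitness: maximise the number of 1s
print(sum(compact_ga(onemax, n_bits=24)), "ones out of 24")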
APA, Harvard, Vancouver, ISO, and other styles
49

Nallagandla, Shilpa. "Radix 2 division algorithm /." Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1251871361&sid=5&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Khan, Shoab Ahmad. "Logic and algorithm partitioning." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/13738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
