Dissertations / Theses on the topic 'Algorithms'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Algorithms.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Yarmolskyy, Oleksandr. "Využití distribuovaných a stochastických algoritmů v síti." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-370918.
Crocomo, Márcio Kassouf. "Algoritmo de otimização bayesiano com detecção de comunidades." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-23012013-160605/.
Estimation of Distribution Algorithms (EDAs) represent a research area showing promising results, especially in dealing with complex large-scale problems. In this context, the Bayesian Optimization Algorithm (BOA) uses a multivariate model (represented by a Bayesian network) to find new solutions at each iteration. Based on BOA and on the study of community detection algorithms (to improve the constructed multivariate models), two new algorithms are proposed, named CD-BOA and StrOp. This work indicates that both algorithms have significant advantages over BOA. CD-BOA is shown to be more flexible, being more robust to different input parameters, which makes it easier to deal with a greater diversity of real-world problems. Unlike CD-BOA and BOA, StrOp shows that the detection of communities on a Bayesian network models decomposable problems more adequately, resulting in simpler subproblems that can be solved by a greedy search; this yields a solution to the original problem which is optimal in the case of perfectly decomposable problems, and a fair approximation otherwise. Another proposal is a new resampling technique for EDAs, called REDA, which produces multivariate models that are more representative, significantly improving the performance of CD-BOA and StrOp. In general, it is shown that, for the scenarios tested, CD-BOA and StrOp require less running time than BOA; this is established both experimentally and by analysis of the computational complexity of the algorithms. The main features of these algorithms are evaluated on various problems, identifying their contributions to the field of Evolutionary Computation.
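CD-BOA and StrOp build Bayesian-network models, but the estimate-sample loop that all Estimation of Distribution Algorithms share is easy to illustrate. The following minimal univariate EDA (PBIL-style) in Python is an illustration of that loop, not code from the thesis; BOA replaces the independent-marginals model fitted below with a Bayesian network.

    import random

    def univariate_eda(fitness, n_bits=20, pop=100, top=25, iters=50):
        # Model: marginal probability that each bit equals 1.
        p = [0.5] * n_bits
        population = []
        for _ in range(iters):
            # Sample a population from the current model.
            population = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
                          for _ in range(pop)]
            # Select the best individuals.
            population.sort(key=fitness, reverse=True)
            elite = population[:top]
            # Re-estimate the model from the selected individuals.
            # (BOA fits a Bayesian network here instead of independent marginals.)
            p = [sum(ind[i] for ind in elite) / top for i in range(n_bits)]
        return max(population, key=fitness)

    print(univariate_eda(fitness=sum))  # OneMax toy problem: maximize the number of 1s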
Davidsdottir, Agnes. "Algorithms, Turing machines and algorithmic undecidability." Thesis, Uppsala universitet, Algebra och geometri, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-441282.
Tikhonenko, Dmitrii. "Managing algorithmic drivers in a blocked-lane scenario." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/670829.
In this thesis, different aspects of managing algorithmic drivers in a blocked-lane scenario are discussed. The first chapter presents an algorithm for the optimal management of selfish drivers. The optimal policy may include undesired speed oscillations; we propose measures for a central planner to eradicate them, and we test the efficiency of our algorithm against popular heuristic policies. In the second chapter, a mechanism for positional bidding among drivers is developed. It allows drivers with heterogeneous valuations of time to trade their road positions, leading to a socially beneficial outcome. The final chapter presents a deep learning policy for centralized clearing of the bottleneck in the shortest possible time. It is fast enough in use to allow future operational applications, and its training set consists of globally optimal merging policies.
Sauerland, Volkmar [Verfasser]. "Algorithm Engineering for some Complex Practise Problems : Exact Algorithms, Heuristics and Hybrid Evolutionary Algorithms / Volkmar Sauerland." Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/1026442745/34.
Full textAguiar, Marilton Sanchotene de. "Análise formal da complexidade de algoritmos genéticos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1998. http://hdl.handle.net/10183/25941.
The objective of this work is to study the viability of treating optimization problems considered intractable through Genetic Algorithms, developing approaches for the qualitative evaluation of a Genetic Algorithm. Within this theme, the areas approached are complexity, classes of problems, analysis and development of algorithms, and Genetic Algorithms, the last being the central object of study. The fact that a problem is theoretically solvable is not enough to mean that it is solvable in practice. A problem is called easy if, in the worst case, it possesses a reasonably efficient algorithm, and an algorithm is reasonably efficient when there exists a polynomial p such that for any input size n the algorithm terminates in at most p(n) steps [SZW 84]. Since a polynomial can be of very high order, an algorithm of polynomial complexity can still be very inefficient. The premise of Genetic Algorithms is that one can find approximate solutions to problems of great computational complexity by means of a process of simulated evolution [LAG 96]. As a product of this study, a quality-conscious method for developing Genetic Algorithms is proposed, using the formal study of types of problems, the development of approximate algorithms, and complexity analysis. The axiom set gives the semantics of the algorithm; that is, it formally defines the operation of the algorithm, more specifically of its functions and procedures. This gives the algorithm designer greater safety in development, because to prove the correctness of a Genetic Algorithm satisfying the model it is only necessary to prove that the procedures satisfy the axioms. To be aware of the quality of an approximate algorithm, two factors are important: accuracy and complexity. This work raises the important points for the study of the complexity of a Genetic Algorithm. Unfortunately, these are conflicting factors: the greater the accuracy, the worse (higher) the complexity, and vice versa. Thus, a study of the quality of a Genetic Algorithm, considered as an approximate algorithm, would only be complete when both factors are considered. Nevertheless, this work provides a great step toward studying the viability of treating optimization problems through Genetic Algorithms.
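As background for the analysis above, the structure of a generational Genetic Algorithm is roughly the following (a minimal sketch with illustrative operators and parameters, not the thesis's formal model). Its worst-case running time is on the order of generations × population size × (fitness evaluation cost + operator cost), which is the kind of quantity the proposed formal study bounds.

    import random

    def genetic_algorithm(fitness, n_bits=32, pop_size=50, generations=100,
                          crossover_rate=0.9, mutation_rate=0.01):
        # Random initial population of bit strings.
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

        def select():
            # Binary tournament selection.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        for _ in range(generations):
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = select(), select()
                # One-point crossover.
                if random.random() < crossover_rate:
                    cut = random.randrange(1, n_bits)
                    child = p1[:cut] + p2[cut:]
                else:
                    child = p1[:]
                # Bit-flip mutation.
                child = [1 - g if random.random() < mutation_rate else g for g in child]
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    print(genetic_algorithm(sum))  # OneMax toy objective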
Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.
Committee Chair: Mavris, Dimitri; Committee Member: Beeson, Don; Committee Member: Duncan, Scott; Committee Member: German, Brian; Committee Member: Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Violich, Stephen Scott. "Fusing Loopless Algorithms for Combinatorial Generation." Thesis, University of Canterbury. Computer Science and Software Engineering, 2006. http://hdl.handle.net/10092/1075.
Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries which has the largest average entry. It was shown in [13], using non-constructive methods, that the largest average value of a k × k submatrix is 2(1 + o(1))√(log n/k) with high probability (w.h.p.) when k = O(log n/log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n/k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1))√(2 log n/k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n/k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows. To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n/k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* := 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n/k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut, normalized by the number of nodes, belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c.
We observe that every maximum-size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem. We solve this underlying large deviations problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by an application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and for the case when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds both on the sample complexity and on the computational cost of reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm lies in two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily to control the impact of these regularization steps.
by Quan Li.
Ph. D.
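As an illustration of the greedy procedure analyzed above, here is a minimal sketch of an LAS-style alternating search (a reconstruction for illustration, not the thesis code): fix k random columns, repeatedly take the k rows with the largest sums restricted to the current columns, then the k columns with the largest sums restricted to the current rows, and stop when the average entry no longer improves.

    import numpy as np

    def las_greedy(A, k, seed=0):
        """Alternating row/column ascent toward a k x k submatrix with large average."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        cols = rng.choice(n, size=k, replace=False)
        prev = -np.inf
        while True:
            rows = np.argsort(A[:, cols].sum(axis=1))[-k:]   # best rows for these columns
            cols = np.argsort(A[rows, :].sum(axis=0))[-k:]   # best columns for these rows
            avg = A[np.ix_(rows, cols)].mean()
            if avg <= prev:                                  # local maximum reached
                return rows, cols, avg
            prev = avg

    n, k = 500, 5
    A = np.random.default_rng(1).standard_normal((n, n))
    rows, cols, avg = las_greedy(A, k)
    print(avg, (2 * np.log(n) / k) ** 0.5)   # compare with the (1 + o(1))·sqrt(2 log n / k) prediction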
Astete Morales, Sandra. "Contributions to Convergence Analysis of Noisy Optimization Algorithms." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS327/document.
This thesis presents contributions to the analysis of algorithms for noisy functions. It establishes convergence rates for line-search algorithms as well as for random-search algorithms. We prove, in terms of Simple Regret and Cumulative Regret, that a Hessian-based algorithm can reach the same results as some optimal algorithms in the literature when its parameters are tuned correctly. On the other hand, we analyse the convergence order of Evolution Strategies when solving noisy functions, deducing log-log convergence, and we give a lower bound for the convergence rate of Evolution Strategies. We extend the work on reevaluation by applying it to a discrete setting. Finally, we analyse the performance measure itself and prove that the use of an erroneous performance measure can lead to misleading results in the evaluation of different methods.
Niedermeier, Rolf. "Invitation to fixed-parameter algorithms /." Oxford [u.a.] : Oxford Univ. Press, 2006. http://www.gbv.de/dms/ilmenau/toc/500666768niede.PDF.
Dementiev, Roman. "Algorithm engineering for large data sets hardware, software, algorithms." Saarbrücken VDM, Müller, 2006. http://d-nb.info/986494429/04.
Dementiev, Roman. "Algorithm engineering for large data sets : hardware, software, algorithms /." Saarbrücken : VDM-Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3029033&prov=M&dok_var=1&dok_ext=htm.
Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.
Includes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms over the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR factorization. We show that, if the coefficients of the input polynomials are allowed to be any computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP of all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[ξ], and Q[ξ], where ξ is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as that of the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as that of the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
by Pramook Khungurn.
M.Eng.
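For orientation, the first of the two algorithms studied above is the classical Euclidean algorithm; a minimal floating-point sketch follows (illustrative only: the stopping tolerance below is an arbitrary choice, and how the precision must grow with the input is exactly what the MCP bounds quantify).

    import numpy as np

    def poly_gcd(a, b, tol=1e-12):
        """Euclidean algorithm on coefficient arrays (highest degree first)."""
        a = np.trim_zeros(np.asarray(a, float), 'f')
        b = np.trim_zeros(np.asarray(b, float), 'f')
        while b.size and np.max(np.abs(b)) > tol:
            _, r = np.polydiv(a, b)            # a = q*b + r
            a, b = b, np.trim_zeros(r, 'f')
        return a / a[0]                        # normalize to a monic polynomial

    # gcd of (x - 1)(x - 2) and (x - 1)(x - 3) should be x - 1
    print(poly_gcd(np.poly([1, 2]), np.poly([1, 3])))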
Johansson, Björn, and Emil Österberg. "Algorithms for Large Matrix Multiplications : Assessment of Strassen's Algorithm." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230742.
Strassen's algorithm was one of the breakthroughs in matrix analysis in 1968. This report presents Volker Strassen's algorithm for matrix multiplications along with theories about precision, the benefits of using this algorithm compared to naive matrix multiplication, its implications, and how its performance compares to the naive algorithm. Strassen's algorithm is also assessed on how the output differs as the matrix sizes grow larger, and on how the theoretical complexity of the algorithm differs from the achieved complexity. The studies found that Strassen's algorithm outperformed naive matrix multiplication at matrix sizes 1024 × 1024 and above. The achieved complexity was a little higher than Volker Strassen's theoretical one. The optimal precision for this case was double precision (Float64). How the algorithm is implemented in code matters for its performance. A number of techniques need to be considered to improve Strassen's algorithm: optimizing its termination criterion, the manner in which matrices are padded to make the algorithm more usable for recursive application, and the way it is implemented, e.g., with parallel computing. Even though it could be shown that Strassen's algorithm outperforms the naive one beyond a certain matrix size, it is still not the most efficient, as shown, e.g., by Strassen-Winograd. One needs to be careful about how the sub-matrices are allocated, so as not to use unnecessary memory. For further reading one can study cache-oblivious and cache-aware algorithms.
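For reference, Strassen's scheme replaces the eight block products of naive divide-and-conquer with seven. A minimal recursive sketch in Python with NumPy, assuming square matrices whose size is a power of two (the padding and cutoff tuning discussed above are what a production implementation adds):

    import numpy as np

    def strassen(A, B, cutoff=64):
        n = A.shape[0]                  # assumes n x n with n a power of two
        if n <= cutoff:                 # termination criterion: fall back to BLAS
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        # Strassen's seven products instead of the naive eight.
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C

    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    print(np.allclose(strassen(A, B), A @ B))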
Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.
Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time expectations of quick progress in the quantum computing project dominated in the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress had been made in quantum technologies, it is clear that there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers became an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms would help to solve NP problems in polynomial time on classical computers. However, this is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. One of the most important problems in "quantum computer science" is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and corresponding simulation on the classical computer. Concretely we shall represent in the Mathematica symbolic language Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes. We shall see that the same framework can be used for all these algorithms. This framework will contain the characteristic property of the symbolic language representation of quantum computing and it will be a straightforward matter to include this framework in future algorithms.
Hnízdilová, Bohdana. "Registrace ultrazvukových sekvencí s využitím evolučních algoritmů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442502.
Kostrygin, Anatolii. "Precise Analysis of Epidemic Algorithms." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX042/document.
Epidemic algorithms are distributed algorithms in which the agents in the network contact peers in a manner similar to the spread of an epidemic. In this work, we focus on randomized rumor spreading, a class of epidemic algorithms based on the paradigm that nodes call random neighbors and exchange information with these contacts. Randomized rumor spreading has found numerous applications, from the consistency maintenance of replicated databases to news spreading in social networks. Numerous mathematical analyses of different rumor spreading algorithms can be found in the literature. Some of them provide extremely sharp estimates for the performance of such processes, but most of them are based on the inherent properties of concrete algorithms. We develop a new simple and generic method to analyze randomized rumor spreading processes in fully connected networks. In contrast to all previous works, which heavily exploit the precise definition of the process under investigation, we only need to understand the probability and the covariance of the events that uninformed nodes become informed. This universality allows us to easily analyze the classic push, pull, and push-pull protocols both in their pure version and in several variations, such as when messages fail with constant probability or when nodes call a random number of others each round. Some dynamic models can be analyzed as well, e.g., when the network is a random graph sampled independently each round [Clementi et al. (ESA 2013)]. Despite this generality, our method determines the expected rumor spreading time precisely apart from additive constants, which is more precise than almost all previous works. We also prove tail bounds showing that a deviation from the expectation by more than r rounds occurs with probability at most exp(−Ω(r)). We further use our method to discuss the common assumption that nodes can answer any number of incoming calls. We observe that the restriction that only one call can be answered leads to a significant increase in the runtime of the push-pull protocol. In particular, the double-logarithmic end phase of the process now takes logarithmic time. This also increases the message complexity from the asymptotically optimal Θ(n log log n) [Karp, Shenker, Schindelhauer, Vöcking (FOCS 2000)] to Θ(n log n). We propose a simple variation of the push-pull protocol that reverts back to the double-logarithmic end phase and thus to the Θ(n log log n) message complexity.
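To make the protocols concrete: in push-pull, every node calls a uniformly random node each round; informed callers push the rumor and uninformed callers pull it. A minimal simulation on the complete graph (an illustration, not the thesis's code):

    import random

    def push_pull_rounds(n, seed=0):
        """Rounds until a rumor reaches all n nodes under push-pull on the complete graph."""
        rng = random.Random(seed)
        informed = {0}
        rounds = 0
        while len(informed) < n:
            newly = set()
            for u in range(n):
                v = rng.randrange(n)              # each node calls a random node
                if u in informed and v not in informed:
                    newly.add(v)                  # push: caller informs callee
                elif u not in informed and v in informed:
                    newly.add(u)                  # pull: callee informs caller
            informed |= newly
            rounds += 1
        return rounds

    print(sum(push_pull_rounds(10_000, s) for s in range(20)) / 20)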
Jartoux, Bruno. "On combinatorial approximation algorithms in geometry." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1078/document.
The analysis of approximation techniques is a key topic in computational geometry, both for practical and theoretical reasons. In this thesis we discuss sampling tools for geometric structures and geometric approximation algorithms in combinatorial optimization. Part I focuses on the combinatorics of geometric set systems. We start by discussing packing problems in set systems, including extensions of a lemma of Haussler, mainly the so-called shallow packing lemma. For said lemma we also give an optimal lower bound that had been conjectured but not established in previous work on the topic. Then we use this lemma, together with the recently introduced polynomial partitioning technique, to study a combinatorial analogue of the Macbeath regions from convex geometry: Mnets, for which we unify previous existence results and upper bounds, and also give some lower bounds. We highlight their connection with ε-nets, staples of computational and combinatorial geometry, for example by observing that the unweighted ε-net bound of Chan et al. (SODA 2012) or Varadarajan (STOC 2010) follows directly from our results on Mnets. Part II deals with local-search techniques applied to geometric restrictions of classical combinatorial optimization problems. Over the last ten years such techniques have produced the first polynomial-time approximation schemes for various problems, such as that of computing a minimum-sized hitting set for a collection of input disks from a set of input points. In fact, it was shown that for many of these problems, local search with radius Θ(1/ε²) gives a (1 + ε)-approximation with running time n^O(1/ε²). However, the question of whether the exponent of n could be decreased to o(1/ε²) was left open. We answer it in the negative: the approximation guarantee of local search cannot be improved for any of these problems. The key ingredient is a new lower bound on locally expanding planar graphs, which is then used to show the impossibility results.
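To make the local-search scheme concrete, here is a minimal sketch (illustrative, not from the thesis) of radius-t local search for hitting disks with points: start from all candidate points and repeatedly replace some b ≤ t + 1 points of the current solution by b − 1 candidates, as long as every disk stays hit. The lower bounds above concern how large t must be to guarantee a (1 + ε)-approximation.

    from itertools import combinations
    import random

    def hits(p, d):
        (px, py), (cx, cy, r) = p, d
        return (px - cx) ** 2 + (py - cy) ** 2 <= r ** 2

    def is_hitting(points, disks):
        return all(any(hits(p, d) for p in points) for d in disks)

    def local_search_hitting_set(points, disks, t=2):
        H = list(points)                    # trivial hitting set (assumed feasible)
        improved = True
        while improved:
            improved = False
            for b in range(t + 1, 1, -1):   # try to swap b points for b - 1
                for out in combinations(H, b):
                    rest = [p for p in H if p not in out]
                    pool = [p for p in points if p not in rest]
                    for incoming in combinations(pool, b - 1):
                        cand = rest + list(incoming)
                        if is_hitting(cand, disks):
                            H, improved = cand, True
                            break
                    if improved:
                        break
                if improved:
                    break
        return H

    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(15)]
    disks = [(random.random(), random.random(), 0.4) for _ in range(10)]
    if is_hitting(pts, disks):
        print(len(local_search_hitting_set(pts, disks)))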
Rafique, Abid. "Communication optimization in iterative numerical algorithms : an algorithm-architecture interaction." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/17837.
Saadane, Sofiane. "Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30203/document.
In this thesis, we study several stochastic algorithms with different purposes, and this is why we start this manuscript by giving historical results to define the framework of our work. Then, we study a bandit algorithm due to the work of Narendra and Shapiro, whose objective was to determine, among a choice of several sources, which one is the most profitable without spending too much time on the wrong ones. Our goal is to understand the weakness of this algorithm in order to propose an optimal procedure for a quantity measuring the performance of a bandit algorithm, the regret. In our results, we propose an algorithm called NS over-penalized which allows us to obtain a minimax regret bound. A second piece of work is to understand the convergence in law of this process. The particularity of the algorithm is that it converges in law toward a non-diffusive process, which makes the study more intricate than the standard case. We use coupling techniques to study this process and propose rates of convergence. The second work of this thesis falls within the scope of optimization of a function using a stochastic algorithm. We study a stochastic version of the so-called heavy ball method with friction. The particularity of the algorithm is that its dynamics is based on the whole past of the trajectory. The procedure relies on a memory term which dictates the behavior of the procedure by the form it takes. In our framework, two types of memory are investigated: polynomial and exponential. We start with general convergence results in the non-convex case. In the case of strongly convex functions, we provide upper bounds for the rate of convergence. Finally, a convergence in law result is given in the case of exponential memory. The third part is about the McKean-Vlasov equations, which were first introduced by Anatoly Vlasov and first studied by Henry McKean in order to model the distribution function of plasma. Our objective is to propose a stochastic algorithm to approach the invariant distribution of the McKean-Vlasov equation. Methods in the case of diffusion processes (and some more general processes) are known, but the particularity of the McKean-Vlasov process is that it is strongly non-linear. Thus, we have to develop an alternative approach. We introduce the notion of asymptotic pseudotrajectory in order to get an efficient procedure.
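As a rough illustration of the second part, here is a minimal stochastic heavy ball sketch (an assumed discretization with illustrative step-size and friction schedules, not the thesis's exact scheme); the velocity accumulates past gradients and plays the role of the memory term discussed above.

    import numpy as np

    def stochastic_heavy_ball(grad, x0, steps=10_000, gamma=1.0, seed=0):
        """Minimize f given a noisy gradient oracle grad(x, rng)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, float)
        v = np.zeros_like(x)
        for k in range(1, steps + 1):
            a = 1.0 / k                                  # decreasing step size (assumed schedule)
            v = (1 - gamma * a) * v - a * grad(x, rng)   # friction damps the memory
            x = x + a * v
        return x

    # Toy strongly convex objective f(x) = ||x||^2 / 2 with noisy gradients.
    noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    print(stochastic_heavy_ball(noisy_grad, x0=np.ones(3)))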
Butt, Rehman. "Performance Comparison of AI Algorithms : Anytime Algorithms." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5845.
Mirzazadeh, Mehdi. "Adaptive Comparison-Based Algorithms for Evaluating Set Queries." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1147.
Dutta, Himanshu Shekhar. "Survey of Approximation Algorithms for Set Cover Problem." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12118/.
Silva, Isaac Dayan Bastos da. "Análise e comparação entre algoritmos de percolação." Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/17000.
In this work, we study and compare two percolation algorithms, one elaborated by Elias and the other by Newman and Ziff, using theoretical tools of algorithmic complexity and an algorithm that performs an experimental comparison. This work is divided into three chapters. The first covers some definitions and theorems necessary for a more formal mathematical study of percolation. The second presents techniques used for estimating algorithmic complexity, namely worst case, best case, and average case. We use the worst-case technique to estimate the complexity of both algorithms so that we can compare them. The last chapter shows several characteristics of each of the algorithms; through the theoretical estimate of their complexity and the comparison between the execution times of the most important part of each, we are able to compare these important algorithms that simulate percolation.
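For intuition about what such algorithms compute, here is a minimal sketch in the spirit of the Newman-Ziff approach (union-find over sites occupied in random order; an illustration, not either implementation compared in the thesis). It reports the fraction of occupied sites at which a top-to-bottom spanning cluster first appears on an L × L grid.

    import random

    def percolation_threshold(L, seed=0):
        rng = random.Random(seed)
        n = L * L
        parent = list(range(n + 2))        # virtual nodes: n = top row, n + 1 = bottom row

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def union(x, y):
            parent[find(x)] = find(y)

        occupied = [False] * n
        for step, s in enumerate(rng.sample(range(n), n), 1):   # Newman-Ziff: random order
            occupied[s] = True
            r, c = divmod(s, L)
            if r == 0:
                union(s, n)                # connect to virtual top
            if r == L - 1:
                union(s, n + 1)            # connect to virtual bottom
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < L and 0 <= nc < L and occupied[nr * L + nc]:
                    union(s, nr * L + nc)
            if find(n) == find(n + 1):     # spanning cluster has appeared
                return step / n
        return 1.0

    print(percolation_threshold(100))      # close to the site percolation threshold, ~0.593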
Knauer, Christian. "Algorithms for comparing geometric patterns (Algorithmen zum Vergleich geometrischer Muster) /." [S.l. : s.n.], 2001. http://www.diss.fu-berlin.de/2002/110/index.html.
Eriksson, Daniel. "Algorithmic Design of Graphical Resources for Games Using Genetic Algorithms." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139332.
Pelikan, Martin. "Hierarchical Bayesian optimization algorithm : toward a new generation of evolutionary algorithms /." Berlin [u.a.] : Springer, 2005. http://www.loc.gov/catdir/toc/fy053/2004116659.html.
Full textKang, Seunghwa. "On the design of architecture-aware algorithms for emerging applications." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39503.
Reis, Márcio Félix 1986. "O problema da árvore geradora com muitas folhas." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275511.
Master's thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: In this work we study the maximum leaf spanning tree problem (MLSTP). This problem can be used as an abstraction for many practical problems and is known to be NP-hard. We studied, implemented, and ran tests of approximate and exact algorithms for the MLSTP and for a particular case that considers only cubic graphs. The main objective of this study was to investigate the practical behavior of the algorithms studied. For the tested instances, in general, the greedy algorithm showed better results for the MLSTP and the 2-approximation algorithm had the best results for cubic graphs.
Master's
Computer Science
Master in Computer Science
Wright, Colin Douglas. "Combinatorial algorithms." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357944.
Abrams, Daniel S. (Daniel Stephen) 1972. "Quantum algorithms." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/85313.
Nikolova, Evdokia Velinova. "Strategic algorithms." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54673.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 193-201).
Classical algorithms from theoretical computer science arise time and again in practice. However, practical situations typically do not fit precisely into the traditional theoretical models. Additional necessary components are, for example, uncertainty and economic incentives. Therefore, modern algorithm design is calling for more interdisciplinary approaches, as well as for deeper theoretical understanding, so that the algorithms can apply to more realistic settings and complex systems. Consider, for instance, the classical shortest path algorithm, which, given a graph with specified edge weights, seeks the path minimizing the total weight from a source to a destination. In practice, the edge weights are often uncertain and it is not even clear what we mean by shortest path anymore: is it the path that minimizes the expected weight? Or its variance, or some other metric? With a risk-averse objective function that takes into account both mean and standard deviation, we run into nonconvex optimization challenges that require new theory beyond classical shortest path algorithm design. Yet another shortest path application, routing of packets in the Internet, needs to further incorporate economic incentives to reflect the various business relationships among the Internet Service Providers that affect the choice of packet routes. Strategic Algorithms are algorithms that integrate optimization, uncertainty and economic modeling into algorithm design, with the goal of bringing about new theoretical developments and solving practical applications arising in complex computational-economic systems.
In short, this thesis contributes new algorithms and their underlying theory at the interface of optimization, uncertainty and economics. Although the interplay of these disciplines is present in various forms in our work, for the sake of presentation we have divided the material into three categories: 1. In Part I we investigate algorithms at the intersection of Optimization and Uncertainty. The key conceptual contribution in this part is discovering a novel connection between stochastic and nonconvex optimization. Traditional algorithm design has not taken into account the risk inherent in stochastic optimization problems. We consider natural objectives that incorporate risk, which turn out to be equivalent to certain nonconvex problems from the realm of continuous optimization. As a result, our work advances the state of the art in both stochastic and nonconvex optimization, presenting new complexity results and proposing general-purpose efficient approximation algorithms, some of which have shown promising practical performance and have been implemented in a real traffic prediction and navigation system. 2. Part II proposes new algorithm and mechanism design at the intersection of Uncertainty and Economics. In Part I we postulate that the random variables in our models come from given distributions. However, determining those distributions or their parameters is a challenging and fundamental problem in itself. A tool from Economics that has recently gained momentum for measuring the probability distribution of a random variable is an information or prediction market. Such markets, most popularly known for predicting the outcomes of political elections or other events of interest, have shown remarkable accuracy in practice, though at the same time they have left open the theoretical and strategic analysis of current implementations, as well as the need for new and improved designs which handle more complex outcome spaces (probability distribution functions) as opposed to binary or n-ary valued distributions. The contributions of this part include a unified strategic analysis of different prediction market designs that have been implemented in practice.
We also offer new market designs for handling exponentially large outcome spaces stemming from ranking or permutation-type outcomes, together with algorithmic and complexity analysis. 3. In Part III we consider the interplay of optimization and economics in the context of network routing. This part is motivated by the network of autonomous systems in the Internet, where each portion of the network is controlled by an Internet service provider, namely by a self-interested economic agent. The business incentives do not exist merely in addition to the computer protocols governing the network. Although they are not currently integrated in those protocols and are decided largely via private contracting and negotiations, these economic considerations are a principal factor that determines how packets are routed. And vice versa, the demand and flow of network traffic fundamentally affect provider contracts and prices. The contributions of this part are the design and analysis of economic mechanisms for network routing. The mechanisms are based on first- and second-price auctions (the so-called Vickrey-Clarke-Groves, or VCG, mechanisms). We first analyze the equilibria and prices resulting from these mechanisms. We then investigate the compatibility of the better-understood VCG mechanisms with the current inter-domain routing protocols, and we demonstrate the critical importance of correct modeling and how it affects the complexity and algorithms necessary to implement the economic mechanisms.
by Evdokia Velinova Nikolova.
Ph.D.
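As a toy illustration of the risk-averse shortest path objective from Part I (a brute-force enumeration over simple paths, not one of the thesis's algorithms): when independent edge weights carry a mean and a variance, score a path by mean + λ·sqrt(variance). In the example below the risk-averse choice differs from the minimum-mean path.

    import math

    def all_simple_paths(adj, u, t, path=None):
        path = path or [u]
        if u == t:
            yield path
            return
        for v in adj.get(u, {}):
            if v not in path:
                yield from all_simple_paths(adj, v, t, path + [v])

    def risk_averse_shortest_path(adj, s, t, lam=1.0):
        def cost(p):
            mean = sum(adj[a][b][0] for a, b in zip(p, p[1:]))
            var = sum(adj[a][b][1] for a, b in zip(p, p[1:]))
            return mean + lam * math.sqrt(var)
        return min(all_simple_paths(adj, s, t), key=cost)

    # Each edge is (mean, variance); the lower-mean route is also the riskier one.
    adj = {'s': {'a': (1.0, 0.0), 'b': (0.6, 1.0)},
           'a': {'t': (1.0, 0.0)},
           'b': {'t': (0.6, 1.0)}}
    print(risk_averse_shortest_path(adj, 's', 't', lam=1.0))   # picks s-a-t despite the larger mean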
Kolářová, Jana. "Evoluční algoritmy pro ultrazvukovou perfúzní analýzu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-401037.
Lin, Han-Hsuan. "Topics in quantum algorithms : adiabatic algorithm, quantum money, and bomb query complexity." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99300.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 111-115).
In this thesis, I present three results on quantum algorithms and their complexity. The first one is a numerical study on the quantum adiabatic algorithm (QAA). We tested the performance of the QAA on random instances of MAX 2-SAT on 20 qubits and showed three strategies that improved QAA's performance, including a counterintuitive strategy of decreasing the overall evolution time. The second result is a security proof for the quantum money by knots proposed by Farhi et al. We proved that quantum money by knots cannot be cloned in a black-box way unless graph isomorphism is efficiently solvable by a quantum computer. Lastly, we defined a modified quantum query model, which we called bomb query complexity B(f), inspired by the Elitzur-Vaidman bomb-testing problem. We completely characterized bomb query complexity by showing that B(f) = Θ(Q(f)²). This result implies a new method for finding upper bounds on quantum query complexity, which we applied to the maximum bipartite matching problem to get an algorithm with O(n^1.75) quantum query complexity, improving on the best known trivial O(n²) upper bound.
by Han-Hsuan Lin.
Ph. D.
Nanongkai, Danupon. "Graph and geometric algorithms on distributed networks and databases." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41056.
Krüger, Franz David, and Mohamad Nabeel. "Hyperparameter Tuning Using Genetic Algorithms : A study of genetic algorithms impact and performance for optimization of ML algorithms." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42404.
As machine learning (ML) is becoming more and more common in the business world, information gathering through data mining (DM) is on the rise, and DM practitioners generally use several rules of thumb to avoid having to spend a significant amount of time tuning the hyperparameters (parameters that control the learning process) of an ML algorithm to gain a high accuracy score. The proposal in this report is an approach that systematically optimizes ML algorithms using genetic algorithms (GA), and an evaluation of whether and how the model should be constructed to find global solutions for a specific data set. By applying a GA to two ML algorithms, K-nearest neighbors and Random Forest, on two numerical data sets, the Iris data set and the Wisconsin breast cancer data set, the model is evaluated by its accuracy scores as well as its computational time, which is then compared against a search method, specifically exhaustive search. The results suggest that GA works well in finding high accuracy scores in a reasonable amount of time. There are some limitations, as a parameter's significance for an ML algorithm may vary.
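A minimal sketch of the approach for K-nearest neighbors with scikit-learn (an illustration of GA-based hyperparameter tuning, not the report's code; the gene encoding, population size, and mutation rate are illustrative choices):

    import random
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    K_RANGE, WEIGHTS, P_RANGE = range(1, 31), ["uniform", "distance"], [1, 2]

    def fitness(gene):
        # Cross-validated accuracy of the hyperparameter triple (k, weights, p).
        k, w, p = gene
        clf = KNeighborsClassifier(n_neighbors=k, weights=w, p=p)
        return cross_val_score(clf, X, y, cv=5).mean()

    def random_gene():
        return (random.choice(K_RANGE), random.choice(WEIGHTS), random.choice(P_RANGE))

    def mutate(gene, rate=0.3):
        fresh = random_gene()
        return tuple(fresh[i] if random.random() < rate else g for i, g in enumerate(gene))

    pop = [random_gene() for _ in range(12)]
    for _ in range(10):                                     # generations
        pop.sort(key=fitness, reverse=True)
        parents = pop[:4]                                   # truncation selection
        children = [mutate(tuple(random.choice(parents)[i] for i in range(3)))  # uniform crossover
                    for _ in range(len(pop) - len(parents))]
        pop = parents + children

    best = max(pop, key=fitness)
    print(best, fitness(best))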
Aronsson, Jonatan. "Novel tree-based algorithms for computational electromagnetics." IEEE, 2009. http://hdl.handle.net/1993/22086.
Ito, Masanori, Mitsuru Kawamoto, Noboru Ohnishi, and Yujiro Inouye. "SUPER-EXPONENTIAL ALGORITHMS AND EIGENVECTOR ALGORITHMS FOR BLIND SOURCE SEPARATION." INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2006. http://hdl.handle.net/2237/10429.
Corbineau, Marie-Caroline. "Proximal and interior point optimization strategies in image recovery." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC085/document.
Inverse problems in image processing can be solved by diverse techniques, such as classical variational methods, recent deep learning approaches, or Bayesian strategies. Although relying on different principles, these methods all require efficient optimization algorithms. The proximity operator appears as a crucial tool in many iterative solvers for nonsmooth optimization problems. In this thesis, we illustrate the versatility of proximal algorithms by incorporating them within each one of the aforementioned resolution methods. First, we consider a variational formulation including a set of constraints and a composite objective function. We present PIPA, a novel proximal interior point algorithm for solving the considered optimization problem. This algorithm includes variable metrics for acceleration purposes. We derive convergence guarantees for PIPA and show in numerical experiments that it compares favorably with state-of-the-art algorithms in two challenging image processing applications. In a second part, we investigate a neural network architecture called iRestNet, obtained by unfolding a proximal interior point algorithm over a fixed number of iterations. iRestNet requires the expression of the logarithmic barrier proximity operator and of its first derivatives, which we provide for three useful types of constraints. Then, we derive conditions under which this optimization-inspired architecture is robust to an input perturbation. We conduct several image deblurring experiments, in which iRestNet performs well with respect to a variational approach and to state-of-the-art deep learning methods. The last part of this thesis focuses on a stochastic sampling method for solving inverse problems in a Bayesian setting. We present an accelerated proximal unadjusted Langevin algorithm called PP-ULA. This scheme is incorporated into a hybrid Gibbs sampler used to perform joint deconvolution and segmentation of ultrasound images. PP-ULA employs the majorize-minimize principle to address non log-concave priors. As shown in numerical experiments, PP-ULA leads to a significant time reduction and to very satisfactory deconvolution and segmentation results on both simulated and real ultrasound data.
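Since everything above turns on the proximity operator, a standard illustration of its role is proximal gradient descent (ISTA) for the lasso problem min ½||Ax − b||² + λ||x||₁, whose proximity operator is soft-thresholding. This is a textbook sketch, not one of the thesis's algorithms (PIPA, iRestNet, PP-ULA):

    import numpy as np

    def soft_threshold(v, t):
        # Proximity operator of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, iters=500):
        L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)                    # gradient of the smooth term
            x = soft_threshold(x - grad / L, lam / L)   # proximal step
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = 3.0                                    # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    print(np.round(ista(A, b, lam=0.5)[:8], 2))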
Guedj, Michaël. "BSP algorithms for LTL & CTL model checking of security protocols." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1081.
In a world strongly dependent on distributed data communication, the design of secure infrastructures is a crucial task. Distributed systems and networks are becoming increasingly important, as most of the services and opportunities that characterise modern society are based on these technologies. Communication among agents over networks has therefore acquired a great deal of research interest. In order to provide effective and reliable means of communication, more and more communication protocols are invented, and for most of them, security is a significant goal.
Gupta, Pramod. "Robust clustering algorithms." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39553.
Brolin Echeverria, Paolo, and Joakim Westermark. "Benchmarking Rubik's Revenge algorithms." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134903.
This bachelor's thesis examines two different methods used to solve the 4x4x4 Rubik's Cube. The methods analysed are Reduction and Big Cube. We implemented the cube and both solvers in Python. Through a series of tests we concluded that Big Cube has a lower average rotation count and a lower standard deviation than Reduction. The Reduction method, however, has a lower minimum rotation count and consists of fewer algorithms. The best approach would be to combine the two solutions.
van de Hoef, Sebastian. "Extended Consensus Algorithms." Thesis, KTH, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-125809.
Zhou, Tianyang 1980. "Modified LLL algorithms." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99356.
Mosca, Michele. "Quantum computer algorithms." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301184.
Hein, Birgit. "Quantum search algorithms." Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11512/.
Zhang, Minghua, and 張明華. "Sequence mining algorithms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B44570119.
Thompson, Simon Giles. "Distributed boosting algorithms." Thesis, University of Portsmouth, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285529.
Abu-Bakar, Nordin. "Adaptive genetic algorithms." Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343268.