Theses on the topic "Algorithms"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.

Consult the top 50 dissertations / theses for research on the topic "Algorithms".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever one is included in the metadata.

Browse theses from many scientific disciplines and compile an accurate bibliography.

1

Yarmolskyy, Oleksandr. "Využití distribuovaných a stochastických algoritmů v síti". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-370918.

Full text
Abstract:
This thesis deals with distributed and stochastic algorithms, including testing of their convergence in networks. The theoretical part briefly describes the above-mentioned algorithms, including their classification, problems, advantages and disadvantages. Two distributed algorithms and two stochastic algorithms are then chosen, and the practical part compares their speed of convergence on various network topologies in Matlab.
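The abstract does not say which algorithms were compared, but randomized gossip averaging is a typical distributed algorithm whose convergence speed is strongly topology-dependent, so it illustrates the kind of experiment described. A minimal sketch (in Python rather than the thesis's Matlab; the ring topology and tolerance are illustrative assumptions):

```python
import random

def gossip_average(neighbors, values, tol=1e-6, max_steps=100_000):
    """Randomized pairwise gossip: repeatedly average a random node with a
    random neighbor until all values agree to within `tol`.
    Returns the number of pairwise exchanges performed."""
    values = values[:]
    for step in range(1, max_steps + 1):
        i = random.randrange(len(values))
        j = random.choice(neighbors[i])
        values[i] = values[j] = (values[i] + values[j]) / 2.0
        if max(values) - min(values) < tol:
            return step
    return max_steps

# Ring topology on n nodes: convergence is much slower than on a complete graph.
n = 32
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(gossip_average(ring, [float(i) for i in range(n)]))
```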
APA, Harvard, Vancouver, ISO, and other styles
2

Crocomo, Márcio Kassouf. "Algoritmo de otimização bayesiano com detecção de comunidades". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-23012013-160605/.

Full text
Abstract:
Estimation of Distribution Algorithms (EDAs) represent a research front in Evolutionary Computation that has shown promising results in dealing with complex, large-scale problems. In this context, the Bayesian Optimization Algorithm (BOA) uses a multivariate probabilistic model (represented by a Bayesian network) to generate new solutions at each iteration. Based on BOA and on the study of community-detection algorithms (used to improve the constructed multivariate models), two new algorithms are proposed, named CD-BOA and StrOp. Both are shown to have significant advantages over BOA. CD-BOA proves more flexible, being more robust to variations in the values of input parameters, which makes it easier to handle a greater diversity of real-world problems. Unlike CD-BOA and BOA, StrOp shows that detecting communities in a Bayesian network can more adequately model decomposable problems, restructuring them into simpler subproblems that can be solved by a greedy search; this yields a solution to the original problem that is optimal for perfectly decomposable problems and a fair approximation otherwise. A new resampling technique for EDAs (called REDA) is also proposed; it produces more representative probabilistic models, significantly improving the performance of CD-BOA and StrOp. In general, it is shown that, for the scenarios tested, CD-BOA and StrOp require less running time than BOA, as demonstrated both experimentally and by analysis of the algorithms' computational complexity. The main features of these algorithms are evaluated across different problems, mapping their contributions to the field of Evolutionary Computation.
APA, Harvard, Vancouver, ISO, and other styles
3

Davidsdottir, Agnes. "Algorithms, Turing machines and algorithmic undecidability". Thesis, Uppsala universitet, Algebra och geometri, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-441282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tikhonenko, Dmitrii. "Managing algorithmic drivers in a blocked-lane scenario". Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/670829.

Full text
Abstract:
Due to the emergence of new technologies, algorithm-assisted drivers are close to becoming a reality. This thesis discusses different aspects of managing such drivers in a blocked-lane scenario. The first chapter presents an algorithm for the optimal merging of self-interested drivers. Since the optimal policy can include undesirable velocity oscillations, we propose measures for a central planner to eradicate them, and we test the efficiency of our algorithm against popular heuristic policies. In the second chapter, a mechanism for positional bidding by the drivers is developed. It allows drivers with heterogeneous time valuations to trade highway positions, resulting in a socially beneficial outcome. The final chapter presents a deep learning policy for centralized clearing of the bottleneck in the shortest time. It is fast enough to allow future operational applications, and its training set consists of globally optimal merging policies.
APA, Harvard, Vancouver, ISO, and other styles
5

Sauerland, Volkmar [Verfasser]. "Algorithm Engineering for some Complex Practise Problems : Exact Algorithms, Heuristics and Hybrid Evolutionary Algorithms / Volkmar Sauerland". Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/1026442745/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Aguiar, Marilton Sanchotene de. "Análise formal da complexidade de algoritmos genéticos". Biblioteca Digital de Teses e Dissertações da UFRGS, 1998. http://hdl.handle.net/10183/25941.

Full text
Abstract:
The objective of this work is to study the viability of treating optimization problems considered intractable through Genetic Algorithms, developing criteria for the qualitative evaluation of a Genetic Algorithm. Within this theme, the work covers complexity, classes of problems, analysis and development of algorithms, and Genetic Algorithms, the last being the central object of study. The fact that a problem is theoretically solvable by a computer is not enough for it to be solvable in practice. A problem is called tractable if in the worst case it admits a reasonably efficient algorithm, and an algorithm is said to be reasonably efficient when there exists a polynomial p such that for any input of size n the algorithm terminates within at most p(n) steps [SZW 84]. Since a polynomial can be of very high order, an algorithm of polynomial complexity can still be very inefficient. The premise of Genetic Algorithms is that approximate solutions to problems of great computational complexity can be found by means of a process of simulated evolution [LAG 96]. As a product of this study, a quality-conscious method for the development of Genetic Algorithms is proposed, drawing on the formal study of types of problems, the development of approximation algorithms, and complexity analysis. An axiomatization serves to give the semantics of the algorithm: it formally defines the operation of the algorithm, specifically of its functions and procedures. This gives the algorithm designer greater confidence during development, because to prove the correctness of a Genetic Algorithm satisfying this model it is only necessary to prove that the procedures satisfy the axioms. To assess the quality of an approximation algorithm, two factors are relevant: accuracy and complexity. Unfortunately, they are conflicting factors, since the greater the accuracy, the worse (higher) the complexity, and vice versa. Thus a study of the quality of a Genetic Algorithm, viewed as an approximation algorithm, is only complete when both factors are considered. This work raises the points important for studying the complexity of a Genetic Algorithm and takes a significant step toward establishing the viability of treating optimization problems via Genetic Algorithms.
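For readers unfamiliar with the object under analysis, a minimal genetic algorithm looks like the sketch below (an illustration of the general scheme on the OneMax problem, not the formally axiomatized model proposed in the thesis):

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=50, generations=200, p_mut=0.01):
    """Minimal GA: tournament selection, one-point crossover, bit-flip
    mutation. Returns the best individual found."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(), tournament()
            cut = random.randrange(1, n_bits)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: maximize the number of ones in the bitstring.
best = genetic_algorithm(sum, 40)
print(sum(best), best)
```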
APA, Harvard, Vancouver, ISO, and other styles
7

Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.

Full text
Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Mavris, Dimitri; Committee Member: Beeson, Don; Committee Member: Duncan, Scott; Committee Member: German, Brian; Committee Member: Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
8

Violich, Stephen Scott. "Fusing Loopless Algorithms for Combinatorial Generation". Thesis, University of Canterbury. Computer Science and Software Engineering, 2006. http://hdl.handle.net/10092/1075.

Full text
Abstract:
Loopless algorithms are an interesting challenge in the field of combinatorial generation. These algorithms must generate each combinatorial object from its predecessor in no more than a constant number of instructions, thus achieving theoretically minimal time complexity. This constraint rules out powerful programming techniques such as iteration and recursion, which makes loopless algorithms harder to develop and less intuitive than other algorithms. This thesis discusses a divide-and-conquer approach by which loopless algorithms can be developed more easily and intuitively: fusing loopless algorithms. If a combinatorial generation problem can be divided into subproblems, it may be possible to conquer it looplessly by fusing loopless algorithms for its subproblems. A key advantage of this approach is that it allows existing loopless algorithms to be reused. This approach is not novel, but it has not been generalised before. This thesis presents a general framework for fusing loopless algorithms, and discusses its implications. It then applies this approach to two combinatorial generation problems and presents two new loopless algorithms. The first new algorithm, MIXPAR, looplessly generates well-formed parenthesis strings comprising two types of parentheses. It is the first loopless algorithm for generating these objects. The second new algorithm, MULTPERM, generates multiset permutations in linear space using only arrays, a benchmark recently set by Korsh and LaFollette (2004). Algorithm MULTPERM is evaluated against Korsh and LaFollette's algorithm, and shown to be simpler and more efficient in both space and time.
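The two new algorithms are specified in the thesis itself, but the flavour of loopless generation is easy to convey with a classic example: Knuth's loopless generator for the binary reflected Gray code (7.2.1.1, Algorithm L), which produces each successor with a constant number of operations using "focus pointers". A small Python sketch:

```python
def gray_codes(n):
    """Looplessly generate all 2**n bitstrings in binary reflected Gray code
    order: each object differs from its predecessor by one bit flip, and the
    position to flip is found in O(1) time via focus pointers."""
    bits = [0] * n
    focus = list(range(n + 1))   # focus pointers
    while True:
        yield tuple(bits)
        j = focus[0]             # O(1): which bit to flip next
        focus[0] = 0
        if j == n:
            return
        focus[j] = focus[j + 1]
        focus[j + 1] = j + 1
        bits[j] ^= 1

for g in gray_codes(3):
    print(g)
```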
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries, which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k × k submatrix is 2(1 + o(1))√(log n / k) with high probability (w.h.p.) when k = O(log n / log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n / k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1))√(2 log n / k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n / k) for k = o((log n)^{1.5}), that is, with asymptotic factor 4/3 when k grows. To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n / k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* := 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n / k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c.
We observe that every maximum size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem. We solve this underlying large deviation problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and for the case when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds both on the sample complexity and computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm is two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
by Quan Li.
Ph. D.
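The LAS heuristic mentioned in the abstract is an alternating-maximization procedure: fix k columns, pick the k rows with the largest sums, then fix those rows and pick the best k columns, and repeat until a fixed point. A rough NumPy sketch of this idea (an illustration, not the thesis's exact procedure):

```python
import numpy as np

def las(A, k, rng=np.random.default_rng(0)):
    """Largest Average Submatrix heuristic: alternately pick the k rows with
    the largest sums over the current columns, then the k columns with the
    largest sums over the current rows, until a fixed point is reached."""
    n = A.shape[0]
    rows = rng.choice(n, size=k, replace=False)
    cols = rng.choice(n, size=k, replace=False)
    while True:
        new_rows = np.argsort(A[:, cols].sum(axis=1))[-k:]
        new_cols = np.argsort(A[new_rows, :].sum(axis=0))[-k:]
        if set(new_rows) == set(rows) and set(new_cols) == set(cols):
            return rows, cols, A[np.ix_(rows, cols)].mean()
        rows, cols = new_rows, new_cols

A = np.random.default_rng(1).standard_normal((500, 500))
print(las(A, k=3)[2])   # average entry of the submatrix found
```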
APA, Harvard, Vancouver, ISO, and other styles
10

Astete-Morales, Sandra. "Contributions to Convergence Analysis of Noisy Optimization Algorithms". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS327/document.

Full text
Abstract:
This thesis presents contributions to the analysis of algorithms for the optimization of noisy functions. Convergence rates (in terms of Simple Regret and Cumulative Regret) are analyzed for line-search algorithms as well as for random-search algorithms. We prove that a Hessian-based algorithm can reach the same results as some optimal algorithms in the literature when its parameters are tuned correctly. We also analyze the convergence order of Evolution Strategies when solving noisy functions, deducing log-log convergence, and we prove a lower bound for the convergence rate of Evolution Strategies. We extend the work on reevaluation mechanisms by applying them to a discrete setting. Finally, we analyze the performance measure itself and prove that the use of an erroneous performance measure can lead to misleading results when different optimization methods are evaluated.
APA, Harvard, Vancouver, ISO, and other styles
11

Niedermeier, Rolf. "Invitation to fixed-parameter algorithms /". Oxford [u.a.] : Oxford Univ. Press, 2006. http://www.gbv.de/dms/ilmenau/toc/500666768niede.PDF.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Dementiev, Roman. "Algorithm engineering for large data sets hardware, software, algorithms". Saarbrücken VDM, Müller, 2006. http://d-nb.info/986494429/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Dementiev, Roman. "Algorithm engineering for large data sets : hardware, software, algorithms /". Saarbrücken : VDM-Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3029033&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms on the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR factorization. We show that, if the coefficients of the input polynomials are allowed to be any computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP of all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[θ], and Q[θ], where θ is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as that of the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as that of the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
by Pramook Khungurn.
M.Eng.
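The zero-rewriting idea behind Shirayanagi-Sweedler stabilization, treating any coefficient below a precision-dependent threshold as an exact zero so that the floating-point run follows the exact run, can be sketched for the Euclidean algorithm as follows (a toy illustration with a fixed threshold, not the thesis's formal construction):

```python
def poly_mod(a, b, eps=1e-9):
    """Remainder of polynomial division a mod b, with Shirayanagi-Sweedler
    style stabilization: coefficients with absolute value below eps are
    rewritten to exact zeros. Polynomials are lists, highest degree first."""
    a = a[:]
    while len(a) >= len(b):
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)                                      # leading term cancels by construction
        a = [0.0 if abs(c) < eps else c for c in a]  # zero-rewriting step
        while a and a[0] == 0.0:                      # drop leading zeros
            a.pop(0)
    return a

def poly_gcd(a, b, eps=1e-9):
    """Euclidean algorithm; returns a monic floating-point GCD."""
    while b:
        a, b = b, poly_mod(a, b, eps)
    return [c / a[0] for c in a]

# gcd(x^2 - 1, x^2 + 2x + 1) = x + 1
print(poly_gcd([1.0, 0.0, -1.0], [1.0, 2.0, 1.0]))
```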
APA, Harvard, Vancouver, ISO, and other styles
15

Johansson, Björn, and Emil Österberg. "Algorithms for Large Matrix Multiplications : Assessment of Strassen's Algorithm". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230742.

Full text
Abstract:
Strassen's algorithm was one of the major breakthroughs in matrix analysis in 1968. This report presents the theory behind Volker Strassen's algorithm for matrix multiplication together with notions of floating-point precision, the benefits of using this algorithm compared to naive matrix multiplication and their implications, and how its performance compares to the naive algorithm. Strassen's algorithm is also assessed on how its output differs across precisions as the matrices grow larger, and on how its achieved complexity compares with the theoretical one. The study found that Strassen's algorithm outperformed naive matrix multiplication for matrices of size 1024 × 1024 and above. The achieved complexity was slightly higher than Volker Strassen's theoretical bound, and the optimal precision in this case was double precision, Float64. How the algorithm is implemented in code matters for its performance; several factors must be considered to improve Strassen's algorithm: optimizing its termination criterion, the manner in which matrices are padded to make them more amenable to recursive application, and implementation techniques such as parallel computing. Even though Strassen's algorithm could be shown to outperform the naive one beyond a certain matrix size, it is still not the most efficient, as shown for example by Strassen-Winograd. One also needs to be careful about how the sub-matrices are allocated, so as not to use unnecessary memory. For further reading, one can study cache-oblivious and cache-aware algorithms.
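For reference, the recursive structure being benchmarked looks roughly as follows: a sketch assuming square matrices whose size is a power of two, with a naive fallback below a cutoff, which is the termination criterion the report says must be tuned.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's matrix multiplication: 7 recursive products instead of 8,
    giving O(n^{log2 7}) ≈ O(n^{2.81}). Falls back to the naive/BLAS product
    below `cutoff`, which is essential for performance in practice."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

rng = np.random.default_rng(0)
A, B = rng.standard_normal((256, 256)), rng.standard_normal((256, 256))
print(np.allclose(strassen(A, B), A @ B))   # True up to rounding
```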
APA, Harvard, Vancouver, ISO, and other styles
16

Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer". Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.

Full text
Abstract:

Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing, the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time, expectations of quick progress in the quantum computing project dominated in the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems, such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers, led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers became an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms would help to solve NP problems in polynomial time on classical computers; that is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. One of the most important problems in "quantum computer science" is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulation on a classical computer. Concretely, we represent in the Mathematica symbolic language Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes. We show that the same framework can be used for all these algorithms. This framework contains the characteristic property of the symbolic language representation of quantum computing, and it is a straightforward matter to include this framework in future algorithms.
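As a taste of what such a classical simulation involves, here is a small simulation of the Deutsch-Jozsa algorithm (in Python/NumPy rather than the Mathematica of the thesis; the phase-oracle formulation with an implicit ancilla qubit is a standard simplification):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch_jozsa(f, n):
    """Classically simulate the Deutsch-Jozsa circuit on n input qubits.
    f maps {0,...,2^n - 1} -> {0,1} and is promised constant or balanced."""
    N = 2 ** n
    # State of the n input qubits after the initial Hadamards: uniform.
    state = np.full(N, 1 / np.sqrt(N))
    # Phase oracle: |x> -> (-1)^f(x) |x> (ancilla qubit left implicit).
    state = state * np.array([(-1) ** f(x) for x in range(N)])
    # Final layer of Hadamards on all n qubits.
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    state = Hn @ state
    # Probability of measuring |0...0> is 1 iff f is constant.
    return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))       # constant
print(deutsch_jozsa(lambda x: x & 1, 3))   # balanced
```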

APA, Harvard, Vancouver, ISO, and other styles
17

Hnízdilová, Bohdana. "Registrace ultrazvukových sekvencí s využitím evolučních algoritmů". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442502.

Full text
Abstract:
This master's thesis deals with the registration of ultrasound sequences using evolutionary algorithms. The theoretical part describes the process of image registration and its optimization using genetic and metaheuristic algorithms. The thesis also presents problems that may occur during the registration of ultrasonographic images and various approaches to their registration. In the practical part of the work, several optimization methods for the registration of a number of sequences were implemented and compared.
APA, Harvard, Vancouver, ISO, and other styles
18

Kostrygin, Anatolii. "Precise Analysis of Epidemic Algorithms". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX042/document.

Full text
Abstract:
Epidemic algorithms are distributed algorithms in which the agents in the network involve peers similarly to the spread of epidemics. In this work, we focus on randomized rumor spreading: a class of epidemic algorithms based on the paradigm that nodes call random neighbors and exchange information with these contacts. Randomized rumor spreading has found numerous applications, from the consistency maintenance of replicated databases to news spreading in social networks. Numerous mathematical analyses of different rumor spreading algorithms can be found in the literature. Some of them provide extremely sharp estimates for the performance of such processes, but most of them are based on the inherent properties of concrete algorithms. We develop a new, simple and generic method to analyze randomized rumor spreading processes in fully connected networks. In contrast to all previous works, which heavily exploit the precise definition of the process under investigation, we only need to understand the probability and the covariance of the events that uninformed nodes become informed. This universality allows us to easily analyze the classic push, pull, and push-pull protocols both in their pure version and in several variations, such as when messages fail with constant probability or when nodes call a random number of others each round. Some dynamic models can be analyzed as well, e.g., when the network is a random graph sampled independently each round [Clementi et al. (ESA 2013)]. Despite this generality, our method determines the expected rumor spreading time precisely apart from additive constants, which is more precise than almost all previous works. We also prove tail bounds showing that a deviation from the expectation by more than an additive number of r rounds occurs with probability at most exp(−Ω(r)). We further use our method to discuss the common assumption that nodes can answer any number of incoming calls. We observe that the restriction that only one call can be answered leads to a significant increase in the runtime of the push-pull protocol. In particular, the double logarithmic end phase of the process now takes logarithmic time. This also increases the message complexity from the asymptotically optimal Θ(n log log n) [Karp, Shenker, Schindelhauer, Vöcking (FOCS 2000)] to Θ(n log n). We propose a simple variation of the push-pull protocol that reverts back to the double logarithmic end phase and thus to the Θ(n log log n) message complexity.
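A complete-graph push-pull round is simple to simulate, and such a simulation makes the very fast end phase discussed above easy to observe. A minimal sketch (synchronous rounds; the parameters are illustrative):

```python
import random

def push_pull_rounds(n, seed=None):
    """Simulate synchronous push-pull rumor spreading on the complete graph
    K_n; returns the number of rounds until everyone is informed. Each round,
    every node calls a uniformly random other node; informed callers push the
    rumor, and uninformed callers pull it if the callee knows it."""
    random.seed(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        newly = set()
        for u in range(n):
            v = random.randrange(n - 1)
            v += (v >= u)          # uniform over the other n - 1 nodes
            if u in informed or v in informed:
                newly.add(u)
                newly.add(v)
        informed |= newly
        rounds += 1
    return rounds

print(push_pull_rounds(100_000))   # only a handful of rounds for large n
```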
APA, Harvard, Vancouver, ISO, and other styles
19

Jartoux, Bruno. "On combinatorial approximation algorithms in geometry". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1078/document.

Full text
Abstract:
The analysis of approximation techniques is a key topic in computational geometry, for both practical and theoretical reasons. In this thesis we discuss sampling tools for geometric structures and geometric approximation algorithms in combinatorial optimization. Part I focuses on the combinatorics of geometric set systems. We start by discussing packing problems in set systems, including extensions of a lemma of Haussler, mainly the so-called shallow packing lemma. For said lemma we also give an optimal lower bound that had been conjectured but not established in previous work on the topic. Then we use this lemma, together with the recently introduced polynomial partitioning technique, to study a combinatorial analogue of the Macbeath regions from convex geometry: Mnets, for which we unify previous existence results and upper bounds, and also give some lower bounds. We highlight their connection with ε-nets, staples of computational and combinatorial geometry, for example by observing that the unweighted ε-net bound of Chan et al. (SODA 2012) or Varadarajan (STOC 2010) follows directly from our results on Mnets. Part II deals with local-search techniques applied to geometric restrictions of classical combinatorial optimization problems. Over the last ten years such techniques have produced the first polynomial-time approximation schemes for various problems, such as that of computing a minimum-sized hitting set for a collection of input disks from a set of input points. In fact, it was shown that for many of these problems, local search with radius Θ(1/ε²) gives a (1 + ε)-approximation with running time n^{O(1/ε²)}. However, the question of whether the exponent of n could be decreased to o(1/ε²) was left open. We answer it in the negative: the approximation guarantee of local search cannot be improved for any of these problems. The key ingredient is a new lower bound on locally expanding planar graphs, which is then used to show the impossibility results.
APA, Harvard, Vancouver, ISO, and other styles
20

Rafique, Abid. "Communication optimization in iterative numerical algorithms : an algorithm-architecture interaction". Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/17837.

Full text
Abstract:
Trading communication for redundant computation can increase the silicon efficiency of common hardware accelerators like FPGAs and GPUs in accelerating sparse iterative numerical algorithms. While iterative numerical algorithms are extensively used in solving large-scale sparse linear systems of equations and eigenvalue problems, they are challenging to accelerate as they spend most of their time in communication-bound operations, like sparse matrix-vector multiply (SpMV) and vector-vector operations. Communication is used in a general sense to mean moving the matrix and the vectors within the custom memory hierarchy of the FPGA and between processors in the GPU; its cost is much higher than performing the actual computation, for technological reasons. Additionally, the dependency between the operations hinders overlapping computation with communication. As a result, although GPUs and FPGAs offer large peak floating-point performance, their sustained performance is nonetheless very low due to high communication costs, leading to poor silicon efficiency. In this thesis, we provide a systematic study of how to minimize the communication cost and thereby increase the silicon efficiency. For small-to-medium datasets, we exploit the large on-chip memory of the FPGA to load the matrix only once and then use explicit blocking to perform all iterations at the communication cost of a single iteration. For large sparse datasets, it is now a well-known idea to unroll k iterations using a matrix powers kernel, which replaces SpMV, and two additional kernels, TSQR and BGS, which replace vector-vector operations. While this approach can provide a Θ(k) reduction in the communication cost, the extent of the unrolling depends on the growth in redundant computation, the underlying architecture and the memory model. In this work, we show how to select the unroll factor k in an architecture-agnostic manner to provide a communication-computation tradeoff on FPGA and GPU. To this end, we exploit the inverse memory hierarchy of the GPU to map the matrix powers kernel, and present a new algorithm for the FPGA which matches its strength in reducing redundant computation, allowing large k and hence higher speedups. We provide predictive models of the matrix powers kernel to understand the communication-computation tradeoff on GPU and FPGA. We highlight the extremely low efficiency of the GPU in TSQR due to off-chip sharing of data across different building blocks, and show how we can use the on-chip memory of the FPGA to eliminate this off-chip access and hence achieve better efficiency. Finally, we demonstrate how to compose all the kernels using a unified architecture and exploit on-chip memory of the FPGA to share data across these kernels. Using the Lanczos Iteration as a case study for solving the symmetric extremal eigenvalue problem, we show that the efficiency of FPGAs can be increased from 1.8% to 38% for small-to-medium-scale dense matrices, and up to 7.8% for large-scale structured banded matrices. We show that although the GPU shows better efficiency for certain kernels like the matrix powers kernel, the overall efficiency is even lower due to the increase in communication cost while sharing data across different kernels through off-chip memory. As the Lanczos Iteration is at the heart of all modern iterative numerical algorithms, our results are applicable to a broad class of iterative numerical algorithms.
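The matrix powers kernel at the heart of this approach computes the Krylov vectors [x, Ax, A²x, ..., A^k x] as one fused operation, so the matrix is communicated once rather than k times. A dense toy sketch of the interface (real implementations block the sparse matrix across the memory hierarchy):

```python
import numpy as np

def matrix_powers(A, x, k):
    """Compute the Krylov vectors [x, Ax, A^2 x, ..., A^k x].
    In a communication-avoiding solver this is one fused kernel, so the
    matrix is read (communicated) once instead of k times."""
    V = np.empty((k + 1, x.size))
    V[0] = x
    for j in range(k):
        V[j + 1] = A @ V[j]
    return V

A = np.diag(np.arange(1.0, 6.0))
print(matrix_powers(A, np.ones(5), 3))
```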
APA, Harvard, Vancouver, ISO, and other styles
21

Saadane, Sofiane. "Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire". Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30203/document.

Full text
Abstract:
In this thesis, we study several stochastic algorithms with different purposes, and we therefore begin this manuscript with historical results that set the framework of our work. We then study a bandit algorithm due to the work of Narendra and Shapiro, whose objective is to determine, among a choice of several sources, which one is the most profitable without spending too much time on the wrong ones. Our goal is to understand the structural weaknesses of this algorithm in order to propose an optimal procedure for a quantity measuring the performance of a bandit algorithm, the regret. In our results, we propose an algorithm called over-penalized NS that attains a minimax-optimal regret bound through a fine study of the stochastic algorithm underlying this procedure. A second contribution is to give rates of convergence for the process appearing in the study of the convergence in law of the over-penalized NS algorithm. The particularity of this algorithm is that it converges in law not toward a diffusion, as most stochastic algorithms do, but toward a non-diffusive jump process, which makes the study of convergence to equilibrium more technical; we employ a coupling technique to study this convergence. The second part of this thesis falls within the scope of optimizing a function by means of a stochastic algorithm. We study a stochastic version of the deterministic heavy ball method with friction. The particularity of this algorithm is that its dynamics is built on an averaging over the whole past of the trajectory. The procedure relies on a so-called memory function which, depending on the form it takes, yields interesting behaviors; in our framework, two types of memory are relevant: exponential and polynomial. We first establish convergence results in the general case where the function to be minimized is non-convex. In the case of strongly convex functions, we obtain rates of convergence that are optimal in a sense we define. Finally, this part ends with a convergence-in-law result for the suitably renormalized process. The third part is about the McKean-Vlasov equations, which were introduced by Anatoly Vlasov and first studied by Henry McKean in order to model the distribution law of plasma. Our objective is to propose a stochastic algorithm able to approximate the invariant measure of the process. Methods for approximating an invariant measure are known in the case of diffusions and certain other processes, but the particularity of the McKean-Vlasov process is that it is not a linear diffusion: the process has memory, like the heavy ball processes. We therefore develop an alternative method to circumvent this problem, introducing the notion of asymptotic pseudotrajectories in order to propose an efficient procedure.
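As a rough illustration of the second part's object of study, the following sketch runs a stochastic heavy-ball-with-memory iteration on a noisy quadratic (the step-size schedule, memory parameter, and noise model are illustrative assumptions, not the thesis's exact setting):

```python
import numpy as np

def stochastic_heavy_ball(grad, x0, steps=10_000, gamma0=0.1, r=0.7, lam=1.0):
    """Stochastic heavy ball with an exponential-type memory term: the
    direction y averages past noisy gradients, with steps gamma0 / n^r."""
    x = np.asarray(x0, dtype=float)
    y = np.zeros_like(x)
    for n in range(1, steps + 1):
        g = grad(x) + np.random.standard_normal(x.shape)  # noisy gradient
        step = gamma0 / n ** r
        y += step * lam * (g - y)    # memory term averaging the past
        x -= step * y                # move along the averaged direction
    return x

# Minimize (x - 3)^2 under gradient noise; the iterate should approach 3.
print(stochastic_heavy_ball(lambda x: 2 * (x - 3.0), np.array([0.0])))
```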
APA, Harvard, Vancouver, ISO, and other styles
22

Butt, Rehman. "Performance Comparison of AI Algorithms : Anytime Algorithms". Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5845.

Testo completo
Abstract (sommario):
Commercial computer gaming is a large, growing industry that already makes major contributions to the world's entertainment industry. Among the most important types of computer games are Real Time Strategy (RTS) games. RTS games are considered a major research subject for Artificial Intelligence (AI), yet the performance of AI in these games is still poor by human standards, due to some fundamental AI problems that require more research before they can be solved well for RTS games. There also exist AI algorithms that can help us solve these problems. Anytime Algorithms (AA) are algorithms that can optimize their memory and time resources, and they are considered well suited to RTS games. We believe that by making AI algorithms anytime we can optimize their behavior to better solve the AI problems. Although many anytime algorithms are available for various kinds of AI problems, according to our research no study has yet compared the performance of different anytime algorithms on an AI problem in RTS games. This study addresses that gap by building our own research platform, specifically designed for comparing the performance of our selected anytime algorithms on an AI problem.
Gli stili APA, Harvard, Vancouver, ISO e altri
23

Mirzazadeh, Mehdi. "Adaptive Comparison-Based Algorithms for Evaluating Set Queries". Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1147.

Testo completo
Abstract (sommario):
In this thesis we study a problem that arises in answering boolean queries submitted to a search engine. Usually a search engine stores the set of IDs of documents containing each word in a pre-computed sorted order, and to evaluate a query like "computer AND science" the search engine has to compute the intersection of the sets of documents containing the words "computer" and "science". More complex queries result in more complex set expressions. In this thesis we consider the problem of evaluating a set expression with union and intersection as operators and ordered sets as operands. We explore properties of comparison-based algorithms for the problem. A proof of a set expression is the set of comparisons that a comparison-based algorithm performs before it can determine the result of the expression. We discuss the properties of proofs of set expressions and, based on how complex the smallest proofs of a set expression E are, we define a measure of how difficult E is to compute. We then design an algorithm that is adaptive to the difficulty of the input expression, and we show that its running time is roughly proportional to that difficulty, with a factor that is roughly logarithmic in the number of operands of the input expression.
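To make the setting concrete, here is a toy Python sketch (not the thesis's algorithm) of adaptively intersecting two sorted ID lists with galloping search: the number of comparisons adapts to how interleaved the inputs are, which is the kind of instance-dependent cost the thesis formalizes through proofs.

```python
from bisect import bisect_left

def gallop(a, lo, x):
    # Smallest index i in [lo, len(a)) with a[i] >= x: double the step
    # to bracket x, then binary-search inside the bracket.
    step = 1
    while lo + step < len(a) and a[lo + step] < x:
        step *= 2
    return bisect_left(a, x, lo, min(lo + step, len(a)))

def intersect(a, b):
    # Adaptive intersection of two sorted lists: at each mismatch,
    # gallop ahead instead of scanning one element at a time.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i = gallop(a, i, b[j])
        else:
            j = gallop(b, j, a[i])
    return out

print(intersect([1, 3, 5, 7, 9, 11], [5, 11]))  # -> [5, 11]
```

On easy instances (e.g. one list far ahead of the other), this performs far fewer comparisons than a full merge, while never doing asymptotically worse than binary search per output element.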
Gli stili APA, Harvard, Vancouver, ISO e altri
24

Dutta, Himanshu Shekhar. "Survey of Approximation Algorithms for Set Cover Problem". Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12118/.

Testo completo
Abstract (sommario):
In this thesis, I survey 11 approximation algorithms for the unweighted set cover problem. I have also implemented three of the algorithms and created a software library that stores the code I have written. The algorithms I survey are: 1. Johnson's standard greedy; 2. f-frequency greedy; 3. Goldschmidt, Hochbaum and Yu's modified greedy; 4. Halldorsson's local optimization; 5. Duh and Furer's semi-local optimization; 6. Asaf Levin's improvement to Duh and Furer; 7. simple rounding; 8. randomized rounding; 9. LP duality; 10. the primal-dual schema; and 11. a network flow technique. Most of the algorithms surveyed are refinements of the standard greedy algorithm.
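For reference, the standard greedy baseline that most of the surveyed algorithms refine fits in a few lines; this Python sketch uses a made-up instance and is not code from the thesis library.

```python
def greedy_set_cover(universe, sets):
    # Johnson's standard greedy: repeatedly pick the set covering the
    # most still-uncovered elements; yields an O(log n)-approximation.
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("instance is infeasible")
        cover.append(best)
        uncovered -= best
    return cover

U = range(1, 8)
S = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 6}]
print(greedy_set_cover(U, S))  # -> [{4, 5, 6, 7}, {1, 2, 3}], covering 1..7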
Gli stili APA, Harvard, Vancouver, ISO e altri
25

Silva, Isaac Dayan Bastos da. "Análise e comparação entre algoritmos de percolação". Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/17000.

Testo completo
Abstract (sommario):
In this work, we study and compare two percolation algorithms, one developed by Elias and the other by Newman and Ziff, using theoretical tools from the complexity analysis of algorithms and an algorithm that performs an experimental comparison. This work is divided into three chapters. The first covers definitions and theorems necessary for a more formal mathematical study of percolation. The second presents the techniques used for estimating the complexity of algorithms: worst case, best case and average case. We use the worst-case technique to estimate the complexity of both algorithms so that we can compare them. The last chapter presents several characteristics of each algorithm; through the theoretical complexity estimates and a comparison of the execution times of the most important part of each one, we compare these important algorithms for simulating percolation.
Nesta dissertação estudamos e comparamos dois algoritmos de percolação, um elaborado por Elias e o outro por Newman e Ziff, utilizando ferramentas teóricas da complexidade de algoritmos e um algoritmo que efetuou uma comparação experimental. Dividimos este trabalho em três capítulos. O primeiro aborda algumas definições e teoremas necessários a um estudo matemático mais formal da percolação. O segundo apresenta técnicas utilizadas para o cálculo estimativo de complexidade de algoritmos, sejam elas: pior caso, melhor caso e caso médio. Utilizamos a técnica do pior caso para estimar a complexidade de ambos algoritmos e assim podermos compará-los. O último capítulo mostra diversas características de cada um dos algoritmos e através da estimativa teórica da complexidade e da comparação entre os tempos de execução da parte mais importante de cada um, conseguimos comparar esses importantes algoritmos que simulam a percolação.
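The core of the Newman-Ziff approach can be sketched compactly: bonds are added in random order and clusters are tracked with union-find, so each addition costs nearly constant amortized time. The Python sketch below (periodic square lattice, illustrative size and output) is a generic illustration, not either of the compared implementations.

```python
import random

def newman_ziff_bonds(L, seed=0):
    # Newman-Ziff-style sweep for bond percolation on an L x L periodic
    # square lattice: add bonds one at a time in random order and track
    # the largest cluster with union-find.
    n = L * L
    parent = list(range(n))
    size = [1] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    bonds = [(x + y * L, (x + 1) % L + y * L) for x in range(L) for y in range(L)] \
          + [(x + y * L, x + ((y + 1) % L) * L) for x in range(L) for y in range(L)]
    random.Random(seed).shuffle(bonds)

    largest = 1
    for m, (i, j) in enumerate(bonds, 1):
        ri, rj = find(i), find(j)
        if ri != rj:
            if size[ri] < size[rj]:
                ri, rj = rj, ri
            parent[rj] = ri                # union by size
            size[ri] += size[rj]
            largest = max(largest, size[ri])
        if m % (len(bonds) // 4) == 0:
            print(f"bond fraction {m / len(bonds):.2f}: largest cluster {largest}")

newman_ziff_bonds(64)
```

The full Newman-Ziff method additionally convolves observables measured at each bond count with a binomial distribution to obtain results as a smooth function of the occupation probability.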
Gli stili APA, Harvard, Vancouver, ISO e altri
26

Knauer, Christian. "Algorithms for comparing geometric patterns (Algorithmen zum Vergleich geometrischer Muster) /". [S.l. : s.n.], 2001. http://www.diss.fu-berlin.de/2002/110/index.html.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Eriksson, Daniel. "Algorithmic Design of Graphical Resources for Games Using Genetic Algorithms". Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139332.

Testo completo
Abstract (sommario):
Producing many varying instances of the same type of graphical resource for games, such as trees or foliage, can be of interest. But when randomly generating graphical resources, you can often end up with many similar-looking results, or with results that don't look like what they are meant to represent. This work investigates whether genetic algorithms can be applied to produce more varied results when generating graphical resources, by basing the fitness of each individual in each generation on how similar the graphical resource is to previously generated resources. From the limited experiments performed, this work concludes that while genetic algorithms do seem able to produce visually different graphical resources, Blender currently does not seem able to produce enough results in a reasonable time frame for this approach to be usable on a large scale.
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Pelikan, Martin. "Hierarchical Bayesian optimization algorithm : toward a new generation of evolutionary algorithms /". Berlin [u.a.] : Springer, 2005. http://www.loc.gov/catdir/toc/fy053/2004116659.html.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Kang, Seunghwa. "On the design of architecture-aware algorithms for emerging applications". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39503.

Testo completo
Abstract (sommario):
This dissertation maps various kernels and applications to a spectrum of programming models and architectures and also presents architecture-aware algorithms for different systems. The kernels and applications discussed in this dissertation have widely varying computational characteristics. For example, we consider both dense numerical computations and sparse graph algorithms. This dissertation also covers emerging applications from image processing, complex network analysis, and computational biology. We map these problems to diverse multicore processors and manycore accelerators. We also use new programming models (such as Transactional Memory, MapReduce, and Intel TBB) to address the performance and productivity challenges in the problems. Our experiences highlight the importance of mapping applications to appropriate programming models and architectures. We also find several limitations of current system software and architectures and directions to improve those. The discussion focuses on system software and architectural support for nested irregular parallelism, Transactional Memory, and hybrid data transfer mechanisms. We believe that the complexity of parallel programming can be significantly reduced via collaborative efforts among researchers and practitioners from different domains. This dissertation participates in the efforts by providing benchmarks and suggestions to improve system software and architectures.
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Reis, Márcio Félix 1986. "O problema da árvore geradora com muitas folhas". [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275511.

Testo completo
Abstract (sommario):
Orientador: Orlando Lee
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Neste trabalho estudamos o problema da árvore geradora com muitas folhas (PAGMF). Este problema pode ser usado como abstração para diversos problemas práticos e sabe-se que é NP-difícil. Estudamos, implementamos e executamos testes para algoritmos aproximados e exatos para o PAGMF e para um caso particular que considera apenas grafos cúbicos. O principal objetivo do trabalho foi verificar o comportamento prático dos algoritmos estudados. Para as instâncias testadas, em geral, o algoritmo guloso apresentou melhores resultados para o PAGMF e o algoritmo 2-aproximado teve os melhores resultados para os grafos cúbicos
Abstract: In this work we study the maximum leaf spanning tree problem (MLSTP). This problem can be used as an abstraction for many practical problems and is known to be NP-hard. We studied, implemented and ran tests of approximation and exact algorithms for the MLSTP and for a particular case that considers only cubic graphs. The main objective of this study was to investigate the practical behavior of the algorithms studied. For the tested instances, in general, the greedy algorithm showed better results for the MLSTP and the 2-approximation algorithm had the best results for cubic graphs.
Mestrado
Ciência da Computação
Mestre em Ciência da Computação
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Wright, Colin Douglas. "Combinatorial algorithms". Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357944.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Abrams, Daniel S. (Daniel Stephen) 1972. "Quantum algorithms". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/85313.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Nikolova, Evdokia Velinova. "Strategic algorithms". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54673.

Testo completo
Abstract (sommario):
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 193-201).
Classical algorithms from theoretical computer science arise time and again in practice. However, practical situations typically do not fit precisely into the traditional theoretical models. Additional necessary components are, for example, uncertainty and economic incentives. Therefore, modern algorithm design calls for more interdisciplinary approaches, as well as for deeper theoretical understanding, so that the algorithms can apply to more realistic settings and complex systems. Consider, for instance, the classical shortest path algorithm, which, given a graph with specified edge weights, seeks the path minimizing the total weight from a source to a destination. In practice, the edge weights are often uncertain and it is not even clear what we mean by shortest path anymore: is it the path that minimizes the expected weight? Or its variance, or some other metric? With a risk-averse objective function that takes into account both mean and standard deviation, we run into nonconvex optimization challenges that require new theory beyond classical shortest path algorithm design. Yet another shortest path application, routing of packets in the Internet, needs to further incorporate economic incentives to reflect the various business relationships among the Internet Service Providers that affect the choice of packet routes. Strategic Algorithms are algorithms that integrate optimization, uncertainty and economic modeling into algorithm design, with the goal of bringing about new theoretical developments and solving practical applications arising in complex computational-economic systems.
In short, this thesis contributes new algorithms and their underlying theory at the interface of optimization, uncertainty and economics. Although the interplay of these disciplines is present in various forms in our work, for the sake of presentation we have divided the material into three categories: 1. In Part I we investigate algorithms at the intersection of Optimization and Uncertainty. The key conceptual contribution in this part is discovering a novel connection between stochastic and nonconvex optimization. Traditional algorithm design has not taken into account the risk inherent in stochastic optimization problems. We consider natural objectives that incorporate risk, which turn out to be equivalent to certain nonconvex problems from the realm of continuous optimization. As a result, our work advances the state of the art in both stochastic and nonconvex optimization, presenting new complexity results and proposing general-purpose efficient approximation algorithms, some of which have shown promising practical performance and have been implemented in a real traffic prediction and navigation system. 2. Part II proposes new algorithm and mechanism design at the intersection of Uncertainty and Economics. In Part I we postulate that the random variables in our models come from given distributions. However, determining those distributions or their parameters is a challenging and fundamental problem in itself. A tool from Economics that has recently gained momentum for measuring the probability distribution of a random variable is an information or prediction market. Such markets, most popularly known for predicting the outcomes of political elections or other events of interest, have shown remarkable accuracy in practice, though at the same time they have left open the theoretical and strategic analysis of current implementations, as well as the need for new and improved designs which handle more complex outcome spaces (probability distribution functions) as opposed to binary or n-ary valued distributions. The contributions of this part include a unified strategic analysis of different prediction market designs that have been implemented in practice. We also offer new market designs for handling exponentially large outcome spaces stemming from ranking or permutation-type outcomes, together with algorithmic and complexity analysis. 3. In Part III we consider the interplay of optimization and economics in the context of network routing. This part is motivated by the network of autonomous systems in the Internet, where each portion of the network is controlled by an Internet service provider, namely by a self-interested economic agent. The business incentives do not exist merely in addition to the computer protocols governing the network. Although they are not currently integrated in those protocols and are decided largely via private contracting and negotiations, these economic considerations are a principal factor that determines how packets are routed. And vice versa, the demand and flow of network traffic fundamentally affect provider contracts and prices. The contributions of this part are the design and analysis of economic mechanisms for network routing. The mechanisms are based on first- and second-price auctions (the so-called Vickrey-Clarke-Groves, or VCG, mechanisms). We first analyze the equilibria and prices resulting from these mechanisms. We then investigate the compatibility of the better-understood VCG mechanisms with the current inter-domain routing protocols, and we demonstrate the critical importance of correct modeling and how it affects the complexity and algorithms necessary to implement the economic mechanisms.
by Evdokia Velinova Nikolova.
Ph.D.
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Kolářová, Jana. "Evoluční algoritmy pro ultrazvukovou perfúzní analýzu". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-401037.

Testo completo
Abstract (sommario):
This master's thesis focuses on the application of evolutionary algorithms to fitting curves to data obtained by ultrasound scanning of tissue. The fitted curve serves to estimate perfusion parameters, making it possible to detect possible pathophysiology in the scanned area. The theoretical introduction covers perfusion and its parameters, contrast agents for ultrasound applications, ultrasound scanning modalities, optimization, evolutionary algorithms in general, and two selected evolutionary algorithms: the genetic algorithm and the bee algorithm. These algorithms were tested on noisy data obtained from clinical images of mice with tumors. The final part summarizes the results of the practical part and provides suggestions and recommendations for further development.
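Neither the thesis's perfusion model nor its exact algorithms are reproduced here; the following generic Python sketch only shows the pattern of fitting a parametric bolus-like curve to noisy data with a small genetic algorithm, using an assumed gamma-variate model and arbitrary GA settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(t, p):
    A, a, b = p
    return A * t**a * np.exp(-t / b)   # gamma-variate-style bolus curve (assumed model)

t = np.linspace(0.1, 20, 100)
true = np.array([2.0, 1.5, 3.0])
data = model(t, true) + 0.5 * rng.standard_normal(t.size)  # synthetic noisy signal

def fitness(p):
    return -np.sum((model(t, p) - data) ** 2)  # negative squared fitting error

lo, hi = [0.1, 0.1, 0.5], [5.0, 3.0, 8.0]
pop = rng.uniform(lo, hi, size=(60, 3))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-20:]]                 # selection: keep the best third
    parents = elite[rng.integers(0, 20, size=(60, 2))]    # random parent pairs
    w = rng.random((60, 1))
    pop = w * parents[:, 0] + (1 - w) * parents[:, 1]     # blend crossover
    pop += 0.05 * rng.standard_normal(pop.shape)          # Gaussian mutation
    pop = np.clip(pop, lo, hi)

best = max(pop, key=fitness)
print(best)  # should land near the true parameters [2.0, 1.5, 3.0]
```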
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Lin, Han-Hsuan. "Topics in quantum algorithms : adiabatic algorithm, quantum money, and bomb query complexity". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99300.

Testo completo
Abstract (sommario):
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 111-115).
In this thesis, I present three results on quantum algorithms and their complexity. The first is a numerical study of the quantum adiabatic algorithm (QAA). We tested the performance of the QAA on random instances of MAX 2-SAT on 20 qubits and showed three strategies that improved the QAA's performance, including the counter-intuitive strategy of decreasing the overall evolution time. The second result is a security proof for the quantum money by knots proposed by Farhi et al. We proved that quantum money by knots cannot be cloned in a black-box way unless graph isomorphism is efficiently solvable by a quantum computer. Lastly, we defined a modified quantum query model, which we called bomb query complexity B(f), inspired by the Elitzur-Vaidman bomb-testing problem. We completely characterized bomb query complexity by showing that B(f) = Θ(Q(f)^2). This result implies a new method for finding upper bounds on quantum query complexity, which we applied to the maximum bipartite matching problem to obtain an algorithm with O(n^1.75) quantum query complexity, improving on the best known trivial O(n^2) upper bound.
by Han-Hsuan Lin.
Ph. D.
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Nanongkai, Danupon. "Graph and geometric algorithms on distributed networks and databases". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41056.

Testo completo
Abstract (sommario):
In this thesis, we study the power and limits of algorithms in various models, aiming at applications in distributed networks and databases. In distributed networks, graph algorithms are fundamental to many applications. We focus on computing random walks, which are an important primitive employed in a wide range of applications but have always been computed naively. We show that a faster solution exists and subsequently develop faster algorithms by exploiting random walk properties, leading to two immediate applications. We also show that this algorithm is optimal. Our technique for proving a lower bound shows the first non-trivial connection between communication complexity and lower bounds for distributed graph algorithms. We show that this technique has a wide range of applications by proving new lower bounds for many problems; some of these lower bounds show that the existing algorithms are tight. In database searching, we think of the database as a large set of multi-dimensional points stored on disk and want to help users quickly find the most desired point. In this thesis, we develop an algorithm that is significantly faster than previous algorithms, both theoretically and experimentally. The insight is to solve the problem in the streaming model, which helps emphasize the benefits of sequential access over random disk access. We also introduce randomization techniques to the area. The results are complemented with a lower bound. Finally, we initiate a new direction in an attempt to obtain better queries: we are the first to quantify output quality using "user satisfaction", which is made possible by borrowing the idea of modeling users by utility functions from game theory, and we justify our approach through a geometric analysis.
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Krüger, Franz David, e Mohamad Nabeel. "Hyperparameter Tuning Using Genetic Algorithms : A study of genetic algorithms impact and performance for optimization of ML algorithms". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42404.

Testo completo
Abstract (sommario):
Maskininlärning har blivit allt vanligare inom näringslivet. Informationsinsamling med Data mining (DM) har expanderats och DM-utövare använder en mängd tumregler för att effektivisera tillvägagångssättet genom att undvika att lägga ansenlig tid på att ställa in hyperparametrarna för en given ML-algoritm för att nå bästa träffsäkerhet. Förslaget i denna rapport är att införa ett tillvägagångssätt som systematiskt optimerar ML-algoritmerna med hjälp av genetiska algoritmer (GA) och utvärderar om och hur modellen ska konstrueras för att hitta globala lösningar för en specifik datamängd. Genetiska algoritmer implementeras på två utvalda ML-algoritmer, K-nearest neighbors och Random forest, med två numeriska datamängder: Iris-datamängden och Wisconsin-bröstcancerdatamängden. Modellen utvärderas med träffsäkerhet och beräkningstid, som sedan jämförs med sökmetoden exhaustive search. Resultatet har visat att GA fungerar bra för att hitta bra träffsäkerhetspoäng på rimlig tid. Det finns vissa begränsningar eftersom parameterns betydelse varierar för olika ML-algoritmer.
As machine learning (ML) is becoming more and more frequent in the business world, information gathering through data mining (DM) is on the rise, and DM practitioners generally use several rules of thumb to avoid spending a considerable amount of time tuning the hyperparameters (the parameters that control the learning process) of an ML algorithm to gain a high accuracy score. The proposal in this report is an approach that systematically optimizes ML algorithms using genetic algorithms (GA), and an evaluation of whether and how the model should be constructed to find global solutions for a specific data set. We implement a GA approach for two ML algorithms, K-nearest neighbors and Random Forest, on two numerical data sets, the Iris data set and the Wisconsin breast cancer data set; the model is evaluated by its accuracy score as well as its computational time, which are then compared against a search method, specifically exhaustive search. The results suggest that GA works well for finding high accuracy scores in a reasonable amount of time. There are some limitations, as a parameter's significance may vary between ML algorithms.
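A minimal version of this idea, assuming scikit-learn and the Iris data set, can be sketched as follows; the encoding, rates, and population size are illustrative choices, not the authors'.

```python
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
random.seed(0)

def accuracy(k, weights):
    # Fitness: 5-fold cross-validated accuracy of KNN with these hyperparameters.
    clf = KNeighborsClassifier(n_neighbors=k, weights=weights)
    return cross_val_score(clf, X, y, cv=5).mean()

# One individual = one hyperparameter setting for KNN.
pop = [(random.randint(1, 30), random.choice(["uniform", "distance"]))
       for _ in range(10)]

for gen in range(15):
    scored = sorted(pop, key=lambda ind: accuracy(*ind), reverse=True)
    survivors = scored[:5]                      # selection
    children = []
    for _ in range(5):
        a, b = random.sample(survivors, 2)
        k = random.choice([a[0], b[0]])         # crossover
        w = random.choice([a[1], b[1]])
        if random.random() < 0.3:               # mutation on n_neighbors
            k = min(30, max(1, k + random.randint(-3, 3)))
        children.append((k, w))
    pop = survivors + children

best = max(pop, key=lambda ind: accuracy(*ind))
print(best, accuracy(*best))
```

Exhaustive search over the same grid would evaluate every (k, weights) pair; the GA trades a guarantee of finding the grid optimum for fewer fitness evaluations, which is the comparison the thesis measures.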
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Aronsson, Jonatan. "Novel tree-based algorithms for computational electromagnetics". IEEE, 2009. http://hdl.handle.net/1993/22086.

Testo completo
Abstract (sommario):
Tree-based methods have wide applications for solving large-scale problems in electromagnetics, astrophysics, quantum chemistry, fluid mechanics, acoustics, and many more areas. This thesis focuses on their applicability for solving large-scale problems in electromagnetics. The Barnes-Hut (BH) algorithm and the Fast Multipole Method (FMM) are introduced along with a survey of important previous work. The required theory for applying those methods to problems in electromagnetics is presented with particular emphasis on the capacitance extraction problem and broadband full-wave scattering. A novel single source approximation is introduced for approximating clusters of electrostatic sources in multi-layered media. The approximation is derived by matching the spectra of the field in the vicinity of the stationary phase point. Combined with the BH algorithm, a new algorithm is shown to be an efficient method for evaluating electrostatic fields in multilayered media. Specifically, the new BH algorithm is well suited for fast capacitance extraction. The BH algorithm is also adapted to the scalar Helmholtz kernel by using the same methodology to derive an accurate single source approximation. The result is a fast algorithm that is suitable for accelerating the solution of the Electric Field Integral Equation (EFIE) for electrically small structures. Finally, a new version of FMM is presented that is stable and efficient from the low frequency regime to mid-range frequencies. By applying analytical derivatives to the field expansions at the observation points, the proposed method can rapidly evaluate vectorial kernels that arise in the FMM-accelerated solution of EFIE, the Magnetic Field Integral Equation (MFIE), and the Combined Field Integral Equation (CFIE).
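The single-source idea can be illustrated with a free-space toy example in Python (the thesis's approximation for multilayered media is more elaborate): far from a tight cluster of charges, the whole cluster is well approximated by one equivalent source at its charge-weighted centroid.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tight cluster of electrostatic point charges...
pos = rng.normal([0.0, 0.0], 0.05, size=(100, 2))
q = rng.uniform(0.5, 1.5, size=100)

# ...and a far-away observation point.
obs = np.array([10.0, 3.0])

# Exact potential (up to a constant factor): sum of q_i / r_i.
exact = np.sum(q / np.linalg.norm(obs - pos, axis=1))

# Single-source (monopole) approximation: total charge placed at the
# charge-weighted centroid of the cluster.
centroid = (q[:, None] * pos).sum(axis=0) / q.sum()
approx = q.sum() / np.linalg.norm(obs - centroid)

print(exact, approx)  # nearly equal when the cluster is small vs. the distance
```

Tree codes such as Barnes-Hut apply exactly this kind of replacement recursively, accepting a cluster whenever its size is small relative to its distance from the observation point.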
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Ito, Masanori, Mitsuru Kawamoto, Noboru Ohnishi e Yujiro Inouye. "SUPER-EXPONENTIAL ALGORITHMS AND EIGENVECTOR ALGORITHMS FOR BLIND SOURCE SEPARATION". INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2006. http://hdl.handle.net/2237/10429.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Corbineau, Marie-Caroline. "Proximal and interior point optimization strategies in image recovery". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC085/document.

Testo completo
Abstract (sommario):
Les problèmes inverses en traitement d'images peuvent être résolus en utilisant des méthodes variationnelles classiques, des approches basées sur l'apprentissage profond, ou encore des stratégies bayésiennes. Bien que différentes, ces approches nécessitent toutes des algorithmes d'optimisation efficaces. L'opérateur proximal est un outil important pour la minimisation de fonctions non lisses. Dans cette thèse, nous illustrons la polyvalence des algorithmes proximaux en les introduisant dans chacune des trois méthodes de résolution susmentionnées.Tout d'abord, nous considérons une formulation variationnelle sous contraintes dont la fonction objectif est composite. Nous développons PIPA, un nouvel algorithme proximal de points intérieurs permettant de résoudre ce problème. Dans le but d'accélérer PIPA, nous y incluons une métrique variable. La convergence de PIPA est prouvée sous certaines conditions et nous montrons que cette méthode est plus rapide que des algorithmes de l'état de l'art au travers de deux exemples numériques en traitement d'images.Dans une deuxième partie, nous étudions iRestNet, une architecture neuronale obtenue en déroulant un algorithme proximal de points intérieurs. iRestNet nécessite l'expression de l'opérateur proximal de la barrière logarithmique et des dérivées premières de cet opérateur. Nous fournissons ces expressions pour trois types de contraintes. Nous montrons ensuite que sous certaines conditions, cette architecture est robuste à une perturbation sur son entrée. Enfin, iRestNet démontre de bonnes performances pratiques en restauration d'images par rapport à une approche variationnelle et à d'autres méthodes d'apprentissage profond.La dernière partie de cette thèse est consacrée à l'étude d'une méthode d'échantillonnage stochastique pour résoudre des problèmes inverses dans un cadre bayésien. Nous proposons une version accélérée de l'algorithme proximal de Langevin non ajusté, baptisée PP-ULA. Cet algorithme est incorporé à un échantillonneur de Gibbs hybride utilisé pour réaliser la déconvolution et la segmentation d'images ultrasonores. PP-ULA utilise le principe de majoration-minimisation afin de gérer les distributions non log-concaves. Comme le montrent nos expériences réalisées sur des données ultrasonores simulées et réelles, PP-ULA permet une importante réduction du temps d'exécution tout en produisant des résultats de déconvolution et de segmentation très satisfaisants
Inverse problems in image processing can be solved by diverse techniques, such as classical variational methods, recent deep learning approaches, or Bayesian strategies. Although relying on different principles, these methods all require efficient optimization algorithms. The proximity operator appears as a crucial tool in many iterative solvers for nonsmooth optimization problems. In this thesis, we illustrate the versatility of proximal algorithms by incorporating them within each one of the aforementioned resolution methods.First, we consider a variational formulation including a set of constraints and a composite objective function. We present PIPA, a novel proximal interior point algorithm for solving the considered optimization problem. This algorithm includes variable metrics for acceleration purposes. We derive convergence guarantees for PIPA and show in numerical experiments that it compares favorably with state-of-the-art algorithms in two challenging image processing applications.In a second part, we investigate a neural network architecture called iRestNet, obtained by unfolding a proximal interior point algorithm over a fixed number of iterations. iRestNet requires the expression of the logarithmic barrier proximity operator and of its first derivatives, which we provide for three useful types of constraints. Then, we derive conditions under which this optimization-inspired architecture is robust to an input perturbation. We conduct several image deblurring experiments, in which iRestNet performs well with respect to a variational approach and to state-of-the-art deep learning methods.The last part of this thesis focuses on a stochastic sampling method for solving inverse problems in a Bayesian setting. We present an accelerated proximal unadjusted Langevin algorithm called PP-ULA. This scheme is incorporated into a hybrid Gibbs sampler used to perform joint deconvolution and segmentation of ultrasound images. PP-ULA employs the majorize-minimize principle to address non log-concave priors. As shown in numerical experiments, PP-ULA leads to a significant time reduction and to very satisfactory deconvolution and segmentation results on both simulated and real ultrasound data
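PIPA and PP-ULA are beyond a short snippet, but the proximity operator central to all three parts is easy to make concrete. Below is a minimal Python sketch, assuming an l1 regularizer and a synthetic sparse recovery problem, of the classic forward-backward (proximal gradient) iteration; it is a generic illustration, not the thesis's algorithms.

```python
import numpy as np

def prox_l1(x, t):
    # Proximity operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))          # stand-in for a blur/measurement operator
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - y)                # gradient of the smooth data-fit term
    x = prox_l1(x - step * grad, step * lam)  # forward-backward (proximal) step

print(np.flatnonzero(np.abs(x) > 0.1))      # should recover the support {5, 37, 80}
```

Interior point variants such as PIPA additionally handle constraints through a barrier whose proximity operator replaces the simple soft-thresholding used here.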
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Guedj, Michaël. "BSP algorithms for LTL & CTL model checking of security protocols". Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1081.

Testo completo
Abstract (sommario):
Dans un monde fortement dépendant de la communication de données distribuées, la conception d’infrastructures sécurisées est une tâche cruciale. Les systèmes et réseaux distribués prennent de plus en plus d’importance, car la plupart des services et des possibilités qui caractérisent la société moderne sont basés sur ces technologies.La communication entre les agents sur les réseaux a donc suscité un grand intérêt pour la recherche. Afin de fournir des moyens de communication efficaces et fiables, de plus en plus de protocoles de communication sont inventés, et pour la plupart d’entre eux, la sécurité est un objectif important
In a world strongly dependent on distributed data communication, the design of secure infrastructures is a crucial task. Distributed systems and networks are becoming increasingly important, as most of the services and opportunities that characterise the modern society are based on these technologies. Communication among agents over networks has therefore acquired a great deal of research interest. In order to provide effective and reliable means of communication, more and more communication protocols are invented, and for most of them, security is a significant goal
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Gupta, Pramod. "Robust clustering algorithms". Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39553.

Testo completo
Abstract (sommario):
One of the most widely used techniques for data clustering is agglomerative clustering. Such algorithms have long been used across many different fields, ranging from computational biology to the social sciences to computer vision, in part because they are simple and their output is easy to interpret. However, many of these algorithms lack any performance guarantees when the data is noisy, incomplete or has outliers, which is the case for most real-world data. It is well known that standard linkage algorithms perform extremely poorly in the presence of noise. In this work we propose two new robust algorithms for bottom-up agglomerative clustering and give formal theoretical guarantees for their robustness. We show that our algorithms can be used to cluster accurately in cases where the data satisfies a number of natural properties and where traditional agglomerative algorithms fail. We also extend our algorithms to an inductive setting with similar guarantees, in which we randomly choose a small subset of points from a much larger instance space, generate a hierarchy over this sample, and then insert the rest of the points into it to generate a hierarchy over the entire instance space. We then perform a systematic experimental analysis of various linkage algorithms, compare their performance on a variety of real-world data sets, and show that our algorithms handle various forms of noise much better than other hierarchical algorithms.
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Brolin, Echeverria Paolo, e Joakim Westermark. "Benchmarking Rubik’sRevenge algorithms". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134903.

Testo completo
Abstract (sommario):
This Bachelor thesis investigates two methods used to solve the Rubik's Cube 4x4x4 puzzle: the Reduction method and the Big Cube method. We have implemented the cube and the two solvers in Python. Through a series of tests we have concluded that the Big Cube method has a better average move count as well as a lower standard deviation compared with the Reduction method. However, the Reduction method has a lower minimum move count and consists of fewer algorithms. The best approach would be to combine both methods to form an optimal solution.
Denna kandidatexamensuppsats undersöker två olika metoder som används för att lösa Rubiks Kub 4x4x4. Metoderna som analyseras är Reduction och Big Cube. Vi har implementerat kuben samt de bägge lösarna I Python. Genom en serie tester har vi kommit fram till att Big Cube har ett lägre genomsnittligt rotationsantal samt lägre standardavvikelse än Reduction. Reductionmetoden har däremot ett lägre minimumvärde på antalet rotationer och består av färre algoritmer. Det bästa tillvägagångssättet vore att kombinera de båda lösningarna.
Gli stili APA, Harvard, Vancouver, ISO e altri
44

van de Hoef, Sebastian. "Extended Consensus Algorithms". Thesis, KTH, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-125809.

Testo completo
Abstract (sommario):
An extension of the linear consensus protocol for agents moving in the plane is considered. For single-integrator agents, adding a vector perpendicular to the standard consensus feedback leads to a large family of trajectories. If only the new perpendicular term is applied, sustained oscillations result. For special configurations, the form of the system trajectories is given in terms of the eigenvalues and eigenvectors of the system matrix. A proof is given that this additional term does not affect stability. On the other hand, it is shown that robustness against discrete implementation and switching topologies can decrease. The control strategy is also applied to agents with double-integrator dynamics. Stability can be achieved with sufficiently high velocity feedback, and the effect of this feedback on system performance is discussed further. Using the results for single integrators, a self-triggered consensus control strategy is proposed, based on the assumption that the other agents' input magnitudes are bounded. Additional communication of the actual input leads to asymptotic convergence. By similar reasoning, it is shown how local controllers at the agents can avoid circular regions in state space while moving towards consensus.
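The structure described above can be imitated in a few lines; this Python toy discretization (complete graph, arbitrary gain beta, not the thesis's exact protocol) adds a term perpendicular to the standard consensus feedback and still converges to the initial centroid.

```python
import numpy as np

# Complete-graph Laplacian for 4 single-integrator agents in the plane.
n = 4
L = n * np.eye(n) - np.ones((n, n))
R = np.array([[0.0, -1.0], [1.0, 0.0]])    # 90-degree rotation in the plane

x = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
dt, beta = 0.01, 1.0                        # beta scales the perpendicular term

for _ in range(2000):
    u = -L @ x                              # standard consensus feedback
    x = x + dt * (u + beta * u @ R.T)       # add a term perpendicular to it

print(x)  # all rows close to the initial centroid (2, 2)
```

Because the rotation only multiplies each Laplacian eigenvalue by (1 ± i*beta), the real parts of the closed-loop eigenvalues keep their signs, which is the intuition behind the stability claim; setting the consensus part to zero and keeping only the rotation would instead produce the sustained oscillations mentioned in the abstract.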
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Zhou, Tianyang 1980. "Modified LLL algorithms". Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99356.

Testo completo
Abstract (sommario):
Lattice basis reduction arises in many applications, such as cryptography, communications, GPS and so on. This thesis is concerned with the widely used LLL reduction. We cast it as a QRZ matrix factorization for real bases. Based on the matrix factorization, we first give the real version of the LLL algorithm (the original LLL algorithm is for integer bases). Then we propose three modified algorithms that improve computational efficiency while the reduced matrices still satisfy the LLL-reduced criteria. The first modified algorithm, referred to as MLLLPIVOT, uses a block pivoting strategy. The second, called MLLLINSERT, uses a greedy insertion strategy. The last, called MLLLLAZY, uses a "lazy" size-reduction strategy. Extensive simulation results are given to show the improvements and the different performance of the three algorithms. In addition, the numerical stability of the LLL algorithm and of the three modified algorithms is considered. The simulations indicate that on average the computational efficiency (measured by CPU time) of the four algorithms has the increasing order LLL < MLLLPIVOT < MLLLINSERT < MLLLLAZY, that the four algorithms are backward numerically stable in most cases, and that in some extreme cases their numerical stability is in the opposite order. Furthermore, we also give a complexity analysis of LLL, MLLLINSERT and MLLLLAZY under the assumption of exact arithmetic.
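For orientation, a plain textbook LLL reduction for real bases looks as follows; this Python sketch recomputes the Gram-Schmidt data every pass for clarity, whereas the thesis's MLLL variants change the pivoting, insertion, and size-reduction strategies.

```python
import numpy as np

def lll(B, delta=0.75):
    # Textbook LLL on the rows of B (floats); clarity over speed.
    B = B.astype(float).copy()
    n = len(B)

    def gso():
        # Gram-Schmidt vectors Bs and coefficients mu[i, j] = <b_i, b*_j> / ||b*_j||^2.
        Bs = B.copy()
        mu = np.eye(n)
        for i in range(n):
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] = Bs[i] - mu[i, j] * Bs[j]
        return Bs, mu

    k = 1
    while k < n:
        Bs, mu = gso()
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            q = round(mu[k, j])
            if q:
                B[k] -= q * B[j]
                mu[k, :j + 1] -= q * mu[j, :j + 1]
        # Lovász condition on consecutive Gram-Schmidt vectors.
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]     # swap and backtrack
            k = max(k - 1, 1)
    return B

print(lll(np.array([[201.0, 37.0], [1648.0, 297.0]])))  # e.g. [[1, 32], [40, 1]]
```

The swap step is where the modified algorithms differ: block pivoting, greedy insertion, or deferred ("lazy") size reduction reorganize this loop to cut the amount of recomputation.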
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Mosca, Michele. "Quantum computer algorithms". Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301184.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Hein, Birgit. "Quantum search algorithms". Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11512/.

Testo completo
Abstract (sommario):
In this thesis two quantum search algorithms on two different graphs, a hypercube and a d-dimensional square lattice, are analysed, and some applications of the lattice search are discussed. The approach in this thesis generalises a picture drawn by Shenvi, Kempe and Whaley, which was later adapted by Ambainis, Kempe and Rivosh. It defines a one-parameter family of unitary operators U_λ with parameter λ. It will be shown that two eigenvalues of U_λ form an avoided crossing at the λ-value where U_λ is equal to the old search operator. This generalised picture opens the way for a construction of two approximate eigenvectors at the crossing and gives rise to a 2×2 model Hamiltonian that is used to approximate the operator U_λ near the crossing. The Hamiltonian thus defined can be used to calculate the leading order of the search time and the success probability of the search. To the best of my knowledge, only the scaling of these quantities was previously known. For the algorithm searching the regular lattice, a generalisation of the model Hamiltonian for m target vertices is constructed. This algorithm can be used to send a signal from one vertex of the graph to a set of vertices. The signal is transmitted between these vertices exclusively and is localised only on the sender and the receiving vertices, while the probability of measuring the signal at one of the remaining vertices is significantly smaller. Moreover, this effect can be used to introduce an additional sender into search settings and to send a continuous signal to all target vertices, where the signal will localise. This is an improvement on the original search algorithm, as it does not need to know the number of target vertices.
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Zhang, Minghua, e 張明華. "Sequence mining algorithms". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B44570119.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Thompson, Simon Giles. "Distributed boosting algorithms". Thesis, University of Portsmouth, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285529.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Abu-Bakar, Nordin. "Adaptive genetic algorithms". Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343268.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
