Theses / dissertations on the topic "Computer algorithms"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the 50 best theses / dissertations for research on the topic "Computer algorithms".
Next to each source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, if it is available in the metadata.
Browse theses / dissertations from a wide range of scientific fields and put together a correct bibliography.
Mosca, Michele. "Quantum computer algorithms". Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301184.
Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer". Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.
Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time expectations of quick progress in the quantum computing project dominated in the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, it is clear that there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers became an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms would help to solve NP problems in polynomial time on classical computers. However, this is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. One of the most important problems in "quantum computer science" is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulation on a classical computer. Concretely, we shall represent in the Mathematica symbolic language Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes. We shall see that the same framework can be used for all these algorithms. This framework will contain the characteristic property of the symbolic language representation of quantum computing and it will be a straightforward matter to include this framework in future algorithms.
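As an illustration of what such a classical simulation involves, here is a minimal Python sketch (not the thesis's Mathematica framework) that simulates the Deutsch-Jozsa algorithm by explicit state-vector and matrix arithmetic; the oracle functions and the qubit count are illustrative assumptions.

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def kron_all(gates):
    """Kronecker product of a list of single-qubit gates."""
    return reduce(np.kron, gates)

def deutsch_jozsa(f, n):
    """Classically simulate Deutsch-Jozsa for f: {0,...,2^n - 1} -> {0,1}.
    Returns True if f is judged constant, False if balanced."""
    dim = 2 ** (n + 1)
    # Oracle U_f |x>|y> = |x>|y xor f(x)>, built as a permutation matrix.
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1.0
    # Start in |0...0>|1>, apply H to every qubit, the oracle, then H on the input register.
    state = np.zeros(dim)
    state[1] = 1.0
    state = kron_all([H] * (n + 1)) @ state
    state = U @ state
    state = kron_all([H] * n + [I]) @ state
    # Probability of measuring all input qubits as 0 (basis states 0 and 1 of the full register).
    p_zero = abs(state[0]) ** 2 + abs(state[1]) ** 2
    return bool(np.isclose(p_zero, 1.0))

print(deutsch_jozsa(lambda x: 0, 3))       # True: constant function
print(deutsch_jozsa(lambda x: x & 1, 3))   # False: balanced function (parity of the last bit)
```

The exponential size of the state vector and of the oracle matrix (2^(n+1) entries per dimension) is exactly why such simulations only scale to a handful of qubits, which is the gap the abstract refers to.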
Rhodes, Daniel Thomas. "Hardware accelerated computer graphics algorithms". Thesis, Nottingham Trent University, 2008. http://irep.ntu.ac.uk/id/eprint/201/.
Mims, Mark McGrew. "Dynamical stability of quantum algorithms /". Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004342.
Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.
Texto completo da fonteCataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k x k submatrix of an n x n matrix with i.i.d. standard Gaussian entries, which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k x k submatrix is 2(1 + o(1))√(log n/k) with high probability (w.h.p.) when k = O(log n/log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n/k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k x k matrix with asymptotically the same average value (1 + o(1))√(2 log n/k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor √2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k x k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n/k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows. To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n/k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* = 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of the algorithmic hardness - no polynomial time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n/k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and [cn] edges. We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c.
We observe that every maximum size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem. We solve this underlying large deviation problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n x n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M and for the case when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds both on the sample complexity and computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm is two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
by Quan Li.
Ph. D.
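For intuition, the alternating greedy idea behind an LAS-style search can be sketched in a few lines of Python with NumPy; the random initialisation, iteration cap, and toy parameters below are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def las_greedy(A, k, seed=None):
    """Alternating greedy search for a k x k submatrix of A with a large average entry:
    repeatedly pick the k best rows for the current columns, then the k best columns
    for those rows, until the selection stops changing."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    cols = rng.choice(n, size=k, replace=False)
    rows = np.array([], dtype=int)
    for _ in range(100):                                  # bounded alternation for safety
        new_rows = np.argsort(A[:, cols].sum(axis=1))[-k:]
        new_cols = np.argsort(A[new_rows, :].sum(axis=0))[-k:]
        if set(new_rows) == set(rows) and set(new_cols) == set(cols):
            break                                         # reached a fixed point
        rows, cols = new_rows, new_cols
    return rows, cols, A[np.ix_(rows, cols)].mean()

# Toy run on an i.i.d. standard Gaussian matrix, as in the abstract.
A = np.random.default_rng(0).standard_normal((200, 200))
rows, cols, avg = las_greedy(A, k=3, seed=1)
print(avg, 2 * np.sqrt(np.log(200) / 3))   # greedy value vs. the 2*sqrt(log n / k) benchmark
```

Each alternation cannot decrease the submatrix sum, so the search settles on a local optimum; the abstract's point is that such local optima sit roughly a factor √2 below the global optimum.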
Tran, Chan-Hung. "Fast clipping algorithms for computer graphics". Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26336.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
Viloria, John A. (John Alexander) 1978. "Optimizing clustering algorithms for computer vision". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86847.
Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.
Texto completo da fonteIncludes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms on the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR-factorization. We show that, if the coefficients of the input polynomials are allowed to be any computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP of all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[ξ], and Q[ξ], where ξ is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as that of the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as that of the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
by Pramook Khungurn.
M.Eng.
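A rough Python sketch of the flavor of this stabilization, assuming a simple fixed threshold rather than the precision-tracking scheme of Shirayanagi and Sweedler: run the Euclidean algorithm on floating-point coefficients and rewrite to zero any coefficient whose magnitude falls below the threshold, so that the computed degree sequence matches the exact computation. The threshold and example polynomials are illustrative.

```python
def poly_divmod(a, b):
    """Quotient and remainder of polynomials given as coefficient lists (highest degree first)."""
    a = list(map(float, a))
    q = []
    while len(a) >= len(b):
        coef = a[0] / b[0]
        q.append(coef)
        padded = list(b) + [0.0] * (len(a) - len(b))
        a = [ai - coef * bi for ai, bi in zip(a, padded)][1:]
    return q, a

def stabilized_gcd(a, b, eps=1e-10):
    """Euclidean algorithm with small coefficients rewritten to zero ('zero rewriting')."""
    while b:
        _, r = poly_divmod(a, b)
        # Stabilization step: coefficients below the threshold are treated as exact zeros.
        r = [0.0 if abs(c) < eps else c for c in r]
        while r and r[0] == 0.0:        # drop leading zeros so the remainder's degree is right
            r = r[1:]
        a, b = b, r
    return [c / a[0] for c in a]        # make the GCD monic

# (x - 1)(x + 2) and (x - 1)(x - 3): the exact GCD is x - 1.
p = [1.0, 1.0, -2.0]
q = [1.0, -4.0, 3.0]
print(stabilized_gcd(p, q))   # approximately [1.0, -1.0]
```

Without the zero-rewriting step, rounding noise would leave tiny nonzero leading coefficients in the remainders, and the floating-point run would follow a different (longer) sequence of divisions than the exact one.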
O'Brien, Neil. "Algorithms for scientific computing". Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/355716/.
Nofal, Samer. "Algorithms for argument systems". Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12173/.
Yu, Chia Woo. "Improved algorithms for hybrid video coding". Thesis, University of Warwick, 2007. http://wrap.warwick.ac.uk/3841/.
Barbosa, Rafael da Ponte. "New algorithms for distributed submodular maximization". Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/95545/.
Nguyen, Trung Thanh. "Continuous dynamic optimisation using evolutionary algorithms". Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/1296/.
Matsakis, Nicolaos. "Approximation algorithms for packing and buffering problems". Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/82141/.
Alam, Intekhab Asim. "Real time tracking using nature-inspired algorithms". Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8253/.
King, David Jonathan. "Functional programming and graph algorithms". Thesis, University of Glasgow, 1996. http://theses.gla.ac.uk/1629/.
Truong, Ngoc Cuong. "Algorithms for appliance usage prediction". Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/367540/.
Eriksson, Daniel. "Algorithmic Design of Graphical Resources for Games Using Genetic Algorithms". Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139332.
Themelis, Andreas. "Proximal algorithms for structured nonconvex optimization". Thesis, IMT Alti Studi Lucca, 2018. http://e-theses.imtlucca.it/262/1/Themelis_phdthesis.pdf.
Tyler, J. E. M. "Speech recognition by computer : algorithms and architectures". Thesis, University of Greenwich, 1988. http://gala.gre.ac.uk/8707/.
Shoker, Leor. "Signal processing algorithms for brain computer interfacing". Thesis, Cardiff University, 2006. http://orca.cf.ac.uk/56097/.
Ricca, Marco. "Energy aware control algorithms for computer networks". Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2497193.
Putzu, Lorenzo. "Computer aided diagnosis algorithms for digital microscopy". Doctoral thesis, Università degli Studi di Cagliari, 2016. http://hdl.handle.net/11584/266877.
Zhou, Tianyang 1980. "Modified LLL algorithms". Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99356.
Schuilenburg, Alexander Marius. "Parallelisation of algorithms". Master's thesis, University of Cape Town, 1990. http://hdl.handle.net/11427/22211.
Karunarathne, Lalith. "Network coding via evolutionary algorithms". Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/57047/.
Elabed, Jamal. "Implementing parallel sorting algorithms". Virtual Press, 1989. http://liblink.bsu.edu/uhtbin/catkey/543997.
Texto completo da fonteDepartment of Computer Science
Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.
Committee Chair: Mavris, Dimitri; Committee Member: Beeson, Don; Committee Member: Duncan, Scott; Committee Member: German, Brian; Committee Member: Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Abdul, Karim Mohamad Sharis. "Computer-aided aesthetics in evolutionary computer aided design". Thesis, Loughborough University, 2004. https://dspace.lboro.ac.uk/2134/27913.
Yang, Meng. "Algorithms in computer-aided design of VLSI circuits". Thesis, Edinburgh Napier University, 2006. http://researchrepository.napier.ac.uk/Output/6493.
Nikolova, Evdokia Velinova. "Strategic algorithms". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54673.
Texto completo da fonteCataloged from PDF version of thesis.
Includes bibliographical references (p. 193-201).
Classical algorithms from theoretical computer science arise time and again in practice. However, practical situations typically do not fit precisely into the traditional theoretical models. Additional necessary components are, for example, uncertainty and economic incentives. Therefore, modern algorithm design is calling for more interdisciplinary approaches, as well as for deeper theoretical understanding, so that the algorithms can apply to more realistic settings and complex systems. Consider, for instance, the classical shortest path algorithm, which, given a graph with specified edge weights, seeks the path minimizing the total weight from a source to a destination. In practice, the edge weights are often uncertain and it is not even clear what we mean by shortest path anymore: is it the path that minimizes the expected weight? Or its variance, or some other metric? With a risk-averse objective function that takes into account both mean and standard deviation, we run into nonconvex optimization challenges that require new theory beyond classical shortest path algorithm design. Yet another shortest path application, routing of packets in the Internet, needs to further incorporate economic incentives to reflect the various business relationships among the Internet Service Providers that affect the choice of packet routes. Strategic Algorithms are algorithms that integrate optimization, uncertainty and economic modeling into algorithm design, with the goal of bringing about new theoretical developments and solving practical applications arising in complex computational-economic systems.
(cont.) In short, this thesis contributes new algorithms and their underlying theory at the interface of optimization, uncertainty and economics. Although the interplay of these disciplines is present in various forms in our work, for the sake of presentation we have divided the material into three categories: 1. In Part I we investigate algorithms at the intersection of Optimization and Uncertainty. The key conceptual contribution in this part is discovering a novel connection between stochastic and nonconvex optimization. Traditional algorithm design has not taken into account the risk inherent in stochastic optimization problems. We consider natural objectives that incorporate risk, which turn out to be equivalent to certain nonconvex problems from the realm of continuous optimization. As a result, our work advances the state of the art in both stochastic and in nonconvex optimization, presenting new complexity results and proposing general purpose efficient approximation algorithms, some of which have shown promising practical performance and have been implemented in a real traffic prediction and navigation system. 2. Part II proposes new algorithm and mechanism design at the intersection of Uncertainty and Economics. In Part I we postulate that the random variables in our models come from given distributions. However, determining those distributions or their parameters is a challenging and fundamental problem in itself. A tool from Economics that has recently gained momentum for measuring the probability distribution of a random variable is an information or prediction market. Such markets, most popularly known for predicting the outcomes of political elections or other events of interest, have shown remarkable accuracy in practice, though at the same time have left open the theoretical and strategic analysis of current implementations, as well as the need for new and improved designs which handle more complex outcome spaces (probability distribution functions) as opposed to binary or n-ary valued distributions. The contributions of this part include a unified strategic analysis of different prediction market designs that have been implemented in practice.
(cont.) We also offer new market designs for handling exponentially large outcome spaces stemming from ranking or permutation-type outcomes, together with algorithmic and complexity analysis. 3. In Part III we consider the interplay of optimization and economics in the context of network routing. This part is motivated by the network of autonomous systems in the Internet where each portion of the network is controlled by an Internet service provider, namely by a self-interested economic agent. The business incentives do not exist merely in addition to the computer protocols governing the network. Although they are not currently integrated in those protocols and are decided largely via private contracting and negotiations, these economic considerations are a principal factor that determines how packets are routed. And vice versa, the demand and flow of network traffic fundamentally affect provider contracts and prices. The contributions of this part are the design and analysis of economic mechanisms for network routing. The mechanisms are based on first- and second-price auctions (the so-called Vickrey-Clarke-Groves, or VCG mechanisms). We first analyze the equilibria and prices resulting from these mechanisms. We then investigate the compatibility of the better understood VCG-mechanisms with the current inter-domain routing protocols, and we demonstrate the critical importance of correct modeling and how it affects the complexity and algorithms necessary to implement the economic mechanisms.
by Evdokia Velinova Nikolova.
Ph.D.
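To see why a risk-averse objective breaks the classical machinery, note that mean plus standard deviation does not decompose edge by edge, so label-setting methods such as Dijkstra's algorithm no longer apply directly. The toy Python sketch below, with made-up edge data (means and variances are illustrative assumptions), simply enumerates the paths of a tiny graph and compares the risk-neutral and risk-averse choices; it is a sketch of the objective, not of the thesis's approximation algorithms.

```python
import math

# Each edge carries (mean travel time, variance); variances add along a path when
# edge delays are independent, so a path's cost is mean + lam * sqrt(total variance).
EDGES = {
    ("s", "a"): (4.0, 0.1), ("a", "t"): (4.0, 0.1),   # longer on average but reliable
    ("s", "b"): (3.0, 9.0), ("b", "t"): (3.0, 9.0),   # shorter on average, very noisy
}

def simple_paths(adj, u, t, seen=()):
    """Enumerate simple paths from u to t in a small directed graph."""
    if u == t:
        yield [u]
        return
    for v in adj.get(u, []):
        if v not in seen:
            for rest in simple_paths(adj, v, t, seen + (u,)):
                yield [u] + rest

def risk_averse_path(edges, s, t, lam):
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append(v)
    best = None
    for path in simple_paths(adj, s, t):
        mean = sum(edges[(u, v)][0] for u, v in zip(path, path[1:]))
        var = sum(edges[(u, v)][1] for u, v in zip(path, path[1:]))
        cost = mean + lam * math.sqrt(var)
        if best is None or cost < best[0]:
            best = (cost, path)
    return best

print(risk_averse_path(EDGES, "s", "t", lam=0.0))   # risk-neutral: the noisy route wins
print(risk_averse_path(EDGES, "s", "t", lam=1.0))   # risk-averse: the reliable route wins
```

Brute-force enumeration only works on toy graphs; handling this nonconvex mean-standard-deviation trade-off efficiently on large graphs is the kind of problem Part I of the thesis addresses.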
Rahwan, Talal. "Algorithms for coalition formation in multi-agent systems". Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/49525/.
He, Dayu. "Algorithms for Graph Drawing Problems". Thesis, State University of New York at Buffalo, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10284151.
Texto completo da fonteA graph G is called planar if it can be drawn on the plan such that no two distinct edges intersect each other but at common endpoints. Such drawing is called a plane embedding of G. A plane graph is a graph with a fixed embedding. A straight-line drawing G of a graph G = (V, E) is a drawing where each vertex of V is drawn as a distinct point on the plane and each edge of G is drawn as a line segment connecting two end vertices. In this thesis, we study a set of planar graph drawing problems.
First, we consider the problem of monotone drawing: A path P in a straight-line drawing Γ is monotone if there exists a line l such that the orthogonal projections of the vertices of P on l appear along l in the order they appear in P. We call l a monotone line (or monotone direction) of P. Γ is called a monotone drawing of G if it contains at least one monotone path P_uw between every pair of vertices u, w of G. Monotone drawings were recently introduced by Angelini et al. and represent a new visualization paradigm; they are also closely related to several other important graph drawing problems. As in many graph drawing problems, one of the main concerns of this research is to reduce the drawing size, which is the size of the smallest integer grid such that every graph in the graph class can be drawn in such a grid. We present two approaches for the problem of monotone drawings of trees. Our first approach shows that every n-vertex tree T admits a monotone drawing on a grid of size O(n^1.205) × O(n^1.205). Our second approach further reduces the size of the drawing to 12n × 12n, which is asymptotically optimal. Both drawings can be constructed in O(n) time.
We also consider monotone drawings of 3-connected plane graphs. We prove that the classical Schnyder drawing of 3-connected plane graphs is a monotone drawing on an f × f grid, which can be constructed in O(n) time.
Second, we consider the problem of orthogonal drawing. An orthogonal drawing of a plane graph G is a planar drawing of G such that each vertex of G is drawn as a point on the plane, and each edge is drawn as a sequence of horizontal and vertical line segments with no crossings. Orthogonal drawing has attracted much attention due to its various applications in circuit schematics, relationship diagrams, data flow diagrams, etc. Rahman et al. gave a necessary and sufficient condition for a plane graph G of maximum degree 3 to have an orthogonal drawing without bends. An orthogonal drawing D(G) is orthogonally convex if all faces of D(G) are orthogonally convex polygons. Chang et al. gave a necessary and sufficient condition (which strengthens the conditions in the previous result) for a plane graph G of maximum degree 3 to have an orthogonally convex drawing without bends. We further strengthen these results by showing that if G satisfies the same conditions as in the previous papers, it not only has an orthogonally convex drawing, but also a stronger star-shaped orthogonal drawing.
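As a small illustration of the monotonicity definition above (not of the drawing algorithms themselves), the following Python sketch tests whether a given straight-line path is monotone: a monotone direction l exists exactly when all edge direction vectors of the path fit inside an open half-plane. The example coordinates are made up.

```python
import math

def is_monotone(points):
    """Return True if the path through `points` has a direction d such that the
    projections of the vertices onto d are strictly increasing. Equivalently, all
    edge direction vectors must lie in a common open half-plane."""
    angles = sorted(
        math.atan2(y1 - y0, x1 - x0)
        for (x0, y0), (x1, y1) in zip(points, points[1:])
    )
    if len(angles) < 2:          # a single edge (or point) is trivially monotone
        return True
    n = len(angles)
    # Largest cyclic gap between consecutive edge directions; the edges fit in an
    # open half-plane iff some gap exceeds pi.
    gaps = [(angles[(i + 1) % n] - angles[i]) % (2 * math.pi) for i in range(n)]
    return max(gaps) > math.pi

print(is_monotone([(0, 0), (1, 2), (2, 1), (3, 3)]))            # True: monotone in the x-direction
print(is_monotone([(0, 0), (2, 0), (2, 2), (0, 2), (0, 1)]))    # False: the path turns back on itself
```

A drawing is monotone when such a direction exists for the path connecting every pair of vertices, which is what the tree and Schnyder constructions above guarantee by design.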
Zhu, Huanzhou. "Developing graph-based co-scheduling algorithms with GPU acceleration". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/92000/.
Pieterse, Vreda. "Topic Maps for Specifying Algorithm Taxonomies : a case Study using Transitive Closure Algorithms". Thesis, University of Pretoria, 2016. http://hdl.handle.net/2263/59307.
Texto completo da fonteThesis (PhD)--University of Pretoria, 2016.
Computer Science
PhD
Unrestricted
Lu, Xin. "Efficient algorithms for scalable video coding". Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/59744/.
Malek, Fadi. "Polynomial zerofinding matrix algorithms". Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9980.
Acharyya, Amit. "Resource constrained signal processing algorithms and architectures". Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/179167/.
Jalalian, Hamid Reza. "Decomposition evolutionary algorithms for noisy multiobjective optimization". Thesis, University of Essex, 2016. http://repository.essex.ac.uk/16828/.
Texto completo da fonteBrolin, Echeverria Paolo, e Joakim Westermark. "Benchmarking Rubik’sRevenge algorithms". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134903.
This bachelor's thesis examines two different methods used to solve the 4x4x4 Rubik's Cube. The methods analysed are Reduction and Big Cube. We have implemented the cube and both solvers in Python. Through a series of tests we have found that Big Cube has a lower average number of rotations and a lower standard deviation than Reduction. The Reduction method, on the other hand, has a lower minimum number of rotations and consists of fewer algorithms. The best approach would be to combine the two solutions.
Zhang, Minghua, and 張明華. "Sequence mining algorithms". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B44570119.
Miles, Christopher Eoin. "Case-injected genetic algorithms in computer strategy games". abstract and full text PDF (free order & download UNR users only), 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433686.
Riddell, A. G. "Computer algorithms for Euclidean lattice gauge theory calculations". Thesis, University of Canterbury. Physics, 1988. http://hdl.handle.net/10092/8220.
Rich, Thomas H. "Algorithms for computer aided design of digital filters". Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/22867.
Mitchell, David Anthony Paul. "Fast algorithms and hardware for 3D computer graphics". Thesis, University of Sheffield, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299571.
Li, Wenda. "Towards justifying computer algebra algorithms in Isabelle/HOL". Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289389.
Erb Lugo, Anthony (Anthony E.). "Coevolutionary genetic algorithms for proactive computer network defenses". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112841.
Texto completo da fonteThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 47-48).
This thesis explores the use of coevolutionary genetic algorithms as tools in developing proactive computer network defenses. We also introduce rIPCA, a new coevolutionary algorithm with a focus on speed and performance. This work is in response to the threat of disruption that computer networks face by adaptive attackers. Our challenge is to improve network defenses by modeling adaptive attacker behavior and predicting attacks so that we may proactively defend against them. To address this, we introduce RIVALS, a new cybersecurity project developed to use coevolutionary algorithms to better defend against adaptive adversarial agents. In this contribution we describe RIVALS' current suite of coevolutionary algorithms and how they explore archiving as a means of maintaining progressive exploration. Our model also allows us to explore the connectivity of a network under an adversarial threat model. To examine the suite's effectiveness, for each algorithm we execute a standard coevolutionary benchmark (Compare-on-one) and RIVALS simulations on 3 different network topologies. Our experiments show that existing algorithms either sacrifice execution speed or forgo the assurance of consistent results. rIPCA, our adaptation of IPCA, is able to consistently produce high quality results, albeit with weakened guarantees, without sacrificing speed.
by Anthony Erb Lugo.
M. Eng.
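A heavily simplified Python sketch of the competitive coevolution loop underlying such systems (not RIVALS or rIPCA themselves): an attacker population and a defender population are each scored against the other and evolved by selection and mutation. The toy game, population sizes, and operators are illustrative assumptions.

```python
import random

N_BITS, POP, GENS, BUDGET = 12, 20, 30, 4   # toy sizes, not taken from the thesis

def random_individual():
    """A plan: a subset of BUDGET positions out of N_BITS (to attack or to defend)."""
    return set(random.sample(range(N_BITS), BUDGET))

def mutate(ind):
    """Swap one chosen position for an unused one, keeping the budget fixed."""
    child = set(ind)
    child.remove(random.choice(sorted(child)))
    child.add(random.choice([p for p in range(N_BITS) if p not in child]))
    return child

def payoff(attack, defense):
    """The attacker scores a point for each attacked position left undefended."""
    return len(attack - defense)

def coevolve():
    attackers = [random_individual() for _ in range(POP)]
    defenders = [random_individual() for _ in range(POP)]
    for _ in range(GENS):
        # Fitness is the average result against the whole opposing population.
        a_fit = {i: sum(payoff(a, d) for d in defenders) / POP for i, a in enumerate(attackers)}
        d_fit = {i: -sum(payoff(a, d) for a in attackers) / POP for i, d in enumerate(defenders)}
        # Truncation selection: keep the better half, refill with mutated copies.
        attackers = [attackers[i] for i in sorted(a_fit, key=a_fit.get, reverse=True)[:POP // 2]]
        defenders = [defenders[i] for i in sorted(d_fit, key=d_fit.get, reverse=True)[:POP // 2]]
        attackers += [mutate(random.choice(attackers)) for _ in range(POP - len(attackers))]
        defenders += [mutate(random.choice(defenders)) for _ in range(POP - len(defenders))]
    return attackers[0], defenders[0]

best_attack, best_defense = coevolve()
print("attack:", sorted(best_attack), "defense:", sorted(best_defense))
```

The archiving strategies and the rIPCA algorithm discussed in the abstract address a weakness this naive loop exhibits: without a memory of past opponents, both populations can cycle instead of making lasting progress.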
Keup, Jessica Faith. "Computer Music Composition using Crowdsourcing and Genetic Algorithms". NSUWorks, 2011. http://nsuworks.nova.edu/gscis_etd/197.
Javadi, Mohammad Saleh. "Computer Vision Algorithms for Intelligent Transportation Systems Applications". Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för matematik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17166.
Heggie, Patricia M. "Algorithms for subgroup presentations : computer implementation and applications". Thesis, University of St Andrews, 1991. http://hdl.handle.net/10023/13684.