Follow this link to see other types of publications on the topic: Computer algorithms.

Theses on the topic "Computer algorithms"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the source type:

Consult the top 50 dissertations (bachelor's, master's, or doctoral theses) on the research topic "Computer algorithms".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the abstract (summary) of the work online, if it is included in the metadata.

Browse theses from many scientific fields and compile an accurate bibliography.

1

Mosca, Michele. "Quantum computer algorithms". Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer". Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.

Full text
Abstract (summary):

Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. In the first stage of development of quantum computing, the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time, expectations of quick progress in the quantum computing project dominated the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems, such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers, led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers became an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms will help to solve NP problems in polynomial time on classical computers; this is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. One of the most important problems in "quantum computer science" is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulation on a classical computer. Concretely, we represent Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes in the Mathematica symbolic language. We show that the same framework can be used for all these algorithms. This framework contains the characteristic properties of the symbolic-language representation of quantum computing, and it is a straightforward matter to include it in future algorithms.
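
The thesis builds its simulations in Mathematica; purely as an illustration of what classically simulating one of the algorithms it names involves, here is a minimal NumPy sketch of Grover's search on an explicit state vector. The function name, qubit count and use of NumPy are my assumptions, not the thesis's framework.

```python
import numpy as np

def grover_search(n_qubits: int, marked: int) -> int:
    """Classically simulate Grover's search for one marked item among 2**n_qubits."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition |s>
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1.0                       # oracle: phase flip on the marked index
        state = 2.0 * state.mean() - state          # diffusion: reflection about the mean
    return int(np.argmax(state ** 2))               # most probable measurement outcome

if __name__ == "__main__":
    print(grover_search(n_qubits=6, marked=42))     # prints 42 with high probability
```

Because the state vector has 2^n entries, such a classical simulation stops scaling long before a real quantum computer would, which is consistent with the abstract's point about the purpose of simulation.
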

APA, Harvard, Vancouver, ISO, and other styles
3

Rhodes, Daniel Thomas. "Hardware accelerated computer graphics algorithms". Thesis, Nottingham Trent University, 2008. http://irep.ntu.ac.uk/id/eprint/201/.

Full text
Abstract (summary):
The advent of shaders in the latest generations of graphics hardware, which has made consumer level graphics hardware partially programmable, makes now an ideal time to investigate new graphical techniques and algorithms as well as attempting to improve upon existing ones. This work looks at areas of current interest within the graphics community such as Texture Filtering, Bump Mapping and Depth of Field simulation. These are all areas which have enjoyed much interest over the history of computer graphics but which provide a great deal of scope for further investigation in the light of recent hardware advances. A new hardware implementation of a texture filtering technique, aimed at consumer level hardware, is presented. This novel technique utilises Fourier space image filtering to reduce aliasing. Investigation shows that the technique provides reduced levels of aliasing along with comparable levels of detail to currently popular techniques. This adds to the community's knowledge by expanding the range of techniques available, as well as increasing the number of techniques which offer the potential for easy integration with current consumer level graphics hardware along with real-time performance. Bump mapping is a long-standing and well understood technique. Variations and extensions of it have been popular in real-time 3D computer graphics for many years. A new hardware implementation of a technique termed Super Bump Mapping (SBM) is introduced. Expanding on the work of Cant and Langensiepen [1], the SBM technique adopts the novel approach of using normal maps which supply multiple vectors per texel. This allows the retention of much more detail and overcomes some of the aliasing deficiencies of standard bump mapping caused by the standard single vector approach and the non-linearity of the bump mapping process. A novel depth of field algorithm is proposed, which is an extension of the author's previous work [2][3][4]. The technique is aimed at consumer level hardware and attempts to raise the bar for realism by providing support for the 'see-through' effect. This effect is a vital factor in the realistic appearance of simulated depth of field and has been overlooked in real time computer graphics due to the complexities of an accurate calculation. The implementation of this new algorithm on current consumer level hardware is investigated and it is concluded that while current hardware is not yet capable enough, future iterations will provide the necessary functional and performance increases.
APA, Harvard, Vancouver, ISO, and other styles
4

Mims, Mark McGrew. "Dynamical stability of quantum algorithms /". Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.

Full text
Abstract (summary):
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries which has the largest average entry. It was shown in [13], using non-constructive methods, that the largest average value of a k × k submatrix is 2(1 + o(1))√(log n / k) with high probability (w.h.p.) when k = O(log n / log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n / k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1))√(2 log n / k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n / k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows. To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n / k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* ≜ 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n / k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut, normalized by the number of nodes, belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c. We observe that every maximum size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem. We solve this underlying large deviation problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds both on the sample complexity and computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm is two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
by Quan Li.
Ph. D.
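
As an illustration of the Largest Average Submatrix (LAS) procedure the abstract analyses, the sketch below alternates between picking the k best rows for the current columns and the k best columns for the current rows until the k × k average stops improving. The random initialisation, tolerance and function names are my own assumptions, not details from the thesis.

```python
import numpy as np

def las_greedy(A, k, seed=0):
    """Alternate row/column improvement for the Largest Average Submatrix problem."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(A.shape[1], size=k, replace=False)
    prev = -np.inf
    while True:
        rows = np.argsort(A[:, cols].sum(axis=1))[-k:]   # k rows with largest sum over cols
        cols = np.argsort(A[rows, :].sum(axis=0))[-k:]   # k cols with largest sum over rows
        avg = A[np.ix_(rows, cols)].mean()               # monotonically non-decreasing
        if avg <= prev + 1e-12:
            return rows, cols, avg
        prev = avg

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 500))
    rows, cols, avg = las_greedy(A, k=5)
    print(f"average of selected 5x5 submatrix: {avg:.3f}")
    print("asymptotic prediction sqrt(2 log n / k):", round(np.sqrt(2 * np.log(500) / 5), 3))
```
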
APA, Harvard, Vancouver, ISO, and other styles
6

Tran, Chan-Hung. "Fast clipping algorithms for computer graphics". Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26336.

Full text
Abstract (summary):
Interactive computer graphics can achieve high-bandwidth man-machine communication only if the graphics system meets certain speed requirements. Clipping plays an important role in the viewing process, as well as in the zooming and panning functions; thus, it is desirable to develop a fast clipper. In this thesis, the intersection problem of a line segment against a convex polygonal object has been studied. Adaptation of the clipping algorithms for parallel processing has also been investigated. Based on the conventional parametric clipping algorithm, two families of 2-D generalized line clipping algorithms are proposed: the t-para method and the s-para method. Depending on the implementation, both run either linearly in time using a sequential tracing or logarithmically in time by applying the numerical bisection method. The intersection problem is solved after the sector locations of the endpoints of a line segment are determined by a binary search. Three-dimensional clipping with a sweep-defined object using translational sweeping or conic sweeping is also discussed. Furthermore, a mapping method is developed for rectangular clipping. The endpoints of a line segment are first mapped onto the clip boundaries by an interval-clip operation. Then a pseudo window is defined and a set of conditions is derived for trivial acceptance and rejection. The proposed algorithms are implemented and compared with the Liang-Barsky algorithm to estimate their practical efficiency. Vectorization of the 2-D and 3-D rectangular clipping algorithms on an array processor has also been attempted.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
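
The thesis benchmarks its t-para and s-para clippers against the Liang-Barsky algorithm; the sketch below is only that standard Liang-Barsky baseline (parametric clipping of a segment against an axis-aligned window), not one of the thesis's own methods.

```python
def liang_barsky(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip segment (x0,y0)-(x1,y1) against an axis-aligned window.
    Returns the clipped endpoints, or None if the segment lies entirely outside."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    # Each (p, q) pair tests one window edge in the parametric form x0 + t*dx.
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0), (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:                  # segment parallel to this edge
            if q < 0:
                return None         # parallel and outside the window
            continue
        r = q / p
        if p < 0:                   # potentially entering intersection
            if r > t1:
                return None
            t0 = max(t0, r)
        else:                       # potentially leaving intersection
            if r < t0:
                return None
            t1 = min(t1, r)
    return (x0 + t0 * dx, y0 + t0 * dy, x0 + t1 * dx, y0 + t1 * dy)

if __name__ == "__main__":
    print(liang_barsky(-5, 3, 15, 9, 0, 0, 10, 10))   # -> (0.0, 4.5, 10.0, 7.5)
```
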
APA, Harvard, Vancouver, ISO, and other styles
7

Viloria, John A. (John Alexander) 1978. "Optimizing clustering algorithms for computer vision". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.

Full text
Abstract (summary):
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms on the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR-factorization. We show that, if the coefficients of the input polynomials are allowed to be any computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP of all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[ξ], and Q[ξ], where ξ is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as that of the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as that of the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
by Pramook Khungurn.
M.Eng.
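
To make the stabilization idea concrete, here is a hedged sketch of the Euclidean polynomial GCD with floating-point coefficients in which near-zero remainder coefficients are rewritten to exact zeros. The fixed tolerance and NumPy helpers are my simplifications; the actual Shirayanagi-Sweedler scheme tracks precision explicitly rather than using a single threshold.

```python
import numpy as np

def poly_gcd(f, g, tol=1e-10):
    """Euclidean algorithm for univariate polynomial GCD with floating-point
    coefficients (highest degree first). Coefficients smaller than `tol` are
    zeroed, mimicking the zero-rewriting used to stabilise the exact algorithm."""
    f = np.trim_zeros(np.asarray(f, float), "f")
    g = np.trim_zeros(np.asarray(g, float), "f")
    while g.size > 0:
        _, r = np.polydiv(f, g)
        r[np.abs(r) < tol] = 0.0            # stabilisation: treat tiny coefficients as zero
        f, g = g, np.trim_zeros(r, "f")
    return f / f[0]                          # return a monic GCD

if __name__ == "__main__":
    # (x - 1)(x + 2) = x^2 + x - 2 and (x - 1)(x - 3) = x^2 - 4x + 3 share the factor (x - 1)
    print(poly_gcd([1.0, 1.0, -2.0], [1.0, -4.0, 3.0]))   # approximately [1., -1.]
```
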
APA, Harvard, Vancouver, ISO, and other styles
9

O'Brien, Neil. "Algorithms for scientific computing". Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/355716/.

Full text
Abstract (summary):
There has long been interest in algorithms for simulating physical systems. We are concerned with two areas within this field: fast multipole methods and meshless methods. Since Greengard and Rokhlin's seminal paper in 1987, considerable interest has arisen in fast multipole methods for finding the energy of particle systems in two and three dimensions, and more recently in many other applications where fast matrix-vector multiplication is called for. We develop a new fast multipole method that allows the calculation of the energy of a system of N particles in O(N) time, where the particles' interactions are governed by the 2D Yukawa potential, which takes the form of a modified Bessel function Kν. We then turn our attention to meshless methods. We formulate and test a new radial basis function finite difference method for solving an eigenvalue problem on a periodic domain. We then apply meshless methods to modelling photonic crystals. After an initial background study of the field, we detail the Maxwell equations, which govern the interaction of the light with the photonic crystal, and show how photonic band gaps may arise. We present a novel meshless weak-strong form method with reduced computational cost compared to the existing meshless weak form method. Furthermore, we develop a new radial basis function finite difference method for photonic band gap calculations. Throughout the work we demonstrate the application of cutting-edge technologies such as cloud computing to the development and verification of algorithms for physical simulations.
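
For context, the sketch below evaluates the pairwise 2D Yukawa energy by the naive O(N²) direct sum that a fast multipole method is designed to replace, using SciPy's modified Bessel function of the second kind. The function name, unit screening length and charge model are assumptions of mine.

```python
import numpy as np
from scipy.special import kv   # modified Bessel function of the second kind, K_nu

def yukawa_energy_direct(points, charges, lam=1.0, nu=0.0):
    """O(N^2) pairwise energy under the 2-D Yukawa (screened) kernel K_nu(lam * r).
    A fast multipole method computes the same sums in O(N) time."""
    energy = 0.0
    for i in range(len(points)):
        r = np.linalg.norm(points[i + 1:] - points[i], axis=1)   # distances to later particles
        energy += charges[i] * np.sum(charges[i + 1:] * kv(nu, lam * r))
    return energy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((200, 2))
    q = rng.choice([-1.0, 1.0], size=200)
    print(yukawa_energy_direct(pts, q))
```
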
APA, Harvard, Vancouver, ISO, and other styles
10

Nofal, Samer. "Algorithms for argument systems". Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12173/.

Full text
Abstract (summary):
Argument systems are computational models that enable an artificially intelligent agent to reason via argumentation. Basically, the computations in argument systems can be viewed as search problems. In general, for a wide range of such problems existing algorithms lack five important features. Firstly, there is no comprehensive study that shows which of the existing algorithms is the most efficient in solving a particular problem. Secondly, there is no work that establishes the use of cost-effective heuristics leading to more efficient algorithms. Thirdly, mechanisms for pruning the search space are understudied, and hence, further pruning techniques might be neglected. Fourthly, diverse decision problems, for extended models of argument systems, are left without dedicated algorithms fine-tuned to the specific requirements of the respective extended model. Fifthly, some existing algorithms are presented at a high level that leaves some aspects of the computations unspecified, and therefore, implementations are rendered open to different interpretations. The work presented in this thesis tries to address all these concerns. Concisely, the presented work is centered around a widely studied view of what computationally defines an argument system. According to this view, an argument system is a pair: a set of abstract arguments and a binary relation that captures the conflicting arguments. Then, to resolve an instance of argument systems the acceptable arguments must be decided according to a set of criteria that collectively define the argumentation semantics. For different motivations there are various argumentation semantics. Equally, several proposals in the literature present extended models that stretch the basic two components of an argument system, usually by incorporating more elements and/or broadening the nature of the existing components. This work designs algorithms that solve decision problems in the basic form of argument systems as well as in some other extended models. Likewise, new algorithms are developed that deal with different argumentation semantics. We evaluate our algorithms against existing algorithms experimentally; the results give sufficient indication that the new algorithms are superior with respect to their running time.
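
To make the abstract's definition concrete, the sketch below takes an argument system as a set of arguments plus an attack relation and computes the grounded extension by iterating the characteristic function. It illustrates the model and one standard semantics only, not any of the thesis's own algorithms.

```python
def grounded_extension(arguments, attacks):
    """Grounded semantics for an abstract argumentation framework.
    `attacks` is a set of (attacker, target) pairs. Starting from the empty set,
    repeatedly add every argument all of whose attackers are attacked by the
    current set (i.e. the argument is defended), until a fixed point is reached."""
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, attacker) in attacks for d in extension)
                   for attacker, target in attacks if target == a)
        }
        if defended == extension:
            return extension
        extension = defended

if __name__ == "__main__":
    args = {"a", "b", "c", "d"}
    atts = {("a", "b"), ("b", "c"), ("c", "d")}
    print(sorted(grounded_extension(args, atts)))   # ['a', 'c']: a is unattacked and defends c
```
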
APA, Harvard, Vancouver, ISO, and other styles
11

Yu, Chia Woo. "Improved algorithms for hybrid video coding". Thesis, University of Warwick, 2007. http://wrap.warwick.ac.uk/3841/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Barbosa, Rafael da Ponte. "New algorithms for distributed submodular maximization". Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/95545/.

Full text
Abstract (summary):
A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as submodular maximization problems. In many of these applications, the amount of data collected is quite large and it is growing at a very fast pace. For example, the wide deployment of sensors has led to the collection of large amounts of measurements of the physical world. Similarly, medical data and human activity data are being captured and stored at an ever increasing rate and level of detail. This data is often high-dimensional and complex, and it needs to be stored and/or processed in a distributed fashion. Following a recent line of work, we present here parallel algorithms for these problems, and analyze the compromise between quality of the solutions obtained and the amount of computational overhead. On the one hand, we develop strategies for bringing existing algorithms for constrained submodular maximization in the sequential setting to the distributed setting. The algorithms presented achieve constant approximation factors in two rounds, and near optimal approximation ratios in only a constant number of rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint. On the other hand, for unconstrained submodular maximization, we devise parallel algorithms combining naive random sampling and Double Greedy steps, and investigate how much the quality of the solutions degrades with less coordination.
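
As a sequential illustration of the Double Greedy steps mentioned for unconstrained submodular maximization, here is the deterministic double-greedy sweep of Buchbinder et al. applied to a toy cut function; the distributed sampling machinery of the thesis is not reproduced, and all names are mine.

```python
def double_greedy(ground_set, f):
    """Deterministic double-greedy for unconstrained submodular maximization:
    sweep the elements once, keeping two solutions X subset-of Y, and decide for
    each element whether to add it to X or drop it from Y."""
    ground = list(ground_set)
    X, Y = set(), set(ground)
    for e in ground:
        gain_add = f(X | {e}) - f(X)        # marginal gain of adding e to the lower set
        gain_del = f(Y - {e}) - f(Y)        # marginal gain of removing e from the upper set
        if gain_add >= gain_del:
            X.add(e)
        else:
            Y.discard(e)
    return X                                 # X == Y after the sweep

if __name__ == "__main__":
    # Toy submodular objective: the cut function of a small undirected graph.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
    print(sorted(double_greedy(range(4), cut)))   # e.g. [0, 2], a maximum cut here
```
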
APA, Harvard, Vancouver, ISO, and other styles
13

Nguyen, Trung Thanh. "Continuous dynamic optimisation using evolutionary algorithms". Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/1296/.

Full text
Abstract (summary):
Evolutionary dynamic optimisation (EDO), or the study of applying evolutionary algorithms to dynamic optimisation problems (DOPs) is the focus of this thesis. Based on two comprehensive literature reviews on existing academic EDO research and real-world DOPs, this thesis for the first time identifies some important gaps in current academic research where some common types of problems and problem characteristics have not been covered. In an attempt to close some of these gaps, the thesis makes the following contributions: First, the thesis helps to characterise DOPs better by providing a new definition framework, two new sets of benchmark problems (for certain classes of continuous DOPs) and several new sets of performance measures (for certain classes of continuous DOPs). Second, the thesis studies continuous dynamic constrained optimisation problems (DCOPs), an important and common class of DOPs that have not been studied in EDO research. Contributions include developing novel optimisation approaches (with superior results to existing methods), analysing representative characteristics of DCOPs, identifying the strengths/weaknesses of existing methods and suggesting requirements for an algorithm to solve DCOPs effectively. Third, the thesis studies dynamic time-linkage optimisation problems (DTPs), another important and common class of DOPs that have not been well-studied in EDO research. Contributions include developing a new optimisation approach (with better results than existing methods in certain classes of DTPs), analysing the characteristics of DTPs and the strengths and weaknesses of existing EDO methods in solving certain classes of DTPs.
APA, Harvard, Vancouver, ISO, and other styles
14

Matsakis, Nicolaos. "Approximation algorithms for packing and buffering problems". Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/82141/.

Full text
Abstract (summary):
This thesis studies online and offline approximation algorithms for packing and buffering problems. In the second chapter of this thesis, we study the problem of packing linear programs online. In this problem, the online algorithm may only increase the values of the variables of the linear program, and its goal is to maximize the value of its objective function. The online algorithm initially has full knowledge of all parameters of the linear program, except for the right-hand sides of the constraints, which are gradually revealed to it by the adversary. This online problem was introduced by Ochel et al. [2012]. Our contribution (Englert et al. [2014]) is to provide improved upper bounds for the competitiveness of both deterministic and randomized online algorithms for this problem, as well as an optimal deterministic online algorithm for the special case of linear programs involving two variables. In the third chapter we study the offline COLORFUL BIN PACKING problem. This problem is a variant of the BIN PACKING problem, where each item is associated with a color and where there exists the additional restriction that two items packed consecutively into the same bin cannot share the same color. The COLORFUL BIN PACKING problem has been studied mainly from an online perspective and was introduced as a generalization of the BLACK AND WHITE BIN PACKING problem (Balogh et al. [2012]), i.e., the special case of this problem for two colors. We provide (joint work with Matthias Englert) a 2-approximate algorithm for the COLORFUL BIN PACKING problem. In the fourth chapter we study the Longest Queue Drop (LQD) online algorithm for shared-memory switches with three and two output ports. The Longest Queue Drop algorithm is a well-known online algorithm used to direct the packet flow of shared-memory switches. According to LQD, when the buffer of the switch becomes full, a packet is preempted from the longest queue in the buffer to free buffer space for the newly arriving packet, which is accepted. We show (Matsakis [2016], to appear) that the Longest Queue Drop algorithm is (3/2)-competitive for three-port switches, improving the previously best upper bound of 5/3 (Kobayashi et al. [2007]). Additionally, we show that this algorithm is exactly (4/3)-competitive for two-port switches, correcting a previously published result claiming a tight upper bound of (4M − 4)/(3M − 2) < 4/3, where M ∈ Z⁺ denotes the buffer size.
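
A minimal sketch of the Longest Queue Drop policy described above: arriving packets are always accepted, and when the shared buffer is full a packet is first preempted from the currently longest queue. Per-slot departures and the competitive analysis are omitted for brevity; the names and unit-packet model are mine.

```python
from collections import deque

def lqd_simulate(arrivals, num_ports, buffer_size):
    """Longest Queue Drop for a shared-memory switch. `arrivals` is a sequence of
    output-port indices; departures are not modelled in this simplified sketch."""
    queues = [deque() for _ in range(num_ports)]
    total = 0
    for t, port in enumerate(arrivals):
        if total == buffer_size:
            longest = max(range(num_ports), key=lambda q: len(queues[q]))
            queues[longest].pop()            # preempt a packet from the longest queue
            total -= 1
        queues[port].append(t)               # the newly arriving packet is accepted
        total += 1
    return [len(q) for q in queues]

if __name__ == "__main__":
    # Two output ports sharing a buffer of 6 packets; port 0 is heavily loaded.
    print(lqd_simulate([0, 0, 0, 0, 0, 1, 1, 0, 1], num_ports=2, buffer_size=6))
```
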
APA, Harvard, Vancouver, ISO, and other styles
15

Alam, Intekhab Asim. "Real time tracking using nature-inspired algorithms". Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8253/.

Full text
Abstract (summary):
This thesis investigates the core difficulties in the tracking field of computer vision. The aim is to develop a suitable tuning-free optimisation strategy so that real-time tracking can be achieved. Population-based and multi-solution based approaches were applied first to analyse the convergence behaviours in the evolutionary test cases, in order to identify the core misconceptions in the manner in which the search characteristics of particles are defined in the literature. A general perception in the scientific community is that particle-based methods are not suitable for real-time applications. This thesis improves the convergence properties of particles by a novel scale-free correlation approach. By altering the fundamental definition of a particle and by avoiding the nostalgic operations, the tracking was expedited to a rate of 250 FPS. There is a reasonable amount of similarity between the tracking landscapes and the ones generated by three-dimensional evolutionary test cases. Several experimental studies are conducted that compare the performance of the novel optimisation to that observed with the swarming methods. It is therefore concluded that the modified particle behaviour outclassed the traditional approaches by huge margins in almost every test scenario.
APA, Harvard, Vancouver, ISO, and other styles
16

King, David Jonathan. "Functional programming and graph algorithms". Thesis, University of Glasgow, 1996. http://theses.gla.ac.uk/1629/.

Full text
Abstract (summary):
This thesis is an investigation of graph algorithms in the non-strict purely functional language Haskell. Emphasis is placed on the importance of achieving an asymptotic complexity as good as with conventional languages. This is achieved by using the monadic model for including actions on the state. Work on the monadic model was carried out at Glasgow University by Wadler, Peyton Jones, and Launchbury in the early nineties and has opened up many diverse application areas. One area is the ability to express data structures that require sharing. Although graphs are not presented in this style, data structures that graph algorithms use are expressed in this style. Several examples of stateful algorithms are given including union/find for disjoint sets, and the linear time sort binsort. The graph algorithms presented are not new, but are traditional algorithms recast in a functional setting. Examples include strongly connected components, biconnected components, Kruskal's minimum cost spanning tree, and Dijkstra's shortest paths. The presentation is lucid giving more insight than usual. The functional setting allows for complete calculational style correctness proofs - which is demonstrated with many examples. The benefits of using a functional language for expressing graph algorithms are quantified by looking at the issues of execution times, asymptotic complexity, correctness, and clarity, in comparison with traditional approaches. The intention is to be as objective as possible, pointing out both the weaknesses and the strengths of using a functional language.
APA, Harvard, Vancouver, ISO, and other styles
17

Truong, Ngoc Cuong. "Algorithms for appliance usage prediction". Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/367540/.

Full text
Abstract (summary):
Demand-Side Management (DSM) is one of the key elements of future Smart Electricity Grids. DSM involves mechanisms to reduce or shift the consumption of electricity in an attempt to minimise peaks. By so doing it is possible to avoid using expensive peaking plants that are also highly carbon emitting. A key challenge in DSM, however, is the need to predict energy usage from specific home appliances accurately so that consumers can be notified to shift or reduce the use of high energy-consuming appliances. In some cases, such notifications may also need to be given at very short notice. Hence, to solve the appliance usage prediction problem, in this thesis we develop novel algorithms that take into account both users' daily practices (by taking advantage of the cyclic nature of routine activities) and the inter-dependency between the usage of multiple appliances (i.e., the user's typical consumption patterns). We propose two prediction algorithms to satisfy the needs for fast prediction and high accuracy respectively: i) a rule-based approach, EGH-H, for scenarios in which notifications need to be given at short notice, to find significant patterns in the use of appliances that can capture the user's behaviour (or habits), ii) a graphical-model based approach, GM-PMA (Graphical Model for Prediction in Multiple Appliances), for scenarios that require high prediction accuracy. We demonstrate through extensive empirical evaluations on real-world data from a prominent database of home energy usage that GM-PMA outperforms existing methods by up to 41%, and the runtime of EGH-H is 100 times lower on average than that of other benchmark algorithms, while maintaining competitive prediction accuracy. Moreover, we demonstrate the use of appliance usage prediction algorithms in the context of demand-side management by proposing an Intelligent Demand Responses (IDR) mechanism, where an agent uses Logistic Inference to learn the user's preferences, and hence provides the best personalised suggestions to the user. We use simulations to evaluate IDR on a number of user types, and show that, by using IDR, users are likely to improve their savings significantly.
APA, Harvard, Vancouver, ISO, and other styles
18

Eriksson, Daniel. "Algorithmic Design of Graphical Resources for Games Using Genetic Algorithms". Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139332.

Full text
Abstract (summary):
Producing many varying instances of the same type of graphical resource for games, such as trees or foliage, can be of interest. But when randomly generating graphical resources, you can often end up with many similar-looking results, or results that don't look like what they are meant to look like. This work investigates whether genetic algorithms can be applied to produce more varied results when generating graphical resources, by basing the fitness of each individual in each genetic generation on how similar the graphical resource is to previously generated resources. From the limited experiments that were performed, this work concludes that while it seems possible that the use of genetic algorithms might be able to produce visually different graphical resources, Blender currently doesn't seem to be able to produce enough results in a reasonable time frame for this to be usable on a large scale.
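
A toy sketch of the idea described above, assuming a genome of numeric parameters: fitness rewards distance from an archive of previously generated resources, so the GA is pushed toward variety rather than toward a fixed target. The operators and constants are illustrative only and are not taken from the thesis or from Blender.

```python
import random

def evolve_varied(archive, pop_size=30, genes=8, generations=40, seed=0):
    """Genetic algorithm whose fitness is the distance to previously generated
    parameter vectors (the archive), encouraging visually different results."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    fitness = lambda ind: min(dist(ind, old) for old in archive) if archive else 0.0

    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)                   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                          # mutation
                child[rng.randrange(genes)] = rng.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    previously_generated = [[0.5] * 8, [0.2] * 8]           # stand-ins for earlier assets
    print(evolve_varied(previously_generated))
```
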
APA, Harvard, Vancouver, ISO, and other styles
19

Themelis, Andreas. "Proximal algorithms for structured nonconvex optimization". Thesis, IMT Alti Studi Lucca, 2018. http://e-theses.imtlucca.it/262/1/Themelis_phdthesis.pdf.

Full text
Abstract (summary):
Due to their simplicity and versatility, splitting algorithms are often the methods of choice for many optimization problems arising in engineering. “Splitting” complex problems into simpler subtasks, their complexity scales well with problem size, making them particularly suitable for large-scale applications where other popular methods such as IP or SQP cannot be employed. There are, however, two major downsides: 1) there is no satisfactory theory in support of their employment for nonconvex problems, and 2) their efficacy is severely affected by ill conditioning. Many attempts have been made to overcome these issues, but only incomplete or case-specific theories have been established, and some enhancements have been proposed which however either fail to preserve the simplicity of the original algorithms, or can only offer local convergence guarantees. This thesis aims at overcoming these downsides. First, we provide novel tight convergence results for the popular DRS and ADMM schemes for nonconvex problems, through an elegant unified framework reminiscent of Lyapunov stability theory. “Proximal envelopes”, whose analysis is here extended to nonconvex problems, prove to be the suitable Lyapunov functions. Furthermore, based on these results we develop enhancements of splitting algorithms, the first that 1) preserve complexity and convergence properties, 2) are suitable for nonconvex problems, and 3) achieve asymptotic superlinear rates.
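
To show the splitting structure the abstract refers to, here is the basic Douglas-Rachford iteration on a convex toy problem (least squares plus an l1 term). The thesis's nonconvex analysis and envelope-based enhancements are not reflected here, and the function names, step size and iteration count are my assumptions.

```python
import numpy as np

def drs_lasso(A, b, lam=0.1, step=1.0, iters=200):
    """Douglas-Rachford splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    The prox of the quadratic term is a linear solve; the prox of the l1 term is
    soft-thresholding; z is the driving sequence of the splitting."""
    n = A.shape[1]
    H = A.T @ A + np.eye(n) / step                      # matrix for prox of the quadratic
    Atb = A.T @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    z = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(H, Atb + z / step)          # x = prox_{step*f}(z)
        y = soft(2 * x - z, step * lam)                 # y = prox_{step*g}(2x - z)
        z = z + y - x                                   # splitting update
    return y                                            # x and y coincide at a fixed point

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 10))
    x_true = np.zeros(10); x_true[[1, 4]] = [2.0, -3.0]
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    print(np.round(drs_lasso(A, b), 3))                 # sparse vector close to x_true
```
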
APA, Harvard, Vancouver, ISO, and other styles
20

Tyler, J. E. M. "Speech recognition by computer : algorithms and architectures". Thesis, University of Greenwich, 1988. http://gala.gre.ac.uk/8707/.

Full text
Abstract (summary):
This work is concerned with the investigation of algorithms and architectures for computer recognition of human speech. Three speech recognition algorithms have been implemented, using (a) Walsh Analysis, (b) Fourier Analysis and (c) Linear Predictive Coding. The Fourier Analysis algorithm made use of the Prime-number Fourier Transform technique. The Linear Predictive Coding algorithm made use of LeRoux and Gueguen's method for calculating the coefficients. The system was organised so that the speech samples could be input to a PC/XT microcomputer in a typical office environment. The PC/XT was linked via Ethernet to a Sun 2/180s computer system which allowed the data to be stored on a Winchester disk so that the data used for testing each algorithm was identical. The recognition algorithms were implemented entirely in Pascal, to allow evaluation to take place on several different machines. The effectiveness of the algorithms was tested with a group of five naive speakers, results being in the form of recognition scores. The results showed the superiority of the Linear Predictive Coding algorithm, which achieved a mean recognition score of 93.3%. The software was implemented on three different computer systems. These were an 8-bit microprocessor, a sixteen-bit microcomputer based on the IBM PC/XT, and a Motorola 68020 based Sun Workstation. The effectiveness of the implementations was measured in terms of speed of execution of the recognition software. By limiting the vocabulary to ten words, it has been shown that it would be possible to achieve recognition of isolated utterances in real time using a single 68020 microprocessor. The definition of real time in this context is understood to mean that the recognition task will on average, be completed within the duration of the utterance, for all the utterances in the recogniser's vocabulary. A speech recogniser architecture is proposed which would achieve real time speech recognition without any limitation being placed upon (a) the order of the transform, and (b) the size of the recogniser's vocabulary. This is achieved by utilising a pipeline of four processors, with the pattern matching process performed in parallel on groups of words in the vocabulary.
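
The thesis computes LPC coefficients with LeRoux and Gueguen's routine; the sketch below instead uses the closely related Levinson-Durbin recursion on the frame autocorrelation, a deliberate substitution to show what the analysis step produces. The window, sampling rate and model order are arbitrary choices of mine.

```python
import numpy as np

def lpc(frame, order=10):
    """Linear Predictive Coding coefficients via autocorrelation + Levinson-Durbin.
    Returns predictor coefficients a[1..order] for s[n] ~ sum_k a[k] * s[n-k]."""
    frame = np.asarray(frame, float) * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][: order + 1]
    a = np.zeros(order + 1)
    err = r[0]
    for i in range(1, order + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / err     # reflection coefficient
        a[1:i] = a[1:i] - k * a[i - 1:0:-1]                  # update previous coefficients
        a[i] = k
        err *= 1.0 - k * k                                   # remaining prediction error
    return a[1:]

if __name__ == "__main__":
    t = np.arange(400) / 8000.0                              # 50 ms frame at 8 kHz
    frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1800 * t)
    print(np.round(lpc(frame, order=8), 3))
```
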
APA, Harvard, Vancouver, ISO, and other styles
21

Shoker, Leor. "Signal processing algorithms for brain computer interfacing". Thesis, Cardiff University, 2006. http://orca.cf.ac.uk/56097/.

Full text
Abstract (summary):
A brain computer interface (BCI) allows the user to communicate with a computer using only brain signals. In this way, the conventional neural pathways of peripheral nerves and muscles are bypassed, thereby enabling control of a computer by a person with no motor control. The brain signals, known as electroencephalographs (EEGs), are recorded by electrodes placed on the surface of the scalp. A requirement for a successful BCI is that interfering artifacts are removed from the EEGs, so that the important cognitive information is revealed. Two systems based on second order blind source separation (BSS) are therefore proposed. The first system is based on developing a gradient based BSS algorithm, within which a constraint is incorporated such that the effect of eye blinking artifacts is mitigated in the constituent independent components (ICs). The second method is based on reconstructing the EEGs such that the effect of eye blinking artifacts is removed. The EEGs are separated using an unconstrained BSS algorithm, based on the principles of second order blind identification. Certain characteristics describing eye blinking artifacts are used to identify the related ICs. Then the remaining ICs are used to reconstruct the artifact-free EEGs. Both methods yield significantly better results than standard techniques. The degree to which the artifacts are removed is shown and compared with standard methods, both subjectively and objectively. The proposed BCI systems are based on extracting the sources related to finger movement and tracking the movement of the corresponding signal sources. The first proposed system explicitly localises the sources over successive temporal windows of ICs using the least squares (LS) method and characterises the trajectories of the sources. A constrained BSS algorithm is then developed to separate the EEGs while mitigating the eye blinking artifacts. Another approach is based on inferring causal relationships between electrode signals. Directed transfer functions (DTFs) are also applied to short temporal windows of EEGs, from which a time-frequency map of causality is constructed. Additionally, the distribution of beta band power for the IC related to finger movement is combined with the DTF approach to form part of a robust classification system. Finally, a new modality for BCI is introduced based on space-time-frequency masking. Here the sources are assumed to be disjoint in space, time and frequency. The method is based on multi-way analysis of the EEGs and extraction of components related to finger movements. The components are localised in space-time-frequency and compared with the original EEGs in order to quantify the motion of the extracted component.
APA, Harvard, Vancouver, ISO, and other styles
22

RICCA, MARCO. "Energy aware control algorithms for computer networks". Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2497193.

Full text
Abstract (summary):
The main motivation of this work is to investigate techniques to reduce the power consumption inside a network element; it is enough to consider the high energy demand associated with the telecommunication networks field. As a practical consequence, power consumption has become a relevant parameter and represents a critical constraint for network designers, considering both the whole network infrastructure and network elements such as switches, routers and servers. The PhD focused mainly on two research areas of interest. The first was the power consumption inside the switching fabric of a high-speed router. The target was to analyze the effect of the dynamic power inside a switching fabric, to evaluate a set of optimization strategies in order to minimize the power consumption, and to achieve the best trade-off between power, high performance and packet delays; the crossbar was used as the reference switching architecture for this study. Looking at the consumption side, generally speaking, it is possible to define two families of switching fabrics: 1) bit-rate independent switching fabrics, in which the consumption does not depend on the number of transported bits (typical of optical switching fabrics); 2) bit-rate dependent switching fabrics, where the total consumption is proportional to the data transmission bit-rate (typical of electronic switching fabrics). The second research activity was carried out at the Alcatel-Lucent Bell Laboratories, based in New Jersey (USA), over a period of 9 months: a study of the power consumption across several network elements that are commercially available for the "corporate" market. Starting from a large set of power measurements collected over these network elements, we were able to develop a linear mathematical model to describe the power consumption of a generic network element.
APA, Harvard, Vancouver, ISO, and other styles
23

PUTZU, LORENZO. "Computer aided diagnosis algorithms for digital microscopy". Doctoral thesis, Università degli Studi di Cagliari, 2016. http://hdl.handle.net/11584/266877.

Full text
Abstract (summary):
Automatic analysis and information extraction from an image is still a highly challenging research problem in the computer vision area, attempting to describe the image content with computational and mathematical techniques. Moreover, the information extracted from the image should be meaningful and as discriminatory as possible, since it will be used to categorize its content according to the analysed problem. In the medical imaging domain this issue is even more felt, because many important decisions that affect patient care depend on the usefulness of the information extracted from the image. Managing medical images is even more complicated, not only due to the importance of the problem, but also because it needs a fair amount of prior medical knowledge to be able to represent with data the visual information to which pathologists refer. Today, medical decisions that impact patient care rely on the results of laboratory tests to a greater extent than ever before, due to the marked expansion in the number and complexity of offered tests. These developments promise to improve the care of patients, but the more the number and complexity of the tests increase, the more the possibility of misapplying and misinterpreting the tests themselves increases, leading to inappropriate diagnoses and therapies. Moreover, with the increased number of tests, the amount of data to be analysed also increases, forcing pathologists to devote much time to the analysis of the tests themselves rather than to patient care and the prescription of the right therapy, especially considering that most of the tests performed are just check-up tests and most of the analysed samples come from healthy patients. A quantitative evaluation of medical images is therefore essential to overcome uncertainty and subjectivity, but also to greatly reduce the amount of data and the time needed for the analysis. In the last few years, many computer-assisted diagnosis systems have been developed, attempting to mimic pathologists by extracting features from the images. Image analysis involves complex algorithms to identify and characterize cells or tissues using image pattern recognition technology. This thesis addresses the main problems associated with digital microscopy analysis in histology and haematology diagnosis, with the development of algorithms for the extraction of useful information from different digital images that are able to distinguish different biological structures in the images themselves. The proposed methods not only aim to improve the degree of accuracy of the analysis and to reduce time, if used as the only means of diagnosis, but they can also be used as intermediate tools for skimming the number of samples to be analysed directly by the pathologist, or as double-check systems to verify the correctness of the results of the automated facilities used today.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhou, Tianyang 1980. "Modified LLL algorithms". Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99356.

Full text
Abstract (summary):
Lattice basis reduction arises from many applications, such as cryptography, communications, GPS and so on. This thesis is concerned with the widely used LLL reduction. We cast it as a QRZ matrix factorization for real bases. Based on the matrix factorization, we first give the real version of the LLL algorithm (the original LLL algorithm is for integer bases). Then we propose three modified algorithms to improve the computational efficiency, while the reduced matrices satisfy the LLL-reduced criteria. The first modified algorithm, to be referred to as MLLLPIVOT, uses a block pivoting strategy. The second one, to be called MLLLINSERT, uses a greedy insertion strategy. The last one, to be called MLLLLAZY, uses a "lazy" size-reduction strategy. Extensive simulation results are given to show the improvements and the different performance of the three algorithms. In addition, numerical stability of the LLL algorithm and the three modified algorithms is considered. The simulations indicate that on average the computational efficiency (measured by CPU time) of the four algorithms has the increasing order: LLL < MLLLPIVOT < MLLLINSERT < MLLLLAZY, and the four algorithms are backward numerically stable in most cases; in some extreme cases, the numerical stability of these algorithms is in the opposite order. Furthermore, we also give the complexity analysis of LLL, MLLLINSERT and MLLLLAZY under the assumption of using exact arithmetic.
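
For reference, this is a sketch of the textbook LLL reduction (Gram-Schmidt recomputed at every step, delta = 0.75) that the thesis's MLLL variants modify; it is the naive formulation, not the QRZ-factorization-based, pivoting, insertion or lazy versions studied in the thesis.

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook LLL lattice basis reduction. `basis` holds the basis vectors as rows."""
    B = np.array(basis, dtype=float)
    n = B.shape[0]

    def gram_schmidt(B):
        Bstar = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Bstar[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Bstar[j] / (Bstar[j] @ Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu

    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                    # size reduction of b_k
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
        Bstar, mu = gram_schmidt(B)
        if Bstar[k] @ Bstar[k] >= (delta - mu[k, k - 1] ** 2) * (Bstar[k - 1] @ Bstar[k - 1]):
            k += 1                                        # Lovasz condition holds
        else:
            B[[k, k - 1]] = B[[k - 1, k]]                 # swap and step back
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

if __name__ == "__main__":
    print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # short, nearly orthogonal basis
```

Recomputing the full Gram-Schmidt decomposition after every change keeps the sketch short but is exactly the kind of cost the thesis's modified algorithms aim to reduce.
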
APA, Harvard, Vancouver, ISO, and other styles
25

Schuilenburg, Alexander Marius. "Parallelisation of algorithms". Master's thesis, University of Cape Town, 1990. http://hdl.handle.net/11427/22211.

Full text
Abstract (summary):
Most numerical software involves performing an extremely large volume of algebraic computations. This is both costly and time consuming in respect of computer resources and, for large problems, often super-computer power is required in order for results to be obtained in a reasonable amount of time. One method whereby both the cost and time can be reduced is to use the principle "Many hands make light work", or rather, allow several computers to operate simultaneously on the code, working towards a common goal, and hopefully obtaining the required results in a fraction of the time and cost normally used. This can be achieved through the modification of the costly, time consuming code, breaking it up into separate individual code segments which may be executed concurrently on different processors. This is termed parallelisation of code. This document describes communication between sequential processes, protocols, message routing and parallelisation of algorithms. In particular, it deals with these aspects with reference to the Transputer as developed by INMOS and includes two parallelisation examples, namely parallelisation of code to study airflow and of code to determine far field patterns of antennas. This document also reports on the practical experiences with programming in parallel.
APA, Harvard, Vancouver, ISO, and other styles
26

Karunarathne, Lalith. "Network coding via evolutionary algorithms". Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/57047/.

Full text
Abstract (summary):
Network coding (NC) is a relatively recent novel technique that generalises network operation beyond traditional store-and-forward routing, allowing intermediate nodes to combine independent data streams linearly. The rapid integration of bandwidth-hungry applications such as video conferencing and HDTV means that NC is a decisive future network technology. NC is gaining popularity since it offers significant benefits, such as throughput gain, robustness, adaptability and resilience. However, it does this at a potential complexity cost in terms of both operational complexity and set-up complexity. This is particularly true of network code construction. Most NC problems related to these complexities are classified as non deterministic polynomial hard (NP-hard) and an evolutionary approach is essential to solve them in polynomial time. This research concentrates on the multicast scenario, particularly: (a) network code construction with optimum network and coding resources; (b) optimising network coding resources; (c) optimising network security with a cost criterion (to combat the unintentionally introduced Byzantine modification security issue). The proposed solution identifies minimal configurations for the source to deliver its multicast traffic whilst allowing intermediate nodes only to perform forwarding and coding. In the method, a preliminary process first provides unevaluated individuals to a search space that it creates using two generic algorithms (augmenting path and linear disjoint path). An initial population is then formed by randomly picking individuals in the search space. Finally, the Multi-objective Genetic algorithm (MOGA) and Vector evaluated Genetic algorithm (VEGA) approaches search the population to identify minimal configurations. Genetic operators (crossover, mutation) contribute to include optimum features (e.g. lower cost, lower coding resources) into feasible minimal configurations. A fitness assignment and individual evaluation process is performed to identify the feasible minimal configurations. Simulations performed on randomly generated acyclic networks are used to quantify the performance of MOGA and VEGA.
APA, Harvard, Vancouver, ISO, and other styles
27

Elabed, Jamal. "Implementing parallel sorting algorithms". Virtual Press, 1989. http://liblink.bsu.edu/uhtbin/catkey/543997.

Full text
Abstract (summary):
Recently, much work has been devoted to developing algorithms for parallel processors. Parallel algorithms have received a great deal of attention because of the advances in computer hardware technology. These parallel processors and algorithms have been used to improve computational speed, especially in the areas of sorting, evaluation of polynomials, arithmetic expressions, and matrix and graphic problems. Sorting is an important operation in business and computer engineering applications. The literature contains many sorting algorithms, both sequential and parallel, which have been developed and used in practical applications, such as bubble sort, quick sort, insertion sort, enumeration sort, bucket sort and odd-even transposition sort. Ada, an excellent new programming language that offers high-level concurrent processing facilities called tasks, is used in this thesis to introduce, implement, compare and evaluate some of the parallel sorting algorithms. This thesis also shows that parallel sorting algorithms reduce the time required to perform the sorting tasks.
Department of Computer Science
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.

Testo completo
Abstract (sommario):
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Mavris, Dimitri; Committee Member: Beeson, Don; Committee Member: Duncan, Scott; Committee Member: German, Brian; Committee Member: Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Abdul, Karim Mohamad Sharis. "Computer-aided aesthetics in evolutionary computer aided design". Thesis, Loughborough University, 2004. https://dspace.lboro.ac.uk/2134/27913.

Testo completo
Abstract (sommario):
This thesis presents research into the possibility of developing a computerised system that can evaluate the aesthetic and engineering aspects of solid shapes. A further aim is to include such an evaluation system in an existing evolutionary CAD system that utilizes Genetic Algorithm (GA) technology. An extensive literature survey was carried out to better understand and clarify the vagueness and subjectivity of the concept of aesthetics, which led to the work of defining and quantifying a set of aesthetic parameters. The novelty of this research lies in assisting designers in evaluating the aesthetic and functional aspects of designs early in the conceptual design stage, and in integrating this evaluation into an evolutionary CAD system. The field of Computer Aided Design (CAD) lacks the aesthetic aspect of design, which is crucial in evaluating designs, especially considering the trend towards virtual prototypes replacing physical prototypes. This research suggests, defines and quantifies a set of aesthetic and functional elements or parameters, which form the basis of solid shape evaluation. This helps designers determine the fulfilment of design targets, and gives them full control over the priority of each evaluation element in the developed system. To achieve this, computer software including a programming language package and CAD software was used, which eventually led to the development of a prototype system called Computer Aided Aesthetics and Functions Evaluation (CAAFE). An evolutionary CAD system called Evolutionary Form Design (EFD), which utilizes GAs, has been available for a few years now. It evolves shapes to give quick and creative suggestions; however, it lacks automated evaluation and the aesthetic aspects of design. This research integrated CAAFE into EFD, leading to a system that can evolve objects based on selected and weighted aesthetic and functional elements. Finally, user surveys are also presented in this thesis to suggest improvements to the scoring system within CAAFE.
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Yang, Meng. "Algorithms in computer-aided design of VLSI circuits". Thesis, Edinburgh Napier University, 2006. http://researchrepository.napier.ac.uk/Output/6493.

Testo completo
Abstract (sommario):
With the increased complexity of Very Large Scale Integrated (VLSI) circuits, Computer Aided Design (CAD) plays an even more important role. Top-down design methodology and layout of VLSI are reviewed, and previously published algorithms in CAD of VLSI design are outlined. In certain applications, Reed-Muller (RM) forms, when implemented with AND/XOR or OR/XNOR logic, have shown some attractive advantages over the standard Boolean logic based on AND/OR logic. The RM forms implemented with OR/XNOR logic, known as Dual Forms of Reed-Muller (DFRM), are the dual of traditional RM implemented with AND/XOR. Map folding and transformation techniques are presented for the conversion between standard Boolean and DFRM expansions of any polarity. Bidirectional multi-segment computer-based conversion algorithms are also proposed for large functions, based on the concept of Boolean polarity for canonical product-of-sums Boolean functions. Furthermore, another two tabular conversion algorithms, serial and parallel tabular techniques, are presented for the conversion of large functions between standard Boolean and DFRM expansions of any polarity. The algorithms were tested on examples of up to 25 variables using the MCNC and IWLS'93 benchmarks. Any n-variable Boolean function can be expressed by a Fixed Polarity Reed-Muller (FPRM) form. In order to obtain a compact Multi-level MPRM (MMPRM) expansion, a method called the on-set table method is developed. The method derives MMPRM expansions directly from FPRM expansions. If all polarities of FPRM expansions are searched, the MMPRM expansions with the least number of literals can be obtained. As a result, it is possible to find the best polarity expansion among the 2^n FPRM expansions instead of searching the 2^(n·2^(n-1)) MPRM expansions within reasonable time for large functions. Furthermore, it uses on-set coefficients only and hence reduces memory usage dramatically. Currently, XOR and XNOR gates can be implemented in the Look-Up Tables (LUTs) of Field Programmable Gate Arrays (FPGAs). However, FPGA placement is categorised as NP-complete, so efficient placement algorithms are very important to CAD design tools. Two algorithms, based on a Genetic Algorithm (GA) and on a GA with Simulated Annealing (SA), are presented for the placement of symmetrical FPGAs. Both algorithms achieve results comparable to those obtained by the Versatile Placement and Routing (VPR) tools in terms of the number of routing channel tracks.
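To give a flavour of the Boolean-to-Reed-Muller conversions discussed here, the sketch below computes the classical positive-polarity (AND/XOR) Reed-Muller coefficients of a function from its truth table via the standard butterfly transform; it is a textbook illustration, not the thesis's tabular or DFRM algorithms:

```python
def reed_muller_coefficients(truth_table):
    """Positive-polarity Reed-Muller (AND/XOR) coefficients from a truth table.

    truth_table[i] is f evaluated on the input whose bits are the binary
    expansion of i.  The in-place butterfly is the binary Moebius transform.
    """
    c = list(truth_table)
    n = len(c).bit_length() - 1
    step = 1
    for _ in range(n):
        for i in range(0, len(c), 2 * step):
            for j in range(i, i + step):
                c[j + step] ^= c[j]
        step *= 2
    return c

# f(x2, x1, x0) = x0 XOR (x1 AND x2), truth table indexed by (x2 x1 x0)
tt = [(i & 1) ^ (((i >> 1) & 1) & ((i >> 2) & 1)) for i in range(8)]
print(reed_muller_coefficients(tt))   # coefficient 1 at index 1 (x0) and index 6 (x1*x2)
```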
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Nikolova, Evdokia Velinova. "Strategic algorithms". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54673.

Testo completo
Abstract (sommario):
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 193-201).
Classical algorithms from theoretical computer science arise time and again in practice. However, practical situations typically do not fit precisely into the traditional theoretical models. Additional necessary components are, for example, uncertainty and economic incentives. Therefore, modern algorithm design is calling for more interdisciplinary approaches, as well as for deeper theoretical understanding, so that the algorithms can apply to more realistic settings and complex systems. Consider, for instance, the classical shortest path algorithm, which, given a graph with specified edge weights, seeks the path minimizing the total weight from a source to a destination. In practice, the edge weights are often uncertain and it is not even clear what we mean by shortest path anymore: is it the path that minimizes the expected weight? Or its variance, or some other metric? With a risk-averse objective function that takes into account both mean and standard deviation, we run into nonconvex optimization challenges that require new theory beyond classical shortest path algorithm design. Yet another shortest path application, routing of packets in the Internet, needs to further incorporate economic incentives to reflect the various business relationships among the Internet Service Providers that affect the choice of packet routes. Strategic Algorithms are algorithms that integrate optimization, uncertainty and economic modeling into algorithm design, with the goal of bringing about new theoretical developments and solving practical applications arising in complex computational-economic systems.
(cont.) In short, this thesis contributes new algorithms and their underlying theory at the interface of optimization, uncertainty and economics. Although the interplay of these disciplines is present in various forms in our work, for the sake of presentation we have divided the material into three categories: 1. In Part I we investigate algorithms at the intersection of Optimization and Uncertainty. The key conceptual contribution in this part is discovering a novel connection between stochastic and nonconvex optimization. Traditional algorithm design has not taken into account the risk inherent in stochastic optimization problems. We consider natural objectives that incorporate risk, which turn out to be equivalent to certain nonconvex problems from the realm of continuous optimization. As a result, our work advances the state of the art in both stochastic and nonconvex optimization, presenting new complexity results and proposing general-purpose efficient approximation algorithms, some of which have shown promising practical performance and have been implemented in a real traffic prediction and navigation system. 2. Part II proposes new algorithm and mechanism design at the intersection of Uncertainty and Economics. In Part I we postulate that the random variables in our models come from given distributions. However, determining those distributions or their parameters is a challenging and fundamental problem in itself. A tool from Economics that has recently gained momentum for measuring the probability distribution of a random variable is an information or prediction market. Such markets, most popularly known for predicting the outcomes of political elections or other events of interest, have shown remarkable accuracy in practice, though at the same time they have left open the theoretical and strategic analysis of current implementations, as well as the need for new and improved designs which handle more complex outcome spaces (probability distribution functions) as opposed to binary or n-ary valued distributions. The contributions of this part include a unified strategic analysis of different prediction market designs that have been implemented in practice.
(cont.) We also offer new market designs for handling exponentially large outcome spaces stemming from ranking or permutation-type outcomes, together with algorithmic and complexity analysis. 3. In Part III we consider the interplay of optimization and economics in the context of network routing. This part is motivated by the network of autonomous systems in the Internet where each portion of the network is controlled by an Internet service provider, namely by a self-interested economic agent. The business incentives do not exist merely in addition to the computer protocols governing the network. Although they are not currently integrated in those protocols and are decided largely via private contracting and negotiations, these economic considerations are a principal factor that determines how packets are routed. And vice versa, the demand and flow of network traffic fundamentally affect provider contracts and prices. The contributions of this part are the design and analysis of economic mechanisms for network routing. The mechanisms are based on first- and second-price auctions (the so-called Vickrey-Clarke-Groves, or VCG mechanisms). We first analyze the equilibria and prices resulting from these mechanisms. We then investigate the compatibility of the better understood VCG-mechanisms with the current inter-domain routing protocols, and we demonstrate the critical importance of correct modeling and how it affects the complexity and algorithms necessary to implement the economic mechanisms.
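The mean-plus-standard-deviation objective from Part I can be made concrete with a tiny sketch. The graph, its delay statistics and the trade-off parameter below are made up; the point is only that this objective is not additive along a path (under an independence assumption for edge delays), so plain shortest-path routines no longer apply and one must compare whole paths:

```python
from math import sqrt

# Tiny illustrative graph: edge -> (mean delay, variance of delay).
EDGES = {
    ("s", "a"): (4.0, 0.1), ("a", "t"): (4.0, 0.1),
    ("s", "b"): (3.0, 9.0), ("b", "t"): (3.0, 9.0),
}

def neighbours(u):
    return [v for (x, v) in EDGES if x == u]

def simple_paths(u, t, seen=()):
    """Enumerate all simple paths from u to t (fine for toy-sized graphs)."""
    if u == t:
        yield [u]
        return
    for v in neighbours(u):
        if v not in seen:
            for rest in simple_paths(v, t, seen + (u,)):
                yield [u] + rest

def risk_averse_cost(path, lam=1.0):
    mean = sum(EDGES[(u, v)][0] for u, v in zip(path, path[1:]))
    var = sum(EDGES[(u, v)][1] for u, v in zip(path, path[1:]))
    return mean + lam * sqrt(var)   # assumes independent edge delays

best = min(simple_paths("s", "t"), key=risk_averse_cost)
print(best, round(risk_averse_cost(best), 2))   # picks the higher-mean but low-variance route
```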
by Evdokia Velinova Nikolova.
Ph.D.
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Rahwan, Talal. "Algorithms for coalition formation in multi-agent systems". Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/49525/.

Testo completo
Abstract (sommario):
Coalition formation is a fundamental form of interaction that allows the creation of coherent groupings of distinct, autonomous agents in order to efficiently achieve their individual or collective goals. Forming effective coalitions is a major research challenge in the field of multi-agent systems. Central to this endeavour is the problem of determining which of the possible coalitions to form in order to achieve some goal. This usually requires calculating a value for every possible coalition, known as the coalition value, which indicates how beneficial that coalition would be if it was formed. Since the number of possible coalitions grows exponentially with the number of agents involved, instead of having a single agent calculate all these values, it is more efficient to distribute this calculation among all agents, thus exploiting all the computational resources available to the system and preventing the existence of a single point of failure. Against this background, we develop a novel algorithm for distributing the value calculation among the cooperative agents. Specifically, by using our algorithm, each agent is assigned some part of the calculation such that the agents' shares are exhaustive and disjoint. Moreover, the algorithm is decentralized, requires no communication between the agents, has minimal memory requirements, and can reflect variations in the computational speeds of the agents. To evaluate the effectiveness of our algorithm we compare it with the only other algorithm available in the literature for distributing the coalitional value calculations (due to Shehory and Kraus). This shows that for the case of 25 agents, the distribution process of our algorithm took less than 0.02% of the time, the values were calculated using 0.000006% of the memory, the calculation redundancy was reduced from 383229848 to 0, and the total number of bytes sent between the agents dropped from 1146989648 to 0. Note that for larger numbers of agents, these improvements become exponentially better. Once the coalitional values are calculated, the agents usually need to find a combination of coalitions in which every agent belongs to exactly one coalition, and by which the overall outcome of the system is maximized. This problem, widely known as the coalition structure generation problem, is extremely challenging because the number of possible combinations grows very quickly as the number of agents increases, making it impossible to go through the entire search space, even for small numbers of agents. Given this, many algorithms have been proposed to solve this problem using different techniques, ranging from dynamic programming to integer programming to stochastic search, all of which suffer from major limitations relating to execution time, solution quality, and memory requirements. With this in mind, we develop a novel anytime algorithm for solving the coalition structure generation problem. Specifically, the algorithm generates solutions by partitioning the space of all potential coalition structures into sub-spaces containing coalition structures that are similar, according to some criterion, such that these sub-spaces can be pruned by identifying their bounds. Using this representation, the algorithm can then search through the selected sub-space(s) very efficiently using a branch-and-bound technique.
We empirically show that we are able to find optimal solutions in 0.082% of the time required by the fastest available algorithm in the literature (for 27 agents), using only 33% of the memory required by that algorithm. Moreover, our algorithm is the first to be able to solve the coalition structure generation problem for more than 27 agents in reasonable time (less than 90 minutes for 30 agents, as opposed to around 2 months for the current state of the art). The algorithm is anytime, and if interrupted before it would normally have terminated, it can still provide a solution that is guaranteed to be within a bound of the optimal one. Moreover, the guarantees we provide on the quality of the solution are significantly better than those provided by the previous state-of-the-art algorithms designed for this purpose. For example, given 21 agents, and after only 0.0000002% of the search space has been searched, our algorithm usually guarantees that the solution quality is no worse than 91% of the optimal value, while previous algorithms only guarantee 9.52%. Moreover, our guarantee usually reaches 100% after 0.0000019% of the space has been searched, while the guarantee provided by other algorithms can never go beyond 50% until the whole space has been searched. Again, these improvements become exponentially better for larger numbers of agents.
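The disjoint-and-exhaustive, communication-free split of the value calculations can be illustrated with a much cruder balancing rule than the one developed in the thesis: assign coalitions (encoded as bitmasks) to agents round-robin. The characteristic function below is a placeholder:

```python
AGENTS = list(range(5))          # 5 agents -> 2**5 - 1 = 31 non-empty coalitions

def coalition_value(coalition):
    # Placeholder characteristic function; in a real system this is the
    # domain-specific value of the coalition.
    return len(coalition) ** 2

def my_share(agent_id, n_agents):
    """Coalitions whose value this agent computes.

    Every non-empty coalition mask is assigned to exactly one agent, so the
    shares are disjoint and exhaustive and no communication is needed.
    """
    for mask in range(1, 2 ** n_agents):
        if mask % n_agents == agent_id:
            yield tuple(i for i in range(n_agents) if mask >> i & 1)

values = {}
for agent in AGENTS:
    for coalition in my_share(agent, len(AGENTS)):
        values[coalition] = coalition_value(coalition)

print(len(values))   # 31: every coalition valued exactly once
```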
Gli stili APA, Harvard, Vancouver, ISO e altri
33

He, Dayu. "Algorithms for Graph Drawing Problems". Thesis, State University of New York at Buffalo, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10284151.

Testo completo
Abstract (sommario):

A graph G is called planar if it can be drawn on the plane such that no two distinct edges intersect each other except at common endpoints. Such a drawing is called a plane embedding of G. A plane graph is a graph with a fixed embedding. A straight-line drawing Γ of a graph G = (V, E) is a drawing in which each vertex of V is drawn as a distinct point on the plane and each edge of G is drawn as a line segment connecting its two end vertices. In this thesis, we study a set of planar graph drawing problems.

First, we consider the problem of monotone drawing: A path P in a straight-line drawing Γ is monotone if there exists a line l such that the orthogonal projections of the vertices of P on l appear along l in the order they appear in P. We call l a monotone line (or monotone direction) of P. Γ is called a monotone drawing of G if it contains at least one monotone path Puw between every pair of vertices u, w of G. Monotone drawings were recently introduced by Angelini et al.; they represent a new visualization paradigm and are also closely related to several other important graph drawing problems. As in many graph drawing problems, one of the main concerns of this research is to reduce the drawing size, which is the size of the smallest integer grid such that every graph in the graph class can be drawn in such a grid. We present two approaches for the problem of monotone drawings of trees. Our first approach shows that every n-vertex tree T admits a monotone drawing on a grid of size O(n^1.205) × O(n^1.205). Our second approach further reduces the drawing size to 12n × 12n, which is asymptotically optimal. Both drawings can be constructed in O(n) time.
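The definition of a monotone path translates into a small test: a path is monotone exactly when all its edge vectors fit in an open half-plane, i.e. when the largest angular gap between edge directions exceeds π. A minimal sketch, assuming the drawing is given by vertex coordinates:

```python
from math import atan2, pi

def is_monotone(points):
    """True iff the polyline through `points` is a monotone path.

    The path is monotone iff its edge vectors all lie in an open half-plane;
    any direction inside the complementary gap serves as the monotone line l.
    """
    angles = sorted(atan2(y2 - y1, x2 - x1)
                    for (x1, y1), (x2, y2) in zip(points, points[1:]))
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(angles[0] + 2 * pi - angles[-1])      # wrap-around gap
    return max(gaps) > pi

print(is_monotone([(0, 0), (2, 1), (3, 3), (5, 4)]))   # True
print(is_monotone([(0, 0), (2, 0), (2, 2), (0, 2)]))   # False (the path turns back)
```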

We also consider monotone drawings of 3-connected plane graphs. We prove that the classical Schnyder drawing of 3-connected plane graphs is a monotone drawing on an f × f grid, which can be constructed in O(n) time.

Second, we consider the problem of orthogonal drawing. An orthogonal drawing of a plane graph G is a planar drawing of G such that each vertex of G is drawn as a point on the plane, and each edge is drawn as a sequence of horizontal and vertical line segments with no crossings. Orthogonal drawing has attracted much attention due to its various applications in circuit schematics, relationship diagrams, data flow diagrams, etc. Rahman et al. gave a necessary and sufficient condition for a plane graph G of maximum degree 3 to have an orthogonal drawing without bends. An orthogonal drawing D(G) is orthogonally convex if all faces of D(G) are orthogonally convex polygons. Chang et al. gave a necessary and sufficient condition (which strengthens the conditions in the previous result) for a plane graph G of maximum degree 3 to have an orthogonally convex drawing without bends. We further strengthen these results: if G satisfies the same conditions as in the previous papers, it not only has an orthogonally convex drawing, but also a stronger star-shaped orthogonal drawing.

Gli stili APA, Harvard, Vancouver, ISO e altri
34

Zhu, Huanzhou. "Developing graph-based co-scheduling algorithms with GPU acceleration". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/92000/.

Testo completo
Abstract (sommario):
On-chip cache is often shared between processes that run concurrently on different cores of the same processor. Resource contention of this type causes performance degradation to the co-running processes. Contention-aware co-scheduling refers to the class of scheduling techniques used to reduce this performance degradation. Most existing contention-aware co-schedulers only consider serial jobs; however, both parallel and serial jobs often exist in computing systems. This thesis aims to tackle these issues. We start by modelling the problem of co-scheduling a mix of serial and parallel jobs as an Integer Programming (IP) problem. We then construct a co-scheduling graph to model the problem, and a set of algorithms is developed to find both optimal and near-optimal solutions. The results show that the proposed algorithms can find the optimal co-scheduling solution and that the proposed approximation technique is able to find near-optimal solutions. In order to improve the scalability of the algorithms, we use a GPU to accelerate the solving process. A graph processing framework, called WolfPath, is proposed in this thesis. By taking advantage of the co-scheduling graph, WolfPath achieves significant performance improvement. Due to the long preprocessing time of WolfPath, we also developed WolfGraph, a GPU-based graph processing framework that features minimal preprocessing time and uses the hard disk as a memory extension to solve large-scale graphs on a single machine equipped with a GPU device. Compared with existing GPU-based graph processing frameworks, WolfGraph achieves similar execution time but with minimal preprocessing time.
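As a toy illustration of contention-aware co-scheduling in its simplest special case (serial jobs paired onto dual-core machines; the thesis's IP and graph formulations and its handling of parallel jobs are not modelled), the sketch below brute-forces the pairing that minimises total degradation, using made-up degradation values:

```python
# degradation[i][j]: performance degradation when jobs i and j share a cache
# (symmetric, illustrative numbers only).
degradation = [
    [0, 3, 1, 4],
    [3, 0, 2, 2],
    [1, 2, 0, 5],
    [4, 2, 5, 0],
]

def pairings(jobs):
    """All ways of splitting the job list into pairs."""
    if not jobs:
        yield []
        return
    first, rest = jobs[0], jobs[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

best = min(pairings(list(range(4))),
           key=lambda p: sum(degradation[i][j] for i, j in p))
print(best)   # [(0, 2), (1, 3)] -> total degradation 3
```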
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Pieterse, Vreda. "Topic Maps for Specifying Algorithm Taxonomies : a case Study using Transitive Closure Algorithms". Thesis, University of Pretoria, 2016. http://hdl.handle.net/2263/59307.

Testo completo
Abstract (sommario):
The need for storing and retrieving knowledge about algorithms is addressed by creating a specialised information management scheme. This scheme is operationalised in terms of a topic map of algorithms. Metadata are specified for the adequate and precise description of algorithms. The specification describes both the data elements (called attributes) that are relevant to algorithms and the relationships of attributes to one another. In addition, a process is formalised for gathering data about algorithms and capturing it in the proposed topic map. The proposed process model and representation scheme are then illustrated by applying them to gather and represent information about transitive closure algorithms. To ensure that this thesis is self-contained, several themes about transitive closures are covered comprehensively. These include the mathematical domain-specific knowledge about transitive closures, methods for calculating the transitive closure of binary relations, and techniques that can be applied in transitive closure algorithms. The work presented in this thesis has a multidisciplinary character. It contributes to the domains of formal aspects, algorithms, mathematical sciences, information sciences and software engineering. It has a strong formal foundation: the confirmation of the correctness of algorithms, as well as reasoning about the complexity of algorithms, are key aspects of this thesis. The content of this thesis revolves around algorithms: their attributes, how they relate to one another, and how new versions of the algorithms may be discovered. The introduction of new mathematical concepts and notational elements, as well as new rigorous proofs contained in the thesis, extends the mathematical sciences domain. The main problem addressed in this thesis is an information management need. The technology used here to address the problem, namely topic maps, originated in the information science domain. It is applied in a new context that ultimately has the potential to lead to the automation of aspects of software implementation. This influences the traditional software engineering life cycle and the quality of software products.
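One of the classical methods for computing the transitive closure of a binary relation, Warshall's algorithm, can be sketched in a few lines; it is given purely as an illustration of the kind of algorithm catalogued in the topic map:

```python
def transitive_closure(relation, elements):
    """Warshall's algorithm: reachability closure of a binary relation,
    represented as a set of (x, y) pairs over `elements`."""
    closure = set(relation)
    for k in elements:
        for i in elements:
            for j in elements:
                if (i, k) in closure and (k, j) in closure:
                    closure.add((i, j))
    return closure

R = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(R, [1, 2, 3, 4])))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```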
Thesis (PhD)--University of Pretoria, 2016.
Computer Science
PhD
Unrestricted
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Lu, Xin. "Efficient algorithms for scalable video coding". Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/59744/.

Testo completo
Abstract (sommario):
A scalable video bitstream specifically designed for the needs of various client terminals, network conditions, and user demands is much desired in current and future video transmission and storage systems. The scalable extension of the H.264/AVC standard (SVC) has been developed to satisfy the new challenges posed by heterogeneous environments, as it permits a single video stream to be decoded fully or partially with variable quality, resolution, and frame rate in order to adapt to a specific application. This thesis presents novel improved algorithms for SVC, including: 1) a fast inter-frame and inter-layer coding mode selection algorithm based on motion activity; 2) a hierarchical fast mode selection algorithm; 3) a two-part Rate Distortion (RD) model targeting the properties of different prediction modes for the SVC rate control scheme; and 4) an optimised Mean Absolute Difference (MAD) prediction model. The proposed fast inter-frame and inter-layer mode selection algorithm is based on the empirical observation that a macroblock (MB) with slow movement is more likely to be best matched by one in the same resolution layer. However, for a macroblock with fast movement, motion estimation between layers is required. Simulation results show that the algorithm can reduce the encoding time by up to 40%, with negligible degradation in RD performance. The proposed hierarchical fast mode selection scheme comprises four levels and makes full use of inter-layer, temporal and spatial correlation as well as the texture information of each macroblock. Overall, the new technique demonstrates the same coding performance in terms of picture quality and compression ratio as that of the SVC standard, yet produces a saving in encoding time of up to 84%. Compared with state-of-the-art SVC fast mode selection algorithms, the proposed algorithm achieves a superior computational time reduction under very similar RD performance conditions. The existing SVC rate distortion model cannot accurately represent the RD properties of the prediction modes, because it is influenced by the use of inter-layer prediction. A separate RD model for inter-layer prediction coding in the enhancement layer(s) is therefore introduced. Overall, the proposed algorithms improve the average PSNR by up to 0.34dB or produce an average saving in bit rate of up to 7.78%. Furthermore, the control accuracy is maintained to within 0.07% on average. As a MAD prediction error always exists and cannot be avoided, an optimised MAD prediction model for the spatial enhancement layers is proposed that considers the MAD from previous temporal frames and previous spatial frames together, to achieve a more accurate MAD prediction. Simulation results indicate that the proposed MAD prediction model reduces the MAD prediction error by up to 79% compared with the JVT-W043 implementation.
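The motion-activity heuristic behind the fast mode selection can be caricatured as an early filter on the candidate prediction modes; the threshold and mode names below are illustrative assumptions, not the thesis's actual parameters:

```python
def candidate_prediction_modes(motion_vector, threshold=2.0):
    """Slow-moving macroblocks are matched within the same resolution layer;
    only fast-moving ones also trigger the more expensive inter-layer search."""
    mvx, mvy = motion_vector
    activity = (mvx * mvx + mvy * mvy) ** 0.5
    modes = ["INTRA", "INTER_SAME_LAYER"]
    if activity > threshold:
        modes.append("INTER_LAYER")
    return modes

print(candidate_prediction_modes((0.5, 0.3)))   # slow motion: same-layer search only
print(candidate_prediction_modes((4.0, 1.5)))   # fast motion: inter-layer search added
```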
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Malek, Fadi. "Polynomial zerofinding matrix algorithms". Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9980.

Testo completo
Abstract (sommario):
In linear algebra, the eigenvalues of a matrix are equivalently defined as the zeros of its characteristic polynomial. Determining the zeros of polynomials by computing the eigenvalues of a corresponding companion matrix turns the tables on the usual definition. In this dissertation, the work of Newbery has been expanded and a (complex) symmetric or nonsymmetric companion matrix associated with a given characteristic polynomial has been constructed. Schmeisser's technique for the construction of a tridiagonal companion matrix associated with a polynomial with real zeros has been generalized to polynomials with complex zeros. New matrix algorithms based on Schmeisser's and Fiedler's companion matrices are developed. The matrix algorithm based on Schmeisser's matrix uses no initial values and computes the simple and multiple zeros with high accuracy. The algorithms based on Fiedler's matrices are applied recursively, and require initial values as approximations to the true zeros of the polynomial. A few techniques concerning the choice of the required initial values are also presented. An important part of this thesis is the design of a new composite three-stage matrix algorithm for finding the real and complex zeros of polynomials. The composite algorithm reduces a polynomial with multiple zeros to another polynomial with simple zeros, which are then computed with high accuracy. The exact multiplicities of these zeros are then calculated by means of Lagouanelle's limiting formula. The QR algorithm has been used in all the algorithms to find the eigenvalues of the companion matrices. The effectiveness of these algorithms is illustrated by presenting numerical results based on polynomials taken from the literature and considered to be ill-conditioned, as well as random polynomials with randomly generated zeros in small and large clusters. Polynomials are represented and evaluated in quadruple precision, but it suffices to use a double precision QR algorithm in order to obtain almost double precision in the zeros of the polynomials.
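The underlying idea, finding polynomial zeros as eigenvalues of a companion matrix, can be sketched with NumPy. A generic Frobenius companion matrix and a general eigensolver stand in for the specialised Schmeisser/Fiedler constructions and the QR algorithm used in the thesis:

```python
import numpy as np

def polynomial_zeros(coeffs):
    """Zeros of a polynomial (coefficients highest degree first) via the
    eigenvalues of its Frobenius companion matrix."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                       # normalise to a monic polynomial
    n = len(c) - 1
    companion = np.zeros((n, n), dtype=complex)
    companion[1:, :-1] = np.eye(n - 1)
    companion[:, -1] = -c[:0:-1]       # last column: -a_0, -a_1, ..., -a_{n-1}
    return np.linalg.eigvals(companion)

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.sort(polynomial_zeros([1, -6, 11, -6]).real))   # approx. [1. 2. 3.]
```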
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Acharyya, Amit. "Resource constrained signal processing algorithms and architectures". Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/179167/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Jalalian, Hamid Reza. "Decomposition evolutionary algorithms for noisy multiobjective optimization". Thesis, University of Essex, 2016. http://repository.essex.ac.uk/16828/.

Testo completo
Abstract (sommario):
Multi-objective problems are a category of optimization problem that contain more than one objective function, and these objective functions must be optimized simultaneously. Should the objective functions conflict, a set of solutions rather than a single solution is required. This set is known as the Pareto optimal set. Multi-objective optimization problems arise in many real-world applications where several competing objectives must be evaluated and optimal solutions found for them, in the presence of trade-offs among conflicting objectives. Maximizing returns while minimizing the risk of stock market investments, or maximizing performance whilst minimizing fuel consumption and hazardous gas emissions when buying a car, are typical examples of real-world multi-objective optimization problems. In this case a number of optimal solutions can be found, known as non-dominated or Pareto optimal solutions. Pareto optimal solutions are reached when it is impossible to improve one objective without making the others worse. Classical ways to address this problem used direct or gradient-based methods, which are insufficient or computationally expensive for large-scale or combinatorial problems. Other difficulties attend the classical methods, such as the need for problem knowledge, which may not be available, or sensitivity to some problem features. For example, finding solutions on the entire Pareto optimal set can only be guaranteed for convex problems. Classical methods for generating the Pareto front aggregate the objectives into a single or parametrized function before search. Thus, several runs and parameter settings are needed to achieve a set of solutions that approximates the Pareto optimal set. Subsequently, new methods have been developed, based on computer experiments with meta-heuristic algorithms. Most of these meta-heuristics implement some sort of stochastic search method, amongst which the 'Evolutionary Algorithm' is garnering much attention. It possesses several characteristics that make it a desirable method for confronting multi-objective problems. As a result, a number of studies in recent decades have developed or modified the MOEA for different purposes. This algorithm works with a population of solutions which are capable of searching for multiple Pareto optimal solutions in a single run. At the same time, only the fittest individuals in each generation are offered the chance of reproduction and representation in the next generation. The fitness assignment function is the guiding system of the MOEA; the fitness value represents the strength of an individual. Unfortunately, many real-world applications bring with them a certain degree of noise due to natural disasters, inefficient models, signal distortion or uncertain information. This noise affects the performance of the algorithm's fitness function and disrupts the optimization process. This thesis explores and targets the effect of this disruptive noise on the performance of the MOEA. In this thesis, we study the noisy MOP and modify MOEA/D to improve its performance in noisy environments. To achieve this, we combine the basic MOEA/D with the 'Ordinal Optimization' technique to handle uncertainties. The major contributions of this thesis are as follows. First, MOEA/D is tested in a noisy environment with different levels of noise, to give us a deeper understanding of where the basic algorithm fails to handle the noise.
Then, we extend the basic MOEA/D to improve its noise handling by employing the ordinal optimization technique. This creates MOEA/D+OO, which outperforms MOEA/D in terms of diversity and convergence in noisy environments. It is tested against benchmark problems with varying levels of noise. Finally, to test the real-world applicability of MOEA/D+OO, we solve a noisy portfolio optimization problem with the proposed algorithm. The portfolio optimization problem is a classic one in finance, in which investors want to maximize a portfolio's return while minimizing the risk of the investment. The latter is measured by the standard deviation of the portfolio's rate of return. These two objectives clearly make it a multi-objective problem.
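Two ingredients named above can be sketched briefly: the bi-objective portfolio evaluation, and the weighted Tchebycheff scalarising function with which MOEA/D decomposes the problem into subproblems. The asset statistics and weights below are made up, and the ordinal-optimization noise handling is not shown:

```python
def portfolio_objectives(weights, mean_returns, cov):
    """f1 = -expected return (to minimise), f2 = risk (standard deviation)."""
    ret = sum(w * m for w, m in zip(weights, mean_returns))
    var = sum(wi * wj * cov[i][j]
              for i, wi in enumerate(weights)
              for j, wj in enumerate(weights))
    return (-ret, var ** 0.5)

def tchebycheff(f, lam, z_star):
    """MOEA/D scalarising function g(x | lambda, z*) = max_i lambda_i * |f_i - z*_i|."""
    return max(l * abs(fi - zi) for l, fi, zi in zip(lam, f, z_star))

mean_returns = [0.08, 0.12, 0.05]          # three illustrative assets
cov = [[0.04, 0.01, 0.00],
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.01]]
w = [0.5, 0.3, 0.2]                        # one candidate portfolio
f = portfolio_objectives(w, mean_returns, cov)
print(f, tchebycheff(f, lam=(0.5, 0.5), z_star=(-0.12, 0.0)))
```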
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Brolin, Echeverria Paolo, e Joakim Westermark. "Benchmarking Rubik’sRevenge algorithms". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134903.

Testo completo
Abstract (sommario):
This Bachelor's thesis investigates two different methods used to solve the Rubik's Cube 4x4x4 puzzle. The analyzed methods are the Reduction and the Big Cube methods. We have implemented the cube and the two solvers in Python. Through a series of tests we have concluded that the Big Cube method has a better average move count as well as a lower standard deviation in comparison to the Reduction method. However, the Reduction method has a lower minimum move count and consists of fewer algorithms. The best approach would be to combine both methods to form an optimal solution.
Denna kandidatexamensuppsats undersöker två olika metoder som används för att lösa Rubiks Kub 4x4x4. Metoderna som analyseras är Reduction och Big Cube. Vi har implementerat kuben samt de bägge lösarna I Python. Genom en serie tester har vi kommit fram till att Big Cube har ett lägre genomsnittligt rotationsantal samt lägre standardavvikelse än Reduction. Reductionmetoden har däremot ett lägre minimumvärde på antalet rotationer och består av färre algoritmer. Det bästa tillvägagångssättet vore att kombinera de båda lösningarna.
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Zhang, Minghua, e 張明華. "Sequence mining algorithms". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B44570119.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Miles, Christopher Eoin. "Case-injected genetic algorithms in computer strategy games". abstract and full text PDF (free order & download UNR users only), 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433686.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Riddell, A. G. "Computer algorithms for Euclidean lattice gauge theory calculations". Thesis, University of Canterbury. Physics, 1988. http://hdl.handle.net/10092/8220.

Testo completo
Abstract (sommario):
The computer algorithm devised by K. Decker [25] for the calculation of strong coupling expansions in Euclidean lattice gauge theory is reviewed. Various shortcomings of this algorithm are pointed out and an improved algorithm is developed. The new algorithm does away entirely with the need to store large amounts of information, and is designed in such a way that memory usage is essentially independent of the order to which the expansion is being calculated. A good deal of the redundancy and double handling present in the algorithm of ref. [25] is also eliminated. The algorithm has been used to generate a 14th order expansion for the energy of a glueball with non-zero momentum in Z₂ lattice gauge theory in 2+1 dimensions. The resulting expression is analysed in order to study the restoration of Lorentz invariance as the theory approaches the continuum. A description is presented of the alterations required to extend the algorithm to calculations in 3+1 dimensions. An eighth order expansion of the Z₂ mass gap in 3+1 dimensions has been calculated. The eighth order term differs from a previously published result.
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Rich, Thomas H. "Algorithms for computer aided design of digital filters". Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/22867.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Mitchell, David Anthony Paul. "Fast algorithms and hardware for 3D computer graphics". Thesis, University of Sheffield, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299571.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Li, Wenda. "Towards justifying computer algebra algorithms in Isabelle/HOL". Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289389.

Testo completo
Abstract (sommario):
As verification efforts using interactive theorem proving grow, we are in need of certified algorithms in computer algebra to tackle problems over the real numbers. This is important because uncertified procedures can drastically increase the size of the trust base and undermine the overall confidence established by interactive theorem provers, which usually rely on a small kernel to ensure the soundness of derived results. This thesis describes an ongoing effort using the Isabelle theorem prover to certify the cylindrical algebraic decomposition (CAD) algorithm, which has been widely implemented to solve non-linear problems in various engineering and mathematical fields. Because of the sophistication of this algorithm, its implementations are often doubted when deployed in safety-critical verification projects, and such doubts motivate this thesis. In particular, this thesis proposes a library of real algebraic numbers, whose distinguishing features include a modular architecture and a sign determination algorithm requiring only rational arithmetic. With this library, an Isabelle tactic based on univariate CAD has been built in a certificate-based way: external, untrusted code delivers solutions in the form of certificates that are checked within Isabelle. To lay the foundation for the multivariate case, I have formalised various analytical results including Cauchy's residue theorem and the bivariate case of the projection theorem of CAD. During this process, I have also built a tactic to evaluate winding numbers through Cauchy indices and verified procedures to count complex roots in certain domains. The formalisation effort in this thesis can be considered a first step towards a certified computer algebra system inside a theorem prover, so that various engineering projects and mathematical calculations can be carried out in a high-confidence framework.
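Sign determination with only rational arithmetic, one of the ingredients mentioned above, can be illustrated with a classical Sturm-sequence real-root count implemented over Python's exact Fraction type. This is a plain textbook sketch, unrelated to the Isabelle formalisation itself:

```python
from fractions import Fraction

def poly_eval(p, x):
    """Evaluate a polynomial given as a coefficient list, highest degree first."""
    acc = Fraction(0)
    for c in p:
        acc = acc * x + c
    return acc

def poly_deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])] or [Fraction(0)]

def poly_rem(a, b):
    """Remainder of a divided by b (exact rational long division)."""
    a = a[:]
    while len(a) >= len(b):
        factor = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= factor * b[i]
        a.pop(0)                       # leading coefficient is now zero
    while len(a) > 1 and a[0] == 0:
        a.pop(0)
    return a or [Fraction(0)]

def sturm_chain(p):
    chain = [p, poly_deriv(p)]
    while any(c != 0 for c in chain[-1]):
        chain.append([-c for c in poly_rem(chain[-2], chain[-1])])
    return chain[:-1]                  # drop the final zero polynomial

def sign_changes(chain, x):
    signs = [1 if v > 0 else -1 for v in (poly_eval(p, x) for p in chain) if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def count_real_roots(p, a, b):
    """Number of distinct real roots of a squarefree polynomial p in (a, b]."""
    chain = sturm_chain(p)
    return sign_changes(chain, Fraction(a)) - sign_changes(chain, Fraction(b))

# p(x) = x^3 - 2x has roots -sqrt(2), 0 and sqrt(2)
p = [Fraction(c) for c in (1, 0, -2, 0)]
print(count_real_roots(p, -2, 2))      # 3
```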
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Erb, Lugo Anthony (Anthony E. ). "Coevolutionary genetic algorithms for proactive computer network defenses". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112841.

Testo completo
Abstract (sommario):
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 47-48).
This thesis explores the use of coevolutionary genetic algorithms as tools for developing proactive computer network defenses. We also introduce rIPCA, a new coevolutionary algorithm with a focus on speed and performance. This work is in response to the threat of disruption that computer networks face from adaptive attackers. Our challenge is to improve network defenses by modeling adaptive attacker behavior and predicting attacks so that we may proactively defend against them. To address this, we introduce RIVALS, a new cybersecurity project developed to use coevolutionary algorithms to better defend against adaptive adversarial agents. In this contribution we describe RIVALS' current suite of coevolutionary algorithms and how they explore archiving as a means of maintaining progressive exploration. Our model also allows us to explore the connectivity of a network under an adversarial threat model. To examine the suite's effectiveness, for each algorithm we execute a standard coevolutionary benchmark (Compare-on-one) and RIVALS simulations on three different network topologies. Our experiments show that existing algorithms either sacrifice execution speed or forgo the assurance of consistent results. rIPCA, our adaptation of IPCA, is able to consistently produce high-quality results, albeit with weakened guarantees, without sacrificing speed.
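A minimal two-population coevolutionary loop conveys the general idea of attacker/defender co-optimisation; the "game", the genotypes and the parameters below are stand-ins, not the RIVALS threat model, the Compare-on-one benchmark, or rIPCA's archiving mechanism:

```python
import random

def play(defender, attacker):
    """Toy game: the defender wins if its value beats the attacker's on a random test."""
    i = random.randrange(len(defender))
    return 1 if defender[i] >= attacker[i] else 0

def evolve(pop, scores, mut=0.2):
    """Keep the better half, refill with mutated copies of survivors."""
    ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
    survivors = ranked[:len(pop) // 2]
    children = [[x + random.uniform(-mut, mut) for x in random.choice(survivors)]
                for _ in range(len(pop) - len(survivors))]
    return survivors + children

defenders = [[random.random() for _ in range(4)] for _ in range(10)]
attackers = [[random.random() for _ in range(4)] for _ in range(10)]
for generation in range(50):
    d_scores = [sum(play(d, a) for a in attackers) for d in defenders]
    a_scores = [sum(1 - play(d, a) for d in defenders) for a in attackers]
    defenders = evolve(defenders, d_scores)
    attackers = evolve(attackers, a_scores)
```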
by Anthony Erb Lugo.
M. Eng.
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Keup, Jessica Faith. "Computer Music Composition using Crowdsourcing and Genetic Algorithms". NSUWorks, 2011. http://nsuworks.nova.edu/gscis_etd/197.

Testo completo
Abstract (sommario):
When genetic algorithms (GAs) are used to produce music, the results are limited by a fitness bottleneck problem: to create effective music, the GA needs to be thoroughly trained by humans, but this takes extensive time and effort. Applying online collective intelligence, or "crowdsourcing", to train a musical GA is one approach to solving the fitness bottleneck problem. The hypothesis was that music created by a GA trained by a crowdsourced group would be more effective and musically sound than music created by a GA trained by a small group. When a group of reviewers and composers evaluated the music, the crowdsourced songs scored slightly higher overall than the small-group songs, but with the small number of evaluators, the difference was not statistically significant.
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Javadi, Mohammad Saleh. "Computer Vision Algorithms for Intelligent Transportation Systems Applications". Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för matematik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17166.

Testo completo
Abstract (sommario):
In recent years, Intelligent Transportation Systems (ITS) have emerged as an efficient way of enhancing traffic flow, safety and management. These goals are realized by combining various technologies and analyzing the acquired data from vehicles and roadways. Among all ITS technologies, computer vision solutions have the advantages of high flexibility, easy maintenance and a high price-performance ratio, which make them very popular for transportation surveillance systems. However, computer vision solutions are demanding and challenging due to computational complexity, reliability, efficiency and accuracy, among other aspects. In this thesis, three transportation surveillance systems based on computer vision are presented. These systems are able to interpret the image data and extract information about the presence, speed and class of vehicles, respectively. The image data in these proposed systems are acquired using an Unmanned Aerial Vehicle (UAV) as a non-stationary source and a roadside camera as a stationary source. The goal of these works is to enhance the general performance of accuracy and robustness of the systems under variant illumination and traffic conditions. This is a compilation thesis in systems engineering consisting of three parts. The red thread through each part is a transportation surveillance system. The first part presents a change detection system using aerial images of a cargo port. The extracted information shows how the space is utilized at various times, aiming for further management and development of the port. The proposed solution can be used at different viewpoints and illumination levels, e.g. at sunset. The method is able to transform the images taken from different viewpoints and match them together. Thereafter, it detects discrepancies between the images using a proposed adaptive local threshold. In the second part, a video-based vehicle speed estimation system is presented. The measured speeds are essential information for law enforcement and they also provide an estimation of traffic flow at certain points on the road. The system employs several intrusion lines to extract the movement pattern of each vehicle (non-equidistant sampling) as an input feature to the proposed analytical model. In addition, other parameters such as the camera sampling rate and the distances between intrusion lines are also taken into account to address the uncertainty in the measurements and to obtain the probability density function of the vehicle's speed. In the third part, a vehicle classification system is provided to categorize vehicles into "private car", "light trailer", "lorry or bus" and "heavy trailer". This information can be used by authorities for surveillance and development of the roads. The proposed system consists of multiple fuzzy c-means clusterings using input features of length, width and speed of each vehicle. The system has been constructed using prior knowledge of traffic regulations regarding each class of vehicle in order to enhance the classification performance.
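At its core, the intrusion-line speed measurement amounts to timing a vehicle between lines of known spacing in a video of known frame rate; the deterministic sketch below shows only that core (the thesis goes further and derives a probability density function for the speed):

```python
def estimate_speed(crossing_frames, line_spacing_m, fps):
    """Average speed (km/h) of a vehicle from the frame numbers at which it
    crosses successive intrusion lines (non-equidistant sampling in time)."""
    speeds = []
    for (f0, f1), gap in zip(zip(crossing_frames, crossing_frames[1:]),
                             line_spacing_m):
        dt = (f1 - f0) / fps                 # seconds between two lines
        speeds.append(gap / dt)              # m/s over this segment
    return 3.6 * sum(speeds) / len(speeds)   # mean of segment speeds, in km/h

# A vehicle crosses four lines, 5 m apart, in a 25 fps video.
print(round(estimate_speed([100, 109, 118, 128], [5.0, 5.0, 5.0], fps=25), 1))
```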
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Heggie, Patricia M. "Algorithms for subgroup presentations : computer implementation and applications". Thesis, University of St Andrews, 1991. http://hdl.handle.net/10023/13684.

Testo completo
Abstract (sommario):
One of the main algorithms of computational group theory is the Todd-Coxeter coset enumeration algorithm, which provides a systematic method for finding the index of a subgroup of a finitely presented group. This has been extended in various ways to provide not only the index of a subgroup, but also a presentation for the subgroup. These methods tie in with a technique introduced by Reidemeister in the 1920s and later improved by Schreier, now known as the Reidemeister-Schreier algorithm. In this thesis we discuss some of these variants of the Todd-Coxeter algorithm and their inter-relation, and also look at existing computer implementations of these different techniques. We then go on to describe a new package for coset methods which incorporates various types of coset enumeration, including modified Todd-Coxeter methods and the Reidemeister-Schreier process. This also has the capability of carrying out Tietze transformation simplification. Statistics obtained from running the new package on a collection of test examples are given, and the various techniques compared. Finally, we use these algorithms, both theoretically and as computer implementations, to investigate a particular class of finitely presented groups defined by the presentation: ⟨ a, b | aⁿ = b² = (ab⁻¹)^β = 1, ab² = ba² ⟩. Some interesting results have been discovered about these groups for various values of β and n. For example, if n is odd, the groups turn out to be finite and metabelian, and if β = 3 or β = 4 the derived group has an order which is dependent on the value of n (mod 8) or n (mod 12) respectively.
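Coset enumeration of the Todd-Coxeter kind is also available in general-purpose libraries; for instance, assuming SymPy's fp_groups module, the order of a small finitely presented group can be computed as below. The presentation shown is a standard one for the symmetric group S3, used purely for illustration rather than the family of groups studied in the thesis:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# S3 presented as < a, b | a^2 = b^2 = (ab)^3 = 1 >.
# FpGroup.order() performs a Todd-Coxeter style coset enumeration internally.
F, a, b = free_group("a, b")
G = FpGroup(F, [a**2, b**2, (a * b)**3])
print(G.order())   # 6
```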
Gli stili APA, Harvard, Vancouver, ISO e altri

Vai alla bibliografia