Dissertations / Theses on the topic 'Algorithms and implementation'

Consult the top 50 dissertations / theses for your research on the topic 'Algorithms and implementation.'


1

Karlsson, Mattias. "IMPLEMENTATION OF ALGORITHMS ON FPGAS." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-9735.

Full text
Abstract:

This thesis describes how an algorithm is transferred from a digital signal processor to an embedded microprocessor in an FPGA using Altera's C-to-Hardware (C2H) compiler.

Saab Avitronics develops the secondary high-lift control system for the Boeing 787 aircraft. The high-lift system consists of electric motors controlling the trailing-edge wing flaps and the leading-edge wing slats. These motors can still control the Boeing 787's high-lift surfaces at full power even if half of each motor's stators are damaged. Each motor is a brushless PMDC motor controlled by an advanced algorithm, which must be computed by a fast, specialised digital signal processor.

In this thesis I have tested whether the algorithm can be transferred to an FPGA while still meeting the timing and safety requirements. This was done by porting an already working algorithm from the digital signal processor to an FPGA. The idea was to run the algorithm on an embedded NIOS II microprocessor and speed up the bottlenecks with Altera's C-to-Hardware compiler.

The study shows that the C code needs to be optimised for C-to-Hardware to deliver the intended speed-up: the tests showed that the algorithm's calculation time actually became longer with C-to-Hardware. The thesis also shows that it is feasible to use an FPGA equipped with Altera's NIOS II microprocessor in a safety-critical role, instead of a digital signal processor, to control the electrical high-lift motors in the Boeing 787 aircraft.

2

Yildiz, Ozgur. "Implementation Of Mesh Generation Algorithms." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1339621/index.pdf.

Full text
Abstract:
In this thesis, three mesh generation software packages have been developed and implemented. The first two were based on structured mesh generation algorithms and were used to solve structured surface and volume mesh generation problems for three-dimensional domains. The structured mesh generation algorithms were based on the concept of isoparametric coordinates. In the structured surface mesh generation software, quadrilateral mesh elements were generated for complex three-dimensional surfaces and these elements were then triangulated in order to obtain high-quality triangular mesh elements. The structured volume mesh generation software was used to generate hexahedral mesh elements for volumes; tetrahedral mesh elements were constructed from the hexahedral elements using a hexahedral node-insertion method. The meshes produced by these algorithms were converted to the required format and saved to output files. The third software package is an unstructured quality tetrahedral mesh generator and was used to generate exact Delaunay tetrahedralizations, constrained (conforming) Delaunay tetrahedralizations and quality conforming Delaunay tetrahedralizations. Apart from the mesh generation algorithms implemented in this thesis, unstructured techniques that can be used to generate quadrilateral, triangular, hexahedral and tetrahedral mesh elements are also discussed.
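The isoparametric idea behind the structured meshes above can be sketched in a few lines: interior nodes come from bilinear interpolation of four corner points, and each quadrilateral cell is then split into two triangles. A toy Python sketch (the corner-based mapping and the diagonal chosen for splitting are illustrative assumptions, not the thesis's code):

```python
def bilinear_grid(p00, p10, p01, p11, n, m):
    """Structured (n+1) x (m+1) node grid via bilinear (isoparametric) interpolation."""
    nodes = []
    for j in range(m + 1):
        for i in range(n + 1):
            u, v = i / n, j / m
            x = (1-u)*(1-v)*p00[0] + u*(1-v)*p10[0] + (1-u)*v*p01[0] + u*v*p11[0]
            y = (1-u)*(1-v)*p00[1] + u*(1-v)*p10[1] + (1-u)*v*p01[1] + u*v*p11[1]
            nodes.append((x, y))
    return nodes

def quads_to_triangles(n, m):
    """Split every quadrilateral cell into two triangles (row-major node indices)."""
    tris = []
    for j in range(m):
        for i in range(n):
            a = j * (n + 1) + i      # lower-left corner of the cell
            b = a + 1                # lower-right
            c = a + (n + 1)          # upper-left
            d = c + 1                # upper-right
            tris += [(a, b, d), (a, d, c)]
    return tris

nodes = bilinear_grid((0, 0), (1, 0), (0, 1), (1, 1), 2, 2)
tris = quads_to_triangles(2, 2)
```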
3

Rothacher, Fritz Markus. "Sample-rate conversion : algorithms and VLSI implementation /." [Konstanz] : Hartung-Gorre, 1995. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10980.

Full text
4

Thakkar, Darshan Suresh, and darshanst@gmail com. "FPGA Implementation of Short Word-Length Algorithms." RMIT University. Electrical and Computer Engineering, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080806.140908.

Full text
Abstract:
Short Word-Length (SWL) refers to single-bit, two-bit or ternary processing systems. SWL systems use the Sigma-Delta Modulation (SDM) technique to express an analogue or multi-bit input signal as a high-frequency single-bit stream. In Sigma-Delta Modulation, the input signal is coarsely quantized into a single-bit representation by sampling it at a rate much higher than twice the maximum input frequency, viz. the Nyquist rate. This single-bit representation is then filtered to remove conversion quantization noise and decimated to the Nyquist rate in preparation for traditional signal processing. SWL algorithms have huge potential in a variety of applications, as they offer many advantages compared with multi-bit approaches: efficient hardware implementation, increased flexibility and massive cost savings. Field Programmable Gate Arrays (FPGAs) are SRAM- or flash-based integrated circuits that can be programmed and re-programmed by the end user. FPGAs are made up of arrays of logic gates, routing channels and I/O blocks. State-of-the-art FPGAs include features such as advanced clock management, dedicated multipliers, DSP slices, high-speed I/O and embedded microprocessors. A System-on-Programmable-Chip (SoPC) design approach uses some or all of the aforementioned resources to create a complete processing system on the device itself, ensuring maximum silicon-area utilization and higher speed by eliminating inter-chip communication overheads. This dissertation focuses on the application of SWL processing systems to audio Class-D amplifiers and aims to substantiate the claims of efficient hardware implementation and higher speeds of operation. The analogue Class-D amplifier is analysed and an SWL equivalent of the system is derived by replacing the analogue components with DSP functions wherever possible.
The SWL Class-D amplifier is implemented on an FPGA, the standard emulation platform, using the VHSIC Hardware Description Language (VHDL). The approach is taken a step further by adding re-configurability and media selectivity, and by proposing SDM adaptivity to improve performance.
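The sigma-delta principle described in the abstract can be sketched with a first-order modulator: an integrator accumulates the difference between the input and the fed-back output bit, and a comparator quantises the result to ±1, so the local average of the bit stream tracks the oversampled input. A toy Python sketch (first-order and purely behavioural; the thesis targets hardware implementations):

```python
def sigma_delta(samples):
    """First-order sigma-delta modulator: maps inputs in [-1, 1] to a
    +/-1 bit stream whose running average tracks the input."""
    bits = []
    integ = 0.0   # integrator state
    y = 0.0       # fed-back previous output value
    for x in samples:
        integ += x - y                   # accumulate input minus feedback
        y = 1.0 if integ >= 0 else -1.0  # 1-bit quantiser
        bits.append(int(y))
    return bits

stream = sigma_delta([0.3] * 1000)       # DC input 0.3, heavily oversampled
avg = sum(stream) / len(stream)          # should sit near 0.3
```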
5

Mahdi, Abdul-Hussain Ebrahim. "Efficient generalized transform algorithms for digital implementation." Thesis, Bangor University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277612.

Full text
6

Farooq, Muhammad. "New Lanczos-type algorithms and their implementation." Thesis, University of Essex, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.536976.

Full text
7

Jerez, Juan Luis. "Custom optimization algorithms for efficient hardware implementation." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/12791.

Full text
Abstract:
The focus is on real-time optimal decision making with application in advanced control systems. These computationally intensive schemes, which involve the repeated solution of (convex) optimization problems within a sampling interval, require more efficient computational methods than are currently available if their application is to extend to highly dynamical systems and to setups with resource-constrained embedded computing platforms. A range of techniques is proposed to exploit synergies between digital hardware, numerical analysis and algorithm design. These techniques build on parameterisable hardware code generation tools that generate VHDL code describing custom computing architectures for interior-point methods and for a range of first-order constrained optimization methods. Since memory limitations are often important in embedded implementations, we develop a custom storage scheme for the KKT matrices arising in interior-point methods for control, which reduces memory requirements significantly and prevents I/O bandwidth limitations from affecting the performance of our implementations. To take advantage of the trend towards parallel computing architectures, and to exploit the special characteristics of our custom architectures, we propose several high-level parallel optimal control schemes that can reduce computation time. A novel optimization formulation was devised to reduce the computational effort of solving certain problems, independent of the computing platform used. In order to be able to solve optimization problems in fixed-point arithmetic, which is significantly more resource-efficient than floating-point, tailored linear algebra algorithms were developed for solving the linear systems that form the computational bottleneck in many optimization methods. These methods come with guarantees for reliable operation.
We also provide finite-precision error analysis for fixed-point implementations of first-order methods that can be used to minimize the use of resources while meeting accuracy specifications. The suggested techniques are demonstrated on several practical examples, including a hardware-in-the-loop setup for optimization-based control of a large airliner.
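The appeal of fixed-point arithmetic mentioned above is easy to see in miniature: values become scaled integers, and multiplies and adds stay integer operations. A toy Q-format sketch in Python (the 16 fractional bits and the final right-shift policy are illustrative assumptions; the thesis's finite-precision error analysis is far more involved):

```python
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantise a float to a Q*.16 integer (round to nearest)."""
    return int(round(x * SCALE))

def from_fixed(q):
    """Interpret a Q*.16 integer as a float."""
    return q / SCALE

def fixed_dot(qa, qb):
    """Dot product of two fixed-point vectors: each product carries
    2*FRAC_BITS fractional bits, so shift back once after accumulating."""
    acc = 0
    for a, b in zip(qa, qb):
        acc += a * b
    return acc >> FRAC_BITS

xs = [0.5, -1.25, 3.0]
ys = [2.0, 0.5, -0.75]
q = fixed_dot([to_fixed(v) for v in xs], [to_fixed(v) for v in ys])
approx = from_fixed(q)
exact = sum(a * b for a, b in zip(xs, ys))   # floating-point reference
```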
8

Ferguson, Phillip David. "Implementation exploration of imaging algorithms on FPGAs." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3419/.

Full text
Abstract:
This portfolio thesis documents the work carried out as part of the Engineering Doctorate (EngD) programme undertaken at the Institute for System Level Integration. This work was sponsored and aided by Thales Optronics Ltd, a company well versed in developing specialised electro-optical devices. Field programmable gate arrays (FPGAs) are the devices of choice for custom image processing algorithms due to their reconfigurable nature. This also makes them more economical for low volume production runs where non-recoverable engineering costs are a large factor. Asynchronous circuits have had a remarkable surge in development over the last 20 years, to such an extent that they are beginning to displace conventional designs for niche applications. Their unique ability to adapt to environmental and data-dependent processing needs has led them to out-perform synchronous designs on ASIC platforms for certain applications. The main body of research was separated into three areas of work, presented as three technical documents. The first area of research addresses an FPGA implementation of contrast limited adaptive histogram equalisation (CLAHE), an algorithm which provides increased visual performance over conventional methods. From this, a novel implementation strategy was provided along with the key design factors for future use in a commercial context. The second area of research investigates the ability to create asynchronous circuits on FPGA devices. The main motivation for this work was to establish whether any of the benefits demonstrated for ASIC devices can be carried over to FPGA devices. The investigation identified the most suitable asynchronous design style for FPGA devices, a design flow that allows asynchronous circuits to function correctly on FPGAs, and novel design strategies for implementing consistent and repeatable asynchronous components. The result of this work established a route to implement circuits asynchronously in an FPGA.
The final area of research focused on a unique conversion tool that allows synchronous circuits to run asynchronously on FPGAs whilst maintaining the same data flow patterns. This research produced an automated tool capable of implementing circuits on an FPGA asynchronously from their synchronous descriptions. This approach allowed the primary motivators of this work to be addressed. The results of this work show timing, resource utilisation and noise spectrum benefits by implementing circuits asynchronously on FPGA devices.
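The CLAHE algorithm from the first strand limits contrast amplification by clipping the histogram before equalisation and redistributing the excess counts. A toy single-tile Python sketch (real CLAHE works per tile and bilinearly interpolates the mappings between tiles; the clip limit here is an arbitrary illustrative value):

```python
def clipped_equalise(pixels, levels=256, clip_limit=40):
    """Histogram-equalise 8-bit pixel values with a per-bin clip limit:
    excess counts are redistributed uniformly before building the CDF."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # clip each bin and collect the excess
    excess = 0
    for i in range(levels):
        if hist[i] > clip_limit:
            excess += hist[i] - clip_limit
            hist[i] = clip_limit
    # redistribute the excess uniformly (remainder ignored in this sketch)
    bonus = excess // levels
    for i in range(levels):
        hist[i] += bonus
    # cumulative distribution -> monotone grey-level mapping
    total = sum(hist)
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc)
    mapping = [round((levels - 1) * c / total) for c in cdf]
    return [mapping[p] for p in pixels]

out = clipped_equalise([10] * 50 + [200] * 14)   # two-level test "tile"
```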
9

Bukina, Elena. "Asymptotically optimal gradient algorithms : analysis and implementation." Nice, 2012. http://www.theses.fr/2012NICE4033.

Full text
Abstract:
In this work we consider the minimization of quadratic functions with sparse, symmetric positive-definite Hessian matrices (or, equivalently, the solution of large linear systems of equations). Classical iterative methods for solving these problems choose the step sizes (and search directions) relative to the spectrum of the matrix, and are thus adapted to the particular problem considered. This adaptive choice involves computations that may limit the efficiency of parallel implementations of a given method: the several (separate) inner products to be computed at each iteration create blocking steps, owing to the global communication required on distributed-memory parallel machines with large numbers of processors. The approach developed in this thesis focuses on a family of gradient methods whose inverse step sizes are selected beforehand. For this type of method, using sequences of step sizes with the arcsine distribution on the interval defined by the bounds of the matrix spectrum achieves fast rates of convergence. There is therefore no need to study the spectrum extensively, since the step sizes are connected to the problem only through the extremal eigenvalues of the Hessian matrix, and we propose to estimate these eigenvalues within the run of the algorithm itself. Owing to the simplicity of the step-size generation, inner products are not required at each iteration (they are needed only at a small number of pre-defined iterations, to determine the spectral boundaries), making the approach particularly attractive in a parallel computing context. Several effective gradient methods are proposed, coupled with pre-computed step-size sequences and eigenvalue estimators. The practical performance of the most appealing of them (in terms of convergence properties and required computational effort) is tested on a set of theoretical and real-life problems. The same approach is also considered for convex quadratic optimization subject to equality constraints.
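The step-size idea can be illustrated on a diagonal quadratic. The thesis draws inverse step sizes from the arcsine distribution on [λmin, λmax]; the sketch below uses deterministic Chebyshev nodes instead, whose limiting distribution is exactly that arcsine law, so the run is reproducible (the spectrum and iteration count are illustrative choices):

```python
import math

def chebyshev_inverse_steps(lmin, lmax, K):
    """K inverse step sizes at the Chebyshev nodes of [lmin, lmax] -- the
    deterministic counterpart of arcsine-distributed inverse step sizes."""
    c, h = (lmax + lmin) / 2, (lmax - lmin) / 2
    return [c + h * math.cos(math.pi * (2 * k + 1) / (2 * K)) for k in range(K)]

def gradient_method(eigs, x0, inv_steps):
    """Iterations x <- x - (1/alpha_k) * H x for H = diag(eigs); the minimiser
    of f(x) = 0.5 * x' H x is the origin."""
    x = list(x0)
    for a in inv_steps:
        x = [xi - (lam * xi) / a for lam, xi in zip(eigs, x)]
    return x

eigs = [1.0, 2.5, 4.0, 7.3, 10.0]            # spectrum inside [1, 10]
steps = chebyshev_inverse_steps(1.0, 10.0, 32)
x = gradient_method(eigs, [1.0] * 5, steps)
err = max(abs(v) for v in x)                 # distance to the minimiser
```

After K steps the error in each eigendirection is the product of the factors (1 - λ/αk), a scaled Chebyshev polynomial that is uniformly tiny on [λmin, λmax].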
10

Sankaran, Sundar G. "Implementation and evaluation of echo cancellation algorithms." Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-02132009-172004/.

Full text
11

Lim, Choon Kee. "Hypercube machine implementation of low-level vision algorithms." Ohio University / OhioLINK, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1182864143.

Full text
12

Hu, Ta-Hsiang. "Discrete cosine transform implementation in VHDL." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA245791.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 1990.
Thesis Advisor(s): Lee, Chin-Hwa ; Yang, Chyan. "December 1990." Description based on title screen as viewed on March 29, 2010. DTIC Identifier(s): Fast Fourier Transform, High Level Languages, CHIPS (Electronics), Computerized Simulation, Signal Processing, Theses, Algorithms, Floating Point Operation, VHDL (Vhsic Hardware Description Language). Author(s) subject terms: FFT System, DCT System Implementation. Includes bibliographical references (p. 152). Also available in print.
13

Xu, Yi-Chang. "Parallel thinning algorithms and their implementation on hypercube machine." Ohio : Ohio University, 1991. http://www.ohiolink.edu/etd/view.cgi?ohiou1183989550.

Full text
14

Kaplo, Fadhel. "A study on implementation of four graph algorithms." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/770954.

Full text
Abstract:
The study of graph theory and its applications has increased substantially in the past 40 years. This is especially the case in applications to computer science and to electrical and industrial engineering; applications range from the design of computer software and hardware to telephone and airline networks. In this thesis, four graph algorithms from two important topics, connectivity and shortest paths, are studied in detail, including definitions, theorems, examples and implementations. In the implementation part, the algorithms are used to solve problems on selected graphs. A graphical representation is also included to give a better picture of a problem and its solution.
Department of Computer Science
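Of the two topics covered, shortest paths are the easier to illustrate. A toy Python sketch of Dijkstra's algorithm (one representative shortest-path algorithm only; the abstract does not name the four algorithms actually implemented):

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest-path distances on a weighted digraph.
    adj maps each node to a list of (neighbour, weight) pairs."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 4), ("c", 1)],
         "c": [("b", 2), ("d", 5)],
         "b": [("d", 1)]}
dist = dijkstra(graph, "a")   # a->c->b->d is the cheapest route to d
```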
15

Studer, Christoph. "Iterative MIMO decoding algorithms and VLSI implementation aspects." Konstanz Hartung-Gorre, 2009. http://d-nb.info/999022172/04.

Full text
16

Zaker, Sassan. "Optimal transistor sizing in VLSI : algorithms & implementation /." [S.l.] : [s.n.], 1994. http://library.epfl.ch/theses/?nr=1223.

Full text
17

Torcolacci, Veronica. "Implementation of Machine Learning Algorithms on Hardware Accelerators." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Nowadays, cutting-edge technology, innovation and efficiency are the cornerstones on which industries are based. Therefore, prognosis and health management have started to play a key role in the prevention of crucial faults and failures. Recognising malfunctions in a system in advance is fundamental in both economic and safety terms. This obviously requires a lot of data – mainly information from sensors or machine control – to be processed, and it is in this scenario that machine learning comes to the aid. This thesis aims to apply these methodologies to prognosis in automatic machines and was carried out at LIAM lab (Laboratorio Industriale Automazione Macchine per il packaging), an industrial research laboratory born from the experience of leading companies in the sector. Machine learning techniques such as neural networks will be exploited to solve the classification problems that derive from the system under examination. Such algorithms will be combined with system identification techniques that estimate the plant parameters, and with feature reduction that compresses the data. This makes it easier for the neural networks to distinguish the different operating conditions and perform good prognosis. In practice, the algorithms will be developed in Python and then implemented on two hardware accelerators, whose performance will be evaluated.
18

Hsieh, Dean C. "Implementation of genetic algorithms in casting production systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0009/MQ52463.pdf.

Full text
19

Lim, Choon Kee. "Hypercube machine implementation of low-level vision algorithms." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182864143.

Full text
20

Obrovac, Marko. "Chemical Computing for Distributed Systems: Algorithms and Implementation." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00925257.

Full text
Abstract:
With the emergence of highly heterogeneous, dynamic and large-scale distributed platforms, the need has arisen for a way to program and manage them efficiently. The concept of autonomic computing proposes to create self-managed systems, that is, systems that are aware of their components and their environment and can configure, optimise, heal and protect themselves. In the effort to build such systems, declarative programming, whose goal is to ease the programmer's task by separating control from the logic of the computation, has recently regained considerable interest. In particular, rule-based programming is seen as a promising model in this quest for programming abstractions suited to these platforms. However, although such models are attracting much attention, they create a demand for generic tools able to execute them at large scale. The chemical programming model, designed by analogy with chemistry, is a higher-order rule-based programming model with non-deterministic execution, in which rules are applied concurrently to a multiset of data. In this thesis we propose the design, development and experimental evaluation of a distributed middleware for the execution of chemical programs on large-scale, general-purpose platforms. The proposed architecture combines a peer-to-peer communication layer with a protocol for atomically capturing the objects on which rules are to be applied, and an efficient termination-detection scheme. We describe the middleware prototype implementing this architecture. Based on its deployment on a large-scale experimental platform, we present performance results which confirm the analytical complexities obtained, and we experimentally demonstrate the viability of such a programming model.
21

Day, Andrew. "The serial and parallel implementation of geometric algorithms." Thesis, University of East Anglia, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280057.

Full text
22

Sims, Oliver. "Efficient implementation of video processing algorithms on FPGA." Thesis, University of Glasgow, 2007. http://theses.gla.ac.uk/4119/.

Full text
Abstract:
The work contained in this portfolio thesis was carried out as part of an Engineering Doctorate (Eng.D) programme from the Institute for System Level Integration. The work was sponsored by Thales Optronics, and focuses on issues surrounding the implementation of video processing algorithms on field programmable gate arrays (FPGA). A description is given of FPGA technology and the currently dominant methods of designing and verifying firmware. The problems of translating a description of behaviour into one of structure are discussed, and some of the latest methodologies for tackling this problem are introduced. A number of algorithms are then looked at, including methods of contrast enhancement, deconvolution, and image fusion. Algorithms are characterised according to the nature of their execution flow, and this is used as justification for some of the design choices that are made. An efficient method of performing large two-dimensional convolutions is also described. The portfolio also contains a discussion of an FPGA implementation of a PID control algorithm, an overview of FPGA dynamic reconfigurability, and the development of a demonstration platform for rapid deployment of video processing algorithms in FPGA hardware.
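One standard route to the "efficient large two-dimensional convolutions" mentioned above is separability: when the 2-D kernel is an outer product of two 1-D kernels, the convolution becomes a row pass followed by a column pass, cutting the per-pixel cost from N² to 2N multiplies. A toy Python sketch with replicated borders (the kernel and border policy are illustrative assumptions, not the thesis's method):

```python
def conv1d_replicate(row, kernel):
    """1-D convolution with replicated (clamped) borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, c in enumerate(kernel):
            j = min(max(i + k - r, 0), len(row) - 1)  # clamp index at borders
            acc += c * row[j]
        out.append(acc)
    return out

def conv2d_separable(img, kernel):
    """Separable 2-D convolution: filter the rows, then the columns."""
    rows = [conv1d_replicate(r, kernel) for r in img]
    cols = [conv1d_replicate(list(c), kernel) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

smooth = [0.25, 0.5, 0.25]                # 1-D binomial smoothing kernel
img = [[5.0] * 4 for _ in range(3)]       # constant test image
out = conv2d_separable(img, smooth)       # smoothing preserves a constant
```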
23

Abo-Z, Ali Mahmoud. "General purpose feature extraction algorithms and their implementation." Thesis, University of Kent, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328093.

Full text
24

Lampl, Tanja. "Implementation of adaptive filtering algorithms for noise cancellation." Thesis, Högskolan i Gävle, Avdelningen för elektroteknik, matematik och naturvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-33277.

Full text
Abstract:
This paper deals with the implementation and performance evaluation of adaptive filtering algorithms for noise cancellation without a reference signal. Noise cancellation is a technique for estimating a desired signal from a noise-corrupted observation. If the signal and noise characteristics are unknown or change continuously over time, the need for adaptive filters arises. In contrast to conventional digital filter design techniques, adaptive filters do not have constant filter parameters; they have the capability to continuously adjust their coefficients to the operating environment. To design an adaptive filter that produces an optimum estimate of the desired signal from the noisy environment, different adaptive filtering algorithms are implemented and compared to each other. The Least Mean Square (LMS), the Normalized Least Mean Square (NLMS) and the Recursive Least Square (RLS) algorithms are investigated. Three performance criteria are used in the study of these algorithms: the rate of convergence, the error performance and the signal-to-noise ratio (SNR). The implementation results show that the adaptive noise cancellation application benefits more from the use of the NLMS algorithm than from the LMS or RLS algorithm.
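Because the thesis cancels noise without a separate reference signal, a natural illustration is the adaptive line enhancer: a predictor fed only with past input samples locks onto the periodic (narrowband) component, and its prediction error shrinks as it converges. A toy NLMS sketch in Python (filter order, step size and the sinusoidal test signal are illustrative choices):

```python
import math

def nlms_predictor(x, order=4, mu=0.5, eps=1e-8):
    """NLMS one-step-ahead predictor; returns the prediction errors.
    On a periodic input the error decays as the filter locks on."""
    w = [0.0] * order
    errors = []
    for n in range(order, len(x)):
        u = x[n - order:n]                          # regressor of past samples
        y = sum(wi * ui for wi, ui in zip(w, u))    # prediction
        e = x[n] - y                                # prediction error
        norm = eps + sum(ui * ui for ui in u)       # normalisation term
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errors.append(e)
    return errors

tone = [math.sin(0.1 * math.pi * n) for n in range(2000)]  # narrowband "noise"
err = nlms_predictor(tone)
tail = sum(abs(e) for e in err[-100:]) / 100   # residual after convergence
```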
25

Struble, Craig Andrew. "Analysis and Implementation of Algorithms for Noncommutative Algebra." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27393.

Full text
Abstract:
A fundamental task of algebraists is to classify algebraic structures. For example, the classification of finite groups has been widely studied and has benefited from the use of computational tools. Advances in computer power have allowed researchers to attack problems never possible before. In this dissertation, algorithms for noncommutative algebra, when ab is not necessarily equal to ba, are examined with practical implementations in mind. Different encodings of associative algebras and modules are also considered. To effectively analyze these algorithms and encodings, the encoding neutral analysis framework is introduced. This framework builds on the ideas used in the arithmetic complexity framework defined by Winograd. Results in this dissertation fall into three categories: analysis of algorithms, experimental results, and novel algorithms. Known algorithms for calculating the Jacobson radical and Wedderburn decomposition of associative algebras are reviewed and analyzed. The algorithms are compared experimentally and a recommendation for algorithms to use in computer algebra systems is given based on the results. A new algorithm for constructing the Drinfel'd double of finite dimensional Hopf algebras is presented. The performance of the algorithm is analyzed and experiments are performed to demonstrate its practicality. The performance of the algorithm is elaborated upon for the special case of group algebras and shown to be very efficient. The MeatAxe algorithm for determining whether a module contains a proper submodule is reviewed. Implementation issues for the MeatAxe in an encoding neutral environment are discussed. A new algorithm for constructing endomorphism rings of modules defined over path algebras is presented. This algorithm is shown to have better performance than previously used algorithms. 
Finally, a linear-time algorithm to determine whether a quotient of a path algebra with a known Gröbner basis is finite or infinite dimensional is described. This algorithm is based on the Aho-Corasick pattern-matching automaton. The resulting automaton is used to efficiently determine the dimension of the algebra, enumerate a basis for the algebra, and reduce elements to normal forms.
Ph. D.
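The finiteness test in the last paragraph can be sketched for the one-vertex case, where the path algebra is a free algebra and the Gröbner basis contributes a set of forbidden leading monomials: basis words are exactly the words avoiding those factors, they are recognised by an Aho-Corasick-style automaton whose states are the proper prefixes of the forbidden words, and the quotient is finite-dimensional precisely when that automaton is acyclic. A toy Python sketch (naive word counting rather than the thesis's linear-time construction; the two-letter examples are illustrative):

```python
def dimension(alphabet, forbidden):
    """Basis words of k<alphabet>/(forbidden monomials) are the words with no
    forbidden factor.  States are the longest suffix of the word read so far
    that is a proper prefix of a forbidden word.  Returns the dimension, or
    None if the automaton has a cycle (infinite-dimensional quotient)."""
    prefixes = {""}
    for f in forbidden:
        for i in range(len(f)):
            prefixes.add(f[:i])

    def step(state, letter):
        t = state + letter
        if any(t.endswith(f) for f in forbidden):
            return None                  # a forbidden factor appeared: dead
        while t not in prefixes:
            t = t[1:]                    # longest suffix that is a prefix
        return t

    def count(state, stack):
        if state in stack:
            return None                  # state reachable from itself: cycle
        total = 1                        # counts the word reaching this state
        for a in alphabet:
            nxt = step(state, a)
            if nxt is not None:
                sub = count(nxt, stack | {state})
                if sub is None:
                    return None
                total += sub
        return total

    return count("", frozenset())

fin = dimension("xy", ["xx", "yy", "xy", "yx"])   # basis {1, x, y}
inf = dimension("xy", ["xx", "yy"])               # alternating words never end
```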
26

Knight, Alan (Alan Evan). "Implementation of algorithms in a computational geometry workbench." Carleton University dissertation, Computer Science. Ottawa, 1990.

Find full text
27

Nagel, Peter Borden. "Multiprocessor implementation of algorithms for multisatellite orbit determination /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
28

Heggie, Patricia M. "Algorithms for subgroup presentations : computer implementation and applications." Thesis, University of St Andrews, 1991. http://hdl.handle.net/10023/13684.

Full text
Abstract:
One of the main algorithms of computational group theory is the Todd-Coxeter coset enumeration algorithm, which provides a systematic method for finding the index of a subgroup of a finitely presented group. This has been extended in various ways to provide not only the index of a subgroup, but also a presentation for the subgroup. These methods tie in with a technique introduced by Reidemeister in the 1920s and later improved by Schreier, now known as the Reidemeister-Schreier algorithm. In this thesis we discuss some of these variants of the Todd-Coxeter algorithm and their inter-relation, and also look at existing computer implementations of these different techniques. We then go on to describe a new package for coset methods which incorporates various types of coset enumeration, including modified Todd-Coxeter methods and the Reidemeister-Schreier process. This also has the capability of carrying out Tietze transformation simplification. Statistics obtained from running the new package on a collection of test examples are given, and the various techniques compared. Finally, we use these algorithms, both theoretically and as computer implementations, to investigate a particular class of finitely presented groups defined by the presentation: < a, b | a^n = b^2 = (ab^-1)^β = 1, ab^2 = ba^2 >. Some interesting results have been discovered about these groups for various values of β and n. For example, if n is odd, the groups turn out to be finite and metabelian, and if β = 3 or β = 4 the derived group has an order which is dependent on the value of n (mod 8) or n (mod 12) respectively.
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Chiung-Hsing. "Inner-product based signal processing algorithms and VLSI implementation." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1173764627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Yaw-Huei 1959. "A NEW TEST GENERATION ALGORITHM IMPLEMENTATION." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276527.

Full text
Abstract:
This thesis describes a new test generation algorithm, the depth-first algorithm. The algorithm detects reconvergent fanout, and controllability and observability measures are included to guide the forward and consistency drives. The major objective of this research is to develop a test vector generation algorithm, modified from the D-algorithm, and to link it with the SCIRTSS programs. The depth-first algorithm is more accurate and more efficient than the D-algorithm. Several circuits were tested under DF3 and SCR3 and the results are listed in this thesis.
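The depth-first algorithm itself is not reproduced in this listing, but the underlying task can be sketched with a minimal brute-force example: a test vector for a stuck-at fault is any input that makes the faulty circuit's output differ from the fault-free one. The circuit and fault model here are invented for illustration only.

```python
from itertools import product

def circuit(a, b, c, fault=None):
    """Small combinational circuit: y = (a AND b) OR c.
    `fault`, if given, forces the internal net n1 = a AND b to 0 or 1,
    modelling a stuck-at fault on that net."""
    n1 = a & b
    if fault is not None:
        n1 = fault          # stuck-at-0 or stuck-at-1 on net n1
    return n1 | c

def find_test_vectors(fault):
    """All input vectors that distinguish the faulty circuit from the
    fault-free one, i.e. that detect the fault at the output."""
    return [v for v in product([0, 1], repeat=3)
            if circuit(*v) != circuit(*v, fault=fault)]

# A stuck-at-0 fault on n1 is only observable when a = b = 1 and c = 0.
print(find_test_vectors(0))   # -> [(1, 1, 0)]
```

Structural algorithms such as the D-algorithm and the depth-first variant reach the same vectors without exhaustive enumeration, which is what makes them practical for large circuits.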
APA, Harvard, Vancouver, ISO, and other styles
31

Aytekin, Arda. "Asynchronous Algorithms for Large-Scale Optimization : Analysis and Implementation." Licentiate thesis, KTH, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-203812.

Full text
Abstract:
This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation in shared and distributed memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks with stale information without jeopardizing convergence to the optimal solution. The first part of the thesis focuses on shared memory architectures. We propose and analyze a family of algorithms to solve an unconstrained, smooth optimization problem consisting of a large number of component functions. Specifically, we investigate the effect of information delay, inherent in asynchronous implementations, on the convergence properties of the incremental prox-gradient descent method. Contrary to related proposals in the literature, we establish delay-insensitive convergence results: the proposed algorithms converge under any bounded information delay, and their constant step-size can be selected independently of the delay bound. Then, we shift focus to solving constrained, possibly non-smooth, optimization problems in a distributed memory architecture. This time, we propose and analyze two important families of gradient descent algorithms: asynchronous mini-batching and incremental aggregated gradient descent. In particular, for asynchronous mini-batching, we show that, by suitably choosing the algorithm parameters, one can recover the best-known convergence rates established for delay-free implementations, and expect a near-linear speedup with the number of computing nodes. Similarly, for incremental aggregated gradient descent, we establish global linear convergence rates for any bounded information delay. Extensive simulations and actual implementations of the algorithms in different platforms on representative real-world problems validate our theoretical results.
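The delay-insensitive behaviour described in the abstract can be demonstrated with a toy sketch (not the thesis's algorithms): gradient descent on a scalar quadratic where each gradient is computed from a randomly stale iterate, mimicking an asynchronous worker. All parameter values are illustrative.

```python
import random

def delayed_gradient_descent(max_delay, steps=400, step_size=0.05):
    """Minimize f(x) = 0.5 * x^2 using gradients computed from stale
    iterates: each update reads an iterate up to `max_delay` steps old,
    as an asynchronous worker would."""
    history = [5.0]                      # x_0
    for k in range(steps):
        delay = random.randint(0, min(max_delay, k))
        stale_x = history[-1 - delay]    # stale read of the iterate
        grad = stale_x                   # f'(x) = x
        history.append(history[-1] - step_size * grad)
    return history[-1]

random.seed(0)
print(abs(delayed_gradient_descent(max_delay=10)))  # small: converged near x* = 0
```

For this quadratic the iteration stays stable for any bounded delay as long as the constant step size is small enough, which mirrors the delay-insensitive convergence results described above.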

QC 20170317

APA, Harvard, Vancouver, ISO, and other styles
32

Alanazi, Zeyad M. "A study and implementation of some visibility graph algorithms." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/917045.

Full text
Abstract:
In recent years extensive research has been done on visibility graphs. In this thesis, we study some of the visibility graph algorithms, and implement these algorithms in the graph editor GraphPerfect, which is part of a project headed by Dr. Jay S. Bagga of the Department of Computer Science at Ball State University. One of the goals of this project is to design and build a software tool to learn and work with graphs and graph algorithms. In this thesis, some properties of visibility graphs are studied in detail and implementations of some graph algorithms are given.
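As a minimal illustration of the visibility idea (not the thesis's algorithms): two points see each other when the segment joining them crosses no obstacle segment. The sketch below tests only proper crossings and ignores collinear touching cases for brevity.

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b (orientation test)."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 properly cross (endpoints that
    merely touch are not counted, a simplification)."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def visibility_graph(points, obstacles):
    """Edges between point pairs whose connecting segment crosses
    no obstacle segment."""
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if not any(segments_intersect(points[i], points[j], a, b)
                       for a, b in obstacles):
                edges.append((i, j))
    return edges

pts = [(0, 0), (4, 0), (2, 3)]
wall = [((2, -1), (2, 1))]          # vertical wall blocking (0,0)-(4,0)
print(visibility_graph(pts, wall))  # -> [(0, 2), (1, 2)]
```

This naive construction is O(n^2 m); the algorithms studied in such theses exist precisely to do better than checking every pair against every obstacle.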
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
33

Lee, Jeong Eun. "Bayesian hybrid algorithms and models : implementation and associated issues." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/33151/1/Jeong_Lee_Thesis.pdf.

Full text
Abstract:
This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are ‘hybrid’ in nature, in that they are a composition of components for which their individual properties may be easily described but the performance of the model or algorithm as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context. The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described and the model is compared to the Normal case based on the goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature. The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches will be considered: a formal comparison of the performance of a range of hybrid algorithms and a theoretical investigation of the performance of one of these algorithms in high dimensions.
For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as a set of examples of hybrid algorithms. Statistical literature shows how statistical efficiency is often the only criterion for an efficient algorithm. In this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual algorithms contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by the combination process of these components in a single algorithm. The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular the importance sampling based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampling, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. Additionally, the exponential growth of the asymptotic variance with the dimension is demonstrated and we illustrate that the optimal covariance matrix for the importance function can be estimated in a special case.
APA, Harvard, Vancouver, ISO, and other styles
34

Serguienko, Anton. "Evaluation of Image Warping Algorithms for Implementation in FPGA." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11849.

Full text
Abstract:

The target of this master thesis is to evaluate the image warping technique and propose a possible design for an implementation in an FPGA. Image warping is widely used in image processing for image correction and rectification. A DSP is the usual choice for implementing image processing algorithms, but to decrease the cost of the target system it was proposed to use an FPGA instead.

In this work different image warping methods were evaluated in terms of performance, produced image quality, complexity and design size. Also, considering that the image warping algorithm is not the only one to be implemented on the target system, it was important to estimate the memory bandwidth used by the proposed design. The evaluation was done by implementing a C-model of the proposed design with a finite datapath, to simulate the hardware implementation as closely as possible.

APA, Harvard, Vancouver, ISO, and other styles
35

Martin, Lorca Dario. "Implementation And Comparison Of Reconstruction Algorithms For Magnetic Resonance." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608250/index.pdf.

Full text
Abstract:
In magnetic resonance electrical impedance tomography (MR-EIT), cross-sectional images of a conductivity distribution are reconstructed. When current is injected into a conductor, it generates a magnetic field, which can be measured by a magnetic resonance imaging (MRI) scanner. MR-EIT reconstruction algorithms can be grouped into two: current density based reconstruction algorithms (Type-I) and magnetic flux density based reconstruction algorithms (Type-II). The aim of this study is to implement a series of reconstruction algorithms for MR-EIT, proposed by several research groups, and compare their performance under the same circumstances. Five direct and one iterative Type-I algorithms, and an iterative Type-II algorithm are investigated. Reconstruction errors and spatial resolution are quantified and compared. Noise levels corresponding to system SNR 60, 30 and 20 are considered. Iterative algorithms provide the lowest errors for the noise-free case. For the noisy cases, the iterative Type-I algorithm yields a lower error than the Type-II, although it can diverge for SNR lower than 20. Both of them suffer significant blurring effects, especially at SNR 20. Two other algorithms make use of integration in the reconstruction, producing intermediate errors, but with high blurring effects. Equipotential lines are calculated for two reconstruction algorithms. These lines may not be found accurately when SNR is lower than 20. Another disadvantage is that some pixels may not be covered and, therefore, cannot be reconstructed. Finally, the algorithm involving the solution of a linear system provides the least blurred images with intermediate error values. It is also very robust against noise.
APA, Harvard, Vancouver, ISO, and other styles
36

Janis, Markus. "Analyzing and implementation of compression algorithms in an FPGA." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-80488.

Full text
Abstract:
The thesis was performed at ÅF AB in Stockholm, where one of the development projects needed a compression algorithm. The work was done in two major stages: background theory was compiled and evaluated with respect to suitability for FPGA implementation, followed by an implementation phase in which the suitable algorithms were implemented. The algorithm was integrated into a system, developed at ÅF AB, built around a Xilinx Virtex 5 FPGA platform. The development was mainly done in VHDL; other programming languages such as Matlab and C++ were also used. A testbench was constructed to evaluate the performance of the algorithms with respect to their ability to compress the data in a test file. This test showed that run-length encoding was best suited for the task. The result of this test was, however, not the only source of information for the choice of algorithm. Due to a privacy agreement, some variables are not included in the report. The resulting design was intended to act as a foundation for future thesis work within ÅF AB.
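Run-length encoding, the scheme the test favoured, is simple enough to sketch in a few lines (a generic byte-oriented variant, not the thesis's VHDL design). Runs are capped at 255 so each count fits in one byte.

```python
def rle_encode(data: bytes) -> list:
    """Run-length encode a byte string into (count, value) pairs,
    with counts capped at 255 so they fit in a single byte."""
    runs = []
    for b in data:
        if runs and runs[-1][1] == b and runs[-1][0] < 255:
            runs[-1] = (runs[-1][0] + 1, b)
        else:
            runs.append((1, b))
    return runs

def rle_decode(runs: list) -> bytes:
    """Expand (count, value) pairs back into the original byte string."""
    return b"".join(bytes([v]) * c for c, v in runs)

data = b"aaaabbbcd"
runs = rle_encode(data)
print(runs)                         # -> [(4, 97), (3, 98), (1, 99), (1, 100)]
assert rle_decode(runs) == data
```

The same per-symbol structure (compare with the previous value, emit a run when it changes) maps naturally onto a streaming hardware pipeline, which is part of why RLE suits an FPGA.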
APA, Harvard, Vancouver, ISO, and other styles
37

Groves, Mark. "Concurrent Implementation of Packet Processing Algorithms on Network Processors." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2937.

Full text
Abstract:
Network Processor Units (NPUs) are a compromise between software-based and hardwired packet processing solutions. While slower than hardwired solutions, NPUs have the flexibility of software-based solutions, allowing them to adapt faster to changes in network protocols.

Network processors have multiple processing engines so that multiple packets can be processed simultaneously within the NPU. In addition, each of these processing engines is multi-threaded, with special hardware support built in to alleviate some of the cost of concurrency. This hardware design allows the NPU to handle multiple packets concurrently, so that while one thread is waiting for a memory access to complete, another thread can be processing a different packet. By handling several packets simultaneously, an NPU can achieve similar processing power as traditional packet processing hardware, but with greater flexibility.

The flexibility of network processors is also one of the disadvantages associated with them. Programming a network processor requires an in-depth understanding of the hardware as well as a solid foundation in concurrent design and programming. This thesis explores the challenges of programming a network processor, the Intel IXP2400, using a single-threaded packet scheduling algorithm as a sample case. The algorithm used is a GPS approximation scheduler with constant time execution. The thesis examines the process of implementing the algorithm in a multi-threaded environment, and discusses the scalability and load-balancing aspects of such an algorithm. In addition, optimizations are made to the scheduler implementation to improve the potential concurrency. The synchronization primitives available on the network processor are also examined, as they play a significant part in minimizing the overhead required to synchronize memory accesses by the algorithm.
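The thesis's own GPS approximation scheduler is not reproduced here. As an illustration of the same family, a constant-work-per-packet approximation of GPS fairness, below is a minimal deficit round robin sketch; the flows, quantum and packet sizes are invented.

```python
from collections import deque

def drr_schedule(queues, quantum=500, rounds=4):
    """Deficit round robin: a constant-time-per-packet approximation of
    GPS fair queuing. Each backlogged flow earns `quantum` bytes of
    credit per round and sends head-of-line packets while credit lasts."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0          # idle flows keep no credit
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                size = q.popleft()       # packet sizes in bytes
                deficits[i] -= size
                sent.append((i, size))
    return sent

flows = [deque([400, 400, 400]), deque([900, 900])]
print(drr_schedule(flows))
# -> [(0, 400), (0, 400), (1, 900), (0, 400), (1, 900)]
```

Over the four rounds each flow transmits roughly one quantum of bytes per round regardless of its packet sizes, which is the fairness property a GPS approximation is after; the inner loop does O(1) work per packet sent.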
APA, Harvard, Vancouver, ISO, and other styles
38

Wåreus, Linus, and Max Wällstedt. "Comparison and Implementation of Query Containment Algorithms for XPath." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186467.

Full text
Abstract:
This thesis investigates the practical aspects of implementing Query Containment algorithms for the query language XPath. Query Containment is the problem to decide if the results of one query are a subset of the results of another query for any database. Query Containment algorithms can be used for the purpose of optimising the querying process in database systems. Two algorithms have been implemented and compared, The Canonical Model and The Homomorphism Technique. The algorithms have been compared with respect to speed, ease of implementation, accuracy and usability in database systems. Benchmark tests were developed to measure the execution times of the algorithms on a specific set of queries. A simple database system was developed to investigate the performance gain of using the algorithms. It was concluded that The Homomorphism Technique outperforms The Canonical Model in every test case with respect to speed. The Canonical Model is however more accurate than The Homomorphism Technique. Both algorithms were easy to implement, but The Homomorphism Technique was easier. In the database system, there was performance to be gained by using Query Containment algorithms for a certain type of queries, but in most cases there was a performance loss. A database system that utilises Query Containment algorithms for optimisation would for every issued query have to evaluate if such an algorithm should be used.
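The Homomorphism Technique for full XPath is more involved than can be shown here, but for the wildcard-free, child-axis-only fragment containment reduces to a root-to-root tree-pattern homomorphism, sketched below. Class and function names are ours; the homomorphism test is sound in general and complete for this restricted fragment.

```python
class PNode:
    """Tree-pattern node: a label plus child subpatterns (child axis only)."""
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def hom(q, p):
    """Homomorphism from pattern node q into pattern node p: labels must
    match ('*' in q matches anything) and every child of q must map into
    some child of p."""
    if q.label != "*" and q.label != p.label:
        return False
    return all(any(hom(qc, pc) for pc in p.children) for qc in q.children)

def contains(q, p):
    """q contains p iff q maps into p by a root-to-root homomorphism
    (complete for child-only, wildcard-free patterns)."""
    return hom(q, p)

# P = /a/b[c][d] is contained in Q = /a/b[c]: every match of P matches Q.
p = PNode("a", [PNode("b", [PNode("c"), PNode("d")])])
q = PNode("a", [PNode("b", [PNode("c")])])
print(contains(q, p), contains(p, q))  # -> True False
```

The homomorphism check runs in polynomial time, which is why it outperformed the canonical-model approach in the benchmarks; the price, as the abstract notes, is completeness on richer fragments (descendant axis, wildcards).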
APA, Harvard, Vancouver, ISO, and other styles
39

Örnberg, Dennis. "Comparison and implementation of graph visualization algorithms using JavaFX." Thesis, Linköpings universitet, Databas och informationsteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133071.

Full text
Abstract:
Graph drawing is an important area in computer science and it has many different application areas. For example, graphs can be used to visualize structures like networks and databases. When graphs are very big, however, it becomes difficult to draw them so that the user can get a good overview of the whole graph and all of its data. There exist a number of different algorithms that can be used to draw graphs, but they differ considerably. The goal of this report was to find an algorithm that produces graphs of satisfying quality in little time for the purpose of ontology engineering, and to implement it on a platform that visualizes the graph using JavaFX. It is supposed to work on a visualization table with a touch display. A list of criteria for both the algorithm and the application was made to ensure that the final result would be satisfactory. A comparison between four well-known graph visualization algorithms was made, and “GEM” was found to be the algorithm best suited for visualizing big graphs. The two platforms Gephi and Prefux were introduced and compared to each other, and the decision was made to implement the algorithm in Prefux since it has support for JavaFX. The algorithm was implemented and evaluated, and was found to produce visually pleasing graphs within a reasonable time frame. A modified version of the algorithm, called GEM-2, was also introduced, implemented and evaluated. With GEM-2, the user can pick a specific number of levels to be expanded at first; additional levels can then be expanded by hand. This greatly improves performance when there is no need to expand the whole graph at once; however, it also increases the number of edge crossings, which makes the graph less visually pleasing.
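GEM itself uses per-vertex temperatures, gravity and rotation detection; the sketch below is only a generic spring embedder in the same force-directed family (all pairs repel, edges attract, damped updates), with invented constants, to show the kind of iteration these layout algorithms perform.

```python
import math
import random

def force_layout(nodes, edges, iters=200, k=1.0):
    """Simple spring embedder: pairwise repulsion ~ k^2/d, attraction
    along edges ~ d^2/k, with a capped (damped) displacement per round."""
    random.seed(42)
    pos = {v: [random.random(), random.random()] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                       # pairwise repulsion
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]; dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
        for u, v in edges:                    # attraction along edges
            dx = pos[u][0] - pos[v][0]; dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
        for v in nodes:                       # damped position update
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            step = min(d, 0.05)
            pos[v][0] += step * dx / d; pos[v][1] += step * dy / d
    return pos

pos = force_layout(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")])
```

On a path graph the layout stretches out so that adjacent nodes end up closer than distant ones. The all-pairs repulsion makes each round O(n^2), which is exactly the cost that GEM-style refinements and partial expansion (as in GEM-2) try to tame for big graphs.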
APA, Harvard, Vancouver, ISO, and other styles
40

Daami, Mourad. "Synchronization control of coded video streams, algorithms and implementation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq26314.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Tappenden, Rachael Elizabeth Helen. "Development & Implementation of Algorithms for Fast Image Reconstruction." Thesis, University of Canterbury. Mathematics and Statistics, 2011. http://hdl.handle.net/10092/5998.

Full text
Abstract:
Signal and image processing is important in a wide range of areas, including medical and astronomical imaging, and speech and acoustic signal processing. There is often a need for the reconstruction of these objects to be very fast, as they have some cost (perhaps a monetary cost, although often it is a time cost) attached to them. This work considers the development of algorithms that allow these signals and images to be reconstructed quickly and without perceptual quality loss. The main problem considered here is that of reducing the amount of time needed for images to be reconstructed, by decreasing the amount of data necessary for a high quality image to be produced. In addressing this problem two basic ideas are considered. The first is a subset selection problem where the aim is to extract a subset of data, of a predetermined size, from a much larger data set. To do this we first need some metric with which to measure how `good' (or how close to `best') a data subset is. Then, using this metric, we seek an algorithm that selects an appropriate data subset from which an accurate image can be reconstructed. Current algorithms use a criterion based upon the trace of a matrix. In this work we derive a simpler criterion based upon the determinant of a matrix. We construct two new algorithms based upon this new criterion and provide numerical results to demonstrate their accuracy and efficiency. A row exchange strategy is also described, which takes a given subset and performs interchanges to improve the quality of the selected subset. The second idea is, given a reduced set of data, how can we quickly reconstruct an accurate signal or image? Compressed sensing provides a mathematical framework that explains that if a signal or image is known to be sparse relative to some basis, then it may be accurately reconstructed from a reduced set of data measurements. The reconstruction process can be posed as a convex optimization problem. 
We introduce an algorithm that aims to solve the corresponding problem and accurately reconstruct the desired signal or image. The algorithm is based upon the Barzilai-Borwein algorithm and tailored specifically to the compressed sensing framework. Numerical experiments show that the algorithm is competitive with currently used algorithms. Following the success of compressed sensing for sparse signal reconstruction, we consider whether it is possible to reconstruct other signals with certain structures from reduced data sets. Specifically, signals that are a combination of a piecewise constant part and a sparse component are considered. A reconstruction process for signals of this type is detailed and numerical results are presented.
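The thesis's tailored algorithm is not reproduced here, but the core of any Barzilai-Borwein method fits in a short sketch: the step size is estimated from the last two iterates and gradients instead of a line search, which keeps each iteration cheap. The example problem below is invented.

```python
def bb_gradient(A, b, iters=50):
    """Barzilai-Borwein gradient method for min 0.5 * ||Ax - b||^2.
    The step alpha = (s.s)/(s.y) with s = x_k - x_{k-1}, y = g_k - g_{k-1}
    approximates inverse-curvature information at no extra cost."""
    n = len(A[0])
    def matvec(M, v):
        return [sum(row[j] * v[j] for j in range(len(v))) for row in M]
    def grad(x):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]   # residual Ax - b
        At = [list(col) for col in zip(*A)]
        return matvec(At, r)                               # gradient A^T r
    x = [0.0] * n
    g = grad(x)
    alpha = 1e-3                          # small, safe first step
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - c for a, c in zip(x_new, x)]
        y = [a - c for a, c in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        alpha = sum(si * si for si in s) / sy if sy > 1e-12 else alpha
        x, g = x_new, g_new
    return x

A = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 1.0, 2.0]
print(bb_gradient(A, b))   # close to the least-squares solution [1.0, 1.0]
```

The iterates are nonmonotone, which is characteristic of BB methods; compressed sensing variants such as the one in the thesis add shrinkage/thresholding steps on top of this two-point step-size rule.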
APA, Harvard, Vancouver, ISO, and other styles
42

Sundaram, Mohana. "The implementation of dynamic programming algorithms for looped systems." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 1990. http://digitalcommons.auctr.edu/dissertations/1700.

Full text
Abstract:
This thesis deals with a detailed algorithmic description of looped systems of nonserial dynamic programming. Using Pascal-type constructs, the feedforward and feedback looped systems are described as multistage dynamic programming processes. For such problems, the conventional techniques require enormous storage and CPU time due to the demands of dimensionality. This issue is addressed using a special technique in which the need for optimal decision storage is eliminated. To demonstrate the working of the technique the related algorithm is implemented using Pascal.
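The looped-system variants themselves are beyond a short sketch, but the serial backbone they extend is the standard backward recursion. Below is a minimal multistage resource-allocation example (invented data): each stage earns a return for the units it is allocated, and the recursion carries only the value table, not the decisions, echoing the storage-saving idea mentioned above.

```python
def serial_dp(stages, resource):
    """Backward recursion for a serial multistage allocation problem:
    maximize the sum of stage returns subject to a shared budget.
    `stages[i][a]` is the return for allocating `a` units at stage i."""
    value = [0.0] * (resource + 1)    # value with 0 stages left
    for stage in reversed(stages):
        new = [0.0] * (resource + 1)
        for x in range(resource + 1):
            # best split: allocate `a` units here, x - a to later stages
            new[x] = max(stage[a] + value[x - a]
                         for a in range(min(x, len(stage) - 1) + 1))
        value = new
    return value[resource]

# Three stages, returns for allocating 0..3 units at each stage.
returns = [[0, 5, 8, 9], [0, 4, 9, 11], [0, 6, 7, 8]]
print(serial_dp(returns, 4))   # -> 20.0  (allocation 1, 2, 1)
```

Feedforward and feedback loops couple the stages through extra state, which blows up the tables; the technique described in the abstract avoids storing the optimal decisions to keep that growth in check.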
APA, Harvard, Vancouver, ISO, and other styles
43

雷應春 and Ying-chun Lui. "Lattice algorithms for multidimensional fields suitable for VLSI implementation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1989. http://hub.hku.hk/bib/B31208757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Silparcha, Udom. "Implementation of certain graph algorithms under a windowing environment." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/834633.

Full text
Abstract:
Graph theory is a relatively new way of thinking in mathematics. Graphs can model a number of different problems. Graph theory introduces solutions to many problems which human beings have faced since ancient times. A study of graphs will not be complete without an introduction to both theory and algorithms. Invention of tools for studying graphs is necessary in order to help people learn the theory and execute the algorithms. The study of graphs itself, by nature, needs graphical representation, which can give clearer images for a better understanding. A windowing environment was selected as an instrument for developing a device to study graphs because of its friendly Graphical User Interface.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
45

Marletta, Marco. "Theory and implementation of algorithms for Sturm-Liouville computations." Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Johnstone, Adrian Ivor Clive. "Development and implementation of real time image analysis algorithms." Thesis, Royal Holloway, University of London, 1989. http://repository.royalholloway.ac.uk/items/92c24902-3e72-4d25-9790-0e7eabd15469/1/.

Full text
Abstract:
This work concerns the development and implementation of real-time image processing algorithms. Such systems may be applied to industrial inspection problems, which typically require basic operations to be performed on 256 x 256 pixel images in 20 to 100 ms using systems costing less than about £20000. Building such systems is difficult because conventional processors executing at around 1 MIPS with conventional algorithms are some 2 orders of magnitude too slow. A solution to this is to use a closely coupled array processor such as the DAP, or CLIP4, which is designed especially for image processing. However, such a space-parallel architecture imposes its own structure on the problem, and this restricts the class of algorithms which may be efficiently executed to those exhibiting similar space parallelism, i.e. so-called 'parallel algorithms'. This thesis examines an alternative approach which uses a mix of conventional processors and high-speed hardware processors. A special frame store has been built for the acquisition and display of images stored in memory on a multiprocessor backplane. Also described are an interface to a host minicomputer, a bus interface to the system, and its use with some hardwired and microcoded processors. This system is compared to a single computer operating with a frame store optimised for image processing. The basic software and hardware system described in this thesis has been used in a factory environment for food-product inspection.
APA, Harvard, Vancouver, ISO, and other styles
47

Heyne, Benjamin. "Efficient CORDIC based implementation of selected signal processing algorithms." Aachen Shaker, 2008. http://d-nb.info/991790073/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lui, Ying-chun. "Lattice algorithms for multidimensional fields suitable for VLSI implementation /." [Hong Kong : University of Hong Kong], 1989. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12373515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Bui, Linda, and Malin Häggström. "Implementation of Autonomous Parking with Two Path Planning Algorithms." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254215.

Full text
Abstract:
In urban environments, many drivers find it difficult to park, calling for an autonomous system to step in. Two path planning algorithms, Hybrid A* and RRT* Reeds-Shepp, were compared in parking scenarios with a small model car, both in simulation and in implementation. Two scenarios were tested: the vehicle facing inwards and outwards from the parking space, the latter requiring the vehicle to change direction at some point. The Hybrid A* algorithm proved to be more reliable than RRT* Reeds-Shepp, since the latter generates random paths and often found longer, less optimal paths. RRT* Reeds-Shepp found a path in the forward scenario 9 out of 10 times and 4 out of 5 times in the reverse scenario. The planned paths were tested with the Stanley controller. The front wheels were used as guiding wheels both in forward and reverse driving, which worked well in simulation. In implementation, the first scenario matched the simulation quite closely. In the second scenario, however, the model cars were not able to follow the path when reversing with the front wheels as guiding wheels. In conclusion, neither algorithm clearly outshone the other; the outcome also depended on the scenario. However, it was not optimal to use the front axle as a reference in the reverse driving situation.
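The Stanley steering law used for path tracking is compact enough to sketch: the commanded steering angle combines the heading error with a term that steers toward the path in proportion to the cross-track error. Gains, limits and the sign convention below are assumptions for illustration, not the thesis's tuning.

```python
import math

def stanley_steering(heading_error, cross_track_error, speed,
                     gain=1.0, max_steer=0.6):
    """Stanley steering law: delta = heading error + atan2(k * e, v),
    clipped to the mechanical steering limit. `e` is the cross-track
    error measured at the guiding (front) axle."""
    delta = heading_error + math.atan2(gain * cross_track_error, speed)
    return max(-max_steer, min(max_steer, delta))

# Vehicle offset from the path but aligned with it: the controller
# commands a correction toward the path (sign convention assumed).
print(stanley_steering(0.0, 0.5, 2.0))
```

Because the cross-track term is referenced to the front axle, the law degrades when the vehicle reverses with the front wheels as guiding wheels, which matches the behaviour reported above.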
APA, Harvard, Vancouver, ISO, and other styles
50

MELLA, SILVIA. "ANALYSIS OF CRYPTOGRAPHIC ALGORITHMS AGAINST THEORETICAL AND IMPLEMENTATION ATTACKS." Doctoral thesis, Università degli Studi di Milano, 2018. http://hdl.handle.net/2434/546558.

Full text
Abstract:
This thesis deals with theoretical and implementation analysis of cryptographic functions. Theoretical attacks exploit weaknesses in the mathematical structure of the cryptographic primitive, while implementation attacks leverage information obtained from its physical implementation, such as leakage through physically observable parameters (side-channel analysis) or susceptibility to errors (fault analysis). In the area of theoretical cryptanalysis, we analyze the resistance of the Keccak-f permutations to differential cryptanalysis (DC). Keccak-f is used in different cryptographic primitives: Keccak (which defines the NIST standard SHA-3), Ketje and Keyak (which are currently at the third round of the CAESAR competition) and the authenticated encryption function Kravatte. In its basic version, DC makes use of differential trails, i.e. sequences of differences through the rounds of the primitive. The power of trails in attacks can be characterized by their weight. The existence of low-weight trails over all but a few rounds would imply a low resistance with respect to DC. We thus present new techniques to efficiently generate all 6-round differential trails in Keccak-f up to a given weight, in order to improve known lower bounds. The limit weight we can reach with these new techniques is very high compared to previous attempts in the literature for weakly aligned primitives. This allows us to improve the lower bound on 6 rounds from 74 to 92 for the four largest variants of Keccak-f. This result has been used by the authors of Kravatte to choose the number of rounds in their function. Thanks to their abstraction level, some of our techniques are applicable beyond Keccak-f, so we formalize them in a generic way. The presented techniques have been integrated into the KeccakTools and are publicly available. In the area of fault analysis, we present several results on differential fault analysis (DFA) on the block cipher AES.
Most DFA attacks exploit faults that modify the intermediate state or round key. Very few examples have been presented that leverage changes in the sequence of operations by reducing the number of rounds. In this direction, we present four DFA attacks that exploit faults that alter the sequence of operations during the final round. In particular, we show how DFA can be conducted when the main operations that compose the AES round function are corrupted, skipped or repeated during the final round. Another aspect of DFA we analyze is the role of the fault model in attacks. We study it from an information-theoretical point of view, showing that the knowledge that the attacker has of the injected fault is fundamental to mount a successful attack. In order to soften the a priori knowledge of the injection technique needed by the attacker, we present a new approach for DFA based on clustering, called J-DFA. The experimental results show that J-DFA allows the key to be successfully recovered both in the classical DFA scenario and when the model does not perfectly match the faults' effect. A peculiar result of this method is that, besides the preferred candidate for the key, it also provides the preferred models for the fault. This is a quite remarkable ability, because it furnishes precious information which can be used to analyze, compare and characterize different specific injection techniques on different devices. In the area of side-channel attacks, we improve and extend existing attacks against the RSA algorithm, known as partial key exposure attacks. These attacks on RSA show how it is possible to find the factorization of the modulus from the knowledge of some bits of the private key. We present new partial key exposure attacks when the countermeasure known as exponent blinding is used. We first improve known results for the common RSA setting by reducing the number of bits required or by simplifying the mathematical analysis.
Then we present novel attacks for RSA implemented using the Chinese Remainder Theorem, a scenario that has never been analyzed before in this context.
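The AES attacks themselves are too involved to reproduce here, but the core DFA principle can be sketched on a toy one-nibble "final round" (SubBytes then AddRoundKey, using the PRESENT S-box purely as a convenient 4-bit example): intersect, over several faulty encryptions, the round keys consistent with each correct/faulty ciphertext pair. The cipher model and all names are illustrative, not from the thesis.

```python
import random

# 4-bit PRESENT S-box, used here only as a convenient example S-box.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def last_round(x, k, fault=0):
    """Toy AES-like final round on one nibble: SubBytes then AddRoundKey.
    `fault` is XORed into the state before the S-box."""
    return SBOX[x ^ fault] ^ k

def dfa_recover(k, fault=1, trials=4):
    """For each (correct, faulty) ciphertext pair, keep the round keys k'
    for which SOME internal state explains both observations, then
    intersect the candidate sets across trials."""
    random.seed(3)
    candidates = set(range(16))
    for _ in range(trials):
        x = random.randrange(16)             # unknown internal state
        c, c_f = last_round(x, k), last_round(x, k, fault)
        candidates &= {kk for kk in range(16)
                       if any(SBOX[x2] ^ kk == c and
                              SBOX[x2 ^ fault] ^ kk == c_f
                              for x2 in range(16))}
    return candidates

print(dfa_recover(k=0xB))    # a small candidate set containing the key 0xB
```

Each pair constrains the key through the S-box's difference distribution table (at most 4 candidates per pair for this S-box), so a handful of faulty encryptions isolates the round key, which is the same filtering logic the real AES attacks perform byte by byte.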
APA, Harvard, Vancouver, ISO, and other styles
