
Dissertations / Theses on the topic 'Cuppen Divide and Conquer'



Consult the top 50 dissertations / theses for your research on the topic 'Cuppen Divide and Conquer.'




1

Courtois, Jérôme. "Leak study of cryptosystem implementations in randomized RNS arithmetic." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS290.

Full text
Abstract:
We speak of a strong analysis when the analysis recovers the key of a cryptographic system, and of a weak analysis when candidate keys are merely eliminated. The goal of this thesis is essentially to understand the behaviour of the randomness of the Hamming distances produced by an ECC (Elliptic Curve Cryptography) cryptosystem when an RNS (Residue Number System) representation is used together with the random-moduli method. Chapter 2 introduces the notions needed to understand this document. It briefly introduces the modular multiplication algorithm (the Montgomery algorithm for RNS) that inspired the random-moduli method, then describes the algorithm that generates the sequences of Hamming distances needed for our analysis. It then shows what level of resistance the random-moduli method provides against several classical attacks such as DPA (Differential Power Analysis), CPA (Correlation Power Analysis), second-order DPA and MIA (Mutual Information Analysis). We provide an understanding of the distribution of the Hamming distances considered as random variables. Following this, we add a Gaussian hypothesis on the Hamming distances and use the MLE (Maximum Likelihood Estimator) together with a strong analysis, in the spirit of Template Attacks, to obtain a fine understanding of the level of randomness brought by the random-moduli method. Chapter 3 follows naturally from the conclusions of Chapter 2 on the Gaussian hypothesis: if attacks are possible with the MLE, there probably exist strong relationships between the Hamming distances viewed as random variables. Section 3.2 seeks to quantify this level of dependence between the Hamming distances. Then, still under the Gaussian hypothesis, we observe that it is possible to build a variant of DPA, which we call square DPA, relying on the covariance instead of the mean used in classical DPA; it remains, however, very demanding in observation traces, especially since many ECC protocols use a key only once. Section 3.4 shows that information remains in a small number of Hamming-distance traces despite the randomization of the moduli; to do so, we perform an MLE conditioned on one of the Hamming distances, combined with a weak analysis. The final Chapter 4 begins by briefly introducing the algorithmic choices made to solve the inversion of the (symmetric positive definite) covariance matrices of Section 3.2 and the analysis of the strong relationships between the Hamming distances in Section 3.2. We use Graphics Processing Unit (GPU) tools on a very large number of small matrices, i.e. batch computing. The LDLt method presented at the beginning of this chapter proved insufficient to completely solve the conditioned-MLE problem of Section 3.4. We present work on improving a divide-and-conquer diagonalization code for tridiagonal matrices developed by Lokmane Abbas-Turki and Stéphane Graillat: a generalization of this code, optimizations of the computation time, and an improvement of the robustness of single-precision computations for matrices of size up to 32.
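The tridiagonal divide-and-conquer diagonalization mentioned in this abstract is, at its core, Cuppen's scheme: split the tridiagonal matrix into two half-size tridiagonal blocks plus a rank-one correction, solve the halves recursively, and recover the full spectrum from a secular equation. The sketch below is a generic, single-threaded Python/NumPy illustration of that scheme, not the GPU batch code of Abbas-Turki and Graillat; it assumes no deflation handling (nonzero coupling, well-separated eigenvalues), uses plain bisection on the secular equation, and recovers eigenvectors with the naive rank-one formula.

```python
import numpy as np

def cuppen_tridiag_eig(d, e, base=8):
    """Eigen-decomposition of a symmetric tridiagonal matrix (diagonal d,
    off-diagonal e) by a Cuppen-style divide-and-conquer recursion.
    Illustrative sketch: no deflation, crude bisection secular solver."""
    d = np.asarray(d, dtype=float).copy()
    e = np.abs(np.asarray(e, dtype=float))       # sign flips do not change the spectrum
    n = d.size
    if n <= base:                                 # base case: dense eigensolver
        T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
        return np.linalg.eigh(T)
    k = n // 2
    rho = e[k - 1]                                # coupling entry removed by the split
    d1, d2 = d[:k].copy(), d[k:].copy()
    d1[-1] -= rho                                 # T = diag(T1', T2') + rho * v v^T
    d2[0] -= rho
    w1, Q1 = cuppen_tridiag_eig(d1, e[:k - 1], base)
    w2, Q2 = cuppen_tridiag_eig(d2, e[k:], base)
    dd = np.concatenate([w1, w2])
    z = np.concatenate([Q1[-1, :], Q2[0, :]])     # z = Q^T v with v = e_k + e_{k+1}
    order = np.argsort(dd)
    dd, z = dd[order], z[order]
    # Secular equation: f(lam) = 1 + rho * sum(z_i^2 / (dd_i - lam)) = 0,
    # one root in each interval (dd_i, dd_{i+1}) and one above dd_max (rho > 0).
    f = lambda lam: 1.0 + rho * np.sum(z ** 2 / (dd - lam))
    uppers = np.append(dd[1:], dd[-1] + rho * np.dot(z, z) + 1.0)
    w = []
    for lo, hi in zip(dd, uppers):
        a = lo + 1e-12 * max(1.0, abs(lo))
        b = hi - 1e-12 * max(1.0, abs(hi))
        for _ in range(60):                       # bisection: f < 0 near lo, f > 0 near hi
            mid = 0.5 * (a + b)
            a, b = (mid, b) if f(mid) < 0.0 else (a, mid)
        w.append(0.5 * (a + b))
    w = np.array(w)
    # Eigenvectors of D + rho z z^T are proportional to (D - lam I)^{-1} z;
    # back-transform by the block-diagonal Q (numerically naive, fine for a sketch).
    Qblock = np.zeros((n, n))
    Qblock[:k, :k], Qblock[k:, k:] = Q1, Q2
    Qblock = Qblock[:, order]
    U = np.column_stack([z / (dd - lam) for lam in w])
    U /= np.linalg.norm(U, axis=0)
    return w, Qblock @ U

# quick check against LAPACK, e.g.:
#   d, e = np.random.rand(200), np.random.rand(199)
#   w, _ = cuppen_tridiag_eig(d, e)
#   assert np.allclose(w, np.linalg.eigvalsh(np.diag(d) + np.diag(e, 1) + np.diag(e, -1)))
```

A production implementation (as in the thesis) batches many such small problems on the GPU and must additionally deal with deflation and single-precision robustness, which this sketch deliberately ignores.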
APA, Harvard, Vancouver, ISO, and other styles
2

Pantawongdecha, Payut. "Autotuning divide-and-conquer matrix-vector multiplication." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105968.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 73-75).
Divide and conquer is an important concept in computer science. It is used ubiquitously to simplify and speed up programs. However, it needs to be optimized, with respect to parameter settings for example, in order to achieve the best performance. The problem boils down to searching for the best implementation choice on a given set of requirements, such as which machine the program is running on. The goal of this thesis is to apply and evaluate the Ztune approach [14] on serial divide-and-conquer matrix-vector multiplication. We implemented Ztune to autotune serial divide-and-conquer matrix-vector multiplication on machines with different hardware configurations, and found that Ztune-optimized codes ran 1%-5% faster than the hand-optimized counterparts. We also compared Ztune-optimized results with other matrix-vector multiplication libraries, including the Intel Math Kernel Library and OpenBLAS. Since matrix-vector multiplication is a level-2 BLAS problem, it is not as computationally intensive as level-3 BLAS problems such as matrix-matrix multiplication and stencil computation. As a result, the measurement in matrix-vector multiplication is more prone to error from factors such as noise, cache alignment of the matrix, and cache states, which lead to wrong decision choices for Ztune. We explored multiple options to get more accurate measurements and demonstrated the techniques that remedied these issues. Lastly, we applied the Ztune approach to matrix-matrix multiplication, and we were able to achieve a 2%-85% speedup compared to the hand-tuned code. This thesis represents joint work with Ekanathan Palamadai Natarajan.
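The divide-and-conquer formulation being tuned can be pictured as a recursion that splits the matrix into quadrants and the vectors in half, with the base-case cutoff as the main tuning knob. The sketch below is a minimal serial illustration of that structure; the `base` parameter is an illustrative stand-in for the cutoff that an autotuner such as Ztune would choose per machine, and it is not the thesis code.

```python
import numpy as np

def dc_matvec(A, x, y, base=64):
    """Accumulate y += A @ x by divide and conquer: split A into quadrants
    and recurse. 'base' is the cutoff below which a direct product is used;
    an autotuner would pick it (and the split strategy) per machine."""
    m, n = A.shape
    if m <= base and n <= base:
        y += A @ x                                  # base case: direct multiplication
        return
    i, j = m // 2, n // 2
    dc_matvec(A[:i, :j], x[:j], y[:i], base)        # top-left quadrant
    dc_matvec(A[:i, j:], x[j:], y[:i], base)        # top-right quadrant
    dc_matvec(A[i:, :j], x[:j], y[i:], base)        # bottom-left quadrant
    dc_matvec(A[i:, j:], x[j:], y[i:], base)        # bottom-right quadrant

# usage: A = np.random.rand(500, 300); x = np.random.rand(300)
#        y = np.zeros(500); dc_matvec(A, x, y)      # y == A @ x (up to rounding)
```

Because the slices are NumPy views, the recursion writes into the caller's output vector in place, which mirrors how a cache-oblivious implementation reuses its output buffer.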
by Payut Pantawongdecha.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
3

Jewell, Sean William. "Divide and conquer sequential Monte Carlo for phylogenetics." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54514.

Full text
Abstract:
Recently, reconstructing evolutionary histories has become a computational challenge due to the increased availability of genetic sequencing data and relaxations of classical modelling assumptions. This thesis specializes a divide-and-conquer sequential Monte Carlo (DCSMC) inference algorithm to phylogenetics to address these challenges. In phylogenetics, the tree structure used to represent evolutionary histories provides the model decomposition used for DCSMC. In particular, speciation events are used to recursively decompose the model into subproblems. Each subproblem is approximated by an independent population of weighted particles, which are merged and propagated to create an ancestral population. This approach provides the flexibility to relax classical assumptions on large trees by parallelizing these recursions.
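The merge step described above can be pictured as combining two weighted child populations into one ancestral population. The following is only a schematic sketch of such a merge; `joint_potential` is a hypothetical callable standing in for the phylogenetic likelihood contribution at the ancestral node, and the actual DC-SMC weight update (with proposal corrections and normalizing-constant estimates) is more careful than this.

```python
import numpy as np

rng = np.random.default_rng(0)

def merge_populations(parts_left, w_left, parts_right, w_right, joint_potential, n_out):
    """One illustrative DC-SMC-style merge: pair particles from two child
    populations, reweight with a joint potential for the ancestral node,
    and resample to an equally weighted ancestral population."""
    i = rng.choice(len(parts_left), size=n_out, p=w_left / w_left.sum())
    j = rng.choice(len(parts_right), size=n_out, p=w_right / w_right.sum())
    merged = [(parts_left[a], parts_right[b]) for a, b in zip(i, j)]
    w = np.array([joint_potential(l, r) for l, r in merged], dtype=float)
    w /= w.sum()
    keep = rng.choice(n_out, size=n_out, p=w)       # multinomial resampling
    return [merged[k] for k in keep], np.full(n_out, 1.0 / n_out)
```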
Science, Faculty of
Statistics, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
4

Piper, Andrew James. "Object-oriented divide-and-conquer for parallel processing." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Scardillo, Mike, and Mike Nisel. "Divide and Conquer: Improving Post-Flight Data Processing." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/608595.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California
This paper describes Dryden Flight Research Center's (DFRC's) transition from a mainframe-oriented post-flight data processing system, heavily dependent upon manual operation and scheduling, to a modern, distributed, highly automated system. After developing requirements and a concept development plan, DFRC replaced one multiple-CPU mainframe with five specialized servers, distributing the processing workload and separating functions. Access to flight data was improved by buying and building client-server automated retrieval software that takes advantage of the local area network, and by providing over 500 gigabytes of on-line archival storage space. Engineering customers see improved access times and continuous availability (7 days per week, 24 hours per day) of flight research data. A significant reduction in computer operator workload was achieved, and minimal computer operator intervention is now required for flight data retrieval operations. This new post-flight system architecture was designed and built to provide flexibility, extensibility and cost-effective upgradeability. Almost two years of successful operation have proven the viability of the system. Future improvements will focus on decreasing the elapsed time between raw data capture and engineering unit data archival, increasing the on-line archival storage capacity, and decreasing the automated data retrieval response time.
APA, Harvard, Vancouver, ISO, and other styles
6

Khoshfetrat, Pakazad Sina. "Divide and Conquer: Distributed Optimization and Robustness Analysis." Doctoral thesis, Linköpings universitet, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117503.

Full text
Abstract:
As control of large-scale complex systems has become more and more prevalent, so has the need for analyzing such controlled systems. This is particularly due to the fact that many of the control design approaches tend to neglect intricacies in such systems, e.g., uncertainties, time delays, nonlinearities, so as to simplify the design procedure. Robustness analysis techniques allow us to assess the effect of such neglected intricacies on performance and stability. Performing robustness analysis commonly requires solving an optimization problem. However, the number of variables of this optimization problem, and hence the computational time, scales badly with the dimension of the system. This limits our ability to analyze large-scale complex systems in a centralized manner. In addition, certain structural constraints, such as privacy requirements or geographical separation, can prevent us from even forming the analysis problem in a centralized manner. In this thesis, we address these issues by exploiting structures that are common in large-scale systems and/or their corresponding analysis problems. This enables us to reduce the computational cost of solving these problems both in a centralized and distributed manner. In order to facilitate distributed solutions, we employ or design tailored distributed optimization techniques. Particularly, we propose three distributed optimization algorithms for solving the analysis problem, which provide superior convergence and/or computational properties over existing algorithms. Furthermore, these algorithms can also be used for solving general loosely coupled optimization problems that appear in a variety of fields ranging from control, estimation and communication systems to supply chain management and economics.
APA, Harvard, Vancouver, ISO, and other styles
7

Yu, Fangqing. "A divide-and-conquer method for 3D capacitance extraction." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/166.

Full text
Abstract:
This thesis describes a divide-and-conquer algorithm to improve the 3D boundary element method (BEM) for capacitance extraction. We divide large interconnect structures into small sections, set new boundary conditions using the border for each section, solve each section, and then combine the results to derive the capacitance. The target application is critical nets where 3D accuracy is required. The new algorithm is a significant improvement over the traditional BEMs and their enhancements, such as the "window" method, where conductors far away are dropped, and the "shield" method, where conductors hidden behind other conductors are dropped. Experimental results show that our algorithm is 25 times faster than the traditional BEM and 5 times faster than the window+shield method, for medium to large structures. The error of the capacitance computed by the new algorithm is within 2% for self capacitance and 7% for coupling capacitance, compared with the results obtained by solving the entire system using BEM. Furthermore, our algorithm gives accurate distributed RC, which none of the previous 3D BEM algorithms and their enhancements can.
APA, Harvard, Vancouver, ISO, and other styles
8

Moinuddin, Md. "A divide and conquer approach for large spatial dataset." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3425417.

Full text
Abstract:
In recent times, the rise of 'big data' has brought major computational challenges to all the main disciplines of scientific research, including spatial statistics. Some of these challenges concern parameter estimation and the quantification of estimation uncertainty, which impose an important computational load when building statistical models on big data. Many methods have been proposed to address these challenges, such as dimension reduction, approximation by Markov random fields, tapering of the covariance matrix, and subsampling-based approaches. In this thesis a new divide-and-conquer approach, called farmer, is proposed for providing effect-size and standard-error estimates in spatial models for big data. According to the proposed approach, all observations are divided into mutually exclusive blocks according to their position. For each block, the model parameters are estimated and then recombined using a fixed- or random-effects meta-model to take the (possible) spatial dependence into account. This general method can be applied to a wide range of spatial models; as an example, a linear Gaussian spatial model is considered. In a simulation study, the farmer estimators were compared with estimators based on methods with similar sampling ideas, and two applications with real data are presented in the Gaussian setting. The proposed method appears computationally efficient compared with equivalent methods and has lower bias in the estimates. Furthermore, the proposed approach provides a more realistic estimate of the standard errors. Finally, we propose an application of the method to generalized linear spatial models for simulated and real count data.
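The block-then-recombine structure described above can be sketched in a few lines. The helper names below (`fit_block`, the quantile-based blocking, and the inverse-variance recombination rule) are illustrative assumptions rather than the thesis' farmer code; a random-effects meta-model would replace the simple fixed-effect weighting shown here.

```python
import numpy as np

def farmer_style_estimate(coords, y, X, n_blocks, fit_block):
    """Divide-and-conquer spatial estimation sketch: split observations into
    disjoint blocks by position (here, quantiles of the first coordinate),
    fit the model in each block with a user-supplied 'fit_block' returning
    (estimates, standard errors), and recombine with fixed-effect
    inverse-variance weights."""
    qs = np.quantile(coords[:, 0], np.linspace(0, 1, n_blocks + 1))
    labels = np.clip(np.searchsorted(qs, coords[:, 0], side="right") - 1, 0, n_blocks - 1)
    betas, ses = [], []
    for b in range(n_blocks):
        idx = labels == b
        beta_b, se_b = fit_block(y[idx], X[idx], coords[idx])   # per-block estimation
        betas.append(beta_b)
        ses.append(se_b)
    betas, ses = np.asarray(betas), np.asarray(ses)
    w = 1.0 / ses ** 2                                          # inverse-variance weights
    beta_hat = (w * betas).sum(axis=0) / w.sum(axis=0)
    se_hat = np.sqrt(1.0 / w.sum(axis=0))                       # fixed-effect meta SE
    return beta_hat, se_hat
```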
APA, Harvard, Vancouver, ISO, and other styles
9

Esmaeili, Javad. "Parallel implementation of funtional languages using divide-and-conquer strategies." Thesis, University of Salford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Simpson, Leonie Ruth. "Divide and conquer attacks on shift register based stream ciphers." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Valenzuela, Christine Lesley. "Evolutionary divide and conquer : a novel genetic approach to the TSP." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Gatlin, Kang Su. "Portable high performance programming via architecture-cognizant divide-and-conquer algorithms /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2000. http://wwwlib.umi.com/cr/ucsd/fullcit?p9984305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Mäser, Fabian. "Divide and conquer in game tree search : algorithms, software and case studies /." [S.l.] : [s.n.], 2001. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Gallant, Roger. "Implementation of parallel and divide and conquer algorithms in DeFT (LCGTO-DF)." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/9462.

Full text
Abstract:
The program DeFT is an implementation of Density Functional Theory (DFT). The formalism used in this program is the linear combination of Gaussian-type orbitals (LCGTO). In this approach, there exists a method for the variational fitting of the density. Exchange-correlation is fitted on a numerical grid. It is these fitting procedures that reduce the formal scaling of LCGTO-DF from N⁴ to N³. The contribution to the total energy from the Coulomb repulsion integrals and the exchange-correlation integrals is the current bottleneck of DFT calculations. The parallelization of the three-center two-electron integrals should lead to improved efficiency. The use of the PVM communication package provides a relatively simple means of transforming serial code into parallel code. An advantage of using PVM is that it is portable to a large variety of architectures. This allows PVM to act as a harness between clusters of workstations. The parallel implementation of DeFT is discussed. A divide-and-conquer implementation of LCGTO-DF is incorporated in DeFT. Yang's density and density-matrix formulations are incorporated into this scheme: the former is used for the fitting of the XC potential while the latter is used for the fitting of the charge density. This type of algorithm should achieve linear scaling for very large systems. Test calculations with the glycine heptapeptide demonstrate the errors to be expected from the introduction of the DAC approach. The use of buffer spaces is also examined in order to reduce the errors introduced by the DAC algorithm.
APA, Harvard, Vancouver, ISO, and other styles
15

Choi, Kwangbom. "P-Coffee a new divide-and-conquer method for multiple sequence alignment /." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-01182005-060947/.

Full text
Abstract:
We describe a new divide-and-conquer method, P-Coffee, for alignment of multiple sequences. P-Coffee first identifies candidate alignment columns using a position-specific substitution matrix (the T-Coffee extended library), tests those columns, and accepts only qualified ones. Accepted columns not only constitute a final alignment solution, but also divide a given sequence set into partitions. The same procedure is recursively applied to each partition until all the alignment columns are collected. In P-Coffee, we minimized the sources of bias by aligning all the sequences simultaneously without requiring any heuristic function to optimize, phylogenetic tree, or gap-cost scheme. In this research, we show the performance of our approach by comparing our results with those of T-Coffee using the 144 test sets provided in BAliBASE v1.0. P-Coffee outperformed T-Coffee in accuracy, especially for the more complicated test sets.
APA, Harvard, Vancouver, ISO, and other styles
16

Bram, Justin Gary. "A "divide and conquer" strategy for NDE signal inversion in gas transmission pipelines /." Full text available online, 2006. http://www.lib.rowan.edu/find/theses.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Caimi, Gabrio Curzio. "Algorithmic decision support for train scheduling in a large and highly utilised railway network." Aachen Shaker, 2009. http://d-nb.info/998740608/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Maclaren, Heather. "A divide and conquer approach to using inductive logic programming for learning user models." Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428450.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Prabhu, Haladi Ramanatha Sachin. "A network component analysis based divide and conquer method for transcriptional regulatory network analysis." Thesis, University of Sheffield, 2019. http://etheses.whiterose.ac.uk/22640/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Thompson, Martin Peter. "Increasing robustness, generalisation, and confidence in modular neural networks using a divide-and-conquer strategy." Thesis, University of Reading, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Needham, Perri. "Enhancing the capabilities of computational chemistry using GPU technology." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/enhancing-the-capabilities-of-computational-chemistry-using-gpu-technology(0988c19e-cc1a-443f-b82f-0c5fe0422d0b).html.

Full text
Abstract:
Three key enhancements were made to a semiempirical molecular orbital program to develop a fast, accurate method of calculating chemical properties of large (> 1000 atom) molecular systems, through the use of quantum theory. In this thesis the key enhancements are presented, which are: the implementation of a divide-and-conquer approach to a self-consistent field procedure, in an effort to improve capability; the use of novel GPU technology to parallelize the divide-and-conquer self-consistent field procedure, in an effort to improve the speed; and the implementation of a newly developed semiempirical model, the Polarized Molecular Orbital Model, in an effort to improve the accuracy. The development of a divide-and-conquer approach to the SCF procedure (DC-SCF) (enhancement 1) was carried out using saturated hydrocarbon chains, whereby the saturated hydrocarbon chain is partitioned into small overlapping subsystems and the Roothaan equations solved for each subsystem. An investigation was carried out to find the optimal partitioning scheme for saturated hydrocarbon chains in order to minimize the loss of energy experienced from neglecting some of the interactions in the system whilst maintaining near-linear scaling with system size. The DC-SCF procedure was shown to be accurate to 10⁻³ kcal mol⁻¹ per atom whilst calculating the SCF energy nearly 6 times faster than the standard SCF procedure, for a 698-atom system. The development of a parallel DC-SCF procedure and Cartesian forces calculation for use on a GPU (enhancement 2) resulted in a hybrid CPU/GPU DC-SCF implementation that calculated the energy of a 1997-atom saturated hydrocarbon chain 21 times faster than the standard serial SCF implementation, and an accelerated Cartesian forces calculation that performed 7 times faster for a saturated hydrocarbon chain of 1205 atoms, when accelerated using an NVidia Tesla C2050 GPU. The hybrid CPU/GPU algorithm made use of commercially available accelerated linear algebra libraries, CULA and CUBLAS. A comparison was made between CULA's accelerated eigensolver routine and the accelerated DC-eigensolver (developed in this research), and it was found that for saturated hydrocarbon chains of > 350 atoms, the accelerated DC-eigensolver performed around twice as fast as the accelerated CULA eigensolver. The implementation of the Polarized Molecular Orbital (PMO) model (enhancement 3) was validated against published isomerization energies and benchmarked against the non-nitrogen-containing complexes in the S66 database. The benchmark complexes were categorized according to dominant intermolecular interactions, namely hydrogen bonding, dispersion interactions and mixed interactions. After assessment it was found that the PMO model predicts interaction energies of complexes with a mixture of dispersive and electrostatic interactions to the highest accuracy (0.69 kcal mol⁻¹ with respect to CCSD(T)/CBS). The dispersion correction within the PMO model was found to 'overcorrect' the dispersive contribution for most complexes tested. The outcome of this research is a semiempirical molecular orbital program that calculates the energy of a closed-shell saturated hydrocarbon chain of ~2000 atoms in under 4 minutes instead of 1.5 hours when using a PM3 Hamiltonian, and can calculate interaction energies of systems exhibiting a mixture of electrostatic and dispersive interactions to an accuracy of within 1 kcal mol⁻¹ (relative to high-level quantum methods).
To demonstrate a suitable application for the enhanced SE-MO program, interaction energies of a series of PAHs with water, phenol and methanol have been investigated. The resultant program is suitable for use in calculating the energy and forces of large material and (in future) biological systems by a fast and accurate method that would be impractical or impossible to do without these enhancements.
APA, Harvard, Vancouver, ISO, and other styles
22

Hansson, Erik. "A Case Study of Semi-Automatic Parallelization of Divide and Conquer Algorithms Using Invasive Interactive Parallelization." Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18365.

Full text
Abstract:

Since computers supporting parallel execution have become more and more common in recent years, especially on the consumer market, the need for methods and tools for parallelizing existing sequential programs has increased greatly. Today there exist different methods of achieving this, in more or less user-friendly ways. We have looked at one method, Invasive Interactive Parallelization (IIP), applied to a specific problem area, divide and conquer algorithms, and performed a case study. This case study shows that by using IIP, sequential programs can be parallelized both for shared- and distributed-memory machines. We have focused on parallelizing Quick Sort for OpenMP and MPI environments using a tool, Reuseware, which is based on the concepts of Invasive Software Composition.
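The parallelization opportunity in quicksort is the classic divide-and-conquer one: after partitioning, the two recursive calls are independent and can run concurrently. The following is a small Python sketch of that structure using a process pool; it only mirrors the shape of the parallelism that IIP would annotate, and is not the Reuseware/OpenMP/MPI tooling used in the thesis.

```python
from concurrent.futures import ProcessPoolExecutor

def quicksort(xs):
    """Plain divide-and-conquer quicksort (the sequential kernel)."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    return quicksort(lo) + eq + quicksort(hi)

def parallel_quicksort(xs, workers=2):
    """Illustrative shared-memory parallelization: partition once at the top
    level and sort the two independent halves in separate worker processes."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        left, right = pool.map(quicksort, [lo, hi])   # the two halves run concurrently
    return left + eq + right

# if __name__ == "__main__":
#     print(parallel_quicksort([5, 2, 9, 1, 7, 3]))
```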

APA, Harvard, Vancouver, ISO, and other styles
23

Fachin, M. P. G. "The divide-and-conquer method for the solution of the symmetric tridiagonal eigenproblem and transputer implementations." Thesis, University of Kent, 1994. https://kar.kent.ac.uk/21194/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Warschkow, Oliver. "A divide-and-conquer implementation of the discrete variational DFT method for large molecular and solid systems." Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284652.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Chenu, Alexandre. "Leveraging sequentiality in Robot Learning : Application of the Divide & Conquer paradigm to Neuro-Evolution and Deep Reinforcement Learning." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS342.

Full text
Abstract:
"Pour réussir, il ne suffit pas de prévoir, il faut aussi savoir improviser." Cette citation d’Isaac Asimov, père fondateur de la robotique et auteur des Trois lois de la robotique, souligne toute l’importance d’être capable de s’adapter et d’agir dans l’instant présent pour réussir. Même si, aujourd’hui, les robots peuvent résoudre des tâches d’une complexité qui était inimaginable il y a encore quelques années, ces capacités d’adaptation leur font encore défaut, ce qui les empêche d’être déployé à une plus grande échelle. Pour remédier à ce manque d’adaptabilité, les roboticiens utilisent des algorithmes d’apprentissage afin de permettre aux robots de résoudre des tâches complexes de manière autonome. Deux types d’algorithmes d’apprentissage sont particulièrement adaptés à l’apprentissage autonome de contrôleurs par les robots : l’apprentissage profond par renforcement et la neuro-évolution. Cependant, ces deux classes d’algorithmes ne sont capables de résoudre des problèmes d’exploration difficiles, c’est-à- dire des problèmes avec un horizon long et un signal de récompense rare, que s’ils sont guidés dans leur processus d’apprentissage. Différentes approches peuvent être envisagées pour permettre à un robot de résoudre un tel problème sans être guidé. Une première approche consiste à rechercher une diversité de comportements plutôt qu’un comportement spécifique. L’idée étant que parmi cette diversité, certains comportements seront probablement capables de résoudre la tâche qui nous intéresse. Nous les appelons les algorithmes de recherche de diversité. Une deuxième approche consiste à guider le processus d’apprentissage en utilisant des démonstrations fournies par un expert. C’est ce qu’on appelle l’apprentissage par démonstration. Cependant, chercher des comportements divers ou apprendre par démonstration peut être inefficace dans certains contextes. En effet, la recherche de comportements divers peut être fastidieuse si l’environnement est complexe. D’autre part, l’apprentissage à partir d’une seule et unique démonstration peut être très difficile. Dans cette thèse, nous tentons d’améliorer l’efficacité des approches de recherche par diversité et d’apprentissage à partir d’une seule démonstration dans des problèmes d’exploration difficiles. Pour ce faire, nous supposons que les comportements robotiques complexes peuvent être décomposés en sous-comportements plus simples. Sur la base de ce biais séquentiel, nous adoptons une stratégie dite de "diviser-pour-régner", qui est bien connue pour être efficace lorsque le problème est composable. Nous proposons deux approches en particulier. Premièrement, après avoir identifié certaines limites des algorithmes de recherche de diversité basés sur la l’évolution de réseaux de neurones artificiels, nous proposons Novelty Search Skill Chaining. Cet algorithme combine la recherche de diversité avec l’enchaînement de compétences pour naviguer efficacement dans des labyrinthes qui sont difficiles à explorer pour des algorithmes de l’état-de-l’art. Dans une deuxième série de contributions, nous proposons les algorithmes Divide & Conquer Imitation Learning. L’intuition derrière ces méthodes est de décomposer la tâche complexe d’apprentissage à partir d’une seule démonstration en plusieurs sous-tâches plus simples consistant à atteindre des sous-buts successifs. DCIL-II, la variante la plus avancée, est capable d’apprendre des comportements de marche pour des robots humanoïdes sous-actionnés avec une efficacité sans précédent. 
Au-delà de souligner l’efficacité du paradigme de diviser-pour-régner dans l’apprentissage des robots, cette thèse met également en évidence les difficultés qui peuvent survenir lorsqu’on compose de comportements, même dans des environnements élémentaires. Il faudra inévitablement résoudre ces difficultés avant d’appliquer ces algorithmes directement à des robots réels. C’est peut-être une condition nécessaire pour le succès des prochaines générations [...]
“To succeed, planning alone is insufficient. One must improvise as well.” This quote from Isaac Asimov, founding father of robotics and author of the Three Laws of Robotics, emphasizes the importance of being able to adapt and think on one’s feet to achieve success. Although robots can nowadays resolve highly complex tasks, they still need to gain those crucial adaptability skills to be deployed on a larger scale. Robot Learning uses learning algorithms to tackle this lack of adaptability and to enable robots to solve complex tasks autonomously. Two types of learning algorithms are particularly suitable for robots to learn controllers autonomously: Deep Reinforcement Learning and Neuro-Evolution. However, both classes of algorithms often cannot solve Hard Exploration Problems, that is problems with a long horizon and a sparse reward signal, unless they are guided in their learning process. One can consider different approaches to tackle those problems. An option is to search for a diversity of behaviors rather than a specific one. The idea is that among this diversity, some behaviors will be able to solve the task. We call these algorithms Diversity Search algorithms. A second option consists in guiding the learning process using demonstrations provided by an expert. This is called Learning from Demonstration. However, searching for diverse behaviors or learning from demonstration can be inefficient in some contexts. Indeed, finding diverse behaviors can be tedious if the environment is complex. On the other hand, learning from demonstration can be very difficult if only one demonstration is available. This thesis attempts to improve the effectiveness of Diversity Search and Learning from Demonstration when applied to Hard Exploration Problems. To do so, we assume that complex robotics behaviors can be decomposed into reaching simpler sub-goals. Based on this sequential bias, we try to improve the sample efficiency of Diversity Search and Learning from Demonstration algorithms by adopting Divide & Conquer strategies, which are well-known for their efficiency when the problem is composable. Throughout the thesis, we propose two main strategies. First, after identifying some limitations of Diversity Search algorithms based on Neuro-Evolution, we propose Novelty Search Skill Chaining. This algorithm combines Diversity Search with Skill- Chaining to efficiently navigate maze environments that are difficult to explore for state-of-the-art Diversity Search. In a second set of contributions, we propose the Divide & Conquer Imitation Learning algorithms. The key intuition behind those methods is to decompose the complex task of learning from a single demonstration into several simpler goal-reaching sub-tasks. DCIL-II, the most advanced variant, can learn walking behaviors for under-actuated humanoid robots with unprecedented efficiency. Beyond underlining the effectiveness of the Divide & Conquer paradigm in Robot Learning, this work also highlights the difficulties that can arise when composing behaviors, even in elementary environments. One will inevitably have to address these difficulties before applying these algorithms directly to real robots. It may be necessary for the success of the next generations of robots, as outlined by Asimov
APA, Harvard, Vancouver, ISO, and other styles
26

Schmitt, Daniel [Verfasser], and Stefan [Akademischer Betreuer] Näher. "Multithread Plattformen für Divide und Conquer und inkrementelle Algorithmen und Anwendungen in der Computational Geometry / Daniel Schmitt ; Betreuer: Stefan Näher." Trier : Universität Trier, 2014. http://d-nb.info/1197700161/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Oprisan, Sorinel. "A Multi-Dimensional Width-Bounded Geometric Separator and its Applications to Protein Folding." ScholarWorks@UNO, 2005. http://scholarworks.uno.edu/td/238.

Full text
Abstract:
We used a divide-and-conquer algorithm to recursively solve the two-dimensional problem of protein folding of an HP sequence with the maximum number of H-H contacts. We derived both lower and upper bounds for the algorithmic complexity by using the newly introduced concept of multi-directional width-bounded geometric separator. We proved that for a grid graph G with n grid points P, there exists a balanced separator A ⊆ P such that A has at most 1.02074√n points, and G − A has two disconnected subgraphs with at most (2/3)n nodes each. We also derive a 0.7555√n lower bound for our balanced separator. Based on our multi-directional width-bounded geometric separator, we found that there is an O(n^(5.563√n))-time algorithm for the 2D protein folding problem in the HP model. We also extended the upper bound results to rectangular and triangular lattices.
APA, Harvard, Vancouver, ISO, and other styles
28

Ni, Kai. "Tectonic smoothing and mapping." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41072.

Full text
Abstract:
Large-scale mapping has become the key to numerous applications, e.g. simultaneous localization and mapping (SLAM) for autonomous robots. Despite the success of many SLAM projects, there are still some challenging scenarios in which most current algorithms are not able to deliver an exact solution fast enough. One of these challenges is the size of SLAM problems, which has increased by several orders of magnitude over the last decade. Another challenge for SLAM problems is the large amount of noise baked into the measurements, which often yields poor initializations and slows down or even breaks the optimization. Urban 3D reconstruction is another popular application for large-scale mapping and has recently received considerable attention from the computer vision community. High-quality 3D models are useful in various successful cartographic and architectural applications, such as Google Earth or Microsoft Live Local. At the heart of urban reconstruction problems is structure from motion (SfM). Due to the wide availability of cameras, especially on handheld devices, SfM is becoming a more and more crucial technique for handling large amounts of images. In this thesis, I present a novel batch algorithm, namely Tectonic Smoothing and Mapping (TSAM). I will show that the original SLAM graph can be recursively partitioned into multiple-level submaps using the nested dissection algorithm, which leads to the cluster tree, a powerful graph representation. By employing the nested dissection algorithm, the method greatly minimizes the dependencies between two subtrees, and the optimization of the original graph can be done using bottom-up inference along the corresponding cluster tree. To speed up the computation, a base node is introduced for each submap and is used to represent the rigid transformation of the submap in the global coordinate frame. As a result, the optimization moves the base nodes rather than the actual submap variables. I will also show that TSAM can be successfully applied to the SfM problem, in which a hypergraph representation is employed to capture the pairwise constraints between cameras. The hierarchical partitioning based on the hypergraph not only yields a cluster tree as in the SLAM problem but also forces the resulting submaps to be nonsingular. I will demonstrate the TSAM algorithm on various simulated and real-world data sets.
APA, Harvard, Vancouver, ISO, and other styles
29

Qiu, Zhiquan Frank. "Advance the DNA computing." Texas A&M University, 2003. http://hdl.handle.net/1969/568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

DI, LIBERTO GIOVANNI. "NEW SEMICLASSICAL THEORIES FOR VIBRATIONAL SPECTROSCOPY." Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/612134.

Full text
Abstract:
The main goal of this doctoral work was to develop theoretical advances in semiclassical theory applied to molecular spectroscopy. In particular, attention was centered on the coherent-state-based Time-Averaging Semiclassical Initial Value Representation (TA-SCIVR) approximation to the vibrational spectral density. This approach is a solid way to obtain accurate vibrational spectra of molecular systems at a quantum-approximate level. Nevertheless, it is affected by some criticalities, such as numerical issues and the so-called curse-of-dimensionality problem. Both represent an important stumbling block for applying the methodology to molecules of increasing dimension and complexity, preventing its application to general problems in the field of vibrational spectroscopy. In my doctoral work we tried to face both issues, first taming the numerical issues of the spectral density by introducing analytic and numerical approximations, and later developing, with the group, Divide-and-Conquer Semiclassical dynamics (DC-SCIVR), a method which exploits the standard semiclassical formalism but works in reduced-dimensional subspaces, with the aim of overcoming the curse of dimensionality. The advances were first tested on simple molecules and then employed to study spectroscopically relevant molecules. The main results show that it is possible to recover vibrational spectra even for molecules affected by significant numerical issues, as well as high-dimensional ones, retaining the same accuracy as TA-SCIVR. In this thesis I first present some basics of semiclassical theory, with a focus on vibrational spectroscopy, and then the proposed advances are shown, with applications to some relevant molecular systems in vibrational spectroscopy, such as supramolecular systems made of water clusters and the protonated glycine dimer, or high-dimensional molecules such as benzene and C60.
APA, Harvard, Vancouver, ISO, and other styles
31

Alvin, Rydén Henrik, and Sara Sommarin. "Divide and Conquer : An in-depth study of the impact of the Swedish Leniency Program on the creation of cartels in the construction industry." Thesis, Linköpings universitet, Nationalekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130356.

Full text
Abstract:
This thesis deals with cartel formation, the leniency program, and the construction industry. The purpose is to examine whether the Swedish leniency program has the preconditions to deter cartel formation in the construction industry. To investigate this, the thesis also highlights the conditions under which cartels can operate in the industry. Using cartel theory, factors that may make it easier for cartels to operate are identified, and using leniency theory, factors in the program that may help counteract cartel formation are identified. Based on the theory and the factors found, the conditions in the construction industry are examined. Our results show that the industry is currently relatively stable, with good profit opportunities and barriers to entry, which suggests that cartel formation may be favoured and that cartels probably exist in the industry. The results also show that the leniency program appears able to deter cartel formation by creating distrust between potential cartel members. However, the effect seems to depend, at least in part, on the number of firms considering entering into a collaboration. It is therefore far from certain that the leniency program is equally deterrent in all parts of the industry. Above all, we can expect the leniency program to "divide and conquer" among cartels with many members operating in the parts of the construction industry that have been modernised.
APA, Harvard, Vancouver, ISO, and other styles
32

Pezzi, Guilherme Peretti. "Escalonamento Work-Stealing de programas Divisão-e-Conquista com MPI-2." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/8613.

Full text
Abstract:
In order to be portable and efficient on modern HPC architectures, the execution of a parallel program must be adaptable. This work shows how to achieve this in MPI, through the dynamic creation of processes, coupled with Divide-and-Conquer programming and a Work-Stealing strategy to balance the MPI processes, in a heterogeneous and/or dynamic environment, at runtime. The implementation of a Divide-and-Conquer application with MPI is explained, as well as the implementation of a Work-Stealing strategy. Experimental results are provided, based on a synthetic application, the N-Queens problem. Both the adaptability and the efficiency of the code are validated. The results show that it is possible to use a widely adopted standard such as MPI, even on HPC platforms that are not as homogeneous as a cluster.
APA, Harvard, Vancouver, ISO, and other styles
33

Ward, Paul. "A Scalable Partial-Order Data Structure for Distributed-System Observation." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1161.

Full text
Abstract:
Distributed-system observation is foundational to understanding and controlling distributed computations. Existing tools for distributed-system observation are constrained in the size of computation that they can observe by three fundamental problems. They lack scalable information collection, scalable data-structures for storing and querying the information collected, and scalable information-abstraction schemes. This dissertation addresses the second of these problems. Two core problems were identified in providing a scalable data structure. First, in spite of the existence of several distributed-system-observation tools, the requirements of such a structure were not well-defined. Rather, current tools appear to be built on the basis of events as the core data structure. Events were assigned logical timestamps, typically Fidge/Mattern, as needed to capture causality. Algorithms then took advantage of additional properties of these timestamps that are not explicit in the formal semantics. This dissertation defines the data-structure interface precisely, and goes some way toward reworking algorithms in terms of that interface. The second problem is providing an efficient, scalable implementation for the defined data structure. The key issue in solving this is to provide a scalable precedence-test operation. Current tools use the Fidge/Mattern timestamp for this. While this provides a constant-time test, it requires space per event equal to the number of processes. As the number of processes increases, the space consumption becomes sufficient to affect the precedence-test time because of caching effects. It also becomes problematic when the timestamps need to be copied between processes or written to a file. Worse, existing theory suggested that the space-consumption requirement of Fidge/Mattern timestamps was optimal. In this dissertation we present two alternate timestamp algorithms that require substantially less space than does the Fidge/Mattern algorithm.
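The Fidge/Mattern precedence test discussed above is a componentwise comparison of vector timestamps, which makes the space-versus-time trade-off concrete. A minimal sketch of that test, assuming each timestamp is simply a list with one counter per process:

```python
def happened_before(ts_a, ts_b):
    """Fidge/Mattern precedence test: event a precedes event b iff a's vector
    timestamp is componentwise <= b's and strictly smaller in at least one
    component. The test is cheap, but each timestamp stores one entry per
    process, which is the space cost the dissertation's alternative
    timestamp algorithms aim to reduce."""
    return (all(x <= y for x, y in zip(ts_a, ts_b))
            and any(x < y for x, y in zip(ts_a, ts_b)))

# e.g. happened_before([2, 1, 0], [3, 1, 0]) is True; the reverse test is False,
# and two timestamps failing both directions correspond to concurrent events.
```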
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Ning. "Shortest Path Queries in Very Large Spatial Databases." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1156.

Full text
Abstract:
Finding the shortest paths in a graph has been studied for a long time, and there are many main-memory based algorithms dealing with this problem. Among these, Dijkstra's shortest path algorithm is one of the most commonly used efficient algorithms for graphs with non-negative weights. Even more efficient algorithms have been developed recently for graphs with particular properties, such as edge weights that fall into a range of integers. All of the mentioned algorithms require the graph to reside entirely in main memory. However, for very large graphs, such as the digital maps managed by Geographic Information Systems (GIS), this requirement cannot be satisfied in most cases, so the algorithms mentioned above are not appropriate. My objective in this thesis is to design and evaluate the performance of external-memory (disk-based) shortest path algorithms and data structures to solve the shortest path problem in very large digital maps. In particular, the following questions are studied: What have other researchers done on shortest path queries in very large digital maps? What could be improved upon in the previous work? How efficient are our new shortest path algorithms on digital maps, and what factors affect their efficiency? What can be done based on the algorithm? In this thesis, we give a disk-based Dijkstra-like algorithm to answer shortest path queries based on pre-processing information. Experiments based on our Java implementation are given to show what factors affect the running time of our algorithms.
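For reference, the in-memory Dijkstra loop that the thesis adapts to external memory looks as follows. This sketch is the textbook priority-queue version for non-negative weights; the thesis' contribution, paging partitioned subgraphs and pre-processed information from disk, is not reproduced here.

```python
import heapq

def dijkstra(adj, source):
    """Textbook Dijkstra over an adjacency dict {u: [(v, weight), ...]} with
    non-negative edge weights; returns the map of shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                              # stale queue entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):    # relax the edge (u, v)
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# usage: dijkstra({"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0)]}, "a")
```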
APA, Harvard, Vancouver, ISO, and other styles
35

Akatov, Dmitri. "Exploiting parallelism in decomposition methods for constraint satisfaction." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531942.

Full text
Abstract:
Constraint Satisfaction Problems (CSPs) are NP-complete in general; however, there are many tractable subclasses that rely on restricting the structure of their underlying hypergraphs. It is a well-known fact, for instance, that CSPs whose underlying hypergraph is acyclic are tractable. Trying to define "nearly acyclic" hypergraphs led to the definition of various hypergraph decomposition methods. An important member of this class is the hypertree decomposition method, introduced by Gottlob et al. It possesses the property that CSPs falling into this class can be solved efficiently, and that hypergraphs in this class can be recognized efficiently as well. Apart from polynomial tractability, complexity analysis has shown that both aforementioned problems lie in the low complexity class LOGCFL and are thus moreover efficiently parallelizable. A parallel algorithm has been proposed for the "evaluation problem"; however, all algorithms for the "recognition problem" presented to date are sequential. The main contribution of this dissertation is the creation of an object-oriented programming library, including a task scheduler, which allows the parallelization of a whole range of computational problems fulfilling certain complexity-theoretic restrictions. This library merely requires the programmer to provide the implementation of several classes and methods, representing a general alternating algorithm, while the mechanics of the task scheduler remain hidden. In particular, we use this library to create an efficient parallel algorithm which computes hypertree decompositions of a fixed width. Another result of a more theoretical nature is the definition of a new type of decomposition method, called Balanced Decompositions. Solving CSPs of bounded balanced width and recognizing such hypergraphs is only quasi-polynomial, however still parallelizable to a certain extent. A complexity-theoretic analysis leads to the definition of a new complexity class hierarchy, called the DC-hierarchy, with the first class in this hierarchy, DC1, precisely capturing the complexity of solving CSPs of bounded balanced width.
APA, Harvard, Vancouver, ISO, and other styles
36

Vervoux, Cyril. "Contributions vers l'accélération de l'algorithme de Buchberger en combinant la méthode de coupure de Knuth-Schönhage et une approche de type sous-résultants." Limoges, 2002. http://www.unilim.fr/laco/theses/2002/T2002_01.pdf.

Full text
Abstract:
The goal of this thesis is the improvement of Gröbner basis computation. In this spirit, we propose a method combining subresultants with a divide-and-conquer strategy based on the Knuth-Schönhage cutting method. A major advance for Gröbner basis computation would be an implementation of such a method on an appropriate platform, such as Schönhage's multi-tape Turing machine, which is not handicapped by the limits of RAM. Many technical questions arise in this context. With the idea of ultimately using a Kronecker-style substitution to handle multivariate polynomials, we present a positive outlook for a future fast Gröbner basis algorithm, and the concrete foundations of such an algorithm in the univariate setting. The results are a subresultant algorithm for several polynomials, proved in the generic cases and working well experimentally in all cases (including non-generic ones), and a divide-and-conquer elimination algorithm over several polynomials that uses both the subresultant approach and a cutting method. An implementation in Magma shows the superiority of the combined subresultant and cutting-method approach, already for polynomials of degree around 700 in the case of three polynomials (and 1250 for five polynomials), and thus clearly demonstrates a strong potential for the future development of this approach. Finally, this thesis raises many open questions that must be resolved in order to reach the goal initially set.
APA, Harvard, Vancouver, ISO, and other styles
37

Atta-Asiamah, Ernest. "Distributed Inference for Degenerate U-Statistics with Application to One and Two Sample Test." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/31777.

Full text
Abstract:
In many hypothesis testing problems such as one-sample and two-sample tests, the test statistics are degenerate U-statistics. One of the challenges in practice is the computation of U-statistics for a large sample size. Besides, for degenerate U-statistics the limiting distribution is a mixture of weighted chi-squares involving the eigenvalues of the kernel of the U-statistic; as a result, it is not straightforward to construct the rejection region based on this asymptotic distribution. In this research, we aim to reduce the computational complexity of degenerate U-statistics and propose an easy-to-calibrate test statistic by using the divide-and-conquer method. Specifically, we randomly partition the full n data points into k_n disjoint groups of equal size, compute the U-statistic on each group, and combine them by averaging to get a statistic T_n. We prove that the statistic T_n has the standard normal distribution as its limiting distribution. In this way, the running time is reduced from O(n^m) to O(n^m / k_n^m), where m is the order of the one-sample U-statistic. Besides, for a given significance level α, it is easy to construct the rejection region. We apply our method to the goodness-of-fit test and the two-sample test. The simulation and real data analysis show that the proposed test can achieve high power and fast running time for both one- and two-sample tests.
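A rough sketch of the divide-and-conquer statistic described above, assuming Python/NumPy: the sample is randomly partitioned into k_n groups, an order-m U-statistic is computed on each group, and the group values are averaged. The example kernel and the omission of the centering/scaling needed for calibration are simplifications of this sketch, not details taken from the thesis.

import numpy as np
from itertools import combinations

def u_statistic(sample, kernel, m=2):
    # Plain order-m U-statistic: average of the kernel over all m-subsets, O(n^m).
    values = [kernel(*subset) for subset in combinations(sample, m)]
    return float(np.mean(values))

def divide_and_conquer_u(sample, kernel, k_n, m=2, seed=None):
    # T_n-style statistic: randomly partition the n points into k_n groups,
    # compute the U-statistic on each group, and average the group statistics.
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(np.asarray(sample))
    groups = np.array_split(shuffled, k_n)
    return float(np.mean([u_statistic(g, kernel, m) for g in groups]))

# Illustrative degenerate kernel h(x, y) = x * y under a mean-zero null hypothesis.
# t_n = divide_and_conquer_u(np.random.standard_normal(2000), lambda a, b: a * b, k_n=20)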
APA, Harvard, Vancouver, ISO, and other styles
38

Hu, Hai-Tao. "Diagramme de Voronoï généralisé pour un ensemble de polygones." Grenoble 1, 1991. http://tel.archives-ouvertes.fr/tel-00339655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Abrahamsson, Felix. "Designing a Question Answering System in the Domain of Swedish Technical Consulting Using Deep Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231586.

Full text
Abstract:
Question Answering systems are greatly sought after in many areas of industry. Unfortunately, as most research in Natural Language Processing is conducted in English, the applicability of such systems to other languages is limited. Moreover, these systems often struggle to deal with long text sequences. This thesis explores the possibility of applying existing models to the Swedish language, in a domain where the syntax and semantics differ greatly from typical Swedish texts. Additionally, the text length may vary arbitrarily. To solve these problems, transfer learning techniques and state-of-the-art Question Answering models are investigated, and a novel divide-and-conquer based technique for processing long texts is developed. Results show that the transfer learning is only partly successful, but that the system is nevertheless capable of performing reasonably well in the new domain. Furthermore, the system shows a great performance improvement on longer text sequences when the new technique is used.
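One generic way to realize a chunk-based handling of long contexts, sketched in Python: split the document into overlapping chunks, query an extractive QA model on each chunk, and keep the best-scoring answer. The answer_fn callable, the chunk size and the overlap are hypothetical placeholders; the thesis' own divide-and-conquer technique is not reproduced here.

def answer_long_text(question, document, answer_fn, chunk_size=400, overlap=100):
    # Split the document into overlapping word-level chunks, ask the QA model
    # about each chunk, and keep the answer with the highest confidence score.
    # answer_fn(question, chunk) is assumed to return a (answer_text, score) pair.
    words = document.split()
    step = max(chunk_size - overlap, 1)
    chunks = [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]
    candidates = [answer_fn(question, chunk) for chunk in chunks if chunk]
    if not candidates:
        return "", 0.0
    return max(candidates, key=lambda pair: pair[1])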
APA, Harvard, Vancouver, ISO, and other styles
40

Azam, Farooq. "Biologically Inspired Modular Neural Networks." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27998.

Full text
Abstract:
This dissertation explores modular learning in artificial neural networks, driven mainly by inspiration from the neurobiological basis of human learning. The presented modularization approaches to neural network design and learning are inspired by engineering, complexity, psychological and neurobiological aspects. The main theme of this dissertation is to explore the organization and functioning of the brain in order to discover new structural and learning inspirations that can subsequently be utilized to design artificial neural networks. Artificial neural networks are touted to be a neurobiologically inspired paradigm that emulates the functioning of the vertebrate brain. The brain is a highly structured entity with localized regions of neurons specialized in performing specific tasks. On the other hand, mainstream monolithic feed-forward neural networks are generally unstructured black boxes, which is their major performance-limiting characteristic. The non-explicit structure and monolithic nature of current mainstream artificial neural networks means that functional or task-specific a priori knowledge cannot be incorporated systematically into the network design process. The problems caused by these limitations are discussed in detail in this dissertation, and remedial solutions are presented that are driven by the functioning of the brain and its structural organization. This dissertation also presents an in-depth study of the currently available modular neural network architectures, highlights their shortcomings, and investigates new modular artificial neural network models that overcome those shortcomings. The resulting proposed modular neural network models offer greater accuracy, better generalization, a comprehensible simplified neural structure, ease of training and more user confidence. These benefits are readily apparent for certain problems, depending upon the availability and usage of a priori knowledge about the problems. The modular neural network models presented in this dissertation exploit the capabilities of the principle of divide and conquer in the design and learning of modular artificial neural networks. The divide-and-conquer strategy solves a complex computational problem by dividing it into simpler sub-problems and then combining the individual solutions to the sub-problems into a solution to the original problem. The divisions of a task considered in this dissertation are the automatic decomposition of the mappings to be learned, decomposition of the artificial neural networks to minimize harmful interaction during the learning process, and explicit decomposition of the application task into sub-tasks that are learned separately. The versatility and capabilities of the proposed modular neural networks are demonstrated by experimental results. A comparison of current modular neural network design techniques with the ones introduced in this dissertation is also presented for reference. The results presented in this dissertation lay a solid foundation for the design and learning of artificial neural networks with a sound neurobiological basis, leading to superior design techniques. Areas of future research are also presented.
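One common way to realize this divide-and-conquer idea in a modular network is a gated combination of expert modules, where each module specializes on part of the problem and a gating network weights their outputs. The small NumPy sketch below (forward pass only, untrained random weights) illustrates that general scheme and is not one of the specific architectures proposed in the dissertation.

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class GatedModularNet:
    # Toy mixture of expert modules: each (linear) expert handles part of the
    # problem and a gating network decides how to weight their outputs.
    def __init__(self, n_in, n_out, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.experts = [0.1 * rng.standard_normal((n_in, n_out)) for _ in range(n_experts)]
        self.gate = 0.1 * rng.standard_normal((n_in, n_experts))

    def forward(self, x):
        # x has shape (batch, n_in); the gate weights sum to one over the experts.
        weights = softmax(x @ self.gate)                           # (batch, E)
        outputs = np.stack([x @ w for w in self.experts], axis=1)  # (batch, E, n_out)
        return np.einsum("be,beo->bo", weights, outputs)           # weighted combination

# net = GatedModularNet(n_in=8, n_out=3, n_experts=4)
# y = net.forward(np.random.default_rng(1).standard_normal((5, 8)))   # shape (5, 3)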
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
41

Vu, Hoang Hiep. "Large-scale and high-quality multi-view stereo." Phd thesis, Université Paris-Est, 2011. http://pastel.archives-ouvertes.fr/pastel-00779426.

Full text
Abstract:
Acquisition of 3D models of real objects and scenes is indispensable and useful in many practical applications, such as digital archives, the game and entertainment industries, engineering, and advertisement. There are two main methods for 3D acquisition: laser-based reconstruction (active method) and image-based reconstruction from multiple images of the scene taken from different points of view (passive method). While laser-based reconstruction achieves high accuracy, it is complex, expensive and difficult to set up for large-scale outdoor reconstruction. Image-based, or multi-view stereo, methods are more versatile, easier, faster and cheaper. At the time we began this thesis, most multi-view methods could handle only low-resolution images under controlled environments. This thesis targets multi-view stereo with respect to both the large-scale and the high-accuracy issues. We significantly improve some previous methods and combine them into a remarkably effective multi-view pipeline with GPU acceleration. From high-resolution images, we produce highly complete and accurate meshes that achieve the best scores in many internationally recognized benchmarks. Aiming at even larger scale, on the one hand we develop divide-and-conquer approaches in order to reconstruct many small parts of a big scene; on the other hand, to combine the separate partial results, we create a new merging method which can merge hundreds of meshes automatically and quickly. With all these components, we successfully reconstruct highly accurate water-tight meshes for cities and historical monuments from large collections of high-resolution images (around 1600 images of 5 megapixels each).
APA, Harvard, Vancouver, ISO, and other styles
42

Bayramoglu, Muhammet Fatih. "Sub-graph Approach In Iterative Sum-product Algorithm." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12606550/index.pdf.

Full text
Abstract:
The sum-product algorithm can be employed for obtaining the marginal probability density functions from a given joint probability density function (p.d.f.). The sum-product algorithm operates on a factor graph which represents the dependencies of the random variables whose joint p.d.f. is given. The sum-product algorithm cannot be operated on factor graphs that contain loops; for these factor graphs, the iterative sum-product algorithm is used. A factor graph which contains loops can be divided into loop-free sub-graphs. The sum-product algorithm can be operated on these loop-free sub-graphs, and the results of these sub-graphs can be combined iteratively to obtain the result for the whole factor graph. This method may increase the convergence rate of the algorithm significantly while keeping the complexity of an iteration and the accuracy of the output constant. A useful by-product introduced in this thesis is a good approximation to the message calculation in the factor nodes of inter-symbol interference (ISI) factor graphs. This approximation has a complexity that is linearly proportional to the number of neighbors instead of exponentially proportional. Using this approximation and the sub-graph idea, we have designed and simulated a joint decoding-equalization (turbo equalization) algorithm and obtained good results in addition to the low complexity.
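For discrete variables, the exact factor-to-variable message of the sum-product algorithm sums the factor, weighted by the incoming messages, over all joint states of the other neighbours, which is why its cost grows exponentially with the number of neighbours — the cost the approximation above reduces to linear. The Python sketch below is a plain reference implementation of that exact message; the function signature and the normalisation step are choices of this illustration.

import numpy as np
from itertools import product

def factor_to_variable_message(factor, variables, target, incoming, card):
    # Exact sum-product message from a factor node to the variable `target`.
    #   factor(assignment) -> non-negative value for a full assignment (a dict)
    #   variables          -> variables connected to this factor node
    #   incoming[v]        -> message array from every neighbour v except `target`
    #   card[v]            -> number of states of variable v
    others = [v for v in variables if v != target]
    message = np.zeros(card[target])
    for t in range(card[target]):
        total = 0.0
        # Sum over all joint states of the other neighbours (exponential cost).
        for states in product(*(range(card[v]) for v in others)):
            assignment = dict(zip(others, states))
            assignment[target] = t
            value = factor(assignment)
            for v, s in zip(others, states):
                value *= incoming[v][s]
            total += value
        message[t] = total
    return message / message.sum()   # normalisation keeps iterative schedules stable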
APA, Harvard, Vancouver, ISO, and other styles
43

Moshir, Moghaddam Kianosh. "Automated Reasoning Support for Invasive Interactive Parallelization." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84830.

Full text
Abstract:
To parallelize a sequential source code, a parallelization strategy must be defined that transforms the sequential source code into an equivalent parallel version. Since parallelizing compilers can sometimes transform sequential loops and other well-structured codes into parallel ones automatically, we are interested in a way to semi-automatically parallelize codes that compilers are not able to parallelize automatically, mostly because of the weakness of classical data and control dependence analysis, in order to simplify the process of transforming the codes for programmers. Invasive Interactive Parallelization (IIP) hypothesizes that by using an intelligent system that guides the user through an interactive process, one can boost parallelization in the above direction. The intelligent system's guidance relies on classical code analysis and pre-defined parallelizing transformation sequences. To support its main hypothesis, IIP suggests encoding parallelizing transformation sequences in terms of IIP parallelization strategies that dictate default ways to parallelize various code patterns, using facts obtained both from classical source code analysis and directly from the user. In this project, we investigate how automated reasoning can support the IIP method in order to parallelize a sequential code with acceptable performance, but faster than manual parallelization. We have looked at two special problem areas: divide-and-conquer algorithms and loops in the source codes. Our focus is on parallelizing four sequential legacy C programs, namely Quick sort, Merge sort, the Jacobi method, and matrix multiplication and summation, for both OpenMP and MPI environments, by developing an interactive parallelization assistance tool that provides users with the assistance needed for parallelizing a sequential source code.
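The thesis targets C programs with OpenMP and MPI; purely to illustrate the divide-and-conquer structure being parallelized, the following sketch expresses a parallel merge sort with Python's multiprocessing pool. The chunking scheme and worker count are arbitrary choices of this example.

from multiprocessing import Pool

def merge(left, right):
    # Combine two sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def parallel_merge_sort(data, workers=4):
    # Divide: cut the input into chunks that are sorted in parallel processes.
    # Conquer: merge the sorted chunks pairwise in the parent process.
    chunk = max(len(data) // workers, 1)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with Pool(workers) as pool:
        parts = pool.map(sorted, parts)
    while len(parts) > 1:
        parts = [merge(parts[i], parts[i + 1]) if i + 1 < len(parts) else parts[i]
                 for i in range(0, len(parts), 2)]
    return parts[0] if parts else []

if __name__ == "__main__":
    print(parallel_merge_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0]))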
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Huimin. "Algorithmes rapides et matrices de toeplitz." Paris, ENST, 1989. http://www.theses.fr/1989ENST0008.

Full text
Abstract:
This thesis concerns the study of fast algorithms based on Toeplitz matrices, in the context of high-resolution spectral analysis methods. After an analysis of the properties and structures of the correlation matrices estimated by classical methods, we study in the first part of this thesis a particular Toeplitz matrix, the Ouamri matrix. This matrix is the only correlation matrix with Toeplitz structure that allows an asymptotically unbiased estimation of sinusoidal frequencies. Using this matrix, we develop a modified Burg algorithm for AR modeling, which is superior to the initial algorithm with respect to the bias of the frequency estimates while having a comparable computational complexity. The development of fast algorithms based on Toeplitz matrices for applications in spectral analysis is the subject of the second part of the thesis. Two main topics are addressed: the eigendecomposition of the matrix and the solution of the Yule-Walker equations, with the emphasis on the second. First, we compare the classical algorithms for solving the Yule-Walker equations (the Levinson, Schur, Berlekamp-Massey and Euclid algorithms). These algorithms are related through their interpretation as algorithms for computing Padé approximants. By classifying them into two categories, one-pass algorithms and two-pass algorithms, we show that the set of known classical algorithms is not complete, and we propose the missing variants. Then, in a more general way, we study the application of the Levinson/Schur doubling strategy, which is the most efficient among the possible doubling algorithms. We implemented it in two different ways, using on the one hand…
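Among the classical one-pass algorithms mentioned above, the Levinson(-Durbin) recursion is the easiest to sketch: it solves the Yule-Walker equations in O(p^2) operations by exploiting the Toeplitz structure of the autocorrelation matrix. The Python sketch below follows the textbook form of the recursion and is not code from the thesis.

import numpy as np

def levinson_durbin(r, order):
    # Solve the order-p Yule-Walker equations in O(p^2) operations by
    # exploiting the Toeplitz structure of the autocorrelation matrix.
    #   r : autocorrelations r[0], r[1], ..., r[order]
    # Returns the coefficients a[1..p] of the monic prediction-error filter
    # A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p and the final prediction error.
    a = np.zeros(order + 1)
    a[0] = 1.0
    error = r[0]
    for k in range(1, order + 1):
        acc = r[k] + np.dot(a[1:k], r[1:k][::-1])   # correlation of the current residual
        kappa = -acc / error                         # reflection coefficient
        a[1:k + 1] = a[1:k + 1] + kappa * a[k - 1::-1][:k]
        error *= (1.0 - kappa * kappa)
    return a[1:], error

# Example with illustrative autocorrelation values r = (1.0, 0.5, 0.1):
# coeffs, err = levinson_durbin(np.array([1.0, 0.5, 0.1]), order=2)   # coeffs ≈ [-0.6, 0.2]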
APA, Harvard, Vancouver, ISO, and other styles
45

Thebault, Loïc. "Algorithmes Parallèles Efficaces Appliqués aux Calculs sur Maillages Non Structurés." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV088/document.

Full text
Abstract:
The growing need for numerical simulation has led to larger and more complex computing centers and to more HPC software. Current HPC system architectures have increasing requirements for energy efficiency and performance. Recent advances in hardware design result in an increasing number of nodes and an increasing number of cores per node. However, some resources do not scale at the same rate: the increasing number of cores and parallel units implies less memory per core, more data traffic, a more expensive coherency protocol and a higher demand for concurrency. Most of the applications and runtimes currently in use struggle to scale with this trend. In particular, in the context of finite element methods, exposing massive parallelism on unstructured mesh computations with efficient load balancing and minimal synchronization is challenging. To make efficient use of these architectures, several parallelization strategies have to be combined in order to exploit the multiple levels of parallelism. This thesis proposes several contributions aimed at parallelizing irregular codes and data structures efficiently. We developed a hybrid, fine-grain task-based parallelization approach combining the distributed, shared and vectorial forms of parallelism on irregular structures. Our approach has been ported to several industrial applications developed by Dassault Aviation and has led to important performance gains both on standard multicores and on the recent Intel Xeon Phi manycore.
APA, Harvard, Vancouver, ISO, and other styles
46

Le, Frioux Ludovic. "Towards more efficient parallel SAT solving." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS209.

Full text
Abstract:
Solvers for the Boolean SATisfiability problem have been used successfully in many applicative contexts, thanks to the capability of modern SAT solvers to solve complex problems involving millions of variables. Most SAT solvers have long been sequential and based on the CDCL algorithm. The emergence of many-core machines opens new possibilities in this domain. There are numerous parallel SAT solvers that differ by their strategies, programming languages, etc., so comparing the efficiency of the different approaches in a fair way is a challenging task. Moreover, introducing a new approach requires a deep understanding of the existing solvers' implementations. We present Painless: a framework to build parallel SAT solvers for many-core environments. Thanks to its genericity and modularity, it provides an implementation of the basic components for parallel SAT solving and enables users to easily create their own parallel solvers based on new strategies. Painless allowed us to build and test many solvers. We were able to easily mimic the behaviour of three state-of-the-art solvers while factoring out many parts of their implementations; the efficiency of Painless was highlighted by the fact that these implementations are at least as efficient as the original ones. Moreover, one of our solvers won the SAT Competition 2018. Going further, Painless enabled us to conduct fair experiments with divide-and-conquer solvers and to highlight original compositions of heuristics that perform better than already known ones. We were also able to create and test new strategies exploiting the modular structure of SAT formulas.
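A divide-and-conquer SAT solver repeatedly splits a formula on a variable and hands the two resulting subformulas to different workers. The toy DPLL sketch below shows only that splitting structure on a plain clause list in Python; the branching heuristic and the data representation are deliberately naive and are not components of Painless.

def simplify(clauses, literal):
    # Assign `literal` to true: drop satisfied clauses, remove the negated literal
    # from the others; return None if an empty clause (a conflict) appears.
    out = []
    for clause in clauses:
        if literal in clause:
            continue
        reduced = [l for l in clause if l != -literal]
        if not reduced:
            return None
        out.append(reduced)
    return out

def dpll(clauses):
    # Tiny DPLL on a list of clauses (positive/negative integers as literals).
    # The two recursive calls are fully independent subproblems; a
    # divide-and-conquer solver would hand them to different workers.
    if clauses is None:
        return False
    if not clauses:
        return True
    v = abs(clauses[0][0])           # deliberately naive branching choice
    return dpll(simplify(clauses, v)) or dpll(simplify(clauses, -v))

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)  ->  satisfiable
# print(dpll([[1, 2], [-1, 2], [-2, 3]]))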
APA, Harvard, Vancouver, ISO, and other styles
47

Cankurtaran, Burak O. "Linear-scaling techniques for first principles calculations of stationary and dynamic systems." Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/24.

Full text
Abstract:
First principles calculations can be a computationally intensive task when studying large systems. Linear-scaling methods must be employed to find the electronic structure of systems consisting of thousands of atoms and more. The goal of this thesis is to combine the linear-scaling divide-and-conquer (D&C) method with the linear-scaling capabilities of the SIESTA (Spanish Initiative for Electronic Simulations with Thousands of Atoms) density functional theory (DFT) methodology and present this union as a viable approach to large-scale first principles calculations. In particular, the density matrix version of the D&C method is implemented in the SIESTA package. This implementation can accommodate high-quality calculations on systems of tens of thousands of atoms using moderate computing resources; low-quality calculations have been tested up to half a million atoms using reasonably sized computing resources. The D&C method is extended to better handle atomic dynamics simulations. First, issues caused by discontinuities in the potential energy surface are alleviated by applying a switching function to the Hamiltonian and overlap matrices, which allows a smooth potential energy surface to be generated. The switching function has the additional benefit of accelerating the self-consistent field (SCF) process. Secondly, the D&C frozen density matrix (FDM) method is modified to allow improved charge transfer between the active and constrained regions of the system. This modification is found to reduce both the number of SCF iterations required for self-consistency and the number of relaxation steps in a local geometry optimisation. The D&C paradigm is then applied to the real-time approach of time-dependent density functional theory (TDDFT). The method is tested on a linear alkane molecule with varying levels of success: divergences in the induced dipole moment occur when the external excitation field is aligned parallel to the axis of the molecule, while the method succeeds in producing accurate dipole moments when the external field is aligned perpendicular to the molecule. Various techniques are tested to improve the proposed method. Finally, the performance and effectiveness of the current D&C implementation are evaluated by studying three systems: the first two consist of two different DNA sequences, and the last is the large ZIF-100 zeolitic imidazolate framework (ZIF).
APA, Harvard, Vancouver, ISO, and other styles
48

Samara, Rafat. "TOP-K AND SKYLINE QUERY PROCESSING OVER RELATIONAL DATABASE." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-20108.

Full text
Abstract:
Top-k and skyline queries are a long-studied topic in the database and information retrieval communities, and they are two popular operations for preference retrieval. A top-k query returns a subset of the most relevant answers instead of all answers; efficient top-k processing retrieves the k objects that have the highest overall score. In this thesis, several algorithms used for efficient top-k processing in different scenarios are presented, and a framework based on existing algorithms, with cost-based optimization, that works for these scenarios is proposed. This framework is used when the user can determine the ranking function, and a real-life scenario is applied to it step by step. A skyline query returns the set of points that are not dominated by other points in the given dataset (a record x dominates another record y if x is as good as y in all attributes and strictly better in at least one attribute). Several algorithms used for evaluating skyline queries are introduced, and one of the problems of the skyline query, the curse of dimensionality, is discussed. A new strategy based on existing skyline algorithms, skyline frequency and the binary tree strategy is presented, which gives a good solution to this problem. This new strategy is used when the user cannot determine the ranking function; a real-life scenario applying it step by step is also presented. Finally, the advantages of the top-k query are applied to the skyline query in order to retrieve results quickly and efficiently.
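To make the two operators concrete, the Python sketch below gives textbook definitions: a dominance test, a block-nested-loop skyline, and a top-k ranking under a user-supplied score function. The "larger is better" convention and the hotel example are illustrative assumptions, not material from the thesis.

def dominates(x, y):
    # x dominates y: at least as good on every attribute, strictly better on one
    # (here "larger is better" on every attribute).
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def skyline(points):
    # Block-nested-loop skyline: keep every point not dominated by another point.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

def top_k(points, score, k):
    # Top-k under a user-supplied ranking function, e.g. a weighted sum of attributes.
    return sorted(points, key=score, reverse=True)[:k]

# Hotels encoded as (stars, -price) so that both attributes are "larger is better".
# hotels = [(5, -300), (4, -120), (3, -80), (2, -200)]
# skyline(hotels)                                   # -> [(5, -300), (4, -120), (3, -80)]
# top_k(hotels, score=lambda h: h[0] + 0.01 * h[1], k=2)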
APA, Harvard, Vancouver, ISO, and other styles
49

Henricksen, Matthew. "Design, Implementation and Cryptanalysis of Modern Symmetric Ciphers." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16055/1/Matt_Henricksen_Thesis.pdf.

Full text
Abstract:
The main objective of this thesis is to examine the trade-offs between security and efficiency within symmetric ciphers. This includes the influence that block ciphers have on the new generation of word-based stream ciphers. By incorporating block-cipher like components into their designs, word-based stream ciphers have experienced hundreds-fold improvement in speed over bit-based stream ciphers, without any observable security degradation. The thesis also emphasizes the importance of keying issues in block and stream ciphers, showing that by reusing components of the principal cipher algorithm in the keying algorithm, security can be enhanced without loss of key-agility or expanding footprint in software memory. Firstly, modern block ciphers from four recent cipher competitions are surveyed and categorized according to criteria that include the high-level structure of the block cipher, the method in which non-linearity is instilled into each round, and the strength of the key schedule. In assessing the last criterion, a classification by Carter [45] is adopted and modified to improve its consistency. The classification is used to demonstrate that the key schedule of the Advanced Encryption Standard (AES) [62] is surprisingly flimsy for a national standard. The claim is supported with statistical evidence that shows the key schedule suffers from bit leakage and lacks sufficient diffusion. The thesis contains a replacement key schedule that reuses components from the cipher algorithm, leveraging existing analysis to improve security, and reducing the cipher's implementation footprint while maintaining key agility. The key schedule is analyzed from the perspective of an efficiency-security tradeoff, showing that the new schedule rectifies an imbalance towards efficiency present in the original. The thesis contains a discussion of the evolution of stream ciphers, focusing on the migration from bit-based to word-based stream ciphers, from which follows a commensurate improvement in design flexibility and software performance. It examines the influence that block ciphers, and in particular the AES, have had upon the development of word-based stream ciphers. The thesis includes a concise literature review of recent styles of cryptanalytic attack upon stream ciphers. Also, claims are refuted that one prominent word-based stream cipher, RC4, suffers from a bias in the first byte of each keystream. The thesis presents a divide and conquer attack against Alpha1, an irregularly clocked bit-based stream cipher with a 128-bit state. The dominating aspect of the divide and conquer attack is a correlation attack on the longest register. The internal state of the remaining registers is determined by utilizing biases in the clocking taps and launching a guess and determine attack. The overall complexity of the attack is 2^61 operations with text requirements of 35,000 bits and memory requirements of 2^29.8 bits. MUGI is a 64-bit word-based cipher with a large Non-linear Feedback Shift Register (NLFSR) and an additional non-linear state. In standard benchmarks, MUGI appears to suffer from poor key agility because it is implemented on an architecture for which it is not designed, and because its NLFSR is too large relative to the size of its master key. An unusual feature of its key initialization algorithm is described. A variant of MUGI, entitled MUGI-M, is proposed to enhance key agility, ostensibly without any loss of security. The thesis presents a new word-based stream cipher called Dragon.
This cipher uses a large internal NLFSR in conjunction with a non-linear filter to produce 64 bits of keystream in one round. The non-linear filter looks very much like the round function of a typical modern block cipher. Dragon has a native word size of 32 bits, and uses very simple operations, including addition, exclusive-or and s-boxes. Together these ensure high performance on modern-day processors such as the Intel Pentium family. Finally, a set of guidelines is provided for designing and implementing symmetric ciphers on modern processors, using the Intel Pentium 4 as a case study. Particular attention is given to understanding the architecture of the processor, including features such as its register set and size, the throughput and latencies of its instruction set, and the memory layouts and speeds. General optimization rules are given, including how to choose fast primitives for use within the cipher. The thesis describes design decisions that were made for the Dragon cipher with respect to implementation on the Intel Pentium 4. Keywords: Block Ciphers, Word-based Stream Ciphers, Cipher Design, Cipher Implementation.
APA, Harvard, Vancouver, ISO, and other styles
50

Henricksen, Matthew. "Design, Implementation and Cryptanalysis of Modern Symmetric Ciphers." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16055/.

Full text
Abstract:
The main objective of this thesis is to examine the trade-offs between security and efficiency within symmetric ciphers. This includes the influence that block ciphers have on the new generation of word-based stream ciphers. By incorporating block-cipher like components into their designs, word-based stream ciphers have experienced hundreds-fold improvement in speed over bit-based stream ciphers, without any observable security degradation. The thesis also emphasizes the importance of keying issues in block and stream ciphers, showing that by reusing components of the principal cipher algorithm in the keying algorithm, security can be enhanced without loss of key-agility or expanding footprint in software memory. Firstly, modern block ciphers from four recent cipher competitions are surveyed and categorized according to criteria that include the high-level structure of the block cipher, the method in which non-linearity is instilled into each round, and the strength of the key schedule. In assessing the last criterion, a classification by Carter [45] is adopted and modified to improve its consistency. The classification is used to demonstrate that the key schedule of the Advanced Encryption Standard (AES) [62] is surprisingly flimsy for a national standard. The claim is supported with statistical evidence that shows the key schedule suffers from bit leakage and lacks sufficient diffusion. The thesis contains a replacement key schedule that reuses components from the cipher algorithm, leveraging existing analysis to improve security, and reducing the cipher's implementation footprint while maintaining key agility. The key schedule is analyzed from the perspective of an efficiency-security tradeoff, showing that the new schedule rectifies an imbalance towards efficiency present in the original. The thesis contains a discussion of the evolution of stream ciphers, focusing on the migration from bit-based to word-based stream ciphers, from which follows a commensurate improvement in design flexibility and software performance. It examines the influence that block ciphers, and in particular the AES, have had upon the development of word-based stream ciphers. The thesis includes a concise literature review of recent styles of cryptanalytic attack upon stream ciphers. Also, claims are refuted that one prominent word-based stream cipher, RC4, suffers from a bias in the first byte of each keystream. The thesis presents a divide and conquer attack against Alpha1, an irregularly clocked bit-based stream cipher with a 128-bit state. The dominating aspect of the divide and conquer attack is a correlation attack on the longest register. The internal state of the remaining registers is determined by utilizing biases in the clocking taps and launching a guess and determine attack. The overall complexity of the attack is 2^61 operations with text requirements of 35,000 bits and memory requirements of 2^29.8 bits. MUGI is a 64-bit word-based cipher with a large Non-linear Feedback Shift Register (NLFSR) and an additional non-linear state. In standard benchmarks, MUGI appears to suffer from poor key agility because it is implemented on an architecture for which it is not designed, and because its NLFSR is too large relative to the size of its master key. An unusual feature of its key initialization algorithm is described. A variant of MUGI, entitled MUGI-M, is proposed to enhance key agility, ostensibly without any loss of security. The thesis presents a new word-based stream cipher called Dragon.
This cipher uses a large internal NLFSR in conjunction with a non-linear filter to produce 64 bits of keystream in one round. The non-linear filter looks very much like the round function of a typical modern block cipher. Dragon has a native word size of 32 bits, and uses very simple operations, including addition, exclusive-or and s-boxes. Together these ensure high performance on modern-day processors such as the Intel Pentium family. Finally, a set of guidelines is provided for designing and implementing symmetric ciphers on modern processors, using the Intel Pentium 4 as a case study. Particular attention is given to understanding the architecture of the processor, including features such as its register set and size, the throughput and latencies of its instruction set, and the memory layouts and speeds. General optimization rules are given, including how to choose fast primitives for use within the cipher. The thesis describes design decisions that were made for the Dragon cipher with respect to implementation on the Intel Pentium 4. Keywords: Block Ciphers, Word-based Stream Ciphers, Cipher Design, Cipher Implementation.
APA, Harvard, Vancouver, ISO, and other styles
