Dissertations / Theses on the topic 'Algorithmic number theory'

To see the other types of publications on this topic, follow the link: Algorithmic number theory.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 49 dissertations / theses for your research on the topic 'Algorithmic number theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Smith, Benjamin Andrew. "Explicit endomorphisms and correspondences." University of Sydney, 2006. http://hdl.handle.net/2123/1066.

Full text
Abstract:
Doctor of Philosophy (PhD)
In this work, we investigate methods for computing explicitly with homomorphisms (and particularly endomorphisms) of Jacobian varieties of algebraic curves. Our principal tool is the theory of correspondences, in which homomorphisms of Jacobians are represented by divisors on products of curves. We give families of hyperelliptic curves of genus three, five, six, seven, ten and fifteen whose Jacobians have explicit isogenies (given in terms of correspondences) to other hyperelliptic Jacobians. We describe several families of hyperelliptic curves whose Jacobians have complex or real multiplication; we use correspondences to make the complex and real multiplication explicit, in the form of efficiently computable maps on ideal class representatives. These explicit endomorphisms may be used for efficient integer multiplication on hyperelliptic Jacobians, extending Gallant--Lambert--Vanstone fast multiplication techniques from elliptic curves to higher dimensional Jacobians. We then describe Richelot isogenies for curves of genus two; in contrast to classical treatments of these isogenies, we consider all the Richelot isogenies from a given Jacobian simultaneously. The inter-relationship of Richelot isogenies may be used to deduce information about the endomorphism ring structure of Jacobian surfaces; we conclude with a brief exploration of these techniques.
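The Gallant–Lambert–Vanstone technique mentioned above accelerates the scalar multiplication k·P by splitting k along a fast endomorphism. As background, here is a minimal sketch of plain double-and-add scalar multiplication on a toy elliptic curve over a small prime field; the curve parameters are illustrative choices of mine, and the thesis itself works with higher-dimensional Jacobians, not this curve:

```python
# Toy short-Weierstrass curve y^2 = x^3 + A*x + B over F_p.
# Illustrative parameters only; not taken from the thesis.
P_MOD, A, B = 97, 2, 3

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)  # Fermat inverse (P_MOD is prime)

def add(P, Q):
    """Affine group law; None plays the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD   # tangent slope
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P_MOD) % P_MOD  # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Left-to-right double-and-add: about log2(k) doublings.
    GLV-style endomorphisms roughly halve this chain by splitting k."""
    R = None
    for bit in bin(k)[2:]:
        R = add(R, R)
        if bit == '1':
            R = add(R, P)
    return R

# Find some point on the curve by brute force.
base = next((x, y) for x in range(P_MOD) for y in range(P_MOD)
            if (y * y - (x ** 3 + A * x + B)) % P_MOD == 0)
```
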
APA, Harvard, Vancouver, ISO, and other styles
2

Smith, Benjamin Andrew. "Explicit endomorphisms and correspondences." Thesis, The University of Sydney, 2005. http://hdl.handle.net/2123/1066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pellet-Mary, Alice. "Réseaux idéaux et fonction multilinéaire GGH13." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN048/document.

Full text
Abstract:
Lattice-based cryptography is a promising area for constructing cryptographic primitives that are plausibly secure even in the presence of quantum computers. A fundamental problem related to lattices is the shortest vector problem (or SVP), which asks to find a shortest non-zero vector in a lattice. This problem is believed to be intractable, even quantumly. Structured lattices, for example ideal lattices or module lattices (the latter being a generalization of the former), are often used to improve the efficiency of lattice-based primitives. The security of most of the schemes based on structured lattices is related to SVP in module lattices, and a very small number of schemes can also be impacted by SVP in ideal lattices. In this thesis, we first focus on the problem of finding short vectors in ideal and module lattices. We propose an algorithm which, after some exponential pre-computation, performs better on ideal lattices than the best known algorithm for arbitrary lattices. We also present an algorithm to find short vectors in rank 2 modules, provided that we have access to some oracle solving the closest vector problem in a fixed lattice. The exponential pre-processing time and the oracle call make these two algorithms unusable in practice. The main scheme whose security might be impacted by SVP in ideal lattices is the GGH13 multilinear map. This protocol is mainly used today to construct program obfuscators, which should render the code of a program unintelligible while preserving its functionality. In the second part of this thesis, we focus on the GGH13 map and its application to obfuscation. We first study the impact of statistical attacks on the GGH13 map and its variants. We then study the security of obfuscators based on the GGH13 map and propose a quantum attack against multiple such obfuscators. This quantum attack uses as a subroutine an algorithm to find a short vector in an ideal lattice related to a secret element of the GGH13 map.
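To make the central problem concrete, here is a naive brute-force search for a shortest nonzero vector in a rank-2 integer lattice. The basis and the search bound are illustrative choices of mine; the algorithms in the thesis are far more sophisticated and target much higher dimensions:

```python
# Brute-force SVP in a rank-2 integer lattice: enumerate small integer
# combinations a*b1 + b*b2 and keep a shortest nonzero one.
# The bound only suffices for bases that are already fairly reduced.
from itertools import product

def shortest_vector(b1, b2, bound=10):
    """Return (vector, squared_norm) of a shortest nonzero lattice vector
    among combinations with coefficients in [-bound, bound]."""
    best, best_norm = None, None
    for a, b in product(range(-bound, bound + 1), repeat=2):
        if a == 0 and b == 0:
            continue  # SVP asks for a *nonzero* vector
        v = (a * b1[0] + b * b2[0], a * b1[1] + b * b2[1])
        n = v[0] ** 2 + v[1] ** 2
        if best_norm is None or n < best_norm:
            best, best_norm = v, n
    return best, best_norm

v, n2 = shortest_vector((2, 0), (1, 2))
```

For the basis (2, 0), (1, 2), every lattice vector has the form (2a + b, 2b), so the minimum squared norm is 4, attained at (±2, 0).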
APA, Harvard, Vancouver, ISO, and other styles
4

Varescon, Firmin. "Calculs explicites en théorie d'Iwasawa." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2019/document.

Full text
Abstract:
In the first chapter of this thesis we explain Leopoldt's conjecture and some equivalent formulations. Then we give an algorithm that checks this conjecture for a given prime p and number field. Next we assume that this conjecture is true, and we study the torsion part of the Galois group of the maximal abelian p-ramified p-extension of a given number field. We present a method to compute the invariant factors of this finite group. In the third chapter we give an interpretation of our numerical results via heuristics à la Cohen-Lenstra. In the fourth and last chapter, using our algorithm for computing this torsion submodule, we give new examples of number fields which satisfy Greenberg's conjecture.
APA, Harvard, Vancouver, ISO, and other styles
5

Shoup, Victor. "Removing randomness from computational number theory." Madison, Wis. : University of Wisconsin-Madison, Computer Sciences Dept, 1989. http://catalog.hathitrust.org/api/volumes/oclc/20839526.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Viu, Sos Juan. "Periods and line arrangements : contributions to the Kontsevich-Zagier period conjecture and to the Terao conjecture." Thesis, Pau, 2015. http://www.theses.fr/2015PAUU3022/document.

Full text
Abstract:
The first part concerns a problem of number theory, for which we develop a geometric approach based on tools from algebraic and combinatorial geometry. Introduced by M. Kontsevich and D. Zagier in 2001, periods are complex numbers expressed as values of integrals of a special form, where both the domain and the integrand are given by polynomials with rational coefficients. The Kontsevich-Zagier period conjecture affirms that any polynomial relation between periods can be obtained from linear relations between their integral representations, expressed by the classical rules of integral calculus. Using resolution of singularities, we introduce a semi-canonical reduction for periods, focusing on constructive, algorithmic methods that respect the classical rules of integral transformation: we prove that any non-zero real period, represented by an integral, can be expressed up to sign as the volume of a compact semi-algebraic set. The semi-canonical reduction permits a reformulation of the Kontsevich-Zagier conjecture in terms of volume-preserving changes of variables between compact semi-algebraic sets. Via triangulations and methods of PL-geometry, we study the obstructions to this approach as a generalization of the Third Hilbert Problem. We complete the work of J. Wan on a degree theory for periods, based on the minimal dimension of the ambient space needed to obtain such a compact reduction; this gives a first geometric notion of the transcendence of periods. We extend this study by introducing notions of geometric and arithmetic complexity for periods, based on the minimal polynomial complexity among the semi-canonical reductions of a period.
The second part deals with the understanding of particular objects from algebraic geometry with a strong background in combinatorial geometry, for which we develop a dynamical approach. Logarithmic vector fields are an algebro-analytic tool used to study sub-varieties and germs of analytic manifolds. We are concerned with the case of line arrangements in affine or projective space. We are interested in how the combinatorial data of an arrangement determine relations between its associated logarithmic vector fields: this problem is known as the Terao conjecture. We study the module of logarithmic vector fields of an affine line arrangement via the filtration induced by the degree of the polynomial components. We determine that there exist only two types of non-trivial polynomial vector fields fixing infinitely many lines. Then, we describe the influence of the combinatorics of the arrangement on the expected minimal degree for this kind of vector field. We prove that the combinatorics do not determine the minimal degree of the logarithmic vector fields of an affine line arrangement, giving two pairs of counter-examples, each pair corresponding to a different notion of combinatorics. We determine that the dimension of the filtered spaces follows a quadratic growth from a certain degree depending only on the combinatorics of the arrangement, and we illustrate this with computations on examples. In order to study such filtrations computationally, we develop a library of functions for the computer algebra system Sage.
APA, Harvard, Vancouver, ISO, and other styles
7

Lezowski, Pierre. "Questions d’euclidianité." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14642/document.

Full text
Abstract:
We study norm-Euclideanity of number fields and some of its generalizations. In particular, we provide an algorithm to compute the Euclidean minimum of a number field of any signature. This allows us to prove that many number fields are, or are not, norm-Euclidean. Then, we extend this algorithm to deal with norm-Euclidean classes, and we obtain new examples of number fields with a non-principal norm-Euclidean class. Besides, we describe the complete list of pure cubic number fields admitting a norm-Euclidean class. Finally, we study the Euclidean property in quaternion fields. After establishing its basic properties, we study some particular cases, and give the complete list of Euclidean quaternion fields that are totally definite over a number field of degree at most two.
APA, Harvard, Vancouver, ISO, and other styles
8

Molin, Pascal. "Intégration numérique et calculs de fonctions L." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00537489.

Full text
Abstract:
This thesis demonstrates that the double-exponential numerical integration method introduced by Takahasi and Mori in 1974 can be applied rigorously, and that it is well suited to high-precision computations in number theory. It contains, in particular, a detailed study of the method, simple criteria for its range of applicability, and rigorous estimates of the error terms. Explicit, precise parameters make it easy to use for the certified computation of functions defined by integrals. The method is also applied in detail to the computation of inverse Mellin transforms of the gamma factors arising in numerical computations of L-functions. Through a unified study, this work establishes the complexity of an algorithm of M. Rubinstein and proposes algorithms for computing values of arbitrary L-functions whose results are certified and whose complexity, as a function of the precision, is better.
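As an illustration of the double-exponential (tanh-sinh) rule studied in this thesis, here is a minimal floating-point sketch for integrals over [-1, 1]; the step size h and truncation N below are ad hoc choices of mine, not the rigorously derived parameters of the thesis:

```python
# Takahasi-Mori double-exponential quadrature on [-1, 1]:
# substitute x = tanh((pi/2)*sinh(t)), then apply the trapezoidal
# rule in t. The endpoints are approached doubly exponentially,
# which is what gives the method its fast convergence.
import math

def tanh_sinh(f, h=0.1, N=60):
    total = 0.0
    for k in range(-N, N + 1):
        t = k * h
        u = (math.pi / 2) * math.sinh(t)
        x = math.tanh(u)
        w = (math.pi / 2) * math.cosh(t) / math.cosh(u) ** 2  # dx/dt
        total += w * f(x)
    return h * total

approx = tanh_sinh(math.exp)       # integral of e^x over [-1, 1]
exact = math.e - 1.0 / math.e
```
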
APA, Harvard, Vancouver, ISO, and other styles
9

Coles, Jonathan. "Algorithms for bounding Folkman numbers /." Online version of thesis, 2005. https://ritdml.rit.edu/dspace/handle/1850/2765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Domingues, Riaal. "A polynomial time algorithm for prime recognition." Diss., Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08212007-100529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Jin, Xia. "Ramsey numbers involving a triangle : theory & algorithms /." Online version of thesis, 1993. http://hdl.handle.net/1850/12147.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Lutze, David. "Solving Chromatic Number with Quantum Search and Quantum Counting." DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2342.

Full text
Abstract:
This thesis presents a novel quantum algorithm that solves the Chromatic Number problem. Complexity analysis of this algorithm revealed a run time of O(2^(n/2) n^2 (log_2 n)^2). This is an improvement over the best known algorithm, with a run time of 2^n n^(O(1)) [1]. This algorithm uses the Quantum Search algorithm (often called Grover's algorithm) and the Quantum Counting algorithm. Chromatic Number is an example of an NP-hard problem, which suggests that other NP-hard problems can also benefit from a speed-up provided by quantum technology. This has wide implications, as many real-world problems can be framed as NP-hard problems, so any speed-up in the solution of these problems is highly sought after. The bulk of this thesis consists of a review of the underlying principles of quantum mechanics and quantum computing, building up to the Quantum Search and Quantum Counting algorithms. The review is written with the assumption that the reader has no prior knowledge of quantum computing. It culminates with a presentation of algorithms for generating the quantum circuits required to solve K-Coloring and Chromatic Number.
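For contrast with the quantum approach, the classical exhaustive baseline can be sketched as follows: test k = 1, 2, ... and search for a proper k-coloring by backtracking. This is exponential time, and the graph encoding below is an illustrative choice of mine:

```python
# Exact chromatic number by backtracking over k-colorings.
# adj maps each vertex 0..n-1 to its neighbor list.
from itertools import count

def k_colorable(adj, k, coloring=()):
    v = len(coloring)                 # next vertex to color
    if v == len(adj):
        return True                   # all vertices colored properly
    for c in range(k):
        # check only already-colored neighbors (u < v)
        if all(coloring[u] != c for u in adj[v] if u < v):
            if k_colorable(adj, k, coloring + (c,)):
                return True
    return False

def chromatic_number(adj):
    return next(k for k in count(1) if k_colorable(adj, k))

# C5, the 5-cycle: an odd cycle needs 3 colors.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
```
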
APA, Harvard, Vancouver, ISO, and other styles
13

Saleh, R. A. "Algorithms and architectures using the number theoretic transform for digital signal processing." Thesis, University of Kent, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Yuanping. "Counting the number of spanning trees in some special graphs /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20ZHANG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hanley, Jodi Ann. "Egyptian fractions." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2323.

Full text
Abstract:
Egyptian fractions are what we know as unit fractions, fractions of the form 1/n (with one exception admitted by the Egyptians, 2/3). Egyptian fractions have played an important part in the history of mathematics, with their primary roots in number theory. This paper traces the history of Egyptian fractions, starting at the time of the Egyptians and working our way to Fibonacci, a geologist named Farey, continued fractions, Diophantine equations, and unsolved problems in number theory.
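One classical algorithm from this history, Fibonacci's greedy method, can be sketched as follows: repeatedly subtract the largest unit fraction that still fits. The function name is mine; this is a minimal sketch, not code from the paper:

```python
# Fibonacci's greedy algorithm for Egyptian fractions.
# Assumes 0 < frac < 1; exact arithmetic via Fraction.
from fractions import Fraction
from math import ceil

def egyptian(frac):
    """Return denominators n1 < n2 < ... with frac = 1/n1 + 1/n2 + ..."""
    denominators = []
    while frac > 0:
        n = ceil(1 / frac)            # smallest n with 1/n <= frac
        denominators.append(n)
        frac -= Fraction(1, n)
    return denominators
```

For example, 5/6 decomposes as 1/2 + 1/3, and 2/7 as 1/4 + 1/28.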
APA, Harvard, Vancouver, ISO, and other styles
16

Duval, Dominique. "Diverses questions relatives au calcul formel avec des nombres algèbriques : [thèse soutenue sur un ensemble de travaux]." Grenoble 1, 1987. http://www.theses.fr/1987GRE10028.

Full text
Abstract:
Presentation of the D5 system for computer algebra with algebraic numbers, written with Claire Dicrescenzo. Application examples illustrating the efficiency of the algorithm. Theoretical justification of the D5 system in the setting of equational theories, using the notion of dynamic evaluation.
APA, Harvard, Vancouver, ISO, and other styles
17

Lavallee, Melisa Jean. "Dihedral quintic fields with a power basis." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2788.

Full text
Abstract:
Cryptography is defined as the practice and study of hiding information, and it is used in many applications today; examples include the security of ATM cards and computer passwords ([34]). In order to transform information to make it unreadable, one needs a series of algorithms. Many of these algorithms are based on elliptic curves because they require fewer bits. To use such algorithms, one must find the rational points on an elliptic curve. The study of algebraic number theory, and in particular of rare objects known as power bases, helps determine what these rational points are. With such broad applications, studying power bases is an interesting topic with many research opportunities, one of which is given below. There are many similarities between cyclic and dihedral fields of prime degree; more specifically, the structure of their field discriminants is comparable. Since the existence of a power basis (i.e. monogenicity) rests on finding solutions to the index form equation, an equation dependent on the field discriminant, does this imply that monogenic properties of such fields are also analogous? For instance, in [14], Marie-Nicole Gras has shown there is only one monogenic cyclic field of degree 5. Is there a similar result for dihedral fields of degree 5? The purpose of this thesis is to show that there exist infinitely many monogenic dihedral quintic fields, and hence not just one or finitely many. We do so by using a well-known family of quintic polynomials with Galois group D₅. Thus, the main theorem given in this thesis confirms that monogenic properties of cyclic and dihedral quintic fields are not always analogous.
APA, Harvard, Vancouver, ISO, and other styles
18

Bucic, Ida. "Pollard's rho method." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-85886.

Full text
Abstract:
In this work we investigate a factorization method invented by John Pollard. It makes it possible to factor medium-sized integers into a product of prime numbers. We run a C++ program and test how different parameters affect the results. A connection is drawn between Pollard's rho method, the birthday paradox, and Floyd's cycle-finding algorithm. In the results we identify a polynomial function that gives the best effectiveness and performance for Pollard's rho method.
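A minimal sketch of Pollard's rho with Floyd's cycle finding, in Python rather than the C++ of the thesis; the polynomial x² + c and starting point below are the usual textbook choices, not necessarily the best-performing parameters found in the thesis:

```python
# Pollard's rho with Floyd's tortoise-and-hare cycle detection.
# Iterates x -> x^2 + c mod n; a collision modulo a prime factor p
# of n (expected after ~sqrt(p) steps, by the birthday paradox)
# shows up as gcd(|x - y|, n) > 1.
from math import gcd

def pollard_rho(n, x0=2, c=1):
    x = y = x0
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n
        y = (y * y + c) % n            # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else None       # None: retry with another c

factor = pollard_rho(8051)             # 8051 = 83 * 97
```
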
APA, Harvard, Vancouver, ISO, and other styles
19

Relton, Samuel. "Algorithms for matrix functions and their Fréchet derivatives and condition numbers." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/algorithms-for-matrix-functions-and-their-frechet-derivatives-and-condition-numbers(f20e8144-1aa0-45fb-9411-ddc0dc7c2c31).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kanieski, William C. "On Computing with Perron Numbers: A Summary of New Discoveries in Cyclotomic Perron Numbers and New Computer Algorithms for Continued Research." Ohio University Honors Tutorial College / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1619193565661965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stewart, Robert Grisham. "A Statistical Evaluation of Algorithms for Independently Seeding Pseudo-Random Number Generators of Type Multiplicative Congruential (Lehmer-Class)." Digital Commons @ East Tennessee State University, 2007. https://dc.etsu.edu/etd/2049.

Full text
Abstract:
To be effective, a linear congruential random number generator (LCG) should produce values that are (a) uniformly distributed on the unit interval (0,1) excluding endpoints and (b) substantially free of serial correlation. It has been found that many statistical methods produce inflated Type I error rates for correlated observations. Theoretically, independently seeding an LCG under the following conditions attenuates serial correlation: (a) simple random sampling of seeds, (b) non-replicate streams, (c) non-overlapping streams, and (d) non-adjoining streams. Accordingly, 4 algorithms (each satisfying at least 1 condition) were developed: (a) zero-leap, (b) fixed-leap, (c) scaled random-leap, and (d) unscaled random-leap. Note that the latter satisfied all 4 independent seeding conditions. To assess serial correlation, univariate and multivariate simulations were conducted at 3 equally spaced intervals for each algorithm (N=24) and measured using 3 randomness tests: (a) the serial correlation test, (b) the runs up test, and (c) the white noise test. A one-way balanced multivariate analysis of variance (MANOVA) was used to test 4 hypotheses: (a) omnibus, (b) contrast of unscaled vs. others, (c) contrast of scaled vs. others, and (d) contrast of fixed vs. others. The MANOVA assumptions of independence, normality, and homogeneity were satisfied. In sum, the seeding algorithms did not differ significantly from each other (omnibus hypothesis). For the contrast hypotheses, only the fixed-leap algorithm differed significantly from all other algorithms. Surprisingly, the scaled random-leap offered the least difference among the algorithms (theoretically this algorithm should have produced the second largest difference). Although not fully supported by the research design used in this study, it is thought that the unscaled random-leap algorithm is the best choice for independently seeding the multiplicative congruential random number generator. Accordingly, suggestions for further research are proposed.
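The leap-seeding idea can be sketched for a Lehmer (MINSTD-style) multiplicative congruential generator: because x_k = a^k · x_0 mod m, stream i can start where stream 0 would land after i·L draws, computed with modular exponentiation instead of stepping. The constants below are the common MINSTD choices, and the exact generator and leap scheme in the study may differ:

```python
# Lehmer / MINSTD multiplicative congruential generator and
# fixed-leap seeding of non-overlapping streams.
M, A = 2**31 - 1, 16807                # MINSTD modulus and multiplier

def step(x):
    """One draw of the generator: x -> A*x mod M."""
    return (A * x) % M

def leap_seed(s0, i, L):
    """Seed of the i-th stream of length L: jump i*L draws ahead
    in O(log(i*L)) time via modular exponentiation."""
    return (pow(A, i * L, M) * s0) % M

s0 = 12345
x = s0
for _ in range(1000):                  # advance 1000 draws the slow way
    x = step(x)
```

The jump and the step-by-step advance agree because the recurrence is purely multiplicative.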
APA, Harvard, Vancouver, ISO, and other styles
22

Souza, Vera Lúcia Graciani de. "Fatoração de inteiros e grupos sobre conicas." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306147.

Full text
Abstract:
Advisor: Martinho da Costa Araujo
Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: The objective of this paper is to factorize integer number using rational points on the unitary circle. Also, it intends to determinate some groups on the conics. The research begins with the basic concepts of Algebra and Number Theory ensuring that the rational points set on the unitary circle has a structure of group. From this set is possible to extend the idea of rational points on the circle toward rational points on conics. In order to find the rational points on the circle a parametrization by trigonometric function on it was used. For each point on the unitary circle it is associated an angle with abscissa positive axis, therefore adding points on the circle equals to add its corresponding angles. With the operation of "addition" points on the circle it is possible to define a group structure that is used to factorize integer numbers. For the conic, the "addition" operation is algebraically determinated when the angle coeficient of the line is calculated that joins two given points and the neutral element of that conic, which is geometrically justified. In the research the rational points groups on the conics were determined, and some result on these groups using quadratic residues were demonstrated, and it was finalized with the deduction of some results concerning the coordinates sum of points on a conics.
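To illustrate the group law the abstract describes (a minimal sketch, not code from the dissertation), rational points on the unit circle can be generated by the slope parametrization and added via the angle-addition formulas:

```python
from fractions import Fraction

def circle_point(t):
    """Rational point on x^2 + y^2 = 1 from the slope parametrization
    t -> ((1 - t^2)/(1 + t^2), 2t/(1 + t^2))."""
    t = Fraction(t)
    d = 1 + t * t
    return ((1 - t * t) / d, 2 * t / d)

def circle_add(p, q):
    """Group law on the unit circle: points correspond to angles, so
    addition is given by the cosine/sine addition formulas. Identity is (1, 0)."""
    (x1, y1), (x2, y2) = p, q
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

p = circle_point(Fraction(1, 2))   # (3/5, 4/5)
q = circle_point(Fraction(1, 3))   # (4/5, 3/5)
s = circle_add(p, q)               # (0, 1): the two angles sum to 90 degrees
```

The sum of two rational points is again a rational point on the circle, which is the closure property underlying the group structure.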
Master's
Master in Mathematics
APA, Harvard, Vancouver, ISO, and other styles
23

Prince, Jared. "Exploring the Effect of Different Numbers of Convolutional Filters and Training Loops on the Performance of AlphaZero." TopSCHOLAR®, 2018. https://digitalcommons.wku.edu/theses/3087.

Full text
Abstract:
In this work, the algorithm used by AlphaZero is adapted to dots and boxes, a two-player game. The algorithm is explored with different numbers of convolutional filters and training loops in order to better understand the effect these parameters have on the player's learning. Different board sizes are also tested to compare these parameters in relation to game complexity. AlphaZero originated as a Go player using an algorithm which combines Monte Carlo tree search and convolutional neural networks. This novel approach, integrating a reinforcement learning method previously applied to Go (MCTS) with a supervised learning method (neural networks), led to a player which beat all its competitors.
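For context, the move-selection rule in AlphaZero-style Monte Carlo tree search is the PUCT formula; the sketch below is a generic illustration (the constant c_puct and the (q, prior, visits) representation are assumptions of this sketch, not details from the thesis):

```python
import math

def puct_select(children, c_puct=1.5):
    """AlphaZero-style PUCT selection: each child is (q, prior, visits); pick the
    index maximising q + c_puct * prior * sqrt(total_visits) / (1 + visits)."""
    total = sum(visits for _, _, visits in children)

    def score(child):
        q, prior, visits = child
        return q + c_puct * prior * math.sqrt(total) / (1 + visits)

    return max(range(len(children)), key=lambda i: score(children[i]))
```

The exploration term decays as a child accumulates visits, so unvisited moves with a reasonable prior are tried before the search settles on high-value moves.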
APA, Harvard, Vancouver, ISO, and other styles
24

Barajas, Leandro G. "Process Control in High-Noise Environments Using A Limited Number Of Measurements." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/7741.

Full text
Abstract:
The topic of this dissertation is the derivation, development, and evaluation of novel hybrid algorithms for process control that use a limited number of measurements and that are suitable to operate in the presence of large amounts of process noise. As an initial step, affine and neural network statistical process models are developed in order to simulate the steady-state system behavior. Such models are vitally important in the evaluation, testing, and improvement of all other process controllers referred to in this work. Afterwards, fuzzy logic controller rules are assimilated into a mathematical characterization of a model that includes the modes and mode transition rules that define a hybrid hierarchical process control. The main processing entity in this framework is a closed-loop control algorithm that performs global and then local optimizations in order to asymptotically reach minimum bias error; this is done while requiring a minimum number of iterations in order to promptly reach a desired operational window. The results of this research are applied to the yield optimization of surface-mount technology manufacturing lines. This work achieves a practical degree of control over the solder-paste volume deposition in the Stencil Printing Process (SPP). Results show that it is possible to change the operating point of the process by modifying certain machine parameters and even compensate for the difference in height due to change in print direction.
APA, Harvard, Vancouver, ISO, and other styles
25

Edson, Marcia Ruth. "Around the Fibonacci Numeration System." Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc3676/.

Full text
Abstract:
Let 1, 2, 3, 5, 8, … denote the Fibonacci sequence beginning with 1 and 2, where each subsequent number is the sum of the two previous ones. Every positive integer n can be expressed as a sum of distinct Fibonacci numbers in one or more ways. Setting R(n) to be the number of ways n can be written as a sum of distinct Fibonacci numbers, we exhibit certain regularity properties of R(n), one of which is connected to the Euler φ-function. In addition, using a theorem of Fine and Wilf, we give a formula for R(n) in terms of binomial coefficients modulo two.
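R(n) is straightforward to compute by brute force; a minimal sketch (not from the thesis) counting subsets of distinct Fibonacci numbers that sum to n:

```python
from functools import lru_cache

def R(n):
    """Number of ways to write n as a sum of distinct Fibonacci numbers,
    using the sequence 1, 2, 3, 5, 8, ... as in the abstract."""
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])

    @lru_cache(maxsize=None)
    def count(i, rem):
        if rem == 0:
            return 1
        if i < 0 or rem < 0:
            return 0
        # either skip fibs[i] or use it once (distinctness: i - 1 in both branches)
        return count(i - 1, rem) + count(i - 1, rem - fibs[i])

    return count(len(fibs) - 1, n)
```

For small n one can check directly, e.g. R(3) = 2, from the two representations {3} and {1, 2}.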
APA, Harvard, Vancouver, ISO, and other styles
26

Keller, Mitchel Todd. "Some results on linear discrepancy for partially ordered sets." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33810.

Full text
Abstract:
Tanenbaum, Trenk, and Fishburn introduced the concept of linear discrepancy in 2001, proposing it as a way to measure a partially ordered set's distance from being a linear order. In addition to proving a number of results about linear discrepancy, they posed eight challenges and questions for future work. This dissertation completely resolves one of those challenges and makes contributions on two others. This dissertation has three principal components: 3-discrepancy irreducible posets of width 3, degree bounds, and online algorithms for linear discrepancy. The first principal component of this dissertation provides a forbidden subposet characterization of the posets with linear discrepancy equal to 2 by completing the determination of the posets that are 3-irreducible with respect to linear discrepancy. The second principal component concerns degree bounds for linear discrepancy and weak discrepancy, a parameter similar to linear discrepancy. Specifically, if every point of a poset is incomparable to at most D other points of the poset, we prove three bounds: the linear discrepancy of an interval order is at most D, with equality if and only if it contains an antichain of size D; the linear discrepancy of a disconnected poset is at most the greatest integer less than or equal to (3D-1)/2; and the weak discrepancy of a poset is at most D. The third principal component of this dissertation incorporates another large area of research, that of online algorithms. We show that no online algorithm for linear discrepancy can be better than 3-competitive, even for the class of interval orders. We also give a 2-competitive online algorithm for linear discrepancy on semiorders and show that this algorithm is optimal.
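As a concrete illustration of the parameter (a brute-force sketch, not an algorithm from the dissertation), linear discrepancy can be computed on tiny posets by minimizing, over linear extensions, the largest gap between incomparable elements:

```python
from itertools import permutations

def linear_discrepancy(elements, less_than):
    """Brute-force linear discrepancy (Tanenbaum-Trenk-Fishburn): the minimum,
    over all linear extensions, of the maximum distance between two incomparable
    elements. Only feasible for tiny posets.
    (Element labels are assumed comparable, e.g. integers, to dedupe pairs.)"""
    incomparable = [(a, b) for a in elements for b in elements
                    if a < b and not less_than(a, b) and not less_than(b, a)]
    best = None
    for perm in permutations(elements):
        pos = {x: i for i, x in enumerate(perm)}
        if any(less_than(a, b) and pos[a] > pos[b]
               for a in elements for b in elements):
            continue  # not a linear extension of the partial order
        d = max((abs(pos[a] - pos[b]) for a, b in incomparable), default=0)
        best = d if best is None else min(best, d)
    return best
```

A chain has linear discrepancy 0, while an antichain of n elements has linear discrepancy n - 1, consistent with the degree bound quoted above.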
APA, Harvard, Vancouver, ISO, and other styles
27

Verga, Juliana 1984. "Algoritmo para resolução do problema de fluxo multiproduto Fuzzy." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261869.

Full text
Abstract:
Advisor: Akebo Yamakami
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Graph theory is commonly used in engineering to solve problems that can be represented as networks. Among these, the multicommodity flow problem is one that can be modeled by graphs. This work presents an approach for solving the fuzzy multicommodity flow problem. The problem is modeled by a graph whose nodes represent points of supply and demand of commodities, which travel along the arcs of the network. The proposed algorithm aims to find good feasible solutions for the fuzzy multicommodity flow problem in networks with uncertain costs and capacities, containing multiple origins and multiple destinations. The uncertainties are modeled by fuzzy set theory, which has been successfully applied to problems with uncertainty.
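The dissertation models uncertain costs and capacities with fuzzy sets; the simplest such model is the triangular fuzzy number, sketched here as a generic illustration (not code from the dissertation):

```python
class TriFuzzy:
    """Triangular fuzzy number (a, b, c): membership rises linearly from a to the
    peak b and falls linearly to c; a standard model for an uncertain cost or
    capacity such as 'about b'."""

    def __init__(self, a, b, c):
        assert a <= b <= c
        self.a, self.b, self.c = a, b, c

    def __add__(self, other):
        # fuzzy addition acts componentwise on triangular numbers
        return TriFuzzy(self.a + other.a, self.b + other.b, self.c + other.c)

    def membership(self, x):
        if self.a < x <= self.b:
            return (x - self.a) / (self.b - self.a)
        if self.b < x < self.c:
            return (self.c - x) / (self.c - self.b)
        return 1.0 if x == self.b else 0.0
```

Accumulating fuzzy arc costs along a path then reduces to adding triangular numbers, keeping the uncertainty explicit in the result.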
Master's
Automation
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
28

Vishnoi, Nisheeth Kumar. "Theoretical Aspects of Randomization in Computation." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/6424.

Full text
Abstract:
Randomness has proved to be a powerful tool in all of computation. It is pervasive in areas such as networking, machine learning, computer graphics, optimization, computational number theory and is "necessary" for cryptography. Though randomized algorithms and protocols assume access to "truly" random bits, in practice, they rely on the output of "imperfect" sources of randomness such as pseudo-random number generators or physical sources. Hence, from a theoretical standpoint, it becomes important to view randomness as a resource and to study the following fundamental questions pertaining to it: Extraction: How do we generate "high quality" random bits from "imperfect" sources? Randomization: How do we use randomness to obtain efficient algorithms? Derandomization: How (and when) can we "remove" our dependence on random bits? In this thesis, we consider important problems in these three prominent and diverse areas pertaining to randomness. In randomness extraction, we present extractors for "oblivious bit fixing sources". In (a non-traditional use of) randomization, we have obtained results in machine learning (learning juntas) and proved hardness of lattice problems. While in derandomization, we present a deterministic algorithm for a fundamental problem called "identity testing". In this thesis we also initiate a complexity theoretic study of Hilbert's 17th problem. Here identity testing is used in an interesting manner. A common theme in this work has been the use of tools from areas such as number theory in a variety of ways, and often the techniques themselves are quite interesting.
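For context, the classical randomized approach to identity testing that derandomization results aim to replace is evaluation at random points, justified by the Schwartz-Zippel lemma; a generic sketch (not from the thesis):

```python
import random

def identity_test(p_expr, q_expr, nvars, prime=2**61 - 1, trials=20):
    """Randomized polynomial identity testing via the Schwartz-Zippel lemma:
    two distinct low-degree polynomials agree at a uniform random point with
    probability at most deg/prime, so repeated agreement certifies identity
    with overwhelming probability."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(nvars)]
        if p_expr(*point) % prime != q_expr(*point) % prime:
            return False
    return True
```

Each trial uses a handful of random field elements; derandomization asks when the same guarantee can be achieved with no random bits at all.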
APA, Harvard, Vancouver, ISO, and other styles
29

Junior, Domingos Dellamonica. "Extração de aleatoriedade a partir de fontes defeituosas." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04052007-160412/.

Full text
Abstract:
Recently, Barak et al. (2004) constructed explicit deterministic extractors and dispersers (polynomial-time computable functions) with much better parameters than were previously known. We introduce the concepts involved in this construction and mention some of its applications; in particular, we describe how much better bounds for the bipartite Ramsey problem (a very hard problem) can be obtained using the machinery developed in that paper. We also present original results that improve on these constructions. They are inspired by the work of Anup Rao (2005) and use the recent breakthrough of Jean Bourgain (2005) in obtaining 2-source extractors that break the "1/2-barrier".
APA, Harvard, Vancouver, ISO, and other styles
30

Kreacic, Eleonora. "Some problems related to the Karp-Sipser algorithm on random graphs." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:3b2eb52a-98f5-4af8-9614-e4909b8b9ffa.

Full text
Abstract:
We study certain questions related to the performance of the Karp-Sipser algorithm on the sparse Erdős–Rényi random graph. The Karp-Sipser algorithm, introduced by Karp and Sipser [34], is a greedy algorithm which aims to obtain a near-maximum matching on a given graph. The algorithm evolves through a sequence of steps. In each step, it picks an edge according to a certain rule, adds it to the matching and removes it from the remaining graph. The algorithm stops when the remaining graph is empty. In [34], the performance of the Karp-Sipser algorithm on the Erdős–Rényi random graphs G(n, M = [cn/2]) and G(n, p = c/n), c > 0, is studied. It is proved there that the algorithm behaves near-optimally, in the sense that the difference between the size of a matching obtained by the algorithm and a maximum matching is at most o(n), with high probability as n → ∞. The main result of [34] is a law of large numbers for the size of a maximum matching in G(n, M = cn/2) and G(n, p = c/n), c > 0. Aronson, Frieze and Pittel [2] further refine these results. In particular, they prove that for c < e, the Karp-Sipser algorithm obtains a maximum matching, with high probability as n → ∞; for c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching of G(n, M = cn/2) is of order Θ_{log n}(n^{1/5}), with high probability as n → ∞. They further conjecture a central limit theorem for the size of a maximum matching of G(n, M = cn/2) and G(n, p = c/n) for all c > 0. As noted in [2], the central limit theorem for c < 1 is a consequence of the result of Pittel [45]. In this thesis, we prove a central limit theorem for the size of a maximum matching of both G(n, M = cn/2) and G(n, p = c/n) for c > e. (We do not analyse the case 1 ≤ c ≤ e.) Our approach is based on further analysis of the Karp-Sipser algorithm. We use the results from [2] and refine them.
For c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching is of order Θ_{log n}(n^{1/5}), with high probability as n → ∞, and the study [2] suggests that this difference is accumulated at the very end of the process. The question of how the Karp-Sipser algorithm evolves in its final stages for c > e motivated us to consider the following problem in this thesis. We study a model for the destruction of a random network by fire. Assume we have a multigraph with minimum degree at least 2 and real-valued edge-lengths. We first choose a uniform random point along the length and set it alight. The edges burn at speed 1. If the fire reaches a node of degree 2, it is passed on to the neighbouring edge. On the other hand, a node of degree at least 3 passes the fire either to all its neighbours or to none, each with probability 1/2. If the fire extinguishes before the graph is burnt, we again pick a uniform point and set it alight. We study this model in the setting of a random multigraph with N nodes of degree 3 and α(N) nodes of degree 4, where α(N)/N → 0 as N → ∞. We assume the edges have i.i.d. standard exponential lengths. We are interested in the asymptotic behaviour of the number of fires we must set alight in order to burn the whole graph, and of the number of points which are burnt from two different directions. Depending on whether α(N) » √N or not, we prove that after suitable rescaling these quantities converge jointly in distribution either to a pair of constants or to (complicated) functionals of Brownian motion. Our analysis supports the conjecture that the difference between the size of a matching obtained by the Karp-Sipser algorithm and the size of a maximum matching of the Erdős–Rényi random graph G(n, M = cn/2) for c > e, rescaled by n^{1/5}, converges in distribution.
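The Karp-Sipser rule itself is short to state in code; a minimal illustrative sketch (not from the thesis):

```python
import random

def karp_sipser(graph):
    """Greedy Karp-Sipser matching on an undirected graph {v: set_of_neighbours}:
    match a pendant (degree-1) edge whenever one exists, otherwise a uniformly
    random edge; remove both endpoints and repeat."""
    adj = {v: set(ns) for v, ns in graph.items()}

    def remove(v):
        for w in adj.pop(v, set()):
            adj[w].discard(v)

    matching = []
    while any(adj.values()):
        pendant = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if pendant is not None:
            u, v = pendant, next(iter(adj[pendant]))
        else:
            u = random.choice([v for v, ns in adj.items() if ns])
            v = random.choice(sorted(adj[u]))
        matching.append((u, v))
        remove(u)
        remove(v)
    return matching
```

Matching pendant edges first is always safe (some maximum matching contains them); suboptimality can only enter through the random-edge steps, which is what drives the analysis described above.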
APA, Harvard, Vancouver, ISO, and other styles
31

Castel, Pierre. "Un algorithme de résolution des équations quadratiques en dimension 5 sans factorisation." Phd thesis, Université de Caen, 2011. http://tel.archives-ouvertes.fr/tel-00685260.

Full text
Abstract:
This thesis in algorithmic number theory presents a new probabilistic algorithm for solving quadratic equations over Z or Q in dimension 5 without using factorization. Its complexity is markedly better than that of existing algorithms for solving this kind of equation, and it builds on two algorithms: Simon's, and that of Pollard and Schnorr. After some background on the theory of quadratic forms, we explain how the algorithm works. The remainder consists of a detailed analysis of the algorithm, for which we use an effective version of the Chebotarev density theorem.
APA, Harvard, Vancouver, ISO, and other styles
32

Munsch, Marc. "Moments des fonctions thêta." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4093/document.

Full text
Abstract:
In this thesis, we focus on the study of the theta functions involved in the proof of the functional equation of Dirichlet L-functions. In particular, we adapt some results obtained for L-functions to the case of theta functions. S. Chowla conjectured that Dirichlet L-functions associated to primitive characters χ do not vanish at the central point of their functional equation. Analogously, it is conjectured that theta functions do not vanish at the point 1 for each primitive character. With the aim of proving this conjecture for many characters, we study moments of theta functions in various families. We concentrate on two important families. The first is the family of all Dirichlet characters modulo p, where p is a prime number. In this case, we prove asymptotic formulae for the second and fourth moments of theta functions using diophantine techniques. The second is the set of primitive quadratic characters associated to a fundamental discriminant less than a fixed bound. We give an asymptotic formula for the first moment and an upper bound for the second moment using Mellin transform techniques and estimates of character sums. In both cases, we deduce non-vanishing results for theta functions. We also give an algorithm which, in practice, proves efficient for many characters in establishing the non-vanishing of theta functions on the positive real axis, which in turn implies that the associated L-functions do not vanish on the same axis.
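For concreteness, the theta value at the point 1 can be approximated from its rapidly convergent series; the sketch below (not from the thesis) assumes the standard form θ(1, χ) = Σ_{n≥1} χ(n) e^{−πn²/p} for an even quadratic character, i.e. p ≡ 1 (mod 4):

```python
import math

def theta_at_one(p, terms=200):
    """Partial sum at x = 1 of the theta series sum_{n>=1} chi(n) exp(-pi n^2 x / p)
    for the quadratic character chi(n) = Legendre symbol (n|p). The series form
    assumed here is the one attached to an even character, so take p = 1 (mod 4)."""
    def legendre(n):
        s = pow(n % p, (p - 1) // 2, p)  # Euler's criterion
        return -1 if s == p - 1 else s

    return sum(legendre(n) * math.exp(-math.pi * n * n / p)
               for n in range(1, terms) if n % p != 0)
```

Because the terms decay like e^{−πn²/p}, a couple of hundred terms already pin the value down to many digits, which is why numerical non-vanishing checks of this kind are cheap in practice.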
APA, Harvard, Vancouver, ISO, and other styles
33

Bergou, El Houcine. "Méthodes numériques pour les problèmes des moindres carrés, avec application à l'assimilation de données." Thesis, Toulouse, INPT, 2014. http://www.theses.fr/2014INPT0114/document.

Full text
Abstract:
The Levenberg-Marquardt algorithm (LM) is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this thesis the extension of the LM algorithm to scenarios where the linearized least squares subproblems, of the form min ||Ax - b||^2, are solved inexactly and/or the gradient model is noisy and accurate only within a certain probability. Under appropriate assumptions, we show that the modified algorithm converges globally and almost surely to a first-order stationary point. Our approach is applied to an instance in variational data assimilation where stochastic models of the gradient are computed by the so-called ensemble Kalman smoother (EnKS). A convergence proof in L^p of the EnKS to the Kalman smoother, in the limit of large ensembles, is given. We also show the convergence of the LM-EnKS approach, a variant of the LM algorithm with the EnKS as linear solver, to the classical LM algorithm where the linearized subproblem is solved exactly. The sensitivity of the truncated singular value decomposition method used to solve the linearized subproblems is studied. We formulate an explicit expression for the condition number of the truncated least squares solution, given in terms of the singular values of A and the Fourier coefficients of b.
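A minimal sketch of the classical (exact-subproblem) Levenberg-Marquardt loop that the thesis extends; the damping schedule and the test problem below are illustrative assumptions, not details from the thesis:

```python
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1e-2, iters=100):
    """Minimal Levenberg-Marquardt loop for min ||r(x)||^2: each step solves the
    damped normal equations (J^T J + lam I) dx = -J^T r, accepting the step and
    shrinking lam if the residual decreases, otherwise growing lam."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        res, jac = r(x), J(x)
        dx = np.linalg.solve(jac.T @ jac + lam * np.eye(x.size), -jac.T @ res)
        if np.linalg.norm(r(x + dx)) < np.linalg.norm(res):
            x, lam = x + dx, lam * 0.5   # good step: move closer to Gauss-Newton
        else:
            lam *= 2.0                   # bad step: move closer to gradient descent
    return x

# recover (a, b) = (2, -1) from exact samples of y = a * exp(b * t)
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-t)
r = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)], axis=1)
a, b = levenberg_marquardt(r, J, [1.0, 0.0])
```

The thesis's setting replaces the exact solve and the exact Jacobian-gradient products with inexact and stochastic versions, which is where the almost-sure convergence analysis is needed.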
APA, Harvard, Vancouver, ISO, and other styles
34

Junior, Lucelindo Dias Ferreira. "Sistema de Engenharia Kansei para apoiar a descrição da visão do produto no contexto do Gerenciamento Ágil de Projetos de produtos manufaturados." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/18/18156/tde-09032012-141046/.

Full text
Abstract:
Abstract: Agile Project Management is a useful approach for projects with a high degree of complexity and uncertainty. Two of its hallmarks are: customer involvement in decision-making about the product design; and the use of a product vision, an artifact that represents and communicates the fundamental and high-priority features of the product to be developed. Methods exist to support the creation of the product vision, but they fall short in operationalizing the involvement of the end customer. Kansei Engineering, on the other hand, is a methodology for capturing the needs of a large number of consumers and correlating them to product features. This work presents a detailed study of the Kansei Engineering methodology and analyzes how it can support the description of the product vision in the context of Agile Project Management of manufactured products. To verify this proposition, it then presents the development of a Kansei Engineering System based on Quantification Theory Type I, fuzzy arithmetic, and genetic algorithms, tested on the design of a pen aimed at graduate students. The project was carried out using a set of methods and procedures including systematic literature review, mathematical development, computational development, and a case study. The proposed Kansei Engineering System and the results of the applied case are analyzed to ascertain its potential. The evidence indicates that the Kansei Engineering System is capable of generating requirements on product configurations from the perspective of the potential consumer, and that these configurations are useful for formulating the product vision and for evolving that vision during the product design project.
APA, Harvard, Vancouver, ISO, and other styles
35

Rocha, Eugénio Alexandre Miguel. "Uma Abordagem Algébrica à Teoria de Controlo Não Linear." Doctoral thesis, Universidade de Aveiro, 2003. http://hdl.handle.net/10773/21444.

Full text
Abstract:
Doctorate in Mathematics
Nesta tese de Doutoramento desenvolve-se principalmente uma abordagem algébrica à teoria de sistemas de controlo não lineares. No entanto, outros tópicos são também estudados. Os tópicos tratados são os seguidamente enunciados: fórmulas para sistemas de controlo sobre álgebras de Lie livres, estabilidade de um sistema de corpos rolantes, algoritmos para aritmética digital, e equações integrais de Fredholm não lineares. No primeiro e principal tópico estudam-se representações para as soluções de sistemas de controlo lineares no controlo. As suas trajetórias são representadas pelas chamadas séries de Chen. Estuda-se a representação formal destas séries através da introdução de várias álgebras não associativas e técnicas específicas de álgebras de Lie livres. Sistemas de coordenadas para estes sistemas são estudados, nomeadamente, coordenadas de primeiro tipo e de segundo tipo. Apresenta-se uma demonstração alternativa para as coordenadas de segundo tipo e obtêm-se expressões explícitas para as coordenadas de primeiro tipo. Estas últimas estão intimamente ligadas ao logaritmo da série de Chen que, por sua vez, tem fortes relações com uma fórmula designada na literatura por “continuous Baker-Campbell- Hausdorff formula”. São ainda apresentadas aplicações à teoria de funções simétricas não comutativas. É, por fim, caracterizado o mapa de monodromia de um campo de vectores não linear e periódico no tempo em relação a uma truncatura do logaritmo de Chen. No segundo tópico é estudada a estabilizabilidade de um sistema de quaisquer dois corpos que rolem um sobre o outro sem deslizar ou torcer. Constroem-se controlos fechados e dependentes do tempo que tornam a origem do sistema de dois corpos num sistema localmente assimptoticamente estável. Vários exemplos e algumas implementações em Maple°c são discutidos. 
No terceiro tópico, em apêndice, constroem-se algoritmos para calcular o valor de várias funções fundamentais na aritmética digital, sendo possível a sua implementação em microprocessadores. São também obtidos os seus domínios de convergência. No último tópico, também em apêndice, demonstra-se a existência e unicidade de solução para uma classe de equações integrais não lineares com atraso. O atraso tem um carácter funcional, mostrando-se ainda a diferenciabilidade no sentido de Fréchet da solução em relação à função de atraso.
In this PhD thesis several subjects are studied regarding the following topics: formulas for nonlinear control systems on free Lie algebras, stabilizability of nonlinear control systems, digital arithmetic algorithms, and nonlinear Fredholm integral equations with delay. The first and principal topic is mainly related with a problem known as the continuous Baker-Campbell-Hausdorff exponents. We propose a calculus to deal with formal nonautonomous ordinary differential equations evolving on the algebra of formal series defined on an alphabet. We introduce and connect several (non)associative algebras as Lie, shuffle, zinbiel, pre-zinbiel, chronological (pre-Lie), pre-chronological, dendriform, D-I, and I-D. Most of those notions were also introduced into the universal enveloping algebra of a free Lie algebra. We study Chen series and iterated integrals by relating them with nonlinear control systems linear in control. At the heart of all the theory of Chen series resides a zinbiel and shuffle homomorphism that allows us to construct a purely formal representation of Chen series on algebras of words. It is also given a pre-zinbiel representation of the chronological exponential, introduced by A.Agrachev and R.Gamkrelidze on the context of a tool to deal with nonlinear nonautonomous ordinary differential equations over a manifold, the so-called chronological calculus. An extensive description of that calculus is made, collecting some fragmented results on several publications. It is a fundamental tool of study along the thesis. We also present an alternative demonstration of the result of H.Sussmann about coordinates of second kind using the mentioned tools. This simple and comprehensive proof shows that coordinates of second kind are exactly the image of elements of the dual basis of a Hall basis, under the above discussed homomorphism. 
We obtain explicit expressions for the logarithm of Chen series and the respective coordinates of the first kind by defining several operations on a forest of leaf-labelled trees. Equivalently, we obtain an explicit formula for the functional coefficients of the Lie brackets in a continuous Baker-Campbell-Hausdorff-Dynkin formula when a Hall basis is used. We apply those formulas to relate some noncommutative symmetric functions, and we also connect the monodromy map of a time-periodic nonlinear vector field with a truncation of the Chen logarithm. In the second topic, we study systems of two bodies rolling one over the other without twisting or slipping. Using the Chen logarithm expressions, the monodromy map of a flow, and Lyapunov functions, we construct time-variant controls that turn the origin of a control system linear in the control into a locally asymptotically stable equilibrium point. Stabilizers are also given for control systems whose vector fields generate a nilpotent Lie algebra with degree of nilpotency ≤ 3. Some examples are presented, with implementations in Maple. The third topic, in an appendix, concerns the construction of efficient algorithms for digital arithmetic, potentially for implementation in microprocessors. The algorithms are intended for the computation of several functions such as division, square root, sine, cosine, exponential, and logarithm. Using redundant number representations and Lyapunov stability methods for discrete dynamical systems, we obtain several algorithms (which can be glued together into an algorithm for parallel execution) sharing the same core and selection scheme in each iteration. We also prove their domains of convergence and discuss possible extensions. The last topic, also in an appendix, studies the set of solutions of a class of nonlinear Fredholm integral equations with general delay. The delay is of a functional character, modelled by a continuous lag function.
We ensure existence and uniqueness of a continuous (positive) solution of such an equation. Moreover, under additional conditions, the Fréchet differentiability of the solution with respect to the lag function is obtained.
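For orientation, the classical Baker-Campbell-Hausdorff expansion, whose continuous analogue the thesis studies, begins:

```latex
\log\left(e^{X}e^{Y}\right) = X + Y + \tfrac{1}{2}[X,Y]
  + \tfrac{1}{12}\bigl([X,[X,Y]] + [Y,[Y,X]]\bigr) + \cdots
```

The continuous problem asks for the analogous bracket coefficients for the logarithm of a Chen series, which is what the forest-of-trees formulas above make explicit.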
APA, Harvard, Vancouver, ISO, and other styles
36

Taylor, Ariel Jolishia. "Practicality of algorithmic number theory." 2013. http://hdl.handle.net/2152/22665.

Full text
Abstract:
This report discusses some of the uses of algorithms within number theory. Topics examined include the applications of algorithms in the study of cryptology, the Euclidean Algorithm, prime generating functions, and the connections between algorithmic number theory and high school algebra.
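As a small illustration of the Euclidean Algorithm mentioned above, a minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclidean Algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21, since 252 = 2 * 105 + 42, 105 = 2 * 42 + 21, 42 = 2 * 21
```

Its relevance to cryptology comes from the extended version, which also yields modular inverses.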
APA, Harvard, Vancouver, ISO, and other styles
37

Sorenson, Jonathan Paul. "Algorithms in number theory." 1991. http://catalog.hathitrust.org/api/volumes/oclc/24603560.html.

Full text
Abstract:
Thesis (Ph. D.)--University of Wisconsin--Madison, 1991.
Vita. Typescript. Description based on print version record. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
38

Kruger, Jan Walters. "Generalizing the number of states in Bayesian belief propagation, as applied to portfolio management." Thesis, 1996. https://hdl.handle.net/10539/26225.

Full text
Abstract:
A research report submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in partial fulfillment of the requirements for the degree of Master of Science.
This research report describes the use of Pearl's algorithm in Bayesian belief networks to induce a belief network from a database. With a solid grounding in probability theory, the Pearl algorithm allows belief updating by propagating the likelihoods of leaf nodes (variables) and the prior probabilities. The Pearl algorithm was originally developed for binary variables, and a generalization to more states is investigated. The data used to test this new method, in a portfolio-management context, are the return and various attributes of companies listed on the Johannesburg Stock Exchange (JSE). The results of this model are then compared to a linear regression model. The Bayesian method is found to perform better than the linear regression approach.
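The update underlying belief propagation with more than two states is the same Bayes-rule product of prior and likelihood, renormalized. A minimal sketch with hypothetical numbers (the function name and the three-state example are our illustration, not the report's actual network):

```python
def update_belief(prior, likelihood):
    """Pointwise product of prior and likelihood, renormalized (Bayes rule)."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical three-state node (e.g. Return is low / medium / high).
posterior = update_belief([0.5, 0.3, 0.2], [0.1, 0.4, 0.9])
print(posterior)
```

Generalizing from two to k states replaces scalar messages with length-k vectors, but each node still combines incoming vectors exactly as above.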
Andrew Chakane 2018
APA, Harvard, Vancouver, ISO, and other styles
39

Tietken, Tom, and Nikolaus Castell-Castell. "The factoring of large integers by the novel Castell-Fact-Algorithm, 12th part." 2020. https://slub.qucosa.de/id/qucosa%3A71696.

Full text
Abstract:
The Prague Research Institute owns a self-developed algorithm (the so-called 'Castell-fact-algorithm') which is able to factorize unlimited large integers in an elegant and fast way. Because experts are ignoring our information about it, or even contradicting it (saying 'it is not possible'), we hereby file another quickly developed, small algorithm as a 'teaser' (the so-called 'Tietken-Castell-Prime-Algorithm'), which demonstrates the simple, efficient and creative operating principles of the Prague Research Institute. We call this Tietken-Castell-Prime-Algorithm 'creative' because it does not really create or identify prime numbers itself (we are still working on that assignment), but achieves the same effect by a simple indirect procedure: with the assistance of a self-constructing and accumulating register (the so-called 'Tietken-Castell register'), prime numbers can a) be created as well as b) identified, and even big numbers, as far as they are already registered, can practically be 'factorized' by reading their prime factors out of the register.
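Read literally, the accumulating register resembles a growing table of primes from which factorizations are "read out". The following is a speculative sketch under that reading; the function names and mechanism are our own illustration, not the Institute's actual algorithm:

```python
def extend_register(register, limit):
    """Grow an accumulating register of primes by trial division up to `limit`."""
    candidate = register[-1] + 1 if register else 2
    while candidate <= limit:
        if all(candidate % p for p in register if p * p <= candidate):
            register.append(candidate)
        candidate += 1
    return register

def read_out_factors(n, register):
    """'Factorize' n by reading its prime factors out of the register."""
    factors = []
    for p in register:
        while n % p == 0:
            factors.append(p)
            n //= p
    return factors if n == 1 else None  # None: some factor not yet registered

reg = extend_register([], 50)
print(read_out_factors(391, reg))  # [17, 23]
```

As the sketch makes plain, such a lookup approach only "factorizes" numbers whose prime factors are already in the register.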
APA, Harvard, Vancouver, ISO, and other styles
40

Yeh, Chi-Kuang. "Optimal regression design under second-order least squares estimator: theory, algorithm and applications." Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9765.

Full text
Abstract:
In this thesis, we first review the current development of optimal regression designs under the second-order least squares estimator in the literature. The criteria include A- and D-optimality. We then introduce a new formulation of the A-optimality criterion so that the results can be extended to c-optimality, which has not been studied before. Following Kiefer's equivalence results, we derive the optimality conditions for A-, c- and D-optimal designs under the second-order least squares estimator. In addition, we study the number of support points for various regression models, including Peleg models, trigonometric models, and regular and fractional polynomial models. A generalized scale-invariance property for D-optimal designs is also explored. Furthermore, we discuss a computing algorithm for finding optimal designs numerically. Several interesting applications are presented, and the related MATLAB code is provided in the thesis.
Graduate
APA, Harvard, Vancouver, ISO, and other styles
41

Bird, William Herbert. "Graph Distinguishability and the Generation of Non-Isomorphic Labellings." Thesis, 2013. http://hdl.handle.net/1828/4839.

Full text
Abstract:
A distinguishing colouring of a graph G is a labelling of the vertices of G with colours such that no non-trivial automorphism of G preserves all colours. The distinguishing number of G is the minimum number of colours in a distinguishing colouring. This thesis presents a survey of the history of distinguishing colouring problems and proves new bounds and computational results about distinguishability. An algorithm to generate all labellings of a graph up to isomorphism is presented and compared to a previously published algorithm. The new algorithm is shown to have performance competitive with the existing algorithm, as well as being able to process automorphism groups far larger than the previous limit. A specialization of the algorithm is used to generate all minimal distinguishing colourings of a set of graphs with large automorphism groups and compute their distinguishing numbers.
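As a toy illustration of the definitions above (a brute-force sketch of our own, far from the thesis's algorithm, which handles large automorphism groups), the distinguishing number of the path on three vertices can be computed directly:

```python
from itertools import product

def distinguishing_number(n_vertices, automorphisms):
    """Smallest number of colours admitting a vertex colouring that no
    non-trivial automorphism preserves (automorphisms as vertex permutations)."""
    identity = tuple(range(n_vertices))
    nontrivial = [tuple(a) for a in automorphisms if tuple(a) != identity]
    for k in range(1, n_vertices + 1):
        for colouring in product(range(k), repeat=n_vertices):
            # A colouring distinguishes if every non-trivial automorphism
            # sends some vertex to a differently coloured vertex.
            if all(any(colouring[a[v]] != colouring[v] for v in range(n_vertices))
                   for a in nontrivial):
                return k
    return n_vertices

# Path 0-1-2: automorphism group is {identity, reflection}.
print(distinguishing_number(3, [(0, 1, 2), (2, 1, 0)]))  # 2
```

Enumerating all k^n colourings is only feasible for tiny graphs, which is exactly why generating labellings up to isomorphism, as in the thesis, matters.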
Graduate
APA, Harvard, Vancouver, ISO, and other styles
42

Castell-Castell, Nikolaus, and Tom Tietken. "Possibilities to identify prime numbers without RSA decryption algorithm and to decipher RSA encryptions indirectly (using a special list)." 2021. https://slub.qucosa.de/id/qucosa%3A74372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Caley, Timothy. "The Prouhet-Tarry-Escott problem." Thesis, 2012. http://hdl.handle.net/10012/7205.

Full text
Abstract:
Given natural numbers n and k, with n>k, the Prouhet-Tarry-Escott (PTE) problem asks for distinct subsets of Z, say X={x_1,...,x_n} and Y={y_1,...,y_n}, such that x_1^i+...+x_n^i=y_1^i+...+y_n^i for i=1,...,k. Many partial solutions to this problem were found in the late 19th century and early 20th century. When k=n-1, we call a solution X =_{n-1} Y ideal. This is considered to be the most interesting case. Ideal solutions have been found using elementary methods, elliptic curves, and computational techniques. This thesis focuses on the ideal case. We extend the framework of the problem to number fields, and prove generalizations of results from the literature. This information is used along with computational techniques to find ideal solutions to the PTE problem in the Gaussian integers. We also extend a computation from the literature and find new lower bounds for the constant C_n associated to ideal PTE solutions. Further, we present a new algorithm that determines whether an ideal PTE solution with a particular constant exists. This algorithm improves the upper bounds for C_n and in fact, completely determines the value of C_6. We also examine the connection between elliptic curves and ideal PTE solutions. We use quadratic twists of curves that appear in the literature to find ideal PTE solutions over number fields.
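The power-sum conditions above are easy to check directly; a minimal Python sketch verifying the classical ideal solution {1,5,6}, {2,3,7} with n=3, k=2:

```python
def is_pte_solution(xs, ys, k):
    """Check the PTE power-sum conditions for i = 1, ..., k."""
    return all(sum(x**i for x in xs) == sum(y**i for y in ys)
               for i in range(1, k + 1))

# Classical ideal solution with n = 3, k = n - 1 = 2:
# 1 + 5 + 6 = 2 + 3 + 7 = 12 and 1 + 25 + 36 = 4 + 9 + 49 = 62.
print(is_pte_solution([1, 5, 6], [2, 3, 7], 2))  # True
```

The same check fails for i = 3, as it must for an ideal (k = n-1) solution with n = 3.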
APA, Harvard, Vancouver, ISO, and other styles
44

Moldenhauer, Carsten. "Game tree search algorithms for the game of cops and robber." Master's thesis, 2009. http://hdl.handle.net/10048/642.

Full text
Abstract:
Thesis (M. Sc.)--University of Alberta, 2009.
Title from PDF file main screen (viewed on Oct. 19, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta." Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
45

Henderson, Philip. "Playing and solving the game of Hex." Phd thesis, 2010. http://hdl.handle.net/10048/1311.

Full text
Abstract:
The game of Hex is of interest to the mathematics, algorithms, and artificial intelligence communities. It is a classical PSPACE-complete problem, and its invention is intrinsically tied to the Four Colour Theorem and the well-known strategy-stealing argument. Nash, Shannon, Tarjan, and Berge are among the mathematicians who have researched and published about this game. In this thesis we expand on previous research, further developing the mathematical theory and algorithmic techniques relating to Hex. In particular, we identify new classes of moves that can be pruned from consideration, and devise new algorithms to identify connection strategies efficiently. As a result of these theoretical improvements, we produce an automated solver capable of solving all 8 x 8 Hex openings and most 9 x 9 Hex openings; this marks the first time that computers have solved all Hex openings solved by humans. We also produce the two strongest automated Hex players in the world --- Wolve and MoHex --- and obtain both the gold and silver medals in the 2008 and 2009 International Computer Olympiads.
APA, Harvard, Vancouver, ISO, and other styles
46

Castell-Castell, Nikolaus, and Tom Tietken. "Das Faktorisieren großer Zahlen mittels des neuen CASTELL-FACT-ALGORITHMUS, 12. Teil: Als Teilaspekt hier die Einführung des neuen TIETKEN-CASTELL-PRIM-ALGORITHMUS zur indirekten, eindeutigen und korrekten Identifizierung und Herstellung von Primzahlen (prime numbers) unbegrenzter Größe." 2020. https://slub.qucosa.de/id/qucosa%3A71695.

Full text
Abstract:
The Prague Research Institute possesses a self-developed algorithm (the so-called 'Castell-fact-algorithm') which factorizes unlimited large integers elegantly and quickly. Since the expert community ignores this claim, or even expressly disputes it as 'not possible', a small algorithm developed within a week is supplied here as a 'teaser' (the so-called 'Tietken-Castell-prime-algorithm'), intended to demonstrate the simple, efficient and creative working method of the Prague Research Institute. This Tietken-Castell-prime-algorithm is creative because it does not actually produce prime numbers itself, or identify external prime numbers as such (the Prague Research Institute is still working on that project), but achieves the same effect through a simple indirect procedure: by means of a self-building, ever-growing register (the so-called 'Tietken-Castell register'), a) prime numbers can be produced and b) identified, and, depending on the size already reached by the successively growing register, c) correspondingly large numbers that arose from primes can be decomposed into those primes, simply by reading these prime factors out of the register.
APA, Harvard, Vancouver, ISO, and other styles
47

Castell-Castell, Nikolaus, and Tom Tietken. "Das Faktorisieren großer Zahlen mittels des neuen CASTELL-FACT-ALGORITHMUS, 12. Teil - Fortsetzung." 2020. https://slub.qucosa.de/id/qucosa%3A71829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Castell-Castell, Nikolaus, and Tom Tietken. "Moeglichkeiten, auch ohne RSA-Entschluesselungs-Algorithmus Primzahlen zu identifizieren und RSA-Verschluesselungen (mittels einer speziellen Liste) indirekt zu dechiffrieren." 2021. https://slub.qucosa.de/id/qucosa%3A74371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

(10732485), Clinton W. Bradford. "Square Forms Factoring with Sieves." Thesis, 2021.

Find full text
Abstract:
Square Form Factoring is an O(N^{1/4}) factoring algorithm developed by D. Shanks using certain properties of quadratic forms. Central to the original algorithm is an iterative search for a square form. We propose a new subexponential-time algorithm called SQUFOF2, based on ideas of D. Shanks and R. de Vogelaire, which replaces the iterative search with a sieve, similar to the Quadratic Sieve.
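A generic sketch of the factor-extraction step shared by congruence-of-squares methods (an illustration of the underlying idea, not the SQUFOF2 sieve itself):

```python
from math import gcd

def factor_from_squares(x, y, n):
    """Given x^2 ≡ y^2 (mod n), gcd(x - y, n) is a (possibly trivial) factor of n."""
    assert (x * x - y * y) % n == 0
    return gcd(x - y, n)

# 32^2 = 1024 ≡ 81 = 9^2 (mod 943), and 943 = 23 * 41.
print(factor_from_squares(32, 9, 943))  # 23
```

Square-form methods differ from sieve methods in how they find the congruence: Shanks's algorithm walks a cycle of reduced quadratic forms of discriminant related to N until a square form appears.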
APA, Harvard, Vancouver, ISO, and other styles
