Dissertations on the topic "Arithmetic applications"
Consult the top 50 dissertations for your research on the topic "Arithmetic applications".
Hanss, Michael. "Applied fuzzy arithmetic : an introduction with engineering applications." Berlin [u.a.] : Springer, 2005. http://www.loc.gov/catdir/enhancements/fy0662/2004117177-d.html.
Lee, Peter. "Hybrid-logarithmic arithmetic and applications." Thesis, University of Kent, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.633518.
Taïbi, Olivier. "Two arithmetic applications of Arthur's work." Palaiseau, Ecole polytechnique, 2014. https://tel.archives-ouvertes.fr/pastel-01066463/document.
We present two arithmetic applications of James Arthur's endoscopic classification of the discrete automorphic spectrum for symplectic and orthogonal groups. The first one consists in removing the irreducibility assumption in a theorem of Richard Taylor describing the image of complex conjugations by p-adic Galois representations associated with regular, algebraic, essentially self-dual, cuspidal automorphic representations of GL_{2n+1} over a totally real number field. We also extend it to the case of representations of GL_{2n} whose multiplicative character is "odd". We use a p-adic deformation argument; more precisely, we prove that on the eigenvarieties for symplectic and even orthogonal groups there are "many" points corresponding to (quasi-)irreducible Galois representations. Arthur's endoscopic classification is used to define these Galois representations, and also to transfer self-dual automorphic representations of the general linear group to these classical groups. The second application concerns the explicit computation of dimensions of spaces of automorphic or modular forms. Our main contribution is an algorithm computing orbital integrals at torsion elements of an unramified p-adic classical group, for the unit of the unramified Hecke algebra. It allows computing the geometric side of Arthur's trace formula, and thus the Euler characteristic of the discrete spectrum in level one. Arthur's endoscopic classification allows us to analyse this Euler characteristic precisely, and to deduce the dimensions of spaces of level-one automorphic forms. The dimensions of spaces of vector-valued Siegel modular forms, a more classical problem, are easily derived.
Cheng, Lo Sing. "Efficient finite field arithmetic with cryptographic applications." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26871.
Göbel, Benjamin [Verfasser], and Ulf [Akademischer Betreuer] Kühn. "Arithmetic Local Coordinates and Applications to Arithmetic Self-Intersection Numbers / Benjamin Göbel. Betreuer: Ulf Kühn." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2015. http://d-nb.info/1075317495/34.
Göbel, Benjamin [Verfasser], and Ulf [Akademischer Betreuer] Kühn. "Arithmetic Local Coordinates and Applications to Arithmetic Self-Intersection Numbers / Benjamin Göbel. Betreuer: Ulf Kühn." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2015. http://nbn-resolving.de/urn:nbn:de:gbv:18-74739.
Handley, W. G. "Some machine characterizations of classes close to Δ0'IN." Thesis, University of Manchester, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375068.
Dyer, A. K. "Applications of sieve methods in number theory." Thesis, Bucks New University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384646.
Viale, Matteo. "Applications of the proper forcing axiom to cardinal arithmetic." Paris 7, 2006. http://www.theses.fr/2006PA07A003.
Zhu, Dalin. "Residue number system arithmetic inspired applications in cellular downlink OFDMA." Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/2070.
Oudjida, Abdelkrim Kamel. "Binary Arithmetic for Finite-Word-Length Linear Controllers : MEMS Applications." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2001/document.
This thesis addresses the problem of optimal hardware realization of finite-word-length (FWL) linear controllers dedicated to MEMS applications. The biggest challenge is to ensure satisfactory control performance with minimal hardware. To this end, two distinct but complementary optimizations can be undertaken: in control theory and in binary arithmetic. Only the latter is involved in this work. Because MEMS applications are targeted, the binary arithmetic must be fast enough to cope with the rapid dynamics of MEMS; power-efficient for embedded control; highly scalable for easy adjustment of the control performance; and easily predictable, to provide a precise idea of the required logic resources before implementation. The exploration of a number of binary arithmetics showed that radix-2r is the best candidate fitting the aforementioned requirements. It has been fully exploited to design efficient multiplier cores, which are the real engine of linear systems. The radix-2r arithmetic was applied to the hardware integration of two FWL structures: a linear time-variant PID controller and a linear time-invariant LQG controller with a Kalman filter. Both controllers showed a clear superiority over their existing counterparts, or in comparison to their initial forms.
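The radix-2r recoding at the heart of such multiplier cores turns a constant multiplier into a short sum of shifted small multiples. A minimal sketch in Python, using a common signed-digit variant (not necessarily the exact recoding scheme of the thesis):

```python
def recode_radix_2r(c: int, r: int) -> list[int]:
    """Recode a non-negative integer into signed digits d_i with
    |d_i| <= 2**(r-1) and c == sum(d_i * 2**(r*i))."""
    digits = []
    while c != 0:
        d = c % (1 << r)               # low r bits
        if d >= 1 << (r - 1):          # map to a signed digit
            d -= 1 << r
        digits.append(d)
        c = (c - d) >> r               # exact division by 2**r
    return digits

def mul_by_constant(x: int, digits: list[int], r: int) -> int:
    """k*x as a sum of shifted small multiples of x (k given by digits)."""
    return sum((d * x) << (r * i) for i, d in enumerate(digits))
```

Because every digit is bounded by 2**(r-1), a hardware multiplier only needs a small set of precomputed multiples, which is what makes the scheme predictable in logic resources.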
Lazda, Christopher David. "Rational homotopy theory in arithmetic geometry : applications to rational points." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/24707.
Nibouche, O. "High performance computer arithmetic architectures for image and signal processing applications." Thesis, Queen's University Belfast, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395217.
Koutsianas, Angelos. "Applications of S-unit equations to the arithmetic of elliptic curves." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/86760/.
Zhao, Kaiyong. "A multiple-precision integer arithmetic library for GPUs and its applications." HKBU Institutional Repository, 2011. http://repository.hkbu.edu.hk/etd_ra/1237.
Colomar, Maria Fernanda Pallares. "Applications of the first consistency proof presented by Gentzen for Peano arithmetic." Pontifícia Universidade Católica do Rio de Janeiro, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4126@1.
In the anthology of Gentzen's works edited by M. E. Szabo and published in 1969, an appendix transcribes some passages, presented by Bernays to the editor, belonging to a first consistency proof for Peano Arithmetic by Gentzen that had not been published until then. Unlike the other consistency proofs by Gentzen already known in the thirties, this proof does not use transfinite induction up to ε₀. Instead, it is based on the definition of a reduction process for sequents that is systematically associated with every derivable sequent, allowing us to recognize it as true. We reconstructed this proof with some variations and studied how the main technique used (the definition of the reduction process) can be seen in relation to results of classical first-order logic such as completeness proofs. The central part of our dissertation is a version of this consistency proof for a formal system for Heyting Arithmetic.
Vonk, Jan Bert. "The Atkin operator on spaces of overconvergent modular forms and arithmetic applications." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:081e4e46-80c1-41e7-9154-3181ccb36313.
Reading, Alan G. "On counting problems in nonstandard models of Peano arithmetic with applications to groups." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/5421/.
Rivero Salgado, Óscar. "Arithmetic applications of the Euler systems of Beilinson-Flach elements and diagonal cycles." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671865.
This thesis studies arithmetic applications of the so-called Euler systems, which are Galois cohomology classes varying compatibly over towers of fields. Following a rather general philosophy introduced by Perrin-Riou, the image of these Euler systems under certain regulators allows us to recover the p-adic L-function attached to a Galois representation. We focus mainly on the systems of Beilinson-Flach elements and of diagonal cycles, although we also study others that share properties with them and help us understand them better. Recall that several works over the last decade had already used these Euler systems to prove new cases of the equivariant Birch and Swinnerton-Dyer conjecture, one of the great mathematical challenges of our time. The arithmetic applications discussed in this monograph are diverse: exceptional zeros, special-value formulas, non-vanishing results, connections with Iwasawa theory... The first chapters of the thesis study an exceptional-zero phenomenon. Recall that p-adic L-functions interpolate, over a certain region, values of the complex L-function, up to certain Euler factors. The vanishing of these factors usually gives rise to interesting arithmetic phenomena. This, far from being accidental, admits an algebraic interpretation in terms of Selmer groups. For example, the vanishing at s=0 of the Kubota-Leopoldt function is related to the presence of an extra p-unit in the corresponding component of the unit group, and its logarithm is related to the derivative of the p-adic L-function. This is one of Gross's best-known conjectures. Here we begin by studying the case of the adjoint representation of a weight-one modular form.
In this case, we prove a conjecture of Darmon, Lauder and Rotger expressing the value of the derivative of the associated p-adic L-function in terms of a combination of logarithms of units and p-units in the field cut out by the representation. The proof uses the theory of p-adic L-functions and improved p-adic L-functions, as well as Galois deformations. We also observe a phenomenon that complements this study. The p-adic L-functions involved are the image of the Beilinson-Flach Euler system under the Perrin-Riou map. These exceptional zeros are also observed at the level of Euler systems, and one can introduce a notion of derived class that allows us to recover the L-invariant governing the arithmetic of the Galois representation. Moreover, with this notion of derivative we can give an alternative proof of the previous result by exploiting the geometry of these systems. This first part of the thesis is complemented by two chapters where we treat the exceptional-zero phenomenon both for elliptic units and for diagonal cycles. The final chapters delve into other questions around Euler systems, beginning with the development of an Artin formalism at the level of cohomology classes. The most basic case consists in considering a cuspidal eigenform of weight 2 that is congruent to an Eisenstein series. The cohomology class associated with f decomposes as the sum of two components modulo p. We suggest congruences relating each of them to expressions involving circular units. This uses, on the one hand, factorizations of p-adic L-functions and reciprocity laws; on the other, we recover some results of Fukaya-Kato developed in the course of the proof of Sharifi's conjectures.
Карнаушенко, В. П., and А. В. Бородин. "Field Programmable Counter Arrays Integration with Field Programmable Gates Arrays." Thesis, NURE, MC&FPGA, 2019. https://mcfpga.nure.ua/conf/2019-mcfpga/10-35598-mcfpga-2019-004.
O'Neill, Adam. "Stronger security notions for trapdoor functions and applications." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37109.
Sano, Kaoru. "Growth rate of height functions associated with ample divisors and its applications." Kyoto University, 2019. http://hdl.handle.net/2433/242570.
Arikan, Ali Ferda. "Structural models for the pricing of corporate securities and financial synergies : applications with stochastic processes including arithmetic Brownian motion." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5416.
Wong, Kenneth Koon-Ho. "Applications of finite field computation to cryptology : extension field arithmetic in public key systems and algebraic attacks on stream ciphers." Queensland University of Technology, 2008. http://eprints.qut.edu.au/17570/.
Wigren, Thomas. "The Cauchy-Schwarz inequality : Proofs and applications in various spaces." Thesis, Karlstads universitet, Avdelningen för matematik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-38196.
Métairie, Jérémy. "Contribution aux opérateurs arithmétiques GF(2m) et leurs applications à la cryptographie sur courbes elliptiques." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S023/document.
The cryptography and security market is growing at an annual rate of 17% according to some recent studies. Cryptography is the science of secrecy. It is based on hard mathematical problems such as integer factorization and the well-known discrete logarithm problem. Although those problems are trusted, software or hardware implementations of cryptographic algorithms can suffer from inherent weaknesses. Execution time, power consumption, etc., can differ depending on secret information such as the secret key. Because of that, malicious attacks can exploit these weak points and thereby break the whole cryptosystem. In this thesis, we are interested in protecting physical devices from so-called side-channel attacks, as well as in proposing new GF(2^m) multiplication algorithms used in elliptic curve cryptography. As a protection, we first thought that parallel scalar multiplication (using halve-and-add and double-and-add algorithms executed at the same time) would be a great countermeasure against template attacks. We showed that it was not the case and that parallelism could not be used as a protection by itself: it had to be combined with more conventional countermeasures. We also proposed two new GF(2^m) representations, which we respectively named permuted normal basis (PNB) and Phi-RNS. These two representations, under some requirements, can offer a great time-area trade-off on FPGAs.
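Generic GF(2^m) multiplication, the operation such theses accelerate, is a carry-less polynomial product followed by reduction modulo an irreducible polynomial. A schoolbook sketch (illustrative only; the PNB and Phi-RNS representations proposed in the thesis work differently):

```python
def gf2m_mul(a: int, b: int, f: int, m: int) -> int:
    """Multiply a and b in GF(2^m) = GF(2)[x]/(f), polynomials encoded
    as integers (bit i = coefficient of x^i); f has degree m."""
    # carry-less (XOR-based) polynomial product
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # reduce modulo f, clearing bits from the top down
    for i in range(p.bit_length() - 1, m - 1, -1):
        if (p >> i) & 1:
            p ^= f << (i - m)
    return p

# Example in the AES field GF(2^8) with f = x^8 + x^4 + x^3 + x + 1 (0x11B):
# multiplying 0x02 by 0x80 reduces x^8 to x^4 + x^3 + x + 1 = 0x1B.
```

Hardware implementations replace the bit-serial loop with parallel XOR trees; the choice of basis (polynomial, normal, permuted normal) changes how that tree is wired.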
Silva, Salatiel Dias da. "Estudo do binômio de Newton." Universidade Federal da Paraíba, 2013. http://tede.biblioteca.ufpb.br:8080/handle/tede/7526.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This work deals with the study of binomial expansions, begun in the late years of elementary school when notable products are treated, and complemented in the second year of high school with the study of Newton's binomial. We make a detailed study of it, through a historical overview of the subject and the properties of the arithmetic triangle (Pascal's/Tartaglia's triangle), reaching the binomial theorem and, finally, some applications of these results in solving various problems, in the multinomial expansion and in the binomial series.
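The arithmetic-triangle construction discussed above follows the multiplicative recurrence C(n, k+1) = C(n, k)·(n-k)/(k+1), and the binomial theorem can be checked numerically:

```python
def pascal_row(n: int) -> list[int]:
    """Row n of Pascal's triangle: [C(n,0), ..., C(n,n)]."""
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))  # exact integer arithmetic
    return row

def binomial_expand(x: int, y: int, n: int) -> int:
    """Right-hand side of the binomial theorem:
    sum of C(n,k) * x**k * y**(n-k), which equals (x + y)**n."""
    return sum(c * x**k * y**(n - k) for k, c in enumerate(pascal_row(n)))
```

Summing a row also recovers the classical identity that row n of the triangle sums to 2^n.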
Pongyupinpanich, Surapong [Verfasser], Manfred [Akademischer Betreuer] Glesner, Michael [Akademischer Betreuer] Hübner, Andreas [Akademischer Betreuer] Binder, Harald [Akademischer Betreuer] Klingbeil, and Hans [Akademischer Betreuer] Eveking. "Optimal Design of Fixed-Point and Floating-Point Arithmetic Units for Scientific Applications / Surapong Pongyupinpanich. Betreuer: Manfred Glesner ; Michael Hübner ; Andreas Binder ; Harald Klingbeil ; Hans Eveking." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2012. http://d-nb.info/1106117581/34.
Chakhari, Aymen. "Évaluation analytique de la précision des systèmes en virgule fixe pour des applications de communication numérique." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S059/document.
Traditionally, accuracy evaluation is performed through two different approaches. The first approach is to simulate the fixed-point implementation in order to assess its performance. These simulation-based approaches require large computing capacities and lead to prohibitive evaluation times. To avoid this problem, the work done in this thesis focuses on approaches based on accuracy evaluation through analytical models. These models describe the behavior of the system through analytical expressions that evaluate a defined precision metric. Several analytical models have been proposed to evaluate the fixed-point accuracy of linear time-invariant (LTI) systems and of non-LTI non-recursive and recursive linear systems. The objective of this thesis is to propose analytical models to evaluate the accuracy of digital communication systems and digital signal processing algorithms made up of non-smooth and non-linear operators in terms of noise. In a first step, analytical models for the accuracy evaluation of decision operators and of their iterations and cascades are provided. In a second step, an optimization of the data word-length is given for the fixed-point hardware implementation of the decision feedback equalizer (DFE), based on the proposed analytical models, and for iterative decoding algorithms such as turbo decoding and LDPC (Low-Density Parity-Check) decoding under a particular quantization law. The first aspect of this work concerns the proposition of analytical models for evaluating the accuracy of non-smooth decision operators and of cascades of decision operators. The characterization of quantization error propagation in a cascade of decision operators is the basis of the proposed analytical models. These models are then applied to evaluate the accuracy of the SSFE (Selective Spanning with Fast Enumeration) sphere decoding algorithm used in MIMO (Multiple-Input Multiple-Output) transmission systems.
In a second step, the accuracy evaluation of iterative structures of decision operators is considered. The characterization of quantization errors caused by the use of fixed-point arithmetic leads to analytical models for evaluating the accuracy of digital signal processing applications including iterative decision structures. A second approach, based on the estimation of an upper bound of the decision error probability in the convergence mode, is proposed to evaluate the accuracy of these applications while reducing the evaluation time. These models are applied to the problem of evaluating the fixed-point specification of the decision feedback equalizer (DFE). The estimation of resources and power consumption on the FPGA is then obtained using the Xilinx tools, in order to make a proper choice of data widths aiming at an accuracy/cost compromise. The last step of our work concerns the fixed-point modeling of iterative decoding algorithms. A model of the turbo decoding algorithm and of LDPC decoding is then given. This approach integrates the particular structure of these algorithms, which implies that the quantities calculated in the decoder and the operations are quantized following an iterative approach. Furthermore, the fixed-point representation used is different from the conventional representation based on the number of bits assigned to the integer and fractional parts: the proposed approach is based on the dynamic range and the total number of bits. Besides, the choice of dynamic range gives more flexibility to fixed-point models, since it is not limited to powers of two.
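The non-smooth behavior of decision operators discussed above can be illustrated with a toy slicer: quantization rarely flips a decision, but when it does the output error is large, which is why dedicated analytical or selective evaluation matters. A hypothetical BPSK example with uniformly distributed inputs (not one of the thesis's models):

```python
import random

def decide(x: float) -> int:
    """Hard decision operator (BPSK slicer): a non-smooth sign function."""
    return 1 if x >= 0 else -1

def quantize(x: float, frac_bits: int) -> float:
    """Round x to the nearest multiple of 2**-frac_bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def decision_error_rate(num_samples: int, frac_bits: int, seed: int = 0) -> float:
    """Fraction of samples whose decision flips after quantization."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(num_samples):
        x = rng.uniform(-1.0, 1.0)
        if decide(x) != decide(quantize(x, frac_bits)):
            flips += 1
    return flips / num_samples
```

Only inputs within half a quantization step of the threshold can flip, so the flip rate shrinks geometrically with the word-length while each flip remains a full-amplitude error.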
Baktir, Selcuk. "Efficient algorithms for finite fields, with applications in elliptic curve cryptography." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0501103-132249.
Keywords: multiplication; OTF; optimal extension fields; finite fields; optimal tower fields; cryptography; OEF; inversion; finite field arithmetic; elliptic curve cryptography. Includes bibliographical references (p. 50-52).
Joldes, Mioara Maria. "Approximations polynomiales rigoureuses et applications." PhD thesis, Ecole normale supérieure de Lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00657843.
Brunie, Nicolas. "Contribution à l'arithmétique des ordinateurs et applications aux systèmes embarqués." Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0894/document.
In the last decades, embedded systems have been challenged with an ever-greater variety of applications, each more constrained than the last. This implies an ever-growing need for performance and energy efficiency in arithmetic units. This work studies solutions ranging from hardware to software to improve arithmetic support in embedded systems. Some of these solutions were integrated in Kalray's MPPA processor. The first part of this work focuses on floating-point arithmetic support in the MPPA. It starts with the design of a floating-point unit (FPU) based on the classical FMA (fused multiply-add) operator. The improvements we suggest, implement and evaluate include a mixed-precision FMA, a 3-operand add and a 2D scalar product, each time with a single rounding and support for subnormal numbers. It then considers the implementation of division and square root. The FPU is reused and modified to optimize the software implementations of those primitives at a lower cost. Finally, this first part opens up on the development of a code generator designed for the implementation of highly optimized mathematical libraries in different contexts (architecture, accuracy, latency, throughput). The second part studies a reconfigurable coprocessor, a hardware operator that can be dynamically modified to adapt on the fly to various applicative needs. It intends to provide performance close to ASIC implementations, with some of the flexibility of software. One of the challenges addressed is the integration of such a reconfigurable coprocessor into the low-power embedded cluster of the MPPA. Another is the development of a software framework targeting the coprocessor and allowing design-space exploration. The last part of this work leaves micro-architecture considerations to study the efficient use of parallel arithmetic resources.
It presents an improvement of regular architectures (Single Instruction Multiple Data), like those found in graphics processing units (GPUs), for the execution of divergent control-flow graphs.
Filip, Silviu-Ioan. "Robust tools for weighted Chebyshev approximation and applications to digital filter design." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEN063/document.
The field of signal processing methods and applications frequently relies on powerful results from numerical approximation. One such example, at the core of this thesis, is the use of Chebyshev approximation methods for designing digital filters. In practice, the finite nature of numerical representations adds an extra layer of difficulty to the design problems we wish to address using digital filters (audio and image processing being two domains which rely heavily on filtering operations). Most of the current mainstream tools for this job are neither optimized, nor do they provide certificates of correctness. We wish to change this, with some of the groundwork being laid by the present work. The first part of the thesis deals with the study and development of Remez/Parks-McClellan-type methods for solving weighted polynomial approximation problems in floating-point arithmetic. They are very scalable and numerically accurate in addressing finite impulse response (FIR) design problems. However, in embedded and power-hungry settings, the format of the filter coefficients uses a small number of bits and other methods are needed. We propose a (quasi-)optimal approach based on the LLL algorithm which is more tractable than exact approaches. We then proceed to integrate these aforementioned tools in a software stack for FIR filter synthesis on FPGA targets. The results obtained are both resource-consumption efficient and possess guaranteed accuracy properties. In the end, we present an ongoing study on Remez-type algorithms for rational approximation problems (which can be used for infinite impulse response (IIR) filter design) and the difficulties hindering their robustness.
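The coefficient-quantization problem that the LLL-based method addresses can be illustrated with its simplest competitor, naive rounding. The window length, cutoff and word-length below are arbitrary illustrative choices:

```python
import math

def ideal_lowpass(num_taps: int, cutoff: float) -> list[float]:
    """Truncated ideal lowpass FIR impulse response (rectangular window)."""
    mid = (num_taps - 1) / 2
    h = []
    for n in range(num_taps):
        t = n - mid
        h.append(cutoff / math.pi if t == 0 else math.sin(cutoff * t) / (math.pi * t))
    return h

def quantize_coeffs(coeffs: list[float], frac_bits: int) -> list[float]:
    """Naive round-to-nearest on a fixed-point grid of step 2**-frac_bits."""
    scale = 1 << frac_bits
    return [round(c * scale) / scale for c in coeffs]
```

Rounding each coefficient independently bounds the per-coefficient error by 2**-(frac_bits+1), but it is generally suboptimal for the filter's frequency-domain error, which is what lattice-reduction-based quantization targets.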
Thomé, Emmanuel. "Théorie algorithmique des nombres et applications à la cryptanalyse de primitives cryptographiques." Habilitation à diriger des recherches, Université de Lorraine, 2012. http://tel.archives-ouvertes.fr/tel-00765982.
Nehmeh, Riham. "Quality Evaluation in Fixed-point Systems with Selective Simulation." Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0020/document.
Time-to-market and implementation cost are high-priority considerations in the automation of digital hardware design. Nowadays, digital signal processing applications use fixed-point architectures due to their advantages in terms of implementation cost. Thus, floating-point to fixed-point conversion is mandatory. The conversion process consists of two parts corresponding to the determination of the integer part word-length and the fractional part word-length. The refinement of fixed-point systems requires optimizing data word-lengths to prevent overflows and excessive quantization noise while minimizing implementation cost. Applications in the image and signal processing domains are tolerant to errors if their probability or their amplitude is small enough. Numerous research works focus on optimizing the fractional part word-length under an accuracy constraint. Reducing the number of bits of the fractional part word-length leads to a small error compared to the signal amplitude. Perturbation theory can be used to propagate these errors inside the systems, except for unsmooth operations, like decision operations, for which a small error at the input can lead to a large error at the output. Likewise, optimizing the integer part word-length can significantly reduce the cost when the application is tolerant to a low probability of overflow. Overflows lead to errors of high amplitude and thus their occurrence must be limited. For word-length optimization, the challenge is to evaluate efficiently the effect of overflow and unsmooth errors on the application quality metric. The high amplitude of these errors requires using simulation-based approaches to evaluate their effects on quality. In this thesis, we aim at accelerating the process of quality metric evaluation. We propose a new framework using selective simulation to accelerate the simulation of overflow and unsmooth error effects.
This approach can be applied to any C-based digital signal processing application. Compared to complete fixed-point simulation based approaches, where all the input samples are processed, the proposed approach simulates the application only when an error occurs. Indeed, overflows and unsmooth errors must be rare events to maintain the system functionality. Consequently, selective simulation allows reducing significantly the time required to evaluate the application quality metric. Moreover, we focus on optimizing the integer part, which can significantly decrease the implementation cost when a slight degradation of the application quality is acceptable. Indeed, many applications are tolerant to overflows if the probability of overflow occurrence is low enough. Thus, we exploit the proposed framework in a new integer word-length optimization algorithm. The combination of the optimization algorithm and the selective simulation technique allows decreasing significantly the optimization time.
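The overflow events that selective simulation isolates can be modeled with a small two's-complement quantizer; the word-length split below is an arbitrary example, not a setting taken from the thesis:

```python
def to_fixed(x: float, int_bits: int, frac_bits: int):
    """Quantize x on a signed fixed-point format with wrap-around on
    overflow. Returns (quantized value, overflow flag)."""
    total = int_bits + frac_bits
    q = round(x * (1 << frac_bits))
    lo, hi = -(1 << (total - 1)), (1 << (total - 1)) - 1
    overflowed = not (lo <= q <= hi)
    if overflowed:                        # two's-complement wrap
        q = ((q - lo) % (1 << total)) + lo
    return q / (1 << frac_bits), overflowed

# Selective-simulation idea: run the costly quality evaluation only for
# the (rare) samples whose overflow flag is set.
```

A wrapped overflow turns, for example, +3.0 into -1.0 on a 2.6-bit format, which is exactly the kind of large, rare error that makes averaged noise models inapplicable.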
Robert, Jean-Marc. "Contrer l'attaque Simple Power Analysis efficacement dans les applications de la cryptographie asymétrique, algorithmes et implantations." Thesis, Perpignan, 2015. http://www.theses.fr/2015PERP0039/document.
The development of online communications and the Internet have made encrypted data exchange grow rapidly. This has been made possible by the development of asymmetric cryptographic protocols, which make use of arithmetic computations such as modular exponentiation of large integers or elliptic curve scalar multiplication. These computations are performed by various platforms, including smart cards as well as large and powerful servers. The platforms are subject to attacks taking advantage of information leaked through side channels, such as instantaneous power consumption or electromagnetic radiation. In this thesis, we improve the performance of cryptographic computations resistant to Simple Power Analysis. On modular exponentiation, we propose to use multiple multiplications sharing a common operand to achieve this goal. On elliptic curve scalar multiplication, we suggest three different improvements: over binary fields, we make use of the improved combined operations AB, AC and AB+CD applied to the double-and-add, halve-and-add and double/halve-and-add approaches, and to the Montgomery ladder; over binary fields, we propose a parallel Montgomery ladder; we make an implementation of a parallel approach based on the right-to-left double-and-add algorithm over binary and prime fields, and extend this implementation to halve-and-add and double/halve-and-add over binary fields.
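The Montgomery ladder mentioned above resists Simple Power Analysis because each exponent (or scalar) bit triggers exactly one multiplication and one squaring, whatever its value. A sketch for modular exponentiation (the thesis treats the elliptic-curve analogue):

```python
def montgomery_ladder_pow(g: int, e: int, n: int) -> int:
    """Compute g**e mod n with a regular operation sequence:
    every exponent bit costs exactly one multiply and one square."""
    r0, r1 = 1, g % n                 # invariant: r1 == r0 * g (mod n)
    for bit in bin(e)[2:]:            # scan bits most-significant first
        if bit == '1':
            r0 = (r0 * r1) % n
            r1 = (r1 * r1) % n
        else:
            r1 = (r0 * r1) % n
            r0 = (r0 * r0) % n
    return r0
```

Because the same operations execute for both bit values, a simple power trace does not reveal the exponent bits; data-dependent leakage still requires further countermeasures, as the thesis discusses.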
Dia, Roxana. "Towards Environment Perception using Integer Arithmetic for Embedded Application." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM038.
Повний текст джерелаThe main drawback of using grid-based representations for SLAM and for global localization is the exponential computational complexity they require in terms of grid size (of the map and of the pose maps). The grid needed to model the environment surrounding a robot or a vehicle can be in the order of billions of cells. For instance, a 2D square space of 100 m × 100 m with a cell size of 10 cm is modelled by a grid of 1 million cells. If we add 2 m of height to represent the third dimension, 20 million cells are required. Consequently, classical grid-based SLAM and global localization approaches require a parallel computing unit in order to meet the latency imposed by safety standards. Such computation is usually done on workstations embedding Graphical Processing Units (GPUs) and/or high-end CPUs. However, autonomous vehicles cannot carry such platforms for cost and certification reasons. These platforms also have a high power consumption that cannot be met by the limited energy source available in some robots. Embedded hardware platforms are commonly used as an alternative in automotive applications. These platforms meet the low-cost, low-power and small-footprint constraints. Moreover, some of them are automotive certified, following the ISO 26262 standard. However, most of them are not equipped with a floating-point unit, which limits their computational performance. The sigma-fusion project team in the LIALP laboratory at CEA-Leti has developed an integer-based perception method suitable for embedded devices. This method builds an occupancy grid via Bayesian fusion using integer arithmetic only, hence its "embeddability" on embedded computing platforms without a floating-point unit.
This constitutes the major contribution of the PhD thesis of Tiana Rakotovao [Rakotovao Andriamahefa 2017]. The objective of the present PhD thesis is to extend the integer perception framework to the SLAM and global localization problems, thus offering solutions "embeddable" on embedded systems.
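The integer-only Bayesian fusion idea can be sketched as follows (an illustrative simplification, not the CEA-Leti method): each cell stores the log-odds of occupancy scaled to an integer, so a Bayes update becomes a saturated integer addition. All constants here (evidence increments, saturation bounds) are assumptions.

```python
# Occupancy-grid fusion with integer arithmetic only: in log-odds form,
# the Bayes update L_post = L_prior + L_meas is a plain addition, so no
# floating-point unit is needed if log-odds are kept as scaled integers.

L_MIN, L_MAX = -127, 127   # saturation bounds, fits a signed byte (assumption)

def fuse(cell_logodds, meas_logodds):
    """Bayesian fusion of one measurement into one cell, in integer log-odds."""
    s = cell_logodds + meas_logodds
    return max(L_MIN, min(L_MAX, s))

def update_grid(grid, hits, l_occ=8, l_free=-4):
    """Apply integer evidence increments: +l_occ to cells hit by the
    sensor, l_free to cells observed empty (increments are assumptions)."""
    return [fuse(l, l_occ if i in hits else l_free)
            for i, l in enumerate(grid)]
```

A cell's occupancy probability is recovered only when needed, by mapping the integer log-odds back through a small lookup table.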
Godau, Claudia. "Cognitive bases of spontaneous shortcut use in primary school arithmetic." Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17110.
Повний текст джерелаFlexible use of task-appropriate solving strategies is an important goal in mathematical education and an educational standard of elementary school mathematics. Children need to decide spontaneously whether they calculate arithmetic problems the usual way or whether they invest time and effort to search for shortcut options and apply them. The focus of the current work lies on how students can be supported in spotting and applying shortcut strategies flexibly. Contextual factors are investigated that support the spontaneous use of shortcuts and influence the transfer between them. Cognitive theories about how mathematical concepts and strategies develop were combined with findings from research on expertise, which disclose differences between the flexibility of experts and novices. In line with the iterative development of mathematical concepts, successfully spotting and applying a shortcut might thus benefit from factors activating conceptual and/or procedural knowledge. Shortcuts based on commutativity (a + b = b + a) are used as a test case to investigate three contextual factors (instruction, association and estimation) which support or hinder spontaneous strategy use. Overall, the dissertation shows that spontaneous strategy use can be supported by some contextual factors and impeded by others. These contextual factors can, in principle, be controlled in the school environment.
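The commutativity test case can be illustrated with a small sketch (hypothetical code, not the materials used in the studies): a solver that spots when a new addition problem is a reordering of one already solved (a + b = b + a) and reuses the answer instead of recomputing.

```python
def commuted_lookup(problem, solved):
    """Spot a commutativity shortcut: if an addition problem is a
    reordering of one already solved, reuse its answer; otherwise
    compute it the usual way and remember it."""
    key = tuple(sorted(problem))      # order-free signature of the addends
    if key in solved:
        return solved[key], True      # shortcut applied
    answer = sum(problem)             # compute the usual way
    solved[key] = answer
    return answer, False
```

For example, after solving 3 + 9, the commuted problem 9 + 3 is answered from memory, which mirrors the savings a child gains from spontaneously recognizing the shortcut.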
Miyamoto, Kenji. "Program extraction from coinductive proofs and its application to exact real arithmetic." Diss., Ludwig-Maximilians-Universität München, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-177777.
Повний текст джерелаThe method of program extraction has its origin in constructive mathematics and has recently attracted much interest not only among mathematicians but also among computer scientists. From the standpoint of mathematics, its aim is to read off the computational meaning of proofs, while from the standpoint of computer science its aim is to study a method for obtaining provably correct programs. It is therefore natural to have, besides theoretical results, a practical computer system with which executable programs can be developed via program extraction. In this thesis, a computational interpretation of constructive proofs with inductive and coinductive definitions is given and studied. The interpretation works by translating the computational content of proofs into a programming language. This translation is called program extraction; it is based on Kreisel's modified realizability. We study the proof-theoretic foundations of program extraction and extend the proof assistant Minlog on the basis of the theoretical results obtained. Once a formula has been formally proved in Minlog, a program can be extracted from the proof, and this extracted program can be executed in Minlog. Moreover, extracted programs are provably correct with respect to the corresponding formula, due to a soundness theorem which we prove. Within our formal theory we work through several case studies known from the literature in the field of exact real arithmetic. We develop a complete formalization of the corresponding proofs and discuss the programs automatically extracted by Minlog.
Chelton, William N. "Galois Field GF(2^m) Arithmetic Circuits and Their Application in Elliptic Curve Cryptography." Thesis, University of Sheffield, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.490334.
Повний текст джерелаKatayama, Shin-ichi. "A Theorem on the Cohomology of Groups and some Arithmetical Applications." 京都大学 (Kyoto University), 1985. http://hdl.handle.net/2433/86360.
Повний текст джерелаLo, Haw-Jing. "Design of a reusable distributed arithmetic filter and its application to the affine projection algorithm." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28199.
Повний текст джерелаCommittee Chair: Anderson, Dr. David V.; Committee Member: Hasler, Dr. Paul E.; Committee Member: Mooney, Dr. Vincent J.; Committee Member: Taylor, Dr. David G.; Committee Member: Vuduc, Dr. Richard.
Miyamoto, Kenji [Verfasser], and Helmut [Akademischer Betreuer] Schwichtenberg. "Program extraction from coinductive proofs and its application to exact real arithmetic / Kenji Miyamoto. Betreuer: Helmut Schwichtenberg." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2013. http://d-nb.info/1065610521/34.
Повний текст джерелаMiyamoto, Kenji [Verfasser], and Helmut [Akademischer Betreuer] Schwichtenberg. "Program extraction from coinductive proofs and its application to exact real arithmetic / Kenji Miyamoto. Betreuer: Helmut Schwichtenberg." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-177777.
Повний текст джерелаNajahi, Mohamed amine. "Synthesis of certified programs in fixed-point arithmetic, and its application to linear algebra basic blocks." Thesis, Perpignan, 2014. http://www.theses.fr/2014PERP1212.
Повний текст джерелаTo be cost effective, embedded systems are shipped with low-end microprocessors. These processors are dedicated to one or a few tasks that are highly demanding on computational resources. Examples of widely deployed tasks include the fast Fourier transform, convolutions, and digital filters. For these tasks to run efficiently, embedded systems programmers favor fixed-point arithmetic over the standardized but costly floating-point arithmetic. However, they face two difficulties: first, writing fixed-point code is tedious and requires the programmer to be in charge of every arithmetical detail; second, because of the low dynamic range of fixed-point numbers compared to floating-point numbers, there is a persistent belief that fixed-point computations are inherently inaccurate. The first part of this thesis addresses these two limitations as follows: it shows how to design and implement tools to automatically synthesize fixed-point programs. Next, to strengthen the user's confidence in the synthesized code, analytic methods are suggested to generate certificates. These certificates can be checked using a formal verification tool, and assert that the rounding errors of the generated code are indeed below a given threshold. The second part of the thesis is a study of the trade-offs involved when generating fixed-point code for linear algebra basic blocks. It gives experimental data on fixed-point synthesis for matrix multiplication and matrix inversion through Cholesky decomposition.
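The per-operation rounding-error bounds that such certificates accumulate can be illustrated on a single fixed-point multiplication (an illustrative sketch, not the thesis tools; the Q-format helpers and their names are assumptions):

```python
def to_q(x, frac_bits):
    """Encode a real value in Q(frac_bits) fixed point (round to nearest)."""
    return round(x * (1 << frac_bits))

def from_q(q, frac_bits):
    """Decode a Q(frac_bits) integer back to a real value."""
    return q / (1 << frac_bits)

def q_mul(a, b, frac_bits):
    """Multiply two Q(frac_bits) integers, rounding back to Q(frac_bits).

    The product is exact in 2*frac_bits fractional bits; rounding it to
    nearest bounds this operation's error by half an ulp, 2**-(frac_bits+1).
    A certificate sums such per-operation bounds over the whole code.
    """
    prod = a * b                          # exact intermediate product
    half = 1 << (frac_bits - 1)
    return (prod + half) >> frac_bits     # round to nearest
```

For example, in Q8, 1.5 × 0.25 is computed exactly (384 × 64 rounds to 96, i.e. 0.375), while inexact products stay within the half-ulp bound of the true product of the encoded operands.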
Parashar, Karthick. "Optimisations de niveau système pour les algorithmes de traitement du signal utilisant l'arithmétique virgule fixe." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00783806.
Повний текст джерелаHachami, Saïd. "Périodes hermitiennes des courbes et application à une formule de chowla-selberg." Nancy 1, 1988. http://www.theses.fr/1988NAN10142.
Повний текст джерелаGerner, Alexander. "A novel strategy for estimating groundwater recharge in arid mountain regions and its application to parts of the Jebel Akhdar Mountains (Sultanate of Oman)." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-137045.
Повний текст джерелаIn arid regions, mountain catchments account for a major share of the total natural water yield. Because groundwater tables are generally deep, the subsurface flow to the adjacent alluvial aquifer (mountain-front recharge), as distinct from surface runoff at the mountain front, is of particular importance. The extent of the subsurface catchment is often vague. Approaches for estimating mountain-front recharge are mostly based on groundwater data and integrate over time and space. They cannot, however, provide prognostic or time-dependent estimates of the inflow to the adjacent alluvial aquifer. A rainfall-based approach is therefore proposed here. The proposed new concept combines three approaches to address these challenges. A newly developed conceptual hydrological model driven by distributed rainfall provides monthly groundwater recharge values. It builds on non-linear relationships between rainfall and groundwater recharge for defined hydrologically homogeneous units and seasons. Their derivation is based on a mass balance and accounts for the main recharge mechanisms. The parameterization relies on expert knowledge of geomorphology and rainfall characteristics. Fuzzy arithmetic is used to account for uncertainties in a complementary mean annual water balance. This allows imprecision in the rainfall input, in the crop water demand of mountain oases, and in the best available estimates of recharge as a fraction of rainfall to be treated efficiently. Uncertainties in the spatial extent of the subsurface catchments are described by continuous surfaces indicating the degree of membership to a given geographical entity (fuzzy regions).
Defined subsets of these fuzzy regions are then used as potential groundwater catchments in the water balance considerations. The proposed approach was applied in an arid, partly karstified mountain region in the north of the Sultanate of Oman. The two complementary approaches for estimating groundwater recharge yielded comparable long-term mean values. These also agreed well with the results of inverse groundwater modelling. The plausibility of the recharge rates for specific hydrologically homogeneous units and seasons supports the reliability of the results of the conceptual hydrological model. Evidently, the less intense winter rainfalls in particular contribute substantially to groundwater recharge. The uncertainty regarding the extent of the groundwater catchment amounts to about 30% of the mean annual yield. The complementary consideration of neighbouring groundwater catchments is a conceivable way to reduce this uncertainty in the future. Hydrogeological exploration and monitoring of groundwater levels in the alluvial aquifer, particularly near the mountain front, would contribute substantially to further substantiating the results of this study. Beyond this case study, this recommendation applies to comparable systems in which a mountain catchment feeds the aquifer of the adjacent plain.
Masoudi, Pedram. "Application of hybrid uncertainty-clustering approach in pre-processing well-logs." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S023/document.
Повний текст джерелаIn subsurface geology, characterizing geological beds from well-logs is an uncertain task. The thesis mainly concerns the vertical resolution of well-logs (question 1). In a second stage, fuzzy arithmetic is applied to experimental petrophysical relations to project the uncertainty range of the inputs onto the outputs, here irreducible water saturation and permeability (question 2). Regarding the first question, the logging mechanism is modelled by fuzzy membership functions. The vertical resolution of the membership function (VRmf) is larger than the spacing and the sampling rate. Due to the volumetric mechanism of logging, a volumetric Nyquist frequency is proposed. Developing a geometric simulator for generating synthetic-logs of a single thin-bed enabled us to analyse the sensitivity of well-logs to the presence of a thin-bed. Regression-based relations between ideal-logs (simulator inputs) and synthetic-logs (simulator outputs) are used as deconvolution relations for removing the shoulder-bed effect of thin-beds from GR, RHOB and NPHI well-logs. The NPHI deconvolution relation is applied to a real case where the core porosity of a thin-bed is 8.4%. The NPHI well-log reads 3.8%, and the deconvolved NPHI is 11.7%. Since it is not reasonable for the core porosity (effective porosity) to be higher than NPHI (total porosity), the deconvolved NPHI is more accurate than the NPHI well-log, which shows that the shoulder-bed effect is reduced in this case. The thickness of the same thin-bed was also estimated to be 13±7.5 cm, which is compatible with the thickness of the thin-bed in the core box (<25 cm). In situ thickness is usually less than the thickness in the core boxes, since at the earth's surface there is no overburden pressure and the cores are weathered. Dempster-Shafer Theory (DST) was used to create a well-log uncertainty range.
While the VRmf of the well-logs is more than 60 cm, the VRmf of the belief and plausibility functions (the boundaries of the uncertainty range) is about 15 cm. The VRmf is thus improved, at the price of losing certainty in the well-log value. In comparison with the geometric method, the DST-based algorithm reduced the uncertainty range of the GR, RHOB and NPHI logs by 100%, 71% and 66%, respectively. In the next step, cluster analysis is applied to NPHI, RHOB and DT in order to provide a cluster-based uncertainty range. NPHI is then calibrated by the core porosity value in each cluster, showing a low √MSE compared to five conventional porosity estimation models (at least a 33% improvement in √MSE). Fuzzy arithmetic is then applied to calculate fuzzy numbers for irreducible water saturation and permeability. The fuzzy number of irreducible water saturation provides better (less overestimated) results than the crisp estimation. It is found that when the cluster interval of porosity is not compatible with the core porosity, the permeability fuzzy numbers are not valid, e.g. in well#4. Finally, in the possibilistic approach (fuzzy theory), the right uncertainty interval can be achieved by calibrating the α-cut with respect to the scale of the study.
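The fuzzy-arithmetic step can be sketched with α-cuts (illustrative only, not the thesis code): each α-cut of a triangular fuzzy number is an interval, and for a monotone increasing petrophysical relation the extension principle reduces to evaluating the relation at the interval endpoints.

```python
def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (lo, mode, hi): an interval
    that shrinks from (lo, hi) at alpha=0 to the mode at alpha=1."""
    lo, m, hi = tri
    return (lo + alpha * (m - lo), hi - alpha * (hi - m))

def project(interval, f):
    """Project an input interval through a monotone increasing relation f:
    under monotonicity, the extension principle reduces to evaluating f
    at the two endpoints."""
    a, b = interval
    return (f(a), f(b))
```

For example, a triangular porosity fuzzy number such as (0.08, 0.12, 0.15), projected through a hypothetical monotone porosity-permeability relation at several α levels, yields nested permeability intervals; calibrating which α level to report is then a matter of matching the interval width to the scale of the study.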