Academic literature on the topic "Randomization of arithmetic"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Randomization of arithmetic".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Randomization of arithmetic"

1

Emiris, Ioannis Z. "A Complete Implementation for Computing General Dimensional Convex Hulls." International Journal of Computational Geometry & Applications 8, no. 2 (April 1998): 223–53. http://dx.doi.org/10.1142/s0218195998000126.

Abstract
We present a robust implementation of the Beneath-Beyond algorithm for computing convex hulls in arbitrary dimension. Certain techniques used are of independent interest in the implementation of geometric algorithms. In particular, two important, and often complementary, issues are studied, namely exact arithmetic and degeneracy. We focus on integer arithmetic and propose a general and efficient method for its implementation based on modular arithmetic. We suggest that probabilistic modular arithmetic may be of wide interest, as it combines the advantages of modular arithmetic with the speed of randomization. The use of perturbations as a method to cope with input degeneracy is also illustrated. A computationally efficient scheme is implemented which, moreover, greatly simplifies the task of programming. We concentrate on postprocessing, often perceived as the Achilles' heel of perturbations. Experimental results illustrate the dependence of running time on the various input parameters and attempt a comparison with existing programs. Lastly, we discuss the visualization capabilities of our software and illustrate them for problems in computational algebraic geometry. All code is publicly available.
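As a rough illustration of the "probabilistic modular arithmetic" idea mentioned in this abstract, the following Python sketch evaluates an integer determinant exactly by reducing it modulo a few randomly chosen primes and recombining the residues with the Chinese Remainder Theorem. The function names and the use of sympy.randprime as a prime source are ours, not the paper's.

```python
# Sketch (not the paper's code) of randomized modular arithmetic for exact
# integer computation: reduce modulo a few random primes, work on the residues,
# and recombine with the Chinese Remainder Theorem.
from math import prod
from sympy import randprime   # any convenient source of random primes would do

def det_mod_p(a, p):
    """Determinant of an integer matrix modulo a prime p (Gaussian elimination)."""
    a = [[x % p for x in row] for row in a]
    n, det = len(a), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if a[r][i]), None)
        if pivot is None:
            return 0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            det = -det
        det = det * a[i][i] % p
        inv = pow(a[i][i], -1, p)
        for r in range(i + 1, n):
            fac = a[r][i] * inv % p
            a[r] = [(a[r][c] - fac * a[i][c]) % p for c in range(n)]
    return det % p

def crt(residues, moduli):
    """Recombine residues into a signed value modulo prod(moduli)."""
    m = prod(moduli)
    x = sum(r * (m // mi) * pow(m // mi, -1, mi) for r, mi in zip(residues, moduli)) % m
    return x if x <= m // 2 else x - m

def exact_det(a, num_primes=4, bits=30):
    # Random primes, distinct with overwhelming probability; enough of them so
    # that their product exceeds twice the Hadamard bound of the matrix.
    primes = [randprime(2 ** (bits - 1), 2 ** bits) for _ in range(num_primes)]
    return crt([det_mod_p(a, p) for p in primes], primes)

print(exact_det([[3, 1, 4], [1, 5, 9], [2, 6, 5]]))   # -90, exactly
```

Randomizing the choice of primes makes a systematically unlucky set of moduli improbable, which is the flavour of randomization the abstract refers to.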
2

Poirier, Julia, G. Y. Zou, and John Koval. "Confidence intervals for a difference between lognormal means in cluster randomization trials." Statistical Methods in Medical Research 26, no. 2 (September 29, 2014): 598–614. http://dx.doi.org/10.1177/0962280214552291.

Abstract
Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existent confidence interval procedures either make restricting assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.
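To make the "method of variance estimates recovery" (MOVER) step concrete, here is a heavily simplified Python sketch for a single lognormal arithmetic mean, ignoring the cluster-level random effect that the paper models; the helper names are ours, and the code only illustrates how separate limits for the log-mean and the half-variance are recombined and exponentiated.

```python
# Heavily simplified sketch of the MOVER idea for one lognormal arithmetic
# mean; the cluster-level random effect handled in the paper is ignored, and
# the helper names (mover_sum, lognormal_mean_ci) are ours, not the authors'.
import numpy as np
from scipy import stats

def mover_sum(est1, l1, u1, est2, l2, u2):
    """MOVER limits for theta1 + theta2 from separate limits for each part."""
    est = est1 + est2
    lower = est - np.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
    upper = est + np.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
    return est, lower, upper

def lognormal_mean_ci(y, alpha=0.05):
    """CI for E[Y] = exp(mu + sigma^2 / 2) from positive observations y."""
    x = np.log(np.asarray(y, dtype=float))
    n, m, s2 = len(x), x.mean(), x.var(ddof=1)
    # Separate intervals for mu and for sigma^2 / 2 ...
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    mu_l, mu_u = m - t * np.sqrt(s2 / n), m + t * np.sqrt(s2 / n)
    v_l = (n - 1) * s2 / (2 * stats.chi2.ppf(1 - alpha / 2, n - 1))
    v_u = (n - 1) * s2 / (2 * stats.chi2.ppf(alpha / 2, n - 1))
    # ... are recombined on the log scale and exponentiated.
    est, lo, hi = mover_sum(m, mu_l, mu_u, s2 / 2, v_l, v_u)
    return np.exp(est), np.exp(lo), np.exp(hi)

rng = np.random.default_rng(1)
print(lognormal_mean_ci(rng.lognormal(mean=1.0, sigma=0.8, size=40)))
```

A confidence interval for the difference between two treatment-arm means would then be obtained by applying the analogous MOVER formula for a difference to the two back-transformed estimates.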
3

Gao, Pengfei, Hongyi Xie, Fu Song, and Taolue Chen. "A Hybrid Approach to Formal Verification of Higher-Order Masked Arithmetic Programs." ACM Transactions on Software Engineering and Methodology 30, no. 3 (May 2021): 1–42. http://dx.doi.org/10.1145/3428015.

Abstract
Side-channel attacks, which are capable of breaking secrecy via side-channel information, pose a growing threat to the implementation of cryptographic algorithms. Masking is an effective countermeasure against side-channel attacks by removing the statistical dependence between secrecy and power consumption via randomization. However, designing efficient and effective masked implementations turns out to be an error-prone task. Current techniques for verifying whether masked programs are secure are limited in their applicability and accuracy, especially when they are applied. To bridge this gap, in this article, we first propose a sound type system, equipped with an efficient type inference algorithm, for verifying masked arithmetic programs against higher-order attacks. We then give novel model-counting-based and pattern-matching-based methods that are able to precisely determine whether the potential leaky observable sets detected by the type system are genuine or simply spurious. We evaluate our approach on various implementations of arithmetic cryptographic programs. The experiments confirm that our approach outperforms the state-of-the-art baselines in terms of applicability, accuracy, and efficiency.
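The masking countermeasure that the verified programs implement can be pictured with a minimal first-order additive sharing. The snippet below is only a toy sketch (byte-sized modulus, illustrative names), not one of the masked gadgets the paper analyzes.

```python
# Minimal sketch of first-order additive (arithmetic) masking modulo 2**8, the
# kind of countermeasure whose higher-order generalizations the paper verifies;
# modulus, names, and values are illustrative only.
import secrets

Q = 256  # one-byte working modulus

def mask(secret):
    """Split a secret byte into two shares whose sum is the secret mod Q."""
    r = secrets.randbelow(Q)
    return (secret - r) % Q, r

def masked_add(xs, ys):
    """Addition is done share-wise, without ever recombining the secrets."""
    return (xs[0] + ys[0]) % Q, (xs[1] + ys[1]) % Q

def unmask(shares):
    return sum(shares) % Q

a, b = 42, 99
sa, sb = mask(a), mask(b)
assert unmask(masked_add(sa, sb)) == (a + b) % Q
print(sa, sb)   # each share on its own is uniformly distributed, independent of a and b
```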
4

Hegland, Markus. "Numerical methods for computing the greatest common divisor of univariate polynomials using floating point arithmetic." ANZIAM Journal 60 (August 15, 2019): C127–C139. http://dx.doi.org/10.21914/anziamj.v60i0.14059.

Abstract
Computing the greatest common divisor (GCD) for two polynomials in floating point arithmetic is computationally challenging and even standard library software might return the result GCD=1 even when the polynomials have a nontrivial GCD. Here we review Euclid's algorithm and test a variant for a class of random polynomials. We find that our variant of Euclid's method often produces an acceptable result. However, close monitoring of the norm of the vector of coefficients of the intermediate polynomials is required. References: R. M. Corless, P. M. Gianni, B. M. Trager, and S. M. Watt. The singular value decomposition for polynomial systems. In Proceedings of the 1995 International Symposium on Symbolic and Algebraic Computation, ISSAC '95, pages 195–207. ACM, 1995. doi:10.1145/220346.220371. H. J. Stetter. Numerical polynomial algebra. SIAM, 2004. doi:10.1137/1.9780898717976. Z. Zeng. The numerical greatest common divisor of univariate polynomials. In Randomization, relaxation, and complexity in polynomial equation solving, volume 556 of Contemp. Math., pages 187–217. Amer. Math. Soc., 2011. doi:10.1090/conm/556/11014.
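A hedged sketch of the kind of Euclid-style iteration the paper studies is given below: repeated floating-point polynomial division, with each remainder rescaled so that coefficient norms stay under control and a tolerance deciding when a remainder counts as zero. The tolerance and the normalisation are our simplifications, not the paper's exact variant.

```python
# Hedged sketch (not the paper's exact variant) of a Euclid-style GCD iteration
# in floating point: repeated polynomial division, with each remainder rescaled
# and its coefficient norm monitored against a tolerance.
import numpy as np

def poly_gcd(f, g, tol=1e-8):
    """Approximate monic GCD of two polynomials given by coefficient arrays."""
    f = np.trim_zeros(np.asarray(f, dtype=float), 'f')
    g = np.trim_zeros(np.asarray(g, dtype=float), 'f')
    while g.size:
        _, r = np.polydiv(f, g)
        r = np.trim_zeros(np.atleast_1d(r), 'f')
        norm = np.linalg.norm(r)          # the quantity that must be monitored
        if norm < tol:                    # remainder numerically zero: g is the GCD
            break
        f, g = g, r / norm                # rescale to keep coefficients tame
    return g / g[0]

# (x - 1)(x - 2) and (x - 1)(x + 3) share the factor x - 1.
print(poly_gcd(np.poly([1.0, 2.0]), np.poly([1.0, -3.0])))   # approximately [1, -1]
```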
5

Pearce, S. C. "Some Design Problems in Crop Experimentation. I. The Use of Blocks." Experimental Agriculture 31, no. 2 (April 1995): 191–204. http://dx.doi.org/10.1017/s0014479700025278.

Abstract
SUMMARY: Field experiments are commonly designed in blocks and there are sound reasons for using them, for example, control of unwanted environmental differences and administrative convenience. If used, they should be chosen to correspond to perceived differences in the site or to simplify farm operations and not merely to conform to statistical desiderata. Thus, it is not essential that each must contain one plot for each treatment, though there are advantages if they do. Some of the consequences of using other block sizes are examined, it being borne in mind that modern computer packages will perform most of the tiresome arithmetic. The effectiveness of blocks is considered and it is noted that they sometimes do harm rather than good. The analysis of variance is explained in terms of strata as used in many modern computer programs and is extended to include the recovery of information and resolvability. Recommendations are made as to randomization.
6

Schicho, Kurt, Michael Figl, Rudolf Seemann, Markus Donat, Michael L. Pretterklieber, Wolfgang Birkfellner, Astrid Reichwein, et al. "Comparison of laser surface scanning and fiducial marker–based registration in frameless stereotaxy." Journal of Neurosurgery 106, no. 4 (April 2007): 704–9. http://dx.doi.org/10.3171/jns.2007.106.4.704.

Abstract
The authors compared the accuracy of laser surface scanning patient registration using the commercially available Fazer (Medtronic, Inc.) with the conventional registration procedure based on fiducial markers (FMs) in computer-assisted surgery. Four anatomical head specimens were prepared with 10 titanium microscrews placed at defined locations and scanned with a 16-slice spiral computed tomography unit. To compare the two registration methods, each method was applied five times for each cadaveric specimen; thus data were obtained from 40 registrations. Five microscrews (selected following a randomization protocol) were used for each FM-based registration; the other five FMs were selected for coordinate measurements by touching with a point measurement stylus. Coordinates of these points were also measured manually on the screen of the navigation computer. Coordinates were measured in the same manner after laser surface registration. The root mean square error as calculated by the navigation system ranged from 1.3 to 3.2 mm (mean 1.8 mm) with the Fazer and from 0.3 to 1.8 mm (mean 1.0 mm) with FM-based registration. The overall mean deviations (the arithmetic mean of the mean deviations of measurements on the four specimens) were 3.0 mm (standard deviation [SD] range 1.4–2.6 mm) with the Fazer and 1.4 mm (SD range 0.4–0.9 mm) with the FMs. The Fazer registration scans 300 surface points. Statistical tests showed the difference in the accuracy of these methods to be highly significant. In accordance with the findings of other groups, the authors concluded that the inclusion of a larger number of registration points might improve the accuracy of Fazer registration.
7

Allen, Brian, Edward Florez, Reza Sirous, Seth T. Lirette, Michael Griswold, Erick M. Remer, Zhen J. Wang, et al. "Comparative effectiveness of tumor response assessment methods: Standard-of-care versus computer-assisted response evaluation." Journal of Clinical Oncology 35, no. 6_suppl (February 20, 2017): 432. http://dx.doi.org/10.1200/jco.2017.35.6_suppl.432.

Abstract
432 Background: In clinical trials and clinical practice, tumor response assessment with computed tomography (CT) defines critical end points in patients with metastatic disease treated with systemic agents. Methods to reduce errors and improve efficiency in tumor response assessment could improve patient care. Methods: Eleven readers from 10 different institutions independently categorized tumor response according to three different therapeutic response criteria using paired baseline and initial post-therapy CT studies from 20 randomly selected patients with metastatic renal cell carcinoma treated with sunitinib as part of a completed phase III multi-institutional study. Images were evaluated with a manual tumor response evaluation method (standard-of-care) and with computer-assisted response evaluation (CARE) that included stepwise guidance, interactive error-identification and correction methods, automated tumor metric extraction, calculations, response categorization, and data/image archival. A cross-over design, patient randomization, and two-week washout period were used to reduce recall bias. Comparative effectiveness metrics included error rate and mean patient evaluation time. Results: The standard-of-care method was on average associated with one or more errors in 30.5% (6.1/20) of patients while CARE had a 0.0% (0.0/20) error rate (p<0.001). The most common errors were related to data transfer and arithmetic calculation. In patients with errors, the median number of error types was 1 (range 1-3). Mean patient evaluation time with CARE was twice as fast as the standard-of-care method (6.4 vs. 13.1 minutes, p<0.001). Conclusions: Computer-assisted tumor response evaluation reduced errors and time of evaluation, indicating better overall effectiveness than manual tumor response evaluation methods that are the current standard-of-care.
8

Allen, Brian C., Edward Florez, Reza Sirous, Seth T. Lirette, Michael Griswold, Erick M. Remer, Zhen J. Wang, et al. "Comparative Effectiveness of Tumor Response Assessment Methods: Standard of Care Versus Computer-Assisted Response Evaluation." JCO Clinical Cancer Informatics, no. 1 (November 2017): 1–16. http://dx.doi.org/10.1200/cci.17.00026.

Abstract
Purpose To compare the effectiveness of metastatic tumor response evaluation with computed tomography using computer-assisted versus manual methods. Materials and Methods In this institutional review board–approved, Health Insurance Portability and Accountability Act–compliant retrospective study, 11 readers from 10 different institutions independently categorized tumor response according to three different therapeutic response criteria by using paired baseline and initial post-therapy computed tomography studies from 20 randomly selected patients with metastatic renal cell carcinoma who were treated with sunitinib as part of a completed phase III multi-institutional study. Images were evaluated with a manual tumor response evaluation method (standard of care) and with computer-assisted response evaluation (CARE) that included stepwise guidance, interactive error identification and correction methods, automated tumor metric extraction, calculations, response categorization, and data and image archiving. A crossover design, patient randomization, and 2-week washout period were used to reduce recall bias. Comparative effectiveness metrics included error rate and mean patient evaluation time. Results The standard-of-care method, on average, was associated with one or more errors in 30.5% (6.1 of 20) of patients, whereas CARE had a 0.0% (0.0 of 20) error rate ( P < .001). The most common errors were related to data transfer and arithmetic calculation. In patients with errors, the median number of error types was 1 (range, 1 to 3). Mean patient evaluation time with CARE was twice as fast as the standard-of-care method (6.4 minutes v 13.1 minutes; P < .001). Conclusion CARE reduced errors and time of evaluation, which indicated better overall effectiveness than manual tumor response evaluation methods that are the current standard of care.
9

Krymovskyi, K. G., O. A. Kaniura, and T. M. Kostiuk. "Necessary diagnostic criteria of dental crowding in children during mixed dentition with different facial skeleton growth patterns." Reports of Vinnytsia National Medical University 25, no. 4 (November 30, 2021): 616–19. http://dx.doi.org/10.31393/reports-vnmedical-2021-25(4)-18.

Abstract
Annotation. Pathology of dental crowding during mixed dentition is one of the most common and difficult in the practice of dentist-orthodontist. Its prevalence, according to modern scientific data reaches 77% and occurs in all pathologies of occlusion (malocclusions). The aim of our study is to establish the relationship between the formation of dental crowding and the growth patterns of facial skeleton during mixed dentition in order to improve the effectiveness of orthodontic treatment. We used 42 pairs of plaster models and 42 slices of cone-beam computed tomography images (CBCT) for patients aged 7 to 11 years. Randomization of patients into study groups was performed according to the facial skeleton growth patterns and the Little index value. The analysis was performed by the method of variation statistics taking into account the mean values (mode, median, arithmetic mean) and mean error (M) with the assessment of reliable values by Student’s t-test, as well as determining the correlation coefficient using the Pearson pairwise method to detect connections between the obtained indicators at the minimum probability threshold p<0.05 using the statistical package EZR v. 1.35. According to the results of the examined patients: 30 people (71.4%) had a severe degree of dental crowding on both maxilla and mandible (LII> 8 mm.), more often it was associated with the neutral type of growth – 82% (with vertical – 60%). Statistically significant correlations were found between severe degree of dental crowding and vertical and neutral facial skeleton growth patterns (p<0.05). The results of the CBCT study showed that narrowing of the upper pharyngeal airway (UP) according to McNamara was more common in patients with neutral (85%) and vertical (80%) growth patterns with skeletal II and I class malocclusions according to Engle, which were 55% and 35%, respectively. The study revealed that the vast majority of children with dental crowding with different facial skeleton growth patterns had clinically significant disorders of the development of both maxillary and mandibular apical bases and airways which required immediate interceptive orthodontic treatment.
10

Nakai, Takeshi, Yuto Misawa, Yuuki Tokushige, Mitsugu Iwamoto, and Kazuo Ohta. "How to Solve Millionaires’ Problem with Two Kinds of Cards." New Generation Computing, January 5, 2021. http://dx.doi.org/10.1007/s00354-020-00118-8.

Abstract
Card-based cryptography, introduced by den Boer, aims to realize multiparty computation (MPC) by using physical cards. We propose several efficient card-based protocols for the millionaires’ problem by introducing a new operation called Private Permutation (PP) instead of the shuffle used in most existing card-based cryptography. Shuffle is a useful randomization technique that exploits the properties of card shuffling, but it requires a strong assumption from the viewpoint of arithmetic MPC because shuffle assumes that public randomization is possible. On the other hand, private randomness can be used in PPs, which enables us to design card-based protocols taking ideas of arithmetic MPCs into account. Actually, we show that Yao’s millionaires’ protocol can be easily transformed into a card-based protocol by using PPs, which is not straightforward by using shuffles because Yao’s protocol uses private randomness. Furthermore, we propose entirely novel and efficient card-based millionaire protocols based on PPs by securely updating bitwise comparisons between two numbers, which unveil a power of PPs. As another interest of these protocols, we point out they have a deep connection to the well-known logical puzzle known as “The fork in the road.”

Theses on the topic "Randomization of arithmetic"

1

Courtois, Jérôme. "Leak study of cryptosystem implementations in randomized RNS arithmetic." Electronic thesis or dissertation, Sorbonne Université, 2020. http://www.theses.fr/2020SORUS290.

Abstract
We speak of a strong analysis when the analysis makes it possible to recover the key of a cryptographic system, and of a weak analysis when candidate keys are merely eliminated. The goal of this thesis is essentially to understand the behaviour of the randomness of the Hamming distances produced by an ECC (Elliptic Curve Cryptography) cryptosystem when an RNS (Residue Number System) representation is used with the random moduli method. Chapter 2 introduces the notions needed to understand this document. It briefly presents the modular multiplication algorithm (the Montgomery algorithm for RNS) that inspired the random moduli method, then describes the algorithm that generates the sequences of Hamming distances required for our analysis. It then shows what level of resistance the random moduli method offers against classical attacks such as DPA (Differential Power Analysis), CPA (Correlation Power Analysis), second-order DPA, and MIA (Mutual Information Analysis), and provides an understanding of the distribution of the Hamming distances viewed as random variables. Under a Gaussian hypothesis on the Hamming distances, the MLE (Maximum Likelihood Estimator) and a strong analysis, in the spirit of template attacks, are then used to obtain a fine understanding of the amount of randomness brought by the random moduli method. Chapter 3 follows naturally from the conclusions of Chapter 2 on the Gaussian hypothesis: if attacks with the MLE are possible, strong relationships probably exist between the Hamming distances viewed as random variables. Section 3.2 seeks to quantify this level of dependence. Still under the Gaussian hypothesis, we observe that a variant of DPA, which we call DPA square, can be built on the covariance instead of the mean used in classical DPA; it remains, however, very demanding in observation traces, all the more so as many protocols using an ECC employ a key only once. Section 3.4 shows that information remains in a few Hamming-distance traces despite the randomization of the moduli, by performing an MLE conditioned on one of the Hamming distances together with a weak analysis. Chapter 4 begins by briefly introducing the algorithmic choices made to solve the inversion of the (symmetric positive definite) covariance matrices of Section 3.2 and the analysis of the strong relationships between the Hamming distances in Section 3.2. We use Graphics Processing Unit (GPU) tools on a very large number of small matrices (batch computing). The LDLt method presented at the beginning of that chapter proved insufficient to completely solve the conditioned MLE problem of Section 3.4, so we present work on improving a tridiagonal-matrix diagonalization code based on the Divide and Conquer principle developed by Lokmane Abbas-Turki and Stéphane Graillat: a generalization of this code, optimizations of the computation time, and an improvement of the robustness of single-precision computations for matrices of size below 32.
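The template/MLE step described in this abstract can be pictured with a small synthetic example: fit a multivariate Gaussian to the Hamming-distance vectors observed for each key hypothesis during profiling, then rank hypotheses by the log-likelihood of the attack traces. The sketch below uses made-up data and our own function names; it only illustrates the estimator, not the thesis's actual leakage model.

```python
# Illustrative sketch (not the thesis code) of a Gaussian template / MLE step:
# each key hypothesis gets a Gaussian model of the Hamming-distance vector,
# fitted on profiling traces, and attack traces are scored by log-likelihood.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
num_keys, dim = 4, 6               # toy setting: 4 key hypotheses, 6 Hamming distances

# Profiling phase: estimate mean and covariance per key hypothesis.
true_means = rng.uniform(20, 40, size=(num_keys, dim))
profiling = {k: true_means[k] + rng.normal(0, 2, size=(500, dim)) for k in range(num_keys)}
templates = {k: (x.mean(axis=0), np.cov(x, rowvar=False)) for k, x in profiling.items()}

def score_key(traces, mean, cov):
    """Sum of Gaussian log-likelihoods of the observed Hamming-distance vectors."""
    return multivariate_normal(mean, cov, allow_singular=True).logpdf(traces).sum()

# Attack phase: a handful of traces generated under key 2. A strong analysis
# would recover it; a weak analysis would only discard the worst-scoring keys.
attack = true_means[2] + rng.normal(0, 2, size=(10, dim))
scores = {k: score_key(attack, *templates[k]) for k in range(num_keys)}
print(max(scores, key=scores.get), scores)
```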
2

Marrez, Jérémy. "Représentations adaptées à l'arithmétique modulaire et à la résolution de systèmes flous." Electronic thesis or dissertation, Sorbonne Université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS635.pdf.

Abstract
Modular computations involved in public-key cryptography applications most often use a standardized prime modulus, the choice of which is not always free in practice. Improving modular operations is fundamental for the efficiency and safety of these primitives. This thesis proposes to provide efficient modular arithmetic for the largest possible number of primes, while protecting it against certain types of attacks. For this purpose, we are interested in the PMNS system used for modular arithmetic, and propose methods to obtain many PMNS for a given prime, with efficient arithmetic on the representations. We also consider the randomization of modular computations via Montgomery-like and Babaï-like algorithms, by exploiting the intrinsic redundancy of PMNS. The induced changes of data representation during the computation prevent an attacker from making useful assumptions about these representations. We then present a hybrid system, HyPoRes, with an algorithm that improves modular reduction for any prime modulus. Numbers are represented in a PMNS with coefficients in RNS, and the modular reduction is faster than in conventional RNS for the primes standardized for ECC. In parallel, we study a type of representation used to compute real solutions of fuzzy systems. We revisit the global resolution approach based on classical algebraic techniques and strengthen it. These results include a real system called the real transform, which simplifies the computations, and the management of the signs of the solutions.
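A deliberately simplified picture of the redundancy-based randomization described here: if a value modulo p admits many internal representatives, re-randomizing the representative after each operation changes the words an attacker observes without changing the computed result. The sketch below uses plain integers (x + r·P) instead of PMNS polynomials or RNS channels, so it is only an analogy for the mechanism, not the thesis's algorithms.

```python
# Simplified analogy (plain integers, not PMNS polynomials or RNS channels) for
# redundancy-based randomization: a value mod P is stored as x + r*P for a
# fresh random r, so repeated runs expose different words while the reduced
# result never changes. Constants and names are ours.
import secrets

P = 2**255 - 19          # an example prime modulus
REDUNDANCY_BITS = 16     # representatives live in [0, 2**16 * P)

def randomize(x):
    """Return a fresh redundant representative of x modulo P."""
    return x % P + secrets.randbelow(1 << REDUNDANCY_BITS) * P

def mul(a, b):
    """Multiply two representatives and re-randomize the result."""
    return randomize(a * b % P)

x, y = randomize(123456789), randomize(123456789)
print(x == y, x % P == y % P)                    # almost surely: False True
print(mul(x, y) % P == pow(123456789, 2, P))     # True
```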
3

Dosso, Fangan Yssouf. "Contribution de l'arithmétique des ordinateurs aux implémentations résistantes aux attaques par canaux auxiliaires." Electronic thesis or dissertation, Toulon, 2020. http://www.theses.fr/2020TOUL0007.

Abstract
This thesis focuses on two currently unavoidable elements of public-key cryptography, namely modular arithmetic over large integers and elliptic curve scalar multiplication (ECSM). For the first, we are interested in the Adapted Modular Number System (AMNS), introduced by Bajard et al. in 2004, a system of representation of modular residues in which the elements are polynomials. We show that this system allows modular arithmetic to be performed efficiently, and explain how it can be used to randomize this arithmetic in order to protect cryptographic protocol implementations against some side-channel attacks. For the ECSM, we discuss the use of Euclidean Addition Chains (EAC) in order to take advantage of the efficient point addition formula proposed by Meloni in 2007. The goal is, first, to generalize the use of EAC for scalar multiplication to any base point, which is achieved through curves with one efficient endomorphism; second, we propose an algorithm for scalar multiplication with EAC that allows the detection of faults committed by an attacker, whose model we detail.
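To illustrate what a Euclidean addition chain buys, the toy sketch below drives a scalar multiplication purely by additions of pairs that share an operand, which is the pattern Meloni-style formulas exploit; the bit convention and the use of integers in place of curve points are our own illustrative choices.

```python
# Toy sketch of scalar multiplication driven by a Euclidean addition chain
# (EAC): every step performs one addition of two operands sharing a common
# term. The 0/1 step convention and the integer stand-in group are ours.
def eac_scalar(chain):
    """Scalar k computed by the chain, starting from the pair (1, 2)."""
    a, b = 1, 2
    for step in chain:
        a, b = (b, a + b) if step == 0 else (a, a + b)
    return a + b

def eac_multiply(chain, g, add, double):
    """k*g using only the additions dictated by the chain (plus one doubling)."""
    u, v = g, double(g)               # corresponds to the pair (1, 2)
    for step in chain:
        u, v = (v, add(u, v)) if step == 0 else (u, add(u, v))
    return add(u, v)

# Stand-in group: integers under addition (a curve point class would slot in).
chain = [0, 1, 1, 0, 1]
k = eac_scalar(chain)
print(k, eac_multiply(chain, 7, add=lambda x, y: x + y, double=lambda x: 2 * x) == k * 7)
```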
4

Samuel, Sindhu. "Digital rights management (DRM) - watermark encoding scheme for JPEG images". Diss., 2008. http://hdl.handle.net/2263/27910.

Abstract
The aim of this dissertation is to develop a new algorithm to embed a watermark in JPEG compressed images, using encoding methods. This encompasses the embedding of proprietary information, such as identity and authentication bitstrings, into the compressed material. This watermark encoding scheme involves combining entropy coding with homophonic coding, in order to embed a watermark in a JPEG image. Arithmetic coding was used as the entropy encoder for this scheme. It is often desired to obtain a robust digital watermarking method that does not distort the digital image, even if this implies that the image is slightly expanded in size before final compression. In this dissertation an algorithm that combines homophonic and arithmetic coding for JPEG images was developed and implemented in software. A detailed analysis of this algorithm is given and the compression (in number of bits) obtained when using the newly developed algorithm (homophonic and arithmetic coding). This research shows that homophonic coding can be used to embed a watermark in a JPEG image by using the watermark information for the selection of the homophones. The proposed algorithm can thus be viewed as a ‘key-less’ encryption technique, where an external bitstring is used as a ‘key’ and is embedded intrinsically into the message stream. The algorithm has achieved to create JPEG images with minimal distortion, with Peak Signal to Noise Ratios (PSNR) of above 35dB. The resulting increase in the entropy of the file is within the expected 2 bits per symbol. This research endeavor consequently provides a unique watermarking technique for images compressed using the JPEG standard.
Dissertation (MEng)--University of Pretoria, 2008.
Electrical, Electronic and Computer Engineering
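The homophone-selection mechanism described in this abstract can be sketched in a few lines: whenever a symbol has more than one homophone, the next watermark bit decides which homophone is emitted, and decoding the homophone stream returns both the message and the embedded bits. The tiny alphabet and tables below are invented for illustration; the dissertation applies the idea to JPEG entropy-coded data followed by arithmetic coding.

```python
# Hedged sketch of homophone selection driven by watermark bits: a bit picks
# which homophone is emitted, so the watermark rides along with the data.
# The alphabet and tables are illustrative, not the dissertation's.
HOMOPHONES = {'a': ['a0', 'a1'], 'b': ['b0', 'b1'], 'c': ['c0']}
REVERSE = {h: (s, i) for s, hs in HOMOPHONES.items() for i, h in enumerate(hs)}

def embed(message, watermark_bits):
    bits = iter(watermark_bits)
    out = []
    for sym in message:
        options = HOMOPHONES[sym]
        choice = next(bits, 0) if len(options) > 1 else 0   # bit picks the homophone
        out.append(options[choice])
    return out   # this stream would then be fed to the entropy (arithmetic) coder

def extract(stream):
    message, bits = [], []
    for h in stream:
        sym, idx = REVERSE[h]
        message.append(sym)
        if len(HOMOPHONES[sym]) > 1:
            bits.append(idx)
    return ''.join(message), bits

coded = embed("abcab", [1, 0, 1, 1])
print(coded, extract(coded))   # recovers both "abcab" and [1, 0, 1, 1]
```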

Conference papers on the topic "Randomization of arithmetic"

1

Didier, Laurent-Stephane, Fangan-Yssouf Dosso, Nadia El Mrabet, Jeremy Marrez, and Pascal Veron. "Randomization of Arithmetic Over Polynomial Modular Number System." In 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH). IEEE, 2019. http://dx.doi.org/10.1109/arith.2019.00048.

