Dissertations / Theses on the topic 'Cryptographie à base de codes correcteurs'
Peirani, Béatrice. "Cryptographie à base de codes correcteurs d'erreurs et générateurs aléatoires." Aix-Marseille 1, 1994. http://www.theses.fr/1994AIX11046.
Aragon, Nicolas. "Cryptographie à base de codes correcteurs d’erreurs en métrique rang et application." Thesis, Limoges, 2020. http://www.theses.fr/2020LIMO0061.
Code-based cryptography is one of the fields that allow building post-quantum cryptosystems, i.e. cryptosystems secure against a quantum computer. Unlike factorization and discrete logarithm, the two problems most widely used in cryptography today, no algorithm is known that solves the decoding problem for random codes in polynomial time on a quantum computer. In this thesis, we focus on rank-based cryptography, in which codes are studied under the rank metric instead of the Hamming metric. This metric has the advantage of allowing cryptosystems with smaller key sizes, but it is less mature than the Hamming metric. First, we present two new decoding algorithms in the rank metric: the first is a combinatorial algorithm solving the decoding problem for random codes, which allows a better estimate of the complexity of attacks; the second is an improvement of the decoding algorithm for Low Rank Parity Check (LRPC) codes. We then present two code-based cryptosystems: a rank-based signature scheme adapting the Schnorr-Lyubashevsky approach from the Euclidean metric, and an improvement of the Hamming Quasi-Cyclic (HQC) encryption scheme, for which we propose a new analysis of the decryption failure rate and the use of another family of error-correcting codes. We then study two adaptations of the Schnorr-Lyubashevsky approach, one in the Hamming metric and one in the rank metric, for which we propose cryptanalyses that recover secret keys using information leakage from the signatures. Finally, we present the choices made to implement rank-based cryptosystems in the Rank-Based Cryptography (RBC) library.
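The rank metric mentioned above can be made concrete in a few lines: a word over GF(2^m) is expanded into an m x n binary matrix, and its rank weight is the GF(2)-rank of that matrix, which never exceeds its Hamming weight. A minimal sketch (illustrative only; coordinates are represented as m-bit integers in a fixed basis):

```python
def gf2_rank(rows):
    """Rank over GF(2) of a list of integers viewed as bit-vectors."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot  # eliminate this bit from the remaining rows
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def rank_weight(vec):
    # rank of the binary expansion matrix; since rank(A) = rank(A^T),
    # the coordinate expansions themselves can serve as the rows
    return gf2_rank(vec)

def hamming_weight(vec):
    return sum(1 for c in vec if c)

# a word over GF(2^3) with coordinates 0b101, 0b101, 0b011, 0b000
v = [0b101, 0b101, 0b011, 0b000]
print(hamming_weight(v), rank_weight(v))  # 3 2
```

The gap between the two weights (here 3 versus 2) is what lets rank-metric schemes use much shorter codes, and hence smaller keys, for the same security target.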
Urvoy, De Portzamparc Frédéric. "Sécurités algébrique et physique en cryptographie fondée sur les codes correcteurs d'erreurs." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066106/document.
Code-based cryptography, introduced by Robert McEliece in 1978, is a potential candidate to replace the asymmetric primitives that are threatened by quantum computers. More generally, it has been considered secure for more than thirty years and supports a wide range of encryption primitives. Its major drawback lies in the size of the public keys. For this reason, several variants of the original McEliece scheme with keys that are easier to store have been proposed in recent years. In this thesis, we are interested in variants using alternant codes with symmetries and wild Goppa codes. We study their resistance to algebraic attacks and reveal weaknesses that are sometimes fatal. In each case, we show the existence of hidden algebraic structures that allow the secret key to be described by non-linear systems of multivariate equations containing fewer variables than in previous modellings. Solving these systems with Gröbner bases allows us to recover the secret keys for numerous instances proposed for cryptographic purposes that were out of reach until now. For alternant codes with symmetries, we show a more fundamental vulnerability of the key size reduction process. Prior to an industrial deployment, it is also necessary to evaluate the resistance to physical attacks, which target the device executing a primitive. To this end, we describe a decryption algorithm for McEliece that is more resistant than the state of the art.
Turrel Bardet, Magali. "Etude des systèmes algébriques surdéterminés : applications aux codes correcteurs et à la cryptographie." Paris 6, 2004. https://tel.archives-ouvertes.fr/tel-00449609.
Bardet, Magali. "Étude des systèmes algébriques surdéterminés. Applications aux codes correcteurs et à la cryptographie." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2004. http://tel.archives-ouvertes.fr/tel-00449609.
Lequesne, Matthieu. "Analysis of code-based post-quantum cryptosystems." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS046.
Today, most public-key cryptosystems used to ensure the privacy and authenticity of communications rely on the hardness of number-theoretic problems. For instance, the security of the RSA algorithm is based on the fact that factoring a product of large prime numbers is computationally hard. However, in 1994, Shor proposed an algorithm to efficiently solve this problem using a quantum computer. Large-scale quantum computers do not exist yet, but this could change within the next decades. Therefore, we need new public-key cryptosystems that resist quantum attacks. This is known as post-quantum cryptography. In 2017, the American National Institute of Standards and Technology (NIST) invited researchers to submit proposals for the standardisation of post-quantum cryptographic algorithms. One promising solution is to design cryptosystems based on the hardness of decoding error-correcting codes. A significant proportion of the cryptosystems submitted to NIST use this approach. In this thesis, we propose an analysis of different code-based post-quantum cryptosystems. First, we study the case of QC-MDPC codes and show that one can recover the private key by observing the syndrome weight or the number of iterations of the decoding algorithm. Then, we propose key-recovery attacks exploiting the structure of three cryptosystems: Edon-K, RLCE and XGRS. Finally, we study the hardness of the general decoding problem for ternary codes, in the large-weight setting, which is used in the Wave signature scheme.
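The decoding problem underlying these cryptosystems is easy to state: given a parity-check matrix H and a syndrome s, find an error e of small weight w with He^T = s. A toy brute-force illustration (hypothetical tiny parameters; real schemes use lengths in the thousands, where the C(n, w) search below is infeasible):

```python
from itertools import combinations
import random

def syndrome(H, e):
    # H: list of row bitmasks (r x n binary matrix); e: error bitmask
    return tuple(bin(row & e).count("1") % 2 for row in H)

def decode_by_search(H, s, n, w):
    """Exhaustive search for a weight-w error with syndrome s:
    cost grows as C(n, w), exponential in general."""
    for positions in combinations(range(n), w):
        e = 0
        for p in positions:
            e |= 1 << p
        if syndrome(H, e) == s:
            return e
    return None

random.seed(1)
n, r, w = 16, 8, 2
H = [random.getrandbits(n) for _ in range(r)]  # random parity-check matrix
planted = (1 << 3) | (1 << 11)                 # planted weight-2 error
s = syndrome(H, planted)
found = decode_by_search(H, s, n, w)
assert found is not None and syndrome(H, found) == s
```

The search succeeds here only because n is tiny; the absence of any known shortcut for random codes, classical or quantum, is exactly the hardness assumption the abstract refers to.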
Augot, Daniel. "Décodage des codes algébriques et cryptographie." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00159149.
binary cyclic codes and Reed-Solomon codes over a q-ary alphabet (as well as geometric codes). Cyclic codes have no generic decoding algorithm, apart from BCH codes and related families (Hartmann-Tzeng and Roos bounds). Foremost among the interesting codes for which no generic decoding algorithm is known are the quadratic residue codes, which have good parameters. I study a formulation of the syndrome decoding problem for these codes as a system of equations, which can be solved with Gröbner basis tools. This yields decoding algorithms of reasonable complexity for these codes. This work formed part of the thesis of Magali Bardet.
Reed-Solomon codes, in turn, can be viewed as evaluation codes, and the associated decoding problem amounts to approximating a function by low-degree polynomials. Great progress was made by Guruswami and Sudan, who found an algorithm that decodes well beyond the classical decoding radii by relaxing the requirement that the solution be unique. I propose improvements to certain steps of this algorithm that make it faster and deterministic (notably by avoiding a polynomial factorization over a finite field), both for Reed-Solomon codes and for geometric codes. This work was carried out while supervising Lancelot Pecquet.
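The evaluation-code view of Reed-Solomon codes can be sketched in a few lines: a message is a low-degree polynomial, the codeword is its values at distinct field points, and in the error-free case any k positions recover the message by interpolation. A minimal sketch over the (arbitrarily chosen) prime field GF(101); list decoding à la Guruswami-Sudan is far beyond this snippet:

```python
P = 101  # the field GF(101)

def rs_encode(msg, points):
    # codeword = evaluations of the message polynomial at distinct points
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in points]

def lagrange_interpolate(xs, ys, x):
    # value at x of the unique polynomial of degree < len(xs) through (xs, ys)
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

msg = [7, 3, 0, 9]           # degree-3 message polynomial, k = 4
points = list(range(1, 11))  # n = 10 evaluation points
cw = rs_encode(msg, points)
# without errors, any k positions determine the whole codeword
recovered = [lagrange_interpolate(points[:4], cw[:4], x) for x in points]
assert recovered == cw
```

Decoding in the presence of errors amounts to finding a degree-<k polynomial agreeing with the received word in enough positions, which is the approximation problem described above.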
From a theoretical point of view, I have studied multivariate generalizations corresponding to certain codes: products of Reed-Solomon codes, and Reed-Muller codes. This yields a good decoding radius for low-rate codes. For Reed-Muller codes over the binary alphabet, Cédric Tavernier, in his thesis under my supervision, produced and implemented an efficient algorithm, faster than those based on the Guruswami-Sudan algorithm.
I have also studied the hardness aspects of the syndrome decoding problem for linear codes, and of the decoding of Reed-Solomon codes when the number of errors is high, with cryptographic applications in mind. In the first case, I built a cryptographic hash function with a security reduction, meaning that finding a weakness in the hash function amounts to solving a coding problem believed to be hard. I also built a new public-key encryption primitive based on the difficulty of decoding Reed-Solomon codes.
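The idea of a hash function with a security reduction can be illustrated schematically: in syndrome-based hashing, each input chunk selects one column of a public random matrix H and the digest is the XOR of the selected columns, so a collision would exhibit a small set of columns of H summing to zero, i.e. a low-weight codeword, which is an instance of a hard decoding problem. A loose sketch with hypothetical toy parameters (not the exact construction of the habilitation):

```python
import random

random.seed(2)
r, blocks, block_size = 32, 8, 16  # 32-bit digest, 8 input chunks of 4 bits
# public matrix: one column (an r-bit integer) per (block, chunk value) pair
H = [[random.getrandbits(r) for _ in range(block_size)] for _ in range(blocks)]

def syndrome_hash(chunks):
    """XOR one column per block, selected by the input chunk values."""
    assert len(chunks) == blocks and all(0 <= c < block_size for c in chunks)
    digest = 0
    for i, c in enumerate(chunks):
        digest ^= H[i][c]
    return digest

d1 = syndrome_hash([3, 1, 4, 1, 5, 9, 2, 6])
d2 = syndrome_hash([3, 1, 4, 1, 5, 9, 2, 7])
# a collision d1 == d2 for distinct inputs would give a small set of
# columns of H XOR-ing to zero: a low-weight word believed hard to find
```

Hashing is just a few XORs, which is why this family of functions is fast, while its collision resistance reduces to syndrome decoding.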
In a more applied direction, I proposed, together with Raghav Bhaskar, a new multi-party key exchange algorithm based on the discrete logarithm problem. Raghav Bhaskar gave a security proof of this protocol during his thesis under my supervision. We also studied how to adapt the protocol to message loss, as ours is one of the few protocols that is robust against such losses.
Cheriere, Agathe. "Side-channel resistance of cryptographic primitives based on error-correcting codes." Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS092.
For about three decades, we have been aware of attacks targeting implementations of cryptosystems by exploiting physical information such as execution time. Naturally, questions arise about the threat these attacks pose to the upcoming industrial deployments of post-quantum schemes. In this thesis, we focus on the resistance of cryptographic algorithms based on error-correcting codes against side-channel attacks. We specifically study two schemes, ROLLO and BIKE, which were candidates in the second round of the post-quantum standardization process organized by NIST. We demonstrate that their constant-time implementations are notably vulnerable to attacks using power consumption analysis, employing techniques such as machine learning and linear algebra. Furthermore, for both schemes, the attack requires a single power consumption trace to recover the private key. Following the identification of these vulnerabilities, we propose countermeasures to prevent these attacks while maintaining constant-time operation.
Cayrel, Pierre-Louis. "Construction et optimisation de cryptosystèmes basés sur les codes correcteurs d'erreurs." Limoges, 2008. https://aurore.unilim.fr/theses/nxfile/default/46aac3f7-1539-4684-bef6-9b1ae632c183/blobholder:0/2008LIMO4026.pdf.
In this thesis, we are interested in the study of encryption systems as well as signature schemes whose security relies on hard problems from the theory of error-correcting codes. This research was motivated, on the theoretical side, by the creation of new signature schemes with special properties and of a way to reduce the key size of the McEliece scheme, and, on the practical side, by the use of structural properties to obtain efficient implementations of a signature scheme based on error-correcting codes. As its title indicates, this thesis deals with the construction and optimization of cryptosystems based on error-correcting codes, and in particular with five new protocols. It presents a secure version of the Stern scheme for low-resource environments, a new construction of the Kabatianskii, Krouk and Smeets scheme, an identity-based signature scheme proven secure in the random oracle model, a threshold ring signature scheme, and a reduction of the key size of the McEliece scheme using quasi-cyclic alternant codes. In the annex, this work deals with algebraic attacks against linear feedback shift registers with memory. It also presents a brief study of cyclic codes over matrix rings.
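The zero-knowledge mechanism behind the Stern scheme can be sketched in one round (a loose illustration with hypothetical toy parameters, not the exact scheme of the thesis: commitments and encodings are simplified, and a real protocol repeats the round many times because each round has cheating probability 2/3). The prover knows a low-weight secret with public syndrome y and answers one of three challenges; here the challenge b = 2 is shown, where only permuted vectors are revealed:

```python
import hashlib
import random

random.seed(5)
n, r, w = 16, 8, 3
H = [random.getrandbits(n) for _ in range(r)]             # public matrix
secret = sum(1 << p for p in random.sample(range(n), w))  # low-weight secret

def syn(e):
    return tuple(bin(row & e).count("1") % 2 for row in H)

y = syn(secret)  # public syndrome (used by the b = 0 and b = 1 checks)

def com(*parts):  # commitment = hash of the committed parts
    return hashlib.sha256(repr(parts).encode()).hexdigest()

def permute(vec, perm):  # relocate bit i to position perm[i] (XOR-linear)
    return sum(((vec >> i) & 1) << perm[i] for i in range(n))

# --- prover commits ---
u = random.getrandbits(n)
perm = random.sample(range(n), n)
c0 = com(tuple(perm), syn(u))
c1 = com(permute(u, perm))
c2 = com(permute(u ^ secret, perm))

# --- verifier sends b = 2; prover reveals only permuted vectors ---
pu, ps = permute(u, perm), permute(secret, perm)
assert c1 == com(pu)
assert c2 == com(pu ^ ps)              # permutation commutes with XOR
assert bin(ps).count("1") == w         # verifier checks the weight, not the value
```

The verifier learns that the prover knows *some* weight-w vector consistent with the commitments, but the permutation hides which one, which is the zero-knowledge property the signature constructions build on.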
Misoczki, Rafael. "Two Approaches for Achieving Efficient Code-Based Cryptosystems." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00931811.
Mora, Rocco. "Algebraic techniques for decoding Reed-Solomon codes and cryptanalyzing McEliece-like cryptosystems." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS134.
Algebraic error-correcting codes respond to numerous needs that emerge from the world of digital communications and information. The mathematical structure of these families of codes allows the design of efficient encoding and decoding algorithms, thus enabling a variety of applications in modern technologies. In this manuscript, we delve into two of these aspects. First, we tackle the decoding problem for Reed-Solomon codes. We propose a new strategy that consists in solving a polynomial system, whose equations are connected to the power decoding algorithm, using Gröbner basis techniques. We show that for some parameters our approach allows us to reach and even exceed the Johnson radius. We then move to topics related to code-based cryptography. The oldest public-key encryption scheme relying on codes was proposed in 1978 by McEliece. After more than four decades, McEliece's scheme is still secure, and even quantum computers do not represent a threat. We analyze the algebraic structure and the security of subfield subcodes of Reed-Solomon codes, namely alternant and Goppa codes; among others, McEliece's scheme is built upon binary Goppa codes. We investigate an algebraic method that makes it possible to distinguish alternant and Goppa codes from random ones, provided that their rate is high enough. This study leads us to develop a polynomial-time attack on high-rate alternant codes. Again, we exploit Gröbner bases to solve a polynomial system that models the key-recovery problem. Finally, we give a procedure that in some cases enhances the range of distinguishable parameters.
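The distinguishers at the heart of such attacks often rest on the Schur (componentwise) square of a code: for a random [n, k] code its dimension is typically min(n, k(k+1)/2), while algebraically structured codes have a much smaller square. A toy sketch over GF(2), using a first-order Reed-Muller code as a stand-in for the structured family (codewords are n-bit integers, so the componentwise product is bitwise AND):

```python
import itertools
import random

def gf2_rank(rows):
    """Rank over GF(2) of integers viewed as bit-vectors."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def square_dim(gen_rows):
    """Dimension of the span of all pairwise componentwise products."""
    prods = [a & b for a, b in
             itertools.combinations_with_replacement(gen_rows, 2)]
    return gf2_rank(prods)

m, n, k = 4, 16, 5
# RM(1, 4) generator: the all-ones word plus the four coordinate functions
coords = [sum(((j >> i) & 1) << j for j in range(n)) for i in range(m)]
rm_gen = [(1 << n) - 1] + coords

random.seed(3)
rand_gen = [random.getrandbits(n) for _ in range(k)]

print(square_dim(rm_gen), square_dim(rand_gen))
# the structured square has dimension 11 (= dim RM(2,4)), far below the
# generic bound min(n, k(k+1)/2) = 15 that a typical random code reaches
```

The dimension gap is the distinguishing statistic: a public code whose square is abnormally small cannot be random, which is the starting point of the key-recovery modelling described in the abstract.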
Schrek, Julien. "Signatures et authentications pour les cryptosystèmes bases sur les codes correcteurs en métrique de Hamming et en métrique rang." Limoges, 2013. http://aurore.unilim.fr/theses/nxfile/default/e9fa83b6-8739-4c8a-8bd2-90c729b9a03f/blobholder:0/2013LIMO4052.pdf.
This thesis consists of 13 chapters: the first eight form the first part and the following five form the second part. The thesis is about signature and authentication schemes based on coding theory, in the Hamming metric for the first part and in the rank metric for the second. In the first part, we present four new signature schemes based on coding theory. The first is built on a zero-knowledge protocol and has the distinction of being smaller than other signatures built this way. The second is very small but can be used only once; it uses the highly structured quadratic residue codes. The third is a threshold ring signature scheme, which allows members of a group to sign while hiding their identities within a larger group. The fourth is an undeniable signature scheme, which controls signature verification through an interactive protocol; the verification must not allow the author to lie about whether or not he is the signer. The second part deals with the rank metric. We present a new generic attack on the syndrome decoding problem, more efficient than previous ones, and two specific attacks on the cryptosystem of K. Chen that break it completely. We also present a new signature scheme with small signature and key sizes compared to other signatures in the rank metric.
Dyseryn-Fostier, Victor. "Exploring the multi-dimensional approach in code-based cryptography." Electronic Thesis or Diss., Limoges, 2024. http://www.theses.fr/2024LIMO0007.
Code-based cryptography is a fertile family of post-quantum algorithms. Its security relies mostly on the problem of decoding noisy codewords of a random code, which is believed to be difficult for classical as well as quantum computers. One of the challenges of post-quantum cryptography is its large key and message sizes compared to classical cryptography. To overcome this limitation, objects with a cyclic structure, which can be stored in succinct form, can be considered. This approach, however, implies that the security assumption no longer relies on a generic instance of the decoding problem, thus creating potential vulnerabilities. In this thesis, we explore a multi-dimensional approach to reducing the size of the schemes, as a more secure alternative to cyclicity. It consists in sending multiple errors with the same support, which increases the decoding capacity of the codes and hence allows a global decrease in the parameters. Of course, the hard problem in the security proof of the scheme needs to be adapted to the multi-dimensional case, but we believe that this variant is less risky than a cyclic structure. We present two new encryption schemes and one new signature scheme based on this multi-dimensional approach; they outperform existing unstructured proposals. We also give an application of the multi-dimensional approach to homomorphic encryption based on ideal random codes.
Chaulet, Julia. "Etude de cryptosystèmes à clé publique basés sur les codes MDPC quasi-cycliques." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066064/document.
Considering the McEliece cryptosystem with quasi-cyclic MDPC (Moderate Density Parity-Check) codes allows us to build a post-quantum encryption scheme with nice features: it has reasonable key sizes, and both encryption and decryption are performed using binary operations. This scheme therefore seems to be a good candidate for embedded and lightweight implementations. In this setting, any information obtained through side channels can lead to an attack. In the McEliece cryptosystem, the decryption process essentially consists in decoding. As we consider an iterative and probabilistic decoding algorithm, the number of iterations needed depends on the instance, and some instances may fail to be decoded. These behaviors are not desirable, because they may be exploited to extract information about the secrets. One countermeasure could be to bound the number of encryptions performed with the same key. Another could be to employ a constant-time decoder with a decoding failure probability negligible with respect to the expected security level of the cryptosystem. The main goal of this thesis is to present new methods for analysing decoder behavior in a cryptographic context. Second, we explain why a McEliece encryption scheme based on polar codes does not ensure the expected level of security. To do so, we apply new techniques to solve the code equivalence problem. This allows us to highlight several common properties shared by Reed-Muller codes and polar codes. We introduce a new family of codes, named decreasing monomial codes, containing both Reed-Muller and polar codes. These results are also of independent interest for coding theory.
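The secret-dependent behaviour discussed here can be made tangible with a small statistical experiment (a hypothetical toy model, not the actual QC-MDPC decoder): the syndrome computed with a sparse secret parity-check matrix has a weight that correlates with how the error overlaps the secret support, and this is exactly the kind of observable that iteration counts and failure rates expose to an attacker.

```python
import random

random.seed(7)
n, r, row_weight, err_weight, trials = 64, 32, 6, 4, 2000

# toy sparse secret: position 0 lies in every secret row, position 1 in none
H = []
for _ in range(r):
    row = set(random.sample(range(2, n), row_weight - 1))
    row.add(0)
    H.append(row)

def syndrome_weight(err):
    # number of parity checks with odd overlap with the error
    return sum(len(row & err) % 2 for row in H)

def rand_error(forced_pos):
    e = set(random.sample(range(2, n), err_weight - 1))
    e.add(forced_pos)
    return e

avg_hot = sum(syndrome_weight(rand_error(0)) for _ in range(trials)) / trials
avg_cold = sum(syndrome_weight(rand_error(1)) for _ in range(trials)) / trials
print(avg_hot, avg_cold)
# the large gap betrays that position 0 belongs to the secret support: an
# attacker observing only aggregate decoder statistics learns a key position
```

Since syndrome weight also drives how many bit-flipping iterations the decoder needs, bounding key reuse or forcing constant-time behaviour, as proposed in the thesis, removes exactly this leakage channel.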
Tale, kalachi Herve. "Sécurité des protocoles cryptographiques fondés sur la théorie des codes correcteurs d'erreurs." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR045/document.
Contrary to cryptosystems based on number theory, the security of cryptosystems based on error-correcting codes appears to be resistant to the emergence of quantum computers. Another advantage of these systems is that encryption and decryption are very fast: about five times faster for encryption, and 10 to 100 times faster for decryption, compared to the RSA cryptosystem. Nowadays, the interest of the scientific community in code-based cryptography is highly motivated by the announcement of the National Institute of Standards and Technology (NIST), which initiated the Post-Quantum Cryptography Project, aiming to define new standards for quantum-resistant cryptography, and fixed the deadline for public-key cryptographic algorithm submissions for November 2017. This announcement motivates the study of the security of existing schemes, in order to find out whether they are secure. This thesis thus presents several attacks that break several code-based encryption schemes. We start with a cryptanalysis of a modified version of the Sidelnikov cryptosystem proposed by Gueye and Mboup [GM13], which is based on Reed-Muller codes. This modified scheme consists in inserting random columns in the secret generator matrix or parity-check matrix. The cryptanalysis relies on the computation of the square of the public code. The particular nature of Reed-Muller codes, which are defined by means of multivariate binary polynomials, makes it possible to predict the dimensions of the square codes and then to fully recover, in polynomial time, the secret positions of the random columns. Our work shows that the insertion of random columns in the Sidelnikov scheme does not bring any security improvement. The second result is an improved cryptanalysis of several variants of the GPT cryptosystem, a rank-metric scheme based on Gabidulin codes. We prove that any variant of the GPT cryptosystem that uses a right column scrambler over the extension field, as advocated by the works of Gabidulin et al. [Gab08, GRH09, RGH11] with the goal of resisting Overbeck's structural attack [Ove08], is actually still vulnerable to that attack. We show that, by applying the Frobenius operator appropriately to the public key, it is possible to build a Gabidulin code having the same dimension as the original secret Gabidulin code, but with a smaller length. In particular, the code obtained this way corrects fewer errors than the secret one, but its error-correcting capability still exceeds the number of errors added by the sender, and consequently an attacker is able to decrypt any ciphertext with this degraded Gabidulin code. We also consider the case where an isometric transformation is applied in conjunction with a right column scrambler with entries in the extension field, and prove that this protection is useless both in terms of performance and security. Consequently, our results show that all the existing techniques aiming to hide the inherent algebraic structure of Gabidulin codes have failed. Finally, we study the security of the Faure-Loidreau encryption scheme [FL05], which is also a rank-metric scheme based on Gabidulin codes. Inspired by our previous work, and although the structure of the scheme differs considerably from the classical setting of the GPT cryptosystem, we show that for a range of parameters this scheme is also vulnerable to a polynomial-time attack that recovers the private key by applying Overbeck's attack to an appropriate public code. As an example, we break in a few seconds parameters with an 80-bit security claim.
Candau, Marion. "Codes correcteurs d'erreurs convolutifs non commutatifs." Thesis, Brest, 2014. http://www.theses.fr/2014BRES0050/document.
An error-correcting code adds redundancy to a message in order to correct it when errors occur during transmission. Convolutional codes are powerful and therefore widely used. The principle of a convolutional code is to perform a convolution product between a message and a transfer function, both defined over the group of integers. These codes do not protect the message if it is intercepted by a third party. That is why we propose, in this thesis, convolutional codes with cryptographic properties defined over non-commutative groups. We first studied codes over the infinite dihedral group, which, despite good performance, do not have the desired cryptographic properties. Consequently, we studied convolutional block codes over finite groups with a time-varying encoding. Every time a message needs to be encoded, the process uses a different subset of the group. These subsets are chaotically generated from an initial state, which is considered the symmetric key of the code-induced cryptosystem. We studied many groups and many methods to define these chaotic subsets. We examined the minimum distance of the codes we designed and showed that it is slightly smaller than the minimum distance of linear block codes. Nevertheless, our codes have, in addition, cryptographic properties that the others do not have. These non-commutative convolutional codes are thus a compromise between error correction and security.
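The convolution-product view of encoding can be sketched for the classical commutative case (a rate-1/2 binary encoder with the textbook generator pair 111/101; the thesis replaces this fixed, commutative setting by time-varying encodings over non-commutative groups):

```python
def conv_gf2(msg_bits, gen):
    """Convolve the message with a generator polynomial over GF(2)."""
    g = [int(b) for b in bin(gen)[2:]]
    out = []
    for t in range(len(msg_bits) + len(g) - 1):
        acc = 0
        for i, gi in enumerate(g):
            j = t - i
            if 0 <= j < len(msg_bits):
                acc ^= gi & msg_bits[j]  # XOR-accumulate: addition in GF(2)
        out.append(acc)
    return out

def encode(msg_bits):
    # interleave the two generator outputs: two code bits per message bit
    a, b = conv_gf2(msg_bits, 0b111), conv_gf2(msg_bits, 0b101)
    return [bit for pair in zip(a, b) for bit in pair]

print(encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```

Each output bit depends on a sliding window of message bits, which is what gives convolutional codes their error-correcting power; replacing the integer shifts by products in a non-commutative group, as the thesis does, keeps this structure while adding the cryptographic layer.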
Finiasz, Matthieu. "Nouvelles constructions utilisant des codes correcteurs d'erreurs en cryptographie à clef publique." Palaiseau, Ecole polytechnique, 2004. http://www.theses.fr/2004EPXX0033.
Loidreau, Pierre. "Metrique rang et cryptographie." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00200407.
This habilitation thesis studies cryptosystems based on families of codes decodable in the rank metric.
Dallot, Léonard. "Sécurité de protocoles cryptographiques fondés sur les codes correcteurs d'erreurs." Caen, 2010. http://www.theses.fr/2010CAEN2047.
Code-based cryptography appeared in 1978, in the early years of public-key cryptography. The purpose of this thesis is the study of the reductionist security of cryptographic constructions that belong to this category. After introducing some notions of cryptography and reductionist security, we present a rigorous analysis of the reductionist security of three code-based encryption schemes: McEliece's cryptosystem, Niederreiter's variant, and a hybrid scheme proposed by N. Sendrier and B. Biswas. The legitimacy of this approach is then illustrated by the cryptanalysis of two variants of McEliece's scheme that aim at reducing the size of the keys necessary to ensure communication confidentiality. We then present a reductionist security proof of a signature scheme proposed in 2001 by N. Courtois, M. Finiasz and N. Sendrier. To achieve this, we show that the scheme must be slightly modified. Finally, we show that the techniques used in the previous scheme can also be used to build a provably secure threshold ring signature scheme
Richmond, Tania. "Implantation sécurisée de protocoles cryptographiques basés sur les codes correcteurs d'erreurs." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES048/document.
The first cryptographic protocol based on error-correcting codes was proposed in 1978 by Robert McEliece. Code-based cryptography is called post-quantum because, to date, no algorithm able to attack this kind of protocol in polynomial time, even on a quantum computer, is known. This contrasts with protocols based on number-theoretic problems, such as the factorization of large numbers, for which Shor's efficient algorithm runs on quantum computers. Nevertheless, the security of the McEliece cryptosystem does not rest on mathematical problems alone: its implementation (in software or hardware) is also crucial for its security. The study of side-channel attacks against the McEliece cryptosystem began in 2008, and improvements can still be made. In this thesis, we propose new attacks against decryption in the McEliece cryptosystem used with classical Goppa codes, together with corresponding countermeasures. The proposed attacks are based on measuring the execution time of the algorithm or on analysing its power consumption. The associated countermeasures rely on mathematical and algorithmic properties of the underlying algorithm. We show that it is necessary to secure the decryption algorithm by considering it as a whole, and not only step by step
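The countermeasures at stake here typically start by removing secret-dependent branches. The sketch below (illustrative only, not taken from the thesis) shows the branch-free masking idiom that C implementations of decoders use; Python itself gives no timing guarantees, so this only demonstrates the shape of the pattern.

```python
def ct_select(bit, a, b):
    """Return a if bit == 1 else b, without branching on the secret bit.

    Relies on Python's arbitrary-precision two's complement: -1 acts as an
    all-ones mask. In a real C implementation the mask would be a fixed-width
    word, e.g. mask = (uint32_t)(-(uint32_t)bit).
    """
    mask = -bit              # all ones if bit == 1, all zeros if bit == 0
    return (a & mask) | (b & ~mask)

def ct_compare(x, y, width=32):
    """Equality test on width-bit integers without an early-exit comparison:
    returns 1 if x == y else 0."""
    diff = (x ^ y) & ((1 << width) - 1)
    # diff == 0 iff (diff - 1) borrows into the bit above `width`.
    return ((diff - 1) >> width) & 1
```

The same idiom extends to constant-time conditional swaps and table lookups inside a decoder, so that the sequence of executed instructions no longer depends on the secret error pattern.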
Landais, Gregory. "Mise en oeuvre de cryptosystèmes basés sur les codes correcteurs d'erreurs et de leurs cryptanalyses." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066602/document.
This thesis is about the algorithmic problems that arise when implementing a cryptosystem based on error-correcting codes, or a cryptanalysis of such a system. The benefits of these systems come from their excellent algorithmic complexity, several orders of magnitude better than that of classical public-key schemes. They also provide a credible alternative to current systems, most of which rely on number theory and on the problems of factorization and discrete logarithm. P. Shor showed that these two problems can be solved in polynomial time in the quantum computing model. Such a computer is far from operational, but we will need alternatives we can trust and that have efficient implementations
Chaussade, Lionel. "Codes correcteurs avec les polynômes tordus." Phd thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00813705.
Herbert, Vincent. "Des codes correcteurs pour sécuriser l'information numérique." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00657110.
Carrier, Kevin. "Recherche de presque-collisions pour le décodage et la reconnaissance de codes correcteurs." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS281.
Error-correcting codes are tools whose initial function is to correct errors caused by imperfect communication channels. In a non-cooperative context, one faces the problem of identifying unknown codes from the sole knowledge of noisy codewords. This problem can be difficult for certain code families, in particular LDPC codes, which are very common in modern telecommunication systems. In this thesis, we propose new techniques to recognize these codes more easily. At the end of the 1970s, McEliece had the idea of repurposing codes for encryption, thus initiating a family of cryptographic solutions that is an alternative to those based on number-theoretic problems. One of the advantages of code-based cryptography is that it seems to withstand the quantum computing paradigm, notably thanks to the robustness of the generic decoding problem. The latter has been thoroughly studied for more than 60 years. The latest improvements all rely on algorithms for finding pairs of points that are close to each other in a list: the so-called near-collision search problem. In this thesis, we improve generic decoding, in particular by devising a new way to find close pairs. To do this, we use list decoding of Arikan's polar codes to build new fuzzy hashing functions. In this manuscript, we also deal with the search for pairs of distant points. Our solution can be used to improve decoding at large distances. This new type of decoding finds very recent applications in certain signature schemes
Debris-Alazard, Thomas. "Cryptographie fondée sur les codes : nouvelles approches pour constructions et preuves ; contribution en cryptanalyse." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS482.
In this thesis we study code-based cryptography, by which we mean the cryptosystems whose security relies on the generic decoding problem. The first of these systems is a public-key encryption scheme proposed by McEliece in 1978. Four decades later, no known attack presents a serious threat to the system, even on a quantum computer. This makes code-based cryptography a credible candidate for post-quantum cryptography. First, we give attacks against the code-based signature scheme RankSign, which was submitted to the NIST post-quantum standardization process, and against the first code-based identity-based encryption scheme. On the other hand, we propose a new code-based signature scheme: Wave. For this design we introduce a new trapdoor, the family of generalized (U, U+V)-codes, and we show how to decode them for weights at which the generic decoding problem is hard. We then show how to follow the Gentry-Peikert-Vaikuntanathan strategy, which has proved fruitful in lattice-based cryptography; this is done by avoiding any information leakage from signatures thanks to an efficient rejection sampling. Furthermore, for the first time we propose a cryptosystem whose security relies on the generic decoding problem at high distances, and we give the best known algorithm to solve this problem. Finally, we study one of the few alternatives to information set decoding: statistical decoding. We improve algorithms for computing parity-check equations of small or moderate weight, and we make the first asymptotic study of its complexity. We show that, contrary to what was claimed, statistical decoding is not competitive with information set decoding. This study relies on new results about Krawtchouk polynomials
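The generic decoding problem underlying these systems is easy to state: given a parity-check matrix H and a syndrome s, find an error vector of weight at most t with that syndrome. A minimal exhaustive-search sketch (toy parameters only; real attacks use information set decoding, which is faster but still exponential) could look like this:

```python
from itertools import combinations

def syndrome(H, e):
    """Syndrome of the binary vector e with respect to the parity-check rows H."""
    return tuple(sum(hj & ej for hj, ej in zip(row, e)) % 2 for row in H)

def generic_decode(H, s, t):
    """Exhaustively search for an error vector of Hamming weight <= t whose
    syndrome equals s. Cost grows as sum_w C(n, w): fine for toys, hopeless
    at cryptographic sizes, which is the point of the hardness assumption."""
    n = len(H[0])
    for w in range(t + 1):
        for support in combinations(range(n), w):
            e = [0] * n
            for j in support:
                e[j] = 1
            if syndrome(H, e) == s:
                return e
    return None

# Parity-check matrix of the [7,4] Hamming code (column j reads j+1 in binary).
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]
```

On this toy code, `generic_decode(H, (1, 0, 0), 1)` finds the single error at position 3; at the sizes used in McEliece-like schemes, the search space is astronomically large.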
El, Amrani Nora. "Codes additifs et matrices MDS pour la cryptographie." Thesis, Limoges, 2016. http://www.theses.fr/2016LIMO0034/document.
This PhD thesis focuses on the links between error-correcting codes and the diffusion matrices used in symmetric cryptography. The goal is to study the possible construction of additive MDS codes defined over the group (F_2^m, +) of binary m-tuples, and to minimize the hardware or software implementation cost of these diffusion matrices. The thesis begins with the study of codes defined over the polynomial ring F[x]/(f(x)), which generalize quasi-cyclic codes, and continues with the study of systematic additive codes over (F_2^m, +) and their relation to linear diffusion in symmetric cryptography. An important contribution of this thesis is the introduction of codes with coefficients in the ring of endomorphisms of F_2^m. The link between codes that are left submodules and additive codes is established. The last part focuses on the study and construction of efficient MDS diffusion matrices for cryptographic applications, namely circulant matrices, dyadic matrices, and matrices with sparse representation, in order to minimize their implementation cost
Barelli, Elise. "On the security of short McEliece keys from algebraic and algebraic geometry codes with automorphisms." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX095/document.
In 1978, McEliece introduced a new public-key encryption scheme based on the theory of error-correcting codes. The idea is to use an error-correcting code whose structure is hidden, making it impossible to decode a message for anyone who does not know a specific decoding algorithm for the chosen code. The McEliece scheme has some advantages: encryption and decryption are very fast, and it is a good candidate for public-key cryptography in the context of quantum computers. The main constraint is that the public key is very large compared to those of other current public-key cryptosystems. In this context, we propose to study the use of quasi-cyclic and quasi-dyadic codes. In this thesis, the two families of interest are alternant codes and subfield subcodes of algebraic geometry codes. We can construct quasi-cyclic alternant codes using an automorphism which acts on the support and the multiplier of the code. In order to estimate the security of these QC codes we study the invariant code, a smaller code derived from the public key: it is exactly the subcode of codewords fixed by the automorphism σ. We show that it is possible to reduce the key-recovery problem on the original quasi-cyclic code to the same problem on the invariant code. This is also true for QC algebraic geometry codes, which allows us to propose a security analysis of QC codes coming from the Hermitian curve. Moreover, we propose compact keys for the McEliece scheme using subfield subcodes of AG codes on the Hermitian curve. The case of quasi-dyadic alternant codes is also studied. Using the invariant code, together with the Schur product and the conductor of two codes, we exhibit weaknesses of the scheme using QD alternant codes with extension degree 2.
In the case of the submission DAGS, proposed in the context of the NIST competition, an attack exploiting these weaknesses recovers the secret key in a few minutes for some of the proposed parameters
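The Schur product used in such attacks is simply the componentwise product of codewords, and the dimension of a code's "Schur square" is a classical distinguisher: structured codes (Reed-Muller, alternant, AG codes) have abnormally small squares, while random codes tend to fill the whole space. A small sketch over F2 (the attacks themselves work over extension fields; this only shows the idea):

```python
def gf2_rank(vectors):
    """Rank over GF(2) of a list of 0/1 vectors, via an xor-basis."""
    basis = {}  # leading-bit position -> packed row
    rank = 0
    for v in vectors:
        r = int("".join(map(str, v)), 2)
        while r:
            p = r.bit_length() - 1
            if p in basis:
                r ^= basis[p]
            else:
                basis[p] = r
                rank += 1
                break
    return rank

def schur_square_dim(G):
    """Dimension of the span of all componentwise products of pairs of rows of G."""
    k = len(G)
    prods = [[a & b for a, b in zip(G[i], G[j])]
             for i in range(k) for j in range(i, k)]
    return gf2_rank(prods)

# Reed-Muller code RM(1,3): its Schur square is RM(2,3), of dimension 7 < 8.
RM13 = [[1, 1, 1, 1, 1, 1, 1, 1],
        [0, 0, 0, 0, 1, 1, 1, 1],
        [0, 0, 1, 1, 0, 0, 1, 1],
        [0, 1, 0, 1, 0, 1, 0, 1]]

# A generic-looking [8,4] code chosen by hand: its square fills the whole space.
G = [[1, 0, 0, 0, 1, 1, 1, 1],
     [0, 1, 0, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 1, 1, 0]]
```

The gap between `schur_square_dim(RM13) == 7` and the full dimension 8 for the generic code is exactly the kind of structural signal a distinguishing attack amplifies.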
Faure, Cédric. "Etudes de systèmes cryptographiques construits à l'aide de codes correcteurs, en métrique de Hamming et en métrique rang." Phd thesis, Ecole Polytechnique X, 2009. http://pastel.archives-ouvertes.fr/pastel-00005288.
Prouff, Emmanuel. "Etude des fonctions booléenes et vectorielles pour la cryptographie." Caen, 2004. http://www.theses.fr/2004CAEN2020.
Cluzeau, Mathieu. "Reconstruction d'un schéma de codage." Phd thesis, Ecole Polytechnique X, 2006. http://tel.archives-ouvertes.fr/tel-00261461.
The first part deals with the problem of reconstructing a binary linear code from the knowledge of noisy codewords. We first present and analyse a method suggested by A. Valembois in his thesis. This analysis leads us to a new statistical test for finding words likely to belong to the dual of the code used during transmission. We then present a new decoding algorithm based on classical iterative decoding techniques. This algorithm allows us to correct errors even when some of the parity-check equations found by the statistical test are not valid. We then describe a new reconstruction algorithm that uses this decoding algorithm.
The second part deals with the problem of reconstructing a linear scrambler. We first assume that the attacker has access to the exact output of the scrambler, and we present various techniques for reconstructing a synchronous or self-synchronizing scrambler, depending on the assumptions made about the scrambler's input. We then turn to the general case and present an algebraic technique for reconstructing a synchronous scrambler when the attacker only knows the image of its output under a blockwise linear transformation, together with part of the input sequence.
Gouget, Aline. "Etude de propriétés cryptographiques des fonctions booléennes et algorithme de confusion pour le chiffrement symétrique." Caen, 2004. http://www.theses.fr/2004CAEN2023.
Murat, Gaetan. "Résultants de polynômes de Ore et Cryptosystèmes de McEliece sur des Codes Rang faiblement structurés." Thesis, Limoges, 2014. http://www.theses.fr/2014LIMO0061/document.
The encryption techniques most commonly used in cryptography are based on problems in number theory. Despite their efficiency, they are vulnerable to post-quantum attacks, so it is relevant to study other types of cryptosystems. In this work we study cryptosystems based on error-correcting codes, introduced by McEliece in 1978; being based on hard problems in coding theory, these cryptosystems do not share this weakness. However, they are rarely used in practice because they are vulnerable to structural attacks and require very large keys. Recently a new family of codes, MDPC codes, has been introduced, together with a cryptosystem based on them. MDPC codes seem distinguishable only by finding low-weight words in their dual, which protects them from structural attacks, and they can yield compact keys through the use of quasi-cyclic matrices. In this work we use the rank metric, a metric for codes introduced by Gabidulin that seems well suited for cryptographic use:
• First, we study Ore polynomials and the special case of q-polynomials, the latter being iterates of the Frobenius automorphism on a finite field. These polynomials are widely used in the rank metric, owing to their role in the first rank-metric code-based cryptosystems. We reformulate already-known results and give new results on the computation of the GCD, resultants and subresultants of two Ore polynomials (as well as of usual polynomials, for which we generalize the resultant computation to subresultants), using a right-hand multiplication matrix which is smaller than the well-known Sylvester matrix. These results may be reused in the cryptosystem we introduce in the following chapters, although that cryptosystem is not based on q-polynomials.
• In the next part of our work we define LRPC (Low Rank Parity Check) codes, a new family of codes in the rank metric. These codes have a parity-check matrix whose rank weight is low (they can thus be seen as a generalization of LDPC or MDPC codes to the rank metric). We present the LRPC cryptosystem, a McEliece-type cryptosystem in the rank metric based on LRPC codes. These codes are weakly structured and therefore likely to resist structural attacks. We can choose a double-circulant parity-check matrix, which greatly lowers the key size (we call these particular codes DC-LRPC codes). The DC-LRPC cryptosystem thus offers good security (being based on a hard problem in coding theory), is weakly structured, has small public keys and can be decoded quickly.
An attack was found against the DC-LRPC cryptosystem. It relies on folded codes and may greatly lower the security of the cryptosystem, but it works only when the polynomial X^(k-1)+X^(k-2)+⋯+1 has a divisor of large degree. We give parameters for which the cryptosystem remains secure
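The rank metric mentioned above replaces Hamming weight by rank weight: a vector over F_{2^m} is expanded into an m × n binary matrix, and its weight is the rank of that matrix over F_2. A small illustrative sketch, representing each field element as an integer whose bit i is its i-th coordinate over F_2:

```python
def gf2_rank(rows):
    """Rank over GF(2) of a list of rows packed as integers (xor-basis)."""
    basis = {}
    rank = 0
    for r in rows:
        while r:
            p = r.bit_length() - 1
            if p in basis:
                r ^= basis[p]
            else:
                basis[p] = r
                rank += 1
                break
    return rank

def rank_weight(vec, m):
    """Rank weight of vec over F_{2^m}: rank of its m x n binary expansion."""
    packed_rows = []
    for i in range(m):
        bits = [(x >> i) & 1 for x in vec]   # i-th coordinate of each symbol
        packed_rows.append(int("".join(map(str, bits)), 2))
    return gf2_rank(packed_rows)
```

For example, over F_4 the vector (1, ω, ω+1), encoded as `[1, 2, 3]`, has Hamming weight 3 but rank weight only 2, since ω+1 is F_2-linearly dependent on 1 and ω. LRPC codes exploit exactly this kind of low-rank structure in their parity checks.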
Maimuţ, Diana Ştefania. "Authentication and encryption protocols : design, attacks and algorithmic improvements." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0047/document.
This thesis addresses various topics in cryptology, namely protocol design, algorithmic improvements and attacks. In addition, we venture beyond cryptography and propose two new applications of cryptographic techniques to error-correcting codes. Our main results comprise a provably secure co-signature protocol and a provably secure authenticated encryption scheme. Our co-signature protocol achieves legal fairness, a novel fairness variant that does not rely on third parties; legal fairness is implemented using Schnorr signatures. We also present a distributed Fiat-Shamir authentication protocol. The second part of the thesis is devoted to computational improvements: we discuss a method for doubling the speed of Barrett's algorithm by using specific composite moduli, devise new BCH speed-up strategies using polynomial extensions of Barrett's algorithm, describe a new backtracking-based multiplication algorithm suited to lightweight microprocessors, and present a new number-theoretic error-correcting code. Fault injection attacks are then surveyed, and a new fault attack on ECC implementations is proposed
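Barrett's algorithm, which this thesis speeds up, replaces the division in x mod m by multiplications with a precomputed constant. A textbook sketch of the classic algorithm (not the composite-modulus variant proposed in the thesis):

```python
def barrett_constants(m):
    """Precompute, once per modulus, the shift k and mu = floor(4**k / m)."""
    k = m.bit_length()
    return k, (1 << (2 * k)) // m

def barrett_reduce(x, m, k, mu):
    """Compute x mod m for 0 <= x < m*m using shifts and multiplications only.

    q underestimates floor(x / m) by at most 2, so at most two final
    subtractions replace a full division."""
    q = ((x >> (k - 1)) * mu) >> (k + 1)
    r = x - q * m
    while r >= m:
        r -= m
    return r
```

The precomputation pays off whenever many reductions share the same modulus, which is exactly the situation in BCH decoding, hence the BCH speed-up strategies built on polynomial extensions of this idea.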
Vasseur, Valentin. "Post-quantum cryptography : a study of the decoding of QC-MDPC codes." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5202.
Post-quantum cryptography aims at securing exchanges against an adversary with a quantum computer. One approach considered to achieve post-quantum public-key encryption relies on hard problems in coding theory. The key encapsulation mechanism BIKE, submitted to the NIST post-quantum standardization process, uses QC-MDPC codes, whose quasi-cyclicity allows for a compact key representation. However, their decoding algorithms have a non-zero decoding failure rate (DFR), and this can be a security concern, as demonstrated by Guo, Johansson and Stankovski. This work therefore focuses on the implementation and security of BIKE from the decoder's perspective. First, we design new algorithms that drastically reduce the DFR. These algorithms introduce features of soft-decision decoders into hard-decision decoders, thus bringing the performance of the former while preserving the simplicity of the latter. Second, we develop probabilistic models to predict the DFR in regimes beyond the reach of simulation. The first model takes into account the regularity of the code; it is very accurate but can only analyze one iteration of a parallel decoder. The second model is based on a Markovian assumption on the behavior of a complete sequential decoder. Finally, we derive a DFR extrapolation method for which we establish confidence intervals. We then evaluate how well this extrapolation accounts for structural characteristics of the code that can affect the decoding process, such as weak keys or error floors
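The decoders studied here are bit-flipping decoders: repeatedly flip the bits involved in a majority of unsatisfied parity checks until the syndrome vanishes or a round limit is hit, which is what makes the failure rate non-zero. A toy sketch of such a decoder and of the Monte-Carlo DFR estimation it calls for (a tiny hand-made parity-check matrix, nothing like BIKE's quasi-cyclic parameters):

```python
import random

def bit_flip_decode(H, y, max_iter=20):
    """Gallager-style parallel bit flipping around the zero codeword.
    Returns (word, success): success means the syndrome reached zero."""
    n = len(y)
    y = list(y)
    col_deg = [sum(row[j] for row in H) for j in range(n)]
    for _ in range(max_iter):
        unsat = [row for row in H if sum(a & b for a, b in zip(row, y)) % 2 == 1]
        if not unsat:
            return y, True
        counts = [sum(row[j] for row in unsat) for j in range(n)]
        flips = [j for j in range(n) if 2 * counts[j] > col_deg[j]]
        if not flips:  # stalled: flip a single most-suspicious bit
            flips = [max(range(n), key=counts.__getitem__)]
        for j in flips:
            y[j] ^= 1
    return y, False

def estimate_dfr(H, t, trials, seed=0):
    """Monte-Carlo decoding failure rate over random weight-t error patterns."""
    rng = random.Random(seed)
    n = len(H[0])
    failures = 0
    for _ in range(trials):
        e = [0] * n
        for p in rng.sample(range(n), t):
            e[p] = 1
        _, ok = bit_flip_decode(H, e)
        if not ok:
            failures += 1
    return failures / trials

# A small sparse parity-check matrix (illustrative only).
H = [[1, 1, 0, 1, 0, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0, 0],
     [0, 0, 1, 1, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0, 1, 0],
     [0, 0, 0, 0, 0, 1, 1, 1],
     [1, 0, 1, 0, 0, 0, 0, 1]]
```

Simulation of this kind can only reach DFRs down to roughly the inverse of the number of trials; target rates like 2^-128 are far out of reach, which is why the thesis develops models and extrapolation methods instead.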
Dragoi, Vlad Florin. "Approche algébrique pour l'étude et la résolution de problèmes algorithmiques issus de la cryptographie et la théorie des codes." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR046/document.
First of all, during my PhD I focused on public-key cryptography, more precisely on code-based cryptography. The main motivation is to study the security of the latest encryption schemes; for that, I analyzed in detail the structural properties of the main code families. My research was thus naturally directed towards the study of McEliece-type encryption schemes, among which the recent MDPC-based variant [MTSB13] and the polar-code variant [SK14]. In the case of the MDPC-based variant, we managed to reveal an important weakness in the key pairs used in the protocol: we proposed an efficient algorithm that retrieves the private key given the public key of the scheme. We then counted the proportion of weak keys and used the code equivalence problem to extend the number of weak keys. We published our results at an international conference in cryptography [BDLO16]. Next we studied polar codes and their application to public-key cryptography. Since their discovery by Arikan [Arı09], polar codes have been among the most studied code families from an information-theoretic point of view. In terms of performance they are very efficient, since they achieve the capacity of binary discrete memoryless channels and allow extremely fast encoding and decoding algorithms. Nonetheless, little is known about their structure. In this context, we introduced an algebraic formalism which allowed us to reveal a large part of the structure of polar codes. Indeed, we managed to answer fundamental questions about polar codes such as their dual, minimum distance, permutation group and number of minimum-weight codewords. Our results were published at an international conference in information theory [BDOT16]. We also completely cryptanalyzed the McEliece variant using polar codes.
The attack was a direct application of the aforementioned results on the structural properties of polar codes, and it was published at an international conference in post-quantum cryptography [BCD+16]
Dubuc, Camus Sylvie. "Etude des propriétés de dégénérescence et de normalité des fonctions booléennes et construction de fonctions q-aires parfaitement non linéaires." Caen, 2001. http://www.theses.fr/2001CAEN2011.
Bringer, Julien. "Non-linéarité des fonctions booléennes : applications de la théorie des fonctions booléennes et des codes en cryptographie." Phd thesis, Université du Sud Toulon Var, 2007. http://tel.archives-ouvertes.fr/tel-00258334.
Kanade, Sanjay Ganesh. "Enhancing information security and privacy by combining biometrics with cryptography." Thesis, Evry, Institut national des télécommunications, 2010. http://www.theses.fr/2010TELE0022/document.
Securing information during its storage and transmission is an important and widely addressed issue. Generally, cryptographic techniques are used for information security. Cryptography requires long keys which need to be kept secret in order to protect the information; the drawback is that these keys are not strongly linked to the user's identity. In order to strengthen the link between the user's identity and his cryptographic keys, biometrics is combined with cryptography. In this thesis, we present various methods to combine biometrics with cryptography. With this combination, we also address the privacy issues of biometric systems: revocability, template diversity, and privacy protection are added to biometric verification systems. Finally, we present a protocol for generating and sharing biometrics-based crypto-biometric session keys. These systems are evaluated on publicly available iris and face databases
Lavauzelle, Julien. "Codes with locality : constructions and applications to cryptographic protocols." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX082/document.
Locally correctable codes (LCCs) were introduced in order to retrieve pieces of information from a noisy codeword using a limited number of queries to its symbols, this number being called the locality. Three main families of LCCs reaching sublinear locality and arbitrarily high rate have been built so far. This specific range of parameters is of particular interest for practical applications of LCCs. In this thesis, after giving a state of the art of LCCs, we study how they can be built using block designs. We then give an analogue over projective spaces of the family of affine lifted codes introduced by Guo, Kopparty and Sudan. We exhibit several links between the two families, and we give a precise analysis of the monomial structure of the code in the case of lifting of order 2. The second part of the thesis focuses on the application of these codes to two cryptographic protocols. We first build a new private information retrieval (PIR) protocol from codes based on transversal designs, whose block size defines the locality of the code. Our construction features no computation on the server side, low storage overhead and moderate communication complexity. Then, we propose a new generic construction of proofs of retrievability (PoR) that uses codes equipped with an elaborate structure of low-weight parity-check equations. We give a rigorous analysis of the security of our scheme, and we finally propose practical instantiations based on codes with locality
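Private information retrieval, which the thesis builds from transversal designs, is easiest to see in the classic two-server XOR scheme (the textbook baseline, not the thesis construction): the client sends each server a query that individually looks uniformly random, the two queries differing only at the desired index, and XORs the two one-bit answers.

```python
import random

def pir_queries(n, i, rng):
    """Two queries, each uniformly random on its own, differing exactly at index i."""
    q1 = [rng.randrange(2) for _ in range(n)]
    q2 = list(q1)
    q2[i] ^= 1
    return q1, q2

def pir_answer(db, q):
    """Each server XORs the database bits selected by the query
    (an inner product mod 2, i.e. a dual-codeword-style check)."""
    return sum(d & s for d, s in zip(db, q)) % 2

def pir_retrieve(db, i, rng):
    q1, q2 = pir_queries(len(db), i, rng)
    return pir_answer(db, q1) ^ pir_answer(db, q2)  # equals db[i]
```

Correctness follows from linearity: the XOR of the two answers is the inner product of the database with q1 ^ q2, which is the indicator vector of position i. Code-based constructions like the one in the thesis reduce the server-side computation and storage overhead that this naive scheme incurs.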
Saeed, Mohamed Ahmed. "Approche algébrique sur l'équivalence de codes." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR034/document.
The code equivalence problem plays an important role in coding theory and code-based cryptography, owing to its significance in the classification of codes and in the construction and cryptanalysis of code-based cryptosystems. It is also related to the long-standing graph isomorphism problem, well known in complexity theory. We introduce a new method for solving the code equivalence problem, developing algebraic approaches for both its permutation and diagonal versions. We build an algebraic system by establishing relations between the generator and parity-check matrices of the equivalent codes, ending up with a system of multivariate linear and quadratic equations which can be solved using algebraic tools such as Groebner bases and related techniques. Groebner basis techniques can solve the code equivalence problem, but the computation becomes complex as the length of the code increases. We introduce several improvements, such as block linearization and the Frobenius action, and use these techniques to identify many cases where the permutation equivalence problem can be solved efficiently. Our method for diagonal equivalence solves the problem efficiently over small fields, namely F3 and F4; increasing the field size increases the number of variables in our algebraic system, which makes it harder to solve. We also introduce a new reduction from permutation code equivalence with trivial hull to graph isomorphism, which shows that this subclass of permutation equivalence is no harder than graph isomorphism. Using this reduction we obtain an algebraic system for graph isomorphism with interesting properties in terms of the rank of its linear part and the number of variables. We solve the graph isomorphism problem efficiently for random graphs with a large number of vertices, and also for some regular graphs such as the Petersen, cubical and Wagner graphs
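The permutation version of the problem asks whether one code can be mapped onto another by permuting coordinates. A brute-force check makes the combinatorial cost explicit (it enumerates all n! permutations, feasible only at toy length, which is precisely what the algebraic modeling avoids):

```python
from itertools import permutations, product

def codewords(G):
    """All codewords of the binary code spanned by the rows of G."""
    k, n = len(G), len(G[0])
    words = set()
    for coeffs in product((0, 1), repeat=k):
        words.add(tuple(sum(c & row[j] for c, row in zip(coeffs, G)) % 2
                        for j in range(n)))
    return words

def permutation_equivalent(G1, G2):
    """True iff some column permutation maps code(G1) onto code(G2).
    Cost: O(n! * 2^k * n) -- hopeless beyond toy parameters."""
    C1, C2 = codewords(G1), codewords(G2)
    if len(C1) != len(C2):
        return False
    n = len(G1[0])
    return any({tuple(w[p[j]] for j in range(n)) for w in C1} == C2
               for p in permutations(range(n)))
```

Since column permutations preserve Hamming weights, comparing weight distributions already rules out many pairs cheaply; the algebraic systems in the thesis go much further by modeling the unknown permutation with polynomial equations.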
Legeay, Matthieu. "Utilisation du groupe de permutations d'un code correcteur pour améliorer l'efficacité du décodage." Rennes 1, 2012. http://www.theses.fr/2012REN1S099.
Error-correcting codes and the associated decoding problem are one of the variants considered in post-quantum cryptography. In general, a random code often has a trivial permutation group. However, the codes involved in the construction of cryptosystems and cryptographic functions based on error-correcting codes usually have a non-trivial permutation group, and few cryptanalysis articles use the information contained in these groups. We aim at improving decoding algorithms by using the permutation group of error-correcting codes. There are many ways to do so. The first one we focus on in this thesis uses the cyclic permutation called the "shift" in information set decoding: we build on work initiated by MacWilliams and put forward a detailed complexity analysis. The other approach we investigate uses a permutation of order two to algebraically create a subcode of the original code. Decoding in this subcode, which has smaller parameters, is easier and recovers information useful for decoding in the original code. Finally, we study this last approach on the well-known family of Reed-Muller codes, extending work initiated by Sidel'nikov and Pershakov
Jouguet, Paul. "Performance et sécurité de dispositifs de distribution quantique de clés à variables continues." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0048.
This thesis focuses on a cryptographic primitive that allows two distant parties to generate an arbitrary amount of secret key, even in the presence of an eavesdropper, provided that they share a short initial secret message. We focus our study on continuous-variable protocols and experimentally demonstrate an all-fiber system that distributes secret keys over 80 km of dedicated fiber link while taking all known imperfections into account. We could extract secret keys at such a distance by designing specific error-correcting codes that perform very close to Shannon's bound at low signal-to-noise ratios. We also consider side-channel attacks that are not taken into account in the system's security proof, and we propose countermeasures. Finally, we study the compatibility of our system with intense communication channels propagating in the same optical fiber
Koshelev, Dmitrii. "Nouvelles applications des surfaces rationnelles et surfaces de Kummer généralisées sur des corps finis à la cryptographie à base de couplages et à la théorie des codes BCH." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASM001.
There is a well-developed theory of so-called toric codes, i.e., algebraic geometry codes on toric varieties over a finite field. Besides ordinary (i.e., split) tori and toric varieties, there are non-split ones. This thesis is therefore dedicated to the study of algebraic geometry codes on the latter
Briaud, Pierre. "Algebraic cryptanalysis of post-quantum schemes and related assumptions." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS396.
This thesis studies the effect of algebraic techniques on certain post-quantum cryptosystems. We give attacks on multivariate schemes and on code-based schemes in the rank metric, some of which have been submitted to the NIST standardization process. Most of these works involve the MinRank problem or structured versions of it. We devised new polynomial modelings for some of these versions and contributed to the analysis of existing ones, in particular the Support-Minors modeling (Bardet et al., EUROCRYPT 2020). Our break of a recent multivariate encryption scheme (Raviv et al., PKC 2021) is also a MinRank attack. Finally, we studied other algebraic systems, no longer related to MinRank, arising from the cryptanalysis of Regular Syndrome Decoding (Augot et al., Mycrypt 2005) and of a symmetric primitive tailored to zero-knowledge proofs (Bouvier et al., CRYPTO 2023)
Géraud, Rémi. "Advances in public-key cryptology and computer exploitation." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE057/document.
Information security relies on the correct interaction of several abstraction layers: hardware, operating systems, algorithms, and networks. However, protecting each component of the technological stack has a cost; for this reason, many devices are left unprotected or under-protected. This thesis addresses several of these aspects from a security and cryptography viewpoint. To that end, we introduce new cryptographic algorithms (such as extensions of the Naccache–Stern encryption scheme), new protocols (including a distributed zero-knowledge identification protocol), and improved algorithms (including a new error-correcting code and an efficient integer multiplication algorithm), as well as several contributions relevant to information security and network intrusion. Furthermore, several of these contributions address the performance of existing and newly introduced constructions.
Jouguet, Paul. "Performance et sécurité de dispositifs de distribution quantique de clés à variables continues." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0048/document.
This thesis focuses on a cryptographic primitive that allows two distant parties to generate an arbitrary amount of secret key even in the presence of an eavesdropper, provided that they share a short initial secret message. We focus our study on continuous-variable protocols and demonstrate experimentally an all-fiber system that performs distribution of secret keys at 80 km on a dedicated fiber link while taking into account all known imperfections. We could extract secret keys at such a distance by designing specific error-correcting codes that perform very close to Shannon's bound at low signal-to-noise ratios. We also consider side-channel attacks that are not taken into account in the system's security proof and propose some countermeasures. Finally, we study our system's compatibility with intense communication channels that propagate on the same optical fiber.
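For context, the Shannon bound that such reconciliation codes approach is the AWGN capacity C = 0.5 * log2(1 + SNR); at long distances the SNR is very low, and a practical code extracts only a fraction β of C. A small illustrative computation (the SNR and β values below are made-up examples for the low-SNR regime, not figures from the thesis):

```python
import math

def awgn_capacity(snr):
    """Shannon capacity of a real AWGN channel, in bits per symbol."""
    return 0.5 * math.log2(1 + snr)

def reconciled_bits(snr, beta):
    """Information extracted per symbol by a code achieving a fraction
    beta of capacity (the reconciliation efficiency)."""
    return beta * awgn_capacity(snr)

# Illustrative low-SNR regime, as on a long fiber link: capacity is a few
# hundredths of a bit per symbol, so code efficiency directly limits key rate.
capacity = awgn_capacity(0.03)
extracted = reconciled_bits(0.03, beta=0.95)
```

Because the capacity at low SNR is tiny, even a small shortfall in β can drive the extractable secret key rate to zero, which is why codes performing very close to the Shannon bound are needed at 80 km.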
Shettell, Nathan. "Quantum Information Techniques for Quantum Metrology." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS504.
Quantum metrology is an auspicious discipline of quantum information which is currently witnessing a surge of experimental breakthroughs and theoretical developments. The main goal of quantum metrology is to estimate unknown parameters as accurately as possible. By using quantum resources as probes, it is possible to attain a measurement precision that would be otherwise impossible using the best classical strategies. For example, with respect to the task of phase estimation, the maximum precision (the Heisenberg limit) is a quadratic gain in precision with respect to the best classical strategies. Of course, quantum metrology is not the sole quantum technology currently undergoing advances. The theme of this thesis is exploring how quantum metrology can be enhanced with other quantum techniques when appropriate, namely: graph states, error correction and cryptography. Graph states are an incredibly useful and versatile resource in quantum information. We aid in determining the full extent of the applicability of graph states by quantifying their practicality for the quantum metrology task of phase estimation. In particular, the utility of a graph state can be characterised in terms of the shape of the corresponding graph. From this, we devise a method to transform any graph state into a larger graph state (named a bundled graph state) which approximately saturates the Heisenberg limit. Additionally, we show that graph states are a robust resource against the effects of noise, namely dephasing and a small number of erasures, and that the quantum Cramér-Rao bound can be saturated with a simple measurement strategy. Noise is one of the biggest obstacles for quantum metrology, limiting its achievable precision and sensitivity. It has been shown that if the environmental noise is distinguishable from the dynamics of the quantum metrology task, then frequent applications of error correction can be used to combat the effects of noise.
In practice, however, the required frequency of error correction to maintain Heisenberg-like precision is unobtainable for current quantum technologies. We explore the limitations of error-correction-enhanced quantum metrology by taking into consideration technological constraints and impediments, from which we establish the regime in which the Heisenberg limit can be maintained in the presence of noise. Fully implementing a quantum metrology problem is technologically demanding: entangled quantum states must be generated and measured with high fidelity. One solution, in the instance where one lacks all of the necessary quantum hardware, is to delegate the task to a third party. In doing so, several security issues naturally arise because of the possibility of interference by a malicious adversary. We address these issues by developing the notion of a cryptographic framework for quantum metrology. We show that the precision of the quantum metrology problem can be directly related to the soundness of an employed cryptographic protocol. Additionally, we develop cryptographic protocols for a variety of cryptographically motivated settings, namely: quantum metrology over an unsecured quantum channel and quantum metrology with a task delegated to an untrusted party. Quantum sensing networks have been gaining interest in the quantum metrology community over the past few years. They are a natural choice for spatially distributed problems and multiparameter problems. The three proposed techniques, graph states, error correction and cryptography, are a natural fit to be immersed in quantum sensing networks. Graph states are a well-known candidate for the description of a quantum network, error correction can be used to mitigate the effects of a noisy quantum channel, and the cryptographic framework of quantum metrology can be used to add a sense of security. Combining these works formally is a future perspective.
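The quadratic gain mentioned in this abstract can be stated concretely: with n independent probes, phase uncertainty scales as 1/sqrt(n) (the standard quantum limit), while entangled strategies such as GHZ states can reach 1/n (the Heisenberg limit). A minimal numeric sketch of the two idealized, noise-free scalings (the function names are ours, for illustration only):

```python
import math

def sql_uncertainty(n):
    """Standard quantum limit: best phase uncertainty with n independent probes."""
    return 1.0 / math.sqrt(n)

def heisenberg_uncertainty(n):
    """Heisenberg limit: best phase uncertainty with n entangled probes."""
    return 1.0 / n

# The advantage of the entangled strategy grows as sqrt(n): a factor of 10
# for n = 100 probes.
gain = sql_uncertainty(100) / heisenberg_uncertainty(100)
```

Noise erodes precisely this advantage, which is why the thesis studies error correction and robustness of graph states as ways to retain Heisenberg-like scaling in practice.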