Dissertations / Theses on the topic 'Codes binaires'
Consult the top 31 dissertations / theses for your research on the topic 'Codes binaires.'
Klaimi, Rami. "Etude de turbocodes non binaires pour les futurs systèmes de communication et de diffusion." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0141.
Nowadays, communication standards have adopted various binary forward error correction codes: turbo codes were adopted for the Long Term Evolution (LTE) standard, while binary LDPC codes were standardized for the fifth generation of mobile communication (5G), alongside polar codes. Meanwhile, the focus of the communication community has shifted towards the requirements of beyond-5G standards. Networks for the year 2030 and beyond are expected to support novel forward-looking scenarios, such as holographic communications, autonomous vehicles, massive machine-type communications and the tactile Internet. To meet the expected requirements of new communication systems, non-binary LDPC codes were defined and shown to achieve better error-correcting performance than binary LDPC codes. This performance gain comes at the price of a high decoding complexity that depends on the field order. Similar studies emerged in the context of turbo codes, where non-binary turbo codes were defined and shown to offer promising error-correcting performance, again at high complexity. The aim of this thesis is to propose a new low-complexity structure for non-binary turbo codes. The constituent blocks of this structure are optimized in this work, and a new low-complexity decoding algorithm is proposed with a future hardware implementation in mind. The results are promising: the proposed codes outperform existing binary and non-binary codes from the literature.
Shams, Bilal. "Les Codes LDPC non-binaires de nouvelle génération." Thesis, Cergy-Pontoise, 2010. http://www.theses.fr/2010CERG0525/document.
In this thesis we present our work on non-binary decoding algorithms for general classes of non-binary LDPC codes. Low-Density Parity-Check (LDPC) codes were originally presented by Gallager in 1963 and, after some fundamental theoretical advancements, were adopted in standards such as DVB-S2, WiMAX, DSL and W-LAN. Later on, non-binary LDPC (NB-LDPC) codes were proposed in the literature and showed better performance for small block lengths or when used on non-binary channels. However, the advantages of NB-LDPC codes come at the cost of a heavily increased decoding complexity. For a code defined over GF(q), the decoding complexity is of order O(q²) and the memory required for storing messages is of order O(q). Consequently, the implementation of an LDPC decoder defined over a field of order q > 64 becomes practically impossible. The main objective of the thesis is to develop reduced-complexity algorithms for non-binary LDPC codes that exhibit excellent performance and are practically implementable. Not only the decoding algorithm but also the structure of the code plays an important role in decoding performance. With this goal in mind, a new family of codes called cluster-NB-LDPC codes was developed, and specific improvements of the NB decoder for cluster-NB-LDPC codes were proposed. Our principal result is that we were able to propose decoders for cluster-NB-LDPC codes with reduced complexity compared to the usual decoders for NB-LDPC codes over fields, without any loss in error-correction capability.
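To see where the O(q²) term quoted in this abstract comes from, here is a minimal sketch (ours, not the thesis's decoder) of a brute-force check-node update over GF(2^m): combining two incoming probability vectors requires a full convolution under the field addition, which for GF(2^m) is bitwise XOR.

```python
# Minimal sketch of NB-LDPC check-node cost: combining two messages over
# GF(2^m) is a convolution under the field addition (XOR), hence O(q^2).
import numpy as np

def check_node_combine(p, r):
    """Convolve two probability vectors over GF(2^m); len(p) == len(r) == q."""
    q = len(p)
    out = np.zeros(q)
    for a in range(q):            # q iterations ...
        for b in range(q):        # ... times q iterations -> O(q^2) products
            out[a ^ b] += p[a] * r[b]   # XOR is addition in GF(2^m)
    return out

q = 64  # GF(64): already 4096 products per pair of messages
p = np.random.dirichlet(np.ones(q))
r = np.random.dirichlet(np.ones(q))
print(check_node_combine(p, r).sum())  # ~1.0: the result is still a distribution
```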
Li, Erbao. "Décodeurs Haute Performance et Faible Complexité pour les codes LDPC Binaires et Non-Binaires." Phd thesis, Université de Cergy Pontoise, 2012. http://tel.archives-ouvertes.fr/tel-00806192.
Sassatelli, Lucile. "Codes LDPC multi-binaires hybrides et méthodes de décodage itératif." Phd thesis, Université de Cergy Pontoise, 2008. http://tel.archives-ouvertes.fr/tel-00819413.
Abassi, Oussama. "Étude des décodeurs LDPC non-binaires." Lorient, 2014. https://hal.science/tel-01176817.
Gorgoglione, Matteo. "Analyse et construction de codes LDPC non-binaires pour des canaux à évanouissement." Phd thesis, Université de Cergy Pontoise, 2012. http://tel.archives-ouvertes.fr/tel-00778170.
Gorgoglione, Matteo. "Analyse et construction de codes LDPC non-binaires pour des canaux à évanouissement." Phd thesis, Université de Cergy Pontoise, 2012. http://tel.archives-ouvertes.fr/tel-00819415.
Full textGorgoglione, Matteo. "Analyse et construction de codes LDPC non-binaires pour des canaux à evanouissement." Thesis, Cergy-Pontoise, 2012. http://www.theses.fr/2012CERG0578/document.
Over the last 15 years, spectacular advances in the analysis and design of graph-based codes and iterative decoding techniques paved the way for the development of error correction systems operating very close to the theoretical Shannon limit. A prominent role has been played by the class of Low-Density Parity-Check (LDPC) codes, introduced in the early 60's by Gallager and described later in terms of sparse bipartite graphs. In the early 2000's, LDPC codes were shown to be capacity-approaching codes for a wide range of channel models, which motivated the increased interest of the scientific community and supported the rapid transfer of this technology to the industrial sector. Over the past few years there has been an increased interest in non-binary LDPC codes due to their enhanced correction capacity. Although Gallager already proposed the use of non-binary alphabets in his seminal work (by using modular arithmetic), non-binary LDPC codes defined over finite fields have only been investigated starting with the late 90's. They have been proven to provide better performance than their binary counterparts when the block length is small to moderate, or when the symbols sent through the channel are not binary, which is the case for high-order modulations or for multiple-antenna channels. However, the performance gain comes at a non-negligible cost in decoding complexity, which may prohibit the use of non-binary LDPC codes in practical systems, especially when the price to pay in decoding complexity is too high for the performance gain one can get. This thesis addresses the analysis and design of non-binary LDPC codes for fading channels. The main goal is to demonstrate that besides the gain in decoding performance, the use of non-binary LDPC codes can bring additional benefits that may offset the extra cost in decoding complexity. Flexibility and diversity are the two benefits that we demonstrate in this thesis. Flexibility is the capacity of a coding system to accommodate multiple coding rates through the use of a unique encoder/decoder pair. The diversity of a coding system relates to its capacity to fully exploit the communication channel's heterogeneity. The first contribution of the thesis is the development of a Density Evolution approximation method, based on the Monte-Carlo simulation of an infinite code. We show that the proposed method provides accurate and precise estimates of non-binary ensemble thresholds, and makes possible the optimization of non-binary codes for a wide range of applications and channel models. The second contribution of the thesis consists of the analysis and design of flexible coding schemes through the use of puncturing. We show that non-binary LDPC codes are more robust to puncturing than their binary counterparts, thanks to the fact that non-binary symbol-nodes can be only partially punctured. For regular codes, we show that the design of puncturing patterns must respect different rules depending on whether the symbol-nodes are of degree 2 or higher. For irregular codes we propose an optimization procedure and we present optimized puncturing distributions for non-binary LDPC codes, which exhibit a gap to capacity between 0.2 and 0.5 dB for punctured rates varying from 0.5 to 0.9. The third contribution investigates non-binary LDPC codes transmitted over a Rayleigh (fast) fading channel, in which different modulated symbols are affected by different fading factors.
In case of a one-to-one correspondence between modulated and coded symbols, deep fading can make some coded symbols totally unrecoverable, leading to poor system performance. In order to avoid this phenomenon, binary diversity can be exploited by using a bit-interleaver module placed between the encoder and the modulator. We propose an optimized interleaving algorithm, inspired from the Progressive Edge-Growth (PEG) method, which ensures maximum girth of th
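The Density Evolution idea mentioned in this abstract has a simple closed-form analogue for binary codes. The sketch below (a classical binary illustration on the erasure channel, not the thesis's non-binary Monte-Carlo method) finds the decoding threshold of a regular (dv, dc) LDPC ensemble by bisection.

```python
# Density evolution for a regular (dv, dc) LDPC ensemble on the binary
# erasure channel: iterate the erasure probability and bisect on epsilon.
def de_converges(eps, dv, dc, iters=2000, tol=1e-9):
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True        # erasure probability driven to zero
    return False

def threshold(dv, dc):
    lo, hi = 0.0, 1.0
    for _ in range(40):        # bisection on the channel erasure rate
        mid = 0.5 * (lo + hi)
        if de_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo

print(threshold(3, 6))  # ~0.4294 for the (3,6)-regular ensemble
```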
Herbert, Vincent. "Des codes correcteurs pour sécuriser l'information numérique." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00657110.
Pogildiakov, Ivan. "Contributions à l'étude du nombre de points rationnels des courbes sur les corps finis." Electronic Thesis or Diss., Rennes 1, 2022. http://www.theses.fr/2022REN1S125.
This thesis is devoted to the study of lower existence bounds on the genus of hyperelliptic curves over a fixed finite field. More precisely, given an integer N, we ask: what is the minimum number G such that for any g bigger than G there is a genus-g hyperelliptic curve having exactly N rational points? The first part of the manuscript deals with so-called pointless curves, i.e. the case N = 0. Using explicit constructions, for each characteristic we establish a new bound that depends linearly on the size of the ground finite field. In the second part we consider the general case. Here, when the characteristic is odd and N is strictly positive, a new quasilinear bound on the genera is obtained by means of implicit constructions. The third part completes the above theoretical results. We propose a new approach to the computer search of hyperelliptic curves over small finite fields of odd characteristic by exploiting the machinery of binary linear codes. This thesis was carried out under the direction of Professor Alexey Zykin at the University of French Polynesia (2015-2019) and the co-direction of Michael A. Tsfasman (Institute for Information Transmission Problems, Moscow).
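As a toy illustration of the objects involved (our example, not taken from the thesis), the affine rational points of a hyperelliptic curve y² = f(x) over a small prime field can be counted by brute force:

```python
# Brute-force count of affine points (x, y) in F_p^2 with y^2 = f(x).
# The curve and prime below are arbitrary demo choices.
def affine_point_count(f_coeffs, p):
    """f is given by its coefficients, lowest degree first."""
    squares = {}                      # value -> number of square roots in F_p
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    count = 0
    for x in range(p):
        fx = sum(c * pow(x, i, p) for i, c in enumerate(f_coeffs)) % p
        count += squares.get(fx, 0)
    return count

# y^2 = x^5 + x + 1 over F_7 (genus 2 when the right-hand side is squarefree)
print(affine_point_count([1, 1, 0, 0, 0, 1], 7))  # -> 8 affine points
```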
Chehade, Tarek. "Optimisation d'un précodeur MIMO-OFDM dans le contexte de l'estimation aveugle et semi-aveugle du canal de communication." Thesis, Brest, 2015. http://www.theses.fr/2015BRES0077/document.
Channel estimation plays an important role in wireless mobile communications, especially in MIMO systems. Unlike conventional channel estimation techniques based on training sequences or pilot symbols, blind techniques do not require the insertion of training symbols and allow higher throughput. The main problem of blind estimation lies in the ambiguity of the estimated channel. Based on the same methods as blind estimation, semi-blind estimation techniques are more robust: they exploit the blind information along with the information provided by a small number of training symbols. Channel estimation is useful in MIMO systems and allows precoding of the MIMO-OFDM signal by applying a pre-mixture in order to improve performance. Many types of precoders exist, and their performance varies depending not only on the optimization criterion (Water-Filling, MMSE, Equal Error, max-SNR, max-dmin, ...) but also on the estimated channel. In this thesis we study the impact of using the channel state information (CSI) obtained from blind and semi-blind estimation techniques to apply MIMO linear precoders. We also present a statistical study of the estimation error of these methods. The optimization of these precoders eventually leads us to another process that allows further performance improvement: error-correcting codes. We are particularly interested in non-binary LDPC codes and their association with linear MIMO precoders. We show that a matching is possible and is beneficial in some cases. The optimization of this combination allowed us to propose a new robust and more efficient precoder based on the maximization of mutual information.
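Among the precoder criteria listed in this abstract, Water-Filling has a particularly compact form. The following sketch (a standard textbook computation with example channel gains, not the thesis's precoder) allocates a power budget across MIMO eigen-channels by bisecting on the water level:

```python
# Water-filling: p_i = max(0, mu - 1/g_i), with mu chosen so the total
# allocated power matches the budget.
import numpy as np

def water_filling(gains, total_power, iters=60):
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + (1.0 / gains).max()
    for _ in range(iters):            # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

print(water_filling([2.0, 1.0, 0.25], total_power=1.0))
# -> approximately [0.75, 0.25, 0.0]: strong channels get more power
```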
Shams, Bilal. "Codes LDPC non-binaire de nouvelle generation." Phd thesis, Université de Cergy Pontoise, 2010. http://tel.archives-ouvertes.fr/tel-00766409.
Nhan, Nhat-Quang. "Optimisation de précodeurs linéaires pour les systèmes MIMO à récepteurs itératifs." Thesis, Brest, 2016. http://www.theses.fr/2016BRES0062/document.
The Long-Term Evolution (LTE) and LTE-Advanced (LTE-A) standards are predicted to play essential roles in future fifth-generation (5G) mobile networks. These standards require high data rates and high quality of service, which implies low error rates and low latency. Besides, as discussed in recent surveys, low-complexity communication systems are also essential in the next 5G mobile networks. To adapt to this trend, in this PhD thesis we investigate multiple-input multiple-output (MIMO) wireless communication schemes. In the first part of the thesis, low-complexity forward error correction (FEC) codes are used to keep complexity and latency low. Considering iterative receivers at the receiver side, we exploit MIMO linear precoding and mapping methods to optimize the error-rate performance of these systems. In the second part of the thesis, non-binary low-density parity-check (NB-LDPC) codes are investigated. We propose to use MIMO precoders to reduce the complexity of NB-LDPC-encoded MIMO systems. A novel low-complexity decoding algorithm for NB-LDPC codes is also proposed at the end of the thesis.
Chabot, Christophe. "Reconnaissance de codes, structure des codes quasi-cycliques." Limoges, 2009. https://aurore.unilim.fr/theses/nxfile/default/ca1051fa-cdfe-4a04-8251-fb35a0ef5b5e/blobholder:0/2009LIMO4036.pdf.
In this thesis, we first deal with the problem of code recognition. It consists in recovering the structure of an error-correcting code used during a data transmission from the noisy intercepted sequence alone. We give efficient methods for the recognition of a known code, for the reconstruction of codes belonging to a family such as cyclic codes, and for the detection of the parameters of convolutional codes. Then, we study the structure of quasi-cyclic codes in parallel with the results known for cyclic codes. We give a construction of a sub-family of quasi-cyclic codes cancelled by a polynomial with matrix coefficients. Some of these codes reach large minimum distances. Finally, we deal with permutations preserving the quasi-cyclicity of a code.
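For readers unfamiliar with the quasi-cyclic structure discussed here, a minimal sketch (our toy example, not a construction from the thesis): a quasi-cyclic code of index 2 can be generated by a row of circulant blocks, so the code is closed under simultaneously shifting both blocks by one position.

```python
# Build a generator matrix G = [C1 | C2] over GF(2) from circulant blocks;
# each block consists of the cyclic shifts of its first row.
import numpy as np

def circulant(first_row):
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)]) % 2

C1 = circulant([1, 0, 1, 0, 0])
C2 = circulant([1, 1, 0, 0, 0])
G = np.hstack([C1, C2])   # rows span a quasi-cyclic code of index 2
print(G)
```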
Lakhdhar, Khaled. "Encodage entropique des indices binaires d'un quantificateur algébrique encastré." Mémoire, Université de Sherbrooke, 2009. http://savoirs.usherbrooke.ca/handle/11143/1526.
Koliaï, Souad. "Approche statique et dynamique pour l'évaluation de performances de codes scientifiques." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0010.
Current hardware tends to increase the pressure on programmers to optimize their codes. The complexity of modern architectures makes it more difficult to understand the behavior of the programs running on them. Moreover, compilers apply aggressive optimizations which make the compiled code harder to understand. This increasing complexity shows that there is still a need for performance analysis to help programmers. Different tools and techniques exist, but no single tool is a panacea; instead, different tools have different strengths. This thesis proposes two different and complementary tools for performance analysis of binary code. The first tool, MAQAO's static analysis, performs a static evaluation of the performance of the code and gives an estimate of its quality, such as vectorization ratios. The second tool, Decan, is a new approach to performance analysis that targets memory instructions to pinpoint the set of instructions responsible for poor performance. Both tools are combined in a proposed semi-automated methodology for performance evaluation.
Gruber, Fabian. "Débogage de performance pour code binaire : Analyse de sensitivité et profilage de dépendances." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM071.
Debugging, as usually understood, revolves around finding and removing defects in software that prevent it from functioning correctly. That is, when one talks about bugs and debugging one usually means functional bugs and functional debugging. In the context of this thesis, however, we talk about performance bugs and performance debugging: we want to find defects that do not cause a program to crash or behave wrongly, but that make it run inefficiently, too slowly, or use too many resources. To that end, we have developed tools that analyse and model performance to help programmers improve their code. We propose the following two performance debugging techniques: sensitivity-based performance bottleneck analysis and data-dependence-profiling-driven optimization feedback. Sensitivity-based performance bottleneck analysis: answering a seemingly trivial question about a program's performance, such as whether it is memory-bound or CPU-bound, can be surprisingly difficult. This is because the CPU and memory are not merely two completely independent resources, but are composed of multiple complex interdependent subsystems, where a stall of one resource can both mask or aggravate problems with another resource. We present a sensitivity-based performance bottleneck analysis that uses a high-level performance model implemented in GUS, a fast CPU simulator, to pinpoint performance bottlenecks. Our performance model needs a baseline for the expected performance of different operations on a CPU, like the peak IPC and how different instructions compete for processor resources. Unfortunately, this information is seldom published by hardware vendors, such as Intel or AMD. To build our processor model, we have developed a system to reverse-engineer the required information using automatically generated micro-benchmarks. Data-dependence-driven polyhedral optimization feedback: we have developed MICKEY, a dynamic data-dependence profiler that provides high-level optimization feedback on the applicability and profitability of optimizations missed by the compiler. MICKEY leverages the polyhedral model, a powerful optimization framework for finding sequences of loop transformations that expose data locality and implement both coarse-grain (thread-level) and fine-grain (vector-level) parallelism. Our tool uses dynamic binary instrumentation, allowing it to analyze programs written in different programming languages or using third-party libraries for which no source code is available. Internally, MICKEY uses a polyhedral intermediate representation (IR) that encodes both the dynamic execution of a program's instructions and its data dependencies. The IR not only captures data dependencies across multiple loops but also across, possibly recursive, procedure calls. We have developed an efficient trace compression algorithm, called the folding algorithm, that constructs this polyhedral IR from a program's execution. The folding algorithm also finds strides in memory accesses to predict the possibility and profitability of vectorization. It scales to real-life applications thanks to a safe, selective over-approximation mechanism for partially irregular data dependencies and iteration spaces.
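The core idea behind dynamic data-dependence profiling can be shown in a few lines. Below is a deliberately simplified toy model (our illustration, not MICKEY, which instruments binaries and compresses traces): record the last statement that wrote each address and report read-after-write dependences observed at runtime.

```python
# Toy dynamic dependence profiler: track last writers per address and
# collect read-after-write (RAW) dependence edges as the trace replays.
last_writer = {}
deps = set()

def write(addr, stmt):
    last_writer[addr] = stmt

def read(addr, stmt):
    if addr in last_writer:
        deps.add((last_writer[addr], stmt))   # RAW dependence edge

# Trace of: for i in 1..3: a[i] = a[i-1] + 1  (reads a[i-1], writes a[i])
for i in range(1, 4):
    read(i - 1, "load a[i-1]")
    write(i, "store a[i]")
print(deps)  # {('store a[i]', 'load a[i-1]')}: a loop-carried dependence
```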
Bekrar, Sofia. "Recherche de vulnérabilités logicielles par combinaison d'analyses de code binaire et de frelatage (Fuzzing)." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM058.
Fuzz testing (a.k.a. fuzzing) is one of the most effective approaches for discovering security vulnerabilities in large, closed-source software. Despite their wide use in the software industry, traditional fuzzing techniques suffer from poor coverage, which results in a large number of false negatives. The other common drawback is the lack of knowledge about the application internals, which limits the ability to generate high-quality inputs; such techniques therefore have limited fault-detection capabilities. We present an automated smart fuzzing approach which combines advanced binary code analysis techniques. Our approach has five components: a test suite reduction technique, to optimize the ratio between the amount of covered code and the execution time; a fast and optimized code coverage measurement technique, to evaluate the fuzzing effectiveness; a static analysis technique, to locate potentially sensitive sequences of code with respect to vulnerability patterns; an origin-aware dynamic taint analysis technique, to precisely identify the input fields that may trigger potential vulnerabilities; and an evolutionary test generation technique, to produce relevant inputs. We implemented our approach as an integrated tool chain and evaluated it on numerous industrial case studies. The obtained results demonstrate its effectiveness in automatically discovering zero-day vulnerabilities in major closed-source applications. Our approach is relevant to both defensive and offensive security purposes.
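The coverage-feedback loop described in this abstract is the heart of any smart fuzzer. The sketch below is a deliberately toy model (an in-process Python target, not the thesis's tool chain, which instruments real binaries): mutants that reach new coverage are kept in the corpus, and crashing inputs are reported.

```python
# Minimal coverage-guided fuzzing loop; probabilistic, but it usually finds
# the planted crash within the iteration budget.
import random

def target(data):                     # toy program under test
    cov = set()
    if data[:1] == b"F":
        cov.add("branch_F")
        if data[1:2] == b"U":
            cov.add("branch_FU")
            raise RuntimeError("crash")   # the 'bug'
    return cov

corpus, seen = [b"seed"], set()
for _ in range(20000):
    parent = random.choice(corpus)
    i = random.randrange(len(parent))
    child = parent[:i] + bytes([random.randrange(256)]) + parent[i + 1:]
    try:
        cov = target(child)
    except RuntimeError:
        print("crash input:", child)
        break
    if not cov <= seen:               # keep inputs that reach new coverage
        seen |= cov
        corpus.append(child)
```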
Reynaud, Daniel. "Analyse de codes auto-modifiants pour la sécurité logicielle." Thesis, Vandoeuvre-les-Nancy, INPL, 2010. http://www.theses.fr/2010INPL049N/document.
Self-modifying programs run in a very specific way: they are capable of rewriting their own code at runtime. Remarkably absent from theoretical computation models, they are present in every modern computer and operating system. Indeed, they are used by bootloaders, for just-in-time compilation and for dynamic optimizations. They are also massively used by malware authors in order to bypass antivirus signatures and to delay analysis. Finally, they are unintentionally present in every program, since code injection vulnerabilities (such as buffer overflows) can be modeled as the ability for a program to accidentally execute data. In this thesis, we propose a formal framework to characterize advanced self-modifying behaviors and code armoring techniques. A prototype, TraceSurfer, allows us to detect these behaviors by using fine-grained execution traces and to visualize them as self-reference graphs. Finally, we assess the performance and efficiency of the tool by running it on a large corpus of malware samples.
Garando, Lahcen. "Architecture intégrée de rétines B-codées par processeurs cellulaires." Paris 11, 1986. http://www.theses.fr/1986PA112136.
The integration, on a single chip, of optoelectronic sensors, binary picture coders and massively parallel processors results in a compact real-time vision system, a kind of « intelligent retina ». First, we define the processor array architecture: each PE combines a binary-picture optoelectronic sensor, a bit-serial logic unit and neighborhood communication means. This array can acquire a binary picture and apply to it any iteration of operators, provided they require only Boolean processing of neighbor data: such operators are called neighborhood combinatorial logic. We also detail electronic circuits performing the coding of a grey-level picture into the black-pixel density of a binary picture; this kind of halftoning is called B-coding. Finally, we describe the layout of prototype chips in an NMOS technology; these lead to the realization of an intelligent retina whose array structure is based on a PE of fewer than 25 transistors.
Hebbal, Yacine. "Semantic monitoring mechanisms dedicated to security monitoring in IaaS cloud." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0029/document.
Virtual Machine Introspection (VMI) consists in monitoring the security of VMs from the hypervisor layer, which, thanks to its location, offers strong visibility into their activities in addition to strong isolation from them. However, the hypervisor's view of VMs is just raw bits and bytes, plus hardware state. The semantic difference between this raw view and the one needed for VM security monitoring presents a significant challenge for VMI, called "the semantic gap". In order to obtain semantic information about VM states and activities for monitoring their security from the hypervisor layer, we present in this thesis a set of techniques based on the analysis and reuse of VM kernel binary code. These techniques make it possible to identify the addresses and names of most VM kernel functions and then instrument (call, intercept and analyze) them to automatically bridge the semantic gap, regardless of the challenges presented by compiler optimizations and kernel base address randomization.
Trouillot, Xavier. "Étude de paramètres géométriques à partir du code de Freeman." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2008. http://tel.archives-ouvertes.fr/tel-00496290.
Françon, Michel-Guy. "Analyse d'un schéma de transmission pour communications mobiles par satellites." Toulouse, ENSAE, 1997. http://www.theses.fr/1997ESAE0021.
Ye, Xin. "Model checking self modifying code." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7010.
Self-modifying code is code that modifies its own instructions during execution. It is nowadays widely used, especially in malware, to make the code hard to analyse and hard for anti-viruses to detect. The analysis of such self-modifying programs is therefore a big challenge. Pushdown Systems (PDSs) are a natural model, extensively used for the analysis of sequential programs because they accurately model procedure calls and mimic the program's stack. In this thesis, we propose to extend the Pushdown System model with self-modifying rules. We call the new model Self-Modifying Pushdown System (SM-PDS). An SM-PDS is a PDS that can modify its own set of transitions during execution. First, we show how SM-PDSs can be used to naturally represent self-modifying programs, and we provide efficient algorithms to compute the backward and forward reachable configurations of SM-PDSs. Then, we consider the LTL model-checking problem for self-modifying code, which we reduce to the emptiness problem of Self-Modifying Büchi Pushdown Systems (SM-BPDSs). We also consider the CTL model-checking problem for self-modifying code, which we reduce to the emptiness problem of Self-Modifying Alternating Büchi Pushdown Systems (SM-ABPDSs). We implement our techniques in a tool called SMODIC and obtain encouraging results. In particular, our tool was able to detect several self-modifying malwares; it could even detect several malwares that well-known anti-viruses such as McAfee, Norman, BitDefender, Kinsoft, Avira, eScan, Kaspersky, Qihoo-360, Avast and Symantec failed to detect.
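To make the SM-PDS idea concrete, here is a toy interpreter (our illustration with invented rule names; it is not SMODIC and says nothing about the reachability algorithms): each rule rewrites the control state and the stack top, and a self-modifying rule additionally edits the rule set itself.

```python
# Rule form: (state, top) -> (state', pushed, add_rules, del_rules).
# 'pushed' replaces the top symbol; add/del edit the transition set itself.
rules = {
    ("p", "A"): ("p", ("B", "A"), frozenset(), frozenset()),
    # A self-modifying rule: once fired, it deletes itself and enables a new rule.
    ("p", "B"): ("q", (),
                 frozenset({(("q", "A"), ("q", (), frozenset(), frozenset()))}),
                 frozenset({("p", "B")})),
}

state, stack = "p", ["A"]              # stack top is stack[0]
while stack and (state, stack[0]) in rules:
    state, pushed, add, delete = rules[(state, stack[0])]
    stack = list(pushed) + stack[1:]   # rewrite the stack top
    for key in delete:
        rules.pop(key, None)           # the transition set shrinks...
    for key, val in add:
        rules[key] = val               # ...and grows: the PDS modified itself
    print(state, stack)
```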
Raymond, Pascal. "Compilation efficace d'un langage déclaratif synchrone : le générateur de code Lustre-V3." Phd thesis, Grenoble INPG, 1991. http://tel.archives-ouvertes.fr/tel-00198546.
Bendifallah, Zakaria. "Généralisation de l'analyse de performance décrémentale vers l'analyse différentielle." Thesis, Versailles-St Quentin en Yvelines, 2015. http://www.theses.fr/2015VERS038V/document.
A crucial step in the process of application performance analysis is the accurate detection of program bottlenecks. A bottleneck is any event which contributes to extending the execution time; determining its cause is important for application developers, as it enables them to detect code design and generation flaws. Bottleneck detection is becoming a difficult art. Techniques such as event counts, which easily found bottlenecks in the past, have become less effective because of the increasing complexity of modern micro-processors and because of the introduction of parallelism at several levels. Consequently, there is a real need for new analysis approaches to face these challenges. Our work focuses on performance analysis and bottleneck detection for compute-intensive loops in scientific applications. We work on Decan, a performance analysis and bottleneck detection tool which offers an interesting and promising approach called decremental analysis. The tool, which operates at binary level, is based on the idea of performing controlled modifications on the instructions of a loop and comparing the new version (called a variant) to the original one. The goal is to assess the cost of specific events and thus the existence or not of bottlenecks. Our first contribution consists of extending Decan with new variants that we designed, tested and validated. Based on these variants, we developed analysis methods which we used to characterize hot loops and find their bottlenecks. We later integrated the tool into a performance analysis methodology (Pamda) which coordinates several analysis tools in order to achieve a more efficient application performance analysis. Second, we introduce several improvements to the Decan tool. Techniques developed to preserve the control flow of the modified programs allowed the tool to be used on real applications instead of extracted kernels. Support for parallel programs (thread- and process-based) was also added. Finally, since our tool primarily relies on execution time as the main metric of its analysis process, we study the opportunity of also using other hardware-generated events, through a study of their stability, precision and overhead.
Bouvier, des Noes Mathieu. "Détection itérative des séquences pseudo-aléatoires." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT068/document.
Pseudo-random binary sequences are very common in wireless transmission systems and ciphering mechanisms. More specifically, they are used in direct-sequence spread spectrum transmission systems like UMTS or GPS, or to construct preamble sequences for synchronization and channel estimation purposes, as in LTE. It is always required to synchronize the receiver with the transmitted sequence. The usual way consists in correlating the received signal with a replica of the sequence; if the correlation exceeds a predefined threshold, the synchronization is declared valid. This thesis addresses a different approach: the binary sequence is detected with a forward error correction decoding algorithm. This makes it possible, for instance, to detect very long sequences. In this thesis, we show that decoding a pseudo-random sequence is a 'detect and decode' problem: the decoder detects the presence of the transmitted sequence and simultaneously estimates its initial state. In conventional detection theory, this corresponds to a GLRT detector that uses a decoder to estimate the unknown parameter, which is the transmitted sequence. For pseudo-random sequences, the decoder implements an iterative message-passing algorithm. It uses a parity check matrix to define the decoding graph on which the algorithm operates. Each parity check equation has a weight t, corresponding to the number of variables in the equation. Parity check equations are thus an essential component of the decoder, and the decoding procedure is known to be sensitive to the weight t of the parity check equations. For m-sequences, the number of parity check equations is already known: it is given by the number of codewords of weight t of the corresponding dual Hamming code. For Gold sequences, the number of parity check equations of weight t = 3 and 4 had already been evaluated by Kasami. In this thesis we provide an analytical expression for the number of parity check equations of weight t = 5 when the degree r of the generator polynomial is odd. Knowing this number is important because there is no parity check equation of weight t < 5 when r is odd. This enumeration is also used to estimate the least degree of parity check equations of weight t. We then address the problem of selecting the parity check equations used by the decoder. We observed that the probability of false alarm is very sensitive to this selection. This is explained by the presence or absence of absorbing sets, which block the convergence of the decoder when it is fed only with noise. These sets are known to be responsible for the error floor of LDPC codes. We give a method to identify these sets according to the parity check equations used by the decoder; the probability of false alarm can increase dramatically if these absorbing sets are destroyed. We then propose an algorithm for selecting these parity check equations, relying on the minimization of the number of cycles of length 6 and 8. Simulations show that the algorithm significantly improves the probability of false alarm and the average acquisition time. Finally, we propose two algorithms for the detection of the scrambling codes used in the uplink of UMTS-FDD and CDMA2000 systems. They highlight a new vulnerability of DSSS transmission systems: it is now conceivable to detect these transmissions if the sequence's generator is known.
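As a small illustration of the parity check equations discussed above (our example, not the thesis's enumeration): an m-sequence inherits a low-weight parity check at every shift from its generating recurrence. Below, the degree-3 primitive recurrence s[n+3] = s[n+2] + s[n] (mod 2) yields a period-7 m-sequence for which the weight-3 check holds everywhere.

```python
def lfsr_msequence(state, length):
    """Fibonacci LFSR realizing s[n+3] = s[n+2] ^ s[n]; 'state' is [s2, s1, s0]."""
    out = []
    for _ in range(length):
        out.append(state[-1])                       # emit the oldest bit
        state = [state[0] ^ state[2]] + state[:-1]  # shift in the feedback bit
    return out

s = lfsr_msequence(state=[1, 0, 0], length=21)  # period 7: 0,0,1,1,1,0,1,...
# Every shift satisfies the weight-3 parity check s[n+3] + s[n+2] + s[n] = 0.
assert all(s[n + 3] ^ s[n + 2] ^ s[n] == 0 for n in range(len(s) - 3))
print(s[:7], "-> all weight-3 checks hold")
```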
Hallou, Nabil. "Runtime optimization of binary through vectorization transformations." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S120/document.
In many cases, applications are not optimized for the hardware on which they run. This is due to the backward compatibility of ISAs, which guarantees functionality but not the best exploitation of the hardware. Many reasons contribute to this unsatisfying situation, such as legacy code, commercial code distributed in binary form, or deployment on compute farms. Our work focuses on maximizing CPU efficiency for the SIMD extensions. The first contribution is a lightweight binary translation mechanism that does not include a vectorizer, but instead leverages what a static vectorizer previously did. We show that many loops compiled for x86 SSE can be dynamically converted to the more recent and more powerful AVX, and how correctness is maintained with regard to challenges such as data dependencies and reductions. We obtain speedups in line with those of a native compiler targeting AVX. The second contribution is runtime auto-vectorization of scalar loops. For this purpose, we use open-source frameworks that we have tuned and integrated to (1) dynamically lift the x86 binary into the Intermediate Representation form of the LLVM compiler, (2) abstract hot loops in the polyhedral model, (3) use the power of this mathematical framework to vectorize them, and (4) finally compile them back into executable form using the LLVM Just-In-Time compiler. In most cases, the obtained speedups are close to the number of elements that can be simultaneously processed by the SIMD unit. The re-vectorizer and auto-vectorizer are implemented inside a dynamic optimization platform; it is completely transparent to the user, does not require any rewriting of the binaries, and operates during program execution.
Okouyi, Antsina W'Ampoumou Rodrigue. "Faisabilité d'un système basé sur le DS-CDMA pour les futurs réseaux locaux sans fil à 60 GHz." Lille 1, 2006. http://www.theses.fr/2005LIL12024.
Ben Chikha, Haithem. "Etude et Amélioration de Turbo-Codage Distribué pour les Réseaux Coopératifs." Thesis, Valenciennes, 2012. http://www.theses.fr/2012VALE0011/document.
Diversity provides an efficient method for combating multipath fading in mobile radio systems. One of the most common forms of spatial diversity is multiple-input multiple-output (MIMO), where full diversity is obtained. However, embedding multiple antennas at the transmitter or the receiver can sometimes be expensive. As an alternative to collocated antennas, cooperative diversity in wireless multi-hop networks achieves the spatial diversity gains of traditional MIMO techniques without each node necessarily having multiple antennas. In addition, cooperative diversity has been shown to provide the network with improved throughput, reduced energy requirements and improved access coverage. In light of this, the objective of this thesis is to devise coding schemes suitable for relay channels that achieve the best compromise between diversity and coding gains. Firstly, we investigate a distributed turbo coding scheme dedicated to L-relay channels operating in the soft-decode-and-forward mode. Then, we present a proposed distributed turbo-coded cooperative (DTCC) scheme called parallel concatenated convolutional-based distributed coded cooperation. Finally, we investigate antenna/soft-relaying selection for DTCC networks in order to improve their end-to-end performance. Assuming BPSK transmission over fully interleaved channels with ideal channel state information, we derive explicit upper bounds on the error probability in Rayleigh fading channels with independent fading. Both theoretical limits and simulation results are presented to demonstrate the performance.
Kaddoum, Georges. "Contributions à l’amélioration des systèmes de communication multi-utilisateur par chaos : synchronisation et analyse des performances." Toulouse, INSA, 2008. http://eprint.insa-toulouse.fr/archive/00000245/.
The field of radiocommunications is currently in full development. In recent years, many researchers have explored the possibility of using chaotic signals to transmit data, especially in the multi-user case. Among the various multiple access techniques, CDMA (Code Division Multiple Access) allows different users to transmit simultaneously in the same frequency band. The sequences currently used for classical spread spectrum are pseudo-random binary sequences with low cross-correlation generated from a linear shift register (Gold sequences) or binary orthogonal sequences (Walsh codes). This thesis focuses on the study of a multi-user spread spectrum communication system using chaotic generators as spreading sequences. The chaotic signals can be generated by iterative discrete systems modelled by discrete transformations. In a first step, we studied various chaotic signals from different dynamical systems, defined a priori by traditional continuous or piecewise-linear functions. Relying on the correlation properties and the energy distribution of chaotic signals, a comparative study between different chaotic sequences was made in the framework of chaos-based DS-CDMA systems. The purpose of this comparison is to provide the elements necessary to choose the best sequence for a spread spectrum system over an Additive White Gaussian Noise (AWGN) channel. A simple method to rapidly and accurately predict the bit error rate for chaos-based DS-CDMA was proposed in the single- and multi-user cases. Further study of the energy distribution resulted in an analytical expression for the bit error rate; these performances have also been studied and extended to the multi-user case. In a second part, we explored the synchronization process of chaotic dynamical systems. After reviewing the existing approaches in the literature, we focused our attention on different methods of numerical integration in order to determine an appropriate method to achieve synchronization by coupling with a low computing cost. Finally, we studied the synchronization process for a chaos-based DS-CDMA system. We proposed receivers incorporating synchronization units (acquisition and tracking) similar to those used in conventional spread spectrum systems. These synchronization units use a classical binary pseudo-random sequence simultaneously with a chaotic sequence in order to achieve and maintain synchronization. These techniques were compared to a similar method recently proposed in the literature. We demonstrate the performance improvement brought by our proposed system, including the fact that the system also operates in the asynchronous case.
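As a toy illustration of the spreading sequences this abstract discusses (our example with an arbitrary seed, not the thesis system): a chaotic spreading sequence can be drawn from the logistic map, binarized to ±1 chips, and checked for the low off-peak autocorrelation that DS-CDMA spreading requires.

```python
# Chaotic spreading sequence from the logistic map, thresholded to +/-1 chips.
import numpy as np

def chaotic_spreading(x0, length, mu=3.99):
    x, seq = x0, []
    for _ in range(length):
        x = mu * x * (1 - x)                  # logistic map iteration
        seq.append(1.0 if x > 0.5 else -1.0)  # binarize to a +/-1 chip
    return np.array(seq)

c = chaotic_spreading(x0=0.3141, length=1023)
corr = [np.dot(c, np.roll(c, k)) / len(c) for k in range(4)]
print(corr)  # lag 0 -> 1.0; nonzero lags stay near 0
```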