
Doctoral dissertations on the topic "Error correcting index codes"



Listed below are the top 50 doctoral dissertations on the topic "Error correcting index codes".


Where the metadata allow, you can also download the full text of a publication as a .pdf file and read its abstract online.

Browse doctoral dissertations from many disciplines and compile appropriate bibliographies.

1

Kosek, Peter M. "Error Correcting Codes". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417508067.

Full text available
2

Skoglund, Isabell. "Reed-Solomon Codes - Error Correcting Codes". Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97343.

Full text available
Abstract:
In the following pages an introduction to the error correcting codes known as Reed-Solomon codes is presented, together with different approaches to decoding. This is supplemented by a Mathematica program and a description of this program, which gives an understanding of how the choice of decoding algorithm affects the time it takes to find errors in stored or transmitted information.
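The abstract above contrasts decoding approaches for Reed-Solomon codes. As a minimal illustration (in Python rather than the thesis's Mathematica, with toy parameters chosen here for brevity), the sketch below encodes a message by polynomial evaluation over GF(17) and decodes by brute-force nearest-codeword search; even this toy version makes clear why the choice of decoding algorithm dominates running time.

```python
# Minimal sketch (not the thesis's Mathematica program): a tiny Reed-Solomon
# code over the prime field GF(17), encoded by polynomial evaluation and
# decoded by brute-force nearest-codeword search. Parameters are illustrative.
from itertools import product

P = 17                    # field size (prime, so arithmetic mod P is a field)
K, N = 3, 7               # message length and code length, N <= P
POINTS = list(range(1, N + 1))   # distinct evaluation points in GF(17)

def encode(msg):
    """Evaluate the polynomial with coefficient list msg at the evaluation points."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in POINTS]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Brute-force minimum-distance decoding (feasible only for tiny K)."""
    best = min(product(range(P), repeat=K),
               key=lambda m: hamming(encode(list(m)), received))
    return list(best)

msg = [5, 11, 2]
cw = encode(msg)
corrupted = cw[:]
corrupted[0] = (corrupted[0] + 9) % P      # two symbol errors
corrupted[4] = (corrupted[4] + 3) % P
decoded = decode(corrupted)
assert decoded == msg                      # d = N-K+1 = 5, so 2 errors are correctable
print("decoded:", decoded)
```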
3

Wang, Xuesong. "Cartesian authentication codes from error correcting codes /". View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20WANGX.

Full text available
4

Hessler, Martin. "Optimization, Matroids and Error-Correcting Codes". Doctoral thesis, Linköpings universitet, Tillämpad matematik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51722.

Full text available
Abstract:
The first subject we investigate in this thesis deals with optimization problems on graphs. The edges are given costs defined by the values of independent exponential random variables. We show how to calculate some or all moments of the distributions of the costs of some optimization problems on graphs. The second subject that we investigate is 1-error correcting perfect binary codes, perfect codes for short. In most work about perfect codes, two codes are considered equivalent if there is an isometric mapping between them. We call this isometric equivalence. Another type of equivalence is given if two codes can be mapped on each other using a non-singular linear map. We call this linear equivalence. A third type of equivalence is given if two codes can be mapped on each other using a composition of an isometric map and a non-singular linear map. We call this extended equivalence. In Paper 1 we give a new better bound on how much the cost of the matching problem with exponential edge costs varies from its mean. In Paper 2 we calculate the expected cost of an LP-relaxed version of the matching problem where some edges are given zero cost. A special case is when the vertices with probability 1 – p have a zero cost loop, for this problem we prove that the expected cost is given by a formula. In Paper 3 we define the polymatroid assignment problem and give a formula for calculating all moments of its cost. In Paper 4 we present a computer enumeration of the 197 isometric equivalence classes of the perfect codes of length 31 of rank 27 and with a kernel of dimension 24. In Paper 5 we investigate when it is possible to map two perfect codes on each other using a non-singular linear map. In Paper 6 we give an invariant for the equivalence classes of all perfect codes of all lengths when linear equivalence is considered. In Paper 7 we give an invariant for the equivalence classes of all perfect codes of all lengths when extended equivalence is considered. In Paper 8 we define a class of perfect codes that we call FRH-codes. It is shown that each FRH-code is linearly equivalent to a so called Phelps code and that this class contains Phelps codes as a proper subset.
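For readers unfamiliar with the objects in the abstract above: the smallest nontrivial 1-error-correcting perfect binary code is the [7,4] Hamming code. The sketch below (an illustration using the standard construction, not code from the thesis) checks the sphere-packing equality that makes the code perfect and corrects a single error by syndrome decoding.

```python
# Illustrative sketch: the [7,4] Hamming code, the smallest nontrivial
# 1-error-correcting perfect binary code.  Column i of H is the binary
# representation of i+1, so the syndrome of a single error names the
# flipped position directly.
import numpy as np

H = np.array([[(i + 1) >> b & 1 for i in range(7)] for b in range(3)])  # 3 x 7

def syndrome(word):
    return H.dot(word) % 2

def correct(word):
    s = syndrome(word)
    pos = int(s[0] + 2 * s[1] + 4 * s[2])        # syndrome read as a binary number
    if pos:                                       # nonzero syndrome -> flip that bit
        word = word.copy()
        word[pos - 1] ^= 1
    return word

# Perfect-code check: 2^4 codewords times (1 + 7) words per Hamming ball = 2^7.
assert 2**4 * (1 + 7) == 2**7

codeword = np.array([1, 0, 1, 1, 0, 1, 0])
assert not syndrome(codeword).any()               # it really is a codeword
noisy = codeword.copy(); noisy[5] ^= 1            # flip one bit
assert (correct(noisy) == codeword).all()
print("corrected:", correct(noisy))
```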
5

Fyn-Sydney, Betty Iboroma. "Phan geometries and error correcting codes". Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4433/.

Full text available
Abstract:
In this thesis, we define codes based on the Phan geometry of type A_n. We show that the action of the group SU_{n+1}(q) is not irreducible on the code. In the rank two case, we prove that the code is spanned by those apartments which consist only of chambers belonging to the Phan geometry, and we obtain submodules for the code.
6

Guruswami, Venkatesan 1976. "List decoding of error-correcting codes". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8700.

Full text available
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 303-315).
Error-correcting codes are combinatorial objects designed to cope with the problem of reliable transmission of information on a noisy channel. A fundamental algorithmic challenge in coding theory and practice is to efficiently decode the original transmitted message even when a few symbols of the received word are in error. The naive search algorithm runs in exponential time, and several classical polynomial time decoding algorithms are known for specific code families. Traditionally, however, these algorithms have been constrained to output a unique codeword. Thus they faced a "combinatorial barrier" and could only correct up to d/2 errors, where d is the minimum distance of the code. An alternate notion of decoding called list decoding, proposed independently by Elias and Wozencraft in the late 50s, allows the decoder to output a list of all codewords that differ from the received word in a certain number of positions. Even when constrained to output a relatively small number of answers, list decoding permits recovery from errors well beyond the d/2 barrier, and opens up the possibility of meaningful error-correction from large amounts of noise. However, for nearly four decades after its conception, this potential of list decoding was largely untapped due to the lack of efficient algorithms to list decode beyond d/2 errors for useful families of codes. This thesis presents a detailed investigation of list decoding, and proves its potential, feasibility, and importance as a combinatorial and algorithmic concept.
(cont.) We prove several combinatorial results that sharpen our understanding of the potential and limits of list decoding, and its relation to more classical parameters like the rate and minimum distance. The crux of the thesis is its algorithmic results, which were lacking in the early works on list decoding. Our algorithmic results include: * Efficient list decoding algorithms for classically studied codes such as Reed-Solomon codes and algebraic-geometric codes. In particular, building upon an earlier algorithm due to Sudan, we present the first polynomial time algorithm to decode Reed-Solomon codes beyond d/2 errors for every value of the rate. * A new soft list decoding algorithm for Reed-Solomon and algebraic-geometric codes, and novel decoding algorithms for concatenated codes based on it. * New code constructions using concatenation and/or expander graphs that have good (and sometimes near-optimal) rate and are efficiently list decodable from extremely large amounts of noise. * Expander-based constructions of linear time encodable and decodable codes that can correct up to the maximum possible fraction of errors, using unique (not list) decoding.
by Venkatesan Guruswami.
Ph.D.
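To make the notion concrete, the toy sketch below list-decodes a tiny code by simply enumerating all codewords within a chosen radius of the received word. Brute force is only feasible for tiny codes; the thesis's contribution is polynomial-time list decoding of useful families such as Reed-Solomon and algebraic-geometric codes. The code and radius here are illustrative.

```python
# Toy illustration of list decoding as a notion: enumerate all codewords of a
# small code within a given radius of the received word.
from itertools import product

# All 16 codewords of the [7,4] Hamming code (words with zero syndrome).
H = [[(i + 1) >> b & 1 for i in range(7)] for b in range(3)]
def is_codeword(w):
    return all(sum(h * x for h, x in zip(row, w)) % 2 == 0 for row in H)
CODE = [w for w in product((0, 1), repeat=7) if is_codeword(w)]
assert len(CODE) == 16 and min(sum(c) for c in CODE if any(c)) == 3   # d = 3

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def list_decode(received, radius):
    return [c for c in CODE if hamming(c, received) <= radius]

sent = CODE[5]
received = list(sent); received[0] ^= 1; received[3] ^= 1   # 2 errors > d/2
unique = list_decode(received, 1)
print("radius 1 (unique decoding) contains sent?", sent in unique)   # False
print("radius 2 (list decoding)   contains sent?", sent in list_decode(received, 2))  # True
```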
7

Guo, Alan Xinyu. "New error correcting codes from lifting". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99776.

Full text available
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 117-121).
Error correcting codes have been widely used for protecting information from noise. The theory of error correcting codes studies the range of parameters achievable by such codes, as well as the efficiency with which one can encode and decode them. In recent years, attention has focused on the study of sublinear-time algorithms for various classical problems, such as decoding and membership verification. This attention was driven in part by theoretical developments in probabilistically checkable proofs (PCPs) and hardness of approximation. Locally testable codes (codes for which membership can be verified using a sublinear number of queries) form the combinatorial core of PCP constructions and thus play a central role in computational complexity theory. Historically, low-degree polynomials (the Reed-Muller code) have been the locally testable code of choice. Recently, "affine-invariant" codes have come under focus as providing potential for new and improved codes. In this thesis, we exploit a natural algebraic operation known as "lifting" to construct new affine-invariant codes from shorter base codes. These lifted codes generically possess desirable combinatorial and algorithmic properties. The lifting operation preserves the distance of the base code. Moreover, lifted codes are naturally locally decodable and testable. We tap deeper into the potential of lifted codes by constructing the "lifted Reed-Solomon code", a supercode of the Reed-Muller code with the same error-correcting capabilities yet vastly greater rate. The lifted Reed-Solomon code is the first high-rate code known to be locally decodable up to half the minimum distance, locally list-decodable up to the Johnson bound, and robustly testable, with robustness that depends only on the distance of the code. In particular, it is the first high-rate code known to be both locally decodable and locally testable. We also apply the lifted Reed-Solomon code to obtain new bounds on the size of Nikodym sets, and also to show that the Reed-Muller code is robustly testable for all field sizes and degrees up to the field size, with robustness that depends only on the distance of the code.
by Alan Xinyu Guo.
Ph. D.
8

Vicente, Renato. "Statistical physics of error-correcting codes". Thesis, Aston University, 2000. http://publications.aston.ac.uk/10608/.

Full text available
Abstract:
In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with many-spins interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. We find, as the noise level increases, a phase transition between successful decoding and failure phases. This phase transition coincides with upper bounds derived in the information theory literature in most of the cases. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free-energy. We show that the practical decoding thresholds correspond to noise levels where suboptimal minima of the free-energy emerge. Simulations of practical decoding scenarios using probability propagation agree with theoretical predictions of the replica symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise parameters of the high performance codes proposed by Kanter and Saad.
9

Erxleben, Wayne Henry 1963. "Error-correcting two-dimensional modulation codes". Thesis, The University of Arizona, 1993. http://hdl.handle.net/10150/291577.

Full text available
Abstract:
Modulation coding, to limit the number of consecutive zeroes in a data stream, is essential in digital magnetic recording/playback systems. Additionally, such systems require error correction coding to ensure that the decoded output matches the recorder input, even if noise is present. Typically these two coding steps have been performed independently, although various methods of combining them into one step have recently appeared. Another recent development is two-dimensional modulation codes, which meet runlength constraints using several parallel recording tracks, significantly increasing channel capacity. This thesis combines these two ideas. Previous techniques (both block and trellis structures) for combining error correction and modulation coding are surveyed, with discussion of their applicability in the two-dimensional case. One approach, based on trellis-coded modulation, is explored in detail, and a class of codes developed which exploit the increased capacity to achieve good error-correcting ability at the same rate as common non-error-correcting one-dimensional codes.
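The capacity gains of two-dimensional modulation codes are measured against the capacity of the underlying one-dimensional runlength constraint, which is conventionally computed as log2 of the spectral radius of the constraint graph's adjacency matrix. The sketch below does this for an assumed "at most k consecutive zeros" constraint; it is an illustration of the standard calculation, not material from the thesis.

```python
# Capacity of a one-dimensional runlength constraint, computed as log2 of the
# largest eigenvalue of the adjacency matrix of the constraint graph.
import numpy as np

def rll_capacity(k):
    # State s = current run of trailing zeros (0..k).  A '1' resets to state 0;
    # another '0' is allowed only while s < k.
    n = k + 1
    A = np.zeros((n, n))
    for s in range(n):
        A[s, 0] = 1                  # emit '1'
        if s < k:
            A[s, s + 1] = 1          # emit another '0'
    return np.log2(max(abs(np.linalg.eigvals(A))))

for k in (1, 2, 3, 7):
    print(f"k = {k}: capacity ~ {rll_capacity(k):.4f} bits/symbol")
# k = 1 gives log2 of the golden ratio, about 0.6942 bits/symbol.
```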
10

Joseph, Binoy. "Clustering For Designing Error Correcting Codes". Thesis, Indian Institute of Science, 1994. https://etd.iisc.ac.in/handle/2005/3915.

Full text available
Abstract:
In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes. Designing a block code over any finite dimensional space may be thought of as forming the corresponding number of clusters over the particular dimensional space. In the literature a number of algorithms are available for clustering. We have examined the performance of a number of such algorithms, such as Linde-Buzo-Gray, Simulated Annealing, Simulated Annealing with Linde-Buzo-Gray, Deterministic Annealing, etc., for the design of codes. But all these algorithms make use of the Euclidean squared error distance measure for clustering. This distance measure does not match the distance measure of interest in the error correcting scenario, namely, Hamming distance. Consequently we have developed an algorithm that can be used for clustering with Hamming distance as the distance measure. Also, it has been observed that stochastic algorithms, such as Simulated Annealing, fail to produce optimum codes due to very slow convergence near the end. As a remedy, we have proposed a modification, based on the code structure, for such algorithms for code design which makes it possible to converge to the optimum codes.
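A minimal sketch of the idea described above, assuming a Lloyd-style iteration in which each cluster centre is updated to the coordinate-wise majority of its members (the Hamming-distance analogue of the Euclidean centroid). It is only an illustration of clustering with Hamming distance, not the thesis's algorithm, and the parameters are toy values.

```python
# Lloyd-style clustering of binary words under Hamming distance; each centre is
# updated to the coordinate-wise majority of its cluster, which minimises the
# total Hamming distance to the members.
import random
from itertools import product

random.seed(1)
N, M = 6, 4                                   # word length, number of codewords
WORDS = [list(w) for w in product((0, 1), repeat=N)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def majority(cluster):
    return [1 if 2 * sum(w[i] for w in cluster) >= len(cluster) else 0 for i in range(N)]

centers = random.sample(WORDS, M)
for _ in range(20):                           # a few Lloyd iterations
    clusters = [[] for _ in range(M)]
    for w in WORDS:                           # assign each word to its nearest centre
        clusters[min(range(M), key=lambda j: hamming(w, centers[j]))].append(w)
    centers = [majority(c) if c else centers[j] for j, c in enumerate(clusters)]

dmin = min(hamming(a, b) for i, a in enumerate(centers) for b in centers[i + 1:])
print("designed codewords:", centers, "minimum distance:", dmin)
```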
11

Joseph, Binoy. "Clustering For Designing Error Correcting Codes". Thesis, Indian Institute of Science, 1994. http://hdl.handle.net/2005/66.

Full text available
Abstract:
In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes. Designing a block code over any finite dimensional space may be thought of as forming the corresponding number of clusters over the particular dimensional space. In the literature a number of algorithms are available for clustering. We have examined the performance of a number of such algorithms, such as Linde-Buzo-Gray, Simulated Annealing, Simulated Annealing with Linde-Buzo-Gray, Deterministic Annealing, etc., for the design of codes. But all these algorithms make use of the Euclidean squared error distance measure for clustering. This distance measure does not match the distance measure of interest in the error correcting scenario, namely, Hamming distance. Consequently we have developed an algorithm that can be used for clustering with Hamming distance as the distance measure. Also, it has been observed that stochastic algorithms, such as Simulated Annealing, fail to produce optimum codes due to very slow convergence near the end. As a remedy, we have proposed a modification, based on the code structure, for such algorithms for code design which makes it possible to converge to the optimum codes.
12

Hunt, Andrew W. "Hyper-codes, high-performance low-complexity error-correcting codes". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0007/MQ32401.pdf.

Full text available
13

Parvaresh, Farzad. "Algebraic list-decoding of error-correcting codes". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3244733.

Full text available
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed February 23, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 134-141).
14

Tjhai, Cen Jung. "A study of linear error correcting codes". Thesis, University of Plymouth, 2007. http://hdl.handle.net/10026.1/1624.

Full text available
Abstract:
Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrix and low-complexity decoder. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. The two methods generate, in addition to some new cyclic iteratively decodable codes, the well-known Euclidean and projective geometry codes. Their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes, for short block lengths, converge considerably well under iterative decoding. It is also shown that for some of these codes, maximum likelihood performance may be achieved by a modified belief propagation decoder which uses a different subset of the codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distance than the previously best known linear codes have been found. It is shown that by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance and the number of codewords of a given Hamming weight of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed and, in conjunction with this, it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived and a novel, CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental redundancy communications system and it is shown that sequences of good error correction codes, suitable for use in incremental redundancy communications systems, may be obtained using Constructions X and XX. Examples are given and their performances presented in comparison to conventional CRC schemes.
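The thesis computes minimum Hamming distances of long cyclic and double-circulant codes with specialised multi-threaded enumeration; the sketch below shows only the underlying brute-force idea on an assumed toy generator matrix (a [7,4] Hamming code, whose minimum distance is 3).

```python
# Brute-force minimum Hamming distance of a small binary linear code: the
# minimum weight over all nonzero codewords m*G (mod 2).
from itertools import product

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def min_distance(G):
    k = len(G)
    return min(sum(encode(msg)) for msg in product((0, 1), repeat=k) if any(msg))

print("minimum distance:", min_distance(G))   # prints 3
```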
15

Feldman, Jon 1975. "Decoding error-correcting codes via linear programming". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/42831.

Full text available
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 147-151).
Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel. In this thesis we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms to find good (albeit suboptimal) solutions to very difficult optimization problems. Our new "LP decoders" have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the most likely (ML) information sent over the channel. We refer to this property as the ML certificate property. We provide specific LP decoders for two major families of codes: turbo codes and low-density parity-check (LDPC) codes. These codes have received a great deal of attention recently due to their unprecedented error-correcting performance.
(cont.) Our decoder is particularly attractive for analysis of these codes because the standard message-passing algorithms used for decoding are often difficult to analyze. For turbo codes, we give a relaxation very close to min-cost flow, and show that the success of the decoder depends on the costs in a certain residual graph. For the case of rate-1/2 repeat-accumulate codes (a certain type of turbo code), we give an inverse polynomial upper bound on the probability of decoding failure. For LDPC codes (or any binary linear code), we give a relaxation based on the factor graph representation of the code. We introduce the concept of fractional distance, which is a function of the relaxation, and show that LP decoding always corrects a number of errors up to half the fractional distance. We show that the fractional distance is exponential in the girth of the factor graph. Furthermore, we give an efficient algorithm to compute this fractional distance. We provide experiments showing that the performance of our decoders are comparable to the standard message-passing decoders. We also give new provably convergent message-passing decoders based on linear programming duality that have the ML certificate property.
by Jon Feldman.
Ph.D.
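A sketch of the LP relaxation described above, in the style of Feldman's formulation, applied to an assumed small code (the [7,4] Hamming code) on a binary symmetric channel. The parity-check matrix, received word and use of scipy's linprog are illustrative choices, not the thesis's experiments; the integrality check at the end reflects the ML certificate property mentioned in the abstract.

```python
# LP decoding sketch: minimise the (linearised) Hamming distance to the received
# word over the relaxed polytope.  Each check j contributes the inequalities
#   sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1   for every odd S in N(j).
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
n = H.shape[1]
sent = np.array([1, 0, 0, 0, 1, 1, 0])           # a codeword: H @ sent mod 2 == 0
received = sent.copy(); received[2] ^= 1         # one bit flipped by the channel

# Minimising Hamming distance to `received` over [0,1]^n is equivalent to this
# linear objective (up to an additive constant).
c = np.where(received == 0, 1.0, -1.0)

A_ub, b_ub = [], []
for row in H:
    N = np.flatnonzero(row)
    for size in range(1, len(N) + 1, 2):          # odd-size subsets S of N(j)
        for S in combinations(N, size):
            a = np.zeros(n)
            a[list(N)] = -1.0
            a[list(S)] = 1.0
            A_ub.append(a)
            b_ub.append(len(S) - 1)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, 1)] * n)
x = res.x
if np.allclose(x, np.round(x)):
    print("integral optimum -> guaranteed ML codeword:", np.round(x).astype(int))
else:
    print("fractional optimum -> LP decoding failure:", np.round(x, 3))
```

Depending on the error pattern, the optimum of this relaxation can be a fractional vertex (a pseudocodeword), which is exactly the failure mode the fractional-distance analysis in the abstract addresses.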
16

Kalyanaraman, Shankar Umans Christopher. "On obtaining pseudorandomness from error-correcting codes /". Diss., Pasadena, Calif. : California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-06022006-170858.

Full text available
17

Lyle, Suzanne McLean. "Error Correcting Codes and the Human Genome". Digital Commons @ East Tennessee State University, 2010. https://dc.etsu.edu/etd/1689.

Full text available
Abstract:
In this work, we study error correcting codes and generalize the concepts with a view toward a novel application in the study of DNA sequences. The author investigates the possibility that an error correcting linear code could be included in the human genome through application and research. The author finds that while it is an accepted hypothesis that it is reasonable that some kind of error correcting code is used in DNA, no one has actually been able to identify one. The author uses the application to illustrate how the subject of coding theory can provide a teaching enrichment activity for undergraduate mathematics.
18

Harrington, James William Preskill John P. "Analysis of quantum error-correcting codes : symplectic lattice codes and toric codes /". Diss., Pasadena, Calif. : California Institute of Technology, 2004. http://resolver.caltech.edu/CaltechETD:etd-05122004-113132.

Full text available
19

Spielman, Daniel Alan. "Computationally efficient error-correcting codes and holographic proofs". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36998.

Full text available
20

Tingxian, Zhou, Hou Likun and Xu Bingxing. "The Error-Correcting Codes of The m-Sequence". International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613419.

Full text available
Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The paper analyses the properties of m-sequence error-correcting codes when the correlation detection decoding method is adopted, derives a formula for the number of errors tolerated when a binary sequence with good auto-correlation properties is used as an error-correcting code, provides a method to increase the efficiency of m-sequence error-correcting codes, and presents the coding and decoding procedures in the form of framed figures.
21

Hieta-aho, Erik. "On Finite Rings, Algebras, and Error-Correcting Codes". Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1525182104493243.

Full text available
22

Corazza, Federico Augusto. "Analysis of graph-based quantum error-correcting codes". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23801/.

Full text available
Abstract:
With the advent of quantum computers, there has been a growing interest in the practicality of this device. Due to the delicate conditions that surround physical qubits, one could wonder whether any useful computation could be implemented on such devices. As we describe in this work, it is possible to exploit concepts from classical information theory and employ quantum error-correcting techniques. Thanks to the Threshold Theorem, if the error probability of physical qubits is below a given threshold, then the logical error probability corresponding to the encoded data qubit can be arbitrarily low. To this end, we describe decoherence which is the phenomenon that quantum bits are subject to and is the main source of errors in quantum memories. From the cause of error of a single qubit, we then introduce the error models that can be used to analyze quantum error-correcting codes as a whole. The main type of code that we studied comes from the family of topological codes and is called surface code. Of these codes, we consider both the toric and planar structures. We then introduce a variation of the standard planar surface code which better captures the symmetries of the code architecture. Once the main properties of surface codes have been discussed, we give an overview of the working principles of the algorithm used to decode this type of topological code: the minimum weight perfect matching. Finally, we show the performance of the surface codes that we introduced, comparing them based on their architecture and properties. These simulations have been performed with different error channel models to give a more thorough description of their performance in several situations showing relevant results.
23

Rodrigues, Luís Filipe Abade. "Error correcting codes for visible light communication systems". Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/15887.

Full text available
Abstract:
Master's degree in Electronics and Telecommunications Engineering
Over the past few years, the number of wireless network users has been increasing. Until now, Radio Frequency (RF) has been the dominant technology. However, the electromagnetic spectrum in this region is becoming saturated, demanding alternative wireless technologies. Recently, with the growing market of LED lighting, Visible Light Communications has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it is easy to modulate and offers high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it makes it possible to implement highly efficient wireless communication systems. One of the most important aspects of a communication system is its reliability when working over noisy channels. In these scenarios, the received data can be affected by errors. To ensure proper system operation, a Channel Encoder is usually employed. Its function is to code the data to be transmitted in order to increase system performance. It commonly uses ECC, which appends redundant information to the original data. At the receiver side, the redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a Channel Encoder for VLC. Several techniques were considered, such as Reed-Solomon and Convolutional codes, Block and Convolutional Interleaving, CRC and Puncturing. A detailed analysis of each technique's characteristics was made in order to choose the most appropriate ones. Simulink models were created in order to simulate how different codes behave in different scenarios. Later, the models were implemented in an FPGA and simulations were performed. Hardware co-simulations were also implemented to obtain simulation results faster. At the end, different techniques were combined to create a complete Channel Encoder capable of detecting and correcting random and burst errors, thanks to the use of an RS(255,213) code with a Block Interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data by means of the CRC-32 algorithm.
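As a small illustration of one ingredient mentioned above, the sketch below implements a block interleaver (write row by row, read column by column) and shows how a burst of channel errors is spread into isolated symbol errors that a Reed-Solomon decoder can then handle. The dimensions are illustrative, not the RS(255,213) configuration of the thesis.

```python
# Block interleaver: write row by row, read column by column; the inverse
# permutation restores the original order and scatters any channel burst.
def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    out = [None] * (rows * cols)
    for idx, s in enumerate(symbols):
        c, r = divmod(idx, rows)
        out[r * cols + c] = s
    return out

data = list(range(24))                    # a 4 x 6 interleaver
tx = interleave(data, 4, 6)
tx[8:12] = ["X"] * 4                      # a burst of 4 corrupted symbols on the channel
rx = deinterleave(tx, 4, 6)
print([i for i, s in enumerate(rx) if s == "X"])   # burst spread across rows: [2, 8, 14, 20]
```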
24

Lin, Winnie. "Generalised linear anticodes and optimum error-correcting codes". Carleton University dissertation (Systems and Computer Engineering), Ottawa, 1995.

Find the full text at the source
25

Apollonio, Pietrofrancesco. "Erasure error correcting codes applied to DTN communications". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6852/.

Full text available
Abstract:
The space environment has always been one of the most challenging for communications, both at the physical and network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays and relatively frequent losses. Because of these problems, the normal TCP/IP suite protocols are hardly applicable. Moreover, in space scenarios reliability is fundamental. In fact, it is usually not tolerable to lose important information or to receive it with a very large delay because of a challenging transmission channel. In terrestrial protocols, such as TCP, reliability is obtained by means of an ARQ (Automatic Retransmission reQuest) method, which, however, does not perform well when there are long delays on the transmission channel. At the physical layer, Forward Error Correction Codes (FECs), based on the insertion of redundant information, are an alternative way to assure reliability. On binary channels, when single bits are flipped because of channel noise, redundancy bits can be exploited to recover the original information. In the presence of binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information. FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. A protocol created to add reliability to DTN networks is the Licklider Transmission Protocol (LTP), created to obtain better performance on long delay links. The aim of this thesis is the application of ECs to LTP.
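The simplest packet-level erasure code, shown below as an illustration of the idea described above, is a single XOR parity packet that recovers any one lost packet in a block; practical erasure codes for DTN links (Reed-Solomon or LDPC based) recover many more losses. This sketch is not the coding scheme of the thesis.

```python
# Single-parity packet erasure code: the parity packet is the XOR of all data
# packets, so any one missing packet is the XOR of everything that arrived.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(block):
    """block: equal-length packets with at most one entry set to None (lost)."""
    missing = [i for i, p in enumerate(block) if p is None]
    if not missing:
        return block[:-1]
    rebuilt = bytes(len(next(p for p in block if p is not None)))
    for p in block:
        if p is not None:
            rebuilt = xor_bytes(rebuilt, p)
    block[missing[0]] = rebuilt
    return block[:-1]

data = [b"DTN ", b"link", b"test", b"data"]      # equal-length packets
block = add_parity(data)
block[1] = None                                   # one packet lost on the channel
assert recover(block) == data
print("recovered all packets")
```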
26

Rudra, Atri. "List decoding and property testing of error correcting codes /". Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/6929.

Full text available
27

Palaniappan, Karthik. "Propagation of updates to replicas using error correcting codes". Morgantown, W. Va. : [West Virginia University Libraries], 2001. http://etd.wvu.edu/templates/showETD.cfm?recnum=1915.

Full text available
Abstract:
Thesis (M.S.)--West Virginia University, 2001.
Title from document title page. Document formatted into pages; contains vi, 68 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 67-68).
28

Webster, Paul Thomas. "Fault-Tolerant Logical Operators in Quantum Error-Correcting Codes". Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25112.

Full text available
Abstract:
Performing quantum computing that is robust against noise will require that all operations are fault-tolerant, meaning that they succeed with high probability even if a limited number of errors occur. We address the problem of fault-tolerantly implementing logical operators on quantum error-correcting codes – operators that apply logic gates to information protected by such codes. Specifically, we investigate what classes of logical operators are possible by particular approaches in important types of codes, especially topological stabiliser codes. We also analyse what fundamental limitations constrain the goal of realising fault-tolerant quantum computing by such implementations and how these limitations can be overcome. We begin by presenting necessary background theory on quantum computing, quantum error-correcting codes and fault tolerance. We then specifically consider the approach to fault tolerance of locality-preserving logical operators in topological stabiliser codes. We present a method for determining the set of such operators admitted by a wide range of such codes and apply this method to important examples such as surface codes and colour codes. Next, we consider the alternative approach of implementing logical operators in topological stabiliser codes with defects, especially by the technique of braiding. We show that such approaches are fundamentally limited, but that effective schemes can nonetheless be constructed, both within these limitations and by circumventing them. We then consider fault tolerance in a more general context. We prove a highly general no-go theorem in this context, applicable to a wide range of stabiliser codes. We also show that this proof illuminates how it can be circumvented and provides perspective on a range of fault-tolerant schemes. Finally, we conclude by reviewing how these results collectively address our research questions and suggesting future work.
29

Kubischta, Eric. "A Polynomial Time Procedure Converting Error Correcting Codes to Semantically Secure Wiretap Codes". Thesis, North Dakota State University, 2018. https://hdl.handle.net/10365/28772.

Full text available
Abstract:
We furnish a procedure based on universal hash families that can convert an error correcting code of rate R to a semantically secure wiretap code of rate R − δ, where δ is some parameter derived from the eavesdropper's channel. This conversion is shown to be polynomial time efficient with block length and is applicable to any discrete time channel. To prove the induced wiretap code is semantically secure, we have upgraded recent leakage bounds by maximizing over all message distributions. The semantic leakage is shown to be exponentially decreasing with block length. As an explicit application, we construct a concrete, polynomial time efficient, semantically secure wiretap code that can achieve the secrecy capacity of the AWGN wiretap channel. Moreover, this wiretap coding scheme has both probability of error and semantic leakage exponentially diminishing with block length.
30

Nenno, Robert B. "An introduction to the theory of nonlinear error-correcting codes /". Online version of thesis, 1987. http://hdl.handle.net/1850/10350.

Full text available
31

Akeredolu, P. S. "Some new procedures for generating and decoding error correcting codes". Thesis, De Montfort University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382273.

Full text available
32

Hurwitz, Jeremy Scott. "Error-correcting codes and applications to large scale classification systems". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53140.

Full text available
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (p. 37-39).
In this thesis, we study the performance of distributed output coding (DOC) and error-correcting output coding (ECOC) as potential methods for expanding the class of tractable machine-learning problems. Using distributed output coding, we were able to scale a neural-network-based algorithm to handle nearly 10,000 output classes. In particular, we built a prototype OCR engine for Devanagari and Korean texts based upon distributed output coding. We found that the resulting classifiers performed better than existing algorithms, while maintaining small size. Error-correction, however, was found to be ineffective at increasing the accuracy of the ensemble. For each language, we also tested the feasibility of automatically finding a good codebook. Unfortunately, the results in this direction were primarily negative.
by Jeremy Scott Hurwitz.
M.Eng.
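A minimal sketch of the ECOC decoding step discussed above: each class is assigned a binary codeword, one binary classifier predicts each bit, and the class whose codeword is nearest in Hamming distance to the predicted bit string wins. The codebook and the "predicted" bits below are invented for illustration and are not taken from the thesis's OCR system.

```python
# Error-correcting output coding (ECOC) decoding: label a test example with the
# class whose codeword is closest in Hamming distance to the predicted bits.
CODEBOOK = {
    "ka":  (0, 0, 0, 0, 0, 0),
    "kha": (0, 1, 1, 0, 1, 1),
    "ga":  (1, 0, 1, 1, 0, 1),
    "gha": (1, 1, 0, 1, 1, 0),
}   # pairwise Hamming distance 4, so one wrong binary classifier is tolerated

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(predicted_bits):
    return min(CODEBOOK, key=lambda cls: hamming(CODEBOOK[cls], predicted_bits))

predicted = (1, 0, 1, 1, 0, 0)        # one binary classifier got its bit wrong
print(decode(predicted))              # -> "ga" despite the single error
```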
33

Katsaros, A. "An adaptable high-speed error-control algebraic decoder". Thesis, University of Kent, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374159.

Full text available
34

Alabbadi, Mohssen. "Integration of error correction, encryption, and signature based on linear error-correcting block codes". Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/14959.

Full text available
35

Ong, Chong Tean. "On the undetected error probability of linear codes". Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29722.

Full text available
Abstract:
The probability of undetected error P_ud(ε) for the primitive triple-error-correcting BCH codes of blocklength 2^m − 1, used solely for error detection on a binary symmetric channel with crossover probability ε ≤ 1/2, is examined. It is shown that for odd values of m, P_ud(ε) increases monotonically with ε. For even values of m, this is not necessarily true. However, for a fixed ε, as m increases, P_ud(ε) approaches 2^−p, where p is the number of parity bits. The extended double and triple-error-correcting primitive BCH codes are also examined. The undetected error probability of these codes is shown to have similar characteristics to the non-extended cases. An improved upper bound on the probability of undetected error which is valid for any linear code is derived. A comparison of this improved upper bound with the Kasami upper bound for some classes of codes is shown.
Faculty of Applied Science, Department of Electrical and Computer Engineering.
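For a linear code used purely for error detection on a binary symmetric channel, an undetected error occurs exactly when the error pattern is itself a nonzero codeword, so P_ud(ε) = Σ_{w≥1} A_w ε^w (1−ε)^(n−w), where A_w is the weight distribution. The sketch below evaluates this for an assumed small [7,4] Hamming code; the thesis studies the much longer BCH codes for which obtaining the weight distribution is the hard part.

```python
# Undetected error probability of a small linear code on a BSC, computed from
# its weight distribution: P_ud(eps) = sum_{w>=1} A_w * eps^w * (1-eps)^(n-w).
from collections import Counter
from itertools import product

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
n, k = 7, 4

def encode(msg):
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

A = Counter(sum(encode(m)) for m in product((0, 1), repeat=k))   # weight distribution

def p_undetected(eps):
    return sum(A[w] * eps**w * (1 - eps)**(n - w) for w in A if w > 0)

for eps in (0.01, 0.1, 0.5):
    print(f"eps = {eps}: P_ud = {p_undetected(eps):.3e}")
# At eps = 0.5 this equals (2^k - 1) / 2^n = 15/128, just below the 2^(-p)
# value (p = n - k parity bits) that the abstract's long codes approach.
```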
36

Lan, Ching Fu. "Design techniques for graph-based error-correcting codes and their applications". Texas A&M University, 2004. http://hdl.handle.net/1969.1/3329.

Full text available
Abstract:
In Shannon’s seminal paper, “A Mathematical Theory of Communication”, he defined “Channel Capacity”, which predicted the ultimate performance that transmission systems can achieve and suggested that capacity is achievable by error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between transmitted information and redundancy and correct or detect errors caused by channels afterward. The discovery of turbo codes and rediscovery of Low Density Parity Check codes (LDPC) have revived the research in channel coding with novel ideas and techniques on code concatenation, iterative decoding, graph-based construction and design based on density evolution. This dissertation focuses on the design aspect of graph-based channel codes such as LDPC and Irregular Repeat Accumulate (IRA) codes via density evolution, and uses the technique (density evolution) to design IRA codes for scalable image/video communication and LDPC codes for distributed source coding, which can be considered as a channel coding problem. The first part of the dissertation includes design and analysis of rate-compatible IRA codes for scalable image transmission systems. This part presents a density evolution analysis of the effect of puncturing applied to IRA codes and the asymptotic analysis of the performance of the systems. In the second part of the dissertation, we consider designing source-optimized IRA codes. The idea is to take advantage of the capability of Unequal Error Protection (UEP) of IRA codes against errors because of their irregularities. In video and image transmission systems, the performance is measured by Peak Signal to Noise Ratio (PSNR). We propose an approach to design IRA codes optimized for such a criterion. In the third part of the dissertation, we investigate the Slepian-Wolf coding problem using LDPC codes. The problems to be addressed include coding problems involving multiple sources and non-binary sources, and coding using multi-level codes and nonbinary codes.
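Density evolution, the design tool named in the abstract, takes a particularly simple form on the binary erasure channel. Assuming a (3,6)-regular LDPC ensemble (a standard textbook setting, not one of the thesis's code designs), the sketch below iterates the one-line recursion and bisects for the decoding threshold.

```python
# Density evolution for a (dv, dc)-regular LDPC ensemble on a binary erasure
# channel: the variable-to-check erasure probability evolves as
#   x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1),
# and the threshold is the largest eps for which x_l -> 0.
def converges(eps, dv=3, dc=6, iters=2000, tol=1e-12):
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

lo, hi = 0.0, 1.0
for _ in range(50):                     # bisection on the threshold
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if converges(mid) else (lo, mid)
print(f"(3,6)-regular LDPC / BEC threshold ~ {lo:.4f}")   # close to the known 0.4294
```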
37

Shen, Bingxin. "Application of Error Correction Codes in Wireless Sensor Networks". Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/ShenB2007.pdf.

Full text available
38

Paul, Arnab. "Designing Secure and Robust Distributed and Pervasive Systems with Error Correcting Codes". Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6848.

Full text available
Abstract:
This thesis investigates the role of error-correcting codes in Distributed and Pervasive Computing. The main results are at the intersection of Security and Fault Tolerance for these environments. There are two primary areas that are explored in this thesis. 1. We have investigated protocols for large scale fault tolerant secure distributed storage. The two main concerns here are security and redundancy. In one arm of this research we developed SAFE, a distributed storage system based on a new protocol that offers a two-in-one solution to fault-tolerance and confidentiality. This protocol is based on cryptographic properties of error correction codes. In another arm, we developed esf, another prototype distributed persistent storage; esf facilitates seamless hardware extension of storage units, high resilience to loads and provides high availability. The main ingredient in its design is a modern class of erasure codes known as the Fountain Codes. One problem in such large storage is the heavy overhead of the associated fingerprints needed for checking data integrity. esf deploys a clever integrity check mechanism by use of a data structure known as the Merkle Tree to address this issue. 2. We also investigated the design of a new remote authentication protocol. Applications over long range wireless would benefit quite a bit from this design. We designed and implemented LAWN, a lightweight remote authentication protocol for wireless networks that deploys a randomized approximation scheme based on Error correcting codes. We have evaluated in detail the performance of LAWN; while it adds very low overhead of computation, the savings in bandwidth and power are quite dramatic.
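A minimal sketch of the Merkle-tree integrity check mentioned above (an illustration only, not the esf implementation): the root hash commits to all blocks, and any single block can be verified against the root with a logarithmic-size authentication path.

```python
# Merkle tree over data blocks: build the hash levels, extract an
# authentication path for one block, and verify that block against the root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(blocks):
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]          # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        if sibling >= len(level):
            sibling = index                      # duplicated-node case
        path.append((level[sibling], index % 2)) # (sibling hash, am I the right child?)
        index //= 2
    return path

def verify(block, path, root):
    node = h(block)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

blocks = [b"block-%d" % i for i in range(5)]
levels = build_levels(blocks)
root = levels[-1][0]
path = auth_path(levels, 3)
print(verify(blocks[3], path, root))     # True
print(verify(b"tampered", path, root))   # False
```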
39

Kanungo, Aparna. "Threshold analysis with fault-tolerant operations for nonbinary quantum error correcting codes". Texas A&M University, 2005. http://hdl.handle.net/1969.1/2714.

Full text available
Abstract:
Quantum error correcting codes have been introduced to encode the data bits in extra redundant bits in order to accommodate errors and correct them. However, due to the delicate nature of the quantum states or faulty gate operations, there is a possibility of catastrophic spread of errors which might render the error correction techniques ineffective. Hence, in this thesis we concentrate on how various operations can be carried out fault-tolerantly so that the errors are not propagated in the same block. We prove universal fault-tolerance for nonbinary CSS codes. This thesis is focussed only on nonbinary quantum codes and all the results pertain to nonbinary codes. Efficient error detection and correction techniques using fault-tolerant techniques can help as long as we ensure that the gate error probability is below a certain threshold. The calculation of this threshold is therefore important to see if quantum computations are realizable or not, even with fault-tolerant operations. We derive an expression to compute the gate error threshold for nonbinary quantum codes and test this result for different classes of codes, to get codes with best threshold results.
40

Bazzi, Louay Mohamad Jamil 1974. "Minimum distance of error correcting codes versus encoding complexity, symmetry, and pseudorandomness". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/17042.

Full text available
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (leaves 207-214).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
We study the minimum distance of binary error correcting codes from the following perspectives: * The problem of deriving bounds on the minimum distance of a code given constraints on the computational complexity of its encoder. * The minimum distance of linear codes that are symmetric in the sense of being invariant under the action of a group on the bits of the codewords. * The derandomization capabilities of probability measures on the Hamming cube based on binary linear codes with good distance properties, and their variations. Highlights of our results include: * A general theorem that asserts that if the encoder uses linear time and sub-linear memory in the general binary branching program model, then the minimum distance of the code cannot grow linearly with the block length when the rate is nonvanishing. * New upper bounds on the minimum distance of various types of Turbo-like codes. * The first ensemble of asymptotically good Turbo like codes. We prove that depth-three serially concatenated Turbo codes can be asymptotically good. * The first ensemble of asymptotically good codes that are ideals in the group algebra of a group. We argue that, for infinitely many block lengths, a random ideal in the group algebra of the dihedral group is an asymptotically good rate half code with a high probability. * An explicit rate-half code whose codewords are in one-to-one correspondence with special hyperelliptic curves over a finite field of prime order where the number of zeros of a codeword corresponds to the number of rational points.
(cont.) * A sharp O(k^(-1/2)) upper bound on the probability that a random binary string generated according to a k-wise independent probability measure has any given weight. * An assertion saying that any sufficiently log-wise independent probability measure looks random to all polynomially small read-once DNF formulas. * An elaborate study of the problem of derandomizability of AC₀ by any sufficiently polylog-wise independent probability measure. * An elaborate study of the problem of approximability of high-degree parity functions on binary linear codes by low-degree polynomials with coefficients in fields of odd characteristics.
by Louay M.J. Bazzi.
Ph.D.
41

Cross, Andrew W. (Andrew William) 1979. "Fault-tolerant quantum computer architectures using hierarchies of quantum error-correcting codes". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44407.

Full text available
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 221-238).
Quantum computers have been shown to efficiently solve a class of problems for which no efficient solution is otherwise known. Physical systems can implement quantum computation, but devising realistic schemes is an extremely challenging problem largely due to the effect of noise. A quantum computer that is capable of correctly solving problems more rapidly than modern digital computers requires some use of so-called fault-tolerant components. Code-based fault-tolerance using quantum error-correcting codes is one of the most promising and versatile of the known routes for fault-tolerant quantum computation. This dissertation presents three main, new results about code-based fault-tolerant quantum computer architectures. The first result is a large new family of quantum codes that go beyond stabilizer codes, the most well-studied family of quantum codes. Our new family of codeword stabilized codes contains all known codes with optimal parameters. Furthermore, we show how to systematically find, construct, and understand such codes as a pair of codes: an additive quantum code and a classical (nonlinear) code. Second, we resolve an open question about universality of so-called transversal gates acting on stabilizer codes. Such gates are universal for classical fault-tolerant computation, but they were conjectured to be insufficient for universal fault-tolerant quantum computation. We show that transversal gates have a restricted form and prove that some important families of them cannot be quantum universal. This is strong evidence that so-called quantum software is necessary to achieve universality, and, therefore, fault-tolerant quantum computer architecture is fundamentally different from classical computer architecture. Finally, we partition the fault-tolerant design problem into levels of a hierarchy of concatenated codes and present methods, compatible with rigorous threshold theorems, for numerically evaluating these codes.
(cont.) The methods are applied to measure inner error-correcting code performance, as a first step toward elucidation of an effective fault-tolerant quantum computer architecture that uses no more than a physical, inner, and outer level of coding. Of the inner codes, the Golay code gives the highest pseudothreshold of 2 × 10⁻³. A comparison of logical error rate and overhead shows that the Bacon-Shor codes are competitive with Knill's C₄/C₆ scheme at a base error rate of 10⁻⁴.
by Andrew W. Cross.
Ph.D.
42

Toscano, Giuseppe. "Near Shannon-Limit Error Correcting Codes For Satellite Free-Space Optical Communications". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5303/.

Full text available
43

Bartz, Michael. "Soft decision decoding of block codes using multilayer perceptrons". Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15391.

Full text available
44

Guruswami, Venkatesan. "List decoding of error-correcting codes : winning thesis of the 2002 ACM Doctoral Dissertation Competition /". Berlin [u.a.] : Springer, 2004. http://www.loc.gov/catdir/enhancements/fy0823/2004115727-d.html.

Full text available
45

Tezeren, Serdar U. "Reed-Muller codes in error correction in wireless adhoc networks". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FTezeren.pdf.

Pełny tekst źródła
Streszczenie:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Murali Tummala, Roberto Cristi. Includes bibliographical references (p. 133-134). Also available online.
Style APA, Harvard, Vancouver, ISO itp.
46

Luna, Amjad A. "The design and implementation of trellis-based soft decision decoders for block codes". Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15818.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Blanchard, Bart. "Quantization effects and implementation considerations for turbo decoders". [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000107.

Pełny tekst źródła
Streszczenie:
Thesis (M.S.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains xiii, 91 p.; also contains graphics. Includes vita. Includes bibliographical references.
Style APA, Harvard, Vancouver, ISO itp.
48

Thomas, Anoop. "Index Coding, Error Correcting Index Codes And Matroids". Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5326.

Pełny tekst źródła
Streszczenie:
In the problem of index coding, there is a central source sending messages to a set of receivers which demand messages over a noiseless broadcast channel. Each receiver knows a subset of the messages, referred to as side-information. A backward channel exists between each of the receivers and the source. Receivers use this reverse channel to communicate the side-information back to the source. The source uses this side-information to develop encoding schemes so that all the receivers are satisfied with the minimum number of transmissions. In this work, we consider a few problems related to index coding. First, we consider the informed source coding problem, in which the receiver communicates only the cardinality of the side-information available to it and not the messages that constitute the side-information. We use l-th Near Maximum Distance Separable (Near-MDS) codes to construct encoding schemes for the informed source coding problem. The advantage of using l-th Near-MDS codes is the reduction of the required field size. For a certain class of informed source coding problems, we show that the codes constructed from l-th Near-MDS codes are optimal under some field size restrictions. Links between matroid theory, network coding and index coding problems have been an active area of research. The generalized index coding problem is a general version of the conventional index coding problem in which both the source and the receivers possess linear functions of messages. We establish connections between representations of discrete polymatroids and generalized index coding. We show that the generalized index coding problem has a vector linear solution if and only if there exists a representable discrete polymatroid satisfying certain conditions. Further, from a discrete polymatroid a generalized index coding problem is constructed. It is shown that if the generalized index coding problem has a vector linear solution over the binary field then an associated discrete polymatroid is representable over the binary field. For generalized index coding problems constructed from matroids, we show that a scalar linear solution to the constructed problem exists if and only if the matroid is binary representable. We also consider the index coding with restricted information (ICRI) problem, in which the source must ensure that each receiver will not be able to decode a certain subset of messages. Interference alignment techniques are used to obtain results on the ICRI problem. Necessary conditions for the existence of a solution to the ICRI problem are found. A technique to obtain solutions to the ICRI problem using contractions corresponding to restrictions is obtained. Index coding over noisy channels is also considered. Error correcting index codes are encoding schemes which enable every receiver to correct up to a certain number of errors in the received message. We allow each receiver to have a different error correcting capability and introduce differential error correcting index codes to achieve this. A set of necessary and sufficient conditions for a matrix to correspond to a vector linear differential index code is found. We establish the link between vector linear differential error correcting index codes and discrete polymatroids. It is shown that a vector linear differential error correcting index code exists if and only if there exists a representable discrete polymatroid satisfying certain conditions. Lastly, the index coding problem over binary symmetric channels is studied.
It is observed that the probability of error at each receiver depends on the index code used. A condition on the optimal index codes which minimize the maximum error probability among all the receivers is derived. For a special class of index coding problems, an algorithm to identify an optimal index code which gives the best performance in terms of minimal maximum error probability across all the receivers is provided.
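To make the index coding setup in this abstract concrete, the following is an illustrative toy example (my own, not a construction from the thesis): three receivers each demand one message and already hold the other two as side-information, so a single XOR broadcast over the binary field replaces three uncoded transmissions, and each receiver decodes by XOR-ing out what it already knows. Error-correcting index codes then protect such broadcast symbols so that decoding still succeeds when some transmissions are received in error.

```python
# Toy scalar linear index code over GF(2): messages are bit patterns and
# XOR plays the role of addition over the binary field.
from functools import reduce
from operator import xor

messages = {1: 0b1010, 2: 0b0111, 3: 0b1100}   # source messages x1, x2, x3
# Receiver i demands x_i and holds every other message as side-information.
side_info = {i: [messages[j] for j in messages if j != i] for i in messages}

# One coded broadcast instead of three uncoded transmissions.
broadcast = reduce(xor, messages.values())

def decode(receiver: int) -> int:
    """Receiver XORs its side-information out of the broadcast symbol."""
    return reduce(xor, side_info[receiver], broadcast)

for i in messages:
    assert decode(i) == messages[i]
print("every receiver recovered its demand from a single transmission")
```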
Style APA, Harvard, Vancouver, ISO itp.
49

Jinghu, Chen. "Reduced complexity decoding algorithms for low-density parity check codes and turbo codes". 2003. http://proquest.umi.com/pqdweb?index=0&did=765086321&SrchMode=2&sid=11&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1233251616&clientId=23440.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Karat, Nujoom Sageer. "Error Correction in Index Coding And Coded Caching". Thesis, 2019. https://etd.iisc.ac.in/handle/2005/5054.

Pełny tekst źródła
Streszczenie:
In an index coding problem, there is a sender which has a set of messages, and there are multiple receivers demanding possibly different subsets of these messages while possessing some other subsets of these messages as side-information. The goal for the sender is to meet the demands of the receivers in the minimum number of transmissions, from which, together with its side-information, every receiver can recover what it demands. There have been many extensions and generalizations of the index coding problem in the literature. One such generalization is to the case when the transmissions are subject to errors. Lower and upper bounds on the optimal number of transmissions required to correct a specified number of errors have been established. In this work, optimal linear error correcting index codes are obtained for three classes of index coding problems, namely single-prior index coding problems, single unicast index coding problems with symmetric neighboring consecutive side-information, and index coding problems with min-rank at most two. In another generalization of the index coding problem, namely the Generalized Index Coding (GIC) problem, linear combinations of the messages can be demanded, and the side-information at the receivers may also be linear combinations of messages. For a particular class of GIC problems, we discuss a construction of optimal scalar linear index codes. For this class of GIC problems, optimal linear error correcting index codes are also constructed. In a coded caching problem, a single server is connected to a set of users through a shared bottleneck link, which is assumed to be error-free. A coded caching problem involves two phases. The placement or prefetching phase is carried out during non-peak hours, during which all the users fill their local caches with portions of the available files. During the delivery phase, each user requests a file and the server delivers coded transmissions to meet the demands. If the parts of the files are stored in the cache without coding during placement, the prefetching is termed uncoded prefetching. If coding is done among the parts of the files during placement, the prefetching is referred to as coded prefetching. When multiple users share a common cache, the scheme is referred to as the shared cache scheme. Centralized schemes involve central coordination during the placement phase; schemes for which no central coordination is required during the placement phase are called decentralized schemes. In our work, the links between the server and the users are assumed to be error-prone. A new delivery scheme is required to meet the demands of each user even after receiving a finite number of transmissions in error and in the presence of erroneous portions of files in the cache. The minimum average rate and the minimum peak rate for this problem are characterized. Optimal linear error correcting delivery schemes are proposed for various classes of coded caching problems in the absence of prefetching errors. The classes of problems considered include centralized schemes such as symmetric batch prefetching and the shared cache scheme, which involve uncoded prefetching, and the Chen, Fan, Letaief scheme and the Jesus Vilardebo scheme, which involve coded prefetching. We have also considered decentralized schemes such as the Ali-Niesen decentralized scheme and the Least Recently Sent (LRS) online coded caching scheme. We also prove the optimality of the LRS online coded caching scheme using results from index coding.
Furthermore, alternate proofs of optimality are given for coded caching problems with symmetric batch prefetching and with the Ali-Niesen decentralized prefetching. In addition, lower bounds are established on the optimal rate required when there are prefetching errors in symmetric batch prefetching.
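As a point of reference for the coded caching terminology in this abstract, the sketch below evaluates the standard centralized Ali-Niesen (Maddah-Ali and Niesen) delivery rate with uncoded symmetric batch prefetching, R(M) = K(1 - M/N)/(1 + KM/N) at the corner points M = tN/K, and compares it with uncoded delivery. This is only a textbook baseline under an error-free link, not the error-correcting delivery schemes constructed in the thesis.

```python
def centralized_rate(K: int, N: int, M: float) -> float:
    """Ali-Niesen centralized delivery rate (worst-case distinct demands,
    N >= K) at a corner point M = t*N/K with t an integer; in between,
    the scheme memory-shares between neighbouring corner points."""
    t = K * M / N
    assert abs(t - round(t)) < 1e-9, "evaluate at a corner point M = t*N/K"
    return K * (1 - M / N) / (1 + round(t))

def uncoded_rate(K: int, N: int, M: float) -> float:
    """Uncoded delivery: every user still needs the uncached fraction of its file."""
    return K * (1 - M / N)

if __name__ == "__main__":
    K = N = 4
    for t in range(K + 1):
        M = t * N / K
        print(f"M={M:.1f}: coded rate={centralized_rate(K, N, M):.2f}  "
              f"uncoded rate={uncoded_rate(K, N, M):.2f}")
```

For K = N = 4 and M = 1, for example, the coded delivery rate is 1.5 file transmissions versus 3 for uncoded delivery, which is the kind of gain the error-correcting delivery schemes above aim to preserve over noisy links.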
Style APA, Harvard, Vancouver, ISO itp.