
Dissertations / Theses on the topic 'Detection codes'



Consult the top 50 dissertations / theses for your research on the topic 'Detection codes.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Danfeng. "Iterative coded multiuser detection using LDPC codes." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27939.

Full text
Abstract:
Multiuser detection (MUD) has been regarded as an effective technique for combating cochannel interference (CCI) in time-division multiple access (TDMA) systems and multiple access interference (MAI) in code-division multiple access (CDMA) systems. An optimal multiuser detector for coded multiuser systems is usually practically infeasible due to the associated complexity. An iterative receiver consisting of a soft-input soft-output (SISO) multiuser detector and a bank of SISO single-user decoders can provide performance that approaches that of a single-user system after a few iterations. In this thesis, MUD and LDPC decoding are combined to improve the multiuser receiver performance. The soft output of the LDPC decoder is fed back to the multiuser detector to improve the detection. This leads to decision variables that have a smaller MAI component. These decision variables are then returned to the decoder, and the decoding process benefits from the improved decision variables. The process can be repeated many times. The resulting iterative multiuser receiver is designed based on the soft parallel interference cancellation (PIC) algorithm. For the interference reconstruction, the LDPC decoder is extended to produce the log-likelihood ratios (LLRs) of the information bits as well as the parity bits. A sub-optimal approach is proposed to output the LLRs of the parity bits with very low complexity. Thanks to the powerful error-correction ability of the LDPC decoder, the LDPC multiuser receiver achieves satisfactory convergence and substantially outperforms non-iterative receivers. Three types of SISO multiuser detectors are provided: the soft interference cancellation (SIC) detector, the SISO decorrelating detector, and the SISO minimum mean square error (MMSE) detector. The resulting system performance converges very quickly. A comparison of these three detectors is also presented in this thesis.
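The soft PIC feedback loop described in the abstract can be sketched in a few lines, assuming a simple synchronous two-user matched-filter model y = Rb + n. The correlation matrix, the fed-back LLR values, and all variable names below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

# Toy soft parallel interference cancellation (PIC) sketch.
R = np.array([[1.0, 0.3],
              [0.3, 1.0]])        # cross-correlation matrix (2 users)
b = np.array([1.0, -1.0])         # transmitted BPSK symbols
y = R @ b                         # noiseless matched-filter outputs

llr = np.array([4.0, -4.0])       # LLRs fed back from the decoder (assumed)
b_soft = np.tanh(llr / 2.0)       # soft symbol estimates E[b_k]

# Reconstruct and cancel the multiple-access interference for each user:
z = y - (R - np.diag(np.diag(R))) @ b_soft
```

Each iteration would replace the assumed LLRs with fresh decoder output, so the residual MAI in z shrinks as the LDPC decoder's estimates harden.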
APA, Harvard, Vancouver, ISO, and other styles
2

Schiffel, Ute. "Hardware Error Detection Using AN-Codes." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-69872.

Full text
Abstract:
Due to continuously decreasing feature sizes and the increasing complexity of integrated circuits, commercial off-the-shelf (COTS) hardware is becoming less and less reliable. However, dedicated reliable hardware is expensive and usually slower than commodity hardware. Thus, economic pressure will most likely result in the usage of unreliable COTS hardware in safety-critical systems. This, in turn, creates the need for software-implemented solutions for handling execution errors caused by unreliable hardware. In this thesis, we provide techniques for detecting hardware errors that disturb the execution of a program. The detection provided facilitates handling of these errors, for example, by retry or graceful degradation. We realize the error detection by transforming unsafe programs, which are not guaranteed to detect execution errors, into safe programs that detect execution errors with a high probability. To this end, we use arithmetic AN-, ANB-, ANBD-, and ANBDmem-codes. These codes detect errors that modify data during storage or transport, as well as errors that disturb computations. Furthermore, the error detection provided is independent of the hardware used. We present the following novel encoding approaches:
- Software Encoded Processing (SEP), which transforms an unsafe binary into a safe execution at runtime by applying an ANB-code, and
- Compiler Encoded Processing (CEP), which applies encoding at compile time and provides different levels of safety by using different arithmetic codes.
In contrast to existing encoding solutions, SEP and CEP make it possible to encode applications whose data and control flow are not completely predictable at compile time. For encoding, SEP and CEP use the set of encoded operations also presented in this thesis.
To the best of our knowledge, we are the first to present the encoding of a complete RISC instruction set, including Boolean and bitwise logical operations, casts, unaligned loads and stores, shifts, and arithmetic operations. Our evaluations show that encoding with SEP and CEP significantly reduces the amount of erroneous output caused by hardware errors. Furthermore, our evaluations show that, in contrast to replication-based approaches for detecting errors, arithmetic encoding facilitates the detection of permanent hardware errors. This increased reliability does not come for free. However, unexpectedly, the runtime costs for the different arithmetic codes supported by CEP increase only linearly compared to redundancy, while the gained safety increases exponentially.
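The AN-code principle underlying SEP and CEP can be illustrated with a minimal sketch: a value x is stored as A*x, so every valid word is divisible by A, and addition commutes with the encoding. The constant and the function names are our assumptions for illustration, not the thesis code:

```python
# Minimal AN-code sketch: encoded values are multiples of A, so a
# modulo check detects most corruptions of stored data or of an
# arithmetic result.
A = 58659  # an example encoding constant (assumption)

def encode(x: int) -> int:
    return A * x

def is_valid(c: int) -> bool:
    return c % A == 0

def checked_add(cx: int, cy: int) -> int:
    # Addition is preserved under AN encoding: A*x + A*y = A*(x + y),
    # so the sum of two valid codewords must again be divisible by A.
    result = cx + cy
    if not is_valid(result):
        raise RuntimeError("hardware error detected")
    return result
```

Because A is odd, no single bit flip (a change of plus or minus 2^k) can map one multiple of A to another, so such faults are always caught; the ANB- and ANBD-codes mentioned above extend this with signatures to also catch exchanged operands and outdated values.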
3

Xiao, Jiaxi. "Information theoretic approach in detection and security codes." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43620.

Full text
Abstract:
Signal detection plays a critical role in realizing reliable transmission through communication systems. In this dissertation, by applying an information-theoretic approach, efficient detection schemes and algorithms are designed for three particular communication systems. First, a computationally efficient coding and detection algorithm is developed to decode two-dimensional inter-symbol interference (ISI) channels. The detection algorithm significantly reduces the computational complexity and makes the proposed equalization algorithm applicable. A new metric, the post-detection mutual information (PMI), is established to quantify the ultimate information rate between the discrete inputs and the hard-detected output. This is the first time that the information rate loss caused by the hard mapping of the detectors has been considered. Since the hard-mapping step in the detector is irreversible, we expect the PMI to be reduced compared to the MI without hard mapping. This conclusion is confirmed by both simulation and theoretical results. Finally, a random complex field code (RCFC) is designed to achieve the secrecy capacity of a wiretap channel with a noiseless main channel and a binary erasure eavesdropper's channel. More importantly, in addition to approaching the secrecy capacity, the RCFC is the first code design that provides a platform to trade off secrecy performance against the erasure rate of the eavesdropper's channel and the secrecy rate.
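The PMI idea, that an irreversible hard mapping at the detector output can only reduce mutual information, can be checked numerically on a toy discrete channel. The transition probabilities below are invented for illustration; the dissertation's channels are 2D ISI channels:

```python
import numpy as np

def mutual_info(joint):
    # I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

# 2-input, 4-output discrete channel (made-up numbers for illustration)
p_y_given_x = np.array([[0.50, 0.30, 0.15, 0.05],
                        [0.05, 0.15, 0.30, 0.50]])
joint_soft = 0.5 * p_y_given_x          # uniform binary input

# Hard mapping: outputs {0,1} -> decision 0, outputs {2,3} -> decision 1
joint_hard = np.stack([joint_soft[:, :2].sum(axis=1),
                       joint_soft[:, 2:].sum(axis=1)], axis=1)

i_soft = mutual_info(joint_soft)        # MI before the hard decision
i_hard = mutual_info(joint_hard)        # post-detection MI (PMI analogue)
```

Here i_hard plays the role of the PMI and comes out strictly smaller than i_soft, as the data processing inequality predicts for any irreversible mapping of the detector output.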
4

Gu, Yu. "Noncoherent communications using space-time trellis codes." Thesis, University of Canterbury. Electrical and Computer Engineering, 2008. http://hdl.handle.net/10092/1252.

Full text
Abstract:
In the last decade, much interest has been shown in space-time trellis codes (STTCs), since they can offer coding gain along with the ability to exploit the space and time diversity of MIMO channels. STTCs can be flexibly designed by trading off performance versus complexity. The work of Dayal [1] stated that if training symbols are used together with data symbols, then a space-time code can be viewed as a noncoherent code. The authors of [1] described the migration from coherent space-time codes to training-assisted noncoherent space-time codes. This work focuses on the development of training-assisted noncoherent STTCs, thus extending the concept of noncoherent training codes to STTCs. We investigate the intrinsic link between coherent and noncoherent demodulation. By analyzing noncoherent STTCs for up to four transmit antennas, we see that they have a performance deterioration similar to that of noncoherently demodulated M-PSK using a single antenna. Various simulations have been done to confirm the analysis.
5

Katz, Ettie. "Trellis codes for multipath fading ISI channels with sequential detection." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13908.

Full text
6

Oruç, Özgür Altınkaya Mustafa Aziz. "Differential and coherent detection schemes for space-time block codes/." [s.l.]: [s.n.], 2002. http://library.iyte.edu.tr/tezler/master/elektrikveelektronikmuh/T000133.pdf.

Full text
7

Knopp, Raymond. "Module-phase-codes with non-coherent detection and reduced-complexity decoding." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=68034.

Full text
Abstract:
This thesis considers M-ary phase coding for the non-coherent AWGN channel. More precisely, we develop block-coded MPSK modulation schemes specifically for non-coherent block detection which significantly surpass the performance of ideal uncoded coherent MPSK. A class of block codes which are well-matched to MPSK modulation, called module-phase codes, is presented. The algebraic framework used for defining these codes relies on elements of module theory, which are discussed along with a method for constructing such codes for non-coherent detection. It is shown that differential encoding, when considered on a block basis, may be viewed as a specific code from a particular class of module-phase codes. Two classes of more powerful codes which achieve significant coding gain with respect to coherent detection of uncoded MPSK are presented. In the first class of module-phase codes, the coding gain is achieved at the expense of bandwidth expansion. In the second class, however, the coding gain is achieved at the expense of signal constellation expansion without expanding bandwidth. A reduced-complexity, sub-optimal decoding strategy based on a modification of information set decoding is described. Its performance is analysed through the use of computer simulations for various codes. Finally, we address the performance of these codes combined with the reduced-complexity decoding method over correlated Rayleigh fading channels.
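Non-coherent block detection of phase codes can be sketched as follows: the receiver correlates the received block against each codeword and takes the magnitude, so the unknown carrier phase drops out. The tiny orthogonal QPSK codebook below is our invented example, not one of the module-phase codes from the thesis:

```python
import numpy as np

M = 4  # QPSK alphabet

# Toy codebook: four length-4 phase codewords given by their QPSK
# phase indices (an invented example with mutually orthogonal rows).
codebook = np.exp(2j * np.pi * np.array([[0, 0, 0, 0],
                                         [0, 1, 2, 3],
                                         [0, 2, 0, 2],
                                         [0, 3, 2, 1]]) / M)

def detect(r):
    # Non-coherent metric: magnitude of the correlation with each
    # codeword; invariant to a common phase rotation of the block.
    return int(np.argmax(np.abs(codebook.conj() @ r)))

theta = 1.234                       # unknown carrier phase
r = np.exp(1j * theta) * codebook[2]  # rotated, noise-free received block
```

Codewords that differ only by a common phase rotation would be indistinguishable under this metric, which is why code design for non-coherent detection (and the block-wise view of differential encoding) must keep such pairs out of the codebook.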
8

Valenti, Matthew C. "Iterative Detection and Decoding for Wireless Communications." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/28290.

Full text
Abstract:
Turbo codes are a class of forward error correction (FEC) codes that offer energy efficiencies close to the limits predicted by information theory. The features of turbo codes include parallel code concatenation, recursive convolutional encoding, nonuniform interleaving, and an associated iterative decoding algorithm. Although the iterative decoding algorithm has been primarily used for the decoding of turbo codes, it represents a solution to a more general class of estimation problems that can be described as follows: a data set directly or indirectly drives the state transitions of two or more Markov processes; the output of one or more of the Markov processes is observed through noise; based on the observations, the original data set is estimated. This dissertation specifically describes the process of encoding and decoding turbo codes. In addition, a more general discussion of iterative decoding is presented. Then, several new applications of iterative decoding are proposed and investigated through computer simulation. The new applications solve two categories of problems: the detection of turbo codes over time-varying channels, and the distributed detection of coded multiple-access signals. Because turbo codes operate at low signal-to-noise ratios, the process of phase estimation and tracking becomes difficult to perform. Additionally, the turbo decoding algorithm requires precise estimates of the channel gain and noise variance. The first significant contribution of this dissertation is a study of several methods of channel estimation suitable specifically for turbo coded systems. The second significant contribution of this dissertation is a proposed method for jointly detecting coded multiple-access signals using observations from several locations, such as spatially separated base stations. The proposed system architecture draws from the concepts of macrodiversity combining, multiuser detection, and iterative decoding. 
Simulation results show that when the system is applied to the time division multiple-access cellular uplink, a significant improvement in system capacity results.
Ph. D.
9

Xu, Chang. "Inconsistency detection and resolution for context-aware pervasive computing /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20XU.

Full text
10

Albayrak, Aras. "Automatic Pose and Position Estimation by Using Spiral Codes." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-27175.

Full text
Abstract:
This master thesis concerns the synthesis and detection of spiral symbols and the estimation of pan/tilt angle and position by means of camera calibration, with the focus on the latter, the estimation of localization parameters. Spiral symbols are used both to give an object an identity and to locate it. Due to the spiral symbol's characteristic shape, we can use the generalized structure tensor (GST) algorithm, which is particularly efficient at detecting different members of the spiral family. Once we detect the spirals, we know their position and identity parameters within an a priori known geometric configuration (on a sheet of paper). In turn, this information can be used to estimate the 3D position and orientation of the object to which the spirals are attached, using a camera calibration method. This thesis provides an insight into how automatic detection of spirals attached to a sheet of paper, and from this, automatic deduction of the position and pose parameters of the sheet, can be achieved using a network camera. The GST algorithm detects spirals efficiently, with respect to both detection performance and computational resources, because it uses a spiral image model well adapted to the spirals' spatial frequency characteristics. We report results on how detection is affected by the zoom parameters of the network camera, as well as by GST parameters such as filter size. After all spiral centers are located and identified with respect to their twist/bending parameter, a flexible camera calibration technique proposed by Zhengyou Zhang, implemented in Matlab within the present study, is performed. The performance of the position and pose estimation in 3D is reported. The main conclusion is that we obtain reasonable surface-angle estimates for images taken by a WLAN network camera under different conditions, such as different illumination and distances.
11

Levienaise-Obadia, B. "Efficient texture-based indexing for interactive image retrieval and cue detection." Thesis, University of Surrey, 2001. http://epubs.surrey.ac.uk/842806/.

Full text
Abstract:
The focus of this thesis is the definition of a complete framework for texture-based annotation and retrieval. This framework is centred on the concept of "texture codes", so called because they encode the relative energy levels of Gabor filter responses. These codes are pixel-based descriptors that are robust to illumination variations, can be generated efficiently, and can be included in a fast retrieval process. They can act as local or global descriptors, and can be used in the representations of regions or objects. Our framework is therefore capable of supporting a wide range of queries and applications. During our research, we have been able to utilise the results of psychological studies on the perception of similarity and have explored non-metric similarity scores. As a result, we have found that similarity can be evaluated with simple measures predominantly relying on the information extracted from the query, without a drastic loss in retrieval performance. We have been able to show that the simplest measure possible, counting the number of common codes between the query and a stored image, can for some algorithmic parameters outperform well-proven benchmarks. Importantly, our measures can all support partial comparisons, so that region-based queries can be answered without the need for segmentation. We have investigated refinements of the framework which endow it with the ability to localise queries in candidate images, and to deal with user relevance feedback. The final framework can generate good and fast retrieval results, as demonstrated with a database of 3723 images, and can therefore be useful as a stand-alone system. The framework has also been applied to the problem of high-level annotation. In particular, it has been used as a cue detector, where a cue is a visual example of a particular concept such as a type of sport.
The detection results show that the system can predict the correct cue among a small set of cues, and can therefore provide useful information to an engine fusing the outputs of several cue detectors. So an important aspect of this framework is that it is expected to be an asset within a multi-cue annotation and/or retrieval system.
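The "count the common codes" similarity measure described above amounts to a set intersection. A minimal sketch follows; the function and variable names are ours, not the thesis implementation:

```python
# Each image is reduced to the set of texture codes occurring in it;
# similarity to a query is simply the number of query codes the image
# shares.

def similarity(query_codes: set, image_codes: set) -> int:
    return len(query_codes & image_codes)

def rank(query_codes: set, database: dict) -> list:
    # database maps image_id -> set of texture codes; highest overlap first
    return sorted(database,
                  key=lambda k: similarity(query_codes, database[k]),
                  reverse=True)
```

Because the score depends only on codes present in the query, partial, region-based queries are supported without segmenting the stored images, which matches the partial-comparison property claimed above.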
12

Akdemir, Kahraman D. "Error Detection Techniques Against Strong Adversaries." Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-dissertations/406.

Full text
Abstract:
Side channel attacks (SCA) pose a serious threat to many cryptographic devices and have been shown to be effective against many existing security algorithms that are considered secure in the black-box model. These attacks are based on the key idea of recovering secret information using implementation-specific side channels. Active fault injection attacks in particular are very effective at breaking otherwise impervious cryptographic schemes. Various countermeasures have been proposed to provide security against these attacks. Double-Data-Rate (DDR) computation, dual-rail encoding, and simple concurrent error detection (CED) are the most popular of these solutions. Even though these security schemes provide sufficient security against weak adversaries, they can be broken relatively easily by a more advanced attacker. In this dissertation, we propose various error detection techniques that target strong adversaries with advanced fault injection capabilities. We first describe the advanced attacker in detail and provide its characteristics. As part of this definition, we provide a generic metric to measure the strength of an adversary. Next, we discuss various techniques for protecting the finite state machines (FSMs) of cryptographic devices against active fault attacks. These techniques mainly depend on nonlinear robust codes and physically unclonable functions (PUFs). We show that, due to the nonuniform behavior of FSM variables, securing FSMs using nonlinear codes is an important and difficult problem. As a solution to this problem, we propose error detection techniques based on nonlinear codes with different randomization methods. We also show how PUFs can be utilized to protect a class of FSMs. This solution provides security on the physical level as well as the logical level. In addition, for each technique, we provide possible hardware realizations and discuss area/security performance.
Furthermore, we provide an error detection technique for protecting elliptic curve point addition and doubling operations against active fault attacks. This technique is based on nonlinear robust codes and provides nearly perfect error detection capability (except with exponentially small probability). We also conduct a comprehensive analysis in which we apply our technique to different elliptic curves (i.e., Weierstrass and Edwards) over different coordinate systems (i.e., affine and projective).
13

Khatami, Seyed Mehrdad. "Read Channel Modeling, Detection, Capacity Estimation and Two-Dimensional Modulation Codes for TDMR." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/577306.

Full text
Abstract:
Magnetic recording systems have reached a point where the grain size can no longer be reduced due to energy stability constraints. As a new magnetic recording paradigm, two-dimensional magnetic recording (TDMR) relies on sophisticated signal processing and coding algorithms, a much less expensive alternative to radically altering the media or the read/write head as required by other technologies. Due to 1) the significant reduction of grains per bit and 2) the aggressive shingled writing, TDMR faces several formidable challenges. Firstly, severe interference is introduced in both down-track and cross-track directions due to the read/write head dimensions. Secondly, the reduction in the number of grains per bit results in variations of bit boundaries, which consequently lead to data-dependent jitter noise. Moreover, the reduction of the bit-to-grain ratio will cause some bits not to be properly magnetized, or to be overwritten, which introduces write errors into the system. The nature of the write and read processes in TDMR necessitates that information storage be viewed as a two-dimensional (2D) system. The challenges in TDMR signal processing are 1) an accurate read channel model, 2) mitigating the effects of inter-track interference (ITI) and inter-symbol interference (ISI) with an equalizer, 3) developing 2D modulation/error-correcting codes matched to the TDMR channel model, 4) the design of truly 2D detectors, and 5) computing lower bounds on the capacity of the TDMR channel. This work addresses several objectives with regard to these challenges.

1. TDMR Channel Modeling: The 2D Microcell model is introduced as a read channel model for TDMR. This model captures the data-dependent properties of the media noise, and it is well suited to detector design. In line with what has already been done in TDMR channel models, improvements can be made to tune the 2D Microcell model for different bit-to-grain densities. Furthermore, the 2D Microcell model can be modified to take into account the dependency between adjacent microtrack border positions, which leads to a more accurate model in terms of closeness to the Voronoi model.

2. Detector Design: The need for 2D detection is not unique to TDMR systems. However, it is still largely an open problem to develop detectors that are close to optimal maximum likelihood (ML) detection in the 2D case. As one of the important blocks of the TDMR system, the generalized belief propagation (GBP) detector is developed and introduced as a near-ML detector, and it is tuned to improve the performance for the TDMR channel model.

3. Channel Capacity Estimation: TDMR envisions densities up to 10 Tb/in² as a result of drastically reducing the bit-to-grain ratio; to reach this goal, aggressive write (shingled writing) and read processes are used. Kavcic et al. proposed a simple magnetic grain model, called the granular tiling model, which captures the essence of the read/write process in TDMR. Capacity bounds for this model indicate that densities of 0.6 user bits per grain are possible; however, previous attempts do not come close to the channel capacity. We provide a truly two-dimensional detection scheme for the granular tiling model based on GBP. A factor graph interpretation of the detection problem is provided and formulated, and GBP is employed to compute marginal a posteriori probabilities for the constructed factor graph. Simulation results show large improvements in detection. A lower bound on the mutual information rate (MIR) is also derived for this model based on the GBP detector. Moreover, for the Voronoi channel model, the MIR is estimated for both constrained and unconstrained input.

4. Modulation Codes: Constrained codes, also known as modulation codes, are a key component of digital magnetic recording systems. A constrained code forbids particular input data patterns that lead to some of the dominant error events or to higher media noise. The goal of this dissertation with regard to modulation codes is to construct a 2D modulation code for the TDMR channel that improves the overall performance of the TDMR system. Furthermore, we implement an algorithm to estimate the capacity of 2D modulation codes based on the GBP algorithm. The capacity is also calculated in the presence of white and colored noise, which is the case for the TDMR channel.

5. Joint Detection and Decoding Schemes: In data recording systems, a concatenated approach to the constrained code and the error-correcting code (ECC) is typically used, and the decoding is done independently. We show the improvement obtained by combining the decoding of the constrained code and the ECC using the GBP algorithm. We consider the performance of combined modulation constraints and ECC on a binary-input additive white Gaussian noise channel (BIAWGNC) and also over one-dimensional (1D) and 2D ISI channels. We show that combining detection, demodulation and decoding results in superior performance compared to concatenated schemes.
14

Kagioglidis, Ioannis. "Performance analysis of a LINK-16/JTIDS compatible waveform with noncoherent detection, diversity and side information." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FKagioglidis%5FECE.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Robertson, R. Clark. "September 2009." Description based on title screen as viewed on 6 November 2009. Author(s) subject terms: Link-16/JTIDS, (31, 15) Reed-Solomon (RS) coding, 32-ary Orthogonal signaling, Additive White Gaussian Noise (AWGN), Pulse-Noise Interference (PNI), Perfect Side Information (PSI). Includes bibliographical references (p. 49-51). Also available in print.
15

Crowe, Don Raymond. "Error detection abilities of conducting students under four modes of instrumental score study." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186656.

Full text
Abstract:
This study investigated the effect of four score study styles--no score study, study with score alone, study with score and a correct aural example, and score study at the electronic keyboard--on the pitch and rhythm error detection abilities of beginning conducting students. Subjects were 30 members of undergraduate beginning conducting classes at three midwestern universities. Four tests were developed, each having 31 four- to six-measure excerpts from band literature. Each excerpt contained only one error. Excerpts were grouped according to difficulty and assigned to tests in a modified random manner to facilitate equality of difficulty between sets. Within each test, excerpts were arranged in order of increasing difficulty and rescored to contain from one to eight parts. A counterbalanced design was utilized featuring a Latin Square into which the four score study styles were entered. Over the course of four sessions subjects received all four styles and all four tests. The orders in which subjects received score study styles were assigned on a rotational basis. Each subject within a university received the tests in the same order, but this order varied between universities. Six Hypercard © (Atkinson, 1987-90) stacks were developed on a Macintosh LC computer for presentation of the tests, management of the study, and data collection. Excerpts were played through MIDI keyboards using sampled wind instrument sounds. Study with the score and a correct aural example was found to be significantly more effective than either study with the score alone or no study. No significant difference was found between score study at the keyboard and any other score study style. There were significant differences in test scores attributable to the number of parts in examples. Generally, error detection became more difficult as the number of parts in examples increased. 
There were no significant differences in test scores attributable to the order of presentation of score study styles, individual example sets, or groups/order of presentation of example sets. There were significant differences in mean score study time per session attributable to score study style, and in mean total time per session attributable to session number.
16

Arabaci, Murat. "Nonbinary-LDPC-Coded Modulation Schemes for High-Speed Optical Communication Networks." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/195826.

Full text
Abstract:
In June 2010, IEEE finished its ratification of the IEEE Standard 802.3ba, which set the target Ethernet speed at 100 Gbps. Studies on future trends in the ever-increasing demand for higher-speed optical fiber communications show no sign of decline in that demand. Constantly increasing internet traffic and bandwidth-hungry multimedia services like HDTV, YouTube and voice-over-IP can be cited as the main driving forces. Indeed, discussions of future upgrades to Ethernet speeds have already been initiated. It is predicted that the next upgrade will enable 400 Gbps Ethernet, and the one after will be toward enabling 1 Tbps Ethernet. Although such high and ultra-high transmission speeds are unprecedented over any transmission medium, the bottlenecks to achieving them over optical fiber remain fundamental. At such high operating symbol rates, the signal impairments due to inter- and intra-channel fiber nonlinearities and polarization mode dispersion are exacerbated to levels that cripple high-fidelity communication over optical fibers. Therefore, efforts should be exerted to provide solutions that not only answer the need for high-speed transmission but also maintain low operating symbol rates. In this dissertation, we contribute to these efforts by proposing nonbinary-LDPC-coded modulation (NB-LDPC-CM) schemes as enabling technologies that can meet both of the aforementioned goals. We show that our proposed NB-LDPC-CM schemes can outperform their prior-art binary counterparts, called bit-interleaved LDPC-coded modulation (BI-LDPC-CM) schemes, while attaining the same aggregate bit rates at lower complexity and latency. We provide a comprehensive analysis of the computational complexity of both schemes to justify our claims with solid evidence.
We also compare the performances of both schemes by using amplified spontaneous emission (ASE) noise dominated optical fiber transmission and short to medium haul optical fiber transmission scenarios. Both applications show outstanding performances of NB-LDPC-CM schemes over the prior-art BI-LDPC-CM schemes with increasing gaps in coding gain as the transmission speeds increase. Furthermore, we present how a rate-adaptive NB-LDPC-CM can be employed to fully utilize the resources of a long haul optical transport network throughout its service time.
APA, Harvard, Vancouver, ISO, and other styles
17

Hillier, Caleb Pedro. "A system on chip based error detection and correction implementation for nanosatellites." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2841.

Full text
Abstract:
Thesis (Master of Engineering in Electrical Engineering)--Cape Peninsula University of Technology, 2018.
This thesis will focus on preventing and overcoming the effects of radiation in RAM on board the ZA cube 2 nanosatellite. The main objective is to design, implement and test an effective error detection and correction (EDAC) system for nanosatellite applications using a SoC development board. By conducting an in-depth literature review, all aspects of single-event effects are investigated, from space radiation right up to the implementation of an EDAC system. During this study, Hamming code was identified as a suitable EDAC scheme for mitigating single-event effects. During the course of this thesis, a detailed radiation study of ZA cube 2's space environment is conducted. This provides insight into the environment to which the satellite will be exposed during orbit. It also provides insight which will allow accurate testing should accelerator tests with protons and heavy ions be necessary. In order to understand space radiation, a radiation study using ZA cube 2's orbital parameters was conducted using OMERE and TRIM software. This study included earth's radiation belts, galactic cosmic radiation, solar particle events and shielding. The results confirm that there is a need for mitigation techniques that are capable of EDAC. A detailed look at different EDAC schemes, together with a code comparison study, was conducted. Error control codes fall into two categories, namely error detection codes and error correction codes. For protection against radiation, nanosatellites use error correction codes like Hamming, Hadamard, repetition, four-dimensional parity, Golay, BCH and Reed-Solomon codes. Each EDAC scheme is evaluated and compared using its detection capabilities, correction capabilities, code rate and bit overhead. This study provides the reader with a good understanding of all common EDAC schemes. The field of nanosatellites is constantly evolving and growing at a very fast speed.
This creates a growing demand for more advanced and reliable EDAC systems that are capable of protecting all memory aspects of satellites. Hamming codes are extensively studied and implemented using different approaches, languages and software. After testing three variations of Hamming codes, in both Matlab and VHDL, the final and most effective version was the Hamming [16, 11, 4]_2 code. This code guarantees single-error correction and double-error detection. All developed Hamming codes are suited for FPGA implementation, for which they are tested thoroughly using simulation software and optimised.
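The single-error-correction / double-error-detection (SEC-DED) principle behind the Hamming [16, 11, 4] code chosen in the thesis can be sketched on the smaller extended Hamming (8, 4) code for brevity; this illustrative sketch is not the thesis's VHDL design. Adding one overall parity bit to Hamming (7, 4) raises the minimum distance to 4.

```python
def encode(d):
    """d: four data bits -> eight-bit extended Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # checks positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                 # checks positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                 # checks positions 4,5,6,7
    cw = [p1, p2, d1, p3, d2, d3, d4]
    cw.append(sum(cw) % 2)            # overall parity -> distance 4
    return cw

def decode(r):
    """r: eight received bits -> (data bits, status string)."""
    cw = r[:7]
    s1 = cw[0] ^ cw[2] ^ cw[4] ^ cw[6]
    s2 = cw[1] ^ cw[2] ^ cw[5] ^ cw[6]
    s3 = cw[3] ^ cw[4] ^ cw[5] ^ cw[6]
    syn = s1 + 2 * s2 + 4 * s3        # 1-based position of a single error
    parity_ok = sum(r) % 2 == 0
    if syn == 0 and parity_ok:
        status = "no error"
    elif not parity_ok:               # odd weight -> at most one flip
        status = "single error corrected"
        if syn:
            cw[syn - 1] ^= 1          # syn == 0: the flip was the parity bit
    else:                             # syndrome set but parity intact
        status = "double error detected"
    return [cw[2], cw[4], cw[5], cw[6]], status
```

Flipping one bit of a codeword is corrected; flipping two is flagged but not corrected, matching the guarantee the abstract cites.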
APA, Harvard, Vancouver, ISO, and other styles
18

Radhakrishnan, Rathnakumar. "Detection and Decoding for Magnetic Storage Systems." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/194396.

Full text
Abstract:
The hard-disk storage industry is at a critical time as the current technologies are incapable of achieving densities beyond 500 Gb/in², which will be reached in a few years. Many radically new storage architectures have been proposed, which along with advanced signal processing algorithms are expected to achieve much higher densities. In this dissertation, various signal processing algorithms are developed to improve the performance of current and next-generation magnetic storage systems. Low-density parity-check (LDPC) error correction codes are known to provide excellent performance in magnetic storage systems and are likely to replace or supplement currently used algebraic codes. Two methods are described to improve their performance in such systems. In the first method, the detector is modified to incorporate auxiliary LDPC parity checks. Using graph theoretical algorithms, a method to incorporate the maximum number of such checks for a given complexity is provided. In the second method, a joint detection and decoding algorithm is developed that, unlike all other schemes, operates on the non-binary channel output symbols rather than input bits. Though sub-optimal, it is shown to provide the best known decoding performance for channels with memory more than 1, which are practically the most important. This dissertation also proposes a ternary magnetic recording system from a signal processing perspective. The advantage of this novel scheme is that it is capable of making magnetic transitions with two different but predetermined gradients. By developing optimal signal processing components like receivers, equalizers and detectors for this channel, the equivalence of this system to a two-track/two-head system is determined and its performance is analyzed. Consequently, it is shown that it is preferable to store information using this system rather than using a binary system with inter-track interference.
Finally, this dissertation provides a number of insights into the unique characteristics of heat-assisted magnetic recording (HAMR) and two-dimensional magnetic recording (TDMR) channels. For HAMR channels, the effects of laser spot on transition characteristics and non-linear transition shift are investigated. For TDMR channels, a suitable channel model is developed to investigate the two-dimensional nature of the noise.
APA, Harvard, Vancouver, ISO, and other styles
19

Olivier, James L. "Concurrent error detection in arithmetic processors using [absolute value of] g3N [subscript m] codes /." The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487597424135791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Chih, Samuel C. M. "Error detection capability and coding schemes for fiber optic communication." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/14854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Urbani, Camilla. "Stabilizer Codes for Quantum Error Correction and Synchronization." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
This thesis project aims to deepen the basic concepts of quantum mechanics with particular reference to quantum information theory and quantum error correction codes, fundamental for a correct reception of information. The relations between these codes and classical ones have been investigated, based upon their representation in terms of stabilizers and then developing a possible error detection code. It has also been examined a classical problem in communication systems, namely frame synchronization, discussing it in quantum communication systems.
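The relation the abstract mentions between stabilizers and error detection can be illustrated with a hypothetical toy example (not taken from the thesis): the 3-qubit bit-flip code is stabilized by Z₁Z₂ and Z₂Z₃, and a Pauli error is detected exactly when it anticommutes with a stabilizer, i.e. when the two operators differ on an odd number of non-identity positions.

```python
# Toy stabilizer syndrome extraction for the 3-qubit bit-flip code.
STABILIZERS = ("ZZI", "IZZ")

def anticommute(a, b):
    """Multi-qubit Paulis a, b given as strings over I, X, Y, Z."""
    odd = sum(1 for qa, qb in zip(a, b)
              if qa != "I" and qb != "I" and qa != qb)
    return odd % 2 == 1

def syndrome(error):
    # one classical bit per stabilizer measurement
    return tuple(int(anticommute(error, s)) for s in STABILIZERS)
```

Each single bit-flip yields a distinct nonzero syndrome, which is what makes correction possible, while the identity (no error) gives (0, 0).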
APA, Harvard, Vancouver, ISO, and other styles
22

Dorfman, Vladimir. "Detection and coding techniques for partial response channels /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3094619.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Altekar, Shirish A. "Detection and coding techniques for magnetic recording channels /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1997. http://wwwlib.umi.com/cr/ucsd/fullcit?p9804513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Mein, Gordon F. (Gordon Francis) Carleton University Dissertation Engineering Electrical. "A study of common logic design errors and methods for their detection." Ottawa, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
25

Grymel, Martin-Thomas. "Error control with binary cyclic codes." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/error-control-with-binary-cyclic-codes(a5750b4a-e4d6-49a8-915b-3e015387ad36).html.

Full text
Abstract:
Error-control codes provide a mechanism to increase the reliability of digital data being processed, transmitted, or stored under noisy conditions. Cyclic codes constitute an important class of error-control code, offering powerful error detection and correction capabilities. They can easily be generated and verified in hardware, which makes them particularly well suited to the practical use as error detecting codes. A cyclic code is based on a generator polynomial which determines its properties including the specific error detection strength. The optimal choice of polynomial depends on many factors that may be influenced by the underlying application. It is therefore advantageous to employ programmable cyclic code hardware that allows a flexible choice of polynomial to be applied to different requirements. A novel method is presented in this thesis to realise programmable cyclic code circuits that are fast, energy-efficient and minimise implementation resources. It can be shown that the correction of a single-bit error on the basis of a cyclic code is equivalent to the solution of an instance of the discrete logarithm problem. A new approach is proposed for computing discrete logarithms; this leads to a generic deterministic algorithm for analysed group orders that equal Mersenne numbers with an exponent of a power of two. The algorithm exhibits a worst-case runtime in the order of the square root of the group order and constant space requirements. This thesis establishes new relationships for finite fields that are represented as the polynomial ring over the binary field modulo a primitive polynomial. With a subset of these properties, a novel approach is developed for the solution of the discrete logarithm in the multiplicative groups of these fields.
This leads to a deterministic algorithm for small group orders that has linear space and linearithmic time requirements in the degree of the defining polynomial, enabling an efficient correction of single-bit errors based on the corresponding cyclic codes.
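The two ideas in the abstract can be sketched as follows (a hedged illustration, not the thesis's hardware architecture or its fast discrete-log algorithm): a binary cyclic code detects errors through the remainder modulo its generator polynomial, and a single-bit error at position i produces the syndrome x^i mod g(x), so locating the error amounts to a discrete logarithm. Polynomials are ints, with bit i holding the coefficient of x^i.

```python
G = 0b1011                        # g(x) = x^3 + x + 1, primitive; (7,4) code
R = G.bit_length() - 1            # number of check bits = deg g

def polymod(a, g):
    """Remainder of polynomial a modulo g over GF(2)."""
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def encode(msg):                  # systematic encoding: append the remainder
    return (msg << R) | polymod(msg << R, G)

def correct_single(word, n=7):
    s = polymod(word, G)          # syndrome; zero means no detected error
    if s == 0:
        return word
    for i in range(n):            # discrete log: find i with x^i mod g == s
        if polymod(1 << i, G) == s:
            return word ^ (1 << i)
    return None                   # not a single-bit error pattern
```

The linear scan stands in for the thesis's efficient discrete-logarithm computation; because g(x) is primitive, every single-bit error position in the (7, 4) code maps to a distinct nonzero syndrome.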
APA, Harvard, Vancouver, ISO, and other styles
26

Grabner, Mitchell J. "A Cognitive MIMO OFDM Detector Design for Computationally Efficient Space-Time Decoding." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538696/.

Full text
Abstract:
In this dissertation, a computationally efficient cognitive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) detector is designed to decode perfect space-time coded signals, which are able to maximize the diversity and multiplexing properties of a rich fading MIMO channel. The adaptive nature of the cognitive detector allows a MIMO OFDM communication system to better meet the needs of future wireless communication networks, which require both high reliability and low run-time complexity depending on the propagation environment. The cognitive detector in conjunction with perfect space-time coding is able to achieve up to a 2 dB bit-error rate (BER) improvement at low signal-to-noise ratio (SNR) while also achieving comparable run-time complexity in high SNR scenarios.
APA, Harvard, Vancouver, ISO, and other styles
27

Schiffel, Ute [Verfasser], Christof [Akademischer Betreuer] Fetzer, and Wolfgang [Akademischer Betreuer] Ehrenberger. "Hardware Error Detection Using AN-Codes / Ute Schiffel. Gutachter: Christof Fetzer ; Wolfgang Ehrenberger. Betreuer: Christof Fetzer." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://d-nb.info/1067189289/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Achanta, Raghavendra. "Detection and correction of global positional system carrier phase measurement anomalies." Ohio : Ohio University, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1089745184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Morozov, Alexei. "Optimierung von Fehlererkennungsschaltungen auf der Grundlage von komplementären Ergänzungen für 1-aus-3 und Berger Codes." Phd thesis, Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2005/536/.

Full text
Abstract:
This dissertation presents a new approach to the problem of functional diagnosis of digital systems. A new error detection method is proposed, based on logical complementation and on the use of Berger codes and the 1-out-of-3 code. The new logical-complementation error detection method permits a high degree of optimization of the circuit area required by the constructed error detection circuits. One of the important problems solved in this dissertation is, moreover, the synthesis of totally self-checking circuits.
In this dissertation, concurrent checking by use of a complementary circuit for 1-out-of-n codes and the Berger code is investigated. For an arbitrarily given combinational circuit, necessary and sufficient conditions for the existence of a totally self-checking checker are derived for the first time.
APA, Harvard, Vancouver, ISO, and other styles
30

Tkachenko, Iuliia. "Generation and analysis of graphical codes using textured patterns for printed document authentication." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS148/document.

Full text
Abstract:
Due to the development and availability of printing and scanning devices, the number of forged/counterfeited valuable documents and product packages is increasing. Therefore, different security elements (holograms, inks, papers) have been suggested to prevent these illegal actions. In this thesis, we focus on printed security elements that give access to a high security level with an easy implementation and integration. We present how to generate several novel security elements that aim to protect valuable documents and packaging against unauthorized copying processes. Moreover, these security elements allow us to store a huge amount of hidden information. The main characteristic of these security elements is their sensitivity to the print-and-scan process. This sensitivity stems from the use of specific textured patterns. These patterns, which are binary images, have a structure that changes during the printing, scanning and copying processes. We define new specific criteria that ensure the chosen textured patterns have the appropriate property. The amount of additional information encoded in the patterns increases with the number of patterns used. Additionally, we propose a new weighted mean squared error measure to improve the robustness of module detection for any high density barcodes. Thanks to this measure, the recognition rate of modules used in standard high density barcodes after the print-and-scan process can be significantly increased. Finally, we experimentally study several effects: the physical print-and-scan process, separation of scanner noise from printer noise and changes of colors after the print-and-scan process. We conclude, from these experimental results, that the print-and-scan process cannot be considered as being a Gaussian process. It has also been highlighted that this process is neither white nor ergodic in the wide sense.
APA, Harvard, Vancouver, ISO, and other styles
31

Bahceci, Israfil. "Multiple-Input Multiple-Output Wireless Systems: Coding, Distributed Detection and Antenna Selection." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-08262005-022321/.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2006.
Altunbasak, Yucel, Committee Chair ; Mersereau, Russell M., Committee Member ; Fekri, Faramarz, Committee Member ; Smith, Glenn, Committee Member ; Huo, Xiaoming, Committee Member. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
32

Von, Solms Suné. "Design of effective decoding techniques in network coding networks / Suné von Solms." Thesis, North-West University, 2013. http://hdl.handle.net/10394/9544.

Full text
Abstract:
Random linear network coding is widely proposed as the solution for practical network coding applications due to its robustness to random packet loss, packet delays and changes in network topology and capacity. In order to implement random linear network coding in practical scenarios where the encoding and decoding methods perform efficiently, the computationally complex coding algorithms associated with random linear network coding must be overcome. This research contributes to the field of practical random linear network coding by presenting new, low complexity coding algorithms with low decoding delay. In this thesis we contribute to this research field by building on the current solutions available in the literature through the utilisation of familiar coding schemes combined with methods from other research areas, as well as developing innovative coding methods. We show that by transmitting source symbols in predetermined and constrained patterns from the source node, the causality of the random linear network coding network can be used to create structure at the receiver nodes. This structure enables us to introduce an innovative decoding scheme of low decoding delay. This decoding method also proves to be resilient to the effects of packet loss on the structure of the received packets. Its low decoding delay and resilience to packet erasures make it an attractive option for use in multimedia multicasting. We show that fountain codes can be implemented in RLNC networks without changing the complete coding structure of RLNC networks. By implementing an adapted encoding algorithm at strategic intermediate nodes in the network, the receiver nodes can obtain encoded packets that approximate the degree distribution of encoded packets required for successful belief propagation decoding. Previous work showed that the redundant packets generated by RLNC networks can be used for error detection at the receiver nodes.
This error detection method can be implemented without implementing an outer code; thus, it does not require any additional network resources. We analyse this method and show that this method is only effective for single error detection, not correction. In this thesis the current body of knowledge and technology in practical random linear network coding is extended through the contribution of effective decoding techniques in practical network coding networks. We present both analytical and simulation results to show that the developed techniques can render low complexity coding algorithms with low decoding delay in RLNC networks.
Thesis (PhD (Computer Engineering))--North-West University, Potchefstroom Campus, 2013
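The baseline the thesis builds on can be sketched as follows (a minimal illustration over GF(2); the thesis's own decoding schemes are more elaborate): each received packet is a random XOR combination of the k source packets tagged with its coefficient vector, and the receiver decodes by Gaussian elimination once k linearly independent combinations have arrived.

```python
def rlnc_decode(coded, k):
    """coded: list of (coeffs, payload); coeffs is a length-k 0/1 list,
    payload an int equal to the XOR of the selected source packets."""
    pivots = {}                           # pivot column -> (coeffs, payload)
    for c, p in coded:
        c = c[:]
        for col, (pc, pp) in pivots.items():
            if c[col]:                    # reduce by existing pivot rows
                c = [x ^ y for x, y in zip(c, pc)]
                p ^= pp
        if 1 in c:
            pivots[c.index(1)] = (c, p)   # new pivot row
        # else: linearly dependent packet, discarded
    if len(pivots) < k:
        return None                       # rank deficient: keep listening
    for col in sorted(pivots, reverse=True):   # back-substitution
        c, p = pivots[col]
        for j in range(col + 1, k):
            if c[j]:
                cj, pj = pivots[j]
                c = [x ^ y for x, y in zip(c, cj)]
                p ^= pj
        pivots[col] = (c, p)
    return [pivots[i][1] for i in range(k)]
```

Returning None until full rank is reached is the decoding-delay cost the abstract's structured transmission patterns are designed to reduce.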
APA, Harvard, Vancouver, ISO, and other styles
33

Gaubatz, Gunnar. "Tamper-resistant arithmetic for public-key cryptography." Worcester, Mass. : Worcester Polytechnic Institute, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-030107-115645/.

Full text
Abstract:
Dissertation (Ph.D.)--Worcester Polytechnic Institute.
Keywords: Side Channel Attacks; Fault Attacks; Public-Key Cryptography; Error Detection; Error Detecting Codes. Includes bibliographical references (leaves 127-136).
APA, Harvard, Vancouver, ISO, and other styles
34

Tsai, Meng-Ying (Brady). "Iterative joint detection and decoding of LDPC-Coded V-BLAST systems." Thesis, Kingston, Ont. : [s.n.], 2008. http://hdl.handle.net/1974/1304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ta, Thanh Dinh. "Modèle de protection contre les codes malveillants dans un environnement distribué." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0040/document.

Full text
Abstract:
The thesis consists of two principal parts: the first discusses message format extraction and the second discusses the behavioral obfuscation of malwares and their detection. In the first part, we study the problems of "binary code coverage" and "input message format extraction". For the first problem, we propose a new technique based on "smart" dynamic tainting analysis and reverse execution. For the second, we propose a new method that classifies input message values by the execution traces obtained from running the program on these input values. In the second part, we propose an abstract model for the system call interactions between malwares and the operating system at a host. We show that, in many cases, the behaviors of a malicious program can imitate those of a benign program, and in these cases a behavioral detector cannot distinguish between the two programs.
APA, Harvard, Vancouver, ISO, and other styles
36

Potter, Chris, Kurt Kosbar, and Adam Panagos. "Hardware Discussion of a MIMO Wireless Communication System Using Orthogonal Space Time Block Codes." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606194.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
Although multiple-input multiple-output (MIMO) systems have become increasingly popular, the existence of real time results to compare with those predicted by theory is still surprisingly limited. In this work the hardware description of a MIMO wireless communication system using orthogonal space time block codes (OSTBC) is discussed for two antennas at both the transmitter and receiver. A numerical example for a frequency flat time correlated channel is given to show the impact of channel estimation.
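The textbook Alamouti scheme underlying the paper's 2×2 OSTBC can be sketched as follows (a hedged illustration of the coding principle only; the paper itself concerns the hardware testbed): two symbols are transmitted from two antennas over two time slots, and the receiver recovers both with a simple linear combiner and full transmit diversity.

```python
def alamouti_encode(s1, s2):
    # one row per time slot, one column per transmit antenna
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    """r1, r2: samples received in slots 1 and 2; h1, h2: channel gains
    (assumed constant over the two slots and known to the receiver)."""
    g = abs(h1) ** 2 + abs(h2) ** 2       # combined channel energy
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat
```

In the noiseless case the combiner output equals the transmitted symbols exactly; with noise, the orthogonality of the code keeps the per-symbol detection decoupled.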
APA, Harvard, Vancouver, ISO, and other styles
37

Shaheem, Asri. "Iterative detection for wireless communications." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0223.

Full text
Abstract:
[Truncated abstract] The transmission of digital information over a wireless communication channel gives rise to a number of issues which can detract from the system performance. Propagation effects such as multipath fading and intersymbol interference (ISI) can result in significant performance degradation. Recent developments in the field of iterative detection have led to a number of powerful strategies that can be effective in mitigating the detrimental effects of wireless channels. In this thesis, iterative detection is considered for use in two distinct areas of wireless communications. The first considers the iterative decoding of concatenated block codes over slow flat fading wireless channels, while the second considers the problem of detection for a coded communications system transmitting over highly-dispersive frequency-selective wireless channels. The iterative decoding of concatenated codes over slow flat fading channels with coherent signalling requires knowledge of the fading amplitudes, known as the channel state information (CSI). The CSI is combined with statistical knowledge of the channel to form channel reliability metrics for use in the iterative decoding algorithm. When the CSI is unknown to the receiver, the existing literature suggests the use of simple approximations to the channel reliability metric. However, these works generally consider low rate concatenated codes with strong error correcting capabilities. In some situations, the error correcting capability of the channel code must be traded for other requirements, such as higher spectral efficiency, lower end-to-end latency and lower hardware cost. ... In particular, when the error correcting capabilities of the concatenated code is weak, the conventional metrics are observed to fail, whereas the proposed metrics are shown to perform well regardless of the error correcting capabilities of the code. 
The effects of ISI caused by a frequency-selective wireless channel environment can also be mitigated using iterative detection. When the channel can be viewed as a finite impulse response (FIR) filter, the state-of-the-art iterative receiver is the maximum a posteriori probability (MAP) based turbo equaliser. However, the complexity of this receiver's MAP equaliser increases exponentially with the length of the FIR channel. Consequently, this scheme is restricted for use in systems where the channel length is relatively short. In this thesis, the use of a channel shortening prefilter in conjunction with the MAP-based turbo equaliser is considered in order to allow its use with arbitrarily long channels. The prefilter shortens the effective channel, thereby reducing the number of equaliser states. A consequence of channel shortening is that residual ISI appears at the input to the turbo equaliser and the noise becomes coloured. In order to account for the ensuing performance loss, two simple enhancements to the scheme are proposed. The first is a feedback path which is used to cancel residual ISI, based on decisions from past iterations. The second is the use of a carefully selected value for the variance of the noise assumed by the MAP-based turbo equaliser. Simulations are performed over a number of highly dispersive channels and it is shown that the proposed enhancements result in considerable performance improvements. Moreover, these performance benefits are achieved with very little additional complexity with respect to the unmodified channel shortened turbo equaliser.
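The channel reliability metric discussed in the abstract can be sketched for the simplest case (a hedged illustration, not the thesis's proposed metrics): for BPSK over a coherent flat-fading channel with known CSI and unit symbol energy, the decoder input LLR is L(y) = 4·a·(Es/N0)·y, where a is the fading amplitude; with unknown CSI, a is replaced by the approximations the thesis studies.

```python
import math
import random

def channel_llr(y, a, es_n0):
    """LLR of a BPSK symbol from matched-filter output y, fading amplitude a
    and linear Es/N0 (noise variance N0/2 per real dimension)."""
    return 4.0 * a * es_n0 * y

# toy demo: one faded, noisy BPSK symbol
random.seed(0)
a, es_n0, x = 0.9, 2.0, -1.0
y = a * x + random.gauss(0.0, math.sqrt(1.0 / (2.0 * es_n0)))
llr = channel_llr(y, a, es_n0)    # the sign of llr is the hard decision on x
```

The magnitude of the LLR scales with the fading amplitude, so deep fades correctly contribute low-confidence inputs to the iterative decoder.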
APA, Harvard, Vancouver, ISO, and other styles
38

Hällgren, Karl-Johan. "Waveform agility for robust radar detection and jamming mitigation." Thesis, Uppsala universitet, Fasta tillståndets elektronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-453241.

Full text
Abstract:
In this report metrics for jamming resistance and radar performance of waveform sets are described and developed, and different sets of waveforms are optimized, evaluated and compared. It is shown that without additional processing or PRI jitter, waveform sets can reach jamming resistance a few dB worse than what is provided by PRI jitter alone, and together with PRI jitter a few dB better. Waveforms with better jamming resistance tend to have worse range sidelobes and Doppler tolerance, but show less structure in their spectrograms, suggesting better LPI properties. The Doppler tolerance metric is new, as well as the comparative analysis of waveform sets on multiple metrics including jamming resistance.
Radar is fundamental to modern warfare. With a radar, weapons can be fired from safe distances and targets can be located with precision. A radar jammer aims to prevent a radar from locating its target. Since the radar works by transmitting specifically modulated radio pulses and listening for echoes from the surroundings, the jammer can prevent this either by transmitting very strong noise, or by transmitting radio pulses with the same specific modulation. The latter method is called DRFM jamming, where the abbreviation stands for Digital Radio Frequency Memory, indicating that the jammer can remember the radar's modulation and use it itself. If the radar uses a new modulation (waveform) for every pulse, the jammer cannot use the modulation it remembers from the previous pulse, but must wait for the next pulse to reach it before it can repeat that pulse, which limits its jamming capability. This report assumes that the radar has a limited set of waveforms to switch between, and investigates different such sets, assessing and comparing them on various measures of radar performance and jamming resistance. The radar performance measures include how much gain and how fine a resolution the waveform provides, how well it handles very fast targets, and how large the "sidelobes" around strong targets become. The sidelobe phenomenon is comparable to the optical effect where small but bright objects at night appear to have a halo or bright rays around them. The jamming resistance measures quantify how distinct the waveforms in the radar's set are, and thus how well the radar can distinguish one waveform from the others, together with how small the probability is that the jammer manages to pick exactly the waveform that will be used for the next pulse. The results show that the waveform-switching method can provide almost as much jamming resistance as a well-known method, PRI jitter, provides on its own, and somewhat more in combination with that method.
Better jamming resistance is shown to go hand in hand with worse radar performance measures, but also with less structured spectrograms, which suggests the waveforms may be harder for radar surveillance receivers to detect. The degradation in the performance measures does not necessarily imply an equally large degradation in actual radar performance, since the sidelobes take on a noise-like character, which brings practical advantages compared with the usual fixed sidelobes.
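The jamming-resistance measures above hinge on how distinct the waveforms in a set are from one another. As an illustrative sketch (not the report's exact metric), one simple way to quantify distinctness is the worst-case peak of the normalized pairwise cross-correlation; the function names and the chirp parameters below are assumptions for the example:

```python
import numpy as np

def peak_cross_correlation(w1, w2):
    """Peak magnitude of the normalized cross-correlation between two
    waveforms; lower peaks mean the waveforms are more distinct."""
    w1 = w1 / np.linalg.norm(w1)
    w2 = w2 / np.linalg.norm(w2)
    # np.correlate conjugates its second argument, as needed for complex signals.
    return np.max(np.abs(np.correlate(w1, w2, mode="full")))

def waveform_set_distinctness(waveforms):
    """Worst-case (largest) pairwise cross-correlation peak in the set."""
    peaks = [peak_cross_correlation(waveforms[i], waveforms[j])
             for i in range(len(waveforms))
             for j in range(i + 1, len(waveforms))]
    return max(peaks)

# Two linear-FM chirps with opposite slopes are fairly distinct.
n = 128
t = np.arange(n) / n
up = np.exp(1j * np.pi * 20 * t**2)
down = np.exp(-1j * np.pi * 20 * t**2)
distinctness = waveform_set_distinctness([up, down])
```

A waveform correlated with itself peaks at exactly 1 after normalization, so a set's distinctness value strictly below 1 indicates genuinely different waveforms.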
APA, Harvard, Vancouver, ISO, and other styles
39

Batshon, Hussam George. "Coded Modulation for High Speed Optical Transport Networks." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/194075.

Full text
Abstract:
At a time when almost 1.75 billion people around the world use the Internet on a regular basis, optical communication over optical fibers, used in long-distance and high-demand applications, has to be capable of providing higher communication speed and reliability. In recent years, strong demand has been driving the dense wavelength division multiplexing network upgrade from 10 Gb/s per channel to more spectrally-efficient 40 Gb/s or 100 Gb/s per wavelength channel, and beyond. The 100 Gb/s Ethernet is currently under standardization, and in a couple of years 1 Tb/s Ethernet is going to be standardized as well for different applications, such as local area networks (LANs) and wide area networks (WANs). The major concern with such high data rates is the degradation in signal quality due to linear and non-linear impairments, in particular polarization mode dispersion (PMD) and intrachannel nonlinearities. Moreover, higher-speed transceivers are expensive, so the required rates are preferably achieved using commercially available components operating at lower speeds. In this dissertation, different LDPC-coded modulation techniques are presented to offer higher spectral efficiency and/or power efficiency, in addition to offering aggregate rates that can go up to 1 Tb/s per wavelength. These modulation formats are based on bit-interleaved coded modulation (BICM) and include: (i) three-dimensional LDPC-coded modulation using hybrid direct and coherent detection, (ii) multidimensional LDPC-coded modulation, (iii) subcarrier-multiplexed four-dimensional LDPC-coded modulation, (iv) hybrid subcarrier/amplitude/phase/polarization LDPC-coded modulation, and (v) iterative polar quantization based LDPC-coded modulation.
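The formats above all build on bit-interleaved coded modulation (BICM), in which already-encoded bits pass through an interleaver before being mapped to constellation symbols. A minimal sketch of those last two stages, assuming a QPSK Gray mapping and a random interleaver (illustrative choices, not the dissertation's specific multidimensional formats):

```python
import numpy as np

def bicm_qpsk_transmit(coded_bits, interleaver):
    """BICM transmitter, reduced to its last two stages: interleave the
    (already encoded) bit stream, then Gray-map bit pairs onto QPSK."""
    interleaved = coded_bits[interleaver]
    pairs = interleaved.reshape(-1, 2)
    # Gray mapping: 00 -> (1+1j), 01 -> (1-1j), 10 -> (-1+1j), 11 -> (-1-1j),
    # all scaled by 1/sqrt(2) for unit symbol energy.
    i = 1 - 2 * pairs[:, 0]
    q = 1 - 2 * pairs[:, 1]
    return (i + 1j * q) / np.sqrt(2)

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 16)           # stand-in for LDPC-encoded bits
interleaver = rng.permutation(16)
symbols = bicm_qpsk_transmit(bits, interleaver)
```

At the receiver, per-bit log-likelihood ratios would be computed from the symbols, deinterleaved, and passed to the LDPC decoder, with the interleaver decorrelating bit errors within each symbol.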
APA, Harvard, Vancouver, ISO, and other styles
40

Raorane, Pooja Prakash. "Sampling Based Turbo and Turbo Concatenated Coded Noncoherent Modulation Schemes." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1279071861.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

STEFANI, GIOVANNI L. de. "Sobre a técnica de Rod Drop em medidas de reatividade integral em bancos de controle e segurança de reatores nucleares." reponame:Repositório Institucional do IPEN, 2013. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10210.

Full text
Abstract:
Made available in DSpace on 2014-10-09T12:36:03Z (GMT). No. of bitstreams: 0
Dissertation (Master's)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
42

Gerber, Egardt. "The use of classification methods for gross error detection in process data." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85856.

Full text
Abstract:
Thesis (MScEng)-- Stellenbosch University, 2013.
ENGLISH ABSTRACT: All process measurements contain some element of error. Typically, a distinction is made between random errors, with zero expected value, and gross errors with non-zero magnitude. Data Reconciliation (DR) and Gross Error Detection (GED) comprise a collection of techniques designed to attenuate measurement errors in process data in order to reduce the effect of the errors on subsequent use of the data. DR proceeds by finding the optimum adjustments so that reconciled measurement data satisfy imposed process constraints, such as material and energy balances. The DR solution is optimal under the assumed statistical random error model, typically Gaussian with zero mean and known covariance. The presence of outliers and gross errors in the measurements or imposed process constraints invalidates the assumptions underlying DR, so that the DR solution may become biased. GED is required to detect, identify and remove or otherwise compensate for the gross errors. Typically GED relies on formal hypothesis testing of constraint residuals or measurement adjustment-based statistics derived from the assumed random error statistical model. Classification methodologies are methods by which observations are classified as belonging to one of several possible groups. For the GED problem, artificial neural networks (ANN’s) have been applied historically to resolve the classification of a data set as either containing or not containing a gross error. The hypothesis investigated in this thesis is that classification methodologies, specifically classification trees (CT) and linear or quadratic classification functions (LCF, QCF), may provide an alternative to the classical GED techniques. This hypothesis is tested via the modelling of a simple steady-state process unit with associated simulated process measurements. DR is performed on the simulated process measurements in order to satisfy one linear and two nonlinear material conservation constraints. 
Selected features from the DR procedure and process constraints are incorporated into two separate input vectors for classifier construction. The performance of the classification methodologies developed on each input vector is compared with the classical measurement test in order to address the posed hypothesis. General trends in the results are as follows:
- The power to detect and/or identify a gross error is a strong function of the gross error magnitude as well as location for all the classification methodologies as well as the measurement test.
- For some locations there exist large differences between the power to detect a gross error and the power to identify it correctly. This is consistent over all the classifiers and their associated measurement tests, and indicates significant smearing of gross errors.
- In general, the classification methodologies have higher power for equivalent type I error than the measurement test.
- The measurement test is superior for small magnitude gross errors, and for specific locations, depending on which classification methodology it is compared with.
There is significant scope to extend the work to more complex processes and constraints, including dynamic processes with multiple gross errors in the system. Further investigation into the optimal selection of input vector elements for the classification methodologies is also required.
AFRIKAANSE OPSOMMING: All process measurements contain a certain degree of measurement error. The error component of a process measurement is often expressed as consisting of a random error with zero expected value, as well as a non-random error of significant magnitude. Data Reconciliation (DR) and Gross Error Detection (GED) are a collection of techniques whose goal is to reduce the effect of such errors in process data on the subsequent use of the data. DR is carried out by making the optimal adjustments to the original process measurements so that the adjusted measurements obey certain process models, typically mass and energy balances. The DR solution is optimal provided the statistical assumptions about the random error component in the process data are valid. It is typically assumed that the error component is normally distributed, with zero expected value and a given covariance matrix. When non-random errors are present in the data, the DR results can be biased. GED is therefore needed to find (Detection) and identify (Identification) non-random errors. GED usually relies on the statistical properties of the measurement adjustments made by the DR procedure, or on the residuals of the model equations, to test formal hypotheses about the presence of non-random errors. Classification techniques are used to determine the class membership of observations. For the GED problem, artificial neural networks (ANNs) have historically been applied to solve the Detection and Identification problems. The hypothesis of this thesis is that classification techniques, specifically classification trees (CT) and linear as well as quadratic classification functions (LCF and QCF), can be successfully applied to solve the GED problem. The hypothesis is investigated by means of a simulation of a simple steady-state process unit subject to one linear and two non-linear equations.
Synthetic process measurements are created using random numbers so that the error component of each process measurement is known. DR is applied to the synthetic data, and the DR results are used to create two different input vectors for the classification techniques. The performance of the classification methods is compared with the measurement test of classical GED in order to answer the stated hypothesis. The underlying trends in the results are as follows:
- The ability to detect and identify a non-random error is strongly dependent on the magnitude as well as the location of the error, for all the classification techniques as well as the measurement test.
- For certain locations of the non-random error there is a large difference between the ability to detect the error and the ability to identify it, which indicates smearing of the error. All the classification techniques as well as the measurement test exhibit this property.
- In general, the classification methods show greater success than the measurement test.
- The measurement test is more successful for relatively small non-random errors, as well as for certain locations of the non-random error, depending on the classification technique in question.
There are several ways to extend the scope of this investigation. More complex, non-steady-state processes with strongly non-linear process models and multiple non-random errors can be investigated. The possibility also exists to improve the performance of the classification methods through an appropriate choice of input vector elements.
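The classical measurement test that the classifiers are benchmarked against can be sketched for linear constraints: reconcile the measurements against the balance equations, then standardize the adjustments and flag any that exceed a critical value. The splitter example and the 1.96 critical value below are illustrative assumptions, not the thesis's simulated unit:

```python
import numpy as np

def measurement_test(y, A, Sigma, z_crit=1.96):
    """Classical measurement test for gross error detection.

    y     : measurement vector
    A     : linear constraint matrix (A @ x_true = 0, e.g. mass balances)
    Sigma : covariance of the random measurement errors

    Returns reconciled measurements, standardized adjustments, and a
    boolean flag per measurement (True = suspected gross error).
    """
    S = A @ Sigma @ A.T
    K = Sigma @ A.T @ np.linalg.solve(S, A)   # gain mapping y -> adjustments
    a = K @ y                                  # measurement adjustments
    x_hat = y - a                              # reconciled values satisfy A @ x_hat = 0
    V = K @ Sigma                              # Cov(a) = Sigma A' S^-1 A Sigma
    z = a / np.sqrt(np.diag(V))                # standardized adjustments
    return x_hat, z, np.abs(z) > z_crit

# Simple splitter: stream 1 splits into streams 2 and 3 (x1 - x2 - x3 = 0).
A = np.array([[1.0, -1.0, -1.0]])
Sigma = np.diag([0.1, 0.1, 0.1])
y = np.array([10.0, 6.0, 7.0])   # gross error of +3 on stream 3
x_hat, z, flagged = measurement_test(y, A, Sigma)
# All three streams are flagged: with a single balance equation the
# adjustment spreads equally, illustrating the "smearing" noted above.
```

With only one constraint the test cannot localize the faulty stream, which is exactly the identification-versus-detection gap the abstract describes.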
APA, Harvard, Vancouver, ISO, and other styles
43

Ellinger, John David. "Multi-Carrier Radar for Target Detection and Communications." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1463839176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Neal, David A. "Utilizing Correct Prior Probability Calculation to Improve Performance of Low-Density Parity-Check Codes in the Presence of Burst Noise." DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1402.

Full text
Abstract:
Low-density parity-check (LDPC) codes provide excellent error correction performance and can approach the channel capacity, but their performance degrades significantly in the presence of burst noise. Bursts of errors occur in many common channels, including the magnetic recording and wireless communications channels. Strategies such as interleaving have been developed to help compensate for burst errors. These techniques do not exploit the differences that can exist between the noise variance on observations inside and outside the bursts. These differences can be exploited in the calculation of prior probabilities to improve the accuracy of the soft information that is sent to the LDPC decoder. Effects of using different noise variances in the calculation of prior probabilities are investigated. Using the true variance of each observation improves performance. A novel burst detector utilizing the forward/backward algorithm is developed to determine the state of each observation, allowing the correct variance to be selected for each. Comparisons between this approach and existing techniques demonstrate improved performance. The approach is generalized and potential future research is discussed.
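A burst detector of the kind described, based on the forward/backward algorithm over a two-state (nominal/burst) Markov chain with Gaussian observations, might look as follows. The transition probability and the two variances are illustrative assumptions, not the thesis's parameters:

```python
import numpy as np

def forward_backward_burst(obs, var_good, var_burst, p_stay=0.95):
    """Posterior burst probabilities for zero-mean Gaussian noise samples,
    via the forward/backward algorithm on a two-state Markov chain
    (state 0 = nominal variance, state 1 = burst variance)."""
    T = len(obs)
    variances = np.array([var_good, var_burst])
    # Gaussian likelihood of each observation under each state's variance.
    lik = np.exp(-obs[:, None] ** 2 / (2 * variances)) / np.sqrt(2 * np.pi * variances)
    P = np.array([[p_stay, 1 - p_stay],
                  [1 - p_stay, p_stay]])        # state transition matrix
    alpha = np.zeros((T, 2))
    beta = np.zeros((T, 2))
    alpha[0] = 0.5 * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                       # forward pass (normalized)
        alpha[t] = lik[t] * (alpha[t - 1] @ P)
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward pass (normalized)
        beta[t] = P @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post[:, 1] / post.sum(axis=1)        # P(burst | all observations)

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.1, 300)
noise[100:150] = rng.normal(0.0, 1.0, 50)       # injected burst
p_burst = forward_backward_burst(noise, 0.1**2, 1.0**2)
```

Each observation's MAP state then selects the variance used in the LDPC prior-probability (LLR) calculation.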
APA, Harvard, Vancouver, ISO, and other styles
45

Uriarte, Toboso Alain. "Optimum Ordering for Coded V-BLAST." Thesis, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23509.

Full text
Abstract:
The optimum ordering strategies for the coded V-BLAST system with capacity-achieving temporal codes on each stream are studied in this thesis. Mathematical representations of the optimum detection ordering strategies for the coded V-BLAST under instantaneous rate allocation (IRA), uniform power/rate allocation (URA), instantaneous power allocation (IPA) and instantaneous power/rate allocation (IPRA) are derived. For two transmit antennas, it is shown that the optimum detection strategies are based on the per-stream before-processing channel gains. Based on approximations of the per-stream capacity equation, closed-form expressions of the optimal ordering strategy under the IRA at low and high signal-to-noise ratio (SNR) are derived. Necessary optimality conditions under the IRA are given. Thresholds for the low, intermediate and high SNR regimes in the 2-Tx-antenna system under the IPRA are determined, and the SNR gain of the ordering is studied for each regime. Simple suboptimal ordering strategies are also analysed, some of which perform very close to the optimal one.
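As a rough sketch of the gain-based rule, the per-stream before-processing gains are the squared column norms of the channel matrix, and the streams can be ordered by them. Detecting the strongest stream first is one common convention used here for illustration; as the thesis shows, the optimal direction can depend on the SNR regime and the allocation scheme:

```python
import numpy as np

def before_processing_gains(H):
    """Per-stream before-processing channel gains: squared column norms of H."""
    return np.sum(np.abs(H) ** 2, axis=0)

def detection_order(H):
    """Detect streams in decreasing order of before-processing gain
    (strongest stream first; an illustrative convention, since the
    optimal direction depends on the regime)."""
    return np.argsort(before_processing_gains(H))[::-1]

# A 2x2 complex channel: stream 1 (second column) has the larger gain.
H = np.array([[1.0 + 0.5j, 0.2 - 0.1j],
              [0.3 + 0.2j, 1.2 + 0.4j]])
order = detection_order(H)
```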
APA, Harvard, Vancouver, ISO, and other styles
46

Parreau, Aline. "Problèmes d'identification dans les graphes." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00745054.

Full text
Abstract:
In this thesis, we study vertex identification problems in graphs. Identifying the vertices of a graph consists of assigning to each vertex an object that makes it unique with respect to the others. We are particularly interested in identifying codes: dominating subsets of the vertices of a graph such that the closed neighbourhood of each vertex of the graph has a unique intersection with the subset. The vertices of the identifying code can be regarded as sensors, and each vertex of the graph as a possible location of a fault. We first characterize the set of graphs for which all vertices but one are needed in every identifying code. Since the problem of finding an optimal identifying code, that is, one of minimum size, is NP-hard, we study it on four restricted graph classes. Depending on the case, we solve the problem completely (for Sierpinski graphs), improve the general bounds (for interval graphs, line graphs, the king grid) or show that the problem remains hard even when restricted (for line graphs). We then consider variations of identifying codes that allow more flexibility for the sensors. For example, we study sensors in the plane that can detect faults within a known radius, with a tolerated error. We give constructions of such codes and bound their size for fixed or asymptotic values of the radius and the error. Finally, we introduce the notion of identifying colouring of a graph, which makes it possible to identify the vertices of a graph by the colours present in their neighbourhood. We compare this colouring with proper graph colouring and give bounds on the number of colours needed to identify a graph, for several graph classes.
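The identifying-code condition translates directly into a check: every vertex's closed neighbourhood must intersect the code in a non-empty set that is unique among all vertices. A brute-force search for a minimum identifying code (feasible only on small graphs, unlike the structured classes studied in the thesis) might look like this:

```python
from itertools import combinations

def closed_neighborhood(adj, v):
    """Closed neighborhood N[v]: v together with its neighbors."""
    return frozenset(adj[v]) | {v}

def is_identifying_code(adj, code):
    """Check whether `code` is an identifying code of the graph `adj`
    (dict: vertex -> set of neighbors): every vertex must have a
    non-empty intersection of N[v] with the code, unique to that vertex."""
    signatures = {}
    for v in adj:
        sig = closed_neighborhood(adj, v) & code
        if not sig or sig in signatures.values():
            return False
        signatures[v] = sig
    return True

def minimum_identifying_code(adj):
    """Brute-force smallest identifying code (exponential; small graphs only)."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if is_identifying_code(adj, frozenset(subset)):
                return set(subset)
    return None  # graphs with "twin" vertices admit no identifying code

# Path on four vertices 0-1-2-3: the minimum identifying code has size 3.
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
code = minimum_identifying_code(path4)
```

Each vertex's signature (its neighborhood's intersection with the code) pinpoints it uniquely, which is what lets the "sensor" vertices localize a fault anywhere in the graph.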
APA, Harvard, Vancouver, ISO, and other styles
47

Kašpar, Jakub. "Vyhodnocení podobnosti programových kódů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241092.

Full text
Abstract:
The main goal of this work is to get acquainted with the plagiarism problem and to propose methods for detecting plagiarism in program codes. In the first part of this paper, different types of plagiarism and some methods of detection are introduced. In the next part, the preprocessing and attribute detection are described. Then a new detection method using adaptive weights is proposed. The last part summarizes the results of testing the detector on a database of student projects.
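A hedged sketch of one common attribute-based approach to code similarity (token k-grams with identifier normalization and Jaccard similarity; an illustration, not necessarily the thesis's exact method):

```python
KEYWORDS = {"for", "while", "int", "if", "else", "return"}

def tokenize(code_text):
    """Split source into tokens and map identifiers to a placeholder, so
    that simply renaming variables does not change the fingerprint."""
    cleaned = "".join(c if c.isalnum() or c == "_" else " "
                      for c in code_text.lower())
    return [t if t in KEYWORDS or t.isdigit() else "id"
            for t in cleaned.split()]

def kgram_set(tokens, k=3):
    """Set of overlapping token k-grams (the code's fingerprint)."""
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def similarity(src_a, src_b, k=3):
    """Jaccard similarity of the two fingerprints, in [0, 1]."""
    a, b = kgram_set(tokenize(src_a), k), kgram_set(tokenize(src_b), k)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

a = "for (int i = 0; i < n; i++) { sum += data[i]; }"
b = "for (int j = 0; j < n; j++) { total += data[j]; }"   # renamed copy of a
c = "while (queue.pop(node)) visit(node);"                 # unrelated code
```

Here `similarity(a, b)` is 1.0 despite the renamed variables, while the unrelated snippet scores low; a real detector would add language-aware lexing and weighting of discriminative attributes.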
APA, Harvard, Vancouver, ISO, and other styles
48

Benaddi, Tarik. "Sparse graph-based coding schemes for continuous phase modulations." Phd thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/16037/1/Benaddi_Tarik.pdf.

Full text
Abstract:
The use of continuous phase modulation (CPM) is attractive when the channel exhibits a strong non-linearity and when the spectral support is limited; in particular for the uplink, where the satellite has one amplifier per carrier, and for downlinks, where the terminal equipment operates very close to the saturation region. Numerous studies have been conducted on this issue, but the proposed solutions use iterative CPM demodulation/decoding concatenated with convolutional or block error-correcting codes. The use of LDPC codes has not yet been introduced. In particular, to our knowledge, no work has been done on the optimization of sparse graph-based codes adapted to the context described here. In this study, we propose to perform the asymptotic analysis and the design of turbo-CPM systems based on the optimization of sparse graph-based codes. Moreover, an analysis of the corresponding receiver is carried out.
APA, Harvard, Vancouver, ISO, and other styles
49

Hua, Nan. "Space-efficient data sketching algorithms for network applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44899.

Full text
Abstract:
Sketching techniques are widely adopted in network applications. Sketching algorithms "encode" data into succinct data structures that can later be accessed and "decoded" for various purposes, such as network measurement, accounting, and anomaly detection. Bloom filters and counter braids are two well-known representatives of this category. These sketching algorithms usually need to strike a tradeoff between performance (how much information can be revealed, and how fast) and cost (storage, transmission and computation). This dissertation is dedicated to the research and development of several sketching techniques, including improved forms of stateful Bloom filters, statistical counter arrays and error estimating codes. A Bloom filter is a space-efficient randomized data structure for approximately representing a set in order to support membership queries. The Bloom filter and its variants have found widespread use in many networking applications, where it is important to minimize the cost of storing and communicating network data. In this thesis, we propose a family of Bloom filter variants augmented by a rank-indexing method. We show that such augmentation can bring a significant reduction in space as well as in the number of memory accesses, especially when deletions of set elements from the Bloom filter need to be supported. The exact active counter array is another important building block in many sketching algorithms, where the storage cost of the array is of paramount concern. Previous approaches reduce the storage costs while either losing accuracy or supporting only passive measurements. In this thesis, we propose an exact statistics counter array architecture that can support active measurements (real-time read and write). It also leverages the aforementioned rank-indexing method and exploits statistical multiplexing to minimize the storage costs of the counter array.
Error estimating coding (EEC) has recently been established as an important tool for estimating bit error rates in the transmission of packets over wireless links. In essence, the EEC problem is also a sketching problem, since the EEC code can be viewed as a sketch of the packet sent, which is decoded by the receiver to estimate the bit error rate. In this thesis, we first investigate the asymptotic bound of error estimating coding by viewing the problem from a two-party computation perspective, and then investigate its coding/decoding efficiency using Fisher information analysis. Further, we develop several sketching techniques, including the enhanced tug-of-war (EToW) sketch and the generalized EEC (gEEC) sketch family, which can achieve around a 70% reduction in sketch size with similar estimation accuracy. For all the solutions proposed above, we use theoretical tools such as information theory and communication complexity to investigate how far our proposed solutions are from the theoretical optimum. We show that the proposed techniques are asymptotically or empirically very close to the theoretical bounds.
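For reference, the basic Bloom filter that the proposed rank-indexed variants build on can be sketched in a few lines; the hash construction below (salted SHA-256) is an illustrative choice:

```python
import hashlib

class BloomFilter:
    """Textbook Bloom filter: k hash positions in an m-bit array.
    Membership queries may yield false positives, never false negatives."""

    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)   # one byte per bit, for clarity

    def _indexes(self, item):
        # k independent positions derived from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
for flow in ["10.0.0.1:443", "10.0.0.2:80"]:   # e.g. observed network flows
    bf.add(flow)
```

Plain bit arrays like this cannot support deletions without risking false negatives, which is where counting and rank-indexed variants such as those in the dissertation come in.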
APA, Harvard, Vancouver, ISO, and other styles
50

Fu, Shengli. "Space-time coding and decoding for MIMO wireless communication systems." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 0.57Mb, 156 p, 2005. http://wwwlib.umi.com/dissertations/fullcit?3182631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography