Dissertations / Theses on the topic 'Decoding performance'

To see the other types of publications on this topic, follow the link: Decoding performance.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Decoding performance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Lee, L. H. C. "Convolutional code design and performance." Thesis, University of Manchester, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Xin. "Collaborative decoding and its performance analysis." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0013385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Granke, Daniel. "Decoding Acting Vocabulary." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3110.

Full text
Abstract:
This paper compares seemingly similar words from a variety of acting teachers, and shows how it is impossible to draw clear comparisons between words that are often used as synonyms. The paper is a reflection of the journey from believing in translation to recognizing its impossibility. In Chapter 1 we focus on one of the most common elements in actor training, attention/focus/concentration, and analyze the shades of meaning in those words and the difficulty of talking about them in isolation. In Chapter 2 we look at the way in which semiotic analysis can explain the words' resistance to equivalence. In Chapter 3 we look at one of the central terms in most collegiate actor training, the objective, and see how it reveals the problems inherent in translation. In Chapter 4 we look at how this knowledge can influence the classroom in a positive way.
APA, Harvard, Vancouver, ISO, and other styles
4

Krishnamurthi, Sumitha. "Performance of Recursive Maximum Likelihood Turbo Decoding." Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1070481352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

CONDO, CARLO. "VLSI decoding architectures: flexibility, robustness and performance." Doctoral thesis, Politecnico di Torino, 2014. http://hdl.handle.net/11583/2544356.

Full text
Abstract:
Stemming from previous studies on flexible LDPC decoders, this thesis work has been mainly focused on the development of flexible turbo and LDPC decoder designs, and on the narrowing of the power, area and speed gap they might present with respect to dedicated solutions. Additional studies have been carried out within the field of increased code performance and of decoder resiliency to hardware errors. The first chapter groups together several main contributions in the design and implementation of flexible channel decoders. The first part concerns the design of a Network-on-Chip (NoC) serving as an interconnection network for a partially parallel LDPC decoder. A best-fit NoC architecture is devised, and a complete multi-standard turbo/LDPC decoder is designed and implemented. Every time the code is changed, the decoder must be reconfigured. A number of variables influence the duration of the reconfiguration process, starting from the involved codes down to decoder design choices. These are taken into account in the flexible decoder design, and novel traffic reduction and optimization methods are then implemented. In the second chapter a study on the early stopping of iterations for LDPC decoders is presented. The energy expenditure of any LDPC decoder is directly linked to the iterative nature of the decoding algorithm. We propose an innovative multi-standard early stopping criterion for LDPC decoders that observes the evolution of simple metrics and relies on on-the-fly threshold computation. Its effectiveness is evaluated against existing techniques both in terms of saved iterations and, after implementation, in terms of actual energy saving. The third chapter portrays a study on the resilience of LDPC decoders under the effect of memory errors. Given that the purpose of channel decoders is to correct errors, LDPC decoders are intrinsically characterized by a certain degree of resistance to hardware faults. This characteristic, together with the soft nature of the stored values, results in LDPC decoders being affected differently according to the meaning of the wrong bits: ad-hoc error protection techniques, like the Unequal Error Protection devised in this chapter, can consequently be applied to different bits according to their significance. In the fourth chapter the serial concatenation of LDPC and turbo codes is presented. The concatenated FEC targets very high error correction capabilities, joining the performance of turbo codes at low SNR with that of LDPC codes at high SNR, and outperforming both current deep-space FEC schemes and concatenation-based FECs. A unified decoder for the concatenated scheme is subsequently proposed.
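The early-stopping idea in the second chapter lends itself to a small illustration. The sketch below (Python, my own illustrative code rather than the criterion proposed in the thesis) uses the simplest common stopping rule: halt the iterative decoder as soon as the hard decisions on the current LLRs satisfy every parity check.

```python
import numpy as np

def hard_decisions(llr):
    """Map posterior LLRs to bits (a negative LLR means bit 1 is more likely)."""
    return (np.asarray(llr) < 0).astype(int)

def can_stop(H, bits):
    """True when the all-zero syndrome shows every parity check is satisfied."""
    return not (H.dot(bits) % 2).any()

# Tiny demo with the (7,4) Hamming code: the LLRs already point to a valid codeword,
# so an iterative decoder could terminate here instead of running to its iteration limit.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([-2.1, 1.7, 3.0, 0.9, -3.2, -2.4, 1.1])   # stand-in decoder output
print(can_stop(H, hard_decisions(llr)))                   # True -> stop early
```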
APA, Harvard, Vancouver, ISO, and other styles
6

Han, Junsheng. "Code representation and performance of graph-Based decoding." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3297580.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed April 28, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
7

Abedi, Ali. "Invariance Properties and Performance Evaluation of Bit Decoding Algorithms." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/768.

Full text
Abstract:
Certain properties of optimal bitwise APP (A Posteriori Probability) decoding of binary linear block codes are studied. The focus is on the Probability Density Function (pdf) of the bit Log-Likelihood-Ratio (LLR). A general channel model with discrete (not necessarily binary) input and discrete or continuous output is considered. It is proved that under a set of mild conditions on the channel, the pdf of the bit LLR of a specific bit position is independent of the transmitted codeword. It is also shown that the pdf of a given bit LLR, when the corresponding bit takes the values of zero and one, are symmetric with respect to each other (reflection of one another with respect to the vertical axis). In the case of channels with binary inputs, a sufficient condition for two bit positions to have the same pdf is presented. An analytical method for approximate performance evaluation of binary linear block codes using an Additive White Gaussian Noise (AWGN) channel model with Binary Phase Shift Keying (BPSK) modulation is proposed. The pdf of the bit LLR is expressed in terms of the Gram-Charlier series expansion. This expansion requires knowledge of the statistical moments of the bit LLR. An analytical method for calculating these moments which is based on some recursive calculations involving certain weight enumerating functions of the code is introduced. It is proved that the approximation can be as accurate as desired, using a sufficiently large number of terms in the Gram-Charlier series expansion. A new method for the performance evaluation of Turbo-Like Codes is presented. The method is based on estimating the pdf of the bit LLR by using an exponential model. The moment matching method is combined with the maximum entropy principle to estimate the parameters of the new model. A simple method is developed for computing the Probabilities of the Point Estimates (PPE) for the estimated parameters, as well as for the Bit Error Rate (BER). It is demonstrated that this method requires significantly fewer samples than the conventional Monte-Carlo (MC) simulation.
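As a rough companion to the Gram-Charlier approach described above, the following sketch (an illustration under my own assumptions, not the thesis's recursive moment computation from weight enumerators) fits a Gram-Charlier A series to empirical LLR samples and integrates the negative tail to approximate a bit error probability.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def gram_charlier_pdf(samples, num_terms=6):
    """Approximate a pdf from samples with a Gram-Charlier A series.

    For the standardized variable z = (x - mu) / sigma the series is
    phi(z) * sum_n c_n He_n(z), with c_n = E[He_n(z)] / n! (probabilists' Hermite).
    """
    x = np.asarray(samples, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    coeffs = []
    for n in range(num_terms):
        basis = np.zeros(n + 1)
        basis[n] = 1.0
        coeffs.append(np.mean(hermeval(z, basis)) / factorial(n))

    def pdf(t):
        zt = (np.asarray(t, dtype=float) - mu) / sigma
        phi = np.exp(-zt ** 2 / 2) / np.sqrt(2 * np.pi)
        return phi * hermeval(zt, np.array(coeffs)) / sigma

    return pdf

# Example: fit the series to simulated bit-LLR samples and estimate P(LLR < 0),
# i.e. the probability of a wrong bit decision when bit 0 was sent.
rng = np.random.default_rng(0)
llr_samples = rng.normal(loc=4.0, scale=np.sqrt(8.0), size=100_000)
pdf = gram_charlier_pdf(llr_samples)
grid = np.linspace(-20.0, 0.0, 4001)
print("approximate error probability:", float(np.sum(pdf(grid)) * (grid[1] - grid[0])))
```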
APA, Harvard, Vancouver, ISO, and other styles
8

Leong, Chi Wa. "Reduced complexity decoding and relative performance of turbo codes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/mq23379.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sankaranarayanan, Sundararajan. "Iterative Decoding of Codes on Graphs." Diss., The University of Arizona, 2006. http://hdl.handle.net/10150/194618.

Full text
Abstract:
The growing popularity of a class of linear block codes called the low-density parity-check (LDPC) codes can be attributed to the low complexity of the iterative decoders, and their potential to achieve performance very close to the Shannon capacity. This makes them an attractive candidate for ECC applications in communication systems. This report proposes methods to systematically construct regular and irregular LDPC codes. A class of regular LDPC codes is constructed from incidence structures in finite geometries like projective geometry and affine geometry. A class of irregular LDPC codes is constructed by systematically splitting blocks of balanced incomplete block designs to achieve desired weight distributions. These codes are decoded iteratively using message-passing algorithms, and the performance of these codes for various channels is presented in this report. The application of iterative decoders is generally limited to a class of codes whose graph representations are free of small cycles. Unfortunately, the large class of conventional algebraic codes, like RS codes, has several four-cycles in their graph representations. This report proposes an algorithm that aims to alleviate this drawback by constructing an equivalent graph representation that is free of four-cycles. It is theoretically shown that the four-cycle-free representation is better suited to iterative erasure decoding than the conventional representation. Also, the new representation is exploited to realize, with limited success, iterative decoding of Reed-Solomon codes over the additive white Gaussian noise channel. Wiberg, Forney, Richardson, Koetter, and Vontobel have made significant contributions in developing theoretical frameworks that facilitate finite-length analysis of codes. With the exception of Richardson's, most of the other frameworks are mainly suited to the analysis of short codes. In this report, we further the understanding of the failures in iterative decoders for the binary symmetric channel. The failures of the decoder are classified into two categories by defining trapping sets and propagating sets. Such a classification leads to a successful estimation of the performance of codes under the Gallager B decoder. In particular, the estimation techniques show great promise in the high signal-to-noise ratio regime where the simulation techniques are less feasible.
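The Gallager B decoder mentioned at the end operates on hard decisions from the binary symmetric channel. The sketch below uses the even simpler bit-flipping relative of Gallager A/B (my own toy code, not from the dissertation) to show the hard-decision, syndrome-driven flavour of this family of decoders.

```python
import numpy as np

def bit_flip_decode(H, received, max_iters=20):
    """Toy hard-decision bit-flipping decoder (a simpler relative of Gallager A/B)."""
    bits = received.copy()
    for _ in range(max_iters):
        syndrome = H.dot(bits) % 2
        if not syndrome.any():
            return bits, True                       # every parity check satisfied
        # count, for each bit, how many unsatisfied checks it participates in
        blame = (syndrome[:, None] * H).sum(axis=0)
        bits = (bits + (blame == blame.max()).astype(int)) % 2   # flip the most-blamed bits
    return bits, False

# Demo: (7,4) Hamming code, a single channel error is corrected.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.array([1, 0, 0, 0, 1, 1, 0])
noisy = codeword.copy()
noisy[2] ^= 1
print(bit_flip_decode(H, noisy))    # recovers the transmitted codeword
```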
APA, Harvard, Vancouver, ISO, and other styles
10

Yang, Tao, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Performance of iterative detection and decoding for MIMO-BICM systems." Awarded by: University of New South Wales, School of Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/26197.

Full text
Abstract:
Multiple-input multiple-output (MIMO) wireless technology is an emerging cost-effective approach to offer multiple-fold capacity improvement relative to the conventional single-antenna systems. To achieve the capacities of MIMO channels, MIMO bit-interleaved-coded-modulation (BICM) systems with iterative detection and decoding (IDD) are studied in this thesis. The research for this dissertation is conducted based on the iterative receivers with convolutional codes and turbo codes. A variety of MIMO detectors, such as a maximum a posteriori probability (MAP) detector, a list sphere detector (LSD) and a parallel interference canceller (PIC) together with a decision statistic combiner (DSC), are studied. The performance of these iterative receivers is investigated via bounding techniques or Monte Carlo simulations. Moreover, the computational complexities of the components are quantified and compared. The convergence behaviors of the iterative receivers are analyzed via variance transfer (VTR) functions and variance exchange graphs (VEGs). The analysis of convergence behavior facilitates the finding of components with good matching. For a fast fading channel, we show that the "waterfall region" of an iterative receiver can be predicted by VEG. For a slow fading channel, it is shown that the performance of an iterative receiver is essentially limited by the early interception ratio (ECR) which is obtained via simulations. After the transfer properties of the detectors are unveiled, a detection switching (DSW) methodology is proposed and the switching criterion based on cross entropy (CE) is derived. By employing DSW, the performance of an iterative receiver with a list sphere detector (LSD) of a small list size is considerably improved. It is shown that the iterative receiver achieves a performance very close to that with a maximum a posteriori probability (MAP) detector but with a significantly reduced complexity. For an iterative receiver with more than two components, various iteration schedules are explored. The schedules are applied in an iterative receiver with PIC-DSC. It is shown that the iterative receiver with a periodic scheduling outperforms that with the conventional scheduling at the same level of complexity.
APA, Harvard, Vancouver, ISO, and other styles
11

Han, Peipei. "Decoding Steady-State Visual Evoked Potentials (SSVEPs) - Implementation and Performance Analysis." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-432289.

Full text
Abstract:
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have been widely investigated. Algorithms from the canonical correlation analysis (CCA) family perform extremely well in detecting stimulus targets by analyzing the relationship of frequency features between electroencephalogram (EEG) signals and stimulus targets. In addition to CCA algorithms, convolutional neural networks (CNNs) also improve the performance of SSVEP-based BCIs by generalizing well on the frequency features of the EEG signals. To find a new method for speeding up an online SSVEP decoding system, we have evaluated three CCA methods, namely standard CCA, individual-template CCA (IT-CCA) and extended CCA, together with the complex spectrum CNN (C-CNN). The results show that algorithms requiring individual subject training clearly outperform standard CCA.
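For readers unfamiliar with the standard CCA detector compared above, here is a minimal Python sketch (illustrative only; the sampling rate, harmonics and synthetic data are my assumptions, and this is not the code evaluated in the thesis). Each candidate stimulus frequency gets a sine/cosine reference set, and the frequency whose references correlate most strongly with the EEG segment is selected.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    """Largest canonical correlation between an EEG segment and sin/cos references.

    eeg: array of shape (n_samples, n_channels); freq: candidate stimulus frequency (Hz).
    """
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                            for h in range(n_harmonics)
                            for f in (np.sin, np.cos)])
    cca = CCA(n_components=1)
    x_c, y_c = cca.fit_transform(eeg, refs)
    return abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])

# Demo: a noisy synthetic 10 Hz "SSVEP" is detected among three candidate frequencies.
fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t + p) for p in (0.0, 0.4, 0.9)])
eeg += 0.8 * rng.standard_normal(eeg.shape)
scores = {f: cca_score(eeg, f, fs) for f in (8.0, 10.0, 12.0)}
print(max(scores, key=scores.get), scores)
```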
APA, Harvard, Vancouver, ISO, and other styles
12

Manso, Rogerio C. "Performance analysis of M-QAM with Viterbi soft-decision decoding." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FManso.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Duggan, Andrew. "Performance analysis of algebraic soft-decision decoding of Reed-Solomon codes." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3691.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
14

Rankl, Tobias. "Performance and bounds of optical receivers with electronic detection and decoding." Aachen Shaker, 2010. http://d-nb.info/1001789091/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Roitzsch, Michael. "Slice-Level Trading of Quality and Performance in Decoding H.264 Video." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-26472.

Full text
Abstract:
When a demanding video decoding task requires more CPU resources than are available, playback degrades ungracefully today: the decoder skips frames selected arbitrarily or by simple heuristics, which is noticed by the viewer as jerky motion in the good case or as images completely breaking up in the bad case. The latter can happen due to missing reference frames. This thesis provides a way to schedule individual decoding tasks based on a cost-for-performance trade. Therefore, I will present a way to preprocess a video, generating estimates for the cost in terms of execution time and the performance in terms of perceived visual quality. The granularity of the scheduling decision is a single slice, which leads to a much more fine-grained approach than dealing with entire frames. Together with an actual scheduler implementation that uses the generated estimates, this work allows for higher perceived-quality video playback in case of CPU overload.
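A toy sketch of the kind of cost/benefit slice selection described here (purely illustrative: the greedy rule, field names and numbers below are my assumptions, not the scheduler developed in the thesis):

```python
def schedule_slices(slices, time_budget_ms):
    """Greedy cost/benefit selection of slices to decode under a CPU-time budget.

    slices: list of dicts with estimated 'cost_ms' (decode time) and 'benefit'
    (estimated contribution to perceived quality). Returns the chosen subset.
    """
    chosen, spent = [], 0.0
    # favour slices that buy the most quality per millisecond of CPU time
    for s in sorted(slices, key=lambda s: s["benefit"] / s["cost_ms"], reverse=True):
        if spent + s["cost_ms"] <= time_budget_ms:
            chosen.append(s)
            spent += s["cost_ms"]
    return chosen

frame_slices = [
    {"id": 0, "cost_ms": 4.0, "benefit": 9.0},   # e.g. a slice many later frames reference
    {"id": 1, "cost_ms": 3.0, "benefit": 2.5},
    {"id": 2, "cost_ms": 5.0, "benefit": 3.0},
]
print([s["id"] for s in schedule_slices(frame_slices, time_budget_ms=8.0)])
```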
APA, Harvard, Vancouver, ISO, and other styles
16

Castaño-Candamil, Sebastián [Verfasser], and Michael W. [Akademischer Betreuer] Tangermann. "Machine learning methods for motor performance decoding in adaptive deep brain stimulation." Freiburg : Universität, 2020. http://d-nb.info/1224808762/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Donato, Ridgley Israel Louis. "Decoding team performance in a self-organizing collaboration network using community structure." Thesis, Massachusetts Institute of Technology, 2018.

Find full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 55-57).
When assembling a team, it is imperative to assess the ability of the team to perform the task in question and to compare the performance of potential teams. In this thesis, I investigate the predictive power of different community detection methods in determining team performance in the self-organizing Kaggle platform and find that my methodology can achieve an average accuracy of 57% when predicting the result of a competition while using no performance information to identify communities. First, I motivate our interest in team performance and why a network setting is useful, as well as present the Kaggle platform as a collaboration network of users on teams participating in competitions. Next, in order to identify communities, I applied a selection of techniques to project the Kaggle network onto a team network and applied both spectral methods and DBSCAN to identify communities of teams while remaining ignorant of their performances. Finally, I generated cross-cluster performance distributions, evaluated the significance of communities found, and calculated a predictor statistic. Using holdout validation, I test and compare the merits of the different community detection methods and find that the Cosine Similarity in conjunction with spectral methods yields the best performance and provides an average accuracy of 57% when predicting the pairwise results of a competition.
by Israel Louis Donato Ridgley.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
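The community-detection step described in the abstract can be illustrated with a compact sketch that uses cosine similarity as a precomputed affinity for spectral clustering (the feature matrix below is a hypothetical stand-in for the projected Kaggle team network, not the thesis's actual pipeline):

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical team feature matrix: each row is a team, each column a user
# (1 = the user has been on that team), standing in for the projected team network.
team_features = np.array([
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1],
])

# Cosine similarity between teams serves as the affinity for spectral clustering.
affinity = cosine_similarity(team_features)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)   # e.g. [0 0 1 1]: the two overlapping pairs of teams form two communities
```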
APA, Harvard, Vancouver, ISO, and other styles
18

Vu, Xuan Thang. "Joint Network / Channel Decoding over Noisy Wireless Networks." Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01060330.

Full text
Abstract:
Network coding (NC) has gained much research attention as a potential candidate to solve the demand for higher spectral efficiency of modern wireless communications. Many research papers have investigated the performance of NC-aided networks such as throughput and outage capacity. However, the analysis of NC in practical systems where NC is combined with other techniques such as channel coding is still immature to fully understand its potential performance. In this thesis, we aim to design high performance receivers and analyze their performance for network-coded cooperative networks in practical scenarios. Firstly, we propose two Iterative Network/Channel Decoding (INCD) algorithms for the Multiple-Access Relay Channel (MARC) with two notable relaying schemes named Decode-and-Forward (DF) and Demodulate-and-Forward (DMF). The INCD algorithm operates based on turbo-like decoding methods and reduces the impact of the error propagation problem with the aid of a channel-aware receiver design. Both perfect Channel State Information (CSI) and imperfect CSI at the receiver side are investigated. We propose a practical method that forwards the quantized version of the relay decoding errors to the destination. It is shown that the proposed algorithms achieve full diversity gain and significantly outperform solutions which do not take care of error propagation. We also show that the number of pilot symbols affects only the coding gain but has a negligible impact on the diversity order, while the quantization level affects both the diversity and coding gain. Secondly, we propose a Near Optimal Joint Network/Channel Decoding (NOJNCD) algorithm for the MARC that allows analysis of the system Bit Error Rate (BER). The NOJNCD algorithm performs network decoding and channel decoding in one decoding step of the super code, which comprises all trellis states of the individual codes at the sources via NC. Furthermore, NC combined with Relay Selection (RS) is considered and the achievable diversity order is studied with the aid of outage analysis. We analytically show that Single Relay Selection (SRS) always achieves a diversity order of two and Multiple Relay Selection (MRS) can achieve full diversity gain only when the number of selected relays exceeds the number of the sources. Last but not least, we propose a so-called partial relaying protocol to improve the spectral efficiency of channel-coding-assisted relay networks. Closed-form expressions of the BER and the system diversity order are computed for partial relaying. We show, by analysis and simulations, that with a proper Convolutional Code (CC), partial relaying can achieve full diversity gain and the same coding gain as the classical (full) relaying protocol in the finite signal-to-noise ratio region while it obtains a better spectrum usage. Moreover, we propose a new protocol based on partial relaying in opportunistic relaying cooperative networks and show that this protocol significantly outperforms the NC-based cooperation in some circumstances.
APA, Harvard, Vancouver, ISO, and other styles
19

Zhang, Kai. "High-Performance Decoder Architectures For Low-Density Parity-Check Codes." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/17.

Full text
Abstract:
The Low-Density Parity-Check (LDPC) codes, which were invented by Gallager back in the 1960s, have attracted considerable attention recently. Compared with other error correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism and high-throughput potentials. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2 and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput and rate flexibility. With this work we investigate tradeoffs between the four performance aspects and develop several decoder architectures to improve one or several performance aspects while maintaining acceptable values for other aspects. Firstly, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed, namely parallel layered decoding architecture (PLDA) and critical path splitting. Parallel layered decoding architecture enables parallel processing for all layers by establishing dedicated message passing paths among them. The decoder avoids a crossbar-based large interconnect network. The critical path splitting technique is based on careful adjustment of the starting point of each layer to maximize the time intervals between adjacent layers, such that the critical path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2 2304-bit irregular LDPC decoder is implemented using ASIC design in a 90 nm CMOS process. The decoder can achieve an input throughput of 1.1 Gbps, a 3 to 4 times improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2. Secondly, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes which supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating the puncturing scheme. Simulation results show that our selected puncturing scheme only introduces a BER performance degradation of less than 0.2 dB, compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. Subsequently, PLDA is employed for high throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a CMOS 90 nm process. The decoder can achieve an input throughput of 975 Mbps and supports any rate between 1/2 and 1. Thirdly, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core.
Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The BP-based detection computationally depends only on the number of nonzero interferers and is thus more suited for sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm, which is popular in LDPC decoding, is also adopted in this paper. Simulation results show that the layered decoding doubles the convergence speed of the iterative belief propagation process. Exploring the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate the practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable in order to switch flexible connections on the factor graph in the time-varying ISI channels.
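The min-sum algorithm mentioned throughout reduces to a simple check-node kernel. The sketch below (a generic normalized min-sum update in Python, for illustration only; it is not the VLSI implementation described in the dissertation) shows the operation a layered architecture would execute for each check node in a layer.

```python
import numpy as np

def min_sum_check_update(incoming_llrs, scale=0.75):
    """Min-sum check-node update: for each edge, the outgoing LLR is the product of the
    signs of the *other* incoming LLRs times the minimum of their magnitudes.

    incoming_llrs: 1-D array of variable-to-check messages for one check node.
    A scaling factor (normalized min-sum) partially compensates the min-sum approximation.
    """
    llrs = np.asarray(incoming_llrs, dtype=float)
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]     # two smallest magnitudes
    out_mag = np.where(np.arange(llrs.size) == order[0], min2, min1)
    return scale * (total_sign * signs) * out_mag   # total_sign * sign_j = product of the other signs

# Example: four variable-to-check messages arriving at one check node
print(min_sum_check_update([+2.0, -0.5, +1.3, -3.1]))
```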
APA, Harvard, Vancouver, ISO, and other styles
20

Castura, Jeff. "Performance analysis and optimization of reduced complexity Low Density Parity Check decoding algorithms." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0017/MQ53426.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Rankl, Tobias [Verfasser]. "Performance and Bounds of Optical Receivers with Electronic Detection and Decoding / Tobias Rankl." Aachen : Shaker, 2010. http://d-nb.info/112254619X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Dan [Verfasser]. "Iterative algorithms in achieving near-ML decoding performance in concatenated coding systems / Dan Zhang." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1048607224/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Yin, Hang. "An Adaptive Multi-Frequency GPS Tracking Algorithm, GPS CNAV Message Decoding, and Performance Analysis." Miami University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=miami1407774289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Han, Yang. "LDPC Coding for Magnetic Storage: Low Floor Decoding Algorithms, System Design and Performance Analysis." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/195972.

Full text
Abstract:
Low-density parity check (LDPC) codes have experienced tremendous popularity due to their capacity-achieving performance. In this dissertation, several different aspects of LDPC coding and its applications to magnetic storage are investigated. One of the most significant issues that impedes the use of LDPC codes in many systems is the error-rate floor phenomenon associated with their iterative decoders. By delineating the fundamental principles, we extend to partial response channels algorithms for predicting the error rate performance in the floor region for the binary-input AWGN channel. We develop three classes of decoding algorithms for mitigating the error floor by directly tackling the cause of the problem: trapping sets. In our experiments, these algorithms provide multiple orders of improvement over conventional decoders at the cost of various implementation complexity increases. Product codes are widely used in magnetic recording systems where errors are both isolated and bursty. A dual-mode decoding technique for Reed-Solomon-code-based product codes is proposed, where the second decoding mode involves maximum-likelihood erasure decoding of the binary images of the Reed-Solomon codewords. By exploring a tape storage application, we demonstrate that this dual-mode decoding system dramatically improves the performance of product codes. Moreover, the complexity added by the second decoding mode is manageable. We also show the performance of this technique on a product code which has an LDPC code in the columns. Run-length-limited (RLL) codes are ubiquitous in today's disk drives. Using RLL codes has enabled drive designers to pack data very efficiently onto the platter surface by ensuring stable symbol-timing recovery. We consider a concatenation system design with an LDPC code and an RLL code as components to simultaneously achieve desirable features such as: soft information availability to the LDPC decoder, the preservation of the LDPC code's structure, and the capability of correcting long erasure bursts. We analyze the performance of the LDPC-coded magnetic recording channel in the presence of media noise. We employ advanced signal processing for the pattern-dependent-noise-predictive channel detectors, and demonstrate that a gain of over 1 dB, or a linear density gain of about 8%, relative to a comparable Reed-Solomon code is attainable by using an LDPC code.
APA, Harvard, Vancouver, ISO, and other styles
25

Rice, Michael. "Performance of Soft-Decision Block-Decoded Hybrid-ARQ Error Control." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608852.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Soft-decision correlation decoding with retransmission requests for block codes is proposed and the resulting performance is analyzed. The correlation decoding rule is modified to allow retransmission requests when the received word is rendered unreliable by the channel noise. The modification is realized by a reduction in the volume in Euclidean space of the decoding region corresponding to each codeword. The performance analysis reveals the typical throughput - reliability trade-off characteristic of error control systems which employ retransmissions. Performance comparisons with hard-decision decoding reveal performance improvements beyond those attainable with hard-decision decoding algorithms. The proposed soft-decision decoding rule permits the use of a simplified codeword searching algorithm which reduces the complexity of the correlation decoder to the point where practical implementation is feasible.
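To make the retransmission mechanism concrete, here is a minimal sketch (my own illustration; the reliability test below is a simple best-versus-runner-up correlation margin standing in for the shrunken Euclidean decoding regions analysed in the paper):

```python
import numpy as np

def correlation_decode_with_arq(received, codewords, margin=0.5):
    """Soft-decision correlation decoding with a retransmission request option.

    received: real-valued soft samples for one block; codewords: (M, n) binary array.
    The block is accepted only if the best correlation beats the runner-up by 'margin';
    otherwise a retransmission is requested (returned as None).
    """
    signals = 1.0 - 2.0 * codewords            # map bits {0,1} -> BPSK symbols {+1,-1}
    corr = signals.dot(received)
    order = np.argsort(corr)
    best, runner_up = order[-1], order[-2]
    if corr[best] - corr[runner_up] >= margin:
        return codewords[best]                  # decode
    return None                                 # request retransmission

# Demo with a tiny 2-codeword code and a noisy received block.
codewords = np.array([[0, 0, 0, 0], [1, 1, 1, 1]])
rng = np.random.default_rng(2)
tx = 1.0 - 2.0 * codewords[1]                  # transmit the all-ones codeword
rx = tx + 0.6 * rng.standard_normal(4)
print(correlation_decode_with_arq(rx, codewords))
```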
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Changlin. "The performance analysis and decoding of high dimensional trellis-coded modulation for spread spectrum communications." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1174616331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Cohen, E. Judith Krisman. "The effects of a holistic-graphophonic intervention on the decoding performance of children with reading disabilities." FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/2397.

Full text
Abstract:
The purpose of this study was to investigate the effects of an intervention combining holistic and graphophonic methods on the decoding performance of children with reading disabilities. The impact of this synthesis of teaching methods was evaluated by examining five students' abilities to decode a set of training words using three common vowel patterns. Words containing these magic e, double vowel, and closed vowel patterns were presented in isolation and in context. Additional sets of novel words and nonsense words using the same vowel patterns were used to evaluate generalization. The study incorporated single-subject experimental research methods, using a multiple baseline design across vowel patterns. Data were collected and analyzed daily. The results of this study indicated that all five second-graders, three boys and two girls, demonstrated substantial gains in decoding ability on all three vowel patterns. The students increased their accuracy on training words in isolation, as well as in context. In addition, students increased their decoding accuracy on generalization words, in both novel and nonsense words. These increases in decoding accuracy were maintained during post-intervention probes and during the 7-week follow-up phase of the study.
APA, Harvard, Vancouver, ISO, and other styles
28

Zouros, Georgios. "Performance analysis of IEEE 802.11G receivers with erasure decoding to mitigate the effects of pulse-noise interference." Thesis, Monterey, Calif. : Naval Postgraduate School, 2006. http://bosun.nps.edu/uhtbin/hyperion.exe/06Dec%5FZouros.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering and M.S. in Applied Physics)--Naval Postgraduate School, December 2006.
Thesis Advisor(s): R. Clark Robertson, Richard Harkins. "December 2006." Includes bibliographical references (p. 101-102). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
29

Takeuchi, Keigo. "Statistical Mechanical Informatics on MIMO DS-CDMA Systems: Design of Spreading Schemes and Performance of Multiuser Decoding." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/123857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hou, Wenshuai. "On the Simulation Performance of FEC SOQPSK-TG Systems with Symbol by Symbol and SOVA Decoding Methods." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595638.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
A number of encoding and decoding methods for SOQPSK-TG are evaluated through their simulation performance over an additive white Gaussian noise (AWGN) channel. The two forward error correction (FEC) codes are a serial concatenated convolutional code (SCCC) and a low-density parity-check code (LDPC). Two decoding methods are applied: the soft-output Viterbi algorithm (SOVA) and symbol-by-symbol (SxS) detection. Pulse amplitude modulation (PAM) and pulse truncation (PT) filters are designed for SOVA decoding, and numerically optimum (NO) and integrate-and-dump (I&D) filters are employed for the SxS methods. We discuss the performance of these schemes by comparing and analyzing their Matlab simulations. The combination of LDPC and SOVA with the PAM filter outperforms all the other combinations. We also use timing and phase recovery with various loop bandwidths. The system structures and the synchronization technique are discussed in this paper.
APA, Harvard, Vancouver, ISO, and other styles
31

Isnin, Ismail. "A study on wireless communication error performance and path loss prediction." Thesis, University of Plymouth, 2011. http://hdl.handle.net/10026.1/324.

Full text
Abstract:
One channel model that characterises the multipath fading effect of a wireless channel is called the Flat Rayleigh Fading channel model. Given the properties of a Flat Rayleigh Fading channel, an equation to find the capacity of a Flat Rayleigh fading channel with hard decision decoding is derived. The difference in power requirement to achieve the Additive White Gaussian Noise (AWGN) capacity over a Flat Rayleigh Fading channel is found to increase exponentially with Es/N0. Upper and lower bounds of the error performance of linear block codes over a Flat Rayleigh Fading channel are also studied. With the condition that the excess delay of a channel is known earlier, it is shown that a correlator with shorter length, according to the excess delay of the channel, can be constructed for use in wireless channel response measurements. Therefore, a rule of construction of a shorter length correlator is defined, involving concatenation of parts of a Constant Amplitude Zero Auto-Correlation (CAZAC) sequence. Simulation of the [136,68,24] Double Circulant Code with Dorsch List Decoding is also done in order to evaluate the error performance of the channel coding scheme over one of the IEEE Wireless Metropolitan Area Network (WirelessMAN) channel models, the Stanford University Interim Channel Model No. 5 (SUI-5) channel. Performance of the channel coding was severely degraded over the SUI-5 channel when compared to its performance over the AWGN channel. Indoor path losses within three multifloor office buildings were investigated at 433 MHz, 869 MHz and 1249 MHz. The work involved a series of extensive received signal strength measurements within the buildings for all of the considered frequencies. Results have shown that indoor path loss is higher within a square-footprint building than in a rectangular building. Parameters of the Log-Distance Path Loss and Floor Attenuation Factor Path Loss models have been derived from the measurement data. In addition, a new indoor path loss prediction model was derived to cater for path loss prediction within multifloor buildings with indoor atriums. The model performs with better prediction accuracy when compared with the Log-Distance Path Loss and Floor Attenuation Factor Path Loss models.
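For reference, the two classical models named in the abstract combine into the simple dB-domain formula sketched below (the numeric defaults are illustrative placeholders, not the parameters fitted from the thesis's measurement campaigns):

```python
import math

def indoor_path_loss_db(distance_m, n_floors, pl_d0_db=40.0, exponent=3.0,
                        d0_m=1.0, faf_per_floor_db=13.0):
    """Log-distance path loss with a Floor Attenuation Factor (FAF) term.

    PL(d) = PL(d0) + 10 * n * log10(d / d0) + FAF, all in dB.
    The default parameter values are illustrative placeholders only.
    """
    return pl_d0_db + 10.0 * exponent * math.log10(distance_m / d0_m) \
           + faf_per_floor_db * n_floors

# Example: predicted loss 20 m away, two floors above the transmitter.
print(round(indoor_path_loss_db(20.0, n_floors=2), 1), "dB")
```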
APA, Harvard, Vancouver, ISO, and other styles
32

McCool, Lauren Zawistowski. "Religion as a Role: Decoding Performances of Mormonism in the Contemporary United States." Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1343429819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Uzunoglu, Cihan. "Performance Comparison Of Message Passing Decoding Algorithms For Binary And Non-binary Low Density Parity Check (ldpc) Codes." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12609072/index.pdf.

Full text
Abstract:
In this thesis, we investigate the basics of Low-Density Parity-Check (LDPC) codes over binary and non-binary alphabets. We especially focus on the message passing decoding algorithms, which have different message definitions such as a posteriori probabilities, log-likelihood ratios and Fourier transforms of probabilities. We present simulation results that compare the performances of small block length binary and non-binary LDPC codes, which have regular and irregular structures, over the GF(2), GF(4) and GF(8) alphabets. By comparing LDPC codes with variable node degrees of 3, 2.8 and 2.6, we observe that choosing non-binary alphabets improves the performance when the mean column weight is selected carefully, since the performance improves in the order of GF(2), GF(4) and GF(8).
APA, Harvard, Vancouver, ISO, and other styles
34

Katsaros, Charalampos. "Performance analysis of the Link-16/JTIDS waveform with concatenated coding, soft decision Reed Solomon decoding and noise-normalization." Thesis, Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/5222.

Full text
Abstract:
Approved for public release; distribution is unlimited
The Joint Tactical Information Distribution System (JTIDS) is a hybrid frequency-hopped, direct sequence spread spectrum system that employs a (31, 15) Reed-Solomon (RS) code for forward error correction coding. In this thesis, an alternative error correction coding scheme that uses concatenated coding with a (31, k) RS inner code and a rate 4/5 convolutional outer code is considered. In addition, a sequential diversity of two, consistent with the JTIDS double-pulse structure, is considered both for soft decision (SD) RS decoding and for soft diversity combining with noise-normalization. Both coherent and noncoherent detection are considered. Based on the analyses, the alternative JTIDS waveform outperforms the original in all cases considered. When only additive white Gaussian noise is present, the best performances, which result in a gain of about 1.4 dB relative to the existing JTIDS waveform, are achieved for (31, 23) RS and (31, 25) RS inner codes for coherent detection and for (31, 27) RS and (31, 29) RS inner codes for noncoherent detection. For these RS inner codes, a 23.0% and 33.0% improvement in system throughput is achieved, respectively, for coherent detection, and a 44.0% and 55.0% improvement in throughput is achieved, respectively, for noncoherent detection relative to the existing JTIDS waveform. Noise-normalization neutralizes the effects of pulse-noise interference, but no significant benefits are obtained from using SD RS decoding.
APA, Harvard, Vancouver, ISO, and other styles
35

Smedberg, Fredrik. "Performance Analysis of JavaScript." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-56730.

Full text
Abstract:

In the last decade, web browsers have seen a remarkable increase in performance, especially in their JavaScript engines. JavaScript has over the years gone from being a slow and rather limited language to today having become feature-rich and fast. Its speed can be around the same as, or half that of, comparable code written in C++, but this speed depends directly on the choice of web browser, and the best performance is seen in browsers using JIT compilation techniques.

Even though the language has seen a dramatic increase in performance, there are still major problems regarding memory usage. JavaScript applications typically consume 3-4 times more memory than similar applications written in C++. Many browser vendors, like Opera Software, acknowledge this and are currently trying to optimize their memory usage. This issue will hopefully be non-existent in the near future.

Because the majority of scientific papers written about JavaScript only compare performance using the industry benchmarks SunSpider and V8, this thesis widens the scope. The benchmarks give no information about how JavaScript compares to C#, C++ and other popular languages. To make that comparison, I have implemented a GIF decoder, an XML parser and various elementary tests in both JavaScript and C++ to measure how far apart the languages are in terms of speed, memory usage and responsiveness.

APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Dongwon. "High-performance computer system architectures for embedded computing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42766.

Full text
Abstract:
The main objective of this thesis is to propose new methods for designing high-performance embedded computer system architectures. To achieve the goal, three major components - multi-core processing elements (PEs), DRAM main memory systems, and on/off-chip interconnection networks - in multi-processor embedded systems are examined in each section respectively. The first section of this thesis presents architectural enhancements to graphics processing units (GPUs), one of the multi- or many-core PEs, for improving performance of embedded applications. An embedded application is first mapped onto GPUs to explore the design space, and then architectural enhancements to existing GPUs are proposed for improving throughput of the embedded application. The second section proposes high-performance buffer mapping methods, which exploit useful features of DRAM main memory systems, in DSP multi-processor systems. The memory wall problem becomes increasingly severe in multiprocessor environments because of communication and synchronization overheads. To alleviate the memory wall problem, this section exploits bank concurrency and page mode access of DRAM main memory systems for increasing the performance of multiprocessor DSP systems. The final section presents a network-centric Turbo decoder and network-centric FFT processors. In the era of multi-processor systems, an interconnection network is another performance bottleneck. To handle heavy communication traffic, this section applies a crossbar switch - one of the indirect networks - to the parallel Turbo decoder, and applies a mesh topology to the parallel FFT processors. When designing the mesh FFT processors, a very different approach is taken to improve performance; an optical fiber is used as a new interconnection medium.
APA, Harvard, Vancouver, ISO, and other styles
37

Chaudhari, Pragat. "Analytical Methods for the Performance Evaluation of Binary Linear Block Codes." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/904.

Full text
Abstract:
The modeling of the soft-output decoding of a binary linear block code using a Binary Phase Shift Keying (BPSK) modulation system (with reduced noise power) is the main focus of this work. With this model, it is possible to provide bit error performance approximations to help in the evaluation of the performance of binary linear block codes. As well, the model can be used in the design of communications systems which require knowledge of the characteristics of the channel, such as combined source-channel coding. Assuming an Additive White Gaussian Noise channel model, soft-output Log Likelihood Ratio (LLR) values are modeled to be Gaussian distributed. The bit error performance for a binary linear code over an AWGN channel can then be approximated using the Q-function that is used for BPSK systems. Simulation results are presented which show that the actual bit error performance of the code is very well approximated by the LLR approximation, especially for low signal-to-noise ratios (SNR). A new measure of the coding gain achievable through the use of a code is introduced by comparing the LLR variance to that of an equivalently scaled BPSK system. Furthermore, arguments are presented which show that the approximation requires fewer samples than conventional simulation methods to obtain the same confidence in the bit error probability value. This translates into fewer computations and therefore, less time is needed to obtain performance results. Other work was completed that uses a discrete Fourier Transform technique to calculate the weight distribution of a linear code. The weight distribution of a code is defined by the number of codewords which have a certain number of ones in the codewords. For codeword lengths of small to moderate size, this method is faster and provides an easily implementable and methodical approach over other methods. This technique has the added advantage over other techniques of being able to methodically calculate the number of codewords of a particular Hamming weight instead of calculating the entire weight distribution of the code.
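A small simulation sketch of the idea (my own illustration, here for uncoded BPSK where the Gaussian LLR model is exact; in the thesis the LLRs would come from the soft-output decoder of a block code and the model holds approximately):

```python
import numpy as np
from scipy.stats import norm

# Model the bit LLR as Gaussian and approximate the bit error rate with a Q-function.
rng = np.random.default_rng(3)
ebno_db = 4.0
ebno = 10 ** (ebno_db / 10)
n_bits = 200_000

bits = rng.integers(0, 2, n_bits)
symbols = 1.0 - 2.0 * bits                       # BPSK mapping: 0 -> +1, 1 -> -1
noise_var = 1.0 / (2.0 * ebno)                   # Es = Eb = 1
received = symbols + np.sqrt(noise_var) * rng.standard_normal(n_bits)
llr = 2.0 * received / noise_var                 # channel LLR in favour of bit 0

# Condition on transmitted bit 0, fit a Gaussian to the LLR, and use Q(mu/sigma).
llr0 = llr[bits == 0]
mu, sigma = llr0.mean(), llr0.std()
print("Q-function approximation :", norm.sf(mu / sigma))
print("empirical bit error rate :", np.mean((llr < 0) != (bits == 1)))
```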
APA, Harvard, Vancouver, ISO, and other styles
38

Christofis, Emmanouil. "Performance analysis of FFH/BPSK receivers with convolutional coding and soft decision Viterbi decoding over channels with partial-band noise interference." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA307754.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering) Naval Postgraduate School, March 1996.
Thesis advisor(s): R. Clark Robertson, Tri T. Ha. "March 1996." Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
39

Landon, Laura L. "English Word-Level Decoding and Oral Language Factors as Predictors of Third and Fifth Grade English Language Learners' Reading Comprehension Performance." Thesis, The George Washington University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10601015.

Full text
Abstract:

This study examines the application of the Simple View of Reading (SVR), a reading comprehension theory focusing on word recognition and linguistic comprehension, to English Language Learners’ (ELLs’) English reading development. This study examines the concurrent and predictive validity of two components of the SVR, oral language and word-level decoding, for determining ELLs’ English reading comprehension in the third and fifth grades, using data from a nationally representative dataset, the Early Childhood Longitudinal Survey (ECLS-K). Literature in both first (L1) and second language (L2) reading comprehension development suggest that, in addition to word- and text-level decoding factors, oral language skills (such as listening comprehension) also impact L2 reading comprehension. This study found that while English word-level decoding skills were the strongest predictors of ELLs’ English reading comprehension in third grade, both third and fifth grade English oral language skills were stronger at predicting fifth grade ELLs’ English reading outcomes, thereby confirming the hypotheses grounded in the conceptual frameworks of ELL reading comprehension development (Proctor et al., 2005; Zadeh et al., 2011; Kim, 2015).

These findings suggest that screening fifth grade ELLs using English oral language measures may be more effective at predicting potential difficulty in reading comprehension than traditional fluency measures (such as DIBELS ORF). Moreover, while English word-level decoding factors are stronger predictors for third grade English reading comprehension, these findings indicate that third grade English oral language measures may be better at determining how ELL students will perform in English reading comprehension as they conclude elementary school in fifth grade than traditional fluency and decoding measures. In sum, the results of this study underline the importance of instruction, intervention and assessment in English oral language skills as critical components of literacy programming for elementary ELLs.

APA, Harvard, Vancouver, ISO, and other styles
40

Ghosh, Kaushik. "Downlink W-CDMA performance analysis and receiver implementation on SC140 Motorola DSP." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969/169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Theodoss, Michael D. "Performance of FFH/BFSK systems with convolutional coding and soft decision Viterbi decoding over Rician fading channels with partial-band noise interference." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA305964.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering) Naval Postgraduate School, December 1995.
"December 1995." Thesis advisor(s): R. Clark Robertson. Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
42

Willett, Francis R. "Intracortical Brain-Computer Interfaces: Modeling the Feedback Control Loop, Improving Decoder Performance, and Restoring Upper Limb Function with Muscle Stimulation." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case149035819787053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ramiro, Sánchez Carla. "MIMOPack: A High Performance Computing Library for MIMO Communication Systems." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/53930.

Full text
Abstract:
Nowadays, several communication standards are emerging and evolving, seeking higher transmission rates, reliability and coverage. This expansion is primarily driven by the continued increase in consumption of mobile multimedia services due to the emergence of new handheld devices such as smartphones and tablets. One of the most significant techniques employed to meet these demands is the use of multiple transmit and receive antennas, known as MIMO systems. The use of this technology allows the transmission rate and the quality of the transmission to be increased through the use of multiple antennas at the transmitter and receiver sides. MIMO technologies have become an essential key in several wireless standards such as WLAN, WiMAX and LTE. These technologies will also be incorporated in future standards, and therefore a great deal of research in this field is expected in the coming years. Clearly, the study of MIMO systems is critical in current research; however, the problems that arise from this technology are very complex. High Performance Computing (HPC) systems, and specifically modern hardware architectures such as multi-core and many-core processors (e.g. Graphics Processing Units (GPUs)), are playing a key role in the development of efficient and low-complexity algorithms for MIMO transmissions. Proof of this is that the number of scientific contributions and research projects related to their use has increased in recent years. Also, some high performance libraries have been implemented as tools for researchers involved in the development of future communication standards. Two of the most popular libraries are IT++, a library based on the use of some optimized libraries for multi-core processors, and the Communications System Toolbox designed for use with MATLAB, which uses GPU computing. However, there is no library able to run on a heterogeneous platform using all the available resources. In view of the high computational requirements in MIMO application research and the shortage of tools able to satisfy them, we have made a special effort to develop a library that eases the development of adaptable parallel applications in accordance with the different architectures of the executing platform. The library, called MIMOPack, aims to implement efficiently, using parallel computing, a set of functions to perform some of the critical stages of MIMO communication system simulation. The main contribution of the thesis is the implementation of efficient hard- and soft-output detectors, since the detection stage is considered the most complex part of the communication process. These detectors are highly configurable and many of them include preprocessing techniques that reduce the computational cost and increase the performance. The proposed library shows three important features: portability, efficiency and ease of use. The current release allows GPU and multi-core computation, even simultaneously, since it is designed for use on heterogeneous machines. The interfaces of the functions are common to all environments in order to simplify the use of the library. Moreover, some of the functions are callable from MATLAB, increasing the portability of developed codes between different computing environments. According to the library design and the performance assessment, we consider that MIMOPack may help industrial and academic researchers to implement scientific codes without having to know different programming languages and machine architectures.
This will allow to include more complex algorithms in their simulations and obtain their results faster. This is particularly important in the industry, since the manufacturers work to analyze and to propose their own technologies with the aim that it will be approved as a standard. Thus allowing to enforce their intellectual property rights over their competitors, who should obtain the corresponding licenses to include these technologies into their products.
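As a rough illustration of the detection problem that MIMOPack accelerates (and not the library's actual GPU/multi-core implementation), the minimal NumPy sketch below performs exhaustive hard-output maximum-likelihood detection for a small MIMO system; the channel, constellation and noise level are placeholder values.

```python
import numpy as np
from itertools import product

def ml_detect(y, H, constellation):
    """Exhaustive hard-output ML detection: return the transmit vector s
    from the finite constellation that minimizes ||y - H s||^2."""
    n_tx = H.shape[1]
    best, best_cost = None, np.inf
    for cand in product(constellation, repeat=n_tx):
        s = np.array(cand)
        cost = np.linalg.norm(y - H @ s) ** 2
        if cost < best_cost:
            best, best_cost = s, cost
    return best

# Example: 2x2 MIMO, QPSK symbols, Rayleigh channel, light noise (placeholder values).
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s_true = qpsk[rng.integers(0, 4, size=2)]
y = H @ s_true + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(ml_detect(y, H, qpsk), s_true)
```

The exhaustive search grows exponentially with the number of transmit antennas, which is precisely why practical hard- and soft-output detectors rely on preprocessing and tree-search techniques and benefit from parallel hardware.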
Ramiro Sánchez, C. (2015). MIMOPack: A High Performance Computing Library for MIMO Communication Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53930
Thesis
Awarded
APA, Harvard, Vancouver, ISO, and other styles
44

Trachel, Romain. "Protocoles d'interaction cerveau-machine pour améliorer la performance d'attention visuo-spatiale chez l'homme." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4038/document.

Full text
Abstract:
Visuospatial attention is an information selection and processing mechanism whose overt manifestations consist of head or gaze shifts. In anticipation of new information, the focus of attention can also covertly shift to peripheral vision in order to share attention between two distinct locations: the overt one (the center of gaze) and the covert one in the periphery. In such a situation, the reaction to a target appearing at the focus of attention is enhanced with respect to targets appearing at unattended locations. This thesis addresses the problem of detecting the location of covert attention by decoding neural activity measured by electroencephalography (EEG) before target onset in three experiments on healthy subjects. The first experiment uses visuospatial cues that are non-informative about the target location. However, the neural activity reflects that non-informative cues tend to bring the subjects into a state related to alertness, motor preparation or temporal expectation rather than a spatial shift of attention. Building on this result, the second experiment uses an ambiguous precueing condition in which the subject's attention is shifted to spatial locations that bear a non-systematic relation to the information contained in the cues. With these ambiguous cues, we find that the proportion of targets displayed at unattended locations is equivalent to a non-informative condition, and that reaction speed and accuracy are dramatically impacted
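To make the decoding setup concrete, the sketch below shows the kind of single-trial classification pipeline commonly used to predict the attended side from pre-target EEG features; the feature matrix, labels and injected class difference are synthetic placeholders, not the thesis data or its exact classifier.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for pre-target EEG features:
# n_trials x n_features (e.g. band power per channel), labels = attended side.
rng = np.random.default_rng(42)
n_trials, n_features = 200, 64
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)      # 0 = left, 1 = right (hypothetical labels)
X[y == 1, :8] += 0.5                  # inject a weak class-dependent pattern

# Linear discriminant classifier with cross-validated accuracy,
# a common baseline for decoding covert attention from EEG.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```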
APA, Harvard, Vancouver, ISO, and other styles
45

Segkos, Michail. "Advanced techniques to improve the performance of OFDM Wireless LAN." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FSegkos.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering and M.S. in Applied Physics)--Naval Postgraduate School, June 2004.
Thesis advisor(s): Tri T. Ha, Brett H. Borden. Includes bibliographical references (p. 107-109). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
46

Goubault de Brugière, Timothée. "Methods for optimizing the synthesis of quantum circuits Quantum CNOT Circuits Synthesis for NISQ Architectures Using the Syndrome Decoding Problem Quantum circuits synthesis using Householder transformations Synthesizing quantum circuits via numerical optimization Reuse method for quantum circuit synthesis." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG018.

Full text
Abstract:
To run an abstract algorithm on a quantum computer, the algorithm must be compiled into a sequence of low-level instructions that can be executed by the processor. The compilation step is crucial because it determines the quantity of resources necessary for the execution of an algorithm; therefore, the compilation stage must be optimized. In this thesis, we are interested in one building block of compilation: the synthesis of quantum circuits from an abstract specification of an operator. First, we study the case where the unitary matrix of a quantum operator is given to us, and we explore the minimization of both quantum resources and classical resources. Even if the simultaneous optimization of these two types of resources seems difficult, we propose better compromises that improve on the literature. Secondly, we are interested in the class of so-called reversible linear operators. This time we are exclusively interested in the optimization of quantum resources, and we improve the state of the art for various quantum metrics (circuit size, circuit depth) and processors (NISQ, fully-connected processors)
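A reversible linear (CNOT-only) operator on n qubits is an invertible n×n matrix over GF(2), and the textbook baseline for synthesizing it is Gaussian elimination, where each row addition is realized by one CNOT gate. The sketch below illustrates that baseline only, not the syndrome-decoding-based or depth-optimized methods developed in the thesis; the example matrix is arbitrary.

```python
import numpy as np

def synthesize_cnot_circuit(M):
    """Synthesize a CNOT circuit for an invertible GF(2) matrix M.
    Gaussian elimination reduces M to the identity; each row addition
    "row t += row c" corresponds to a CNOT with control c and target t.
    Since every CNOT is its own inverse, applying the recorded gates in
    reverse order implements M itself."""
    A = M.copy() % 2
    n = A.shape[0]
    gates = []
    for col in range(n):
        if A[col, col] == 0:                  # bring a pivot into place
            pivot = next(r for r in range(col + 1, n) if A[r, col] == 1)
            A[col, :] ^= A[pivot, :]
            gates.append((pivot, col))        # CNOT(control=pivot, target=col)
        for row in range(n):                  # clear the rest of the column
            if row != col and A[row, col] == 1:
                A[row, :] ^= A[col, :]
                gates.append((col, row))      # CNOT(control=col, target=row)
    assert np.array_equal(A, np.eye(n, dtype=A.dtype))
    return gates[::-1]

# Example: an upper-triangular invertible matrix over GF(2).
M = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=np.uint8)
print(synthesize_cnot_circuit(M))
```

This plain elimination uses O(n^2) gates; the optimized syntheses studied in the thesis reduce gate count and depth and account for restricted qubit connectivity.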
APA, Harvard, Vancouver, ISO, and other styles
47

Kalogrias, Christos. "Performance analysis of the IEEE 802.11A WLAN standard optimum and sub-optimum receiver in frequency-selective, slowly fading Nakagami channels with AWGN and pulsed noise jamming." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FKalogrias.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering and M.S. in Systems Engineering)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Clark Robertson. Includes bibliographical references (p. 143). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
48

Grospellier, Antoine. "Décodage des codes expanseurs quantiques et application au calcul quantique tolérant aux fautes." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS575.

Full text
Abstract:
Fault-tolerant quantum computation is a set of techniques to perform reliable quantum computation using noisy components. In this context, quantum error correcting codes are used to keep the amount of errors under a sustainable threshold. One of the main problems of this field is to determine the minimum cost, in terms of memory and time, needed to transform an ideal quantum computation into a fault-tolerant one. In this PhD thesis, we show that the family of quantum expander codes and the small-set-flip decoder can be used in the construction of ref. [arXiv:1310.2984] to produce a fault-tolerant quantum circuit with constant space overhead. The error correcting code family and the decoder that we study were introduced in ref. [arXiv:1504.00822], where an adversarial error model was examined. Building on the results of that article, we analyze quantum expander codes subjected to a stochastic error model which is relevant for fault-tolerant quantum computation [arXiv:1711.08351], [arXiv:1808.03821]. In addition, we show that the decoding algorithm can be parallelized to run in constant time, which is essential to prevent errors from accumulating while the decoding algorithm is running. Beyond the theoretical results described above, we perform a numerical analysis of quantum expander codes to measure their performance in practice [arXiv:1810.03681]. The error model used during these simulations generates X- and Z-type errors on the qubits with an independent and identically distributed probability distribution. Our results are promising because they reveal that these constant-rate codes have a decent threshold and good finite-length performance
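For concreteness, the following sketch reproduces the independent and identically distributed X/Z error model mentioned above; the number of qubits, the physical error rate and the parity-check interface are illustrative assumptions, and the expander code and small-set-flip decoder themselves are omitted.

```python
import numpy as np

def sample_iid_xz_errors(n_qubits, p, rng):
    """Draw the error pattern of the i.i.d. noise model: each qubit suffers an
    X flip with probability p and, independently, a Z flip with probability p."""
    x_err = (rng.random(n_qubits) < p).astype(np.uint8)
    z_err = (rng.random(n_qubits) < p).astype(np.uint8)
    return x_err, z_err

def syndrome(H, err):
    """Syndrome of a binary error pattern for a GF(2) parity-check matrix H."""
    return (H @ err) % 2

# Illustrative values: 100 qubits at physical error rate p = 0.05.
rng = np.random.default_rng(1)
x_err, z_err = sample_iid_xz_errors(100, 0.05, rng)
print(int(x_err.sum()), int(z_err.sum()))
```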
APA, Harvard, Vancouver, ISO, and other styles
49

Lin, Shin-Te (林忻德). "The Research on SCL Decoding and CA-SCL Decoding Performance for Polar Codes." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/jfym47.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Electrical Engineering
107
This thesis analyzes the performance of the successive cancellation list (SCL) decoder and the CRC-aided successive cancellation list (CA-SCL) decoder for polar codes. The SCL decoder can be considered an improved version of the successive cancellation (SC) decoder. Since the SC decoder relies on estimates of previous information bits when decoding the current one, an erroneous decision causes serious error propagation. The SCL decoder addresses this drawback: in addition to the operations carried out in the SC decoder, it calculates path metrics and, according to the list size parameter, retains a set of candidate paths from which the final decoding output is selected. In a further improved version, called CRC-aided SCL (CA-SCL), the bit error rate can be reduced even more by appending cyclic redundancy check (CRC) bits to the SCL blocks. In this thesis, the bit error rate (BER) and frame error rate (FER) of the SC, SCL and CA-SCL decoders are simulated and compared. We especially focus on two important parameters, namely the list size of the SCL decoder and the CRC size of the CA-SCL decoder. The performance analyses for the three decoders are presented, and their coding gains over an uncoded binary phase-shift keying (BPSK) transmission system are summarized. Simulation results show that the CA-SCL decoder effectively overcomes the drawback of the SC decoder.
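The step that distinguishes CA-SCL from plain SCL is the final selection among the surviving list paths: the decoder outputs the most likely candidate whose information bits pass the CRC. The sketch below illustrates only that CRC-aided selection step, with a hypothetical CRC-8 generator and hand-made candidate lists standing in for the SCL tree search that would normally produce them.

```python
def crc_append(msg, poly):
    """Append CRC bits (len(poly) - 1 of them) so that the resulting bit
    sequence is divisible by the generator polynomial `poly` (MSB first)."""
    n = len(poly)
    reg = list(msg) + [0] * (n - 1)
    for i in range(len(msg)):
        if reg[i]:
            for j in range(n):
                reg[i + j] ^= poly[j]
    return list(msg) + reg[-(n - 1):]

def crc_check(bits, poly):
    """True if the bit sequence (message followed by CRC) passes the check."""
    n = len(poly)
    reg = list(bits)
    for i in range(len(reg) - n + 1):
        if reg[i]:
            for j in range(n):
                reg[i + j] ^= poly[j]
    return not any(reg[-(n - 1):])

def ca_scl_select(candidates, path_metrics, poly):
    """CA-SCL final selection: among the list candidates, return the one with
    the best path metric (lower = more likely here) that passes the CRC;
    fall back to the best-metric candidate if none passes."""
    order = sorted(range(len(candidates)), key=lambda i: path_metrics[i])
    for i in order:
        if crc_check(candidates[i], poly):
            return candidates[i]
    return candidates[order[0]]

# Toy usage with a hypothetical CRC-8 generator (x^8 + x^2 + x + 1).
poly = [1, 0, 0, 0, 0, 0, 1, 1, 1]
msg = [1, 0, 1, 1, 0, 0, 1, 0]
good = crc_append(msg, poly)
bad = good.copy(); bad[3] ^= 1            # single-bit error, always caught by the CRC
candidates, metrics = [bad, good], [0.7, 1.1]
print(ca_scl_select(candidates, metrics, poly) == good)   # True: the CRC rescues the correct path
```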
APA, Harvard, Vancouver, ISO, and other styles
50

Weng, Po-Cheng (翁柏正). "Investigating the Performance of Turbo Decoding with SOVA." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/68113613623500305671.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
90
This thesis investigates the performance of turbo codes decoded with the Soft Output Viterbi Algorithm (SOVA) in wireless communication systems. Compared with the MAP (maximum a posteriori) algorithm, the advantage of SOVA is that it provides a low-complexity decoder. We propose four methods to improve the performance of SOVA decoding, and the simulation results verify that these methods improve on the original SOVA.
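The defining operation of SOVA is its reliability update: whenever a competing path is discarded at a trellis merge, every bit on which the survivor and the competitor disagree has its soft reliability capped by the metric difference between the two paths. The fragment below sketches only that update rule under assumed array names; it is not a full SOVA or turbo decoder, nor one of the four improvements proposed in the thesis.

```python
import numpy as np

def sova_reliability_update(reliability, survivor_bits, competitor_bits, delta):
    """Hagenauer-style SOVA update at one trellis merge: wherever the surviving
    path and the discarded competitor disagree on a decoded bit, that bit's
    reliability is reduced to at most `delta`, the metric difference between
    the two merging paths."""
    disagree = survivor_bits != competitor_bits
    reliability[disagree] = np.minimum(reliability[disagree], delta)
    return reliability

# Toy usage: reliabilities start at +inf and shrink as competitors merge.
reliability = np.full(6, np.inf)
survivor   = np.array([0, 1, 1, 0, 1, 0])
competitor = np.array([0, 1, 0, 0, 1, 1])   # disagrees at positions 2 and 5
print(sova_reliability_update(reliability, survivor, competitor, delta=1.8))
```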
APA, Harvard, Vancouver, ISO, and other styles