
Dissertations / Theses on the topic 'Error-correction'


Consult the top 50 dissertations / theses for your research on the topic 'Error-correction.'

1

Almlöf, Jonas. "Quantum error correction." Licentiate thesis, KTH, Kvantelektronik och -optik, QEO, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106795.

Abstract:
This thesis intends to familiarise the reader with quantum error correction, and also show some relations to the well-known concept of information - and the lesser known quantum information. Quantum information describes how information can be carried by quantum states, and how interaction with other systems gives rise to a full set of quantum phenomena, many of which have no correspondence in classical information theory. These phenomena include decoherence, as a consequence of entanglement. Decoherence can also be understood as "information leakage", i.e., knowledge of an event is transferred to the reservoir - an effect that in general destroys superpositions of pure states. It is possible to protect quantum states (e.g., qubits) from interaction with the environment - but not by amplification or duplication, due to the "no-cloning" theorem. Instead, this is done using coding, non-demolition measurements, and recovery operations. In a typical scenario, however, not all types of destructive events are likely to occur, but only those allowed by the information carrier, the type of interaction with the environment, and how the environment "picks up" information of the error events. These characteristics can be incorporated into a code, i.e., a channel-adapted quantum error-correcting code. Often, it is assumed that the environment's ability to distinguish between error events is small, and I will denote such environments "memory-less". This assumption is not always valid, since the ability to distinguish error events is related to the temperature of the environment, and in the particular case of information coded onto photons, k_B T_R ≪ ℏω typically holds, and one must then assume that the environment has a "memory". In this thesis, I describe a short quantum error-correcting code (QECC), adapted for photons interacting with a cold environment, i.e., this code protects from an environment that continuously records which error occurred in the coded quantum state. Also, it is of interest to compare the performance of different QECCs - but which yardstick should one use? We compare two such figures of merit, namely the quantum mutual information and the quantum fidelity, and show that they cannot, in general, be simultaneously maximised in an error-correcting procedure. To show this, we have used a five-qubit perfect code, but assumed a channel that only causes bit-flip errors. It appears that quantum mutual information is the better suited yardstick of the two, though more tedious to calculate than quantum fidelity - which is more commonly used.
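
The bit-flip channel used in the comparison above is also the simplest setting in which coding, syndrome measurement and recovery can be illustrated. Below is a minimal, purely classical Monte Carlo sketch of the three-qubit repetition code under independent bit-flips; it is an illustrative toy standing in for the quantum case, not the five-qubit perfect code analysed in the thesis.

```python
import random

def simulate_bitflip_code(p, trials=100_000):
    """Estimate the probability that majority-vote recovery returns the
    encoded bit when each of the three copies flips independently with
    probability p (classical analogue of the quantum bit-flip code)."""
    ok = 0
    for _ in range(trials):
        bit = random.randint(0, 1)
        received = [b ^ (random.random() < p) for b in (bit, bit, bit)]
        decoded = 1 if sum(received) >= 2 else 0
        ok += (decoded == bit)
    return ok / trials

for p in (0.01, 0.05, 0.1):
    est = simulate_bitflip_code(p)
    exact = (1 - p)**3 + 3 * p * (1 - p)**2  # zero or one flip is correctable
    print(f"p={p}: simulated {est:.4f}, exact {exact:.4f}")
```

The exact expression printed alongside is simply the probability that at most one of the three copies flips, which is what majority voting can repair.
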

2

Peikert, Christopher Jason. "Cryptographic error correction." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38320.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (leaves 67-71).
It has been said that "cryptography is about concealing information, and coding theory is about revealing it." Despite these apparently conflicting goals, the two fields have common origins and many interesting relationships. In this thesis, we establish new connections between cryptography and coding theory in two ways: first, by applying cryptographic tools to solve classical problems from the theory of error correction; and second, by studying special kinds of codes that are motivated by cryptographic applications. In the first part of this thesis, we consider a model of error correction in which the source of errors is adversarial, but limited to feasible computation. In this model, we construct appealingly simple, general, and efficient cryptographic coding schemes which can recover from much larger error rates than schemes for classical models of adversarial noise. In the second part, we study collusion-secure fingerprinting codes, which are of fundamental importance in cryptographic applications like data watermarking and traitor tracing. We demonstrate tight lower bounds on the lengths of such codes by devising and analyzing a general collusive attack that works for any code.
by Christopher Jason Peikert.
Ph.D.
3

Almlöf, Jonas. "Quantum error correction." Doctoral thesis, KTH, Kvantelektronik och -optik, QEO, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180533.

Abstract:
Quantum error correction is the art of protecting quantum states from the detrimental influence from the environment. To master this art, one must understand how the system interacts with the environment and gives rise to a full set of quantum phenomena, many of which have no correspondence in classical information theory. Such phenomena include decoherence, an effect that in general destroys superpositions of pure states as a consequence of entanglement with the environment. But decoherence can also be understood as “information leakage”, i.e., when knowledge of an encoded code block is transferred to the environment. In this event, the block’s information or entanglement content is typically lost. In a typical scenario, however, not all types of destructive events are likely to occur, but only those allowed by the information carrier, the type of interaction with the environment, and how the environment “picks up” information of the error events. These characteristics can be incorporated into a code, i.e., a channel-adapted quantum error-correcting code. Often, it is assumed that the environment’s ability to distinguish between error events is small, and I will denote such environments “memory-less”. But this assumption is not always valid, since the ability to distinguish error events is related to the temperature of the environment, and in the particular case of information coded onto photons, k_B T_R ≪ ℏω typically holds, and one must then assume that the environment has a “memory”. In the thesis I describe a short quantum error-correction code adapted for photons interacting with a “cold” reservoir, i.e., a reservoir which continuously probes what error occurred in the coded state. I also study other types of environments, and show how to distill meaningful figures of merit from codes adapted for these channels, as it turns out that resource-based figures reflecting both information and entanglement can be calculated exactly for a well-studied class of channels: the Pauli channels. Starting from these resource-based figures, I establish the notion of efficiency and quality and show that there will be a trade-off between efficiency and quality for short codes. Finally I show how to incorporate, into these calculations, the choices one has to make when handling quantum states that have been detected as incorrect, but where no prospect of correcting them exists, i.e., so-called detection errors.
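
For Pauli channels, as the abstract notes, resource-based figures can be computed exactly by enumeration. The sketch below is an illustrative toy under that observation, not the codes studied in the thesis: it enumerates i.i.d. X-error patterns on the three-qubit repetition code and sums the probability mass of the correctable ones.

```python
from itertools import product

def correctable_mass(p, n=3):
    """Exact probability that an i.i.d. X-error pattern on the n-qubit
    repetition code lies in a correctable coset (weight <= (n-1)//2)."""
    t = (n - 1) // 2
    total = 0.0
    for pattern in product((0, 1), repeat=n):  # 1 = X error on that qubit
        w = sum(pattern)
        if w <= t:
            total += p**w * (1 - p)**(n - w)
    return total

for p in (0.01, 0.05, 0.1):
    print(f"p={p}: exactly correctable mass = {correctable_mass(p):.4f}")
```
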

4

Ng, Wing-han Christina. "Does error correction lead to error reduction?" Thesis, Hong Kong : University of Hong Kong, 2002. http://sunzi.lib.hku.hk/hkuto/record.jsp?B26173347.

5

Jeffs, Stephen David. "Error correction in memory /." Title page, table of contents and abstract only, 2004. http://web4.library.adelaide.edu.au/theses/09ARPS/09arpsj474.pdf.

6

Zhang, Wenbo. "Unary error correction coding." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/419401/.

Abstract:
In this thesis, we introduce the novel concept of Unary Error Correction (UEC) coding. Our UEC code is a Joint Source and Channel Coding (JSCC) scheme conceived for performing both the compression and error correction of multimedia information during its transmission from an encoder to a decoder. The UEC encoder generates a bit sequence by concatenating and encoding unary codewords, while the decoder operates on the basis of a trellis that has only a modest complexity, even when the source symbol values are selected from a set having an infinite cardinality, such as the set of all positive integers. This trellis is designed so that the transitions between its states are synchronous with the transitions between the consecutive unary codewords in the concatenated bit sequence. This allows the UEC decoder to exploit any residual redundancy that remains following UEC encoding for the purpose of error correction by using the classic Bahl, Cocke, Jelinek and Raviv (BCJR) algorithm. Owing to this, the UEC code is capable of mitigating any potential capacity loss, hence facilitating near-capacity operation, even when the cardinality of the symbol value set is infinite. We investigate the applications, characteristics and performance of the UEC code in the context of digital telecommunications. Firstly, we propose an adaptive UEC design for expediting the decoding process. By concatenating the UEC code with a turbo code, we conceive a three-stage concatenated adaptive iterative decoding technique. A Three-Dimensional (3D) EXtrinsic Information Transfer (EXIT) chart technique is proposed for both controlling the dynamic adaptation of the UEC trellis decoder, as well as for controlling the activation order between the UEC decoder and the turbo decoder. Secondly, we develop an irregular UEC design for ‘nearer-capacity’ operation. The irregular scheme employs different UEC parametrizations for the coding of different subsets of each message frame, operating on the basis of a single irregular trellis having a novel structure. This allows the irregularity to be controlled on a fine-grained bit-by-bit basis, rather than on a symbol-by-symbol basis. Hence, nearer-to-capacity operation is facilitated by exploiting this fine-grained control of the irregularity. Thirdly, we propose a learning-aided UEC design for transmitting symbol values selected from unknown and non-stationary probability distributions. The learning-aided UEC scheme is capable of heuristically inferring the source symbol distribution, hence eliminating the requirement of any prior knowledge of the symbol occurrence probabilities at either the transmitter or the receiver. This is achieved by inferring the source distribution based on the received symbols and by feeding this information back to the decoder. In this way, the quality of the recovered symbols and the estimate of the source distribution can be gradually improved in successive frames, hence allowing reliable near-capacity operation to be achieved, even if the source is unknown and non-stationary. Finally, we demonstrate that the research illustrated in this thesis can be extended in several directions, by highlighting a number of opportunities for future work. The techniques proposed for enhancing the UEC code can be extended to the Rice Error Correction (RiceEC) code, to the Elias Gamma Error Correction (EGEC) code and to the Exponential Golomb Error Correction (ExpGEC) code. 
In this way, our UEC scheme may be extended to the family of universal error correction codes, which facilitate the near-capacity transmission of infinite-cardinality symbol alphabets having any arbitrary monotonic probability distribution, as well as providing a wider range of applications. With these benefits, this thesis may contribute to future standards for the reliable near-capacity transmission of multimedia information, having significant technical and economic impact.
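
A UEC encoder operates on concatenated unary codewords. A minimal sketch of plain unary encoding and decoding follows (one common convention, mapping n to n-1 ones and a terminating zero; the error-protection trellis itself is beyond this sketch):

```python
def unary_encode(symbols):
    """Unary-encode positive integers: n -> (n-1) ones followed by a zero,
    then concatenate (one common convention; the thesis may differ)."""
    return "".join("1" * (n - 1) + "0" for n in symbols)

def unary_decode(bits):
    """Invert unary_encode: count ones up to each terminating zero."""
    symbols, run = [], 0
    for b in bits:
        if b == "1":
            run += 1
        else:
            symbols.append(run + 1)
            run = 0
    return symbols

seq = [3, 1, 4, 1, 5]
code = unary_encode(seq)
print(code)                  # 11001110011110
assert unary_decode(code) == seq
```
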
7

Babar, Zunaira. "Quantum error correction codes." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/380165/.

Abstract:
Quantum parallel processing techniques are capable of solving certain complex problems at a substantially lower complexity than their classical counterparts. From the perspective of telecommunications, this quantum-domain parallel processing provides a plausible solution for achieving full-search based multi-stream detection, which is vital for future gigabit-wireless systems. The peculiar laws of quantum mechanics have also spurred interest in the absolutely secure quantum-based communication systems. Unfortunately, quantum decoherence imposes a hitherto insurmountable impairment on the practical implementation of quantum computation as well as on quantum communication systems, which may be overcome with the aid of efficient error correction codes. In this thesis, we design error correction codes for the quantum domain, which is an intricate journey from the realm of classical channel coding theory to that of the Quantum Error Correction Codes (QECCs). Since quantum-based communication systems are capable of supporting the transmission of both classical and quantum information, we initially focus our attention on the code design for entanglement-assisted classical communication over the quantum depolarizing channel. We conceive an Extrinsic Information Transfer (EXIT) chart aided near-capacity classical-quantum code design, which invokes a classical Irregular Convolutional Code (IRCC) and a Unity Rate Code (URC) in conjunction with our proposed soft-decision aided SuperDense Code (SD). Hence, it is referred to as an ‘IRCC-URC-SD’ arrangement. The proposed scheme is intrinsically amalgamated with both 2-qubit and 3-qubit SD coding protocols and it is benchmarked against the corresponding entanglement-assisted classical capacity. Since the IRCC-URC-SD scheme is a bit-based design, it incurs a capacity loss. As a further advance, we conceive a symbol-based concatenated code design, referred to as a symbol-based ‘CC-URC-SD’, which relies on a single-component classical Convolutional Code (CC). Additionally, for the sake of reducing the associated decoding complexity, we also investigate the impact of the constraint length of the convolutional code on the achievable performance. Our initial designs, namely IRCC-URC-SD and CC-URC-SD, exploit redundancy in the classical domain. By contrast, QECCs relying on the quantum-domain redundancy are indispensable for conceiving a quantum communication system supporting the transmission of quantum information and also for quantum computing. Therefore, we next provide insights into the transformation from the family of classical codes to the class of quantum codes known as ‘Quantum Stabilizer Codes’ (QSC), which invoke the classical syndrome decoding. Particularly, we detail the underlying quantum-to-classical isomorphism, which facilitates the design of meritorious families of QECCs from the known classical codes. We further study the syndrome decoding techniques operating over classical channels, which may be exploited for decoding QSCs. In this context, we conceive a syndrome-based block decoding approach for the classical Turbo Trellis Coded Modulation (TTCM), whose performance is investigated for transmission over an Additive White Gaussian Noise (AWGN) channel as well as over an uncorrelated Rayleigh fading channel. Pursuing our objective of designing efficient QECCs, we next consider the construction of Hashing-bound-approaching concatenated quantum codes.
In this quest, we appropriately adapt the conventional non-binary EXIT charts for Quantum Turbo Codes (QTCs) by exploiting the intrinsic quantum-to-classical isomorphism. We further demonstrate the explicit benefit of our EXIT-chart technique for achieving a Hashing-bound-approaching code design. We also propose a generically applicable structure for Quantum Irregular Convolutional Codes (QIRCCs), which can be dynamically adapted to a specific application scenario with the aid of the EXIT charts. More explicitly, we provide a detailed design example by constructing a 10-subcode QIRCC and use it as an outer code in a concatenated quantum code structure for evaluating its performance. Working further in the direction of iterative code structures, we survey Quantum Low Density Parity Check (QLDPC) codes from the perspective of code design as well as in terms of their decoding algorithms. Furthermore, we propose a radically new class of high-rate row-circulant Quasi-Cyclic QLDPC (QC-QLDPC) codes, which can be constructed from arbitrary row-circulant classical QC-LDPC matrices. We also conceive a modified non-binary decoding algorithm for homogeneous Calderbank-Shor-Steane (CSS)-type QLDPC codes, which is capable of alleviating the problems imposed by the unavoidable length-4 cycles. Our modified decoder outperforms the state-of-the-art decoders in terms of their Word Error Rate (WER) performance, despite imposing a reduced decoding complexity. Finally, we intricately amalgamate our modified decoder with the classic Uniformly-ReWeighted Belief Propagation (URW-BP) for the sake of achieving further performance improvement.
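
The quantum-to-classical isomorphism mentioned above is most concrete in CSS constructions, where two classical parity-check matrices yield a stabilizer code provided their X- and Z-type checks commute over GF(2). A small numpy check follows, using the [7,4] Hamming matrix for both check types (the standard Steane-code construction, shown as a generic illustration rather than anything specific to this thesis):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; using it for both the
# X- and Z-type checks gives the Steane [[7,1,3]] CSS code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def css_compatible(Hx, Hz):
    """CSS condition: every X-check commutes with every Z-check,
    i.e. Hx @ Hz.T vanishes over GF(2)."""
    return not np.any((Hx @ Hz.T) % 2)

print(css_compatible(H, H))  # True: the Hamming code contains its dual
```
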
8

Fiehler, Katja. "Temporospatial characteristics of error correction /." Leipzig ; München : MPI for Human Cognitive and Brain Sciences, 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=013077731&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

9

Fletcher, Andrew Stephen. "Channel-adapted quantum error correction." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40497.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 159-163).
Quantum error correction (QEC) is an essential concept for any quantum information processing device. Typically, QEC is designed with minimal assumptions about the noise process; this generic assumption exacts a high cost in efficiency and performance. We examine QEC methods that are adapted to the physical noise model. In physical systems, errors are not likely to be arbitrary; rather we will have reasonable models for the structure of quantum decoherence. We may choose quantum error correcting codes and recovery operations that specifically target the most likely errors. This can increase QEC performance and also reduce the required overhead. We present a convex optimization method to determine the optimal (in terms of average entanglement fidelity) recovery operation for a given channel, encoding, and information source. This is solvable via a semidefinite program (SDP). We derive an analytic solution to the optimal recovery for the case of stabilizer codes, the completely mixed input source, and channels characterized by Pauli group errors. We present computational algorithms to generate near-optimal recovery operations structured to begin with a projective syndrome measurement.
These structured operations are more computationally scalable than the SDP required for computing the optimal recovery; we can thus numerically analyze longer codes. Using Lagrange duality, we bound the performance of the structured recovery operations and show that they are nearly optimal in many relevant cases. We present two classes of channel-adapted quantum error correcting codes specifically designed for the amplitude damping channel. These have significantly higher rates with shorter block lengths than corresponding generic quantum error correcting codes. Both classes are stabilizer codes, and have good fidelity performance with stabilizer recovery operations. The encoding, syndrome measurement, and syndrome recovery operations can all be implemented with Clifford group operations.
by Andrew Stephen Fletcher.
Ph.D.
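
As a concrete baseline for the figure of merit used above, the following numpy sketch computes the average entanglement fidelity of the standard projective-syndrome recovery for the three-qubit bit-flip code under independent bit-flip noise. This is an illustrative assumption: the thesis optimizes the recovery via an SDP, whereas this sketch only evaluates the generic recovery.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Isometry encoding one logical qubit: |0> -> |000>, |1> -> |111>.
V = np.zeros((8, 2))
V[0, 0] = 1.0
V[7, 1] = 1.0

def channel_kraus(p):
    """Independent bit-flip noise: every tensor product of sqrt(1-p)*I and
    sqrt(p)*X over the three physical qubits is a Kraus operator."""
    singles = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]
    return [kron3(a, b, c) for a in singles for b in singles for c in singles]

def recovery_kraus():
    """Projective syndrome measurement, matching correction, then decoding
    (V.T maps the code space back to one qubit)."""
    X1, X2, X3 = kron3(X, I2, I2), kron3(I2, X, I2), kron3(I2, I2, X)
    groups = [(np.eye(8), [0b000, 0b111]),  # no error
              (X1,        [0b100, 0b011]),  # qubit 1 flipped
              (X2,        [0b010, 0b101]),  # qubit 2 flipped
              (X3,        [0b001, 0b110])]  # qubit 3 flipped
    ops = []
    for corr, idx in groups:
        P = np.zeros((8, 8))
        P[idx, idx] = 1.0
        ops.append(V.T @ corr @ P)
    return ops

def entanglement_fidelity(p):
    """F_e = (1/d^2) * sum |Tr(R_k E_i V)|^2 over all Kraus combinations."""
    return sum(abs(np.trace(R @ E @ V))**2
               for R in recovery_kraus() for E in channel_kraus(p)) / 4.0

for p in (0.01, 0.05, 0.10):
    print(f"p={p}: F_e = {entanglement_fidelity(p):.4f}"
          f" (uncoded qubit: {1 - p:.4f})")
```

For small p the coded fidelity 1 - 3p^2 beats the uncoded 1 - p, which is the usual sanity check for this code.
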
10

Wang, Tao. "Elias Gamma Error Correction Code." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/400268/.

Abstract:
Shannon’s source-channel coding separation theorem states that near-capacity communication is theoretically possible, when employing Separate Source and Channel Codes (SSCCs), provided that an unlimited encoding/decoding delay and complexity can be afforded. However, it is typically impossible to remove all source redundancy with the aid of practical finite-delay and finite-complexity source encoding, which leads to capacity loss. As a potential remedy, Joint Source and Channel Codes (JSCCs) have been proposed for exploiting the residual redundancy and hence for avoiding any capacity loss. However, all previous JSCCs have been designed for representing symbol values that are selected from a set having a low cardinality and hence they suffer from an excessive decoding complexity, when the cardinality of the symbol value set is large, leading to an infinite complexity, when the cardinality is infinite. Motivated by this, we propose the family of Unary Error Correction (UEC), Elias Gamma Error Correction (EGEC) and Reordered Elias Gamma Error Correction (REGEC) codes in this thesis. Our family of codes belongs to the JSCC class designed to have only a modest complexity that is independent of the cardinality of the symbol value set. We exemplify the application of each of the codes in the context of a serially concatenated iterative decoding scheme. In each coding scheme, the encoder generates a bit sequence by encoding and concatenating codewords, while the decoder performs iterative decoding using the classic Logarithmic Bahl, Cocke, Jelinek and Raviv (Log-BCJR) algorithm. Owing to this, our proposed codes are capable of mitigating any potential capacity loss, hence facilitating near-capacity operation. Our proposed UEC code is the first JSCC that maintains a low decoding complexity, when invoked for representing symbol values that are selected from a set having large or even infinite cardinality. The UEC trellis is designed to describe the unary codewords so that the transitions between its states are synchronous with the transitions between the consecutive codewords in the bit sequence. The unary code employed in the UEC code has a simple structure, which can be readily exploited for error correction without requiring an excessive number of trellis transitions and states. However, the UEC scheme has found limited applications, since the unary code is not a universal code. This motivates the design of our EGEC code, which is the first universal code in our code family. The EGEC code relies on a trellis representation of the EG code, which is generated by decomposing each symbol into two sub-symbols, for the sake of simplifying the structure of the EG code. However, the reliance on these two parts requires us to carefully tailor the Unequal Protection (UEP) of the two parts for the specific source probability distribution encountered, whilst the actual source distribution may be unknown or non-stationary. Additionally, the complex structure of the EGEC code may impose further disadvantages associated with an increased decoding delay, loss of synchronisation, capacity loss and increased complexity due to puncturing. This motivates us to propose a universal JSCC REGEC code, which has a significantly simpler structure than the EGEC code. The proposed codes were benchmarked against SSCC benchmarkers throughout this thesis and they were found to offer significant gains in all cases. Finally, we demonstrate that our code family proposed in this thesis can be extended in several potential directions.
The sophisticated techniques that have been subsequently proposed in the thesis for extending the UEC code, such as irregular trellis designs and the adaptive distribution-learning algorithm, can be readily applied to the REGEC code, which is an explicit benefit of its simple trellis structure. Furthermore, our proposed REGEC code can be extended using techniques that have been subsequently proposed for extending the EGEC code, both to Rice Error Correction (RiceEC) codes and to Exponential Golomb Error Correction (ExpGEC) codes.
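
For reference, the Elias gamma code that the EGEC scheme builds on admits a very short implementation (standard textbook convention; the thesis further decomposes each symbol for trellis decoding):

```python
def elias_gamma_encode(symbols):
    """Elias gamma code: n >= 1 is written as floor(log2 n) zeros followed
    by the binary form of n (which always starts with a 1)."""
    out = []
    for n in symbols:
        b = bin(n)[2:]
        out.append("0" * (len(b) - 1) + b)
    return "".join(out)

def elias_gamma_decode(bits):
    symbols, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":
            zeros += 1
            i += 1
        symbols.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return symbols

seq = [1, 2, 9, 42]
code = elias_gamma_encode(seq)
print(code)  # 1 010 0001001 00000101010 (concatenated)
assert elias_gamma_decode(code) == seq
```
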
11

Eidestedt, Richard, and Stefan Ekberg. "Evaluating forecast accuracy for Error Correction constraints and Intercept Correction." Thesis, Uppsala universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-194423.

Abstract:
This paper examines the forecast accuracy of an unrestricted Vector Autoregressive (VAR) model for GDP, relative to a comparable Vector Error Correction (VEC) model that recognizes that the data are characterized by co-integration. In addition, an alternative forecast method, Intercept Correction (IC), is considered for further comparison. Recursive out-of-sample forecasts are generated for both models and forecast techniques. The generated forecasts for each model are objectively evaluated by a selection of evaluation measures and tests of equal forecast accuracy. The results show that the VEC models consistently outperform the VAR models. Further, IC enhances the forecast accuracy when applied to the VEC model, while there is no such indication when applied to the VAR model. For certain forecast horizons there is a significant difference in forecast ability between the VEC IC model and the VAR model.
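
A minimal sketch of the VAR-versus-VECM forecast comparison, on synthetic cointegrated data and assuming the statsmodels API; the intercept-correction step (adjusting forecasts by the last observed residual) is omitted for brevity:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

# Hypothetical bivariate example: two cointegrated random walks.
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=400))
data = pd.DataFrame({
    "y1": common + rng.normal(scale=0.5, size=400),
    "y2": 0.8 * common + rng.normal(scale=0.5, size=400),
})
train, test = data.iloc[:380], data.iloc[380:]

# VECM with one cointegrating relation vs. an unrestricted VAR in levels.
vecm_res = VECM(train, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
vecm_fc = vecm_res.predict(steps=len(test))

var_res = VAR(train).fit(maxlags=2)
var_fc = var_res.forecast(train.values[-var_res.k_ar:], steps=len(test))

for name, fc in (("VECM", vecm_fc), ("VAR", var_fc)):
    rmse = np.sqrt(((test.values - fc) ** 2).mean())
    print(f"{name} RMSE: {rmse:.3f}")
```
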
12

Valentini, Lorenzo. "Quantum Error Correction for Quantum Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Abstract:
Quantum networks, like many other technologies such as quantum computers, require reliable qubits in order to operate. To this end, this thesis presents the topic of quantum error correction, paying particular attention to quantum low-density parity-check (QLDPC) codes. In addition, some algorithms are tested on IBMQ, the series of quantum computers made available online by IBM, in order to understand their practical issues. The thesis concludes with some reflections on how the codes presented can mitigate some of the problems encountered during implementation on quantum computers.
13

Pondini, Andrea. "Quantum error correction e toric code." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21053/.

Abstract:
This thesis studies quantum error correction, the branch of quantum computing concerned with errors in computation and how to correct them. This field is of fundamental importance for building quantum computers, in which interaction with the environment introduces noise into the computation and causes the states of the system to lose coherence. Particular attention is devoted to the study of stabilizer codes, an important class of quantum error-correcting codes. Among these, the toric code is studied, a notable example of a topologically ordered stabilizer code. The peculiarities of the code are a consequence of its definition on a lattice embedded in a toroidal surface, as the name suggests.
14

Lundqvist, Henrik. "Error Correction Coding for Optical CDMA." Licentiate thesis, KTH, Microelectronics and Information Technology, IMIT, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1637.

Abstract:

The optical fiber is a very attractive communication medium since it offers a large bandwidth and low attenuation and can therefore facilitate demanding services such as high-quality video transmission. As the reach of optical fiber is being extended to the access network it is economically attractive to share fibers between different users without adding active components in the network. The most common multiple access method for such passive optical networks is time division multiple access (TDMA), but lately there has been an increased interest in using wavelength division multiple access (WDMA) and optical code division multiple access (OCDMA). This thesis evaluates forward error correction as a method to improve the performance of passive optical networks, in particular OCDMA networks.

Most studies of OCDMA use simple channel models focusing only on the multiple access interference. However, beat noise is the main performance limitation for many implementations of OCDMA. Beat noise occurs when multiple optical fields are incident on a receiver, because of the square-law detection. To make a realistic evaluation of OCDMA, channel models which take interference, beat noise and other noise types into account are studied in this thesis. Both direct sequencing CDMA and fast frequency hopping are considered as spreading methods. An efficient simulation method was developed in order to simulate systems with forward error correction (FEC) and soft decoding. The simulations show that the performance is significantly overestimated when the beat noise is neglected. In order to decrease the error rate without using overly complex equipment the bandwidth has to be increased. Simulation results show that it is beneficial to use error correction codes in addition to spreading codes for the bandwidth expansion. The efficiency can be further improved by using soft decoding; therefore maximum likelihood decoding methods for the OCDMA channels are developed and demonstrate a significant reduction in the error rate. Frequency hopping and direct sequencing are also compared with each other, and the results show that temporally coded OCDMA is more sensitive to beat noise.

In addition, the performance of a low complexity soft decoding method for Reed-Solomon codes is evaluated. Soft decoding of Reed-Solomon codes has not yet found practical use because the earlier proposed methods do not offer sufficient performance gains to motivate the increased complexity. The bit-level Chase decoding algorithm evaluated here can be easily implemented using any algebraic decoder.
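
A minimal sketch of the bit-level Chase idea: flip the least reliable bits in all combinations, run an algebraic hard decoder on each candidate, and keep the codeword with the best soft metric. A [7,4] Hamming decoder stands in for the Reed-Solomon decoder used in the thesis (an illustrative substitution):

```python
import numpy as np
from itertools import product

# Parity-check matrix of the [7,4] Hamming code (column j is j in binary).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def hamming_hard_decode(bits):
    """Single-error-correcting hard decoder: the syndrome, read as a binary
    number, is the (1-based) position of the flipped bit."""
    syndrome = (H @ bits) % 2
    pos = int("".join(map(str, syndrome)), 2)
    out = bits.copy()
    if pos:
        out[pos - 1] ^= 1
    return out

def chase2_decode(llr, t=2):
    """Bit-level Chase-2: flip every subset of the t least reliable bits,
    hard-decode each candidate, keep the codeword with the best metric."""
    hard = (llr < 0).astype(int)          # hard decisions from LLRs
    weak = np.argsort(np.abs(llr))[:t]    # least reliable positions
    best, best_metric = None, -np.inf
    for flips in product((0, 1), repeat=t):
        cand = hard.copy()
        cand[weak] ^= np.array(flips)
        cw = hamming_hard_decode(cand)
        # Correlation metric: large when the codeword agrees with the LLRs.
        metric = np.sum((1 - 2 * cw) * llr)
        if metric > best_metric:
            best, best_metric = cw, metric
    return best

# Toy usage: the all-zero codeword sent over a noisy channel.
rng = np.random.default_rng(1)
llr = 2.0 + rng.normal(scale=2.0, size=7)  # positive LLR favours bit 0
print(chase2_decode(llr))
```
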

15

Gul, Yusuf. "Entanglement Transformations And Quantum Error Correction." Phd thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610773/index.pdf.

Abstract:
The main subject of this thesis is the investigation of the transformations of pure multipartite entangled states having Schmidt rank 2 by using only local operations assisted by classical communication (LOCC). A new parameterization is used for describing the entangled state of p particles distributed to p distant, spatially separated persons. Product, bipartite and truly multipartite states are identified in this new parameterization. Moreover, alternative parameterizations of local operations carried out by each party are provided. For the case of a deterministic transformation to a truly multipartite final state, one can find an analytic expression that determines whether such a transformation is possible. In this case, a chain of measurements by each party for carrying out the transformation is found. It can also be seen that, under deterministic LOCC transformations, there are some quantities that remain invariant. For the purpose of applying the results of this thesis in the context of quantum information and computation, brief reviews of entanglement purification, measurement-based quantum computation and quantum codes are given.
16

Almulhem, Abdulaziz S. "Error correction techniques for ATM communications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0009/NQ36627.pdf.

17

Tang, Stanley C. "Robot positioning error analysis and correction." Thesis, This resource online, 1987. http://scholar.lib.vt.edu/theses/available/etd-04122010-083623/.

18

Flaten, Sindre Drolsum. "Energy Efficient Reed-Solomon Error Correction." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22911.

Abstract:
Energy efficient implementations are very important in order to increase the operating time for battery-powered devices. In this thesis a Reed-Solomon encoder and decoder have been implemented. The implementations have been synthesized using a 45nm technology library and power estimations have been performed. To find the most energy efficient implementation, several implementation techniques were evaluated. The implemented system is a 5-bit, RS(31, 27) code. For a Reed-Solomon encoder with low activity, the energy consumption can be reduced by over 40% with the use of clock gating. Several different Reed-Solomon decoder configurations were implemented and synthesized. When comparing the energy consumption of the different configurations, a configuration with two-parallel syndrome cells and pipelined Chien search, Forney and error correction module were found to be the most energy efficient. This configuration had a 36% lower energy consumption compared to a configuration with the same parallel syndrome cells, and no pipelined modules. It also had a 7% lower energy consumption compared to a configuration with the same pipelined modules and the standard syndrome cells.
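
For concreteness, the arithmetic behind such an RS(31, 27) design can be sketched in a few lines: GF(2^5) tables, systematic encoding, and the four syndrome evaluations that the syndrome cells compute in hardware. The primitive polynomial x^5 + x^2 + 1 and generator roots a^1..a^4 are assumed conventions; the thesis may use others.

```python
# GF(2^5) arithmetic and systematic RS(31, 27) encoding (5-bit symbols).
PRIM = 0b100101                  # x^5 + x^2 + 1, assumed primitive polynomial

EXP, LOG = [0] * 62, [0] * 32
acc = 1
for i in range(31):
    EXP[i] = acc
    LOG[acc] = i
    acc <<= 1
    if acc & 0b100000:
        acc ^= PRIM
for i in range(31, 62):          # duplicate for wrap-free multiplication
    EXP[i] = EXP[i - 31]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_eval(poly, point):      # coefficients highest-degree first (Horner)
    result = 0
    for coeff in poly:
        result = gf_mul(result, point) ^ coeff
    return result

# Generator polynomial g(x) = (x + a)(x + a^2)(x + a^3)(x + a^4).
g = [1]
for i in range(1, 5):
    new = g + [0]
    for j in range(len(g)):
        new[j + 1] ^= gf_mul(g[j], EXP[i])
    g = new

def rs_encode(msg):              # 27 data symbols -> 31-symbol codeword
    buf = list(msg) + [0] * 4
    for i in range(27):          # synthetic division by the monic g(x)
        coeff = buf[i]
        if coeff:
            for j in range(1, 5):
                buf[i + j] ^= gf_mul(g[j], coeff)
    return list(msg) + buf[27:]  # parity symbols are the remainder

cw = rs_encode(list(range(1, 28)))
print([poly_eval(cw, EXP[i]) for i in range(1, 5)])  # [0, 0, 0, 0]
cw[3] ^= 9                                           # inject a symbol error
print([poly_eval(cw, EXP[i]) for i in range(1, 5)])  # nonzero syndromes
```
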
19

Bowden, T. G. "Natural language techniques for error correction." Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596815.

Abstract:
Dealing with human errors such as spelling or grammar mistakes is a necessary part of natural language processing. The aim of this project was to investigate how far error detection and correction could proceed when the system purview was set to a sub-sentential stretch of text. This restriction comes from cooperative error handling: detecting/correcting errors just after user entry, as the user is entering further text. Short-context, or shallow, processing is also interesting because it is potentially cheaper and faster than a full-scale parse and because sentential constraints become less reliable when the 'sentence' is ill-formed. There has been no previous report on the effectiveness of local syntactic constraints on general (English) ill-formedness. Additionally, all error-processing programs, other than some working in very restricted domains, have been post-processors rather than cooperative. Being post-processors, previous programs have been concerned with errors left undetected after some degree of proofreading. Cooperative processing is also aimed at the errors people spend time backtracking to catch. In the absence of existing suitable data, a corpus of keystrokes made by subjects entering a piece of text was collated; errors were classified as caught or uncaught and various interesting analyses emerged. For context-less processing, a method based on morphological error rules and another on binary positional trigrams were devised and compared. Then, to incorporate context, local syntactic constraints based on tag information were implemented, using bigram and trigram co-occurrence checks with a Markov tagging procedure. The tag-based constraints were compared with a partial parsing method. These error handlers were evaluated on data from the Keystroke Corpus and on other data manufactured and collected. The morphological error rules and tag-based checks using very short context were the most promising. As far as current comparison allows, there being a scarcity of reported results in this area, the short-context techniques implemented here compared well against full-parsing error handlers. Ideas outlined for future work include a method for further identifying detected word-scope errors and a practical, usable cooperative corrector based on an extension of an existing commercial application.
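
One of the context-less methods mentioned above, binary positional trigrams, can be sketched in a few lines: a word is flagged when it contains a (position, trigram) pair unattested in a lexicon. The tiny lexicon here is a stand-in assumption:

```python
LEXICON = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog",
           "correction", "detection", "language"]

def positional_trigrams(word):
    padded = f"##{word}##"            # pad so word edges form trigrams too
    return {(i, padded[i:i + 3]) for i in range(len(padded) - 2)}

KNOWN = set().union(*(positional_trigrams(w) for w in LEXICON))

def looks_misspelt(word):
    """True if any positional trigram of the word is unattested."""
    return not positional_trigrams(word) <= KNOWN

for w in ["dog", "dgo", "quick", "qiuck"]:
    print(w, "->", "flagged" if looks_misspelt(w) else "ok")
```
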
20

Alic, Andrei Stefan. "Improved Error Correction of NGS Data." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/67630.

Abstract:
The work done for this doctoral thesis focuses on error correction of Next Generation Sequencing (NGS) data in the context of High Performance Computing (HPC). Due to the reduction in sequencing cost, the increasing output of the sequencers and the advancements in the biological and medical sciences, the amount of NGS data has increased tremendously. Humans alone are not able to keep pace with this explosion of information, therefore computers must assist them to ease the handling of the deluge of information generated by the sequencing machines. Since NGS is no longer just a research topic (it is used in clinical routine to detect cancer mutations, for instance), requirements in performance and accuracy are more stringent. For sequencing to be useful outside research, the analysis software must work accurately and fast. This is where HPC comes into play. NGS processing tools should leverage the full potential of multi-core and even distributed computing, as those platforms are extensively available. Moreover, as the performance of the individual core has hit a barrier, current computing tendencies focus on adding more cores and explicitly splitting the computation to take advantage of them. This thesis starts with a deep analysis of all these problems in a general and comprehensive way (to reach out to a very wide audience), in the form of an exhaustive and objective review of the NGS error correction field. We dedicate a chapter to this topic to introduce the reader gradually and gently into the world of sequencing. It presents real problems and applications of NGS that demonstrate the impact this technology has on science. The review results in the following conclusions: the need to understand the specificities of NGS data samples (given the high variety of technologies and features) and the need for flexible, efficient and accurate tools for error correction as a preliminary step of any NGS postprocessing. As a result of the explosion of NGS data, we introduce MuffinInfo. It is a piece of software capable of extracting information from the raw data produced by the sequencer to help the user understand the data. MuffinInfo uses HTML5, therefore it runs in almost any software and hardware environment. It supports custom statistics to mould itself to specific requirements. MuffinInfo can reload the results of a run, which are stored in JSON format for easier integration with third-party applications. Finally, our application uses threads to perform the calculations, to load the data from the disk and to handle the UI. In continuation of our research, and as a result of the single-core performance limitation, we leverage the power of multi-core computers to develop a new error correction tool. The error correction of NGS data is normally the first step of any analysis targeting NGS. As we conclude from the review performed within the frame of this thesis, many projects in different real-life applications have opted for this step before further analysis. In this sense, we propose MuffinEC, a multi-technology (Illumina, Roche 454, Ion Torrent and, experimentally, PacBio), any-type-of-error (mismatches, deletions, insertions and unknown values) corrector. It surpasses other similar software by providing higher accuracy (demonstrated by three types of tests) and using fewer computational resources. It follows a multi-step approach that starts by grouping all the reads using a k-mer-based metric. Next, it employs the powerful Smith-Waterman algorithm to refine the groups and generate Multiple Sequence Alignments (MSAs). These MSAs are corrected by taking each column and looking for the correct base, determined by a user-adjustable percentage. This manuscript is structured in chapters based on material that has been previously published in prestigious journals indexed by the Journal of Citation Reports (in outstanding positions) and relevant congresses.
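
The final consensus step described above can be sketched as follows: given pre-aligned reads, each column is corrected to its majority base when the majority reaches a user-adjustable fraction. This is an illustrative reconstruction, not MuffinEC's actual code:

```python
from collections import Counter

def consensus_correct(msa, threshold=0.7):
    """Column-wise correction of a multiple sequence alignment: replace each
    base by the column consensus when it reaches the given fraction.
    '-' marks a gap; reads are assumed to be pre-aligned."""
    corrected = [list(read) for read in msa]
    for col in range(len(msa[0])):
        bases = [read[col] for read in msa if read[col] != "-"]
        if not bases:
            continue
        base, count = Counter(bases).most_common(1)[0]
        if count / len(bases) >= threshold:
            for read in corrected:
                if read[col] != "-":
                    read[col] = base
    return ["".join(read) for read in corrected]

reads = ["ACGTAC",
         "ACGTAC",
         "ACGTAC",
         "ACCTAC",   # likely sequencing error in the third column
         "AC-TAC"]   # gap: left untouched
for r in consensus_correct(reads):
    print(r)
```
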
Alic, AS. (2016). Improved Error Correction of NGS Data [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/67630
21

Gonzales, Alvin Rafer. "QUANTUM ERROR CORRECTION FOR GENERAL NOISE." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/dissertations/1894.

Abstract:
Large quantum computers have the potential to vastly outperform any classical computer. The biggest obstacle to building quantum computers of such size is noise. For example, state-of-the-art superconducting quantum computers have average decoherence (loss of information) times of just microseconds. Thus, the field of quantum error correction is especially crucial to progress in the development of quantum technologies. In this research, we study quantum error correction for general noise, which is given by a linear Hermitian map. In standard quantum error correction, the usual assumption is to constrain the errors to completely positive maps, which are a special case of linear Hermitian maps. We establish constraints and sufficient conditions for the possible error-correcting codes that can be used for linear Hermitian maps. Afterwards, we expand these sufficient conditions to cover a large class of general errors. These conditions lead to currently known conditions in the limit that the error map becomes completely positive. The later chapters give general results for quantum evolution maps: a set of weak repeated projective measurements that never break entanglement, and an asymmetric depolarizing map composed with a not-completely-positive map that yields a completely positive composition. Finally, we give examples.
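
As a baseline for the generalised conditions discussed above, the standard Knill-Laflamme error-correction conditions for completely positive noise can be checked numerically: P E_i† E_j P must be proportional to the code projector P. A numpy sketch for the three-qubit bit-flip code (illustrative only; the thesis treats more general linear Hermitian maps):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Projector onto the 3-qubit bit-flip code space span{|000>, |111>}.
P = np.zeros((8, 8))
P[0, 0] = P[7, 7] = 1.0

# Error set: identity plus single bit-flips (a completely positive baseline).
errors = [kron3(I2, I2, I2), kron3(X, I2, I2),
          kron3(I2, X, I2), kron3(I2, I2, X)]

def knill_laflamme_ok(P, errors, tol=1e-12):
    """Standard QEC conditions: P E_i^† E_j P proportional to P for all i, j."""
    for Ei in errors:
        for Ej in errors:
            M = P @ Ei.conj().T @ Ej @ P
            lam = np.trace(M) / np.trace(P)
            if not np.allclose(M, lam * P, atol=tol):
                return False
    return True

print(knill_laflamme_ok(P, errors))  # True: single X errors are correctable
```
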
22

Mei, Yan. "Combined forward error correction and error concealment for digital video transmission." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ64061.pdf.

23

Gaspari, Andrea. "Quantum error correction and the toric code." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21591/.

Abstract:
Quantum error correction is the main subject of this thesis. After a general introduction to the fundamentals of quantum mechanics and quantum computing, the problem is presented and further analysed using two different approaches: one, more practical, based on quantum circuits, and one, purely theoretical, based on the stabilizer formalism. Examples of the principal quantum codes are progressively supplied to help comprehension. To conclude, attention is drawn to the toric code, which represents one of the most promising platforms for storing quantum information.
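
A compact way to make the toric code concrete is through its two classical parity-check matrices. The sketch below (the standard construction, shown as a generic illustration) builds the star and plaquette checks on an L x L torus and verifies that they commute:

```python
import numpy as np

def toric_code_checks(L):
    """Parity-check matrices of the toric code on an L x L torus.
    Qubits live on the 2*L*L edges; each row of Hx is a star (vertex)
    operator and each row of Hz a plaquette operator, over GF(2)."""
    n = 2 * L * L
    h = lambda r, c: (r % L) * L + (c % L)          # horizontal edge at (r, c)
    v = lambda r, c: L * L + (r % L) * L + (c % L)  # vertical edge at (r, c)
    Hx = np.zeros((L * L, n), dtype=int)
    Hz = np.zeros((L * L, n), dtype=int)
    for r in range(L):
        for c in range(L):
            s = r * L + c
            for q in (h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)):
                Hx[s, q] = 1   # four edges meeting at vertex (r, c)
            for q in (h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)):
                Hz[s, q] = 1   # four edges bounding plaquette (r, c)
    return Hx, Hz

Hx, Hz = toric_code_checks(4)
# Stabilizers commute iff Hx @ Hz.T = 0 over GF(2); one row of each matrix
# is redundant, which is what leaves the two encoded (logical) qubits.
print(np.any((Hx @ Hz.T) % 2))  # False
```
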
24

Dickinson, Markus. "Error detection and correction in annotated corpora." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1123788552.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xvi, 269 p.; also includes graphics (some col.). Includes bibliographical references (p. 241-259). Available online via OhioLINK's ETD Center
25

Valverde Martínez, David, and Parada Otte, Francisco Javier. "Forward Error Correction for Packet Switched Networks." Thesis, Linköping University, Communication Systems, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11093.

Abstract:

The main goal in this thesis is to select and test Forward Error Correction (FEC) schemes suitable for network video transmission over RTP/UDP. A general concern in communication networks is to achieve a tradeoff between reliable transmission and the delay it incurs. Our purpose is to look for techniques that improve the reliability while the real-time delay constraints are fulfilled. In order to achieve this, the FEC techniques focus on recovering the packet losses that come up along any transmission. The FEC schemes that we have selected are the Parity Check algorithm, Reed-Solomon (RS) codes and a Convolutional code. Simulations are performed to test the different schemes.

The results obtained show that the RS codes are the most powerful schemes in terms of recovery capabilities. However, they cannot be deployed for every configuration since they go beyond the delay threshold. On the other hand, although the Parity Check codes are the least efficient in terms of error recovery, they show a reasonably low delay. Therefore, depending on the packet loss probability that we are working with, we may choose one or the other of the different schemes. To summarize, this thesis includes a theoretical background, a thorough analysis of the FEC schemes chosen, simulation results, conclusions and proposed future work.
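
The simplest of the three schemes, the Parity Check algorithm, is easy to make concrete: one XOR parity packet per block lets the receiver rebuild any single lost packet. A minimal sketch (packet framing and RTP/UDP specifics omitted):

```python
def xor_packets(packets):
    """Byte-wise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

def protect(block):
    """Append one parity packet to a block of k data packets: any single
    lost packet in the block can then be rebuilt."""
    return block + [xor_packets(block)]

def recover(received):
    """received: the block with exactly one entry replaced by None."""
    missing = received.index(None)
    present = [p for p in received if p is not None]
    rebuilt = xor_packets(present)
    return received[:missing] + [rebuilt] + received[missing + 1:]

data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
sent = protect(data)
sent[2] = None                      # the network drops one packet
print(recover(sent)[2])             # b'pkt2'
```
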

26

Ratzer, Edward Alexander. "Error-correction on non-standard communication channels." Thesis, University of Cambridge, 2004. https://www.repository.cam.ac.uk/handle/1810/237471.

Abstract:
Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels. First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1dB. Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels. Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes formed from low-density parity-check codewords with regular markers inserted within them are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes. Finally, coding for a system with latency constraints is studied. For example, if a telemetry system involves a slow channel some error correction is often needed quickly whilst the code should be able to correct remaining errors later. A new code is formed from the intersection of a convolutional code with a high rate low-density parity-check code. The convolutional code has good early decoding performance and the high rate low-density parity-check code efficiently cleans up remaining errors after receiving the entire block. Simulations of the block code show a gain of 1.5dB over a standard NASA code.
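
A minimal sketch of the synchronisation-error setting described above: an insertion/deletion channel with uncorrelated events, plus the insertion of regular markers that a decoder could use to re-synchronise (marker pattern and period are illustrative assumptions, not the thesis's parameters):

```python
import random

def indel_channel(bits, p_ins=0.01, p_del=0.01, seed=0):
    """Uncorrelated synchronisation errors: before each position a random
    bit may be inserted, and the position itself may be deleted."""
    rng = random.Random(seed)
    out = []
    for b in bits:
        if rng.random() < p_ins:
            out.append(rng.randint(0, 1))
        if rng.random() >= p_del:
            out.append(b)
    return out

def add_markers(bits, marker=(0, 1), period=10):
    """Marker code: insert a known pattern after every `period` payload
    bits so a decoder can re-synchronise."""
    out = []
    for i, b in enumerate(bits):
        out.append(b)
        if (i + 1) % period == 0:
            out.extend(marker)
    return out

payload = [random.Random(1).randint(0, 1) for _ in range(40)]
tx = add_markers(payload)
rx = indel_channel(tx, p_ins=0.05, p_del=0.05)
print(len(tx), "->", len(rx))  # length drift betrays lost synchronisation
```
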
APA, Harvard, Vancouver, ISO, and other styles
27

Lewis, Matthew. "Error correction of generalised algebraic-geometry codes." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Spagnolo, Fabio. "Nonlinear error-correction models with regime switching." Thesis, Birkbeck (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

MAGRI, RAFAEL RIBEIRO. "NONLINEAR ERROR CORRECTION MODELS: ESTIMATION AND TESTING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=34955@1.

Full text
Abstract:
Existing tests for nonlinearity in vector error correction models are computationally intensive and have nuisance parameters in the asymptotic distribution, which calls for cumbersome bootstrap calculations in order to assess the distribution. Our work proposes a consistent test which is implementable in any statistical package and has Chi-Squared asymptotics. Moreover, Monte Carlo experiments show that in small samples our test has good size and power properties, often better than the preexisting tests. We also provide a condition under which a consistent two-step estimator for the model parameters is asymptotically normal. Application to international agricultural commodity prices shows evidence of nonlinear adjustment to the long-run equilibrium in wheat prices.
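To make the two-step procedure concrete, here is a minimal Engle-Granger-style sketch on simulated data: the long-run relation is estimated first, and its lagged residual then enters the short-run regression as the error-correction term. The nonlinear adjustment function of the actual model is replaced by a plain linear one, so this is illustrative only.

```python
# Two-step error-correction estimation on simulated cointegrated data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))            # I(1) regressor
y = 0.5 * x + rng.normal(size=n)             # cointegrated with x

# Step 1: estimate the long-run relation y_t = beta * x_t + u_t
beta = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
ect = y - beta * x                           # error-correction term

# Step 2: regress dy_t on ect_{t-1} and dx_t; alpha should be negative
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([ect[:-1], dx])
alpha, gamma = np.linalg.lstsq(X, dy, rcond=None)[0]
print(f"beta={beta:.2f}, alpha={alpha:.2f}, gamma={gamma:.2f}")
```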
APA, Harvard, Vancouver, ISO, and other styles
30

Chang, Danica H. (Danica Hill). "Natural error correction techniques for sketch recognition." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82371.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 55-56).
Over the past few years, a plethora of tablet devices has made it very easy for users to input information by sketching as if on paper. In addition, sketch recognition systems help users convert these sketches into information that the computer understands. While much work has been done on developing better sketch recognizers, very little work has been done on how to edit a sketch once it has been drawn, whether the error is the user's or the sketch recognizer's. In response, we developed and studied intuitive methods of interacting with a sketch recognition system to correct errors made by both the recognizer and the user. The editor allows users to click and lasso to select parts of the sketch, label the selected strokes, erase by scribbling over strokes, and even overwrite errors. Letting users provide feedback to the sketch recognizer helps improve the accuracy of the sketch and allows the recognizer's performance to improve over time.
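One of the editing gestures mentioned, scribble erase, can be approximated with a very simple heuristic: a stroke with many sharp horizontal direction reversals over a small extent is treated as an erase gesture. The thresholds and the detection logic below are invented for illustration and do not reproduce the thesis's recognizer.

```python
# Toy scribble-gesture detector: dense back-and-forth motion over a
# small bounding box is classified as an erase gesture.

def is_scribble(points, min_reversals=4, max_extent=80.0):
    xs = [p[0] for p in points]
    reversals, direction = 0, 0
    for (x0, _), (x1, _) in zip(points, points[1:]):
        d = 1 if x1 > x0 else -1 if x1 < x0 else 0
        if d and direction and d != direction:
            reversals += 1            # horizontal direction reversal
        if d:
            direction = d
    return reversals >= min_reversals and (max(xs) - min(xs)) <= max_extent

zigzag = [(x, 0) for x in [0, 30, 5, 35, 10, 40, 15]]
print(is_scribble(zigzag))   # True: dense back-and-forth stroke
```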
by Danica H. Chang.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
31

Sheldon, Sarah (Sarah Elizabeth). "Second order error correction in quantum computing." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44834.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2008.
Includes bibliographical references (leaf 23).
Error correction codes are necessary for the development of reliable quantum computers. Such codes can prevent the loss of information from decoherence caused by external perturbations. This thesis evaluates a five-qubit code for correcting second-order bit-flip errors. The code consists of encoding, decoherence, decoding, and error correction steps. This work analyzes the proposed code using geometric algebra methods and examines the state of the system after each step in the process.
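For readers new to the idea, the sketch below walks through the simpler three-qubit bit-flip code, which corrects a single (first-order) bit flip; the five-qubit scheme evaluated in the thesis extends this protection to second-order errors. It builds states and operators with numpy only, and the encoded amplitudes are arbitrary.

```python
# Three-qubit bit-flip code: encode, apply one flip, read the syndrome,
# and correct. State vectors live in an 8-dimensional space.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode a|0> + b|1>  ->  a|000> + b|111>
a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b111] = a, b

state = kron(I2, X, I2) @ state          # bit flip on the middle qubit

# Syndrome = parities Z1Z2 and Z2Z3, read off the basis-state index here
idx = int(np.argmax(np.abs(state)))
s1 = ((idx >> 2) ^ (idx >> 1)) & 1       # qubit 1 vs qubit 2
s2 = ((idx >> 1) ^ idx) & 1              # qubit 2 vs qubit 3
flip = {(0, 0): None, (1, 1): 1, (1, 0): 0, (0, 1): 2}[(s1, s2)]
if flip is not None:
    ops = [I2, I2, I2]
    ops[flip] = X
    state = kron(*ops) @ state           # apply the correcting flip
print(state[0b000], state[0b111])        # back to (a, b)
```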
by Sarah Sheldon.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
32

Buys, Jan Moolman. "Probabilistic tree transducers for grammatical error correction." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85592.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: We investigate the application of weighted tree transducers to correcting grammatical errors in natural language. Weighted finite-state transducers (FST) have been used successfully in a wide range of natural language processing (NLP) tasks, even though the expressiveness of the linguistic transformations they perform is limited. Recently, there has been an increase in the use of weighted tree transducers and related formalisms that can express syntax-based natural language transformations in a probabilistic setting. The NLP task that we investigate is the automatic correction of grammar errors made by English language learners. In contrast to spelling correction, which can be performed with a very high accuracy, the performance of grammar correction systems is still low for most error types. Commercial grammar correction systems mostly use rule-based methods. The most common approach in recent grammatical error correction research is to use statistical classifiers that make local decisions about the occurrence of specific error types. The approach that we investigate is related to a number of other approaches inspired by statistical machine translation (SMT) or based on language modelling. Corpora of language learner writing annotated with error corrections are used as training data. Our baseline model is a noisy-channel FST model consisting of an n-gram language model and a FST error model, which performs word insertion, deletion and replacement operations. The tree transducer model we use to perform error correction is a weighted top-down tree-to-string transducer, formulated to perform transformations between parse trees of correct sentences and incorrect sentences. Using an algorithm developed for syntax-based SMT, transducer rules are extracted from training data of which the correct version of sentences have been parsed. Rule weights are also estimated from the training data. Hypothesis sentences generated by the tree transducer are reranked using an n-gram language model. We perform experiments to evaluate the performance of different configurations of the proposed models. In our implementation an existing tree transducer toolkit is used. To make decoding time feasible sentences are split into clauses and heuristic pruning is performed during decoding. We consider different modelling choices in the construction of transducer rules. The evaluation of our models is based on precision and recall. Experiments are performed to correct various error types on two learner corpora. The results show that our system is competitive with existing approaches on several error types.
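The baseline noisy-channel idea can be shown in a few lines: each candidate correction is scored by the product of a channel probability (the cost of the hypothesized edit) and a language model probability. The bigram table and edit costs below are invented toy numbers, not the thesis's trained models.

```python
# Toy noisy-channel scorer: LM log-probability plus channel log-probability.
import math

bigram_logp = {                       # tiny hand-built "n-gram LM"
    ("he", "goes"): math.log(0.10),
    ("he", "go"):   math.log(0.001),
}
channel_logp = {"keep": math.log(0.98), "replace": math.log(0.02)}

def score(prev_word, candidate, edit):
    return bigram_logp.get((prev_word, candidate), math.log(1e-6)) \
         + channel_logp[edit]

candidates = [("go", "keep"), ("goes", "replace")]
best = max(candidates, key=lambda c: score("he", *c))
print(best[0])   # 'goes': the LM gain outweighs the channel edit penalty
```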
APA, Harvard, Vancouver, ISO, and other styles
33

Jones, David Rhys. "Forward Error Correction in Memoryless Optical Modulation." UNF Digital Commons, 2011. http://digitalcommons.unf.edu/etd/171.

Full text
Abstract:
The unprecedented growth in demand for digital media has led to an all-time high in society's demand for information. This demand will in all likelihood continue to grow as technologies such as 3D television service, on-demand video and peer-to-peer networking become more commonplace. The large amount of information required is currently transmitted optically using a wavelength division multiplexing (WDM) network structure. Increasing the capacity of the existing WDM network infrastructure efficiently is essential to continue to provide new high-bandwidth services to end-users, while at the same time minimizing network providers' costs. In WDM systems the key to reducing the cost per transported information bit is to effectively share all optical components. These components must operate within the same wavelength-limited window; therefore it is necessary to place the WDM channels as close together as possible. At the same time, the correct modulation format must be selected in order to create flexible, cost-effective, high-capacity optical networks. This thesis presents a detailed comparison of Differential Quadrature Phase Shift Keying (DQPSK) with other modulation formats. The comparison is implemented through a series of simulations in which the bit error rates of various modulation formats are compared with and without forward error correction. Based on these simulation results, the top-performing modulation formats are placed into a multiplexed simulation to assess their overall robustness in the face of multiple filtering impairments.
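The following Monte Carlo sketch shows the shape of such a BER simulation for plain Gray-coded QPSK over AWGN, checked against the Q-function prediction; DQPSK differential encoding and the FEC stage are omitted to keep it short, and all parameter values are arbitrary.

```python
# Monte Carlo BER of Gray-coded QPSK over AWGN vs the analytic value.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
ebn0_db = 6.0
ebn0 = 10 ** (ebn0_db / 10)

n_bits = 200_000
bits = rng.integers(0, 2, size=(n_bits // 2, 2))
# Gray mapping: each axis carries one bit; unit symbol energy
sym = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / sqrt(2)
sigma = sqrt(1 / (4 * ebn0))                  # noise std per dimension
rx = sym + sigma * (rng.normal(size=sym.size) + 1j * rng.normal(size=sym.size))
hat = np.column_stack([(rx.real > 0), (rx.imag > 0)]).astype(int)

ber = np.mean(hat != bits)
theory = 0.5 * erfc(sqrt(ebn0))               # Q(sqrt(2 Eb/N0))
print(f"simulated {ber:.4f} vs theoretical {theory:.4f}")
```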
APA, Harvard, Vancouver, ISO, and other styles
34

Lu, Yi. "Error correction codes for molecular communication systems." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88085/.

Full text
Abstract:
Molecular communications (MC) is a bio-inspired paradigm that aims to utilise molecules to exchange information among nano-machines. Given the tiny devices used in an MC system and the feasibility of MC in biological environments, MC can be applied to many applications ranging from healthcare to manufacturing. In order to better realize these applications in the future, this Ph.D. research is dedicated to the investigation of a more functional, precise and reliable Diffusion-based Molecular Communications (DBMC) system. To achieve this goal, the contributions of this thesis are as follows. Firstly, the point-to-point (PTP) DBMC system with the absorbing receiver model is established and investigated, together with a study of the accuracy of the analytical channel model. Secondly, depending on the types of transmitter (TX) and receiver (RX), three different communication scenarios are proposed. Thirdly, to enhance the reliability of the information at the RX, Error Correction Codes (ECCs), the most prominent such technique, are employed within the DBMC system to control or correct errors introduced during the transmission process. Fourthly, given the limited power budget of nano-machines, the energy efficiency of the system is also taken into account. Finally, a two-receiver broadcast DBMC system is established with an absorbing interfering receiver (RI) and an absorbing target receiver (RT). By analysing the performance of the communication link between the TX and RT (the target communication link), the impact of the position of the RI on the RT is studied. This study indicates that the application of ECCs does enhance the performance of PTP DBMC systems. In addition, the encoder and decoder design and the BER performance are shown to be the two primary factors for selecting the most suitable ECC for an application. Finally, for a two-receiver broadcast DBMC system with absorbing receivers, the existence of the RI does affect the performance of the target communication link, which is a crucial result for the field moving forward.
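For the absorbing-receiver channel mentioned above, a standard closed-form first-hitting expression is often used: the fraction of molecules absorbed by time t is (rr/d) · erfc((d − rr)/√(4Dt)), where d is the transmitter-to-receiver-centre distance, rr the receiver radius and D the diffusion coefficient. The sketch below evaluates it with arbitrary parameter values; it is a generic textbook model, not necessarily the exact channel model of the thesis.

```python
# Fraction of released molecules absorbed by a spherical absorbing
# receiver by time t, in free 3D diffusion.
from math import erfc, sqrt

def hit_fraction(t, D, d, rr):
    """P(molecule absorbed by time t) = (rr/d) * erfc((d - rr)/sqrt(4 D t))."""
    return (rr / d) * erfc((d - rr) / sqrt(4 * D * t))

D  = 79.4e-12      # diffusion coefficient (m^2/s), arbitrary choice
d  = 500e-9        # transmitter-to-receiver-centre distance (m)
rr = 45e-9         # receiver radius (m)

for t in (1e-3, 1e-2, 1e-1):
    print(f"t={t:.0e} s  absorbed fraction={hit_fraction(t, D, d, rr):.3f}")
```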
APA, Harvard, Vancouver, ISO, and other styles
35

Андрейко, Лариса Володимирівна, Лариса Владимировна Андрейко, and Larysa Volodymyrivna Andreyko. "Error correction strategies in English speaking classroom." Thesis, ВВП "Мрія", 2014. http://essuir.sumdu.edu.ua/handle/123456789/59245.

Full text
Abstract:
The article analyses the effectiveness of error correction in the speaking classroom, considering when and how to correct errors in order to help learners improve their speaking skills. Some of the most effective error correction strategies are suggested.
APA, Harvard, Vancouver, ISO, and other styles
36

Layden, David. "Device- and application-adapted quantum error correction." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127314.

Full text
Abstract:
Thesis: Ph. D. in Quantum Science and Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 185-194).
Precise control of coherent quantum systems could enable new generations of sensing, communication and computing technologies. Such systems, however, are typically noisy and difficult to stabilize. One promising technique to this end is called quantum error correction, which encodes quantum states in such a way that errors can be detected and corrected, much like in classical error-correcting codes. Quantum error-correcting codes usually cast a wide net, in that they are designed to correct errors regardless of their physical origins. In large-scale devices, this is an essential feature. It comes at a cost, however: conventional quantum codes are typically resource-intensive in terms of both the system size and the control operations they require. Yet, in smaller-scale devices the main error sources are often well-understood. In the near term, it may therefore be advantageous to cast a more targeted net through specialized codes. This thesis presents new families of such quantum error-correcting codes, which are adapted either for leading candidate devices, or for near-term applications. The device-adapted codes require exponentially less overhead than conventional codes to achieve the same level of protection, whereas the application-adapted codes can enhance quantum sensors, in which conventional codes cannot readily be used. The new techniques presented in this thesis adapt cornerstones of conventional theory in light of key experimental challenges and opportunities. The ultimate goal of this research is to help bridge the gap between the exacting requirements of proposed quantum technologies and the realities of emerging quantum devices. Bridging this gap is critical, if quantum technologies are to realize their full potential.
by David Layden.
Ph. D. in Quantum Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
37

Cohen, Joachim. "Autonomous quantum error correction with superconducting qubits." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE008/document.

Full text
Abstract:
In this thesis, we develop several tools in the direction of autonomous Quantum Error Correction (QEC) with superconducting qubits. We design an autonomous QEC scheme based on quantum reservoir engineering, in which transmon qubits are coupled to lossy modes. Through an engineered interaction between these systems, the entropy created by eventual errors is evacuated via the dissipative modes. The second part of this work focuses on the recently developed cat codes, through which the logical information is encoded in the large Hilbert space of a harmonic oscillator. We propose a scheme to perform continuous and quantum non-demolition measurements of photon-number parity in a microwave cavity, which corresponds to the error syndrome in the cat code. In our design, we exploit the strongly nonlinear Hamiltonian of a high-impedance Josephson circuit, coupling a high-Q storage cavity mode to a low-Q readout one. Last, as a follow-up to the above results, we present several continuous and/or autonomous QEC schemes using the cat code. These schemes provide robust protection against dominant error channels in the presence of multi-photon driven dissipation.
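Why photon-number parity serves as the cat-code error syndrome can be checked numerically in a truncated Fock space: an even cat state has parity +1, and a single photon loss flips it to −1. The numpy sketch below is illustrative only; the truncation size and amplitude are arbitrary.

```python
# Photon-number parity of a cat state before and after a photon loss.
import numpy as np
from math import factorial, exp

N = 30                                    # Fock-space truncation
alpha = 2.0

def coherent(a):
    return np.array([exp(-abs(a)**2 / 2) * a**n / np.sqrt(factorial(n))
                     for n in range(N)])

cat = coherent(alpha) + coherent(-alpha)  # even cat (unnormalised)
cat /= np.linalg.norm(cat)

parity = np.array([(-1)**n for n in range(N)], dtype=float)
print(np.sum(parity * np.abs(cat)**2))    # ~ +1: even parity

a_op = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
lossy = a_op @ cat                        # one photon lost
lossy /= np.linalg.norm(lossy)
print(np.sum(parity * np.abs(lossy)**2))  # ~ -1: photon loss flips parity
```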
APA, Harvard, Vancouver, ISO, and other styles
38

Cheung, Kar-Ming McEliece Robert J. "Error-correction coding in data storage systems /." Diss., Pasadena, Calif. : California Institute of Technology, 1987. http://resolver.caltech.edu/CaltechETD:etd-02282008-133009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Urbani, Camilla. "Stabilizer Codes for Quantum Error Correction and Synchronization." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
This thesis project aims to deepen the basic concepts of quantum mechanics, with particular reference to quantum information theory and quantum error correction codes, which are fundamental for the correct reception of information. The relations between these codes and classical ones are investigated through their representation in terms of stabilizers, and a possible error detection code is then developed. A classical problem in communication systems, namely frame synchronization, is also examined and discussed in the context of quantum communication systems.
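In the stabilizer formalism the thesis builds on, syndrome extraction reduces to binary arithmetic: a Pauli error anticommutes with a stabilizer generator exactly when the symplectic inner product of their binary (x|z) representations is 1. A minimal sketch for the three-qubit bit-flip code follows; this code is a standard textbook example, not necessarily the one developed in the thesis.

```python
# Stabilizer syndrome extraction in the binary symplectic representation.
import numpy as np

n = 3
# Each n-qubit Pauli -> binary vector (x | z) of length 2n.
Z1Z2 = np.array([0, 0, 0, 1, 1, 0])   # stabilizer generators
Z2Z3 = np.array([0, 0, 0, 0, 1, 1])
generators = [Z1Z2, Z2Z3]

def syndrome(error):
    x, z = error[:n], error[n:]
    bits = []
    for g in generators:
        gx, gz = g[:n], g[n:]
        bits.append(int(x @ gz + z @ gx) % 2)   # symplectic inner product
    return tuple(bits)

X2 = np.array([0, 1, 0, 0, 0, 0])     # bit flip on qubit 2
print(syndrome(X2))                   # (1, 1): identifies the flipped qubit
```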
APA, Harvard, Vancouver, ISO, and other styles
40

Rummel, David. "Correction for covariate measurement error in nonparametric regression." Diss., [S.l.] : [s.n.], 2006. http://edoc.ub.uni-muenchen.de/archive/00006436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bäckström, Fredrik, and Anders Ivarsson. "Meta-Model Guided Error Correction for UML Models." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8746.

Full text
Abstract:

Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.

Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.

The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.

The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.
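In the spirit of the guideline checking described above, the sketch below expresses one invented rule (class names must be UpperCamelCase) as a predicate over model elements and emits correction suggestions, optionally applying them automatically. It does not reproduce MetaModelAgent's actual rule format.

```python
# Toy guideline check with correction suggestions and auto-fix.
from dataclasses import dataclass

@dataclass
class UmlClass:
    name: str

def check_naming(model, fix=False):
    """Guideline: class names must start with an uppercase letter."""
    issues = []
    for elem in model:
        if not elem.name[:1].isupper():
            suggestion = elem.name[:1].upper() + elem.name[1:]
            issues.append((elem.name, suggestion))
            if fix:
                elem.name = suggestion    # automatic correction
    return issues

model = [UmlClass("orderItem"), UmlClass("Customer")]
print(check_naming(model, fix=True))      # [('orderItem', 'OrderItem')]
```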

APA, Harvard, Vancouver, ISO, and other styles
42

Klinc, Demijan. "On applications of puncturing in error-correction coding." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39610.

Full text
Abstract:
This thesis investigates applications of puncturing in error-correction coding and physical layer security, with an emphasis on binary and non-binary LDPC codes. A theoretical framework for the analysis of punctured binary LDPC codes at short block lengths is developed, and a novel decoding scheme is designed that achieves considerably faster convergence than conventional approaches. Subsequently, optimized puncturing and shortening are studied for non-binary LDPC codes over binary input channels. A framework for the analysis of punctured/shortened non-binary LDPC codes over the BEC is developed, which enables the optimization of puncturing and shortening patterns. Insight from this analysis is used to develop well-performing algorithms for puncturing and shortening non-binary LDPC codes at finite block lengths. It is confirmed that symbol-wise puncturing is generally bad and that bit-wise punctured non-binary LDPC codes can significantly outperform their binary counterparts, thus making them an attractive solution for future communication systems, both for error correction and distributed compression. Puncturing is also considered in the context of physical layer security. It is shown that puncturing can be used effectively for coding over the wiretap channel to hide the message bits from eavesdroppers, and that puncturing patterns can be optimized for enhanced secrecy. Asymptotic analysis confirms that eavesdroppers are forced to operate at BERs very close to 0.5, even if their signal is only slightly worse than that of the legitimate receivers. The proposed coding scheme is naturally applicable at finite block lengths and allows for efficient, almost-linear time encoding. Finally, it is shown how error-correcting codes can be used to solve an open problem: compressing data encrypted with block ciphers such as AES. Coding schemes for multiple chaining modes are proposed and it is verified that considerable compression gains are attainable for binary sources.
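The basic mechanics of puncturing can be seen with a (7,4) Hamming toy example: one coded bit is simply never transmitted, and the decoder treats that position as an erasure resolved by the remaining parity checks. The code and puncture position below are standard textbook choices, not those optimized in the thesis.

```python
# Puncture one bit of a (7,4) Hamming codeword and recover it as an erasure.
import numpy as np

G = np.array([[1,0,0,0,1,1,0],      # systematic (7,4) Hamming generator
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],      # matching parity-check matrix
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

msg = np.array([1, 0, 1, 1])
cw = msg @ G % 2
punctured_pos = 6                    # this bit is never sent

rx = cw.copy()
rx[punctured_pos] = 0                # receiver's placeholder guess
# Erasure decoding: solve for the punctured bit from any check touching it.
for row in H:
    if row[punctured_pos] == 1:
        known = [i for i in range(7) if i != punctured_pos]
        rx[punctured_pos] = int(row[known] @ rx[known]) % 2
        break
print(np.array_equal(rx, cw))        # True: erasure resolved
```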
APA, Harvard, Vancouver, ISO, and other styles
43

Han, Hillary H. "Measurement-error bias correction in spawner-recruitment relationships." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37541.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Davey, M. C. "Error-correction using low-density parity-check codes." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598305.

Full text
Abstract:
Gallager's low-density parity-check codes are defined by sparse parity-check matrices, usually with a random construction. Such codes have near-Shannon-limit performance when decoded using an iterative probabilistic decoding algorithm. We report two advances that improve the error-correction performance of these codes. First, by defining the codes over non-binary fields we can obtain a 0.6 dB improvement in signal to noise ratio for a given bit error rate. Second, using irregular parity-check matrices with non-uniform row and column weights we obtain gains of up to 0.5 dB. The empirical error-correction performance of irregular low-density parity-check codes is unbeaten for the additive white Gaussian noise channel. Low-density parity-check codes are also shown to be useful for communicating over channels which make insertions and deletions as well as additive (substitution) errors. Error-correction for such channels has not been widely studied, but is of importance whenever synchronisation of sender and receiver is imperfect. We introduce concatenated codes using novel non-linear inner codes which we call 'watermark' codes, and low-density parity-check codes over non-binary fields as outer codes. The inner code allows resynchronisation using a probabilistic decoder, providing soft outputs for the outer low-density parity-check decoder. Error-correction performance using watermark codes is several orders of magnitude better than any comparable results in the literature.
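To give a feel for iterative decoding on a sparse parity-check matrix, here is a compact hard-decision bit-flipping decoder on a toy binary H; the thesis itself uses the richer sum-product algorithm over non-binary fields, so this is a simplified stand-in. The matrix and received word are arbitrary.

```python
# Serial bit-flipping decoding on a small sparse parity-check matrix.
import numpy as np

H = np.array([[1,1,0,1,0,0],
              [0,1,1,0,1,0],
              [1,0,0,0,1,1],
              [0,0,1,1,0,1]])

def bit_flip_decode(r, H, max_iter=10):
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                      # all checks satisfied
        votes = syndrome @ H              # unsatisfied-check count per bit
        r[np.argmax(votes)] ^= 1          # flip the worst offender
    return r

codeword = np.zeros(6, dtype=int)         # all-zero word is always valid
rx = codeword.copy()
rx[3] = 1                                 # one channel error
print(bit_flip_decode(rx, H))             # -> all zeros again
```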
APA, Harvard, Vancouver, ISO, and other styles
45

Tomita, Yu. "Numerical and analytical studies of quantum error correction." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53468.

Full text
Abstract:
A reliable large-scale quantum computer, if built, could solve many real-life problems exponentially faster than existing digital devices. The biggest obstacle to building one is that quantum computers are extremely sensitive and error-prone regardless of the choice of physical implementation. Both data storage and data manipulation require careful implementation and precise control due to their quantum mechanical nature. For the development of a practical and scalable computer, it is essential to identify possible quantum errors and reduce them throughout every layer of the hierarchy of quantum computation. In this dissertation, we present our investigation into new methods to reduce errors in quantum computers from three different directions: quantum memory, quantum control, and quantum error correcting codes. For quantum memory, we pursue the potential of the quantum equivalent of a magnetic hard drive using two-body-interaction structures in fractal dimensions. With regard to quantum control, we show that it is possible to arbitrarily reduce error when manipulating multiple quantum bits using a technique popular in nuclear magnetic resonance. Finally, we introduce an efficient tool to study quantum error correcting codes and present analysis of the codes' performance on model quantum architectures.
APA, Harvard, Vancouver, ISO, and other styles
46

Johnson, Ian. "Towards the identification of intent for error correction." Thesis, King's College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Kokkola, N. "A double-error correction computational model of learning." Thesis, City, University of London, 2017. http://openaccess.city.ac.uk/18838/.

Full text
Abstract:
In this thesis, the Double Error model, a general computational model of real-time learning, is presented. It builds upon previous real-time error-correction models and assumes that associative connections form not only between stimuli and reinforcers, but between all types of stimuli in a connectionist network. The stimulus representation uses temporally distributed elements with memory traces, and a process of expectation-based attentional modulation for both reinforcers and non-reinforcing stimuli is introduced. A modified error-correction learning rule is proposed, which incorporates error terms for both the predicted and the predicting stimulus. The static asymptote of learning familiar from other models is replaced by a similarity measure between the activities of said stimuli, so that more temporally correlated stimulus representations form stronger associative links. Associative retrieval based on previously formed links results in the model predicting mediated learning and pre-exposure effects. As a general model of learning, it accounts for phenomena predicted by extant learning models. For instance, its use of error-correction learning produces a natural account of cue-competition effects such as blocking and overshadowing. Its elemental framework, which incorporates overlapping sets of elements to represent stimuli, leads it to predict non-linear discriminations, including biconditional discriminations and negative patterning. The observation that adding a cue to an excitatory compound stimulus leads to a lower generalization decrement than removing a cue from said compound also follows from this representational assumption. The model further makes a number of unique predictions. The apparent contradiction of mediated learning proceeding in opposite directions in backward blocking and mediated conditioning is predicted through the model's dynamic asymptote. Latent inhibition is accounted for as occurring through both learning and selective attention. The selective attention of the model likewise produces emergent effects when instantiated in the real-time dynamics of the model, predicting that the relatively best predictor of an outcome can sustain the largest amount of attention compared to poorer predictors of that outcome. The model is evaluated theoretically, through simulations of learning experiments, and mathematically, to demonstrate its generality and formal validity. Further, a simplified version of the model is contrasted against other models on a simple artificial classification task, showcasing the power of the fully-connected nature of the model, as well as of its second error term, in enabling the model's performance as a classifier. Finally, numerous avenues of future work have been explored. I have completed a proof-of-concept deep recurrent network extension of the model, instantiated with reference to machine learning theory, and applied the second error term of the model to modulating backpropagation through time in a vanilla RNN.
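A schematic rendering of the double-error idea in code: the weight change between a predicting element i and a predicted element j is gated by prediction errors at both ends, rather than only at the predicted side as in a standard delta rule. The form and constants below are invented for illustration and are not the model's actual equations.

```python
# Toy delta rule with two error terms: the update for W[i, j] is
# modulated by the prediction errors at both i (predictor) and j
# (predicted). All stimuli may associate with all others.
import numpy as np

n = 3                            # stimulus elements
W = np.zeros((n, n))             # W[i, j]: element i predicts element j
alpha = 0.1

def update(active, W):
    """One learning step given binary activations of the elements."""
    pred = active @ W                         # each element's prediction
    err = active - pred                       # per-element prediction error
    for i in range(n):
        for j in range(n):
            if i != j:
                # double error correction: both ends modulate the change
                W[i, j] += alpha * active[i] * err[i] * err[j]
    return W

for _ in range(100):
    W = update(np.array([1.0, 1.0, 0.0]), W)  # elements 0 and 1 co-occur
print(np.round(W, 2))                         # mutual 0<->1 links grow
```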
APA, Harvard, Vancouver, ISO, and other styles
48

Park, Jason (Jason Sun-Hyung). "Designing methods for error correction in gene fabrication." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32931.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005.
Includes bibliographical references (p. 20-22).
Gene Fabrication technology involves the development and optimization of methods relevant to the in vitro synthesis of any given target gene sequence(s) in the absence of template. The driving purpose of this field of research is to bring about the capability for on-demand fabrication of a DNA construct of arbitrary length and sequence quickly, efficiently, and cost-effectively. One of the main challenges in gene fabrication is to not only synthesize a given DNA target, but to do so without making any errors. At high error rates, fabrication of long gene targets is expensive and impractical - in some cases, it is impossible. Improvements in error rates are essential for continued progress in the development of gene fabrication technology. Error reduction technologies can be broadly split into three categories at present: error filtration, error correction, and error prevention. This thesis presents the past, present, and future design of a number of quick, easy, robust, economical, and effective error reduction methods in gene fabrication.
by Jason Park.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
49

Park, Jason (Jason Sun-Hyung). "Synthesis and error correction methods in gene fabrication." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37961.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Biological Engineering Division, 2006.
Includes bibliographical references (leaves 56-59).
Gene Fabrication technology involves the development and optimization of methods relevant to the in vitro synthesis of any given target gene sequence(s) in the absence of template. The driving purpose of this field of research is to bring about the capability for on-demand fabrication of a DNA construct of arbitrary length and sequence quickly, efficiently, and cost-effectively. The first part of this document describes many of the important considerations in performing successful de novo gene synthesis from a survey of the literature as well as from our own work. Recommendations are made for a universally effective, robust, and simple protocol for potential users of gene synthesis, discussing important factors such as choice of protocol, source of commercial oligonucleotides, and polymerase choice. The second part of this document focuses on error correction. Reducing error rates is one of the main challenges in gene fabrication because high error rates preclude the possibility of fabricating long gene targets in a practical and economical manner. Improvements in error rates are essential for continued progress in the development of gene fabrication technology. I discuss the importance of error rate in gene synthesis from a practical standpoint and show results in the development of novel methods for the removal of errors from a pool of synthesized DNA.
by Jason Sun-hyung Park.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
50

Chilappagari, Shashi Kiran, and Dung Viet Nguyen. "On Guaranteed Error Correction Capability of GLDPC Codes." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606241.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
In this paper, it is shown that generalized LDPC codes can correct a linear fraction of errors under the parallel bit-flipping algorithm when the underlying Tanner graph is a good expander. A lower bound on the size of variable node sets which have the required expansion is established as a function of the column weight of the code, the girth of the Tanner graph and the error correction capability of the sub-code. By studying a class of trapping sets, it is also shown that the bound on the required expansion cannot be improved when the column weight is even. An upper bound on the guaranteed error correction capability is found by investigating the size of the smallest possible trapping sets.
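The decoder named here is easy to state in code: in each iteration, every bit that participates in more unsatisfied than satisfied checks is flipped simultaneously. The toy parity-check matrix below is far too small for the expander arguments of the paper to apply; it only illustrates the update rule itself.

```python
# Parallel bit-flipping: all majority-unsatisfied bits flip at once.
import numpy as np

H = np.array([[1,1,0,1,0,0],
              [0,1,1,0,1,0],
              [1,0,0,0,1,1],
              [0,0,1,1,0,1]])

def parallel_bit_flip(r, H, max_iter=10):
    r = r.copy()
    col_deg = H.sum(axis=0)                  # checks touching each bit
    for _ in range(max_iter):
        unsat = H @ r % 2
        if not unsat.any():
            break
        votes = unsat @ H                    # unsatisfied checks per bit
        r ^= (2 * votes > col_deg).astype(int)   # flip all such bits at once
    return r

rx = np.array([0, 0, 0, 1, 0, 0])            # single error on bit 3
print(parallel_bit_flip(rx, H))              # all-zero codeword recovered
```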
APA, Harvard, Vancouver, ISO, and other styles