Dissertations / Theses on the topic 'Error-correction'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Error-correction.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Almlöf, Jonas. "Quantum error correction." Licentiate thesis, KTH, Kvantelektronik och -optik, QEO, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106795.
This thesis is an introduction to quantum error correction, in which I investigate its kinship with the theory of classical information, but also with the less well-known field of quantum information. Quantum information describes how information can be carried by quantum states, and how interaction with other systems gives rise to numerous types of errors and effects, many of which have no counterpart in classical information theory. Among these effects is decoherence, a consequence of so-called entanglement. Decoherence can also be understood as "information leakage", i.e., knowledge of an event being transferred to the surroundings, an effect that in general destroys superpositions of pure quantum states. Using quantum error correction it is possible to protect quantum states (e.g., qubits) from the influence of the environment, although such states can never be amplified or duplicated, owing to the no-cloning theorem. The states are protected by introducing redundancy, after which they interact with the environment. Errors are identified by means of non-destructive measurements and undone using unitary gates and ancilla states. In reality, however, not every conceivable error will occur; the possible errors are constrained by the information carrier used, by the interaction that arises with the environment, and by how the environment "picks up" information about the error events. With knowledge of such characteristics one can build codes, so-called channel-adapted quantum error-correcting codes. It is usually assumed that the environment's ability to distinguish between error events is small, and one can then speak of a memoryless environment. This assumption does not always hold, since that ability is determined by the temperature of the reservoir, and in the special case where photons are used as information carriers it typically fails, so we must assume that the reservoir does in fact have a "memory". The thesis describes a short quantum error-correcting code adapted for photons interacting with a "cold" environment, i.e., a code that protects against an environment which continuously records which error has occurred in the encoded state. It is also of great interest to compare the performance of quantum error-correcting codes against some kind of yardstick, but which one? I compare two such measures, namely quantum mutual information and quantum fidelity, and show that in general they cannot both be maximized simultaneously in an error-correction procedure. To show this, a 5-qubit code was used in a hypothetical channel in which only bit-flip errors occur, so that there is room to detect errors. Quantum mutual information emerges as the better measure, although it is considerably more laborious to compute than quantum fidelity, which is the most commonly used measure.
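The encode-syndrome-recover cycle the abstract describes can be illustrated, purely classically, with the three-bit repetition code, the simplest code correcting a single bit flip. This is an illustrative sketch, not material from the thesis:

```python
# Minimal classical sketch of the 3-qubit bit-flip repetition code:
# |0_L> = |000>, |1_L> = |111>; a single flip on any position is
# identified by two parity checks (the syndrome) and undone.

def encode(bit):
    """Encode one logical bit into three copies."""
    return [bit, bit, bit]

def apply_bit_flips(codeword, flips):
    """Flip the listed positions (the 'channel')."""
    out = list(codeword)
    for i in flips:
        out[i] ^= 1
    return out

def syndrome(codeword):
    """Parity of neighbouring pairs; non-destructive in the quantum setting."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    """Each syndrome points to the unique single-position correction."""
    fix = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(codeword)]
    out = list(codeword)
    if fix is not None:
        out[fix] ^= 1
    return out

# Any single flip is corrected; two flips decode to the wrong logical bit.
assert correct(apply_bit_flips(encode(0), [1])) == [0, 0, 0]
assert correct(apply_bit_flips(encode(1), [2])) == [1, 1, 1]
assert correct(apply_bit_flips(encode(0), [0, 2])) == [1, 1, 1]
```

The last assertion shows why the thesis's channel matters: which errors are *likely* decides whether a short code like this is good enough.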
QC 20121206
Peikert, Christopher Jason. "Cryptographic error correction." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38320.
Includes bibliographical references (leaves 67-71).
It has been said that "cryptography is about concealing information, and coding theory is about revealing it." Despite these apparently conflicting goals, the two fields have common origins and many interesting relationships. In this thesis, we establish new connections between cryptography and coding theory in two ways: first, by applying cryptographic tools to solve classical problems from the theory of error correction; and second, by studying special kinds of codes that are motivated by cryptographic applications. In the first part of this thesis, we consider a model of error correction in which the source of errors is adversarial, but limited to feasible computation. In this model, we construct appealingly simple, general, and efficient cryptographic coding schemes which can recover from much larger error rates than schemes for classical models of adversarial noise. In the second part, we study collusion-secure fingerprinting codes, which are of fundamental importance in cryptographic applications like data watermarking and traitor tracing. We demonstrate tight lower bounds on the lengths of such codes by devising and analyzing a general collusive attack that works for any code.
by Christopher Jason Peikert.
Ph.D.
Almlöf, Jonas. "Quantum error correction." Doctoral thesis, KTH, Kvantelektronik och -optik, QEO, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180533.
QC 20160115
Ng, Wing-han Christina. "Does error correction lead to error reduction?" Thesis, Hong Kong : University of Hong Kong, 2002. http://sunzi.lib.hku.hk/hkuto/record.jsp?B26173347.
Jeffs, Stephen David. "Error correction in memory /." Title page, table of contents and abstract only, 2004. http://web4.library.adelaide.edu.au/theses/09ARPS/09arpsj474.pdf.
Zhang, Wenbo. "Unary error correction coding." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/419401/.
Babar, Zunaira. "Quantum error correction codes." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/380165/.
Fiehler, Katja. "Temporospatial characteristics of error correction /." Leipzig ; München : MPI for Human Cognitive and Brain Sciences, 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=013077731&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Fletcher, Andrew Stephen. "Channel-adapted quantum error correction." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40497.
Includes bibliographical references (p. 159-163).
Quantum error correction (QEC) is an essential concept for any quantum information processing device. Typically, QEC is designed with minimal assumptions about the noise process; this generic assumption exacts a high cost in efficiency and performance. We examine QEC methods that are adapted to the physical noise model. In physical systems, errors are not likely to be arbitrary; rather we will have reasonable models for the structure of quantum decoherence. We may choose quantum error correcting codes and recovery operations that specifically target the most likely errors. This can increase QEC performance and also reduce the required overhead. We present a convex optimization method to determine the optimal (in terms of average entanglement fidelity) recovery operation for a given channel, encoding, and information source. This is solvable via a semidefinite program (SDP). We derive an analytic solution to the optimal recovery for the case of stabilizer codes, the completely mixed input source, and channels characterized by Pauli group errors. We present computational algorithms to generate near-optimal recovery operations structured to begin with a projective syndrome measurement.
These structured operations are more computationally scalable than the SDP required for computing the optimal; we can thus numerically analyze longer codes. Using Lagrange duality, we bound the performance of the structured recovery operations and show that they are nearly optimal in many relevant cases. We present two classes of channel-adapted quantum error correcting codes specifically designed for the amplitude damping channel. These have significantly higher rates with shorter block lengths than corresponding generic quantum error correcting codes. Both classes are stabilizer codes, and have good fidelity performance with stabilizer recovery operations. The encoding, syndrome measurement, and syndrome recovery operations can all be implemented with Clifford group operations.
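As a concrete anchor for the figure of merit used in this abstract: for a channel with Kraus operators {A_k} and input state ρ, the entanglement fidelity is F = Σ_k |Tr(ρ A_k)|². The sketch below evaluates it for the amplitude damping channel the thesis targets; the parametrization is the standard textbook one, not code from the thesis:

```python
import numpy as np

# Entanglement fidelity F(rho, E) = sum_k |Tr(rho A_k)|^2, evaluated for
# the single-qubit amplitude damping channel with damping parameter gamma
# and the completely mixed input rho = I/2.

def amplitude_damping_kraus(gamma):
    """Standard Kraus operators for amplitude damping."""
    A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    A1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return [A0, A1]

def entanglement_fidelity(rho, kraus_ops):
    """How well the channel preserves entanglement with a reference system."""
    return sum(abs(np.trace(rho @ A)) ** 2 for A in kraus_ops)

rho = np.eye(2) / 2
for gamma in (0.0, 0.1, 0.5):
    F = entanglement_fidelity(rho, amplitude_damping_kraus(gamma))
    print(gamma, F)  # F decreases monotonically as the damping grows
```

Channel adaptation, as described above, means choosing the encoding and recovery to maximize exactly this quantity for the given {A_k}.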
by Andrew Stephen Fletcher.
Ph.D.
Wang, Tao. "Elias Gamma Error Correction Code." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/400268/.
Eidestedt, Richard, and Stefan Ekberg. "Evaluating forecast accuracy for Error Correction constraints and Intercept Correction." Thesis, Uppsala universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-194423.
Valentini, Lorenzo. "Quantum Error Correction for Quantum Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Pondini, Andrea. "Quantum error correction e toric code." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21053/.
Lundqvist, Henrik. "Error Correction Coding for Optical CDMA." Licentiate thesis, KTH, Microelectronics and Information Technology, IMIT, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1637.
The optical fiber is a very attractive communication medium since it offers a large bandwidth and low attenuation and can therefore facilitate demanding services such as high-quality video transmission. As the reach of optical fiber is being extended to the access network, it is economically attractive to share fibers between different users without adding active components in the network. The most common multiple access method for such passive optical networks is time division multiple access (TDMA), but lately there has been an increased interest in using wavelength division multiple access (WDMA) and optical code division multiple access (OCDMA). This thesis evaluates forward error correction as a method to improve the performance of passive optical networks, in particular OCDMA networks.
Most studies of OCDMA use simple channel models focusing only on the multiple access interference. However, beat noise is the main performance limitation for many implementations of OCDMA. Beat noise occurs when multiple optical fields are incident on a receiver, because of the square-law detection. To make a realistic evaluation of OCDMA, channel models which take interference, beat noise and other noise types into account are studied in this thesis. Both direct sequencing CDMA and fast frequency hopping are considered as spreading methods. An efficient simulation method was developed in order to simulate systems with forward error correction (FEC) and soft decoding. The simulations show that the performance is significantly overestimated when the beat noise is neglected. In order to decrease the error rate without using overly complex equipment the bandwidth has to be increased. Simulation results show that it is beneficial to use error correction codes in addition to spreading codes for the bandwidth expansion. The efficiency can be further improved by using soft decoding; therefore maximum likelihood decoding methods for the OCDMA channels are developed and demonstrate a significant reduction in the error rate. Frequency hopping and direct sequencing are also compared with each other, and the results show that temporally coded OCDMA is more sensitive to beat noise.
In addition, the performance of a low complexity soft decoding method for Reed-Solomon codes is evaluated. Soft decoding of Reed-Solomon codes has not yet found practical use because the earlier proposed methods do not offer sufficient performance gains to motivate the increased complexity. The bit-level Chase decoding algorithm evaluated here can be easily implemented using any algebraic decoder.
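The bit-level Chase idea in the last paragraph, flip the least reliable bits in all combinations, hard-decode each test pattern, and keep the candidate closest to the soft input, can be sketched in a few lines. As an illustrative assumption (not the thesis implementation), a Hamming(7,4) single-error decoder stands in for the algebraic Reed-Solomon decoder:

```python
import itertools
import numpy as np

# Chase decoding sketch: enumerate test patterns over the least reliable
# positions, hard-decode each with an algebraic decoder, and pick the
# candidate codeword with maximum correlation to the soft received values.

# Parity-check matrix of Hamming(7,4); the syndrome is the 1-based error position.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hamming_decode(bits):
    """Single-error-correcting hard-decision decode (stand-in algebraic decoder)."""
    s = H @ bits % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])
    out = bits.copy()
    if pos:
        out[pos - 1] ^= 1
    return out

def chase_decode(soft, num_weak=2):
    """soft: received values in [-1, 1]; the sign gives the hard decision."""
    hard = (soft < 0).astype(int)
    weak = np.argsort(np.abs(soft))[:num_weak]  # least reliable positions
    best, best_corr = None, -np.inf
    for flips in itertools.product([0, 1], repeat=num_weak):
        trial = hard.copy()
        trial[weak] ^= np.array(flips)
        cand = hamming_decode(trial)
        corr = np.sum((1 - 2 * cand) * soft)  # correlation with soft input
        if corr > best_corr:
            best, best_corr = cand, corr
    return best

# All-zero codeword sent, one strong error (index 2) and one weak bit (index 4).
soft = np.array([0.9, 0.8, -0.7, 0.9, 0.05, 0.8, 0.9])
print(chase_decode(soft))  # recovers the all-zero codeword
```

Only the hard-decision decoder is code-specific, which is exactly why the thesis notes the method works with any algebraic decoder.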
Gul, Yusuf. "Entanglement Transformations And Quantum Error Correction." PhD thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610773/index.pdf.
Almulhem, Abdulaziz S. "Error correction techniques for ATM communications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0009/NQ36627.pdf.
Tang, Stanley C. "Robot positioning error analysis and correction." Thesis, This resource online, 1987. http://scholar.lib.vt.edu/theses/available/etd-04122010-083623/.
Flaten, Sindre Drolsum. "Energy Efficient Reed-Solomon Error Correction." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22911.
Bowden, T. G. "Natural language techniques for error correction." Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596815.
Alic, Andrei Stefan. "Improved Error Correction of NGS Data." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/67630.
The work carried out in this doctoral thesis focuses on the correction of errors in data produced by NGS (next-generation sequencing) techniques, using intensive computing. Owing to falling costs and the increased throughput of sequencers, the amount of data available from NGS has grown remarkably. The use of computers in the analysis of these samples becomes essential in order to cope with the avalanche of information generated by these techniques. The use of NGS goes beyond research, with numerous examples of clinical and agronomic use, which brings new requirements on processing time and on the reliability of results. To maximize its clinical applicability, NGS data-processing techniques must be made faster and must produce more accurate data. It is in this context that intensive computing techniques play a relevant role. Nowadays it is common to have computers with several processing cores, and even to use multiple computers through distributed parallel computing; current trends towards architectures with ever more cores confirm that this is a relevant approach. The thesis begins with an analysis of the fundamental problems of NGS data processing, presented in general terms and adapted for a broad audience, through an exhaustive review of the state of the art in NGS data correction. This review gradually introduces the reader to massive sequencing techniques, presenting real problems and applications of NGS and highlighting the impact of this technology on science.
Two main ideas follow from this study: the need to properly analyse the characteristics of NGS data, given the enormous intrinsic variety of the different NGS techniques; and the need for a versatile, efficient and accurate error-correction tool. In the context of data analysis, the thesis presents MuffinInfo, a software application implemented in HTML5. MuffinInfo extracts relevant information from raw NGS data to aid the understanding of its characteristics and the application of error-correction techniques, and it supports extension through functions that implement user-defined statistics. MuffinInfo stores the results of a run in JSON files. Because it uses HTML5, MuffinInfo can run in almost any hardware and software environment. The tool is implemented using multiple threads of execution for managing the interface. The second conclusion of the state-of-the-art analysis leads to the opportunity of applying high-performance computing extensively to error correction, in order to develop a tool that supports multiple technologies (Illumina, Roche 454, Ion Torrent and, experimentally, PacBio). The proposed tool (MuffinEC) supports different types of errors (substitutions, indels and unknown values). MuffinEC outperforms the existing tools in this field, offering a better correction rate in far less time and with fewer resources, which also eases its application to larger samples on conventional computers. MuffinEC uses a multi-stage approach: first it groups all the sequences using a k-mer metric; second, it refines the groups by Smith-Waterman alignment, generating contigs.
These contigs are then corrected column by column according to the individual frequency of each base. The thesis is structured as chapters whose basis has been previously published in journals indexed in leading positions of the Journal of Citation Reports.
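The multi-stage idea attributed to MuffinEC (k-mer based grouping, then column-wise consensus correction) can be caricatured in a few lines. The greedy clustering, the parameters k and min_shared, and the equal-length reads are simplifying assumptions for illustration; this is not MuffinEC's actual algorithm:

```python
from collections import Counter

# Toy sketch of k-mer grouping followed by per-column consensus correction.

def kmers(read, k=4):
    """All length-k substrings of a read."""
    return {read[i:i + k] for i in range(len(read) - k + 1)}

def group_reads(reads, k=4, min_shared=3):
    """Greedy clustering: a read joins the first group sharing enough k-mers."""
    groups = []
    for read in reads:
        for group in groups:
            if len(kmers(read, k) & kmers(group[0], k)) >= min_shared:
                group.append(read)
                break
        else:
            groups.append([read])
    return groups

def correct_group(group):
    """Column-wise consensus: each base is replaced by the column majority."""
    consensus = "".join(Counter(col).most_common(1)[0][0] for col in zip(*group))
    return [consensus] * len(group)

reads = ["ACGTACGT", "ACGTACGT", "ACGTACCT", "TTTTGGGG"]
corrected = [r for g in group_reads(reads) for r in correct_group(g)]
print(corrected)  # the lone C->G error in the third read is voted away
```

The real pipeline replaces the naive set intersection with an efficient k-mer metric and refines groups by Smith-Waterman alignment before taking the consensus.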
Alic, AS. (2016). Improved Error Correction of NGS Data [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/67630
Gonzales, Alvin Rafer. "QUANTUM ERROR CORRECTION FOR GENERAL NOISE." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/dissertations/1894.
Mei, Yan. "Combined forward error correction and error concealment for digital video transmission." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ64061.pdf.
Gaspari, Andrea. "Quantum error correction and the toric code." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21591/.
Dickinson, Markus. "Error detection and correction in annotated corpora." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1123788552.
Title from first page of PDF file. Document formatted into pages; contains xvi, 269 p.; also includes graphics (some col.). Includes bibliographical references (p. 241-259). Available online via OhioLINK's ETD Center.
Valverde, Martínez David, and Otte Francisco Javier Parada. "Forward Error Correction for Packet Switched Networks." Thesis, Linköping University, Communication Systems, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11093.
The main goal in this thesis is to select and test Forward Error Correction (FEC) schemes suitable for network video transmission over RTP/UDP. There is a general concern in communication networks about achieving a tradeoff between reliable transmission and the delay it incurs. Our purpose is to look for techniques that improve reliability while the real-time delay constraints are fulfilled. To achieve this, the FEC techniques focus on recovering the packet losses that arise during transmission. The FEC schemes that we have selected are a Parity Check algorithm, Reed-Solomon (RS) codes and a Convolutional code. Simulations are performed to test the different schemes.
The results obtained show that the RS codes are the most powerful schemes in terms of recovery capability. However, they cannot be deployed for every configuration, since they exceed the delay threshold. On the other hand, although the Parity Check codes are the least efficient in terms of error recovery, they exhibit a reasonably low delay. Therefore, depending on the packet loss probability in question, one scheme or another may be chosen. To summarize, this thesis includes a theoretical background, a thorough analysis of the chosen FEC schemes, simulation results, conclusions and proposed future work.
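The lowest-delay scheme in such comparisons, a single parity packet per block, can be sketched directly: the repair packet is the XOR of the data packets, so any one lost packet in the block is recoverable. A minimal sketch, assuming equal-length packets:

```python
# Single-parity packet FEC: XOR all data packets into one repair packet;
# any one lost packet equals the XOR of the parity with the survivors.

def make_parity(packets):
    """Byte-wise XOR of all packets in the block."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(block, parity, lost_index):
    """Rebuild the packet at lost_index from the parity and the other packets."""
    missing = bytearray(parity)
    for idx, pkt in enumerate(block):
        if idx != lost_index:
            for i, b in enumerate(pkt):
                missing[i] ^= b
    return bytes(missing)

block = [b"pkt0data", b"pkt1data", b"pkt2data"]
parity = make_parity(block)
assert recover(block, parity, lost_index=1) == b"pkt1data"
```

RS codes generalize this to recover several losses per block, at the cost of the larger blocks (and hence delay) the abstract warns about.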
Ratzer, Edward Alexander. "Error-correction on non-standard communication channels." Thesis, University of Cambridge, 2004. https://www.repository.cam.ac.uk/handle/1810/237471.
Lewis, Matthew. "Error correction of generalised algebraic-geometry codes." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407473.
Spagnolo, Fabio. "Nonlinear error-correction models with regime switching." Thesis, Birkbeck (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368915.
Full textMAGRI, RAFAEL RIBEIRO. "NONLINEAR ERROR CORRECTION MODELS: ESTIMATION AND TESTING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=34955@1.
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
Existing tests for nonlinearity in vector error correction models are computationally intensive and have nuisance parameters in the asymptotic distribution, which calls for cumbersome bootstrap calculations in order to assess the distribution. Our work proposes a consistent test which is implementable in any statistical package and has Chi-Squared asymptotics. Moreover, Monte Carlo experiments show that in small samples our test has good size and power properties, often better than the preexisting tests. We also provide a condition under which a consistent two-step estimator for the model parameters is asymptotically normal. Application to international agricultural commodity prices shows evidence of nonlinear adjustment to the long-run equilibrium in wheat prices.
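For readers unfamiliar with the model class, the classical two-step (Engle-Granger style) estimation of a linear error correction model, which the nonlinear setting above generalizes, looks like this on synthetic data. The data-generating process is invented for illustration:

```python
import numpy as np

# Two-step error correction estimation on a simulated cointegrated pair.

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))       # a random walk
y = 2.0 * x + rng.normal(size=n)        # cointegrated with x, long-run slope 2

# Step 1: long-run (cointegrating) regression y_t = beta * x_t + u_t.
beta = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
ect = y - beta * x                      # error correction term (equilibrium error)

# Step 2: short-run dynamics dy_t = alpha * ect_{t-1} + gamma * dx_t + e_t.
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([ect[:-1], dx])
alpha, gamma = np.linalg.lstsq(X, dy, rcond=None)[0]
print(beta, alpha, gamma)  # alpha < 0: deviations are pulled back to equilibrium
```

A nonlinear ECM replaces the single adjustment speed alpha with a regime-dependent or smooth function of the equilibrium error, which is what the proposed test is designed to detect.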
Chang, Danica H. (Danica Hill). "Natural error correction techniques for sketch recognition." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82371.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 55-56).
Over the past few years, a plethora of tablet devices has made it very easy for users to input information by sketching as if on paper. In addition, sketch recognition systems help users convert these sketches into information that the computer understands. While lots of work has been done in developing better sketch recognizers, very little work has previously been done on how to edit the sketch once it's been drawn, whether the error is the user's or the sketch recognizer's. In response, we developed and studied intuitive methods of interacting with a sketch recognition system to correct errors made by both the recognizer and the user. The editor allows users to click and lasso to select parts of the sketch, label the selected strokes, erase by scribbling over strokes, and even overwrite errors. Letting users provide feedback to the sketch recognizer helps improve the accuracy of the sketch as well as allows the sketch recognizer's performance to improve over time.
by Danica H. Chang.
S.M.
Sheldon, Sarah (Sarah Elizabeth). "Second order error correction in quantum computing." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44834.
Includes bibliographical references (leaf 23).
Error correction codes are necessary for the development of reliable quantum computers. Such codes can prevent the loss of information from decoherence caused by external perturbations. This thesis evaluates a five qubit code for correcting second order bit-flip errors. The code consists of encoding, decoherence, decoding, and error correction steps. This work analyzes the proposed code using geometric algebra methods and examines the state of the system after each step in the process.
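The second-order bit-flip setting has a simple classical analogue: five-fold repetition with majority vote corrects every one- and two-bit error. An illustrative sketch, not the quantum code analyzed in the thesis:

```python
from itertools import combinations

# Five-fold repetition with majority vote: distance 5, so all single and
# double bit flips (first- and second-order errors) are corrected.

def encode(bit):
    return [bit] * 5

def decode(codeword):
    """Majority vote over the five copies."""
    return int(sum(codeword) >= 3)

for bit in (0, 1):
    for order in (1, 2):                       # first- and second-order errors
        for positions in combinations(range(5), order):
            word = encode(bit)
            for p in positions:
                word[p] ^= 1
            assert decode(word) == bit
print("all 1- and 2-bit errors corrected")
```

The quantum version must achieve the same effect without measuring the data qubits directly, which is where the encoding, syndrome and recovery steps studied in the thesis come in.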
by Sarah Sheldon.
S.B.
Buys, Jan Moolman. "Probabilistic tree transducers for grammatical error correction." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85592.
ENGLISH ABSTRACT: We investigate the application of weighted tree transducers to correcting grammatical errors in natural language. Weighted finite-state transducers (FST) have been used successfully in a wide range of natural language processing (NLP) tasks, even though the expressiveness of the linguistic transformations they perform is limited. Recently, there has been an increase in the use of weighted tree transducers and related formalisms that can express syntax-based natural language transformations in a probabilistic setting. The NLP task that we investigate is the automatic correction of grammar errors made by English language learners. In contrast to spelling correction, which can be performed with a very high accuracy, the performance of grammar correction systems is still low for most error types. Commercial grammar correction systems mostly use rule-based methods. The most common approach in recent grammatical error correction research is to use statistical classifiers that make local decisions about the occurrence of specific error types. The approach that we investigate is related to a number of other approaches inspired by statistical machine translation (SMT) or based on language modelling. Corpora of language learner writing annotated with error corrections are used as training data. Our baseline model is a noisy-channel FST model consisting of an n-gram language model and an FST error model, which performs word insertion, deletion and replacement operations. The tree transducer model we use to perform error correction is a weighted top-down tree-to-string transducer, formulated to perform transformations between parse trees of correct sentences and incorrect sentences. Using an algorithm developed for syntax-based SMT, transducer rules are extracted from training data in which the correct versions of the sentences have been parsed. Rule weights are also estimated from the training data.
Hypothesis sentences generated by the tree transducer are reranked using an n-gram language model. We perform experiments to evaluate the performance of different configurations of the proposed models. In our implementation an existing tree transducer toolkit is used. To make decoding time feasible sentences are split into clauses and heuristic pruning is performed during decoding. We consider different modelling choices in the construction of transducer rules. The evaluation of our models is based on precision and recall. Experiments are performed to correct various error types on two learner corpora. The results show that our system is competitive with existing approaches on several error types.
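The noisy-channel baseline described above can be miniaturized: candidate word-level edits are scored by an n-gram language model and the best-scoring sentence wins. The tiny corpus, the confusion set, and the smoothing constant below are invented for illustration:

```python
import math

# Toy noisy-channel correction: propose word replacements from a confusion
# set and rank candidates with an add-one smoothed bigram language model.

corpus = "he walks to school . she walks to work . they walk to school .".split()
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def lm_score(sentence):
    """Add-one smoothed bigram log-probability (50 is a crude vocab constant)."""
    score = 0.0
    for a, b in zip(sentence, sentence[1:]):
        following = bigrams.get(a, [])
        score += math.log((following.count(b) + 1) / (len(following) + 50))
    return score

def correct(sentence, confusions):
    """Try replacing each word by its confusable alternatives; keep the best."""
    best, best_score = sentence, lm_score(sentence)
    for i, word in enumerate(sentence):
        for alt in confusions.get(word, []):
            cand = sentence[:i] + [alt] + sentence[i + 1:]
            if lm_score(cand) > best_score:
                best, best_score = cand, lm_score(cand)
    return best

confusions = {"walk": ["walks"], "walks": ["walk"]}
print(correct("he walk to school".split(), confusions))  # -> he walks to school
```

The thesis's tree transducer plays the role of a much richer error model: instead of flat word substitutions, it proposes syntax-aware transformations, with the n-gram model reranking the hypotheses as here.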
AFRIKAANSE OPSOMMING (translated): We investigate the application of weighted tree automata to automatically correct grammar errors in natural language. Weighted finite-state automata are used successfully in a wide range of natural language processing tasks, although the expressive power of the linguistic transformations they perform is limited. Recently there has been an increase in the use of weighted tree automata and related formalisms that represent syntactic transformations in natural language in a probabilistic framework. The natural language processing application we investigate is the automatic correction of language errors made by learners of English. While spell checking in English can be performed with very high accuracy, the performance of error correction systems is still relatively poor for most error types. Commercial error correction systems predominantly use rule-based methods. The most common approach in recent research on grammatical error correction is to use statistical classifiers that make local decisions about the occurrence of specific error types. The approach we investigate is related to a number of other approaches inspired by statistical machine translation or based on language modelling. Corpora of learner writing annotated with error corrections are used as training data. Our baseline system is a noisy-channel finite-state automaton model consisting of an n-gram language model and an error model that performs insertion, deletion and substitution operations at word level. The tree automaton model we use for grammatical error correction is a weighted top-down tree-to-string transducer formulated to perform transformations between syntax trees of correct sentences and erroneous sentences.
An algorithm developed for syntax-based statistical machine translation is used to extract rules from the training data, in which syntactic parsing has been performed on the correct version of the sentences. Rule weights are also estimated from the training data. Hypothesis sentences generated by the tree automaton are reranked using an n-gram language model. We perform experiments to evaluate the effectiveness of different configurations of the proposed models. In our implementation an existing tree automaton toolkit is used. To reduce decoding time, sentences are split into clauses and the search space is pruned heuristically. We consider various modelling choices in the construction of automaton rules. The evaluation of our models is based on precision and recall. Experiments are performed to correct various error types on two learner corpora. The results show that our model is competitive with existing approaches on several error types.
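The noisy-channel baseline described in the abstracts above combines an n-gram language model with an error model performing word-level insertions, deletions and substitutions. The cost such an error model assigns can be sketched as a word-level edit distance (an illustrative sketch, not the thesis implementation):

```python
def word_edit_distance(source, target):
    """Minimum number of word-level insertions, deletions and
    substitutions needed to turn `source` into `target`."""
    s, t = source.split(), target.split()
    # dp[i][j] = edit distance between s[:i] and t[:j]
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        dp[i][0] = i
    for j in range(len(t) + 1):
        dp[0][j] = j
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(s)][len(t)]
```

For example, turning "the cat sat" into "the cats sat on" requires one substitution and one insertion.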
Jones, David Rhys. "Forward Error Correction in Memoryless Optical Modulation." UNF Digital Commons, 2011. http://digitalcommons.unf.edu/etd/171.
Full text
Lu, Yi. "Error correction codes for molecular communication systems." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88085/.
Full text
Андрейко, Лариса Володимирівна, Лариса Владимировна Андрейко, and Larysa Volodymyrivna Andreyko. "Error correction strategies in English speaking classroom." Thesis, ВВП "Мрія", 2014. http://essuir.sumdu.edu.ua/handle/123456789/59245.
Full text
The paper analyses whether it is appropriate to correct students' errors during speaking activities in English classes, in particular when and how to correct them, and gives examples of effective error correction techniques.
Layden, David. "Device- and application-adapted quantum error correction." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127314.
Full text
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 185-194).
Precise control of coherent quantum systems could enable new generations of sensing, communication and computing technologies. Such systems, however, are typically noisy and difficult to stabilize. One promising technique to this end is called quantum error correction, which encodes quantum states in such a way that errors can be detected and corrected, much like in classical error-correcting codes. Quantum error-correcting codes usually cast a wide net, in that they are designed to correct errors regardless of their physical origins. In large-scale devices, this is an essential feature. It comes at a cost, however: conventional quantum codes are typically resource-intensive in terms of both the system size and the control operations they require. Yet, in smaller-scale devices the main error sources are often well-understood. In the near term, it may therefore be advantageous to cast a more targeted net through specialized codes. This thesis presents new families of such quantum error-correcting codes, which are adapted either for leading candidate devices, or for near-term applications. The device-adapted codes require exponentially less overhead than conventional codes to achieve the same level of protection, whereas the application-adapted codes can enhance quantum sensors, in which conventional codes cannot readily be used. The new techniques presented in this thesis adapt cornerstones of conventional theory in light of key experimental challenges and opportunities. The ultimate goal of this research is to help bridge the gap between the exacting requirements of proposed quantum technologies and the realities of emerging quantum devices. Bridging this gap is critical, if quantum technologies are to realize their full potential.
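As the abstract above notes, quantum codes detect and correct errors much like classical error-correcting codes. A minimal classical analogue is the three-bit repetition code, whose two parity checks play the role that the stabilizer measurements Z0Z1 and Z1Z2 play in the quantum bit-flip code (an illustrative sketch, not taken from the thesis):

```python
def encode(bit):
    """Repetition-code encoding: one logical bit -> three physical bits."""
    return [bit, bit, bit]

def syndrome(bits):
    """Two parity checks; they locate a single flipped bit without
    ever reading out the logical value itself."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Syndrome -> index of the flipped bit (None means no error detected).
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    """Apply the correction the syndrome points to."""
    flip = CORRECTION[syndrome(bits)]
    if flip is not None:
        bits[flip] ^= 1
    return bits
```

Any single bit flip produces a distinct syndrome and is undone, which is exactly the detect-and-correct cycle the quantum codes generalize.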
by David Layden.
Ph.D. in Quantum Science and Engineering
Massachusetts Institute of Technology, Department of Nuclear Science and Engineering
Cohen, Joachim. "Autonomous quantum error correction with superconducting qubits." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE008/document.
Full text
In this thesis, we develop several tools in the direction of autonomous Quantum Error Correction (QEC) with superconducting qubits. We design an autonomous QEC scheme based on quantum reservoir engineering, in which transmon qubits are coupled to lossy modes. Through an engineered interaction between these systems, the entropy created by errors is evacuated via the dissipative modes. The second part of this work focuses on the recently developed cat codes, through which the logical information is encoded in the large Hilbert space of a harmonic oscillator. We propose a scheme to perform continuous and quantum non-demolition measurements of photon-number parity in a microwave cavity, which corresponds to the error syndrome in the cat code. In our design, we exploit the strongly nonlinear Hamiltonian of a high-impedance Josephson circuit, coupling a high-Q storage cavity mode to a low-Q readout one. Last, as a follow-up of the above results, we present several continuous and/or autonomous QEC schemes using the cat code. These schemes provide robust protection against dominant error channels in the presence of multi-photon driven dissipation.
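The error syndrome of the cat code mentioned above is the photon-number parity, the expectation of (-1)^n in the Fock basis. A small numerical sketch (illustrative only, not the circuit design of the thesis) confirms that even and odd cat states are parity eigenstates:

```python
from math import exp, factorial, sqrt

def coherent_amplitudes(alpha, cutoff):
    """Truncated Fock-basis amplitudes of a coherent state |alpha> (alpha real)."""
    return [exp(-alpha ** 2 / 2) * alpha ** n / sqrt(factorial(n))
            for n in range(cutoff)]

def cat_parity(alpha, sign, cutoff=40):
    """Photon-number parity <(-1)^n> of the cat state |alpha> + sign*|-alpha>.

    Uses c_n(-alpha) = (-1)^n c_n(alpha), so sign=+1 keeps only even
    photon numbers (even cat) and sign=-1 only odd ones (odd cat).
    """
    c = coherent_amplitudes(alpha, cutoff)
    amps = [c[n] + sign * (-1) ** n * c[n] for n in range(cutoff)]
    norm = sum(a * a for a in amps)
    return sum((-1) ** n * amps[n] ** 2 for n in range(cutoff)) / norm
```

A jump of the cavity by single-photon loss flips this parity, which is why monitoring it continuously reveals the cat code's errors.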
Cheung, Kar-Ming. McEliece, Robert J. (advisor). "Error-correction coding in data storage systems." Diss., Pasadena, Calif.: California Institute of Technology, 1987. http://resolver.caltech.edu/CaltechETD:etd-02282008-133009.
Full text
Urbani, Camilla. "Stabilizer Codes for Quantum Error Correction and Synchronization." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Find full text
Rummel, David. "Correction for covariate measurement error in nonparametric regression." Diss., [S.l.] : [s.n.], 2006. http://edoc.ub.uni-muenchen.de/archive/00006436.
Full text
Bäckström, Fredrik, and Anders Ivarsson. "Meta-Model Guided Error Correction for UML Models." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8746.
Full text
Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.
Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.
The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.
The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.
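A guideline checker of the kind described above can be caricatured in a few lines: rules derived from modeling guidelines flag offending elements and propose corrections (a toy sketch; the rule and function names here are invented and are not MetaModelAgent's actual rule format or API):

```python
def class_name_rule(name):
    """Guideline: UML class names should be UpperCamelCase, no underscores.
    Returns (ok, suggested_correction)."""
    ok = name[:1].isupper() and "_" not in name
    suggestion = None if ok else "".join(
        part.capitalize() for part in name.split("_"))
    return ok, suggestion

def check_model(class_names):
    """Return {offending name: suggested correction} for a model's classes."""
    report = {}
    for name in class_names:
        ok, suggestion = class_name_rule(name)
        if not ok:
            report[name] = suggestion
    return report

report = check_model(["OrderService", "customer_record", "invoice"])
```

The prototype in the thesis works analogously at the meta-model level: a violated constraint identifies the element and, where possible, a correction is offered or applied automatically.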
Klinc, Demijan. "On applications of puncturing in error-correction coding." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39610.
Full text
Han, Hillary H. "Measurement-error bias correction in spawner-recruitment relationships." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37541.pdf.
Full text
Davey, M. C. "Error-correction using low-density parity-check codes." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598305.
Full text
Tomita, Yu. "Numerical and analytical studies of quantum error correction." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53468.
Full text
Johnson, Ian. "Towards the identification of intent for error correction." Thesis, King's College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418062.
Full text
Kokkola, N. "A double-error correction computational model of learning." Thesis, City, University of London, 2017. http://openaccess.city.ac.uk/18838/.
Full text
Park, Jason (Jason Sun-Hyung). "Designing methods for error correction in gene fabrication." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32931.
Full text
Includes bibliographical references (p. 20-22).
Gene Fabrication technology involves the development and optimization of methods relevant to the in vitro synthesis of any given target gene sequence(s) in the absence of template. The driving purpose of this field of research is to bring about the capability for on-demand fabrication of a DNA construct of arbitrary length and sequence quickly, efficiently, and cost-effectively. One of the main challenges in gene fabrication is to not only synthesize a given DNA target, but to do so without making any errors. At high error rates, fabrication of long gene targets is expensive and impractical - in some cases, it is impossible. Improvements in error rates are essential for continued progress in the development of gene fabrication technology. Error reduction technologies can be broadly split into three categories at present: error filtration, error correction, and error prevention. This thesis presents the past, present, and future design of a number of quick, easy, robust, economical, and effective error reduction methods in gene fabrication.
by Jason Park.
S.B.
Park, Jason (Jason Sun-Hyung). "Synthesis and error correction methods in gene fabrication." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37961.
Full text
Includes bibliographical references (leaves 56-59).
Gene Fabrication technology involves the development and optimization of methods relevant to the in vitro synthesis of any given target gene sequence(s) in the absence of template. The driving purpose of this field of research is to bring about the capability for on-demand fabrication of a DNA construct of arbitrary length and sequence quickly, efficiently, and cost-effectively. The first part of this document describes many of the important considerations in performing successful de novo gene synthesis from a survey of the literature as well as from our own work. Recommendations are made for a universally effective, robust, and simple protocol for potential users of gene synthesis, discussing important factors such as choice of protocol, source of commercial oligonucleotides, and polymerase choice. The second part of this document focuses on error correction. Reducing error rates is one of the main challenges in gene fabrication because high error rates preclude the possibility of fabricating long gene targets in a practical and economical manner. Improvements in error rates are essential for continued progress in the development of gene fabrication technology. I discuss the importance of error rate in gene synthesis from a practical standpoint and show results in the development of novel methods for the removal of errors from a pool of synthesized DNA.
by Jason Sun-hyung Park.
M.Eng.
Chilappagari, Shashi Kiran, and Dung Viet Nguyen. "On Guaranteed Error Correction Capability of GLDPC Codes." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606241.
Full text
In this paper, it is shown that generalized LDPC codes can correct a linear fraction of errors under the parallel bit flipping algorithm when the underlying Tanner graph is a good expander. A lower bound on the size of variable node sets which have required expansion is established as a function of the column weight of the code, the girth of the Tanner graph and the error correction capability of the sub-code. It is also shown that the bound on the required expansion cannot be improved when the column weight is even by studying a class of trapping sets. An upper bound on the guaranteed error correction capability is found by investigating the size of smallest possible trapping sets.
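The decoding principle the paper analyzes can be illustrated on a much smaller scale: flip the bits involved in the most unsatisfied parity checks until the syndrome clears. This sketch uses a greedy serial variant on the (7,4) Hamming code, not the parallel algorithm on GLDPC expander codes studied in the paper:

```python
# Parity-check matrix of the (7,4) Hamming code.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """One parity bit per check: 1 marks an unsatisfied check."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(word, max_iters=10):
    """Greedy bit flipping: repeatedly flip the bit that participates in
    the most unsatisfied checks (lowest index wins ties) until the
    syndrome clears or the iteration budget runs out."""
    word = list(word)
    for _ in range(max_iters):
        s = syndrome(word)
        if not any(s):
            break
        counts = [sum(H[i][j] for i in range(3) if s[i]) for j in range(7)]
        word[counts.index(max(counts))] ^= 1
    return word
```

On this toy code every single-bit error is corrected in one flip; the paper's contribution is quantifying how many errors such flipping decoders are *guaranteed* to correct on GLDPC codes whose Tanner graphs expand well.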