Dissertations on the topic "Error detection algorithms"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 35 dissertations for research on the topic "Error detection algorithms".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Pazaitis, Dimitrios I. „Performance improvement in adaptive signal processing algorithms“. Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368771.
O'Connell, Jeffrey R. „Algorithms for linkage analysis, error detection and haplotyping in pedigrees“. Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325622.
Santos, Fernando Fernandes dos. „Reliability evaluation and error mitigation in pedestrian detection algorithms for embedded GPUs“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/159210.
Pedestrian detection reliability is a fundamental problem for autonomous or aided driving. Methods that use object detection algorithms such as Histogram of Oriented Gradients (HOG) or Convolutional Neural Networks (CNN) are today very popular in automotive applications. Embedded Graphics Processing Units (GPUs) are exploited to perform object detection very efficiently. Unfortunately, GPU architectures have been shown to be particularly vulnerable to radiation-induced failures. This work presents an experimental evaluation and analytical study of the reliability of two types of object detection algorithms: HOG and CNNs. The aim of this research is not just to quantify but also to qualify radiation-induced errors in object detection applications executed on embedded GPUs. HOG experimental results were obtained using two different embedded GPU architectures (Tegra and AMD APU), each exposed for about 100 hours to a controlled neutron beam at Los Alamos National Lab (LANL). Precision and Recall metrics are considered to evaluate error criticality. The reported analysis shows that, while being intrinsically resilient (65% to 85% of output errors only slightly impact detection), HOG experienced some particularly critical errors that could result in undetected pedestrians or unnecessary vehicle stops. This work also evaluates the reliability of two Convolutional Neural Networks for object detection: You Only Look Once (YOLO) and Faster RCNN. Three different GPU architectures (Kepler, Maxwell, and Pascal) were exposed to controlled neutron beams while detecting objects in both the Caltech and Visual Object Classes data sets. By analyzing the corrupted output of the neural networks, it is possible to distinguish between tolerable errors and critical errors, i.e., errors that could impact detection. Additionally, extensive GDB-level and architectural-level fault-injection campaigns were performed to identify critical procedures in HOG and YOLO. Results show that not all stages of object detection algorithms are critical to the final classification reliability. Thanks to the fault injection analysis, it is possible to identify the portions of HOG and Darknet that, if hardened, are most likely to increase reliability without introducing unnecessary overhead. The proposed HOG hardening strategy is able to detect up to 70% of errors with a 12% execution time overhead.
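The criticality analysis in this abstract hinges on comparing a corrupted run against a fault-free "golden" run using Precision and Recall. A minimal Python sketch of that grading step (the IoU matching, thresholds, and helper names are illustrative assumptions, not the thesis's actual code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall(golden, corrupted, thr=0.5):
    """Match detections from a corrupted run against a fault-free 'golden' run."""
    matched = sum(1 for g in golden if any(iou(g, c) >= thr for c in corrupted))
    precision = matched / len(corrupted) if corrupted else 1.0
    recall = matched / len(golden) if golden else 1.0
    return precision, recall

def classify_error(golden, corrupted, tol=0.9):
    """Tolerable if detection quality barely moves; critical otherwise (a missed
    pedestrian lowers recall, a phantom detection lowers precision)."""
    p, r = precision_recall(golden, corrupted)
    return "tolerable" if p >= tol and r >= tol else "critical"

golden = [(10, 10, 50, 90)]                        # fault-free detection
print(classify_error(golden, [(12, 11, 51, 88)]))  # tolerable: box barely moved
print(classify_error(golden, []))                  # critical: pedestrian lost
```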
Lee, Ho. „Algorithms to Improve the Quality of Freeway Traffic Detector Data“. The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345522267.
Hua, Nan. „Space-efficient data sketching algorithms for network applications“. Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44899.
Valent, Adam. „Jednosměrná sériová komunikace laserem na větší vzdálenost“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442524.
Feng, Jingbin. „Quasi-Static Deflection Compensation Control of Flexible Manipulator“. PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4759.
Xie, Yichen. „Static detection of software errors: precise and scalable algorithms for automatic detection of software errors“. Saarbrücken VDM, Müller, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2991792&prov=M&dok_var=1&dok_ext=htm.
Kapfunde, Goodwell. „Near-capacity sphere decoder based detection schemes for MIMO wireless communication systems“. Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/11350.
Der volle Inhalt der QuelleSevim, Alaettin. „Including state excitation in the fixed-interval smoothing algorithm and implementation of the maneuver detection method using error residuals“. Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA246336.
Thesis Advisor: Titus, Harold A. Second Reader: Burl, Jeff. "December 1990." Description based on title screen as viewed on April 2, 2010. DTIC Identifier(s): Noise processing, maneuver detection. Author(s) subject terms: Kalman filter, smoothing, noise process, maneuver detection. Includes bibliographical references (p. 98-99). Also available in print.
Grymel, Martin-Thomas. „Error control with binary cyclic codes“. Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/error-control-with-binary-cyclic-codes(a5750b4a-e4d6-49a8-915b-3e015387ad36).html.
Tran, Ngoc Nguyen. „Infeasibility detection and regularization strategies in nonlinear optimization“. Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0059/document.
This thesis is devoted to the study of numerical algorithms for nonlinear optimization. On the one hand, we propose new strategies for rapid infeasibility detection. On the other hand, we analyze the local behavior of primal-dual algorithms for the solution of singular problems. In the first part, we present a modification of an augmented Lagrangian algorithm for equality constrained optimization. The quadratic convergence of the new algorithm in the infeasible case is theoretically and numerically demonstrated. The second part is dedicated to extending the previous result to the solution of general nonlinear optimization problems with equality and inequality constraints. We propose a modification of a mixed logarithmic barrier-augmented Lagrangian algorithm. The theoretical convergence results and the numerical experiments show the advantage of the new algorithm for infeasibility detection. In the third part, we study the local behavior of a primal-dual interior point algorithm for bound constrained optimization. The local analysis is done without the standard assumption of the second-order sufficient optimality conditions; these conditions are replaced by a weaker assumption based on a local error bound condition. We propose a regularization technique for the Jacobian matrix of the optimality system. We then demonstrate some boundedness properties of the inverses of these regularized matrices, which allow us to prove the superlinear convergence of our algorithm. The last part is devoted to the local convergence analysis of the primal-dual algorithm used in the first two parts of this thesis. In practice, it has been observed that this algorithm converges rapidly even in the case where the constraints do not satisfy the Mangasarian-Fromovitz constraint qualification. We demonstrate the superlinear and quadratic convergence of this algorithm without any assumption of constraint qualification.
Palkki, Ryan D. „Chemical identification under a poisson model for Raman spectroscopy“. Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45935.
LIMA, Christiane Ferreira Lemos. „Detecção de intrusos em redes de computadores com uso de códigos corretores de erros e medidas de informação“. Universidade Federal de Campina Grande, 2013. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1430.
The thesis's main objective is to present a scheme to protect computer networks against invasions by making use of error correcting codes and information measures. For this, the identification of attacks in a network is viewed as a multiclass classification task, because it involves discriminating attacks into various categories. Based on this approach, this work presents strategies for multiclass problems based on error correcting code principles, where each of the M classes is associated with a codeword of length N, where N is the number of selected attributes, chosen by use of information measures. These attributes are monitored by software devices, here called network detectors and host detectors. In this approach, the codewords that form the codeword table are a sub-code of a BCH-type linear code. This allows the decoding step to be performed using algebraic decoding algorithms, which is not possible with randomly selected codewords. In this context, a variant of a genetic algorithm is applied in the design of the table used to identify attacks in networks. The effective contributions of this thesis, demonstrated in the experiments, are: the application of coding theory to determine codewords appropriate to network intrusion detection; the application of C4.5 decision trees based on the information measures of Rényi and Tsallis for attribute selection; and the use of algebraic decoding, based on the concepts of traditional decoding and list decoding techniques.
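The codeword-table idea above is essentially error-correcting-output-codes classification: each attack class gets a binary codeword, the monitored attributes produce a noisy bit vector, and decoding picks the nearest codeword. A toy Python sketch (the hard-coded table stands in for the thesis's BCH-derived sub-code, and the detector bits are assumed inputs):

```python
CODEWORDS = {                 # M = 4 classes, N = 7 monitored attributes
    "normal": (0, 0, 0, 0, 0, 0, 0),
    "dos":    (1, 1, 1, 0, 0, 0, 1),
    "probe":  (0, 0, 1, 1, 1, 0, 1),
    "u2r":    (1, 0, 0, 1, 0, 1, 1),
}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def classify(observed_bits):
    """Nearest-codeword decoding: tolerates detector bits flipped by noise."""
    return min(CODEWORDS, key=lambda c: hamming(CODEWORDS[c], observed_bits))

# One detector bit misfires (third bit of the "dos" pattern flipped);
# minimum-distance decoding still recovers the right class.
print(classify((1, 1, 0, 0, 0, 0, 1)))   # -> "dos"
```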
Povalač, Karel. „Sledování spektra a optimalizace systémů s více nosnými pro kognitivní rádio“. Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-233577.
Der volle Inhalt der QuelleLin, TsungPo. „An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24819.
Der volle Inhalt der QuelleCommittee Chair: Dimitri Mavris; Committee Member: Erwing Calleros; Committee Member: Hongmei Chen; Committee Member: Mark Waters; Committee Member: Vitali Volovoi.
Holzschuch, Nicolas. „Le contrôle de l'erreur dans la méthode de radiosité hiérarchique“. Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004994.
Hunter, Brandon. „Channel Probing for an Indoor Wireless Communications Channel“. BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/64.
Vasiga, Troy Michael John. „Error Detection in Number-Theoretic and Algebraic Algorithms“. Thesis, 2008. http://hdl.handle.net/10012/3895.
Yen, Chia-Chih, and 顏嘉志. „Algorithms for Efficient Design Error Detection and Diagnosis“. Thesis, 2005. http://ndltd.ncl.edu.tw/handle/06604751304421872000.
National Chiao Tung University
Department of Electronics Engineering
Academic year 93 (ROC calendar)
Functional verification now accounts for most of the time spent in product development due to the increasing complexity of modern ASIC and SoC designs. Assertion-based verification (ABV) helps design teams identify and debug design errors more quickly than traditional techniques. It compares the implementation of a design against its specified assertions by embedding assertions in the design and having them monitor design activities. As a result, ABV is recognized as a critical component of the design verification process. In general, detecting and diagnosing design errors play the most important roles in ABV. Unfortunately, the proposed techniques for design error detection cannot keep up with the rapid growth of design complexity. Furthermore, the generated error traces are usually very lengthy, so diagnosing counterexamples becomes tedious and difficult. In this dissertation, we focus on three strategies that address the problem of efficiently detecting design errors and easing the debug process. We first propose a practical cycle bound calculation algorithm for guiding bounded model checking (BMC). Many reports have shown the effectiveness of BMC in design error detection; however, the flaw of BMC is that it needs a pre-computed bound to ensure completeness. To determine the bound, we develop a practical approach that operates in a branch-and-bound manner. We reduce the search space by applying a partitioning as well as a pruning method. Furthermore, we propose a novel formulation and use a SAT solver to search states and thus determine the cycle bound. Experimental results show that our algorithm considerably enhances performance compared with previous work. We then propose an advanced semi-formal verification algorithm for identifying hard-to-detect design errors. Generally, semi-formal verification combines simulative and formal methods to tackle tough verification problems in a real industrial environment. Nevertheless, the monotonous cooperation of these heterogeneous methods cannot keep pace with the rapid growth of design complexity. Therefore, we propose an elegant algorithm that adopts a divide-and-conquer paradigm to orchestrate these approaches. We introduce a partitioning technique to recursively divide a design into smaller components, and we present approaches for handling each divided component efficiently while keeping the entire design function correct. Experimental results demonstrate that our strategy detects many more design errors than traditional methods do. Finally, we propose powerful error trace compaction algorithms to ease design error diagnosis. The error traces used to exercise and observe design behavior are usually so lengthy that designers must spend considerable effort to understand them. To alleviate designers' burden, we present a SAT-based algorithm for reducing the lengths of error traces. The algorithm not only halves the search space recursively but also guarantees the shortest lengths. Based on the optimal algorithm, we also develop two robust heuristics to handle real designs. Experimental results indicate that our approaches greatly surpass previous work and give promising results.
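The trace-compaction step described above repeatedly halves the search space to find the shortest error trace. A sketch of that halving idea, with the SAT/BMC query replaced by a stub oracle (everything here is illustrative; the dissertation's algorithm operates on SAT instances, not Python callables):

```python
def shortest_trace_length(exists_trace_of_length, lo, hi):
    """exists_trace_of_length(k) must be monotone: once True, True for all k' >= k.
    Returns the minimal k in [lo, hi] for which it holds."""
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_trace_of_length(mid):
            hi = mid              # a shorter witness may still exist
        else:
            lo = mid + 1          # more cycles are needed to expose the error
    return lo

# Stub oracle: pretend the bug needs at least 13 cycles to manifest.
print(shortest_trace_length(lambda k: k >= 13, 1, 100))   # -> 13
```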
Yu, Namin. „Compact hardware implementation of advanced encryption standard with concurrent error detection“. 2005.
SU, JIAN-JUN, and 蘇建郡. „NEW ALGORITHMS AND VLSI IMPLEMENTATIONS FOR SINGLE ERROR DETECTION AND CORRECTION IN RESIDUE NUMBER SYSTEMS“. Thesis, 1990. http://ndltd.ncl.edu.tw/handle/88432246501240649784.
National Tsing Hua University
Institute of Electrical Engineering
Academic year 79 (ROC calendar)
In recent years, the residue number system (RNS) has been receiving considerable interest due to its parallel and fault-tolerant properties. In this dissertation, a new algorithm for single residue digit error correction is proposed. This algorithm is fully based on the mixed radix conversion (MRC), and redundant MR digits can be used to establish a look-up table to correct single residue digit errors. The necessary and sufficient conditions for the proposed algorithm are also derived. The proposed algorithm can be extended to correct errors in the quadratic RNS (QRNS); it can be shown that single digit-pair errors in a QRNS can be corrected. Since the scaling and the residue-to-binary conversion can be completed via the MRC, the relationship between single error correction (or detection) and scaling (or residue-to-binary conversion) is investigated. The two operations can then be unified in one hardware unit, thereby reducing the complexity of the implementations. The original scaling algorithm reduces the dynamic range; in this dissertation, this shortcoming is eliminated by the use of a polarity shift. The scaling error es is analyzed and found to be limited to the interval -1 < es < 1. Since scaling is often required in digital signal processing (DSP) computations, which are the primary application of the RNS, an error correction circuit with scaling (ECCS) or an error detection circuit with scaling (EDCS) will be very efficient in fault-tolerant systems for DSP applications. Two examples of the application of the proposed ECCS, EDCS, or error correction circuit with residue-to-binary conversion are given: one is the fast Fourier transform (FFT) network and the other is the infinite impulse response (IIR) filter. Based on the proposed algorithms, fault-tolerant FFT networks and IIR filters can be built. Finally, a VLSI layout (using a double-metal CMOS process and a two-phase non-overlapping clock strategy) for the proposed single error correction algorithm is implemented. The moduli set {16,17,19,23,29,31} is used, and the total and dynamic ranges are about 27 and 17 bits, respectively. Function and timing verifications are simulated. The chip latency is 8 clock cycles and the throughput is about 10 MHz. This layout implementation shows that the proposed algorithm is suitable for VLSI design.
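The detection half of such a scheme can be illustrated with a redundant residue number system: a legal value reconstructs inside the legitimate dynamic range, while a corrupted residue digit pushes the Chinese-remainder reconstruction outside it. A small Python sketch with toy moduli (the thesis's mixed-radix look-up-table correction is not reproduced):

```python
from math import prod

MODULI = (3, 5, 7, 11)       # the last modulus is redundant
M = prod(MODULI[:-1])        # legitimate dynamic range: 105

def to_rns(x):
    return [x % m for m in MODULI]

def crt(residues):
    """Chinese remainder reconstruction over all moduli (Python 3.8+ for pow)."""
    total, x = prod(MODULI), 0
    for r, m in zip(residues, MODULI):
        mi = total // m
        x += r * mi * pow(mi, -1, m)   # modular inverse of the cofactor
    return x % total

def has_digit_error(residues):
    return crt(residues) >= M          # outside the legitimate range => corrupted

digits = to_rns(42)
print(has_digit_error(digits))                 # False: consistent codeword
digits[1] = (digits[1] + 2) % MODULI[1]        # corrupt one residue digit
print(has_digit_error(digits))                 # True: error detected
```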
„InCheck - An Integrated Recovery Methodology for Fine-grained Soft-Error Detection Schemes“. Master's thesis, 2016. http://hdl.handle.net/2286/R.I.40720.
Master's thesis, Computer Science, 2016.
Musti, S. S. Phaniram. „Development Of Algorithms For Bad Data Detection In Power System State Estimation“. Thesis, 2007. http://hdl.handle.net/2005/496.
Patel, Dimpesh. „VLSI Implementation of Digital Signal Processing Algorithms for MIMO Detection and Channel Pre-processing“. Thesis, 2010. http://hdl.handle.net/1807/29986.
Falcon, Lins Rafael Jose. „Feature-Based Mesh Simplification With Quadric Error Metric Using A Line Simplification Algorithm“. 2010. http://hdl.handle.net/10222/13071.
Lee, Chen-xing, and 李晨心. „Study on the Algorithm of Automatic Error Detection for Multibeam Data“. Thesis, 1999. http://ndltd.ncl.edu.tw/handle/45361075477628417032.
National Sun Yat-sen University
Department of Marine Environment and Engineering
Academic year 87 (ROC calendar)
Abstract: The invention of the multi-beam echo sounder system is wonderful news for people engaged in marine research and engineering. Its high data density gives us a better understanding of the seabed topography and its changes. However, the error propagation of multibeam data is hard to derive, because all the sounding data are computed from related sensors. Besides, irregular soundings or outliers influence the charting result and may spoil the quality of decision making. It may not be difficult to remove outliers from data measured on an absolutely flat and smooth seabed, but for a seabed with artificial reefs or a rocky coast, we need more logical rules to identify anomalous data. Most published automatic error detection algorithms for multibeam data rely on comparison with neighboring soundings, so the efficiency of error detection depends strongly on data quality. Our automatic error detection algorithm for multibeam data has the following characteristics. First, we find the relationship between the beam angles and the positions where the beams hit the seabed in a profile. Second, the "shadow effect" caused by features on the seabed is investigated. Third, a time-based moving-average algorithm flags suspect data points by comparing them with neighbors from the adjacent line. The outliers are removed by the above-mentioned prefiltering processes. Finally, the consistency of the inter-swath data and the seabed roughness are checked by comparing inter-swath data. The results show that the two-phase error flagging process derived from our error detection algorithm not only filters out big mistakes but also detects small unreasonable soundings. However, many soundings around artificial reefs lack enough neighboring soundings for depth accuracy estimation, owing to the lack of inter-swath overlap in the second flagging phase. We can easily distinguish depth points located on the surfaces of artificial reefs from outliers by overlapping two adjacent survey lines.
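The time-based moving-average stage described in this abstract can be sketched in a few lines: each sounding is compared against the local mean of its neighbours and flagged when it deviates too far. The window size and threshold below are illustrative assumptions:

```python
from statistics import mean, pstdev

def flag_outliers(depths, window=5, k=3.0):
    """Indices of soundings deviating from the local mean by more than k sigma."""
    half, flagged = window // 2, []
    for i, d in enumerate(depths):
        neighbours = [depths[j]
                      for j in range(max(0, i - half), min(len(depths), i + half + 1))
                      if j != i]
        mu, sigma = mean(neighbours), pstdev(neighbours)
        if sigma and abs(d - mu) > k * sigma:
            flagged.append(i)
    return flagged

depths = [20.2, 20.3, 20.2, 35.9, 20.4, 20.2, 20.3]   # one spike among soundings
print(flag_outliers(depths))                           # -> [3]
```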
Lin, Ming-Jun, and 林銘軍. „A Layout-Aware Implication Selection Algorithm without Loss on Probability of Error Detection“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/trn77z.
National Sun Yat-sen University
Institute of Electrical Engineering
Academic year 107 (ROC calendar)
Due to the development of advanced semiconductor processes, circuits can be implemented in ever smaller chips. However, a circuit may fail because of defects introduced during manufacturing, and as the size of integrated circuits keeps shrinking, the impact of process defects on product yield deserves attention. Besides, soft errors caused by cosmic rays or thermal neutrons may temporarily change the function of a chip and make the system unreliable. To enhance the reliability of a circuit, concurrent error detection (CED) is one promising method, as it can detect errors during normal function operations. In recent years, a new CED method called implication-based CED has been proposed. Implications are constant relationships between inputs and outputs of logic gates as well as between wires in a circuit. Using implications, particular checking logic is designed to test a circuit by checking whether the implications hold. If at least one implication is violated, an error indication signal is activated to warn users or to enable re-configuration of the system. However, there may be many implications in a circuit. Selecting appropriate implications is thus critical, since the selection affects not only the incurred area cost but also the placement-and-routing complexity and performance. Building on implication-based CED, this thesis proposes an implication selection method that considers the number of nodes of logic gates, their routing characteristics during physical layout, and the detectability of errors. During the auto place and route (APR) procedure, we focus on realizing the maximum density without violating any design rules. To evaluate the effectiveness of the proposed method, a total of 18 ISCAS'85 and ITC'99 benchmark circuits are employed. The experimental results show that our method can increase the circuit density by 11% on average without any DRC violation, thus reducing the area overhead by 36.1%.
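For readers unfamiliar with implication-based CED, a toy gate-level sketch may help: in a fault-free circuit, the output of an AND gate being 1 implies each of its inputs is 1, and a checker raises an error flag when that invariant breaks. The netlist and fault model below are illustrative, not taken from the thesis:

```python
def circuit(a, b, c, flip=None):
    """y = (a AND b) OR c, with an optional transient bit-flip on wire n1."""
    w = {"n1": a & b}
    if flip == "n1":
        w["n1"] ^= 1                  # soft error injected on the internal wire
    w["y"] = w["n1"] | c
    return w

def checker(a, w):
    """Implication: n1 = 1 ==> a = 1. Returns 1 when the invariant is violated."""
    return w["n1"] & (1 - a)

a, b, c = 0, 1, 0
print(checker(a, circuit(a, b, c)))              # 0: implication holds
print(checker(a, circuit(a, b, c, flip="n1")))   # 1: soft error detected
```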
Chang, Hung-Chia, and 張宏嘉. „The study of using different objective functions with algorithm in gross error detection“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4m2r49.
National Chengchi University
Department of Land Economics
Academic year 106 (ROC calendar)
In the field of surveying, least squares (LS) methods are often used in adjustment. However, LS assumes that the observations contain only random errors; if the observations contain gross errors, the LS solution is easily distorted. This study therefore uses different objective functions computed with different algorithms and analyzes their ability to detect gross errors with test statistics. The methods in this study are equal-weight LS, iteratively reweighted LS (IRLS), least absolute deviation (LAD), and the optimal weight matrix (OWM). This study proposes the concept of an "inverse weight matrix of LAD" to solve the problem that LAD lacks test statistics, and assesses the results of the different methods with weight values, standardized residuals, and redundant observation components. On simulated data, when the observations have few redundant observations, OWM and the "inverse weight matrix of LAD" have better gross error detection ability and assign the gross errors lower weights; with more observations, IRLS gives better results. On real data, the a posteriori variance is easily affected, so none of the methods can locate the gross error; however, OWM and the "inverse weight matrix of LAD" can enlarge the standardized residuals of gross errors and help users check the observations.
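Of the estimators compared above, IRLS is the easiest to sketch: observations with large residuals are progressively down-weighted, so a gross error barely distorts the fit and ends up with the smallest weight. A small numerical illustration (the weight function and constants are assumptions, not the thesis's exact choices):

```python
import numpy as np

def irls(A, y, iters=20, eps=1e-6):
    W = np.ones(len(y))
    for _ in range(iters):
        Aw = A * W[:, None]                      # row-weighted design matrix
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # weighted normal equations
        r = y - A @ x                            # residuals
        W = 1.0 / np.maximum(np.abs(r), eps)     # L1-like reweighting
    return x, W

# Repeated measurements of a single quantity; the last one carries a gross error.
A = np.ones((6, 1))
y = np.array([5.01, 4.99, 5.02, 5.00, 4.98, 6.50])
x, W = irls(A, y)
print(f"{float(x[0]):.2f}")   # about 5.0 (plain LS would give the contaminated mean 5.25)
print(int(W.argmin()))        # 5: the gross error receives the smallest weight
```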
Shih, Kai-Ting. „Perturbed decoding algorithm for concatenated error correcting and detecting codes systems“. 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2607200606405100.
Shih, Kai-Ting, and 施凱挺. „Perturbed decoding algorithm for concatenated error correcting and detecting codes systems“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/55482165469812622566.
National Taiwan University
Graduate Institute of Communication Engineering
Academic year 94 (ROC calendar)
We consider a concatenated coded system consisting of an inner error correcting code and an outer error detecting code. In a conventional decoding scheme, the inner decoder produces the best codeword from its perspective, which is then checked by the outer decoder. The performance of the concatenated coded system can be improved by having the inner decoder produce not only the most likely candidate but also other highly likely candidates. In this thesis, we propose a new algorithm called the "perturbed decoding algorithm" (PA). In PA, other highly likely candidates are produced by feeding the inner decoder with slightly perturbed versions of the received signal. The concept of PA is compatible with most combinations of inner and outer codes. Because PA does not require a sophisticated inner decoder, it is straightforward to implement in silicon technology. In our simulations, PA achieves a performance gain greater than 1 dB.
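The PA idea is simple enough to demonstrate end to end with deliberately trivial codes: a 3x repetition inner code and a single parity bit as the outer error-detecting code. When the first decoding fails the parity check, perturbed copies of the received signal are re-decoded (all parameters here are illustrative, not from the thesis):

```python
import random

def inner_decode(rx):
    """3x repetition inner code, soft decision: sign of the sum of each triple."""
    return [1 if sum(rx[i:i + 3]) > 0 else 0 for i in range(0, len(rx), 3)]

def outer_check(bits):
    """Outer error-detecting code: a single even-parity constraint."""
    return sum(bits) % 2 == 0

def perturbed_decode(rx, tries=50, sigma=0.4):
    cand = inner_decode(rx)               # conventional single-pass decoding
    if outer_check(cand):
        return cand
    for _ in range(tries):                # PA: re-decode perturbed versions
        noisy = [s + random.gauss(0, sigma) for s in rx]
        cand = inner_decode(noisy)
        if outer_check(cand):
            return cand
    return None                           # detected but uncorrected error

random.seed(1)
# Message bits 1,0,1 plus even-parity bit 0; BPSK (0 -> -1, 1 -> +1), each bit
# repeated 3x. The second triple is so noisy it initially decodes to the wrong bit.
rx = [1, 1, 1,   0.3, -0.2, 0.2,   1, 1, 1,   -1, -1, -1]
print(perturbed_decode(rx))               # typically recovers [1, 0, 1, 0]
```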
Lin, Kuang-Chun, and 林劻錞. „A Probability of Error Detection Lossless Implication Selection Algorithm with Consideration of Interconnect Loading“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ks9r5b.
National Sun Yat-sen University
Institute of Electrical Engineering
Academic year 107 (ROC calendar)
With the advancement of semiconductor manufacturing processes, the feature size of transistors shrinks aggressively and chips become smaller. However, this also makes chips more susceptible to manufacturing defects and noise, and thus less reliable. For mission-critical applications such as aviation, automotive, and medical electronics, errors appearing during normal operation may cause severe harm to human life and property. In recent years, a new concurrent error detection (CED) method, called implication-based CED, has been proposed to detect errors during normal function operations of a circuit. By using the existing invariant relationships between circuit wires, particular error detection logic can be designed to improve the reliability and diagnosability of a target circuit. The implications exist not only between inputs and outputs of a logic gate, but also between wires in the circuit. One issue with implication-based CED is that an unaffordable circuit delay overhead may be incurred if too many implications are employed for error checking; how to select appropriate implications is thus of great importance. In this thesis, we carefully analyze a previously developed implication selection algorithm and identify three factors that are critically related to the significance of the delay overhead. Accordingly, a systematic method is proposed to reduce the delay overhead, including selecting appropriate implications of small size and inserting buffers on heavily loaded wires. With the proposed procedure, not only is the probability of error detection guaranteed to be lossless, but the delay overhead due to implication checking logic is also significantly reduced. To evaluate the effectiveness of the proposed method, 18 ISCAS'85 and ITC'99 benchmark circuits are considered in the experiments. The experimental results show that our method reduces the delay overhead by 29% on average.
Yang, Yung-Ning, and 楊詠甯. „A New Double-Talk Detection Using Square-Error-Based Regularization NLMS Algorithm For Acoustic Echo Cancellation“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/22713058231971175644.
Yuan Ze University
Department of Electrical Engineering
Academic year 96 (ROC calendar)
The main purpose of this thesis is to study double-talk detection for acoustic echo cancellation. We use the two-path echo cancellation framework combined with a cross-correlation method as our double-talk detector, which can reliably differentiate between echo and near-end speech. We introduce a square-error-based regularization NLMS algorithm for acoustic echo cancellation with double-talk detection. A large number of simulations demonstrated the usefulness of our scheme.
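A compact sketch of the NLMS core used in such echo cancellers appears below; the thesis's contribution is tying the regularization term to the squared error, which is approximated here by a simple error-driven rule (an assumption, not the author's exact formula):

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, taps=32, mu=0.5):
    w = np.zeros(taps)                    # adaptive estimate of the echo path
    x_buf = np.zeros(taps)                # most recent far-end samples
    errors = np.empty(len(mic))
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        e = mic[n] - w @ x_buf            # residual after echo removal
        delta = 1e-3 + e * e              # error-driven regularization (assumed form)
        w += mu * e * x_buf / (x_buf @ x_buf + delta)
        errors[n] = e
    return w, errors

rng = np.random.default_rng(0)
far = rng.standard_normal(4000)                                   # far-end speech stand-in
true_path = rng.standard_normal(32) * np.exp(-np.arange(32) / 8)  # decaying echo path
mic = np.convolve(far, true_path)[:4000]                          # echo-only microphone signal
w, e = nlms_echo_canceller(far, mic)
print(np.mean(e[:200] ** 2), np.mean(e[-200:] ** 2))              # residual echo decays
```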
Cook, Robert Joseph. „Application of item response theory models to the algorithmic detection of shift errors on paper and pencil tests“. 2013. https://scholarworks.umass.edu/dissertations/AAI3603070.
Καρύμπαλη, Ειρήνη. „Αποδοτικές τεχνικές αντιστοίχισης και ψηφιακής υδατογράφησης εικόνων“. 2005. http://nemertes.lis.upatras.gr/jspui/handle/10889/270.
Image registration aims at finding geometrical or other differences between two or more images. Image watermarking offers copyright protection by embedding in the images an invisible signal, a watermark, in such a way that it is difficult to remove. Image registration can be part of a watermark detector. Moreover, similar (or the same) similarity measures are used for both image registration and watermark detection, so any improvement concerning image registration or the similarity measures can have positive effects on image watermarking as well. Our research concerning the image registration problem deals with spatial cross-correlation, which has the following drawback: the region around its maximum value can be rather wide, affecting the registration accuracy. This problem can be solved by properly pre-whitening the images with the prediction error filter. Furthermore, an iterative algorithm is proposed for registering images with translation and rotation differences, which is then applied to sequences of medical images for cancer diagnosis. A second disadvantage of spatial correlation is its computational cost. A fast computation scheme is proposed, based on a proper partitioning of the images and the Fourier transform. Also, the most computationally intensive part of a registration process is the evaluation of the involved measure for different relative image positions. Thus, an efficient iterative algorithm is developed that considerably reduces the number of searches required for finding the maximum value of the correlation coefficient and provides pixel accuracy. Finally, an image registration technique with subpixel accuracy is proposed, which is based on correlation coefficient maximization. This technique does not require the reconstruction of intensity values and provides a closed-form solution to the subpixel translation estimation problem. As far as the problem of image watermarking is concerned, our research aims at embedding robust watermarks in the spatial domain and improving their detection. First, a spatial perceptual mask is proposed, based on the local variance of the prediction error of the initial image. A blind detector is also developed, which performs better than the existing ones; this is theoretically proved for the general attack case with a linear filter and noise. Furthermore, a new spatial perceptual mask is proposed that allows for a significantly increased watermark strength while the image quality remains very good. Its performance is compared to known and widely used masks and is proved to be much better. Moreover, an improved detector is developed which, combined with the new mask, performs very well. Finally, a new multiplicative watermark embedding is proposed, which uses space-time block coding (specifically, a 4x4 real orthogonal design). This scheme is proved to perform much better than repetitive watermarking.
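The similarity measure at the heart of these registration techniques, the correlation coefficient, is easy to demonstrate for integer translations (the subpixel closed form and the prediction-error-filter pre-whitening of the thesis are not reproduced here; the anchor and search range are illustrative assumptions):

```python
import numpy as np

def corrcoef2d(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def register_translation(image, template, anchor=(5, 5), max_shift=5):
    """Exhaustively search the (dy, dx) maximizing the correlation coefficient."""
    h, w = template.shape
    best, best_r = (0, 0), -2.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            y0, x0 = anchor[0] + dy, anchor[1] + dx
            r = corrcoef2d(image[y0:y0 + h, x0:x0 + w], template)
            if r > best_r:
                best_r, best = r, (dy, dx)
    return best, best_r

rng = np.random.default_rng(0)
scene = rng.standard_normal((64, 64))
template = scene[7:27, 8:28]                   # true offset (2, 3) from the (5, 5) anchor
print(register_translation(scene, template))   # -> ((2, 3), ~1.0)
```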