To see the other types of publications on this topic, follow the link: Error detection algorithms.

Dissertations on the topic "Error detection algorithms"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 35 dissertations for your research on the topic "Error detection algorithms".

Next to every entry in the bibliography, the "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Pazaitis, Dimitrios I. „Performance improvement in adaptive signal processing algorithms“. Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368771.

2

O'Connell, Jeffrey R. „Algorithms for linkage analysis, error detection and haplotyping in pedigrees“. Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325622.

3

Santos, Fernando Fernandes dos. „Reliability evaluation and error mitigation in pedestrian detection algorithms for embedded GPUs“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/159210.

Annotation:
Pedestrian detection reliability is a fundamental problem for autonomous or aided driving. Methods that use object detection algorithms such as Histogram of Oriented Gradients (HOG) or Convolutional Neural Networks (CNN) are today very popular in automotive applications. Embedded Graphics Processing Units (GPUs) are exploited to perform object detection in a very efficient manner. Unfortunately, GPU architectures have been shown to be particularly vulnerable to radiation-induced failures. This work presents an experimental evaluation and analytical study of the reliability of two types of object detection algorithms: HOG and CNNs. This research aims not just to quantify but also to qualify the radiation-induced errors in object detection applications executed on embedded GPUs. HOG experimental results were obtained using two different architectures of embedded GPUs (Tegra and AMD APU), each exposed for about 100 hours to a controlled neutron beam at Los Alamos National Lab (LANL). Precision and Recall metrics are considered to evaluate the error criticality. The reported analysis shows that, while being intrinsically resilient (65% to 85% of output errors only slightly impact detection), HOG experienced some particularly critical errors that could result in undetected pedestrians or unnecessary vehicle stops. This work also evaluates the reliability of two Convolutional Neural Networks for object detection: You Only Look Once (YOLO) and Faster RCNN. Three different GPU architectures were exposed to controlled neutron beams (Kepler, Maxwell, and Pascal) while detecting objects in both the Caltech and Visual Object Classes data sets. By analyzing the corrupted neural network output, it is possible to distinguish between tolerable errors and critical errors, i.e., errors that could impact detection. Additionally, extensive GDB-level and architectural-level fault-injection campaigns were performed to identify HOG and YOLO critical procedures. Results show that not all stages of object detection algorithms are critical to the final classification reliability. Thanks to the fault-injection analysis, it is possible to identify HOG and Darknet portions that, if hardened, are more likely to increase reliability without introducing unnecessary overhead. The proposed HOG hardening strategy is able to detect up to 70% of errors with a 12% execution time overhead.
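To make the Precision/Recall criticality criterion in this abstract concrete, the sketch below compares a fault-free ("golden") detection list against a corrupted one; the IoU matching rule and the 0.9 cut-offs are illustrative assumptions, not the thresholds used in the thesis.

```python
# Illustrative sketch: classify a corrupted detection output as tolerable or
# critical by comparing it against the fault-free ("golden") detections.
# The IoU threshold and the tolerable/critical cut-offs are assumed values.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(golden, corrupted, thr=0.5):
    matched_g = sum(any(iou(g, c) >= thr for c in corrupted) for g in golden)
    matched_c = sum(any(iou(c, g) >= thr for g in golden) for c in corrupted)
    precision = matched_c / len(corrupted) if corrupted else 1.0
    recall = matched_g / len(golden) if golden else 1.0
    return precision, recall

def error_criticality(golden, corrupted):
    p, r = precision_recall(golden, corrupted)
    # Missing pedestrians (low recall) or phantom detections (low precision)
    # are treated as critical; small perturbations are tolerable.
    return "tolerable" if p >= 0.9 and r >= 0.9 else "critical"

golden = [(10, 10, 50, 120)]                 # fault-free detection
corrupted = [(12, 11, 52, 119)]              # slightly shifted box after an upset
print(error_criticality(golden, corrupted))  # -> tolerable
```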
4

Lee, Ho. „Algorithms to Improve the Quality of Freeway Traffic Detector Data“. The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345522267.

5

Hua, Nan. „Space-efficient data sketching algorithms for network applications“. Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44899.

Annotation:
Sketching techniques are widely adopted in network applications. Sketching algorithms “encode” data into succinct data structures that can later be accessed and “decoded” for various purposes, such as network measurement, accounting, and anomaly detection. Bloom filters and counter braids are two well-known representatives in this category. These sketching algorithms usually need to strike a tradeoff between performance (how much information can be revealed and how fast) and cost (storage, transmission and computation). This dissertation is dedicated to the research and development of several sketching techniques including improved forms of stateful Bloom Filters, Statistical Counter Arrays and Error Estimating Codes. A Bloom filter is a space-efficient randomized data structure for approximately representing a set in order to support membership queries. The Bloom filter and its variants have found widespread use in many networking applications, where it is important to minimize the cost of storing and communicating network data. In this thesis, we propose a family of Bloom Filter variants augmented by a rank-indexing method. We will show that such augmentation can bring a significant reduction of space and also of the number of memory accesses, especially when deletions of set elements from the Bloom Filter need to be supported. The exact active counter array is another important building block in many sketching algorithms, where the storage cost of the array is of paramount concern. Previous approaches reduce the storage costs while either losing accuracy or supporting only passive measurements. In this thesis, we propose an exact statistics counter array architecture that can support active measurements (real-time read and write). It also leverages the aforementioned rank-indexing method and exploits statistical multiplexing to minimize the storage costs of the counter array. Error estimating coding (EEC) has recently been established as an important tool to estimate bit error rates in the transmission of packets over wireless links. In essence, the EEC problem is also a sketching problem, since the EEC codes can be viewed as a sketch of the packet sent, which is decoded by the receiver to estimate the bit error rate. In this thesis, we will first investigate the asymptotic bound of error estimating coding by viewing the problem from a two-party computation perspective and then investigate its coding/decoding efficiency using Fisher information analysis. Further, we develop several sketching techniques including the Enhanced tug-of-war (EToW) sketch and the generalized EEC (gEEC) sketch family, which can achieve around 70% reduction of sketch size with similar estimation accuracies. For all solutions proposed above, we will use theoretical tools such as information theory and communication complexity to investigate how far our proposed solutions are from the theoretical optimum. We will show that the proposed techniques are asymptotically or empirically very close to the theoretical bounds.
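As a minimal illustration of the sketching idea described in this annotation (not the rank-indexed variants developed in the thesis), a basic Bloom filter for membership queries can be written in a few lines of Python:

```python
# Minimal Bloom filter sketch: approximate set membership with no false
# negatives and a tunable false-positive rate. Illustrative only; the thesis
# develops rank-indexed variants that also support deletions.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)          # one byte per bit, for clarity

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("10.0.0.1")
print("10.0.0.1" in bf)   # True
print("10.0.0.2" in bf)   # almost certainly False
```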
6

Valent, Adam. „Jednosměrná sériová komunikace laserem na větší vzdálenost“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442524.

Annotation:
The subject of this thesis is the construction of a one-way communication device. The device consists of a transmitter and a receiver, each connected to its own computer via a USB interface, and allows sending UTF-8 characters or files from one computer to another. Both computers run a graphical user interface program. The core of the transmitter is a laser diode modulated by the digital signal. The receiver is built around a photovoltaic panel with a resonance circuit and an amplifier. Communication between the electronics and the computer is handled by microcontrollers. Received messages are verified with one of several error detection algorithms, which can be selected by the user in the utility program.
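The receiver-side verification mentioned above can be pictured with a short frame-check sketch; the CRC-32 from Python's standard zlib module is used purely as an example and is not necessarily one of the algorithms offered by the device described in the thesis.

```python
# Illustrative frame check: append a CRC-32 to each transmitted message and
# verify it on the receiving computer. The concrete CRC is an assumption;
# the device in the thesis offers a user-selectable set of detection codes.
import zlib

def frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check(received: bytes) -> tuple[bool, bytes]:
    payload, crc = received[:-4], int.from_bytes(received[-4:], "big")
    return zlib.crc32(payload) == crc, payload

tx = frame("Hello over the laser link".encode("utf-8"))
ok, msg = check(tx)                          # True for an undisturbed frame
corrupted = bytes([tx[0] ^ 0x01]) + tx[1:]   # flip one bit in transit
print(ok, check(corrupted)[0])               # True False
```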
7

Feng, Jingbin. „Quasi-Static Deflection Compensation Control of Flexible Manipulator“. PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4759.

Annotation:
The growing need in industrial applications of high-performance robots has led to designs of lightweight robot arms. However, the light-weight robot arm introduces accuracy and vibration problems. The classical robot design and control method based on the rigid body assumption is no longer satisfactory for the light-weight manipulators. The effects of flexibility of light-weight manipulators have been an active research area in recent years. A new approach to correct the quasi-static position and orientation error of the end-effector of a manipulator with flexible links is studied in this project. In this approach, strain gages are used to monitor the elastic reactions of the flexible links due to the weight of the manipulator and the payload in real time, and the errors are then compensated on-line by a control algorithm. Although this approach is designed to work for general loading conditions, only the bending deflection in a plane is investigated in detail. It is found that a minimum of two strain gages per link are needed to monitor the deflection of a robot arm subjected to bending. A mathematical model relating the deflections and strains is developed using Castigliano's theorem of least work. The parameters of the governing equations are obtained using the identification method. With the identification method, the geometric details of the robot arms and the carrying load need not be known. The deflections monitored by strain gages are fed back to the kinematic model of the manipulator to find the position and orientation of the end-effector of the manipulator. A control algorithm is developed to compensate the deflections. The inverse kinematics that includes deflections as variables is solved in closed form. If the deflections at the target position are known, this inverse kinematics will generate the exact joint command for the flexible manipulator. However, since the deflections of the robot arms at the target position are unknown ahead of time, the current deflections at each sampling time are used to predict the deflections at the target position and the joint command is modified until the required accuracy is obtained. An experiment is set up to verify the mathematical model relating the strains to the deflections. The results of the experiment show good agreement with the model. The compensation control algorithm is first simulated in a computer program. The simulation also shows good convergence. An experimental manipulator with two flexible links is built to prove this approach. The experimental results show that this compensation control improves the position accuracy of the flexible manipulator significantly. The brief advantages of this approach are the following: the deflections can be monitored without measuring the payload directly and without detailed knowledge of the link geometry; the manipulator calibrates itself with minimum human intervention; the compensation control algorithm can be easily integrated with the existing uncompensated rigid-body algorithm; and it is inexpensive and practical for implementation on manipulators installed in workplaces.
8

Xie, Yichen. „Static detection of software errors precise and scalable algorithms for automatic detection of software errors“. Saarbrücken VDM, Müller, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2991792&prov=M&dok_var=1&dok_ext=htm.

9

Kapfunde, Goodwell. „Near-capacity sphere decoder based detection schemes for MIMO wireless communication systems“. Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/11350.

Annotation:
The search for the closest lattice point arises in many communication problems, and is known to be NP-hard. The Maximum Likelihood (ML) Detector is the optimal detector which yields an optimal solution to this problem, but at the expense of high computational complexity. Existing near-optimal methods used to solve the problem are based on the Sphere Decoder (SD), which searches for lattice points confined in a hyper-sphere around the received point. The SD has emerged as a powerful means of finding the solution to the ML detection problem for MIMO systems. However the bottleneck lies in the determination of the initial radius. This thesis is concerned with the detection of transmitted wireless signals in Multiple-Input Multiple-Output (MIMO) digital communication systems as efficiently and effectively as possible. The main objective of this thesis is to design efficient ML detection algorithms for MIMO systems based on the depth-first search (DFS) algorithms whilst taking into account complexity and bit error rate performance requirements for advanced digital communication systems. The increased capacity and improved link reliability of MIMO systems without sacrificing bandwidth efficiency and transmit power will serve as the key motivation behind the study of MIMO detection schemes. The fundamental principles behind MIMO systems are explored in Chapter 2. A generic framework for linear and non-linear tree search based detection schemes is then presented Chapter 3. This paves way for different methods of improving the achievable performance-complexity trade-off for all SD-based detection algorithms. The suboptimal detection schemes, in particular the Minimum Mean Squared Error-Successive Interference Cancellation (MMSE-SIC), will also serve as pre-processing as well as comparison techniques whilst channel capacity approaching Low Density Parity Check (LDPC) codes will be employed to evaluate the performance of the proposed SD. Numerical and simulation results show that non-linear detection schemes yield better performance compared to linear detection schemes, however, at the expense of a slight increase in complexity. The first contribution in this thesis is the design of a near ML-achieving SD algorithm for MIMO digital communication systems that reduces the number of search operations within the sphere-constrained search space at reduced detection complexity in Chapter 4. In this design, the distance between the ML estimate and the received signal is used to control the lower and upper bound radii of the proposed SD to prevent NP-complete problems. The detection method is based on the DFS algorithm and the Successive Interference Cancellation (SIC). The SIC ensures that the effects of dominant signals are effectively removed. Simulation results presented in this thesis show that by employing pre-processing detection schemes, the complexity of the proposed SD can be significantly reduced, though at marginal performance penalty. The second contribution is the determination of the initial sphere radius in Chapter 5. The new initial radius proposed in this thesis is based on the variable parameter α which is commonly based on experience and is chosen to ensure that at least a lattice point exists inside the sphere with high probability. Using the variable parameter α, a new noise covariance matrix which incorporates the number of transmit antennas, the energy of the transmitted symbols and the channel matrix is defined. 
The new covariance matrix is then incorporated into the EMMSE model to generate an improved EMMSE estimate. The EMMSE radius is finally found by computing the distance between the sphere centre and the improved EMMSE estimate. This distance can be fine-tuned by varying the variable parameter α. The beauty of the proposed method is that it reduces the complexity of the preprocessing step of the EMMSE to that of the Zero-Forcing (ZF) detector without significant performance degradation of the SD, particularly at low Signal-to-Noise Ratios (SNR). More specifically, it will be shown through simulation results that using the EMMSE preprocessing step will substantially improve performance whenever the complexity of the tree search is fixed or upper bounded. The final contribution is the design of the LRAD-MMSE-SIC based SD detection scheme which introduces a trade-off between performance and increased computational complexity in Chapter 6. The Lenstra-Lenstra-Lovasz (LLL) algorithm will be utilised to orthogonalise the channel matrix H to a new near-orthogonal channel matrix H̄. The increased computational complexity introduced by the LLL algorithm will be significantly decreased by employing sorted QR decomposition of the transformed channel H̄ into a unitary matrix and an upper triangular matrix which retains the property of the channel matrix. The SIC algorithm will ensure that the interference due to dominant signals will be minimised while the LDPC will effectively stop the propagation of errors within the entire system. Through simulations, it will be demonstrated that the proposed detector still approaches the ML performance while requiring much lower complexity compared to the conventional SD.
10

Sevim, Alaettin. „Including state excitation in the fixed-interval smoothing algorithm and implementation of the maneuver detection method using error residuals“. Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA246336.

Annotation:
Thesis (M.S. in Engineering Science)--Naval Postgraduate School, December 1990.
Thesis Advisor: Titus, Harold A. Second Reader: Burl, Jeff. "December 1990." Description based on title screen as viewed on April 2, 2010. DTIC Identifier(s): Noise processing, maneuver detection. Author(s) subject terms: Kalman filter, smoothing, noise process, maneuver detection. Includes bibliographical references (p. 98-99). Also available in print.
11

Grymel, Martin-Thomas. „Error control with binary cyclic codes“. Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/error-control-with-binary-cyclic-codes(a5750b4a-e4d6-49a8-915b-3e015387ad36).html.

Annotation:
Error-control codes provide a mechanism to increase the reliability of digital data being processed, transmitted, or stored under noisy conditions. Cyclic codes constitute an important class of error-control code, offering powerful error detection and correction capabilities. They can easily be generated and verified in hardware, which makes them particularly well suited to practical use as error-detecting codes. A cyclic code is based on a generator polynomial which determines its properties including the specific error detection strength. The optimal choice of polynomial depends on many factors that may be influenced by the underlying application. It is therefore advantageous to employ programmable cyclic code hardware that allows a flexible choice of polynomial to be applied to different requirements. A novel method is presented in this thesis to realise programmable cyclic code circuits that are fast, energy-efficient and minimise implementation resources. It can be shown that the correction of a single-bit error on the basis of a cyclic code is equivalent to the solution of an instance of the discrete logarithm problem. A new approach is proposed for computing discrete logarithms; this leads to a generic deterministic algorithm for analysed group orders that equal Mersenne numbers with an exponent of a power of two. The algorithm exhibits a worst-case runtime in the order of the square root of the group order and constant space requirements. This thesis establishes new relationships for finite fields that are represented as the polynomial ring over the binary field modulo a primitive polynomial. With a subset of these properties, a novel approach is developed for the solution of the discrete logarithm in the multiplicative groups of these fields. This leads to a deterministic algorithm for small group orders that has linear space and linearithmic time requirements in the degree of the defining polynomial, enabling an efficient correction of single-bit errors based on the corresponding cyclic codes.
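A small bit-level sketch of the division-by-generator-polynomial principle behind cyclic error-detecting codes may help readers unfamiliar with them; the generator polynomial below (x^8 + x^2 + x + 1) is only an example, not one of the programmable polynomials discussed in the thesis.

```python
# Cyclic-code style error detection by polynomial division over GF(2).
# Generator x^8 + x^2 + x + 1 is used here only as an example.
def crc_remainder(bits: str, generator: str = "100000111") -> str:
    g = len(generator)
    padded = list(bits + "0" * (g - 1))
    for i in range(len(bits)):
        if padded[i] == "1":
            for j, gb in enumerate(generator):
                padded[i + j] = str(int(padded[i + j]) ^ int(gb))
    return "".join(padded[-(g - 1):])

def encode(data: str) -> str:
    # Append the remainder so the codeword is divisible by the generator.
    return data + crc_remainder(data)

def check(codeword: str) -> bool:
    # A valid codeword leaves a zero remainder after division.
    return int(crc_remainder(codeword), 2) == 0

cw = encode("1101011011")
assert check(cw)
bad = cw[:3] + ("1" if cw[3] == "0" else "0") + cw[4:]   # single-bit error
print(check(bad))   # False: the error is detected
```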
12

Tran, Ngoc Nguyen. „Infeasibility detection and regularization strategies in nonlinear optimization“. Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0059/document.

Annotation:
This thesis is devoted to the study of numerical algorithms for nonlinear optimization. On the one hand, we propose new strategies for the rapid infeasibility detection. On the other hand, we analyze the local behavior of primal-dual algorithms for the solution of singular problems. In the first part, we present a modification of an augmented Lagrangian algorithm for equality constrained optimization. The quadratic convergence of the new algorithm in the infeasible case is theoretically and numerically demonstrated. The second part is dedicated to extending the previous result to the solution of general nonlinear optimization problems with equality and inequality constraints. We propose a modification of a mixed logarithmic barrier-augmented Lagrangian algorithm. The theoretical convergence results and the numerical experiments show the advantage of the new algorithm for the infeasibility detection. In the third part, we study the local behavior of a primal-dual interior point algorithm for bound constrained optimization. The local analysis is done without the standard assumption of the second-order sufficient optimality conditions. These conditions are replaced by a weaker assumption based on a local error bound condition. We propose a regularization technique of the Jacobian matrix of the optimality system. We then demonstrate some boundedness properties of the inverse of these regularized matrices, which allow us to prove the superlinear convergence of our algorithm. The last part is devoted to the local convergence analysis of the primal-dual algorithm used in the first two parts of this thesis. In practice, it has been observed that this algorithm converges rapidly even in the case where the constraints do not satisfy the Mangasarian-Fromovitz constraint qualification. We demonstrate the superlinear and quadratic convergence of this algorithm without any assumption of constraint qualification
13

Palkki, Ryan D. „Chemical identification under a poisson model for Raman spectroscopy“. Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45935.

Annotation:
Raman spectroscopy provides a powerful means of chemical identification in a variety of fields, partly because of its non-contact nature and the speed at which measurements can be taken. The development of powerful, inexpensive lasers and sensitive charge-coupled device (CCD) detectors has led to widespread use of commercial and scientific Raman systems. However, relatively little work has been done developing physics-based probabilistic models for Raman measurement systems and crafting inference algorithms within the framework of statistical estimation and detection theory. The objective of this thesis is to develop algorithms and performance bounds for the identification of chemicals from their Raman spectra. First, a Poisson measurement model based on the physics of a dispersive Raman device is presented. The problem is then expressed as one of deterministic parameter estimation, and several methods are analyzed for computing the maximum-likelihood (ML) estimates of the mixing coefficients under our data model. The performance of these algorithms is compared against the Cramer-Rao lower bound (CRLB). Next, the Raman detection problem is formulated as one of multiple hypothesis detection (MHD), and an approximation to the optimal decision rule is presented. The resulting approximations are related to the minimum description length (MDL) approach to inference. In our simulations, this method is seen to outperform two common general detection approaches, the spectral unmixing approach and the generalized likelihood ratio test (GLRT). The MHD framework is applied naturally to both the detection of individual target chemicals and to the detection of chemicals from a given class. The common, yet vexing, scenario is then considered in which chemicals are present that are not in the known reference library. A novel variation of nonnegative matrix factorization (NMF) is developed to address this problem. Our simulations indicate that this algorithm gives better estimation performance than the standard two-stage NMF approach and the fully supervised approach when there are chemicals present that are not in the library. Finally, estimation algorithms are developed that take into account errors that may be present in the reference library. In particular, an algorithm is presented for ML estimation under a Poisson errors-in-variables (EIV) model. It is shown that this same basic approach can also be applied to the nonnegative total least squares (NNTLS) problem. Most of the techniques developed in this thesis are applicable to other problems in which an object is to be identified by comparing some measurement of it to a library of known constituent signatures.
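One standard way to obtain maximum-likelihood mixing coefficients under a Poisson measurement model y ~ Poisson(Ax) is the multiplicative EM (Richardson–Lucy type) update sketched below; it is given as a generic illustration, not as the specific estimation algorithms analysed in the dissertation.

```python
# Illustrative ML estimation of nonnegative mixing coefficients x for
# Poisson-distributed counts y ~ Poisson(A @ x), using the standard
# multiplicative EM update x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1).
import numpy as np

def poisson_mlem(A, y, iters=500, eps=1e-12):
    A = np.asarray(A, float)
    y = np.asarray(y, float)
    x = np.full(A.shape[1], y.sum() / A.sum())      # flat positive start
    col_sums = A.sum(axis=0)
    for _ in range(iters):
        x *= A.T @ (y / np.maximum(A @ x, eps)) / np.maximum(col_sums, eps)
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 3))            # library of 3 reference spectra (assumed)
x_true = np.array([5.0, 0.0, 2.0])  # true mixing coefficients
y = rng.poisson(A @ x_true)         # simulated photon counts
print(poisson_mlem(A, y).round(2))  # approximately [5, 0, 2]
```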
14

LIMA, Christiane Ferreira Lemos. „Detecção de intrusos em redes de computadores com uso de códigos corretores de erros e medidas de informação“. Universidade Federal de Campina Grande, 2013. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1430.

Annotation:
The thesis’s main objective is to present a scheme to protect computer networks against the occurrence of invasions by making use of error correcting codes and information measures. For this, the identification of attacks in a network is viewed as a multiclass classification task because it involves attacks discrimination into various categories. Based on this approach, this work presents strategies for multiclass problems based on the error correcting codes principles, where eachM class is associated with a codeword of lengthN, whereN is the number of selected attributes, chosen by use of information measures. These attributes are monitored by software devices, here called network detectors and detectors host. In this approach, the codewords that form a codewords table are a sub-code of a BCH-type linear code. This approach allows decoding step to be performed using algebraic decoding algorithms, what is not possible with random selected codewords. In this context, the use of a variant of genetic algorithm are applied in the table-approach design to be used in attacks identification in networks. The effective contributions of this thesis, demonstrated on the cientific experiments, are: the application of coding theory to determine the appropriate code words to network intrusion detection; the application of C4.5 decision tree based on information measurements of Rényi and Tsallis for attributes selection; the use of algebraic decoding, based on the concepts of the traditional decoding and list decoding techniques.
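The error-correcting-code view of multiclass attack classification described above can be sketched in a few lines: each class is assigned a codeword, each bit is supplied by a detector, and the observed bit vector is decoded to the nearest codeword. The codewords and detector outputs below are invented for illustration; the thesis selects sub-codes of BCH codes with a genetic algorithm and decodes them algebraically.

```python
# Error-correcting output codes for multiclass attack identification:
# decode the vector of per-attribute detector outputs to the class whose
# codeword is nearest in Hamming distance. Codewords here are illustrative,
# not the GA-selected BCH sub-code of the thesis.
CODEWORDS = {
    "normal":    (0, 0, 0, 0, 0, 0, 0),
    "dos":       (1, 1, 1, 0, 0, 1, 0),
    "probe":     (0, 1, 0, 1, 1, 0, 1),
    "intrusion": (1, 0, 1, 1, 0, 0, 1),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(detector_bits):
    return min(CODEWORDS, key=lambda c: hamming(CODEWORDS[c], detector_bits))

observed = (1, 1, 0, 0, 0, 1, 0)        # one detector output flipped by noise
print(decode(observed))                 # -> "dos" despite the single error
```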
15

Povalač, Karel. „Sledování spektra a optimalizace systémů s více nosnými pro kognitivní rádio“. Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-233577.

Annotation:
The doctoral thesis deals with spectrum sensing and the subsequent use of the frequency spectrum by a multicarrier communication system whose parameters are set on the basis of an optimization technique. Adaptation settings can be made with respect to several requirements as well as the state and occupancy of individual communication channels. The system characterized above is often referred to as a cognitive radio. Equipment operating on cognitive radio principles will be widely used in the near future because the available frequency spectrum is limited. One of the main contributions of the work is the novel usage of the Kolmogorov–Smirnov statistical test as an alternative way of detecting the presence of a primary user signal. A new fitness function for Particle Swarm Optimization (PSO) is introduced, and the Error Vector Magnitude (EVM) parameter is used in the adaptive greedy algorithm and the PSO optimization. The thesis also incorporates information about the reliability of the frequency spectrum sensing into the modified greedy algorithm. The proposed methods are verified by simulations, and frequency-domain energy detection is implemented on a development board with an FPGA.
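The Kolmogorov–Smirnov detection idea mentioned in the annotation can be illustrated with SciPy's two-sample test: received samples are compared against a noise-only reference, and a primary user is declared when the empirical distributions differ. The signal model, sample sizes and significance level below are assumptions made for the sketch.

```python
# Spectrum-sensing sketch with the two-sample Kolmogorov-Smirnov test:
# compare received samples against a noise-only reference distribution.
# Signal model, sample sizes and the 0.01 significance level are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
noise_ref = rng.normal(0.0, 1.0, 2000)        # calibration: noise-only samples

def primary_user_present(samples, alpha=0.01):
    stat, p_value = ks_2samp(samples, noise_ref)
    return p_value < alpha                    # distributions differ

idle_band = rng.normal(0.0, 1.0, 500)
busy_band = rng.normal(0.0, 1.0, 500) + 1.5 * np.sin(np.linspace(0, 60, 500))

print(primary_user_present(idle_band))   # expected: False (channel looks free)
print(primary_user_present(busy_band))   # expected: True (primary user present)
```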
16

Lin, TsungPo. „An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24819.

Annotation:
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Dimitri Mavris; Committee Member: Erwing Calleros; Committee Member: Hongmei Chen; Committee Member: Mark Waters; Committee Member: Vitali Volovoi.
17

Holzschuch, Nicolas. „Le contrôle de l'erreur dans la méthode de radiosité hiérarchique“. Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004994.

Annotation:
We present several improvements to a lighting-simulation algorithm, the radiosity method. First, a detailed analysis of the hierarchical radiosity method highlights its weak points and brings out two simple improvements: lazy evaluation of the interactions between objects, and a new refinement criterion that eliminates most of the useless refinements. A brief review of the properties of functions of several variables and of their derivatives follows, from which we first derive a rewriting of the radiosity expression that yields a more precise numerical computation. Methods for estimating the error produced during the lighting-simulation process are then introduced. We then show how the concavity properties of the radiosity function allow, through the computation of the successive derivatives of radiosity, complete control of the error committed in modelling the interactions between objects, and hence precise bounds on the radiosity. We present a refinement criterion based on this modelling of the interactions, and a complete hierarchical radiosity algorithm integrating this refinement criterion, thus allowing control of the error committed on the radiosity during the solution. Finally, we present practical methods for computing the successive derivatives of radiosity (gradient and Hessian), first in the case of a constant emitter without occluders, then in the case of a constant emitter in the presence of occluders, and in the case of an emitter over which the radiosity varies linearly.
18

Hunter, Brandon. „Channel Probing for an Indoor Wireless Communications Channel“. BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/64.

Annotation:
The statistics of the amplitude, time and angle of arrival of multipaths in an indoor environment are all necessary components of multipath models used to simulate the performance of spatial diversity in receive antenna configurations. The model presented by Saleh and Valenzuela, was added to by Spencer et. al., and included all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel. The addition of this parameter allows spatial diversity at the transmitter along with the receiver to be simulated. The process of going from raw measurement data to discrete arrivals and then to clustered arrivals is analyzed. Many possible errors associated with discrete arrival processing are discussed along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses are pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
19

Vasiga, Troy Michael John. „Error Detection in Number-Theoretic and Algebraic Algorithms“. Thesis, 2008. http://hdl.handle.net/10012/3895.

Annotation:
CPU's are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (i.e., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Relying on this fact, this thesis defines a statistical measure called robustness, and measures the robustness of several number-theoretic and algebraic algorithms. Consider an algorithm A that implements function f, such that f has range O and algorithm A has range O' where O⊆O'. That is, the algorithm may produce results which are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories: 1. Correct and feasible -- the algorithm computes the correct result, 2. Incorrect and feasible -- the algorithm computes an incorrect result and this output is in O, 3. Incorrect and infeasible -- the algorithm computes an incorrect result and output is in O'\O. Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pepin tests), group order and quadratic residues. Moreover, we show that typically, there will be an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
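For reference, the Lucas–Lehmer test named in the abstract is short enough to state in full; the sketch below shows only the fault-free algorithm, not the robustness (bit-flip) analysis carried out in the thesis.

```python
# Lucas-Lehmer primality test for Mersenne numbers M_p = 2**p - 1:
# s_0 = 4, s_{i+1} = s_i**2 - 2 (mod M_p); M_p is prime iff s_{p-2} == 0.
# A transient bit flip in any s_i can silently change the verdict, which is
# what motivates the robustness analysis described in the abstract.
def lucas_lehmer(p: int) -> bool:
    if p == 2:
        return True          # M_2 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19]   (M_11 = 2047 = 23 * 89 is composite)
```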
20

Yen, Chia-Chih, and 顏嘉志. „Algorithms for Efficient Design Error Detection and Diagnosis“. Thesis, 2005. http://ndltd.ncl.edu.tw/handle/06604751304421872000.

Annotation:
Ph.D. dissertation, National Chiao Tung University, Department of Electronics Engineering, academic year 93 (ROC calendar).
Functional verification now accounts for most of the time spent in product development due to the increasing complexity of modern ASIC and SoC designs. Assertion based verification (ABV) helps design teams identify and debug design errors more quickly than traditional techniques. It compares the implementation of a design against its specified assertions by embedding assertions in a design and having them monitor design activities. As a result, ABV is recognized as a critical component of the design verification process. In general, detecting and diagnosing design errors play the most important role in ABV. Unfortunately, the proposed techniques for design error detection cannot keep up with the rapid growth of design complexity. Furthermore, the generated error traces are usually very lengthy such that diagnosing counterexamples becomes very tedious and difficult. In this dissertation, we focus on three strategies that address the problem of efficiently detecting design errors and easing the debug process. We first propose a practical cycle bound calculation algorithm for guiding bounded model checking (BMC). Many reports have shown the effectiveness of BMC in design error detection; however, the flaw of BMC is that it needs a pre-computed bound to ensure completeness. To determine the bound, we develop a practical approach that takes a branch-and-bound manner. We reduce the search space by applying a partitioning as well as a pruning method. Furthermore, we propose a novel formulation and use the SAT-solver to search states and thus determine the cycle bound. Experimental results show that our algorithm considerably enhances the performance compared with the results of the previous work. We then propose an advanced semi-formal verification algorithm for identifying hard-to-detect design errors. Generally, semi-formal verification combines simulative and formal methods to tackle the tough verification problems in a real industrial environment. Nevertheless, the monotonous cooperation of the heterogeneous methods cannot keep pace with the rapid growth of design complexity. Therefore, we propose an elegant algorithm that takes a divide-and-conquer paradigm to orchestrate those approaches. We introduce a partitioning technique to recursively divide a design into smaller-sized components. Moreover, we present approaches for handling each divided component efficiently while keeping the entire design function correct. Experimental results demonstrate that our strategy indeed detects many more design errors than the traditional methods do. At last, we propose powerful error trace compaction algorithms to ease design error diagnosis. Usually, the error traces used to exercise and observe design behavior are very lengthy such that designers need to spend considerable effort to understand them. To alleviate designers' burden, we present a SAT-based algorithm for reducing the lengths of the error traces. The algorithm not only halves the search space recursively but also guarantees to acquire the shortest lengths. Based on the optimum algorithm, we also develop two robust heuristics to handle real designs. Experimental results indicate that our approaches greatly surpass previous work and give promising results.
21

Yu, Namin. „Compact hardware implementation of advanced encryption standard with concurrent error detection /“. 2005.

22

SU, JIAN-JUN, and 蘇建郡. „NEW ALGORITHMS AND VLSI IMPLEMENTATIONS FOR SINGLE ERROR DETECTION AND CORRECTION IN RESIDUE NUMBER SYSTEMS“. Thesis, 1990. http://ndltd.ncl.edu.tw/handle/88432246501240649784.

Annotation:
Ph.D. dissertation, National Tsing Hua University, Institute of Electrical Engineering, academic year 79 (ROC calendar).
During recent years, the residue number system (RNS) has been receiving considerable interest due to its parallel and fault-tolerant properties. In this dissertation, a new algorithm for single residue digit error correction is proposed. This algorithm is fully based on the mixed radix conversion (MRC), and redundant MR digits can be used to establish a look-up table to correct single residue digit errors. The necessary and sufficient conditions for the proposed algorithm are also derived. The proposed algorithm can be extended to correct errors in the quadratic RNS (QRNS). It can be shown that single digit-pair errors in a QRNS can be corrected. Since the scaling and the residue-to-binary conversion can be completed via the MRC, the relationship between single error correction (or detection) and scaling (or residue-to-binary conversion) is investigated. The two operations can then be unified in one hardware unit, thereby reducing the complexity of these implementations. The original scaling algorithm reduces the dynamic range; in this dissertation, this shortcoming is eliminated by the use of a polarity shift. The scaling error es is analyzed and found to be limited to the interval -1 < es < 1. Since scaling is often required in the computations of digital signal processing (DSP), which is the primary application of the RNS, an error correction circuit with scaling (ECCS) or error detection circuit with scaling (EDCS) will be very efficient in fault-tolerant systems for DSP applications. Two examples of the application of the proposed ECCS, EDCS, or error correction circuit with residue-to-binary conversion are given: one is the fast Fourier transform (FFT) network and the other is the infinite impulse response (IIR) filter. Based on the proposed algorithms, fault-tolerant FFT networks and IIR filters can be realized. Finally, a VLSI layout (using a double-metal CMOS process and a two-phase non-overlapping clock strategy) for the proposed single error correction algorithm is implemented. The moduli set {16,17,19,23,29,31} is used, and the total and dynamic ranges are about 27 and 17 bits, respectively. Function and timing verifications are simulated. The chip latency is 8 clock cycles and the throughput is about 10 MHz. This layout implementation shows that the proposed algorithm is suitable for VLSI design.
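The redundant-residue principle behind such schemes can be illustrated with a simplified sketch: with redundant moduli, a single corrupted residue digit pushes the reconstructed value outside the legitimate dynamic range and is therefore detected. The split into information and redundant moduli below is an assumption, and the thesis corrects errors with a mixed-radix-conversion look-up table rather than by full CRT reconstruction.

```python
# Simplified redundant-residue sketch: with redundant moduli, a single
# corrupted residue digit makes the CRT reconstruction fall outside the
# legitimate dynamic range and is therefore detected. Illustrative only;
# the thesis corrects errors via a mixed-radix (MRC) look-up table.
from math import prod

MODULI = (16, 17, 19, 23, 29, 31)        # moduli set quoted in the abstract
LEGIT_RANGE = prod(MODULI[:4])           # information moduli (assumed split)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # CRT with modular inverses
    return x % M

def single_error_detected(residues):
    return from_rns(residues) >= LEGIT_RANGE

x = 98_765
r = list(to_rns(x))
assert not single_error_detected(r)      # legitimate codeword passes
r[2] = (r[2] + 5) % MODULI[2]            # corrupt one residue digit
print(single_error_detected(r))          # True
```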
23

„InCheck - An Integrated Recovery Methodology for Fine-grained Soft-Error Detection Schemes“. Master's thesis, 2016. http://hdl.handle.net/2286/R.I.40720.

Annotation:
Soft errors are considered a key reliability challenge for sub-nanoscale transistors. An ideal solution for such a challenge should ultimately eliminate the effect of soft errors from the microprocessor. While forward recovery techniques achieve fast recovery from errors by simply voting out the wrong values, they incur the overhead of executing three copies. Backward recovery techniques only need two copies of execution, but suffer from check-pointing overhead. In this work I explored the efficiency of integrating check-pointing into the application and the effectiveness of recovery that can be performed upon it. After evaluating the available fine-grained approaches to perform recovery, I introduce InCheck, an in-application recovery scheme that can be integrated into instruction-duplication based techniques, thus providing fast error recovery. The proposed technique makes light-weight checkpoints at the basic-block granularity and uses them for recovery purposes. To evaluate the effectiveness of the proposed technique, 10,000 fault injection experiments were performed on different hardware components of a modern ARM in-order simulated processor. InCheck was able to recover from all detected errors by replaying about 20 instructions; the state-of-the-art recovery scheme, in contrast, failed more than 200 times.
Dissertation/Thesis
Masters Thesis Computer Science 2016
24

Musti, S. S. Phaniram. „Development Of Algorithms For Bad Data Detection In Power System State Estimation“. Thesis, 2007. http://hdl.handle.net/2005/496.

Annotation:
Power system state estimation (PSSE) is an energy management system function responsible for the computation of the most likely values of state variables viz., bus voltage magnitudes and angles. The state estimation is obtained within a network at a given instant by solving a system of mostly non-linear equations whose parameters are the redundant measurements, both static such as transformer/line parameters and dynamic such as, status of circuit breakers/isolators, transformer tap positions, active/reactive power flows, generator active/reactive power outputs etc. PSSE involves solving an over determined set of nonlinear equations by minimizing a weighted norm of the measurement residuals. Typically, the L1 and L2 norms are employed. The use of L2 norm leads to state estimation based on the weighted least squares (WLS) criterion. This method is known to exhibit efficient filtering capability when the errors are Gaussian but fails in the case of presence of bad data. The method of hypothesis testing identification can be incorporated into the WLS estimator to detect and identify bad data. Nevertheless, it is prone to failure when the measurement is a leverage point. On the other hand state estimation based on the weighted least absolute value (WLAV) criterion using L1 norm, has superior bad data suppression capability. But it also fails in rejecting bad data measurements associated with leverage points. Leverage points are highly influential measurements that attract the state estimator solution towards them. Consequently, much research effort has focused recently, on producing a LAV estimator that remains robust in the presence of bad leverage measurements. This problem has been addressed in the thesis work. Two methods, which aims development of robust estimator that are insensitive to bad leverage points, have been proposed viz., (i) The objective function used here is obtained by linearizing L2 norm of the error function. In addition to the constraints corresponding to measurement set, constraints corresponding to bounds of state variables are also involved. Linear programming (LP) optimization is carried out using upper bound optimization technique. (ii) A hybrid optimization algorithm which is combination of”upper bound optimization technique” and ”an improved algorithm for discrete l1 linear approximation”, to restrict the state variables not to leave the basis during optimization process. Linear programming optimization, with bounds of state variables as additional constraints is carried out using the proposed hybrid optimization algorithm. The proposed state estimator algorithms are tested on 24-bus EHV equivalent of southern power network, 36-bus EHV equivalent of western grid, 205-bus interconnected grid system of southern region and IEEE-39 bus New England system. Performances of the proposed two methods are compared with the WLAV estimator in the presence of bad data associated with leverage points. Also, the effect of bad leverage measurements on the interacting bad data, which are non-leverage, has been compared. Results show that proposed state estimator algorithms rejects bad data associated with leverage points efficiently.
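The least-absolute-value (L1) estimation underlying the abstract can be posed as a linear program by splitting each residual into nonnegative parts; the linear measurement model in the sketch below is a toy stand-in for the nonlinear power-flow measurement functions handled in the thesis.

```python
# LAV (L1-norm) estimation as a linear program: minimize sum(u + v) subject to
# H x + u - v = z, u, v >= 0. A linear toy model replaces the nonlinear
# power-flow measurement functions of a real state estimator.
import numpy as np
from scipy.optimize import linprog

def lav_estimate(H, z):
    m, n = H.shape
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])      # cost only on u, v
    A_eq = np.hstack([H, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)    # state x is free
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(2)
H = rng.normal(size=(30, 3))                 # toy measurement matrix
x_true = np.array([1.0, -0.5, 0.25])
z = H @ x_true + 0.01 * rng.normal(size=30)
z[5] += 5.0                                  # one gross measurement error
print(lav_estimate(H, z).round(3))           # close to x_true despite bad data
```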
25

Patel, Dimpesh. „VLSI Implementation of Digital Signal Processing Algorithms for MIMO Detection and Channel Pre-processing“. Thesis, 2010. http://hdl.handle.net/1807/29986.

Annotation:
The efficient high-throughput VLSI implementation of Soft-output MIMO detectors for high-order constellations and large antenna configurations has been a major challenge in the literature. This thesis introduces a novel Soft-output K-Best scheme that improves BER performance and reduces the computational complexity significantly by using three major improvement ideas. It also presents an area and power efficient VLSI implementation of a 4x4 64-QAM Soft K-Best MIMO detector that attains the highest detection throughput of 2 Gbps and second lowest energy/bit reported in the literature, fulfilling the aggressive requirements of emerging 4G standards such as IEEE 802.16m and LTE-Advanced. A low-complexity and highly parallel algorithm for QR Decomposition, an essential channel pre-processing task, is also developed that uses 2D, Householder 3D and 4D Givens Rotations. Test results for the QRD chip, fabricated in 0.13um CMOS, show that it attains the lowest reported latency of 144ns and highest QR Processing Efficiency.
26

Falcon, Lins Rafael Jose. „Feature-Based Mesh Simplification With Quadric Error Metric Using A Line Simplification Algorithm“. 2010. http://hdl.handle.net/10222/13071.

Annotation:
Mesh simplification is an important task in Computer Graphics due to the ever increasing complexity of polygonal geometric models. Specifically in real-time rendering, there is a necessity that these models, which can be acquired through 3D scanning or through artistic conception, have to be simplified or optimized to be rendered on today's hardware while losing as little detail as possible. This thesis proposes a mesh simplification algorithm that works by identifying and simplifying features. Then it simplifies the remaining mesh with the simplified features frozen. The algorithm is called Quadric Error with Feature Curves (QEFC). Quadric Error with Feature Curves works as a tool that allows the user to interactively select a percentage of the most important points of the feature curves to be preserved along with the points determined by the Quadric Error Metric algorithm.
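The quadric error metric referenced above has a compact closed form: each supporting plane p = (a, b, c, d) with a unit normal contributes the quadric Q = p p^T, and the cost of placing a vertex at v is [v, 1]^T Q [v, 1]. The planes in the sketch below are invented for the illustration.

```python
# Quadric error metric (Garland-Heckbert): a vertex quadric is the sum of
# outer products p p^T of its incident triangle planes; the simplification
# cost of moving the vertex to position v is [v, 1]^T Q [v, 1].
import numpy as np

def plane_quadric(a, b, c, d):
    p = np.array([a, b, c, d], float)
    return np.outer(p, p)

def vertex_error(Q, v):
    vh = np.append(v, 1.0)                  # homogeneous coordinates
    return float(vh @ Q @ vh)

# Vertex shared by the planes z = 0 and y = 0 (unit normals, illustrative).
Q = plane_quadric(0, 0, 1, 0) + plane_quadric(0, 1, 0, 0)

print(vertex_error(Q, np.array([3.0, 0.0, 0.0])))   # 0.0: still on both planes
print(vertex_error(Q, np.array([3.0, 1.0, 2.0])))   # 5.0: 1^2 + 2^2 off-plane
```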
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Lee, Chen-xing, und 李晨心. „Study on the Algorithm of Automatic Error Detection for Multibeam Data“. Thesis, 1999. http://ndltd.ncl.edu.tw/handle/45361075477628417032.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Sun Yat-sen University
Department of Marine Environment and Engineering
87
Abstract: The multi-beam echo sounder is welcome news for people dealing with marine research and engineering. Its high data density gives a better understanding of the seabed topography and its changes. However, the error propagation of multibeam data is hard to derive because all sounding data are computed from related sensors. Moreover, irregular soundings or outliers influence the charting result and may spoil the quality of decision making. It may not be difficult to remove outliers from data measured over an absolutely flat and smooth seabed, but for a seabed with artificial reefs or a rocky coast, more logical rules are needed to identify anomalous data. Most published automatic error detection algorithms for multi-beam data rely on comparison with neighbouring soundings, so the efficiency of error detection depends strongly on data quality. Our automatic error detection algorithm for multi-beam data has the following characteristics. First, the relationship between the beam angles and the positions where the beams hit the seabed in a profile is established. Secondly, the "shadow effect" caused by features on the seabed is investigated. Thirdly, a time-based moving average algorithm is used to flag data points from the previous stages by comparing them with neighbouring soundings from the adjacent line; outliers are removed by these pre-filtering processes. Finally, the consistency of the inter-swath data and the seabed roughness are checked by comparing overlapping swaths. The results show that the two-phase error flagging process derived from our error detection algorithm not only filters out large blunders but also detects small unreasonable soundings. However, many soundings around artificial reefs do not have enough neighbouring soundings for depth accuracy estimation, owing to the lack of inter-swath overlap at the second flagging phase. Depth points located on the surface of artificial reefs can easily be distinguished from outliers by overlapping two adjacent survey lines.
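The neighbour-comparison idea behind the moving-average flagging stage can be sketched as follows; the window size, the robust spread estimate, and the threshold are illustrative assumptions, and the shadow-effect and inter-swath checks are not reproduced.

    import numpy as np

    def flag_outliers(depths, window=5, k=3.0):
        """Flag soundings that depart strongly from a local estimate of their neighbours."""
        depths = np.asarray(depths, dtype=float)
        half = window // 2
        flags = np.zeros(len(depths), dtype=bool)
        for i in range(len(depths)):
            lo, hi = max(0, i - half), min(len(depths), i + half + 1)
            neigh = np.delete(depths[lo:hi], i - lo)      # neighbours, excluding the point itself
            med = np.median(neigh)
            mad = np.median(np.abs(neigh - med)) or 0.1   # robust spread, floored to avoid zero
            if abs(depths[i] - med) > k * 1.4826 * mad:
                flags[i] = True
        return flags

    profile = [20.1, 20.3, 20.2, 35.0, 20.4, 20.5, 20.3]  # 35.0 is a spike
    print(flag_outliers(profile))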
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Lin, Ming-Jun, und 林銘軍. „A Layout-Aware Implication Selection Algorithm without Loss on Probability of Error Detection“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/trn77z.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
107
Owing to advanced semiconductor processes, circuits can be implemented in a smaller chip area. However, a circuit may fail because of defects introduced during manufacturing, and as the size of integrated circuits shrinks, the impact of process defects on product yield deserves attention. In addition, soft errors caused by cosmic rays or thermal neutrons may temporarily change the function of a chip and make the system unreliable. Concurrent error detection (CED) is one promising method to enhance circuit reliability: it can detect errors during normal functional operation. In recent years, a new CED method called implication-based CED has been proposed. Implications are constant relationships between the inputs and outputs of logic gates as well as between wires in a circuit. Using implications, dedicated checking logic is designed that tests a circuit by checking whether the implications remain valid. If at least one implication is violated, an error indication signal is activated to warn users or to enable re-configuration of the system. However, a circuit may contain a large number of implications, so selecting appropriate implications is critical; the selection affects not only the incurred area cost but also the placement and routing complexity and the performance. In this thesis, building on implication-based CED, we propose an implication selection method that considers the number of logic gate nodes, their routing characteristics during the physical layout process, and the detectability of errors. During the automatic place and route (APR) procedure, we focus on realizing the maximum density without violating any design rules. To evaluate the effectiveness of the proposed method, a total of 18 ISCAS'85 and ITC'99 benchmark circuits are employed. The experimental results show that our proposed method can increase the circuit density without any DRC violation by 11% on average; the area overhead is thus reduced by 36.1%.
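A toy illustration of how an implication check flags an error at run time is sketched below; the circuit, the implication, and the injected fault are assumptions chosen only to show the mechanism, not the selection method itself.

    # Implication-based checking: the implication "node a = 1 implies node b = 1"
    # holds in the fault-free circuit; a violation raises the error indication.
    def fault_free(x1, x2, x3):
        a = x1 and x2              # internal node a
        b = a or x3                # internal node b: a = 1 always forces b = 1
        return a, b

    def faulty(x1, x2, x3):
        a = x1 and x2
        b = 0                      # stuck-at-0 fault injected on node b
        return a, b

    def implication_violated(a, b):
        return a == 1 and b == 0   # checker for the implication "a = 1 -> b = 1"

    for circuit in (fault_free, faulty):
        flags = [implication_violated(*circuit(x1, x2, x3))
                 for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)]
        print(circuit.__name__, "error indication raised:", any(flags))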
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Chang, Hung-Chia, und 張宏嘉. „The study of using different objective functions with algorithm in gross error detection“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4m2r49.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chengchi University
Department of Land Economics
106
In the field of surveying, least squares (LS) methods are often used in adjustment. However, LS assumes that the observations contain only random errors; if the observations contain gross errors, the LS solution is easily affected. This study therefore applies different objective functions with different algorithms and analyzes their ability to detect gross errors using test statistics. The methods considered are equal-weight LS, iteratively reweighted LS (IRLS), least absolute deviation (LAD), and the optimal weight matrix (OWM). The study proposes the concept of an "inverse weight matrix of LAD" to compensate for LAD's lack of test statistics, and assesses the results of the different methods using the weight values, standardized residuals, and redundant observation components. In simulated data, when the observations have few redundant observations, OWM and the "inverse weight matrix of LAD" show better gross error detection ability and assign lower weights to the gross errors. With more observations, IRLS gives better results. In real data, the a posteriori variance is easily affected, so none of the methods can locate the gross error; however, OWM and the "inverse weight matrix of LAD" enlarge the standardized residuals of the gross errors and help the user to check the observations.
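A minimal sketch contrasting an equal-weight LS adjustment with an IRLS approximation of the LAD adjustment on a small toy network, together with a rough robust residual check, is given below; the design matrix, the numbers, and the reweighting rule are illustrative assumptions rather than the thesis's formulations.

    import numpy as np

    # Toy adjustment with one gross error; design matrix and noise are assumptions.
    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
    x_true = np.array([1.0, 2.0])
    l = A @ x_true + np.array([0.01, -0.02, 4.0, 0.015, -0.01])   # 3rd observation is gross

    # equal-weight least squares: visibly biased by the gross error
    x_ls = np.linalg.lstsq(A, l, rcond=None)[0]

    # IRLS approximation of the LAD (L1) adjustment: w_i = 1 / max(|v_i|, eps)
    x = x_ls.copy()
    for _ in range(50):
        v = l - A @ x                                    # residuals
        w = 1.0 / np.maximum(np.abs(v), 1e-6)
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)

    v = l - A @ x
    sigma = 1.4826 * np.median(np.abs(v))                # robust scale of the residuals
    print("LS estimate       :", np.round(x_ls, 3))
    print("IRLS/LAD estimate :", np.round(x, 3))
    print("|v| / robust sigma:", np.round(np.abs(v) / sigma, 1))   # the gross error stands out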
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Kai-Ting, Shih. „Perturbed decoding algorithm for concatenated error correcting and detecting codes systems“. 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2607200606405100.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Shih, Kai-Ting, und 施凱挺. „Perturbed decoding algorithm for concatenated error correcting and detecting codes systems“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/55482165469812622566.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
94
We consider a concatenated coded system consisting of an inner error-correcting code and an outer error-detecting code. In a conventional decoding scheme, the inner decoder produces the best codeword from its perspective, which is then checked by the outer decoder. The performance of the concatenated coded system can be improved by having the inner decoder produce not only the most likely candidate but also other highly likely candidates. In this work, we propose a new algorithm called the "perturbed decoding algorithm" (PA). In PA, additional highly likely candidates are produced by feeding the inner decoder with slightly perturbed versions of the received signal. The concept of PA is compatible with most combinations of inner and outer codes. Because PA does not require a sophisticated inner decoder, it is straightforward to implement in silicon. Our simulations show that PA can achieve a performance gain greater than 1 dB.
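The perturbed-decoding idea can be sketched as follows with a deliberately simple inner repetition code and an outer parity check; the codes, noise levels, and number of perturbations are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def inner_decode(y):
        # 3x repetition code over BPSK: average each group of 3 samples, take the sign
        return (y.reshape(-1, 3).mean(axis=1) > 0).astype(int)

    def outer_check(bits):
        # outer error-detecting code: single even-parity check over the block
        return bits.sum() % 2 == 0

    info = np.array([1, 0, 1, 1, 0, 1])                  # parity of the true block is even
    tx = np.repeat(2 * info - 1, 3).astype(float)        # BPSK, 3x repetition
    y = tx + rng.normal(0.0, 1.2, size=tx.size)          # noisy reception

    candidates = [inner_decode(y)]                        # conventional (unperturbed) decode first
    for _ in range(16):                                   # perturbed re-decodings
        candidates.append(inner_decode(y + rng.normal(0.0, 0.4, size=y.size)))

    accepted = next((c for c in candidates if outer_check(c)), candidates[0])
    print("decoded:", accepted, "passes outer check:", outer_check(accepted))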
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Lin, Kuang-Chun, und 林劻錞. „A Probability of Error Detection Lossless Implication Selection Algorithm with Consideration of Interconnect Loading“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ks9r5b.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
107
With the advancement of semiconductor manufacturing processes, the feature size of transistors shrinks aggressively and the chip size can become smaller. However, this also makes chips more susceptible to manufacturing defects and noise, and thus less reliable. For mission-critical applications such as aviation, automotive, and medical electronics, errors appearing during normal operation may cause significant damage to human life and property. In recent years, a new concurrent error detection (CED) method, called implication-based CED, has been proposed to detect errors during the normal functional operation of a circuit. By using the invariant relationships that exist between circuit wires, dedicated error detection logic can be designed to improve the reliability and diagnosability of a target circuit. The implications exist not only between the inputs and outputs of a logic gate but also between wires elsewhere in the circuit. One issue with implication-based CED is that an unaffordable circuit delay overhead may be incurred if too many implications are employed for error checking, so selecting appropriate implications is of great importance. In this thesis, we carefully analyze a previously developed implication selection algorithm and identify three factors that are critically related to the magnitude of the delay overhead. Accordingly, a systematic method is proposed to reduce the delay overhead, including selecting appropriate implications of small size and inserting buffers on heavily loaded wires. With the proposed procedure, not only is the probability of error detection guaranteed to be lossless, but the delay overhead due to the implication checking logic is also significantly reduced. To evaluate the effectiveness of the proposed method, 18 ISCAS'85 and ITC'99 benchmark circuits are considered in the experiments. The experimental results show that our proposed method reduces the delay overhead by 29% on average.
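One possible (hypothetical) way to combine detection benefit, interconnect load, and buffer insertion in a greedy selection is sketched below; the cost model, the numbers, and the fanout threshold are assumptions, not the algorithm proposed in the thesis.

    candidates = [
        # (implication, detection_gain, estimated_load, fanout_of_source_wire)
        ("a -> !b", 0.12, 1.0, 3),
        ("c -> d",  0.20, 4.0, 9),
        ("e -> f",  0.15, 1.5, 10),
        ("g -> !h", 0.05, 0.5, 1),
    ]
    budget = 3.5          # total extra load the checker logic may add
    heavy_fanout = 8      # source wires above this fanout get a buffer

    selected, buffered, used = [], [], 0.0
    for name, gain, load, fanout in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
        if used + load <= budget:           # greedy pick by gain per unit of load
            selected.append(name)
            used += load
            if fanout > heavy_fanout:
                buffered.append(name)

    print("selected implications:", selected)
    print("buffers inserted on the sources of:", buffered)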
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Yang, Yung-Ning, und 楊詠甯. „A New Double-Talk Detection Using Square-Error-Based Regularization NLMS Algorithm For Acoustic Echo Cancellation“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/22713058231971175644.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Yuan Ze University
Department of Electrical Engineering
96
The main purpose of this thesis is to study double-talk detection for acoustic echo cancellation. We use the two-path echo cancellation framework combined with a cross-correlation method as our double-talk detector, which can reliably differentiate between echo and near-end speech. We introduce an acoustic echo cancellation method based on the square-error-based regularization NLMS algorithm with double-talk detection. A large number of simulations demonstrate the usefulness of our scheme.
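A rough sketch of an NLMS echo canceller whose regularization term grows with the instantaneous squared error, combined with a cross-correlation double-talk test that freezes adaptation, is shown below; the filter length, step size, regularization rule, and detection threshold are illustrative assumptions, not the thesis's exact scheme.

    import numpy as np

    rng = np.random.default_rng(1)
    N, L, mu = 4000, 64, 0.5
    h = 0.7 * np.exp(-np.arange(L) / 10.0)                      # toy decaying echo path
    x = rng.normal(0.0, 1.0, N)                                  # far-end signal (white here)
    near = np.zeros(N)
    near[2000:2500] = 3.0 * rng.normal(0.0, 1.0, 500)            # near-end (double-talk) burst
    d = np.convolve(x, h)[:N] + near + rng.normal(0.0, 0.01, N)  # microphone signal

    w = np.zeros(L)
    for n in range(L, N):
        seg = slice(n - L + 1, n + 1)
        xv = x[seg][::-1]                                        # most recent L far-end samples
        e = d[n] - w @ xv                                        # a-priori error
        # crude double-talk test: normalized cross-correlation of far-end and mic windows
        rho = abs(np.dot(x[seg], d[seg])) / (
            np.linalg.norm(x[seg]) * np.linalg.norm(d[seg]) + 1e-12)
        if rho < 0.3:
            continue                                             # near-end suspected: freeze adaptation
        delta = 1e-3 + e * e                                     # square-error-based regularization term
        w = w + mu * e * xv / (xv @ xv + delta)

    print("echo-path misalignment (dB):",
          20 * np.log10(np.linalg.norm(w - h) / np.linalg.norm(h)))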
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Cook, Robert Joseph. „Application of item response theory models to the algorithmic detection of shift errors on paper and pencil tests“. 2013. https://scholarworks.umass.edu/dissertations/AAI3603070.

Der volle Inhalt der Quelle
Annotation:
On paper-and-pencil multiple choice tests, the potential for examinees to mark their answers in incorrect locations presents a serious threat to the validity of test score interpretations. When an examinee skips one or more items (i.e., answers out of sequence) but fails to accurately reflect the size of that skip on the answer sheet, a string of misaligned responses called shift errors can result. Shift errors can cause correct answers to be marked as incorrect, leading to possible underestimation of an examinee's true ability. Despite the movement toward computerized testing in recent years, paper-and-pencil multiple choice tests are still pervasive in many high-stakes assessment settings, including K-12 testing (e.g., MCAS) and college entrance exams (e.g., SAT), leaving a continuing need to address issues that arise within this format. Techniques for detecting aberrant response patterns are well established but do little to recognize the reasons for the aberrance, limiting options for addressing the misfitting patterns. While some work has been done to detect and address specific forms of aberrant response behavior, little has been done in the area of shift error detection, leaving great room for improvement in addressing this source of aberrance. The ability to accurately detect construct-irrelevant errors and either adjust scores to more accurately reflect examinee ability or flag examinees with inaccurate scores for removal from the dataset and retesting would improve the validity of important decisions based on test scores, and could positively impact model fit by allowing for more accurate item parameter and ability estimation. The purpose of this study is to investigate new algorithms for shift error detection that employ IRT models for probabilistic determination as to whether misfitting patterns are likely to be shift errors. The study examines a matrix of detection algorithms, probabilistic models, and person parameter methods, testing combinations of these factors for their selectivity (i.e., true positives vs. false positives), sensitivity (i.e., true shift errors detected vs. undetected), and robustness to parameter bias, all under a carefully manipulated, multifaceted simulation environment. This investigation attempts to answer the following questions, applicable across detection methods, bias reduction procedures, shift conditions, and ability levels, but stated generally as: 1) How sensitively and selectively can an IRT-based probabilistic model detect shift error across the full range of probabilities under specific conditions? 2) How robust is each detection method to the parameter bias introduced by shift error? 3) How well does the detection method detect shift errors compared to other, more general, indices of person fit? 4) What is the impact on bias of making proposed corrections to detected shift errors? and 5) To what extent does shift error, as detected by the method, occur within an empirical data set? Results show that the proposed methods can indeed detect shift errors at reasonably high detection rates with only a minimal number of false positives, that detection improves when detecting longer shift errors, and that examinee ability is a strong determinant of the effectiveness of the shift error detection techniques. Though some detection ability is lost to person parameter bias, when detecting all but the shortest shift errors this loss is minimal.
Application to empirical data also proved effective, though some discrepancies in projected total counts suggest that refinements of the technique are required. Use of a person fit statistic to detect examinees with shift errors was shown to be completely ineffective, underscoring the value of shift-error-specific detection methods.
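A minimal sketch of the core idea, comparing the 2PL likelihood of a recorded response string under the as-marked alignment and under a hypothesized one-position shift, is given below; the item parameters, ability value, and decision margin are toy assumptions, not the study's actual algorithms.

    import numpy as np

    a = np.full(8, 1.5)                                        # 2PL discriminations (toy)
    b = np.array([-2.0, -1.5, -1.0, 0.5, 1.0, 1.5, 2.0, 2.5])  # difficulties, easy to hard
    theta = 1.0                                                # examinee ability

    def loglik(resp, item_idx):
        p = 1.0 / (1.0 + np.exp(-a[item_idx] * (theta - b[item_idx])))   # 2PL item response function
        return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

    # The examinee intended [1,1,1,1,1,0,0] for items 0..6 but skipped a line:
    # from item 3 on, answers were bubbled one row too low, so the sheet reads:
    recorded = np.array([1, 1, 1, 0, 1, 1, 0, 0])

    # H0: responses belong to the items they were bubbled under
    ll_asis = loglik(recorded, np.arange(8))
    # H1: positions 4..7 actually answer items 3..6; position 3 treated as omitted
    ll_shift = loglik(recorded[:3], np.arange(3)) + loglik(recorded[4:], np.arange(3, 7))

    print("per-response log-lik, as recorded     :", round(ll_asis / 8, 3))
    print("per-response log-lik, shift hypothesis:", round(ll_shift / 7, 3))
    print("flag as probable shift error:", ll_shift / 7 - ll_asis / 8 > 0.1)

The detection signature is that the recorded pattern looks aberrant (missed easy item, unexpected successes on harder items), while the shifted alignment explains the same responses much more naturally.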
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Καρύμπαλη, Ειρήνη. „Αποδοτικές τεχνικές αντιστοίχισης και ψηφιακής υδατογράφησης εικόνων“. 2005. http://nemertes.lis.upatras.gr/jspui/handle/10889/270.

Der volle Inhalt der Quelle
Annotation:
Η αντιστοίχιση εικόνων έχει σαν σκοπό την εύρεση γεωμετρικών και άλλων διαφορών ανάμεσα σε δύο ή περισσότερες εικόνες. Η ψηφιακή υδατογράφηση εικόνων προσφέρει κατοχύρωση των πνευματικών δικαιωμάτων, εισάγοντας στις εικόνες ένα αδιόρατο σήμα, ένα υδατογράφημα, με τέτοιο τρόπο ώστε να είναι δύσκολο να αφαιρεθεί. Η αντιστοίχιση μπορεί να αποτελέσει τμήμα της ψηφιακής υδατογράφησης, στη φάση της ανίχνευσης του υδατογραφήματος. Επιπλέον, για την ανίχνευση του υδατογραφήματος χρησιμοποιούνται παρόμοιες ή και ίδιες μετρικές ομοιότητας με αυτές που χρησιμοποιούνται στην αντιστοίχιση. Έτσι, οποιαδήποτε βελτίωση αφορά την αντιστοίχιση ή τις μετρικές ομοιότητας μπορεί να έχει θετικές επιδράσεις και στην ψηφιακή υδατογράφηση. Η έρευνα που έγινε στα πλαίσια της διδακτορικής διατριβής σε σχέση με το πρόβλημα της αντιστοίχισης αφορά τη συσχέτιση των εικόνων στο χωρικό πεδίο, η οποία έχει το εξής μειονέκτημα: η περιοχή γύρω από τη μέγιστη τιμή της μπορεί να έχει μεγάλο εύρος και να επηρεάζει την ακρίβεια της αντιστοίχισης. Για την αντιμετώπιση αυτού του προβλήματος, προτείνεται μια διαδικασία προ-λεύκανσης των εικόνων, βασισμένη στο φίλτρο σφάλματος πρόβλεψης. Επίσης, αναπτύσσεται ένας επαναληπτικός αλγόριθμος αντιστοίχισης για μετατοπίσεις και περιστροφές, ο οποίος εφαρμόζεται σε ακολουθίες ιατρικών εικόνων με σκοπό τη διάγνωση δυσπλασιών και κακοηθειών. Ένα δεύτερο μειονέκτημα της χωρικής συσχέτισης είναι το μεγάλο υπολογιστικό της κόστος. Στη διδακτορική διατριβή προτείνεται ένα γρήγορο σχήμα υπολογισμού της, το οποίο βασίζεται σε κατάλληλη τμηματοποίηση της εικόνας και στη χρήση του μετασχηματισμού Fourier. Επίσης, το πιο απαιτητικό κομμάτι της διαδικασίας αντιστοίχισης είναι ο υπολογισμός της χρησιμοποιούμενης μετρικής σαν συνάρτηση της σχετικής θέσης των εικόνων. Έτσι, αναπτύσσεται ένας αποδοτικός επαναληπτικός αλγόριθμος, ο οποίος μειώνει σημαντικά τις αναζητήσεις που απαιτούνται για την εύρεση του μεγίστου του συντελεστή συσχέτισης και παρέχει ακρίβεια εικονοστοιχείου. Τέλος, προτείνεται μια τεχνική η οποία παρέχει ακρίβεια υποδιαίρεσης εικονοστοιχείου και βασίζεται στη μεγιστοποίηση του συντελεστή συσχέτισης. Η τεχνική αυτή δεν απαιτεί ανακατασκευή των τιμών της έντασης και παρέχει μια λύση κλειστού τύπου για την εκτίμηση της μετατόπισης. Όσο αφορά το πρόβλημα της υδατογράφησης, η έρευνα που έγινε στα πλαίσια της διδακτορικής διατριβής στοχεύει στην ένθεση ισχυρών υδατογραφημάτων στο χωρικό πεδίο και στη βελτίωση της ανίχνευσής τους. Καταρχήν, προτείνεται μια χωρική αντιληπτική μάσκα, η οποία βασίζεται στην τοπική διασπορά του σφάλματος πρόβλεψης της αρχικής εικόνας. Παράλληλα, αναπτύσσεται ένα «τυφλό» σύστημα ανίχνευσης και η βελτιωμένη απόδοσή του σε σχέση με υπάρχοντες ανιχνευτές αποδεικνύεται θεωρητικά για τη γενική περίπτωση επίθεσης με γραμμικό φίλτρο και θόρυβο. Στη συνέχεια, παράγεται μια νέα χωρική μάσκα η οποία επιτρέπει την ένθεση υδατογραφημάτων με εξαιρετικά μεγάλη ενέργεια, διατηρώντας ταυτόχρονα την ποιότητα της εικόνας σε πολύ καλό επίπεδο. Η απόδοσή της συγκρίνεται με πολύ γνωστές και ευρέως χρησιμοποιούμενες μάσκες και αποδεικνύεται σημαντικά καλύτερη. Επίσης, αναπτύσσεται ένα βελτιωμένο σχήμα ανίχνευσης, το οποίο σε συνδυασμό με την προτεινόμενη μάσκα έχει πολύ καλή απόδοση. Τέλος, προτείνεται μια μέθοδος εισαγωγής υδατογραφήματος στην εικόνα με πολλαπλασιαστικό τρόπο, χρησιμοποιώντας χωρο-χρονική κωδικοποίηση μπλοκ και ειδικότερα μια 4x4 πραγματική, ορθογώνια διάταξη συμβόλων. Το σχήμα αυτό αποδεικνύεται να έχει πολύ καλύτερη απόδοση σε σχέση με την επαναληπτική υδατογράφηση.
Image registration aims at finding geometrical or other differences between two or more images. Image watermarking offers copyright protection by embedding in the images an invisible signal, a watermark, in such a way that it is difficult to be removed. Image registration can be part of a watermark detector. Moreover, similar (or the same) similarity measures are used for both image registration and watermark detection. Thus, any improvement concerning the image registration or the similarity measures can have positive effects on image watermarking, too. Our research concerning the image registration problem deals with the spatial cross-correlation, which has the following drawback: the region around its maximum value can be rather wide, affecting the registration accuracy. This problem can be solved, by properly pre-whitening the images with the prediction error filter. Furthermore, an iterative algorithm is proposed for registering images with translation and rotation differences, which is then applied in sequences of medical images for cancer diagnosis. A second disadvantage of the spatial correlation is its computational cost. A fast computation scheme is proposed, based on a proper partitioning of the images and the Fourier transform. Also, the most computationally intensive part of a registration process is the evaluation of the involved measure for different relative image positions. Thus, an efficient iterative algorithm is developed that considerably reduces the number of searches required for finding the correlation coefficient maximum value and provides pixel accuracy. Finally, an image registration technique with subpixel accuracy is proposed, which is based on the correlation coefficient maximization. This technique does not require the reconstruction of the intensity values and provides a closed form solution to the subpixel translation estimation problem. As far as the problem of image watermarking is concerned, our research aims at embedding robust watermarks in spatial domain and improving their detection. First, a spatial perceptual mask is proposed, based on the local variance of the initial image prediction error. A blind detector is also developed, which performs better than the existing ones. This is theoretically proved for the general attack case with linear filter and noise. Furthermore, a new spatial perceptual mask is proposed that allows for a significantly increased strength of the watermark, while at the same time the image quality remains very good. Its performance is compared to known and widely used masks and is proved to be much better. Moreover, an improved detector is developed, which, combined with the new mask, performs very well. Finally, a new multiplicative watermark embedding is proposed, which uses space-time block coding (specifically a 4x4 real orthogonal design). This scheme is proved to perform much better than the repetitive watermarking.
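As a small illustration of correlation-based translation estimation (one building block of such registration schemes), the sketch below locates the cross-correlation peak computed with FFTs; the prediction-error pre-whitening, rotation handling, and closed-form subpixel step from the thesis are not reproduced.

    import numpy as np

    rng = np.random.default_rng(2)
    ref = rng.normal(size=(64, 64))
    shifted = np.roll(ref, shift=(5, -3), axis=(0, 1))     # circular shift of (5, -3)

    # zero-mean the images so the correlation peak is sharper
    f = ref - ref.mean()
    g = shifted - shifted.mean()

    # circular cross-correlation via the FFT: corr = IFFT( conj(F{f}) * F{g} )
    corr = np.fft.ifft2(np.conj(np.fft.fft2(f)) * np.fft.fft2(g)).real

    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map indices above N/2 back to negative shifts
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    print("estimated shift:", (dy, dx))                     # expected (5, -3)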
APA, Harvard, Vancouver, ISO und andere Zitierweisen