To view the other types of publications on this topic, follow this link: Error detection algorithms.

Journal articles on the topic "Error detection algorithms"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Browse the top 50 journal articles for your research on the topic "Error detection algorithms."

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read an online abstract of the work, whenever the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Liu, Qing Min, Xue Li, and L. Zhang. "Realization Method for Detection on Arc Based on CCD". Applied Mechanics and Materials 687-691 (November 2014): 856–60. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.856.

Abstract:
Arc detection is difficult in the processing, assembly, and testing stages of industrial production because of limitations in detection methods, algorithms, and instruments. The least-squares algorithm is used to fit data in circle detection, but the application of the conventional least-squares algorithm is limited: when the roundness error is larger, precision is lower. To detect arcs from non-uniformly distributed data points, an improved least-squares algorithm was proposed, and an analysis algorithm for assessing the minimum-zone roundness error was developed. Center and radius can be solved for without iteration or truncation error. The different roundness-error evaluation methods were verified using discrete data instances, and visual measurements were carried out using the proposed methods. Results were calculated using the four roundness-error evaluation methods (Figures 7-10). Ball diameter errors are -0.0245 mm, 0.0176 mm, -0.1052 mm and 0.302 mm; roundness errors are 0.07 mm, 0.063 mm, 0.078 mm and 0.146 mm. The improved least-squares algorithm and the minimum-zone algorithm are suitable for all kinds of data distributions and are particularly well suited to machine-vision inspection systems: fast, precise, and widely applicable.
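The conventional least-squares circle fit that the abstract builds on can be sketched in a few lines. This is the generic Kåsa algebraic fit, an assumption on my part (the paper's improved variant is not reproduced here): it reduces to one small linear system, so no iteration is needed.

```python
import math

def fit_circle_kasa(points):
    """Kasa least-squares circle fit: minimize the algebraic error of
    x^2 + y^2 + D*x + E*y + F = 0 over all points (no iteration needed)."""
    # Accumulate the 3x3 normal equations M u = v for u = (D, E, F).
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        t = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * t
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    u = [0.0] * 3
    for r in (2, 1, 0):  # back-substitution
        u[r] = (v[r] - sum(M[r][c] * u[c] for c in range(r + 1, 3))) / M[r][r]
    D, E, F = u
    cx, cy = -D / 2.0, -E / 2.0
    radius = math.sqrt(cx * cx + cy * cy - F)
    return (cx, cy), radius
```

On exact circle data the fit recovers center and radius directly, which is why the abstract can claim "no iteration and no truncation error" for this family of methods.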
2

Abdulsattar, Ruaa Alaadeen, and Nada Hussein M. Ali. "Lookup Table Algorithm for Error Correction in Color Images". JOIV : International Journal on Informatics Visualization 2, no. 2 (March 3, 2018): 63. http://dx.doi.org/10.30630/joiv.2.2.113.

Abstract:
Error correction and error detection techniques are often used in wireless transmission systems. A color image of type BMP is considered as an application of the developed lookup-table algorithms to detect and correct errors in these images. Decimal Matrix Code (DMC) and Hamming code (HC) techniques were integrated to compose a Hybrid Matrix Code (HMC) to maximize error detection and correction. The results obtained from HMC still leave some errors uncorrected, because the redundant bits that Hamming codes add to the data are inadequate and suitable only when the error rate is low. Moreover, a Hamming code cannot detect long burst errors, and its check values sometimes coincide, so errors go undetected and the error ratio increases. The proposed LUT_CORR algorithm detects and corrects errors in color images over noisy channels; it relies on the parallel Cyclic Redundancy Code (CRC) method, which is based on two algorithms: the Sarwate and Slicing-by-N algorithms. LUT_CORR and the aforementioned algorithms were merged to correct errors in color images, and the output corrects the corrupted images at an almost 100% ratio. This high correction ratio is due to some unique values that the LUT_CORR algorithm has. HMC and the proposed algorithm were applied to different BMP images, and the results obtained from LUT_CORR were compared to HMC on both Mean Square Error (MSE) and correction ratio. The proposed algorithm shows good performance and a high correction ratio in retrieving the source BMP image.
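For readers unfamiliar with the Hamming-code component of HMC, a minimal Hamming(7,4) encoder/decoder shows the single-error-correction behaviour the abstract refers to (and, implicitly, why longer bursts defeat it). This is the textbook code, not the paper's hybrid scheme.

```python
def hamming74_encode(data):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword with
    parity bits at positions 1, 2 and 4 (1-based)."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Recompute the parities; the syndrome is the 1-based position of a
    single flipped bit (0 means no error). Corrects it, then extracts data."""
    p1, p2, d1, p3, d2, d3, d4 = code
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        code = code[:]
        code[syndrome - 1] ^= 1   # correct the located bit
    return [code[2], code[4], code[5], code[6]]
```

Any single-bit error in the 7-bit word is corrected; two or more errors in one codeword (a short burst) mislead the syndrome, which is the weakness the paper works around.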
3

Morisette, Jeffrey T., Louis Giglio, Ivan Csiszar, Alberto Setzer, Wilfrid Schroeder, Douglas Morton, and Christopher O. Justice. "Validation of MODIS Active Fire Detection Products Derived from Two Algorithms". Earth Interactions 9, no. 9 (July 1, 2005): 1–25. http://dx.doi.org/10.1175/ei141.1.

Abstract:
Fire influences global change and tropical ecosystems through its connection to land-cover dynamics, atmospheric composition, and the global carbon cycle. As such, the climate change community, the Brazilian government, and the Large-Scale Biosphere–Atmosphere (LBA) Experiment in Amazonia are interested in the use of satellites to monitor and quantify fire occurrence throughout Brazil. Because multiple satellites and algorithms are being utilized, it is important to quantify the accuracy of the derived products. In this paper the characteristics of two fire detection algorithms are evaluated, both of which are applied to Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) data and both of which operationally produce publicly available fire locations. The two algorithms are NASA's operational Earth Observing System (EOS) MODIS fire detection product and Brazil's Instituto Nacional de Pesquisas Espaciais (INPE) algorithm. Both algorithms are compared to fire maps derived independently from 30-m spatial resolution Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imagery. A quantitative comparison is accomplished through logistic regression and error matrices. Results show that the likelihood of MODIS fire detection, for either algorithm, is a function of both the number of ASTER fire pixels within the MODIS pixel and the contiguity of those pixels. Both algorithms have similar omission errors, and each has a fairly high likelihood of detecting relatively small fires, as observed in the ASTER data. However, INPE's commission error is roughly three times that of the EOS algorithm.
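The omission/commission terminology from the error-matrix comparison can be made concrete with a tiny per-pixel sketch (hypothetical pixel vectors, not the paper's data): omission is the fraction of reference fires missed, commission the fraction of detections with no reference fire.

```python
def omission_commission(reference, detected):
    """Per-pixel error matrix for a binary detector.
    reference, detected: sequences of 0/1 flags per pixel."""
    tp = sum(1 for r, d in zip(reference, detected) if r and d)
    fn = sum(1 for r, d in zip(reference, detected) if r and not d)
    fp = sum(1 for r, d in zip(reference, detected) if not r and d)
    omission = fn / (tp + fn) if tp + fn else 0.0    # missed real fires
    commission = fp / (tp + fp) if tp + fp else 0.0  # false detections
    return omission, commission
```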
4

Nguyen, G. D. "Error-detection codes: algorithms and fast implementation". IEEE Transactions on Computers 54, no. 1 (January 2005): 1–11. http://dx.doi.org/10.1109/tc.2005.7.
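This paper concerns fast software implementations of error-detection codes; as a hedged illustration of the general idea, here is a bit-at-a-time CRC-8 next to a byte-at-a-time table-driven version (the classic Sarwate approach). The polynomial 0x07 is a common choice for illustration, not necessarily one treated in the paper.

```python
def crc8_bitwise(data, poly=0x07):
    """Bit-at-a-time CRC-8 (init 0x00, no reflection, no final XOR)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def make_table(poly=0x07):
    """Precompute the 256-entry lookup table (Sarwate's method)."""
    return [crc8_bitwise(bytes([b]), poly) for b in range(256)]

def crc8_table(data, table):
    """Byte-at-a-time CRC-8: one table lookup per input byte instead of
    eight shift/XOR steps, trading memory for speed."""
    crc = 0
    for byte in data:
        crc = table[crc ^ byte]
    return crc
```

Both routines compute the same checksum; the table-driven version is the standard fast software form, and any single-bit error changes the CRC and is therefore detected.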

5

Toghuj, Wael, and Ghazi I. Alkhatib. "Improved Algorithm for Error Correction". International Journal of Information Technology and Web Engineering 6, no. 1 (January 2011): 1–12. http://dx.doi.org/10.4018/jitwe.2011010101.

Abstract:
Digital communication systems are an important part of modern society, and they rely on computers and networks to achieve critical tasks. Critical tasks require systems with a high level of reliability that can provide continuous correct operation. This paper presents a new algorithm for data encoding and decoding using a two-dimensional code that can be implemented in digital communication systems, electronic memories (DRAMs and SRAMs), and web engineering. The developed algorithms correct three errors in a codeword and detect four, reaching an acceptable performance level. The program based on these algorithms enables the modeling of error detection and correction processes, optimizes the redundancy of the code, monitors the decoding procedures, and measures the speed of execution. The derived code improves error detection and correction over the classical code, with less complexity. Several extensible applications of the algorithms are also given.
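The paper's two-dimensional code is not reproduced here; as a much simpler relative, two-dimensional parity illustrates how arranging bits in a grid lets a single error be located at the intersection of a failing row and a failing column (this sketch corrects one error, unlike the paper's three).

```python
def encode_2d_parity(bits, rows, cols):
    """Arrange data bits in a rows x cols grid and compute one even-parity
    bit per row and per column."""
    grid = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    row_par = [sum(row) % 2 for row in grid]
    col_par = [sum(grid[r][c] for r in range(rows)) % 2 for c in range(cols)]
    return grid, row_par, col_par

def correct_single_error(grid, row_par, col_par):
    """Recompute parities; a single flipped bit sits at the intersection of
    the failing row and failing column. Returns (grid, error_detected)."""
    bad_rows = [r for r, row in enumerate(grid) if sum(row) % 2 != row_par[r]]
    bad_cols = [c for c in range(len(grid[0]))
                if sum(grid[r][c] for r in range(len(grid))) % 2 != col_par[c]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        grid[bad_rows[0]][bad_cols[0]] ^= 1   # correct it in place
    return grid, bool(bad_rows or bad_cols)
```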
6

Kari, Lila, Stavros Konstantinidis, Steffen Kopecki, and Meng Yang. "Efficient Algorithms for Computing the Inner Edit Distance of a Regular Language via Transducers". Algorithms 11, no. 11 (October 23, 2018): 165. http://dx.doi.org/10.3390/a11110165.

Abstract:
The concept of edit distance and its variants has applications in many areas such as computational linguistics, bioinformatics, and synchronization error detection in data communications. Here, we revisit the problem of computing the inner edit distance of a regular language given via a Nondeterministic Finite Automaton (NFA). This problem relates to the inherent maximal error-detecting capability of the language in question. We present two efficient algorithms for solving this problem, both of which execute in time O(r²n²d), where r is the cardinality of the alphabet involved, n is the number of transitions in the given NFA, and d is the computed edit distance. We have implemented one of the two algorithms and present here a set of performance tests. The correctness of the algorithms is based on the connection between word distances and error detection and the fact that nondeterministic transducers can be used to represent the errors (resp., edit operations) involved in error detection (resp., in word distances).
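The word-to-word edit distance underlying the inner-distance computation is the classic dynamic program (the paper's transducer-based algorithms for whole languages are not reproduced here):

```python
def edit_distance(a, b):
    """Levenshtein distance with unit-cost insert/delete/substitute:
    O(len(a) * len(b)) time, O(len(b)) space."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]
```

The inner edit distance of a language is, informally, the smallest such distance between two distinct words of the language, which bounds how many channel errors the language can inherently detect.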
7

Zhu, Yongkuan, Gurjot Singh Gaba, Fahad M. Almansour, Roobaea Alroobaea, and Mehedi Masud. "Application of data mining technology in detecting network intrusion and security maintenance". Journal of Intelligent Systems 30, no. 1 (January 1, 2021): 664–76. http://dx.doi.org/10.1515/jisys-2020-0146.

Abstract:
In order to correct the deficiencies of intrusion detection technology, the entire computer and network security system needs to be made more complete. This work proposes an improved k-means algorithm and an improved Apriori algorithm, applied in data mining technology, to detect network intrusions and maintain security. The classical KDDCUP99 dataset is used for the experiments with the improved algorithms. The algorithms' detection rate and false alarm rate are compared with the experimental data before the improvement. The outcomes of the proposed algorithms are analyzed in terms of various simulation parameters such as average time, false alarm rate, absolute error, and accuracy. The results show that the improved algorithms advance detection efficiency and accuracy using the designed detection model. The improved and tested detection model is then applied to a new intrusion detection system. The intrusion detection experiments show that the proposed system improves detection accuracy and reduces the false alarm rate. A significant improvement of 90.57% is achieved in detecting new attack types using the proposed algorithm.
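For reference, plain Lloyd's k-means, the baseline that the paper improves upon, looks like this (a sketch with deterministic farthest-first seeding; neither the paper's improved k-means nor its Apriori component is shown):

```python
def kmeans(points, k, iters=25):
    """Plain Lloyd's k-means on 2D points with farthest-first seeding."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    centers = [points[0]]
    while len(centers) < k:  # farthest-first initialization (deterministic)
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:     # assignment step: nearest center
            clusters[min(range(k), key=lambda i: d2(p, centers[i]))].append(p)
        for i, cl in enumerate(clusters):  # update step: cluster means
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers
```

In intrusion detection, clusters far from normal-traffic centroids (or points far from every center) are flagged as candidate anomalies.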
8

Song, Yan, Haixu Tang, Haoyu Zhang, and Qin Zhang. "Overlap detection on long, error-prone sequencing reads via smooth q-gram". Bioinformatics 36, no. 19 (April 20, 2020): 4838–45. http://dx.doi.org/10.1093/bioinformatics/btaa252.

Abstract:
Motivation: Third-generation sequencing techniques, such as the Single Molecule Real Time technique from PacBio and the MinION technique from Oxford Nanopore, can generate long, error-prone sequencing reads which pose new challenges for fragment assembly algorithms. In this paper, we study the overlap detection problem for error-prone reads, which is the first and most critical step in de novo fragment assembly. We observe that all the state-of-the-art methods cannot achieve ideal accuracy for overlap detection (in terms of relatively low precision and recall) due to the high sequencing error rates, especially when the overlap lengths between reads are relatively short (e.g. <2000 bases). This limitation appears inherent to these algorithms due to their usage of q-gram-based seeds under the seed-extension framework. Results: We propose smooth q-gram, a variant of q-gram that captures q-gram pairs within small edit distances, and design a novel algorithm for detecting overlapping reads using smooth q-gram-based seeds. We implemented the algorithm and tested it on both PacBio and Nanopore sequencing datasets. Our benchmarking results demonstrate that our algorithm outperforms the existing q-gram-based overlap detection algorithms, especially for reads with relatively short overlapping lengths. Availability and implementation: The source code of our implementation in C++ is available at https://github.com/FIGOGO/smoothq. Supplementary information: Supplementary data are available at Bioinformatics online.
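Exact q-gram seeding, the baseline the paper improves on, can be illustrated with a toy similarity measure (smooth q-grams additionally tolerate small edit errors inside each seed; that refinement is not shown here):

```python
def qgrams(s, q):
    """Set of all length-q substrings of s."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def qgram_jaccard(a, b, q=4):
    """Jaccard similarity of the q-gram sets of two reads; shared exact
    q-grams are the seeds classical overlap detectors extend from."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0
```

Under high error rates many exact q-grams in a true overlap are destroyed, which is precisely the failure mode that motivates the smooth q-gram variant.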
9

Chen, Xin W., and Shimon Y. Nof. "Error Detection and Prediction Algorithms: Application in Robotics". Journal of Intelligent and Robotic Systems 48, no. 2 (January 5, 2007): 225–52. http://dx.doi.org/10.1007/s10846-006-9094-9.

10

JUNG, YUNHO, SEONGJOO LEE, and JAESEOK KIM. "DESIGN AND IMPLEMENTATION OF SYMBOL DETECTOR FOR MIMO SPATIAL MULTIPLEXING SYSTEMS". Journal of Circuits, Systems and Computers 20, no. 04 (June 2011): 727–39. http://dx.doi.org/10.1142/s0218126611007578.

Abstract:
In this paper, we propose an efficient symbol detection algorithm for multiple-input multiple-output spatial multiplexing (MIMO-SM) systems and present its design and implementation results. By enhancing the error performance of the first detected symbol, which causes error propagation, the proposed algorithm achieves a considerable performance gain compared with the conventional sorted QR decomposition (SQRD) based detection and ordered successive detection (OSD) algorithms. The bit error rate (BER) performance of the proposed detection algorithm is evaluated by simulation. For a 16QAM MIMO-SM system with 4 transmit and 4 receive (4 × 4) antennas, at BER = 10⁻³ the proposed algorithm yields a gain of about 2.5–13.5 dB over the previous algorithms. The proposed detection algorithm was designed in a hardware description language (HDL) and synthesized to gate-level circuits using a 0.18 μm 1.8 V CMOS standard cell library. The results show that the proposed algorithm can be implemented without significantly increasing hardware costs.
11

Wang, Lei, Bao Yu Zheng, and Jing Wu Cui. "New Recursive Algorithms for Signal Detection". Advanced Materials Research 271-273 (July 2011): 1059–62. http://dx.doi.org/10.4028/www.scientific.net/amr.271-273.1059.

Abstract:
Based on the synthesis and analysis of recursive receivers, new algorithms are proposed to achieve satisfactory performance with moderate computational complexity. During the analysis, some interesting properties shared by the proposed procedures are described. Finally, the performance assessment shows that new schemes are superior to the linear detector and ordinary grouping algorithm, and achieve a bit-error rate close to that of the optimum receiver.
12

Nagaraj, Nithin. "Using cantor sets for error detection". PeerJ Computer Science 5 (January 14, 2019): e171. http://dx.doi.org/10.7717/peerj-cs.171.

Abstract:
Error detection is a fundamental need in most computer networks and communication systems in order to combat the effect of noise. Error detection techniques have also been incorporated with lossless data compression algorithms for transmission across communication networks. In this paper, we propose to incorporate a novel error detection scheme into a Shannon-optimal lossless data compression algorithm known as Generalized Luröth Series (GLS) coding. GLS-coding is a generalization of the popular Arithmetic Coding, which is an integral part of the JPEG2000 standard for still image compression. GLS-coding encodes the input message as a symbolic sequence on an appropriate 1D chaotic map (GLS), and the compressed file is obtained as the initial value by iterating backwards on the map. However, in the presence of noise, even small errors in the compressed file lead to catastrophic decoding errors owing to sensitive dependence on initial values, the hallmark of deterministic chaos. In this paper, we first show that repetition codes, the oldest and most basic error correction and detection codes in the literature, actually lie on a Cantor set with a fractal dimension of 1/n, which is also the rate of the code. Inspired by this, we incorporate error detection capability into GLS-coding by ensuring that the compressed file (the initial value on the chaotic map) lies on a Cantor set. Even a 1-bit error in the initial value will throw it outside the Cantor set, which can be detected while decoding. The rate of the code can be adjusted via the fractal dimension of the Cantor set, thereby controlling the error detection performance.
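The repetition-code observation can be illustrated directly: a received word is valid only if every length-n block is constant, so any single-bit error pushes the word off the set of valid codewords and is detected. This sketch shows that detection principle, not the GLS-coding scheme itself.

```python
def rep_encode(bits, n=3):
    """n-fold repetition code: each bit is sent n times (rate 1/n)."""
    return [b for b in bits for _ in range(n)]

def rep_decode(code, n=3):
    """Majority-decode each block; flag an error whenever a block is not
    constant, i.e. the received word lies off the set of valid codewords."""
    out, error_detected = [], False
    for i in range(0, len(code), n):
        block = code[i:i + n]
        if len(set(block)) > 1:
            error_detected = True
        out.append(1 if sum(block) * 2 > len(block) else 0)
    return out, error_detected
```

The paper's geometric point is that the valid codewords of this code occupy a Cantor-like subset of dimension 1/n of all n-bit-block words, mirroring how the code's rate controls its detection capability.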
13

BADIEZADEGAN, SHIRIN, and HAMID SOLTANIAN-ZADEH. "DESIGN AND EVALUATION OF MATCHED WAVELETS WITH MAXIMUM CODING GAIN AND MINIMUM APPROXIMATION ERROR CRITERIA FOR R PEAK DETECTION IN ECG". International Journal of Wavelets, Multiresolution and Information Processing 06, no. 06 (November 2008): 799–825. http://dx.doi.org/10.1142/s0219691308002690.

Abstract:
Recently, several wavelet-based algorithms have been proposed for feature extraction in non-stationary signals such as ECG. These methods, however, have mainly used general purpose (unmatched) wavelet bases such as Daubechies and Quadratic Spline. In this paper, five new matched wavelet bases, with minimum approximation error and maximum coding gain criteria, are designed and applied to ECG signal analysis. To study the effect of using different wavelet bases for this application, two different wavelet-based R peak detection algorithms are implemented: (1) a conventional wavelet-based method; and (2) a modified wavelet-based R peak detection algorithm. Both algorithms are evaluated using the MIT-BIH Arrhythmia database. Experimental results show lower computational complexity (up to 76%) of the proposed R peak detection method compared to the conventional method. They also show considerable decrease in the number of failed detections (up to 55%) for both the conventional and the proposed algorithms when using matched wavelets instead of Quadratic Spline wavelet which, according to the literature, has generated the best detection results among all conventional wavelet bases studied previously for ECG signal analysis.
14

Zhou, Qianqian, Tze Ping Loh, Tony Badrick, and Qianqian Zhou. "Impact of combining data from multiple instruments on performance of patient-based real-time quality control". Biochemia medica 31, no. 2 (June 15, 2021): 276–82. http://dx.doi.org/10.11613/bm.2021.020705.

Abstract:
It is unclear what the best strategy is for applying a patient-based real-time quality control (PBRTQC) algorithm in the presence of multiple instruments. This simulation study compared the error detection capability of applying PBRTQC algorithms to instruments individually and in combination, using serum sodium as an example. Four sets of random serum sodium measurements were generated with differing means and standard deviations to represent four simulated instruments. Moving median with winsorization was selected as the PBRTQC algorithm. The PBRTQC parameters (block size and control limits) were optimized and applied to the four simulated laboratory data sets individually and in combination. When the PBRTQC algorithm was individually optimized and applied to the data of the individual simulated instruments, it was able to detect bias several-fold faster than when the data were combined. Similarly, the individually applied algorithms had perfect error detection rates across different magnitudes of bias, whereas the algorithm applied to the combined data missed smaller biases. The individually applied PBRTQC algorithm also performed more consistently among the simulated instruments than the algorithm applied to the combined data. While combining data from different instruments can increase the data stream and hence the speed of error detection, it may widen the control limits and compromise the probability of error detection. The presence of multiple instruments in the data stream may dilute the effect of an error that affects only one instrument.
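A moving median with winsorization is straightforward to sketch. The block size, winsorization band, and control limits below are illustrative values for serum sodium (mmol/L), not the optimized parameters from the study.

```python
from statistics import median

def pbrtqc_flags(results, block=10, winsor=(130.0, 150.0), limits=(138.0, 142.0)):
    """Patient-based real-time QC: clip each result into the winsorization
    band, track the median of the last `block` results, and raise a flag
    whenever that moving median leaves the control limits."""
    lo, hi = winsor
    lim_lo, lim_hi = limits
    window, flags = [], []
    for r in results:
        window.append(min(max(r, lo), hi))   # winsorize extreme results
        if len(window) > block:
            window.pop(0)
        m = median(window) if len(window) == block else None
        flags.append(m is not None and not (lim_lo <= m <= lim_hi))
    return flags
```

Winsorization keeps single pathological results from dominating the statistic, while the moving median responds once a persistent instrument bias shifts the patient-result stream.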
15

Young, G. E., M. R. Moan, and E. A. Misawa. "Transient Error Detection During UDUT Covariance Calculations Using Algorithm Based Fault Tolerance". Journal of Dynamic Systems, Measurement, and Control 119, no. 2 (June 1, 1997): 284–86. http://dx.doi.org/10.1115/1.2801246.

Abstract:
Algorithm based fault tolerance is studied for use with linear estimation algorithms such as the Kalman filter. Emphasis is placed on the study of algorithm based fault tolerance used in conjunction with the UDUT covariance calculation. A real-time simulation has been performed in a VMEbus environment utilizing a single board computer and a commercial real-time operating system.
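The checksum idea behind algorithm-based fault tolerance can be shown on a plain matrix product (the paper applies it to the UDUT covariance update, which is not reproduced here): the row sums of C = A·B must equal A applied to the row-sum vector of B, so a transient error in any element of C breaks the identity.

```python
def matmul(A, B):
    """Plain dense matrix product of nested lists."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]

def abft_row_check(A, B, C, tol=1e-9):
    """Algorithm-based fault tolerance check for C = A*B: verify that each
    row sum of C matches A times the row-sum vector of B."""
    b = [sum(row) for row in B]
    for i in range(len(A)):
        expected = sum(A[i][j] * b[j] for j in range(len(b)))
        if abs(sum(C[i]) - expected) > tol:
            return False      # transient fault detected in row i
    return True
```

The appeal, as in the paper's real-time setting, is that the check costs O(n²) on top of an O(n³) computation, so fault detection is nearly free.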
16

Yan, Shi Hao, Jian Ping Xing, and De Qiang Wang. "Correlation between Estimation Error and Possible Region in Localization Algorithms". Advanced Materials Research 457-458 (January 2012): 1514–20. http://dx.doi.org/10.4028/www.scientific.net/amr.457-458.1514.

Abstract:
Many localization algorithms in wireless sensor networks use possible regions to increase the degree of localization precision. In this paper, we present the definite correlation between the estimation error and the possible region. The estimation error, which is the most important indicator for judging the performance of a localization algorithm, is proportional to the square root of the area of the possible region, and the factor of proportionality relates to the shape of the possible region. We also propose two applications of this definite correlation: estimation-error detection and energy conservation. The simulation results show that the definite correlation is suitable for all kinds of possible regions and that it is feasible to detect estimation errors and conserve energy when we fix reasonable areas of possible regions in wireless sensor networks.
17

Egiazarian, Karen, Pauli Kuosmanen, and Ciprian Bilcu. "Variable step-size LMS adaptive filters for CDMA multiuser detection". Facta universitatis - series: Electronics and Energetics 17, no. 1 (2004): 21–32. http://dx.doi.org/10.2298/fuee0401021e.

Abstract:
Due to its simplicity, the adaptive Least Mean Square (LMS) algorithm is widely used in Code-Division Multiple Access (CDMA) detectors. However, its convergence speed is highly dependent on the eigenvalue spread of the input covariance matrix. For highly correlated inputs the LMS algorithm converges slowly, which requires long training sequences and therefore yields low transmission speeds. Another drawback of the LMS is the trade-off between convergence speed and steady-state error, since both are controlled by the same parameter, the step size. To eliminate these drawbacks, the class of Variable Step-Size LMS (VSSLMS) algorithms was introduced. In this paper, we study the behavior of some algorithms belonging to the class of VSSLMS for training-based multiuser detection in a CDMA system. We show that the proposed Complementary Pair Variable Step-Size LMS algorithms greatly increase the speed of convergence while reducing the trade-off between convergence speed and output error.
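A generic variable step-size LMS can be sketched for channel identification, in the spirit of the Kwong-Johnston rule (step size grows with the squared error and decays as the error shrinks); this is not necessarily the paper's complementary-pair scheme, and all parameter values are illustrative.

```python
import random

def vss_lms_identify(taps, n_samples=4000, seed=1):
    """Identify an FIR channel with a variable step-size LMS filter.
    The step size mu is driven up by large errors (fast convergence)
    and decays as the error shrinks (low steady-state misadjustment)."""
    rng = random.Random(seed)
    L = len(taps)
    w = [0.0] * L                       # adaptive weights
    x = [0.0] * L                       # input delay line, newest first
    mu, mu_min, mu_max = 0.01, 1e-3, 0.1
    alpha, gamma = 0.97, 0.005          # illustrative adaptation constants
    for _ in range(n_samples):
        x = [1.0 if rng.random() < 0.5 else -1.0] + x[:-1]  # +/-1 training input
        d = sum(t * xi for t, xi in zip(taps, x))           # noiseless desired signal
        e = d - sum(wi * xi for wi, xi in zip(w, x))        # a priori error
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        mu = min(mu_max, max(mu_min, alpha * mu + gamma * e * e))
    return w
```

With a fixed step size one must choose between fast convergence and low residual error; letting mu track the error magnitude relaxes that trade-off, which is the point the abstract makes.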
18

Killian, Cédric, Camel Tanougast, Fabrice Monteiro, and Abbas Dandache. "A New Efficient and Reliable Dynamically Reconfigurable Network-on-Chip". Journal of Electrical and Computer Engineering 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/843239.

Abstract:
We present a new reliable Network-on-Chip (NoC) suitable for Dynamically Reconfigurable Multiprocessors on Chip systems. The proposed NoC is based on routers performing online detection of routing-algorithm and data-packet errors. Our work focuses on adaptive routing algorithms, which allow faulty components or processor elements dynamically implemented inside the network to be bypassed. The proposed routing-error detection mechanism distinguishes routing errors from bypasses of faulty components. The new router architecture is based on additional diagonal state indications and specific logic blocks allowing the reliable operation of the NoC. The main originality of the proposed NoC is that only the permanently faulty parts of the routers are disconnected. Therefore, our approach maintains a high runtime throughput in the NoC without data packet loss thanks to a self-loopback mechanism inside each router.
19

Lee, Seongjin, Wonteak Lim, and Myoungho Sunwoo. "Robust Parking Path Planning with Error-Adaptive Sampling under Perception Uncertainty". Sensors 20, no. 12 (June 23, 2020): 3560. http://dx.doi.org/10.3390/s20123560.

Abstract:
In automated parking systems, a path planner generates a path to reach the vacant parking space detected by a perception system. To generate a safe parking path, accurate detection performance is required. However, the perception system always involves uncertainty, such as detection errors due to sensor noise and imperfect algorithms. If the path planner generates the parking path under such uncertainty, the automated parking system may steer the vehicle into a collision. Generating a parking path from an erroneously detected parking space is therefore a challenging problem, and solving it requires estimating the perception uncertainty and adapting to the detection error in the planning process. This paper proposes a robust parking path planner that combines error-adaptive sampling, which generates possible path candidates, with a utility-based method for making an optimal decision under uncertainty. By integrating the sampling-based and utility-based methods, the proposed algorithm continuously generates an adaptable path that accounts for the detection errors. As a result, the proposed algorithm ensures that the vehicle is safely located at the true position and orientation of the parking space under perception uncertainty.
20

Guardabasso, V., G. De Nicolao, M. Rocchetti, and D. Rodbard. "Evaluation of pulse-detection algorithms by computer simulation of hormone secretion". American Journal of Physiology-Endocrinology and Metabolism 255, no. 6 (December 1, 1988): E775–E784. http://dx.doi.org/10.1152/ajpendo.1988.255.6.e775.

Abstract:
A versatile method is presented for generating synthetic hormonal time series, containing peaks at known locations, to be used to objectively evaluate both the false-negative (F-) and false-positive (F+) statistical error rates of computerized pulse-detection algorithms. Synthetic data are generated by assuming hormone secretion to occur as a succession of instantaneous release pulses, distributed as Poisson events, separated by quiescent intervals. The pulses are convolved to simulate cumulation of consecutive events and clearance of the hormone. Randomly generated errors, corresponding in magnitude to typical experimental measurement error, are then added to the convolved series. The choice of different values for simulation parameters (e.g., frequency and amplitude of pulses) allows one to emulate some typical physiological patterns of hormone secretion for luteinizing hormone, growth hormone, and thyrotropin or other hormones. Various subsets can be extracted from a simulated time series to study the effect of sampling frequency on the detection of pulses. We show that in sampled series the "observable frequency" of pulses is less than the true nominal frequency. Methods for evaluating pulse-detection algorithms and expressing the results are presented. Simulations of LH secretion were analyzed with the program DETECT. We show that minimizing F+ error rates only might lead to excessively high F- rates. A proper choice of sampling frequency and program probability levels can be made to provide acceptable F+ and F- error rates for various patterns of hormone secretion.
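The simulation recipe described above (Poisson-like pulse events, exponential clearance, additive measurement noise) can be sketched as follows; the parameter values are illustrative, not those used in the paper.

```python
import random

def simulate_series(n, pulse_prob, amplitude, half_life, noise_sd, seed=42):
    """Synthetic hormone series: instantaneous secretion pulses occur as
    Bernoulli events per sample (a discretized Poisson process), cumulate,
    decay exponentially (clearance), and are overlaid with measurement
    noise. Returns the series and the true pulse locations."""
    rng = random.Random(seed)
    decay = 0.5 ** (1.0 / half_life)      # per-sample decay from half-life
    level, series, pulses = 0.0, [], []
    for t in range(n):
        level *= decay                    # exponential clearance
        if rng.random() < pulse_prob:     # secretion event
            level += amplitude
            pulses.append(t)
        series.append(level + rng.gauss(0.0, noise_sd))
    return series, pulses
```

Because the true pulse locations are known, running a pulse-detection algorithm on such a series gives its false-negative and false-positive rates directly, which is the evaluation strategy the abstract describes.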
21

Pouliot, D. A., D. J. King, and D. G. Pitt. "Development and evaluation of an automated tree detection–delineation algorithm for monitoring regenerating coniferous forests". Canadian Journal of Forest Research 35, no. 10 (October 1, 2005): 2332–45. http://dx.doi.org/10.1139/x05-145.

Abstract:
An algorithm is presented for automated detection–delineation of coniferous tree regeneration that combines strategies of several existing algorithms, including image processing to isolate conifer crowns, optimal image scale determination, initial crown detection, and crown boundary segmentation and refinement. The algorithm is evaluated using 6-cm pixel airborne imagery in operational regeneration conditions typically encountered in the boreal forest 5–10 years after harvest. Detection omission and commission errors as well as an accuracy index combining both error types were assessed on a tree by tree basis, on an aggregated basis for each study area, in relation to tree size and the amount of woody competition present. Delineation error was assessed in a similar manner using field-measured crown diameters as a reference. The individual tree detection accuracy index improved with increasing tree size and was >70% for trees larger than 30 cm crown diameter. Crown diameter absolute error measured from automated delineations was <23%. Large crown diameters tended to be slightly underestimated. The presence of overtopping woody competition had a negligible effect on detection accuracy and only reduced estimates of crown diameter slightly.
22

Breitinger, Frank, Georgios Stivaktakis, and Vassil Roussev. "Evaluating detection error trade-offs for bytewise approximate matching algorithms". Digital Investigation 11, no. 2 (June 2014): 81–89. http://dx.doi.org/10.1016/j.diin.2014.05.002.

23

Dai, Bo, and Ming Lu Ma. "An Automatic Measurement for Pipeline Thickness Detection Using Ultrasonic Method". Applied Mechanics and Materials 229-231 (November 2012): 1427–36. http://dx.doi.org/10.4028/www.scientific.net/amm.229-231.1427.

Abstract:
The measurement of the wall thickness of pipelines is an important procedure in pipeline corrosion inspection. This procedure can be carried out automatically on a computer by processing data acquired from an ultrasound probe, forming a C-scan image, and running thickness detection algorithms. This paper presents in detail a comparison of three processing approaches: the FFT algorithm, the twice-FFT algorithm, and an improved twice-FFT algorithm. The final results show that the improved twice-FFT algorithm has the best precision of the three: higher accuracy than the FFT algorithm and less decision error than the twice-FFT algorithm. Using this method, defects in a pipeline can be identified and measured effectively with ultrasonic waves.
24

An, Yinan, Sifan Liu, and Hongzhi Wang. "Error Detection in a Large-Scale Lexical Taxonomy". Information 11, No. 2 (11 February 2020): 97. http://dx.doi.org/10.3390/info11020097.

Abstract:
Knowledge bases (KBs) are an important component of artificial intelligence. One significant challenge in KB construction is that it introduces considerable noise, which prevents effective use of the KB. Even though some KB cleansing algorithms have been proposed, they focus on the structure of the knowledge graph and neglect the relations between concepts, which could help discover wrong relations in the KB. Motivated by this, we measure the relation between two concepts by the distance between their corresponding instances and detect errors within the intersection of conflicting concept sets. For efficient and effective knowledge base cleansing, we first apply a distance-based model to determine the conflicting concept sets using two different methods. Then, we propose and analyze several algorithms for detecting and repairing the errors based on our model, using a hash method to calculate distances efficiently. Experimental results demonstrate that the proposed approaches cleanse knowledge bases efficiently and effectively.
25

Liu, Chang, Sara Shirowzhan, Samad M. E. Sepasgozar, and Ali Kaboli. "Evaluation of Classical Operators and Fuzzy Logic Algorithms for Edge Detection of Panels at Exterior Cladding of Buildings". Buildings 9, No. 2 (6 February 2019): 40. http://dx.doi.org/10.3390/buildings9020040.

Abstract:
The automated process of construction defect detection using non-contact methods provides vital information for quality control and updating building information modelling. The external cladding in modular construction should be regularly controlled in terms of the quality of panels and proper installation because its appearance is very important for clients. However, there are limited computational methods for examining the installation issues of external cladding remotely in an automated manner. These issues could be the incorrect sitting of a panel, unequal joints in an elevation, scratches or cracks on the face of a panel or dimensions of different elements of external cladding. This paper aims to present seven algorithms to detect panel edges and statistically compare their performance through application on two scenarios of buildings in construction sites. Two different scenarios are selected, where the building façades are available to the public, and a sample of 100 images is taken using a state-of-the-art 3D camera for edge detection analysis. The experimentation results are validated by using a series of computational error and accuracy analyses and statistical methods including Mean Square Error, Peak Signal to Noise Ratio and Structural Similarity Index. The performance of an image processing algorithm depends on the quality of images and the algorithm utilised. The results show better performance of the fuzzy logic algorithm because it detects clear edges for installed panels. The applications of classical operators including Sobel, Canny, LoG, Prewitt and Roberts algorithms give similar results and show similarities in terms of the average of errors and accuracy. 
In addition, the results show that the minor differences in the average error and accuracy indices for the Sobel, Canny, LoG, Prewitt and Roberts methods between the two scenarios are not statistically significant, while the differences in the average error and accuracy indices for the RGB-Sobel and Fuzzy methods between the two scenarios are statistically significant. The accuracy of the algorithms can be improved by removing unwanted items such as vegetation and clouds in the sky. The evaluated algorithms assist practitioners in analysing the images they collect day to day from construction sites and in updating building information modelling and the project's digital drawings. Future work may focus on combining the evaluated algorithms using new data sets, including colour edge detection for automatic defect identification using RGB and 360-degree images.
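As a rough illustration of the classical-operator side of this comparison, here is a minimal Sobel gradient-magnitude detector together with the PSNR metric used in the evaluation (the fuzzy-logic variant and the real façade images are not reproduced; the step-edge test image is an assumption):

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via the 3x3 Sobel operator (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(3):                      # correlate with both kernels
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# A vertical step edge: Sobel responds only along the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 255
edges = sobel_edges(img)
```

MSE and SSIM comparisons from the paper follow the same pattern: compute a quality index between each operator's edge map and a reference, then test the per-scenario differences for significance.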
26

Kim, Bongseok, Youngseok Jin, Youngdoo Choi, Jonghun Lee, and Sangdong Kim. "Low-Complexity Super-Resolution Detection for Range-Vital Doppler Estimation FMCW Radar". Journal of Electromagnetic Engineering and Science 21, No. 3 (31 July 2021): 236–45. http://dx.doi.org/10.26866/jees.2021.3.r.31.

Abstract:
This paper proposes low-complexity super-resolution detection for range-vital Doppler estimation frequency-modulated continuous wave (FMCW) radar. In regards to vital radar, and in order to estimate joint range and vital Doppler information such as the human heartbeat and respiration, two-dimensional (2D) detection algorithms such as 2D-FFT (fast Fourier transform) and 2D-MUSIC (multiple signal classification) are required. However, due to the high complexity of 2D full-search algorithms, it is difficult to apply this process to low-cost vital FMCW systems. In this paper, we propose a method to estimate the range and vital Doppler parameters by using 1D-FFT and 1D-MUSIC algorithms, respectively. Among 1D-FFT outputs for range detection, we extract 1D-FFT results based solely on human target information with phase variation of respiration for each chirp; subsequently, the 1D-MUSIC algorithm is employed to obtain accurate vital Doppler results. By reducing the dimensions of the estimation algorithm from 2D to 1D, the computational burden is reduced. In order to verify the performance of the proposed algorithm, we compare the Monte Carlo simulation and root-mean-square error results. The simulation and experiment results show that the complexity of the proposed algorithm is significantly lower than that of an algorithm detecting signals in several regions.
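The two-stage idea, a 1D range FFT per chirp followed by vital-Doppler estimation on the slow-time phase of the selected range bin, can be sketched with synthetic data. For brevity this sketch uses a second FFT where the paper uses 1D-MUSIC, and every radar parameter below is invented:

```python
import numpy as np

# Simulated FMCW data cube: 64 chirps (slow time) x 128 fast-time samples.
# A target sits in range bin 20; chest motion modulates the phase across
# chirps at the respiration rate f_resp.
n_chirps, n_samples = 64, 128
prf = 20.0                      # chirps per second (slow-time sample rate)
f_resp = 0.3125                 # Hz (~19 breaths/min; chosen to land on an FFT bin)
m = np.arange(n_chirps)[:, None]
n = np.arange(n_samples)[None, :]
resp_phase = 0.5 * np.sin(2 * np.pi * f_resp * m / prf)
cube = np.exp(1j * (2 * np.pi * 20 * n / n_samples + resp_phase))

# Stage 1: 1D range FFT per chirp; pick the strongest range bin.
range_fft = np.fft.fft(cube, axis=1)
rbin = int(np.argmax(np.abs(range_fft).sum(axis=0)))

# Stage 2: slow-time phase at that bin -> spectrum -> respiration rate
# (the paper applies 1D-MUSIC here instead of this second FFT).
slow = np.unwrap(np.angle(range_fft[:, rbin]))
spec = np.abs(np.fft.rfft(slow - slow.mean()))
f_axis = np.fft.rfftfreq(n_chirps, d=1.0 / prf)
f_est = float(f_axis[int(np.argmax(spec[1:])) + 1])
```

Restricting the second stage to the single selected range bin is what collapses the 2D full search into two 1D problems and yields the complexity reduction claimed above.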
27

Sun, Guangpei, Peng Jiang, Huan Xu, Shanen Yu, Dong Guo, Guang Lin, and Hui Wu. "Outlier Detection and Correction for Monitoring Data of Water Quality Based on Improved VMD and LSSVM". Complexity 2019 (3 February 2019): 1–12. http://dx.doi.org/10.1155/2019/9643921.

Abstract:
To improve the detection rate and reduce the correction error for abnormal water quality data, an outlier detection and correction method is proposed based on improved Variational Mode Decomposition (VMD) and Least Squares Support Vector Machine (LSSVM) algorithms. A correlation coefficient is introduced to solve for the optimal parameter k of the VMD algorithm, yielding an improved VMD algorithm. Combined with the LSSVM algorithm, outliers in water quality data can be detected and repaired. The method is applied to the detection and correction of outliers in dissolved oxygen data retrieved from a water quality monitoring station in Hangzhou, Zhejiang Province, China. The results show that the improved VMD algorithm achieves a higher detection rate and lower error rate than Empirical Mode Decomposition (EMD) and Ensemble Empirical Mode Decomposition (EEMD). The LSSVM algorithm increases fitting accuracy and decreases correction error in comparison with SVM and a BP neural network, providing important references for the implementation of environmental protection measures.
28

WANG, LUSHENG, and LIANG DONG. "RANDOMIZED ALGORITHMS FOR MOTIF DETECTION". Journal of Bioinformatics and Computational Biology 03, No. 05 (October 2005): 1039–52. http://dx.doi.org/10.1142/s0219720005001508.

Abstract:
Motivation: Motif detection for DNA sequences has many important applications in biological studies, e.g. locating binding sites of regulatory signals and designing genetic probes. In this paper, we propose a randomized algorithm, design an improved EM algorithm, and combine them into a software tool. Results: (1) We design a randomized algorithm for the consensus pattern problem. We can show that, with high probability, our randomized algorithm finds a pattern in polynomial time with cost error at most ∊ × l for each string, where l is the length of the motif and ∊ can be any positive number given by the user. (2) We design an improved EM algorithm that outperforms the original EM algorithm. (3) We develop a software tool, MotifDetector, that uses our randomized algorithm to find good seeds and the improved EM algorithm to do local search. We compare MotifDetector with Buhler and Tompa's PROJECTION, which is considered the best known software for motif detection. Simulations show that MotifDetector is slower than PROJECTION when the pattern length is relatively small, and outperforms PROJECTION when the pattern length becomes large. Availability: It is available for free at , subject to copyright restrictions.
29

Yang, Hongtao, Mei Shen, Li Li, Yu Zhang, Qun Ma, and Mengyao Zhang. "New identification method for computer numerical control geometric errors". Measurement and Control 54, No. 5-6 (May 2021): 1055–67. http://dx.doi.org/10.1177/00202940211010835.

Abstract:
To address the low accuracy of geometric error identification and the incomplete identification results of linear-axis detection for computer numerical control (CNC) machine tools, a new 21-item geometric error identification method based on double ball-bar measurement was proposed. The model relating the double ball-bar reading to the geometric error terms in each plane was obtained from the three-plane arc trajectory measurement. A mathematical model of the geometric error components of CNC machine tools is established, and the error fitting coefficients are solved with the beetle antennae search particle swarm optimization (BAS–PSO) algorithm, identifying 21 geometric errors, including roll angle errors. Experiments were performed to compare the optimization performance of the BAS–PSO, PSO, BAS, and genetic particle swarm optimization (GA–PSO) algorithms. Experimental results show that the PSO algorithm is trapped in a local optimum, while BAS–PSO is superior to the other three algorithms in convergence speed and stability, has higher identification accuracy and better optimization performance, and is suitable for identifying the geometric error coefficients of CNC machine tools. The accuracy and validity of the identification results are verified by comparison with the individual geometric errors measured in laser interferometer experiments. The identification accuracy of the double ball-bar is below 2.7 µm. The proposed identification method is inexpensive, has a short processing time, is easy to operate, and provides a reference for the identification and compensation of the linear axes of machine tools.
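The BAS–PSO hybrid itself is not reproduced here, but the PSO half it builds on is compact. The sketch below minimizes a toy least-squares objective standing in for the error-coefficient fit; the swarm parameters and the known optimum are assumptions for illustration:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization (the paper hybridizes PSO with
    beetle antennae search; this sketch is the PSO part only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([objective(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

# Toy stand-in for the error-fitting objective: residual against a
# hypothetical optimum coefficient vector (1, -2, 3).
target = np.array([1.0, -2.0, 3.0])
best, best_val = pso(lambda p: float(np.sum((p - target) ** 2)), dim=3)
```

The BAS step the paper adds perturbs each particle along a random "antenna" direction before the velocity update, which is what helps escape the local optima plain PSO gets stuck in.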
30

Wang, Jianping, Shujing Zhang, Wei Chen, Dechuan Kong, and Zhou Yu. "Convex optimization–based multi-user detection in underwater acoustic sensor networks". International Journal of Distributed Sensor Networks 14, No. 2 (February 2018): 155014771875766. http://dx.doi.org/10.1177/1550147718757665.

Abstract:
Multi-carrier code-division multiple access is an important technical means for high-performance underwater acoustic sensor networks. Nevertheless, severe multiple access interference is a huge challenge. As the core technology, multi-user detection is used to eliminate multiple access interference. The traditional optimal detection algorithms (e.g. maximum likelihood) have very high computational complexity, and the performances of suboptimal detection methods (i.e. zero forcing, minimum mean square error, etc.) are poor. Therefore, taking into account the characteristics of underwater acoustic sensor networks, it is of great significance to design multi-user detection algorithms for achieving a tradeoff between the detection performance and the computational complexity in multi-carrier code-division multiple access systems. In this article, we design a transmitter model of underwater multi-carrier code-division multiple access system and then implement a multi-user detection algorithm based on convex optimization, which is named convex optimization–based algorithm. Next, we conduct the detection performance and computational complexity comparisons of maximum likelihood, zero forcing, minimum mean square error, and convex optimization–based algorithm. The results show that the performance of convex optimization–based algorithm is close to that of maximum likelihood, and the complexity is close to that of zero forcing. Therefore, a tradeoff between the computational complexity and the detection performance is realized in convex optimization–based algorithm. It means that convex optimization–based algorithm is suitable for the multi-user detection in multi-carrier code-division multiple access systems of underwater acoustic sensor networks.
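The two suboptimal baselines mentioned above are one-liners in matrix form; zero forcing inverts the effective spreading/channel matrix, while MMSE regularizes that inversion by the noise variance. The matrix and symbols below are synthetic stand-ins, not the paper's system model:

```python
import numpy as np

def zf_detect(H, y):
    """Zero forcing: pseudo-inverse of the spreading/channel matrix H."""
    return np.linalg.pinv(H) @ y

def mmse_detect(H, y, noise_var):
    """MMSE: (H^H H + sigma^2 I)^{-1} H^H y, trading a small bias for
    much less noise amplification than ZF."""
    k = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(k), H.conj().T @ y)

rng = np.random.default_rng(1)
H = rng.standard_normal((16, 4))           # 4 users, spreading length 16 (assumed)
bits = np.array([1.0, -1.0, 1.0, 1.0])     # BPSK symbols
y = H @ bits + 0.1 * rng.standard_normal(16)
zf_bits = np.sign(zf_detect(H, y))
mmse_bits = np.sign(mmse_detect(H, y, noise_var=0.01))
```

The convex-optimization detector of the paper relaxes the combinatorial maximum-likelihood search over ±1 symbols into a convex program, which is why its performance approaches ML at near-ZF cost.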
31

Peeta, Srinivas, and Debjit Das. "Continuous Learning Framework for Freeway Incident Detection". Transportation Research Record: Journal of the Transportation Research Board 1644, No. 1 (January 1998): 124–31. http://dx.doi.org/10.3141/1644-13.

Abstract:
Existing freeway incident detection algorithms predominantly require extensive off-line training and calibration, precluding transferability to new sites. They are also insensitive to demand and supply changes at the current site without recalibration. We propose two neural network-based approaches that incorporate an on-line learning capability, thereby ensuring transferability and adaptability to changes at the current site. The least-squares technique and the error back-propagation algorithm are used to develop on-line neural network-trained versions of the popular California algorithm and the more recent McMaster algorithm. Simulated data from the integrated traffic simulation model is used to analyze the performance of the neural network-based versions of the California and McMaster algorithms over a broad spectrum of operational scenarios. The results illustrate the superior performance of the neural net implementations in terms of detection rate, false alarm rate, and time to detection. As an implication for current practice, the results suggest that simply introducing a continuous learning capability to commonly used detection algorithms such as the California algorithm enhances their performance with time in service, allows transferability, and ensures adaptability to changes at the current site. An added advantage of this strategy is that the existing traffic measures used in those algorithms (such as volume, occupancy, and so forth) are sufficient, circumventing the need for new traffic measures, new threshold parameters, and variables that require subjective decisions.
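A California-style decision rule on upstream/downstream occupancies is simple to state; the sketch below is a loose illustration of that family of tests, and the thresholds (as well as the third, downstream-relative test standing in for the temporal test) are invented rather than the calibrated values the neural versions learn:

```python
def california_alarm(occ_up, occ_down, t1=8.0, t2=0.5, t3=0.3):
    """Illustrative California-style incident test on station occupancies (%).
    Fires when traffic piles up upstream while the downstream station
    empties; all three thresholds are assumed, not calibrated."""
    occdf = occ_up - occ_down                  # spatial occupancy difference
    if occdf < t1:                             # absolute-difference test
        return False
    if occdf / max(occ_up, 1e-9) < t2:         # upstream-relative test
        return False
    return occdf / max(occ_down, 1e-9) > t3    # downstream-relative test (stand-in
                                               # for the temporal test)

alarm_incident = california_alarm(30.0, 8.0)   # queue upstream, starved downstream
alarm_normal = california_alarm(12.0, 11.0)    # ordinary flow
```

The on-line versions proposed in the paper keep these occupancy inputs but let a neural network adjust the decision surface continuously instead of freezing thresholds at calibration time.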
32

Liang, Sheng, Yong Xie, Gao Feng Pan, Jun Xue, and Xin Feng Yu. "Simulation and Analysis of Linear Multiple User Detection Based on Simulink". Applied Mechanics and Materials 263-266 (December 2012): 354–59. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.354.

Abstract:
In order to study the feasibility of linear multi-user detection (MUD) in multi-objective spread-spectrum TT&C, mathematical models of the decorrelation algorithm (DEC) and the minimum mean square error algorithm (MMSE) are built. A simulation framework based on Simulink is given, and the two algorithms are compared with a traditional single-user detector. The simulation results show that both linear MUDs outperform traditional single-user detection.
33

Yee Lim, Chun, Tony Badrick, and Tze Ping Loh. "Patient-based quality control for glucometers using the moving sum of positive patient results and moving average". Biochemia medica 30, No. 2 (14 June 2020): 296–306. http://dx.doi.org/10.11613/bm.2020.020709.

Abstract:
Introduction: The capability of glucometer internal quality control (QC) in detecting varying magnitudes of systematic error (bias), and the potential use of the moving sum of positive results (MovSum) and moving average (MA) techniques as alternatives, were evaluated. Materials and methods: The probability of error detection using routine QC and the manufacturer's control limits was investigated using historical data. MovSum and MA algorithms were developed and optimized before being evaluated through numerical simulation for false positive rate and probability of error detection. Results: When the manufacturer's default control limits (which are multiple times wider than the running standard deviation (SD) of the glucometer) were used, they had a 0-75% probability of detecting small errors up to 0.8 mmol/L. However, the error detection capability improved to 20-100% when the running SD of the glucometer was used. At a binarization threshold of 6.2 mmol/L and block sizes of 200 to 400, MovSum has a 100% probability of detecting a bias greater than 0.5 mmol/L. Compared to MovSum, the MA technique had a lower probability of bias detection, especially for smaller bias magnitudes; MA also had higher false positive rates. Conclusions: The MovSum technique is suited for detecting small but clinically significant biases. Point-of-care QC should follow conventional practice by setting the control limits according to the running mean and SD to allow proper error detection. Glucometer manufacturers have an active role to play in liberalizing QC settings and in enhancing the middleware to facilitate patient-based QC practices.
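The MovSum statistic itself is straightforward: binarize each patient result against a threshold and slide a window of counts. The sketch below uses the paper's 6.2 mmol/L threshold and a block of 200 results, while the control limit and the simulated result streams are assumptions for illustration only:

```python
import random

def movsum_flags(results, threshold=6.2, block=200, limit=120):
    """Flag windows where the moving sum of 'positive' results
    (> threshold mmol/L) exceeds a control limit. Threshold and block
    size follow the paper; limit=120 is an assumed control limit that
    would in practice be set from the in-control positive rate."""
    positives = [1 if r > threshold else 0 for r in results]
    flags = []
    running = sum(positives[:block])
    flags.append(running > limit)
    for i in range(block, len(positives)):
        running += positives[i] - positives[i - block]   # O(1) window update
        flags.append(running > limit)
    return flags

random.seed(7)
in_control = [random.gauss(5.8, 1.0) for _ in range(600)]   # simulated patient stream
biased = [x + 1.0 for x in in_control]                      # systematic +1.0 mmol/L error
```

A systematic bias shifts the whole distribution, so the count of results above the fixed threshold rises in every window, which is what the moving sum detects without any reagent-based QC material.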
34

Zhao, Jiandong, Mingmin Han, Changcheng Li, and Xin Xin. "Visibility Video Detection with Dark Channel Prior on Highway". Mathematical Problems in Engineering 2016 (2016): 1–21. http://dx.doi.org/10.1155/2016/7638985.

Abstract:
Dark channel prior (DCP) has advantages in image enhancement and haze removal, and is explored here to detect highway visibility according to the physical relationship between transmittance and extinction coefficient. However, there are three major error sources in calculating transmittance. The first is that sky regions do not satisfy the assumptions of the DCP algorithm, so optimization algorithms combining region growing with a coefficient correction method are proposed. The second arises when extracting atmospheric brightness, where different values lead to different errors; therefore, a multimode classification method is designed according to the visibility conditions. The third is caused by image blocky effects, so guided image filtering is introduced to obtain an accurate transmittance for each pixel. Then, according to the definition of meteorological optical range and the relationship between transmittance and extinction coefficient in the Lambert-Beer law, an accurate visibility value can be calculated. A comparative experimental system including a visibility detector and a video camera was set up to verify the accuracy of these optimization algorithms. Finally, a large number of highway section videos were selected to test the validity of the DCP method in the different modes. The results indicate that these visibility detection methods are feasible and reliable for the smooth operation of highways.
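The chain from dark channel to transmittance to visibility can be sketched as follows. The patch size, the 0.95 weighting, the assumed atmospheric light and the synthetic image are illustrative; MOR = 3/β follows Koschmieder's 5% contrast convention (3 ≈ −ln 0.05):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB, then a min filter over a patch window."""
    mins = img.min(axis=2)
    h, w = mins.shape
    p = np.pad(mins, patch // 2, mode="edge")
    return np.min([p[i:i + h, j:j + w]
                   for i in range(patch) for j in range(patch)], axis=0)

def visibility_from_transmission(t, depth_m, eps=1e-6):
    """Lambert-Beer: t = exp(-beta * d), and MOR ~= 3 / beta."""
    beta = -np.log(np.clip(t, eps, 1.0)) / depth_m
    return 3.0 / max(float(beta), eps)

# Synthetic hazy frame over a road patch at an assumed reference distance.
rng = np.random.default_rng(0)
img = np.clip(0.7 + 0.05 * rng.standard_normal((32, 32, 3)), 0, 1)
A = 1.0                                   # atmospheric light (assumed)
t = 1.0 - 0.95 * dark_channel(img / A)    # DCP transmittance estimate
vis_m = visibility_from_transmission(float(np.median(t)), depth_m=500.0)
```

The paper's contributions sit around exactly these steps: excluding sky regions before the min filter, choosing A per visibility mode, and replacing the blocky min-filter output with guided filtering.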
35

Zhang, He, Rui Peng, and Xiao Dong Zhao. "Step Detection Algorithm Using Fast Fourier Transformation". Advanced Materials Research 1049-1050 (October 2014): 1218–21. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1218.

Abstract:
Pedestrian Dead Reckoning (PDR) is a core component of pedestrian navigation. PDR algorithms typically use the current position and movement information to estimate future positions and so accomplish the navigation task. Step detection, as a basic component of PDR, is essential for implementing pedestrian navigation. In this paper, a step detection algorithm is designed based on existing research in the related area. To improve accuracy, the algorithm employs a Fast Fourier Transformation (FFT) for optimization. Finally, an experiment is conducted for this algorithm, and the error rate of step detection is less than 1%.
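A minimal version of FFT-based cadence estimation on a synthetic accelerometer trace (all signal parameters below are assumed, and the paper's full pipeline around the FFT is not reproduced):

```python
import numpy as np

fs = 50.0                          # accelerometer sample rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)       # 20 s walk
f_step = 1.8                       # true cadence, steps per second
accel = (9.81                                      # gravity
         + 0.8 * np.sin(2 * np.pi * f_step * t)    # stride oscillation
         + 0.05 * np.sin(2 * np.pi * 7.0 * t))     # out-of-band vibration

# FFT of the zero-mean signal; the dominant in-band peak is the cadence.
spec = np.abs(np.fft.rfft(accel - accel.mean()))
freqs = np.fft.rfftfreq(len(accel), d=1 / fs)
band = (freqs > 0.5) & (freqs < 3.0)               # plausible walking cadences
cadence = float(freqs[band][np.argmax(spec[band])])
steps = round(cadence * t[-1])
```

Estimating the step frequency in the spectrum, rather than counting individual acceleration peaks, is what makes this approach robust to the spurious peaks that inflate error rates in naive threshold counters.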
36

Liu, Xiao Zhi, and Jing Li. "Multi-User Detection Based on Improved KICA with Bat Algorithm". Applied Mechanics and Materials 336-338 (July 2013): 1867–70. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.1867.

Abstract:
In this paper, an improved kernel independent component analysis (KICA) algorithm is proposed for multi-user detection (MUD). In this algorithm, a new hybrid kernel function is adopted. In addition, the bat algorithm is applied to the optimizing process of independent component separation. Simulation results show that the new hybrid kernel function performs better in MUD than other kernel functions, and the improved KICA with bat algorithm has the smallest bit error rate (BER) when compared with classical FastICA and KICA algorithms.
37

Klishkovskaia, Tatiana, Andrey Aksenov, Aleksandr Sinitca, Anna Zamansky, Oleg A. Markelov, and Dmitry Kaplun. "Development of Classification Algorithms for the Detection of Postures Using Non-Marker-Based Motion Capture Systems". Applied Sciences 10, No. 11 (10 June 2020): 4028. http://dx.doi.org/10.3390/app10114028.

Abstract:
The rapid development of algorithms for skeletal posture detection with relatively inexpensive contactless systems and cameras opens up the possibility of monitoring and assessing the health and wellbeing of humans. However, evaluation and confirmation of posture classifications are still needed. The purpose of this study was therefore to develop a simple algorithm for the automatic classification of detected human postures. The most affordable solution for this project was to use a Kinect V2, enabling the identification of 25 joints, so as to record movements and postures for data analysis. A total of 10 subjects volunteered for this study. Three algorithms were developed for the classification of different postures in Matlab. These were based on a total error of vector lengths, a total error of angles, multiplication of these two parameters, and the simultaneous analysis of the first and second parameters. A base of 13 exercises was then created to test the recognition of postures by the algorithm and analyze subject performance. The best results for posture classification were shown by the second algorithm, with an accuracy of 94.9%. The average degree of correctness of the exercises among the 10 participants was 94.2% (SD 1.8%). It was shown that the proposed algorithms provide the same accuracy as that obtained from machine learning-based algorithms and algorithms with neural networks, but have less computational complexity and do not need resources for training. The algorithms developed and evaluated in this study have demonstrated a reasonable level of accuracy, and could potentially form the basis for developing a low-cost system for the remote monitoring of humans.
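The best-performing criterion above, classifying by the smallest total angle error against posture templates, reduces to a few lines in a 2-D toy form. The two-bone stick figures below stand in for the 25-joint Kinect skeletons:

```python
import math

def joint_angles(skeleton):
    """Angle of each bone vector (a pair of 2-D joints) w.r.t. the x-axis."""
    return [math.atan2(y2 - y1, x2 - x1) for (x1, y1), (x2, y2) in skeleton]

def total_angle_error(skeleton, reference):
    """Sum of absolute per-bone angle differences (the paper's second criterion)."""
    return sum(abs(a - b)
               for a, b in zip(joint_angles(skeleton), joint_angles(reference)))

def classify(skeleton, templates):
    """Pick the template posture with the smallest total angle error."""
    return min(templates, key=lambda name: total_angle_error(skeleton, templates[name]))

# Two toy posture templates, each built from two 'bones' (hypothetical).
templates = {
    "arms_up":  [((0, 0), (0, 1)), ((0, 1), (0.2, 2))],
    "arms_out": [((0, 0), (0, 1)), ((0, 1), (1, 1.1))],
}
observed = [((0, 0), (0.05, 1)), ((0, 1), (0.25, 2))]   # noisy "arms up"
label = classify(observed, templates)
```

Because the decision is a nearest-template lookup over a handful of angle sums, there is nothing to train, which is the computational advantage the paper claims over neural classifiers.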
38

Hurtado Sánchez, Johanna Andrea, and Pablo Emilio Jojoa Gómez. "Effects of blind channel equalization using the regressive accelerator algorithm version ɣ". Sistemas y Telemática 16, No. 46 (27 June 2018): 9–20. http://dx.doi.org/10.18046/syt.v16i46.3009.

Abstract:
We present a blind channel equalization scheme based on the ɣ-version regressive acceleration algorithm, which uses self-taught equalization techniques to study the characteristics of both the second- and higher-order moments of the transmitted signal; these are used to calculate the error signal and thus to make an optimal estimation of the transmitted symbols. Simulations of the obtained results are compared with algorithms based on the stochastic gradient and with the Bussgang algorithms. These simulations show that, using the ɣ-version regressive acceleration algorithm, better detection of the transmitted bits and higher convergence speeds are obtained, with a minimum mean square error.
39

Hassan, Abdus, Umar Afzaal, Tooba Arifeen, and Jeong Lee. "Input-Aware Implication Selection Scheme Utilizing ATPG for Efficient Concurrent Error Detection". Electronics 7, No. 10 (17 October 2018): 258. http://dx.doi.org/10.3390/electronics7100258.

Abstract:
Recently, concurrent error detection enabled through invariant relationships between different wires in a circuit has been proposed. Because there are many such implications in a circuit, selection strategies have been developed to select the most valuable implications for inclusion in the checker hardware such that a sufficiently high probability of error detection (P_detection) is achieved. These algorithms, however, due to their heuristic nature, cannot guarantee a lossless P_detection. In this paper, we develop a new input-aware implication selection algorithm with the help of ATPG which minimizes the loss of P_detection. In our algorithm, the detectability of errors for each candidate implication is carefully evaluated using error-prone vectors. The evaluation results are then utilized to select the most efficient candidates for achieving optimal P_detection. The experimental results on 15 representative combinational benchmark circuits from the MCNC benchmark suite show that the implications selected by our algorithm achieve better P_detection in comparison to the state of the art. The proposed method also offers better performance, up to 41.10%, in terms of the proposed impact-level metric, which is the ratio of achieved P_detection to the implication count.
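On a toy circuit, implication mining and the resulting concurrent checker look like this; exhaustive simulation stands in for the ATPG-guided evaluation of the paper, and the three-gate circuit is invented:

```python
from itertools import product

def circuit(a, b, c):
    """Toy combinational circuit; returns its internal wire values."""
    w1 = a and b
    w2 = b or c
    w3 = w1 and not c
    return {"w1": w1, "w2": w2, "w3": w3}

# Mine invariant implications (x=1 -> y=1) by exhaustive input simulation.
wires = ["w1", "w2", "w3"]
implications = []
for x in wires:
    for y in wires:
        if x != y and all(circuit(a, b, c)[y]
                          for a, b, c in product([False, True], repeat=3)
                          if circuit(a, b, c)[x]):
            implications.append((x, y))

def checker(wire_values):
    """Concurrent checker: any violated implication signals an error."""
    return any(wire_values[x] and not wire_values[y] for x, y in implications)
```

A transient fault that flips a wire and violates one of the mined invariants is caught at run time; the selection problem the paper addresses is which subset of the (usually very many) implications buys the most P_detection per checker gate.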
40

Haibing, Hu, Xipeng Zheng, Jiajie Yin, and Yueyan Wang. "Research on O-ring Dimension Measurement Algorithm Based on Cubic Spline Interpolation". Applied Sciences 11, No. 8 (20 April 2021): 3716. http://dx.doi.org/10.3390/app11083716.

Abstract:
Current O-ring dimension measurement algorithms based on machine vision are mainly whole-pixel level algorithms, which have the disadvantage of a low measurement accuracy. In order to improve the stability and accuracy of O-ring dimension measurement, a sub-pixel edge detection algorithm based on cubic spline interpolation is proposed for O-ring dimension measurement. After image pre-processing of the O-ring graphics, the whole-pixel-level O-ring edges are obtained by using a noise-resistant mathematical morphology method, and then the sub-pixel edge contours are obtained using a sub-pixel edge detection algorithm based on cubic spline interpolation. Finally, the edge curve is fitted with the least squares method to obtain its inner and outer diameter as well as the size of the wire diameter. The experimental data show that the algorithm has a mean square error of 4.8 μm for the outer diameter and 0.18 μm for the wire diameter. The outer diameter error is kept within ±100 μm and the wire diameter error can be kept within ±15 μm. Compared with the whole pixel algorithm, the measurement accuracy has been greatly improved.
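The sub-pixel idea, interpolating the gradient around its whole-pixel peak and reading off a fractional edge position, can be sketched in 1-D. A cubic polynomial fit through four samples stands in for the paper's full cubic-spline interpolation, and the logistic test edge is an assumption:

```python
import numpy as np

def subpixel_edge(profile):
    """Sub-pixel edge location along a 1-D intensity profile: take the
    discrete gradient, fit a cubic through the 4 samples around its peak,
    and return the maximum of the fitted polynomial."""
    g = np.abs(np.gradient(profile.astype(float)))
    k = int(np.argmax(g))
    k0 = min(max(k - 1, 0), len(g) - 4)          # 4-sample window around the peak
    xs = np.arange(k0, k0 + 4, dtype=float)
    coeffs = np.polyfit(xs, g[k0:k0 + 4], 3)     # exact cubic through 4 points
    # Candidate maxima: real roots of the derivative inside the window,
    # plus the window endpoints.
    crit = np.roots(np.polyder(coeffs))
    crit = crit[np.isreal(crit)].real
    crit = crit[(crit >= xs[0]) & (crit <= xs[-1])]
    cand = np.concatenate([crit, xs[[0, -1]]])
    return float(cand[np.argmax(np.polyval(coeffs, cand))])

# Smooth synthetic edge whose true position (9.4) lies between pixels.
x = np.arange(20)
profile = 1 / (1 + np.exp(-(x - 9.4)))           # logistic edge at 9.4
edge_pos = subpixel_edge(profile)
```

Whole-pixel detection would report 9 or 10 here; the interpolated gradient peak recovers a fractional position, which is exactly the accuracy gain the paper exploits before least-squares circle fitting.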
41

Ni, Hongxia, and Yufeng Li. "Spatial Error Concealment Algorithm Based on Adaptive Edge Threshold and Directional Weight". International Journal of Pattern Recognition and Artificial Intelligence 31, No. 08 (9 May 2017): 1754014. http://dx.doi.org/10.1142/s0218001417540143.

Abstract:
In order to improve the error resilience of H.264/AVC compressed video streams in wireless channel transmission, this paper presents a spatial error concealment algorithm based on an adaptive edge threshold and directional weights. Firstly, the algorithm uses the Sobel gradient operator for image edge detection to detect the edges of adjacent macro-blocks; secondly, according to the specific information of the macro-blocks adjacent to the damaged macro-block, it sets an adaptive gradient threshold; thirdly, it performs direction-weighted interpolation of the damaged macro-block using the Sobel gradient operator. Experiments show that image reconstruction quality is greatly improved by this algorithm, which has higher application value across different video sequences compared with traditional spatial error concealment algorithms.
42

Židek, Kamil, Alexander Hošovský, and Ján Dubják. "Diagnostics of Surface Errors by Embedded Vision System and its Classification by Machine Learning Algorithms". Key Engineering Materials 669 (October 2015): 459–66. http://dx.doi.org/10.4028/www.scientific.net/kem.669.459.

Abstract:
The article deals with the usability and advantages of embedded vision systems for surface error detection, and with the usability of advanced algorithms, techniques and methods from machine learning and artificial intelligence for error classification in machine vision systems. We provide experiments with the following classification algorithms: Support Vector Machines (SVM), Random Trees, Gradient Boosted Trees, K-Nearest Neighbor and Normal Bayes Classifier. A further comparison experiment was conducted with a multilayer perceptron (MLP), because it is currently very popular for classification in the field of artificial intelligence. These classification approaches are compared by precision, reliability, speed of training and difficulty of algorithm implementation.
43

Imaoka, Hitoshi, and Kenji Okajima. "An Algorithm for the Detection of Faces on the Basis of Gabor Features and Information Maximization". Neural Computation 16, No. 6 (1 June 2004): 1163–91. http://dx.doi.org/10.1162/089976604773717577.

Abstract:
We propose an algorithm for the detection of facial regions within input images. The characteristics of this algorithm are (1) a vast number of Gabor-type features (196,800) in various orientations, and with various frequencies and central positions, which are used as feature candidates in representing the patterns of an image, and (2) an information maximization principle, which is used to select several hundred features that are suitable for the detection of faces from among these candidates. Using only the selected features in face detection leads to reduced computational cost and is also expected to reduce generalization error. We applied the system, after training, to 42 input images with complex backgrounds (Test Set A from the Carnegie Mellon University face data set). The result was a high detection rate of 87.0%, with only six false detections. We compared the result with other published face detection algorithms.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Liu, Zilong, Kexian Gong, Peng Sun, Jicai Deng, Kunheng Zou und Linlin Duan. „OQPSK Synchronization Parameter Estimation Based on Burst Signal Detection“. Electronics 10, Nr. 1 (02.01.2021): 69. http://dx.doi.org/10.3390/electronics10010069.

Full text of the source
Annotation:
The fast estimation of synchronization parameters plays an extremely important role in the demodulation of burst signals. In order to solve the problem of high computational complexity in the implementation of traditional algorithms, a synchronization parameter (frequency offset, phase offset, and timing error) estimation algorithm based on Offset Quadrature Phase Shift Keying (OQPSK) burst signal detection is proposed in this article. We first use the Data-Aided (DA) method to detect where the burst signal begins by taking the segment correlation between the receiving signals and the known local Unique Word (UW). In the sequel, the above results are adopted directly to estimate the synchronization parameters, which is obviously different from the conventional algorithms. In this way, the complexity of the proposed algorithm is greatly reduced, and it is more convenient for hardware implementation. The simulation results show that the proposed algorithm has high accuracy and can track the Modified Cramer–Rao Bound (MCRB) closely.
APA, Harvard, Vancouver, ISO and other citation styles
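The data-aided detection step the abstract describes, correlating the received samples against a known local Unique Word (UW) to locate the burst start, can be sketched as follows. The UW pattern, noise level and signal lengths are illustrative, and real symbols are simplified to real-valued ±1.

```python
import random

random.seed(1)

# Known Unique Word (UW) of +/-1 symbols, as in data-aided detection.
uw = [1, -1, 1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1]

# Received stream: channel noise, then the noisy UW, then payload symbols.
noise = [random.gauss(0, 0.2) for _ in range(20)]
payload = [random.choice([-1, 1]) for _ in range(16)]
rx = noise + [s + random.gauss(0, 0.2) for s in uw] + payload

def detect_burst(rx, uw):
    """Return the offset maximizing the sliding correlation with the UW."""
    best_offset, best_corr = 0, float("-inf")
    for off in range(len(rx) - len(uw) + 1):
        corr = sum(rx[off + i] * uw[i] for i in range(len(uw)))
        if corr > best_corr:
            best_offset, best_corr = off, corr
    return best_offset

print(detect_burst(rx, uw))
```

In the paper, the segment correlations computed here are then reused directly for frequency-offset, phase-offset and timing estimation, which is where the complexity savings come from.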
45

Liang, Xiao Yang, and Zhang Cen Guan. "Ceph CRUSH Data Distribution Algorithms". Applied Mechanics and Materials 596 (July 2014): 196–99. http://dx.doi.org/10.4028/www.scientific.net/amm.596.196.

Full text of the source
Annotation:
CRUSH is one of Ceph's modules; it mainly solves the controllable, scalable, decentralized distribution of data replicas. A distinctive characteristic of Ceph is that, rather than relying on a centralized metadata server, file storage locations are allocated through the CRUSH algorithm. Its core is RADOS (Reliable, Autonomic Distributed Object Store), an object storage cluster that provides high availability of the objects themselves, together with error detection and repair.
APA, Harvard, Vancouver, ISO and other citation styles
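The core idea, every client computing object placement deterministically from a hash instead of consulting a lookup table, can be illustrated with a rendezvous-hash sketch. This is a deliberately simplified stand-in, not the real CRUSH algorithm (which also handles hierarchies, weights and failure domains); the OSD names are illustrative.

```python
import hashlib

def place(object_name, osds, replicas=3):
    """Pick `replicas` distinct OSDs by ranking them with a keyed hash."""
    def weight(osd):
        digest = hashlib.sha256(f"{object_name}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    # Every client sorts the same way, so placement needs no central server.
    return sorted(osds, key=weight)[:replicas]

osds = [f"osd.{i}" for i in range(8)]
print(place("myfile.chunk0", osds))
```

Because the mapping is a pure function of the object name and the cluster membership, any node can locate (or re-replicate) an object independently, which is what enables the decentralized error detection and repair mentioned above.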
46

QUINTANA, FERNANDO A., PILAR L. IGLESIAS and HELENO BOLFARINE. "BAYESIAN IDENTIFICATION OF OUTLIERS AND CHANGE-POINTS IN MEASUREMENT ERROR MODELS". Advances in Complex Systems 08, No. 04 (December 2005): 433–49. http://dx.doi.org/10.1142/s0219525905000567.

Full text of the source
Annotation:
The problem of outlier and change-point identification has received considerable attention in traditional linear regression models from both classical and Bayesian standpoints. In contrast, for the case of regression models with measurement errors, also known as errors-in-variables models, the corresponding literature is scarce and largely focused on classical solutions for the normal case. The main objective of this paper is to propose clustering algorithms for outlier detection and change-point identification in scale mixtures of errors-in-variables models. We propose an approach based on product partition models (PPMs), which allows one to study clustering for the models under consideration. This includes the change-point problem and outlier detection as special cases. The outlier identification problem is approached by adapting the algorithms developed by Quintana and Iglesias [32] for simple linear regression models. A special algorithm is developed for the change-point problem which can be applied in a more general setup. The methods are illustrated with two applications: (i) outlier identification in a problem involving the relationship between two methods for measuring serum kanamycin in blood samples from babies, and (ii) change-point identification in the relationship between the monthly dollar volume of sales on the Boston Stock Exchange and the combined monthly dollar volumes for the New York and American Stock Exchanges.
APA, Harvard, Vancouver, ISO and other citation styles
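To make the change-point task concrete, the snippet below shows a bare-bones classical scan: pick the split that minimizes the total squared deviation from the two segment means. This is only an illustration of the problem setting; the paper's PPM-based Bayesian approach is far more general (it handles measurement error, outliers and unknown numbers of clusters). The data are synthetic.

```python
def sse(xs):
    """Sum of squared deviations from the mean of xs."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def change_point(xs):
    """Index k minimizing SSE of xs[:k] plus SSE of xs[k:]."""
    return min(range(1, len(xs)), key=lambda k: sse(xs[:k]) + sse(xs[k:]))

data = [1.0, 1.2, 0.9, 1.1, 1.0, 4.1, 3.9, 4.0, 4.2, 3.8]
print(change_point(data))  # → 5
```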
47

Guo, J. Hung, Kuo Lan Su and Yi Lin Liao. "Comparison of the On-Line Power Detection and Prediction System". Applied Mechanics and Materials 300-301 (February 2013): 537–41. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.537.

Full text of the source
Annotation:
The article presents a power detection and prediction system (PDPS) that uses fusion algorithms for on-line power monitoring of a target device. The system contains multiple power detection units, a data integration unit, a target device, a power source and a main controller. Each power detection unit measures its assigned power source in real time and uses four current sensors to measure current variation. Fusion algorithms are applied to current and voltage detection, and real-time power values are calculated from the estimated current and voltage measurements. The main controller predicts the power load of each power detection unit using an auto-regression algorithm, calculates the error between the predicted and measured values for each detection unit, and compares the resulting values under various conditions.
APA, Harvard, Vancouver, ISO and other citation styles
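The auto-regression step can be sketched with a minimal AR(1) model, y[t] = a·y[t-1] + b, fitted by least squares and compared against the next measurement. The power series and model order are illustrative; the paper does not specify them.

```python
def fit_ar1(series):
    """Least-squares fit of y[t] = a*y[t-1] + b; returns (a, b)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

power = [10.0, 10.4, 10.9, 11.3, 11.8, 12.2, 12.7]  # watts, per sample

# Fit on all but the last sample, predict it, and score the error,
# mirroring the prediction-vs-measurement comparison in the article.
a, b = fit_ar1(power[:-1])
prediction = a * power[-2] + b
error = power[-1] - prediction
print(f"predicted {prediction:.2f} W, measured {power[-1]:.2f} W, "
      f"error {error:.2f} W")
```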
48

Shen, Chen, Yu, Ge, Han and Duan. "A Direct Current Measurement Method Based on Terbium Gallium Garnet Crystal and a Double Correlation Detection Algorithm". Sensors 19, No. 13 (July 7, 2019): 2997. http://dx.doi.org/10.3390/s19132997.

Full text of the source
Annotation:
When applying an optical current transformer (OCT) to direct current measurement, output signals exhibit a low signal-to-noise ratio and signal-to-noise band overlap. Sinusoidal wave modulation is used to solve this problem. A double correlation detection algorithm is used to extract the direct current (DC) signal, remove white noise and improve the signal-to-noise ratio. Our sensing unit uses a terbium gallium garnet crystal in order to increase the output signal-to-noise ratio and measurement sensitivity. Measurement errors of single correlation and double correlation detection algorithms are compared, and experimental results showed that this measurement method can control measurement error to about 0.3%, thus verifying its feasibility.
APA, Harvard, Vancouver, ISO and other citation styles
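The principle behind correlation detection, recovering a DC quantity modulated onto a sine carrier by multiplying with a synchronous reference and averaging, which suppresses white noise, can be sketched as follows. This shows a single correlation stage with illustrative parameters; the paper applies the correlation twice.

```python
import math
import random

random.seed(2)

f, fs, n = 50.0, 5000.0, 5000  # carrier Hz, sample rate Hz, sample count
dc = 0.30                       # DC quantity to recover

# Sine-modulated DC buried in strong white noise.
signal = [dc * math.sin(2 * math.pi * f * k / fs) + random.gauss(0, 0.5)
          for k in range(n)]

# Correlate with a synchronous reference over whole carrier cycles;
# the factor 2 compensates for sin^2 averaging to 1/2.
reference = [math.sin(2 * math.pi * f * k / fs) for k in range(n)]
estimate = 2 * sum(s * r for s, r in zip(signal, reference)) / n
print(f"recovered DC value: {estimate:.3f}")
```

Averaging over many cycles is what pulls the estimate out of noise far larger than the signal itself, the same mechanism the abstract credits for the improved signal-to-noise ratio.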
49

Xu, LM, F. Fan, YX Hu, Z. Zhang and DJ Hu. "A vision-based processing methodology for profile grinding of contour surfaces". Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 234, No. 1-2 (June 21, 2019): 27–39. http://dx.doi.org/10.1177/0954405419857401.

Full text of the source
Annotation:
On-machine direct detection of profile errors is vital for improving accuracy and efficiency in profile grinding. However, achieving such detection is difficult because of harsh machining conditions. This study presents a novel machine-vision-based processing methodology for the profile grinding of contour surfaces, replacing traditional optical-enlargement-based profile grinding, which is manually dependent and inefficient. Grinding errors were efficiently detected online through machine vision. A specific vision system was designed in coordination with the profile grinding system to ensure distortion-free measurement of the workpiece contour and to overcome interference from the machining environment during profile grinding. A machining error detection principle was proposed based on the online captured workpiece contour image. Real-time error identification and compensation algorithms were developed through synthetic error measurement. Simulations and experiments were conducted successively. The results indicated that profile errors were considerably reduced and measurement efficiency was improved, validating the effectiveness of the proposed methodology for profile grinding of contour surfaces. The findings can also provide a reference for the direct measurement of machining errors in other machines.
APA, Harvard, Vancouver, ISO and other citation styles
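The error-identification step reduces, at its core, to comparing vision-detected contour points against the nominal profile. The sketch below does this for the simplest case, a circular nominal profile, with entirely synthetic deviation values; it is an illustration of the principle, not the paper's algorithm.

```python
import math

nominal_r = 10.0  # nominal profile: a circle of radius 10 mm

# Vision-detected contour points (x, y), generated here with small
# synthetic radial deviations e at angles t.
points = [((nominal_r + e) * math.cos(t), (nominal_r + e) * math.sin(t))
          for t, e in [(0.0, 0.02), (1.0, -0.05), (2.0, 0.01),
                       (3.0, 0.04), (4.0, -0.03), (5.0, 0.00)]]

def profile_errors(points, r):
    """Radial deviation of each contour point from the nominal radius."""
    return [math.hypot(x, y) - r for x, y in points]

errors = profile_errors(points, nominal_r)
print(f"max profile error: {max(abs(e) for e in errors):.3f} mm")
```

A real contour surface would use the nominal profile curve instead of a circle, and the per-point deviations would feed the compensation algorithm rather than just a maximum.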
50

Dong, Yu Bing, Hai Yan Wang and Ming Jing Li. "Comparison of Thresholding and Edge Detection Segmentation Techniques". Advanced Materials Research 860-863 (December 2013): 2783–86. http://dx.doi.org/10.4028/www.scientific.net/amr.860-863.2783.

Full text of the source
Annotation:
Edge detection and thresholding segmentation algorithms are presented and tested with a variety of grayscale images from different fields. To analyze and evaluate the quality of the image segmentation, the Root Mean Square Error (RMSE) is used: the smaller the error value, the better the segmentation effect. The experimental results show that no single segmentation method is suitable for all images.
APA, Harvard, Vancouver, ISO and other citation styles
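The evaluation scheme in this abstract, segment an image and score the result with RMSE against a reference, can be sketched with a tiny example. The image, threshold and ground-truth mask are illustrative.

```python
# A tiny synthetic grayscale image and its ground-truth binary mask.
image = [
    [20, 30, 25, 200, 210],
    [35, 28, 22, 215, 205],
    [30, 26, 190, 220, 208],
]
truth = [
    [0, 0, 0, 255, 255],
    [0, 0, 0, 255, 255],
    [0, 0, 255, 255, 255],
]

def segment(image, threshold=128):
    """Fixed-threshold binarization: foreground 255, background 0."""
    return [[255 if p >= threshold else 0 for p in row] for row in image]

def rmse(a, b):
    """Root Mean Square Error between two images of equal size."""
    diffs = [(pa - pb) ** 2
             for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    return (sum(diffs) / len(diffs)) ** 0.5

result = segment(image)
print(f"RMSE vs. ground truth: {rmse(result, truth):.1f}")
```

Comparing RMSE scores across segmentation methods on the same images is exactly the kind of evaluation the article performs, and is where its conclusion, that no single method wins everywhere, comes from.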