Academic literature on the topic 'Approximate Error Detection-Correction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Approximate Error Detection-Correction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Approximate Error Detection-Correction"

1. Rizzo, Roberto G., Andrea Calimera, and Jun Zhou. "Approximate Error Detection-Correction for Efficient Adaptive Voltage Over-Scaling." Integration 63 (September 2018): 220–31. http://dx.doi.org/10.1016/j.vlsi.2018.04.008.

2. Yang, Zhixi, Xianbin Li, and Jun Yang. "Power Efficient and High-Accuracy Approximate Multiplier with Error Correction." Journal of Circuits, Systems and Computers 29, no. 15 (June 30, 2020): 2050241. http://dx.doi.org/10.1142/s0218126620502412.

Abstract:
Approximate arithmetic circuits have been considered as an innovative circuit paradigm with improved performance for error-resilient applications which can tolerate a certain loss of accuracy. In this paper, a novel approximate multiplier with a different scheme of partial product reduction is proposed. An analysis of accuracy (measured by error distance, pass rate and accuracy of amplitude) as well as circuit-based design metrics (power, delay, area, etc.) is utilized to assess the performance of the proposed approximate multiplier. Extensive simulation results show that the proposed design achieves higher accuracy than other approximate multipliers from previous works. Moreover, the proposed design performs better under comprehensive comparisons taking both accuracy and circuit-related metrics into consideration. In addition, an error detection and correction (EDC) circuit is used to correct the approximate results to accurate results. Compared with the exact Wallace tree multiplier, the proposed approximate multiplier with the error detection and correction circuit still achieves up to 15% and 10% savings in power and delay, respectively.
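To make the division of labor concrete, the sketch below models an approximate multiplier whose cheap core ignores the low-order bits of one operand, plus an EDC stage that recovers the exact product from the known truncation error. This is an illustrative software model under assumed parameters (the truncation width K and the helper names are inventions), not the partial-product reduction scheme of Yang et al.

```python
# Software model of an approximate multiplier with an error detection and
# correction (EDC) stage. Hypothetical scheme: the approximate core drops
# the K least significant bits of one operand; the EDC stage computes the
# exact contribution of those dropped bits and adds it back.

K = 4  # truncation width (assumed for illustration)

def approx_mult(a: int, b: int) -> int:
    """Approximate product: ignore the K low-order bits of b."""
    return a * ((b >> K) << K)

def edc_correct(a: int, b: int, approx: int) -> int:
    """Recover the exact product from the known truncation error."""
    error = a * (b & ((1 << K) - 1))  # contribution of the dropped bits
    return approx + error

if __name__ == "__main__":
    a, b = 201, 187
    p = approx_mult(a, b)
    print(p, edc_correct(a, b, p), a * b)  # corrected value equals a * b
```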
3. Babenko, Mikhail, Anton Nazarov, Maxim Deryabin, Nikolay Kucherov, Andrei Tchernykh, Nguyen Viet Hung, Arutyun Avetisyan, and Victor Toporkov. "Multiple Error Correction in Redundant Residue Number Systems: A Modified Modular Projection Method with Maximum Likelihood Decoding." Applied Sciences 12, no. 1 (January 4, 2022): 463. http://dx.doi.org/10.3390/app12010463.

Abstract:
Error detection and correction codes based on redundant residue number systems are powerful tools to control and correct arithmetic processing and data transmission errors. Decoding the magnitude and location of a multiple error is a complex computational problem: it requires verifying a huge number of different possible combinations of erroneous residual digit positions in the error localization stage. This paper proposes a modified correcting method based on calculating the approximate weighted characteristics of modular projections. The new procedure for correcting errors and restoring numbers in a weighted number system involves the Chinese Remainder Theorem with fractions. This approach calculates the rank of each modular projection efficiently. The ranks are used to calculate the Hamming distances. The new method speeds up the procedure for correcting multiple errors and restoring numbers in weighted form by an average of 18% compared to state-of-the-art analogs.
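The projection idea is easy to demonstrate in a few lines. The toy below encodes an integer in an RRNS with two redundant moduli and corrects a single residue error by brute-force modular projections; the moduli, the exhaustive search, and the legitimate-range test are simplifications standing in for the paper's approximate rank computation and maximum likelihood decoding.

```python
# Toy redundant residue number system (RRNS) single-error correction via
# modular projections. Moduli and range check are illustrative assumptions.
from math import prod

MODULI = [3, 5, 7, 11, 13]      # the last two moduli are redundant
K_INFO = 3
LEGIT = prod(MODULI[:K_INFO])   # legitimate range of encoded values: [0, 105)

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

def encode(x):
    return [x % m for m in MODULI]

def correct_single_error(residues):
    """Drop each residue in turn; a projection that lands back inside the
    legitimate range recovers the value (and points at the bad position).
    If no residue is corrupted, the first projection already returns x."""
    for i in range(len(MODULI)):
        x = crt(residues[:i] + residues[i + 1:], MODULI[:i] + MODULI[i + 1:])
        if x < LEGIT:
            return x, i
    return None, None

if __name__ == "__main__":
    x = 97
    rs = encode(x)
    rs[1] = (rs[1] + 2) % MODULI[1]   # inject a single residue error
    print(correct_single_error(rs))    # -> (97, 1)
```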
4. Rizzo, Roberto G., Andrea Calimera, and Jun Zhou. "Corrigendum to 'Approximate error detection-correction for efficient adaptive voltage Over-Scaling' [Integration 63 (2018) 220–231]." Integration 70 (January 2020): 159. http://dx.doi.org/10.1016/j.vlsi.2019.11.011.

5. Chen, JH, JJ Zhang, RJ Gao, CH Jiang, R. Ma, ZM Qi, H. Jin, HD Zhang, and XC Wang. "Research on Modified Algorithms of Cylindrical External Thread Profile Based on Machine Vision." Measurement Science Review 20, no. 1 (February 1, 2020): 15–21. http://dx.doi.org/10.2478/msr-2020-0003.

Abstract:
In the non-contact detection of thread profile boundary correction, it remains challenging to ensure that the thread axis intersects the CCD camera axis perpendicularly. Here, we addressed this issue using modified algorithms. We established a Cartesian coordinate system according to the spatial geometric relationship of the thread, using the center of the bottom of the thread as the origin, and the image at the extreme position was replaced by the image at the approximate extreme position. In addition, we analyzed the relationship between the boundary of the theoretical thread image and the theoretical profile. We calculated the coordinate transformation of points on the theoretical tooth profile and the coordinate function of points on the boundary of the theoretical image. At the same time, the extreme value of the function was obtained, and the boundary equation of the theoretical thread image was deduced. The difference equation between the two functions was used to correct the boundary points of the actual thread image, and the fitting results were used to detect the key parameters of the external thread of the cylinder. Further experiments prove that the above algorithm effectively improves the detection accuracy of thread quality, and the detection error of the main geometric parameters is reduced by more than 50%.
6. Finkelstein, S. M., J. R. Budd, Lisa B. Ewing, L. Catherine, W. J. Warwick, and Sue J. Kujawa. "Data Quality Assurance for a Health Monitoring Program." Methods of Information in Medicine 24, no. 04 (October 1985): 192–96. http://dx.doi.org/10.1055/s-0038-1635372.

Abstract:
The objective of data quality assurance procedures in clinical studies is to reduce the number of data errors that appear on the data record to a level which is acceptable and compatible with the ultimate use of the recorded information. A semi-automatic procedure has been developed to detect and correct data entry errors in a study of the feasibility and efficacy of home health monitoring for patients with cystic fibrosis. Daily self-measurements are recorded in a diary, mailed to the study coordinating center weekly, and entered into the study's INSIGHT clinical database. A statistical error detection test has been combined with manual error correction to provide a satisfactory, reasonable-cost procedure for such a program. Approximately 76% of the errors from a test diary entry period were detected and corrected by this method. Those errors not detected were within an acceptable range so as not to impact the clinical decisions derived from this data. A completely manual method detected SS% of all errors, but the review and correction process was four times more costly, based on the time needed to conduct each procedure.
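As a flavor of the "statistical error detection test," here is a minimal robust outlier screen: entries that deviate from the patient's own median by a large robust z-score are flagged for manual review. The threshold, the field, and the numbers are invented for illustration; the paper's actual test is not reproduced here.

```python
# Minimal sketch of semi-automatic data quality assurance: a statistical
# screen flags implausible diary entries; a human reviews only those.
import statistics

def flag_entries(values, z_limit=3.5):
    """Flag entries whose robust z-score (median/MAD) exceeds z_limit."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > z_limit]

if __name__ == "__main__":
    # hypothetical daily self-measurements; 620 looks like a keying error
    diary = [312, 305, 298, 320, 310, 620, 301, 308]
    print(flag_entries(diary))  # -> [5]; a reviewer corrects it by hand
```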
7. Alicki, Robert. "Quantum Decay Cannot Be Completely Reversed: The 5% Rule." Open Systems & Information Dynamics 16, no. 01 (March 2009): 49–53. http://dx.doi.org/10.1142/s1230161209000049.

Abstract:
Using an exactly solvable model of the Wigner-Weisskopf atom, it is shown that an unstable quantum state cannot be recovered completely by the procedure involving detection of the decay products followed by the creation of time-reversed decay products state, as proposed in [1]. The universal lower bound on the recovery error is approximately equal to 5% of the error per cycle — the dimensionless parameter characterizing decay process in the Markovian approximation. This result has consequences for the efficiency of quantum error correction procedures which are based on syndrome measurements and corrective operations.
8. Goes, Marlos, Gustavo Goni, and Klaus Keller. "Reducing Biases in XBT Measurements by Including Discrete Information from Pressure Switches." Journal of Atmospheric and Oceanic Technology 30, no. 4 (April 1, 2013): 810–24. http://dx.doi.org/10.1175/jtech-d-12-00126.1.

Abstract:
Biases in the depth estimation of expendable bathythermograph (XBT) measurements cause considerable errors in oceanic estimates of climate variables. Efforts are currently underway to improve XBT probes by including pressure switches. Information from these pressure measurements can be used to minimize errors in the XBT depth estimation. This paper presents a simple method to correct the XBT depth biases using a number of discrete pressure measurements. A blend of controlled simulations of XBT measurements and collocated XBT/CTD data is used along with statistical methods to estimate error parameters, and to optimize the use of pressure switches in terms of number of switches, optimal depth detection, and errors in the pressure switch measurements to most efficiently correct XBT profiles. The results show that given the typical XBT depth biases, using just two pressure switches is a reliable strategy for reducing depth errors, as it uses the fewest switches for improved accuracy and reduces the variance of the resulting correction. Using only one pressure switch efficiently corrects XBT depth errors when the surface depth offset is small, its optimal location is at middepth (around or below 300 m), and the pressure switch measurement errors are insignificant. If two pressure switches are used, then results indicate that the measurements should be taken in the lower thermocline and deeper in the profile, at approximately 80 and 600 m, respectively, with an RMSE of approximately 1.6 m for pressure errors of 1 m.
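The correction itself can be stated compactly: two pressure-switch firings give "true" depths at two points of the profile, which is enough to determine a linear correction applied to the biased fall-rate depths. The sketch below, with made-up numbers, assumes a purely linear depth bias; the paper estimates the error parameters statistically rather than from a two-point fit.

```python
# Sketch: fit a linear depth correction z_true = slope * z_xbt + offset
# from two pressure-switch reference depths, then apply it to a profile.

def linear_depth_correction(z_xbt, z_true):
    """Fit the correction from exactly two reference pairs."""
    (x1, x2), (y1, y2) = z_xbt, z_true
    slope = (y2 - y1) / (x2 - x1)
    offset = y1 - slope * x1
    return slope, offset

if __name__ == "__main__":
    # hypothetical switch firings near 80 m and 600 m, as the paper recommends
    slope, offset = linear_depth_correction(z_xbt=(78.0, 588.0),
                                            z_true=(80.0, 600.0))
    corrected = [slope * z + offset for z in (100.0, 300.0, 500.0)]
    print(round(slope, 4), round(offset, 2), corrected)
```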
9. Pesantez-Narvaez, Jessica, Montserrat Guillen, and Manuela Alcañiz. "RiskLogitboost Regression for Rare Events in Binary Response: An Econometric Approach." Mathematics 9, no. 5 (March 9, 2021): 579. http://dx.doi.org/10.3390/math9050579.

Abstract:
A boosting-based machine learning algorithm is presented to model a binary response with large imbalance, i.e., a rare event. The new method (i) reduces the prediction error of the rare class, and (ii) approximates an econometric model that allows interpretability. RiskLogitboost regression includes a weighting mechanism that oversamples or undersamples observations according to their misclassification likelihood and a generalized least squares bias correction strategy to reduce the prediction error. An illustration using a real French third-party liability motor insurance data set is presented. The results show that RiskLogitboost regression improves the rate of detection of rare events compared to some boosting-based and tree-based algorithms and some existing methods designed to treat imbalanced responses.
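The reweighting mechanism can be illustrated with a generic boosting loop: observations misclassified in one round are upweighted in the next, so the rare class gradually gains influence. The stump-based AdaBoost sketch below is a stand-in, not the authors' estimator (which works on the logistic scale and adds a GLS bias correction); the synthetic data and round count are arbitrary.

```python
# Generic boosting with misclassification-driven reweighting (AdaBoost-style).
import numpy as np

def stump_fit(X, y, w):
    """Best single-feature threshold rule under weights w (labels in {-1,+1})."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def boost(X, y, rounds=20):
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        j, t, pol, err = stump_fit(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
        w = w * np.exp(-alpha * y * pred)  # upweight misclassified points
        w = w / w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, p in ensemble)
    return np.where(score >= 0, 1, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = np.where((X[:, 0] > 1.5) & (X[:, 1] > 0.5), 1, -1)  # rare positives
    model = boost(X, y)
    pos = y == 1
    print("overall accuracy:", (predict(model, X) == y).mean())
    print("rare-event detection rate:", (predict(model, X)[pos] == 1).mean())
```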
10. He, Huanran, Suxiang Yao, Anning Huang, and Kejian Gong. "Evaluation and Error Correction of the ECMWF Subseasonal Precipitation Forecast over Eastern China during Summer." Advances in Meteorology 2020 (March 17, 2020): 1–20. http://dx.doi.org/10.1155/2020/1920841.

Abstract:
Subseasonal-to-seasonal (S2S) prediction is a highly regarded skill around the world. To improve the S2S forecast skill, an S2S prediction project and an extensive database have been established. In this study, the European Center for Medium-Range Weather Forecasts (ECMWF) model hindcast, which participates in the S2S prediction project, is systematically assessed by focusing on the hindcast quality for the summer accumulated ten-day precipitation at lead times of 0–30 days during 1995–2014 in eastern China. Additionally, the hindcast error is corrected by utilizing the preceding sea surface temperature (SST). The metrics employed to measure the ECMWF hindcast performance indicate that the ECMWF model performance drops as the lead time increases and exhibits strong interannual differences among the five subregions of eastern China. In addition, the precipitation forecast skill of the ECMWF hindcast is best at approximately 15 days in some areas of Southeast China; after correcting the forecast error, the forecast skill is increased to 30 days. At lead times of 0–30 days, regardless of whether the forecast error is corrected, the root mean square errors are lowest in Northeast China. After correcting the forecast error, the performance of the ECMWF hindcast shows better improvement in depicting the quantity and temporal and spatial variation of precipitation at lead times of 0–30 days in eastern China. The false alarm ratio (FAR), probability of detection (POD), and equitable threat score (ETS) reveal that the ECMWF model has a preferable performance at forecasting accumulated ten-day precipitation rates of approximately 20∼50 mm and indicates an improved hindcast quality after the forecast error correction. In short, adopting the preceding SST to correct the summer subseasonal precipitation of the ECMWF hindcast is preferable.
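In its simplest form, the correction strategy reduces to regressing the hindcast error on the preceding SST and subtracting the predicted error from new forecasts. The sketch below does exactly that on synthetic data; the paper's procedure is more elaborate (subregions, lead times, categorical skill scores), so treat this as a schematic.

```python
# Schematic SST-based forecast error correction on synthetic data.
import numpy as np

def fit_sst_correction(sst, forecast, observed):
    """Least-squares fit of the error model: error = a * sst + b."""
    a, b = np.polyfit(sst, forecast - observed, 1)
    return a, b

def apply_correction(sst, forecast, a, b):
    return forecast - (a * sst + b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sst = rng.normal(0, 1, 200)                        # preceding SST anomaly
    obs = 50 + 5 * rng.normal(size=200)                # ten-day precip (mm)
    fc = obs + 3.0 * sst + 2.0 + rng.normal(size=200)  # SST-dependent bias
    a, b = fit_sst_correction(sst, fc, obs)
    rmse_raw = np.sqrt(np.mean((fc - obs) ** 2))
    rmse_fix = np.sqrt(np.mean((apply_correction(sst, fc, a, b) - obs) ** 2))
    print(f"RMSE before {rmse_raw:.2f}, after {rmse_fix:.2f}")
```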

Book chapters on the topic "Approximate Error Detection-Correction"

1. Yu, Tao. "Unambiguous Phase Difference Direction Finding Based on Short Baseline Array." In Passive Location Method Based on Phase Difference Measurement, 88–125. Bentham Science Publishers, 2022. http://dx.doi.org/10.2174/9789815079425122010006.

Abstract:
In this chapter, the ambiguity-free ("fuzzy-free") phase difference direction finding method based on a short baseline array is introduced and three different methods are presented. Firstly, the virtual short baseline direction finding method based on a one-dimensional double-base asymmetrical array is studied in depth; the virtual baseline is constructed by subtraction of the ratios of different sides between two adjacent baselines. The author's findings show that, although the difference between the lengths of two adjacent baselines is less than half a wavelength, the difference of the integer numbers of wavelengths will not be zero at some arrival angles but will jump. In this regard, the correction can be realized by determining the sine of the arrival angle, adopting a method similar to the ambiguity-free detection analysis of the phase difference rate. The second approach is the orthogonal phase difference direction finding method based on equivalent simulation. It is found in simulation that the curve shape of the differential function of path difference per unit wavelength obtained after phase jump correction is very similar to that of the cosine function. If the maximum value of the function is used for normalization and simple square root processing, then the function obtained is basically equivalent to the cosine function. It can then be proved in principle that the results given are equivalent to those of the Doppler direction finding technique. Then, using the orthogonal array, the maximum value of the function, which cannot be known in a one-dimensional array, is eliminated by means of the orthogonal ratio, so real-time direction finding based on phase difference measurement without phase ambiguity is realized. The third approach is the airborne direction finding method based on Doppler-phase measurement. The study shows that the airborne single-baseline interferometer can achieve high-precision direction finding without phase ambiguity after integrating Doppler measurement information. The main method is to directly obtain the wavelength integer solution of the radial distance by comprehensively utilizing the velocity vector equation, Doppler frequency shift and its rate of change. Thus, the integer number of wavelengths contained in the path difference between two adjacent array elements can be given. By means of the phase difference measurement, the sub-wavelength part of the path difference can be determined. This chapter also explores the effect of phase difference measurement errors on the difference of wavelength integers. The expression of the wavelength number difference based on the phase difference measurement can also be approximated by the unambiguous phase difference direction finding method based on the virtual short baseline. The root mean square measurement error of the wavelength number difference is derived. The analysis reveals that the wavelength number difference has little effect on the accuracy of single-baseline phase difference direction finding.
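The first method lends itself to a compact numeric demonstration. In the sketch below, two long baselines whose difference is at most half a wavelength form a virtual short baseline: the phase difference across it gives a coarse, unambiguous angle, which then fixes the integer number of wavelengths on one long baseline for a precise estimate. Geometry and noise are idealized, and the specific wavelength and baseline lengths are assumptions, not values from the chapter.

```python
# Virtual-short-baseline ambiguity resolution, idealized and noise-free.
import numpy as np

LAM = 0.10            # wavelength (m), assumed
D1, D2 = 0.55, 0.60   # baselines (m); D2 - D1 = 0.05 <= LAM / 2

def wrapped_phase(d, theta):
    """Measured phase difference, wrapped into (-pi, pi]."""
    full = 2 * np.pi * d * np.sin(theta) / LAM
    return np.angle(np.exp(1j * full))

def estimate_angle(phi1, phi2):
    # coarse step: the virtual short baseline D2 - D1 is unambiguous
    dphi = np.angle(np.exp(1j * (phi2 - phi1)))
    sin_coarse = dphi * LAM / (2 * np.pi * (D2 - D1))
    # fine step: resolve the integer wavelength count k on baseline D2
    k = round(D2 * sin_coarse / LAM - phi2 / (2 * np.pi))
    sin_fine = (phi2 / (2 * np.pi) + k) * LAM / D2
    return np.degrees(np.arcsin(np.clip(sin_fine, -1, 1)))

if __name__ == "__main__":
    true_theta = np.radians(37.0)
    phi1, phi2 = wrapped_phase(D1, true_theta), wrapped_phase(D2, true_theta)
    print(estimate_angle(phi1, phi2))  # ~= 37.0
```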

Conference papers on the topic "Approximate Error Detection-Correction"

1. Mehta, Ashutosh, Shivani Maurya, Nawaz Sharief, Babu M. Pranay, Srivatsava Jandhyala, and Suresh Purini. "Accuracy-configurable approximate multiplier with error detection and correction." In TENCON 2015 - 2015 IEEE Region 10 Conference. IEEE, 2015. http://dx.doi.org/10.1109/tencon.2015.7372902.

2. Koulidis, Alexis, Guang Ooi, Jelena Skenderija, and Shehab Ahmed. "Drilling Data Quality and Reliability: A Novel Algorithm for Data Correction and Validation." In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22139-ms.

Abstract:
The difficulty of drilling data analysis stems from the fact that such data often contain errors and disturbances such as missing samples, desynchronization, misinterpretation, ambient noise, and unfit equations and models. In this study, an algorithm is developed to identify different operations during the drilling process and automate the procedure of validating data quality. The proper analysis and processing of drilling data is crucial in ensuring its quality. Initially, the algorithm separates drilling data reported in 5 sections from the Equinor Volve dataset into different intervals depending on their mode of operation. Additionally, the algorithm performs data correction and provides an analysis of data quality. The data is corrected based on a series of physics- and operation-informed conditions. Key performance indicators (KPIs) are calculated for the rate of penetration (ROP), weight on bit (WOB), bit depth, measured depth, and torque. The KPIs include validity, accuracy, and Kuder-Richardson 20 (KR-20) scores. The KR-20 score, which is a measure of the lower bound of the reliability of data, is selected since it considers the difficulty of measurement of each attribute to be unequal, i.e., some attributes may be more prone to error than others. Based on the results, the corrected data displays better correlation between the aforementioned drilling data. The results prove that the intelligent software analysis provides an automated workflow that allows the separation of all operations and their required time, in addition to the near-instantaneous generation of corrected data and data uncertainty qualification. The detection of invalid data points was performed by investigating every operation (e.g., rotating off bottom, drilling mode) and evaluating the corresponding measurements. The analysis showed that approximately 76% of ROP values are invalid data, which confirms the importance of data correction before the data can be used for validation. The developed algorithm allows the rapid and reliable analysis of unprocessed drilling data to facilitate decision making and quality control. The key advantages of the intelligent algorithm are that it provides fast data assimilation and comprehensive analysis of drilling parameters by allowing rapid visualization and quantification of data quality and reliability. The correlation of drilling parameters for various operations can be evaluated and visualized with correlation matrices (heatmaps). The primary qualification from the original set of data shows a low correlation, i.e., rate of penetration values in the rotating off bottom operation, which indicates low data validity. The current method and procedure show a significant correction of data point correlation from −1 to up to 1 depending on the operation. More importantly, for a specific set of the original data, ROP showed a validity of 0.23, in contrast to 0.75 for bit depth, with accuracies of 0.58 and 0.99, respectively.
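Of the three KPIs, KR-20 is the least familiar; for binary validity flags it is computed as below. How the authors map drilling channels onto test "items" is not reproduced here, so the flag matrix in the example is synthetic.

```python
# Kuder-Richardson 20: a lower bound on the reliability of a set of
# binary items, KR20 = k/(k-1) * (1 - sum(p*q) / var(total scores)).
import numpy as np

def kr20(items: np.ndarray) -> float:
    """items: (n_records, k_items) 0/1 matrix of validity flags."""
    n, k = items.shape
    p = items.mean(axis=0)                      # pass rate per item
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    quality = rng.normal(size=(100, 1))          # latent record quality
    flags = ((quality + 0.7 * rng.normal(size=(100, 5))) > 0).astype(int)
    print(round(kr20(flags), 3))                 # reliability lower bound
```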
3. Hattori, Takatoshi, and Michiya Sasaki. "Development of Waste Monitor of Clearance Level to Ensure Social Reliance on Recycled Metal From Nuclear Facilities." In ASME 2003 9th International Conference on Radioactive Waste Management and Environmental Remediation. ASMEDC, 2003. http://dx.doi.org/10.1115/icem2003-4534.

Abstract:
Metal and concrete wastes in the decommissioning of nuclear facilities are classified according to their radioactivity level after decontamination. Radioactive waste below the clearance level (e.g., 0.4 Bq/g for Co-60 in Japan) can be disposed of as general industrial waste or recycled. Metal wastes mainly originate from equipment in buildings, except for the metal bars in reinforced concrete. Since contaminated equipment must be decontaminated after dismantling, the main target of measurement would be fragments of equipment of various shapes, numbers and sizes. In order to transport such metallic fragments out of controlled areas, a surface contamination survey must be performed to confirm that the contamination level is below the legal standard level (e.g., 4 Bq/cm² for beta or gamma emitters in Japan) in addition to satisfying the clearance level. Taking account of social reliance on recycled metal after inspection of the clearance level and the surface contamination level, it is important to remove the possibility of overlooking contamination above these levels in the recycled metal. The measurement of beta rays is suitable for determining surface contamination on metal because almost none of the beta particles from inside the metal can be detected and the detected radiation can be mostly limited to that from the surface. This is the reason why a survey meter for measuring surface contamination has a detector with a higher sensitivity for beta particles than for gamma rays. Considering the characteristics of the survey meter, it may be difficult to measure the contamination level of the surface of a metal fragment, particularly when the surface is not flat. Moreover, in the case of internal contamination of a small metal pipe, measurement is impossible. The permeability of gamma rays is much greater than that of beta particles. Therefore, gamma rays can be detected even from internal contamination in metal. For gamma ray measurement, accurate and easy calibration of the actual radioactivity level and count rate obtained using a measurement instrument is important. If gamma ray measurement can confirm that the radioactivity level is less than about 400 Bq, both the clearance level and the surface contamination level could be inspected simultaneously. In addition, the great amount of labor needed for manual inspection using a survey meter could be saved, and there would be no possibility of missing hot spots of radioactivity due to human error. In this study, a new technique for precise and automatic measurement of gamma emitters in metal waste has been developed using 3D noncontact shape measurement and Monte-Carlo calculation techniques to objectively confirm that the specific radioactivity level of metal waste satisfies the clearance level and, furthermore, that the surface contamination level of the metal waste is below the legal standard level. The technique can yield a calibration factor for every measurement target automatically and realizes automatic correction for the reduction of the background count rate in gamma measurements due to the self-shielding effect of the measurement target. A practical monitor (Clearance Automatic Laser Inspection System, CLALIS) has been developed. The accuracy of the automatic calibration and correction of background reduction of the practical monitor has been clarified using mock metal wastes of various shapes, numbers and sizes. It was found that the values measured using the present monitor and the actual radioactivity level agreed within ±20%, and the corrected and actual background reductions agreed within ±2%. The detection limit of the present monitor was estimated as being 100 Bq for Co-60, taking into consideration the calibration error and correction error of the reduction of the background count rate. The monitor accomplished precise measurements with a 100 sec (30 sec for gamma ray measurement, 30 sec for background measurement) process time per inspection. This indicates that approximately 5 tons of metal waste can be measured per day (1,000 tons per year) in 20 kg batches at that process speed.
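The monitor's final decision step can be sketched as a few lines of arithmetic: a net gamma count rate is converted to activity through an item-specific calibration factor (which CLALIS derives from 3D shape measurement and Monte-Carlo simulation) and divided by mass for comparison with the clearance level. All numeric values below are placeholders, not instrument data.

```python
# Toy clearance decision: net count rate -> activity -> specific activity.
CLEARANCE_BQ_PER_G = 0.4   # Co-60 clearance level in Japan

def specific_activity(gross_cps, background_cps, cal_factor_bq_per_cps,
                      mass_g):
    """Convert a net count rate (cps) into specific activity (Bq/g)."""
    net = max(gross_cps - background_cps, 0.0)
    return net * cal_factor_bq_per_cps / mass_g

if __name__ == "__main__":
    act = specific_activity(gross_cps=12.3, background_cps=11.8,
                            cal_factor_bq_per_cps=150.0, mass_g=20000.0)
    verdict = "clear" if act < CLEARANCE_BQ_PER_G else "re-inspect"
    print(f"{act:.4f} Bq/g -> {verdict}")
```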