Journal articles on the topic 'Approximate Error Detection-Correction'

Consult the top 28 journal articles for your research on the topic 'Approximate Error Detection-Correction.'


1

Rizzo, Roberto G., Andrea Calimera, and Jun Zhou. "Approximate Error Detection-Correction for efficient Adaptive Voltage Over-Scaling." Integration 63 (September 2018): 220–31. http://dx.doi.org/10.1016/j.vlsi.2018.04.008.

2

Yang, Zhixi, Xianbin Li, and Jun Yang. "Power Efficient and High-Accuracy Approximate Multiplier with Error Correction." Journal of Circuits, Systems and Computers 29, no. 15 (June 30, 2020): 2050241. http://dx.doi.org/10.1142/s0218126620502412.

Abstract:
Approximate arithmetic circuits have been considered an innovative circuit paradigm with improved performance for error-resilient applications that can tolerate a certain loss of accuracy. In this paper, a novel approximate multiplier with a different scheme of partial product reduction is proposed. An analysis of accuracy (measured by error distance, pass rate and accuracy of amplitude) as well as circuit-based design metrics (power, delay, area, etc.) is used to assess the performance of the proposed approximate multiplier. Extensive simulation results show that the proposed design achieves higher accuracy than approximate multipliers from previous works. Moreover, the proposed design performs better under comprehensive comparisons taking both accuracy and circuit-related metrics into consideration. In addition, an error detection and correction (EDC) circuit is used to correct the approximate results to accurate results. Compared with the exact Wallace tree multiplier, the proposed approximate multiplier with the error detection and correction circuit still achieves up to 15% and 10% savings in power and delay, respectively.
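As a rough illustration of the error detection-and-correction idea in this abstract (a behavioral sketch, not the authors' circuit), the following Python model shows a truncation-style approximate multiplier that keeps only the high-by-high partial product; the dropped partial products form the error term that an EDC stage can add back to recover the exact result:

```python
def approx_mul(a, b, k=4):
    """Approximate multiplier sketch: drop the low-order partial products.

    Splits each non-negative operand into high and low k-bit parts and keeps
    only the high-by-high partial product (a common truncation-style
    approximation; the split width k is an illustrative assumption).
    Returns (approximate_product, error_term).
    """
    mask = (1 << k) - 1
    ah, al = a >> k, a & mask
    bh, bl = b >> k, b & mask
    approx = (ah * bh) << (2 * k)                   # kept partial product
    error = ((ah * bl + al * bh) << k) + al * bl    # dropped partial products
    return approx, error

def edc_correct(a, b, k=4):
    """EDC stage: adding the error term back restores the exact product."""
    approx, error = approx_mul(a, b, k)
    return approx + error
```

The approximate path alone underestimates the product; the EDC path trades back the saved work for exactness, mirroring the power/accuracy trade-off the abstract reports.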
3

Babenko, Mikhail, Anton Nazarov, Maxim Deryabin, Nikolay Kucherov, Andrei Tchernykh, Nguyen Viet Hung, Arutyun Avetisyan, and Victor Toporkov. "Multiple Error Correction in Redundant Residue Number Systems: A Modified Modular Projection Method with Maximum Likelihood Decoding." Applied Sciences 12, no. 1 (January 4, 2022): 463. http://dx.doi.org/10.3390/app12010463.

Abstract:
Error detection and correction codes based on redundant residue number systems are powerful tools to control and correct arithmetic processing and data transmission errors. Decoding the magnitude and location of a multiple error is a complex computational problem: it requires verifying a huge number of different possible combinations of erroneous residual digit positions in the error localization stage. This paper proposes a modified correcting method based on calculating the approximate weighted characteristics of modular projections. The new procedure for correcting errors and restoring numbers in a weighted number system involves the Chinese Remainder Theorem with fractions. This approach calculates the rank of each modular projection efficiently. The ranks are used to calculate the Hamming distances. The new method speeds up the procedure for correcting multiple errors and restoring numbers in weighted form by an average of 18% compared to state-of-the-art analogs.
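The modular-projection idea can be sketched in a few lines of Python. This is a simplified single-error corrector, not the authors' maximum-likelihood method, and the moduli and redundancy are illustrative: a value reconstructed outside the legitimate range signals an error, and dropping one residue at a time (a modular projection) locates the erroneous digit:

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction (moduli pairwise coprime)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

def rrns_decode(residues, moduli, n_info):
    """Decode an RRNS word, correcting at most one erroneous residue.

    The first n_info moduli define the legitimate range; the remaining
    moduli are redundant. A reconstruction outside the legitimate range
    signals an error; the projection that excludes the corrupted residue
    falls back inside the range.
    """
    legit = prod(moduli[:n_info])
    x = crt(residues, moduli)
    if x < legit:
        return x                       # no error detected
    for i in range(len(moduli)):       # try each modular projection
        sub_r = residues[:i] + residues[i + 1:]
        sub_m = moduli[:i] + moduli[i + 1:]
        y = crt(sub_r, sub_m)
        if y < legit:
            return y                   # projection excluding the bad digit
    raise ValueError("uncorrectable error")
```

For example, with moduli (3, 5, 7, 11, 13) and a legitimate range of 3·5·7 = 105, corrupting any single residue of a value below 105 is detected and corrected by the projection search.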
4

Rizzo, Roberto G., Andrea Calimera, and Jun Zhou. "Corrigendum to 'Approximate error detection-correction for efficient adaptive voltage Over-Scaling' [Integration 63 (2018) 220–231]." Integration 70 (January 2020): 159. http://dx.doi.org/10.1016/j.vlsi.2019.11.011.

5

Chen, JH, JJ Zhang, RJ Gao, CH Jiang, R. Ma, ZM Qi, H. Jin, HD Zhang, and XC Wang. "Research on Modified Algorithms of Cylindrical External Thread Profile Based on Machine Vision." Measurement Science Review 20, no. 1 (February 1, 2020): 15–21. http://dx.doi.org/10.2478/msr-2020-0003.

Abstract:
In the non-contact detection of thread profile boundary correction, it remains challenging to ensure that the thread axis intersects the CCD camera axis perpendicularly. Here, we addressed this issue using modified algorithms. We established a Cartesian coordinate system according to the spatial geometric relationship of the thread, using the center of the bottom of the thread as the origin, and replaced the extreme-position image with the image of the approximate extreme position. In addition, we analyzed the relationship between the boundary of the theoretical thread image and the theoretical profile. We calculated the coordinate transformation of points on the theoretical tooth profile and the coordinate function of points on the boundary of the theoretical image. At the same time, the extreme value of the function was obtained, and the boundary equation of the theoretical thread image was deduced. The difference equation between the two functions was used to correct the boundary points of the actual thread image, and the fitting results were used to detect the key parameters of the external thread of the cylinder. Further experiments prove that the above algorithm effectively improves the detection accuracy of thread quality, reducing the detection error of the main geometric parameters by more than 50%.
6

Finkelstein, S. M., J. R. Budd, Lisa B. Ewing, L. Catherine, W. J. Warwick, and Sue J. Kujawa. "Data Quality Assurance for a Health Monitoring Program." Methods of Information in Medicine 24, no. 04 (October 1985): 192–96. http://dx.doi.org/10.1055/s-0038-1635372.

Abstract:
The objective of data quality assurance procedures in clinical studies is to reduce the number of data errors that appear on the data record to a level which is acceptable and compatible with the ultimate use of the recorded information. A semi-automatic procedure has been developed to detect and correct data entry errors in a study of the feasibility and efficacy of home health monitoring for patients with cystic fibrosis. Daily self-measurements are recorded in a diary, mailed to the study coordinating center weekly, and entered into the study's INSIGHT clinical database. A statistical error detection test has been combined with manual error correction to provide a satisfactory, reasonable-cost procedure for such a program. Approximately 76% of the errors from a test diary entry period were detected and corrected by this method. Those errors not detected were within an acceptable range so as not to impact the clinical decisions derived from this data. A completely manual method detected 55% of all errors, but the review and correction process was four times more costly, based on the time needed to conduct each procedure.
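The paper's statistical error detection test is not specified in the abstract; a hypothetical minimal version (the z-score form and threshold are assumptions, not the published method) would flag diary entries that deviate strongly from the series mean and route them to manual review:

```python
from statistics import mean, stdev

def flag_entry_errors(values, z_thresh=2.5):
    """Flag likely data-entry errors: values far from the series mean.

    A crude stand-in for a statistical error detection screen. Indices
    returned here would be routed to manual review and correction, in the
    spirit of the semi-automatic procedure described above.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) > z_thresh * sigma]
```

A misplaced decimal point (e.g. 21.0 entered instead of 2.1 in a series of readings near 2) stands out clearly under such a screen, while normal day-to-day variation does not.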
7

Alicki, Robert. "Quantum Decay Cannot Be Completely Reversed: The 5% Rule." Open Systems & Information Dynamics 16, no. 01 (March 2009): 49–53. http://dx.doi.org/10.1142/s1230161209000049.

Abstract:
Using an exactly solvable model of the Wigner-Weisskopf atom, it is shown that an unstable quantum state cannot be recovered completely by a procedure involving detection of the decay products followed by the creation of a time-reversed decay-product state, as proposed in [1]. The universal lower bound on the recovery error is approximately equal to 5% of the error per cycle — the dimensionless parameter characterizing the decay process in the Markovian approximation. This result has consequences for the efficiency of quantum error correction procedures that are based on syndrome measurements and corrective operations.
8

Goes, Marlos, Gustavo Goni, and Klaus Keller. "Reducing Biases in XBT Measurements by Including Discrete Information from Pressure Switches." Journal of Atmospheric and Oceanic Technology 30, no. 4 (April 1, 2013): 810–24. http://dx.doi.org/10.1175/jtech-d-12-00126.1.

Abstract:
Biases in the depth estimation of expendable bathythermograph (XBT) measurements cause considerable errors in oceanic estimates of climate variables. Efforts are currently underway to improve XBT probes by including pressure switches. Information from these pressure measurements can be used to minimize errors in the XBT depth estimation. This paper presents a simple method to correct the XBT depth biases using a number of discrete pressure measurements. A blend of controlled simulations of XBT measurements and collocated XBT/CTD data is used along with statistical methods to estimate error parameters, and to optimize the use of pressure switches in terms of number of switches, optimal depth detection, and errors in the pressure switch measurements to most efficiently correct XBT profiles. The results show that given the typical XBT depth biases, using just two pressure switches is a reliable strategy for reducing depth errors, as it uses the least number of switches for an improved accuracy and reduces the variance of the resulting correction. Using only one pressure switch efficiently corrects XBT depth errors when the surface depth offset is small, its optimal location is at middepth (around or below 300 m), and the pressure switch measurement errors are insignificant. If two pressure switches are used, then results indicate that the measurements should be taken in the lower thermocline and deeper in the profile, at approximately 80 and 600 m, respectively, with an RMSE of approximately 1.6 m for pressure errors of 1 m.
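Setting the paper's statistical machinery aside, the core of a two-switch depth correction can be sketched as a linear fit between the depths reported by the fall-rate equation at the moments the switches fire and the depths implied by the switch pressures (the function name and numbers here are illustrative, not the authors' implementation):

```python
def fit_depth_correction(xbt_depths, switch_depths):
    """Fit a linear depth correction z_true ~ a * z_xbt + b.

    xbt_depths: depths from the fall-rate equation at the switch firings.
    switch_depths: depths implied by the pressure-switch measurements.
    With two switches at distinct depths the line is determined exactly;
    this ordinary least-squares form also accepts more points.
    """
    n = len(xbt_depths)
    sx = sum(xbt_depths)
    sy = sum(switch_depths)
    sxx = sum(x * x for x in xbt_depths)
    sxy = sum(x * y for x, y in zip(xbt_depths, switch_depths))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # needs >=2 distinct depths
    b = (sy - a * sx) / n
    return a, b
```

Applying the fitted (a, b) to the whole profile rescales every depth, which is why, as the abstract notes, two well-placed switches can correct both a depth-proportional bias and a surface offset.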
9

Pesantez-Narvaez, Jessica, Montserrat Guillen, and Manuela Alcañiz. "RiskLogitboost Regression for Rare Events in Binary Response: An Econometric Approach." Mathematics 9, no. 5 (March 9, 2021): 579. http://dx.doi.org/10.3390/math9050579.

Abstract:
A boosting-based machine learning algorithm is presented to model a binary response with large imbalance, i.e., a rare event. The new method (i) reduces the prediction error of the rare class, and (ii) approximates an econometric model that allows interpretability. RiskLogitboost regression includes a weighting mechanism that oversamples or undersamples observations according to their misclassification likelihood and a generalized least squares bias correction strategy to reduce the prediction error. An illustration using a real French third-party liability motor insurance data set is presented. The results show that RiskLogitboost regression improves the rate of detection of rare events compared to some boosting-based and tree-based algorithms and some existing methods designed to treat imbalanced responses.
10

He, Huanran, Suxiang Yao, Anning Huang, and Kejian Gong. "Evaluation and Error Correction of the ECMWF Subseasonal Precipitation Forecast over Eastern China during Summer." Advances in Meteorology 2020 (March 17, 2020): 1–20. http://dx.doi.org/10.1155/2020/1920841.

Abstract:
Subseasonal-to-seasonal (S2S) prediction is a highly regarded forecasting skill worldwide. To improve S2S forecast skill, an S2S prediction project and an extensive database have been established. In this study, the European Centre for Medium-Range Weather Forecasts (ECMWF) model hindcast, which participates in the S2S prediction project, is systematically assessed by focusing on the hindcast quality for summer accumulated ten-day precipitation at lead times of 0–30 days during 1995–2014 in eastern China. Additionally, the hindcast error is corrected by utilizing the preceding sea surface temperature (SST). The metrics employed to measure the ECMWF hindcast performance indicate that model performance drops as the lead time increases and exhibits strong interannual differences among the five subregions of eastern China. The precipitation forecast skill of the ECMWF hindcast extends to approximately 15 days in some areas of Southeast China; after correcting the forecast error, the forecast skill is extended to 30 days. At lead times of 0–30 days, regardless of whether the forecast error is corrected, the root mean square errors are lowest in Northeast China. After correcting the forecast error, the ECMWF hindcast better depicts the quantity and the temporal and spatial variation of precipitation at lead times of 0–30 days in eastern China. The false alarm ratio (FAR), probability of detection (POD), and equitable threat score (ETS) reveal that the ECMWF model performs best at forecasting accumulated ten-day precipitation of approximately 20–50 mm, with improved hindcast quality after the forecast error correction. In short, adopting the preceding SST to correct the summer subseasonal precipitation of the ECMWF hindcast is preferable.
11

Jones, Frank E. "LIMITATIONS ON UNDERGROUND STORAGE TANK LEAK DETECTION SYSTEMS." International Oil Spill Conference Proceedings 1989, no. 1 (February 1, 1989): 3–5. http://dx.doi.org/10.7901/2169-3358-1989-1-3.

Abstract:
This paper discusses the limitations imposed on internal volumetric leak detection systems for underground gasoline storage tanks by uncertainty in the value of the thermal expansion coefficient for gasoline and uncertainties in measurements of the temperature of the gasoline. For leak detection or level sensing systems that are used to infer or measure volumetric leak rates, correction must be made to account for the expansion or contraction of the gasoline. An analysis is made of experimental determinations, in other work, of the density of samples of gasoline and calculated values of the thermal expansion coefficient. The data are divided according to three categories of gasoline: regular, unleaded, and premium. In each of these categories the estimate of the standard deviation of the thermal expansion coefficient is approximately 3 percent of the mean value. Examples are given of the magnitude of the apparent leak rate or error in leak rate due to uncertainties in the thermal expansion coefficient. In order to correct for expansion or contraction of the gasoline, the mean temperature of the entire quantity of the gasoline must be known. An error in mean temperature will result in an apparent leak rate or an error in leak rate. Examples are given of the magnitude of the apparent leak rate or error in leak rate.
12

Reddy Hemantha, G., S. Varadarajan, and M. N. Giriprasad. "DA Based Systematic Approach Using Speculative Addition for High Speed DSP Applications." International Journal of Engineering & Technology 7, no. 2.24 (April 25, 2018): 197. http://dx.doi.org/10.14419/ijet.v7i2.24.12030.

Abstract:
In recent years, parallel-prefix topologies have emerged to offer high-speed solutions for many DSP applications. In this paper, carry approximation is introduced to incorporate speculation into the Han-Carlson prefix method, and overall latency is considerably reduced by using a single Brent-Kung addition as the pre- and post-processing unit. To improve reliability, an error detection network is combined with the approximated adder; it asserts the error correction unit whenever speculation fails during carry propagation from the LSB segment to the MSB unit. The proposed speculative adder based on the Han-Carlson parallel-prefix topology attains better latency reduction than the variable-latency Kogge-Stone topology. Finally, a multiply-accumulate (MAC) unit is designed using serial shift-based accumulation, where the proposed speculative adder is used for partial product addition iteratively. The performance merits and latency reduction of the proposed adder unit are demonstrated through FPGA hardware synthesis. The obtained results show that the proposed MAC unit outperforms both previously proposed speculative architectures and other high-speed multiplication methods.
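A behavioral Python model (not RTL, and not the authors' exact design; the split point and speculation window are illustrative) of the speculate-then-correct scheme: the high segment is summed with a carry speculated from a short window of low-order bits, and a detection step asserts a correction whenever the speculation fails:

```python
def speculative_add(a, b, split=8, lookahead=4):
    """Speculative addition sketch for non-negative integers.

    The MSB segment is summed with a carry speculated from only `lookahead`
    bits just below the split, instead of waiting for the full LSB carry
    chain. Error detection compares the speculated carry with the true one
    and applies a (slow-path) correction when they differ.
    """
    mask = (1 << split) - 1
    al, bl = a & mask, b & mask
    ah, bh = a >> split, b >> split
    # Speculate the carry into the high segment from a short window.
    spec_carry = ((al >> (split - lookahead)) +
                  (bl >> (split - lookahead))) >> lookahead
    approx = ((ah + bh + spec_carry) << split) | ((al + bl) & mask)
    # Error detection: recompute the true carry out of the LSB segment.
    true_carry = (al + bl) >> split
    if true_carry != spec_carry:
        approx += (true_carry - spec_carry) << split   # correction step
    return approx
```

The fast path is wrong only when the window's bits sum to the boundary value and a carry ripples in from below, which is exactly the rare case the error detection network in the abstract is there to catch.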
13

Ko, Young Sin, Yoo Mi Choi, Mujin Kim, Youngjin Park, Murtaza Ashraf, Willmer Rafell Quiñones Robles, Min-Ju Kim, et al. "Improving quality control in the routine practice for histopathological interpretation of gastrointestinal endoscopic biopsies using artificial intelligence." PLOS ONE 17, no. 12 (December 15, 2022): e0278542. http://dx.doi.org/10.1371/journal.pone.0278542.

Abstract:
Background: Colorectal and gastric cancer are major causes of cancer-related deaths. In Korea, gastrointestinal (GI) endoscopic biopsy specimens account for a high percentage of histopathologic examinations. Lack of a sufficient pathologist workforce can cause an increase in human errors, threatening patient safety. Therefore, we developed a digital pathology total solution combining artificial intelligence (AI) classifier models and a pathology laboratory information system for GI endoscopic biopsy specimens to establish a post-analytic daily fast quality control (QC) system, which was applied in clinical practice for a 3-month trial run by four pathologists. Methods and findings: Our whole slide image (WSI) classification framework comprised a patch-generator, a patch-level classifier, and a WSI-level classifier. The classifiers were both based on DenseNet (Dense Convolutional Network). In laboratory tests, the WSI classifier achieved accuracy rates of 95.8% and 96.0% in classifying histopathological WSIs of colorectal and gastric endoscopic biopsy specimens, respectively, into three classes (Negative for dysplasia, Dysplasia, and Malignant). Classification by pathologic diagnosis and AI prediction were compared, and daily reviews were conducted, focusing on discordant cases for early detection of potential human errors by the pathologists, allowing immediate correction before the pathology report error is conveyed to the patients. During the 3-month AI-assisted daily QC trial run period, approximately 7–10 times the number of slides compared to that in the conventional monthly QC (33 months) were reviewed by pathologists; nearly 100% of GI endoscopy biopsy slides were double-checked by the AI models. Further, approximately 17–30 times the number of potential human errors were detected within an average of 1.2 days.
Conclusions: The AI-assisted daily QC system that we developed and established demonstrated notable improvements in QC, in quantitative, qualitative, and time utility aspects. Ultimately, we developed an independent AI-assisted post-analytic daily fast QC system that was clinically applicable and influential, which could enhance patient safety.
14

Pahwa, Payal, Rajiv Arora, and Garima Thakur. "An Efficient Algorithm for Data Cleaning." International Journal of Knowledge-Based Organizations 1, no. 4 (October 2011): 56–71. http://dx.doi.org/10.4018/ijkbo.2011100104.

Abstract:
The quality of the real-world data being fed into a data warehouse is a major concern today. Because the data comes from a variety of sources, it must be checked for errors and anomalies before being loaded into the data warehouse. There may be exact duplicate records or approximate duplicate records in the source data. The presence of incorrect or inconsistent data can significantly distort the results of analyses, often negating the potential benefits of information-driven approaches. This paper addresses issues related to the detection and correction of such duplicate records. It also analyzes data quality and the various factors that degrade it. A brief analysis of existing work is discussed, pointing out its major limitations, and a new framework is proposed that improves on the existing technique.
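Approximate-duplicate detection of the kind discussed here is commonly based on normalized edit distance; a minimal sketch (the threshold, record format, and pairwise scan are illustrative assumptions, not the paper's framework):

```python
def norm_distance(s, t):
    """Levenshtein edit distance normalized by the longer string's length."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitution
        prev = cur
    return prev[n] / max(m, n, 1)

def find_approximate_duplicates(records, threshold=0.2):
    """Return index pairs of records that are likely approximate duplicates."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if norm_distance(records[i].lower(), records[j].lower()) <= threshold:
                pairs.append((i, j))
    return pairs
```

Real data-cleaning frameworks avoid the quadratic pairwise scan with blocking or sorted-neighborhood passes, but the matching criterion is the same idea.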
15

Ranzini, Stenio M., Francesco Da Ros, Henning Bülow, and Darko Zibar. "Tunable Optoelectronic Chromatic Dispersion Compensation Based on Machine Learning for Short-Reach Transmission." Applied Sciences 9, no. 20 (October 15, 2019): 4332. http://dx.doi.org/10.3390/app9204332.

Abstract:
In this paper, a machine learning-based tunable optical-digital signal processor is demonstrated for a short-reach optical communication system. The effect of fiber chromatic dispersion after square-law detection is mitigated using a hybrid structure, which shares the complexity between the optical and the digital domain. The optical part mitigates the chromatic dispersion by slicing the signal into small sub-bands and delaying them accordingly, before regrouping the signal again. The optimal delay is calculated in each scenario to minimize the bit error rate. The digital part is a nonlinear equalizer based on a neural network. The results are analyzed in terms of signal-to-noise penalty at the KP4 forward error correction threshold. The penalty is calculated with respect to a back-to-back transmission without equalization. Considering 32 GBd transmission and 0 dB penalty, the proposed hybrid solution shows chromatic dispersion mitigation up to 200 ps/nm (12 km of equivalent standard single-mode fiber length) for stage 1 of the hybrid module and roughly double for the second stage. A simplified version of the optical module is demonstrated with an approximately 1.5 dB penalty compared to the complete two-stage hybrid module. Chromatic dispersion tolerance for a fixed optical structure and a simpler configuration of the nonlinear equalizer is also investigated.
16

Bos, S. P., K. L. Miller, J. Lozi, O. Guyon, D. S. Doelman, S. Vievard, A. Sahoo, et al. "First on-sky demonstration of spatial Linear Dark Field Control with the vector-Apodizing Phase Plate at Subaru/SCExAO." Astronomy & Astrophysics 653 (September 2021): A42. http://dx.doi.org/10.1051/0004-6361/202040134.

Abstract:
Context. One of the key noise sources that currently limits high-contrast imaging observations for exoplanet detection is quasi-static speckles. Quasi-static speckles originate from slowly evolving non-common path aberrations (NCPA). These NCPA are related to the different optics encountered in the wavefront sensing path and the science path, and they also exhibit a chromatic component due to the difference in the wavelength between the science camera and the main wavefront sensor. These speckles degrade the contrast in the high-contrast region (or dark hole) generated by the coronagraph and make the calibration in post-processing more challenging. Aims. The purpose of this work is to present a proof-of-concept on-sky demonstration of spatial Linear Dark Field Control (LDFC). The ultimate goal of LDFC is to stabilize the point spread function by addressing NCPA using the science image as additional wavefront sensor. Methods. We combined spatial LDFC with the Asymmetric Pupil vector-Apodizing Phase Plate (APvAPP) on the Subaru Coronagraphic Extreme Adaptive Optics system at the Subaru Telescope. To allow for rapid prototyping and easy interfacing with the instrument, LDFC was implemented in Python. This limited the speed of the correction loop to approximately 20 Hz. With the APvAPP, we derive a high-contrast reference image to be utilized by LDFC. LDFC is then deployed on-sky to stabilize the science image and maintain the high-contrast achieved in the reference image. Results. In this paper, we report the results of the first successful proof-of-principle LDFC on-sky tests. We present results from two types of cases: (1) correction of instrumental errors and atmospheric residuals plus artificially induced static aberrations introduced on the deformable mirror and (2) correction of only atmospheric residuals and instrumental aberrations. 
When introducing artificial static wavefront aberrations on the DM, we find that LDFC can improve the raw contrast by a factor of 3–7 over the dark hole. In these tests, the residual wavefront error decreased by ∼50 nm RMS, from ∼90 nm to ∼40 nm RMS. In the case with only residual atmospheric wavefront errors and instrumental aberrations, we show that LDFC is able to suppress evolving aberrations that have timescales of < 0.1–0.4 Hz. We find that the power at 10⁻² Hz is reduced by a factor of ∼20, 7, and 4 for spatial frequency bins at 2.5, 5.5, and 8.5λ/D, respectively. Conclusions. We have identified multiple challenges that have to be overcome before LDFC can become an integral part of science observations. The results presented in this work show that LDFC is a promising technique for enabling the high-contrast imaging goals of the upcoming generation of extremely large telescopes.
17

Guo, Wulong, Cheng Wang, Haisheng Zhao, Shaodong Zhang, Le Cao, Peng Xiao, Lu Liu, Liang Chen, and Yuanyuan Zhang. "Ionospheric Sounding Based on Spaceborne PolSAR in P-Band." Atmosphere 13, no. 4 (March 25, 2022): 524. http://dx.doi.org/10.3390/atmos13040524.

Abstract:
The signal of spaceborne low-frequency full-polarization synthetic aperture radar (full-pol SAR) contains abundant ionospheric information. The Phased Array L-band Synthetic Aperture Radar (PALSAR), working in the L-band, has been verified as an emerging ionospheric sounding technology. Looking ahead to a future P-band SAR system, this paper investigates the capability of P-band SAR for one-dimensional and two-dimensional ionospheric detection. First, considering different systematic error levels, the total electron content (TEC) retrieval in the L/P-band is studied by using three typical full-pol SAR data sets based on a circular polarization algorithm. Second, the TEC data retrieved by SAR are fused with the ionosonde, and the joint retrieval of ionospheric electron density is performed. Results show that the P-band TEC retrieval is approximately twice as accurate as the L-band retrieval under the same conditions and possesses excellent robustness. In addition, the TEC obtained by L/P-band SAR can be used to correct the electron density of the topside on the ionosonde. Results also show that, compared with the topside correction accuracy of L-band SAR, that of P-band SAR is improved by more than 20%. SAR has natural high-resolution characteristics, and the P-band signal contains more obvious ionospheric information than the L-band signal. Therefore, future spaceborne P-band SAR has many advantages in two-dimensional fine ionospheric observation and one-dimensional electron density retrieval.
18

Ameer, Ghufran, and Nawal Kh Gazal. "Evaluation of Microwave-Optical-Infrared Satellite Imagery for Land Cover Mapping." Journal of Physics: Conference Series 2114, no. 1 (December 1, 2021): 012090. http://dx.doi.org/10.1088/1742-6596/2114/1/012090.

Abstract:
Satellite images are a vital tool in various applications such as land use and land cover mapping and geographic information systems (GIS). A variety of factors involved in the process of image acquisition introduce geometric distortions, which are removed by pre-processing of the digital imagery. Geometric correction is the process of rectifying geometric errors introduced in the imagery during acquisition. From a practical point of view, Sentinel-1 images are used as the source of microwave satellite imagery, while Sentinel-2 provides the required visible-infrared images. The study includes performing different digital image processing and analysis techniques, such as geometric and radiometric corrections, spatial merge (fusion), feature extraction using different spatial filtering techniques, and spectral classification to reveal which LULC image presents better accuracy results. The microwave portion of the spectrum covers the range from approximately 1 cm to 1 m in wavelength. Because of their long wavelengths, compared to the visible and infrared, microwaves have special properties that are important for remote sensing. Longer-wavelength microwave radiation can penetrate cloud cover, haze, dust, and all but the heaviest rainfall, as the longer wavelengths are not susceptible to the atmospheric scattering which affects shorter optical wavelengths. This property allows detection of microwave energy under almost all weather and environmental conditions, so that data can be collected at any time.
19

Zhang, Qingshan, and Wei Zhang. "An efficient diffraction stacking interferometric imaging location method for microseismic events." GEOPHYSICS 87, no. 3 (March 8, 2022): KS73—KS82. http://dx.doi.org/10.1190/geo2021-0233.1.

Abstract:
Migration-based location methods automatically obtain microseismic event locations without manual picking and often are used in microseismic monitoring during hydraulic fracturing. They use waveform stacking over all receivers to enhance the ability to detect weak events. However, these methods may fail to effectively increase the signal-to-noise ratio (S/N) of the stacked trace and may not obtain an accurate source location if polarity reversals occur in seismic records. To overcome this problem, traditional approaches rely on polarity correction by characteristic function or polarity determination by source moment-tensor inversion and focal mechanism search. To ensure location accuracy and efficiency, we have developed the diffraction stacking interferometric imaging (DSII) method for locating microseismic events. We first obtain the approximately symmetrical pattern responding to the source moment tensor by using diffraction stacking (DS) and then apply interferometric imaging to focus the pattern onto its center and extract the source location. The DSII method overcomes the inaccurate location problem under polarity reversal caused by source mechanisms, and it also benefits from the high efficiency and noise suppression of the DS. Moreover, the symmetrical pattern in the DS source image also can be used as a means of quality control for field data. We use synthetic and field data to demonstrate the location ability of this method. Our results indicate that the DSII can efficiently reduce the effects of strong noise on event detection and location. We can achieve a quasi-symmetrical pattern on the DS image and obtain a focused interferometric image when the velocity has systematic errors. Compared with the DS, the DSII method also has a better performance on sparse-receiver networks. Finally, the DSII method is applied to field data acquired using a surface array during hydraulic fracturing.
20

Stopsack, Konrad H., Irenaeus C. Chan, Evelyn Schmidt, Alex Panchot, Samantha McNulty, Nicole A. Schreiber, Yiwen Zhang, et al. "Abstract P011: Clonal hematopoiesis and risk of lethal prostate cancer: a prospective cohort study with long-term follow-up." Cancer Prevention Research 16, no. 1_Supplement (January 1, 2023): P011. http://dx.doi.org/10.1158/1940-6215.precprev22-p011.

Abstract:
Background: Clonal hematopoiesis (CH), the presence of acquired mutations in leukemia driver genes, promotes systemic inflammation and is common among aging men. Prostate cancer, a subset of which is lethal, also develops in aging men. We hypothesized that CH contributes to development of lethal prostate cancer. Methods: We conducted nested case-cohort studies for metastatic prostate cancer and prostate cancer-specific death (lethal prostate cancer) within the prospective Health Professionals Follow-up Study. First, we followed 1155 men free of prostate cancer and cardiovascular disease at blood draw (1993-1995) for development of lethal prostate cancer over up to 26 years. Second, we followed 532 men with incident non-metastatic prostate cancer for development of lethal prostate cancer. We sequenced blood DNA from 1488 participants for putative CH driver mutations in the 9 most common CH-defining genes with a custom targeted panel (VariantPlex, Invitae, Inc.) at ultra-high depth (mean, 18,000x), employing unique molecular identifiers for error correction. CH variant calling used a novel ensemble calling approach, ArCCH, validated with in-silico tumor dilutions and blinded technical replicates, which had high accuracy for variant allele frequencies (VAFs) as low as 0.1%. We estimated hazard ratios (HRs) with 95% confidence intervals (CIs) in proportional hazards regression with Prentice case-cohort weights. Results: In a random sample of 968 men initially free of prostate cancer and cardiovascular disease (median age at blood draw 60 years, interquartile range 52 to 67), 80% of men had CH at a variant allele frequency (VAF) of >0.1%, 15% had VAFs >2%, and 3% had VAFs >10%; 75% of men had variants in epigenetic modifier genes (DNMT3A, TET2, ASXL1), and 21% had variants in DNA repair genes (PPM1D, TP53, CHEK2).
CH burden was strongly age-associated, with approximately one additional CH variant per decade of age at blood draw (mean difference 1.07 variants, 95% CI 0.93 to 1.21). Among men initially free from prostate cancer, after adjusting for age at blood draw, CH clones between 0.1% and 10% VAF were not related to lethal prostate cancer (206 events total; HR 0.93, 95% CI 0.49-1.76 for VAFs 2-10% vs. no variants detected at >0.1% VAF). Results for epigenetic modifiers and DNA repair genes were similar. While inconclusive, the data were compatible with positive associations among younger men (<65 years) or for VAFs >10%. Among men initially diagnosed with non-metastatic prostate cancer, results were similarly null for progression to lethal prostate cancer (164 events). Conclusions: This large prospective study with long-term follow-up suggests that low-level CH is unlikely to be a major contributor to, and not well suited for early detection of, lethal prostate cancer. Citation Format: Konrad H. Stopsack, Irenaeus C. Chan, Evelyn Schmidt, Alex Panchot, Samantha McNulty, Nicole A. Schreiber, Yiwen Zhang, Kathryn L. Penney, Michael F. Berger, Luis A. Diaz, Ross L. Levine, Kelly L. Bolton, Lorelei A. Mucci, Philip W. Kantoff. Clonal hematopoiesis and risk of lethal prostate cancer: a prospective cohort study with long-term follow-up. [abstract]. In: Proceedings of the AACR Special Conference: Precision Prevention, Early Detection, and Interception of Cancer; 2022 Nov 17-19; Austin, TX. Philadelphia (PA): AACR; Can Prev Res 2023;16(1 Suppl): Abstract nr P011.
APA, Harvard, Vancouver, ISO, and other styles
21

Wienecke, Clara, Bennet Heida, Katrin Teich, Konstantin Büttner, Alessandro Liebich, Razif Gabdoulline, Letizia Venturini, et al. "Clonal Relapse Dynamics in Acute Myeloid Leukemia Following Allogeneic Hematopoietic Cell Transplantation." Blood 138, Supplement 1 (November 5, 2021): 611. http://dx.doi.org/10.1182/blood-2021-149081.

Full text
Abstract:
Abstract Introduction The 2-year survival for AML patients relapsing after allogeneic hematopoietic cell transplantation (alloHCT) is <20%, independent of the choice of relapse treatment. Relapse detection in its molecular state enables early interventions and possibly prevention of hematological recurrence of the disease. The role of measurable residual disease (MRD) monitoring for risk stratification has been described for pre- and post-alloHCT MRD analyses. Yet it remains unclear if, and with what lead time, NGS assessment can detect MRD before impending relapse. We hypothesize that the functional class of mutations determines the relapse kinetics in AML after alloHCT. Methods We identified mutations present at AML relapse after alloHCT by Illumina myeloid panel sequencing covering 48 AML-associated genes. Peripheral whole blood samples were retrospectively collected before hematological relapse, with a minimum of one sample per patient at three months prior to relapse and, if available, additional monthly samples. Amplicon-based NGS and bioinformatic error correction were performed on those samples as described in Thol et al. 2018. Positive MRD was defined as MRD detectable above the limit of detection. In the last step, we performed polynomial curve interpolation to model relapse dynamics. Results MRD was assessed in 75 AML patients after alloHCT using 203 AML-related mutations present at the time of relapse, corresponding to a median of 2.7 trackable mutations per patient (range 1-7). In total, 305 MRD analyses were performed from peripheral blood (median 1.5 per mutation, range 1-5) prior to relapse. VAFs measured above the limit of detection (median LOD across all targets 0.0315) ranged from 0.0048-26% (median 1.3%). In 45 of 75 patients (60%), we detected MRD in at least one sample and one marker before relapse.
Of those, 23 patients (51%) were MRD positive in all markers before relapse and 22 patients (49%) were MRD positive in some, but not all, markers before relapse. The majority of MRD-positive patients (30 of 45) were first detected three or fewer months before relapse, whereas 15 (33%) of 45 patients were MRD positive more than 3 months before relapse. The median time from the first MRD-positive sample to relapse was 2.9 months (range 0.6-10.2). Among the 203 mutations found at relapse, 93 (46%) were detectable by MRD monitoring before relapse, while the remaining 110 markers (54%) remained undetectable prior to relapse. Of note, 88 of those 110 markers (80%) were measured only once before relapse, indicating that frequent sampling increases the likelihood of MRD detection. Genes in which mutations were found mostly MRD-positive were TET2 (6 out of 6), ASXL2 (4 out of 5), SF3B1 (4 out of 5), and RUNX1 (7 out of 9). Mutations in WT1 (1 out of 13), NRAS (1 out of 8), FLT3-ITD (9 out of 29), and PTPN11 (1 out of 5) were among the most common MRD-negative mutations before relapse. To assess clonal relapse dynamics, pre-relapse samples were assigned to the monthly interval that best matched the sampling time. If MRD was measured positive at one time point, all the following monthly intervals were considered MRD-positive, whether or not a sample was available for that interval. The fraction of positive samples from all samples per time point was plotted against time to relapse and the function was approximated by fifth-order polynomials. The percentage of patients being MRD positive increased markedly with shortened distance to relapse. Thus, 29% of patients were MRD positive at 3 months, 44% at 2 months and 66% at 1 month prior to relapse. Summarized by functional gene classes, mutations in tumor suppressor genes and especially signaling genes showed a steeper slope and thus a shorter lead time to relapse than mutations in epigenetic modifier genes (Figure 2).
Conclusion In summary, hematologic relapse can be detected in peripheral blood in 29, 44, and 66% of patients at 3, 2, and 1 months before relapse by NGS-MRD analysis, respectively. Mutations in epigenetic modifier genes show a higher fraction of MRD positivity before relapse than other mutations. In contrast, mutations in signaling genes show a shorter lead time to relapse. Figure 1. Disclosures Ganser: Celgene: Honoraria; Novartis: Honoraria; Jazz Pharmaceuticals: Honoraria. Thol: Abbvie: Honoraria; Astellas: Honoraria; Novartis: Honoraria; Pfizer: Honoraria; Jazz: Honoraria; BMS/Celgene: Honoraria, Research Funding. Heuser: BergenBio: Research Funding; Bayer Pharma AG: Research Funding; AbbVie: Membership on an entity's Board of Directors or advisory committees, Research Funding; BMS/Celgene: Membership on an entity's Board of Directors or advisory committees, Research Funding; Janssen: Honoraria; Novartis: Consultancy, Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding; Pfizer: Membership on an entity's Board of Directors or advisory committees, Research Funding; Daiichi Sankyo: Membership on an entity's Board of Directors or advisory committees, Research Funding; Jazz: Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding; Astellas: Research Funding; Tolremo: Membership on an entity's Board of Directors or advisory committees; Karyopharm: Research Funding; Roche: Membership on an entity's Board of Directors or advisory committees, Research Funding.
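The curve-fitting step described in the abstract (fraction of MRD-positive samples per monthly interval, approximated by a fifth-order polynomial against time to relapse) can be sketched as below. Only the fractions at 3, 2, and 1 months (29%, 44%, 66%) come from the abstract; the remaining monthly values are assumed for illustration.

```python
import numpy as np

# Toy reconstruction of the polynomial interpolation step: fit a
# fifth-order polynomial to the fraction of MRD-positive samples
# per monthly interval before relapse. Values at 6-4 months are
# assumed; 3, 2, 1 months and relapse (1.0) follow the abstract.
months_to_relapse = np.array([6, 5, 4, 3, 2, 1, 0], dtype=float)
frac_positive = np.array([0.10, 0.15, 0.22, 0.29, 0.44, 0.66, 1.00])

coeffs = np.polyfit(months_to_relapse, frac_positive, deg=5)
model = np.poly1d(coeffs)

# The fitted curve tracks the observed fractions closely, e.g. near
# 0.44 at 2 months before relapse.
est_at_2 = float(model(2.0))
```

A steeper slope of such a curve near relapse corresponds to a shorter lead time, which is how the abstract compares signaling-gene with epigenetic-modifier mutations.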
APA, Harvard, Vancouver, ISO, and other styles
22

Roussel, N., F. Frappart, G. Ramillien, J. Darrozes, C. Desjardins, P. Gegout, F. Pérosanz, and R. Biancale. "Simulations of direct and reflected wave trajectories for ground-based GNSS-R experiments." Geoscientific Model Development 7, no. 5 (October 2, 2014): 2261–79. http://dx.doi.org/10.5194/gmd-7-2261-2014.

Full text
Abstract:
Abstract. The detection of Global Navigation Satellite System (GNSS) signals that are reflected off the surface, along with the reception of direct GNSS signals, offers a unique opportunity to monitor water level variations over land and ocean. The time delay between the reception of the direct and reflected signals gives access to the altitude of the receiver over the reflecting surface. The field of view of the receiver is highly dependent on both the orbits of the GNSS satellites and the configuration of the study site geometries. A simulator has been developed to determine the location of the reflection points on the surface accurately by modeling the trajectories of GNSS electromagnetic waves that are reflected by the surface of the Earth. Only the geometric problem was considered using a specular reflection assumption. The orbit of the GNSS constellation satellites (mainly GPS, GLONASS and Galileo), and the position of a fixed receiver, are used as inputs. Four different simulation modes are proposed, depending on the choice of the Earth surface model (local plane, osculating sphere or ellipsoid) and the consideration of topography likely to cause masking effects. Angular refraction effects derived from adaptive mapping functions are also taken into account. This simulator was developed to determine where the GNSS-R receivers should be located to monitor a given study area efficiently. In this study, two test sites were considered: the first one at the top of the 65 m Cordouan lighthouse in the Gironde estuary, France, and the second one on the shore of Lake Geneva (50 m above the reflecting surface), at the border between France and Switzerland. This site is hidden by mountains in the south (orthometric altitude up to 2000 m), and overlooking the lake in the north (orthometric altitude of 370 m). For this second test site configuration, reflections occur until 560 m from the receiver. 
The planimetric (arc length) differences (or altimetric difference as WGS84 ellipsoid height) between the positions of the specular reflection points obtained considering the Earth's surface as an osculating sphere or as an ellipsoid were found to be on average 9 cm (or less than 1 mm) for satellite elevation angles greater than 10°, and 13.9 cm (or less than 1 mm) for satellite elevation angles between 5 and 10°. The altimetric and planimetric differences between the plane and sphere approximations are on average below 1.4 cm (or less than 1 mm) for satellite elevation angles greater than 10° and below 6.2 cm (or 2.4 mm) for satellite elevation angles between 5 and 10°. These results are the means of the differences obtained during a 24 h simulation with a complete GPS and GLONASS constellation, and thus depend on how the satellite elevation angle is sampled over the day of simulation. The simulations highlight the importance of the digital elevation model (DEM) integration: average planimetric (or altimetric) differences with and without integrating the DEM (with respect to the ellipsoid approximation) were found to be about 6.3 m (or 1.74 m), with the minimum elevation angle equal to 5°. The correction of the angular refraction due to the troposphere on the signal leads to planimetric (or altimetric) differences of approximately 18 m (or 6 cm) at most for a 50 m receiver height above the reflecting surface, whereas the maximum is 2.9 m (or 7 mm) for a 5 m receiver height above the reflecting surface. These errors increase sharply with the receiver height above the reflecting surface. Setting it to 300 m, the planimetric errors reach 116 m, and the altimetric errors reach 32 cm for satellite elevation angles lower than 10°.
The tests performed with the simulator presented in this paper highlight the importance of the choice of the Earth's representation and also the non-negligible effect of angular refraction due to the troposphere on the specular reflection point positions. Various outputs (time-varying reflection point coordinates, satellite positions and ground paths, wave trajectories, first Fresnel zones, etc.) are provided either as text or KML files for visualization with Google Earth.
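The receiver-height geometry described in the abstract can be illustrated with the simulator's simplest surface model, the local plane. This is a toy sketch under that assumption only (no sphere, ellipsoid, DEM, or refraction); the function names are ours, not the simulator's.

```python
import math

# Local-plane specular reflection geometry: for a receiver at height h
# above a horizontal reflecting plane and a satellite at elevation e,
# the specular point lies at horizontal distance h / tan(e) from the
# receiver, and the reflected ray travels 2 h sin(e) farther than the
# direct ray (the quantity carrying the altimetry signal).

def specular_point_distance(h, elev_deg):
    """Horizontal receiver-to-specular-point distance on a plane (m)."""
    return h / math.tan(math.radians(elev_deg))

def reflected_path_delay(h, elev_deg):
    """Extra geometric path of the reflected vs. direct signal (m)."""
    return 2.0 * h * math.sin(math.radians(elev_deg))

# Lake Geneva case from the abstract: 50 m receiver height, minimum
# elevation 5 deg -> specular points out to roughly 570 m, consistent
# with the ~560 m footprint reported.
d = specular_point_distance(50.0, 5.0)
```

The rapid growth of this distance as elevation decreases (or as receiver height increases) is why the abstract's tropospheric-refraction and DEM errors grow so quickly with receiver height.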
APA, Harvard, Vancouver, ISO, and other styles
23

Mr. Pradeep Nayak, Madhushree, Meghana K, Mohammaed Firoz, and Madhu M. "Error Correction and Error Detection in Network." International Journal of Advanced Research in Science, Communication and Technology, August 29, 2022, 752–58. http://dx.doi.org/10.48175/ijarsct-7046.

Full text
Abstract:
Error control explains how errors are handled and resolved in a network, specifically at the data link layer. In this work we offer an overview of error control, including error detection and error correction. We particularly discuss the types of error detection algorithms used to locate faults and how to repair them so that the receiver can recover the original data. The essential requirement of every communication system in today's wireless realm is the ability to send and receive error-free data over any noisy channel. The sources of noise and interference have also grown as a result of the increase in data transmission. Engineers have made several attempts to address the demand for more reliable and effective methods for detecting and correcting errors in the received data, and various techniques are employed to identify and fix data transmission faults. This review paper surveys a wide range of error detection and correction techniques that have been in use for some time. Multiple-bit errors in SRAM memory rise as the technology scales down, causing single-cell and multiple-cell upsets to emerge. Error-correcting codes, such as the original (7,4) Hamming code, where 7 stands for the total code word length, 4 for the data bits, and 3 for the parity bits, have been put into use, and their encoding and decoding processes have been examined. The main drawback of this Hamming code is that it is only suitable for single-bit error detection and correction.
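The (7,4) Hamming code the abstract describes can be sketched in a few lines: 4 data bits plus 3 parity bits give a 7-bit codeword that corrects any single flipped bit (and, as the abstract notes, no more than that).

```python
# (7,4) Hamming code sketch. Positions are numbered 1..7; parity bits
# sit at the power-of-two positions 1, 2, 4.

def hamming74_encode(data):
    """Encode 4 data bits (list of 0/1) into a 7-bit codeword."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Return the corrected 4 data bits; fixes at most one flipped bit."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck parity over 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck parity over 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck parity over 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                      # flip one bit in the channel
assert hamming74_decode(codeword) == [1, 0, 1, 1]
```

The syndrome directly names the corrupted position, which is what makes the code attractive in hardware; its single-bit limit is what motivates the extended-Hamming and multi-bit codes the paper reviews.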
APA, Harvard, Vancouver, ISO, and other styles
24

Schoenmaker, Linde, Olivier J. M. Béquignon, Willem Jespers, and Gerard J. P. van Westen. "UnCorrupt SMILES: a novel approach to de novo design." Journal of Cheminformatics 15, no. 1 (February 14, 2023). http://dx.doi.org/10.1186/s13321-023-00696-x.

Full text
Abstract:
Abstract Generative deep learning models have emerged as a powerful approach for de novo drug design as they aid researchers in finding new molecules with desired properties. Despite continuous improvements in the field, a subset of the outputs that sequence-based de novo generators produce cannot be progressed due to errors. Here, we propose to fix these invalid outputs post hoc. In similar tasks, transformer models from the field of natural language processing have been shown to be very effective. Therefore, here this type of model was trained to translate invalid Simplified Molecular-Input Line-Entry System (SMILES) into valid representations. The performance of this SMILES corrector was evaluated on four representative methods of de novo generation: a recurrent neural network (RNN), a target-directed RNN, a generative adversarial network (GAN), and a variational autoencoder (VAE). This study found that the percentage of invalid outputs from these specific generative models ranges between 4 and 89%, with different models having different error-type distributions. Post hoc correction of SMILES was shown to increase model validity. The SMILES corrector trained with one error per input alters 60–90% of invalid generator outputs and fixes 35–80% of them. However, higher error detection and correction performance was obtained for transformer models trained with multiple errors per input. In this case, the best model was able to correct 60–95% of invalid generator outputs. Further analysis showed that these fixed molecules are comparable to the correct molecules from the de novo generators in terms of novelty and similarity. Additionally, the SMILES corrector can be used to expand the number of interesting new molecules within the targeted chemical space. Introducing different errors into existing molecules yields novel analogs with a uniqueness of 39% and a novelty of approximately 20%.
The results of this research demonstrate that SMILES correction is a viable post hoc extension and can enhance the search for better drug candidates. Graphical Abstract
APA, Harvard, Vancouver, ISO, and other styles
25

Ramos, Mariana F., Armando N. Pinto, and Nuno A. Silva. "Polarization based discrete variables quantum key distribution via conjugated homodyne detection." Scientific Reports 12, no. 1 (April 12, 2022). http://dx.doi.org/10.1038/s41598-022-10181-4.

Full text
Abstract:
Abstract Optical homodyne detection is widely adopted in continuous-variable quantum key distribution for high-rate measurement of field quadratures. Such detection schemes have also been implemented for single-photon statistics characterization in the field of quantum tomography. In this work, we propose a discrete-variable quantum key distribution (DV-QKD) implementation that combines the use of phase modulators for high-speed state of polarization (SOP) generation with a conjugate homodyne detection scheme, enabling the deployment of high-speed QKD systems. The channel discretization relies on the application of a detection threshold that allows the measured voltages to be mapped as a click or no-click. Our scheme also relies on a time-multiplexed pilot tone-quantum signal architecture, which enables the use of a local oscillator generated locally at Bob and opens the door to an effective polarization drift compensation scheme. Our results show that higher detection threshold values yield a very low quantum bit error rate (QBER) on the sifted key; however, due to the large number of discarded qubits, the secure key length decreases abruptly. We observe that, by optimizing the detection threshold and considering a system operating at a 500 MHz symbol generation clock, a secure key rate of approximately 46.9 Mbps is achievable, with a sifted QBER of 1.5% over 40 km of optical fiber. This accounts for the error correction and privacy amplification steps necessary to obtain a final secure key.
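The threshold-based channel discretization described in the abstract can be illustrated with a toy model. This is our own sketch (a symmetric binary signal with Gaussian noise), not the authors' conjugate-homodyne implementation; it only demonstrates the trade-off between sifted-key length and QBER as the threshold is raised.

```python
import random

def discretize(voltages, threshold):
    """Map voltages to bits, keeping only measurements above threshold
    ("clicks"); sub-threshold measurements are discarded ("no click")."""
    kept, bits = [], []
    for i, v in enumerate(voltages):
        if abs(v) > threshold:
            kept.append(i)
            bits.append(1 if v > 0 else 0)
    return kept, bits

def qber(sent, voltages, threshold):
    """Sifted-key error rate and sifted length at a given threshold."""
    kept, bits = discretize(voltages, threshold)
    errors = sum(b != sent[i] for b, i in zip(bits, kept))
    return errors / len(kept), len(kept)

random.seed(1)
sent = [random.choice([0, 1]) for _ in range(20000)]
# Toy channel: bit encoded as +/-1 plus Gaussian noise
recv = [(1.0 if b else -1.0) + random.gauss(0.0, 0.8) for b in sent]

low_qber, low_kept = qber(sent, recv, 2.0)    # strict threshold
high_qber, high_kept = qber(sent, recv, 0.0)  # keep every measurement
# strict threshold: far fewer sifted bits, but a much lower error rate
```

This is exactly the tension the abstract reports: the threshold can drive the sifted QBER down, but the secure key length shrinks with every discarded qubit, so the threshold must be optimized rather than maximized.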
APA, Harvard, Vancouver, ISO, and other styles
26

Schaller, David, Manuela Geiß, Marc Hellmuth, and Peter F. Stadler. "Heuristic algorithms for best match graph editing." Algorithms for Molecular Biology 16, no. 1 (August 17, 2021). http://dx.doi.org/10.1186/s13015-021-00196-3.

Full text
Abstract:
Abstract Background Best match graphs (BMGs) are a class of colored digraphs that naturally appear in mathematical phylogenetics as a representation of the pairwise most closely related genes among multiple species. An arc connects a gene x with a gene y from another species (vertex color) Y whenever it is one of the phylogenetically closest relatives of x. BMGs can be approximated with the help of similarity measures between gene sequences, albeit not without errors. Empirical estimates thus will usually violate the theoretical properties of BMGs. The corresponding graph editing problem can be used to guide error correction for best match data. Since the arc set modification problems for BMGs are NP-complete, efficient heuristics are needed if BMGs are to be used for the practical analysis of biological sequence data. Results Since BMGs have a characterization in terms of consistency of a certain set of rooted triples (binary trees on three vertices) defined on the set of genes, we consider heuristics that operate on triple sets. As an alternative, we show that there is a close connection to a set partitioning problem that leads to a class of top-down recursive algorithms that are similar to Aho’s supertree algorithm and give rise to BMG editing algorithms that are consistent in the sense that they leave BMGs invariant. Extensive benchmarking shows that community detection algorithms for the partitioning steps perform best for BMG editing. Conclusion Noisy BMG data can be corrected with sufficient accuracy and efficiency to make BMGs an attractive alternative to classical phylogenetic methods.
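The abstract's top-down recursive heuristics are described as similar to Aho's supertree (BUILD) algorithm on rooted triples. A minimal sketch of BUILD is below; the names and the recursion structure are illustrative, not taken from the paper's implementation.

```python
# Aho's BUILD algorithm sketch: given rooted triples xy|z ("x and y are
# closer to each other than either is to z"), construct a tree displaying
# all of them, or report that the triple set is inconsistent.

def build(leaves, triples):
    """Return a nested-tuple tree displaying all triples, or None."""
    if len(leaves) == 1:
        return next(iter(leaves))
    if len(leaves) == 2:
        return tuple(sorted(leaves))
    # Aho graph: connect x and y for each triple xy|z whose three
    # leaves all lie in the current leaf set.
    adj = {v: set() for v in leaves}
    for x, y, z in triples:
        if x in adj and y in adj and z in adj:
            adj[x].add(y)
            adj[y].add(x)
    # Connected components become the children of the current root.
    comps, seen = [], set()
    for v in leaves:
        if v not in seen:
            stack, comp = [v], set()
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(adj[u] - comp)
            seen |= comp
            comps.append(comp)
    if len(comps) == 1:
        return None  # cannot split: triples are inconsistent
    children = []
    for comp in comps:
        sub = build(comp, triples)
        if sub is None:
            return None
        children.append(sub)
    return tuple(children)

# ab|c and cd|b are consistent: a tree ((a,b),(c,d)) displays both.
tree = build({"a", "b", "c", "d"}, [("a", "b", "c"), ("c", "d", "b")])
```

The partitioning step (here, plain connected components of the Aho graph) is exactly where the paper substitutes community detection algorithms, which it finds perform best for BMG editing.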
APA, Harvard, Vancouver, ISO, and other styles
27

Arai, Takehiko, Tatsuaki Okada, Satoshi Tanaka, Tetsuya Fukuhara, Hirohide Demura, Toru Kouyama, Naoya Sakatani, et al. "Geometric correction for thermographic images of asteroid 162173 Ryugu by TIR (thermal infrared imager) onboard Hayabusa2." Earth, Planets and Space 73, no. 1 (May 26, 2021). http://dx.doi.org/10.1186/s40623-021-01437-w.

Full text
Abstract:
Abstract The thermal infrared imager (TIR) onboard the Hayabusa2 spacecraft performed thermographic observations of the asteroid 162173 Ryugu (1999 JU3) from June 2018 to November 2019. Our previous reports revealed that the surface of Ryugu is globally covered with porous materials and has high surface roughness. These results were derived by projecting the observed temperature maps of TIR onto the shape model of Ryugu as a geometric correction. The pointing directions of TIR were calculated using an interpolation of data from the SPICE kernels (NASA/NAIF) during the periods when the optical navigation camera (ONC) and the light detection and ranging (LIDAR) observations were performed. However, the mapping accuracy of the observed TIR images was degraded when the ONC and LIDAR observations were not performed together with TIR. The orbital and attitudinal fluctuations of Hayabusa2 also increased the error of the temperature maps. In this paper, to solve these temperature-image mapping problems, we improved the correction method by fitting all of the observed TIR images to the surface coordinates of the high-definition shape model of Ryugu (SFM 800k v20180804). This correction adjusted the pointing direction of TIR by rotating the TIR frame relative to the Hayabusa2 frame using a least-squares fit. As a result, the spatially spread areas of the temperature maps converged within high-resolution 0.5° by 0.5° maps. The estimated thermal inertia, for instance, was approximately 300 to 350 J m^-2 s^-0.5 K^-1 at the hot area of the Ejima Saxum. This estimation succeeded when the surface topographic features were larger than the pixel scale of TIR. However, the thermal inertia estimation of smooth terrains, such as the Urashima crater, was difficult because of surface roughness effects, where the roughness was probably much smaller than the pixel scale of TIR.
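The core of the correction, a least-squares rotation of one frame relative to another, can be illustrated with a 2-D analogue. This is an assumption-laden toy (a single in-plane angle, Procrustes-style), not the actual TIR/SPICE frame pipeline, which fits a 3-D rotation.

```python
import math

def best_rotation_angle(predicted, observed):
    """Least-squares rotation angle (radians) mapping predicted 2-D
    pointing vectors onto observed ones (2-D Procrustes solution)."""
    dot = sum(px * ox + py * oy
              for (px, py), (ox, oy) in zip(predicted, observed))
    cross = sum(px * oy - py * ox
                for (px, py), (ox, oy) in zip(predicted, observed))
    return math.atan2(cross, dot)

def rotate(points, theta):
    """Rotate 2-D points by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

pred = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7)]
obs = rotate(pred, 0.01)                # simulate a 0.01 rad pointing offset
theta = best_rotation_angle(pred, obs)  # recovers the offset
```

Applying the recovered rotation to the instrument frame re-registers every image at once, which is how a single fitted correction can tighten all of the temperature maps simultaneously.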
APA, Harvard, Vancouver, ISO, and other styles
28

SEVDİ, Ali. "THE SOURCES USED IN THE DETECTION AND CORRECTION OF THE LAḤN FACT IN THE ARABIC LANGUAGE LITERATURE." Kilis 7 December University Journal of Theology, June 21, 2022. http://dx.doi.org/10.46353/k7auifd.1083103.

Full text
Abstract:
Morphologically the infinitive of the verb لحَنَ يَلْحَنُ, the concept "اللِحْنُ /al-laḥn" lexically means tune, composition, melody, or a harmonious sound that is pleasing to the ear; it also carries other meanings, such as speaking incorrectly or speaking in a veiled, allusive way that no one else can understand. In Arabic language terminology, the concept is described as the opposite of fluent Arabic: making mistakes in the pronunciation of letters or words, in the meaning and use of words, in the formation of the sentence, and in iʿrāb (case endings). The phenomenon of laḥn, which can briefly be defined as incorrect usage of the language, needs to be examined from many angles. This study examines the detection of the laḥn phenomenon in Arabic language literature and the sources used in its correction (taṣḥīḥ). It is beyond dispute that language, one of the hallmarks of human beings, is by nature a living entity in constant change, driven by cultural, economic and technological developments. Functionally, language is the most vital part of life, the basic element of literature and national culture, and it holds an important place in human life. As a means of communication and expression, it also brings people closer together and unites them. Because of these features, peoples have attached importance to its correct development. The Arabs, who held a distinguished place in the pre-Islamic period for their fluency and eloquence, placed language and literature above everything else and considered this a source of pride. With the disposition they inherited from their ancestors, they could easily grasp which word signified which meaning, and they needed no grammatical rules to correct or strengthen their innate linguistic intuition.
With Islam, and especially with the influx of peoples entering Islam in groups as a result of the conquests, the Arabs came into contact with many foreign cultures, and laḥn began to spread into almost every part of society and many different fields of the Arabic language. It is known that this situation played a leading role in laying the foundations of Arabic grammar and in launching dictionary studies among the scholars of the period. Yet from the middle of the II/VII century, lexicon and grammar studies alone proved insufficient and laḥn continued to spread, so many works aimed at detecting and correcting erroneous usage were written under titles such as Laḥnu'l-ʿâmme, Laḥnu'l-ḫâṣṣe, Mâ telḥanü fîhi'l-ʿâmme, Laḥnu'l-ʿavâm and Laḥnu'l-ḫavâṣ. Among the important works written in this field are Mâ telḥanü fîhi'l-ʿavâm by Ali b. Hamza al-Kisâî (d. 189/805); Iṣlâḥu'l-manṭıḳ by Ibn al-Sikkît (d. 244/858); Mâ telḥanü fîhi'l-ʿâmme by Mufaddal b. Salama (d. 290/903); Kitâbü'l-Faṣîḥ by Abu'l-Abbas Saʿleb (d. 291/904); Leyse fî kelâmi'l-ʿArab by Ibn Halaveyh al-Hamedani (d. 370/980); Laḥnü'l-ʿavâm by Abu Bakr al-Zubaidi (d. 379/989); Tes̱ḳīfü'l-lisân by Ibn Makkî (d. 501/1108); Dürretü'l-ġavvâṣ fî evhâmi'l-ḫavâṣ by Abu Muhammad al-Harîrî (d. 516/1122); Tekmiletü iṣlâḥ mâ taġleṭu fîhi'l-ʿâmme by Ebû Mansûr al-Cevâlîkî (d. 540/1145); al-Medḫal ilâ taḳvîmi'l-lisân by Ibn Hisham al-Lahmi (d. 577/1181); Ġalaṭü'ḍ-ḍuʿafâʾ mine'l-fuḳahâʾ by Ibn Berrî al-Maqdisî (d. 582/1187); Taḳvîmü'l-lisân by Ibnü'l-Cevzî (d. 597/1201); and Taṣḥîḥü't-taṣḥîf by Ebü's-Safâ es-Safedî (d. 764/1363). More recently, further works have been written on the detection and correction of laḥn in the Arabic language.
Among the prominent recent studies in this context are Tehẕîbü'l-elfâẓi'l-ʿâmmiyye by Muhammad Ali ed-Dusûkî; Teẕkiretü'l-kâtib by Esʿad Halîl Dâğir; Aḫtâunâ fi'ṣ-ṣuḥûf ve'd-devâvîn by Salâhuddîn ez-Zeʿbalâvî; Asarâtü'l-lisân fi'l-luġa by Abdulkadir al-Mağribî; Muʿcemü'l-aġlâṭi'l-luġaviyyeti'l-muʿâṣıra by Muhammad al-Adnânî; and Taḥrîfâtü'l-ʿâmmiyye li'l-fuṣḥâ by Şevkî Dayf. In detecting and correcting the errors they observed, these authors did not proceed at random but tried to establish the authentic form on the basis of certain sources. Although their quantity and quality vary from author to author, these sources are generally Arabic poetry, the Qurʾan, the canonical readings (qirâʾât), hadiths, fluent Arabic prose, the lexical meaning of the relevant word or phrase, and Arabic grammar.
APA, Harvard, Vancouver, ISO, and other styles