Journal articles on the topic 'Statistical biases of RNG'

Consult the top 50 journal articles for your research on the topic 'Statistical biases of RNG.'


1

Zaim, Samir Rachid, Colleen Kenost, Hao Helen Zhang, and Yves A. Lussier. "Personalized beyond Precision: Designing Unbiased Gold Standards to Improve Single-Subject Studies of Personal Genome Dynamics from Gene Products." Journal of Personalized Medicine 11, no. 1 (December 31, 2020): 24. http://dx.doi.org/10.3390/jpm11010024.

Abstract:
Background: Developing patient-centric baseline standards that enable the detection of clinically significant outlier gene products on a genome-scale remains an unaddressed challenge required for advancing personalized medicine beyond the small pools of subjects implied by “precision medicine”. This manuscript proposes a novel approach for reference standard development to evaluate the accuracy of single-subject analyses of transcriptomes and offers extensions into proteomes and metabolomes. In evaluation frameworks for which the distributional assumptions of statistical testing imperfectly model genome dynamics of gene products, artefacts and biases are confounded with authentic signals. Model confirmation biases escalate when studies use the same analytical methods in the discovery sets and reference standards. In such studies, replicated biases are confounded with measures of accuracy. We hypothesized that developing method-agnostic reference standards would reduce such replication biases. We propose to evaluate discovery methods with a reference standard derived from a consensus of analytical methods distinct from the discovery one to minimize statistical artefact biases. Our methods involve thresholding effect-size and expression-level filtering of results to improve consensus between analytical methods. We developed and released an R package “referenceNof1” to facilitate the construction of robust reference standards. Results: Since RNA-Seq data analysis methods range from those relying on binomial and negative binomial assumptions to non-parametric analyses, the differences between methods create statistical noise and make the reference standards method-dependent. In our experimental design, the accuracy of 30 distinct combinations of fold changes (FC) and expression counts (hereinafter “expression”) was determined for five types of RNA analyses in two different datasets.
This design was applied to two distinct datasets: breast cancer cell lines and a yeast study with isogenic biological replicates in two experimental conditions. Furthermore, the reference standard (RS) comprised all RNA analytical methods with the exception of the method being tested for accuracy. To mitigate biases towards a specific analytical method, the pairwise Jaccard Concordance Index between observed results of distinct analytical methods was calculated for optimization. Optimization through thresholding effect-size and expression-level reduced the greatest discordances between distinct methods’ analytical results and resulted in a 65% increase in concordance. Conclusions: We have demonstrated that comparing accuracies of different single-subject analysis methods for clinical optimization in transcriptomics requires a new evaluation framework. Reliable and robust reference standards, independent of the evaluated method, can be obtained under a limited number of parameter combinations: fold change (FC) range thresholds, expression level cutoffs, and exclusion of the tested method from the RS development process. When applying anticonservative reference standard frameworks (e.g., using the same method for RS development and prediction), most of the concordant signal between prediction and Gold Standard (GS) cannot be confirmed by other methods, which we conclude to be biased results. Statistical tests to determine DEGs from a single-subject study generate many biased results requiring subsequent filtering to increase reliability. Conventional single-subject studies pertain to one or a few patients’ measures over time and require a substantial conceptual framework extension to address the numerous measures in genome-wide analyses of gene products. The proposed referenceNof1 framework addresses some of the inherent challenges of improving transcriptome-scale single-subject analyses by providing a robust approach to constructing reference standards.
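As an aside on mechanics, the pairwise concordance measure described in this abstract can be sketched in a few lines. This is an invented illustration, not the study's code; the gene names and method labels are hypothetical:

```python
# Minimal sketch (assumed setup): pairwise Jaccard concordance between
# the DEG sets reported by different analytical methods.

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two gene sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical result sets from three analysis methods.
results = {
    "methodA": {"TP53", "BRCA1", "MYC", "EGFR"},
    "methodB": {"TP53", "BRCA1", "EGFR", "KRAS"},
    "methodC": {"TP53", "MYC", "KRAS"},
}

pairs = [("methodA", "methodB"), ("methodA", "methodC"), ("methodB", "methodC")]
for m1, m2 in pairs:
    print(m1, m2, round(jaccard(results[m1], results[m2]), 3))
```

Thresholding on effect size and expression level, as the study describes, would shrink each set before the index is computed, which is what drives the reported gain in concordance.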
2

Marschner, Ian C., Rebecca A. Betensky, Victor DeGruttola, Scott M. Hammer, and Daniel R. Kuritzkes. "Clinical Trials Using HIV-1 RNA-Based Primary Endpoints: Statistical Analysis and Potential Biases." Journal of Acquired Immune Deficiency Syndromes and Human Retrovirology 20, no. 3 (March 1999): 220–27. http://dx.doi.org/10.1097/00042560-199903010-00002.

3

Yamaguchi, David K. "More on estimating the statistical significance of cross-dating positions for "floating" tree-ring series." Canadian Journal of Forest Research 24, no. 2 (February 1, 1994): 427–29. http://dx.doi.org/10.1139/x94-058.

Abstract:
Tabulated Student's t-values and climatic insensitivity among inner tree-ring widths can bias estimates of statistical significance for cross correlations relating "floating" and master tree-ring series. These biases can be removed by (i) directly computing significance levels for cross-correlation coefficients at dating positions and (ii) deleting insensitive inner rings from a dated floating sample before final correlation analysis. The number of early rings to delete can be determined from plots of cross-correlation coefficients linking a dated floating series of artificially decreasing length with a master series. These modifications improve the precision of Yamaguchi and Allen's approach (D.K. Yamaguchi and G.L. Allen. 1992. Can. J. For. Res. 22: 1215–1221) for estimating significance.
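The first modification above, directly computing correlation coefficients at candidate dating positions, can be illustrated with a minimal sketch. This is not the authors' code, and the ring-width values are invented:

```python
# Sketch (assumed data): slide a "floating" ring-width series along a
# dated master series and compute Pearson's r at each dating position.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def crossdate(floating, master):
    """Correlation at every position where the floating series fits."""
    w = len(floating)
    return [pearson(floating, master[i:i + w]) for i in range(len(master) - w + 1)]

master = [1.2, 0.8, 1.5, 0.6, 1.1, 1.4, 0.7, 1.3]
floating = master[3:6]            # hypothetical floating sample
scores = crossdate(floating, master)  # r == 1.0 at the true position (index 3)
```

Significance at each position can then be evaluated directly from the coefficient and the overlap length, rather than from tabulated t-values, which is the paper's point.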
4

Flandre, Philippe, Christine Durier, Diane Descamps, Odile Launay, and Véronique Joly. "On the Use of Magnitude of Reduction in HIV-1 RNA in Clinical Trials: Statistical Analysis and Potential Biases." JAIDS Journal of Acquired Immune Deficiency Syndromes 30, no. 1 (May 2002): 59–64. http://dx.doi.org/10.1097/00126334-200205010-00007.

6

Petegrosso, Raphael, Zhuliu Li, and Rui Kuang. "Machine learning and statistical methods for clustering single-cell RNA-sequencing data." Briefings in Bioinformatics 21, no. 4 (June 27, 2019): 1209–23. http://dx.doi.org/10.1093/bib/bbz063.

Abstract:
Single-cell RNA sequencing (scRNA-seq) technologies have enabled the large-scale whole-transcriptome profiling of each individual single cell in a cell population. A core analysis of the scRNA-seq transcriptome profiles is to cluster the single cells to reveal cell subtypes and infer cell lineages based on the relations among the cells. This article reviews the machine learning and statistical methods for clustering scRNA-seq transcriptomes developed in the past few years. The review focuses on how conventional clustering techniques such as hierarchical clustering, graph-based clustering, mixture models, k-means, ensemble learning, neural networks and density-based clustering are modified or customized to tackle the unique challenges in scRNA-seq data analysis, such as the dropout of low-expression genes, low and uneven read coverage of transcripts, highly variable total mRNAs from single cells and ambiguous cell markers in the presence of technical biases and irrelevant confounding biological variations. We review how cell-specific normalization, the imputation of dropouts and dimension reduction methods can be applied with new statistical or optimization strategies to improve the clustering of single cells. We also introduce more advanced approaches to cluster scRNA-seq transcriptomes in time-series data and multiple cell populations and to detect rare cell types. Several software packages developed to support the cluster analysis of scRNA-seq data are also reviewed and experimentally compared to evaluate their performance and efficiency. Finally, we conclude with useful observations and possible future directions in scRNA-seq data analytics. Availability: All the source code and data are available at https://github.com/kuanglab/single-cell-review.
7

Jaffe, Andrew E., Ran Tao, Alexis L. Norris, Marc Kealhofer, Abhinav Nellore, Joo Heon Shin, Dewey Kim, et al. "qSVA framework for RNA quality correction in differential expression analysis." Proceedings of the National Academy of Sciences 114, no. 27 (June 20, 2017): 7130–35. http://dx.doi.org/10.1073/pnas.1617384114.

Abstract:
RNA sequencing (RNA-seq) is a powerful approach for measuring gene expression levels in cells and tissues, but it relies on high-quality RNA. We demonstrate here that statistical adjustment using existing quality measures largely fails to remove the effects of RNA degradation when RNA quality associates with the outcome of interest. Using RNA-seq data from molecular degradation experiments of human primary tissues, we introduce a method—quality surrogate variable analysis (qSVA)—as a framework for estimating and removing the confounding effect of RNA quality in differential expression analysis. We show that this approach results in greatly improved replication rates (>3×) across two large independent postmortem human brain studies of schizophrenia and also removes potential RNA quality biases in earlier published work that compared expression levels of different brain regions and other diagnostic groups. Our approach can therefore improve the interpretation of differential expression analysis of transcriptomic data from human tissue.
8

Waweru, Jacqueline Wahura, Zaydah de Laurent, Everlyn Kamau, Khadija Said, Elijah Gicheru, Martin Mutunga, Caleb Kibet, et al. "Enrichment approach for unbiased sequencing of respiratory syncytial virus directly from clinical samples." Wellcome Open Research 6 (May 7, 2021): 99. http://dx.doi.org/10.12688/wellcomeopenres.16756.1.

Abstract:
Background: Nasopharyngeal samples contain higher quantities of bacterial and host nucleic acids relative to viruses, presenting challenges during virus metagenomics sequencing, which underpins agnostic sequencing protocols. We aimed to develop a viral enrichment protocol for unbiased whole-genome sequencing of respiratory syncytial virus (RSV) from nasopharyngeal samples using the Oxford Nanopore Technology (ONT) MinION platform. Methods: We assessed two protocols using RSV-positive samples. Protocol 1 involved physical pre-treatment of samples by centrifugal processing before RNA extraction, while Protocol 2 entailed direct RNA extraction without prior enrichment. Concentrates from Protocol 1 and RNA extracts from Protocol 2 were each divided into two fractions; one was DNase treated while the other was not. RNA was then extracted from both concentrate fractions per sample, and RNA from both protocols was converted to cDNA, which was then amplified using the tagged Endoh primers through the Sequence-Independent Single-Primer Amplification (SISPA) approach; a library was prepared and sequencing performed. Statistical significance during analysis was tested using the Wilcoxon signed-rank test. Results: DNase-treated fractions from both protocols recorded significantly reduced host and bacterial contamination, unlike the untreated fractions (p<0.01 in each protocol). Additionally, DNase treatment after RNA extraction (Protocol 2) enhanced host and bacterial read reduction compared to treatment before extraction (Protocol 1). However, neither protocol yielded whole RSV genomes. Sequenced reads mapped to parts of the nucleoprotein (N gene) and polymerase complex (L gene) from Protocols 1 and 2, respectively. Conclusions: DNase treatment was most effective in reducing host and bacterial contamination, and its effectiveness improved when done after RNA extraction rather than before.
We attribute the incomplete genome segments to amplification biases resulting from the use of a short random sequence (6 bases) in the tagged Endoh primers. Increasing the length of the random oligonucleotide from six to nine or 12 bases in future studies may reduce the coverage biases.
9

Bergsten, Emma, Denis Mestivier, and Iradj Sobhani. "The Limits and Avoidance of Biases in Metagenomic Analyses of Human Fecal Microbiota." Microorganisms 8, no. 12 (December 9, 2020): 1954. http://dx.doi.org/10.3390/microorganisms8121954.

Abstract:
An increasing body of evidence highlights the role of fecal microbiota in various human diseases. However, more than two-thirds of fecal bacteria cannot be cultivated by routine laboratory techniques. Thus, physicians and scientists use DNA sequencing and statistical tools to identify associations between bacterial subgroup abundances and disease. However, discrepancies between studies weaken these results. In the present study, we focus on biases that might account for these discrepancies. First, three different DNA extraction methods (G’NOME, QIAGEN, and PROMEGA) were compared with regard to their efficiency, i.e., the quality and quantity of DNA recovered from the feces of 10 healthy volunteers. Then, the impact of the DNA extraction method on bacteria identification and quantification was evaluated using our published cohort of samples subjected to both 16S rRNA sequencing and whole metagenome sequencing (WMS). WMS taxonomical assignation employed the universal marker gene profiler mOTU-v2, which is considered the gold standard. The three standard pipelines for 16S rRNA analysis (MALT and MEGAN6, QIIME1, and DADA2) were applied for comparison. Taken together, our results indicate that the G’NOME-based method was optimal in terms of quantity and quality of DNA extracts. 16S rRNA sequence-based identification of abundant bacteria genera showed acceptable congruence with WMS sequencing, with the DADA2 pipeline yielding the highest congruence levels. However, for low-abundance genera (<0.5% of the total abundance) two pipelines and/or validation by quantitative polymerase chain reaction (qPCR) or WMS are required. Hence, 16S rRNA sequencing for bacteria identification and quantification in clinical and translational studies should be limited to diagnostic purposes in well-characterized and abundant genera. Additional techniques are warranted for low-abundance genera, such as WMS, qPCR, or the use of two bioinformatics pipelines.
10

Goncearenco, Alexander, Bin-Guang Ma, and Igor N. Berezovsky. "Molecular mechanisms of adaptation emerging from the physics and evolution of nucleic acids and proteins." Nucleic Acids Research 42, no. 5 (December 25, 2013): 2879–92. http://dx.doi.org/10.1093/nar/gkt1336.

Abstract:
DNA, RNA and proteins are major biological macromolecules that coevolve and adapt to environments as components of one highly interconnected system. We explore here sequence/structure determinants of mechanisms of adaptation of these molecules, links between them, and results of their mutual evolution. We complemented statistical analysis of genomic and proteomic sequences with folding simulations of RNA molecules, unraveling causal relations between compositional and sequence biases reflecting molecular adaptation on DNA, RNA and protein levels. We found many compositional peculiarities related to environmental adaptation and the life style. Specifically, thermal adaptation of protein-coding sequences in Archaea is characterized by a stronger codon bias than in Bacteria. Guanine and cytosine load in the third codon position is important for supporting the aerobic life style, and it is highly pronounced in Bacteria. The third codon position also provides a tradeoff between arginine and lysine, which are favorable for thermal adaptation and aerobicity, respectively. Dinucleotide composition provides stability of nucleic acids via strong base-stacking in ApG dinucleotides. In relation to coevolution of nucleic acids and proteins, thermostability-related demands on the amino acid composition affect the nucleotide content in the second codon position in Archaea.
11

Zhu, Anqi, Avi Srivastava, Joseph G. Ibrahim, Rob Patro, and Michael I. Love. "Nonparametric expression analysis using inferential replicate counts." Nucleic Acids Research 47, no. 18 (August 2, 2019): e105-e105. http://dx.doi.org/10.1093/nar/gkz622.

Abstract:
A primary challenge in the analysis of RNA-seq data is to identify differentially expressed genes or transcripts while controlling for technical biases. Ideally, a statistical testing procedure should incorporate the inherent uncertainty of the abundance estimates arising from the quantification step. Most popular methods for RNA-seq differential expression analysis fit a parametric model to the counts for each gene or transcript, and a subset of methods can incorporate uncertainty. Previous work has shown that nonparametric models for RNA-seq differential expression may have better control of the false discovery rate, and adapt well to new data types without requiring reformulation of a parametric model. Existing nonparametric models do not take into account inferential uncertainty, leading to an inflated false discovery rate, in particular at the transcript level. We propose a nonparametric model for differential expression analysis using inferential replicate counts, extending the existing SAMseq method to account for inferential uncertainty. We compare our method, Swish, with popular differential expression analysis methods. Swish has improved control of the false discovery rate, in particular for transcripts with high inferential uncertainty. We apply Swish to a single-cell RNA-seq dataset, assessing differential expression between sub-populations of cells, and compare its performance to the Wilcoxon test.
12

Vu, Trung Nghia, Ha-Nam Nguyen, Stefano Calza, Krishna R. Kalari, Liewei Wang, and Yudi Pawitan. "Cell-level somatic mutation detection from single-cell RNA sequencing." Bioinformatics 35, no. 22 (April 26, 2019): 4679–87. http://dx.doi.org/10.1093/bioinformatics/btz288.

Abstract:
Motivation: Both single-cell RNA sequencing (scRNA-seq) and DNA sequencing (scDNA-seq) have been applied for cell-level genomic profiling. For mutation profiling, the latter seems more natural. However, the task is highly challenging due to the limited input material of only two copies of DNA molecules, while whole-genome amplification generates biases and other technical noise. ScRNA-seq starts with a higher input amount, so it generally has better data quality. Various methods exist for mutation detection from DNA sequencing, but it is not clear whether these methods work for scRNA-seq data. Results: Mutation detection methods developed for either bulk-cell sequencing data or scDNA-seq data do not work well for scRNA-seq data, as they produce substantial numbers of false positives. We develop a novel and robust statistical method—called SCmut—to identify specific cells that harbor mutations discovered in bulk-cell data. Statistically, SCmut controls the false positives using the 2D local false discovery rate method. We apply SCmut to several scRNA-seq datasets. In scRNA-seq breast cancer datasets SCmut identifies a number of highly confident cell-level mutations that are recurrent in many cells and consistent in different samples. In a scRNA-seq glioblastoma dataset, we discover a recurrent cell-level mutation in the PDGFRA gene that is highly correlated with a well-known in-frame deletion in the gene. To conclude, this study contributes a novel method to discover cell-level mutation information from scRNA-seq that can facilitate investigation of cell-to-cell heterogeneity. Availability and implementation: The source code and bioinformatics pipeline of SCmut are available at https://github.com/nghiavtr/SCmut. Supplementary information: Supplementary data are available at Bioinformatics online.
13

Roos, Williamson, and Bowman. "Is Anthropogenic Pyrodiversity Invisible in Paleofire Records?" Fire 2, no. 3 (July 18, 2019): 42. http://dx.doi.org/10.3390/fire2030042.

Abstract:
Paleofire studies frequently discount the impact of human activities in past fire regimes. Globally, we know that a common pattern of anthropogenic burning regimes is to burn many small patches at high frequency, thereby generating landscape heterogeneity. Is this type of anthropogenic pyrodiversity necessarily obscured in paleofire records because of fundamental limitations of those records? We evaluate this with a cellular automata model designed to replicate different fire regimes with identical fire rotations but different fire frequencies and patchiness. Our results indicate that high-frequency patch burning can be identified in tree-ring records at relatively modest sampling intensities. However, standard methods that filter out fires represented by few trees systematically bias the records against patch burning. In simulated fire regime shifts, fading records, sample size, and the contrast between the shifted fire regimes all interact to make statistical identification of regime shifts challenging without other information. Recent studies indicate that integrating information from history, archaeology, or anthropology with paleofire data generates the most reliable inferences of anthropogenic patch burning and fire regime changes associated with cultural changes.
14

Hogg, Alan G., Timothy J. Heaton, Christopher Bronk Ramsey, Gretel Boswijk, Jonathan G. Palmer, Chris S. M. Turney, John Southon, and Warren Gumbley. "The Influence of Calibration Curve Construction and Composition on the Accuracy and Precision of Radiocarbon Wiggle-Matching of Tree Rings, Illustrated by Southern Hemisphere Atmospheric Data Sets from AD 1500–1950." Radiocarbon 61, no. 5 (May 28, 2019): 1265–91. http://dx.doi.org/10.1017/rdc.2019.42.

Abstract:
This research investigates two factors influencing the ability of tree-ring data to provide accurate 14C calibration information: the fitness and rigor of the statistical model used to combine the data into a curve; and the accuracy, precision and reproducibility of the component 14C data sets. It presents a new Bayesian spline method for calibration curve construction and tests it on extant and new Southern Hemisphere (SH) data sets (also examining their dendrochronology and pretreatment) for the post-Little Ice Age (LIA) interval AD 1500–1950. The new method of construction allows calculation of component data offsets, permitting identification of laboratory and geographic biases. Application of the new method to the 10 suitable SH 14C data sets suggests that individual offset ranges for component data sets appear to be in the region of ± 10 yr. Data sets with individual offsets larger than this need to be carefully assessed before selection for calibration purposes. We identify a potential geographical offset associated with the Southern Ocean (high latitude) Campbell Island data. We test the new methodology for wiggle-matching short tree-ring sequences and use an OxCal simulation to assess the likely precision obtainable by wiggle-matching in the post-LIA interval.
15

Ryabko, Boris. "Time-Adaptive Statistical Test for Random Number Generators." Entropy 22, no. 6 (June 7, 2020): 630. http://dx.doi.org/10.3390/e22060630.

Abstract:
The problem of constructing effective statistical tests for random number generators (RNG) is considered. Currently, there are hundreds of RNG statistical tests that are often combined into so-called batteries, each containing from a dozen to more than one hundred tests. When a battery test is used, it is applied to a sequence generated by the RNG, and the calculation time is determined by the length of the sequence and the number of tests. Generally speaking, the longer is the sequence, the smaller are the deviations from randomness that can be found by a specific test. Thus, when a battery is applied, on the one hand, the “better” are the tests in the battery, the more chances there are to reject a “bad” RNG. On the other hand, the larger is the battery, the less time it can spend on each test and, therefore, the shorter is the test sequence. In turn, this reduces the ability to find small deviations from randomness. To reduce this trade-off, we propose an adaptive way to use batteries (and other sets) of tests, which requires less time but, in a certain sense, preserves the power of the original battery. We call this method time-adaptive battery of tests. The suggested method is based on the theorem which describes asymptotic properties of the so-called p-values of tests. Namely, the theorem claims that, if the RNG can be modeled by a stationary ergodic source, the value −log π(x₁x₂…xₙ)/n goes to 1 − h as n grows, where x₁x₂… is the sequence, π(·) is the p-value of the most powerful test, and h is the limit Shannon entropy of the source.
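To make the quantities in the theorem concrete, a minimal sketch is shown below using the simple frequency (monobit) test as a stand-in for a test's p-value π; the theorem itself concerns the most powerful test, and the generators and bias level here are invented for illustration:

```python
# Illustrative sketch (invented example, not the paper's code): a two-sided
# frequency ("monobit") test p-value, plus the statistic -log2(pi)/n that
# the theorem relates to the entropy deficiency 1 - h of the source.
import math
import random

def monobit_pvalue(bits):
    """Two-sided p-value of the frequency test under the fair-coin null."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    return math.erfc(s / math.sqrt(2))

random.seed(1)
fair = [random.random() < 0.5 for _ in range(10000)]
biased = [random.random() < 0.6 for _ in range(10000)]  # 60% ones, so h < 1

p_fair = monobit_pvalue(fair)
p_bias = monobit_pvalue(biased)
# For the biased source the p-value collapses, so -log2(p)/n stays positive,
# echoing 1 - h > 0; for the fair source the p-value is not small.
rate = -math.log2(max(p_bias, 1e-300)) / len(biased)
```

A time-adaptive battery exploits exactly this behavior: tests whose p-values shrink fastest on short prefixes earn more of the time budget on longer ones.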
16

Mu, Penghua, Wei Pan, Shuiying Xiang, Nianqiang Li, Xinkai Liu, and Xihua Zou. "Fast physical and pseudo random number generation based on a nonlinear optoelectronic oscillator." Modern Physics Letters B 29, no. 24 (September 3, 2015): 1550142. http://dx.doi.org/10.1142/s0217984915501420.

Abstract:
High speed random number generation (RNG) utilizing a nonlinear optoelectronic oscillator (OEO) is explored experimentally. It has been found that by simply adjusting either the injected optical power or the gain of the modulator driver, low complexity dynamics such as square wave, and more complex dynamics including fully developed chaos can be experimentally achieved. More importantly, physical RNG based on high-speed-oscilloscope measurements and pseudo RNG based on post-processing are implemented in this paper. The generated bit sequences pass all the standard statistical random tests, indicating that fast physical and pseudo RNG could be achieved based on the same OEO entropy source. Our results could provide further insight into the implementation of RNG based on chaotic optical systems.
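The post-processing step mentioned above is typically some form of debiasing of the raw physical bits. A classical example, shown here only as a hedged illustration and not necessarily the scheme used in the paper, is von Neumann's extractor:

```python
# Sketch of von Neumann debiasing: map non-overlapping bit pairs
# 01 -> 0, 10 -> 1, and discard 00/11. For i.i.d. (even biased) input,
# P(01) == P(10), so surviving bits are unbiased.
import random

def von_neumann(bits):
    """Debias a bit list by the von Neumann pairing rule."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # pair (0,1) emits 0; pair (1,0) emits 1
    return out

random.seed(0)
raw = [1 if random.random() < 0.7 else 0 for _ in range(100000)]  # biased raw bits
clean = von_neumann(raw)  # shorter, but near 50/50 ones and zeros
```

The cost is throughput: only about 2p(1−p) of the pairs survive, which is why physical RNG designs trade off raw rate against post-processing losses.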
17

Riedel, Kurt S., Lawrence N. Mertz, and Richard Petrasso. "Physicists' Statistical Biases Evaluated." Physics Today 46, no. 6 (June 1993): 106–7. http://dx.doi.org/10.1063/1.2808944.

18

Chun, Sungwoo, Seung-Beck Lee, Masahiko Hara, Wanjun Park, and Song-Ju Kim. "High-Density Physical Random Number Generator Using Spin Signals in Multidomain Ferromagnetic Layer." Advances in Condensed Matter Physics 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/251819.

Abstract:
A high-density random number generator (RNG) based on spin signals in a multidomain ferromagnetic layer in a magnetic tunnel junction (MTJ) is proposed and fabricated. Unlike conventional spin-based RNGs, the proposed method does not require controlling an applied current, which in conventional devices leads to a time delay in the system. RNG demonstrations are performed at room temperature. The randomness of the bit sequences generated by the proposed RNG is verified using the FIPS 140-2 statistical test suite provided by NIST. The test results validate the effectiveness of the proposed RNGs. Our results suggest that high-density, ultrafast RNGs can be obtained if high integration on the chip is achieved.
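For reference, the monobit member of the FIPS 140-2 suite cited above is easy to state. This sketch follows the thresholds given in the FIPS 140-2 document (a 20,000-bit block passes if the count of ones lies strictly between 9,725 and 10,275); the test generators below are invented for illustration:

```python
# Sketch of the FIPS 140-2 monobit test (one of the suite's four tests).
import random

def fips_monobit(bits):
    """Pass/fail monobit test on a 20,000-bit block per FIPS 140-2 bounds."""
    assert len(bits) == 20000, "FIPS 140-2 tests run on 20,000-bit blocks"
    ones = sum(bits)
    return 9725 < ones < 10275

random.seed(42)
good = [random.getrandbits(1) for _ in range(20000)]          # fair bits
bad = [1 if random.random() < 0.6 else 0 for _ in range(20000)]  # biased bits
# The biased block fails the monobit bound; a fair block will almost always pass.
```

The full suite also runs poker, runs, and long-run tests on the same 20,000-bit block; monobit alone is necessary but far from sufficient (e.g., 10,000 alternating bits pass it).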
19

Klimushyn, Petro, Tetiana Solianyk, Oleksandr Mozhaiev, Yurii Gnusov, Oleksandr Manzhai, and Vitaliy Svitlychny. "CRYPTO-RESISTANT METHODS AND RANDOM NUMBER GENERATORS IN INTERNET OF THINGS (IOT) DEVICES." Innovative Technologies and Scientific Solutions for Industries, no. 2 (20) (June 30, 2022): 22–34. http://dx.doi.org/10.30837/itssi.2022.20.022.

Abstract:
Subject of research: crypto-resistant methods and tools of generating random sequences and hardware support of cryptographic transformations in IoT devices. The aim of the article is to study crypto-resistant methods and tools for generating and testing random sequences suitable for use in IoT devices with limited resources; determination of circuit implementations of random sequences hardware generators; formation of conclusions on the use of random number generators (RNG) in cryptographic protection systems of the IoT network. The article solves the following tasks: analysis of methods and hardware for generating random sequences to protect IoT solutions with limited resources; identification of safe and effective technologies for the implementation of RNG; classification of RNG attacks; analysis of the shortcomings of the practical use of statistical test packages to assess the quality of random sequences of RNG; evaluation of the speed of cryptoaccelerators of hardware support for cryptographic transformations; providing practical guidance on RNG for use in resource-constrained IoT devices. Research methods: method of structural and functional analysis of RNG and IoT devices, cryptographic methods of information protection, methods of random sequence generation, method of stability analysis of systems, methods of construction of autonomous Boolean networks and Boolean chaos analysis, methods of quality assessment of random sequences. Results of work: the analysis of technologies and circuit decisions of hardware RNG on characteristics: quality of numbers’ randomness and unpredictability of sequences, speed, power consumption, miniaturization, possibility of integral execution; providing practical recommendations for the use of RNG in cryptographic protection systems of the IoT network. 
The novelty of the study is the analysis of methods and hardware to support technologies for generating random sequences in the system of cryptographic protection of IoT solutions; classification of attacks on RNG and features of protection against them; identification of effective RNG technologies and circuit solutions for use in low-power IoT devices with limited computing resources; providing practical recommendations for the use of RNG in cryptographic protection systems of the IoT network. The analysis of technologies and circuit solutions allowed to draw the following conclusions: protection of IoT solutions includes: security of IoT network nodes and their connection to the cloud using secure protocols, ensuring confidentiality, authenticity and integrity of IoT data by cryptographic methods, attack analysis and network cryptographic stability; the initial basis for the protection of IoT solutions is the true randomness of the formed RNG sequences and used in algorithms for cryptographic transformation of information to protect it; feature of IoT devices is their heterogeneity and geographical distribution, limited computing resources and power supply, small size; The most effective (reduce power consumption and increase the generation rate) for use in IoT devices are RNG exclusively on a digital basis, which implements a three-stage process: the initial digital circuit, normalizer and random number flow generator; Autonomous Boolean networks (ABN) allow to create RNG with unique characteristics: the received numbers are really random, high speed – the number can be received in one measure, the minimum power consumption, miniature, high (up to 3 GHz) throughput of Boolean chaos; a promising area of ABN development is the use of optical logic valves for the construction of optical ABN with a bandwidth of up to 14 GHz; the classification of known classes of RNG attacks includes: direct cryptanalytic attacks, attacks based on input data, attacks based on the 
disclosure of the internal state of the RNG, correlation attacks, and special attacks; statistical test packages for evaluating RNG sequences have limitations and shortcomings and do not replace cryptanalysis; and comparison of cryptoaccelerators with software implementations of cryptographic transformations shows significant advantages: for the AES block cipher, speed increases by 10-20 times on 8/16-bit cryptoaccelerators and by about 150 times on 32-bit ones; SHA-256 hashing on 32-bit cryptoaccelerators is more than 100 times faster; and the NMAS algorithm is accelerated by up to 500 times.
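One common realization of the "normalizer" stage in the three-stage digital RNG described above is a debiasing post-processor. As a hedged illustration, here is the classic von Neumann corrector in Python, a generic debiasing technique rather than any of the specific circuits surveyed in the article:

```python
def von_neumann_debias(bits):
    """Von Neumann corrector: emit 0 for a raw pair (0,1), 1 for (1,0),
    and drop (0,0)/(1,1) pairs. The output is unbiased provided the raw
    bits are independent with a constant (possibly unknown) bias."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# a short raw stream: pairs (0,1), (1,0), (1,1), (0,0), (0,1)
raw = [0, 1, 1, 0, 1, 1, 0, 0, 0, 1]
debiased = von_neumann_debias(raw)  # -> [0, 1, 0]
```

The price of the unbiased output is throughput: on average at most a quarter of the raw bits survive, which is one reason hardware designs favour more efficient normalizers.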
APA, Harvard, Vancouver, ISO, and other styles
20

Hafman, Sari Agustini, and Arif Fachru Rozi. "Analisis Teoritis dan Empiris Uji Craps dari Diehard Battery of Randomness Test untuk Pengujian Pembangkit Bilangan Acaksemu." CAUCHY 2, no. 4 (May 15, 2013): 216. http://dx.doi.org/10.18860/ca.v2i4.3118.

Full text
Abstract:
According to Kerckhoffs (1883), the security of a system should rely only on the cryptographic keys used in that system. Generally, key sequences are generated by a Pseudo Random Number Generator (PRNG) or a Random Number Generator (RNG). There are three types of random sequences generated by RNGs and PRNGs: pseudorandom sequences, cryptographically secure pseudorandom sequences, and truly random sequences. Several statistical tests, including the Diehard battery of tests of randomness, are used to check which type of randomness a PRNG or RNG produces. Because the choice of testing parameters and the test statistic determines the validity of the conclusions a statistical test produces, a theoretical analysis applying a variety of statistical theory is performed to evaluate the craps test, one of the tests included in the Diehard battery. The craps test, inspired by the game of craps, examines whether a PRNG produces independent and identically distributed (iid) pseudorandom sequences. To demonstrate how the test statistic equation is derived and how the game of craps is applied in the test, a theoretical analysis is carried out using a variety of statistical theory. Furthermore, empirical observations are made by applying the craps test to a PRNG in order to check the test's effectiveness in detecting the distribution and independence of the sequences produced by the PRNG.
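The mechanics the craps test builds on can be sketched in a few lines of Python. This is a minimal, hedged illustration of the idea only; the actual Diehard test fixes 200,000 games and also tallies the number of throws per game:

```python
import math
import random

def play_craps(rng):
    """Play one game of craps; return True on a win."""
    roll = rng.randint(1, 6) + rng.randint(1, 6)
    if roll in (7, 11):        # natural: immediate win
        return True
    if roll in (2, 3, 12):     # craps: immediate loss
        return False
    point = roll               # otherwise a point is established
    while True:
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if roll == point:      # made the point: win
            return True
        if roll == 7:          # seven out: loss
            return False

def craps_win_p_value(rng, n_games=200_000):
    """Compare the observed win count against the theoretical win
    probability 244/495 via a two-sided normal approximation."""
    p = 244 / 495
    wins = sum(play_craps(rng) for _ in range(n_games))
    z = (wins - n_games * p) / math.sqrt(n_games * p * (1 - p))
    return math.erfc(abs(z) / math.sqrt(2))

p_value = craps_win_p_value(random.Random(12345))
```

A generator of iid uniform numbers should yield p-values that are themselves uniform on [0, 1] over repeated runs; systematically tiny p-values indicate a biased or dependent source.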
APA, Harvard, Vancouver, ISO, and other styles
21

Kim, Yewon, and Yongjin Yeom. "Accelerated implementation for testing IID assumption of NIST SP 800-90B using GPU." PeerJ Computer Science 7 (March 8, 2021): e404. http://dx.doi.org/10.7717/peerj-cs.404.

Full text
Abstract:
In cryptosystems and cryptographic modules, insufficient entropy of the noise sources that serve as the input into random number generator (RNG) may cause serious damage, such as compromising private keys. Therefore, it is necessary to estimate the entropy of the noise source as precisely as possible. The National Institute of Standards and Technology (NIST) published a standard document known as Special Publication (SP) 800-90B, which describes the method for estimating the entropy of the noise source that is the input into an RNG. The NIST offers two programs for running the entropy estimation process of SP 800-90B, which are written in Python and C++. The running time for estimating the entropy is more than one hour for each noise source. An RNG tends to use several noise sources in each operating system supported, and the noise sources are affected by the environment. Therefore, the NIST program should be run several times to analyze the security of RNG. The NIST estimation runtimes are a burden for developers as well as evaluators working for the Cryptographic Module Validation Program. In this study, we propose a GPU-based parallel implementation of the most time-consuming part of the entropy estimation, namely the independent and identically distributed (IID) assumption testing process. To achieve maximal GPU performance, we propose a scalable method that adjusts the optimal size of the global memory allocations depending on GPU capability and balances the workload between streaming multiprocessors. Our GPU-based implementation excluded one statistical test, which is not suitable for GPU implementation. We propose a hybrid CPU/GPU implementation that consists of our GPU-based program and the excluded statistical test that runs using OpenMP. The experimental results demonstrate that our method is about 3 to 25 times faster than that of the NIST package.
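The permutation-testing core of the IID track can be sketched as follows. This is a simplified, single-statistic illustration of the approach (the standard runs eleven statistics over 10,000 shuffles with a stricter ranking rule), with helper names of our own choosing:

```python
import random

def n_directional_runs(seq):
    """One SP 800-90B-style statistic: the number of runs of consecutive
    increases/decreases in the sample sequence (ties are skipped here)."""
    signs = [1 if b > a else -1 for a, b in zip(seq, seq[1:]) if a != b]
    if not signs:
        return 0
    return 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def iid_permutation_test(seq, n_shuffles=500, seed=0):
    """Shuffling preserves the IID null, so the statistic of the original
    data should not rank at either extreme of the shuffled distribution."""
    rng = random.Random(seed)
    t0 = n_directional_runs(seq)
    lower = higher = 0
    work = list(seq)
    for _ in range(n_shuffles):
        rng.shuffle(work)
        t = n_directional_runs(work)
        lower += t < t0
        higher += t > t0
    # pass unless the original statistic is more extreme than almost all shuffles
    return lower < n_shuffles - 5 and higher < n_shuffles - 5

src = random.Random(42)
iid_ok = iid_permutation_test([src.randint(0, 255) for _ in range(2000)])
```

The quadratic cost (statistic recomputed per shuffle, over thousands of shuffles) is exactly what makes this the most time-consuming part of the estimation and a natural target for GPU parallelization.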
APA, Harvard, Vancouver, ISO, and other styles
22

Hay, Jessica F., and Jenny R. Saffran. "Rhythmic Grouping Biases Constrain Infant Statistical Learning." Infancy 17, no. 6 (December 29, 2011): 610–41. http://dx.doi.org/10.1111/j.1532-7078.2011.00110.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Diewert, W. Erwin. "Index Number Issues in the Consumer Price Index." Journal of Economic Perspectives 12, no. 1 (February 1, 1998): 47–58. http://dx.doi.org/10.1257/jep.12.1.47.

Full text
Abstract:
This paper addresses the following issues: what is an appropriate theoretical consumer price index that statistical agencies should attempt to measure; what are some of the possible sources of biases between the fixed base Laspeyres price index that statistical agencies produce and the theoretical cost-of-living index; and what factors will make the biases larger or smaller and how will the biases change as the general inflation rate changes? This paper addresses all of the issues mentioned above and discusses what statistical agencies can do to reduce the biases.
APA, Harvard, Vancouver, ISO, and other styles
24

Harvey, Stephen C., and M. Prabhakaran. "Umbrella sampling: avoiding possible artifacts and statistical biases." Journal of Physical Chemistry 91, no. 18 (August 27, 1987): 4799–801. http://dx.doi.org/10.1021/j100302a030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ramesh, G. P., and A. Rajan. "SRAM Based Random Number Generator for Non-Repeating Pattern Generation." Applied Mechanics and Materials 573 (June 2014): 181–86. http://dx.doi.org/10.4028/www.scientific.net/amm.573.181.

Full text
Abstract:
Field-programmable gate array (FPGA) optimized random number generators (RNGs) are more resource-efficient than software-optimized RNGs because they can take advantage of bitwise operations and FPGA-specific features. A random number generator (RNG) is a computational or physical device designed to generate a sequence of numbers or symbols that lacks any pattern, i.e., appears random. The many applications of randomness have led to the development of several different methods for generating random data. Several computational methods for random number generation exist, but they often fall short of the goal of true randomness, though they may meet, with varying success, some of the statistical tests for randomness intended to measure how unpredictable their results are (that is, to what degree their patterns are discernible). The LUT-SR family of uniform random number generators can provide randomness only based on seeds loaded into a look-up table. To make random generation more efficient, we propose a new approach based on an SRAM storage device. Keywords: RNG, LFSR, SRAM
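To make the shift-register idea concrete, here is a software model of a textbook 16-bit Galois LFSR (feedback mask 0xB400, a standard maximal-length example); the paper's LUT-SR and SRAM-seeded designs differ in structure:

```python
def lfsr_step(state: int) -> int:
    """One step of a 16-bit Galois LFSR with the maximal-length feedback
    polynomial x^16 + x^14 + x^13 + x^11 + 1 (mask 0xB400)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400   # apply the feedback taps
    return state

def lfsr_period(seed: int = 0xACE1) -> int:
    """Count steps until the state returns to the seed."""
    state = lfsr_step(seed)
    period = 1
    while state != seed:
        state = lfsr_step(state)
        period += 1
    return period
```

Because the all-zero state is a fixed point, the generator cycles through the 2^16 - 1 nonzero states; the output sequence is entirely determined by the seed, which is why seeding (here, from an SRAM storage device, per the proposal above) is the crux of the design.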
APA, Harvard, Vancouver, ISO, and other styles
26

HNATICH, M., D. HORVÁTH, and P. KOPANSK. "QUANTUM FIELD APPROACH TO THE THEORY OF DEVELOPED TURBULENCE OF CONDUCTING FERROFLUIDS." Modern Physics Letters B 08, no. 17 (July 20, 1994): 1027–40. http://dx.doi.org/10.1142/s0217984994001023.

Full text
Abstract:
The universality of the renormalization group (RNG) principle enables its application to a wide spectrum of physical problems, including stochastic problems in continuum mechanics. In this paper we use the RNG method to study the statistical properties of randomly stirred ferrohydrodynamic systems with weak magnetization m. It is shown that in these systems, as in magnetohydrodynamic systems, a steady-state asymptotic regime exists in which the spectrum of the pulsation energy has the Kolmogorov character k^(−5/3). In such a regime the Lorentz force has no influence on the behavior of a turbulent system at large scales, with the result that the magnetic field, together with the magnetization field, behaves as a passive admixture in the velocity field. It is also shown that strong nonlinear interactions of the velocity field with the magnetization generate, in the equation for the dynamics of the magnetization field, a term wν∇²m, which opens a new channel for energy dissipation. The dimensionless "magnetization" Prandtl number w⁻¹ represents the relative intensity of this energy dissipation compared with the other channels, and it attains a universal value in the turbulent regime.
APA, Harvard, Vancouver, ISO, and other styles
27

Teixeira, Christopher M. "Incorporating Turbulence Models into the Lattice-Boltzmann Method." International Journal of Modern Physics C 09, no. 08 (December 1998): 1159–75. http://dx.doi.org/10.1142/s0129183198001060.

Full text
Abstract:
The Lattice-Boltzmann method (LBM) is extended to allow incorporation of traditional turbulence models. Implementation of a two-layer mixing-length algebraic model and two versions of the k-ε two-equation model, Standard and RNG, in conjunction with a wall model, are presented. Validation studies are done for turbulent flows in a straight pipe at three Re numbers and over a backwards facing step of expansion ratio 1.5 and Re_H = 44 000. All models produce good agreement with experiment for the straight pipes, but the RNG k-ε model is best able to capture both the recirculation length, within 2% of experiment, and the detailed structure of the mean fluid flow for the backwards facing step.
APA, Harvard, Vancouver, ISO, and other styles
28

Gillmore, Gerald M., and Anthony G. Greenwald. "Using statistical adjustment to reduce biases in student ratings." American Psychologist 54, no. 7 (July 1999): 518–19. http://dx.doi.org/10.1037/0003-066x.54.7.518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Vernotte, F., and E. Lantz. "Statistical biases and very-long-term time stability analysis." IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control 59, no. 3 (March 2012): 523–30. http://dx.doi.org/10.1109/tuffc.2012.2223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

LINDSAY, R. MURRAY. "Publication System Biases Associated with the Statistical Testing Paradigm." Contemporary Accounting Research 11, no. 1 (June 9, 1994): 33–57. http://dx.doi.org/10.1111/j.1911-3846.1994.tb00435.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Butkevich, A. G., A. V. Berdyugin, and P. Teerikorpi. "Statistical biases in stellar astronomy: the Malmquist bias revisited." Monthly Notices of the Royal Astronomical Society 362, no. 1 (September 2005): 321–30. http://dx.doi.org/10.1111/j.1365-2966.2005.09306.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Bellogín, Alejandro, Pablo Castells, and Iván Cantador. "Statistical biases in Information Retrieval metrics for recommender systems." Information Retrieval Journal 20, no. 6 (July 27, 2017): 606–34. http://dx.doi.org/10.1007/s10791-017-9312-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

KIKUCHI, MACOTO, NOBUYASU ITO, and YUTAKA OKABE. "STATISTICAL DEPENDENCE ANALYSIS." International Journal of Modern Physics C 07, no. 03 (June 1996): 379–87. http://dx.doi.org/10.1142/s0129183196000326.

Full text
Abstract:
We review our recent studies on the dynamical correlations in MC simulations from the viewpoint of statistical dependence. Attention is paid to the reduction of the statistical degrees of freedom for correlated data. Possible biases in several cumulants, such as the susceptibility and the Binder number, due to finite MC length are discussed. A new method for calculating the equilibrium relaxation time from the analysis of statistical dependence is presented. We apply it to the critical dynamics of the Ising model to estimate the dynamical critical exponent accurately.
APA, Harvard, Vancouver, ISO, and other styles
34

Fatima, Saher, Rana Aamir Raza, Maruf Pasha, and Asghar Ali. "Generalization ability of Extreme Learning Machine using different Sample Selection Methods." STATISTICS, COMPUTING AND INTERDISCIPLINARY RESEARCH 3, no. 1 (June 30, 2021): 57–72. http://dx.doi.org/10.52700/scir.v3i1.34.

Full text
Abstract:
The recent explosion of data has triggered the need for data reduction to complete effective data mining tasks in the process of knowledge discovery in databases (KDD). Instance selection (IS) plays a significant role in data reduction by eliminating redundant, noisy, unreliable, and irrelevant instances, which in turn reduces the computational resources required and helps to increase the capabilities and generalization abilities of learning models. This manuscript expounds the concept and functionality of seven different instance selection techniques (i.e., ENN, AllKNN, MENN, ENNTh, MultiEdit, NCNEdit, and RNG), and also evaluates their effectiveness using a single-layer feed-forward neural network (SLFN) trained with an extreme learning machine (ELM). Unlike a traditional neural network, ELM randomly chooses the weights and biases of the hidden-layer nodes and analytically determines the weights of the output-layer node. The generalization ability of ELM is analyzed using both the original and the reduced datasets. Experimental results show that ELM provides better generalization with these IS methods.
APA, Harvard, Vancouver, ISO, and other styles
35

Rossato, Sinara L., and Sandra C. Fuchs. "Handling random errors and biases in methods used for short-term dietary assessment." Revista de Saúde Pública 48, no. 5 (October 2014): 845–50. http://dx.doi.org/10.1590/s0034-8910.2014048005154.

Full text
Abstract:
Epidemiological studies have shown the effect of diet on the incidence of chronic diseases; however, proper planning, designing, and statistical modeling are necessary to obtain precise and accurate food consumption data. Evaluation methods used for short-term assessment of food consumption of a population, such as tracking of food intake over 24h or food diaries, can be affected by random errors or biases inherent to the method. Statistical modeling is used to handle random errors, whereas proper designing and sampling are essential for controlling biases. The present study aimed to analyze potential biases and random errors and determine how they affect the results. We also aimed to identify ways to prevent them and/or to use statistical approaches in epidemiological studies involving dietary assessments.
APA, Harvard, Vancouver, ISO, and other styles
36

Arenou, Frédéric, and Xavier Luri. "Statistical Effects from Hipparcos Astrometry." Highlights of Astronomy 12 (2002): 661–64. http://dx.doi.org/10.1017/s153929960001460x.

Full text
Abstract:
The Hipparcos astrometry is used mainly for the derivation of stellar physical quantities such as luminosity, masses and velocity. However, sample selections on data with observational errors or an intrinsic dispersion may lead to biased estimates, especially when the error distributions are non-Gaussian. We review the classical biases and the ways to avoid them through the use of statistical methods.
APA, Harvard, Vancouver, ISO, and other styles
37

Smith, Kenny, Amy Perfors, Olga Fehér, Anna Samara, Kate Swoboda, and Elizabeth Wonnacott. "Language learning, language use and the evolution of linguistic variation." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1711 (January 5, 2017): 20160051. http://dx.doi.org/10.1098/rstb.2016.0051.

Full text
Abstract:
Linguistic universals arise from the interaction between the processes of language learning and language use. A test case for the relationship between these factors is linguistic variation, which tends to be conditioned on linguistic or sociolinguistic criteria. How can we explain the scarcity of unpredictable variation in natural language, and to what extent is this property of language a straightforward reflection of biases in statistical learning? We review three strands of experimental work exploring these questions, and introduce a Bayesian model of the learning and transmission of linguistic variation along with a closely matched artificial language learning experiment with adult participants. Our results show that while the biases of language learners can potentially play a role in shaping linguistic systems, the relationship between biases of learners and the structure of languages is not straightforward. Weak biases can have strong effects on language structure as they accumulate over repeated transmission. But the opposite can also be true: strong biases can have weak or no effects. Furthermore, the use of language during interaction can reshape linguistic systems. Combining data and insights from studies of learning, transmission and use is therefore essential if we are to understand how biases in statistical learning interact with language transmission and language use to shape the structural properties of language. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’.
APA, Harvard, Vancouver, ISO, and other styles
38

Su, Liguo, Richard L. Collins, David A. Krueger, and Chiao-Yao She. "Statistical Analysis of Sodium Doppler Wind–Temperature Lidar Measurements of Vertical Heat Flux." Journal of Atmospheric and Oceanic Technology 25, no. 3 (March 1, 2008): 401–15. http://dx.doi.org/10.1175/2007jtecha915.1.

Full text
Abstract:
A statistical study is presented of the errors in sodium Doppler lidar measurements of wind and temperature in the mesosphere that arise from the statistics of the photon-counting process that is inherent in the technique. The authors use data from the Colorado State University (CSU) sodium Doppler wind-temperature lidar, acquired at a midlatitude site, to define the statistics of the lidar measurements in different seasons under both daytime and nighttime conditions. The CSU lidar measurements are scaled, based on a 35-cm-diameter receiver telescope, to the use of large-aperture telescopes (i.e., 1-, 1.8-, and 3.5-m diameters). The expected biases in vertical heat flux measurements at a resolution of 480 m and 150 s are determined and compared to Gardner and Yang's reported geophysical value of 2.3 K m s⁻¹. A cross-correlation coefficient of 2%–7% between the lidar wind and temperature estimates is found. It is also found that the biases vary from −4 × 10⁻³ K m s⁻¹ for wintertime measurements at night with a 3.5-m telescope to −61 K m s⁻¹ for summertime measurements at midday with a 1-m telescope. During winter, at night, the three telescope systems yield biases in their heat flux measurements that are less than 10% of the reported value of the heat flux; and during summer, at night, the 1.8- and 3.5-m systems yield biases in their heat flux measurements that are less than 10% of the geophysical value. While during winter at midday the 3.5-m system yields biases in its heat flux measurements that are less than 10% of the geophysical value, during summer at midday all of the systems yield flux biases that are greater than the geophysical value of the heat flux. The results are discussed in terms of current lidar measurements and proposed measurements at high-latitude sites.
APA, Harvard, Vancouver, ISO, and other styles
39

Viscusi, W. Kip. "Best Estimate Selection Bias in the Value of a Statistical Life." Journal of Benefit-Cost Analysis 9, no. 2 (October 9, 2017): 205–46. http://dx.doi.org/10.1017/bca.2017.21.

Full text
Abstract:
Selection of the best estimates of economic parameters frequently relies on the “best estimates” or a meta-analysis of the “best set” of parameter estimates from the literature. Using an all-set dataset consisting of all reported estimates of the value of a statistical life (VSL) as well as a best-set sample of the best estimates from these studies, this article estimates statistically significant publication selection biases in each case. Biases are much greater for the best-set sample, as one might expect, given the subjective nature of the best-set selection process. For the all-set sample, the mean bias-corrected estimate of the VSL for the preferred specification is $8.1 million for the whole sample and $11.4 million based on the CFOI data, while for the best-set results, the whole sample value is $3.5 million, and the CFOI data estimate is $4.4 million. Previous estimates of huge publication selection biases in the VSL estimates are attributable to these studies’ reliance on best-set samples.
APA, Harvard, Vancouver, ISO, and other styles
40

Melnikov, Valery M. "One-Lag Estimators for Cross-Polarization Measurements." Journal of Atmospheric and Oceanic Technology 23, no. 7 (July 1, 2006): 915–26. http://dx.doi.org/10.1175/jtech1919.1.

Full text
Abstract:
Estimators of the linear depolarization ratio (LDR) and cross-polarization correlation coefficients (ρ_xh) free from noise biases are devised. The estimators are based on the 1-lag correlation functions. The 1-lag estimators can be implemented with radar with simultaneous reception of copolar and cross-polar returns. Absence of noise biases makes the 1-lag estimators useful in eliminating variations of the system gain and in observations of heavy precipitation with enhanced thermal radiation. The 1-lag estimators allow for measurements at lower signal-to-noise ratios than the conventional algorithms. The statistical biases and standard deviations of 1-lag estimates are obtained via the perturbation analysis. It is found that both the 1-lag and conventional estimates of ρ_xh experience strong statistical biases at ρ_xh less than 0.3 (i.e., at low canting angles of oblate hydrometeors), and a procedure to correct for this bias is proposed.
APA, Harvard, Vancouver, ISO, and other styles
41

Masterman, Clayton J., and W. Kip Viscusi. "Publication Selection Biases in Stated Preference Estimates of the Value of a Statistical Life." Journal of Benefit-Cost Analysis 11, no. 3 (2020): 357–79. http://dx.doi.org/10.1017/bca.2020.21.

Full text
Abstract:
This article presents the first meta-analysis documenting the extent of publication selection biases in stated preference estimates of the value of a statistical life (VSL). Stated preference studies fail to overcome the publication biases that affect much of the VSL literature. Such biases account for approximately 90% of the mean value of published VSL estimates in this subset of the literature. The bias is greatest for the largest estimates, possibly because the high-income labor market and stated preference estimates from the USA serve as an anchor for the VSL in other higher income countries. Estimates from lower-income countries exhibit less bias but remain unreliable for benefit-cost analysis. Unlike labor market estimates of the VSL, there is no evidence that any subsample of VSL estimates is free of significant publication selection biases. Although stated preference studies often provide the most readily accessible country-specific VSL estimates, a preferable approach to monetizing mortality risk benefits is to draw on income-adjusted estimates from labor market studies in the USA that use Census of Fatal Occupational Injuries risk data. These estimates lack publication selection effects as well as the limitations that are endemic to stated preference methods.
APA, Harvard, Vancouver, ISO, and other styles
42

Johnson, Valen E. "On biases in assessing replicability, statistical consistency and publication bias." Journal of Mathematical Psychology 57, no. 5 (October 2013): 177–79. http://dx.doi.org/10.1016/j.jmp.2013.04.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kaping, Daniel, Tzvetomir Tzvetanov, and Stefan Treue. "Adaptation to statistical properties of visual scenes biases rapid categorization." Visual Cognition 15, no. 1 (January 2007): 12–19. http://dx.doi.org/10.1080/13506280600856660.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Cook, Tessa S., Stefan L. Zimmerman, and Saurabh Jha. "Analysis of Statistical Biases in Studies Used to Formulate Guidelines." Academic Radiology 22, no. 8 (August 2015): 1010–15. http://dx.doi.org/10.1016/j.acra.2015.04.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Villalobos-Herrera, Roberto, Emanuele Bevacqua, Andreia F. S. Ribeiro, Graeme Auld, Laura Crocetti, Bilyana Mircheva, Minh Ha, Jakob Zscheischler, and Carlo De Michele. "Towards a compound-event-oriented climate model evaluation: a decomposition of the underlying biases in multivariate fire and heat stress hazards." Natural Hazards and Earth System Sciences 21, no. 6 (June 17, 2021): 1867–85. http://dx.doi.org/10.5194/nhess-21-1867-2021.

Full text
Abstract:
Climate models' outputs are affected by biases that need to be detected and adjusted to model climate impacts. Many climate hazards and climate-related impacts are associated with the interaction between multiple drivers, i.e. by compound events. So far climate model biases are typically assessed based on the hazard of interest, and it is unclear how much a potential bias in the dependence of the hazard drivers contributes to the overall bias and how the biases in the drivers interact. Here, based on copula theory, we develop a multivariate bias-assessment framework, which allows for disentangling the biases in hazard indicators in terms of the underlying univariate drivers and their statistical dependence. Based on this framework, we dissect biases in fire and heat stress hazards in a suite of global climate models by considering two simplified hazard indicators: the wet-bulb globe temperature (WBGT) and the Chandler burning index (CBI). Both indices solely rely on temperature and relative humidity. The spatial pattern of the hazard indicators is well represented by climate models. However, substantial biases exist in the representation of extreme conditions, especially in the CBI (spatial average of absolute bias: 21 °C) due to the biases driven by relative humidity (20 °C). Biases in WBGT (1.1 °C) are small compared to the biases driven by temperature (1.9 °C) and relative humidity (1.4 °C), as the two biases compensate for each other. In many regions, biases related to the statistical dependence (0.85 °C) are also important for WBGT, which indicates that well-designed physically based multivariate bias adjustment procedures should be considered for hazards and impacts that depend on multiple drivers. The proposed compound-event-oriented evaluation of climate model biases is easily applicable to other hazard types. Furthermore, it can contribute to improved present and future risk assessments through increasing our understanding of the sources of biases in the simulation of climate impacts.
APA, Harvard, Vancouver, ISO, and other styles
46

Kumari, Madhu, Meera Sharma, and V. B. Singh. "Severity Assessment of a Reported Bug by Considering its Uncertainty and Irregular State." International Journal of Open Source Software and Processes 9, no. 4 (October 2018): 20–46. http://dx.doi.org/10.4018/ijossp.2018100102.

Full text
Abstract:
An accurate bug severity assessment is an important factor in bug fixing. Bugs are reported on the bug tracking system by different users at a rapid pace. The size of software repositories is also increasing at an enormous rate. This increased size often brings much uncertainty and irregularity. The factors that cause uncertainty are biases, noise and abnormality in data. The authors consider that the software bug reporting phenomenon on the bug tracking system keeps an irregular state. Without proper handling of these uncertainties and irregularities, the performance of learning strategies can be significantly reduced. To incorporate and account for these two phenomena, they have used entropy as an attribute to assess bug severity. The authors have predicted bug severity by using machine learning techniques, namely KNN, J48, RF, RNG, NB, CNN and MLR. They have validated the classifiers using the PITS, Eclipse and Mozilla projects. The results show that the proposed entropy-based approaches improve performance as compared to the state-of-the-art approach considered in this article.
APA, Harvard, Vancouver, ISO, and other styles
47

Cherabli, Meriem, Megdouda Ourbih-Tari, and Meriem Boubalou. "Refined descriptive sampling simulated annealing algorithm for solving the traveling salesman problem." Monte Carlo Methods and Applications 28, no. 2 (May 31, 2022): 175–88. http://dx.doi.org/10.1515/mcma-2022-2113.

Full text
Abstract:
The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. In this paper, we propose a software component for the Windows environment called goRDS, which implements a high-quality refined descriptive sampling (RDS) number generator in the MATLAB programming language. The aim of this generator is to sample random inputs through the RDS method for use in the simple SA algorithm with the swap operator. In this way, the new probabilistic meta-heuristic called the RDS-SA algorithm will enhance the simple SA algorithm with the swap operator, the SA algorithm, and possibly its variants, with solutions of better quality and precision. Towards this goal, the goRDS generator was extensively checked by adequate statistical tests and compared statistically to the random number generator (RNG) of MATLAB, and it passed the tests better. Simulation experiments were carried out on the benchmark traveling salesman problem (TSP), and the results show that the solutions obtained with the RDS-SA algorithm are of better quality and precision than those of the simple SA algorithm with the swap operator, since the software component goRDS represents the probabilistic behavior of the SA input random variables better than the usual RNG.
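The "simple SA algorithm with swap operator" used as the baseline here can be sketched as follows. This is a hedged Python illustration with made-up city coordinates and a standard-library RNG in place of the RDS generator; the parameter values are arbitrary:

```python
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour over the given city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def sa_swap_tsp(cities, t0=10.0, cooling=0.995, steps=20_000, seed=1):
    """Simple simulated annealing for the TSP using a swap neighbourhood."""
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    rng.shuffle(tour)
    cur = best = tour_length(tour, cities)
    best_tour = tour[:]
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(tour)), 2)   # swap operator
        tour[i], tour[j] = tour[j], tour[i]
        cand = tour_length(tour, cities)
        # Metropolis acceptance: always take improvements, sometimes uphill moves
        if cand < cur or rng.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best:
                best, best_tour = cur, tour[:]
        else:
            tour[i], tour[j] = tour[j], tour[i]  # undo the rejected swap
        t *= cooling                             # geometric cooling schedule
    return best_tour, best

# 20 cities on a unit circle: the optimal tour simply follows the circle
cities = [(math.cos(2 * math.pi * k / 20), math.sin(2 * math.pi * k / 20))
          for k in range(20)]
tour, length = sa_swap_tsp(cities)
```

The RDS-SA variant replaces the uniform draws consumed by `rng` with refined descriptive samples, leaving the annealing loop itself unchanged.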
APA, Harvard, Vancouver, ISO, and other styles
48

PARESCHI, FABIO, GIANLUCA SETTI, and RICCARDO ROVATTI. "STATISTICAL TESTING OF A CHAOS BASED CMOS TRUE-RANDOM NUMBER GENERATOR." Journal of Circuits, Systems and Computers 19, no. 04 (June 2010): 897–910. http://dx.doi.org/10.1142/s0218126610006517.

Full text
Abstract:
As faster random number generators become available, the possibility of improving the accuracy of randomness tests through the analysis of a larger number of generated bits increases. In this paper we first introduce a high-performance true-random number generator designed by the authors, which uses a set of discrete-time piecewise-linear chaotic maps as its entropy source. Then we present, by means of suitably improved randomness tests, the validation of this generator and its comparison with other high-end solutions. We consider the NIST test suite SP 800-22 and show that, as suggested by NIST itself, to increase the so-called power of the test, a more in-depth analysis should be performed using the outcomes of the suite over many generated sequences. With this approach we build a framework for high-quality RNG testing, with which we are able to show that the designed prototype has quality comparable to other high-quality commercial solutions, with a working speed that is one order of magnitude faster.
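As a concrete sample of what the SP 800-22 suite computes, its first and simplest test (the frequency, or monobit, test) reduces to a closed-form p-value:

```python
import math
import random

def monobit_p_value(bits):
    """SP 800-22 frequency (monobit) test: P-value = erfc(|S_n| / sqrt(2n)),
    where S_n is the sum of the bits mapped from {0, 1} to {-1, +1}."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# illustrative input: Python's Mersenne Twister, not the chaotic TRNG of the paper
rng = random.Random(2024)
p = monobit_p_value([rng.getrandbits(1) for _ in range(100_000)])
```

A sequence passes an individual test at the usual significance level when p ≥ 0.01; the more in-depth analysis advocated above then examines the distribution of such p-values over many generated sequences rather than a single pass/fail outcome.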
APA, Harvard, Vancouver, ISO, and other styles
49

Su, C. H., and D. Ryu. "Multi-scale analysis of bias correction of soil moisture." Hydrology and Earth System Sciences Discussions 11, no. 7 (July 29, 2014): 8995–9026. http://dx.doi.org/10.5194/hessd-11-8995-2014.

Full text
Abstract:
Remote sensing, in situ networks and models are now providing unprecedented information for environmental monitoring. To use multi-source data nominally representing an identical variable conjunctively, one must resolve the biases existing between these disparate sources, and the characteristics of the biases can be non-trivial due to the spatiotemporal variability of the target variable and inter-sensor differences with variable measurement supports. One such example is soil moisture (SM) monitoring. Triple collocation (TC) based bias correction is a powerful statistical method that is increasingly being used to address this issue, but it is only applicable in the linear regime, whereas the nonlinear method of statistical moment matching is susceptible to unintended biases originating from measurement error. Since the different physical processes that influence SM dynamics may be distinguishable by their characteristic spatiotemporal scales, we propose a multi-time-scale linear bias model in the framework of a wavelet-based multi-resolution analysis (MRA). The joint MRA-TC analysis was applied to demonstrate scale-dependent biases between in situ, remotely sensed and modelled SM, the influence of various prospective bias correction schemes on these biases, and lastly to enable multi-scale bias correction and data-adaptive, nonlinear de-noising via wavelet thresholding.
50

Iacucci, M., L. Jeffery, A. Acharjee, O. M. Nardone, S. C. Smith, N. Labarile, D. Zardo, et al. "OP10 Response to biologics in IBD patients assessed by Computerized image analysis of Probe Based Confocal Laser Endomicroscopy with molecular labeling and gene expression profiling." Journal of Crohn's and Colitis 15, Supplement_1 (May 1, 2021): S009—S010. http://dx.doi.org/10.1093/ecco-jcc/jjab075.009.

Full text
Abstract:
Abstract. Background: Biologics are being used increasingly in the treatment of Inflammatory Bowel Disease. However, up to 40% of patients do not respond to biologics, so methods to predict response are imperative. We aimed to identify novel genes and pathways predictive of anti-TNF response in patients with Ulcerative Colitis (UC) undergoing electronic chromoendoscopy and probe-based confocal laser endomicroscopy (pCLE). We further evaluated the ex vivo binding of fluorescently labelled biologics as a marker of response. Methods: 26 UC patients starting anti-TNF therapy as standard of care were recruited. Pre-treatment colonoscopy, with electronic chromoendoscopy and pCLE (Cellvizio, Mauna Kea) by injecting intravenous fluorescein (2.5–5 ml), was performed to assess disease activity. Targeted biopsies were taken for fluorescein isothiocyanate (FITC)-labelled infliximab staining and for RNA extraction and gene expression analysis. Ex vivo labelling was evaluated by an automated analysis: after a first pre-processing step to remove biases, the labelled regions were identified using statistical multi-level thresholding and quantified by area and intensity. To assess response, the same endoscopic procedure was repeated at week 12–14 after anti-TNF. cDNA libraries were prepared using QIAseq UPX 3' Transcriptome reagents and sequenced. Normalised gene expression values were obtained through the CLC Genomics Workbench. Differentially expressed genes (DEGs; FDR-corrected P-value < 0.05) were determined using the Limma package, and PLS-DA modelling was performed to calculate their importance (VIP score). Functionally related genes were identified and classified using DAVID tools. The strongest indicators of response were predicted by Random Forest area-under-the-curve (AUC) analysis in this cohort and a similar validation cohort. Results: At baseline, increased binding of the labelled biologic was associated with a higher likelihood of response to treatment (AUROC 81%, accuracy 77%, PPV 100%, NPV 63%).
342 DEGs (75 up-regulated, 267 down-regulated) distinguished responders from non-responders; 76 fell within enriched pathways. Pathways related to inflammation, chemotaxis, TGF-beta signalling, extracellular matrix and carbohydrate metabolism were reduced, and cell-cell adhesion was increased, in responders pre-treatment. Among the 37 genes with VIP > 1, CRIP2, CXCL6, EMILIN1, GADD45B, LAMA4 and MAPKAPK2 were upregulated in non-responders pre-treatment and were good predictors of response (AUROC > 0.7) in this cohort and the validation cohort. Conclusion: A higher mucosal binding of the biologics before treatment was observed in anti-TNF responders. Responsive UC patients have a less inflamed and fibrotic state pre-treatment. Chemotactic pathways involving CXCL6 may be novel targets to treat non-responders.
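The abstract does not specify which statistical multi-level thresholding was used to identify the labelled regions. As a hedged illustration, here is the two-class special case of histogram-based thresholding (Otsu's method), which picks the intensity cut maximizing between-class variance; the toy pixel intensities are invented for the example, and a true multi-level variant would search for several cuts jointly.

```python
def otsu_threshold(values, levels=256):
    """Single-threshold Otsu: over all cuts t, maximize the between-class
    variance w0*w1*(mu0 - mu1)^2 of the intensity histogram. Returns the
    highest intensity assigned to the 'background' class."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0          # background pixel count
    sum0 = 0.0      # background intensity sum
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dim background around 40-45, bright labelled
# (fluorescent) pixels around 195-205.
pixels = [40] * 60 + [45] * 40 + [195] * 20 + [205] * 30
t = otsu_threshold(pixels)
print(t)  # a cut at the top of the dim mode; here 45
```

Once the threshold is chosen, the labelled-region area is simply the count of pixels above it, and intensity is their mean — matching the "area and intensity" readouts described above.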
