Academic literature on the topic 'Statistical biases of RNG'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Statistical biases of RNG.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Statistical biases of RNG"

1

Zaim, Samir Rachid, Colleen Kenost, Hao Helen Zhang, and Yves A. Lussier. "Personalized beyond Precision: Designing Unbiased Gold Standards to Improve Single-Subject Studies of Personal Genome Dynamics from Gene Products." Journal of Personalized Medicine 11, no. 1 (December 31, 2020): 24. http://dx.doi.org/10.3390/jpm11010024.

Full text
Abstract:
Background: Developing patient-centric baseline standards that enable the detection of clinically significant outlier gene products on a genome scale remains an unaddressed challenge required for advancing personalized medicine beyond the small pools of subjects implied by “precision medicine”. This manuscript proposes a novel approach to reference standard development for evaluating the accuracy of single-subject analyses of transcriptomes and offers extensions into proteomes and metabolomes. In evaluation frameworks for which the distributional assumptions of statistical testing imperfectly model the genome dynamics of gene products, artefacts and biases are confounded with authentic signals. Model confirmation biases escalate when studies use the same analytical methods in the discovery sets and reference standards; in such studies, replicated biases are confounded with measures of accuracy. We hypothesized that developing method-agnostic reference standards would reduce such replication biases. We propose to evaluate discovery methods with a reference standard derived from a consensus of analytical methods distinct from the discovery one, to minimize statistical artefact biases. Our methods involve thresholding effect size and filtering expression level to improve consensus between analytical methods. We developed and released an R package, “referenceNof1”, to facilitate the construction of robust reference standards. Results: Because RNA-Seq data analysis methods range from those resting on binomial and negative binomial assumptions to non-parametric analyses, the differences create statistical noise and make the reference standards method dependent. In our experimental design, the accuracy of 30 distinct combinations of fold changes (FC) and expression counts (hereinafter “expression”) was determined for five types of RNA analyses in two distinct datasets: breast cancer cell lines and a yeast study with isogenic biological replicates in two experimental conditions. Furthermore, the reference standard (RS) comprised all RNA analytical methods with the exception of the method whose accuracy was being tested. To mitigate biases towards a specific analytical method, the pairwise Jaccard concordance index between the observed results of distinct analytical methods was calculated for optimization. Optimization through effect-size thresholding and expression-level filtering reduced the greatest discordances between distinct methods’ analytical results and produced a 65% increase in concordance. Conclusions: We have demonstrated that comparing the accuracies of different single-subject analysis methods for clinical optimization in transcriptomics requires a new evaluation framework. Reliable and robust reference standards, independent of the evaluated method, can be obtained under a limited number of parameter combinations: fold change (FC) range thresholds, expression-level cutoffs, and exclusion of the tested method from the RS development process. When anticonservative reference standard frameworks are applied (e.g., using the same method for RS development and prediction), most of the concordant signal between prediction and gold standard (GS) cannot be confirmed by other methods, which we conclude reflects biased results. Statistical tests to determine DEGs from a single-subject study generate many biased results that require subsequent filtering to increase reliability. Conventional single-subject studies pertain to one or a few patients’ measures over time and require a substantial extension of their conceptual framework to address the numerous measures in genome-wide analyses of gene products. The proposed referenceNof1 framework addresses some of the inherent challenges of improving transcriptome-scale single-subject analyses by providing a robust approach to constructing reference standards.
APA, Harvard, Vancouver, ISO, and other styles
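The consensus idea in the entry above — scoring agreement between analytical methods with the pairwise Jaccard concordance index — can be sketched in a few lines. The gene sets and method names below are invented for illustration, not data from the study.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard index |A & B| / |A | B| between two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical DEG calls from three analytical methods (illustrative only).
deg_calls = {
    "methodA": {"BRCA1", "TP53", "MYC", "EGFR"},
    "methodB": {"BRCA1", "TP53", "KRAS"},
    "methodC": {"TP53", "MYC", "EGFR", "KRAS"},
}

# Pairwise concordance between methods; in the framework above, FC and
# expression thresholds are tuned upstream to maximize these values.
for m1, m2 in combinations(deg_calls, 2):
    print(m1, m2, round(jaccard(deg_calls[m1], deg_calls[m2]), 3))
```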
2

Marschner, Ian C., Rebecca A. Betensky, Victor DeGruttola, Scott M. Hammer, and Daniel R. Kuritzkes. "Clinical Trials Using HIV-1 RNA-Based Primary Endpoints: Statistical Analysis and Potential Biases." Journal of Acquired Immune Deficiency Syndromes and Human Retrovirology 20, no. 3 (March 1999): 220–27. http://dx.doi.org/10.1097/00042560-199903010-00002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yamaguchi, David K. "More on estimating the statistical significance of cross-dating positions for "floating" tree-ring series." Canadian Journal of Forest Research 24, no. 2 (February 1, 1994): 427–29. http://dx.doi.org/10.1139/x94-058.

Full text
Abstract:
Tabulated Student's t-values and climatic insensitivity among inner tree-ring widths can bias estimates of statistical significance for cross correlations relating "floating" and master tree-ring series. These biases can be removed by (i) directly computing significance levels for cross-correlation coefficients at dating positions and (ii) deleting insensitive inner rings from a dated floating sample before final correlation analysis. The number of early rings to delete can be determined from plots of cross-correlation coefficients linking a dated floating series of artificially decreasing length with a master series. These modifications improve the precision of Yamaguchi and Allen's approach (D.K. Yamaguchi and G.L. Allen. 1992. Can. J. For. Res. 22: 1215–1221) for estimating significance.
APA, Harvard, Vancouver, ISO, and other styles
4

Flandre, Philippe, Christine Durier, Diane Descamps, Odile Launay, and Véronique Joly. "On the Use of Magnitude of Reduction in HIV-1 RNA in Clinical Trials: Statistical Analysis and Potential Biases." JAIDS Journal of Acquired Immune Deficiency Syndromes 30, no. 1 (May 2002): 59–64. http://dx.doi.org/10.1097/00126334-200205010-00007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Flandre, Philippe, Christine Durier, Diane Descamps, Odile Launay, and Véronique Joly. "On the Use of Magnitude of Reduction in HIV-1 RNA in Clinical Trials: Statistical Analysis and Potential Biases." JAIDS Journal of Acquired Immune Deficiency Syndromes 30, no. 1 (May 2002): 59–64. http://dx.doi.org/10.1097/00042560-200205010-00007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Petegrosso, Raphael, Zhuliu Li, and Rui Kuang. "Machine learning and statistical methods for clustering single-cell RNA-sequencing data." Briefings in Bioinformatics 21, no. 4 (June 27, 2019): 1209–23. http://dx.doi.org/10.1093/bib/bbz063.

Full text
Abstract:
Single-cell RNA-sequencing (scRNA-seq) technologies have enabled the large-scale whole-transcriptome profiling of each individual single cell in a cell population. A core analysis of the scRNA-seq transcriptome profiles is to cluster the single cells to reveal cell subtypes and infer cell lineages based on the relations among the cells. This article reviews the machine learning and statistical methods for clustering scRNA-seq transcriptomes developed in the past few years. The review focuses on how conventional clustering techniques such as hierarchical clustering, graph-based clustering, mixture models, k-means, ensemble learning, neural networks and density-based clustering are modified or customized to tackle the unique challenges in scRNA-seq data analysis, such as the dropout of low-expression genes, low and uneven read coverage of transcripts, highly variable total mRNAs from single cells and ambiguous cell markers in the presence of technical biases and irrelevant confounding biological variations. We review how cell-specific normalization, the imputation of dropouts and dimension reduction methods can be applied with new statistical or optimization strategies to improve the clustering of single cells. We will also introduce those more advanced approaches to cluster scRNA-seq transcriptomes in time series data and multiple cell populations and to detect rare cell types. Several software packages developed to support the cluster analysis of scRNA-seq data are also reviewed and experimentally compared to evaluate their performance and efficiency. Finally, we conclude with useful observations and possible future directions in scRNA-seq data analytics. Availability: All the source code and data are available at https://github.com/kuanglab/single-cell-review.
APA, Harvard, Vancouver, ISO, and other styles
7

Jaffe, Andrew E., Ran Tao, Alexis L. Norris, Marc Kealhofer, Abhinav Nellore, Joo Heon Shin, Dewey Kim, et al. "qSVA framework for RNA quality correction in differential expression analysis." Proceedings of the National Academy of Sciences 114, no. 27 (June 20, 2017): 7130–35. http://dx.doi.org/10.1073/pnas.1617384114.

Full text
Abstract:
RNA sequencing (RNA-seq) is a powerful approach for measuring gene expression levels in cells and tissues, but it relies on high-quality RNA. We demonstrate here that statistical adjustment using existing quality measures largely fails to remove the effects of RNA degradation when RNA quality associates with the outcome of interest. Using RNA-seq data from molecular degradation experiments of human primary tissues, we introduce a method—quality surrogate variable analysis (qSVA)—as a framework for estimating and removing the confounding effect of RNA quality in differential expression analysis. We show that this approach results in greatly improved replication rates (>3×) across two large independent postmortem human brain studies of schizophrenia and also removes potential RNA quality biases in earlier published work that compared expression levels of different brain regions and other diagnostic groups. Our approach can therefore improve the interpretation of differential expression analysis of transcriptomic data from human tissue.
APA, Harvard, Vancouver, ISO, and other styles
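The core statistical move of qSVA described above — removing an RNA-quality confound by adding quality covariates to the differential expression model — can be illustrated with simulated data. This is a toy linear-model sketch under invented parameters, not the authors' implementation (which estimates quality surrogate variables from degradation experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                  # case/control label
quality = 0.8 * group + rng.normal(0, 1, n)    # RNA quality correlates with group
expr = 1.0 * group + 2.0 * quality + rng.normal(0, 1, n)  # true group effect = 1.0

# Naive model: expression ~ group (quality confounds the estimate).
X_naive = np.column_stack([np.ones(n), group])
b_naive = np.linalg.lstsq(X_naive, expr, rcond=None)[0]

# Adjusted model: expression ~ group + quality surrogate.
X_adj = np.column_stack([np.ones(n), group, quality])
b_adj = np.linalg.lstsq(X_adj, expr, rcond=None)[0]

print("naive group effect:", b_naive[1])     # inflated by confounding
print("adjusted group effect:", b_adj[1])    # near the true value 1.0
```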
8

Waweru, Jacqueline Wahura, Zaydah de Laurent, Everlyn Kamau, Khadija Said, Elijah Gicheru, Martin Mutunga, Caleb Kibet, et al. "Enrichment approach for unbiased sequencing of respiratory syncytial virus directly from clinical samples." Wellcome Open Research 6 (May 7, 2021): 99. http://dx.doi.org/10.12688/wellcomeopenres.16756.1.

Full text
Abstract:
Background: Nasopharyngeal samples contain higher quantities of bacterial and host nucleic acids relative to viruses, presenting challenges during virus metagenomic sequencing, which underpins agnostic sequencing protocols. We aimed to develop a viral enrichment protocol for unbiased whole-genome sequencing of respiratory syncytial virus (RSV) from nasopharyngeal samples using the Oxford Nanopore Technology (ONT) MinION platform. Methods: We assessed two protocols using RSV-positive samples. Protocol 1 involved physical pre-treatment of samples by centrifugal processing before RNA extraction, while Protocol 2 entailed direct RNA extraction without prior enrichment. Concentrates from Protocol 1 and RNA extracts from Protocol 2 were each divided into two fractions; one was DNase treated while the other was not. RNA was then extracted from both concentrate fractions per sample, and RNA from both protocols was converted to cDNA, which was then amplified using the tagged Endoh primers through the Sequence-Independent Single-Primer Amplification (SISPA) approach; a library was prepared and sequencing done. Statistical significance during analysis was tested using the Wilcoxon signed-rank test. Results: DNase-treated fractions from both protocols showed significantly reduced host and bacterial contamination, unlike the untreated fractions (p<0.01 in each protocol). Additionally, DNase treatment after RNA extraction (Protocol 2) enhanced host and bacterial read reduction compared to treatment before extraction (Protocol 1). However, neither protocol yielded whole RSV genomes. Sequenced reads mapped to parts of the nucleoprotein (N gene) and polymerase complex (L gene) from Protocols 1 and 2, respectively. Conclusions: DNase treatment was most effective in reducing host and bacterial contamination, and its effectiveness improved when performed after RNA extraction rather than before. We attribute the incomplete genome segments to amplification biases resulting from the use of a short random sequence (6 bases) in the tagged Endoh primers. Increasing the length of the random region from six bases to nine or 12 in future studies may reduce the coverage biases.
APA, Harvard, Vancouver, ISO, and other styles
9

Bergsten, Emma, Denis Mestivier, and Iradj Sobhani. "The Limits and Avoidance of Biases in Metagenomic Analyses of Human Fecal Microbiota." Microorganisms 8, no. 12 (December 9, 2020): 1954. http://dx.doi.org/10.3390/microorganisms8121954.

Full text
Abstract:
An increasing body of evidence highlights the role of fecal microbiota in various human diseases. However, more than two-thirds of fecal bacteria cannot be cultivated by routine laboratory techniques. Thus, physicians and scientists use DNA sequencing and statistical tools to identify associations between bacterial subgroup abundances and disease. However, discrepancies between studies weaken these results. In the present study, we focus on biases that might account for these discrepancies. First, three different DNA extraction methods (G’NOME, QIAGEN, and PROMEGA) were compared with regard to their efficiency, i.e., the quality and quantity of DNA recovered from the feces of 10 healthy volunteers. Then, the impact of the DNA extraction method on bacteria identification and quantification was evaluated using our published cohort of samples subjected to both 16S rRNA sequencing and whole metagenome sequencing (WMS). WMS taxonomical assignation employed the universal marker gene profiler mOTU-v2, which is considered the gold standard. The three standard pipelines for 16S rRNA analysis (MALT and MEGAN6, QIIME1, and DADA2) were applied for comparison. Taken together, our results indicate that the G’NOME-based method was optimal in terms of the quantity and quality of DNA extracts. 16S rRNA sequence-based identification of abundant bacteria genera showed acceptable congruence with WMS sequencing, with the DADA2 pipeline yielding the highest congruence levels. However, for low-abundance genera (<0.5% of the total abundance) two pipelines and/or validation by quantitative polymerase chain reaction (qPCR) or WMS are required. Hence, 16S rRNA sequencing for bacteria identification and quantification in clinical and translational studies should be limited to diagnostic purposes in well-characterized and abundant genera. Additional techniques are warranted for low-abundance genera, such as WMS, qPCR, or the use of two bioinformatics pipelines.
APA, Harvard, Vancouver, ISO, and other styles
10

Goncearenco, Alexander, Bin-Guang Ma, and Igor N. Berezovsky. "Molecular mechanisms of adaptation emerging from the physics and evolution of nucleic acids and proteins." Nucleic Acids Research 42, no. 5 (December 25, 2013): 2879–92. http://dx.doi.org/10.1093/nar/gkt1336.

Full text
Abstract:
DNA, RNA and proteins are major biological macromolecules that coevolve and adapt to environments as components of one highly interconnected system. We explore here sequence/structure determinants of mechanisms of adaptation of these molecules, links between them, and results of their mutual evolution. We complemented statistical analysis of genomic and proteomic sequences with folding simulations of RNA molecules, unraveling causal relations between compositional and sequence biases reflecting molecular adaptation on DNA, RNA and protein levels. We found many compositional peculiarities related to environmental adaptation and the life style. Specifically, thermal adaptation of protein-coding sequences in Archaea is characterized by a stronger codon bias than in Bacteria. Guanine and cytosine load in the third codon position is important for supporting the aerobic life style, and it is highly pronounced in Bacteria. The third codon position also provides a tradeoff between arginine and lysine, which are favorable for thermal adaptation and aerobicity, respectively. Dinucleotide composition provides stability of nucleic acids via strong base-stacking in ApG dinucleotides. In relation to coevolution of nucleic acids and proteins, thermostability-related demands on the amino acid composition affect the nucleotide content in the second codon position in Archaea.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Statistical biases of RNG"

1

Traore, Mohamed. "Analyse des biais de RNG pour les mécanismes cryptographiques et applications industrielles." Thesis, Université Grenoble Alpes, 2022. http://www.theses.fr/2022GRALM013.

Full text
Abstract:
In this work, we analyze X.509 SSL/TLS certificates (using RSA encryption and originating from hundreds of millions of connected devices) looking for anomalies, notably extending the work of Hastings, Fried and Heninger (2016). Our study was carried out on three databases from the EFF (2010-2011), ANSSI (2011-2017) and Rapid7 (2017-2021). Several vulnerabilities affecting devices from well-known manufacturers were detected: small moduli (strictly less than 1024 bits), redundant moduli (used by several entities), invalid certificates still in use, moduli vulnerable to the ROCA attack, as well as so-called “GCD-vulnerable” moduli (i.e., moduli having common factors). For the Rapid7 database, containing nearly 600 million certificates (including those of recent devices), we identified 1,550,382 certificates whose moduli are GCD-vulnerable, that is, 0.27% of the total number. This made it possible to factor 14,765 moduli of 2048 bits, which, to our knowledge, has never been done before. By analyzing certain GCD-vulnerable moduli, we were able to partially reverse-engineer the 512-bit modulus generator used by certain families of firewalls, which allowed the instantaneous factorization of 42 moduli of 512 bits, corresponding to certificates from 8,817 IPv4 addresses. After noting that most of the factored moduli had been generated by the OpenSSL library, we analyzed the source code and the methods in charge of the RSA key generation process in several versions of this library (covering the period 2005 to 2021). Through experiments on platforms based on ARM processors, in conditions nearly identical to those of the vulnerable devices identified, we managed to trace the causes of the GCD-vulnerability.
APA, Harvard, Vancouver, ISO, and other styles
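The "GCD-vulnerable" moduli at the heart of this thesis can be demonstrated with a toy pairwise-GCD scan: when a biased RNG makes two RSA moduli share a prime factor, a single gcd computation factors both. The tiny primes below are illustrative stand-ins for 256-bit primes; real scans over millions of moduli use the batch-GCD product-tree algorithm rather than testing all pairs.

```python
from math import gcd
from itertools import combinations

# Tiny primes stand in for the large primes of real RSA moduli.
p, q, r, s = 10007, 10009, 10037, 10039
moduli = [p * q, r * s, p * r]   # the third modulus reuses p and r

for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
    g = gcd(n1, n2)
    if 1 < g < n1:   # a shared prime factor: both moduli are broken
        print(f"moduli {i} and {j} share factor {g}: "
              f"{n1} = {g} * {n1 // g}, {n2} = {g} * {n2 // g}")
```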
2

Xia, Cassandra. "A game-based intervention for the reduction of statistical cognitive biases." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91416.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2014.
Title as it appears in the MIT commencement exercises program, June 6, 2014: “Probability playground: a set of games for statistical intuition.” Cataloged from the PDF version of the thesis.
Includes bibliographical references (pages 48-50).
Probability and statistics is perhaps the area of mathematics education most directly applicable to everyday life. Yet, the methodologies traditionally used to cover these topics in school render the material formal and difficult to apply. In this thesis, I describe a game design that develops probabilistic concepts in real-life situations. Psychologists have coined the term cognitive bias for instances in which the intuition of the average person disagrees with the formal mathematical analysis of the problem. This thesis examines whether a one-hour game-based intervention can enact a change in the intuitive mental models people use for reasoning about probability and uncertainty in real life. Two cognitive biases were selected for treatment: the overconfidence effect and base rate neglect. These two biases represent instances of miscalibrated subjective probabilities and Bayesian inference, respectively. Results of user tests suggest that it is possible to alter probabilistic intuitions, but that the transitions from current mental constructs must be carefully designed. Prototyping results suggest how some elements of game design may naturally lend themselves to deep learning objectives and heuristics.
APA, Harvard, Vancouver, ISO, and other styles
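Base rate neglect, one of the two biases treated in this thesis, is the textbook failure of Bayesian inference: for a rare condition, even an accurate test yields a surprisingly low posterior. The 1%-prevalence / 99%-accuracy numbers below are the standard classroom example, not taken from the thesis itself.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Intuition says a positive result from a 99%-accurate test means ~99%
# probability of having the condition; with a 1% base rate it is only 50%.
print(posterior(0.01, 0.99, 0.01))  # -> 0.5
```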
3

Eroglu, Cuneyt. "An investigation of accuracy, learning and biases in judgmental adjustments of statistical forecasts." Columbus, Ohio: Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1150398313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Schäfer, Thomas, and Marcus A. Schwarz. "The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases." Frontiers Media SA, 2019. https://monarch.qucosa.de/id/qucosa%3A33749.

Full text
Abstract:
Effect sizes are the currency of psychological research. They quantify the results of a study to answer the research question and are used to calculate statistical power. The interpretation of effect sizes—when is an effect small, medium, or large?—has been guided by the recommendations Jacob Cohen gave in his pioneering writings starting in 1962: Either compare an effect with the effects found in past research or use certain conventional benchmarks. The present analysis shows that neither of these recommendations is currently applicable. From past publications without pre-registration, 900 effects were randomly drawn and compared with 93 effects from publications with pre-registration, revealing a large difference: Effects from the former (median r = 0.36) were much larger than effects from the latter (median r = 0.16). That is, certain biases, such as publication bias or questionable research practices, have caused a dramatic inflation in published effects, making it difficult to compare an actual effect with the real population effects (as these are unknown). In addition, there were very large differences in the mean effects between psychological sub-disciplines and between different study designs, making it impossible to apply any global benchmarks. Many more pre-registered studies are needed in the future to derive a reliable picture of real population effects.
APA, Harvard, Vancouver, ISO, and other styles
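The inflation mechanism the study documents — published effects being much larger than pre-registered ones — can be reproduced with a small selection-bias simulation: draw many studies around a modest true correlation and "publish" only the significant ones. All parameters below are illustrative, and the significance cutoff for |r| at n = 50 is approximate.

```python
import numpy as np

rng = np.random.default_rng(42)
true_r, n, n_studies = 0.15, 50, 5000

published = []
for _ in range(n_studies):
    # One study: sample n pairs from a bivariate normal with correlation true_r.
    x, y = rng.multivariate_normal([0, 0], [[1, true_r], [true_r, 1]], n).T
    r = float(np.corrcoef(x, y)[0, 1])
    # Approximate two-tailed p < .05 cutoff for |r| with n = 50.
    if abs(r) > 0.279:
        published.append(r)

print("true population r:", true_r)
print("median published r:", round(float(np.median(published)), 3))
```

The median published r comes out well above the true population value, mirroring the gap between the non-pre-registered (median r = 0.36) and pre-registered (median r = 0.16) effects reported above.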
5

Ripollone, John Edward. "Exploration of structural and statistical biases in the application of propensity score matching to pharmacoepidemiologic data." Thesis, 2019. https://hdl.handle.net/2144/36025.

Full text
Abstract:
Certain pitfalls associated with propensity score matching have recently come to light. The extent to which these pitfalls might threaten validity and precision in pharmacoepidemiologic research, for which propensity score matching often is used, is uncertain. We evaluated the “propensity score matching paradox” – the tendency for covariate imbalance to increase in a propensity score-matched dataset upon continuous pruning of matched sets – as well as the utility of coarsened exact matching, a technique that has been posed as a preferable alternative to propensity score matching, especially in light of the “propensity score matching paradox”. We show that the “propensity score matching paradox” may not threaten causal inference that is based on propensity score matching in typical pharmacoepidemiologic settings to the extent predicted by previous research. Moreover, even though coarsened exact matching substantially improves covariate balance, it may not be optimal in typical pharmacoepidemiologic settings due to the extreme loss of study size (and resulting increase in bias and variance) that may be required to build the matched dataset. Finally, we explain variability in 1:1 propensity score matching without replacement as well as methods that were developed to account for this variability, with application of these methods to an example claims-based study.
APA, Harvard, Vancouver, ISO, and other styles
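The variability of 1:1 propensity score matching without replacement, which the dissertation examines, is easy to see in a greedy nearest-neighbor matcher: earlier treated units consume controls that later units might have matched better, so results depend on processing order. The scores and caliper below are made up; a real analysis would estimate scores with logistic regression.

```python
def greedy_match(treated, controls, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching without replacement.

    Matches each treated propensity score to the nearest unused control
    within the caliper; returns (treated_index, control_index) pairs.
    Because controls are consumed, the result depends on visit order.
    """
    used = set()
    pairs = []
    for i, t in enumerate(treated):
        best_j, best_d = None, caliper
        for j, c in enumerate(controls):
            d = abs(t - c)
            if j not in used and d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs

# Treated unit 0 takes the only nearby control, leaving unit 1 unmatched.
print(greedy_match([0.30, 0.32], [0.31, 0.50]))  # -> [(0, 0)]
```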

Books on the topic "Statistical biases of RNG"

1

Ne'ma, S. How unobservable productivity biases the value of a statistical life. Cambridge, MA: Harvard Law School, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lo, Andrew W. Data-snooping biases in tests of financial asset pricing models. Cambridge, MA: National Bureau of Economic Research, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Selection bias and covariate imbalances in randomized clinical trials. Hoboken, NJ: John Wiley & Sons, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ioannidis, John P. A. Statistical Biases in Science Communication. Edited by Kathleen Hall Jamieson, Dan M. Kahan, and Dietram A. Scheufele. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780190497620.013.11.

Full text
Abstract:
Misuse and misinterpretation of statistics result in statistical biases that affect the quality, clarity, relevance, and implications of communicated scientific information. Statistical tools are often suboptimally used in scientific papers, even in the best journals. The vast majority of published results are statistically significant, and even nonsignificant results are often spun as being important. Inferences based on P-values generate additional misconceptions. It is also common to focus on metrics that are more prone to exaggerated interpretation. Most of these problems are possible to solve or at least improve on. The prevalence of statistical biases has been used in attacks designed to discredit science’s validity. However, the use of rigorous statistical methods and their careful interpretation can be one of the strongest distinguishing features of good science and a powerful tool to sustain science’s integrity.
APA, Harvard, Vancouver, ISO, and other styles
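One mechanism behind the chapter's observation that "the vast majority of published results are statistically significant" can be illustrated with a null simulation: under a true null hypothesis p-values are uniform on (0, 1), so testing many true-null hypotheses still yields about 5% "significant" results, and selective reporting turns that noise into apparent signal. The sketch below exploits the uniformity of null p-values directly rather than running actual tests.

```python
import random

random.seed(1)

# Under a true null, the p-value of a valid test is Uniform(0, 1),
# so we can sample p-values directly instead of simulating raw data.
p_values = [random.random() for _ in range(10_000)]
significant = [p for p in p_values if p < 0.05]

print(f"{len(significant)} of {len(p_values)} true-null tests reached p < .05")
```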
5

Kniesner, Thomas J., and National Bureau of Economic Research, eds. How unobservable productivity biases the value of a statistical life. Cambridge, MA: National Bureau of Economic Research, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jamieson, Kathleen Hall, Dan M. Kahan, and Dietram A. Scheufele, eds. The Oxford Handbook of the Science of Science Communication. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780190497620.001.0001.

Full text
Abstract:
The cross-disciplinary Oxford Handbook on the Science of Science Communication contains 47 essays by 57 leading scholars organized into six sections: The first section establishes the need for a science of science communication, provides an overview of the area, examines sources of science knowledge and the ways in which changing media structures affect it, reveals what the public thinks about science, and situates current scientific controversies in their historical contexts. The book’s second part examines challenges to science including difficulties in peer review, rising numbers of retractions, publication and statistical biases, and hype. Successes and failures in communicating about four controversies are the subject of Part III: “mad cow,” nanotechnology, biotechnology, and the HPV and HBV vaccines. The fourth section focuses on the ways in which elite intermediaries communicate science. These include the national academies, scholarly presses, government organizations, museums, foundations, and social networks. It examines as well scientific deliberation among citizens and science-based policymaking. In Part V, the handbook treats science media interactions, knowledge-based journalism, polarized media environments, popular images of science, and the portrayal of science in entertainment, narratives, and comedy. The final section identifies the ways in which human biases that can affect communicated science can be overcome. Biases include resistant misinformation, inadequate frames, biases in moral reasoning, confirmation and selective exposure biases, innumeracy, recency effects, fear of the unnatural, normalization, false causal attribution, and public difficulty in processing uncertainty. Each section of the book includes a thematic synthesis.
APA, Harvard, Vancouver, ISO, and other styles
7

Grossmann, Matt. How Social Science Got Better. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197518977.001.0001.

Full text
Abstract:
Social science research is facing mounting criticism, as canonical studies fail to replicate, questionable research practices abound, and researcher social and political biases come under fire. Far from being in crisis, however, social science is undergoing an unparalleled renaissance of ever-broader and deeper understanding and application—made possible by close attention to criticism of our biases and open public engagement. Wars between scientists and their humanist critics, methodological disputes over statistical practice and qualitative research, and disciplinary battles over grand theories of human nature have all quietly died down as new generations of scholars have integrated the insights of multiple sides. Rather than deny that researcher biases affect results, scholars now closely analyze how our racial, gender, geographic, methodological, political, and ideological differences impact our research questions; how the incentives of academia influence our research practices; and how universal human desires to avoid uncomfortable truths and easily solve problems affect our conclusions. To be sure, misaligned incentive structures remain, but a messy, collective deliberation across the research community is boosting self-knowledge and improving practice. Ours is an unprecedented age of theoretical diversity, open and connected data, and public scholarship. How Social Science Got Better documents and explains recent transformations, crediting both internal and public critics for strengthening social science. Applying insights from the philosophy, history, and sociology of science and providing new data on trends in social science research and scholarly views, it demonstrates that social science has never been more relevant, rigorous, or self-reflective.
8

Qin, Nan, and Ying Wang. Hedge Funds and Performance Persistence. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190607371.003.0026.

Abstract:
Despite the exponential growth of global hedge fund assets since the 1990s, the high attrition rates in the industry have raised an important issue about hedge fund return persistence. This chapter discusses the various statistical methodologies in measuring performance persistence and provides a comprehensive review of the empirical literature on short- and long-term performance persistence. In particular, the literature suggests that fund strategies and characteristics are related to performance persistence. The chapter also discusses three important issues: return smoothing, the use of option-like strategies, and data biases. The chapter provides additional empirical evidence on performance persistence, using a portfolio approach and a hedge fund sample from the Trading Advisor Selection System (TASS) database between 1994 and 2015.
9

Grant, Warren, and Martin Scott-Brown. Screening for cancer. Edited by Patrick Davey and David Sprigings. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199568741.003.0356.

Abstract:
The aim in cancer screening is not just to prevent the incidence of disease or diagnose it in an early stage but, most importantly, to reduce mortality. Designing screening programmes leads to challenging questions. Effective cancer screening programmes require a centralized organization to coordinate implementation, robust statistical evidence of benefit, and extensive cost and resource planning for the inevitable increase in use of diagnostic and treatment services. This chapter explores screening for cancer and malignancy, including the aims of cancer screening; the history of cancer screening; cancers that are suitable for screening; breast cancer screening; cervical cancer screening; and bowel cancer screening. It also discusses strategies for ensuring a useful screening test, and potential biases.
10

Muentener, Paul, and Elizabeth Bonawitz. The Development of Causal Reasoning. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.40.

Abstract:
Research on the development of causal reasoning has broadly focused on accomplishing two goals: understanding the origins of causal reasoning, and examining how causal reasoning changes with development. This chapter reviews evidence and theory that aim to fulfill both of these objectives. In the first section, it focuses on the research that explores the possible precedents for recognizing causal events in the world, reviewing evidence for three distinct mechanisms in early causal reasoning: physical launching events, agents and their actions, and covariation information. The second portion of the chapter examines the question of how older children learn about specific causal relationships. It focuses on the role of patterns of statistical evidence in guiding learning about causal structure, suggesting that even very young children leverage strong inductive biases with patterns of data to inform their inferences about causal events, and discussing ways in which children’s spontaneous play supports causal learning.

Book chapters on the topic "Statistical biases of RNG"

1

Hurley-Smith, Darren, and Julio Hernandez-Castro. "Challenges in Certifying Small-Scale (IoT) Hardware Random Number Generators." In Security of Ubiquitous Computing Systems, 165–81. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-10591-4_10.

Abstract:
This chapter focuses on the testing and certification of Random Number Generators (RNG). Statistical testing is required to identify whether sequences produced by RNG demonstrate non-random characteristics. These can include structures within their output, repetition of sequences, and any other form of predictability. Certification of computer security systems draws on such evaluations to determine whether a given RNG implementation contributes to a secure, robust security system. Recently, small-scale hardware RNGs have been targeted at IoT devices, especially those requiring security. This, however, introduces new technical challenges: low computational resources for post-processing and for evaluation of on-board RNGs are just two examples. Can we rely on the current suite of statistical tests? What other challenges are encountered when evaluating RNG?
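The statistical testing the chapter discusses typically follows the NIST SP 800-22 suite. As a hedged illustration (not the chapter's own code), the simplest such test, the frequency (monobit) test, can be sketched as:

```python
import math

def monobit_test(bits):
    """Frequency (monobit) test in the style of NIST SP 800-22.

    Maps bits to +/-1 and sums them; a random sequence should have a
    sum near zero. Returns a p-value; p < 0.01 suggests bias.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

p_biased = monobit_test([1] * 900 + [0] * 100)           # heavily biased: tiny p-value
p_balanced = monobit_test([i % 2 for i in range(1000)])  # balanced: p = 1.0
```

Note that a perfectly alternating sequence passes this test despite being fully predictable, which is why certification suites apply a battery of complementary tests rather than any single one.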
2

Thornton, Chris. "Statistical Biases in Backpropagation Learning." In ICANN ’94, 709–12. London: Springer London, 1994. http://dx.doi.org/10.1007/978-1-4471-2097-1_167.

3

Horman, Yoav, and Gal A. Kaminka. "Removing Statistical Biases in Unsupervised Sequence Learning." In Lecture Notes in Computer Science, 157–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11552253_15.

4

Henderson, C. R. "Accounting for Selection and Mating Biases in Genetic Evaluations." In Advances in Statistical Methods for Genetic Improvement of Livestock, 413–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-74487-7_18.

5

Norman, Geoff, Paul Stratford, and Glenn Regehr. "Biases in the Retrospective Calculation of Reliability and Responsiveness from Longitudinal Studies." In Statistical Methods for Quality of Life Studies, 21–31. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4757-3625-0_3.

6

Tomaselli, Venera, and Giulio Giacomo Cantone. "Multipoint vs slider: a protocol for experiments." In Proceedings e report, 91–96. Florence: Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-304-8.19.

Abstract:
Since the broad diffusion of computer-assisted survey tools (i.e. web surveys), a lively debate about innovative scales of measure has arisen among social scientists and practitioners. The implications are relevant for applied statistics and evaluation research, since traditional scales collect ordinal observations, whereas data from sliders can be interpreted as continuous. The literature, however, reports excessive task-completion times for sliders in web surveys. This experimental protocol is aimed at testing hypotheses on the accuracy in prediction and dispersion of estimates from anonymous participants who are recruited online and randomly assigned to tasks involving the recognition of shades of colour. The treatment variable is the scale: a traditional 0-10 multipoint scale vs. a 0-100 slider. Shades have a unique parametrisation (true value) and participants have to guess the true value through the scale. These tasks are designed to recreate situations of uncertainty among participants while minimizing the subjective component of a perceptual assessment and maximizing information about scale-driven differences and biases. We propose to test statistical differences in the treatment variable on (i) mean absolute error from the true value and (ii) time of completion of the task. To correct biases due to the variance in the number of completed tasks among participants, data about participants can be collected through both pre-task acceptance of web cookies and post-task explicit questions.
7

Nisbett, Richard E., David H. Krantz, Christopher Jepson, and Ziva Kunda. "The Use of Statistical Heuristics in Everyday Inductive Reasoning." In Heuristics and Biases, 510–33. Cambridge University Press, 2002. http://dx.doi.org/10.1017/cbo9780511808098.030.

8

Jennions, Michael D., Christopher J. Lortie, Michael S. Rosenberg, and Hannah R. Rothstein. "Publication and Related Biases." In Handbook of Meta-analysis in Ecology and Evolution. Princeton University Press, 2013. http://dx.doi.org/10.23943/princeton/9780691137285.003.0014.

Abstract:
This chapter discusses the increased occurrence of publication bias in the scientific literature. Publication bias is associated with the inaccurate representation of the merit of a hypothesis or idea. A strict definition is that it occurs when the published literature reports results that systematically differ from those of all studies and statistical tests conducted; the result is that false conclusions are drawn. The chapter presents five main approaches used to either detect potential narrow-sense publication bias or assess how sensitive the results of a meta-analysis are to the possible exclusion of studies. These include funnel plots, tests for relationships between effect size and sample size using nonparametric correlation or regression, the trim and fill method, fail-safe numbers, and model selection.
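One of the approaches listed, testing for a relationship between effect size and sample size, can be sketched with a nonparametric correlation. The data below are invented for illustration and are not drawn from the chapter:

```python
def kendall_tau(x, y):
    """Kendall rank correlation (no tie correction; illustration only)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Invented meta-analysis data: smaller studies report larger effects,
# a pattern consistent with (but not proof of) publication bias.
sample_sizes = [10, 20, 40, 80, 160, 320]
effect_sizes = [0.90, 0.70, 0.50, 0.40, 0.30, 0.25]
tau = kendall_tau(sample_sizes, effect_sizes)  # -1.0: perfectly inverse ranking
```

A strongly negative correlation between sample size and effect size is one warning sign the chapter's funnel-plot and correlation tests formalize; it can also arise from genuine small-study effects, which is why several of the listed methods are used together.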
9

Hoppitt, William, and Kevin N. Laland. "Social Learning Strategies." In Social Learning. Princeton University Press, 2013. http://dx.doi.org/10.23943/princeton/9780691150703.003.0008.

Abstract:
This chapter focuses on social learning strategies—functional rules specifying what, when, and who to copy. There are many plausible social learning strategies. Individuals might disproportionately copy when asocial learning would be difficult or costly, when they are uncertain of what to do, when the environment changes, when established behavior proves unproductive, and so forth. Likewise, animals might preferentially copy the dominant individual, the most successful individual, or a close relative. This chapter presents evidence for some of the better-studied learning heuristics and describes statistical procedures for identifying which social learning strategies are being deployed in a data set. It examines “who” strategies, which cover frequency-dependent biases, success biases, and kin and age biases, as well as “what” strategies, random copying, and statistical methods for detecting social learning strategies. Finally, it evaluates meta-strategies, best strategies, and hierarchical control.
10

Bassma, Guermah, Sadiki Tayeb, and El Ghazi Hassan. "GNSS Positioning Enhancement Based on NLOS Multipath Biases Estimation Using Gaussian Mixture Noise." In Research Anthology on Reliability and Safety in Aviation Systems, Spacecraft, and Air Transport, 632–52. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5357-2.ch025.

Abstract:
Global navigation satellite systems (GNSS) have been widely used in many applications where positioning plays an important role. However, the performance of these applications can be degraded in urban canyons, due to Non-Line-Of-Sight (NLOS) and multipath interference affecting GNSS signals. In order to ensure high-accuracy positioning, this article proposes to model the NLOS and multipath biases as Gaussian mixture noise using the Expectation-Maximization (EM) algorithm. In this context, an approach to estimate the multipath and NLOS biases for real-time positioning is presented, and statistical tests for identifying the probability distribution of NLOS and multipath biases are illustrated. Furthermore, a hybrid approach based on a PF (Particle Filter) and the EM algorithm for estimating user position in hard environments is presented. Using real GPS (Global Positioning System) signals, the efficiency of the proposed approach is shown, and a significant improvement in positioning accuracy over the simple PF estimate is obtained.
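A minimal sketch of the EM idea for a one-dimensional two-component Gaussian mixture is shown below. The article's full approach also involves a particle filter, which is not shown, and all numeric values here are hypothetical:

```python
import math, random

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    mu = [min(data), max(data)]          # crude initialization from the data spread
    var = [1.0, 1.0]
    w = [0.5, 0.5]                       # mixture weights
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each observation.
        resp = []
        for x in data:
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            tot = sum(dens)
            resp.append([d / tot for d in dens])
        # M-step: re-estimate weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-9)
    return w, mu, var

random.seed(1)
# Hypothetical range errors: LOS noise near 0 m, NLOS/multipath bias near 8 m.
errors = [random.gauss(0, 1) for _ in range(300)] + \
         [random.gauss(8, 1) for _ in range(100)]
w, mu, var = em_gmm_1d(errors)
```

The recovered component with the larger mean plays the role of the NLOS/multipath bias term; its estimated mean and weight are what a downstream filter could use to correct affected pseudoranges.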

Conference papers on the topic "Statistical biases of RNG"

1

Vernotte, Francois, and Eric Lantz. "Statistical biases and very long term time stability analysis." In 2011 Joint Conference of the IEEE International Frequency Control and the European Frequency and Time Forum (FCS). IEEE, 2011. http://dx.doi.org/10.1109/fcs.2011.5977796.

2

Jouvin, Léa, Anne Lemiere, Regis Terrier, Stefan Ohm, Igor Oya, and Christopher van Eldik. "Statistical biases of spectral analysis with the ON-OFF likelihood statistic." In The 34th International Cosmic Ray Conference. Trieste, Italy: Sissa Medialab, 2016. http://dx.doi.org/10.22323/1.236.0871.

3

Tang, Hao, Ming Zhan, Liangxi Liu, Mingjuan Qiu, Fulong Wang, Qian Zhang, and Yunkai Feng. "Segmented CRC-Aided Order Statistical Decoding with Multiple Biases for Short Polar Codes." In 2021 11th International Conference on Information Science and Technology (ICIST). IEEE, 2021. http://dx.doi.org/10.1109/icist52614.2021.9440582.

4

Ribeiro, Wellinton Costa, and Marcus Tadeu Pinheiro Silva. "Evaluating the Randomness of the RNG in a Commercial Smart Card." In Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais. Sociedade Brasileira de Computação - SBC, 2017. http://dx.doi.org/10.5753/sbseg.2017.19531.

Abstract:
This paper presents results concerning the quality evaluation of the pseudo-random number generator (PRNG) in a commercial smart card. The RNG is a fundamental part of the cryptography carried out in several applications. We acquired a large quantity of random numbers from three samples of a commercial smart card. These data were evaluated using the statistical computation package developed by the National Institute of Standards and Technology. In order to provide a gold benchmark and to validate our methodology, we also tested the true random number generator (TRNG) included in a commercial integrated circuit. Our results show that the card PRNG is of far lower quality than the TRNG. Because of the card vendor's confidentiality policy, it is not possible to state whether the tested PRNG is the basis for the device's cryptography. However, if it is, our results lead us to conclude that the tested PRNG is not adequate to provide the required security in systems that adopt the evaluated smart card.
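The NIST statistical package used in this evaluation includes, among others, the runs test, which catches sequences that alternate too regularly or too rarely even when their bit balance is perfect. A hedged sketch (not the paper's code):

```python
import math

def runs_test(bits):
    """Runs test in the style of NIST SP 800-22: compares the number of
    maximal runs of identical bits with the count expected under randomness."""
    n = len(bits)
    pi = sum(bits) / n
    if abs(pi - 0.5) >= 2 / math.sqrt(n):
        return 0.0  # fails the monobit prerequisite outright
    v = 1 + sum(1 for i in range(n - 1) if bits[i] != bits[i + 1])
    num = abs(v - 2 * n * pi * (1 - pi))
    den = 2 * math.sqrt(2 * n) * pi * (1 - pi)
    return math.erfc(num / den)

p_ok = runs_test([0, 0, 1, 1] * 250)   # run count matches expectation: p = 1.0
p_alt = runs_test([0, 1] * 500)        # alternates too regularly: p near 0
```

The second case passes a plain frequency test yet fails here, which illustrates why the evaluation in the paper runs the full battery of tests rather than a single statistic.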
5

Tipa, Andrea, Alessandro Sorce, Matteo Pascenti, and Alberto Traverso. "A New Sensor Diagnostic Technique Applied to a Micro Gas Turbine Rig." In ASME Turbo Expo 2012: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/gt2012-68580.

Abstract:
This paper describes the development and testing of a new algorithm to identify faulty sensors, based on a statistical model using quantitative statistical process history. Two different mathematical models were used and the results were analyzed to highlight the impact of model approximation and random error. Furthermore, a case study was developed based on a real micro gas turbine facility, located at the University of Genoa. The diagnostic sensor algorithm aims at early detection of measurement errors such as drift, bias, and accuracy degradation (increase of noise). The process description is assured by a database containing the measurements selected under steady state condition and without faults during the operating life of the plant. Using an invertible statistical model and a combinatorial approach, the algorithm is able to identify sensor fault. This algorithm could be applied to plants in which historical data are available and quasi steady state conditions are common (e.g. Nuclear, Coal Fired, Combined Cycle).
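The paper's algorithm relies on an invertible statistical model and a combinatorial search over sensors. As a much simpler illustrative baseline (not the authors' method), a historical-statistics check for a biased or drifting sensor might look like this, with all readings hypothetical:

```python
import statistics

def detect_sensor_fault(history, reading, z_threshold=4.0):
    """Flag a reading that deviates from the historical steady-state mean
    by more than z_threshold standard deviations; a persistent flag over
    consecutive readings points to drift or bias rather than noise."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    z = abs(reading - mu) / sigma
    return z > z_threshold, z

# Hypothetical steady-state exhaust-temperature history (degrees C).
history = [649.8, 650.1, 650.3, 649.9, 650.0,
           650.2, 649.7, 650.1, 649.9, 650.0]
flagged, z = detect_sensor_fault(history, 653.0)   # biased reading: flagged
ok, z_ok = detect_sensor_fault(history, 650.2)     # normal reading: not flagged
```

A univariate check like this cannot separate a sensor fault from a genuine process change; the model-based, combinatorial approach described in the abstract exists precisely to make that distinction.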
6

Sanei, Hamid, and Hollylynne Lee. "Attending to Students’ Reasoning About Probability Concepts for Building Statistical Literacy." In IASE 2021 Satellite Conference: Statistics Education in the Era of Data Science. International Association for Statistical Education, 2022. http://dx.doi.org/10.52041/iase.byqzd.

Abstract:
This paper investigates two specific probabilistic biases which middle graders usually exhibit when reasoning about probability and randomness on assessment items. We discuss how students' reasoning about key probability concepts undergirds statistical literacy related to randomness, independence, and the likelihood of future events based on past results. We examine factors evoking misconceptions and students' (in)consistency in exhibiting them. Findings indicate that misconceptions can be evoked by three types of factors: (1) students' particular understandings of probability and randomness, (2) general item characteristics, and (3) aspects of probability in items. Moreover, possession of a specific misconception will most likely result in exhibiting the bias again on other occasions involving the same evoking factors (consistency).
7

Al-Baloul, Bader, Sharad Kumar Mittal, David Spencer, and Naseema Al-Ramadan. "Eradicating Biases and Establishing Consistency in Geological Chance of Success." In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22594-ms.

Abstract:
Geoscientists are bound to have a degree of bias based on their own knowledge, experience, perception, aversion to risk, education, or pre-conceived beliefs. Such subjectivity may lead to prejudice in decision-making unless it is properly recognized and corrected. It may result in a distorted view of the likelihood underlying a 'drill-or-drop' decision if pre-drill probability predictions are not rationalized. It is therefore extremely important to improve probability assessments through different approaches, such as setting up detailed and consistent protocols, company-wide standardization, or applying specific elicitation methods. A statistical analysis was undertaken using pre- and post-drill Geological Chance of Success (gCOS) and P-mean volumes of the prospects that were drilled vis-a-vis prospects yet to be drilled, in order to identify the range of pessimistic and/or optimistic evaluations by the risk reviewers. The purpose is to derive a more stringent and authentic method by which such high deviations in risk estimation, and consistency with the methodology for prospective resource estimation, could be managed and any potential biases removed. A historical database from the company's assets, spanning over a decade (2010-2020), was used for the statistical analysis. The results suggest that risk-reviewer bias, a lack of close analogues, and a paucity of direct evidence of prospectivity resulted in non-realistic over- and underestimation of gCOS and prospective resources. Being able to understand and quantify the risks and uncertainties, and knowing how to manage them effectively, contributes to well-founded business decisions, protects the value of projects and assets, and maximizes the value of company project portfolios.
A systematic risk and peer review process was then developed by KUFPEC to constrain these subjective deviations from objective estimates and to minimize the risk of over- or underestimation of risk and hydrocarbon volume for a given prospect.
8

Hopson, Michael V., David E. Lambert, and Joseph Weiderhold. "Computational Comparisons of Homogeneous and Statistical Descriptions of Steel Subjected to Explosive Loading." In ASME 2010 Pressure Vessels and Piping Division/K-PVP Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/pvp2010-25330.

Abstract:
Experiments were conducted by the Munitions Directorate at the Air Force Research Laboratory to investigate the fracture and fragmentation of two different metals due to explosive loading. The first metal, Eglin Steel 1 (ES-1), was a high-strength steel alloy configured as a thin shell surrounding the explosive core. The second metal, Aero 224, was a tungsten alloy configured as a stack of rings around the explosive core. The two different configurations generated two different stress states, plane-strain and uniaxial stress. The radial expansion velocity of the ES-1 shell was recorded via a photonic Doppler velocimeter (PDV). Also, the fragments from the ES-1 shell test and Aero 224 ring test were soft captured in a water tank. Complementary computational analysis was conducted at the Naval Surface Warfare Center Dahlgren Division. The Eulerian wave propagation code, CTH, was used to analyze the stress states of the different configurations and also to investigate the use of statistical compensation on explosive fragmentation. The stress states were examined in the context of stress triaxiality, where triaxiality is defined as the ratio of pressure to the Von Mises stress. From the computational analysis, both the ES-1 shell test and the Aero 224 ring test approached, but did not reach, the ideal triaxiality values for plane-strain and uniaxial stress. Lastly, parametric calculations were conducted in order to determine the effectiveness of using a statistically compensated Johnson-Cook fracture model to simulate the non-homogeneous nature of the ES-1 and Aero 224. While using the model did result in different fragment distributions, all the resulting distributions were less accurate than the baseline homogeneous calculation. Scrutiny of the early-time fragment formation in the statistically compensated calculations revealed a mesh bias which caused material failure on surfaces parallel to the Cartesian axes.
This preferential fracture produced rarefaction waves which prohibited further fragmentation thus generating fragment distributions larger than those observed in the Aero 224 ring test. Potential solutions for this issue will be explored in the future.
9

Wu, Peng, Haoxuan Li, Yuhao Deng, Wenjie Hu, Quanyu Dai, Zhenhua Dong, Jie Sun, Rui Zhang, and Xiao-Hua Zhou. "On the Opportunity of Causal Learning in Recommendation Systems: Foundation, Estimation, Prediction and Challenges." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/787.

Abstract:
Recently, recommender systems (RS) based on causal inference have gained much attention in the industrial community, achieving state-of-the-art performance in many prediction and debiasing tasks. Nevertheless, a unified causal analysis framework has not been established yet. Many causal-based prediction and debiasing studies rarely discuss the causal interpretation of various biases and the rationality of the corresponding causal assumptions. In this paper, we first provide a formal causal analysis framework to survey and unify the existing causal-inspired recommendation methods, which can accommodate different scenarios in RS. Then we propose a new taxonomy and give formal causal definitions of various biases in RS from the perspective of violating the assumptions adopted in causal analysis. Finally, we formalize many debiasing and prediction tasks in RS, and summarize the statistical and machine-learning-based causal estimation methods, expecting to provide new research opportunities and perspectives to the causal RS community.
10

do Nascimento, Leonardo Sant’Anna, Luis Volnei Sudati Sagrilo, and Gilberto Bruno Ellwanger. "Conventional and Linear Statistical Moments Applied in Extreme Value Analysis of Non-Gaussian Response of Jack-Ups." In ASME 2012 31st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/omae2012-83583.

Abstract:
This work numerically investigates two different methods of moments applied to the Hermite-derived probability distribution model and to variations of the Weibull distribution fitted to short-term time-series peak samples of stochastic response parameters of a simplified jack-up platform model, which represents a source of highly non-Gaussian responses. The main focus of the work is to compare the short-term extreme response statistics obtained by the so-called linear method of moments (L-moments) and by the conventional method of moments, using either the Hermite or the Weibull model as the peak distribution model. A simplified mass-spring system representing a three-legged jack-up platform is initially employed in order to observe directly the impact of the linear method of moments (L-moments) on extreme analysis results. Afterwards, the stochastic response of the three-legged jack-up platform is analyzed by means of a 3-D finite element model. Bias and statistical uncertainty in the estimated extreme statistics parameters are computed, taking as the "theoretical" estimates those evaluated by fitting a Gumbel distribution to a sample of episodical extreme values obtained from distinct short-term realizations (or simulations). Results show that the variability of the extreme results, as a function of the simulation length, determined by the linear method of moments (L-moments) is smaller than that of the corresponding results derived from the conventional method of moments, while the biases are more or less the same.
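As an illustration of the L-moments idea compared in this paper, the sketch below fits a Gumbel distribution from the first two sample L-moments; the parameter values and synthetic peak sample are invented, not taken from the paper:

```python
import math, random

def sample_l_moments(data):
    """First two sample L-moments via probability-weighted moments."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    return b0, 2 * b1 - b0  # l1, l2

def fit_gumbel_l_moments(data):
    """Gumbel parameters from L-moments: l1 = loc + gamma*scale, l2 = scale*ln 2."""
    l1, l2 = sample_l_moments(data)
    scale = l2 / math.log(2)
    loc = l1 - 0.5772156649 * scale  # Euler-Mascheroni constant
    return loc, scale

random.seed(42)
# Synthetic Gumbel(loc=10, scale=2) peak sample (hypothetical response peaks).
peaks = [10 - 2 * math.log(-math.log(random.random())) for _ in range(20000)]
loc_hat, scale_hat = fit_gumbel_l_moments(peaks)
```

Because L-moments are linear combinations of order statistics, they weight extreme observations less heavily than conventional higher moments, which is consistent with the lower estimator variability the paper reports.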

Reports on the topic "Statistical biases of RNG"

1

Kniesner, Thomas, W. Kip Viscusi, Christopher Woock, and James Ziliak. How Unobservable Productivity Biases the Value of a Statistical Life. Cambridge, MA: National Bureau of Economic Research, October 2005. http://dx.doi.org/10.3386/w11659.

2

Hovav, Ran, Peggy Ozias-Akins, and Scott A. Jackson. The genetics of pod-filling in peanut under water-limiting conditions. United States Department of Agriculture, January 2012. http://dx.doi.org/10.32747/2012.7597923.bard.

Abstract:
Pod-filling, an important yield-determining stage, is strongly influenced by water stress. This is particularly true for peanut (Arachis hypogaea), wherein pods are developed underground and are directly affected by the water condition. Pod-filling in peanut has a significant genetic component as well, since genotypes vary considerably in their pod-fill (PF) and seed-fill (SF) potential. The goals of this research were to: Examine the effects of genotype, irrigation, and genotype X irrigation on PF and SF. Detect global changes in mRNA and metabolite levels that accompany PF and SF. Explore the response of the duplicate peanut pod transcriptome to drought stress. Study how entire duplicated PF regulatory processes are networked within a polyploid organism. Discover locus-specific SNP markers and map pod quality traits under different environments. The research included genotypes and segregating populations from Israel and the US that vary in PF, SF, and their tolerance to water deficit. Initially, an extensive field trial was conducted to investigate the effects of genotype, irrigation, and genotype X irrigation on PF and SF. Significant irrigation and genotypic effects were observed for the two main PF-related traits, "seed ratio" and "dead-end ratio", demonstrating that reduced irrigation directly influences the developing pods as a result of low water potential. Although the Irrigation × Genotype interaction was not statistically significant, one genotype (line 53) was found to be more sensitive to low irrigation treatments. Two RNA-seq studies were simultaneously conducted in IL and the USA to characterize expression changes that accompany shell ("source") and seed ("sink") biogenesis in peanut. Both studies showed that SF and PF processes are very dynamic and undergo very rapid changes in the accumulation of RNA, nutrients, and oil.
Some genotypes differ in transcript accumulation rates, which can explain their difference in SF and PF potential; for example, cv. Hanoch was found to be more enriched than line 53 in processes involving the generation of metabolites and energy at the beginning of seed development. Interestingly, an opposite situation was found in pericarp development, wherein rapid cell wall maturation processes were up-regulated in line 53. Although no significant effect was found for the irrigation level on the seed transcriptome in general, and particularly on subgenomic assignment (which was found almost comparable to 1:1 for the A- and B-subgenomes), more specific homoeologous expression changes associated with particular biosynthesis pathways were found. For example, some significant A- and B-biases were observed in particular parts of the oil-related gene expression network, and several candidate genes with potential influence on oil content and SF were further examined. A substantial achievement of the current program was the development and application of new SNP detection and mapping methods for peanut. Two major efforts in this direction were undertaken. In IL, a GBS approach was developed to map pod quality traits on Hanoch X 53 F2/F3 generations. Although the GBS approach was found to be less effective for our genetic system, it still succeeded in finding significant mapping locations for several traits like testa color (linkage A10), number of seeds/pods (A5), and pod wart resistance (B7). In the USA, a SNP array was developed and applied for peanut, based on whole-genome re-sequencing of 20 genotypes. This chip was used to map pod-quality-related traits in a Tifrunner x NC3033 RIL population. It was phenotyped for three years, including a new x-ray method to phenotype seed-fill and seed density. The total map size was 1229.7 cM with 1320 markers assigned.
Based on this linkage map, 21 QTLs were identified for the traits 16/64 weight, kernel percentage, seed and pod weight, double pod, and pod area. Collectively, this research serves as the first fundamental effort in peanut to understand the PF and SF components as a whole, and as influenced by the irrigation level. Results of the study will also generate information and materials that will benefit peanut breeding by facilitating selection for reduced linkage drag during introgression of disease resistance traits into elite cultivars. BARD Report - Project 4540