Academic literature on the topic 'False Discovery Proportion control'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'False Discovery Proportion control.'


Journal articles on the topic "False Discovery Proportion control"

1

Genovese, Christopher R., and Larry Wasserman. "Exceedance Control of the False Discovery Proportion." Journal of the American Statistical Association 101, no. 476 (December 1, 2006): 1408–17. http://dx.doi.org/10.1198/016214506000000339.

2

Dudziński, Marcin, and Konrad Furmańczyk. "A note on control of the false discovery proportion." Applicationes Mathematicae 36, no. 4 (2009): 397–418. http://dx.doi.org/10.4064/am36-4-2.

3

Shang, Shulian, Qianhe Zhou, Mengling Liu, and Yongzhao Shao. "Sample Size Calculation for Controlling False Discovery Proportion." Journal of Probability and Statistics 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/817948.

Abstract:
The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false-positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.
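For readers skimming this list, it may help to fix notation: the quantities discussed in this and several later abstracts can be written as follows (standard definitions, not quoted from the paper), with V the number of false rejections and R the total number of rejections.

```latex
% False discovery proportion (a random variable) and false discovery rate (its mean);
% max(R, 1) avoids division by zero when nothing is rejected.
\[
\mathrm{FDP} \;=\; \frac{V}{\max(R,\,1)}, \qquad
\mathrm{FDR} \;=\; \mathbb{E}\,[\mathrm{FDP}].
\]
```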
4

Goeman, Jelle J., Rosa J. Meijer, Thijmen J. P. Krebs, and Aldo Solari. "Simultaneous control of all false discovery proportions in large-scale multiple hypothesis testing." Biometrika 106, no. 4 (September 23, 2019): 841–56. http://dx.doi.org/10.1093/biomet/asz041.

Abstract:
Closed testing procedures are classically used for familywise error rate control, but they can also be used to obtain simultaneous confidence bounds for the false discovery proportion in all subsets of the hypotheses, allowing for inference robust to post hoc selection of subsets. In this paper we investigate the special case of closed testing with Simes local tests. We construct a novel fast and exact shortcut and use it to investigate the power of this approach when the number of hypotheses goes to infinity. We show that if a minimal level of signal is present, the average power to detect false hypotheses at any desired false discovery proportion does not vanish. Additionally, we show that the confidence bounds for false discovery proportion are consistent estimators for the true false discovery proportion for every nonvanishing subset. We also show close connections between Simes-based closed testing and the procedure of Benjamini and Hochberg.
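The Simes local test underlying this approach is simple to state. The minimal Python sketch below implements only the Simes combination p-value, not the fast closed-testing shortcut that is the paper's actual contribution; the function name is ours.

```python
import numpy as np

def simes_pvalue(pvalues):
    """Simes combination p-value for the intersection (global) null hypothesis:
    with sorted p-values p_(1) <= ... <= p_(m), it equals min over i of m * p_(i) / i."""
    p = np.sort(np.asarray(pvalues, dtype=float))
    m = p.size
    return float(np.min(m * p / np.arange(1, m + 1)))

# Closed testing applies such a local test to every intersection of hypotheses;
# an intersection is rejected when its Simes p-value falls below alpha.
print(simes_pvalue([0.001, 0.02, 0.3, 0.8]))  # 0.004
```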
5

Ge, Yongchao, and Xiaochun Li. "Control of the False Discovery Proportion for Independently Tested Null Hypotheses." Journal of Probability and Statistics 2012 (2012): 1–19. http://dx.doi.org/10.1155/2012/320425.

Abstract:
Consider the multiple testing problem of testing m null hypotheses H1, …, Hm, among which m0 hypotheses are truly null. Given the P-values for each hypothesis, the question of interest is how to combine the P-values to find out which hypotheses are false nulls and possibly to make a statistical inference on m0. Benjamini and Hochberg proposed a classical procedure that can control the false discovery rate (FDR). The FDR control is a little bit unsatisfactory in that it only concerns the expectation of the false discovery proportion (FDP). The control of the actual random variable FDP has recently drawn much attention. For any level 1 − α, this paper proposes a procedure to construct an upper prediction bound (UPB) for the FDP for a fixed rejection region. When 1 − α = 50%, our procedure is very close to the classical Benjamini and Hochberg procedure. Simultaneous UPBs for all rejection regions' FDPs and the upper confidence bound for the unknown m0 are presented consequently. This new proposed procedure works for finite samples and hence avoids the slow convergence problem of the asymptotic theory.
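In the spirit of the abstract (a paraphrase, not the authors' exact construction), an upper prediction bound at level 1 − α is a data-dependent quantity that covers the random FDP of the fixed rejection region with the prescribed probability:

```latex
% Coverage requirement for an upper prediction bound (UPB) on the FDP
% of a fixed rejection region.
\[
\Pr\!\left( \mathrm{FDP} \;\le\; \widehat{\mathrm{UPB}}_{1-\alpha} \right) \;\ge\; 1 - \alpha .
\]
```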
6

Jeng, X. Jessie, and Xiongzhi Chen. "Predictor ranking and false discovery proportion control in high-dimensional regression." Journal of Multivariate Analysis 171 (May 2019): 163–75. http://dx.doi.org/10.1016/j.jmva.2018.12.006.

7

Zhang, Xiaohua Douglas. "An Effective Method for Controlling False Discovery and False Nondiscovery Rates in Genome-Scale RNAi Screens." Journal of Biomolecular Screening 15, no. 9 (September 20, 2010): 1116–22. http://dx.doi.org/10.1177/1087057110381783.

Abstract:
In most genome-scale RNA interference (RNAi) screens, the ultimate goal is to select siRNAs with a large inhibition or activation effect. The selection of hits typically requires statistical control of 2 errors: false positives and false negatives. Traditional methods of controlling false positives and false negatives do not take into account an important feature of RNAi screens: many small-interfering RNAs (siRNAs) may have very small but real nonzero average effects on the measured response, and thus these methods cannot effectively control false positives and false negatives. To address these deficiencies in the application of traditional approaches to RNAi screening, the author proposes a new method for controlling false positives and false negatives in RNAi high-throughput screens. The false negatives are statistically controlled through a false-negative rate (FNR) or false nondiscovery rate (FNDR). FNR is the proportion of false negatives among all siRNAs examined, whereas FNDR is the proportion of false negatives among declared nonhits. The author also proposes new concepts, q*-value and p*-value, to control FNR and FNDR, respectively. The proposed method should have broad utility for hit selection in which one needs to control both false discovery and false nondiscovery rates in genome-scale RNAi screens in a robust manner.
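Writing the abstract's verbal definitions in symbols (notation ours): with T the number of false negatives, m the total number of siRNAs examined, and A the number of declared nonhits,

```latex
% False-negative rate and false nondiscovery rate as described in the abstract;
% max(A, 1) avoids division by zero when nothing is declared a nonhit.
\[
\mathrm{FNR} \;=\; \frac{T}{m}, \qquad
\mathrm{FNDR} \;=\; \frac{T}{\max(A,\,1)} .
\]
```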
8

Zhang, Xiaohua Douglas, Raul Lacson, Ruojing Yang, Shane D. Marine, Alex McCampbell, Dawn M. Toolan, Tim R. Hare, et al. "The Use of SSMD-Based False Discovery and False Nondiscovery Rates in Genome-Scale RNAi Screens." Journal of Biomolecular Screening 15, no. 9 (September 17, 2010): 1123–31. http://dx.doi.org/10.1177/1087057110381919.

Abstract:
In genome-scale RNA interference (RNAi) screens, it is critical to control false positives and false negatives statistically. Traditional statistical methods for controlling false discovery and false nondiscovery rates are inappropriate for hit selection in RNAi screens because the major goal in RNAi screens is to control both the proportion of short interfering RNAs (siRNAs) with a small effect among selected hits and the proportion of siRNAs with a large effect among declared nonhits. An effective method based on strictly standardized mean difference (SSMD) has been proposed for statistically controlling false discovery rate (FDR) and false nondiscovery rate (FNDR) appropriate for RNAi screens. In this article, the authors explore the utility of the SSMD-based method for hit selection in RNAi screens. As demonstrated in 2 genome-scale RNAi screens, the SSMD-based method addresses the unmet need of controlling for the proportion of siRNAs with a small effect among selected hits, as well as controlling for the proportion of siRNAs with a large effect among declared nonhits. Furthermore, the SSMD-based method results in reasonably low FDR and FNDR for selecting inhibition or activation hits. This method works effectively and should have a broad utility for hit selection in RNAi screens with replicates.
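For context, the strictly standardized mean difference is usually defined as the mean difference scaled by the standard deviation of the difference; for two independent groups (e.g., an siRNA population and a negative-control population) this reads as follows. The definition is recalled from the SSMD literature and is not restated in the abstract itself.

```latex
% SSMD for two independent groups with means mu_1, mu_2 and variances sigma_1^2, sigma_2^2.
\[
\mathrm{SSMD} \;=\; \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^{2} + \sigma_2^{2}}} .
\]
```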
9

Hollister, Megan C., and Jeffrey D. Blume. "4497 Accessible False Discovery Rate Computation." Journal of Clinical and Translational Science 4, s1 (June 2020): 44. http://dx.doi.org/10.1017/cts.2020.164.

Abstract:
OBJECTIVES/GOALS: To improve the implementation of FDRs in translational research. Current statistical packages are hard to use and fail to adequately convey strong assumptions. We developed a software package that allows the user to decide on assumptions and choose the approach they desire. We encourage wider reporting of FDRs for observed findings. METHODS/STUDY POPULATION: We developed a user-friendly R function for computing FDRs from observed p-values. A variety of methods for FDR estimation and for FDR control are included so the user can select the approach most appropriate for their setting. Options include Efron's Empirical Bayes FDR, Benjamini-Hochberg FDR control for multiple testing, Lindsey's method for smoothing empirical distributions, estimation of the mixing proportion, and central matching. We illustrate the important difference between estimating the FDR for a particular finding and adjusting a hypothesis test to control the false discovery propensity. RESULTS/ANTICIPATED RESULTS: We performed a comparison of the capabilities of our new p.fdr function to the popular p.adjust function from the base stats-package. Specifically, we examined multiple examples of data coming from different unknown mixture distributions to highlight the null estimation methods p.fdr includes. The base package does not provide the optimal FDR usage nor sufficient estimation options. We also compared the step-up/step-down procedures used in adjusted p-value hypothesis tests and discuss when these are inappropriate. The p.adjust function is not able to report raw-adjusted values, and this will be shown in the graphical results. DISCUSSION/SIGNIFICANCE OF IMPACT: FDRs reveal the propensity for an observed result to be incorrect. FDRs should accompany observed results to help contextualize the relevance and potential impact of research findings. Our results show that previous methods are not sufficiently rich or precise in their calculations. Our new package allows the user to be in control of the null estimation and step-up implementation when reporting FDRs.
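The p.fdr function described here is written in R. As a language-neutral illustration of the classical Benjamini-Hochberg step-up adjustment that R's p.adjust performs with method "BH" (one of the procedures the abstract contrasts with FDR estimation), a small Python sketch might look like this; it is not the authors' code.

```python
import numpy as np

def bh_adjust(pvalues):
    """Benjamini-Hochberg step-up adjusted p-values.
    Rejecting every hypothesis with adjusted p-value <= alpha controls the FDR
    at level alpha under independence or positive regression dependence."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)                          # indices of ascending p-values
    scaled = p[order] * m / np.arange(1, m + 1)    # p_(i) * m / i
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    adjusted = np.clip(adjusted, 0.0, 1.0)
    out = np.empty(m)
    out[order] = adjusted                          # return to the original order
    return out

print(bh_adjust([0.001, 0.008, 0.039, 0.041, 0.27, 0.60]))
# [0.006  0.024  0.0615 0.0615 0.324  0.6  ]
```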
10

Ventura, Valérie, Christopher J. Paciorek, and James S. Risbey. "Controlling the Proportion of Falsely Rejected Hypotheses when Conducting Multiple Tests with Climatological Data." Journal of Climate 17, no. 22 (November 15, 2004): 4343–56. http://dx.doi.org/10.1175/3199.1.

Abstract:
The analysis of climatological data often involves statistical significance testing at many locations. While the field significance approach determines if a field as a whole is significant, a multiple testing procedure determines which particular tests are significant. Many such procedures are available, most of which control, for every test, the probability of detecting significance that does not really exist. The aim of this paper is to introduce the novel “false discovery rate” approach, which controls the false rejections in a more meaningful way. Specifically, it controls a priori the expected proportion of falsely rejected tests out of all rejected tests; additionally, the test results are more easily interpretable. The paper also investigates the best way to apply a false discovery rate (FDR) approach to spatially correlated data, which are common in climatology. The most straightforward method for controlling the FDR makes an assumption of independence between tests, while other FDR-controlling methods make less stringent assumptions. In a simulation study involving data with correlation structure similar to that of a real climatological dataset, the simple FDR method does control the proportion of falsely rejected hypotheses despite the violation of assumptions, while a more complicated method involves more computation with little gain in detecting alternative hypotheses. A very general method that makes no assumptions controls the proportion of falsely rejected hypotheses but at the cost of detecting few alternative hypotheses. Despite its unrealistic assumption, based on the simulation results, the authors suggest the use of the straightforward FDR-controlling method and provide a simple modification that increases the power to detect alternative hypotheses.
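The "most straightforward method" referred to here is, in all likelihood, the classical Benjamini and Hochberg step-up rule. In the usual notation, with ordered p-values p_(1) ≤ … ≤ p_(m) and target FDR level q, it rejects the k* smallest p-values, where

```latex
% Benjamini-Hochberg step-up rule (reject nothing if the set below is empty).
\[
k^{*} \;=\; \max\left\{ k \;:\; p_{(k)} \;\le\; \frac{k\,q}{m} \right\}.
\]
```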

Dissertations / Theses on the topic "False Discovery Proportion control"

1

Blain, Alexandre. "Reliable statistical inference : controlling the false discovery proportion in high-dimensional multivariate estimators." Electronic Thesis or Diss., université Paris-Saclay, 2024. https://theses.hal.science/tel-04935172.

Abstract:
Statistically controlled variable selection is a fundamental problem encountered in diverse fields where practitioners have to assess the importance of input variables with regard to an outcome of interest. In this context, statistical control aims at limiting the proportion of false discoveries, meaning the proportion of selected variables that are independent of the outcome of interest. In this thesis, we develop methods that aim at statistical control in high-dimensional settings while retaining statistical power. We present four key contributions in this avenue of work. First, we introduce Notip, a non-parametric method that allows users to obtain guarantees on the proportion of true discoveries in any brain region. This procedure improves detection sensitivity over existing methods while retaining false discovery control. Second, we extend the Knockoff framework by proposing KOPI, a method that provides False Discovery Proportion (FDP) control in probability rather than in expectation. KOPI is naturally compatible with aggregation of multiple Knockoff draws, addressing the randomness of traditional Knockoff inference. Third, we develop a diagnostic tool to identify violations of the exchangeability assumption in Knockoffs, accompanied by a novel non-parametric Knockoff generation method that restores false discovery control. Finally, we introduce CoJER to enhance conformal prediction by providing sharp control of the False Coverage Proportion (FCP) when multiple test points are considered, ensuring more reliable uncertainty estimates. CoJER can also be used to aggregate the confidence intervals provided by different predictive models, thus mitigating the impact of modeling choices. Together, these contributions advance the reliability of statistical inference in high-dimensional settings such as neuroimaging and genomic data.
2

Afriyie, Prince. "Applications of Procedures Controlling the Tail Probability of the False Discovery Proportion." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/367548.

Abstract:
Multiple testing has been an active area of statistical research in the past decade, mainly because of its wide scope of applicability in modern scientific investigations. One major application area of multiple testing is in identifying differentially expressed genes from massive biological data generated by high-throughput genomic technologies, where the expression profiles of genes are compared across two or more experimental conditions on a genomic scale. This dissertation briefly reviews modern multiple testing methodologies, including types of error control and multiple testing procedures, before focusing on one of its objectives of identifying differentially expressed genes from high-throughput genomic data. More specifically, we apply multiple testing procedures that control the γ-FDP, the probability of the false discovery proportion (FDP) exceeding γ, given some γ ∈ [0,1), to two types of high-throughput genomic data, namely, microarray and digital gene expression (DGE) data. In addition, we propose four new step-up procedures controlling the γ-FDP. The first of these procedures is developed by modifying the Benjamini and Hochberg (1995, J. Roy. Statist. Soc., Ser. B) critical constants; it controls the γ-FDP under both independent and positively dependent test statistics. The second one is a two-stage adaptive procedure developed from these modified Benjamini and Hochberg critical constants, and it controls the γ-FDP under independence. The third and fourth procedures are also two-stage adaptive procedures controlling the γ-FDP under independence, but developed using critical constants in Lehmann and Romano (2005, Ann. of Statist.) and Delattre and Roquain (2015, Ann. of Statist.), respectively. Results of simulation studies examining the performance of our procedures relative to their relevant competitors are presented. We also present a heuristic approach to investigating an unusual problem in the detection of differentially expressed genes from microarray data. This problem arises when the marginal p-value distribution is an unknown mixture distribution, rendering some multiple testing procedures incapable of identifying differentially expressed genes. We illustrate why control of the γ-FDP is preferred in those instances. Future research problems are also discussed.
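In the notation of the abstract, a procedure controls the γ-FDP at level α when the exceedance probability of the false discovery proportion is bounded:

```latex
% gamma-FDP (exceedance) control at level alpha, for a tolerated proportion gamma in [0, 1).
\[
\Pr\bigl( \mathrm{FDP} > \gamma \bigr) \;\le\; \alpha .
\]
```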
3

Benditkis, Julia. "Martingale Methods for Control of False Discovery Rate and Expected Number of False Rejections." Doctoral dissertation, supervised by Arnold Janssen and Helmut Finner. Düsseldorf: Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2015. http://d-nb.info/1077295170/34.

4

"Regaining control of false findings in feature selection, classification, and prediction on neuroimaging and genomics data." Tulane University, 2018.

Abstract:
The technological advances of past decades have led to the accumulation of large amounts of genomic and neuroimaging data, enabling novel strategies in precision medicine. These largely rely on machine learning algorithms and modern statistical methods for big biological datasets, which are data-driven rather than hypothesis-driven. These methods often lack guarantees on the validity of the research findings. Because it can be a matter of life and death when computational methods are deployed in clinical practice in medicine, establishing guarantees on the validity of the results is essential for the advancement of precision medicine. This thesis proposes several novel sparse regression and sparse canonical correlation analysis techniques, which by design include guarantees on the false discovery rate in variable selection. Variable selection on biomedical data is essential for many areas of healthcare, including precision medicine, population stratification, drug development, and predictive modeling of disease phenotypes. Predictive machine learning models can directly affect the patient when used to aid diagnosis, and therefore they need to be thoroughly evaluated before deployment. We present a novel approach to validly reuse the test data for performance evaluation of predictive models. The proposed methods are validated in applications to large genomic and neuroimaging datasets, where they confirm results from previous studies and also lead to new biological insights. In addition, this work puts a focus on making the proposed methods widely available to the scientific community through the release of free and open-source scientific software.

Book chapters on the topic "False Discovery Proportion control"

1

Romano, Joseph P., and Azeem M. Shaikh. "On stepdown control of the false discovery proportion." In Institute of Mathematical Statistics Lecture Notes - Monograph Series, 33–50. Beachwood, Ohio, USA: Institute of Mathematical Statistics, 2006. http://dx.doi.org/10.1214/074921706000000383.

2

Perone-Pacifico, Marco, and Isabella Verdinelli. "False Discovery Control for Scan Clustering." In Scan Statistics, 271–87. Boston, MA: Birkhäuser Boston, 2009. http://dx.doi.org/10.1007/978-0-8176-4749-0_13.

3

Zhao, Bangxin, and Wenqing He. "Simultaneous Control of False Discovery Rate and Sensitivity Using Least Angle Regressions in High-Dimensional Data Analysis." In Advances and Innovations in Statistics and Data Science, 55–68. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08329-7_3.

4

Genovese, C. R. "False Discovery Rate Control." In Brain Mapping, 501–7. Elsevier, 2015. http://dx.doi.org/10.1016/b978-0-12-397025-1.00323-7.

5

Liu, Fang, and Sanat K. Sarkar. "A New Adaptive Method to Control the False Discovery Rate." In Recent Advances in Biostatistics, 3–26. WORLD SCIENTIFIC, 2011. http://dx.doi.org/10.1142/9789814329804_0001.

6

Omairi, Luiza Barranco, Juliana Silva Barbosa, Andre Luiz Ciclini, and Vanessa Cicclini Guerra. "Hepatitis C treatment experience in a dialysis clinic: nephrologist's perspective." In COLLECTION OF INTERNATIONAL TOPICS IN HEALTH SCIENCE- V1. Seven Editora, 2023. http://dx.doi.org/10.56238/colleinternhealthscienv1-059.

Abstract:
Since its discovery in 1989, hepatitis C has been gaining relevance due to its potential to develop into chronic liver disease. It is known that its prevalence is higher in dialysis patients, increasing with the time the patient remains on treatment. Diagnosis in this population may be difficult because of the nonspecific clinical picture, which may be confused with symptoms of uremia, as well as variable levels of alanine aminotransferase (ALT), false-negative serologies for hepatitis C virus (HCV), and low viremia. Several studies seek to identify the causes of transmission within dialysis units, and most point to breaches in infection control protocols.
7

Cordes, Eugene H. "Take it off! Take it all off! Drugs for weight reduction." In Hallelujah Moments, 187–216. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190080457.003.0011.

Abstract:
Obesity is a major and growing threat to good health in most parts of the world. In the United States, Xenical, marketed over the counter as Alli, is the only drug approved by the Food and Drug Administration for long-term use for weight control. There are several others (Qsymia, Contrave, Belviq, Saxenda) that are approved for short-term use. A number of others, approved earlier, have been withdrawn from the market for patient safety reasons, including the popular combination known as phen-fen. The pharmaceutical industry has found the discovery of effective and safe weight control drugs to be a formidable challenge. Xenical (tetrahydrolipstatin) is an inhibitor of an enzyme in the gut that promotes the digestion of fats. As a result, an increased fraction of ingested fat is excreted in the feces rather than being absorbed in the body, a reduction in effective calorie intake. This is a novel weight control mechanism of action. Other agents act to suppress appetite or as stimulants to calorie burning. Dietary measures to control weight take several forms, but the effective measure is calorie intake, not diet composition. The field of weight control is rife with false and unsubstantiated claims of efficacy. Research to find better drugs for weight control continues.
8

Targowski, Andrew. "Information Laws." In Information Technology and Societal Development, 277–88. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-004-2.ch012.

Abstract:
The purpose of this chapter is to define information laws which control the development of the global and universal civilizations as well as individual autonomous civilizations. Mankind progresses in proportion to its wisdom, which has roots in practice, acquired skills, available data and information, concepts, and knowledge. To be wise, humankind needs to be both informed and knowledgeable; otherwise it will not survive its own failures. Progress in knowledge was painfully slow as long as the spatial memory was transmitted only by oral tradition. With the inventions of writing and books, the process of knowledge discovery and dissemination was accelerated. Today, computers and their networks speed up that process far beyond our imagination. In the 21st century, the Information Wave significantly controls the Agricultural and Industrial Waves through millions of computers. IT supports decision-making based on knowledge-oriented systems such as “data mining” that, for example, discover knowledge about customers and organization dynamics to achieve competitive advantage. Information and knowledge have become the strategic resource that engineering science was in the Industrial Wave. However, the discovery of human cognition potential must be guided by knowledge science, which is just emerging. One of the signs of any science is its set of data, universal rules, laws, and systems of rules and laws. Hence, this chapter offers the first attempt to develop main laws of information that should increase our awareness about the Information Wave, the new stage of civilization dynamics that is taking place at the beginning of the third millennium. The chapter also provides the framework for the analysis of human capital from an information perspective. These considerations reflect a still-emerging approach which I call macro-information ecology.
9

Ganeri, Jonardon. "The Enigma of Heteronymy." In Virtual Subjects, Fugitive Selves, 17–22. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198864684.003.0003.

Abstract:
The two poles around which Pessoa’s entire philosophy of self revolves are commitments to two extremely enigmatic propositions: [simulation] I am a subject other than the subject I am; and [depersonalization] I am merely a forum for the subject I am. What I am calling the ‘enigma of heteronymy’ is the challenge to provide an analysis of the functions of the first person, that is to say, the use or uses of the pronoun ‘I’ and so of the phenomenology of self-consciousness, according to which this pair of propositions is not trivially false but, on the contrary, interestingly and importantly true. I set aside a solution to the enigma which some interpreters of Pessoa have found tempting. The enticing solution is to deny that Pessoa is rational. To put it another way: has Pessoa made a fundamental discovery about the nature of subjectivity, or is he in the grip of a psychosis? It is clear though, first of all, that the depersonalization Pessoa is talking about does not satisfy the diagnostic criteria of the eponymous mental disorder. Contemporary philosophers of psychiatry agree that a characteristic of a genuine mental disorder is that it is something over which the sufferer has little or no control. Pessoa refers to Henri-Frédéric Amiel several times, and his experiences too are ‘philosophical experiences’, under the direction of his guided imagination.
10

Vinayakumar, R., K. P. Soman, and Prabaharan Poornachandran. "Evaluation of Recurrent Neural Network and its Variants for Intrusion Detection System (IDS)." In Deep Learning and Neural Networks, 295–316. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch018.

Abstract:
This article describes how sequential data modeling is a relevant task in Cybersecurity. Sequences are attributed temporal characteristics either explicitly or implicitly. Recurrent neural networks (RNNs) are a subset of artificial neural networks (ANNs) which have emerged as a powerful, principled approach to learning dynamic temporal behaviors over arbitrary-length, large-scale sequence data. Furthermore, stacked recurrent neural networks (S-RNNs) have the potential to learn complex temporal behaviors quickly, including sparse representations. To leverage this, the authors model network traffic as a time series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range, with a supervised learning method, using millions of known good and bad network connections. To find the best architecture, the authors complete a comprehensive review of various RNN architectures with their network parameters and network structures. As a test bed, they use the existing benchmark Defense Advanced Research Projects Agency / Knowledge Discovery and Data Mining (DARPA/KDD) Cup '99 intrusion detection (ID) contest data set to show the efficacy of these various RNN architectures. All the experiments with deep learning architectures are run for up to 1000 epochs with a learning rate in the range [0.01-0.5] on a GPU-enabled TensorFlow, and experiments with traditional machine learning algorithms are done using Scikit-learn. The families of RNN architectures achieved a low false positive rate in comparison to the traditional machine learning classifiers. The primary reason is that RNN architectures are able to store information for long-term dependencies over time-lags and to adjust with successive connection sequence information. In addition, the effectiveness of RNN architectures is shown for the UNSW-NB15 data set.

Conference papers on the topic "False Discovery Proportion control"

1

Koka, Taulant, Jasin Machkour, and Michael Muma. "False Discovery Rate Control for Gaussian Graphical Models via Neighborhood Screening." In 2024 32nd European Signal Processing Conference (EUSIPCO), 2482–86. IEEE, 2024. http://dx.doi.org/10.23919/eusipco63174.2024.10715414.

2

Xiang, Yu. "Distributed False Discovery Rate Control with Quantization." In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849383.

3

McHugh, J. Mike, Janusz Konrad, Venkatesh Saligrama, Pierre-Marc Jodoin, and David Castanon. "Motion detection with false discovery rate control." In 2008 15th IEEE International Conference on Image Processing - ICIP 2008. IEEE, 2008. http://dx.doi.org/10.1109/icip.2008.4711894.

4

Dalleiger, Sebastian, and Jilles Vreeken. "Discovering Significant Patterns under Sequential False Discovery Control." In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3534678.3539398.

5

Zhang, B., N. Chenouard, J. C. Olivo-Marin, and V. Meas-Yedid. "Statistical colocalization in biological imaging with false discovery control." In 2008 IEEE International Symposium on Biomedical Imaging: From Macro to Nano (ISBI '08). IEEE, 2008. http://dx.doi.org/10.1109/isbi.2008.4541249.

6

Vinzamuri, Bhanukiran, and Kush R. Varshney. "FALSE DISCOVERY RATE CONTROL WITH CONCAVE PENALTIES USING STABILITY SELECTION." In 2018 IEEE Data Science Workshop (DSW). IEEE, 2018. http://dx.doi.org/10.1109/dsw.2018.8439910.

7

Nguyen, Hien D., Andrew L. Janke, Nicolas Cherbuin, Geoffrey J. McLachlan, Perminder Sachdev, and Kaarin J. Anstey. "Spatial False Discovery Rate Control for Magnetic Resonance Imaging Studies." In 2013 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2013. http://dx.doi.org/10.1109/dicta.2013.6691531.

8

Halme, Topi, and Visa Koivunen. "Optimal Multi-Stream Quickest Detection with False Discovery Rate Control." In 2023 57th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2023. http://dx.doi.org/10.1109/ieeeconf59524.2023.10476984.

9

Flasseur, Olivier, Loic Denis, Eric Thiebaut, and Maud Langlois. "Finding Meaningful Detections: False Discovery Rate Control in Correlated Detection Maps." In 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco47968.2020.9287847.

10

Wang, Shengze, Shichao Feng, Chongle Pan, and Xuan Guo. "FineFDR: Fine-grained Taxonomy-specific False Discovery Rates Control in Metaproteomics." In 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2022. http://dx.doi.org/10.1109/bibm55620.2022.9995401.


Reports on the topic "False Discovery Proportion control"

1

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Cancer Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria was defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and have quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. 
These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective. Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk predication models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and, iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regards to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with general population, suggest those who participate in screening trials may already be motivated to quit. Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia: 1. Identifying the high-risk population: recruitment, eligibility, selection and referral 2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making 3. Components necessary for health services to deliver a screening program: a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants c. Monitoring and evaluation phase: e.g. 
monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program 4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening 5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions. Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regards to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9) Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules. Gaps in the evidence There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. 
The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.