Dissertations / Theses on the topic 'Statistical significance'

To see the other types of publications on this topic, follow the link: Statistical significance.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Statistical significance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Agrawal, Ankit. "Sequence-specific sequence comparison using pairwise statistical significance." [Ames, Iowa : Iowa State University], 2009.

Find full text
2

Guo, Junhai. "Statistical significance and biological relevance of microarray data clustering." Cincinnati, Ohio : University of Cincinnati, 2008. http://rave.ohiolink.edu/etdc/view.cgi?acc_num=ucin1204736862.

Full text
3

Lybrand, Blythe, Ginette Blackhart, Amanda Parish, and Hannah Lowe. "Investigating the Misrepresentation of Statistical Significance in Empirical Articles." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/honors/646.

Full text
Abstract:
In an attempt to preserve research integrity, the aim of this study is to examine how often statistical results are misrepresented in empirical studies through the use of terms such as “marginally significant,” “approached significance,” or “trend toward significance” when interpreting findings. The use of these terms gives ambiguous significance to results that are in fact nonsignificant, which threatens future research by contributing to issues such as the replication crisis. For this study, data were coded from 437 empirical articles published online in the Journal of Personality and Social Psychology (JPSP) over a four-year period between 2017 and 2020. According to our findings, although misrepresentation of statistical results is prevalent within JPSP articles, rates decreased significantly over the four-year period examined. Additionally, as the number of studies published in JPSP increased each year during this period, there may be a potential rise in representatively sound studies and a decrease in misrepresentation within this discipline.
4

Price, Carly S., and L. Lee Glenn. "Statistical Significance of Eating Disorders and Adverse Perinatal Outcomes." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etsu-works/7462.

Full text
Abstract:
Excerpt: The study by Linna et al. [1] posited that “eating disorders appear to be associated with several adverse perinatal outcomes, particularly in offspring.” The adverse outcomes included anemia, slow fetal growth, premature contractions, and perinatal death. However, this conclusion cannot be supported by the data, because the authors failed to correct the standard value of P = .05 to account for the large number of hypothesis tests. Uncorrected multiple testing inflates the Type I error rate and causes hypotheses to be accepted that are actually false.
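The correction this letter calls for is routine to apply. A minimal sketch with made-up p-values, using the Holm step-down adjustment from statsmodels (Bonferroni is obtained with method="bonferroni"):

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from a family of hypothesis tests
p_values = [0.001, 0.012, 0.034, 0.049, 0.21, 0.74]

# Holm's step-down procedure controls the familywise error rate
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for p, p_adj, r in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p:.3f}  adjusted p = {p_adj:.3f}  reject H0: {r}")
```

Note how p-values that clear .05 individually (e.g., .034, .049) no longer do so after adjustment; this is exactly the effect the letter argues is missing from the original analysis.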
5

Karki, Rabindra. "Regulation of glucose metabolism by Alox8." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/theses/1507.

Full text
Abstract:
Type II diabetes is one of the leading causes of morbidity in the U.S. and other parts of the world. Insulin resistance, which precedes Type II diabetes, is a complex state in which the body fails to respond to insulin. Its complexity lies in its multifactorial origin: various environmental and polygenic components come into play. Here we dissect one of these components, Alox8, in transgenic mice and ask whether it affects blood glucose homeostasis. Comparison of glucose tolerance and insulin sensitivity among sixteen mice, comprising six wild-type, five heterozygous, and five knockout mice with respect to the Alox8 gene, showed that wild-type mice had relatively greater glucose tolerance than knockout mice, and this corresponded with relatively greater insulin sensitivity of the wild-type mice with respect to the knockouts. However, these findings were not statistically significant at p = 0.05. In search of any relevant biological significance, periodic acid-Schiff staining of liver sections from these mice in three independent repeated experiments revealed that the knockout phenotype led to accumulation of glycogen deposits compared with the wild-type mice, an indication of insulin resistance. Taken together, our data suggest that these findings, when extrapolated to humans, which carry ALOX15B instead of the mouse orthologue Alox8, could support administering lower doses of insulin in the wild-type phenotype as compared to individuals carrying its polymorphic alleles.
6

Zumbo, Bruno Domenico. "Statistical methods to overcome nonindependence of coupled data in significance testing." Ottawa: Carleton University, 1992.

Find full text
7

Morrison, Chelsea L., and L. Lee Glenn. "Statistical Significance of Paracetamol Administration in Fetal and Maternal Body Temperatures." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/7470.

Full text
Abstract:
Excerpt: In their recent study [1], Lavesson et al. concluded, “In febrile parturients, neither maternal nor fetal temperatures dropped after paracetamol, but paracetamol halted an increasing trend and stabilized the fetal temperature. The effect of paracetamol on maternal temperature was inconclusive.” This conclusion, however, is not supported by the data in the study, in that the calculation of case and control temperature differences demonstrates clear fetal and maternal temperature decreases after paracetamol.
8

Dallas, C. "The significance of costume on classical Attic grave stelai : A statistical analysis." Thesis, University of Oxford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381837.

Full text
9

Sullivan, Autumn G., and L. Lee Glenn. "Statistical Significance and Benefit Comparisons for Infant Immobilization in Magnetic Resonance Imaging." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/7476.

Full text
10

Trotter, Jennifer R., and L. Lee Glenn. "Measurement Validity and Statistical Significance for Nutritional Factors in Peripheral Artery Disease." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/7477.

Full text
11

Quevedo, Candela Ana Valeria. "Statistical Methods for Non-Linear Profile Monitoring." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96265.

Full text
Abstract:
Interest in and research on the monitoring of a process over time whose characteristics are represented mathematically in functional forms such as profiles have grown considerably. Most current techniques require all of the data for each profile to determine the state of the process. Thus, quality engineers in industries such as agriculture, aquaculture, and chemicals cannot make corrections to the current profile that are essential for correcting their processes at an early stage. In addition, most current techniques focus on the statistical significance of the parameters or features of the model instead of on practical significance, which often relates to the actual quality characteristic. The goal of this research is to provide alternatives that address these two main concerns. First, we study the use of a Shewhart-type control chart to monitor within profiles, where the central line is the predictive mean profile and the control limits are formed from the prediction band. Second, we study a statistic based on a non-linear mixed model, recognizing that the model leads to correlations among the estimated parameters.
Doctor of Philosophy
Checking the stability over time of a process whose quality is best expressed by a relationship between a quality characteristic and other variables involved in the process has received increasing attention. The goal of this research is to provide alternative methods to determine the state of such a process. Both methods presented here are compared to current methodologies. The first method allows us to monitor a process while the data are still being collected. The second is based on the quality characteristic of the process and takes full advantage of the model structure. Both methods appear to be more robust than the current most well-known method.
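The first method can be illustrated with a minimal sketch; the data, the logistic profile form, and the 3-sigma pointwise band are illustrative assumptions, not taken from the dissertation. Historical in-control profiles define a predictive mean profile as the central line, a band around it serves as the control limits, and a new profile is checked observation by observation as it arrives:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, b, c):
    """Hypothetical non-linear mean profile (logistic growth)."""
    return a / (1.0 + np.exp(-b * (t - c)))

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)

# Historical (in-control) profiles used to fit the predictive mean
historical = np.array([logistic(t, 10, 1.2, 5) + rng.normal(0, 0.3, t.size)
                       for _ in range(20)])
params, _ = curve_fit(logistic, np.tile(t, 20), historical.ravel(),
                      p0=[8.0, 1.0, 4.0])       # rough initial guess
center = logistic(t, *params)                   # central line
sigma = historical.std(axis=0, ddof=1)          # pointwise spread
upper, lower = center + 3 * sigma, center - 3 * sigma  # band as control limits

# Monitor a new profile point-by-point as the data arrive
new_profile = logistic(t, 10, 1.2, 5) + rng.normal(0, 0.3, t.size)
new_profile[30:] += 1.5                         # simulated process shift
out = (new_profile > upper) | (new_profile < lower)
print("first out-of-control index:", int(np.argmax(out)) if out.any() else None)
```

The point of monitoring within the profile is visible here: the shift is flagged partway through data collection, without waiting for the complete profile.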
12

Kosowski, Robert. "Essays on mutual fund performance : statistical significance, persistence and business-cycle time-variation." Thesis, London School of Economics and Political Science (University of London), 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.483511.

Full text
13

Hanittinan, Patinya. "Statistical analysis of river discharge change in the Indochinese Peninsula using large ensemble future climate projections." Kyoto University, 2017. http://hdl.handle.net/2433/227600.

Full text
14

Hess, Melinda Rae. "Effect Sizes, Significance Tests, and Confidence Intervals: Assessing the Influence and Impact of Research Reporting Protocol and Practice." Scholar Commons, 2003. https://scholarcommons.usf.edu/etd/1390.

Full text
Abstract:
This study addresses research reporting practices and protocols by bridging the gap between the theoretical and conceptual debates typically found in the literature and more realistic applications using data from published research. Specifically, the practice of using findings of statistical analysis as the primary, and often only, basis for results and conclusions of research is investigated by computing effect sizes and confidence intervals and considering how their use might impact the strength of inferences and conclusions reported. Using a sample of published manuscripts from three peer-reviewed journals, central quantitative findings were expressed as dichotomous hypothesis test results, point estimates of effect sizes, and confidence intervals. Studies using three different types of statistical analyses were considered for inclusion: t-tests, regression, and Analysis of Variance (ANOVA). The differences in the substantive interpretations of results from these accomplished and published studies were then examined as a function of these different analytical approaches. Both quantitative and qualitative techniques were used to examine the findings. General descriptive statistical techniques were employed to capture the number of studies and analyses that might have different interpretations if alternative methods of reporting findings were used in addition to traditional tests of statistical significance. Qualitative methods were then used to gain a sense of how these other forms of reporting findings affect the wording used in research conclusions. It was discovered that non-significant results were more prone to need evidence of effect size than significant results. Regardless of tests of significance, the addition of information from confidence intervals tended to heavily impact the findings resulting from significance tests. The results were interpreted in terms of improving reporting practices in applied research. Issues noted in this study relevant to the primary focus are discussed, with implications for future research. Recommendations are made regarding editorial and publishing practices, both for primary researchers and editors.
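The kind of re-expression studied here (a dichotomous test result restated as a point estimate of effect size with a confidence interval) can be sketched as follows; the data are synthetic, and the standard error of d uses a common large-sample approximation rather than anything specific to this dissertation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group1 = rng.normal(0.4, 1.0, 40)   # hypothetical treatment scores
group2 = rng.normal(0.0, 1.0, 40)   # hypothetical control scores

# Traditional dichotomous result
t_stat, p_value = stats.ttest_ind(group1, group2)

# Effect size: standardized mean difference (Cohen's d)
n1, n2 = len(group1), len(group2)
s_pooled = np.sqrt(((n1 - 1) * group1.var(ddof=1) +
                    (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))
d = (group1.mean() - group2.mean()) / s_pooled

# Large-sample approximation to the standard error of d
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
ci = (d - 1.96 * se_d, d + 1.96 * se_d)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}, "
      f"95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Reporting the interval alongside p makes the precision of the estimate explicit, which is the contrast the study examines.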
15

Neupane, Ganesh Prasad. "Comparison of Natural and Predicted Earthquake Occurrence in Seismologically Active Areas for Determination of Statistical Significance." Bowling Green, Ohio : Bowling Green State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=bgsu1213494761.

Full text
16

Tlachac, Monica. "Tackling the Antibiotic Resistant Bacteria Crisis Using Longitudinal Antibiograms." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1318.

Full text
Abstract:
Antibiotic resistant bacteria, a growing health crisis, arise due to antibiotic overuse and misuse. Resistant infections endanger the lives of patients and are financially burdensome. Aggregate antimicrobial susceptibility reports, called antibiograms, are critical for tracking antibiotic susceptibility and evaluating the likely effectiveness of different antibiotics for treating an infection before patient-specific susceptibility data become available. This research leverages the Massachusetts Statewide Antibiogram database, a rich dataset composed of antibiograms for 754 antibiotic-bacteria pairs collected by the Massachusetts Department of Public Health from 2002 to 2016. However, these antibiograms are at least a year old, meaning antibiotics are prescribed based on outdated data, which unnecessarily furthers resistance. Our objective is to employ data science techniques on these antibiograms to assist in developing more responsible antibiotic prescription practices. First, we use model selectors with regression-based techniques to forecast current antimicrobial resistance. Next, we develop an assistant to immediately identify clinically and statistically significant changes in antimicrobial resistance between years once the most recent year of antibiograms is collected. Lastly, we use k-means clustering on resistance trends to detect antibiotic-bacteria pairs for which forecasting will not be effective. These three strategies can be implemented to guide more responsible antibiotic prescription practices and thus reduce unnecessary increases in antibiotic resistance.
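The third strategy can be sketched as follows; the data, the two-feature trend summary (slope and residual variability around a linear fit), and the cluster count are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
years = np.arange(2002, 2017)

# Hypothetical percent-resistance series for antibiotic-bacteria pairs:
# some with steady trends, some erratic
steady = [10 + 1.5 * (years - 2002) + rng.normal(0, 1, years.size)
          for _ in range(30)]
erratic = [20 + rng.normal(0, 8, years.size) for _ in range(30)]
series = np.array(steady + erratic)

# Describe each trend by its slope and its variability around the fit;
# erratic pairs (high residual spread) are poor candidates for forecasting
features = []
for y in series:
    slope, intercept = np.polyfit(years, y, 1)
    resid_sd = (y - (slope * years + intercept)).std()
    features.append([slope, resid_sd])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```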
17

Larson, Lincoln Gary. "Investigating Statistical vs. Practical Significance of the Kolmogorov-Smirnov Two-Sample Test Using Power Simulations and Resampling Procedures." Thesis, North Dakota State University, 2018. https://hdl.handle.net/10365/28770.

Full text
Abstract:
This research examines the power of the Kolmogorov-Smirnov two-sample test. The motivation for this research is a large data set containing soil salinity values. One problem encountered was that the power of the Kolmogorov-Smirnov two-sample test became extremely high due to the large sample size. This extreme power resulted in statistically significant differences between two distributions when no practically significant difference was present. This research used resampling procedures to create simulated null distributions for the test statistic. These null distributions were used to obtain power approximations for the Kolmogorov-Smirnov tests under differing effect sizes. The research shows that the power of the Kolmogorov-Smirnov test can become very large in cases of large sample sizes.
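A minimal version of the kind of power simulation described here, with synthetic normal samples and a hypothetical small location shift standing in for a practically unimportant effect:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_power(n, shift, n_sim=1000, alpha=0.05, seed=0):
    """Fraction of simulations in which the KS two-sample test rejects."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(shift, 1.0, n)   # small location shift = effect size
        if ks_2samp(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_sim

# A shift of 0.05 SD is arguably not practically significant,
# yet power climbs toward 1 as the sample size grows.
for n in (100, 1000, 10000):
    print(n, ks_power(n, shift=0.05))
```

This reproduces the thesis's core observation in miniature: with large samples, statistical significance ceases to imply practical significance.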
18

Chai, Shan. "Performance Evaluation of Perceptually Lossless Medical Image Coder." RMIT University, Electrical and Computer Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080205.120648.

Full text
Abstract:
Medical imaging technologies offer the benefits of faster and more accurate diagnosis. When combined with digitization, they offer the advantages of permanent storage and fast transmission to any geographical location. However, there is a need for efficient compression algorithms that alleviate the taxing burden of both large storage space and transmission bandwidth requirements. The Perceptually Lossless Medical Image Coder (PLMIC) is a new image compression technique. It provides a solution to the challenge of delivering clinically critical information in the shortest time possible. It embeds visual pruning into the JPEG 2000 coding framework to achieve optimal compression without losing the visual integrity of medical images. However, the performance of the PLMIC under certain medical image operations is still unknown. In this thesis, we investigate the performance of the PLMIC by applying linear, quadratic and cubic standard and centered B-spline interpolation filters. In order to evaluate the visual performance, a subjective assessment consisting of 30 medical images and 6 image processing experts was conducted. The perceptually lossless medical image coder was compared to the state-of-the-art JPEG-LS compliant LOCO and NLOCO image coders. Overall, the results showed no perceivable differences of statistical significance when the medical images were enlarged by a factor of 2. The findings of the thesis may help researchers further improve the coder. Additionally, they may inform radiologists of the performance of the PLMIC coder to help them with correct diagnosis.
19

Ubisse, Amosse Francisco. "A formative evaluation of LPC’s Montessori Preschool Programme." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/31303.

Full text
Abstract:
Research shows that early childhood interventions with fidelity to the Montessori model generate learner outcomes that outperform the traditional model. The evidence is confirmed in both developed and developing countries. This formative evaluation reports the results of a Montessori model being implemented in the township of Mfuleni, located in Cape Town, South Africa. Providing insights into the functioning of the programme, the evaluation confirms that the roll-out of the Montessori model is still underway, which may explain why the learners did not outperform the comparison group.
20

Chernoff, William Avram. "On determining the power of a test after data collection." Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/2278.

Full text
21

Stephens, Nathan W. "A comparison of genetic microarray analyses: a mixed models approach versus the significance analysis of microarrays." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1604.pdf.

Full text
22

Ng, Yui-kin. "A critical analysis of the role of statistical significance testing in education research, with special attention to mathematics education." Thesis, Durham University, 2005. http://etheses.dur.ac.uk/2714/.

Full text
Abstract:
This study analyzes the role of statistical significance testing (SST) in education. Although the basic logic underlying SST (a hypothesis is rejected because the observed data would be very unlikely if the hypothesis were true) appears so obvious that many people are tempted to accept it, it is in fact fallacious. In the light of its historical background and conceptual development, discussed in Chapter 2, Fisher's significance testing, Neyman-Pearson hypothesis testing and their hybrids are clearly distinguished. We argue that the probability of obtaining the observed or more extreme outcomes (the p value) can hardly act as a measure of the strength of evidence against the null hypothesis. After discussing the five major interpretations of probability, we conclude that if we do not accept the subjective theory of probability, talking about the probability of a hypothesis that is not the outcome of a chance process is unintelligible. But the subjective theory itself has many intractable difficulties that can hardly be resolved. If we insist on assigning a probability value to a hypothesis in the same way as we assign one to a chance event, we have to accept that it is the hypothesis with low probability, rather than high probability, that we should aim at when conducting scientific research. More important, the inferences behind SST are shown to be fallacious from three different perspectives. The attempt to invoke the likelihood ratio with the observed or more extreme data, instead of the probability of a hypothesis, in defending the use of the p value as a measure of the strength of evidence against the null hypothesis is also shown to be misleading, because it can be demonstrated that the use of a tail region to represent a result that is actually on the border would overstate the evidence against the null hypothesis. Although Neyman-Pearson hypothesis testing does not involve the concept of the probability of a hypothesis, it has some other serious problems that can hardly be resolved. We show that it cannot address researchers' genuine concerns. By explaining why the level of significance must be specified or fixed prior to the analysis of data, and why a blurring of the distinction between the p value and the significance level would lead to undesirable consequences, we conclude that Neyman-Pearson hypothesis testing cannot provide an effective means for rejecting false hypotheses. After a thorough discussion of common misconceptions associated with SST and the major arguments for and against SST, we conclude that SST has insurmountable problems that could misguide the research paradigm, although some other criticisms of SST are not as justified. We also analyze various proposed alternatives to SST and conclude that confidence intervals (CIs) are no better than SST for the purpose of testing hypotheses, and that it is unreasonable to expect a statistical test that could provide researchers with algorithms or rigid rules by conforming to which all problems about testing hypotheses could be solved. Finally, we argue that falsificationism could eschew the disadvantages of SST and other similar statistical inductive inferences, and we discuss how it could bring education research into a more fruitful situation. Although we pay special attention to mathematics education, the core of the discussion in this thesis might apply equally to other educational contexts.
23

Morrisset, David. "The Effect of Orientation on the Ignition of Solids." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2171.

Full text
Abstract:
The ignition of a solid is an inherently complex phenomenon influenced by heat and mass transport mechanisms that are, even to this day, not understood in their entirety. In order to use ignition data in meaningful engineering applications, significant simplifications have been made to the theory of ignition. The most common way to classify ignition is the use of material-specific parameters such as the ignition temperature (Tig) and the critical heat flux for ignition (CHF). These parameters are determined through standardized testing of solid materials; however, the results of these tests are generally used in applications different from the environments in which the parameters were actually determined. Generally, ignition temperature and critical heat flux are treated as material properties and are presented readily in sources such as the SFPE Handbook. However, these parameters are not truly material properties; each is inherently affected by the environment and orientation in which it is tested. Ignition parameters are therefore system dependent, tied to the conditions in which they are determined. Previous work has demonstrated that ignition parameters (such as Tig or CHF) for the same material can vary depending on whether the sample is tested in a vertical or horizontal orientation. While the results are clear, the implications this may have for the use of ignition data remain uncertain. This work outlines the fundamental theory of ignition as well as a review of studies related to orientation. The aim of this study is to analyze the influence of sample orientation on ignition parameters. All experimental work in this study was conducted using cast black polymethyl methacrylate (PMMA, commonly referred to as acrylic). This study explores ignition parameters for PMMA in various orientations and develops a methodology through which orientation can be incorporated into existing ignition theory. An additional study was also conducted to explore the statistical significance of current flammability test methodologies. Ultimately, this study outlines the problem of the system dependency of ignition and provides commentary on the use of ignition data in engineering applications.
24

Toan, Duong Duc. "Assessment of river discharge changes in the Indochina Peninsula region under a changing climate." 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/195976.

Full text
25

Coe, Robert, and César Merino Soto. "Effect Size: A guide for researchers and users." Pontificia Universidad Católica del Perú, 2003. http://repositorio.pucp.edu.pe/index/handle/123456789/100341.

Full text
Abstract:
The present article describes a method to quantify the magnitude of the difference between two measures and/or the degree of the effect of a variable on a criterion, named the effect size measure, d. Its use in research and applied contexts provides quite descriptive complementary information, improving the interpretation of results obtained by traditional methods that emphasize statistical significance. There are several ways of interpreting d, and an example taken from an experimental study is presented to clarify the concepts and the necessary calculations. This method is not robust to some conditions that can distort its interpretation, for example, non-normality of the data; alternative methods to the d statistic are mentioned. We end with some conclusions that warn about its appropriate use.
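For reference, the most common form of the effect size measure d described here is the standardized mean difference, computed from the two group means and the pooled standard deviation:

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

Unlike a p-value, d does not shrink toward "significance" merely because the samples grow; it estimates the size of the difference in standard-deviation units.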
26

Pfeiffer, Philip Edward. "A System for Determining the Statistical Significance of the Frequency of Short DNA Motif Matches in a Genome - An Analytical Approach." University of Dayton / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1304599225.

Full text
27

Keeling, Kellie Bliss. "Developing Criteria for Extracting Principal Components and Assessing Multiple Significance Tests in Knowledge Discovery Applications." Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc2231/.

Full text
Abstract:
With advances in computer technology, organizations are able to store large amounts of data in data warehouses. There are two fundamental issues researchers must address: the dimensionality of data and the interpretation of multiple statistical tests. The first issue addressed by this research is the determination of the number of components to retain in principal components analysis. This research establishes regression, asymptotic theory, and neural network approaches for estimating mean and 95th percentile eigenvalues for implementing Horn's parallel analysis procedure for retaining components. Certain methods perform better for specific combinations of sample size and numbers of variables. The adjusted normal order statistic estimator (ANOSE), an asymptotic procedure, performs the best overall. Future research is warranted on combining methods to increase accuracy. The second issue involves interpreting multiple statistical tests. This study uses simulation to show that Parker and Rothenberg's technique using a density function with a mixture of betas to model p-values is viable for p-values from central and non-central t distributions. The simulation study shows that final estimates obtained in the proposed mixture approach reliably estimate the true proportion of the distributions associated with the null and nonnull hypotheses. Modeling the density of p-values allows for better control of the true experimentwise error rate and is used to provide insight into grouping hypothesis tests for clustering purposes. Future research will expand the simulation to include p-values generated from additional distributions. The techniques presented are applied to data from Lake Texoma where the size of the database and the number of hypotheses of interest call for nontraditional data mining techniques. The issue is to determine if information technology can be used to monitor the chlorophyll levels in the lake as chloride is removed upstream. A relationship established between chlorophyll and the energy reflectance, which can be measured by satellites, enables more comprehensive and frequent monitoring. The results have both economic and political ramifications.
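Horn's parallel analysis, central to the first issue above, admits a compact sketch; the data are synthetic, and the regression, asymptotic-theory, and neural-network estimators studied in the dissertation are not reproduced here:

```python
import numpy as np

def parallel_analysis(data, n_sim=500, percentile=95, seed=0):
    """Horn's parallel analysis: retain components whose observed
    eigenvalues exceed the chosen percentile of eigenvalues obtained
    from random normal data of the same dimensions."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim = np.empty((n_sim, p))
    for i in range(n_sim):
        noise = rng.normal(size=(n, p))
        sim[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    threshold = np.percentile(sim, percentile, axis=0)
    return int(np.sum(observed > threshold))

# Hypothetical data with two strong underlying components
rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 2))
loadings = rng.normal(size=(2, 10))
X = factors @ loadings + rng.normal(scale=0.5, size=(300, 10))
print("components to retain:", parallel_analysis(X))
```

The dissertation's estimators replace the inner simulation loop with cheap approximations of these mean and 95th-percentile eigenvalues.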
28

Vaas, Lea Anna Isgard. "Physiological changes induced by gene overexpression in plant cell cultures: Characterization and statistical significance." Supervised by Hans Peter Klenk. Braunschweig: Technische Universität Braunschweig, 2012. http://d-nb.info/1175822744/34.

Full text
29

Brits, Jeanette. "A framework for the use and interpretation of statistics in reading instruction." Thesis, North-West University, 2007. http://hdl.handle.net/10394/1678.

Full text
30

Lu, Yingzhou. "Multi-omics Data Integration for Identifying Disease Specific Biological Pathways." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83467.

Full text
Abstract:
Pathway analysis is an important task for gaining novel insights into the molecular architecture of many complex diseases. With the advancement of new sequencing technologies, a large amount of quantitative gene expression data has been continuously acquired. Emerging omics data sets such as proteomics have facilitated the investigation of disease-relevant pathways. Although much work has previously been done to explore single omics data, little work has been reported using multi-omics data integration, mainly due to methodological and technological limitations. While a single omics data set can provide useful information about the underlying biological processes, multi-omics data integration can be much more comprehensive about the cause-effect processes responsible for diseases and their subtypes. This project investigates the combination of miRNAseq, proteomics, and RNAseq data on seven types of muscular dystrophies and a control group. These unique multi-omics data sets provide us with the opportunity to identify disease-specific and most relevant biological pathways. We first perform the t-test and the OVEPUG test separately to define the differentially expressed genes in the protein and mRNA data sets. In multi-omics data sets, miRNA also plays a significant role in muscle development by regulating its target genes in the mRNA data set. To exploit the relationship between miRNA and gene expression, we consult the commonly used gene library TargetScan to collect all paired miRNA-mRNA and miRNA-protein co-expression pairs. Next, by conducting statistical analysis such as Pearson's correlation coefficient or the t-test, we measure the biologically expected correlation of each gene with its upstream miRNAs and identify those showing negative correlation between the aforementioned miRNA-mRNA and miRNA-protein pairs. Furthermore, we identify and assess the most relevant disease-specific pathways by inputting the differentially expressed genes and negatively correlated genes into the gene-set libraries respectively, and further characterize these prioritized marker subsets using IPA (Ingenuity Pathway Analysis) or KEGG. We then use Fisher's method to combine the p-values derived from separate gene sets into a joint significance test assessing common pathway relevance. In conclusion, we find all negatively correlated miRNA-mRNA and miRNA-protein pairs and identify several pathophysiological pathways related to muscular dystrophies by gene set enrichment analysis. This novel multi-omics data integration study and subsequent pathway identification will shed new light on pathophysiological processes in muscular dystrophies, improve our understanding of the molecular pathophysiology of muscle disorders, and help in preventing and treating disease in the long term.
Master of Science
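The joint significance test mentioned in this abstract, Fisher's method, combines k independent p-values via the statistic -2 Σ ln p_i, which follows a chi-squared distribution with 2k degrees of freedom under the joint null. A minimal sketch with made-up p-values, using SciPy's implementation:

```python
from scipy.stats import combine_pvalues

# Hypothetical p-values from separate gene-set relevance tests
p_values = [0.04, 0.11, 0.007, 0.20]

# Fisher's method: -2 * sum(log p) ~ chi-squared with 2k df under H0
statistic, combined_p = combine_pvalues(p_values, method="fisher")
print(f"chi2 = {statistic:.2f}, combined p = {combined_p:.4f}")
```

The combined p can reach significance even when no single test does, which is the rationale for pooling evidence across gene sets.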
31

Chia, Nicholas Lee-Ping. "Sequence alignment." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1154616122.

Full text
32

Hess, Melinda Rae. "Effect sizes, significance tests, and confidence intervals [electronic resource]: assessing the influence and impact of research reporting protocol and practice." [Tampa, Fla.]: University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000148.

Full text
33

Haskell, Christopher Kent. "The Impact of Bicycle Corridors on Travel Demand in Utah." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5702.

Full text
Abstract:
Bicycling as an alternate mode of transportation has been on the rise. It is environmentally friendly, and the associated health benefits have made it a popular choice for many types of trips. The purpose of this research is to increase understanding of the impacts of implementing bicycle corridors (as part of the Utah Department of Transportation's (UDOT) Inclusion of Active Transportation policy) on bicycle rate as a function of roadway characteristics. The results of this research will be used in determining when and where bicycle corridors will enhance the transportation system and in estimating the overall impact of bicycle corridors on travel demand in Utah. Data collection was fundamental to this research project. With a limited amount of commuting bicycle data available throughout the state, it was necessary to gather bicycle volume data on corridors with and without bicycle infrastructure. Two primary methods were used to collect bicycle volume data: automatic bicycle counters on roadways that had bicycle infrastructure, and manual counts on roads with and without bicycle infrastructure. After the bicycle volume data were collected, the data were analyzed to identify trends. The first step in the analysis was to convert the bicycle volumes into rates to provide a more uniform comparison. Several analyses were run, comparing bicycle rate to Annual Average Daily Traffic (AADT), posted speed limit, number of vehicle lanes, and roadway classification. A comparison of sites with bicycle infrastructure to sites without bicycle infrastructure was also conducted to identify relationships. Comparison of bicycle rates to AADT resulted in no correlation or statistical relationship, though the data do suggest trends. Statistically significant results did occur when comparing bicycle rates to posted speed limits. No statistically significant relationships occurred when comparing bicycle rates to the number of lanes or roadway classification. It was determined that roadways with bicycle infrastructure tend to yield higher bicycle rates than roadways without bicycle infrastructure. Lastly, using shared use path data, it was determined that bicycle rates on shared use paths increased by 1.7 to 7.5 percent from 2013 to 2014, and it is assumed that a similar trend would exist on bicycle infrastructure in the communities.
34

Nhandara, Simbarashe. "The Operational Code of Tony Blair : Did he experience Learning, Stability or Change in his Belief System during the period he was Prime Minister?" Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-30510.

Full text
Abstract:
The intention of this project is to examine whether or not Anthony Charles Lynton "Tony" Blair experienced any belief changes or learning during the period he was Prime Minister of the United Kingdom (UK), a period of 10 years from 1997 until 2007. Our analysis covers a timeline beginning in 1999, when the UK participated in NATO's Operation Allied Force during the Kosovo War, and ending in 2006, when Britain took on the reins of the EU presidency for a six-month period. An exploration of the beliefs behind a leader's decision-making logic should always be considered a prudent undertaking, especially when it comes to foreign policy studies, because it is only through such activity that scholars can comprehend the distinction between decisions and actions. Thus, understanding when and how an individual leader's belief system changes is of central importance in furthering our ability to explain not only state behaviour but also the relationship between 'self' and 'other'. The main purpose of an operational code analysis is to enable political scientists and policy makers alike to deduce from a particular leader's verbal content what that actor's beliefs are and the premises they adopt in their decision-making process. The first of our two-part hypotheses seeks to determine whether Tony Blair exhibited changes in his beliefs at various stages of his premiership. These stages were signified by events that were also crucial in defining his political legacy. The events are divided into three categories: Post Kosovo – Pre Iraq, Pre 9/11 – Post 9/11, and Pre EU – Post EU. On completion of our VICS and SPSS analysis of Tony Blair's operational code beliefs, we discovered that there were no statistically significant changes in any of his operational indices. Due to the lack of statistically significant changes in Blair's Philosophical and Instrumental indices, we could not classify the events selected for this analysis as having produced any influence on his belief system. We thus rejected our null hypothesis and accepted the alternative hypothesis.
35

Abdey, James Spencer. "To p, or not to p? : quantifying inferential decision errors to assess whether significance truly is significant." Thesis, London School of Economics and Political Science (University of London), 2009. http://etheses.lse.ac.uk/31/.

Full text
Abstract:
Empirical testing is centred on p-values. These summary statistics are used to assess the plausibility of a null hypothesis, and therein lies a flaw in their interpretation. Central to this research is accounting for the behaviour of p-values, through density functions, under the alternative hypothesis, H1. These densities are determined by a combination of the sample size and parametric specification of H1. Here, several new contributions are presented to reflect p-value behaviour. By considering the likelihood of both hypotheses in parallel, it is possible to optimise the decision-making process. A framework for simultaneously testing the null and alternative hypotheses is outlined for various testing scenarios. To facilitate efficient empirical conclusions, a new set of critical value tables is presented requiring only the conventional p-value, hence avoiding the need for additional computation in order to apply this joint testing in practice. Simple and composite forms of H1 are considered. Recognising the conflict between different schools of thought with respect to hypothesis testing, a unified approach at consolidating the advantages of each is offered. Again, exploiting p-value distributions under various forms of H1, a revised conditioning statistic for conditional frequentist testing is developed from which original p-value curves and surfaces are produced to further ease decision making. Finally, attention turns to multiple hypothesis testing. Estimation of multiple testing error rates is discussed and a new estimator for the proportion of true null hypotheses, when simultaneously testing several independent hypotheses, is presented. Under certain conditions it is shown that this estimator is superior to an established estimator.
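As a concrete instance of such a density (a standard result, with notation chosen here rather than taken from the thesis): for a one-sided Z-test of H0: μ = 0 against H1: μ = δ > 0 based on n unit-variance observations, the p-value P = 1 − Φ(Z) has density under H1

```latex
g(p) = \frac{\varphi\!\left(z_p - \delta\sqrt{n}\right)}{\varphi\!\left(z_p\right)},
\qquad z_p = \Phi^{-1}(1 - p),
```

where φ and Φ are the standard normal density and distribution function. The density is decreasing in p and reduces to the uniform density when δ = 0, which is the sense in which the sample size and the parametric specification of H1 jointly determine p-value behaviour.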
36

Laxminarayan, Parameshvyas. "Exploratory analysis of human sleep data." Worcester, Mass. : Worcester Polytechnic Institute, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0119104-120134/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: association rule mining; logistic regression; statistical significance of rules; window-based association rule mining; data mining; sleep data. Includes bibliographical references (leaves 166-167).
37

Yang, Zi Hua. "An examination of some significant approaches to statistical deconvolution." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/6435.

Full text
Abstract:
We examine statistical approaches to two significant areas of deconvolution: Blind Deconvolution (BD) and Robust Deconvolution (RD) for stochastic stationary signals. For BD, we review some major classical and new methods in a unified framework of non-Gaussian signals. The first class of algorithms we look at falls into the class of Minimum Entropy Deconvolution (MED) algorithms. We discuss the similarities between them despite differences in origins and motivations. We give new theoretical results concerning the behaviour and generality of these algorithms and give evidence of scenarios where they may fail. In some cases, we present new modifications to the algorithms to overcome these shortfalls. Following our discussion of the MED algorithms, we next look at a recently proposed BD algorithm based on the correntropy function, a function defined as a combination of the autocorrelation and the entropy functions. We examine its BD performance compared with MED algorithms. We find that the BD carried out via correntropy-matching cannot be straightforwardly interpreted as simultaneous moment-matching due to the breakdown of the correntropy expansion in terms of moments. Other issues such as maximum/minimum phase ambiguity and computational complexity suggest that careful attention is required before establishing the correntropy algorithm as a superior alternative to the existing BD techniques. For the problem of RD, we give a categorisation of different kinds of uncertainties encountered in estimation and discuss techniques required to solve each individual case. Primarily, we tackle the overlooked cases of robustification of deconvolution filters based on an estimated blurring response or an estimated signal spectrum. We do this by utilising existing methods derived from criteria such as minimax MSE with imposed uncertainty bands and penalised MSE. In particular, we revisit the Modified Wiener Filter (MWF), which offers simplicity and flexibility in giving improved RD relative to the standard plug-in Wiener Filter (WF).
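For reference, the correntropy of a stationary process is usually defined with a Gaussian kernel of width σ as (a standard definition, which may differ in detail from the thesis's):

```latex
V_\sigma(\tau) = \mathbb{E}\!\left[\kappa_\sigma\!\left(X_t - X_{t-\tau}\right)\right],
\qquad
\kappa_\sigma(u) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{u^2}{2\sigma^2}\right).
```

A Taylor expansion of the kernel shows that V_σ blends all even-order moments of X_t − X_{t−τ}, which is why correntropy-matching cannot be read as matching any single moment.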
38

Chen, Jie. "Spatiotemporal roles of retinoic acid signaling in the cephalochordate amphioxus." Thesis, Lyon, École normale supérieure, 2011. http://www.theses.fr/2011ENSL0605.

Full text
Abstract:
Retinoic acid (RA) is an endogenous vitamin A-derived morphogen involved in the control of organogenesis and of cell proliferation and differentiation in chordates. In this context, we studied the spatiotemporal roles of RA signaling in amphioxus development, focusing on the European amphioxus species Branchiostoma lanceolatum. We first created excess and insufficiency models of RA signaling by exposing amphioxus embryos to series of doses of different pharmacological compounds targeting either the RA receptors or the RA metabolism machinery. By introducing a Cartesian coordinate system, we created detailed diagrams of the concentration-dependent defects caused by altered RA signaling in the central nervous system (CNS) and pharynx of amphioxus, evaluating the statistical significance of tissue-specific marker gene expression in labeled embryos. This analysis yielded a very detailed description of the sensitivities of the developing amphioxus CNS and pharynx to altered RA signaling levels. Following this initial challenge, we correlated the effects of altered RA signaling levels with key amphioxus developmental stages characterized by structural transitions in the CNS and pharynx. We show that hox-mediated RA signaling in axial patterning is active beyond the gastrula stage and might be maintained until at least the early larval stage, with possible roles in more regionalized axis formation and organ induction. In addition, we carried out a preliminary study of an RA-synthesizing gene in amphioxus, called aldh1a, a possible homolog of the vertebrate aldh1a2 gene, demonstrating that the feedback between RA signaling and RA synthesis levels emerged before the split of the cephalochordate and vertebrate lineages. Moreover, we show that RA signaling also participates in tail fin morphogenesis in amphioxus, by a mechanism that is probably not comparable to that in vertebrates, where RA modulates caudal fin patterning through targeting mesenchymal derivatives.
39

Kane, S. A. "Significance tests of probability non-stationarity of security price returns." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487780393266049.

Full text
40

Benfenati, Francesco Maria. "Statistical analysis of oceanographic extreme events." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19885/.

Full text
Abstract:
Extreme sea conditions can have a strong impact on navigation and/or on the success of rescue operations. Statistical techniques are crucial for quantifying the presence of extreme events and for monitoring variations in their frequency and intensity. Extreme events "live" in the tail of a probability density function (PDF), which is why it is important to study the PDF at points several standard deviations away from the mean. Significant wave height (SWH) is the parameter usually used to assess the intensity of sea states. The analysis of extremes in the tail of a distribution requires long time series to obtain reasonable estimates of their intensity and frequency. Observational data (i.e., historical buoy records) are often absent, and numerical wave reconstructions are used instead, with the advantage that the analysis of extreme events becomes possible over a wide area. This thesis aims to conduct a preliminary analysis of the spatial variations of extreme SWH values in the Mediterranean. Hourly data from the Med-MFC model (from the CMEMS portal) are used: a numerical wave reconstruction for the Mediterranean based on the "WAM Cycle 4.5.4" model, covering the period 2006-2018 with a spatial resolution of 0.042° (~4 km). In particular, we consider 11 years of data (2007 to 2017), focusing on the Ionian Sea and Iberian Sea regions. The PDF of SWH is followed rather well by a 2-parameter Weibull curve both in winter (January) and in summer (July), with shortcomings at the peak and in the tail of the distribution. By comparison, the 3-parameter Exponentiated Weibull curve appears more appropriate, although no method was found to demonstrate that it is a better fit. Finally, a risk-estimation method is proposed, based on the daily return period of waves higher than a given threshold value considered dangerous.
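The two fits compared here can be sketched with SciPy's weibull_min (2-parameter, location fixed at zero) and exponweib (exponentiated Weibull) distributions; the SWH series below is synthetic, standing in for the hourly model data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for hourly significant wave height (SWH) in metres
swh = stats.weibull_min.rvs(c=1.6, scale=1.2, size=5000, random_state=rng)

# 2-parameter Weibull: fix location at 0, fit shape c and scale
c, loc, scale = stats.weibull_min.fit(swh, floc=0)

# 3-parameter exponentiated Weibull (extra exponentiation parameter a)
a, c3, loc3, scale3 = stats.exponweib.fit(swh, floc=0)

# Compare log-likelihoods (higher is better; a formal comparison
# would have to penalize the extra parameter)
ll2 = stats.weibull_min.logpdf(swh, c, loc, scale).sum()
ll3 = stats.exponweib.logpdf(swh, a, c3, loc3, scale3).sum()
print(f"Weibull ll = {ll2:.1f}, exponentiated Weibull ll = {ll3:.1f}")
```

The log-likelihood comparison mirrors the thesis's observation that the extra parameter helps at the peak and tail, while leaving open how to show formally that it is a better fit.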
41

Akintola, Abayomi Rasheed. "User Adoption of Big Data Analytics in the Public Sector." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-86641.

Full text
Abstract:
The goal of this thesis was to investigate the factors that influence the adoption of big data analytics by public sector employees, based on an adapted Unified Theory of Acceptance and Use of Technology (UTAUT) model. A mixed method of survey and interviews was used to collect data from employees of a Canadian provincial government ministry. The results show that performance expectancy and facilitating conditions have significant positive effects on the adoption intention of big data analytics, while effort expectancy has a significant negative effect. Social influence does not have a significant effect on adoption intention. In terms of moderating variables, the results show that gender moderates the effects of effort expectancy, social influence and facilitating conditions; data experience moderates the effects of performance expectancy, effort expectancy and facilitating conditions; and leadership moderates the effect of social influence. The moderation effects of age on performance expectancy and effort expectancy are significant only for employees in the 40 to 49 age group, while the moderation effect of age on social influence is significant for employees 40 years and older. Based on the results, implications for public sector organizations planning to implement big data analytics are discussed and suggestions for further research are made. This research contributes to existing studies on the user adoption of big data analytics.
42

Saket, Munther Musa. "Cost-significance applied to estimating and control of construction projects." Thesis, University of Dundee, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.276578.

Full text
43

Endo, Wagner. "Modelagem de circuitos neurais do sistema neuromotor e proprioceptor de insetos com o uso da transferência de informação entre conexões neurais." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-06052014-163332/.

Full text
Abstract:
This work presents the development of a bio-inspired model based on the neural circuit of insects. The model is obtained through the first-order analysis given by the STA (Spike Triggered Average) and through the transfer of information between neural signals. Techniques are applied that identify the time delays of maximum information coherence, using two concepts from information theory: DMI (Delayed Mutual Information) and TE (Transfer Entropy). Both approaches measure information transfer, each with its own particularities. The DMI is computationally simpler than the TE, since it depends on the statistical analysis of second-order probability density functions, whereas the TE depends on third-order functions; depending on the computational resources available, this is a factor to take into account. Information delays are identified very well by the DMI. However, the DMI fails to distinguish the direction of information flow in systems with indirect information transfer and superposition of information. In such cases, the TE is the more appropriate tool for determining the direction of information flow, owing to the conditional dependence imposed by the common history of the analyzed signals. These issues arise in many neural circuits: in the metathoracic ganglion of insects, the local interneurons have different patterns of pathways with superposition of information, since they receive signals from different sensory neurons involved in moving the animals' legs. The main objective of this work is to propose a model of the insect's neural circuit that maps how the neural signals behave when a set of random movements is imposed on the insect's leg. The neural responses are reflexes evoked by tactile stimulation, which generates movement at the femoro-tibial joint of the hind leg. In these neural circuits, the signals are processed by spiking and nonspiking local interneurons, which operate in parallel to process the information coming from the sensory neurons. These interneurons receive input signals from mechanoreceptors of the hind leg and of the insect's motor joint. The main feature of local interneurons is their ability to communicate with other neurons, with or without the presence of nerve impulses (spiking and nonspiking). Together they form a neural circuit with input signals (sensory neurons) and outputs (motor neurons). The proposed algorithms analyze the whole chain, from the random generation of the mechanical movements and the stimuli in the sensory neurons that reach the metathoracic ganglion to the responses in the motor neurons. Algorithms and their pseudocode are implemented for both the DMI and the TE. The Surrogate Data technique is used to infer measures of statistical significance for the maximum information coherence between neural signals, and the surrogate results are used to compensate for the bias errors of the information-transfer measures. An algorithm based on the IAAFT (Iterative Amplitude Adjusted Fourier Transform) generates surrogate data with the same power spectrum as, but a different distribution from, the original signals. The DMI and TE results obtained with surrogate data provide the baseline values corresponding to minimum information transfer. In addition, simulated data were used to discuss the effects of sample size and of the strength of information association.
The collected neural signals are available in a database of several experiments on the locusts' metathoracic ganglion. However, each experiment contains few simultaneously recorded signals, so across experiments the signals are subject to variations in sample size, as well as to noise that interferes with absolute measures of information transfer. To map these neural connections, a methodology based on normalization and on compensation of the bias errors in the information-transfer calculations was used. The normalizations of the measures use the total entropies of the system: for the DMI, the geometric mean of the entropies of X and Y; for the TE, the CMI (Conditional Mutual Information). After applying these approaches, based on the STA and on the transfer of information, the structural model of the neural circuit of the locusts' neuromotor system is presented. Results are reported with the STA and the DMI for the sensory neurons, from which some hypotheses are raised about the functioning of this part of the FeCO (Femoral Chordotonal Organ). For each type of neuron, multiple pathways in the neural circuit were identified through the time delays and the values of maximum information coherence: two pathway patterns were obtained for the spiking interneurons, while three distinct patterns were identified for the nonspiking interneurons. These results were obtained computationally and are consistent with the biological models described in Burrows (1996).
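Since the thesis's central computation is the DMI profile over candidate delays, a minimal sketch of that idea may help. The following is an illustrative Python reconstruction only, assuming binned firing-rate signals as NumPy arrays; the function names are hypothetical, and the baseline uses simple shuffle surrogates rather than the IAAFT surrogates the thesis employs:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # Histogram-based mutual information estimate, in bits.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0                          # skip zero cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def delayed_mutual_information(x, y, max_delay=50):
    # DMI profile: MI between x[t] and y[t + d] for each candidate delay d.
    dmi = np.empty(max_delay + 1)
    for d in range(max_delay + 1):
        xs = x[:len(x) - d] if d > 0 else x
        dmi[d] = mutual_information(xs, y[d:])
    return dmi

def shuffle_baseline(x, y, n_surrogates=100, seed=0):
    # Crude shuffle-surrogate MI baseline; the thesis instead uses IAAFT
    # surrogates, which also preserve the power spectrum of the signals.
    rng = np.random.default_rng(seed)
    return float(np.mean([mutual_information(rng.permutation(x), y)
                          for _ in range(n_surrogates)]))

# The delay of maximum information coherence is the peak of the profile:
# dmi = delayed_mutual_information(sensory_rate, motor_rate)
# best_delay = int(np.argmax(dmi))
```

A DMI value well above the surrogate baseline at some delay suggests information transfer at that lag; distinguishing the direction of flow in the presence of a shared history is what the thesis reserves for the TE.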
APA, Harvard, Vancouver, ISO, and other styles
44

Riley, Michael. "Significant pattern discovery in gene location and phylogeny." Thesis, Aberystwyth University, 2009. http://hdl.handle.net/2160/fbabd607-ae86-44ed-a3f1-9eb2c461da32.

Full text
Abstract:
This thesis documents an investigation into the acquisition of knowledge from biological data, using computational methods to discover significantly frequent patterns in gene location and phylogeny. Beginning with an initial statistical analysis of the distribution of gene locations in the flowering plant Arabidopsis thaliana, we discover unexplained elements of order. The second area of this research looks into frequent patterns in the one-dimensional linear structure of the physical locations of genes on the genome of Saccharomyces cerevisiae, an area of epigenetics which has hitherto attracted little attention. The frequent patterns are patterns of structure represented in Datalog, suitable for analysis using the logic programming language Prolog. This representation is used to find patterns in gene location with respect to various gene attributes, such as molecular function and the distance between genes. Here we find significant frequent patterns in neighbouring pairs of genes. We also discover very significant patterns in the molecular function of genes separated by distances of between 5,000 and 20,000 base pairs. In complete contrast to the latter result, however, we find that the distribution of genes by molecular function within a local region of ±20,000 base pairs is locationally independent. In the second part of this research we look for significantly frequent patterns of phylogenetic subtrees in a broad database of phylogenetic trees, investigating two types of frequent phylogenetic structure. First, phylogenetic pairs are used to determine relationships between organisms. Second, phylogenetic triple structures are used to represent subtrees. Frequent subtree mining is then used to establish, with high confidence, phylogenetic relationships among a small set of organisms; this exercise was invaluable in preparing these procedures to be extended in future to much larger sets of organisms. This research has revealed effective methods for the analysis of gene locations within genomes and has discovered patterns of order in them. Research into phylogenetic tree generation based on protein structure has established the requirements for an effective method to extract elements of phylogenetic information from a phylogenetic database and to reconstruct a single consensus tree from that information. In this way it should be possible to produce a species tree of life with a high degree of confidence and resolution.
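To make the neighbouring-pair result concrete, here is a small illustrative sketch of the kind of permutation test such an analysis involves, with genes given as an ordered list of molecular-function labels. It is a generic Python reconstruction under those assumptions, not the thesis's Datalog/Prolog machinery:

```python
import numpy as np

def shared_function_pairs(functions):
    # Count adjacent gene pairs annotated with the same molecular function.
    f = np.asarray(functions)
    return int(np.sum(f[:-1] == f[1:]))

def neighbour_pair_pvalue(functions, n_perm=10_000, seed=0):
    # Permutation p-value: is the observed count of same-function
    # neighbouring pairs larger than expected if functions were
    # scattered over gene locations at random?
    rng = np.random.default_rng(seed)
    observed = shared_function_pairs(functions)
    null_counts = (shared_function_pairs(rng.permutation(functions))
                   for _ in range(n_perm))
    exceed = sum(c >= observed for c in null_counts)
    return (1 + exceed) / (n_perm + 1)

# Toy annotations (hypothetical, not from the thesis):
# p = neighbour_pair_pvalue(["kinase", "kinase", "transporter",
#                            "kinase", "kinase", "transporter"])
```

Shuffling destroys any locational structure while preserving the overall function frequencies, which is exactly the null hypothesis of locational independence discussed above.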
APA, Harvard, Vancouver, ISO, and other styles
45

SUN, SHUYAN. "A Comprehensive Review of Effect Size Reporting and Interpreting Practices in Academic Journals in Education and Psychology." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1216868724.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Paranagama, Dilan C. "Correlation and variance stabilization in the two group comparison case in high dimensional data under dependencies." Diss., Kansas State University, 2011. http://hdl.handle.net/2097/13132.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Gary L. Gadbury
Multiple testing research has undergone renewed focus in recent years as advances in high-throughput technologies have produced data on unprecedented scales. Much of the focus has been on false discovery rates (FDR) and related quantities that are estimated (or controlled) in large-scale multiple testing situations. Recent papers by Efron have directly addressed this issue and incorporated measures to account for high-dimensional correlation structure when estimating false discovery rates and when estimating a density. Other authors have also proposed methods to control or estimate FDR under dependencies, given certain assumptions. However, little attention has been given in the literature to the stability of the results obtained under dependencies. This work begins by demonstrating the effect of dependence structure on the variance of the number of discoveries and on the false discovery proportion (FDP). An expression for the variance of the number of discoveries is derived, as is the density of a test statistic conditioned on the status (reject or fail to reject) of a different, correlated test. A closed-form solution for the correlation between test statistics is also derived; this correlation is a combination of the correlations and variances of the data within the groups being compared. It is shown that these correlations among the test statistics affect the conditional density and alter the threshold for significance of a correlated test, causing instability in the results. The concept of performing tests within networks, Conditional Network Testing (CNT), is introduced. This method is based on the conditional density mentioned above and uses the correlation between test statistics to construct networks. A method to simulate realistic data with preserved dependence structures is also presented. CNT is evaluated using simple simulations and the proposed simulation method. In addition, existing methods that control false discovery rates are applied to both t-tests and CNT to compare performance. It is shown that the false discovery proportion and the type I error proportion are smaller when using CNT than when using t-tests and that, in general, results are more stable under CNT. Finally, applications and steps to further improve CNT are discussed.
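For readers unfamiliar with the machinery this dissertation builds on, the sketch below shows the standard Benjamini-Hochberg step-up procedure in Python. It is background only, not the proposed Conditional Network Testing, which additionally conditions each test on the status of correlated tests:

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    # Step-up BH procedure: reject the k smallest p-values, where k is
    # the largest index with p_(k) <= q * k / m. Controls the FDR at q
    # under independence; under dependence, the dissertation's focus,
    # the realized false discovery proportion can be highly variable.
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# reject = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6], q=0.05)
# rejects the two smallest p-values at q = 0.05
```

The procedure fixes the FDR target on average; the dissertation's point is that the spread of the realized FDP around that target depends on the correlation structure among the tests.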
APA, Harvard, Vancouver, ISO, and other styles
47

Lindsay, R. M. "The use of tests of significance in accounting research : A methodological, philosophical and empirical inquiry." Thesis, Lancaster University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lucas, Craig. "Prediction of protein function using statistically significant sub-structure discovery." Thesis, University of Leeds, 2007. http://etheses.whiterose.ac.uk/1345/.

Full text
Abstract:
Proteins perform a vast number of functional roles. The number of protein structures available for analysis continues to grow and, with the development of methods to predict protein structure directly from genetic sequence without imaging technology, the number of structures with unknown function is likely to increase. Computational methods for predicting the function of protein structures are therefore desirable. There are several existing systems for attempting to assign function but their use is inadvisable without human intervention. Methods for searching proteins with shared function for a shared structural feature are often limited in ways that are counterproductive to a general discovery solution. Assigning accurate scores to significant sub-structures also remains an area of development. A method is presented that can find common sub-structures between multiple proteins, without the size or structural limitations of existing discovery methods. A novel measure of assigning statistical significance is also presented. These methods are tested on artificially generated and real protein data to demonstrate their ability to successfully discover statistically significant sub-structures. With a database of such sub-structures, it is then shown that prediction of function for a new protein is possible based on the presence of the discovered significant patterns.
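One common baseline for scoring how surprising a discovered sub-structure is uses a binomial tail probability against a background occurrence rate. The sketch below illustrates only that generic idea, in Python; the thesis proposes its own novel significance measure, which is not reproduced here:

```python
from scipy.stats import binom

def substructure_pvalue(n_hits, n_proteins, background_rate):
    # Tail probability P(X >= n_hits) for X ~ Binomial(n_proteins, rate):
    # how likely is it to see at least this many proteins containing the
    # sub-structure if occurrences were just background noise?
    return float(binom.sf(n_hits - 1, n_proteins, background_rate))

# e.g. the motif appears in 12 of 40 proteins, background rate 10%:
# p = substructure_pvalue(12, 40, 0.10)   # small tail probability
```

A small tail probability flags the sub-structure as occurring more often than background chance would explain, which is the precondition for using it to predict function in a new protein.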
APA, Harvard, Vancouver, ISO, and other styles
49

Gyldberg, Ellinor, and Henrik Bark. "Type 1 error rate and significance levels when using GARCH-type models." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375770.

Full text
Abstract:
The purpose of this thesis is to test whether the probability of falsely rejecting a true null hypothesis of a model intercept being equal to zero is consistent with the chosen significance level when the variance of the error term is modelled using GARCH(1,1), TGARCH(1,1), or IGARCH(1,1) models. We test this by estimating “Jensen’s alpha” to evaluate alpha trading, using a Monte Carlo simulation based on historical data from the Standard & Poor’s 500 Index and stocks in the Dow Jones Industrial Average Index. We evaluate simulated daily data over periods of 3 months, 6 months, and 1 year. Our results indicate that the GARCH and IGARCH models consistently reject a true null hypothesis less often than the selected 1%, 5%, or 10% significance level, whereas the TGARCH consistently rejects a true null hypothesis more often than the chosen level. There is thus a risk of incorrect inferences when using these GARCH-type models.
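The study's design can be illustrated with a simplified Monte Carlo sketch in Python: simulate zero-mean returns whose errors follow a GARCH(1,1) process and count how often a test of a zero mean falsely rejects at the nominal level. Parameter values are illustrative, and this reduces the intercept test to a one-sample t-test, skipping the GARCH estimation step the thesis performs:

```python
import numpy as np
from scipy import stats

def simulate_garch11(n, omega=0.05, alpha=0.05, beta=0.90, seed=None):
    # Zero-mean returns with GARCH(1,1) conditional variance:
    #   sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)   # start at unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

def type1_error_rate(n_days=250, n_sims=2000, level=0.05, seed=0):
    # Fraction of simulations in which a t-test falsely rejects the true
    # null hypothesis that the mean return (the "alpha") is zero.
    rejections = 0
    for i in range(n_sims):
        r = simulate_garch11(n_days, seed=seed + i)
        rejections += stats.ttest_1samp(r, 0.0).pvalue < level
    return rejections / n_sims

# print(type1_error_rate())   # compare with the nominal 5% level
```

If the empirical rejection rate differs from the nominal level, the stated significance level misstates the actual type 1 error rate, which is precisely the question the thesis investigates.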
APA, Harvard, Vancouver, ISO, and other styles
50

Krouthen, Johannes. "Apartment values in Uppsala: Significant factors that differentiate the selling prices." Thesis, Uppsala universitet, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-155333.

Full text
APA, Harvard, Vancouver, ISO, and other styles