Academic literature on the topic 'Principal comparisons (Statistics)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Principal comparisons (Statistics).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Principal comparisons (Statistics)"

1

Duarte Silva, António Pedro. "Discarding Variables in a Principal Component Analysis: Algorithms for All-Subsets Comparisons." Computational Statistics 17, no. 2 (July 2002): 251–71. http://dx.doi.org/10.1007/s001800200105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rosenbaum, Paul R. "Combining planned and discovered comparisons in observational studies." Biostatistics 21, no. 3 (September 26, 2018): 384–99. http://dx.doi.org/10.1093/biostatistics/kxy055.

Full text
Abstract:
In observational studies of treatment effects, it is common to have several outcomes, perhaps of uncertain quality and relevance, each purporting to measure the effect of the treatment. A single planned combination of several outcomes may increase both power and insensitivity to unmeasured bias when the plan is wisely chosen, but it may miss opportunities in other cases. A method is proposed that uses one planned combination with only a mild correction for multiple testing and exhaustive consideration of all possible combinations fully correcting for multiple testing. The method works with the joint distribution of $\boldsymbol{\kappa}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\kappa}^{T}\boldsymbol{\Sigma}\boldsymbol{\kappa}}$ and $\max_{\boldsymbol{\lambda}\neq\mathbf{0}}\,\boldsymbol{\lambda}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\lambda}^{T}\boldsymbol{\Sigma}\boldsymbol{\lambda}}$, where $\boldsymbol{\kappa}$ is chosen a priori and the test statistic $\mathbf{T}$ is asymptotically $N_{L}(\boldsymbol{\mu},\boldsymbol{\Sigma})$. The correction for multiple testing has a smaller effect on the power of $\boldsymbol{\kappa}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\kappa}^{T}\boldsymbol{\Sigma}\boldsymbol{\kappa}}$ than does switching to a two-tailed test, even though the opposite tail does receive consideration when $\boldsymbol{\lambda}=-\boldsymbol{\kappa}$. In the application, there are three measures of cognitive decline, and the a priori comparison $\boldsymbol{\kappa}$ is their first principal component, computed without reference to treatment assignments. The method is implemented in the R package sensitivitymult.
APA, Harvard, Vancouver, ISO, and other styles
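To make the two statistics in the abstract above concrete, here is a small numeric sketch (our illustration, not code from the sensitivitymult package): the planned combination uses a pre-chosen kappa, while the exhaustive search over all lambda reduces, by the Cauchy-Schwarz inequality in the Sigma inner product, to a closed form.

```python
# Minimal sketch of the planned and maximal statistics (toy 3-outcome case).
import numpy as np

rng = np.random.default_rng(0)

mu = np.zeros(3)                      # null mean of T
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])   # asymptotic covariance of T
T = rng.multivariate_normal(mu + 0.4, Sigma)   # a draw with a shared effect

# Planned comparison: kappa chosen a priori (here: first principal
# component of Sigma, echoing the cognitive-decline application).
eigvals, eigvecs = np.linalg.eigh(Sigma)
kappa = eigvecs[:, -1]

planned = kappa @ (T - mu) / np.sqrt(kappa @ Sigma @ kappa)

# Exhaustive comparison: max over all nonzero lambda of the same ratio.
# By Cauchy-Schwarz in the Sigma inner product this equals
# sqrt((T - mu)' Sigma^{-1} (T - mu)).
maximal = np.sqrt((T - mu) @ np.linalg.solve(Sigma, T - mu))

print(f"planned statistic: {planned:.3f}, maximal statistic: {maximal:.3f}")
```
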
3

Chou, Lin-Yi, and P. W. Sharp. "On order 5 symplectic explicit Runge-Kutta Nyström methods." Journal of Applied Mathematics and Decision Sciences 4, no. 2 (January 1, 2000): 143–50. http://dx.doi.org/10.1155/s1173912600000109.

Full text
Abstract:
Order five symplectic explicit Runge-Kutta Nyström methods of five stages are known to exist. However, these methods do not have free parameters with which to minimise the principal error coefficients. By adding one derivative evaluation per step, to give either a six-stage non-FSAL family or a seven-stage FSAL family of methods, two free parameters become available for the minimisation. This raises the possibility of improving the efficiency of order five methods despite the extra cost of taking a step.We perform a minimisation of the two families to obtain an optimal method and then compare its numerical performance with published methods of orders four to seven. These comparisons along with those based on the principal error coefficients show the new method is significantly more efficient than the five-stage, order five methods. The numerical comparisons also suggest the new methods can be more efficient than published methods of other orders.
APA, Harvard, Vancouver, ISO, and other styles
4

Marchetti, Mario, Lee Chapman, Abderrahmen Khalifa, and Michel Buès. "New Role of Thermal Mapping in Winter Maintenance with Principal Components Analysis." Advances in Meteorology 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/254795.

Full text
Abstract:
Thermal mapping uses IR thermometry to measure road pavement temperature at a high resolution to identify and to map sections of the road network prone to ice occurrence. However, measurements are time-consuming and ultimately only provide a snapshot of road conditions at the time of the survey. As such, there is a need for surveys to be restricted to a series of specific climatic conditions during winter. Typically, five to six surveys are used, but it is questionable whether the full range of atmospheric conditions is adequately covered. This work investigates the role of statistics in adding value to thermal mapping data. Principal components analysis is used to interpolate between individual thermal mapping surveys to build a thermal map (or even a road surface temperature forecast), for a wider range of climatic conditions than that permitted by traditional surveys. The results indicate that when this approach is used, fewer thermal mapping surveys are actually required. Furthermore, comparisons with numerical models indicate that this approach could yield a suitable verification method for the spatial component of road weather forecasts—a key issue currently in winter road maintenance.
APA, Harvard, Vancouver, ISO, and other styles
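The PCA interpolation idea in the entry above can be sketched generically (the synthetic data, sizes, and interpolation point are our assumptions, not the paper's procedure): surveys occupy a low-dimensional component space, so a temperature profile for an unobserved climatic condition can be approximated from component scores.

```python
# Generic sketch: interpolate between thermal-mapping surveys in PC space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Synthetic data: 200 road locations x 6 surveys under different
# climatic conditions (all values illustrative).
base = rng.normal(0.0, 1.0, size=(200, 1))            # site effect
weather = rng.normal(0.0, 1.0, size=(1, 6))           # survey effect
surveys = 5.0 + base + base * weather + rng.normal(0, 0.1, (200, 6))

pca = PCA(n_components=2)
scores = pca.fit_transform(surveys.T)                 # one score pair per survey
print("variance explained:", pca.explained_variance_ratio_.round(3))

# Interpolate: a hypothetical intermediate climatic condition is a
# point between two observed surveys in score space.
new_scores = 0.5 * (scores[0] + scores[1])
new_profile = pca.inverse_transform(new_scores[None, :])[0]  # 200 temperatures
print("interpolated profile shape:", new_profile.shape)
```
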
5

Craft, Kathleen J., and Mary V. Ashley. "Population differentiation among three species of white oak in northeastern Illinois." Canadian Journal of Forest Research 36, no. 1 (January 1, 2006): 206–15. http://dx.doi.org/10.1139/x05-234.

Full text
Abstract:
We used microsatellite DNA analysis to examine population differentiation among three species of white oak, Quercus alba L., Quercus bicolor Willd., and Quercus macrocarpa Michx., occurring in both pure and mixed stands in northeastern Illinois. Using individual-based Bayesian clustering or principal components analyses, no strong genetic groupings of individuals were detected. This suggests that the three species do not represent distinct and differentiated genetic entities. Nevertheless, traditional approaches where individuals are pre-assigned to species and populations, including F statistics, allele frequency analysis, and Nei's genetic distance, revealed low, but significant genetic differentiation. Pairwise F statistics showed that some intraspecific comparisons were as genetically differentiated as interspecific comparisons, with the two populations of Q. alba exhibiting the highest level of genetic differentiation (θ = 0.1156). A neighbor-joining tree also showed that the two populations of Q. alba are distinct from one another and from the two other species, while Q. bicolor and Q. macrocarpa were genetically more similar. Pure stands of Q. macrocarpa did not show a higher degree of genetic differentiation than mixed stands.
APA, Harvard, Vancouver, ISO, and other styles
6

Bose, Lotan, Nitiprasad Jambhulkar, and Kanailal Pande. "Genotype by environment interaction and stability analysis for rice genotypes under Boro condition." Genetika 46, no. 2 (2014): 521–28. http://dx.doi.org/10.2298/gensr1402521b.

Full text
Abstract:
Genotype (G) × Environment (E) interaction of nine rice genotypes possessing cold tolerance at the seedling stage, tested over four environments, was analyzed to identify stable high-yielding genotypes suitable for boro environments. The genotypes were grown in a randomized complete block design with three replications. The genotype × environment (G×E) interaction was studied using different stability statistics, viz. Additive Main effects and Multiplicative Interaction (AMMI), AMMI stability value (ASV), rank-sum (RS) and yield stability index (YSI). Combined analysis of variance shows that genotype, environment and G×E interaction are highly significant, which indicates the possibility of selecting stable genotypes across the environments. The results of the AMMI analysis indicated that the first two principal components (PC1-PC2) were highly significant (P<0.05). The partitioning of the total sum of squares (TSS) showed that the genotype effect was the predominant source of variation, followed by the G×E interaction and the environment. The genotype effect was nine times higher than that of the G×E interaction, suggesting the possible existence of different environment groups. The first two interaction principal component axes (IPCA) cumulatively explained 92% of the total interaction effects. The study revealed that genotypes GEN6 and GEN4 were stable according to all stability statistics. Grain yield (GY) is positively and significantly correlated with rank-sum (RS) and yield stability index (YSI). These stability statistics could be useful for the identification of stable high-yielding genotypes and facilitate visual comparisons of high-yielding genotypes across multi-environment trials.
APA, Harvard, Vancouver, ISO, and other styles
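As a reading aid for the entry above, here is a compact sketch of the AMMI decomposition and the AMMI stability value on synthetic yields (the ASV weighting follows the commonly used definition, which we assume matches the study's):

```python
# AMMI sketch: additive main effects + SVD of the interaction table.
import numpy as np

rng = np.random.default_rng(2)
Y = rng.normal(5.0, 1.0, size=(9, 4))         # mean yield per G x E cell (toy)

grand = Y.mean()
g_eff = Y.mean(axis=1) - grand                # genotype main effects
e_eff = Y.mean(axis=0) - grand                # environment main effects

# Additive-model residuals form the G x E interaction table.
ge = Y - grand - g_eff[:, None] - e_eff[None, :]

# Multiplicative part: SVD of the interaction; scaled columns of U are
# genotype IPCA scores, scaled rows of Vt are environment IPCA scores.
U, s, Vt = np.linalg.svd(ge, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("share of interaction on IPCA1, IPCA2:", explained[:2].round(2))

# AMMI stability value from the first two IPCA scores; the weight is the
# ratio of the IPCA1 and IPCA2 sums of squares (standard ASV definition).
w = s[0]**2 / s[1]**2
ipca1, ipca2 = U[:, 0] * s[0], U[:, 1] * s[1]
asv = np.sqrt((w * ipca1)**2 + ipca2**2)      # smaller ASV = more stable
print("ASV per genotype:", asv.round(3))
```
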
7

Monroe, Scott. "Estimation of Expected Fisher Information for IRT Models." Journal of Educational and Behavioral Statistics 44, no. 4 (April 7, 2019): 431–47. http://dx.doi.org/10.3102/1076998619838240.

Full text
Abstract:
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in practice, the expected information is not typically used, as it often requires a large amount of computation. In the present research, two methods to approximate the expected information by Monte Carlo are proposed. The first method is suitable for less complex IRT models such as unidimensional models. The second method is generally applicable but is designed for use with more complex models such as high-dimensional IRT models. The proposed methods are compared to existing methods using real data sets and a simulation study. The comparisons are based on simple structure multidimensional IRT models with two-parameter logistic item models.
APA, Harvard, Vancouver, ISO, and other styles
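The core idea in the entry above, approximating expected Fisher information by simulation rather than exact integration, can be illustrated on a deliberately simple case: a single two-parameter logistic (2PL) item with abilities drawn from N(0, 1). This toy setup is our assumption, not either of the paper's two estimators.

```python
# Monte Carlo approximation of expected Fisher information for one 2PL item.
import numpy as np

rng = np.random.default_rng(3)
a, b = 1.5, 0.3        # discrimination and difficulty (illustrative)
R = 100_000

theta = rng.normal(0.0, 1.0, R)
p = 1.0 / (1.0 + np.exp(-a * (theta - b)))     # response probability

# The score vector of the log-likelihood for (a, b) is (u - p) * g with
# g = [theta - b, -a]; since Var(u | theta) = p(1 - p), averaging
# p(1 - p) g g' over theta gives the expected information per respondent.
g = np.stack([theta - b, np.full(R, -a)])      # 2 x R
w = p * (1.0 - p)
I_expected = (w * g) @ g.T / R
print(np.round(I_expected, 4))
```
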
8

Koudelakova, Tana, Eva Chovancova, Jan Brezovsky, Marta Monincova, Andrea Fortova, Jiri Jarkovsky, and Jiri Damborsky. "Substrate specificity of haloalkane dehalogenases." Biochemical Journal 435, no. 2 (March 29, 2011): 345–54. http://dx.doi.org/10.1042/bj20101405.

Full text
Abstract:
An enzyme's substrate specificity is one of its most important characteristics. The quantitative comparison of broad-specificity enzymes requires the selection of a homogenous set of substrates for experimental testing, determination of substrate-specificity data and analysis using multivariate statistics. We describe a systematic analysis of the substrate specificities of nine wild-type and four engineered haloalkane dehalogenases. The enzymes were characterized experimentally using a set of 30 substrates selected using statistical experimental design from a set of nearly 200 halogenated compounds. Analysis of the activity data showed that the most universally useful substrates in the assessment of haloalkane dehalogenase activity are 1-bromobutane, 1-iodopropane, 1-iodobutane, 1,2-dibromoethane and 4-bromobutanenitrile. Functional relationships among the enzymes were explored using principal component analysis. Analysis of the untransformed specific activity data revealed that the overall activity of wild-type haloalkane dehalogenases decreases in the following order: LinB~DbjA>DhlA~DhaA~DbeA~DmbA>DatA~DmbC~DrbA. After transforming the data, we were able to classify haloalkane dehalogenases into four SSGs (substrate-specificity groups). These functional groups are clearly distinct from the evolutionary subfamilies, suggesting that phylogenetic analysis cannot be used to predict the substrate specificity of individual haloalkane dehalogenases. Structural and functional comparisons of wild-type and mutant enzymes revealed that the architecture of the active site and the main access tunnel significantly influences the substrate specificity of these enzymes, but is not its only determinant. The identification of other structural determinants of the substrate specificity remains a challenge for further research on haloalkane dehalogenases.
APA, Harvard, Vancouver, ISO, and other styles
9

Androniceanu, Ane-Mari, Raluca Dana Căplescu, Manuela Tvaronavičienė, and Cosmin Dobrin. "The Interdependencies between Economic Growth, Energy Consumption and Pollution in Europe." Energies 14, no. 9 (April 30, 2021): 2577. http://dx.doi.org/10.3390/en14092577.

Full text
Abstract:
The strong interdependency between economic growth and conventional energy consumption has led to significant environmental impact, especially with respect to greenhouse gas emissions. Conventional energy-intensive industries release increasing quantities every year, which has prompted global leaders to consider new approaches based on sustainable consumption. The main purpose of this research is to propose a new energy index that accounts for the complexity of, and interdependencies between, the research variables. The methodology is based on Principal Component Analysis (PCA) and combines the key components identified into a score that allows for both temporal and cross-country comparisons. All data analyses were performed using IBM SPSS Statistics 25™. The main findings show that most countries have improved their economic performance since 2014, but the speed of improvement varies considerably from one country to another. The final score reflects the complex changes taking place in each country and the efficiency of governmental measures for sustainable economic growth based on low energy consumption and low environmental pollution.
APA, Harvard, Vancouver, ISO, and other styles
10

Picha, David, and Roger Hinson. "Economic Assessment of Marketing U.S. Sweetpotatoes in the United Kingdom." HortScience 39, no. 4 (July 2004): 765B—765. http://dx.doi.org/10.21273/hortsci.39.4.765b.

Full text
Abstract:
Opportunities for marketing United States (U.S.) sweetpotatoes in the United Kingdom (U.K.) are expanding, particularly within the retail sector. The U.K. import volume has steadily increased in recent years. Trade statistics indicate the U.K. imported nearly 12 thousand metric tons of sweetpotatoes in 2002, with the U.S. providing slightly over half of the total import volume. Considerable competition exists among suppliers and countries of origin in their attempts to penetrate the U.K. market. Currently, over a dozen countries supply sweetpotatoes to the U.K., and additional countries are planning on sending product in the near future. An economic assessment of production and transport costs was made among the principal supplying nations to estimate their comparative market advantages. Price histories for sweetpotatoes in various U.K. market destinations were compiled to determine seasonality patterns. Comparisons of net profit (or loss) between U.S. and U.K. market destinations were made to determine appropriate marketing strategies for U.S. sweetpotato growers/shippers. Results indicated the U.K. to be a profitable and increasingly important potential market for U.S. sweetpotatoes.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Principal comparisons (Statistics)"

1

Michael, Simon. "A Comparison of Data Transformations in Image Denoising." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375715.

Full text
Abstract:
The study of signal processing has wide applications, such as in hi-fi audio, television, voice recognition and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation and removal of noise. In this thesis we compare two methods for image denoising, each based on a data transformation. Specifically, the Fourier transform and the singular value decomposition are utilized in the respective methods and compared on grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratios attainable by the respective methods, and their computational time. We find that the methods are fairly equal in visual quality. However, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
APA, Harvard, Vancouver, ISO, and other styles
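A condensed sketch of the comparison described in the thesis above, with a synthetic "image" standing in for the grayscale test images (the low-pass radius and the rank k are illustrative choices):

```python
# FFT low-pass vs truncated-SVD denoising, compared by PSNR.
import numpy as np

rng = np.random.default_rng(4)
clean = np.tile(np.linspace(0, 1, 128), (128, 1))      # toy "image"
noisy = clean + rng.normal(0, 0.1, clean.shape)

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(1.0 / mse)                    # peak value is 1.0

# (1) Fourier: zero out high-frequency coefficients.
F = np.fft.fftshift(np.fft.fft2(noisy))
y, x = np.ogrid[:128, :128]
mask = (x - 64) ** 2 + (y - 64) ** 2 < 20 ** 2         # keep a low-pass disc
fft_denoised = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# (2) SVD: keep only the leading singular values.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 5
svd_denoised = (U[:, :k] * s[:k]) @ Vt[:k, :]

print(f"PSNR fft: {psnr(clean, fft_denoised):.1f} dB, "
      f"svd: {psnr(clean, svd_denoised):.1f} dB")
```
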
2

Mariani, Tommaso. "Comparison between Oja's and BCM neural networks models in finding useful projections in high-dimensional spaces." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14512/.

Full text
Abstract:
This thesis presents the concept of a neural network starting from its corresponding biological model, paying particular attention to the learning algorithms proposed by Oja and by Bienenstock, Cooper & Munro (BCM). A brief introduction to data analysis is then given, with particular reference to principal component analysis and the singular value decomposition. The two previously introduced algorithms are then treated more thoroughly, with particular attention to their connections with data analysis. Finally, the singular value decomposition is proposed as a method for obtaining stationary points of the BCM algorithm in the case of linearly dependent inputs.
APA, Harvard, Vancouver, ISO, and other styles
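Of the two learning rules discussed in the thesis above, Oja's has a particularly compact statement of its link to PCA: the weight vector converges to the first principal component of the inputs. A minimal sketch (synthetic data; learning rate chosen arbitrarily):

```python
# Oja's learning rule extracting the first principal component.
import numpy as np

rng = np.random.default_rng(5)

# Correlated 2-D inputs whose leading eigenvector is [1, 1]/sqrt(2).
C = np.array([[2.0, 1.5], [1.5, 2.0]])
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)        # Hebbian term plus normalizing decay

pc1 = np.linalg.eigh(C)[1][:, -1]     # true first principal component
print("learned:", np.round(w / np.linalg.norm(w), 3),
      "true (up to sign):", np.round(pc1, 3))
```
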
3

Ebrahimi Mohammadi, Diako. "Multi-purpose multi-way data analysis." Thesis, Chemistry, Faculty of Science, UNSW, 2007. http://handle.unsw.edu.au/1959.4/40646.

Full text
Abstract:
In this dissertation, the application of multi-way analysis is extended into new areas of environmental chemistry, microbiology, electrochemistry and organometallic chemistry. Additionally, new practical aspects of some of the multi-way analysis methods are discussed. Parallel Factor Analysis Two (PARAFAC2) is used to classify a wide range of weathered petroleum oils using GC-MS data. Various chemical and data-analysis issues that exist in current methods of oil spill analysis are discussed, and the proposed method is demonstrated to have potential for identifying the source of oil spills. Two important practical aspects of PARAFAC2 are exploited to deal with chromatographic shifts and non-diagnostic peaks. GEneralized Multiplicative ANalysis Of VAriance (GEMANOVA) is applied to assess the bactericidal activity of new natural antibacterial extracts on three species of bacteria, in different structure and oxidation forms and at different concentrations. In this work, while the applicability of traditional ANOVA is restricted due to the high interaction amongst the factors, GEMANOVA is shown to return robust and easily interpretable models which conform to the actual structure of the data. Peptide-modified electrochemical sensors are used to determine three metal cations (Cu2+, Cd2+ and Pb2+) simultaneously. Two sets of experiments are performed, one using a four-electrode system returning a three-way array of size (sample × current × electrode) and one using a single electrode resulting in a two-way data set of size (sample × current). The data of the former are modeled by N-PLS and those of the latter by PLS. Despite the presence of highly overlapped voltammograms and several sources of non-linearity, N-PLS returns reasonable models while PLS fails. An intramolecular hydroamination reaction is catalyzed by several organometallic catalysts to identify the most effective ones. The reaction of the starting material in the presence of 72 different catalysts is monitored by UV-Vis at two time points, before and after heating the mixtures in an oven. PARAFAC is applied to the three-way data set of (sample × wavelength × time) to resolve the overlapped UV-Vis peaks and to identify the effective catalysts using the estimated relative concentration of product (loadings plot of the sample mode).
APA, Harvard, Vancouver, ISO, and other styles
4

Kanyama, Busanga Jerome. "A comparison of the performance of three multivariate methods in investigating the effects of province and power usage on the amounts of five power modes in South Africa." Diss., 2011. http://hdl.handle.net/10500/4681.

Full text
Abstract:
Researchers perform the multivariate techniques MANOVA, discriminant analysis and factor analysis; their most common applications in social science are to identify and test effects. The use of these multivariate techniques is uncommon in investigating the effects of power usage and province in South Africa on the amounts of the five power modes. This dissertation discusses this issue, the methodology and the practical problems of the three multivariate techniques. The author examines the applications of each technique in social research, and comparisons are made between the three multivariate techniques. The dissertation concludes with a discussion of both the concepts of the present multivariate techniques and the results found on their use in household energy consumption. The author recommends focusing on the hypotheses of the study, or the typical questions surrounding each technique, to guide the researcher in choosing the appropriate analysis in social research, as each technique has some strengths and limitations.
Statistics
M. Sc. (Statistics)
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Principal comparisons (Statistics)"

1

Eurostat. Basic statistics of the Community: Comparison with the principal partners of the Community. Luxembourg: Statistical Office of the European Communities, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Eurostat. Basic statistics of the European Union: Comparison with the principal partners of the Union. 3rd ed. Luxembourg: Office for Official Publications of the European Communities, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Eurostat. Basic statistics of the European Union: Comparison with the principal partners of the European Union. Luxembourg: Office for Official Publications of the European Communities, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kroonenberg, Pieter M. Applied Multiway Data Analysis. John Wiley & Sons, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Applied Multiway Data Analysis. Wiley-Interscience, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Eurostat. Basic Statistics of the European Union: Comparison With the Principal Partners of the Union (Eurostat Yearbook). 3rd ed. Bernan Press, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Eurostat. Basic Statistics of the European Union: Comparison With the Principal Partners of the European Union (Eurostat Yearbook). 3rd ed. European Communities, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Veech, Joseph A. Habitat Ecology and Analysis. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198829287.001.0001.

Full text
Abstract:
Habitat is crucial to the survival and reproduction of individual organisms as well as persistence of populations. As such, species-habitat relationships have long been studied, particularly in the field of wildlife ecology and to a lesser extent in the more encompassing discipline of ecology. The habitat requirements of a species largely determine its spatial distribution and abundance in nature. One way to recognize and appreciate the over-riding importance of habitat is to consider that a young organism must find and settle into the appropriate type of habitat as one of the first challenges of life. This process can be cast in a probabilistic framework and used to better understand the mechanisms behind habitat preferences and selection. There are at least six distinctly different statistical approaches to conducting a habitat analysis – that is, identifying and quantifying the environmental variables that a species most strongly associates with. These are (1) comparison among group means (e.g., ANOVA), (2) multiple linear regression, (3) multiple logistic regression, (4) classification and regression trees, (5) multivariate techniques (Principal Components Analysis and Discriminant Function Analysis), and (6) occupancy modelling. Each of these is lucidly explained and demonstrated by application to a hypothetical dataset. The strengths and weaknesses of each method are discussed. Given the ongoing biodiversity crisis largely caused by habitat destruction, there is a crucial and general need to better characterize and understand the habitat requirements of many different species, particularly those that are threatened and endangered.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Principal comparisons (Statistics)"

1

Olsson, Daniel, Pando Georgiev, and Panos M. Pardalos. "Kernel Principal Component Analysis: Applications, Implementation and Comparison." In Springer Proceedings in Mathematics & Statistics, 127–48. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-8588-9_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wagenmakers, Eric-Jan, Michael D. Lee, Jeffrey N. Rouder, and Richard D. Morey. "The Principle of Predictive Irrelevance or Why Intervals Should Not be Used for Model Comparison Featuring a Point Null Hypothesis." In The Theory of Statistics in Psychology, 111–29. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48043-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mariani, Paolo, and Andrea Marletta. "How to become a pastry chef: a statistical analysis through the company requirements." In Proceedings e report, 61–64. Florence: Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-304-8.13.

Full text
Abstract:
The definition of the requirements requested by companies represents one of the key aspects of the entrance of new professional figures. In particular, focusing on the food & beverage sector, this study considers two job profiles: pastry chef and pastry assistant. Data for this analysis were collected by The AdeccoGroup in Italy in 2016 and 2017. Personal competencies that enable candidates to face the growing flexibility of the profession are the object of specific requests that cut across several economic sectors. After a brief description of the database content, the principal objective of the research is to report the requirements most requested by the companies. Further analyses are provided to show possible relationships between these requirements and the previous experience of candidates. Finally, a comparison of the competencies requested for the two job figures is presented, using descriptive statistics and classification techniques.
APA, Harvard, Vancouver, ISO, and other styles
4

Clancy, Patrick, and Simon Marginson. "Comparative Data on High Participation Systems." In High Participation Systems of Higher Education, 39–67. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198828877.003.0002.

Full text
Abstract:
This chapter provides and discusses existing comparative data on higher education participation between various countries. The chapter starts with a review of the principal measures of participation, noting an inevitable tradeoff between optimum statistical measures and what is feasible given data limitations. After surveying participation in higher education in all countries, and noting that almost three-quarters have achieved enrolment ratios of at least 15 per cent, the chapter provides more detailed comparisons of the OECD member countries. The chapter proposes a composite Higher Education Participation Index which combines enrolment and output measures.
APA, Harvard, Vancouver, ISO, and other styles
5

Veech, Joseph A. "Statistical Methods for Analyzing Species–Habitat Associations." In Habitat Ecology and Analysis, 135–74. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198829287.003.0009.

Full text
Abstract:
Six methods for statistically identifying and quantifying meaningful species–habitat associations are discussed. These are (1) comparison among group means (e.g. ANOVA), (2) multiple linear regression, (3) multiple logistic regression, (4) classification and regression trees, (5) multivariate techniques (principal components analysis and discriminant function analysis), and (6) occupancy modeling. Each method is described in statistical detail and associated terminology is explained. The example of habitat associations of a hypothetical beetle species (from Chapter 8) is used to further explain some of the methods. Assumptions, strengths, and weaknesses of each method are discussed. Related statistical constructs and procedures such as the variance–covariance matrix, negative binomial distribution, generalized linear modeling, maximum likelihood estimation, and Bayes' theorem are also explained. Some historical context is provided for some of the methods.
APA, Harvard, Vancouver, ISO, and other styles
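For a feel of how one of the six listed methods looks in practice, here is a brief sketch of method (3), multiple logistic regression, on synthetic presence/absence data (the variable names and effect sizes are invented for illustration):

```python
# Logistic regression of species presence on two habitat variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
canopy = rng.uniform(0, 100, n)            # % canopy cover (illustrative)
soil = rng.normal(6.5, 0.8, n)             # soil pH (illustrative)

# Synthetic truth: the species favors high canopy cover.
logit = -4.0 + 0.06 * canopy + 0.1 * (soil - 6.5)
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([canopy, soil]))
fit = sm.Logit(presence.astype(int), X).fit(disp=0)
print(fit.params.round(3))                 # estimated intercept and slopes
```
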
6

Sargent, Daniel, and Qian Shi. "Design and analysis of clinical trials." In Oxford Textbook of Oncology, 220–28. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199656103.003.0024.

Full text
Abstract:
This chapter addresses the statistical design and analysis of oncology clinical trials. Traditional trial designs are described by each of three phases—phase I, II, and III studies. It elaborates on the concepts of single-arm versus randomized (screening and selection) phase II trials. Critical aspects of randomized phase III studies are discussed, including randomization, stratification, blinding, and the intent-to-treat principle. Furthermore, the chapter introduces innovative designs, emphasizing the incorporation of biomarkers into the study design. Careful considerations of endpoints, essential to power and sample size when planning a study, are considered. In addition to discussing standard statistical analysis methods, we particularly discuss the critical element of controlling for multiple comparisons.
APA, Harvard, Vancouver, ISO, and other styles
7

Barbosa, Carla, M. Rui Alves, and Beatriz Oliveira. "Comparison of Methods to Display Principal Component Analysis, Focusing on Biplots and the Selection of Biplot Axes." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 289–332. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8823-0.ch010.

Full text
Abstract:
Principal components analysis (PCA) is probably the most important multivariate statistical technique, used to model complex problems or simply for data mining in almost all areas of science. Although well known to researchers and available in most statistical packages, it is often misunderstood and poses problems when applied by inexperienced users. A biplot is a way of concentrating all information related to sample units and variables in a single display, in an attempt to help interpretations and avoid overestimations. This chapter covers the main mathematical aspects of PCA, as well as the form and covariance biplots developed by Gabriel and the predictive and interpolative biplots devised by Gower and coworkers. New developments are also presented, involving techniques to automate the production of biplots, with a controlled output in terms of axes predictivities and interpolative accuracies, supported by the AutoBiplot.PCA function developed in R. A practical case is used for illustrations and discussions.
APA, Harvard, Vancouver, ISO, and other styles
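A bare-bones biplot in the spirit of the chapter above, using one common scaling choice (loadings scaled by the square roots of the eigenvalues; Gabriel's form and covariance biplots differ precisely in how this scaling is allocated between points and arrows):

```python
# Minimal PCA biplot: sample scores as points, variable loadings as arrows.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

rng = np.random.default_rng(7)
X = scale(rng.normal(size=(50, 4)) @ rng.normal(size=(4, 4)))  # toy data

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

fig, ax = plt.subplots()
ax.scatter(scores[:, 0], scores[:, 1], s=10, alpha=0.5)
for j, (lx, ly) in enumerate(loadings):
    ax.arrow(0, 0, lx * 3, ly * 3, head_width=0.1, color="tab:red")
    ax.annotate(f"var{j + 1}", (lx * 3.2, ly * 3.2))
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
plt.show()
```
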
8

Bodea, Constanta-Nicoleta, and Maria-Iuliana Dascalu. "A Tool for Adaptive E-Assessment of Project Management Competences." In Intelligent and Adaptive Learning Systems, 133–50. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-842-2.ch009.

Full text
Abstract:
This chapter proposes an e-assessment method for project management competences, using the computer adaptive testing (CAT) principle. Competences are represented using concept space graphs. The proposed model increases the tests' configurability by considering several knowledge constraints when an item is selected. The proposed model is also seen as a self-directed learning tool, useful in the preparation process for project management certifications. The model is validated by comparison with an existing e-assessment tool used for simulation purposes; statistical results are presented and analyzed. Although the initial level of knowledge of each user has a great impact on the final results obtained by that user, preparation with the proposed e-assessment method proved to be more efficient.
APA, Harvard, Vancouver, ISO, and other styles
9

Schäfer, Lothar, and John D. Ewbank. "On Comparing Experimental and Calculated Structural Parameters." In Molecular Orbital Calculations for Biological Systems. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195098730.003.0010.

Full text
Abstract:
The tacit assumption underlying all science is that, of two competing theories, the one in closer agreement with experiment is the better one. In structural chemistry the same principle applies but, when calculated and experimental structures are compared, closer is not necessarily better. Structures from ab initio calculations, specifically, must not be the same as the experimental counterparts the way they are observed. This is so because ab initio geometries refer to nonexistent, vibrationless states at the minimum of potential energy, whereas structural observables represent specifically defined averages over distributions of vibrational states. In general, if one wants to make meaningful comparisons between calculated and experimental molecular structures, one must take recourse of statistical formalisms to describe the effects of vibration on the observed parameters. Among the parameters of interest to structural chemists, internuclear distances are especially important because other variables, such as bond angles, dihedral angles, and even crystal spacings, can be readily derived from them. However, how a rigid torsional angle derived from an ab initio calculation compares with the corresponding experimental value in a molecule subject to vibrational anharmonicity, is not so easy to determine. The same holds for the lattice parameters of a molecule in a dynamical crystal, and their temperature dependence as a function of the molecular potential energy surface. In contrast, vibrational effects are readily defined and best described for internuclear distances, bonded and non-bonded ones. In general, all observed internuclear distances are vibrationally averaged parameters. Due to anharmonicity, the average values will change from one vibrational state to the next and, in a molecular ensemble distributed over several states, they are temperature dependent. All these aspects dictate the need to make statistical definitions of various conceivable, different averages, or structure types. In addition, since the two main tools for quantitative structure determination in the vapor phase—gas electron diffraction and microwave spectroscopy—interact with molecular ensembles in different ways, certain operational definitions are also needed for a precise understanding of experimental structures. To illustrate how the operations of an experimental technique affect the nature of its observables, gas electron diffraction shall be used as an example.
APA, Harvard, Vancouver, ISO, and other styles
10

Doveton, John H. "Saturation-Height Functions." In Principles of Mathematical Petrophysics. Oxford University Press, 2014. http://dx.doi.org/10.1093/oso/9780199978045.003.0012.

Full text
Abstract:
As observed by Worthington (2002), “The application of saturation-height functions forms part of the intersection of geologic, petrophysical, and reservoir engineering practices within integrated reservoir description.” It is also a critical reference point for mathematical petrophysics; the consequences of deterministic and statistical prediction models are finally evaluated in terms of how closely the estimates conform to physical laws. Saturations within a reservoir are controlled by buoyancy pressure applied to pore-throat size distributions and pore-body storage capacities within a rock unit that varies both laterally and vertically and may be subdivided into compartments that are not in pressure communication. Traditional lithostratigraphic methods describe reservoir architecture as correlative rock units, but the degree to which this partitioning matches flow units must be carefully evaluated to reconcile petrofacies with lithofacies. Stratigraphic correlation provides the fundamental reference framework for surfaces that define structure and isopach maps and usually represent principal reflection events in the seismic record. In some instances, there is a strong conformance between lithofacies and petrofacies, but all too commonly, this is not the case, and petrofacies must be partitioned and evaluated separately. Failure to do this may result in invalid volumetrics and reservoir models that are inadequate for fluid-flow characterization. A dynamic reservoir model must be history matched to the actual performance of the reservoir; this process often requires adjustments of petrophysical parameters to improve the reconciliation between the model’s performance and the history of production. Once established, the reservoir model provides many beneficial outcomes. At the largest scale, the model assesses the volumetrics of hydrocarbons in place. Within the reservoir, the model establishes any partitioning that may exist between compartments on the basis of pressure differences and, therefore, lack of communication. Lateral trends within the model trace changes in rock reservoir quality that control anticipated rates and types of fluids produced in development wells. Because the modeled fluids represent initial reservoir conditions, comparisons can be made between water saturations of the models and those calculated from logs in later wells, helping to ascertain sweep efficiency during production.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Principal comparisons (Statistics)"

1

Lambkin, David, Ian Wade, and Robin Stephens. "Estimating Operational Weather Downtime: A Comparison of Analytical Methods." In ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/omae2019-95367.

Full text
Abstract:
Weather downtime (WDT) is a logistical and financial risk when planning operations or offering services. Such risk is typically identified and managed in advance using statistical predictions based on historical weather data. Estimates of programme and cost for offshore construction work may vary, not because of the nature of a task, or the environment at the location, or the capability and price of a vessel, but because estimates of WDT have been calculated in different ways. Estimates of WDT are required in order to develop a realistic programme for complex and long duration projects. Therefore, a good understanding of the analytical options and a feel for the implications of the many and varied approaches is key to finding optimal solutions regarding WDT assessments. In this paper we consider a number of variants to the two principal approaches (namely 'Weather Windows' and 'Simulation Based' WDT analysis) to the derivation of WDT statistics. WDT estimates calculated using the same environmental input data, but alternative approaches are presented. The presentation highlights the potential variation in downtime statistics that can result from the alternative analyses, aiming to improve awareness of the application of such statistics when estimating project programme and cost at the planning stages.
APA, Harvard, Vancouver, ISO, and other styles
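The 'Weather Windows' approach named above can be sketched very simply: scan a significant-wave-height series for runs that stay below an operability limit for at least the task duration (the data, limit, and duration here are all illustrative):

```python
# Count weather windows in an hourly significant-wave-height series.
import numpy as np

rng = np.random.default_rng(8)
hs = np.abs(rng.normal(1.5, 0.8, 24 * 365))   # one toy year of hourly Hs [m]

limit, duration_h = 2.0, 12                   # operability criteria (toy)

ok = hs < limit
runs, count = [], 0
for flag in ok:                               # lengths of workable runs
    if flag:
        count += 1
    else:
        if count:
            runs.append(count)
        count = 0
if count:
    runs.append(count)

windows = [r for r in runs if r >= duration_h]
print(f"workable fraction: {ok.mean():.2f}, "
      f"windows >= {duration_h} h: {len(windows)}")
```
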
2

Lopatoukhin, Leonid J., and Alexander V. Boukhanovsky. "Extreme and Freak Waves: Results of Measurements and Simulation." In ASME 2008 27th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2008. http://dx.doi.org/10.1115/omae2008-57841.

Full text
Abstract:
The main statistical characteristics of wave climate are considered with respect to offshore and ship design. The growing sophistication of ships and marine platforms and the expansion of offshore activities into uninvestigated regions mean an increasing probability of damage by high waves. Hindcasting of wave fields using hydrodynamic models is the main approach to wave climate investigation; offshore wave measurements are used mainly for model verification. In compliance with existing regulatory documents and accepted practice, applied statistical characteristics of wind waves are classified as operational and extreme. Operational statistics describe wind and wave conditions for the life span of a ship or an offshore structure. Extreme characteristics determine the so-called "structure survival regime". There are many approaches to the calculation of extreme wave heights at a point (classical unconditional extremes); their comparison shows the advantages and disadvantages of each. Freak (rogue) waves differ in principle from extreme waves, mainly in their form and asymmetry; in this sense a freak wave is a multidimensional extreme. A contaminated distribution may be used to approximate the joint probability density of extreme and freak waves. A recent example of a freak wave event is the loss of the ship "Aurelia" (classed by the Russian Register of Shipping) in February 2005 in the North Pacific. "Aurelia" sank during the passage of an atmospheric front with veering wind and changing wind waves. Any wave has at least three dimensions: height, length, and crest length; on average, the last parameter is about three times greater than the wave length. Any information about three-dimensional waves is of interest, as such measurements are rare. Results of unique stereo wave measurements in the South Pacific, where a wave as high as 24.9 m was recorded (probably still among the highest measured in the World Ocean), are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
3

Lewis, John R., Dusty Brooks, and Michael L. Benson. "Methods for Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions." In ASME 2017 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/pvp2017-65552.

Full text
Abstract:
Weld residual stress (WRS) is a major driver of primary water stress corrosion cracking (PWSCC) in safety critical components of nuclear power plants. Accurate understanding of WRS is thus crucial for reliable prediction of safety performance of component design throughout the life of the plant. However, measurement uncertainty in WRS is significant, driven by the method and the indirect nature in which WRS must be measured. Likewise, model predictions of WRS vary due to uncertainty induced by individual modeling choices. The uncertainty in WRS measurements and modeling predictions is difficult to quantify and complicates the use of WRS measurements in validating WRS predictions for future use in safety evaluations. This paper describes a methodology for quantifying WRS uncertainty that facilitates the comparison of predictions and measurements and informs design safety evaluations. WRS is considered as a function through the depth of the weld. To quantify its uncertainty, functional data analysis techniques are utilized to account for the two types of variation observed in functional data: phase and amplitude. Phase variability, also known as horizontal variability, describes the variability in the horizontal direction (i.e., through the depth of the weld). Amplitude variability, also known as vertical variability, describes the variation in the vertical direction (i.e., magnitude of stresses). The uncertainty in both components of variability is quantified using statistical models in principal component space. Statistical confidence/tolerance bounds are constructed using statistical bootstrap (i.e., resampling) techniques applied to these models. These bounds offer a succinct quantification of the uncertainty in both the predictions and measurements as well as a method to quantitatively compare the two. Major findings show that the level of uncertainty among measurements is comparable to that among predictions and further experimental work is recommended to inform a validation effort for prediction models.
APA, Harvard, Vancouver, ISO, and other styles
4

Shevchenko, Maksim, Sergiy Yepifanov, and Igor Loboda. "Ridge Estimation and Principal Component Analysis to Solve an Ill-Conditioned Problem of Estimating Unmeasured Gas Turbine Parameters." In ASME Turbo Expo 2013: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/gt2013-94496.

Full text
Abstract:
This paper addresses the problem of estimating unmeasured gas turbine engine variables using statistical analysis of measured data. Possible changes in engine health condition, and the lack of information about these changes caused by limited instrumentation, are taken into account. Engine thrust is considered as one of the most important unmeasured parameters. Two common methods of aircraft gas turbine engine (GTE) thrust monitoring and their errors due to health condition changes are analyzed. Additionally, two mathematical techniques that reduce in-flight thrust estimation errors in the case of GTE deterioration are suggested and verified in the paper: a ridge trace and principal component analysis. A turbofan engine has been chosen as a test case. The engine has five measured variables and 23 health parameters to describe its health condition. Measurement errors are simulated using a generator of random numbers with the normal distribution. The engine is represented in the calculations by its nonlinear component level model (CLM). Results of the comparison of thrust estimates computed by the CLM and by the proposed techniques confirm the accuracy of the techniques. The regression model on principal components demonstrated the highest accuracy.
APA, Harvard, Vancouver, ISO, and other styles
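A small sketch of the principal component regression idea in the paper above: regress an unmeasured target (here standing in for thrust) on the leading components of strongly collinear measurements. The synthetic data and the component count are our assumptions.

```python
# Principal component regression on collinear "measured" variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
n = 500
latent = rng.normal(size=(n, 2))                     # true engine state (toy)
measured = latent @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(n, 5))
thrust = latent @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=n)

# Keeping only the leading components regularizes the ill-conditioned
# regression on five strongly collinear measurements.
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(measured, thrust)
print(f"R^2 on training data: {pcr.score(measured, thrust):.3f}")
```
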
5

Sugliano, Nathalie. "Proficiency testing for calibration of multimeter." In 19th International Congress of Metrology (CIM2019), edited by Sandrine Gazal. Les Ulis, France: EDP Sciences, 2019. http://dx.doi.org/10.1051/metrology/201911006.

Full text
Abstract:
In 2018 and 2019, CT2M organized an inter-laboratory comparison of multimeter calibration in which 27 European laboratories participated. Among them were calibration laboratories (accredited or not) but also laboratories performing the calibrations of their own multimeters. The principle of this inter-laboratory comparison is to circulate a multimeter from one laboratory to another in order to compare the calibration results (including correction and calibration uncertainty). The processing of the results is carried out according to the ISO 13528 statistical principles and in compliance with the requirements of ISO 17043. A final report in 2019 will present all the results to the participants in an anonymous way, along with the performance scores used to evaluate the ability of the laboratories to carry out this calibration. This article presents the organization of this inter-laboratory comparison as well as the results obtained.
APA, Harvard, Vancouver, ISO, and other styles
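For context on the ISO 13528 processing mentioned above, two performance scores commonly reported in such proficiency tests are the z-score and the En number (the numeric values below are illustrative, not results from this comparison):

```python
# z-score and En number as commonly defined for proficiency testing.
import math

x_lab, U_lab = 10.013, 0.008   # lab result and expanded uncertainty [V] (toy)
x_ref, U_ref = 10.005, 0.004   # assigned value and its uncertainty [V] (toy)
sigma_pt = 0.006               # std. dev. for proficiency assessment [V]

z = (x_lab - x_ref) / sigma_pt
En = (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Conventional interpretation: |z| <= 2 and |En| <= 1 are satisfactory.
print(f"z = {z:.2f}, En = {En:.2f}")
```
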
6

Hegron, Lise, and Boris Geynet. "Proficiency testing for the calibration of masses." In 19th International Congress of Metrology (CIM2019), edited by Sandrine Gazal. Les Ulis, France: EDP Sciences, 2019. http://dx.doi.org/10.1051/metrology/201910001.

Full text
Abstract:
The CT2M organized in 2018 a European inter-laboratory comparison (ILC) for the calibration of masses. The proficiency testing was particularly intended for calibration laboratories (accredited or not) but also for testing laboratories carrying out their own calibrations and/or checks of their masses. This circuit took place from April 2018 to October 2018 in five European countries: England, France, Germany, Portugal and Switzerland. The results were processed according to the statistical principles of ISO 13528 [1] and in compliance with the requirements of ISO 17043 [2]. This article presents the organization of this inter-laboratory comparison and the results. The performances of the participants are evaluated and an interpretation of the results is proposed in order to highlight the predominant influence parameters on the mass calibration results (nominal values: 200 mg, 2 g, 20 g, 200 g and 20 kg).
APA, Harvard, Vancouver, ISO, and other styles
7

Lall, Pradeep, Shantanu Deshpande, and Luu Nguyen. "Fuming Acid Based Decapsulation Process for Copper-Aluminum Wirebond System Molded With Different EMCs." In ASME 2015 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2015 13th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/ipack2015-48638.

Full text
Abstract:
Decapsulation is one of the most powerful techniques in the failure analysis process. During this process, the die and first-level interconnects are exposed by dissolving the molding compound around them using a variety of methods. Typical decapsulation formulations use red fuming nitric acid at elevated temperatures. This technique works for traditional gold wire bonds but does not work for their new alternative, copper. Gold, being an inert metal, does not react with the acid; copper, on the other hand, tends to react with fuming nitric acid and dissolves rapidly into it. It is therefore important to develop an acid chemistry that can be successfully used to decapsulate packages incorporating the Cu-Al wirebond system, for different EMCs. In this paper, a decap process based on a combination of red fuming nitric acid and concentrated sulfuric acid at elevated temperatures is presented. Reduction in wire diameter was monitored for all devices. For some devices the decap process was evaluated based on a comparison of the wirebond shear strength of the decapped part with an unmolded part. SEM was used extensively to track the degradation of the copper wires. These tests were performed on packages with different EMCs, wire diameters, pad thicknesses and some active dies. A statistical principal components regression model has been developed correlating the decapsulation process parameters with the post-decap wire diameter reduction. Principal component regression in conjunction with stepwise regression has been used to identify the influential variables and to remove the multicollinearity between the predictor variables. Principal component analysis, which combines correlated variables into a single factor, is also a widely used image processing technique for pattern recognition and image compression. The post-molded packages have then been used to assess the effect of the various decapsulation treatments.
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Peng, Yuji Zhou, Jing Ren, and Hongde Jiang. "Radiative Models for Hot Gas With Various H2O-CO2 Ratios." In ASME Turbo Expo 2012: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/gt2012-69057.

Full text
Abstract:
Radiation is the principal mode of heat transfer in many engineering problems, such as the performance calculation of gas turbines. Models for gaseous radiative properties have been well established for air combustion. The weighted-sum-of-gray-gases model (WSGGM) is the most widely used global model for the computation of gaseous radiative properties. It is a reasonable compromise between the oversimplified gray gas model and a complete model that takes into account the particular absorption bands. However, WSGGM coefficients have generally been determined only at H2O-CO2 ratios of 1 and 2 or other special ratios. In advanced gas turbines, the H2O-CO2 ratios sometimes fall outside the range of the WSGGM parameters reported in the literature. In order to obtain the radiative parameters for various H2O-CO2 ratios, a computer code was developed based on the HITRAN 2008 database. The total emissivities of CO2 and H2O are evaluated at different temperatures, pressures and path lengths using the statistical narrow band (SNB) model. On the basis of the SNB model, the parameters of the WSGGM for various H2O-CO2 mixtures are computed with the Fletcher-Powell technique. Comparisons are made with other WSGGMs.
APA, Harvard, Vancouver, ISO, and other styles
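The WSGGM evaluated in the paper above has a simple functional form: total emissivity is a weighted sum over a few gray gases, with temperature-dependent weights. A sketch with placeholder coefficients (not the fitted values from the paper):

```python
# Weighted-sum-of-gray-gases emissivity: eps = sum_i a_i(T) (1 - exp(-k_i pL)).
import numpy as np

def wsggm_emissivity(T, pL, k, b):
    """Total emissivity at temperature T [K] and pressure path length
    pL [atm m], for gray-gas absorption coefficients k[i] and polynomial
    weight coefficients b[i][j] defining a_i(T) = sum_j b[i][j] T^j."""
    eps = 0.0
    for k_i, b_i in zip(k, b):
        a_i = sum(b_ij * T**j for j, b_ij in enumerate(b_i))  # weight a_i(T)
        eps += a_i * (1.0 - np.exp(-k_i * pL))
    # A transparent-gas weight a_0 = 1 - sum(a_i) absorbs the remainder.
    return eps

# Placeholder coefficients for three gray gases (illustrative only).
k = [0.4, 7.0, 120.0]
b = [[0.3, 5e-5], [0.2, 2e-5], [0.1, 1e-5]]
print(f"emissivity: {wsggm_emissivity(1200.0, 0.5, k, b):.3f}")
```
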
9

Liu, Yang, Lingyu Sun, Lijun Li, Yiben Zhang, Zongmiao Dai, and Zhenkai Xiong. "Image Identification of a Moving Object Based on an Improved Canny Edge Detection Algorithm." In ASME 2018 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/imece2018-86792.

Full text
Abstract:
Edge detection plays an increasingly critical role in the image processing community, especially for moving-object identification problems. In this case, the target object can be captured directly via the edges, beside which there is an obvious jump in grey value or texture. Nowadays, the Canny operator has gained great popularity, as it shows higher anti-noise performance and better detection accuracy in comparison with other edge detection operators such as Roberts, Sobel, Prewitt, etc. However, the Gaussian filter associated with the classic Canny operator is sometimes too simple to suppress all types of noise. Additionally, in order to enhance detection accuracy and lower the pseudo-edge detection ratio, two thresholds, high and low, are chosen manually, which limits the adaptability of the algorithm. In this work, a compound filter, the Gaussian-median filter, is proposed to improve the smoothing effect. A self-adaptive multi-threshold Otsu algorithm is implemented to determine the high and low thresholds automatically according to grey-value statistics. The image moment method is applied on the basis of the detected moving-object edges to locate the centroid and compute the principal orientation. Experimental results based on locating the edges of both static and moving objects proved the good robustness and excellent accuracy of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
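One common way to realize the self-adaptive thresholding idea described above is to derive the Canny thresholds from Otsu's method. The sketch below uses single-threshold Otsu (the paper's multi-threshold variant differs) and a hypothetical input file name:

```python
# Adaptive Canny: smooth, then set thresholds from Otsu's grey-value split.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Gaussian + median filtering, echoing the compound filter above.
smoothed = cv2.medianBlur(cv2.GaussianBlur(img, (5, 5), 0), 5)

# Otsu's method returns the threshold that best separates the grey-value
# histogram; use it as the high Canny threshold, half of it as the low.
high, _ = cv2.threshold(smoothed, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
edges = cv2.Canny(smoothed, 0.5 * high, high)
```
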
10

Aulich, Marcel, Fabian Küppers, Andreas Schmitz, and Christian Voß. "Surrogate Estimations of Complete Flow Fields of Fan Stage Designs via Deep Neural Networks." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-91258.

Full text
Abstract:
This paper presents a first step in adapting deep neural networks (DNNs) to turbomachinery design. It is demonstrated that DNNs can predict complete flow solutions, using the xyz-coordinates of the CFD mesh, rotational speed and boundary conditions as input to predict the velocities, pressure and density in the flow field. The presented DNN is trained on only twenty random 3D fan stage designs (training members). These designs were part of the initialization process of a previous optimization. The approximation quality of the DNN is validated on a random and a Pareto-optimal design. The random design is a statistical outlier with low efficiency, while the Pareto-optimal design dominates the training members in terms of efficiency, so both test members require some extrapolation capability from the DNN. The DNN reproduces characteristics of the flow of both designs, showing its capability for generalization and its potential for future applications. The paper begins with an explanation of the DNN concept, which is based on convolutional layers. Based on the working principle of these layers, a conversion of a CFD mesh to a suitable DNN input is derived. This conversion ensures that the DNNs can work in a similar way as in image recognition, where DNNs show superior results in comparison to other models.
APA, Harvard, Vancouver, ISO, and other styles