Journal articles on the topic 'Principal comparisons (Statistics)'

To see the other types of publications on this topic, follow the link: Principal comparisons (Statistics).

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Principal comparisons (Statistics).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Duarte Silva, António Pedro. "Discarding Variables in a Principal Component Analysis: Algorithms for All-Subsets Comparisons." Computational Statistics 17, no. 2 (July 2002): 251–71. http://dx.doi.org/10.1007/s001800200105.

2

Rosenbaum, Paul R. "Combining planned and discovered comparisons in observational studies." Biostatistics 21, no. 3 (September 26, 2018): 384–99. http://dx.doi.org/10.1093/biostatistics/kxy055.

Abstract:
In observational studies of treatment effects, it is common to have several outcomes, perhaps of uncertain quality and relevance, each purporting to measure the effect of the treatment. A single planned combination of several outcomes may increase both power and insensitivity to unmeasured bias when the plan is wisely chosen, but it may miss opportunities in other cases. A method is proposed that uses one planned combination with only a mild correction for multiple testing and exhaustive consideration of all possible combinations fully correcting for multiple testing. The method works with the joint distribution of $\boldsymbol{\kappa}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\kappa}^{T}\boldsymbol{\Sigma}\boldsymbol{\kappa}}$ and $\max_{\boldsymbol{\lambda}\neq\mathbf{0}}\,\boldsymbol{\lambda}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\lambda}^{T}\boldsymbol{\Sigma}\boldsymbol{\lambda}}$, where $\boldsymbol{\kappa}$ is chosen a priori and the test statistic $\mathbf{T}$ is asymptotically $N_{L}(\boldsymbol{\mu},\boldsymbol{\Sigma})$. The correction for multiple testing has a smaller effect on the power of $\boldsymbol{\kappa}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\kappa}^{T}\boldsymbol{\Sigma}\boldsymbol{\kappa}}$ than does switching to a two-tailed test, even though the opposite tail does receive consideration when $\boldsymbol{\lambda}=-\boldsymbol{\kappa}$. In the application, there are three measures of cognitive decline, and the a priori comparison $\boldsymbol{\kappa}$ is their first principal component, computed without reference to treatment assignments. The method is implemented in the R package sensitivitymult.
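The planned and exhaustive comparisons described in the abstract above can be sketched numerically. The NumPy fragment below uses made-up values for T, mu, Sigma and kappa purely for illustration; it is not the sensitivitymult implementation.

```python
import numpy as np

# Illustrative stand-ins: T is the vector of L test statistics,
# asymptotically N_L(mu, Sigma); kappa is the planned a priori
# combination (in the paper, a first principal component of the outcomes).
T = np.array([2.1, 1.4, 1.8])
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.5, 0.4],
                  [0.5, 1.0, 0.3],
                  [0.4, 0.3, 1.0]])
kappa = np.array([0.6, 0.5, 0.6])

dev = T - mu

# Planned comparison: kappa^T (T - mu) / sqrt(kappa^T Sigma kappa)
planned = (kappa @ dev) / np.sqrt(kappa @ Sigma @ kappa)

# Exhaustive comparison: max over all lambda != 0 of
# lambda^T (T - mu) / sqrt(lambda^T Sigma lambda); by the generalized
# Cauchy-Schwarz inequality this maximum equals sqrt(dev^T Sigma^{-1} dev).
exhaustive = np.sqrt(dev @ np.linalg.solve(Sigma, dev))

print(planned, exhaustive)
```

Because the exhaustive statistic maximizes over all combinations, the planned statistic can never exceed it; the paper's point is that correcting the planned comparison for this maximization costs surprisingly little power.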
3

Chou, Lin-Yi, and P. W. Sharp. "On order 5 symplectic explicit Runge-Kutta Nyström methods." Journal of Applied Mathematics and Decision Sciences 4, no. 2 (January 1, 2000): 143–50. http://dx.doi.org/10.1155/s1173912600000109.

Abstract:
Order five symplectic explicit Runge-Kutta Nyström methods of five stages are known to exist. However, these methods do not have free parameters with which to minimise the principal error coefficients. By adding one derivative evaluation per step, to give either a six-stage non-FSAL family or a seven-stage FSAL family of methods, two free parameters become available for the minimisation. This raises the possibility of improving the efficiency of order five methods despite the extra cost of taking a step. We perform a minimisation of the two families to obtain an optimal method and then compare its numerical performance with published methods of orders four to seven. These comparisons along with those based on the principal error coefficients show the new method is significantly more efficient than the five-stage, order five methods. The numerical comparisons also suggest the new methods can be more efficient than published methods of other orders.
4

Marchetti, Mario, Lee Chapman, Abderrahmen Khalifa, and Michel Buès. "New Role of Thermal Mapping in Winter Maintenance with Principal Components Analysis." Advances in Meteorology 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/254795.

Abstract:
Thermal mapping uses IR thermometry to measure road pavement temperature at a high resolution to identify and to map sections of the road network prone to ice occurrence. However, measurements are time-consuming and ultimately only provide a snapshot of road conditions at the time of the survey. As such, there is a need for surveys to be restricted to a series of specific climatic conditions during winter. Typically, five to six surveys are used, but it is questionable whether the full range of atmospheric conditions is adequately covered. This work investigates the role of statistics in adding value to thermal mapping data. Principal components analysis is used to interpolate between individual thermal mapping surveys to build a thermal map (or even a road surface temperature forecast), for a wider range of climatic conditions than that permitted by traditional surveys. The results indicate that when this approach is used, fewer thermal mapping surveys are actually required. Furthermore, comparisons with numerical models indicate that this approach could yield a suitable verification method for the spatial component of road weather forecasts—a key issue currently in winter road maintenance.
5

Craft, Kathleen J., and Mary V. Ashley. "Population differentiation among three species of white oak in northeastern Illinois." Canadian Journal of Forest Research 36, no. 1 (January 1, 2006): 206–15. http://dx.doi.org/10.1139/x05-234.

Abstract:
We used microsatellite DNA analysis to examine population differentiation among three species of white oak, Quercus alba L., Quercus bicolor Willd., and Quercus macrocarpa Michx., occurring in both pure and mixed stands in northeastern Illinois. Using individual-based Bayesian clustering or principal components analyses, no strong genetic groupings of individuals were detected. This suggests that the three species do not represent distinct and differentiated genetic entities. Nevertheless, traditional approaches where individuals are pre-assigned to species and populations, including F statistics, allele frequency analysis, and Nei's genetic distance, revealed low, but significant genetic differentiation. Pairwise F statistics showed that some intraspecific comparisons were as genetically differentiated as interspecific comparisons, with the two populations of Q. alba exhibiting the highest level of genetic differentiation (θ = 0.1156). A neighbor-joining tree also showed that the two populations of Q. alba are distinct from one another and from the two other species, while Q. bicolor and Q. macrocarpa were genetically more similar. Pure stands of Q. macrocarpa did not show a higher degree of genetic differentiation than mixed stands.
6

Bose, Lotan, Nitiprasad Jambhulkar, and Kanailal Pande. "Genotype by environment interaction and stability analysis for rice genotypes under Boro condition." Genetika 46, no. 2 (2014): 521–28. http://dx.doi.org/10.2298/gensr1402521b.

Abstract:
Genotype (G) × Environment (E) interaction of nine rice genotypes possessing cold tolerance at the seedling stage, tested over four environments, was analyzed to identify stable high-yielding genotypes suitable for boro environments. The genotypes were grown in a randomized complete block design with three replications. The genotype × environment (G×E) interaction was studied using different stability statistics, viz. Additive Main effects and Multiplicative Interaction (AMMI), AMMI stability value (ASV), rank-sum (RS) and yield stability index (YSI). Combined analysis of variance shows that genotype, environment and G×E interaction are highly significant. This indicates the possibility of selecting stable genotypes across the environments. The results of AMMI analysis indicated that the first two principal components (PC1-PC2) were highly significant (P<0.05). The partitioning of the total sum of squares (TSS) exhibited that the genotype effect was a predominant source of variation, followed by G×E interaction and environment. The genotype effect was nine times higher than that of the G×E interaction, suggesting the possible existence of different environment groups. The first two interaction principal component axes (IPCA) cumulatively explained 92% of the total interaction effects. The study revealed that genotypes GEN6 and GEN4 were stable based on all stability statistics. Grain yield (GY) is positively and significantly correlated with rank-sum (RS) and yield stability index (YSI). The above-mentioned stability statistics could be useful for identifying stable high-yielding genotypes and facilitate visual comparisons of high-yielding genotypes across multi-environments.
7

Monroe, Scott. "Estimation of Expected Fisher Information for IRT Models." Journal of Educational and Behavioral Statistics 44, no. 4 (April 7, 2019): 431–47. http://dx.doi.org/10.3102/1076998619838240.

Abstract:
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in practice, the expected information is not typically used, as it often requires a large amount of computation. In the present research, two methods to approximate the expected information by Monte Carlo are proposed. The first method is suitable for less complex IRT models such as unidimensional models. The second method is generally applicable but is designed for use with more complex models such as high-dimensional IRT models. The proposed methods are compared to existing methods using real data sets and a simulation study. The comparisons are based on simple structure multidimensional IRT models with two-parameter logistic item models.
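The core Monte Carlo idea, approximating the expected information by averaging outer products of score vectors over abilities and responses simulated from the model, can be sketched for a single two-parameter logistic item. The item parameters and sample size below are arbitrary stand-ins, and this is only a minimal illustration of the general idea, not the authors' proposed estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2PL item: discrimination a, difficulty b (arbitrary values).
a, b = 1.2, 0.3

def score_vector(theta, y, a, b):
    """Gradient of the 2PL log-likelihood in (a, b) for one response y."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return (y - p) * np.array([theta - b, -a])

# Expected information = E[s s^T], the expectation taken over both the
# latent ability theta ~ N(0, 1) and responses y drawn from the model;
# approximate it by plain Monte Carlo averaging.
M = 20000
info = np.zeros((2, 2))
for _ in range(M):
    theta = rng.standard_normal()
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    y = float(rng.random() < p)
    s = score_vector(theta, y, a, b)
    info += np.outer(s, s)
info /= M
print(info)
```

The observed information would instead use the actual responses in a data set; the Monte Carlo average above targets the expectation under the fitted model.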
8

Koudelakova, Tana, Eva Chovancova, Jan Brezovsky, Marta Monincova, Andrea Fortova, Jiri Jarkovsky, and Jiri Damborsky. "Substrate specificity of haloalkane dehalogenases." Biochemical Journal 435, no. 2 (March 29, 2011): 345–54. http://dx.doi.org/10.1042/bj20101405.

Abstract:
An enzyme's substrate specificity is one of its most important characteristics. The quantitative comparison of broad-specificity enzymes requires the selection of a homogenous set of substrates for experimental testing, determination of substrate-specificity data and analysis using multivariate statistics. We describe a systematic analysis of the substrate specificities of nine wild-type and four engineered haloalkane dehalogenases. The enzymes were characterized experimentally using a set of 30 substrates selected using statistical experimental design from a set of nearly 200 halogenated compounds. Analysis of the activity data showed that the most universally useful substrates in the assessment of haloalkane dehalogenase activity are 1-bromobutane, 1-iodopropane, 1-iodobutane, 1,2-dibromoethane and 4-bromobutanenitrile. Functional relationships among the enzymes were explored using principal component analysis. Analysis of the untransformed specific activity data revealed that the overall activity of wild-type haloalkane dehalogenases decreases in the following order: LinB~DbjA>DhlA~DhaA~DbeA~DmbA>DatA~DmbC~DrbA. After transforming the data, we were able to classify haloalkane dehalogenases into four SSGs (substrate-specificity groups). These functional groups are clearly distinct from the evolutionary subfamilies, suggesting that phylogenetic analysis cannot be used to predict the substrate specificity of individual haloalkane dehalogenases. Structural and functional comparisons of wild-type and mutant enzymes revealed that the architecture of the active site and the main access tunnel significantly influences the substrate specificity of these enzymes, but is not its only determinant. The identification of other structural determinants of the substrate specificity remains a challenge for further research on haloalkane dehalogenases.
9

Androniceanu, Ane-Mari, Raluca Dana Căplescu, Manuela Tvaronavičienė, and Cosmin Dobrin. "The Interdependencies between Economic Growth, Energy Consumption and Pollution in Europe." Energies 14, no. 9 (April 30, 2021): 2577. http://dx.doi.org/10.3390/en14092577.

Abstract:
The strong interdependency between economic growth and conventional energy consumption has led to significant environmental impact, especially with respect to greenhouse gas emissions. Conventional energy-intensive industries release increasing quantities every year, which has prompted global leaders to consider new approaches based on sustainable consumption. The main purpose of this research is to propose a new energy index that accounts for the complexity of and interdependencies between the research variables. The methodology is based on Principal Component Analysis (PCA) and combines the key components determined into a score that allows for both temporal and cross-country comparisons. All data analyses were performed using IBM SPSS Statistics 25™. The main findings show that most countries have improved their economic performance since 2014, but the speed of the improvement varies considerably from one country to another. The final score reflects the complex changes taking place in each country and the efficiency of governmental measures for sustainable economic growth based on low energy consumption and low environmental pollution.
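A PCA-based composite score of the kind described above can be sketched as follows. The indicator data are synthetic, and the variance-based weighting of components is one common convention for building such an index, not necessarily the exact scheme used in the paper.

```python
import numpy as np

# Synthetic country-level indicators (rows = countries, columns = e.g.
# GDP growth, energy consumption, emissions, renewables share).
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))

# Standardize, run PCA on the correlation matrix, and combine the two
# leading components into one score, weighted by explained variance.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]            # components by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2
scores = Z @ eigvecs[:, :k]                  # component scores per country
weights = eigvals[:k] / eigvals[:k].sum()    # variance-based weights
index = scores @ weights                     # one composite score per country
print(index.shape)
```

Because the score is built from standardized data, it supports the cross-country comparisons the abstract mentions; temporal comparisons require fixing the loadings across years.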
10

Picha, David, and Roger Hinson. "Economic Assessment of Marketing U.S. Sweetpotatoes in the United Kingdom." HortScience 39, no. 4 (July 2004): 765B—765. http://dx.doi.org/10.21273/hortsci.39.4.765b.

Abstract:
Opportunities for marketing United States (U.S.) sweetpotatoes in the United Kingdom (U.K.) are expanding, particularly within the retail sector. The U.K. import volume has steadily increased in recent years. Trade statistics indicate the U.K. imported nearly 12 thousand metric tons of sweetpotatoes in 2002, with the U.S. providing slightly over half of the total import volume. Considerable competition exists among suppliers and countries of origin in their attempts to penetrate the U.K. market. Currently, over a dozen countries supply sweetpotatoes to the U.K., and additional countries are planning on sending product in the near future. An economic assessment of production and transport costs was made among the principal supplying nations to estimate their comparative market advantages. Price histories for sweetpotatoes in various U.K. market destinations were compiled to determine seasonality patterns. Comparisons of net profit (or loss) between U.S. and U.K. market destinations were made to determine appropriate marketing strategies for U.S. sweetpotato growers/shippers. Results indicated the U.K. to be a profitable and increasingly important potential market for U.S. sweetpotatoes.
11

Batmaz, Sedat, Sibel Kocbiyik, and Ozgur Ahmet Yuncu. "Turkish Version of the Cognitive Distortions Questionnaire: Psychometric Properties." Depression Research and Treatment 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/694853.

Abstract:
Cognitive distortions are interrelated with all layers of cognition, and they may be part of the treatment once they are accessed, identified, labeled, and changed. From both a research and a clinical perspective, it is of utmost importance to disentangle cognitive distortions from similar constructs. Recently, the Cognitive Distortions Questionnaire (CD-Quest), a brief and comprehensive measure, was developed to assess both the frequency and the intensity of cognitive distortions. The aim of the present study was to assess the psychometric properties of the Turkish version of the CD-Quest in a psychiatric outpatient sample. Demographic and clinical data of the participants were analyzed by descriptive statistics. For group comparisons, Student's t-test was applied. An exploratory principal components factor analysis was performed, followed by an oblique rotation. To assess the internal consistency of the scale, Cronbach's α was computed. The correlation coefficient was calculated for test-retest reliability over a 4-week period. For concurrent validity, bivariate Pearson correlation analyses were conducted with the measures of mood severity and negatively biased cognitions. The results revealed that the scale had excellent internal consistency, good test-retest reliability, a unidimensional factor structure, and evidence of concurrent and discriminant validity.
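Cronbach's α, the internal-consistency statistic reported above, is straightforward to compute. Here is a self-contained sketch on synthetic item scores; the data are hypothetical, not the CD-Quest sample.

```python
import numpy as np

# Hypothetical item scores: 100 respondents x 15 items of one scale,
# generated so that items share a common "true score" component.
rng = np.random.default_rng(4)
true_score = rng.normal(size=(100, 1))
items = true_score + 0.8 * rng.normal(size=(100, 15))

def cronbach_alpha(X):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

Values near 1 indicate that the items covary strongly, which is what "excellent internal consistency" refers to in the abstract.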
12

Stockton Maxwell, R., Amy E. Hessl, Edward R. Cook, and Brendan M. Buckley. "A Multicentury Reconstruction of May Precipitation for the Mid-Atlantic Region Using Juniperus virginiana Tree Rings*." Journal of Climate 25, no. 3 (February 1, 2012): 1045–56. http://dx.doi.org/10.1175/jcli-d-11-00017.1.

Abstract:
This paper presents a multicentury reconstruction of May precipitation (1200–1997) for the mid-Atlantic region of the United States. The reconstruction is based on the first principal component (PC1) of two millennial-length Juniperus virginiana L. (eastern red cedar) tree-ring chronologies collected from rocky, limestone sites in the Ridge and Valley province of West Virginia. A split-calibration linear regression model accounted for 27% of the adjusted variance in the instrumental record and was stable through time. The model was verified by the reduction of error (RE = 0.21) and coefficient of efficiency (CE = 0.20) statistics. Multidecadal changes in precipitation were common throughout the reconstruction, and wetter than median conditions and drier than median conditions occurred during the medieval climate anomaly (1200–1300) and the Little Ice Age (1550–1650), respectively. The full reconstruction contained evidence of interannual and decadal variability; however, the twentieth century recorded the greatest number of decadal extreme wet and dry periods. A comparison of the May precipitation reconstruction to other regional reconstructions [Potomac River, Maryland, streamflow (Cook and Jacoby); Virginia/North Carolina July Palmer hydrologic drought index (PHDI; Stahle et al.); Missouri July PHDI (Cleaveland and Stahle); and White River, Arkansas, streamflow (Cleaveland)] showed that the eastern U.S. decadal drought and pluvial events extended into the mid-Atlantic region. A positive correlation between PC1 and the winter North Atlantic Oscillation (NAO) index and comparisons of smoothed May precipitation and the NAO (Luterbacher et al.) indicated that J. virginiana's response to May precipitation was mediated by winter temperature.
13

Huang, Ren-Jie, Jung-Hua Wang, Chun-Shun Tseng, Zhe-Wei Tu, and Kai-Chun Chiang. "Bayesian Edge Detector Using Deformable Directivity-Aware Sampling Window." Entropy 22, no. 10 (September 25, 2020): 1080. http://dx.doi.org/10.3390/e22101080.

Abstract:
Conventional image entropy merely involves the overall pixel intensity statistics, which cannot respond to intensity patterns over the spatial domain. However, the spatial distribution of pixel intensity is definitely crucial to any biological or computer vision system, and that is why gestalt grouping rules involve features of both aspects. Recently, the increasing integration of knowledge from gestalt research into visualization-related techniques has fundamentally altered both fields, offering not only new research questions, but also new ways of solving existing issues. This paper presents a Bayesian edge detector called GestEdge, which is effective in detecting gestalt edges, especially useful for forming object boundaries as perceived by human eyes. GestEdge is characterized by employing a directivity-aware sampling window or mask that iteratively deforms to probe the existence of a principal direction among the sampled pixels; when convergence is reached, the window covers the pixels that best represent the directivity, in compliance with the similarity and proximity laws of gestalt theory. During the iterative process, based on the unsupervised Expectation-Maximization (EM) algorithm, the shape of the sampling window is optimally adjusted. Such a deformable window allows us to exploit the similarity and proximity among the sampled pixels. Comparisons between GestEdge and other edge detectors justify the effectiveness of GestEdge in extracting gestalt edges.
14

Perera, Omaththage P., Howard W. Fescemyer, Shelby J. Fleischer, and Craig A. Abel. "Temporal Variation in Genetic Composition of Migratory Helicoverpa Zea in Peripheral Populations." Insects 11, no. 8 (July 23, 2020): 463. http://dx.doi.org/10.3390/insects11080463.

Abstract:
Migrant populations of Helicoverpa zea (Boddie) captured during 2002, 2005, 2016, and 2018 from Landisville and Rock Springs in Pennsylvania, USA were genotyped using 85 single nucleotide polymorphism (SNP) markers. The samples genotyped (n = 702) were divided into 16 putative populations based on collection time and site. Fixation indices (F-statistics), analysis of molecular variance, and discriminant analysis of principal components were used to examine within- and among-population genetic variation. The observed and expected heterozygosity in putative populations ranged from 0.317–0.418 and 0.320–0.359, respectively. The broad ranges of FST (0.0–0.2742) and FIS (0.0–0.2330) values indicated different genotype frequencies between and within the populations, respectively. High genetic diversity within and low genetic differentiation between populations were found in 2002 and 2005. Interestingly, the high genetic differentiation between populations from the two collection sites observed in 2018 was not evident in within-site comparisons of putative populations collected on different dates during the season. The shift in the genetic makeup of the H. zea population in 2018 may be influenced by multiple biotic and abiotic factors, including tropical storms. Continued assessment of these peripheral populations of H. zea will be needed to assess the impacts of genetic changes on pest control and resistance management tactics.
15

Verstraeten, Barbara S. E., Jane Mijovic-Kondejewski, Jun Takeda, Satomi Tanaka, and David M. Olson. "Canada’s pregnancy-related mortality rates: doing well but room for improvement." Clinical & Investigative Medicine 38, no. 1 (February 6, 2015): 15. http://dx.doi.org/10.25011/cim.v38i1.22410.

Abstract:
Purpose: Canada's perinatal, infant and maternal mortality rates were examined and compared with those of other Organization for Economic Cooperation and Development (OECD) countries. The type and the quality of the available data and best practices in several OECD countries were evaluated. Source: A literature search was performed in PubMed and the Cochrane Library. Vital statistics data were obtained from the OECD Health Database and Statistics Canada and subjected to secondary analysis. Principal findings: Overall, Canadian pregnancy mortality rates have fallen dramatically since the early 1960s. Perinatal and infant mortality rates remain low and stable, but the maternal mortality rate has increased slightly and both mortality rates have declined in their relative OECD rankings over the last 20 years. Data quality and coverage across Canada and internationally, especially for Indigenous peoples, are inconsistent and registration practices differ greatly, making comparisons difficult. Available data do show that Indigenous people's perinatal and infant mortality rates are nearly twice those of the general population. Best practices in other OECD countries include Australia's National Maternity Services plan to improve Aboriginal perinatal health, the Netherlands' midwifery services and National Perinatal Registry, and Japan's national pregnancy registration and Maternal Handbook. Conclusion: To diminish Canadian disparities in perinatal health rates and improve health outcomes we recommend a) uniform registration practices across Canada, b) better data quality and coverage especially among Indigenous communities, c) adoption of a national pregnancy registration and a maternal handbook along with d) improved midwifery and primary practice services to rural and remote communities. At a time when Canada is focusing upon improving pregnancy health in developing nations, it also needs to address its own challenges in improving pregnancy outcomes.
16

Sinyuk, Alexander, Brent N. Holben, Thomas F. Eck, David M. Giles, Ilya Slutsker, Sergey Korkin, Joel S. Schafer, Alexander Smirnov, Mikhail Sorokin, and Alexei Lyapustin. "The AERONET Version 3 aerosol retrieval algorithm, associated uncertainties and comparisons to Version 2." Atmospheric Measurement Techniques 13, no. 6 (June 26, 2020): 3375–411. http://dx.doi.org/10.5194/amt-13-3375-2020.

Abstract:
The Aerosol Robotic Network (AERONET) Version 3 (V3) aerosol retrieval algorithm is described, which is based on the Version 2 (V2) algorithm with numerous updates. Comparisons of V3 aerosol retrievals to those of V2 are presented, along with a new approach to estimate uncertainties in many of the retrieved aerosol parameters. Changes in the V3 aerosol retrieval algorithm include (1) a new polarized radiative transfer code (RTC), which replaced the scalar RTC of V2, (2) detailed characterization of gas absorption by adding NO2 and H2O to specify the total gas absorption in the atmospheric column and by specifying vertical profiles of all the atmospheric species, (3) new bidirectional reflectance distribution function (BRDF) parameters for land sites adopted from the MODIS BRDF/Albedo product, (4) a new version of the extraterrestrial solar flux spectrum, and (5) a new temperature correction procedure for both direct Sun and sky radiance measurements. The potential effect of each change in V3 on single scattering albedo (SSA) retrievals was analyzed. The operational almucantar retrievals of V2 versus V3 were compared for four AERONET sites: GSFC, Mezaira, Mongu, and Kanpur. Analysis showed very good agreement in the retrieved parameters of the size distributions. Comparisons of SSA retrievals for dust aerosols (Mezaira) showed good agreement in 440 nm SSA, while for longer wavelengths V3 SSAs are systematically higher than those of V2, with the largest mean difference at 675 nm due to the cumulative effects of both extraterrestrial solar flux and BRDF changes. For non-dust aerosols, the largest SSA deviation is at 675 nm due to differences in the extraterrestrial solar flux spectra used in each version. Further, the SSA 675 nm mean differences are very different for weakly (GSFC) and strongly (Mongu) absorbing aerosols, which is explained by the lower sensitivity to a bias in aerosol scattering optical depth by less absorbing aerosols.
A new hybrid (HYB) sky radiance measurement scan is introduced and discussed. The HYB combines features of scans in two different planes to maximize the range of scattering angles and achieve scan symmetry, thereby allowing for cloud screening and spatial averaging, which is an advantage over the principal plane scan that lacks robust symmetry. We show that due to an extended range of scattering angles, HYB SSA retrievals for dust aerosols exhibit smaller variability with solar zenith angle (SZA) than those of the almucantar (ALM) scan, which allows extension of HYB SSA retrievals to SZAs less than 50°, down to as small as 25°. The comparison of SSA retrievals from closely time-matched HYB and ALM scans in the 50 to 75° SZA range showed good agreement, with differences below ~0.005. We also present an approach to estimate retrieval uncertainties which utilizes the variability in retrieved parameters generated by perturbing both measurements and auxiliary input parameters as a proxy for retrieval uncertainty. The perturbations in measurements and auxiliary inputs are assumed as estimated biases in aerosol optical depth (AOD), radiometric calibration of sky radiances combined with solar spectral irradiance, and surface reflectance. For each set of Level 2 Sun/sky radiometer observations, 27 inputs corresponding to 27 combinations of biases were produced and separately inverted to generate the following statistics of the inversion results: average, standard deviation, minimum and maximum values. From these statistics, the standard deviation (labeled U27) is used as a proxy for estimated uncertainty, and a lookup table (LUT) approach was implemented to reduce the computational time. The U27 climatological LUT was generated from the entire AERONET almucantar (1993–2018) and hybrid (2014–2018) scan databases by binning U27s in AOD (440 nm), Ångström exponent (AE, 440–870 nm), and SSA (440, 675, 870, 1020 nm). Using this LUT approach, the uncertainty estimates U27 for each individual V3 Level 2 retrieval can be obtained by interpolation using the corresponding measured and inverted combination of AOD, AE, and SSA.
17

Hassan, Raaid N. "A comparison between PCA and some enhancement filters for denoising astronomical images." Iraqi Journal of Physics (IJP) 11, no. 22 (February 20, 2019): 82–92. http://dx.doi.org/10.30723/ijp.v11i22.356.

Abstract:
This paper compares denoising techniques based on a statistical approach, principal component analysis with local pixel grouping (PCA-LPG); the procedure is iterated a second time to further improve the denoising performance. Other enhancement filters are also evaluated: an adaptive Wiener low-pass filter applied to a grayscale image degraded by constant-power additive noise, based on statistics estimated from a local neighborhood of each pixel; a median filter of the noisy input image, in which each output pixel contains the median value of the M-by-N neighborhood around the corresponding input pixel; a Gaussian low-pass filter; and an order-statistic filter. Experimental results show that the LPG-PCA method gives better performance, especially in preserving fine image structure, compared with the other general denoising algorithms.
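The behavior of the median filter versus a plain averaging filter mentioned above can be illustrated with a small pure-NumPy sketch; the image and noise are synthetic, and this is not the paper's PCA-LPG code.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic test image: a vertical step edge, plus impulse
# ("salt-and-pepper") noise on roughly 5% of the pixels.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = img.copy()
mask = rng.random(img.shape) < 0.05
noisy[mask] = rng.integers(0, 2, size=mask.sum()).astype(float)

def filter2d(x, reduce_fn, size=3):
    """Slide a size x size window over x and apply reduce_fn (median/mean)."""
    pad = size // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = reduce_fn(xp[i:i + size, j:j + size])
    return out

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB for images with peak value 1."""
    mse = np.mean((ref - est) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(1.0 / mse)

median_out = filter2d(noisy, np.median)   # removes impulses, keeps the edge
mean_out = filter2d(noisy, np.mean)       # spreads impulses, blurs the edge
print(psnr(img, median_out), psnr(img, mean_out))
```

On impulse noise the median filter typically yields a markedly higher PSNR than averaging, because an isolated outlier never becomes the window median, while averaging both smears outliers and softens edges.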
18

Heij, Christiaan, Patrick J. F. Groenen, and Dick van Dijk. "Forecast comparison of principal component regression and principal covariate regression." Computational Statistics & Data Analysis 51, no. 7 (April 2007): 3612–25. http://dx.doi.org/10.1016/j.csda.2006.10.019.

19

Diana, Giancarlo, and Chiara Tommasi. "Cross-validation methods in principal component analysis: A comparison." Statistical Methods & Applications 11, no. 1 (February 2002): 71–82. http://dx.doi.org/10.1007/s102600200026.

20

de Viron, Olivier, Michel Van Camp, Alexia Grabkowiak, and Ana M. G. Ferreira. "Comparing global seismic tomography models using varimax principal component analysis." Solid Earth 12, no. 7 (July 19, 2021): 1601–34. http://dx.doi.org/10.5194/se-12-1601-2021.

Full text
Abstract:
Abstract. Global seismic tomography has greatly progressed in the past decades, with many global Earth models being produced by different research groups. Objective, statistical methods are crucial for the quantitative interpretation of the large amount of information encapsulated by the models and for unbiased model comparisons. Here we propose using a rotated version of principal component analysis (PCA) to compress the information in order to ease the geological interpretation and model comparison. The method generates between 7 and 15 principal components (PCs) for each of the seven tested global tomography models, capturing more than 97 % of the total variance of the model. Each PC consists of a vertical profile, with which a horizontal pattern is associated by projection. The depth profiles and the horizontal patterns enable examining the key characteristics of the main components of the models. Most of the information in the models is associated with a few features: large low-shear-velocity provinces (LLSVPs) in the lowermost mantle, subduction signals and low-velocity anomalies likely associated with mantle plumes in the upper and lower mantle, and ridges and cratons in the uppermost mantle. Importantly, all models highlight several independent components in the lower mantle that make between 36 % and 69 % of the total variance, depending on the model, which suggests that the lower mantle is more complex than traditionally assumed. Overall, we find that varimax PCA is a useful additional tool for the quantitative comparison and interpretation of tomography models.
APA, Harvard, Vancouver, ISO, and other styles
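The compress-then-rotate workflow described in this abstract (keep enough PCs to capture more than 97 % of the variance, then apply a varimax rotation for interpretability) can be sketched in a few lines. This is a generic illustration on synthetic data, not the authors' code: the data shapes are invented, and the `varimax` routine is Kaiser's standard SVD-based formulation.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation (Kaiser's criterion) of a p x k loadings matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # SVD-based update of the varimax criterion.
        grad = loadings.T @ (
            rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))
        )
        u, sv, vt = np.linalg.svd(grad)
        rotation = u @ vt
        if sv.sum() - var < tol:
            break
        var = sv.sum()
    return loadings @ rotation, rotation

# Toy stand-in for one tomography model: 200 "locations" x 6 depth features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 6))
Xc = X - X.mean(axis=0)

# PCA via SVD; keep enough PCs to capture >97% of the total variance,
# mirroring the compression step described in the abstract.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(explained, 0.97)) + 1
loadings = Vt[:k].T * s[:k]            # feature-space loadings, p x k
rotated_loadings, R = varimax(loadings)
```

Because the rotation is orthogonal, the rotated components span the same subspace and preserve the row-wise communalities; only the interpretability of individual components changes.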
21

Bugli, C., and P. Lambert. "Comparison between Principal Component Analysis and Independent Component Analysis in Electroencephalograms Modelling." Biometrical Journal 49, no. 2 (April 2007): 312–27. http://dx.doi.org/10.1002/bimj.200510285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Jackson, Donald A. "Stopping Rules in Principal Components Analysis: A Comparison of Heuristical and Statistical Approaches." Ecology 74, no. 8 (December 1993): 2204–14. http://dx.doi.org/10.2307/1939574.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

As’ad, Muhammad, Ely Anita, and Yulianto Yulianto. "PENGARUH KEPEMIMPINAN KEPALA SEKOLAH DAN KOMPETENSI PEDAGOGIK GURU TERHADAP HASIL BELAJAR SISWA SMK PGRI 11 CILEDUG PADA KOTA TANGERANG BANTEN." Transparansi Jurnal Ilmiah Ilmu Administrasi 1, no. 2 (February 6, 2019): 149–57. http://dx.doi.org/10.31334/trans.v1i2.310.

Full text
Abstract:
To assess the effect of principal leadership and teacher pedagogical competence on the learning outcomes of students at SMK PGRI 11 Ciledug in Tangerang City, Banten, a quantitative method with questionnaire distribution to a saturated sample was used. In the statistical model, variable X1 is the total score of respondents' answers on principal leadership, variable X2 is the total score on teacher pedagogical competence, and variable Y is the total score on student learning outcomes. Comparing t count (-8.202) with t table (1.995), |t count| > t table, indicating a significant negative effect of principal leadership on student learning outcomes. Comparing t count (16.972) with t table (1.995), t count > t table, indicating a significant positive effect of teacher pedagogical competence on student learning outcomes. Comparing F count (2.116) with F table (0.309), F count > F table, indicating a significant joint effect of principal leadership and teacher pedagogical competence on student learning outcomes.
APA, Harvard, Vancouver, ISO, and other styles
24

Al Naib, A., M. Wallace, L. Brennan, T. Fair, and P. Lonergan. "143 METABOLOMIC ANALYSIS OF BOVINE PRE-IMPLANTATION EMBRYOS." Reproduction, Fertility and Development 22, no. 1 (2010): 230. http://dx.doi.org/10.1071/rdv22n1ab143.

Full text
Abstract:
Metabolomics is the study of small molecules or metabolites present in biological samples. The metabolism of the bovine embryo is marked by a transition from the oocyte and early stage embryo, which are entirely dependent on TCA cycle activity for the generation of ATP, toward a significantly greater input of glycolysis during morula compaction and blastocyst formation. The aim of this study was to describe the changes in metabolic profiles of bovine embryos across pre-implantation development. Five pools of 100 embryos at the 2- to 4-cell, 8-cell, 16-cell, morula, and blastocyst stages (i.e. 500 embryos in total per developmental stage) were produced by IVM, IVF, and IVC. Each pool was snap frozen and stored at -80°C until analysis. Extraction of metabolites was performed using 6% perchloric acid. 1H spectra were acquired on a 600-MHz Varian NMR spectrometer operating at 25°C (Varian Inc., Palo Alto, CA, USA). All spectra were processed, baseline corrected, and integrated into bins of 0.02 ppm width. The water region was excluded and data were normalized to the sum of the spectral integral. Data were analyzed using multivariate statistics. Principal component analysis (PCA), an unsupervised pattern recognition technique, was performed initially to assess variation and expose any trends or outlying data. Partial least squares-discriminant analysis (PLS-DA) was subsequently performed to define the maximum separation between the different developmental stages. Data were visualized by constructing principal component scores and loadings plots, where each point on the score plot represented an individual sample and each point on the loadings plot represented a single 1H NMR spectral region. The quality of all models was judged by the goodness-of-fit parameter (R2) and the predictive ability parameter (Q2). PCA analysis of all the data showed distinct separation of the 2- to 4-cell, and 8-cell (i.e. 
pre-embryonic genome activation, EGA) from 16-cell, morula and blastocyst (i.e. post-EGA) extracts. Pair-wise comparisons between successive developmental stages were performed. PCA analysis showed separation of 2- to 4-cell and 8-cell extracts. A PLS-DA was built with an R2 of 0.97 and a Q2 of 0.55. The main discriminating metabolites were acetate, acetoacetate, and an unidentified peak at 1.25 ppm. Similarly, PCA analysis showed separation of 8-cell and 16-cell embryo extracts. A PLS-DA model was built with an R2 of 0.99 and a Q2 of 0.92. The discrimination was due to higher levels of acetate and an unidentified peak in 8-cell extracts and the appearance of a new peak in 16-cell extracts, tentatively assigned to oxalacetate. PCA analysis showed no separation of the 16-cell, morula, and blastocyst extracts. In conclusion, 1H NMR spectroscopy allows the simultaneous measurement of small-molecular-weight molecules in complex biological samples. Given that the metabolome is downstream of gene function it may represent a superior measure of cellular activities compared to transcriptomic and proteomic approaches. Supported by Science Foundation Ireland (07/SRC/B1156).
APA, Harvard, Vancouver, ISO, and other styles
25

Ramos, Glaucio L., Gustavo F. Rodrigues, Franz M. Camilo, and Cássio G. Rego. "Comparison between statistical and principal component analysis in reduction of near‐field FDTD data." IET Microwaves, Antennas & Propagation 13, no. 13 (July 26, 2019): 2315–18. http://dx.doi.org/10.1049/iet-map.2018.5611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Denton, Shirley R., and Burton V. Barnes. "Spatial distribution of ecologically applicable climatic statistics in Michigan." Canadian Journal of Forest Research 17, no. 7 (July 1, 1987): 598–612. http://dx.doi.org/10.1139/x87-101.

Full text
Abstract:
The paper addresses the problem of selecting appropriate climatic variables from readily available weather data for use in studies of species ranges and ecosystem site classification. Principal component variables and more traditional climatic variables such as heat sums are presented. To facilitate comparison of spatially distributed variables, a procedure is described for estimation of climatic statistics for areas between weather stations. The procedure is objective and provides a measure of its error. Like any regression procedure, it improves with availability of more data. Extension of climatic statistics to a 5 × 5 km grid covering the state of Michigan was used to create contour maps for a number of climatic statistics with potential relevance for plant growth. In addition, a large number of climatic statistics were summarized using principal component analysis. Separate analyses were made for winter temperature and precipitation, growing season temperature, growing season precipitation, and a combination of variables possibly related to stressful conditions. There was a high degree of correlation among many of the statistics. The correlations were due to global climatic controls and to moderation due to the Great Lakes. Principal component variables successfully presented major climatic trends. However, for ecological use they appeared to offer few advantages over more traditional climatic statistics.
APA, Harvard, Vancouver, ISO, and other styles
27

Schott, James R. "Comparison of some goodness of fit tests for a single non-isotropic hypothetical principal component." Communications in Statistics - Theory and Methods 14, no. 5 (January 1985): 1201–15. http://dx.doi.org/10.1080/03610928508828971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Van Ginkel, Joost R., Pieter M. Kroonenberg, and Henk A. L. Kiers. "Missing data in principal component analysis of questionnaire data: a comparison of methods." Journal of Statistical Computation and Simulation 84, no. 11 (April 17, 2013): 2298–315. http://dx.doi.org/10.1080/00949655.2013.788654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Arai, Masafumi, Hajime Tsubaki, and Yoshinori Sagisaka. "Evidence-Based Statistical Evaluation of Japanese L2-Learners’ Proficiency using Principal Component Analysis." SHS Web of Conferences 102 (2021): 01005. http://dx.doi.org/10.1051/shsconf/202110201005.

Full text
Abstract:
This paper aims at an automatic evaluation of second language (L2) learners’ proficiencies and tries to analyze English conversation data having 94 statistics and Global Scale scores of the Common European Framework of Reference (CEFR) given to each participant. The CEFR defines Range, Accuracy, Fluency, Interaction and Coherence as 5 subcategories, which constitute the CEFR Global Scale score. The statistics were classified into the CEFR’s 5 subcategories. We used Principal Component Analysis (PCA), an unsupervised machine learning method, on each subcategory and obtained the participants’ principal component scores (PC scores) of the 5 subcategories as estimation parameters. We predicted the participants’ CEFR Global scores using Multiple Regression Analysis (MRA). The proposed prediction method using the PC scores was compared with conventional methods using the 94 statistics. Based on the coefficients of determination (R2), the value of the proposed method (0.82) was nearly equivalent to the values obtained by the conventional methods. Meanwhile, as for standard deviation, the proposed method showed the smallest value in the comparison. The results indicated the usability of the PCA and PC scores calculated from the CEFR subcategory data for objective evaluation of L2 learners’ English proficiencies.
APA, Harvard, Vancouver, ISO, and other styles
30

Cerrato, Robert M. "Interpretable Statistical Tests for Growth Comparisons using Parameters in the von Bertalanffy Equation." Canadian Journal of Fisheries and Aquatic Sciences 47, no. 7 (July 1, 1990): 1416–26. http://dx.doi.org/10.1139/f90-160.

Full text
Abstract:
Likelihood ratio, t-, univariate χ2-, and T2-tests have been proposed to compare von Bertalanffy parameters among stocks. As commonly applied, all of these tests are approximate, with the accuracy of each dependent on the nonlinearity of the von Bertalanffy equation, sample size, and if present, the degree of heterogeneity in the error variances. An empirical comparison of these procedures shows that the likelihood ratio test often differs in outcome from the others. Analysis of the conflicting cases by confidence region comparisons and Monte Carlo simulations almost always resolved the outcome in favor of the likelihood ratio test. The parameter effects component of nonlinearity was found to be the principal factor biasing the t-, univariate χ2-, and T2-tests. Reparameterizations of the von Bertalanffy equation substantially reduced, but did not completely eliminate, conflicting outcomes. It is concluded that the likelihood ratio test is the most accurate of the procedures considered in this study, and whenever possible, it should be the approach of choice.
APA, Harvard, Vancouver, ISO, and other styles
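For normal errors, the likelihood ratio comparison that Cerrato's abstract favours reduces to comparing residual sums of squares from separate versus pooled von Bertalanffy fits. The following is a hedged sketch on simulated data for two stocks; the growth parameters, sample layout, and noise level are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def vb(t, linf, k, t0):
    # von Bertalanffy growth: L(t) = L_inf * (1 - exp(-K * (t - t0)))
    return linf * (1.0 - np.exp(-k * (t - t0)))

rng = np.random.default_rng(1)
ages = np.repeat(np.arange(1.0, 11.0), 3)               # 30 fish per stock
y1 = vb(ages, 100.0, 0.30, -0.5) + rng.normal(0, 2.0, ages.size)
y2 = vb(ages, 100.0, 0.50, -0.5) + rng.normal(0, 2.0, ages.size)

def rss(t, y):
    popt, _ = curve_fit(vb, t, y, p0=(90.0, 0.2, 0.0), maxfev=10000)
    return np.sum((y - vb(t, *popt)) ** 2)

# Full model: separate (L_inf, K, t0) per stock.
rss_full = rss(ages, y1) + rss(ages, y2)
# Reduced model: one shared parameter set for the pooled data.
rss_red = rss(np.concatenate([ages, ages]), np.concatenate([y1, y2]))

n = 2 * ages.size
lr = n * np.log(rss_red / rss_full)   # ~ chi-square with 3 df under H0
crit = chi2.ppf(0.95, df=3)           # reject equal growth if lr > crit
```

With the two stocks given different K values, the pooled fit carries systematic misfit and the statistic comfortably exceeds the critical value.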
31

Aswinda, Aswinda, Arifuddin Siraj, and Saprin Saprin. "Effect of Principal Supervision on Teacher Pedagogic Competencies." Jurnal Ilmiah Ilmu Administrasi Publik 9, no. 1 (June 9, 2019): 95. http://dx.doi.org/10.26858/jiap.v9i1.9331.

Full text
Abstract:
The research examines the effect of principal supervision on teacher pedagogic competence at Public Elementary School 237 Aletellue in Soppeng Regency. It uses a quantitative methodology with a correlational ex post facto design to address the research questions, combining a methodological (quantitative-positivistic) approach with scientific (pedagogic and psychological) perspectives. The participants are 13 teachers at Public Elementary School 237 Aletellue. Data were gathered through surveys and documentation, then analysed with descriptive statistics and hypothesis testing using correlation and linear regression coefficients. The key finding is that principal supervision is related to teachers' pedagogic competence: teachers' activities became more effective in improving the competencies they possess, helping to develop knowledge with innovation and creativity.
APA, Harvard, Vancouver, ISO, and other styles
32

Kim, Kwang-Y., and Qigang Wu. "A Comparison Study of EOF Techniques: Analysis of Nonstationary Data with Periodic Statistics." Journal of Climate 12, no. 1 (January 1, 1999): 185–99. http://dx.doi.org/10.1175/1520-0442-12.1.185.

Full text
Abstract:
Abstract Identification of independent physical/dynamical modes and corresponding principal component time series is an important aspect of climate studies for they serve as a tool for detecting and predicting climate changes. While there are a number of different eigen techniques their performance for identifying independent modes varies. Considered here are comparison tests of eight eigen techniques in identifying independent patterns from a dataset. A particular emphasis is given to cyclostationary processes such as deforming and moving patterns with cyclic statistics. Such processes are fairly common in climatology and geophysics. Two eigen techniques that are based on the cyclostationarity assumption—cyclostationary empirical orthogonal functions (EOFs) and periodically extended EOFs—perform better in identifying moving and deforming patterns than techniques based on the stationarity assumption. Application to a tropical Pacific surface temperature field indicates that the first dominant pattern and the corresponding principal component (PC) time series are consistent among different techniques. The second mode and the PC time series, however, are not very consistent from one another with hints of significant modal mixing and splitting in some of derived patterns. There also is a detailed difference of intraannual scale between PC time series of a stationary technique and those of a cyclostationary one. This may bear an important implication on the predictability of El Niño. Clearly there is a choice of eigen technique for improved predictability.
APA, Harvard, Vancouver, ISO, and other styles
33

Yang, Sharon S., Jack C. Yue, and Hong-Chih Huang. "Modeling longevity risks using a principal component approach: A comparison with existing stochastic mortality models." Insurance: Mathematics and Economics 46, no. 1 (February 2010): 254–70. http://dx.doi.org/10.1016/j.insmatheco.2009.09.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Calviño, Aida. "A Simple Method for Limiting Disclosure in Continuous Microdata Based on Principal Component Analysis." Journal of Official Statistics 33, no. 1 (March 1, 2017): 15–41. http://dx.doi.org/10.1515/jos-2017-0002.

Full text
Abstract:
Abstract In this article we propose a simple and versatile method for limiting disclosure in continuous microdata based on Principal Component Analysis (PCA). Instead of perturbing the original variables, we propose to alter the principal components, as they contain the same information but are uncorrelated, which permits working on each component separately, reducing processing times. The number and weight of the perturbed components determine the level of protection and distortion of the masked data. The method provides preservation of the mean vector and the variance-covariance matrix. Furthermore, depending on the technique chosen to perturb the principal components, the proposed method can provide masked, hybrid or fully synthetic data sets. Some examples of application and comparison with other methods previously proposed in the literature (in terms of disclosure risk and data utility) are also included.
APA, Harvard, Vancouver, ISO, and other styles
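The key property claimed in Calviño's abstract (masking that preserves the mean vector and the variance-covariance matrix) can be illustrated with a toy scheme: perturb the low-variance principal components, then re-whiten and rescale the scores so that the first two moments are restored exactly. This is a simplified stand-in on invented data, not the author's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy continuous microdata: 1000 records, 4 correlated variables.
n, p = 1000, 4
A = rng.standard_normal((p, p))
X = rng.standard_normal((n, p)) @ A + np.array([10.0, 20.0, 5.0, 0.0])

mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
scores = (X - mu) @ eigvecs                   # uncorrelated components

# Perturb the two lowest-variance components with Gaussian noise.
noisy = scores.copy()
noisy[:, :2] += rng.normal(0.0, np.sqrt(eigvals[:2]), size=(n, 2))

# Re-whiten and rescale so mean and covariance are restored exactly.
noisy -= noisy.mean(axis=0)
L = np.linalg.cholesky(np.cov(noisy, rowvar=False))
white = noisy @ np.linalg.inv(L).T            # identity sample covariance
masked = mu + (white * np.sqrt(eigvals)) @ eigvecs.T
```

Working on the components rather than the original variables means each column can be perturbed independently, which is the processing-time advantage the abstract highlights.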
35

Liu, Zhenqiu, Dechang Chen, and Halima Bensmail. "Gene Expression Data Classification With Kernel Principal Component Analysis." Journal of Biomedicine and Biotechnology 2005, no. 2 (2005): 155–59. http://dx.doi.org/10.1155/jbb.2005.155.

Full text
Abstract:
One important feature of the gene expression data is that the number of genes M far exceeds the number of samples N. Standard statistical methods do not work well when N < M. Development of new methodologies or modification of existing methodologies is needed for the analysis of the microarray data. In this paper, we propose a novel analysis procedure for classifying the gene expression data. This procedure involves dimension reduction using kernel principal component analysis (KPCA) and classification with logistic regression (discrimination). KPCA is a generalization and nonlinear version of principal component analysis. The proposed algorithm was applied to five different gene expression datasets involving human tumor samples. Comparison with other popular classification methods such as support vector machines and neural networks shows that our algorithm is very promising in classifying gene expression data.
APA, Harvard, Vancouver, ISO, and other styles
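The KPCA-plus-logistic-regression procedure is easy to reproduce in outline with scikit-learn. The dataset, kernel, and parameters below are placeholders chosen so the example runs anywhere, not the expression data or settings of the paper.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Nonlinearly separable toy data standing in for expression profiles
# (two classes with no linear boundary in the input space).
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RBF-kernel PCA maps the data to components in which the classes become
# close to linearly separable; logistic regression then discriminates.
model = make_pipeline(
    KernelPCA(n_components=2, kernel="rbf", gamma=10.0),
    LogisticRegression(),
)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

A plain linear classifier fails on these concentric classes; after the nonlinear dimension reduction the logistic discriminant separates them well, which is the core idea of the procedure.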
36

Choong, Chun Sern, Ahmad Fakhri Ab. Nasir, Muhammad Aizzat Zakaria, Anwar P.P. Abdul Majeed, and Mohd Azraai Mohd Razman. "Pallet-level Classification Using Principal Component Analysis in Ensemble Learning Model." MEKATRONIKA 2, no. 1 (June 5, 2020): 23–27. http://dx.doi.org/10.15282/mekatronika.v2i1.6720.

Full text
Abstract:
In this paper, we present a machine learning pipeline to solve a multiclass classification of radio frequency identification (RFID) signal strength. The goal is to identify ten pallet levels using nine statistical features derived from RFID signals and four various ensemble learning classification models. The efficacy of the models was evaluated by considering features that were dimensionally reduced via Principal Component Analysis (PCA) and original features. It was shown that the PCA reduced features could provide a better classification accuracy of the pallet levels in comparison to the selection of all features via Extra Tree and Random Forest models.
APA, Harvard, Vancouver, ISO, and other styles
37

Sein, Sander, Jose Campos Matos, and Juhan Idnurm. "STATISTICAL ANALYSIS OF REINFORCED CONCRETE BRIDGES IN ESTONIA." Baltic Journal of Road and Bridge Engineering 12, no. 4 (December 13, 2017): 225–33. http://dx.doi.org/10.3846/bjrbe.2017.28.

Full text
Abstract:
This paper introduces a possible way to use a multivariate methodology, principal component analysis, to reduce the dimensionality of a database of bridge-element condition states collected during visual inspections. Attention is paid to the condition assessment of bridges on Estonian national roads and to the collected data, which play an important role in selecting the correct statistical technique and obtaining reliable results. Additionally, a detailed overview of typical road bridges and examples of the collected information are provided. Statistical analysis is carried out for the most typical reinforced concrete bridges in Estonia, and a comparison is made among different typologies. The multivariate technique is presented and collated in two different formulations, one addressing unevenness in the variables and the other taking missing data into account. Principal components and weighting factors calculated for bridges of different typologies also differ in their results and in the element groups where variation is retained.
APA, Harvard, Vancouver, ISO, and other styles
38

Stöckl, Dietmar, Katy Dewitte, and Linda M. Thienpont. "Validity of linear regression in method comparison studies: is it limited by the statistical model or the quality of the analytical input data?" Clinical Chemistry 44, no. 11 (November 1, 1998): 2340–46. http://dx.doi.org/10.1093/clinchem/44.11.2340.

Full text
Abstract:
Abstract We compared the application of ordinary linear regression, Deming regression, standardized principal component analysis, and Passing–Bablok regression to real-life method comparison studies to investigate whether the statistical model of regression or the analytical input data have more influence on the validity of the regression estimates. We took measurements of serum potassium as an example for comparisons that cover a narrow data range and measurements of serum estradiol-17β as an example for comparisons that cover a wide data range. We demonstrate that, in practice, it is not the statistical model but the quality of the analytical input data that is crucial for interpretation of method comparison studies. We show the usefulness of ordinary linear regression, in particular, because it gives a better estimate of the standard deviation of the residuals than the other procedures. The latter is important for distinguishing whether the observed spread across the regression line is caused by the analytical imprecision alone or whether sample-related effects also contribute. We further demonstrate the usefulness of linear correlation analysis as a first screening test for the validity of linear regression data. When ordinary linear regression (in combination with correlation analysis) gives poor estimates, we recommend investigating the analytical reason for the poor performance instead of assuming that other linear regression procedures add substantial value to the interpretation of the study. This investigation should address whether (a) the x and y data are linearly related; (b) the total analytical imprecision (sa,tot) is responsible for the poor correlation; (c) sample-related effects are present (standard deviation of the residuals ≫ sa,tot); (d) the samples are adequately distributed over the investigated range; and (e) the number of samples used for the comparison is adequate.
APA, Harvard, Vancouver, ISO, and other styles
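The contrast Stöckl and colleagues examine between ordinary linear regression and Deming regression can be made concrete on simulated method-comparison data with an error-variance ratio of 1. All values below are illustrative; the closed-form Deming slope is the standard textbook formula, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated method comparison: both methods measure the same true
# concentrations with equal imprecision (error-variance ratio = 1).
truth = rng.uniform(0.0, 10.0, size=2000)
x = truth + rng.normal(0.0, 1.0, size=truth.size)   # comparison method
y = truth + rng.normal(0.0, 1.0, size=truth.size)   # test method

xbar, ybar = x.mean(), y.mean()
sxx = np.sum((x - xbar) ** 2)
syy = np.sum((y - ybar) ** 2)
sxy = np.sum((x - xbar) * (y - ybar))

# Ordinary least squares: the slope is attenuated when x carries error.
ols_slope = sxy / sxx
ols_intercept = ybar - ols_slope * xbar

# Deming regression with error-variance ratio lam = 1.
lam = 1.0
d = syy - lam * sxx
deming_slope = (d + np.sqrt(d * d + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
deming_intercept = ybar - deming_slope * xbar
```

The example shows the statistical point at issue: OLS underestimates the true slope of 1 because the x-axis carries measurement error, while Deming regression recovers it; the paper's argument is that in practice the quality of the input data often matters more than this model choice.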
39

de Guevara Cortés, Rogelio Ladrón, Salvador Torra Porras, and Enric Monte Moreno. "Comparison of Statistical Underlying Systematic Risk Factors and Betas Driving Returns on Equities." Revista Mexicana de Economía y Finanzas 16, TNEA (August 31, 2021): 1–25. http://dx.doi.org/10.21919/remef.v16i0.697.

Full text
Abstract:
The objective of this paper is to compare four dimension reduction techniques used for extracting the underlying systematic risk factors driving returns on equities of the Mexican Market. The methodology used compares the results of estimation produced by Principal Component Analysis (PCA), Factor Analysis (FA), Independent Component Analysis (ICA), and Neural Networks Principal Component Analysis (NNPCA) under three different perspectives. The results showed that in general: PCA, FA, and ICA produced similar systematic risk factors and betas; NNPCA and ICA produced the greatest number of fully accepted models in the econometric contrast; and, the interpretation of systematic risk factors across the four techniques was not constant. Additional research testing alternative extraction techniques, econometric contrast, and interpretation methodologies are recommended, considering the limitations derived from the scope of this work. The originality and main contribution of this paper lie in the comparison of these four techniques in both the financial and Mexican contexts. The main conclusion is that depending on the purpose of the analysis, one technique will be more suitable than another.
APA, Harvard, Vancouver, ISO, and other styles
40

Saccenti, Edoardo, and José Camacho. "Determining the number of components in principal components analysis: A comparison of statistical, crossvalidation and approximated methods." Chemometrics and Intelligent Laboratory Systems 149 (December 2015): 99–116. http://dx.doi.org/10.1016/j.chemolab.2015.10.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Zhonghua, Ian Barnett, and Xihong Lin. "A comparison of principal component methods between multiple phenotype regression and multiple SNP regression in genetic association studies." Annals of Applied Statistics 14, no. 1 (March 2020): 433–51. http://dx.doi.org/10.1214/19-aoas1312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Webb, A. J., and P. R. Bampton. "Impact of the new statistical technology on pig improvement." BSAP Occasional Publication 12 (1988): 111–28. http://dx.doi.org/10.1017/s0263967x00003323.

Full text
Abstract:
AbstractPast improvement programmes in pigs have concentrated on lean growth, and have relied on direct comparisons of contemporaries in a common environment. The new mixed model technology now allows comparisons across environments and generations, and, by utilizing all possible genetic relationships, more accurate prediction of genetic merit for traits of low heritability. In addition, breeding programme design will benefit from more precise estimation of genetic parameters. Principal benefits of the new technology are greater flexibility of structure of breeding programmes, and faster improvement of litter size. The consequence will be larger populations under selection with lower costs per unit of genetic improvement. For national improvement schemes using a high proportion of AI matings, central testing stations could become unnecessary. Main research priorities are the optimum family structure to balance selection and inbreeding, more efficient computing strategies for large numbers of traits, and the incorporation of single genes into predictions of merit. The new technology will be important for the exploitation of biological advances in manipulation of both reproduction and the genome.
APA, Harvard, Vancouver, ISO, and other styles
43

Samec, P., D. Vavříček, P. Šimková, and J. Pňáček. "Multivariate statistical approach to comparison of the nutrient status of Norway spruce (Picea abies [L.] Karst.) and top-soil properties in differently managed forest stands." Journal of Forest Science 53, No. 3 (January 7, 2008): 101–12. http://dx.doi.org/10.17221/2173-jfs.

Full text
Abstract:
The soil is an irreplaceable component of forest ecosystems. Soil-forming processes directly influence element cycling (EC). Plant-soil interaction is a specific part of EC. Plant-soil interactions were observed on an example of a natural spruce stand (NSS), a semi-natural spruce stand (SNSS) and an allochthonous spruce stand (ASS) in conditions of the spruce forest altitudinal zone (1,140–1,260 m a.s.l.; +3.0 °C; 1,200 mm) of the Hrubý Jeseník Mts. (Czech Republic, Central Europe), where Norway spruce (Picea abies [L.] Karst.) is the main edificator and stand-forming tree species. We evaluated the soil properties of H- and Ep-horizons at selected sites with Haplic and Skeletic Podzols and compared them with the nutrient status of spruce. Principal component analysis was used to examine the basic hypotheses: (1) each forest stand is in specific and topically individual interactions with soil, and these interactions influence its state; (2) the influence of forest management is reflected in humification and in the nutrient status of plant assimilatory tissues. Cluster analysis produced results comparable with the multivariate analysis of variance. The results show that combining linear and multivariate statistical methods provides an approach to detecting the forest stage based on soil and plant tissue data.
APA, Harvard, Vancouver, ISO, and other styles
44

Holst, Helle. "Comparison of Different Calibration Methods Suited for Calibration Problems with Many Variables." Applied Spectroscopy 46, no. 12 (December 1992): 1780–84. http://dx.doi.org/10.1366/0003702924123601.

Full text
Abstract:
This paper describes and compares different kinds of statistical methods proposed in the literature as suited for solving calibration problems with many variables. These are: principal component regression, partial least-squares, and ridge regression. The statistical techniques themselves do not provide robust results in the spirit of calibration equations which can last for long periods. A way of obtaining this property is by smoothing and differentiating the data. These techniques are considered, and it is shown how they fit into the treated description.
APA, Harvard, Vancouver, ISO, and other styles
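Two of the calibration methods Holst compares, principal component regression and ridge regression, can be sketched directly in NumPy on spectra-like collinear data. The dimensions, constituents, and penalty value are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Spectra-like design: 60 samples, 200 collinear "wavelengths" driven
# by 3 latent constituents.
n, p, k = 60, 200, 3
C = rng.uniform(0.0, 1.0, (n, k))            # constituent concentrations
S = np.abs(rng.standard_normal((k, p)))      # pure-component spectra
X = C @ S + 0.01 * rng.standard_normal((n, p))
y = C @ np.array([1.0, -2.0, 0.5])           # property to calibrate for

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# Principal component regression: regress y on the first k PC scores.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = U[:, :k] * s[:k]                          # scores, n x k
c = np.linalg.lstsq(T, yc, rcond=None)[0]
b_pcr = Vt[:k].T @ c                          # back to wavelength space

# Ridge regression: closed form with a small penalty.
alpha = 1e-3
b_ridge = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(p), Xc.T @ yc)

def r2(b):
    resid = yc - Xc @ b
    return 1.0 - resid @ resid / (yc @ yc)

r2_pcr, r2_ridge = r2(b_pcr), r2(b_ridge)
```

Both estimators handle the many-variables case where ordinary least squares breaks down (p > n); they differ in how they regularize: PCR truncates the component space, ridge shrinks all directions continuously.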
45

Fatima, Zunaira, Uzma Shahzadi, and Ashfaque Shah. "Financial Management Competence of Selected and Promoted School Heads: A Demographic Comparison." Global Social Sciences Review IV, no. IV (October 30, 2019): 188–94. http://dx.doi.org/10.31703/gssr.2019(iv-iv).24.

Full text
Abstract:
This study compared the financial management competence of selected and promoted school principals (heads). It was quantitative and comparative in nature. The population was principals of government high schools in Upper Punjab. A purposive sampling technique was applied to draw a sample of 213 school heads from district Sahiwal. A self-developed questionnaire comprising 34 items was used to collect data; its reliability was found to be .82. Data were analyzed with advanced statistics using SPSS. The study established that school heads are aware of planning procedures and implementation processes but are not confident in coordinating with stakeholders on the financial matters of the school. Significant differences appeared in the financial management competence of promoted versus selected school heads and of male versus female principals. It was recommended that financial management orientation and training be arranged for school heads of public secondary schools to improve their financial management competence.
APA, Harvard, Vancouver, ISO, and other styles
46

Židek, Radoslav, Daniela Jakabová, Jozef Trandžík, Ján Buleca, František Jakab, Peter Massányi, and László Zöldág. "Comparison of microsatellite and blood group diversity among different genotypes of cattle." Acta Veterinaria Hungarica 56, no. 3 (September 1, 2008): 323–33. http://dx.doi.org/10.1556/avet.56.2008.3.6.

Full text
Abstract:
Genetic variability and relationships among five cattle breeds (Holstein, Pinzgau, Limousin, Slovak Spotted and Charolais) bred in the Slovak Republic were investigated separately using 11 microsatellite markers and 61 blood group systems. Allele frequency, heterozygosity (HO, HE) and PIC values were investigated, and F-statistics were computed separately: the FIS, FIT and FST parameters for microsatellite markers, and the HS, HT and GST parameters for blood groups. The microsatellite and blood group comparisons showed similar results by F-statistics, but some differences were marked using the other methods. Both methods were able to detect a close relationship between the Slovak Pinzgau and Slovak Spotted cattle breeds, which was confirmed by genetic distance, principal component analysis (PCA) and the coefficient of admixture (mY). Important divergences between the different markers used in the study were observed in the characterisation of the Limousin and Charolais breeds.
APA, Harvard, Vancouver, ISO, and other styles
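The FST statistic used in this entry measures how much of the total genetic variation lies between breeds rather than within them. A minimal sketch for a single biallelic marker, with made-up allele frequencies for five hypothetical breeds:

```python
import numpy as np

# Hypothetical allele frequencies of one biallelic marker in five breeds
p = np.array([0.60, 0.55, 0.30, 0.58, 0.25])
w = np.full(len(p), 1 / len(p))          # equal breed weights

p_bar = np.sum(w * p)                    # mean allele frequency over breeds
H_S = np.sum(w * 2 * p * (1 - p))        # mean within-breed expected heterozygosity
H_T = 2 * p_bar * (1 - p_bar)            # total expected heterozygosity
F_ST = (H_T - H_S) / H_T                 # proportion of variation among breeds
```

In practice FST is averaged over many loci, and the same HS/HT decomposition underlies the GST parameter reported for the blood group systems.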
47

Kazek, Michael S. "A Comparison of Offshore Patrol Vessels." Marine Technology and SNAME News 22, no. 04 (October 1, 1985): 351–57. http://dx.doi.org/10.5957/mt1.1985.22.4.351.

Full text
Abstract:
This is a comparative naval architecture analysis of Royal Navy, Norwegian and Indian Coast Guard vessels. Standard design statistical and estimating techniques were used to identify and compare the principal factors which influence the design of the offshore patrol vessels (OPVs) studied. The investigation covers design requirements, ship characteristics, hull form, speed and propulsion, habitability, arrangements, stability, and seakeeping. The analysis of the designs leads to the conclusion that offshore patrol vessels are ships with limited but well-defined missions. This results in designs which are simple and functional in order to emphasize basic performance for a set monetary investment.
APA, Harvard, Vancouver, ISO, and other styles
48

Jakubus, Monika, and Małgorzata Graczyk. "Evaluation of the usability of single extractors in chemical analysis of composts using principal component analysis." Biometrical Letters 52, no. 2 (December 1, 2015): 115–30. http://dx.doi.org/10.1515/bile-2015-0011.

Full text
Abstract:
The usability of various single extractors in the chemical analysis of composts was evaluated using principal component analysis. Ten different single extractors were used to determine the contents of microelements obtained in the chemical extraction of four different composts. It was found that principal component analysis is a satisfactory statistical method enabling the comparison of different solutions in terms of efficiency of extraction of microelements from composts of different composition. The results showed that 1 mol dm−3 HCl and 10% HNO3 solutions had the highest extraction strength, and 0.01 mol dm−3 CaCl2 and 1 mol dm−3 NH4NO3 the lowest.
APA, Harvard, Vancouver, ISO, and other styles
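The PCA comparison in this entry can be sketched directly with NumPy: standardize the compost-by-extractor matrix, take the singular value decomposition, and read off explained variance, scores, and loadings. The matrix below is invented purely to show the mechanics.

```python
import numpy as np

# Hypothetical data: rows = composts, columns = microelement contents
# obtained with different single extractors (values are made up)
data = np.array([
    [4.2, 3.9, 1.1, 0.8],   # compost A
    [5.0, 4.7, 1.3, 1.0],   # compost B
    [2.1, 2.0, 0.5, 0.4],   # compost C
    [3.3, 3.1, 0.9, 0.7],   # compost D
])

# Standardize columns, then extract components via SVD of the centered matrix
Z = (data - data.mean(axis=0)) / data.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

explained = s**2 / np.sum(s**2)    # variance share of each component
scores = Z @ Vt.T                  # compost coordinates on the PCs
loadings = Vt.T                    # extractor contributions to each PC
```

When the extractors rank the composts consistently, as in this toy matrix, the first component absorbs almost all the variance, and the loadings show which extractors behave alike.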
49

C. Arcos, Susana, Lee Robertson, Sergio Ciordia, Isabel Sánchez-Alonso, Mercedes Careche, Noelia Carballeda-Sanguiao, Miguel Gonzalez-Muñoz, and Alfonso Navas. "Quantitative Proteomics Comparison of Total Expressed Proteomes of Anisakis simplex Sensu Stricto, A. pegreffii, and Their Hybrid Genotype." Genes 11, no. 8 (August 10, 2020): 913. http://dx.doi.org/10.3390/genes11080913.

Full text
Abstract:
The total proteomes of Anisakis simplex s.s., A. pegreffii and their hybrid genotype have been compared by quantitative proteomics (the iTRAQ approach), which considers the level of expressed proteins. The comparison was made by means of two independent experiments considering four biological replicates of A. simplex and two each for A. pegreffii and the hybrid between both species. Totals of 1811 and 1976 proteins were respectively identified in the experiments using public databases. One hundred ninety-six proteins were found to be significantly differentially expressed, and their relationships with the nematodes' biological replicates were estimated by a multidimensional statistical approach. The results of pairwise Log2 ratio comparisons among them were statistically treated and supported in order to convert them into discrete character states. Principal component analysis (PCA) confirms the validity of the method. This comparison selected thirty-seven proteins as discriminant taxonomic biomarkers among A. simplex, A. pegreffii and their hybrid genotype; 19 of these biomarkers, encoded by ten loci, are specific allergens of Anisakis (Ani s7, Ani s8, Ani s12, and Ani s14), and another (Ancylostoma secreted) is a common nematode venom allergen. The rest of the markers comprise four unknown or non-characterized proteins; five different proteins (leucine) related to innate immunity; four proteolytic proteins (metalloendopeptidases); a lipase; a mitochondrial translocase protein; a neurotransmitter; a thyroxine transporter; and a structural collagen protein. The proposed (proteomic and statistical) methodology solidly characterizes a set of proteins well suited to new targeted proteomics approaches.
APA, Harvard, Vancouver, ISO, and other styles
50

Nemani, Ramya. "Cluster and Factorial Analysis Applications in Statistical Methods." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 11, 2021): 5176–82. http://dx.doi.org/10.17762/turcomat.v12i3.2145.

Full text
Abstract:
Cluster analysis is a mathematical technique in multivariate data analysis which provides guidelines for grouping data into clusters. This research paper illustrates the concept of cluster analysis and various clustering techniques. Similarity and dissimilarity measures and dendrogram analysis are computed as the measures required for the analysis. The factor analysis technique is useful for understanding the hidden factors underlying the correlations among variables; identifying and isolating such factors is important in statistical methods across various fields. The paper explains the importance and major concepts of factor analysis through illustrated approaches, covering the basic factor model, factor loadings, and the factor rotation process, and provides the complete application process for, and a comparison of, the principal factor, maximum likelihood factor, and PCA approaches to factor analysis.
APA, Harvard, Vancouver, ISO, and other styles
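The agglomerative clustering that underlies the dendrogram analysis mentioned in this entry can be illustrated with a naive single-linkage sketch: repeatedly merge the two clusters whose closest members are nearest. The point set below is made up to show two obvious groups; production code would use an optimized library routine instead.

```python
import numpy as np

def single_linkage_clusters(points, n_clusters):
    """Naive agglomerative clustering with single (minimum) linkage."""
    clusters = [[i] for i in range(len(points))]
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    while len(clusters) > n_clusters:
        # Find the pair of clusters with the smallest inter-point distance
        best, best_dist = (0, 1), np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                dist = min(d[i, j] for i in clusters[a] for j in clusters[b])
                if dist < best_dist:
                    best_dist, best = dist, (a, b)
        a, b = best
        clusters[a].extend(clusters[b])   # merge cluster b into cluster a
        del clusters[b]
    return [sorted(c) for c in clusters]

# Two well-separated groups of 2-D points
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
groups = single_linkage_clusters(pts, 2)
```

Recording the distance at each merge, rather than only the final partition, yields exactly the dendrogram used to choose the number of clusters.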