Academic literature on the topic 'Bootstrap resampling method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bootstrap resampling method.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Bootstrap resampling method"

1

Putra G, Aditio, Muhammad Arif Tiro, and Muhammad Kasim Aidid. "Metode Boostrap dan Jackknife dalam Mengestimasi Parameter Regresi Linear Ganda (Kasus: Data Kemiskinan Kota Makassar Tahun 2017)." VARIANSI: Journal of Statistics and Its application on Teaching and Research 1, no. 2 (July 12, 2019): 32. http://dx.doi.org/10.35580/variansiunm12895.

Full text
Abstract:
The ordinary least squares method is the standard method for estimating the parameter values of a linear regression model. The method is built on the assumptions that the errors are identically and independently distributed and follow a normal distribution; if these assumptions are not met, the method is not accurate. An alternative is to use resampling methods, and the resampling methods used in this study are the bootstrap and the jackknife. Regression parameters were first estimated for an analysis of poverty data for Makassar City in 2017, secondary data obtained from the BAPPEDA of Makassar City. Classical assumption tests showed that the model is not homoscedastic and the residuals are not normally distributed, so the resulting regression model cannot be relied upon. The bootstrap and jackknife methods introduced here use the R program to find the bias and standard error values. Parameter estimates of the multiple linear regression model were obtained from the bootstrap resampling method with B = 200 and B = 500 and from the delete-1 jackknife resampling method. The results of this study show that the jackknife is more efficient than the bootstrap method, as supported by the smaller standard errors and bias values it produces. Keywords: regression, resampling, bootstrap, jackknife.
APA, Harvard, Vancouver, ISO, and other styles
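The workflow described in this abstract, resampling the data to obtain bias and standard error estimates for multiple regression coefficients, can be illustrated with a short sketch. The snippet below is not the authors' R analysis of the Makassar data; it is a minimal Python illustration, on synthetic data, of the two schemes the abstract compares: a pairs (case) bootstrap with B resamples and a delete-1 jackknife.

```python
# Minimal sketch (not the authors' R code): pairs bootstrap and delete-1 jackknife
# standard errors and bias for multiple linear regression coefficients.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the data (hypothetical: n observations, 2 predictors).
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.standard_t(df=3, size=n)  # non-normal errors

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_hat = ols(X, y)

# Pairs (case) bootstrap with B resamples.
B = 500
boot = np.empty((B, 3))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = ols(X[idx], y[idx])
boot_se = boot.std(axis=0, ddof=1)
boot_bias = boot.mean(axis=0) - beta_hat

# Delete-1 jackknife.
jack = np.array([ols(np.delete(X, i, axis=0), np.delete(y, i)) for i in range(n)])
jack_mean = jack.mean(axis=0)
jack_se = np.sqrt((n - 1) / n * ((jack - jack_mean) ** 2).sum(axis=0))
jack_bias = (n - 1) * (jack_mean - beta_hat)

print("OLS estimates:      ", beta_hat)
print("Bootstrap SE / bias:", boot_se, boot_bias)
print("Jackknife SE / bias:", jack_se, jack_bias)
```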
2

S.W., Fransiska Grace, Sri Sulistijowati Handajani, and Titin Sri Martini. "Bootstrap Residual Ensemble Methods for Estimation of Standard Error of Parameter Logistic Regression To Hypercolesterolemia Patient Data In Health Laboratory Yogyakarta." Indonesian Journal of Applied Statistics 1, no. 1 (September 19, 2018): 29. http://dx.doi.org/10.13057/ijas.v1i1.24086.

Full text
Abstract:
Logistic regression is a regression analysis used to determine the relationship between a response variable with two possible values and a set of predictor variables. The method used to estimate logistic regression parameters is maximum likelihood estimation (MLE), which produces good parameter estimates when those estimates have small standard errors. In research, good data must be representative of the population; samples of small size lead to large standard error values. Bootstrap is a resampling method that can be used to obtain good estimates from small samples: the small data set is resampled so that it better represents the population and the standard error is minimized. Previous studies have discussed bootstrap resampling of residuals b times. In this research, we analyze bootstrap resampling of the errors added to the dependent variable and take the ensemble average of the parameter estimates of the logistic regression models fitted to the resampled data; we then calculate the standard errors of the bootstrapped logistic regression parameters. The method is applied to the hypercholesterolemic patient status data from the Health Laboratory Yogyakarta, and after bootstrapping the standard errors produced are smaller than before the bootstrap resampling. Keywords: logistic regression, standard error, bootstrap resampling, parameter estimation ensemble.
APA, Harvard, Vancouver, ISO, and other styles
3

Naik, Bhaven, Laurence R. Rilett, Justice Appiah, and Lubinda F. Walubita. "Resampling Methods for Estimating Travel Time Uncertainty: Application of the Gap Bootstrap." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 42 (August 23, 2018): 137–47. http://dx.doi.org/10.1177/0361198118792124.

Full text
Abstract:
To a large extent, methods of forecasting travel time have placed emphasis on the quality of the forecasted value—how close is the forecast point estimate of the mean travel time to its respective field value? However, understanding the reliability or uncertainty margin that exists around the forecasted point estimate is also important. Uncertainty about travel time is a fundamental factor as it leads end-users to change their routes and schedules even when the average travel time is low. Statistical resampling methods have been used previously for uncertainty modeling within the travel time prediction environment. This paper applies a recently developed nonparametric resampling method, the gap bootstrap, to the travel time uncertainty estimation problem, especially as it pertains to large (probe) data sets for which common resampling methods may not be practical because of the possible computational burden and complex patterns of inhomogeneity. The gap bootstrap partitions the original data into smaller groups of approximately uniform data sets and recombines individual group uncertainty estimates into a single estimate of uncertainty. Results of the gap bootstrap uncertainty estimates are compared with those of two popular resampling methods—the traditional bootstrap and the block bootstrap. The results suggest that, for the datasets used in this research, the gap bootstrap adequately captures the dependent structure when compared with the traditional and block bootstrap methods and may thus yield more credible estimates of uncertainty than either the block bootstrap method or the traditional bootstrap method.
APA, Harvard, Vancouver, ISO, and other styles
4

Mohd Noh, Muhamad Husnain, Mohd Akramin Mohd Romlay, Chuan Zun Liang, Mohd Shamil Shaari, and Akiyuki Takahashi. "Analysis of stress intensity factor for fatigue crack using bootstrap S-version finite element model." International Journal of Structural Integrity 11, no. 4 (March 16, 2020): 579–89. http://dx.doi.org/10.1108/ijsi-10-2019-0108.

Full text
Abstract:
Purpose: Failure of materials occurs once the stress intensity factor (SIF) overtakes the material fracture toughness. At this level, the crack will grow rapidly, resulting in unstable crack growth until a complete fracture happens. The SIF of a material can be calculated by experimental, theoretical and numerical techniques, and prediction of the SIF is crucial to ensure a safe life and avoid material failure. The aim of the simulation study is to evaluate the accuracy of SIF prediction using finite element analysis. Design/methodology/approach: The bootstrap resampling method is employed in the S-version finite element model (S-FEM) to generate the random variables in this simulation analysis. The SIF analysis studies are carried out with the bootstrap S-version Finite Element Model (BootstrapS-FEM). The virtual crack closure-integral method (VCCM) is an important concept for computing the energy release rate and SIF. A semielliptical crack shape is applied with different crack shape aspect ratios in this simulation analysis. The BootstrapS-FEM produces predictions of SIFs for a tension model. Findings: The mean of BootstrapS-FEM is calculated from 100 samples obtained by the resampling method, and the bounds are computed from the lower and upper bounds of the hundred BootstrapS-FEM samples. The prediction of SIFs is validated against the Newman–Raju solution and deterministic S-FEM within 95 percent confidence bounds. All possible values of the SIF estimated by BootstrapS-FEM are plotted in a graph, and the mean of the BootstrapS-FEM is taken as the point estimate. The Newman–Raju solution and deterministic S-FEM values lie within the 95 percent confidence bounds. Thus, the BootstrapS-FEM is considered valid for the prediction, with a percentage error of less than 6 percent. Originality/value: The bootstrap resampling method is employed in S-FEM to generate the random variables in this simulation analysis.
APA, Harvard, Vancouver, ISO, and other styles
5

Kashani, M., M. Arashi, and M. R. Rabiei. "Resampling in Fuzzy Regression via Jackknife-after-Bootstrap (JB)." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 29, no. 04 (August 2021): 517–35. http://dx.doi.org/10.1142/s0218488521500227.

Full text
Abstract:
In fuzzy regression modeling, when the sample size is small, resampling methods are appropriate and useful for improving model estimation. However, in the commonly used bootstrap method, the standard errors of the estimates are themselves random because of the randomness in the samples. This paper investigates the use of Jackknife-after-Bootstrap (JB) in fuzzy regression modeling to address this problem and produce estimates with smaller mean prediction errors. Performance analysis is carried out through numerical illustrations and interactive graphs to show the superiority of the JB method over the bootstrap. Moreover, it is demonstrated that, in a certain sense, the JB method yields a significant model, whereas the bootstrap method does not.
APA, Harvard, Vancouver, ISO, and other styles
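Jackknife-after-Bootstrap (JB) is usually described, following Efron, as a way to assess how variable a bootstrap quantity (such as a bootstrap standard error) is without drawing additional bootstrap samples: for each observation, only the bootstrap samples that happen to exclude it are retained. The sketch below is a generic illustration of that idea for an ordinary sample mean; it is an assumption-laden stand-in, not the fuzzy-regression implementation studied in the paper.

```python
# Illustrative jackknife-after-bootstrap (JB) sketch for an ordinary statistic:
# estimate how variable the bootstrap standard error itself is, reusing the
# bootstrap samples already drawn.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(size=30)          # hypothetical small sample
n, B = len(x), 2000

# Draw B bootstrap samples, keeping the indices so we know which cases appear where.
idx = rng.integers(0, n, size=(B, n))
theta_boot = x[idx].mean(axis=1)      # bootstrap replicates of the sample mean
se_boot = theta_boot.std(ddof=1)      # usual bootstrap SE

# For each observation i, use only the bootstrap samples that do NOT contain i
# and recompute the bootstrap SE from them.
se_minus_i = np.array([
    theta_boot[~(idx == i).any(axis=1)].std(ddof=1) for i in range(n)
])
jab_var = (n - 1) / n * ((se_minus_i - se_minus_i.mean()) ** 2).sum()

print(f"bootstrap SE = {se_boot:.4f}, JB estimate of its SE = {np.sqrt(jab_var):.4f}")
```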
6

Fitrianto, Anwar, and Punitha Linganathan. "Comparisons between Resampling Techniques in Linear Regression: A Simulation Study." CAUCHY: Jurnal Matematika Murni dan Aplikasi 7, no. 3 (October 11, 2022): 345–53. http://dx.doi.org/10.18860/ca.v7i3.14550.

Full text
Abstract:
The classic methods used to estimate the parameters of a linear regression need to fulfill some assumptions; if the assumptions are not fulfilled, the conclusions are questionable. Resampling is one way to avoid such problems. The study aims to compare resampling techniques in linear regression. The original data used in the study are clean, without any influential observations, outliers or leverage points. The ordinary least squares method was used as the primary method to estimate the parameters and was then compared with resampling techniques. The variance, p-value, bias, and standard error are used as criteria to identify the best method among the random bootstrap, residual bootstrap and delete-one jackknife. The analysis found that the random bootstrap did not perform well, while the residual bootstrap and delete-one jackknife work quite well; the random bootstrap, residual bootstrap and jackknife all estimate better than ordinary least squares. The residual bootstrap was found to work well for estimating the parameters in small samples, while the jackknife is suggested when the sample size is large, because it is easier to apply than the residual bootstrap and works well for large samples.
APA, Harvard, Vancouver, ISO, and other styles
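One of the schemes compared in this study, the residual bootstrap, refits the model to responses rebuilt from the fitted values plus resampled residuals. A minimal sketch on synthetic data (not the paper's simulation design) follows.

```python
# Minimal residual-bootstrap sketch for a simple linear regression (hypothetical
# data): resample centred residuals, rebuild responses, refit, and summarise.
import numpy as np

rng = np.random.default_rng(2)
n, B = 30, 1000
x = rng.uniform(0, 10, size=n)
y = 1.5 + 0.8 * x + rng.normal(scale=2.0, size=n)

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta_hat
resid = y - fitted
resid_c = resid - resid.mean()        # centre residuals before resampling

boot = np.empty((B, 2))
for b in range(B):
    y_star = fitted + rng.choice(resid_c, size=n, replace=True)
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]

print("OLS estimate:         ", beta_hat)
print("Residual-bootstrap SE:", boot.std(axis=0, ddof=1))
```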
7

Ivšinović, Josip, and Nikola Litvić. "Application of the bootstrap method on a large input data set - case study western part of the Sava Depression." Rudarsko-geološko-naftni zbornik 36, no. 5 (2021): 13–19. http://dx.doi.org/10.17794/rgn.2021.5.2.

Full text
Abstract:
The bootstrap method is a nonparametric statistical method that, by resampling the input data set, provides a new data set that is normally distributed. Due to various factors, deep geological data are difficult to obtain in large quantities, and in most cases they are not normally distributed. It is therefore necessary to introduce a statistical tool that makes it possible to obtain a set with which statistical analyses can be done. The bootstrap method was applied to field "A", reservoir "L", located in the western part of the Sava Depression. It was applied to the geological variable of porosity on a set of 25 data points. The minimum number of resamplings required to obtain a large, normally distributed sample is 1000. The interval estimate of porosity for reservoir "L" obtained by the bootstrap method is 0.1875 to 0.2144 at the 95% confidence level.
APA, Harvard, Vancouver, ISO, and other styles
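The interval estimate reported in this abstract is the kind of result a percentile bootstrap delivers directly. The sketch below reproduces the mechanics with 1000 resamples and a 95% level; the 25 porosity values are synthetic placeholders, not the field "A" measurements.

```python
# Percentile bootstrap confidence interval for mean porosity from a small sample,
# using 1000 resamples. The 25 values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
porosity = rng.normal(loc=0.20, scale=0.03, size=25)   # hypothetical measurements

B = 1000
boot_means = np.array([
    rng.choice(porosity, size=porosity.size, replace=True).mean() for _ in range(B)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% percentile bootstrap CI for mean porosity: ({lo:.4f}, {hi:.4f})")
```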
8

Hung, Wen-Liang, E. Stanley Lee, and Shun-Chin Chuang. "Balanced bootstrap resampling method for neural model selection." Computers & Mathematics with Applications 62, no. 12 (December 2011): 4576–81. http://dx.doi.org/10.1016/j.camwa.2011.10.039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dwornicka, Renata, Andrii Goroshko, and Jacek Pietraszek. "The Smoothed Bootstrap Fine-Tuning." System Safety: Human - Technical Facility - Environment 1, no. 1 (March 1, 2019): 716–23. http://dx.doi.org/10.2478/czoto-2019-0091.

Full text
Abstract:
The bootstrap method is a well-known method for recovering a full probability distribution from the dataset of a small sample. The simple bootstrap, i.e. resampling from the raw dataset, often leads to significant irregularities in the shape of the resulting empirical distribution due to the discontinuity of its support. The remedy for these irregularities is the smoothed bootstrap: a small random shift of the source points before each resampling. This shift is controlled by specifically selected distributions, and the key issue is choosing parameter settings for these distributions that achieve the desired characteristics of the empirical distribution. This paper describes an example of this procedure.
APA, Harvard, Vancouver, ISO, and other styles
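The smoothed bootstrap described here replaces each resampled point with the point plus a small random shift. A minimal sketch follows; the sample, the Gaussian kernel and the bandwidth choice are illustrative assumptions, not the paper's settings.

```python
# Minimal smoothed-bootstrap sketch: each resampled point receives a small Gaussian
# shift of scale h, which removes the discreteness of the plain bootstrap support.
import numpy as np

rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=1.0, size=15)   # hypothetical small sample
n, B = len(x), 5000
h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)       # Silverman-type bandwidth, one possible choice

plain = rng.choice(x, size=(B, n), replace=True)
smoothed = plain + rng.normal(scale=h, size=(B, n))

# Compare, e.g., the bootstrap distributions of the median under both schemes.
print("plain bootstrap median SE:   ", np.median(plain, axis=1).std(ddof=1))
print("smoothed bootstrap median SE:", np.median(smoothed, axis=1).std(ddof=1))
```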
10

HE, XUMING, and FEIFANG HU. "SOME RECENT ADVANCES ON BOOTSTRAP." COSMOS 01, no. 01 (May 2005): 75–86. http://dx.doi.org/10.1142/s021960770500005x.

Full text
Abstract:
The bootstrap is a computer-based resampling method that can provide good approximations to the distribution of a given statistic. We review some common forms of bootstrap-based confidence intervals, with emphasis on some recent work on the estimating function bootstrap and Markov chain marginal bootstrap.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Bootstrap resampling method"

1

Yam, Chiu Yu. "Quasi-Monte Carlo methods for bootstrap." HKBU Institutional Repository, 2000. http://repository.hkbu.edu.hk/etd_ra/272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

BAI, HAIYAN. "A NEW RESAMPLING METHOD TO IMPROVE QUALITY RESEARCH WITH SMALL SAMPLES." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1172526468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Willrich, Niklas. "Resampling-based tuning of ordered model selection." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17376.

Full text
Abstract:
In this thesis, the Smallest-Accepted method is presented as a new Lepski-type method for ordered model selection. In a first step, the method is introduced and studied in the case of estimation problems with known noise variance. The main building blocks of the method are a comparison-based acceptance criterion, which relies on Monte Carlo calibration of a set of critical values, and the choice of the model as the smallest (in complexity) accepted model. The method can be used on a broad range of estimation problems such as function estimation, estimation of linear functionals, and inverse problems. General oracle results are presented for the method in the case of probabilistic loss and for a polynomial loss function, and applications of the method to specific estimation problems are studied. In a second step, the method is extended to the case of an unknown, possibly heteroscedastic noise structure. The Monte Carlo calibration step is replaced by a bootstrap-based calibration, and a new set of critical values is introduced which depends on the (random) observations. The theoretical properties of this bootstrap-based Smallest-Accepted method are then studied. It is shown, for normal errors under typical assumptions, that replacing the Monte Carlo step by bootstrapping in the Smallest-Accepted method is valid if the underlying signal is Hölder-continuous with index s > 1/4 and log(n) (p^2/n) is small, where n is the sample size and p the maximal model dimension.
APA, Harvard, Vancouver, ISO, and other styles
4

Yazici, Ceyda. "A Computational Approach To Nonparametric Regression: Bootstrapping Cmars Method." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613708/index.pdf.

Full text
Abstract:
Bootstrapping is a resampling technique that treats the original data set as a population and draws samples from it with replacement. The technique is widely used, especially in mathematically intractable problems. In this study, it is used to obtain the empirical distributions of the parameters in order to determine whether they are statistically significant in a special case of nonparametric regression, Conic Multivariate Adaptive Regression Splines (CMARS). Here, the CMARS method, which uses conic quadratic optimization, is a modified version of a well-known nonparametric regression model, Multivariate Adaptive Regression Splines (MARS). Although it performs better with respect to several criteria, the CMARS model is more complex than the MARS model. To overcome this problem, and to improve CMARS performance further, three different bootstrapping regression methods, namely Random-X, Fixed-X and Wild Bootstrap, are applied to four data sets of different size and scale. The performances of the models are then compared using various criteria including accuracy, precision, complexity, stability, robustness and efficiency. Random-X yields more precise, accurate and less complex models, particularly for medium-size and medium-scale data, even though it is the least efficient method.
APA, Harvard, Vancouver, ISO, and other styles
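Of the three schemes named in this abstract, the wild bootstrap is the one most easily shown in a few lines: residuals stay attached to their own design points and are multiplied by random signs before refitting. The sketch below applies it to a plain linear model rather than CMARS, so it illustrates only the resampling scheme, not the thesis's models.

```python
# Minimal wild-bootstrap sketch (Rademacher weights) for a linear regression fit;
# this shows only the resampling scheme, not the CMARS model used in the thesis.
import numpy as np

rng = np.random.default_rng(5)
n, B = 40, 1000
x = rng.uniform(size=n)
y = 2.0 + 3.0 * x + x * rng.normal(size=n)     # heteroscedastic errors

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

boot = np.empty((B, 2))
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)        # Rademacher multipliers
    y_star = X @ beta_hat + resid * v          # keep each residual tied to its x_i
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]

print("wild-bootstrap SEs:", boot.std(axis=0, ddof=1))
```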
5

Huang, Yifan. "Modelling and resampling based multiple testing with applications to genetics." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1123278702.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xii, 97 p.; also includes graphics. Includes bibliographical references (p. 94-97). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
6

Critchfield, Brian L. "Statistical Methods For Kinetic Modeling Of Fischer Tropsch Synthesis On A Supported Iron Catalyst." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1670.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Thangavelu, Karthinathan. "Quantile estimation based on the almost sure central limit theorem." Doctoral thesis, [S.l.] : [s.n.], 2006. http://webdoc.sub.gwdg.de/diss/2006/thangavelu.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Albuquerque, João David Ferreira de Castro. "Classification methods applied to familial hypercholesterolemia diagnosis in pediatric age." Master's thesis, 2019. http://hdl.handle.net/10451/40378.

Full text
Abstract:
Master's thesis, Biostatistics, Universidade de Lisboa, Faculdade de Ciências, 2019.
Introduction: Familial Hypercholesterolemia (FH) is an inherited disorder of lipid metabolism characterized by increased low-density lipoprotein cholesterol (LDLc) levels. The resulting severe dyslipidemia leads to the early development of atherosclerosis, representing a major risk factor for cardiovascular disease (CVD). Early diagnosis of FH is associated with a significant reduction in CVD risk, supporting the introduction of earlier and more aggressive therapeutic measures. Different clinical criteria are available for the diagnosis of FH, although only genetic testing can confirm the diagnosis. The Simon Broome (SB) criteria for FH diagnosis are among the most frequently used in clinical settings and are based on family history, presence of physical signs, and LDLc and total cholesterol (TC) levels. When compared with genetic diagnosis results, however, the SB criteria present a high false-positive rate, which constitutes a heavy burden in terms of healthcare costs and limits access to genetic study for a larger universe of potential FH cases. Aim: The main purpose of this work was to develop alternative classification methods for FH diagnosis, based on different biochemical indicators, with improved ability to screen for FH cases compared with the SB criteria. Two models were developed for this purpose: a logistic regression (LR) model and a decision tree (DT) model. Methods: Serum concentrations of TC, LDLc, high-density lipoprotein cholesterol (HDLc), triglycerides (TG), apolipoproteins AI (apoAI) and B (apoB), and lipoprotein(a) (Lp(a)) were determined, and genetic diagnosis was performed, in a sample of 252 participants of pediatric age (2-17 years) in the Portuguese FH Study. All patients met the clinical criteria for dyslipidemia and were not under lipid-lowering medication during the evaluation period. LR and DT models were fitted to the sample data. For the LR model, two different cutoff points were defined through receiver operating characteristic (ROC) curve analysis, following the Youden index and minimum p-value (min p) methods. The DT was built based on entropy reduction (information gain) measures. A modified version of the DT method was implemented, consisting of the sequential exclusion of predictor variables as they are introduced into the model; this produces a classification rule that uses a single cutpoint per biomarker, simplifying its interpretation. Several operating characteristics (OC) were estimated for all models: accuracy (Acc), sensitivity (Se), specificity (Spe), positive predictive value (PPV) and negative predictive value (NPV). These OC were calculated from a confusion matrix, taking the molecular study results as the true disease state. The best-performing LR and DT models were compared with the SB biochemical criteria for FH diagnosis through bootstrap resampling techniques; median and mean values of the OC over 200 bootstrap samples were used to compare predictive performance. Results: The logit function of the final LR model was g(π) = −7.083 + 0.086 × LDLc − 0.041 × TG − 0.037 × apoAI. The best-performing DT model included the variables LDLc, TG, apoAI, apoB and HDLc, in descending order of importance. Among the classification methods, Acc, Spe and PPV were highest for the DT model, followed by the LR model with the cutpoint value (c) defined by the min p method (c = 0.35); the lowest values of these OC were found for the SB criteria (p < 0.01). Higher Se and NPV, on the other hand, were achieved by the SB criteria and by the LR model with the cutpoint calculated by the Youden index (c = 0.17). However, the LR model using this cutpoint achieved significantly higher Acc, Spe and NPV than the SB criteria (p < 0.01). Conclusions: Both the LR and DT models appear to be valid alternatives to the traditional clinical criteria for FH diagnosis. It seems possible to adjust the cutoff value of the LR model to obtain Se levels similar to those observed for the SB criteria, with significantly fewer false positives retained. If validated with additional data, this would indicate this method as the preferable one of the two, with a potentially important impact in terms of cost-effectiveness. By avoiding the repetition of predictor variables and providing a single cutoff value for each biomarker, the modified DT model assumes a structure that resembles classical medical criteria and can therefore be easily used in clinical practice. Although based on different methodological approaches, both the LR and DT models are able to divide the sample according to the biochemical indicators most relevant for FH diagnosis. According to both classification methods, the presence of FH is directly related to LDLc levels and inversely related to TG and apoAI concentrations, in that order of importance. The preferred classification model, as well as its specifications, may vary depending on which OC are considered most important and on the context in which the model is applied.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Bootstrap resampling method"

1

McMurry, Timothy, and Dimitris Politis. Resampling methods for functional data. Edited by Frédéric Ferraty and Yves Romain. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.013.7.

Full text
Abstract:
This article examines the current state of methodological and practical developments for resampling inference techniques in functional data analysis, paying special attention to situations where either the data and/or the parameters being estimated take values in a space of functions. It first provides the basic background and notation before discussing bootstrap results from nonparametric smoothing, taking into account confidence bands in density estimation as well as confidence bands in nonparametric regression and autoregression. It then considers the major results in subsampling and what is known about bootstraps, along with a few recent real-data applications of bootstrapping with functional data. Finally, it highlights possible directions for further research and exploration.
APA, Harvard, Vancouver, ISO, and other styles
2

Cheng, Russell. Non-Standard Parametric Statistical Inference. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.001.0001.

Full text
Abstract:
This book discusses the fitting of parametric statistical models to data samples. Emphasis is placed on (i) how to recognize situations where the problem is non-standard, when parameter estimates behave unusually, and (ii) the use of parametric bootstrap resampling methods in analysing such problems. Simple and practical model building is an underlying theme. A frequentist viewpoint based on likelihood is adopted, for which there is a well-established and very practical theory. The standard situation is where certain widely applicable regularity conditions hold. However, there are many apparently innocuous situations where standard theory breaks down, sometimes spectacularly. Most of the departures from regularity are described geometrically in the book, with mathematical detail only sufficient to clarify the non-standard nature of a problem and to allow formulation of practical solutions. The book is intended for anyone with a basic knowledge of statistical methods typically covered in a university statistical inference course who wishes to understand or study how standard methodology might fail. Simple, easy-to-understand statistical methods are presented which overcome these difficulties, and illustrated by detailed examples drawn from real applications. Parametric bootstrap resampling is used throughout for analysing the properties of fitted models, illustrating its ease of implementation even in non-standard situations. Distributional properties are obtained numerically for estimators or statistics not previously considered in the literature because their theoretical distributional properties are too hard to obtain theoretically. Bootstrap results are presented mainly graphically in the book, providing easy-to-understand demonstration of the sampling behaviour of estimators.
APA, Harvard, Vancouver, ISO, and other styles
3

Ferraty, Frédéric, and Yves Romain, eds. The Oxford Handbook of Functional Data Analysis. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.001.0001.

Full text
Abstract:
This handbook presents the state-of-the-art of the statistics dealing with functional data analysis. With contributions from international experts in the field, it discusses a wide range of the most important statistical topics (classification, inference, factor-based analysis, regression modeling, resampling methods, time series, random processes) while also taking into account practical, methodological, and theoretical aspects of the problems. The book is organised into three sections. Part I deals with regression modeling and covers various statistical methods for functional data such as linear/nonparametric functional regression, varying coefficient models, and linear/nonparametric functional processes (i.e. functional time series). Part II considers related benchmark methods/tools for functional data analysis, including curve registration methods for preprocessing functional data, functional principal component analysis, and resampling/bootstrap methods. Finally, Part III examines some of the fundamental mathematical aspects of the infinite-dimensional setting, with a focus on the stochastic background and operatorial statistics: vector-valued function integration, spectral and random measures linked to stationary processes, operator geometry, vector integration and stochastic integration in Banach spaces, and operatorial statistics linked to quantum statistics.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Bootstrap resampling method"

1

Lahiri, S. N. "Bootstrap Methods." In Resampling Methods for Dependent Data, 17–43. New York, NY: Springer New York, 2003. http://dx.doi.org/10.1007/978-1-4757-3803-2_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lahiri, S. N. "Model-Based Bootstrap." In Resampling Methods for Dependent Data, 199–220. New York, NY: Springer New York, 2003. http://dx.doi.org/10.1007/978-1-4757-3803-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lahiri, S. N. "Frequency Domain Bootstrap." In Resampling Methods for Dependent Data, 221–40. New York, NY: Springer New York, 2003. http://dx.doi.org/10.1007/978-1-4757-3803-2_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lahiri, S. N. "Comparison of Block Bootstrap Methods." In Resampling Methods for Dependent Data, 115–44. New York, NY: Springer New York, 2003. http://dx.doi.org/10.1007/978-1-4757-3803-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lahiri, S. N. "Properties of Block Bootstrap Methods for the Sample Mean." In Resampling Methods for Dependent Data, 45–71. New York, NY: Springer New York, 2003. http://dx.doi.org/10.1007/978-1-4757-3803-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Deheuvels, Paul, and Gérard Derzko. "Asymptotic Certainty Bands for Kernel Density Estimators Based upon a Bootstrap Resampling Scheme." In Statistical Models and Methods for Biomedical and Technical Systems, 171–86. Boston, MA: Birkhäuser Boston, 2008. http://dx.doi.org/10.1007/978-0-8176-4619-6_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chatterjee, Arpita, and Santu Ghosh. "A Review of Bootstrap Methods in Ranked Set Sampling." In Ranked Set Sampling Models and Methods, 171–89. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-7556-7.ch008.

Full text
Abstract:
This chapter provides a brief review of the existing resampling methods for ranked set sampling (RSS) and their implementation to construct a bootstrap confidence interval for the mean parameter. The authors present a brief comparison of these existing methods in terms of their flexibility and consistency. To construct the bootstrap confidence interval, three methods are adopted, namely the bootstrap percentile method, the bias-corrected and accelerated (BCa) method, and a method based on a monotone transformation along with a normal approximation. Usually, for the second method, the acceleration constant is computed by employing the jackknife method. The authors discuss an analytical expression for the acceleration constant, which reduces the computational burden of the bias-corrected and accelerated bootstrap method. The usefulness of the proposed methods is further illustrated by analyzing real-life data on shrubs.
APA, Harvard, Vancouver, ISO, and other styles
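The percentile and bias-corrected and accelerated (BCa) intervals mentioned in this chapter can be sketched for a simple random sample (not the ranked-set-sampling setting the chapter treats), with the acceleration constant obtained by the delete-1 jackknife as described above. The data below are synthetic.

```python
# Generic percentile and BCa bootstrap confidence intervals for a sample mean
# (simple random sample, synthetic data), with jackknife-based acceleration constant.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = rng.lognormal(size=40)                     # hypothetical skewed sample
n, B, alpha = len(x), 4000, 0.05

theta_hat = x.mean()
boot = np.array([rng.choice(x, size=n, replace=True).mean() for _ in range(B)])

# Bias-correction constant z0 and jackknife-based acceleration constant a.
z0 = norm.ppf((boot < theta_hat).mean())
jack = np.array([np.delete(x, i).mean() for i in range(n)])
d = jack.mean() - jack
a = (d ** 3).sum() / (6.0 * ((d ** 2).sum()) ** 1.5)

def adj(q):
    # BCa-adjusted quantile level.
    z = norm.ppf(q)
    return norm.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))

lo, hi = np.quantile(boot, [adj(alpha / 2), adj(1 - alpha / 2)])
print("percentile CI:", np.quantile(boot, [alpha / 2, 1 - alpha / 2]))
print(f"BCa CI:        ({lo:.3f}, {hi:.3f})")
```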
8

Rodrigues Liska, Gilberto, Luiz Alberto Beijo, Marcelo Ângelo Cirillo, Flávio Meira Borém, and Fortunato Silva de Menezes. "Intensive Computational Method Applied for Assessing Specialty Coffees by Trained and Untrained Consumers." In Confidence Regions - Applications, Tools and Challenges of Estimation [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.95234.

Full text
Abstract:
The sensory analysis of coffees assumes that a sensory panel is formed by tasters trained according to the recommendations of the American Specialty Coffee Association. However, the choice that routinely determines the preference for a coffee is made through experiments with consumers who, for the most part, have no specific ability with respect to sensory characteristics. Considering that untrained consumers, or those with only basic knowledge of specialty coffee quality, have little ability to discriminate between different sensory attributes, it is reasonable to take the highest score given by a taster. Given this, probabilistic studies considering appropriate probability distributions are necessary. To assess the uncertainty inherent in the scores given by the tasters, resampling methods such as Monte Carlo can be considered, and when the distribution of a given statistic is unknown, p-Bootstrap confidence intervals become a viable alternative. This text discusses the use of the non-parametric bootstrap resampling method, with an application to sensory analysis, using probability distributions related to the maximum scores of tasters and assessing the most frequent region (the mode) through computational resampling methods.
APA, Harvard, Vancouver, ISO, and other styles
9

Edge, M. D. "Semiparametric estimation and inference." In Statistical Thinking from Scratch, 139–64. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198827627.003.0010.

Full text
Abstract:
Nonparametric and semiparametric statistical methods assume models whose properties cannot be described by a finite number of parameters. For example, a linear regression model that assumes that the disturbances are independent draws from an unknown distribution is semiparametric—it includes the intercept and slope as regression parameters but has a nonparametric part, the unknown distribution of the disturbances. Nonparametric and semiparametric methods focus on the empirical distribution function, which, assuming that the data are really independent observations from the same distribution, is a consistent estimator of the true cumulative distribution function. In this chapter, with plug-in estimation and the method of moments, functionals or parameters are estimated by treating the empirical distribution function as if it were the true cumulative distribution function. Such estimators are consistent. To understand the variation of point estimates, bootstrapping is used to resample from the empirical distribution function. For hypothesis testing, one can either use a bootstrap-based confidence interval or conduct a permutation test, which can be designed to test null hypotheses of independence or exchangeability. Resampling methods—including bootstrapping and permutation testing—are flexible and easy to implement with a little programming expertise.
APA, Harvard, Vancouver, ISO, and other styles
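A permutation test of the kind described here can be written in a few lines. The sketch below tests a two-sample null of exchangeability (equal distributions) using the difference in means; the two samples are synthetic illustrations.

```python
# Minimal two-sample permutation test sketch: difference in means under a null
# hypothesis of exchangeability. The samples are synthetic.
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(loc=0.0, size=20)
b = rng.normal(loc=0.5, size=25)

observed = b.mean() - a.mean()
pooled = np.concatenate([a, b])
n_a, n_perm = len(a), 10000

perm_stats = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(pooled)
    perm_stats[i] = shuffled[n_a:].mean() - shuffled[:n_a].mean()

p_value = (np.abs(perm_stats) >= abs(observed)).mean()
print(f"observed difference = {observed:.3f}, two-sided permutation p = {p_value:.4f}")
```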
10

Wang, Yun, and Lee Seidman. "Risk Factors to Retrieve Anomaly Intrusion Information and Profile User Behavior." In Information Security and Ethics, 2407–21. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-937-3.ch159.

Full text
Abstract:
The use of network traffic audit data for retrieving anomaly intrusion information and profiling user behavior has been studied previously, but the risk factors associated with attacks remain unclear. This study aimed to identify a set of robust risk factors via the bootstrap resampling and logistic regression modeling methods based on the KDD-cup 1999 data. Of the 46 examined variables, 16 were identified as robust risk factors, and the classification showed similar performances in sensitivity, specificity, and correctly classified rate in comparison with the KDD-cup 1999 winning results that were based on a rule-based decision tree algorithm with all variables. The study emphasizes that the bootstrap simulation and logistic regression modeling techniques offer a novel approach to understanding and identifying risk factors for better information protection on network security.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Bootstrap resampling method"

1

Monbet, Vale´rie, Pierre Ailliot, and Marc Prevosto. "Nonlinear Simulation of Multivariate Sea State Time Series." In ASME 2005 24th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2005. http://dx.doi.org/10.1115/omae2005-67490.

Full text
Abstract:
In this paper, three nonlinear methods are described for artificially generating operational sea state histories. In the first method, referred to as the Translated Gaussian Process, the observed time series is transformed into a process which is assumed to be Gaussian; this Gaussian process is simulated and back-transformed. The second method, called the Local Grid Bootstrap, consists of a resampling algorithm for Markov chains in which the transition probabilities are estimated locally. The last model is a Markov Switching Autoregressive model, which in particular makes it possible to model different weather types.
APA, Harvard, Vancouver, ISO, and other styles
2

Tyler, James Clay, and T. Agami Reddy. "Using the Bootstrap Method to Determine Uncertainty Bounds for Change Point Utility Bill Energy Models." In ASME 2013 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/imece2013-62209.

Full text
Abstract:
Statistical inverse modeling of energy use in buildings and of HVAC&R equipment and systems has been widely researched and is fairly well ingrained in the profession. However, there are still a few nagging issues; one of them is related to the accuracy of estimated model prediction uncertainty bands at a pre-specified confidence level. This issue is important since it bears directly on the risk associated with the identified energy savings. While several papers have been published dealing with uncertainty in statistical models, the heteroscedasticity and non-Gaussian behavior of the residuals are problematic to handle using classical statistical equations of model prediction uncertainty. This paper proposes and illustrates the use of the bootstrap method as a robust and flexible alternative approach to determining uncertainty bands for change point model predictions identified from utility bills. In essence, the bootstrap method works by taking a data set and resampling it with replacement. In the case of utility bill analysis, one starts with 12 data points representing energy use for each month of the year. Such samples are repeatedly generated to produce a large number (say, m) of synthetic data sets from which m different change point models can be identified. These m models are used to make predictions at any pre-specified outdoor temperature, and the 95% (or any other) prediction interval bands can be determined non-parametrically from the m predictions. This paper fully describes and illustrates this approach along with a case study example.
APA, Harvard, Vancouver, ISO, and other styles
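The resampling loop described in this abstract, resample the 12 monthly points with replacement, refit a change point model m times, and read off percentile bands at any outdoor temperature, can be sketched as below. The three-parameter cooling change point form, the monthly data and the use of scipy.optimize.curve_fit are illustrative assumptions, not the paper's case study.

```python
# Simplified sketch: resample 12 monthly (temperature, energy) points with
# replacement, refit a three-parameter cooling change-point model each time,
# and take percentile prediction bands. Data and model form are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def change_point(T, base, slope, Tcp):
    return base + slope * np.maximum(T - Tcp, 0.0)

# Hypothetical 12 monthly mean outdoor temperatures (deg C) and energy use values.
T = np.array([2, 4, 8, 13, 18, 23, 26, 25, 21, 15, 9, 4], dtype=float)
E = change_point(T, 500.0, 40.0, 15.0) + np.random.default_rng(8).normal(scale=30.0, size=12)

rng = np.random.default_rng(9)
m = 500
T_grid = np.linspace(0, 30, 61)
preds = []
for _ in range(m):
    idx = rng.integers(0, 12, size=12)
    try:
        popt, _ = curve_fit(change_point, T[idx], E[idx],
                            p0=[500.0, 40.0, 15.0], maxfev=5000)
        preds.append(change_point(T_grid, *popt))
    except RuntimeError:
        continue                        # skip resamples where the fit fails to converge

bands = np.percentile(np.array(preds), [2.5, 97.5], axis=0)
print("95% band at T = 25 C:", bands[:, np.searchsorted(T_grid, 25.0)])
```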
3

Boufidi, Elissavet, Sergio Lavagnoli, and Fabrizio Fontaneto. "A Probabilistic Uncertainty Estimation Method for Turbulence Parameters Measured by Hot Wire Anemometry in Short Duration Wind Tunnels." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-90461.

Full text
Abstract:
A robust and complete uncertainty estimation method is developed to quantify the uncertainty of turbulence quantities measured by hot-wire anemometry at the inlet of a short-duration turbine test rig. The uncertainty is categorized into two macro-uncertainty sources: the measurement-related uncertainty (the uncertainty of each instantaneous velocity sample) and the uncertainty stemming from the statistical treatment of the time series. The former is addressed by the implementation of a Monte Carlo method. The latter, which is directly related to the duration of the acquired signal, is estimated using the moving block bootstrap method, a non-parametric resampling algorithm suitable for correlated time series. This methodology allows computing the confidence intervals of the spanwise distributions of mean velocity, turbulence intensity, length scales and other statistical moments at the inlet of the turbine test section.
APA, Harvard, Vancouver, ISO, and other styles
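The moving block bootstrap used in this paper resamples overlapping blocks of consecutive observations so that short-range dependence is preserved within each block. The sketch below applies the idea to the mean of a synthetic AR(1) series; the series and the block length are illustrative choices, not the hot-wire data.

```python
# Generic moving-block-bootstrap sketch for the mean of an autocorrelated series;
# the AR(1) series and the block length are illustrative choices.
import numpy as np

rng = np.random.default_rng(10)
n, phi = 2000, 0.8
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):                          # simple AR(1) stand-in for a correlated signal
    x[t] = phi * x[t - 1] + e[t]

block_len, B = 50, 1000
n_blocks = int(np.ceil(n / block_len))
start_max = n - block_len                      # last admissible start of an overlapping block

boot_means = np.empty(B)
for b in range(B):
    starts = rng.integers(0, start_max + 1, size=n_blocks)
    sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
    boot_means[b] = sample.mean()

print(f"sample mean = {x.mean():.3f}, moving-block bootstrap SE = {boot_means.std(ddof=1):.3f}")
```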
4

Randell, David, Yanyun Wu, Philip Jonathan, and Kevin Ewans. "Modelling Covariate Effects in Extremes of Storm Severity on the Australian North West Shelf." In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-10187.

Full text
Abstract:
Careful modelling of covariate effects is critical to reliable specification of design criteria. We present a spline-based methodology to incorporate spatial, directional, temporal and other covariate effects in extreme value models for environmental variables such as storm severity. For storm peak significant wave height events, the approach uses quantile regression to estimate a suitable extremal threshold, a Poisson process model for the rate of occurrence of threshold exceedances, and a generalised Pareto model for the size of threshold exceedances. Multidimensional covariate effects are incorporated at each stage using penalised tensor products of B-splines to give smooth model parameter variation as a function of multiple covariates. Optimal smoothing penalties are selected using cross-validation, and model uncertainty is quantified using a bootstrap resampling procedure. The method is applied to estimate return values for a large spatial neighbourhood of locations off the North West Shelf of Australia, incorporating spatial and directional effects.
APA, Harvard, Vancouver, ISO, and other styles
5

Lewis, John R., Dusty Brooks, and Michael L. Benson. "Methods for Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions." In ASME 2017 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/pvp2017-65552.

Full text
Abstract:
Weld residual stress (WRS) is a major driver of primary water stress corrosion cracking (PWSCC) in safety critical components of nuclear power plants. Accurate understanding of WRS is thus crucial for reliable prediction of safety performance of component design throughout the life of the plant. However, measurement uncertainty in WRS is significant, driven by the method and the indirect nature in which WRS must be measured. Likewise, model predictions of WRS vary due to uncertainty induced by individual modeling choices. The uncertainty in WRS measurements and modeling predictions is difficult to quantify and complicates the use of WRS measurements in validating WRS predictions for future use in safety evaluations. This paper describes a methodology for quantifying WRS uncertainty that facilitates the comparison of predictions and measurements and informs design safety evaluations. WRS is considered as a function through the depth of the weld. To quantify its uncertainty, functional data analysis techniques are utilized to account for the two types of variation observed in functional data: phase and amplitude. Phase variability, also known as horizontal variability, describes the variability in the horizontal direction (i.e., through the depth of the weld). Amplitude variability, also known as vertical variability, describes the variation in the vertical direction (i.e., magnitude of stresses). The uncertainty in both components of variability is quantified using statistical models in principal component space. Statistical confidence/tolerance bounds are constructed using statistical bootstrap (i.e., resampling) techniques applied to these models. These bounds offer a succinct quantification of the uncertainty in both the predictions and measurements as well as a method to quantitatively compare the two. Major findings show that the level of uncertainty among measurements is comparable to that among predictions and further experimental work is recommended to inform a validation effort for prediction models.
APA, Harvard, Vancouver, ISO, and other styles
6

"DEAL EFFECT CURVE AND PROMOTIONAL MODELS - Using Machine Learning and Bootstrap Resampling Test." In International Conference on Pattern Recognition Applications and Methods. SciTePress - Science and and Technology Publications, 2012. http://dx.doi.org/10.5220/0003732705370540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jekel, Charles F., and Vicente Romero. "Bootstrapping and Jackknife Resampling to Improve Sparse-Sample UQ Methods for Tail Probability Estimation." In ASME 2019 Verification and Validation Symposium. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/vvs2019-5127.

Full text
Abstract:
Tolerance Interval Equivalent Normal (TI-EN) and Superdistribution (SD) sparse-sample uncertainty quantification (UQ) methods are used for conservative estimation of small tail probabilities. These methods are used to estimate the probability of a response lying beyond a specified threshold with limited data. The study focused on sparse-sample regimes ranging from N = 2 to 20 samples, because this is reflective of most experimental and some expensive computational situations. A tail probability magnitude of 10^-4 was examined on four different distribution shapes, in order to be relevant for quantification of margins and uncertainty (QMU) problems that arise in risk and reliability analyses. In most cases the UQ methods were found to have optimal performance with a small number of samples, beyond which the performance deteriorated as samples were added. Using this observation, a generalized Jackknife resampling technique was developed to average many smaller subsamples. This improved the performance of the SD and TI-EN methods, specifically when a larger than optimal number of samples were available. A Complete Jackknifing technique, which considered all possible sub-sample combinations, was shown to perform better in most cases than an alternative Bootstrap resampling technique.
APA, Harvard, Vancouver, ISO, and other styles
8

Silva, Vander, Katerina Lukasova, and Maria Carthery Goulart. "APPLICATION OF A BATTERY OF EXECUTIVE FUNCTIONS IN HEALTHY ELDERLY: A PILOT STUDY." In XIII Meeting of Researchers on Alzheimer's Disease and Related Disorders. Zeppelini Editorial e Comunicação, 2021. http://dx.doi.org/10.5327/1980-5764.rpda106.

Full text
Abstract:
Background: Several studies demonstrate that healthy elderly people present impairments in different executive functions (for example, inhibition, updating and alternation). However, these works use tasks that measure reaction time as a dependent variable, and it is already known that processing speed decreases with age. Objective: As a consequence, this study aimed to test a battery of representative executive tests. This freely accessible battery includes 2 tests for each executive domain (inhibition, updating and alternation) and controls for the effects of processing speed, since the participants themselves regulate the stimulus presentation time (self-paced paradigm) and all responses are given verbally (thus controlling for the effect of psychomotor speed). Methods: For this pilot study, 13 healthy elderly females (M=68.23, SD=6.13) were evaluated, each performing a total of 6 executive tests. For the inferential statistical analysis, a repeated-measures t test with a bootstrap of 5,000 resamples was used. Results: We observed that in the executive blocks participants obtained fewer correct answers per unit of time than in the control blocks, demonstrating that the executive block is in fact evaluating an executive function regardless of processing speed. Conclusion: As a pilot study, this battery proved to be effective and easy to apply in an elderly population.
APA, Harvard, Vancouver, ISO, and other styles