Academic literature on the topic 'Test error estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Test error estimation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Test error estimation"

1

Wang, Haiying, Xinping Wang, Chao Wang, and Jian Xu. "Concrete Compression Test Data Estimation Based on a Wavelet Neural Network Model." Mathematical Problems in Engineering 2019 (February 11, 2019): 1–10. http://dx.doi.org/10.1155/2019/4952036.

Full text
Abstract:
First, this paper proposes a fuzzy c-means (FCM) clustering algorithm optimized by a genetic algorithm (GA) and simulated annealing (SA), developed to allow a clustering analysis of the massive concrete cube specimen compression test data. Then, using an optimized error-correction time-series estimation method based on a wavelet neural network (WNN), a concrete cube specimen compressive strength test data estimation model was constructed. Taking the results of the cluster analysis as data samples, a short-term accurate estimation of concrete quality was carried out. It was found that the mean absolute percentage error, e1, and the root mean square error, e2, for the samples were 6.03385% and 3.3682 kN, indicating that the proposed method had higher estimation accuracy and was suitable for short-term quality estimation of concrete compression test data.
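The two error metrics quoted in this abstract can be computed in a few lines. A minimal sketch (the sample arrays are invented for illustration and are not from the paper):

```python
import math

def mape(actual, predicted):
    # Mean absolute percentage error, reported in percent.
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root mean square error, in the units of the data (kN in the paper).
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical compressive-strength readings (kN) vs. model estimates.
actual = [100.0, 120.0, 95.0, 110.0]
predicted = [104.0, 114.0, 98.0, 107.0]
print(round(mape(actual, predicted), 2))  # → 3.72
print(round(rmse(actual, predicted), 2))  # → 4.18
```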
2

Oden, J. Tinsley, Serge Prudhomme, Tim Westermann, Jon Bass, and Mark E. Botkin. "Error Estimation of Eigenfrequencies for Elasticity and Shell Problems." Mathematical Models and Methods in Applied Sciences 13, no. 03 (March 2003): 323–44. http://dx.doi.org/10.1142/s0218202503002520.

Full text
Abstract:
In this paper, a method for deriving computable estimates of the approximation error in eigenvalues or eigenfrequencies of three-dimensional linear elasticity or shell problems is presented. The analysis for the error estimator follows the general approach of goal-oriented error estimation for which the error is estimated in so-called quantities of interest, here the eigenfrequencies, rather than global norms. A general theory is developed and is then applied to the linear elasticity equations. For the shell analysis, it is assumed that the shell model is not completely known and additional errors are introduced due to modeling approximations. The approach is then based on recovering three-dimensional approximations from the shell eigensolution and employing the error estimator developed for linear elasticity. The performance of the error estimator is demonstrated on several test problems.
3

Weiss, Andrew A. "Estimating Nonlinear Dynamic Models Using Least Absolute Error Estimation." Econometric Theory 7, no. 1 (March 1991): 46–68. http://dx.doi.org/10.1017/s0266466600004230.

Full text
Abstract:
We consider least absolute error estimation in a dynamic nonlinear model with neither independent nor identically distributed errors. The estimator is shown to be consistent and asymptotically normal, with asymptotic covariance matrix depending on the errors through the heights of their density functions at their medians (zero). A consistent estimator of the asymptotic covariance matrix of the estimator is given, and the Wald, Lagrange multiplier, and likelihood ratio tests for linear restrictions on the parameters are discussed. A Lagrange multiplier test for heteroscedasticity based upon the absolute residuals is analyzed. This will be useful whenever the heights of the density functions are related to the dispersions.
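As quick background for this entry (ours, not the paper's): in the pure location case, the least absolute error criterion is minimized by the sample median, which is the source of the estimator's robustness to heavy-tailed errors:

```python
def lad_location(xs):
    # argmin over c of sum |x_i - c| is the sample median.
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else 0.5 * (s[mid - 1] + s[mid])

print(lad_location([3.0, 1.0, 7.0, 2.0, 100.0]))  # → 3.0 (unaffected by the outlier 100.0)
```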
4

Wee, Seungwoo, Changryoul Choi, and Jechang Jeong. "Blind Interleaver Parameters Estimation Using Kolmogorov–Smirnov Test." Sensors 21, no. 10 (May 15, 2021): 3458. http://dx.doi.org/10.3390/s21103458.

Full text
Abstract:
The use of error-correcting codes (ECCs) is essential for designing reliable digital communication systems. Usually, most systems correct errors under cooperative environments. If receivers do not know the interleaver parameters, they must first estimate them before decoding. In this paper, a blind interleaver parameter estimation method using the Kolmogorov–Smirnov (K–S) test is proposed. We exploit the fact that the rank distributions of square matrices of linear codes differ from those of random sequences owing to the linear dependence within linear codes. We use the K–S test to decide whether two groups are drawn from the same distribution. The K–S test value is used as a measure to find the most different rank distribution for blind interleaver parameter estimation. In addition, to control false alarm rates, a multinomial distribution is used to calculate the probability that the most different rank distribution will occur. By exploiting these, we can estimate the interleaver period with relatively low complexity. Experimental results show that the proposed algorithm outperforms previous methods regardless of the bit error rate.
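The two-sample K–S statistic at the core of this method is simply the largest gap between two empirical distribution functions. A minimal sketch (the data are illustrative, not the paper's rank distributions):

```python
import bisect

def ks_statistic(x, y):
    # Max absolute difference between the empirical CDFs of the two samples.
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for p in xs + ys:
        fx = bisect.bisect_right(xs, p) / len(xs)
        fy = bisect.bisect_right(ys, p) / len(ys)
        d = max(d, abs(fx - fy))
    return d

print(ks_statistic([1, 2, 3, 4], [3, 4, 5, 6]))  # → 0.5
```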
5

Spenceley, S. E., and D. B. Henson. "Visual field test simulation and error in threshold estimation." British Journal of Ophthalmology 80, no. 4 (April 1, 1996): 304–8. http://dx.doi.org/10.1136/bjo.80.4.304.

Full text
6

Giles, David E. A. "Pre-test estimation in regression under absolute error loss." Economics Letters 41, no. 4 (January 1993): 339–43. http://dx.doi.org/10.1016/0165-1765(93)90202-n.

Full text
7

Saputri, Ovi Delviyanti, Ferra Yanuar, and Dodi Devianto. "Simulation Study The Implementation of Quantile Bootstrap Method on Autocorrelated Error." CAUCHY 5, no. 3 (December 5, 2018): 95. http://dx.doi.org/10.18860/ca.v5i3.5349.

Full text
Abstract:
Quantile regression is a regression method that separates or divides the data into certain quantiles, minimizing the sum of absolute values of asymmetrically weighted errors, in order to overcome unfulfilled assumptions, including the presence of autocorrelation. The accuracy of the resulting model parameters is tested using the bootstrap method. The bootstrap method is a parameter estimation method based on re-sampling from the original sample R times. The bootstrap confidence interval is then used to test the consistency of the estimator constructed by the quantile regression method and to assess the unbiasedness of the quantile regression method. The data in this test are replicated 10 times. Bias is calculated from the difference between the quantile estimate and the bootstrap estimate. The quantile estimation method is said to be unbiased if the bias is less than the bootstrap standard deviation. This study shows that the estimated values from quantile regression lie within the bootstrap percentile confidence interval, and that 10 replications produce a better estimation value than other replication sizes. The quantile regression method in this study is also able to produce unbiased parameter estimates.
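The percentile bootstrap described in this abstract can be sketched as follows (a generic illustration under our own assumptions, not the authors' implementation; the data and statistic are invented):

```python
import random

def bootstrap_percentile_ci(data, stat, r=1000, alpha=0.05, seed=0):
    # Resample the data r times with replacement, then take empirical
    # quantiles of the replicated statistic as the confidence limits.
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(r))
    lo = reps[int(r * alpha / 2)]
    hi = reps[int(r * (1 - alpha / 2)) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
data = [2.0, 3.0, 5.0, 7.0, 11.0]
lo, hi = bootstrap_percentile_ci(data, mean)
print(lo <= mean(data) <= hi)  # the interval covers the sample mean
```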
8

Huang, Liang, Cheng Chen, Tong Guo, and Menghui Chen. "Discrete Tangent Stiffness Estimation Method for Pseudo Dynamic Test." International Journal of Structural Stability and Dynamics 19, no. 01 (December 20, 2018): 1940014. http://dx.doi.org/10.1142/s0219455419400145.

Full text
Abstract:
In the pseudo-dynamic (PSD) test, researchers have long recognized the importance and potential benefits of utilizing the tangent stiffness of the experimental specimen to correct the restoring force and analyze the energy error. However, improving the accuracy and efficiency of instantaneous stiffness estimation still presents a challenge. Based on the theory of discrete curve parameter recognition and a geometrical analysis approach, this paper proposes a discrete tangent stiffness estimation (DTSE) method to estimate the instantaneous tangent stiffness of a single-degree-of-freedom (SDOF) experimental specimen. For different magnitudes of measurement noise, the proposed method can adaptively select and retain a series of the latest valid data and ignore outdated information, which greatly improves the accuracy and promptness of instantaneous stiffness estimation. The numerical study shows that the DTSE method has better accuracy and promptness of tangent stiffness estimation when compared with other existing methods. In a PSD test involving a sliding isolator, the DTSE method is utilized to analyze the cumulative energy error, the result of which shows that the cumulative energy error is negative and decreases gradually. The analysis of the experimental results demonstrates that the undershooting error of the actuator added extra energy into the PSD testing system. Thus, the proposed method provides a desirable solution to instantaneous stiffness estimation.
9

Qi, Gengxin, Xiaobin Fan, and Hao Li. "A comparative study of the unscented Kalman filter and particle filter estimation methods for the measurement of the road adhesion coefficient." Mechanical Sciences 13, no. 2 (August 25, 2022): 735–49. http://dx.doi.org/10.5194/ms-13-735-2022.

Full text
Abstract:
The measurement of the road adhesion coefficient is of great significance for the vehicle active safety control system and is one of the key technologies for future autonomous driving. With a focus on the problems of interference uncertainty and system nonlinearity in the estimation of the road adhesion coefficient, this work adopts a vehicle model with 7 degrees of freedom (7-DOF) and the Dugoff tire model and uses these models to estimate the road adhesion coefficient in real time based on the particle filter (PF) algorithm. The estimations using the PF algorithm are verified by selecting typical working conditions, and they are compared with estimations using the unscented Kalman filter (UKF) algorithm. Simulation results show that the road adhesion coefficient estimation error based on the UKF algorithm is less than 7 %, whereas the estimation error based on the PF algorithm is less than 0.1 %. Thus, compared with the UKF algorithm, the PF algorithm has higher accuracy and a better control effect with respect to estimating the road adhesion coefficient under different road conditions. In order to verify the robustness of the road adhesion coefficient estimator, an automobile test platform based on a four-wheel-hub-motor car is built. According to the experimental results, the estimator based on the PF algorithm can realize road surface identification with an error of less than 1 %, which verifies the feasibility and effectiveness of the algorithm with respect to estimating the road adhesion coefficient and shows good robustness.
10

Ma, Chao, Chun Jie Qiao, Yue Ke Wang, and Shen Zhao. "A New Method for Target Motion Analysis." Applied Mechanics and Materials 336-338 (July 2013): 2354–58. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.2354.

Full text
Abstract:
The paper proposes a new method for estimating a target's trajectory, assumed to be linear and uniform, based on observations of its speed and bearings. After introducing the new method based on the assumed model, the paper analyzes the relative error of range estimation caused by the relative errors of the speed and bearing estimates. The results of target motion analysis (TMA) are optimized by linearizing the model and using a Kalman filter. A pond test shows that the relative error of range estimation calculated by this method is less than 10^-1.
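For readers unfamiliar with the filtering step mentioned in this abstract, a scalar Kalman filter for a noisily observed constant can be sketched as follows (illustrative only; the paper applies the filter to a linearized bearings-and-speed TMA model):

```python
def kalman_scalar(measurements, q, r, x0=0.0, p0=1.0):
    # q: process-noise variance, r: measurement-noise variance.
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q              # predict: uncertainty grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the measurement residual
        p = (1.0 - k) * p
        out.append(x)
    return out

est = kalman_scalar([1.1, 0.9, 1.0, 1.05], q=0.0, r=0.1, p0=1e6)
print(abs(est[-1] - 1.0) < 0.05)  # the estimate settles near the true value 1.0
```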

Dissertations / Theses on the topic "Test error estimation"

1

Steeno, Gregory Sean. "Robust and Nonparametric Methods for Topology Error Identification and Voltage Calibration in Power Systems Engineering." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/39305.

Full text
Abstract:
There is a growing interest in robust and nonparametric methods with engineering applications, due to the nature of the data. Here, we study two power systems engineering applications that employ or recommend robust and nonparametric methods: topology error identification and voltage calibration. Topology errors are a well-known, well-documented problem for utility companies. A topology error occurs when a line's status in a power network, whether active or inactive, is misclassified. This will lead to an incorrect Jacobian matrix being used to estimate the unknown parameters of a network in a nonlinear regression model. We propose a solution using nonlinear regression techniques to identify the correct status of every line in the network by deriving a statistical model of the power flows and injections while employing Kirchhoff's Current Law. Simulation results on the IEEE-118 bus system showed that the methodology was able to detect where topology errors occurred as well as identify gross measurement errors. The Friedman Two-Way Analysis of Variance by Ranks test is advocated to calibrate voltage measurements at a bus in a power network. However, it was found that the Friedman test was only slightly more robust or resistant in the presence of discordant measurements than the classical F-test. The resistance of a statistical test is defined as the fraction of bad data necessary to switch a statistical conclusion. We mathematically derive the maximum resistance to rejection and to acceptance of the Friedman test, as well as the Brown-Mood test, and show that the Brown-Mood test has a higher maximum resistance to rejection and to acceptance than the Friedman test. In addition, we simulate the expected resistance to rejection and to acceptance of both tests and show that on average the Brown-Mood test is slightly more robust to rejection while on average the Friedman test is more robust to acceptance.
Ph. D.
2

Albertson, K. V. "Pre-test estimation in a regression model with a mis-specified error covariance matrix." Thesis, University of Canterbury. Economics, 1993. http://hdl.handle.net/10092/4315.

Full text
Abstract:
This thesis considers some finite sample properties of a number of preliminary test (pre-test) estimators of the unknown parameters of a linear regression model that may have been mis-specified as a result of incorrectly assuming that the disturbance term has a scalar covariance matrix, and/or as a result of the exclusion of relevant regressors. The pre-test itself is a test for exact linear restrictions and is conducted using the usual Wald statistic, which provides a Uniformly Most Powerful Invariant test of the restrictions in a well specified model. The parameters to be estimated are the coefficient vector, the prediction vector (i.e. the expectation of the dependent variable conditional on the regressors), and the regression scale parameter. Note that while the problem of estimating the prediction vector is merely a special case of estimating the coefficient vector when the model is well specified, this is not the case when the model is mis-specified. The properties of each of these estimators in a well specified regression model have been examined in the literature, as have the effects of a number of different model mis-specifications, and we survey these results in Chapter Two. We will extend the existing literature by generalising the error covariance matrix in conjunction with allowing for possibly excluded regressors. To motivate the consideration of a nonscalar error covariance matrix in the context of a pre-test situation we briefly examine the literature on autoregressive and heteroscedastic error processes in Chapter Three. In Chapters Four, Five, Six, and Seven we derive the cumulative distribution function of the test statistic, and exact formulae for the bias and risk (under quadratic loss) of the unrestricted, restricted and pre-test estimators, in a model with a general error covariance matrix and possibly excluded relevant regressors. 
These formulae are data dependent and, to illustrate the results, are evaluated for a number of regression models and forms of error covariance matrix. In particular we determine the effects of autoregressive errors and heteroscedastic errors on each of the regression models under consideration. Our evaluations confirm the known result that the presence of a non scalar error covariance matrix introduces a distortion into the pre-test power function and we show the effects of this on the pre-test estimators. In addition to this we show that one effect of the mis-specification may be that the pre-test and restricted estimators may be strictly dominated by the corresponding unrestricted estimator even if there are no relevant regressors excluded from the model. If there are relevant regressors excluded from the model it appears that the additional mis-specification of the error covariance matrix has little qualitative impact unless the coefficients on the excluded regressors are small in magnitude or the excluded regressors are not correlated with the included regressors. As one of the effects of the mis-specification is to introduce a distortion into the pre-test power function, in Chapter Eight we consider the problem of determining the optimal critical value (under the criterion of minimax regret) for the pre-test when estimating the regression coefficient vector. We show that the mis-specification of the error covariance matrix may have a substantial impact on the optimal critical value chosen for the pre-test under this criterion, although, generally, the actual size of the pre-test is relatively unaffected by increasing degrees of mis-specification. Chapter Nine concludes this thesis and provides a summary of the results obtained in the earlier chapters. In addition, we outline some possible future research topics in this general area.
3

Kutluay, Umit. "Aerodynamic Parameter Estimation Using Flight Test Data." PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613786/index.pdf.

Full text
Abstract:
This doctoral study aims to develop a methodology for use in determining aerodynamic models and parameters from actual flight test data for different types of autonomous flight vehicles. The stepwise regression method and the equation error method are utilized for aerodynamic model identification and parameter estimation. A closed-loop aerodynamic parameter estimation approach, which can be used to fine-tune the model parameters, is also applied in this study. A genetic algorithm is used as the optimization kernel for this purpose. In the optimization scheme, an input error cost function is used together with a final position penalty, as opposed to the widely utilized output error cost function. Available methods in the literature were developed for, and mostly applied to, the aerodynamic system identification problem of piloted aircraft; a very limited number of studies on autonomous vehicles are available in the open literature. This doctoral study shows the applicability of the existing methods to the aerodynamic model identification and parameter estimation problem of autonomous vehicles. Practical considerations for the application of model structure determination methods to autonomous vehicles are also not well defined in the literature, and this study serves as a guide to these considerations.
4

Jin, Fei. "Essays in Spatial Econometrics: Estimation, Specification Test and the Bootstrap." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365612737.

Full text
5

Langer, Michelle M. Thissen David. "A reexamination of Lord's Wald test for differential item functioning using item response theory and modern error estimation." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,2084.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2008.
Title from electronic title page (viewed Feb. 17, 2009). "... in partial fulfillment of the requirements for the degree of Doctor in Philosophy in the Department of Psychology Quantitative." Discipline: Psychology; Department/School: Psychology.
6

Tanner, Whitney Ford. "Improved Standard Error Estimation for Maintaining the Validities of Inference in Small-Sample Cluster Randomized Trials and Longitudinal Studies." UKnowledge, 2018. https://uknowledge.uky.edu/epb_etds/20.

Full text
Abstract:
Data arising from Cluster Randomized Trials (CRTs) and longitudinal studies are correlated, and generalized estimating equations (GEE) are a popular analysis method for correlated data. Previous research has shown that analyses using GEE could result in liberal inference due to the use of the empirical sandwich covariance matrix estimator, which can yield negatively biased standard error estimates when the number of clusters or subjects is not large. Many techniques have been presented to correct this negative bias; however, use of these corrections can still result in biased standard error estimates and thus test sizes that are not consistently at their nominal level. Therefore, there is a need for an improved correction such that nominal type I error rates will consistently result. First, GEEs are becoming a popular choice for the analysis of data arising from CRTs. We study the use of recently developed corrections for empirical standard error estimation and the use of a combination of two popular corrections. In an extensive simulation study, we find that nominal type I error rates can be consistently attained when using an average of two popular corrections developed by Mancl and DeRouen (2001, Biometrics 57, 126-134) and Kauermann and Carroll (2001, Journal of the American Statistical Association 96, 1387-1396) (AVG MD KC). Use of this new correction was found to notably outperform the use of previously recommended corrections. Second, data arising from longitudinal studies are also commonly analyzed with GEE. We conduct a simulation study, finding two methods to attain nominal type I error rates more consistently than other methods in a variety of settings: First, a recently proposed method by Westgate and Burchett (2016, Statistics in Medicine 35, 3733-3744) that specifies both a covariance estimator and degrees of freedom, and second, AVG MD KC with degrees of freedom equaling the number of subjects minus the number of parameters in the marginal model.
Finally, stepped wedge trials are an increasingly popular alternative to traditional parallel cluster randomized trials. Such trials often utilize a small number of clusters and numerous time intervals, and these components must be considered when choosing an analysis method. A generalized linear mixed model containing a random intercept and fixed time and intervention covariates is the most common analysis approach. However, the sole use of a random intercept applies assumptions that will be violated in practice. We show, using an extensive simulation study based on a motivating example and a more general design, alternative analysis methods are preferable for maintaining the validity of inference in small-sample stepped wedge trials with binary outcomes. First, we show the use of generalized estimating equations, with an appropriate bias correction and a degrees of freedom adjustment dependent on the study setting type, will result in nominal type I error rates. Second, we show the use of a cluster-level summary linear mixed model can also achieve nominal type I error rates for equal cluster size settings.
7

Lehmann, Rüdiger. "Observation error model selection by information criteria vs. normality testing." Hochschule für Technik und Wirtschaft Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-211721.

Full text
Abstract:
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, mostly the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson–Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is facilitated by a Monte Carlo approach. It turns out that the model selection by AIC has some advantages over the AD test.
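The AIC comparison described here is easy to reproduce once a log-likelihood is in hand. A minimal sketch for a zero-mean Gaussian error model with known sigma (our illustration, with invented residuals, not the author's code):

```python
import math

def gaussian_loglik(residuals, sigma):
    # Log-likelihood of the residuals under N(0, sigma^2).
    n = len(residuals)
    return (-0.5 * n * math.log(2.0 * math.pi * sigma * sigma)
            - sum(e * e for e in residuals) / (2.0 * sigma * sigma))

def aic(loglik, k):
    # Akaike information criterion: 2k - 2 ln L; lower is better.
    return 2.0 * k - 2.0 * loglik

residuals = [0.1, -0.2, 0.05, 0.15]
print(round(aic(gaussian_loglik(residuals, 0.15), 1), 3))
```

Comparing this value across candidate error models (Gaussian, Laplace, and so on) fitted to the same residuals is the selection rule the abstract refers to.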
8

Lehmann, Rüdiger. "Observation error model selection by information criteria vs. normality testing." Hochschule für Technik und Wirtschaft Dresden, 2015. https://htw-dresden.qucosa.de/id/qucosa%3A23301.

Full text
Abstract:
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, mostly the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is facilitated by a Monte Carlo approach. It turns out that the model selection by AIC has some advantages over the AD test.
9

Liu, Yu. "Estimation, Decision and Applications to Target Tracking." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1758.

Full text
Abstract:
This dissertation mainly consists of three parts. The first part proposes generalized linear minimum mean-square error (GLMMSE) estimation for nonlinear point estimation. The second part proposes a recursive joint decision and estimation (RJDE) algorithm for joint decision and estimation (JDE). The third part analyzes the performance of the sequential probability ratio test (SPRT) when the log-likelihood ratios (LLR) are independent but not identically distributed. The linear minimum mean-square error (LMMSE) estimation plays an important role in nonlinear estimation. It searches for the best estimator in the set of all estimators that are linear in the measurement. A GLMMSE estimation framework is proposed in this dissertation. It employs a vector-valued measurement transform function (MTF) and finds the best estimator among all estimators that are linear in the MTF. Several design guidelines for the MTF, based on a numerical example, are provided. A RJDE algorithm based on a generalized Bayes risk is proposed in this dissertation for dynamic JDE problems. It is computationally efficient for dynamic problems where data are made available sequentially. Further, since existing performance measures for estimation or decision alone are not effective for evaluating JDE algorithms, a joint performance measure is proposed for JDE algorithms for dynamic problems. The RJDE algorithm is demonstrated by applications to joint tracking and classification as well as joint tracking and detection in target tracking. The characteristics and performance of the SPRT are characterized by two important functions: the operating characteristic (OC) and the average sample number (ASN). These two functions have been studied extensively under the assumption of independent and identically distributed (i.i.d.) LLRs, which is too stringent for many applications. This dissertation relaxes the requirement of identical distribution. Two inductive equations governing the OC and ASN are developed.
Unfortunately, they have non-unique solutions in the general case. They do have unique solutions in two special cases: (a) the LLR sequence converges in distributions and (b) the LLR sequence has periodic distributions. Further, the analysis can be readily extended to evaluate the performance of the truncated SPRT and the cumulative sum test.
10

Lauer, Peccoud Marie-Reine. "Méthodes statistiques pour le controle de qualité en présence d'erreurs de mesure." Université Joseph Fourier (Grenoble), 1997. http://www.theses.fr/1997NICE5136.

Full text
Abstract:
When one seeks to control the quality of a set of parts from noisy measurements of a characteristic of those parts, decision errors detrimental to quality can be made. It is therefore essential to control the risks incurred in order to guarantee the final quality of the delivered batch. We consider a part to be defective or not according to whether the corresponding value g of the characteristic is above or below a given value g0. We assume that, owing to the imperfection of the measuring instrument, the measurement m obtained for this value is of the form f(g) + ε, where f is an increasing function such that the value f(g0) is known, and ε is a zero-mean random error with given variance. We first examine the problem of setting a rejection threshold such that, when sorting a batch of parts by accepting or rejecting each one according to whether its measurement falls below or above this threshold, a given quality objective for the sorted batch is met. We then consider the problem of testing the overall quality of a batch on the basis of measurements for a sample of parts drawn from the batch. For both types of problem, with various quality objectives, we propose solutions, focusing on the case where the function f is affine and the error ε and the variable g are Gaussian. Simulation results make it possible to assess the performance of the proposed control procedures and their robustness to departures from the assumptions used in the theoretical developments.

Books on the topic "Test error estimation"

1

Spray, Judith A. The effect of item parameter estimation error on decisions made using the sequential probability ratio test. Iowa City, Iowa: American College Testing Program, 1988.

Find full text
2

Maine, Richard E. Application of parameter estimation to highly unstable aircraft. Edwards, Calif: National Aeronautics and Space Administration, Ames Research Center, Dryden Flight Research Facility, 1986.

Find full text
3

Hardin, James W., ed. Common Errors in Statistics (and How to Avoid Them). Hoboken, NJ: Wiley-Interscience, 2003.

Find full text
4

Babeshko, Lyudmila, and Irina Orlova. Econometrics and econometric modeling in Excel and R. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1079837.

Full text
Abstract:
The textbook includes topics of modern econometrics that are often used in economic research. Some aspects of multiple regression models related to the problem of multicollinearity and models with a discrete dependent variable are considered, including methods for their estimation, analysis, and application. A significant place is given to the analysis of models of one-dimensional and multidimensional time series. Modern ideas about the deterministic and stochastic nature of the trend are considered. Methods of statistical identification of the trend type are studied. Attention is paid to the evaluation, analysis, and practical implementation of Box–Jenkins stationary time series models, as well as multidimensional time series models: vector autoregressive models and vector error correction models. It includes basic econometric models for panel data that have been widely used in recent decades, as well as formal tests for selecting models based on their hierarchical structure. Each section provides examples of evaluating, analyzing, and testing models in the R software environment. Meets the requirements of the Federal state educational standards of higher education of the latest generation. It is addressed to master's students studying in the field of economics whose curriculum includes the disciplines "Econometrics (advanced course)", "Econometric modeling", and "Econometric research", as well as to graduate students.
APA, Harvard, Vancouver, ISO, and other styles
5

United States. National Aeronautics and Space Administration., ed. A PC program for estimating measurement uncertainty for aeronautics test instrumentation. [Washington, DC]: National Aeronautics and Space Administration, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Good, Phillip I., and James W. Hardin. Common Errors in Statistics: (and How to Avoid Them). Wiley-Interscience, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

BOENO, D., and R. P. MULAZZANI. Teste de infiltração: estimativa de erro por fluxo lateral em medição com duplo anel concêntrico. Dialética, 2022. http://dx.doi.org/10.48021/978-65-252-2740-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ślusarski, Marek. Metody i modele oceny jakości danych przestrzennych. Publishing House of the University of Agriculture in Krakow, 2017. http://dx.doi.org/10.15576/978-83-66602-30-4.

Full text
Abstract:
The quality of data collected in official spatial databases is crucial for making strategic decisions as well as for the implementation of planning and design works. Awareness of the quality level of these data is also important for individual users of official spatial data. The author presents methods and models for describing and evaluating the quality of spatial data collected in public registers. Data describing space at the highest level of detail, collected in three databases, were analyzed: the land and buildings registry (EGiB), the geodetic registry of the land infrastructure network (GESUT), and the database of topographic objects (BDOT500). The research addressed selected aspects of spatial data quality: assessment of the accuracy of data collected in official spatial databases; determination of the uncertainty of the area of registry parcels; analysis of the risk of damage to underground infrastructure networks caused by poor spatial data quality; construction of a quality model for data collected in official databases; and visualization of uncertainty in spatial data. The evaluation of the accuracy of data collected in official, large-scale spatial databases was based on a representative sample. The test sample was a set of coordinate deviations with three variables, dX, dY, and dl: the deviations of the X and Y coordinates, and the length of the offset vector of each test point relative to its position taken as error-free. The compatibility of the empirical accuracy distributions with models (theoretical distributions of random variables) was investigated, and the accuracy of the spatial data was also assessed by methods resistant to outliers.
In determining the accuracy of spatial data collected in public registers, the author's own solution was used: a resistant method of relative frequency. Weight functions were proposed that modify, to varying degrees, the lengths of the offset vectors dl of the test points relative to their error-free positions. Regarding the uncertainty of registry parcel areas, the study determined the impact of errors in the geodetic network points (reference points and higher-class networks), and the effect of correlation between the coordinates of the same point, on the accuracy of the computed parcel area. The scope of corrections to parcel areas in the EGiB database, calculated from re-measurements performed with techniques of equivalent accuracy, was determined. The analysis of the risk of damage to underground infrastructure networks due to low spatial data quality is another research topic presented in the work. Three main factors influencing this risk were identified: incompleteness of spatial data sets, and insufficient accuracy in determining the horizontal and the vertical positions of underground infrastructure. A method for estimating the project risk, both quantitative and qualitative, was developed, and the author's risk-estimation technique based on fuzzy logic was proposed. Maps (2D and 3D) of the risk of damage to underground infrastructure networks were developed as large-scale thematic maps presenting the design risk in qualitative and quantitative form. A data quality model is a set of rules used to describe the quality of data sets. The proposed model defines a standardized approach for assessing and reporting the quality of the EGiB, GESUT, and BDOT500 spatial databases.
Quantitative and qualitative rules (automatic, office, and field) for controlling data sets were defined. The minimum sample size and the number of admissible nonconformities in random samples were determined. The data quality elements were described using the following descriptors: range, measure, result, and type and unit of value. Data quality studies were performed according to user needs. The values of impact weights were determined by the analytic hierarchy process (AHP). The harmonization of the conceptual models of the EGiB, GESUT, and BDOT500 databases with the BDOT10k database was also analyzed. It was found that downloading and supplying information from the analyzed registers in the BDOT10k creation and update processes is limited. Cartographic visualization techniques are an effective approach to informing users of spatial data sets about data uncertainty. Based on the author's own experience and research on the quality of official spatial databases, a set of methods for visualizing the uncertainty of the EGiB, GESUT, and BDOT500 databases was defined. This set includes visualization techniques designed to present three types of uncertainty: location, attribute value, and time. Positional uncertainty was described (for surface, line, and point objects) using several (three to five) visual variables. Attribute-value uncertainty and temporal uncertainty, describing for example the completeness or timeliness of data sets, are presented by means of three graphical variables. The research problems presented in the work are of both cognitive and applied importance. They demonstrate the possibility of effectively evaluating the quality of spatial data collected in public registers and may form an important element of an expert system.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Test error estimation"

1

Anjum, Muhammad, Moizzah Asif, and Jonathan Williams. "Towards an Optimal Deep Neural Network for SOC Estimation of Electric-Vehicle Lithium-Ion Battery Cells." In Springer Proceedings in Energy, 11–18. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63916-7_2.

Full text
Abstract:
This paper identifies a minimal configuration of a DNN architecture and hyperparameter settings to effectively estimate the SOC of EV battery cells. The results of the experimental work show that a minimal configuration of hidden layers and neurons can reduce the computational cost and resources required without compromising performance. This is further supported by the number of epochs taken to train the best DNN SOC estimation model, demonstrating that the risk of overfitting estimation models to training datasets can also be reduced. The generalisation capability of the best model is demonstrated through the decrease in error-metric values from the test phase to the validation phase.
APA, Harvard, Vancouver, ISO, and other styles
2

Taddia, Yuri, Luca Ercolin, and Alberto Pellegrinelli. "A Low-Cost GNSS Prototype for Tracking Runners in RTK Mode: Comparison with Running Watch Performance." In Communications in Computer and Information Science, 233–45. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94426-1_17.

Full text
Abstract:
GNSS positioning is widely used in every kind of application. Nowadays, low-cost GNSS modules make it possible to apply the Real-Time Kinematic (RTK) mode in applications where centimeter-level accuracy is desired for precise positioning. In this work, we developed a prototype for collecting data in RTK mode with a single-frequency, multi-constellation device during physical tests performed by a professional runner. Before doing so, we assessed the accuracy of estimating the distance actually covered while walking along a marked line. We also verified the capability to detect short sprints of about 12-15 s. Finally, we compared the results of our prototype with a Polar M430 running watch during three Cooper tests and a Kosmin test. The comparison highlighted that the running watch systematically overestimated the total distance and did not describe the performance of the athlete accurately in time. The distance overestimation was +4.7% on average for the running watch, whereas our prototype exhibited an error level of about 0.1%.
APA, Harvard, Vancouver, ISO, and other styles
3

Branicki, Michal, Nan Chen, and Andrew J. Majda. "Non-Gaussian Test Models for Prediction and State Estimation with Model Errors." In Partial Differential Equations: Theory, Control and Approximation, 99–138. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-41401-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhu, J. Z. "Further Tests on the Derivative Recovery Technique and a Posteriori Error Estimator." In The finite element method in the 1990’s, 595–604. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-662-10326-5_61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pillonetto, Gianluigi, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao, and Lennart Ljung. "Classical System Identification." In Regularized System Identification, 17–31. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95860-2_2.

Full text
Abstract:
System identification as a field has been around since the 1950s, with roots in statistical theory. A substantial body of concepts, theory, algorithms, and experience has been developed since then. Indeed, there is a very extensive literature on the subject, with many textbooks, like [5, 8, 12]. Some main points of this "classical" field are summarized in this chapter, pointing to the basic structure of the problem area. The problem centres around four main pillars: (1) the observed data from the system, (2) a parametrized set of candidate models, "the model structure", (3) an estimation method that fits the model parameters to the observed data, and (4) a validation process that helps in deciding on the choice of model structure. The crucial choice is that of the model structure. The archetypical choice for linear models is the ARX model, a linear difference equation between the system's input and output signals. This is a universal approximator for linear systems: for sufficiently high orders of the equations, arbitrarily good descriptions of the system are obtained. For a "good" model, proper choices of structural parameters, like the equation orders, are required. An essential part of the classical theory deals with asymptotic quality measures, bias and variance, that aim at giving the best mean square error between the model and the true system. Some of this theory is reviewed in this chapter for estimation methods of the maximum-likelihood character.
APA, Harvard, Vancouver, ISO, and other styles
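The estimation step this abstract describes, fitting an ARX model's parameters to observed input-output data, can be sketched in a few lines. The example below is a hypothetical toy case, not code from the book: the system, the signals, and the function name are invented for illustration, and it fits a first-order ARX model by ordinary least squares.

```python
# Minimal sketch (assumed example): fit y[t] = -a*y[t-1] + b*u[t-1] + e[t]
# by least squares, the archetypical "classical" identification step.

def fit_arx1(u, y):
    """Solve the 2x2 normal equations for the parameter pair (-a, b)."""
    # Regressor at time t: phi[t] = (y[t-1], u[t-1]); target: y[t]
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(y)):
        p1, p2 = y[t - 1], u[t - 1]
        s11 += p1 * p1
        s12 += p1 * p2
        s22 += p2 * p2
        r1 += p1 * y[t]
        r2 += p2 * y[t]
    det = s11 * s22 - s12 * s12
    th1 = (s22 * r1 - s12 * r2) / det  # estimate of -a
    th2 = (s11 * r2 - s12 * r1) / det  # estimate of b
    return th1, th2

# Simulate a noiseless first-order system with -a = 0.7 and b = 2.0,
# driven by a sparse impulse input.
u = [1.0 if t % 5 == 0 else 0.0 for t in range(50)]
y = [0.0]
for t in range(1, 50):
    y.append(0.7 * y[t - 1] + 2.0 * u[t - 1])

th1, th2 = fit_arx1(u, y)  # noiseless data, so the fit recovers (0.7, 2.0)
```

With noisy data the recovered parameters would only approximate the true values, which is exactly where the bias/variance trade-off discussed in the chapter enters.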
6

Zhong, Ziyuan, Yuchi Tian, and Baishakhi Ray. "Understanding Local Robustness of Deep Neural Networks under Natural Variations." In Fundamental Approaches to Software Engineering, 313–37. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71500-7_16.

Full text
Abstract:
Deep Neural Networks (DNNs) are being deployed in a wide range of settings today, from safety-critical applications like autonomous driving to commercial applications involving image classification. However, recent research has shown that DNNs can be brittle to even slight variations of the input data. Therefore, rigorous testing of DNNs has gained widespread attention. While DNN robustness under norm-bound perturbations has received significant attention over the past few years, our knowledge is still limited when it comes to natural variants of the input images. These natural variants, e.g., a rotated or a rainy version of the original input, are especially concerning, as they can occur naturally in the field without any active adversary and may lead to undesirable consequences. Thus, it is important to identify the inputs whose small variations may lead to erroneous DNN behaviors. The very few studies that looked at DNN robustness under natural variants, however, focus on estimating the overall robustness of DNNs across all the test data rather than localizing such error-producing points. This work aims to bridge this gap. To this end, we study the local per-input robustness properties of DNNs and leverage those properties to build a white-box (DeepRobust-W) and a black-box (DeepRobust-B) tool to automatically identify non-robust points. Our evaluation of these methods on three DNN models spanning three widely used image classification datasets shows that they are effective in flagging points of poor robustness. In particular, DeepRobust-W and DeepRobust-B are able to achieve an F1 score of up to 91.4% and 99.1%, respectively. We further show that DeepRobust-W can be applied to a regression problem in a domain beyond image classification. Our evaluation on three self-driving car models demonstrates that DeepRobust-W is effective in identifying points of poor robustness, with an F1 score of up to 78.9%.
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, He, Yixin He, Longpeng Zhang, Zhicheng Zeng, Tu Ouyang, and Zhimin Zeng. "Leveraging Modern Big Data Stack for Swift Development of Insights into Social Developments." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 325–33. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_34.

Full text
Abstract:
Insights into social development, presented in various forms such as metrics, figures, and text summaries, aim to summarize, explain, and predict the situation and trends of society. They are extremely useful in guiding organizations and individuals to better realize their own objectives in accordance with the whole of society. Deriving these insights accurately and swiftly has become an interest for a range of organizations: agencies governing districts, cities, or even the whole country, which use these insights to inform policy-making; business investors, who look into statistical numbers to estimate current economic conditions and future trends; and even individuals, who can consult some of these insights to better align themselves with macroscopic social trends. There are many challenges to developing these insights in a data-driven way. First, the required data come from a large number of heterogeneous sources in a variety of formats. A single source's data can range in size from hundreds of gigabytes to several terabytes, and ingesting and governing such a huge amount of data is no small challenge. Second, many complex insights are derived by human domain experts in a trial-and-error fashion while interacting with data with the aid of computer algorithms. Quickly experimenting with various algorithms requires software capabilities that fuse human experts and machine intelligence together; this is challenging but critical for success. By designing and implementing a flexible big data stack that can bring in a variety of data components, we address some of the challenges of fusing data, computer algorithms, and humans together at Zilian Tech [20]. In this paper we present the architecture of our data stack and articulate some of the important technical choices made when building such a stack.
The stack is designed with scalable storage that can scale up to petabytes, as well as an elastic distributed compute engine with parallel computing algorithms. With these features, the data stack enables (a) swift data analysis, with human analysts interacting with data and machine algorithms via software support, reducing on-demand question-answering time from days to minutes; and (b) agile building of data products for end users to interact with, in weeks if not days rather than months.
APA, Harvard, Vancouver, ISO, and other styles
8

Edge, M. D. "Parametric estimation and inference." In Statistical Thinking from Scratch, 165–85. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198827627.003.0011.

Full text
Abstract:
If it is reasonable to assume that the data are generated by a fully parametric model, then maximum-likelihood approaches to estimation and inference have many appealing properties. Maximum-likelihood estimators are obtained by identifying parameters that maximize the likelihood function, which can be done using calculus or using numerical approaches. Such estimators are consistent, and if the costs of errors in estimation are described by a squared-error loss function, then they are also efficient compared with their consistent competitors. The sampling variance of a maximum-likelihood estimate can be estimated in various ways. As always, one possibility is the bootstrap. In many models, the variance of the maximum-likelihood estimator can be derived directly once its form is known. A third approach is to rely on general properties of maximum-likelihood estimators and use the Fisher information. Similarly, there are many ways to test hypotheses about parameters estimated by maximum likelihood. This chapter discusses the Wald test, which relies on the fact that the sampling distribution of maximum-likelihood estimators is normal in large samples, and the likelihood-ratio test, which is a general approach for testing hypotheses relating nested pairs of models.
APA, Harvard, Vancouver, ISO, and other styles
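The pipeline this chapter describes, a maximum-likelihood estimate followed by a Wald test, can be sketched for the simplest case, a Bernoulli proportion. This is a hypothetical stand-alone example, not code from the chapter; the function name and the numbers are invented for illustration.

```python
import math

def wald_test_bernoulli(successes, n, p0):
    """ML estimate of a Bernoulli proportion and a Wald test of H0: p = p0.

    The MLE is the sample proportion; its large-sample variance is
    p_hat * (1 - p_hat) / n, which gives the familiar z-statistic.
    """
    p_hat = successes / n                        # maximum-likelihood estimate
    se = math.sqrt(p_hat * (1 - p_hat) / n)      # estimated standard error
    z = (p_hat - p0) / se                        # Wald statistic, ~N(0,1) under H0
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail probability
    return p_hat, z, p_value

# 60 successes in 100 trials against H0: p = 0.5
p_hat, z, p = wald_test_bernoulli(60, 100, 0.5)
```

The likelihood-ratio test mentioned in the abstract would instead compare the maximized log-likelihoods of the nested models and refer twice their difference to a chi-squared distribution.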
9

Diz Felipe, Ane, Andreas Ziegl, Dieter Hayn, and Günter Schreier. "Development of Algorithms for Automated Timed Up-and-Go Test Subtask and Step Frequency Analysis." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti210935.

Full text
Abstract:
Frailty is one of the major problems associated with an aging society. Therefore, frailty assessment tools that support early detection and autonomous monitoring of frailty status are urgently needed. One of the most widely used tests for functional assessment of the elderly is the "Timed Up-and-Go" test. In previous projects, we developed an ultrasound-based device that enables performing the test autonomously. This paper describes the development and validation of algorithms for the detection of subtasks (stand up, walk, turn around, walk, sit down) and for step-frequency estimation from Timed Up-and-Go signals. The algorithms were tested with an annotated test set recorded from 8 healthy subjects. The mean error of the developed subtask transition detection algorithms was between 0.22 and 0.35 s. The mean step-frequency error was 0.15 Hz. Future steps will include prospective evaluation of the algorithms with elderly people.
APA, Harvard, Vancouver, ISO, and other styles
10

Sharma, Shivam, and Sudhir Kumar Mishra. "Electricity Demand Estimation Using ARIMA Forecasting Model." In Advances in Transdisciplinary Engineering. IOS Press, 2023. http://dx.doi.org/10.3233/atde221331.

Full text
Abstract:
The aim of this study is to estimate future electricity demand for domestic and commercial purposes. With rising demand for power at household and industrial levels, it is more critical than ever to estimate future electricity needs so that future demand can be met. In this paper, the ARIMA forecasting model, combined with machine learning techniques, is presented for electricity demand forecasting. Time series decomposition is used to understand the data and split it into training and test sets. The ARIMA model is also compared to some similar models, and the benefits of using the ARIMA model are discussed. The results of this study show that the ARIMA model can be used to forecast electricity demand, with low train and test error values of 0.10 and 0.04, respectively.
APA, Harvard, Vancouver, ISO, and other styles
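The train/test evaluation reported in this abstract can be mimicked with a minimal stand-in model. The sketch below uses an ARIMA(0,1,0)-with-drift forecaster (a random walk with drift, i.e., the series is differenced once and the mean difference is the drift) rather than the paper's full ARIMA model; the synthetic data, the split, and the error values are invented for illustration.

```python
import math

# Minimal sketch (not the paper's code): difference the series once ("I"),
# estimate the drift on the training window, then compare one-step-ahead
# train/test root-mean-square errors, mirroring the evaluation style above.

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Synthetic demand-like series: linear trend plus a spike every 7th step.
data = [100 + 1.5 * t + (3 if t % 7 == 0 else 0) for t in range(48)]
train, test = data[:36], data[36:]

# Drift = mean of first differences over the training window.
drift = sum(train[t] - train[t - 1] for t in range(1, len(train))) / (len(train) - 1)

# One-step-ahead forecast: previous observation plus estimated drift.
train_pred = [train[t - 1] + drift for t in range(1, len(train))]
prev_test = [train[-1]] + test          # each test point is predicted from its predecessor
test_pred = [prev_test[i] + drift for i in range(len(test))]

train_rmse = rmse(train[1:], train_pred)
test_rmse = rmse(test, test_pred)
```

A real ARIMA(p,d,q) fit would additionally estimate autoregressive and moving-average terms, but the split-fit-score loop shown here is the same.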

Conference papers on the topic "Test error estimation"

1

Mirkhani, Shahrzad, Subhasish Mitra, Chen-Yong Cher, and Jacob Abraham. "Efficient Soft Error Vulnerability Estimation of Complex Designs." In Design, Automation and Test in Europe. New Jersey: IEEE Conference Publications, 2015. http://dx.doi.org/10.7873/date.2015.0367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sargolzaie, M. H., M. Semsarzadeh, M. R. Hashemi, and Z. Navabi. "Low cost error tolerant motion estimation for H.264/AVC standard." In Test Symposium (EWDTS). IEEE, 2010. http://dx.doi.org/10.1109/ewdts.2010.5742102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vijayan, Arunkumar, Abhishek Koneru, Mojtaba Ebrahimit, Krishnendu Chakrabarty, and Mehdi B. Tahoori. "Online soft-error vulnerability estimation for memory arrays." In 2016 IEEE 34th VLSI Test Symposium (VTS). IEEE, 2016. http://dx.doi.org/10.1109/vts.2016.7477301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Iizuka, Shoichi, Yutaka Masuda, Masanori Hashimoto, and Takao Onoye. "Stochastic timing error rate estimation under process and temporal variations." In 2015 IEEE International Test Conference (ITC). IEEE, 2015. http://dx.doi.org/10.1109/test.2015.7342404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ghasemazar, Amin, and Mieszko Lis. "Gaussian mixture error estimation for approximate circuits." In 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2017. http://dx.doi.org/10.23919/date.2017.7927004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Brum, J. P., T. Kraemer Sartori, J. Lin, M. Garay Trindade, H. Fourati, R. Velazco, and R. Possamai Bastos. "Evaluation of Attitude Estimation Algorithm under Soft Error Effects." In 2021 IEEE 22nd Latin American Test Symposium (LATS). IEEE, 2021. http://dx.doi.org/10.1109/lats53581.2021.9651794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Katz, A., and R. Becker. "THERMAL CONDUCTIVITY TEST OF MOIST MATERIALS -ESTIMATION OF ERROR." In International Heat Transfer Conference 9. Connecticut: Begellhouse, 1990. http://dx.doi.org/10.1615/ihtc9.280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hong, Dongwoo, and Kwang-ting Cheng. "Bit Error Rate Estimation for Improving Jitter Testing of High-Speed Serial Links." In 2006 IEEE International Test Conference. IEEE, 2006. http://dx.doi.org/10.1109/test.2006.297723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guin, Ujjwal, and Chen-Huan Chiang. "Design for Bit Error Rate estimation of high speed serial links." In 2011 IEEE VLSI Test Symposium (VTS). IEEE, 2011. http://dx.doi.org/10.1109/vts.2011.5783734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Seong, Sang. "A Compensation Method for Setting Misalignment Error in Gyroscope Deterministic Error Estimation Test." In 2006 SICE-ICASE International Joint Conference. IEEE, 2006. http://dx.doi.org/10.1109/sice.2006.314867.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Test error estimation"

1

Gunay, Selim, Fan Hu, Khalid Mosalam, Arpit Nema, Jose Restrepo, Adam Zsarnoczay, and Jack Baker. Blind Prediction of Shaking Table Tests of a New Bridge Bent Design. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, November 2020. http://dx.doi.org/10.55461/svks9397.

Full text
Abstract:
Considering the importance of the transportation network and bridge structures, the associated seismic design philosophy is shifting from the basic collapse prevention objective to maintaining functionality on the community scale in the aftermath of moderate to strong earthquakes (i.e., resiliency). In addition to performance, the associated construction philosophy is also being modernized, with the utilization of accelerated bridge construction (ABC) techniques to reduce impacts of construction work on traffic, society, economy, and on-site safety during construction. Recent years have seen several developments towards the design of low-damage bridges and ABC. According to the results of conducted tests, these systems have significant potential to achieve the intended community resiliency objectives. Taking advantage of such potential in the standard design and analysis processes requires proper modeling that adequately characterizes the behavior and response of these bridge systems. To evaluate the current practices and abilities of the structural engineering community to model this type of resiliency-oriented bridges, the Pacific Earthquake Engineering Research Center (PEER) organized a blind prediction contest of a two-column bridge bent consisting of columns with enhanced response characteristics achieved by a well-balanced contribution of self-centering, rocking, and energy dissipation. The parameters of this blind prediction competition are described in this report, and the predictions submitted by different teams are analyzed. In general, forces are predicted better than displacements. The post-tension bar forces and residual displacements are predicted with the best and least accuracy, respectively. Some of the predicted quantities are observed to have coefficient of variation (COV) values larger than 50%; however, in general, the scatter in the predictions amongst different teams is not significantly large. 
Applied ground motions (GM) in shaking table tests consisted of a series of naturally recorded earthquake acceleration signals, where GM1 is found to be the largest contributor to the displacement error for most of the teams, and GM7 is the largest contributor to the force (hence, the acceleration) error. The large contribution of GM1 to the displacement error is due to the elastic response in GM1 and the errors stemming from the incorrect estimation of the period and damping ratio. The contribution of GM7 to the force error is due to the errors in the estimation of the base-shear capacity. Several teams were able to predict forces and accelerations with only moderate bias. Displacements, however, were systematically underestimated by almost every team. This suggests that there is a general problem either in the assumptions made or the models used to simulate the response of this type of bridge bent with enhanced response characteristics. Predictions of the best-performing teams were consistently and substantially better than average in all response quantities. The engineering community would benefit from learning details of the approach of the best teams and the factors that caused the models of other teams to fail to produce similarly good results. Blind prediction contests provide: (1) very useful information regarding areas where current numerical models might be improved; and (2) quantitative data regarding the uncertainty of analytical models for use in performance-based earthquake engineering evaluations. Such blind prediction contests should be encouraged for other experimental research activities and are planned to be conducted annually by PEER.
APA, Harvard, Vancouver, ISO, and other styles
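The report's headline dispersion measure, the coefficient of variation (COV) of the predictions across teams, is straightforward to reproduce. The sketch below is illustrative only: the prediction values are invented, not taken from the contest.

```python
import statistics

def cov_percent(values):
    """Coefficient of variation: sample standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical peak-displacement predictions (mm) from six different teams.
predictions = [48.0, 55.0, 61.0, 39.0, 70.0, 52.0]
cov = cov_percent(predictions)  # scatter amongst teams, in percent
```

A COV above 50%, as observed for some response quantities in the contest, would indicate that the teams' models disagree substantially on that quantity.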
2

Al-Qadi, Imad, Qingqing Cao, Lama Abufares, Siqi Wang, Uthman Mohamed Ali, and Greg Renshaw. Moisture Content and In-place Density of Cold-Recycling Treatments. Illinois Center for Transportation, May 2022. http://dx.doi.org/10.36501/0197-9191/22-007.

Full text
Abstract:
Cold-recycling treatments are gaining popularity in the United States because of their economic and environmental benefits. Curing is the most critical phase for these treatments: it is the process in which the emulsion breaks and water evaporates, leaving residual binder in the treated material. In this process, the cold-recycled mix gains strength. Sufficient strength is required before opening the cold-treated layer to traffic or placing an overlay; otherwise, premature failure related to insufficient strength and trapped moisture would be expected. However, some challenges arise from the lack of relevant information and specifications for monitoring treatment curing. This report presents the outcomes of a research project funded by the Illinois Department of Transportation to investigate the feasibility of using nondestructive ground-penetrating radar (GPR) for density and moisture content estimation of cold-recycling treatments. Moisture content is an indicator of curing level; treated layers must meet a threshold of maximum allowable moisture content (2% in Illinois) to be considered sufficiently cured. The methodology followed in this report included GPR numerical simulations and GPR indoor and field tests as data sources. The data were used to correlate moisture content with dielectric properties calculated from GPR measurements. Two models were developed for moisture content estimation: the first is based on numerical simulations, and the second, based on electromagnetic mixing theory, is called the Al-Qadi-Cao-Abufares (ACA) model. The simulation model had an average error of 0.33% for moisture prediction across five different field projects. The ACA model had an average error of 2% for density prediction and an average root-mean-square error of less than 0.5% for moisture content prediction for both indoor and field tests.
The ACA model is presented as part of a user-friendly tool that could be used in the future to continuously monitor the curing of cold-recycled treatments.
APA, Harvard, Vancouver, ISO, and other styles
3

Clark, E. L. Error propagation equations and tables for estimating the uncertainty in high-speed wind tunnel test results. Office of Scientific and Technical Information (OSTI), August 1993. http://dx.doi.org/10.2172/10178382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
