Dissertations / Theses on the topic 'Measurement error'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Measurement error.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bashir, Saghir Ahmed. "Measurement error in epidemiology." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Qiong. "Robust Estimation via Measurement Error Modeling." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-08112005-222926/.

Full text
Abstract:
We introduce a new method for robustifying inference that can be applied in any situation where a parametric likelihood is available. The key feature is that data from the postulated parametric models are assumed to be measured with error, where the measurement error distribution is chosen to produce the occasional gross errors found in data. We show that the tails of the error-contamination model control the properties (boundedness, redescendingness) of the resulting influence functions, with heavier tails in the error-contamination model producing more robust estimators. In the application to location-scale models with independent and identically distributed data, the resulting analytically intractable likelihoods are approximated via Monte Carlo integration. In the application to time series models, we propose a Bayesian approach to the robust estimation of time series parameters. We use Markov Chain Monte Carlo (MCMC) to estimate the parameters of interest and also the gross errors. The latter are used as outlier diagnostics.
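The core device described in this abstract, a parametric model observed through a heavy-tailed error-contamination layer, with the resulting intractable likelihood approximated by Monte Carlo integration, can be sketched briefly. The normal location-scale model, the t-distributed contamination, and all tuning constants below are illustrative assumptions rather than the author's exact specification.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulated data: a normal location-scale model with a few gross errors.
n = 200
y = rng.normal(loc=5.0, scale=2.0, size=n)
gross = rng.random(n) < 0.05
y[gross] += rng.normal(0.0, 25.0, size=gross.sum())

def neg_loglik(params, y, df=3.0, scale_u=1.0, m=2000):
    """Monte Carlo approximation of the contaminated-model likelihood:
    Y = X + U, X ~ N(mu, sigma^2), U a heavy-tailed (t) contamination term."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # Fixed random_state gives common random numbers across optimizer steps,
    # keeping the approximated likelihood surface smooth.
    u = scale_u * stats.t.rvs(df=df, size=m, random_state=1)
    dens = stats.norm.pdf(y[:, None] - u[None, :], loc=mu, scale=sigma).mean(axis=1)
    return -np.sum(np.log(dens + 1e-300))

fit = optimize.minimize(neg_loglik, x0=[np.median(y), np.log(np.std(y))],
                        args=(y,), method="Nelder-Mead")
print("robustified estimate:", fit.x[0], np.exp(fit.x[1]))
print("naive normal MLE    :", y.mean(), y.std())
```

The heavier the tails of the contamination density (smaller df), the less the gross errors pull the location and scale estimates, which is the boundedness/redescendingness behaviour the abstract describes.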
APA, Harvard, Vancouver, ISO, and other styles
3

Johansson, Fredrik. "Essays on measurement error and nonresponse /." Uppsala : Department of Economics, Uppsala University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Boaventura, Guimareas Dumangane Montezuma. "Essays on duration response measurement error." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ahmad, Shoaib. "Finite Precision Error in FPGA Measurement." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-49646.

Full text
Abstract:
Finite precision error in digital signal processing sets a threshold on the quality of the processed signal, so the achievable output must be weighed against the cost in power and performance. This project deals with the design and implementation of digital FIR and IIR filters, which are further utilized by a measurement system in order to correctly measure different parameters. Compared to analog filters, these digital filters give more precise and accurate results along with the flexibility to accommodate expected hardware and environmental changes. The error is exposed and the filters are implemented to meet the requirements of a measurement system using finite precision arithmetic, and the results are verified through MATLAB. Moreover, with the help of simulations, a comparison between FIR and IIR digital filters is presented.

Passed


Digital filters and FPGA
APA, Harvard, Vancouver, ISO, and other styles
6

Cao, Chendi. "Linear regression with Laplace measurement error." Kansas State University, 2016. http://hdl.handle.net/2097/32719.

Full text
Abstract:
Master of Science
Statistics
Weixing Song
In this report, an improved estimation procedure for the regression parameter in simple linear regression models with Laplace measurement error is proposed. The estimation procedure is made feasible by a Tweedie-type equality established for E(X|Z), where Z = X + U, X and U are independent, and U follows a Laplace distribution. When the density function of X is unknown, a kernel estimator for E(X|Z) is constructed in the estimation procedure. A leave-one-out cross-validation bandwidth selection method is designed. The finite sample performance of the proposed estimation procedure is evaluated by simulation studies. A comparison study is also conducted to show the superiority of the proposed estimation procedure over some existing estimation methods.
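For orientation, the sketch below simulates the model studied in this report, Z = X + U with Laplace U, and shows the attenuation of the naive least-squares slope together with the classical reliability-ratio correction. It illustrates the problem the report addresses; it is not the Tweedie-type kernel estimator proposed in the report, and all variances are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
beta0, beta1 = 1.0, 2.0
sigma_x, b_u = 1.0, 0.7            # Laplace scale b_u gives Var(U) = 2 * b_u**2

x = rng.normal(0.0, sigma_x, n)    # latent predictor X
u = rng.laplace(0.0, b_u, n)       # Laplace measurement error U
z = x + u                          # observed surrogate Z = X + U
y = beta0 + beta1 * x + rng.normal(0.0, 1.0, n)

naive_slope = np.polyfit(z, y, 1)[0]           # attenuated toward zero

# Classical correction: divide by the reliability ratio Var(X) / Var(Z).
reliability = sigma_x**2 / (sigma_x**2 + 2.0 * b_u**2)
corrected_slope = naive_slope / reliability

print(f"true {beta1:.2f}  naive {naive_slope:.2f}  corrected {corrected_slope:.2f}")
```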
APA, Harvard, Vancouver, ISO, and other styles
7

Hirst, William Mark. "Outcome measurement error in survival analysis." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lo, Sau Yee. "Measurement error in logistic regression model /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?MATH%202004%20LO.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 82-83). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
9

Fang, Xiaoqiong. "Mixtures-of-Regressions with Measurement Error." UKnowledge, 2018. https://uknowledge.uky.edu/statistics_etds/36.

Full text
Abstract:
Finite mixture models have been studied for a long time; however, traditional methods assume that the variables are measured without error. The mixtures-of-regressions model with measurement error poses challenges to statisticians, since both the mixture structure and the existence of measurement error can lead to inconsistent estimates of the regression coefficients. In order to resolve the inconsistency, we propose a series of methods to estimate the mixture likelihood of the mixtures-of-regressions model when there is measurement error in both the responses and the predictors. Different estimators of the parameters are derived and compared with respect to their relative efficiencies. The simulation results show that the proposed estimation methods work well and improve the estimating process.
APA, Harvard, Vancouver, ISO, and other styles
10

De, Nadai Michele. "Measurement Error Issues in Consumption Data." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421980.

Full text
Abstract:
Data available in commonly employed consumer surveys, like the Consumer Expenditure Survey in the US, are widely known to be affected by measurement errors. Ignoring the effect of such errors in the estimation of consumption models may result in severely biased estimates of the quantities of interest. In this thesis I consider identification of three different models of consumption behavior allowing for the presence of measurement errors. Identification is particularly difficult to achieve due to the high non-linearity of the specifications involved and to peculiarities of consumption models. In fact, in many instances allowing for mismeasured covariates also implies correlated measurement errors in the dependent variable. This further complicates the identification of the model, invalidating most of the non-linear errors-in-variables results in the literature. The core of the thesis is made of three Chapters. In the first Chapter I consider identification of a particular specification of Engel curves when unobserved expenditure is endogenous and measured with error. In the second Chapter I study identification of a general non-linear errors-in-variables model allowing for correlated measurement errors on both sides of the equation. In the third Chapter I derive identification and estimation of the distribution of consumption when only expenditure and the number of purchases are observed.
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Zhuosong. "Error Mitigation in Roughness Measurements." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/50146.

Full text
Abstract:
Road roughness is an important factor in determining the quality of a stretch of road. The International Roughness Index, a specific measure of road roughness, is a widely used metric. However, in order to measure roughness, an accurate road profile must exist. To measure the roads, terrain profiling systems are commonly used. Modern systems based on laser scanners and inertial navigation systems (INS) are able to measure thousands of data points per second over a wide path. However, because of the subsystems in the profiling systems, they are susceptible to errors that reduce the accuracy of the measurements. Thus, both major subsystems - the laser and the navigation system - must be accurate and synchronized for the road to be correctly scanned. The sensors' mounting was investigated to ensure that the vehicle motion is accurately captured and accounted for, as demonstrated on the Vehicle Terrain Performance Lab's (VTPL) Ford Explorer profilometer. Next, INS errors were addressed. These may include drift in the inertial measurement unit or errors due to poor reception with the global navigation satellite system. The solution to these errors was demonstrated through the VTPL's HMMWV profilometer.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
12

Breitner, Susanne. "Time-varying coefficient models and measurement error." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-79772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Marsh, Jennifer Lucy. "Measurement error in longitudinal film badge data." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.394638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Woodhouse, Geoffrey M. "Adjustment for measurement error in multilevel analysis." Thesis, University College London (University of London), 1998. http://discovery.ucl.ac.uk/10019113/.

Full text
Abstract:
Measurements in educational research are often subject to error. Where it is desired to base conclusions on underlying characteristics rather than on the raw measurements of them, it is necessary to adjust for measurement error in the modelling process. In this thesis it is shown how the classical model for measurement error may be extended to model the more complex structures of error variance and covariance that typically occur in multilevel models, particularly multivariate multilevel models, with continuous response. For these models parameter estimators are derived, with adjustment based on prior values of the measurement error variances and covariances among the response and explanatory variables. A straightforward method of specifying these prior values is presented. In simulations using data with known characteristics the new procedure is shown to be effective in reducing the biases in parameter estimates that result from unadjusted estimation. Improved estimates of the standard errors also are demonstrated. In particular, random coefficients of variables with error are successfully estimated. The estimation procedure is then used in a two-level analysis of an educational data set. It is shown how estimates and conclusions can vary, depending on the degree of measurement error that is assumed to exist in explanatory variables at level 1 and level 2. The importance of obtaining satisfactory prior estimates of measurement error variances and covariances, and of correctly adjusting for them during analysis, is demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
15

Johnson, Nels Gordon. "Semiparametric Regression Methods with Covariate Measurement Error." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/49551.

Full text
Abstract:
In public health, biomedical, epidemiological, and other applications, data collected are often measured with error. When mismeasured data is used in a regression analysis, not accounting for the measurement error can lead to incorrect inference about the relationships between the covariates and the response. We investigate measurement error in the covariates of two types of regression models.  For each we propose a fully Bayesian approach that treats the variable measured with error as a latent variable to be integrated over, and a semi-Bayesian approach which uses a first order Laplace approximation to marginalize the variable measured with error out of the likelihood.

The first model is the matched case-control study for analyzing clustered binary outcomes. We develop low-rank thin plate splines for the case where a variable measured with error has an unknown, nonlinear relationship with the response. In addition to the semi- and fully Bayesian approaches, we propose another using expectation-maximization to detect both parametric and nonparametric relationships between the covariates and the binary outcome. We assess the performance of each method via simulation in terms of mean squared error and mean bias. We illustrate each method on a perturbed example of a 1-4 matched case-control study.

The second regression model is the generalized linear model (GLM) with unknown link function. Usually, the link function is chosen by the user based on the distribution of the response variable, often to be the canonical link. However, when covariates are measured with error, incorrect inference as a result of the error can be compounded by incorrect choice of link function. We assess performance via simulation of the semi- and fully Bayesian methods in terms of mean squared error. We illustrate each method on the Framingham Heart Study dataset.

The simulation results for both regression models support that the fully Bayesian approach is at least as good as the semi-Bayesian approach for adjusting for measurement error, particularly when the distribution of the variable measured with error and the distribution of the measurement error are misspecified.

Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
16

Bautista, Rene. "An examination of sources of error in exit polls| Nonresponse and measurement error." Thesis, The University of Nebraska - Lincoln, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3715450.

Full text
Abstract:

This dissertation focuses on understudied aspects of nonresponse in a context where limited information is available from refusals. In particular, this study examines social and psychological predictors of nonresponse in fast-paced face-to-face surveys, namely election day surveys, popularly known as exit polls. Exit polls present unique challenges for studying nonresponse, since the population being sampled is fleeting and several conditions are beyond the researcher's control.

If sampled voters choose not to participate, there is no practical way of contacting them to collect information in a timely manner. Using a proof-of-concept approach, this study explores a unique dataset that links information on respondents, nonrespondents, and interviewer characteristics, as well as precinct-level information. Using this information, model-based plausible information is generated for nonrespondents (i.e., imputed data) to examine nonresponse dynamics. These data are then analyzed with multilevel regression methods. Nonresponse hypotheses are motivated by literature on cognitive abilities, cognition and social behavior.

Results from multiply imputed data and multilevel regression analyses are consistent with hypothesized relationships, suggesting that this approach may offer a way of studying nonresponse where limited information exists. Additionally, this dissertation explores sources of measurement error in exit polls. It examines whether the mechanisms likely to produce refusals are the same mechanisms likely to introduce error once survey cooperation is established. A series of statistical interaction terms in OLS regressions, motivated by social interactions between interviewers and respondents, is used to explore hypothesized relationships. Overall, this research finds that cognitive mechanisms appear to account for voter nonresponse, whereas social desirability mechanisms seem to explain exit polling error.

APA, Harvard, Vancouver, ISO, and other styles
17

McMahan, Angela Renee. "Measurement Error in Designed Experiments for Second Order Models." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30290.

Full text
Abstract:
Measurement error (ME) in the factor levels of designed experiments is often overlooked in the planning and analysis of experimental designs. A familiar model for this type of ME, called the Berkson error model, is discussed at length. Previous research has examined the effect of Berkson error on two-level factorial and fractional factorial designs. This dissertation extends the examination to designs for second order models. The results are used to suggest optimal values for axial points in Central Composite Designs. The proper analysis for experimental data including ME is outlined for first and second order models. A comparison of this analysis to a typical Ordinary Least Squares analysis is made for second order models. The comparison is used to quantify the difference in performance of the two methods, both of which yield unbiased coefficient estimates. Robustness to misspecification of the ME variance is also explored. A solution for experimental planning is also suggested. A design optimality criterion, called the DME criterion, is used to create a second-stage design when ME is present. The performance of the criterion is compared to a D-optimal design augmentation. A final comparison is made between methods accounting for ME and methods ignoring ME.
Ph. D.
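For readers unfamiliar with the distinction this dissertation builds on, the Berkson structure differs from the classical one in which variable is centred on which; in the notation used below (introduced here for illustration), W is the nominal factor level set by the experimenter and X the level actually realised:

```latex
\text{classical error:}\quad W = X + U,\ U \perp X
\qquad\text{vs.}\qquad
\text{Berkson error:}\quad X = W + U,\ U \perp W .
```

Under the Berkson model the observed (design) levels are fixed and the true levels vary around them, which is why it is the natural description of measurement error in the factor levels of a designed experiment.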
APA, Harvard, Vancouver, ISO, and other styles
18

Liu, Lian. "Topics in measurement error and missing data problems." Thesis, [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Rummel, David. "Correction for covariate measurement error in nonparametric regression." Diss., [S.l.] : [s.n.], 2006. http://edoc.ub.uni-muenchen.de/archive/00006436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Holst, Daryl Allan. "Inquiring into measurement error in the science laboratory." Montana State University, 2011. http://etd.lib.montana.edu/etd/2011/holst/HolstD0811.pdf.

Full text
Abstract:
High school students often struggle with accurate data collection in the science laboratory. This study examined the effects of inquiry-based laboratory learning experiences on student ability to recognize the limited precision of measurements, ability to see error, manipulative ability in using laboratory instruments and commitment to accuracy. Results indicate increased student ability to see and correct error as well as improved understanding of error.
APA, Harvard, Vancouver, ISO, and other styles
21

Davies, Jonathan. "Sparse regression methods with measurement-error for magnetoencephalography." Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/48062/.

Full text
Abstract:
Magnetoencephalography (MEG) is a neuroimaging method for mapping brain activity based on magnetic field recordings. The inverse problem associated with MEG is severely ill-posed and is complicated by the presence of high collinearity in the forward (leadfield) matrix. This means that accurate source localisation can be challenging. The most commonly used methods for solving the MEG problem do not employ sparsity to help reduce the dimensions of the problem. In this thesis we review a number of the sparse regression methods that are widely used in statistics, as well as some more recent methods, and assess their performance in the context of MEG data. Due to the complexity of the forward model in MEG, the presence of measurement-error in the leadfield matrix can create issues in the spatial resolution of the data. Therefore we investigate the impact of measurement-error on sparse regression methods as well as how we can correct for it. We adapt the conditional score and simulation extrapolation (SIMEX) methods for use with sparse regression methods and build on an existing corrected lasso method to cover the elastic net penalty. These methods are demonstrated using a number of simulations for different types of measurement-error and are also tested with real MEG data. The measurement-error methods perform well in simulations, including high dimensional examples, where they are able to correct for attenuation bias in the true covariates. However the extent of their correction is much more restricted in the more complex MEG data where covariates are highly correlated and there is uncertainty over the distribution of the error.
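A minimal sketch of the SIMEX idea that the thesis adapts to sparse regression, here wrapped around an off-the-shelf lasso from scikit-learn rather than the MEG-specific estimators: extra noise of known variance is added to the error-prone covariates at several levels lambda, the resulting estimates are averaged, and a quadratic in lambda is extrapolated back to lambda = -1 (the no-error case). Dimensions, the noise level and the penalty are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 300, 20
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                          # sparse truth
X = rng.normal(size=(n, p))
sigma_u = 0.5                                        # known measurement-error sd
W = X + rng.normal(0.0, sigma_u, size=(n, p))        # error-prone covariates
y = X @ beta + rng.normal(0.0, 1.0, n)

def lasso_coef(design, y, alpha=0.05):
    return Lasso(alpha=alpha, max_iter=10000).fit(design, y).coef_

# SIMEX: simulate extra error at levels lambda, then extrapolate to lambda = -1.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 50
estimates = []
for lam in lambdas:
    reps = [lasso_coef(W + np.sqrt(lam) * sigma_u * rng.normal(size=W.shape), y)
            for _ in range(B)]
    estimates.append(np.mean(reps, axis=0))
estimates = np.array(estimates)                      # shape (len(lambdas), p)

# Fit a quadratic in lambda to each coefficient path and evaluate at -1.
simex = np.array([np.polyval(np.polyfit(lambdas, estimates[:, j], 2), -1.0)
                  for j in range(p)])

print("naive :", np.round(lasso_coef(W, y)[:3], 2))
print("SIMEX :", np.round(simex[:3], 2))
print("truth :", beta[:3])
```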
APA, Harvard, Vancouver, ISO, and other styles
22

Han, Hillary H. "Measurement-error bias correction in spawner-recruitment relationships." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37541.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Coy, Joanne. "The quantification of sampling error in coordinate measurement." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/35678/.

Full text
Abstract:
This work was carried out between October 1986 and February 1989 at the School of Engineering, University of Warwick. The thesis begins with a review of the configurations of coordinate measuring machines in common use and an investigation into the types and magnitudes of the errors incurred due to various phenomena associated with the design, deformation or misalignment of the machine components. Some of the more significant of these errors are then measured and tabulated with a view to using them as a comparison to further work. Methods by which these errors can be rectified are then briefly reviewed. Chapter 2 is concerned with the inadequacies associated with current coordinate measuring machine software algorithm design. Measurement practices are reviewed and sources of inconsistency or potential misinterpretation are identified. Sampling error is singled out as being of particular significance. Chapter 3 reviews geometric element fitting procedures and the errors that can result from ill-advised measuring practice. Systematic and random error analyses of the errors incurred in the estimates of geometric parameters are reviewed and an original investigation is performed into the errors incurred in parameters due to not considering all possible data (sampling error). Chapter 4 presents an assessment of the nature of the problem of sampling error and outlines the way in which a robust algorithm for the formal quantification of these errors should be formulated. Chapter 5 then identifies the criteria that would maximise the implementability of an algorithm of this type. An algorithm satisfying these particular requirements is duly developed. Finally, chapter 6 consists of an investigation into the effect of probe geometry on the phenomenon of sampling errors. A method is then developed whereby the probe geometry that will minimise sampling error can be readily selected.
APA, Harvard, Vancouver, ISO, and other styles
24

Saneii, Seyed Hassan. "Measurement error modelling for ordered covariates in epidemiology." Thesis, University of Liverpool, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Hayley, S. "Cognitive error in the measurement of investment returns." Thesis, City University London, 2015. http://openaccess.city.ac.uk/13172/.

Full text
Abstract:
This thesis identifies and quantifies the impact of cognitive errors in certain aspects of investor decision-making. One error is that investors are unaware that the Internal Rate of Return (IRR) is a biased indicator of expected terminal wealth for any dynamic strategy where the amount invested is systematically related to the returns made to date. This error leads investors to use Value Averaging (VA). This thesis demonstrates that this is an inefficient strategy, since alternative strategies can generate identical outturns with lower initial capital. Investors also wrongly assume that the lower average purchase cost which is achieved by Dollar Cost Averaging (DCA) results in higher expected returns. DCA is a similarly inefficient strategy. Investors also adopt strategies such as Volatility Pumping, which appears to benefit from high asset volatility and large rebalancing trades. This thesis demonstrates that any increase in the expected geometric mean associated with rebalancing is likely to be due to reduced volatility drag, and that simpler strategies involving lower transactions costs are likely to be more profitable. Academic papers in highly-ranked journals similarly misinterpret the reduction in volatility drag achieved by rebalanced portfolios, mistakenly claiming that it results from the rebalancing trades “buying low and selling high”. The previously unidentified bias in the IRR has also affected an increasing number of academic studies, leading to misleadingly low estimates of the equity risk premium and exaggerated estimates of the losses resulting from bad investment timing. This thesis also derives a method for decomposing the differential between the GM return and the IRR into (i) the effects of this retrospective bias, and (ii) genuine effects of investor timing. Using this method I find that the low IRR on US equities is almost entirely due to this bias, and so should not lead us to revise down our estimates of the equity risk premium. This method has wider applications in fields where IRRs are used (e.g. mutual fund performance and project evaluation). In identifying these errors this thesis makes a contribution: (i) to the academic literature by correcting previous misleading results and improving research methods; (ii) to investment practitioners by identifying avoidable errors in investor decision-making. It also makes a contribution to the field of behavioural finance by altering the range of investor behaviour which should be seen as resulting from cognitive error rather than the pursuit of different objectives.
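The IRR bias at issue can be seen in a two-period toy example (numbers invented here): the asset returns +100% and then -50%, so its geometric mean (GM) return is zero, yet a strategy that commits more capital after the good year reports a negative money-weighted return (IRR) simply because more money was exposed to the bad year.

```python
import numpy as np

returns = [1.0, -0.5]          # asset returns: +100% then -50%  ->  GM return 0%
contributions = [1.0, 2.0]     # invest 1 at t=0, add 2 at t=1 (after the good year)

# Terminal wealth of the contribution schedule.
wealth = 0.0
for c, r in zip(contributions, returns):
    wealth = (wealth + c) * (1.0 + r)        # 3 invested in total ends at 2.0

# Geometric mean return of the asset itself.
gm = np.prod([1.0 + r for r in returns]) ** (1.0 / len(returns)) - 1.0

# IRR: x - 1 where x solves  c_0*x^T + c_1*x^(T-1) + ... = terminal wealth.
coeffs = list(contributions) + [-wealth]      # [c_0, c_1, -W] for T = 2
x = max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9 and r.real > 0)
irr = x - 1.0

print(f"asset GM return {gm:.1%}   strategy IRR {irr:.1%}")   # 0.0% vs about -26.8%
```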
APA, Harvard, Vancouver, ISO, and other styles
26

Xie, Xiangwen. "Covariate measurement error methods in failure time regression /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/9538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Shifa, Naima. "Estimation of Qvf Measurement Error Models Using Empirical Likelihood Method." Bowling Green State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1245415705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Perera, Walgampolage Ranjith Indrasiri. "Detection of the interaction term in measurement-error-models." Thesis, University of East London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

COSTA, PAULO WERNECK DE ANDRADE. "ADAPTIVE CONTROL OF A MACROECONOMETRIC MODEL WITH MEASUREMENT ERROR." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1991. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9400@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Economic planning, approached as a control problem, aims to establish optimal (or sub-optimal) trajectories for the variables that are subject to Government control. This means that the policy (control) variables are no longer arbitrarily determined by the planners, but instead result from an optimization process aimed at meeting previously established targets. In this work we apply a certainty-equivalence adaptive controller to a macroeconometric model of the Brazilian economy, allowing for measurement error in the state variables. The adoption of an adaptive controller is justified by the criticisms (principally the Lucas critique) levelled at stationary macroeconometric models. An adequate way to handle the non-stationarity of such models is through an adaptive controller whose objective is to control and identify the model simultaneously. We present a short review of applications of optimal control and adaptive control to economic problems, highlighting the application of both techniques to macroeconometric models with rational expectations. By means of simulations we compare the policy actually implemented by the federal government with the optimal policy obtained via non-adaptive optimal control.
Economic planning, when considered as a control problem, has as its objective establishing optimal (or sub-optimal) trajectories for the variables subject to Government control. This means that the policy variables (control), instead of being arbitrarily determined by the policymakers, will be the result of an optimization process, with the objective of reaching pre-established goals. In this work a Certainty Equivalence Adaptive Control is applied to a macroeconometric model of the Brazilian economy with measurement error. Since the employment of time-invariant models has been widely criticized (Lucas critique), the model used here is time-varying. An adequate way to treat such a case is through an adaptive control scheme, in which control and identification of the model are performed simultaneously. By means of simulations the policy obtained with the adaptive controller is compared to the policy adopted by the Brazilian Government.
APA, Harvard, Vancouver, ISO, and other styles
30

Kon, Henry B. "Data quality management : foundations in error measurement and propagation." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/9838.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Haanpää, T. (Tuomas). "Fuzz testing coverage measurement based on error log analysis." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605051644.

Full text
Abstract:
Fuzz testing is a black box testing method in which a SUT is subjected to anomalous inputs in order to uncover faults such as crashes or other incorrect behaviour. A defining attribute of any black box testing is the assumption that the inner workings and the source code of the SUT are unknown. This lack of information adds an element of difficulty to the task of estimating test coverage. During testing a SUT typically produces a log of error conditions and other events which were triggered by the testing process. This log data is available even when the source code is not. The purpose of this study was to research whether a meaningful metric of test coverage could be extracted from the log data. The goal was to discover a metric similar to code coverage, but applicable to black box testing. A hypothesis was presented that a large variety of observed events translated to high code coverage as well. To extract this metric, a rudimentary pattern recognition algorithm was devised in order to automatically classify the events encountered during a test run. Measurements were performed on three open source SUTs representing three widely used communication protocols. Log analysis results were compared to code coverage measurements in order to study any possible correlation between them. The results were positive, as the study showed clear correlation between the code coverage metric and the log analysis results for two of the three case studies. Further study is required to establish whether the studied log analysis method is generally applicable.
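The kind of rudimentary event classification the thesis describes can be approximated by normalising away volatile fields (numbers, hex identifiers) and counting the distinct message templates seen during a fuzzing run. The regexes and the sample log lines below are invented for illustration and are not the author's algorithm.

```python
import re

def normalize(line: str) -> str:
    """Collapse volatile tokens so that repeated events map to one template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)   # pointers / addresses
    line = re.sub(r"\d+", "<NUM>", line)              # counters, sizes, ports
    return line.strip()

def coverage_proxy(log_lines):
    """Number of distinct event templates observed in the error log."""
    return len({normalize(l) for l in log_lines if l.strip()})

sample_log = [
    "ERROR: malformed header at offset 1024",
    "ERROR: malformed header at offset 2048",       # same event, different offset
    "WARN: connection 0x7f3a21 reset by peer",
    "ERROR: unexpected option length 17 in packet 904",
]
print(coverage_proxy(sample_log))   # -> 3 distinct event templates
```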
APA, Harvard, Vancouver, ISO, and other styles
32

Miles, Caleb Hilliard. "Semiparametric Methods for Causal Mediation Analysis and Measurement Error." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:23845420.

Full text
Abstract:
Chapter 1: Since the early 2000s, evidence has accumulated for a significant differential effect of first-line antiretroviral therapy (ART) regimens on human immunodeficiency virus (HIV) treatment outcomes, such as CD4 response and viral load suppression. This finding was replicated in our data from the Harvard President's Emergency Plan for AIDS Relief (PEPFAR) program in Nigeria. Investigators were interested in finding the source of these differences, i.e., understanding the mechanisms through which one regimen outperforms another, particularly via adherence. This amounts to a mediation question with adherence playing the role of mediator. Existing mediation analysis results, however, have relied on an assumption of no exposure-induced confounding of the intermediate variable, and generally require an assumption of no unmeasured confounding for nonparametric identification. Both assumptions are violated by the presence of drug toxicity. In this paper, we relax these assumptions and show that certain path-specific effects remain identified under weaker conditions. We focus on the path-specific effect solely mediated by adherence and not by toxicity and propose a suite of estimators for this effect, including a semiparametric-efficient, multiply-robust estimator. We illustrate with simulations and present results from a study applying the methodology to the Harvard PEPFAR data. Chapter 2: In causal mediation analysis, nonparametric identification of the pure (natural) direct effect typically relies on fundamental assumptions of (i) so-called "cross-world-counterfactuals" independence and (ii) no exposure-induced confounding. When the mediator is binary, bounds for partial identification have been given when neither assumption is made, or alternatively when assuming only (ii). We extend these bounds to the case of a polytomous mediator, and provide bounds for the case assuming only (i). We apply these bounds to data from the Harvard PEPFAR program in Nigeria, where we evaluate the extent to which the effects of antiretroviral therapy on virological failure are mediated by a patient's adherence, and show that inference on this effect is somewhat sensitive to model assumptions. Chapter 3: When assessing the presence of an exposure causal effect on a given outcome, it is well known that classical measurement error of the exposure can seriously reduce the power of a test of the null hypothesis in question, although its type I error rate will generally remain controlled at the nominal level. In contrast, classical measurement error of a confounder can have disastrous consequences on the type I error rate of a test of treatment effect. In this paper, we develop a large class of semiparametric test statistics of an exposure causal effect, which are completely robust to classical measurement error of a subset of confounders. A unique and appealing feature of our proposed methods is that they require no external information such as validation data or replicates of error-prone confounders. The approach relies on the observation that under the sharp null hypothesis of no exposure causal effect, the standard assumption of no unmeasured confounding implies that the outcome is in fact a valid instrumental variable for the association between the error-prone confounder and the exposure.
We present a doubly-robust form of this test that requires only one of two models -- an outcome-regression and a propensity-score model -- to be correctly specified for the resulting test statistic to have correct type I error rate. Validity and power within our class of test statistics is demonstrated via multiple simulation studies. We apply the methods to a multi-U.S.-city, time-series data set to test for an effect of temperature on mortality while adjusting for atmospheric particulate matter with diameter of 2.5 micrometres or less (PM2.5), which is well known to be measured with error.
Biostatistics
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Long. "Essays on measurement error, nonstationary panels and nonparametrics econometrics." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2008. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ng, YongKad. "An essay on unit root tests and measurement error." [Lincoln, Neb. : University of Nebraska-Lincoln], 2004. http://www.unl.edu/libr/Dissertations/2004/NgDis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kim, Hyo Soo. "Periodic error in heterodyne interferometry measurement, uncertainty, and elimination /." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0041110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hong, Cefu. "Error Calibration on Five-axis Machine Tools by Relative Displacement Measurement between Spindle and Work Table." 京都大学 (Kyoto University), 2012. http://hdl.handle.net/2433/157572.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Tataryn, Douglas Joseph 1960. "Standard errors of measurement, confidence intervals, and the distribution of error for the observed score curve." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277223.

Full text
Abstract:
This paper reviews the basic literature on the suggested applications of the standard error of measurement (SEM), and points out that there are discrepancies in its suggested application. In the process of determining the efficacy and appropriateness of each of the proposals, a formula to determine the distribution of error for the observed score curve is derived. The final recommendation, which is congruent with Cronbach, Gleser, Nanda & Rajaratnam's (1972) recommendations, is to not use the SEM to create confidence intervals around the observed score: The predicted true score and the standard error of the prediction are better suited (non-biased and more efficient) for the task of estimating a confidence interval which will contain an individual's true score. Finally, the distribution of future observed scores around the expected true score is derived.
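The competing quantities discussed here are standard classical-test-theory expressions; writing sigma_X for the observed-score standard deviation and rho for the (assumed known) reliability, they are (textbook results, not reproduced from the thesis):

```latex
\mathrm{SEM} = \sigma_X\sqrt{1-\rho}, \qquad
\hat{T} = \bar{X} + \rho\,(X - \bar{X}), \qquad
\mathrm{SE}(\hat{T}) = \sigma_X\sqrt{\rho(1-\rho)} = \mathrm{SEM}\sqrt{\rho} ,
```

so the recommended interval \hat{T} \pm z_{\alpha/2}\,\mathrm{SE}(\hat{T}) is centred on the predicted (regressed) true score and is narrower than the naive X \pm z_{\alpha/2}\,\mathrm{SEM} band around the observed score.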
APA, Harvard, Vancouver, ISO, and other styles
38

Rainey, Cameron Scott. "Error Estimations in the Design of a Terrain Measurement System." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/50501.

Full text
Abstract:
Terrain surface measurement is an important tool in vehicle design work as well as pavement classification and health monitoring. Non-deformable terrains are the primary excitation to vehicles traveling over them, and therefore it is important to be able to quantify these terrain surfaces. Knowledge of the terrain can be used in combination with vehicle models in order to predict the force loads the vehicles would experience while driving over the terrain surface. This is useful in vehicle design, as it can speed the design process through the use of simulation as opposed to prototype construction and durability testing. Additionally, accurate terrain maps can be used by highway engineers and maintenance personnel to identify deterioration in road surface conditions for immediate correction. Repeated measurements of terrain surfaces over an extended length of time can also allow for long term pavement health monitoring.
Many systems have been designed to measure terrain surfaces, most of them historically capturing single line profiles, with more modern equipment capable of capturing three dimensional measurements of the terrain surface. These more modern systems are often constructed using a combination of various sensors which allow the system to measure the relative height of the terrain with respect to the terrain measurement system. Additionally, these terrain measurement systems are also equipped with sensors which allow the system to be located in some global coordinate space, as well as the angular attitude of that system to be estimated. Since all sensors return estimated values, with some uncertainty, the combination of a group of sensors serves to also combine their uncertainties, resulting in a system which is less precise than any of its individual components. In order to predict the precision of the system, the individual probability densities of the components must be quantified, in some cases transformed, and finally combined. This thesis provides a proof-of-concept as to how such an evaluation of final precision can be performed.

Master of Science
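A proof-of-concept of the kind of precision prediction this thesis describes can be sketched by Monte Carlo propagation of the individual sensor error densities through the height equation. The sensors, error magnitudes, and the simple geometry below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Nominal measurement: terrain height = INS height - laser range * cos(pitch).
ins_height = 1.50        # m
laser_range = 0.80       # m
pitch = np.deg2rad(2.0)  # rad

# Assumed (illustrative) error densities for each subsystem.
e_ins = rng.normal(0.0, 0.010, N)               # INS vertical drift, sd 10 mm
e_laser = rng.normal(0.0, 0.002, N)             # laser range noise, sd 2 mm
e_pitch = rng.normal(0.0, np.deg2rad(0.05), N)  # attitude error, sd 0.05 deg

# Propagate all error densities jointly through the measurement equation.
height = (ins_height + e_ins) - (laser_range + e_laser) * np.cos(pitch + e_pitch)

print(f"predicted system precision (1-sigma): {height.std()*1000:.1f} mm")
```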
APA, Harvard, Vancouver, ISO, and other styles
39

Salmanoglu, Murat. "An Error Prevention Model For Cosmic Functional Size Measurement Method." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614632/index.pdf.

Full text
Abstract:
Estimation and measurement of the size of software is crucial for project management activities. Functional size measurement is one of the most frequently used methods to measure the size of software, and COSMIC is one of the popular methods for functional size measurement. Although precise size measurement is critical, the results may differ because of errors made in the measurement process. The erroneous measurement results cause a lack of confidence in the methods as well as reliability problems for effort and cost estimations. This research proposes an error prevention model for the COSMIC Functional Size Measurement method to increase the reliability of the measurements. The prevention model defines data movement patterns for different types of functional processes and a cardinality table to prevent errors. We validated the prevention model with two case studies and observed that it can decrease errors by up to 90%.
APA, Harvard, Vancouver, ISO, and other styles
40

Jia, Weijia. "Goodness-of-fit tests in measurement error models with replications." Diss., Kansas State University, 2018. http://hdl.handle.net/2097/38660.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Weixing Song
In this dissertation, goodness-of-fit tests are proposed for checking the adequacy of parametric distributional forms of the regression error density functions and the error-prone predictor density function in measurement error models, when replications of the surrogates of the latent variables are available. In the first project, we propose goodness-of-fit tests on the density function of the regression error in the errors-in-variables model. Instead of assuming that the distribution of the measurement error is known as is done in most relevant literature, we assume that replications of the surrogates of the latent variables are available. The test statistic is based upon a weighted integrated squared distance between a nonparametric estimate and a semi-parametric estimate of the density functions of certain residuals. Under the null hypothesis, the test statistic is shown to be asymptotically normal. Consistency and local power results of the proposed test under fixed alternatives and local alternatives are also established. Finite sample performance of the proposed test is evaluated via simulation studies. A real data example is also included to demonstrate the application of the proposed test. In the second project, we propose a class of goodness-of-fit tests for checking the parametric distributional forms of the error-prone random variables in the classic additive measurement error models. We also assume that replications of the surrogates of the error-prone variables are available. The test statistic is based upon a weighted integrated squared distance between a non-parametric estimator and a semi-parametric estimator of the density functions of the averaged surrogate data. Under the null hypothesis, the minimum distance estimator of the distribution parameters and the test statistics are shown to be asymptotically normal. Consistency and local power of the proposed tests under fixed alternatives and local alternatives are also established. Finite sample performance of the proposed tests is evaluated via simulation studies.
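Both projects use a test statistic of the weighted integrated-squared-distance type; in generic notation (introduced here, not taken from the dissertation) it has the form

```latex
T_n \;=\; n \int \left[\, \hat{f}_n(x) \;-\; f\!\left(x;\hat{\theta}_n\right) \right]^{2} w(x)\, dx ,
```

where \hat{f}_n is a nonparametric (kernel) density estimate built from the residuals or averaged surrogates, f(.; \hat{\theta}_n) is the fitted parametric null density, and w is a weight function; after centring and scaling, statistics of this form are asymptotically normal under the null, which matches the limit theory described above.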
APA, Harvard, Vancouver, ISO, and other styles
41

Dagalp, Rukiye Esener. "Estimators For Generalized Linear Measurement Error Models With Interaction Terms." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20011019-142524.

Full text
Abstract:

The primary objectives of this research are to develop and study estimators for generalized linear measurement error models when the mean function contains error-free predictors as well as predictors measured with error and interactions between error-free and error-prone predictors. Attention is restricted to generalized linear models in canonical form with independent additive Gaussian measurement error in the error-prone predictors. Estimators appropriate for the functional (Fuller, 1987, Ch. 1) version of the measurement error model are derived and studied. The estimators are also appropriate in the structural version of the model and thus the methods developed in this research are functional in the sense of Carroll, Ruppert and Stefanski (1995, Ch. 6). The primary approach to the development of estimators in this research is the conditional-score method proposed by Stefanski and Carroll (1987) and described by Carroll et al. (1995, Ch. 6). Sufficient statistics for the unobserved predictors are obtained and the conditional distribution of the observed data given these sufficient statistics is derived. The latter admits unbiased score functions that are free of the nuisance parameters (the unobserved predictors) and are used to construct unbiased estimating equations for model parameters. Estimators for the parameters of the model of interest are also derived using the corrected approach proposed by Nakamura (1990) and Stefanski (1989). These are also functional estimators in the sense of Carroll et al. (1995, Ch. 6) that are less dependent on the exponential-family model assumptions and thus provide a benchmark against which to compare the conditional-score estimators. Large-sample distribution approximations for both the conditional-score and corrected-score estimators are derived, and the performance of the estimators and the adequacy of the large-sample distribution theory are studied via Monte Carlo simulation.

APA, Harvard, Vancouver, ISO, and other styles
42

Shaw, Pamela. "Estimation methods for Cox regression with nonclassical covariate measurement error /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/9544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Marques, Tiago André. "Incorporating measurement error and density gradients in distance sampling surveys /." St Andrews, 2007. http://hdl.handle.net/10023/391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hu, Yingyao. "Estimation of nonlinear models with measurement error using marginal information." Available to US Hopkins community, 2003. http://www.gbv.de/dms/zbw/558224571.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Al-Jaralla, Reem Abdulla. "Optimal design for Bayesian linear hierarchical models with measurement error." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Marques, Tiago Andre Lamas Oliveira. "Incorporating measurement error and density gradients in distance sampling surveys." Thesis, University of St Andrews, 2007. http://hdl.handle.net/10023/391.

Full text
Abstract:
Distance sampling is one of the most commonly used methods for estimating density and abundance. Conventional methods are based on the distances of detected animals from the center of point transects or the center line of line transects. These distances are used to model a detection function: the probability of detecting an animal, given its distance from the line or point. The probability of detecting an animal in the covered area is given by the mean value of the detection function with respect to the available distances to be detected. Given this probability, a Horvitz-Thompson-like estimator of abundance for the covered area follows, hence using a model-based framework. Inferences for the wider survey region are justified using the survey design. Conventional distance sampling methods are based on a set of assumptions. In this thesis I present results that extend distance sampling on two fronts. Firstly, estimators are derived for situations in which there is measurement error in the distances. These estimators use information about the measurement error in two ways: (1) a biased estimator based on the contaminated distances is multiplied by an appropriate correction factor, which is a function of the errors (PDF approach), and (2) the error model is cast into a likelihood framework that allows parameter estimation in the presence of measurement error (likelihood approach). Secondly, methods are developed that relax the conventional assumption that the distribution of animals is independent of distance from the lines or points (usually guaranteed by appropriate survey design). In particular, the new methods deal with the case where animal density gradients are caused by the use of non-random sampler allocation, for example transects placed along linear features such as roads or streams. This is dealt with separately for line and point transects, and at a later stage an approach for combining the two is presented. A considerable number of simulations and example analyses illustrate the performance of the proposed methods.
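For reference, the conventional line-transect quantities the abstract builds on are (standard distance-sampling notation, with w the truncation distance):

```latex
P_a = \frac{1}{w}\int_{0}^{w} g(x)\,dx ,
\qquad
\hat{N}_c = \sum_{i=1}^{n} \frac{1}{\hat{P}_a} = \frac{n}{\hat{P}_a} ,
```

where g(x) is the detection function fitted to the observed perpendicular distances, P_a is the probability of detecting an animal within the covered strip (under the conventional assumption that animals are uniformly distributed with respect to distance from the line), and \hat{N}_c is the Horvitz-Thompson-like estimate of abundance in the covered area. The thesis relaxes exactly the two ingredients visible here: the distances entering \hat{g} (measurement error) and the uniform-distance assumption behind the 1/w term (density gradients).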
APA, Harvard, Vancouver, ISO, and other styles
47

Young, Steven Eric. "Discretionary accounting accruals : systematic measurement error and firm-specific determinants." Thesis, Lancaster University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Humphreys, Keith. "Latent variable models for discrete longitudinal data with measurement error." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kepe, Lulama Patrick. "Estimating measurement error in blood pressure, using structural equations modelling." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/53739.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: Any branch of science experiences measurement error to some extent. This may be due to the conditions under which measurements are taken, which may include the subject, the observer, the measurement instrument, and the data collection method. The inexactness (error) can be reduced to some extent through the study design, but at some level further reduction becomes difficult or impractical. It then becomes important to determine or evaluate the magnitude of measurement error and perhaps evaluate its effect on the investigated relationships. All this is particularly true for blood pressure measurement. The gold standard for measuring blood pressure (BP) is a 24-hour ambulatory measurement. However, this technology is not available in Primary Care Clinics in South Africa and a set of three mercury-based BP measurements is the norm for a clinic visit. The quality of the standard combination of the repeated measurements can be improved by modelling the measurement error of each of the diastolic and systolic measurements and determining optimal weights for the combination of measurements, which will give a better estimate of the patient's true BP. The optimal weights can be determined through the method of structural equations modelling (SEM), which allows richer models than the standard repeated measures ANOVA; these models are less restrictive and give more detail than the traditional approaches. Structural equations modelling, which is a special case of covariance structure modelling, has proven to be useful in the social sciences over the years. Its appeal stems from the fact that it includes multiple regression and factor analysis as special cases. Multi-type multi-time (MTMT) models are a specific type of structural equations models that suit the modelling of BP measurements. These designs (MTMT models) constitute a variant of repeated measurement designs and are based on Campbell and Fiske's (1959) suggestion that the quality of methods (time in our case) can be determined by comparing them with other methods in order to reveal both the systematic and random errors. MTMT models also showed superiority over other data analysis methods because they accommodate the underlying physiology of BP. In particular, they proved to be a strong alternative for the analysis of BP measurements whenever repeated measures are available, even when such measures do not constitute equivalent replicates. This thesis focuses on SEM and its application to BP studies conducted in a community survey of Mamre and the Mitchells Plain hypertensive clinic population.
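The benchmark that SEM-derived weights generalise is the classical minimum-variance combination of repeated readings: if the three clinic readings X_1, X_2, X_3 are unbiased for the true pressure with uncorrelated error variances sigma_1^2, sigma_2^2, sigma_3^2, the best linear combination weights them inversely to those variances (a standard result, stated here for orientation rather than taken from the thesis):

```latex
\hat{\mu} = \sum_{i=1}^{3} w_i X_i ,
\qquad
w_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{3} 1/\sigma_j^2} ,
```

which reduces to the usual unweighted mean of the three readings when the error variances are equal; the structural equations models fitted in the thesis allow these variances (and covariances) to be estimated rather than assumed.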
APA, Harvard, Vancouver, ISO, and other styles
50

Elliott, Laine Elizabeth. "Adjustment for measurement error." 2009. http://www.lib.ncsu.edu/theses/available/etd-08142009-132845/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles