
Dissertations / Theses on the topic 'Random measurement error'



Consult the top 24 dissertations / theses for your research on the topic 'Random measurement error.'


You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Zhou, Tingxian, Xiaohua Yin, and Xianming Zhao. "A New Error Control Scheme for Remote Control System." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/611658.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California
How to raise the reliability of data transmission is one of the main problems faced by modern digital communication designers. This paper studies error-correcting codes suitable for channels in which both random and burst errors occur. A new error control scheme is given. The scheme is a concatenated coding system using an interleaved Reed-Solomon code with symbols over GF(2^4) as the outer code and a Viterbi-decoded convolutional code as the inner code. Computer simulation shows that the concatenated coding system achieves a very low output bit error rate (BER) and can correct many compound error patterns. It is suitable for severely disturbed channels in which both random and burst errors occur. This scheme will be adopted for a remote control system.
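The key ingredient that lets a concatenated scheme of this kind handle burst errors is the interleaver between the outer and inner codes. As a rough, self-contained illustration (my own sketch, not the authors' GF(2^4) Reed-Solomon/Viterbi system), the block interleaver below spreads a burst of channel errors across many outer-code words so that each word sees only a few symbol errors:

```python
# Toy block interleaver: write symbols row by row, transmit column by column.
# A burst that hits consecutive transmitted symbols then lands in different rows,
# i.e. in different outer-code words, each of which only has to correct a few errors.
import numpy as np

rows, cols = 8, 16                       # 8 outer-code words of 16 symbols each
symbols = np.arange(rows * cols)         # stand-in for encoded symbols
block = symbols.reshape(rows, cols)

transmitted = block.T.flatten()          # interleaved transmission order
burst = set(range(40, 48))               # an 8-symbol burst error on the channel

received = block.copy()
for pos in burst:                        # map each hit position back to (row, col)
    r, c = pos % rows, pos // rows
    received[r, c] = -1                  # mark the symbol as corrupted

errors_per_word = (received == -1).sum(axis=1)
print("symbol errors per outer-code word:", errors_per_word)   # at most 1 each
```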
2

Häggström, Lundevaller Erling. "Tests of random effects in linear and non-linear models." Doctoral thesis, Umeå universitet, Statistik, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-15.

Full text
3

Varughese, Suma. "A Study On Effects Of Phase - Amplitude Errors In Planar Near Field Measurement Facility." Thesis, Indian Institute of Science, 2000. https://etd.iisc.ac.in/handle/2005/231.

Full text
Abstract:
An antenna is an indispensable part of a radar or free-space communication system, and different applications impose different stringent specifications on it. Once designed and fabricated for an intended application, an antenna or antenna array has to be evaluated for its far-field characteristics in a real free-space environment, which requires setting up a far-field test site. Maintaining the site to keep stray reflection levels low and the cost of the real estate are among the disadvantages. Near-field measurement facilities are compact and can be used to test antennas by exploiting the relationship between the near field and the far field. It is shown that the far-field patterns of an antenna can be predicted with sufficient accuracy provided the near-field measurements are accurate. Owing to limitations of near-field measurement systems, errors creep in and corrupt the measured near-field data, thereby introducing errors in the predicted far field. All these errors ultimately corrupt the phase and amplitude data. In this thesis, one such facility, the Planar Near Field Measurement facility, is discussed, together with its limitations and the errors that arise from them. The investigations carried out aim at a detailed study of these phase and amplitude errors and their effect on the far-field patterns of the antenna. Depending on the source, the errors are classified as spike, pulse and random errors. The locations at which these errors occur in the measurement plane and their effects on the far field of the antenna are studied for both phase and amplitude errors. The studies conducted for various phase and amplitude errors show that the near-field phase and amplitude data are more tolerant of random errors, as the far-field patterns are not affected even in low-sidelobe cases. Spike errors, though they occur as a wedge at a single point in the measurement plane, have a more pronounced effect on the far-field patterns; the lower the taper value of the antenna, the more pronounced the error. It is also observed that the far-field pattern is affected only in the plane where the error occurred and not in the orthogonal plane. Pulse-type errors, which occur even over a short length of the measurement, affect both principal-plane far-field patterns. This study can be used as a tool to determine the level to which various errors, such as mechanical and RF errors, need to be controlled to make useful and correct pattern predictions in a particular facility. The study can thereby help economise the budget of the facility, since the parameters required for building it need not be specified beyond the actual requirement. Though limited in scope, this study is certainly a trendsetter in this direction.
4

Varughese, Suma. "A Study On Effects Of Phase - Amplitude Errors In Planar Near Field Measurement Facility." Thesis, Indian Institute of Science, 2000. http://hdl.handle.net/2005/231.

Full text
Abstract:
An antenna is an indispensable part of a radar or free-space communication system, and different applications impose different stringent specifications on it. Once designed and fabricated for an intended application, an antenna or antenna array has to be evaluated for its far-field characteristics in a real free-space environment, which requires setting up a far-field test site. Maintaining the site to keep stray reflection levels low and the cost of the real estate are among the disadvantages. Near-field measurement facilities are compact and can be used to test antennas by exploiting the relationship between the near field and the far field. It is shown that the far-field patterns of an antenna can be predicted with sufficient accuracy provided the near-field measurements are accurate. Owing to limitations of near-field measurement systems, errors creep in and corrupt the measured near-field data, thereby introducing errors in the predicted far field. All these errors ultimately corrupt the phase and amplitude data. In this thesis, one such facility, the Planar Near Field Measurement facility, is discussed, together with its limitations and the errors that arise from them. The investigations carried out aim at a detailed study of these phase and amplitude errors and their effect on the far-field patterns of the antenna. Depending on the source, the errors are classified as spike, pulse and random errors. The locations at which these errors occur in the measurement plane and their effects on the far field of the antenna are studied for both phase and amplitude errors. The studies conducted for various phase and amplitude errors show that the near-field phase and amplitude data are more tolerant of random errors, as the far-field patterns are not affected even in low-sidelobe cases. Spike errors, though they occur as a wedge at a single point in the measurement plane, have a more pronounced effect on the far-field patterns; the lower the taper value of the antenna, the more pronounced the error. It is also observed that the far-field pattern is affected only in the plane where the error occurred and not in the orthogonal plane. Pulse-type errors, which occur even over a short length of the measurement, affect both principal-plane far-field patterns. This study can be used as a tool to determine the level to which various errors, such as mechanical and RF errors, need to be controlled to make useful and correct pattern predictions in a particular facility. The study can thereby help economise the budget of the facility, since the parameters required for building it need not be specified beyond the actual requirement. Though limited in scope, this study is certainly a trendsetter in this direction.
5

Bručas, Domantas. "Geodezinių kampų matavimo prietaisų kalibravimo įrangos kūrimas ir tyrimas." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080620_092018-51641.

Full text
Abstract:
The dissertation examines methods and means for the testing and calibration of geodetic angle-measuring instruments. The main object of the research is an analysis of the methods and means for measuring the accuracy parameters of geodetic instruments, the development of calibration equipment, the investigation of its accuracy characteristics and the improvement of the equipment. These objects are important for the calibration of geodetic instruments, which in turn is essential for ensuring the required accuracy of measurements made with such instruments in geodesy, construction, mechanical engineering, etc. The main aim of the dissertation is an analysis of the possibilities for calibrating geodetic angle-measuring instruments and an investigation of the accuracy parameters of the calibration equipment that was developed. Several principal tasks are addressed: an analysis of plane-angle measurement methods and equipment suitable for calibrating geodetic instruments; the development of a multi-reference angle calibration test bench at the Institute of Geodesy of Vilnius Gediminas Technical University and the investigation of its accuracy characteristics; the investigation of possibilities and means for increasing the accuracy of the bench; and the improvement of circular scale calibration methods. The dissertation consists of six chapters, an introduction, conclusions, a list of references and appendices. The introduction discusses the relevance of the problem, formulates the aim and tasks of the work, describes its scientific novelty, and presents the author's conference presentations, publications and the structure of the dissertation. The first chapter is devoted to the most widespread plane-angle measurement methods and means suitable for geodetic instruments... [see the full text]
The main idea of this PhD thesis is an accuracy analysis of the testing and calibration of geodetic instruments. The object of investigation is an analysis of the means and methods for testing and calibrating geodetic instruments for plane-angle measurement, the development of such calibration equipment, the investigation of its accuracy, and research into the possibilities for increasing that accuracy. These objects are important for the successful testing or calibration of angle-measuring geodetic instruments, which is essential in ensuring the precision of measurements taken in surveying, construction, mechanical engineering, etc. The presented work has several main goals. The first is an analysis of the angle-measurement methods and devices suitable for testing and calibrating geodetic instruments; based on the results of that analysis, the second task can be formulated: the creation of multi-reference plane-angle testing and calibration equipment at the Institute of Geodesy, Vilnius Gediminas Technical University, and the investigation of its accuracy parameters. The third task is to investigate the possibilities for increasing the accuracy of the equipment and to implement some of them in practice. The thesis consists of four chapters, an introduction, conclusions, a list of references and appendixes. The introduction is dedicated to the problem and its topicality; it also formulates the purposes and tasks of the work, the methods used and the novelty of... [to full text]
6

Olson, Brent. "Evaluating the error of measurement due to categorical scaling with a measurement invariance approach to confirmatory factor analysis." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/332.

Full text
Abstract:
It has previously been determined that using 3 or 4 points on a categorized response scale will fail to produce a continuous distribution of scores. However, there is no evidence, thus far, revealing the number of scale points that may indeed possess an approximate or sufficiently continuous distribution. This study provides the evidence to suggest the level of categorization in discrete scales that makes them directly comparable to continuous scales in terms of their measurement properties. To do this, we first introduced a novel procedure for simulating discretely scaled data that was both informed and validated through the principles of the Classical True Score Model. Second, we employed a measurement invariance (MI) approach to confirmatory factor analysis (CFA) in order to directly compare the measurement quality of continuously scaled factor models to that of discretely scaled models. The simulated design conditions of the study varied with respect to item-specific variance (low, moderate, high), random error variance (none, moderate, high), and discrete scale categorization (number of scale points ranged from 3 to 101). A population analogue approach was taken with respect to sample size (N = 10,000). We concluded that there are conditions under which response scales with 11 to 15 scale points can reproduce the measurement properties of a continuous scale. Using response scales with more than 15 points may be, for the most part, unnecessary. Scales having from 3 to 10 points introduce a significant level of measurement error, and caution should be taken when employing such scales. The implications of this research and future directions are discussed.
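As a rough illustration of the discretization step such a simulation involves (my own minimal sketch under the Classical True Score Model, with assumed variance values, not the study's procedure), the snippet below cuts a continuous item score into k response categories and checks how well the discrete scores track the continuous ones:

```python
# Generate continuous Classical True Score Model data (X = T + E), discretize it
# onto a k-point response scale, and compare the discrete scores with the
# continuous ones for several levels of categorization.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                  # population analogue, as in the study
true_score = rng.normal(0.0, 1.0, n)
error = rng.normal(0.0, 0.5, n)             # assumed random error variance
continuous_item = true_score + error        # observed continuous item score

def discretize(x, k):
    """Map a continuous item onto a k-point scale by cutting its range into k equal bins."""
    edges = np.linspace(x.min(), x.max(), k + 1)
    return np.digitize(x, edges[1:-1]) + 1  # categories 1..k

for k in (3, 5, 11, 15, 101):
    d = discretize(continuous_item, k)
    r = np.corrcoef(d, continuous_item)[0, 1]
    print(f"{k:>3}-point scale: correlation with continuous item = {r:.4f}")
```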
7

Bručas, Domantas. "Development and research of the test bench for the angle calibration of geodetic instruments." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080620_092300-91259.

Full text
Abstract:
The main idea of this PhD thesis is an accuracy analysis of the testing and calibration of geodetic instruments. The object of investigation is an analysis of the means and methods for testing and calibrating geodetic instruments for plane-angle measurement, the development of such calibration equipment, the investigation of its accuracy, and research into the possibilities for increasing that accuracy. These objects are important for the successful testing or calibration of angle-measuring geodetic instruments, which is essential in ensuring the precision of measurements taken in surveying, construction, mechanical engineering, etc. The presented work has several main goals. The first is an analysis of the angle-measurement methods and devices suitable for testing and calibrating geodetic instruments; based on the results of that analysis, the second task can be formulated: the creation of multi-reference plane-angle testing and calibration equipment at the Institute of Geodesy, Vilnius Gediminas Technical University, and the investigation of its accuracy parameters. The third task is to investigate the possibilities for increasing the accuracy of the equipment and to implement some of them in practice. The thesis consists of four chapters, an introduction, conclusions, a list of references and appendixes. The introduction is dedicated to the problem and its topicality; it also formulates the purposes and tasks of the work, the methods used and the novelty of... [to full text]
The dissertation examines methods and means for the testing and calibration of geodetic angle-measuring instruments. The main object of the research is an analysis of the methods and means for measuring the accuracy parameters of geodetic instruments, the development of calibration equipment, the investigation of its accuracy characteristics and the improvement of the equipment. These objects are important for the calibration of geodetic instruments, which in turn is essential for ensuring the required accuracy of measurements made with such instruments in geodesy, construction, mechanical engineering, etc. The main aim of the dissertation is an analysis of the possibilities for calibrating geodetic angle-measuring instruments and an investigation of the accuracy parameters of the calibration equipment that was developed. Several principal tasks are addressed: an analysis of plane-angle measurement methods and equipment suitable for calibrating geodetic instruments; the development of a multi-reference angle calibration test bench at the Institute of Geodesy of Vilnius Gediminas Technical University and the investigation of its accuracy characteristics; the investigation of possibilities and means for increasing the accuracy of the bench; and the improvement of circular scale calibration methods. The dissertation consists of six chapters, an introduction, conclusions, a list of references and appendices. The introduction discusses the relevance of the problem, formulates the aim and tasks of the work, describes its scientific novelty, and presents the author's conference presentations, publications and the structure of the dissertation. The first chapter is devoted to the most widespread plane-angle measurement methods and means suitable for geodetic instruments... [see the full text]
8

Wang, Ziyang. "Next Generation Ultrashort-Pulse Retrieval Algorithm for Frequency-Resolved Optical Gating: The Inclusion of Random (Noise) and Nonrandom (Spatio-Temporal Pulse Distortions) Error." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-04122005-224257/unrestricted/wang%5Fziyang%5F200505%5Fphd.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Physics, Georgia Institute of Technology, 2005.
You, Li, Committee Member; Buck, John A., Committee Member; Kvam, Paul, Committee Member; Kennedy, Brian, Committee Member; Trebino, Rick, Committee Chair. Vita. Includes bibliographical references.
9

Zhong, Shan. "Measurement calibration/tuning & topology processing in power system state estimation." Texas A&M University, 2003. http://hdl.handle.net/1969.1/1595.

Full text
Abstract:
State estimation plays an important role in modern power systems. The errors in the telemetered measurements and the connectivity information of the network will greatly contaminate the estimated system state. This dissertation provides solutions to suppress the influences of these errors. A two-stage state estimation algorithm has been utilized in topology error identification in the past decade. Chapter II discusses the implementation of this algorithm. A concise substation model is defined for this purpose. A friendly user interface that incorporates the two-stage algorithm into the conventional state estimator is developed. The performances of the two-stage state estimation algorithms rely on accurate determination of suspect substations. A comprehensive identification procedure is described in chapter III. In order to evaluate the proposed procedure, a topology error library is created. Several identification methods are comparatively tested using this library. A remote measurement calibration method is presented in chapter IV. The un-calibrated quantities can be related to the true values by the characteristic functions. The conventional state estimation algorithm is modified to include the parameters of these functions. Hence they can be estimated along with the system state variables and used to calibrate the measurements. The measurements taken at different time instants are utilized to minimize the influence of the random errors. A method for auto tuning of measurement weights in state estimation is described in chapter V. Two alternative ways to estimate the measurement random error variances are discussed. They are both tested on simulation data generated based on IEEE systems. Their performances are compared. A comprehensive solution, which contains an initialization process and a recursively updating process, is presented. Chapter VI investigates the errors introduced in the positive sequence state estimation due to the usual assumptions of having fully balanced bus loads/generations and continuously transposed transmission lines. Several tests are conducted using different assumptions regarding the availability of single and multi-phase measurements. It is demonstrated that incomplete metering of three-phase system quantities may lead to significant errors in the positive sequence state estimates for certain cases. A novel sequence domain three-phase state estimation algorithm is proposed to solve this problem.
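For readers unfamiliar with the estimator that sits underneath all of these extensions, the sketch below shows plain weighted-least-squares state estimation on a made-up linear measurement model; the measurement weights whose tuning is discussed in chapter V are exactly the inverse error variances used here. This is only an illustrative baseline of my own, not the dissertation's two-stage or sequence-domain algorithms, and the matrix and noise levels are assumptions:

```python
# Weighted least squares state estimation: z = H x + e, with each measurement
# weighted by the inverse of its (assumed known) error variance.
import numpy as np

H = np.array([[1.0, 0.0],
              [1.0, -1.0],
              [0.0, 1.0],
              [2.0, 1.0]])                    # measurement model
true_x = np.array([1.02, 0.97])               # "true" system state
sigma = np.array([0.01, 0.02, 0.01, 0.05])    # per-measurement error std devs

rng = np.random.default_rng(1)
z = H @ true_x + rng.normal(0.0, sigma)       # telemetered measurements

W = np.diag(1.0 / sigma**2)                   # measurement weights
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimated state:", x_hat)
```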
10

Bassani, N. P. "STATISTICAL METHODS FOR THE ASSESSMENT OF WITHIN-PLATFORM REPEATABILITY AND BETWEEN-PLATFORM AGREEMENT IN MICRORNA MICROARRAY TECHNOLOGIES." Doctoral thesis, Università degli Studi di Milano, 2013. http://hdl.handle.net/2434/215072.

Full text
Abstract:
Over the last years, miRNA microarray platforms have provided great insight into the biological mechanisms underlying the onset and development of several diseases, such as tumours and chronic pathologies. However, only a few studies have evaluated the reliability and concordance of these technologies, mostly using improper statistical methods. In this project we studied the performance of three different miRNA microarray platforms (Affymetrix, Agilent and Illumina) in terms of within-platform repeatability, by means of a random-effects model used to assess the contribution to overall variability from different technical sources and to interpret the quotas of explained variance as Intraclass Correlation Coefficients. Jointly, Concordance Correlation Coefficients between technical array replicates are estimated and assessed in a non-inferiority setting to further evaluate patterns of platform repeatability. Between-platform concordance has been studied using a modified version of the Bland-Altman plot which makes use of a linear measurement error model to build the agreement intervals. All the analyses have been performed on unfiltered and non-normalized data, yet all results have been compared to those obtained after applying standard filtering procedures and quantile and loess normalization algorithms. In this project two biological samples were considered: one tumour cell line, A498, and a pool of healthy human tissues, hREF, with three technical replicates for each platform. Our results suggest a good degree of repeatability for all the technologies considered, whereas only Agilent and Illumina show good patterns of concordance. The proposed methods have the advantage of being very flexible and can be useful for performance assessment of emerging genomic platforms other than microarrays, such as RNA-Seq technologies.
11

Prasai, Nilam. "Testing Criterion Validity of Benefit Transfer Using Simulated Data." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/34685.

Full text
Abstract:
The purpose of this thesis is to investigate how differences between the study and policy sites affect the performance of benefit function transfer. For this purpose, simulated data are created in which all information necessary to conduct the benefit function transfer is available. We consider six cases of difference between the study and policy sites: scale parameter, substitution possibilities, observable characteristics, population preferences, measurement error in variables, and a case of preference heterogeneity at the study site with fixed preferences at the policy site. These cases of difference are considered one at a time and their impact on the quality of transfer is investigated. A RUM model based on revealed preference was used for this analysis. The function estimated at the study site is transferred to the policy site, and willingness to pay for five different cases of policy change is calculated. The willingness to pay so calculated is compared with the true willingness to pay to evaluate the performance of benefit function transfer. When the study and policy sites differ only in terms of the scale parameter, equality of estimated and true expected WTP is not rejected in 89.7% or more of cases when the sample size is 1000. Similarly, equality of estimated and true preference coefficients is not rejected in 88.8% or more of cases. We find that benefit transfer performs better in only one direction: when the function is estimated at a lower scale and transferred to a policy site with a higher scale, the transfer error is smaller in magnitude than when the function is estimated at a higher scale and transferred to a policy site with a lower scale. This study also finds that, when the sites differ in substitution possibilities, transfer error is smaller when a function from a study site with more site substitutes is transferred to a policy site with fewer site substitutes. Transfer error is magnified when measurement error is involved in any of the variables. This study does not recommend function transfer whenever the study site's model is missing one of the important variables at the policy site, or whenever data on the variables included in the study site's model are not available at the policy site for the benefit transfer application. It also suggests the use of a large representative sample with sufficient variation to minimize transfer error in benefit transfer.
Master of Science
12

Li, Qiuju. "Statistical inference for joint modelling of longitudinal and survival data." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/statistical-inference-for-joint-modelling-of-longitudinal-and-survival-data(65e644f3-d26f-47c0-bbe1-a51d01ddc1b9).html.

Full text
Abstract:
In longitudinal studies, data collected within a subject or cluster are by their very nature somewhat correlated, and special care is needed to account for such correlation in the analysis. Within this framework, three topics are discussed in this thesis. In chapter 2, the joint modelling of a multivariate longitudinal process consisting of different types of outcomes is discussed. In the large cohort study of the UK North Staffordshire osteoarthritis project, longitudinal trivariate outcomes of continuous, binary and ordinal data are observed at baseline, year 3 and year 6. Instead of analysing each process separately, joint modelling is proposed for the trivariate outcomes to account for the inherent association by introducing random effects and the covariance matrix G. The influence of the covariance matrix G on statistical inference for the fixed-effects parameters is investigated within the Bayesian framework. The study shows that joint modelling of the multivariate longitudinal process can reduce bias and provide more reliable results than modelling each process separately. Together with the longitudinal measurements taken intermittently, a counting process of events in time is often observed during a longitudinal study. It is of interest to investigate the relationship between time to event and the longitudinal process; on the other hand, measurements of the longitudinal process may be truncated by terminating events such as death. Thus, it may be crucial to jointly model the survival and longitudinal data. It is popular to propose linear mixed-effects models for a longitudinal process of continuous outcomes and a Cox regression model for the survival data to characterize the relationship between time to event and the longitudinal process, under some standard assumptions. In chapter 3, we investigate the influence on statistical inference for the survival data when the assumption of mutually independent random errors in the linear mixed-effects model for the longitudinal process is violated. The study is conducted using a conditional score estimation approach, which provides robust estimators and has computational advantages. A generalised sufficient statistic for the random effects is proposed to account for the correlation remaining among the random errors, characterized by the data-driven method of modified Cholesky decomposition. The simulation study shows that doing so provides nearly unbiased estimation as well as efficient statistical inference. Chapter 4 accounts for both the current and past information of the longitudinal process in the survival part of the joint model. In the last 15 to 20 years, it has been popular, or even standard, to assume that the longitudinal process affects the counting process of events in time only through its current value, which, however, need not always be true, as recognised by investigators in more recent studies. An integral over the trajectory of the longitudinal process, along with a weighting curve, is proposed to account for both current and past information, to improve inference and to reduce the underestimation of the effects of the longitudinal process on the hazard. A plausible approach to statistical inference for the proposed models is presented in the chapter, along with a real data analysis and a simulation study.
13

Garcia, Luz Mery González. "Modelos baseados no planejamento para análise de populações finitas." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-19062008-183609/.

Full text
Abstract:
We study the problem of obtaining optimal estimators/predictors for linear combinations of responses collected from a finite population via simple random sampling. In this context, we extend the mixed model for finite populations proposed by Stanek, Singer & Lencina (2004, Journal of Statistical Planning and Inference) to cases that include measurement errors (endogenous and exogenous) and auxiliary information. Assuming that the variances are known, we show that the proposed estimators/predictors have the smallest mean squared error within the class of unbiased linear estimators. Through simulation studies, we compare the performance of the empirical estimators/predictors, i.e., those obtained by replacing the variance components with estimates, with that of traditional competitors. We also extend these models to the analysis of studies with a pretest/posttest structure. Again via simulation, we compare the performance of the empirical estimators with that of the estimator obtained through classical repeated measures techniques and with that of the estimator obtained via analysis of covariance using least squares, concluding that the empirical estimators/predictors had smaller mean squared error and smaller bias. In general, we suggest the use of the proposed empirical estimators/predictors for data with asymmetric distributions or for small samples.
We consider optimal estimation of finite population parameters with data obtained via simple random samples. In this context, we extend a finite population mixed model proposed by Stanek, Singer & Lencina (2004, Journal of Statistical Planning and Inference) by including measurement errors (endogenous or exogenous) and auxiliary information. Assuming that variance components are known, we show that the proposed estimators/predictors have the smallest mean squared error in the class of unbiased estimators. Using simulation studies, we compare the performance of the empirical estimators/predictors obtained by replacing variance components with estimates with the performance of a traditional estimator. We also extend the finite population mixed model to data obtained via pretest-posttest designs. Through simulation studies, we compare the performance of the empirical estimator of the difference in gain between groups with the performance of the usual repeated measures estimator and with the performance of the usual analysis of covariance estimator obtained via ordinary least squares. The empirical estimator has smaller mean squared error and bias than the alternative estimators under consideration. In general, we recommend the use of the proposed estimators/ predictors for either asymmetric response distributions or small samples.
14

Lam, Edward W. H. "Restoration of randomly blurred images with measurement error in the point spread function." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29629.

Full text
Abstract:
The restoration of images degraded by a stochastic, time-varying point spread function (PSF) is addressed. The object to be restored is assumed to remain fixed during the observation time, and a sequence of observations of the unknown object is assumed available. The true value of the random PSF is not known; however, for each observation a "noisy" measurement of the random PSF at the time of observation is assumed available. Practical applications in which the PSF is time varying include situations in which the images are obtained through a nonhomogeneous medium such as water or the earth's atmosphere. Under such conditions, it is not possible to determine the PSF in advance, so attempts must be made to extract it from the degraded images themselves. A measurement of the PSF may be obtained either by isolating a naturally occurring point object in the scene, such as a reference star in optical astronomy, or by artificially installing an impulse light source in the scene. The noise in measurements of point spread functions obtained in this manner is particularly troublesome when the light signals emitted by the point object are not very strong. In this thesis, we formulate a model for this restoration problem with PSF measurement error. A maximum likelihood filter and a Wiener filter are then developed for this model. Restorations are performed on simulated degraded images. Comparisons are made with standard filters of the classical restoration model (which ignores the PSF error), and also with results based on the averaged degraded image and averaged PSFs. Experimental results confirm that the filters we developed perform better than those based on averaging and than those ignoring the PSF measurement error.
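As a point of reference for the filters discussed above, the sketch below applies classical frequency-domain Wiener deconvolution to a 1-D signal using an averaged noisy PSF measurement. It is an illustrative baseline of the kind the thesis compares against, not the maximum likelihood or Wiener filters derived for the PSF-measurement-error model; all signal sizes and noise levels are assumptions:

```python
# Classical Wiener deconvolution with a PSF estimated by averaging noisy measurements.
import numpy as np

rng = np.random.default_rng(2)
n = 256
x = np.zeros(n); x[60:80] = 1.0; x[150] = 3.0         # unknown 1-D "object"

true_psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
true_psf /= true_psf.sum()

def blur(signal, psf, noise_sd):
    """Circular convolution of the object with the PSF plus additive noise."""
    conv = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(np.fft.ifftshift(psf))))
    return conv + rng.normal(0.0, noise_sd, signal.size)

y = blur(x, true_psf, 0.02)                            # degraded observation
psf_meas = np.mean([true_psf + rng.normal(0.0, 0.001, n) for _ in range(10)], axis=0)

H = np.fft.fft(np.fft.ifftshift(psf_meas))
nsr = 0.01                                             # assumed noise-to-signal power ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)
x_hat = np.real(np.fft.ifft(wiener * np.fft.fft(y)))
print("restoration RMSE:", np.sqrt(np.mean((x_hat - x) ** 2)))
```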
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
15

Falk, Matthew Gregory. "Incorporating uncertainty in environmental models informed by imagery." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/33235/1/Matthew_Falk_Thesis.pdf.

Full text
Abstract:
In this thesis, the issue of incorporating uncertainty for environmental modelling informed by imagery is explored by considering uncertainty in deterministic modelling, measurement uncertainty and uncertainty in image composition. Incorporating uncertainty in deterministic modelling is extended for use with imagery using the Bayesian melding approach. In the application presented, slope steepness is shown to be the main contributor to total uncertainty in the Revised Universal Soil Loss Equation. A spatial sampling procedure is also proposed to assist in implementing Bayesian melding given the increased data size with models informed by imagery. Measurement error models are another approach to incorporating uncertainty when data is informed by imagery. These models for measurement uncertainty, considered in a Bayesian conditional independence framework, are applied to ecological data generated from imagery. The models are shown to be appropriate and useful in certain situations. Measurement uncertainty is also considered in the context of change detection when two images are not co-registered. An approach for detecting change in two successive images is proposed that is not affected by registration. The procedure uses the Kolmogorov-Smirnov test on homogeneous segments of an image to detect change, with the homogeneous segments determined using a Bayesian mixture model of pixel values. Using the mixture model to segment an image also allows for uncertainty in the composition of an image. This thesis concludes by comparing several different Bayesian image segmentation approaches that allow for uncertainty regarding the allocation of pixels to different ground components. Each segmentation approach is applied to a data set of chlorophyll values and shown to have different benefits and drawbacks depending on the aims of the analysis.
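A minimal sketch of the registration-free change-detection idea mentioned above is given below; it assumes the homogeneous segment labels are already available (in the thesis they come from a Bayesian mixture segmentation) and simply applies a two-sample Kolmogorov-Smirnov test per segment. The data and thresholds are my own illustrative choices:

```python
# Compare pixel values from corresponding homogeneous segments of two images with
# a two-sample KS test, so the decision does not depend on pixel-to-pixel registration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
segments = np.repeat([0, 1, 2], 500)             # three homogeneous segments
image1 = rng.normal(loc=np.array([0.2, 0.5, 0.8])[segments], scale=0.05)
image2 = image1.copy()
image2[segments == 2] = rng.normal(0.6, 0.05, np.sum(segments == 2))   # change in one segment

for s in np.unique(segments):
    stat, p = ks_2samp(image1[segments == s], image2[segments == s])
    flag = "change" if p < 0.01 else "no change"
    print(f"segment {s}: KS statistic = {stat:.3f}, p = {p:.3g} -> {flag}")
```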
16

Sobotková, Kateřina. "Zvýšení efektivity kontroly ramene tankovací nádrže." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-401062.

Full text
Abstract:
The diploma thesis presents a new design of an inspection procedure for a plastic tank component. The theoretical part deals with the potential sources of errors and uncertainties arising from the measurement itself; it also covers the characteristics of available CMM devices and includes an analysis of methods for assessing the acceptability of the measurement plan. The practical part systematically analyses the current state and proposes a new solution using a program created for a coordinate measuring machine. A comparison of both variants is presented as the output of the thesis.
17

Fujdiak, Radek. "Analýza a optimalizace datové komunikace pro telemetrické systémy v energetice." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-358408.

Full text
Abstract:
Telemetry system, Optimisation, Sensoric networks, Smart Grid, Internet of Things, Sensors, Information security, Cryptography, Cryptography algorithms, Cryptosystem, Confidentiality, Integrity, Authentication, Data freshness, Non-Repudiation.
18

Hoque, Md Erfanul. "Longitudinal data analysis with covariates measurement error." 2017. http://hdl.handle.net/1993/31988.

Full text
Abstract:
Longitudinal data occur frequently in medical studies, and covariates measured with error are a typical feature of such data. Generalized linear mixed models (GLMMs) are commonly used to analyse longitudinal data. It is typically assumed in these models that the random effects covariance matrix is constant across subjects. In many situations, however, this correlation structure may differ among subjects, and ignoring the heterogeneity can cause biased estimates of the model parameters. In this thesis, following Lee et al. (2012), we propose an approach to properly model the random effects covariance matrix based on covariates, in the class of GLMMs where covariates are also measured with error. The parameters resulting from this decomposition have a sensible interpretation and can easily be modelled without concern about the positive definiteness of the resulting estimator. The performance of the proposed approach is evaluated through simulation studies, which show that the proposed method performs very well in terms of biases, mean squared errors and coverage rates. The proposed method is also illustrated using data from the Manitoba Follow-up Study.
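In approaches of this kind, the decomposition alluded to is the modified Cholesky decomposition of the random effects covariance matrix. The sketch below (my own illustration with an arbitrary covariance matrix, not the thesis code) shows why it removes the positive-definiteness concern: any real "generalized autoregressive parameters" in T and positive "innovation variances" in D map back to a valid covariance matrix:

```python
# Modified Cholesky decomposition: T cov T' = D, with T unit lower triangular and
# D diagonal, so cov can be rebuilt as inv(T) @ D @ inv(T)'.
import numpy as np

def modified_cholesky(cov):
    """Return (T, D) with T unit lower triangular, D diagonal, and T @ cov @ T.T == D."""
    C = np.linalg.cholesky(cov)            # cov = C C', C lower triangular
    D_half = np.diag(np.diag(C))
    T = D_half @ np.linalg.inv(C)          # unit lower triangular by construction
    return T, D_half @ D_half

cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.5, 0.7],
                [0.3, 0.7, 2.0]])
T, D = modified_cholesky(cov)
print("T cov T' =\n", np.round(T @ cov @ T.T, 10))                       # equals D
print("reconstructed cov =\n", np.linalg.inv(T) @ D @ np.linalg.inv(T).T)
```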
February 2017
19

Litton, Nathaniel A. "Deconvolution in Random Effects Models via Normal Mixtures." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-3229.

Full text
Abstract:
This dissertation describes a minimum distance method for density estimation when the variable of interest is not directly observed. It is assumed that the underlying target density can be well approximated by a mixture of normals. The method compares a density estimate of the observable data with a density of the observable data induced from assuming the target density can be written as a mixture of normals. The goal is to choose the parameters in the normal mixture that minimize the distance between the density estimate of the observable data and the induced density from the model. The method is applied to the deconvolution problem to estimate the density of X_i when the variable Y_i = X_i + Z_i, i = 1, ..., n, is observed, and the density of Z_i is known. Additionally, it is applied to a location random effects model to estimate the density of Z_ij when the observable quantities are p data sets of size n given by X_ij = α_i + γZ_ij, i = 1, ..., p, j = 1, ..., n, where the densities of α_i and Z_ij are both unknown. The performance of the minimum distance approach in the measurement error model is compared with the deconvoluting kernel density estimator of Stefanski and Carroll (1990). In the location random effects model, the minimum distance estimator is compared with the explicit characteristic function inversion method from Hall and Yao (2003). In both models, the methods are compared using simulated and real data sets. In the simulations, performance is evaluated using an integrated squared error criterion. Results indicate that the minimum distance methodology is comparable to the deconvoluting kernel density estimator and outperforms the explicit characteristic function inversion method.
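A toy version of the minimum distance idea in the measurement error (deconvolution) setting is sketched below, under simplifying assumptions of my own: two mixture components, a Gaussian kernel density estimate of the observed data, and an integrated squared error distance on a grid. It is meant only to make the mechanics concrete, not to reproduce the dissertation's estimator:

```python
# Minimum distance deconvolution with a 2-component normal mixture for X, where
# Y = X + Z is observed and the density of Z is known (Gaussian with sd sz).
import numpy as np
from scipy.stats import norm, gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, sz = 2000, 0.4                                   # sz: known noise std dev
x = np.where(rng.random(n) < 0.3, rng.normal(-1.5, 0.5, n), rng.normal(1.0, 0.7, n))
y = x + rng.normal(0.0, sz, n)                      # observed data Y = X + Z

grid = np.linspace(y.min(), y.max(), 400)
f_hat = gaussian_kde(y)(grid)                       # density estimate of the observable data

def induced_density(params, t):
    """Density of Y implied by a 2-component normal mixture for X plus N(0, sz^2) noise."""
    w = 1.0 / (1.0 + np.exp(-params[0]))
    mu1, mu2 = params[1], params[2]
    s1, s2 = np.exp(params[3]), np.exp(params[4])
    return (w * norm.pdf(t, mu1, np.sqrt(s1**2 + sz**2))
            + (1 - w) * norm.pdf(t, mu2, np.sqrt(s2**2 + sz**2)))

def distance(params):
    # Integrated squared difference between the data-based and model-induced densities.
    return np.trapz((f_hat - induced_density(params, grid)) ** 2, grid)

res = minimize(distance, x0=[0.0, -1.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
w_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print("estimated weight, means, sds:",
      np.round([w_hat, res.x[1], res.x[2], np.exp(res.x[3]), np.exp(res.x[4])], 3))
```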
20

Tang, Shih-Fen, and 唐世芬. "Improve Random Measurement Error and Practice Effect of the Tablet-based Symbol Digit Modalities Test in Patients with Schizophrenia." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/33872371512029361068.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Occupational Therapy
104
Background: Slow information processing speed is a common deficit in patients with schizophrenia, severely affecting their daily functioning. The Tablet-based Symbol Digit Modalities Test (T-SDMT) is a quick and easy-to-use measure of processing speed in patients with psychosis. However, the practice effect and substantial random measurement error have limited the utility of the T-SDMT. The purpose of this study was to explore the effect of different measurement methods on reducing the T-SDMT's practice effect and random measurement error. Methods: Three measurement methods were examined, one per group of patients: the first group took repeated measurements (2 formal tests); the second group had the practice time increased to three minutes, followed by one formal test; and the third group combined the increased three-minute practice time with repeated measurements (2 formal tests). Eighty-seven patients with chronic schizophrenia were recruited and randomly assigned to the three groups described above: the repeated-measurements group, the increased-practice-time group, and the combination group. All three groups were tested and retested, with the test and retest administered 2 weeks apart. Results: All three methods showed good test-retest consistency, with the increased-practice-time group showing the highest. The combination group effectively reduced random measurement error. However, a practice effect was present in all three groups; the increased-practice-time and combination groups reduced the practice effect more effectively than the repeated-measurements group. Conclusion: The combination of repeated measurements and increased practice time reduced random measurement error more than the other two methods. However, the random measurement error needs to be reduced further for clinical utility. In addition to addressing the practice effect, future studies should further increase the practice time and the number of formal tests (e.g., 3), in order to enhance the utility of the T-SDMT.
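For readers who want the two quantities at issue made concrete, the sketch below computes a practice effect (mean test-retest gain) and a random measurement error summary (standard error of measurement and minimal detectable change) from simulated test-retest scores. The numbers are invented and the formulas are the standard test-retest ones, not necessarily those used in the thesis:

```python
# Practice effect and random measurement error from test-retest data.
import numpy as np

rng = np.random.default_rng(5)
true_speed = rng.normal(40, 8, 30)                   # 30 hypothetical participants
test   = true_speed + rng.normal(0, 3, 30)           # session 1 score
retest = true_speed + 2.0 + rng.normal(0, 3, 30)     # session 2 score with a practice gain

diff = retest - test
practice_effect = diff.mean()                        # systematic gain between sessions
sem = diff.std(ddof=1) / np.sqrt(2)                  # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem                      # minimal detectable change (95%)

print(f"practice effect = {practice_effect:.2f} points")
print(f"SEM = {sem:.2f}, MDC95 = {mdc95:.2f}")
```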
21

Haque, Mohammadul S. K. "Denoising and Refinement Methods for 3D Reconstruction." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5354.

Full text
Abstract:
Capturing raw 3D data from the real world is the initial step for many 3D reconstruction pipelines in different computer vision applications. However, due to inaccuracies in measurement and oversimplification in the mathematical assumptions made during capture, 3D data remain contaminated with significant amounts of error. In this thesis, we investigate two different types of errors that are invariably encountered in 3D reconstruction. The first type comprises the random measurement error, or noise, that is inevitably present in raw 3D data obtained from the real world. The second type comprises errors that are geometrically structured. Specifically, we consider non-rigid alignment errors that arise in multi-view scenarios where complete 3D reconstructions of real-world objects are obtained from observations taken from multiple viewpoints. Firstly, we consider random measurement errors, modelled as additive 3D Gaussian noise. We consider the task of denoising 3D data obtained in two different modalities, i.e. 3D meshes and 3D point clouds, and establish the important factors that dictate the quality of denoising. We develop denoising schemes that account for these factors and provide evidence of superior denoising performance over existing approaches on synthetic and real datasets. Secondly, we consider non-rigid errors that are encountered in a multi-view 3D reconstruction pipeline. In particular, we address the problem of multi-view surface refinement for high-quality 3D reconstruction, where low-quality reconstructions obtained from consumer-grade depth cameras are enhanced using additional photometric information. We show that non-rigid estimation discrepancies that emerge in such tasks are a major issue limiting the quality of reconstruction. We systematically delineate the underlying factors and show that existing refinement methods in the literature do not consider them and hence fail to carry out a proper refinement. Considering these factors, we develop a complete multi-view pipeline for high-quality 3D reconstruction. We show the efficacy of our pipeline on synthetic and real datasets, as compared to other existing methods.
22

Ting, Yi-Hao, and 丁怡皓. "The Estimation of the Log-Linear Model with Measurement Errors and Random Effects." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/61273041689057506753.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Mathematics
102
Few studies discuss statistical inference when measurement error and random effects coexist in a generalized linear model. The main reason is that the distribution obtained after integrating out the random effects is no longer a generalized linear model, so the conventional conditional score or corrected score is difficult to apply. This thesis discusses an estimation method for the case where measurement error and random effects coexist in the log-linear model; the estimation is carried out with an extended corrected QVF score. We compare the efficiency of the naive, regression calibration and partially conditional score methods with the proposed method in simulation studies.
23

Lin, Cherng-Hann, and 林承翰. "The Estimation of the Logistic Regression Model with Measurement Errors and Random Effects." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/45425766748486641616.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Mathematics
101
Few studies discuss statistical inference when measurement error and random effects coexist in a generalized linear model. The main reason is that the distribution obtained after integrating out the random effects is no longer a generalized linear model, so the conventional conditional score or corrected score is difficult to apply. This thesis discusses an estimation method for the case where measurement error and random effects coexist in the logistic regression model; the estimation is carried out with a partially conditional score. We compare the efficiency of the naive and regression calibration methods with the proposed method in simulation studies.
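Of the comparison methods named here, regression calibration is easy to sketch. The snippet below (my own illustration with assumed error variances and sample size, using scikit-learn's logistic regression, not the thesis's partially conditional score estimator) shows how replacing the error-prone covariate by its best linear predictor corrects most of the attenuation that the naive logistic fit suffers:

```python
# Regression calibration for logistic regression with additive covariate
# measurement error W = X + U, where the error variance of U is assumed known.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, sigma_x2, sigma_u2 = 5000, 1.0, 0.5
x = rng.normal(0.0, np.sqrt(sigma_x2), n)              # true covariate (unobserved)
w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)          # error-prone measurement
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x))))

# Naive fit: use W as if it were X.
naive = LogisticRegression(C=1e6).fit(w.reshape(-1, 1), y)

# Regression calibration: replace W by the best linear predictor of X given W.
lam = (np.var(w) - sigma_u2) / np.var(w)               # estimated sigma_x^2 / sigma_w^2
x_cal = w.mean() + lam * (w - w.mean())
rc = LogisticRegression(C=1e6).fit(x_cal.reshape(-1, 1), y)

print("true slope = 1.0")
print("naive slope      =", round(naive.coef_[0, 0], 3))   # attenuated toward 0
print("calibrated slope =", round(rc.coef_[0, 0], 3))      # much closer to 1
```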
24

Schmidt, Philip J. "Addressing the Uncertainty Due to Random Measurement Errors in Quantitative Analysis of Microorganism and Discrete Particle Enumeration Data." Thesis, 2010. http://hdl.handle.net/10012/5596.

Full text
Abstract:
Parameters associated with the detection and quantification of microorganisms (or discrete particles) in water such as the analytical recovery of an enumeration method, the concentration of the microorganisms or particles in the water, the log-reduction achieved using a treatment process, and the sensitivity of a detection method cannot be measured exactly. There are unavoidable random errors in the enumeration process that make estimates of these parameters imprecise and possibly also inaccurate. For example, the number of microorganisms observed divided by the volume of water analyzed is commonly used as an estimate of concentration, but there are random errors in sample collection and sample processing that make these estimates imprecise. Moreover, this estimate is inaccurate if poor analytical recovery results in observation of a different number of microorganisms than what was actually present in the sample. In this thesis, a statistical framework (using probabilistic modelling and Bayes’ theorem) is developed to enable appropriate analysis of microorganism concentration estimates given information about analytical recovery and knowledge of how various random errors in the enumeration process affect count data. Similar models are developed to enable analysis of recovery data given information about the seed dose. This statistical framework is used to address several problems: (1) estimation of parameters that describe random sample-to-sample variability in the analytical recovery of an enumeration method, (2) estimation of concentration, and quantification of the uncertainty therein, from single or replicate data (which may include non-detect samples), (3) estimation of the log-reduction of a treatment process (and the uncertainty therein) from pre- and post-treatment concentration estimates, (4) quantification of random concentration variability over time, and (5) estimation of the sensitivity of enumeration processes given knowledge about analytical recovery. The developed models are also used to investigate alternative strategies that may enable collection of more precise data. The concepts presented in this thesis are used to enhance analysis of pathogen concentration data in Quantitative Microbial Risk Assessment so that computed risk estimates are more predictive. Drinking water research and prudent management of treatment systems depend upon collection of reliable data and appropriate interpretation of the data that are available.
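A stripped-down illustration of the kind of probabilistic model described above is sketched below: counts are Poisson in the analysed volume and concentration, analytical recovery varies from sample to sample according to a Beta distribution, and the posterior for the concentration is computed on a grid by averaging the likelihood over recovery draws. The distributions and numbers are my own assumptions, not the thesis's models or data:

```python
# Grid-based posterior for a microorganism concentration given count data with
# imperfect, sample-to-sample variable analytical recovery.
import numpy as np
from scipy.stats import poisson, beta

rng = np.random.default_rng(7)
counts = np.array([3, 0, 5, 2])          # observed microorganisms in each sample
volume = 10.0                            # litres analysed per sample
rec_a, rec_b = 8.0, 4.0                  # Beta parameters for analytical recovery

c_grid = np.linspace(0.001, 2.0, 400)    # candidate concentrations (organisms / L)
r_draws = beta.rvs(rec_a, rec_b, size=2000, random_state=rng)

log_post = np.zeros_like(c_grid)
for i, c in enumerate(c_grid):
    # Likelihood of each count: Poisson(c * volume * recovery), averaged over recovery draws.
    lam = c * volume * r_draws
    like_per_sample = poisson.pmf(counts[:, None], lam).mean(axis=1)
    log_post[i] = np.sum(np.log(like_per_sample))        # flat prior on concentration

post = np.exp(log_post - log_post.max())
post /= np.trapz(post, c_grid)
mean_c = np.trapz(c_grid * post, c_grid)
print(f"posterior mean concentration ~ {mean_c:.3f} organisms per litre")
```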
