To see the other types of publications on this topic, follow the link: Estimator Procedure.

Dissertations / Theses on the topic 'Estimator Procedure'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Estimator Procedure.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Dharmasena, Tibbotuwa Deniye Kankanamge Lasitha Sandamali. "Sequential Procedures for Nonparametric Kernel Regression." RMIT University. Mathematical and Geospatial Sciences, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090119.134815.

Full text
Abstract:
In a nonparametric setting, the functional form of the relationship between the response variable and the associated predictor variables is unspecified; however, it is assumed to be a smooth function. The main aim of nonparametric regression is to highlight important structure in the data without any assumptions about the shape of an underlying regression function. In regression, the random and fixed design models should be distinguished. Among the variety of nonparametric regression estimators currently in use, kernel type estimators are the most popular. Kernel type estimators provide a flexible class of nonparametric procedures by estimating the unknown function as a weighted average using a kernel function. The bandwidth, which determines the influence of the kernel, has to be adapted for any kernel type estimator. Our focus is on the Nadaraya-Watson estimator and the local linear estimator, which belong to a class of kernel type regression estimators called local polynomial kernel estimators. A closely related problem is the determination of an appropriate sample size that would be required to achieve a desired confidence level of accuracy for the nonparametric regression estimators. Since sequential procedures allow an experimenter to make decisions based on the smallest number of observations without compromising accuracy, application of sequential procedures to a nonparametric regression model at a given point or series of points is considered. The motivation for using such procedures is that in many applications the quality of estimating an underlying regression function in a controlled experiment is paramount; thus, it is reasonable to invoke a sequential estimation procedure that chooses a sample size, based on recorded observations, that guarantees a preassigned accuracy. We have employed sequential techniques to develop a procedure for constructing a fixed-width confidence interval for the predicted value at a specific point of the independent variable. These fixed-width confidence intervals are developed using asymptotic properties of both the Nadaraya-Watson and local linear kernel estimators of nonparametric kernel regression with data-driven bandwidths, and are studied for both fixed and random design contexts. The sample sizes for a preset confidence coefficient are optimized using sequential procedures, namely the two-stage procedure, the modified two-stage procedure and the purely sequential procedure. The proposed methodology is first tested by employing a large-scale simulation study. The performance of each kernel estimation method is assessed by comparing its coverage accuracy with the corresponding preset confidence coefficients, the proximity of the computed sample sizes to the optimal sample sizes, and by contrasting the estimated values obtained from the two nonparametric methods with actual values at a given series of design points of interest. We also employed the symmetric bootstrap method, which is considered an alternative method of estimating properties of unknown distributions. Resampling is done from a suitably estimated residual distribution and utilizes the percentiles of the approximate distribution to construct confidence intervals for the curve at a set of given design points. A methodology is developed for determining whether it is advantageous to use the symmetric bootstrap method to reduce the extent of oversampling that is normally known to plague Stein's two-stage sequential procedure.
The procedure developed is validated using an extensive simulation study, and we also explore the asymptotic properties of the relevant estimators. Finally, applications of our proposed sequential nonparametric kernel regression methods are made to some problems in software reliability and finance.
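For orientation, a minimal sketch of the Nadaraya-Watson estimator discussed above, with a Gaussian kernel and a fixed bandwidth (the data and the bandwidth h are assumptions; the thesis' data-driven bandwidths and sequential stopping rules are not reproduced):

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson kernel regression estimate at the points in x_grid.

    x, y : observed design points and responses (1-D arrays)
    h    : bandwidth (a fixed, user-supplied value in this sketch)
    """
    # Gaussian kernel weights, one row per evaluation point
    u = (x_grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u ** 2)
    return (w @ y) / w.sum(axis=1)

# toy usage on a random design
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
grid = np.linspace(0, 1, 5)
print(nadaraya_watson(grid, x, y, h=0.05))
```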
APA, Harvard, Vancouver, ISO, and other styles
2

Lee, Stephen Man Sing. "Generalised bootstrap procedures." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chan, Tsz-hin, and 陳子軒. "Hybrid bootstrap procedures for shrinkage-type estimators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48521826.

Full text
Abstract:
In statistical inference, one is often interested in estimating the distribution of a root, which is a function of the data and the parameters only. Knowledge of the distribution of a root is useful for inference problems such as hypothesis testing and the construction of a confidence set. Shrinkage-type estimators have become popular in statistical inference due to their smaller mean squared errors. In this thesis, the performance of different bootstrap methods is investigated for estimating the distributions of roots which are constructed based on shrinkage estimators. Focus is on two shrinkage estimation problems, namely the James-Stein estimation and the model selection problem in simple linear regression. A hybrid bootstrap procedure and a bootstrap test method are proposed to estimate the distributions of the roots of interest. In the two shrinkage problems, the asymptotic errors of the traditional n-out-of-n bootstrap, m-out-of-n bootstrap and the proposed methods are derived under a moving parameter framework. The problem of the lack of uniform consistency of the n-out-of-n and the m-out-of-n bootstraps is exposed. It is shown that the proposed methods have better overall performance, in the sense that they yield improved convergence rates over almost the whole range of possible values of the underlying parameters. Simulation studies are carried out to illustrate the theoretical findings.
Statistics and Actuarial Science
Master of Philosophy
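For orientation, a minimal sketch of the m-out-of-n bootstrap idea discussed above for a generic root (a schematic illustration with a sample mean as the estimator and an arbitrary choice of m; it is not the hybrid procedure or the shrinkage roots studied in the thesis):

```python
import numpy as np

def boot_root_distribution(x, estimator, m, n_boot=2000, seed=0):
    """Approximate the distribution of sqrt(m) * (theta*_m - theta_hat)
    by resampling m observations out of n (m = n gives the usual bootstrap)."""
    rng = np.random.default_rng(seed)
    theta_hat = estimator(x)
    roots = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=m, replace=True)
        roots[b] = np.sqrt(m) * (estimator(xb) - theta_hat)
    return roots

x = np.random.default_rng(1).normal(loc=0.2, size=400)
mean = lambda s: s.mean()
full = boot_root_distribution(x, mean, m=len(x))              # n-out-of-n
small = boot_root_distribution(x, mean, m=int(len(x) ** 0.5)) # m-out-of-n
print(np.quantile(full, [0.05, 0.95]), np.quantile(small, [0.05, 0.95]))
```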
APA, Harvard, Vancouver, ISO, and other styles
4

Binard, Carole. "Estimation de fonctions de régression : sélection d'estimateurs ridge, étude de la procédure PLS1 et applications à la modélisation de la signature génique du cancer du poumon." Thesis, Nice, 2016. http://www.theses.fr/2016NICE4015.

Full text
Abstract:
Cette thèse porte sur l'estimation d'une fonction de régression fournissant la meilleure relation entre des variables pour lesquelles on possède un certain nombre d'observations. Une première partie porte sur une étude par simulation de deux méthodes automatiques de sélection du paramètre de la procédure d'estimation ridge. D'un point de vue plus théorique, on présente et compare ensuite deux méthodes de sélection d'un multiparamètre intervenant dans une procédure d'estimation d'une fonction de régression sur l'intervalle [0,1]. Dans une deuxième partie, on étudie la qualité de l'estimateur PLS1, d'un point de vue théorique, à travers son risque quadratique et, plus précisément, le terme de variance dans la décomposition biais/variance de ce risque. Enfin, dans une troisième partie, une étude statistique sur données réelles est menée afin de mieux comprendre la signature génique de cellules cancéreuses à partir de la signature génique des sous-types cellulaires constituant le stroma tumoral associé.
This thesis deals with the estimation of a regression function providing the best relationship between variables for which we have some observations. In a first part, we complete a simulation study for two automatic selection methods of the ridge parameter. From a more theoretical point of view, we then present and compare two selection methods of a multiparameter that is used in an estimation procedure of a regression function on [0,1]. In a second part, we study the quality of the PLS1 estimator through its quadratic risk and, more precisely, the variance term in its bias/variance decomposition. In a third part, a statistical study is carried out in order to explain the genetic signature of cancer cells thanks to the genetic signatures of cellular subtypes which compose the associated tumor stroma.
APA, Harvard, Vancouver, ISO, and other styles
5

Denne, Jonathan S. "Sequential procedures for sample size estimation." Thesis, University of Bath, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gao, Weiguo. "Portfolio optimization based on robust estimation procedures." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0430104-144655/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ehlers, Rene. "Maximum likelihood estimation procedures for categorical data." Pretoria : [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-07222005-124541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Adams, Michael Roy. "Development of a User Cost Estimation Procedure for Work Zones." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd860.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mohamed, Ahmed H. "Optimizing the estimation procedure in INS/GPS integration for kinematic applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0031/NQ38492.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ericsson, Anna. "Evaluation of an automated formant estimation procedure with optimized formant ceiling." Thesis, Stockholms universitet, Institutionen för lingvistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-185197.

Full text
Abstract:
This study evaluates an automated formant estimation procedure designed to adapt to speakers and variations in speech. The adaptation is achieved by using the formant ceiling with the least variation (in combined estimates of F1 and F2) as the optimal ceiling. This optimization renders the best possible estimations given the data; therefore, it could presumably also adapt to variations such as high fo. The procedure has not previously been evaluated using material with known formant frequencies, so this is done here. The performance of the procedure is tested through comparison with a common procedure with fixed ceilings based on speaker sex. The estimations are carried out on synthetic vowel tokens, systematically varied in formant frequencies and in fo, to match the natural variation within vowels and between speakers. The formant estimates are compared with target values, between procedures, and with earlier studies. The results reveal that the formant estimation procedure with optimized ceilings does not perform better than the common procedure. Both procedures perform better than earlier methods, but neither deals satisfactorily with high fo.
Denna studie utvärderar en automatisk formantmätningsprocedur utvecklad för anpassning efter talare och variationer i tal. Anpassningen åstadkoms genom att använda det formanttak som uppvisar minst variation (i mätningar av F1 och F2 i kombination) som det optimerade taket. Denna optimering ger bästa möjliga estimeringar utifrån data, därför skulle troligtvis anpassningen även kunna ske till variation såsom hög fo. Proceduren har inte utvärderats genom att använda material med kända formantfrekvenser, varför det görs här. Formantmätningsprocedurens prestation testas genom jämförelse med gängse procedur med fasta formanttak, baserade på skillnader mellan kön. Formantmätningarna utförs på syntetiska vokalexemplar, systematiskt varierade i formantfrekvenser och i fo för att motsvara naturlig variation inom vokaler och mellan talare. Formantmätningarna jämförs mot ursprungsvärdena, procedurerna sinsemellan och med tidigare studier. Resultatet visar att formantmätningsproceduren med optimerat formanttak inte presterar bättre än den gängse proceduren. Båda procedurer presterar bättre än tidigare metoder, men ingen hanterar hög fo på ett tillfredställande sätt.
APA, Harvard, Vancouver, ISO, and other styles
11

Berning, Thomas Louw. "Improved estimation procedures for a positive extreme value index." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5260.

Full text
Abstract:
Thesis (PhD (Statistics))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: In extreme value theory (EVT) the emphasis is on extreme (very small or very large) observations. The crucial parameter when making inferences about extreme quantiles, is called the extreme value index (EVI). This thesis concentrates on only the right tail of the underlying distribution (extremely large observations), and specifically situations where the EVI is assumed to be positive. A positive EVI indicates that the underlying distribution of the data has a heavy right tail, as is the case with, for example, insurance claims data. There are numerous areas of application of EVT, since there are a vast number of situations in which one would be interested in predicting extreme events accurately. Accurate prediction requires accurate estimation of the EVI, which has received ample attention in the literature from a theoretical as well as practical point of view. Countless estimators of the EVI exist in the literature, but the practitioner has little information on how these estimators compare. An extensive simulation study was designed and conducted to compare the performance of a wide range of estimators, over a wide range of sample sizes and distributions. A new procedure for the estimation of a positive EVI was developed, based on fitting the perturbed Pareto distribution (PPD) to observations above a threshold, using Bayesian methodology. Attention was also given to the development of a threshold selection technique. One of the major contributions of this thesis is a measure which quantifies the stability (or rather instability) of estimates across a range of thresholds. This measure can be used to objectively obtain the range of thresholds over which the estimates are most stable. It is this measure which is used for the purpose of threshold selection for the proposed PPD estimator. A case study of five insurance claims data sets illustrates how data sets can be analyzed in practice. It is shown to what extent discretion can/should be applied, as well as how different estimators can be used in a complementary fashion to give more insight into the nature of the data and the extreme tail of the underlying distribution. The analysis is carried out from the point of raw data, to the construction of tables which can be used directly to gauge the risk of the insurance portfolio over a given time frame.
AFRIKAANSE OPSOMMING: Die veld van ekstreemwaardeteorie (EVT) is bemoeid met ekstreme (baie klein of baie groot) waarnemings. Die parameter wat deurslaggewend is wanneer inferensies aangaande ekstreme kwantiele ter sprake is, is die sogenaamde ekstreemwaarde-indeks (EVI). Hierdie verhandeling konsentreer op slegs die regterstert van die onderliggende verdeling (baie groot waarnemings), en meer spesifiek, op situasies waar aanvaar word dat die EVI positief is. ’n Positiewe EVI dui aan dat die onderliggende verdeling ’n swaar regterstert het, wat byvoorbeeld die geval is by versekeringseis data. Daar is verskeie velde waar EVT toegepas word, aangesien daar ’n groot aantal situasies is waarin mens sou belangstel om ekstreme gebeurtenisse akkuraat te voorspel. Akkurate voorspelling vereis die akkurate beraming van die EVI, wat reeds ruim aandag in die literatuur geniet het, uit beide teoretiese en praktiese oogpunte. ’n Groot aantal beramers van die EVI bestaan in die literatuur, maar enige persoon wat die toepassing van EVT in die praktyk beoog, het min inligting oor hoe hierdie beramers met mekaar vergelyk. ’n Uitgebreide simulasiestudie is ontwerp en uitgevoer om die akkuraatheid van beraming van ’n groot verskeidenheid van beramers in die literatuur te vergelyk. Die studie sluit ’n groot verskeidenheid van steekproefgroottes en onderliggende verdelings in. ’n Nuwe prosedure vir die beraming van ’n positiewe EVI is ontwikkel, gebaseer op die passing van die gesteurde Pareto verdeling (PPD) aan waarnemings wat ’n gegewe drempel oorskrei, deur van Bayes tegnieke gebruik te maak. Aandag is ook geskenk aan die ontwikkeling van ’n drempelseleksiemetode. Een van die hoofbydraes van hierdie verhandeling is ’n maatstaf wat die stabiliteit (of eerder onstabiliteit) van beramings oor verskeie drempels kwantifiseer. Hierdie maatstaf bied ’n objektiewe manier om ’n gebied (versameling van drempelwaardes) te verkry waaroor die beramings die stabielste is. Dit is hierdie maatstaf wat gebruik word om drempelseleksie te doen in die geval van die PPD beramer. ’n Gevallestudie van vyf stelle data van versekeringseise demonstreer hoe data in die praktyk geanaliseer kan word. Daar word getoon tot watter mate diskresie toegepas kan/moet word, asook hoe verskillende beramers op ’n komplementêre wyse ingespan kan word om meer insig te verkry met betrekking tot die aard van die data en die stert van die onderliggende verdeling. Die analise word uitgevoer vanaf die punt waar slegs rou data beskikbaar is, tot op die punt waar tabelle saamgestel is wat direk gebruik kan word om die risiko van die versekeringsportefeulje te bepaal oor ’n gegewe periode.
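For orientation, a minimal sketch of one classical estimator of a positive EVI, the Hill estimator, based on the k largest observations (the thesis compares many such estimators and proposes a PPD-based alternative; the simulated data and the values of k below are assumptions):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of a positive extreme value index from the k largest observations."""
    xs = np.sort(x)                 # ascending order statistics
    tail = xs[-k:]                  # k largest observations
    threshold = xs[-k - 1]          # (k+1)-th largest observation
    return np.mean(np.log(tail) - np.log(threshold))

# toy usage: Pareto-type data with true EVI = 0.5
rng = np.random.default_rng(0)
x = (1.0 / rng.uniform(size=5000)) ** 0.5
print([round(hill_estimator(x, k), 3) for k in (50, 100, 200)])
```

Plotting such estimates across a range of thresholds (values of k) is the usual way to judge their stability, which is the issue the thesis' stability measure addresses.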
APA, Harvard, Vancouver, ISO, and other styles
12

Yan, Huey. "A Comparison of Estimation Procedures for the Beta Distribution." DigitalCommons@USU, 1991. https://digitalcommons.usu.edu/etd/7126.

Full text
Abstract:
The beta distribution may be used as a stochastic model for continuous proportions in many situations in applied statistics. This thesis was concerned with estimation of the parameters of the beta distribution in three different situations. Three different estimation procedures (the method of moments, maximum likelihood, and a hybrid of these two methods, which we call the one-step improvement) were compared by computer simulation, for beta data and beta data contaminated by zeros and ones. We also evaluated maximum likelihood estimation in the context of censored data, and Newton's method as a numerical procedure for solving the likelihood equations for censored beta data.
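A minimal sketch of two of the three procedures compared in the thesis, the method of moments and maximum likelihood, for uncensored beta data (scipy's generic fitting routine stands in for the MLE; the simulated data are an assumption):

```python
import numpy as np
from scipy import stats

def beta_method_of_moments(x):
    """Method-of-moments estimates of the beta parameters (alpha, beta)."""
    m, v = x.mean(), x.var(ddof=1)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=500)

print("moments:", beta_method_of_moments(x))
# maximum likelihood with the support fixed to [0, 1]
a_ml, b_ml, _, _ = stats.beta.fit(x, floc=0, fscale=1)
print("MLE:    ", (a_ml, b_ml))
```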
APA, Harvard, Vancouver, ISO, and other styles
13

SIMONETTO, ANNA. "Estimation procedures for latent variable models with psychological traits." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/17370.

Full text
Abstract:
The starting point for this thesis is a concrete problem: to measure, using statistical models, aspects of subjective perceptions and assessments, and to understand their dependencies. The objective is to study the statistical properties of some estimators of the parameters of regression models with variables affected by measurement errors. These models are widely used in surveys based on questionnaires developed to detect subjective assessments and perceptions with Likert-type scales. It is a highly debated topic, as many of the relevant aspects in this field are not directly observable and therefore the variables used to estimate them are affected by measurement errors. Models with measurement errors have been studied thoroughly in the literature. In this work we develop two of the most widely used approaches to this topic. Obviously, according to the approach chosen, different models have been proposed to estimate the relationships between variables affected by measurement error. After presenting the main features of these models, the thesis focuses on providing an original contribution to the comparative analysis of the two presented approaches.
APA, Harvard, Vancouver, ISO, and other styles
14

Palaszewski, Bo. "On multiple test procedures for finding deviating parameters /." Göteborg : Stockholm, Sweden : University of Göteborg ; Almqvist & Wiksell International [distributor], 1993. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=005857463&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Fischer, Manfred M., Katarina Hlavácková-Schindler, and Martin Reismann. "A Global Search Procedure for Parameter Estimation in Neural Spatial Interaction Modelling." WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/4149/1/WSG_DP_6398.pdf.

Full text
Abstract:
Parameter estimation is one of the central issues in neural spatial interaction modelling. Current practice is dominated by gradient-based local minimization techniques. They find local minima efficiently and work best in unimodal minimization problems, but can get trapped in multimodal problems. Global search procedures provide an alternative optimization scheme that allows escape from local minima. Differential evolution has recently been introduced as an efficient direct search method for optimizing real-valued multi-modal objective functions (Storn and Price 1997). The method is conceptually simple and attractive, but little is known about its behaviour in real-world applications. This paper explores this method as an alternative to current practice for solving the parameter estimation task, and attempts to assess its robustness, measured in terms of in-sample and out-of-sample performance. A benchmark comparison against backpropagation of conjugate gradients is based on Austrian interregional telecommunication traffic data. (authors' abstract)
Series: Discussion Papers of the Institute for Economic Geography and GIScience
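Differential evolution of the kind described above is available off the shelf; a minimal sketch of estimating parameters by globally minimizing a sum-of-squares loss (the toy one-neuron model and data are assumptions, not the authors' neural spatial interaction model or their conjugate-gradient benchmark):

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = np.tanh(X @ np.array([1.5, -2.0]) + 0.3) + rng.normal(scale=0.05, size=200)

def sse(params):
    """Sum of squared errors of a one-neuron toy model (stand-in objective)."""
    w1, w2, b = params
    pred = np.tanh(X @ np.array([w1, w2]) + b)
    return np.sum((y - pred) ** 2)

# global search over a box of plausible parameter values
result = differential_evolution(sse, bounds=[(-5, 5)] * 3, seed=1)
print(result.x, result.fun)
```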
APA, Harvard, Vancouver, ISO, and other styles
16

Khosravi, Sara. "Camera-based estimation of needle pose for ultrasound percutaneous procedures." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2505.

Full text
Abstract:
A pose estimation method is proposed for measuring the position and orientation of a biopsy needle. The technique is to be used as a touchless needle guide system for guidance of percutaneous procedures with 4D ultrasound. A pair of uncalibrated, light-weight USB cameras are used as inputs. A database is prepared offline, using both the needle line estimated from camera-captured images and the true needle line recorded from an independent tracking device. A nonparametric learning algorithm determines the best fit model from the database. This model can then be used in real-time to estimate the true position of the needle with inputs from only the camera images. Simulation results confirm the feasibility of the method and show how a small, accurately made database can provide satisfactory results. In a series of tests with cameras, we achieved an average error of 2.4mm in position and 2.61° in orientation. The system is also extended to real ultrasound imaging, as the two miniature cameras capture images of the needle in air and the ultrasound system captures a volume as the needle moves through the workspace. A new database is created with the estimated 3D position of the needle from the ultrasound volume and the 2D position and orientation of the needle calculated from the camera images. This study achieved an average error of 0.94 mm in position and 3.93° in orientation.
APA, Harvard, Vancouver, ISO, and other styles
17

Kudowor, Andrew Yao Tete. "Subsurface data management and volume estimation : techniques, procedures, and concepts." Thesis, University of Newcastle Upon Tyne, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Khan, M. H. R. "Variable selection and estimation procedures for high-dimensional survival data." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/57484/.

Full text
Abstract:
In survival analysis the popular models are usually well suited for data with few covariates and many observations. In contrast for many other fields such as microarray, it is necessary in practice to consider the opposite case where the number of covariates (number of genes) far exceeds the number of observations. However, with such data the accelerated failure time models (AFT) have not received much attention in variable selection literature. This thesis attempts to meet this need, extending and applying the modern tools of variable selection and estimation to high–dimensional censored data. We introduce two new variable selection strategies for AFT models. The first is based upon regularized weighted least squares that leads to four adaptive elastic net type variable selection approaches. In particular one adaptive elastic net, one weighted elastic net and two extensions that incorporate censoring constraints into the optimization framework of the methods. The second variable selection strategy is based upon the synthesis of the Buckley–James method and the Dantzig selector, that results in two modified Buckley– James methods and one adaptive Dantzig selector. The adaptive Dantzig selector uses both standard and novel weights giving rise to three new algorithms. Out of the variable selection strategies we focus on two important issues. One is the sensitivity of Stute’s weighted least squares estimator to the censored largest observations when Efron’s tail correction approach violates one of the basic right censoring assumptions. We propose some intuitive imputing approaches for the censored largest observations that allow Efron’s approach to be applied without violating the censoring assumption, and furthermore, generate estimates with reduced mean squared errors and bias. The other issue is related to proposing some modifications to the jackknife estimate of bias for Kaplan– Meier estimators. The proposed modifications relax the conditions needed for such bias creation by suitably applying the above imputing methods. It also appears that without the modifications the bias of Kaplan–Meier estimators can be badly underestimated by the jackknifing.
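Stute's weighted least squares estimator mentioned above attaches Kaplan-Meier jump weights to the ordered, possibly censored observations; a minimal sketch of computing those weights (assuming no ties; the data are made up, and the thesis' imputation approaches and variable selection methods are not reproduced):

```python
import numpy as np

def stute_weights(time, delta):
    """Kaplan-Meier jump weights for Stute's weighted least squares (no ties assumed).

    time  : observed (possibly censored) times
    delta : 1 for an observed event, 0 for a right-censored observation
    """
    n = len(time)
    order = np.argsort(time)
    d = delta[order].astype(float)
    w = np.empty(n)
    surv = 1.0                          # running Kaplan-Meier product term
    for i in range(n):                  # i-th smallest observed time
        w[i] = surv * d[i] / (n - i)
        surv *= ((n - i - 1) / (n - i)) ** d[i]
    # weights refer to the *ordered* data; return the ordering as well
    return w, order

time = np.array([2.0, 5.0, 3.5, 7.0, 1.0])
delta = np.array([1, 0, 1, 1, 0])
w, order = stute_weights(time, delta)
print(w, w.sum())
```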
APA, Harvard, Vancouver, ISO, and other styles
19

Wu, Ying-keh. "Empirical Bayes procedures in time series regression models." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/76089.

Full text
Abstract:
In this dissertation empirical Bayes estimators for the coefficients in time series regression models are presented. Due to the uncontrollability of time series observations, explanatory variables in each stage do not remain unchanged. A generalization of the results of O'Bryan and Susarla is established and shown to be an extension of the results of Martz and Krutchkoff. Alternatively, as the distribution function of sample observations is hard to obtain except asymptotically, the results of Griffin and Krutchkoff on empirical linear Bayes estimation are extended and then applied to estimating the coefficients in time series regression models. Comparisons between the performance of these two approaches are also made. Finally, predictions in time series regression models using empirical Bayes estimators and empirical linear Bayes estimators are discussed.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Baldwin, Richard P. "Aircraft engine reliability analysis using lower confidence limit estimate procedures." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23522.

Full text
Abstract:
Approved for public release; distribution is unlimited
In this thesis, a spreadsheet model was developed to compute the lower confidence limit (LCL) for the reliability of a complex weapon system using a personal computer. The LCL is an estimate of the lowest reliability a system is expected to have at a given point in time with a given level of confidence. The reliability model is based on a Weibull distribution for the system component failure times. The reliability LCL procedure has been extensively validated and determined to be quite accurate when the expected number of failures is at least 10. This model is capable of supporting LCL decisions in support of the Component Improvement Program or new weapon system procurement where reliability growth analysis is used as a decision support tool. This procedure also provides program managers and engineers with a method to perform LCL analysis and thereby reduce their dependence on contractor-supplied reliability data.
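As a point of reference for the Weibull-based reliability model described above, a minimal sketch of fitting a two-parameter Weibull to failure times and evaluating the reliability point estimate (the failure times below are hypothetical, and the thesis' spreadsheet LCL computation is not reproduced):

```python
import numpy as np
from scipy import stats

# hypothetical component failure times (e.g. flight hours)
failures = np.array([310., 480., 520., 700., 910., 1150., 1300., 1620., 2000., 2450.])

# fit a two-parameter Weibull (location fixed at 0): shape c, scale eta
c, _, eta = stats.weibull_min.fit(failures, floc=0)

def reliability(t, c, eta):
    """Weibull reliability R(t) = exp(-(t/eta)**c)."""
    return np.exp(-(t / eta) ** c)

print(c, eta, reliability(500.0, c, eta))
```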
APA, Harvard, Vancouver, ISO, and other styles
21

Rosenbloom, E. S. "Selecting the best of k multinomial parameter estimation procedures using SPRT." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0005/MQ45119.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kim, Buyong. "Lp norm estimation procedures and an L1 norm algorithm for unconstrained and constrained estimation for linear models." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/53627.

Full text
Abstract:
When the distribution of the errors in a linear regression model departs from normality, the method of least squares seems to yield relatively poor estimates of the coefficients. One alternative approach to least squares which has received a great deal of attention of late is minimum Lp norm estimation. However, the statistical efficiency of an Lp estimator depends greatly on the underlying distribution of errors and on the value of p. Thus, the choice of an appropriate value of p is crucial to the effectiveness of Lp estimation. Previous work has shown that L₁ estimation is a robust procedure in the sense that it leads to an estimator which has greater statistical efficiency than the least squares estimator in the presence of outliers, and that L₁ estimators have some desirable statistical properties asymptotically. This dissertation is mainly concerned with the development of a new algorithm for L₁ estimation and constrained L₁ estimation. The mainstream of computational procedures for L₁ estimation has been the simplex-type algorithms via the linear programming formulation. Other procedures are the reweighted least squares method and nonlinear programming techniques using the penalty function approach or a descent method. A new computational algorithm is proposed which combines the reweighted least squares method and the linear programming approach. We employ a modified Karmarkar algorithm to solve the linear programming problem instead of the simplex method. We prove that the proposed algorithm converges in a finite number of iterations. From our simulation study we demonstrate that our algorithm requires fewer iterations to solve standard problems than are required by the simplex-type methods, although the amount of computation per iteration is greater for the proposed algorithm. The proposed algorithm for unconstrained L₁ estimation is extended to the case where the L₁ estimates of the parameters of a linear model satisfy certain linear equality and/or inequality constraints. These two procedures are computationally simple to implement since a weighted least squares scheme is adopted at each iteration. Our results indicate that the proposed L₁ estimation procedure yields very accurate and stable estimates and is efficient even when the problem size is large.
Ph. D.
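The reweighted least squares ingredient of the proposed algorithm can be sketched on its own; the following is the classical iteratively reweighted least squares approximation to the L1 fit, not the author's Karmarkar-based method:

```python
import numpy as np

def l1_regression_irls(X, y, n_iter=50, eps=1e-6):
    """Approximate the L1 (least absolute deviations) fit by iteratively
    reweighted least squares: weight each residual by 1/max(|r|, eps)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # start from least squares
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=1, size=100)   # heavy-tailed errors
print(l1_regression_irls(X, y))
```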
APA, Harvard, Vancouver, ISO, and other styles
23

Vásquez, Chicata Luis Fernando Gonzalo. "Computational procedure for the estimation of pile capacity including simulation of the installation process /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Pupashenko, Mykhailo [Verfasser], and Ralf [Akademischer Betreuer] Korn. "Variance Reduction Procedures for Market Risk Estimation / Mykhailo Pupashenko. Betreuer: Ralf Korn." Kaiserslautern : Technische Universität Kaiserslautern, 2014. http://d-nb.info/1059109360/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Lerche, Veronika [Verfasser], and Andreas [Akademischer Betreuer] Voß. "Parameter Estimation in Diffusion Modeling: Guidelines on Requisite Trial Numbers and Estimation Procedures / Veronika Lerche ; Betreuer: Andreas Voß." Heidelberg : Universitätsbibliothek Heidelberg, 2016. http://d-nb.info/1180737725/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Parise, Luís Fernando Schiano. "Fully plastic J and CTOD estimation procedure for circumferential surface cracks in biaxially loaded pipes." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3135/tde-14122014-171536/.

Full text
Abstract:
The main goal of this work is to develop an estimation procedure for the J and CTOD driving forces for circumferential surface cracks in pipelines under combined bending and internal pressure loading. It is intended that the methodology proposed here will be applicable to a significant range of pipe and crack geometries, material yielding and strain hardening characteristics as well as loading biaxiality levels. In particular, pipelines currently employed in the offshore oil and gas production industry constitute an important class of potential application for this kind of procedure, and thus the current structural integrity concerns involving the reeling of pressurized pipelines have served as a motivating theme and bridge with real-world application throughout this research. The central theoretical framework upon which the developments presented here are based is the driving force estimation scheme known as the EPRI methodology. This traditional approach for estimating J and CTOD relies on splitting the driving forces into elastic and plastic components. The elastic part is calculated directly from widely available stress intensity factor solutions, while the plastic part is determined using fully plastic solutions derived from power-law descriptions of material behaviour. In the first part of this work the EPRI methodology in its more conventional form is extended to cover the cases of interest, allowing the calculation of J and CTOD for circumferential surface cracks in pipelines subjected to combined internal pressure and bending loadings. This is done by carrying out detailed finite element simulations of bending of pressurized cracked pipes, the results of which then allow the direct determination of non-dimensional functions that correlate J (CTOD) with applied loading, consistent with the form of the original EPRI fully plastic solutions. In the second part of the work attention is given to certain drawbacks and limitations of the procedure developed in the first part. A new, alternative procedure is then proposed which aims to overcome these problems by combining the main ideas behind the EPRI methodology with concepts from strain based design. The theoretical framework underlying this development is laid out and the analytical derivation of the new driving force estimation scheme is presented. Finally, results are given in a form similar to that employed in the first part, and the two procedures are compared. While they are shown to be conceptually equivalent, the strain based methodology is argued to be more readily applicable to some important classes of real world problems. The work concludes with comments on the effects of load biaxiality over crack driving forces and with discussions on the quality, accuracy and physical meaningfulness of the non-dimensional scaling functions obtained which correlate crack driving forces to loads or strains.
O objetivo principal deste trabalho é o desenvolvimento de um procedimento para estimação das forças motrizes J e CTOD para trincas circumferenciais superficiais em dutos submetidos a carregamento combinado de flexão e pressão interna. Pretende-se que a metodologia aqui proposta seja aplicável a uma ampla faixa de geometrias de duto e trinca, características de escoamento e encruamento de material e níveis de biaxialidade de carregamento. Em particular, dutos atualmente empregados na exploração submarina de óleo e gás constituem uma classe importante de aplicações em potencial para procedimentos dessa natureza. Por essa razão, a avaliação de integridade estrutural de dutos pressurizados submetidos a enrolamento em carretel serve como tema motivador e ponto de conexão com a aplicação real ao longo deste trabalho. A base teórica fundamental sobre a qual se assentam os desenvolvimentos aqui propostos é o procedimento de estimação de forças motrizes de trinca conhecido como metodologia EPRI. Este método tradicional de cálculo de J e CTOD separa as forças motrizes em componentes elástica e plástica. A componente elástica é calculada diretamente a partir de soluções para o fator de intensidade de tensões, que se encontram amplamente disponíveis. A componente plástica, por sua vez, é determinada a partir de soluções totalmente plásticas derivadas de um modelo de lei de potência para o comportamento do material. Na primeira parte deste trabalho a metodologia EPRI em sua forma tradicional é estendida para abranger os casos de interesse, permitindo assim a determinação de J e CTOD para trincas circumferenciais superficiais em dutos carregados por flexão e pressão interna. Para isto empregam-se simulações computacionais por elementos finitos, os resultados das quais permitem a determinação direta de fatores adimensionais que correlacionam J (CTOD) com o carregamento aplicado de maneira consistente com as soluções totalmente plásticas da metodologia EPRI original. Na segunda parte do trabalho a atenção se volta para algumas deficiências e limitações da metodologia desenvolvida na primeira parte. Um procedimento novo é proposto como alternativa, tendo por objetivo superar estas dificuldades a partir da combinação dos principais conceitos da metodologia EPRI com ideias derivadas do projeto baseado em deformações. Apresentam-se a base teórica subjacente à estes conceitos e a derivação analítica do novo procedimento de estimação de forças motrizes. Finalmente, resultados semelhantes aos obtidos na primeira parte são calculados e os dois procedimentos são comparados. Embora ambos sejam conceitualmente equivalentes, argumenta-se que o procedimento baseado em deformações é mais imediata e convenientemente aplicado a algumas classes importantes de problemas práticos. O trabalho encerra com comentários acerca dos efeitos de biaxialidade de carregamento sobre as forças motrizes de trinca e com discussões sobre a acurácia, relevância, e significância física das funções adimensionais que escalam as forças motrizes com cargas ou deformações.
APA, Harvard, Vancouver, ISO, and other styles
27

Maier, Gunther. "The Estimation of Discrete Choice Models by use of the SAS Procedures BPORBIT and MNLOGIT." WU Vienna University of Economics and Business, 1989. http://epub.wu.ac.at/6233/1/IIR_Disc_39.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hill, Terry. "Metrics and Test Procedures for Data Quality Estimation in the Aeronautical Telemetry Channel." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596445.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
There is great potential in using Best Source Selectors (BSS) to improve link availability in aeronautical telemetry applications. While the general notion that diverse data sources can be used to construct a consolidated stream of "better" data is well founded, there is no standardized means of determining the quality of the data streams being merged together. Absent this uniform quality data, the BSS has no analytically sound way of knowing which streams are better, or best. This problem is further exacerbated when one imagines that multiple vendors are developing data quality estimation schemes, with no standard definition of how to measure data quality. In this paper, we present measured performance for a specific Data Quality Metric (DQM) implementation, demonstrating that the signals present in the demodulator can be used to quickly and accurately measure the data quality, and we propose test methods for calibrating DQM over a wide variety of channel impairments. We also propose an efficient means of encapsulating this DQM information with the data, to simplify processing by the BSS. This work leads toward a potential standardization that would allow data quality estimators and best source selectors from multiple vendors to interoperate.
APA, Harvard, Vancouver, ISO, and other styles
29

Moore, Joann Lynn. "Estimating standard errors of estimated variance components in generalizability theory using bootstrap procedures." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/860.

Full text
Abstract:
This study investigated the extent to which rules proposed by Tong and Brennan (2007) for estimating standard errors of estimated variance components held up across a variety of G theory designs, variance component structures, sample size patterns, and data types. Simulated data was generated for all combinations of conditions, and point estimates, standard error estimates, and coverage for three types of confidence intervals were calculated for each estimated variance component and relative and absolute error variance across a variety of bootstrap procedures for each combination of conditions. It was found that, with some exceptions, Tong and Brennan's (2007) rules produced adequate standard error estimates for normal and polytomous data, while some of the results differed for dichotomous data. Additionally, some refinements to the rules were suggested with respect to nested designs. This study provides support for the use of bootstrap procedures for estimating standard errors of estimated variance components when data are not normally distributed.
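For a simple persons-by-items (p x i) design, the kind of bootstrap examined in the study can be sketched as follows, here resampling persons only (boot-p); this is a plain illustration and does not include Tong and Brennan's (2007) adjustment rules:

```python
import numpy as np

def variance_components(X):
    """ANOVA estimates of the variance components of a crossed p x i design."""
    n_p, n_i = X.shape
    grand = X.mean()
    pm, im = X.mean(axis=1), X.mean(axis=0)
    ms_p = n_i * np.sum((pm - grand) ** 2) / (n_p - 1)
    ms_i = n_p * np.sum((im - grand) ** 2) / (n_i - 1)
    resid = X - pm[:, None] - im[None, :] + grand
    ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
    return {"p": (ms_p - ms_res) / n_i,
            "i": (ms_i - ms_res) / n_p,
            "pi,e": ms_res}

def boot_p_se(X, n_boot=1000, seed=0):
    """Bootstrap standard errors of the estimated components, resampling persons."""
    rng = np.random.default_rng(seed)
    n_p = X.shape[0]
    draws = [variance_components(X[rng.integers(0, n_p, n_p)]) for _ in range(n_boot)]
    return {k: np.std([d[k] for d in draws], ddof=1) for k in draws[0]}

rng = np.random.default_rng(1)
X = (rng.normal(scale=1.0, size=(50, 1)) + rng.normal(scale=0.5, size=(1, 20))
     + rng.normal(scale=0.8, size=(50, 20)))
print(variance_components(X))
print(boot_p_se(X))
```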
APA, Harvard, Vancouver, ISO, and other styles
30

Wilson, Daryl E. "Assessing process dissociation procedure and implicit memory estimates of automatic retrieval for a retention interval manipulation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/mq24394.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bennett, Shonnie M. "Preference reversal and the estimation of indifference points using a fast-adjusting-delay procedure with rats." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Esmaelili-Mahani, Shayesteh. "Improving high-resolution IR satellite-based precipitation estimation: A procedure for cloud-top relief displacement adjustment." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/284113.

Full text
Abstract:
An efficient and simple method has been developed to improve the quality and accuracy of satellite-based VIS/IR images through adjustment of cloud-top relief spatial displacements. The products of this algorithm, including cloud-top temperatures and heights, atmospheric temperature profiles for cloudy sky, and displacement-adjusted cloud images, can be useful for weather/climate and atmospheric studies, particularly for high-resolution hydrologic applications such as developing IR satellite-based rainfall estimates, which are urgently needed by mesoscale atmospheric modeling and studies, severe weather monitoring, and heavy precipitation and flash flood forecasting. Cloud-top height and displacement are estimated by applying stereoscopic analysis to a pair of corresponding scan-synchronous infrared images from geostationary satellites (GOES-east and GOES-west). A piecewise linear approximation of the relationship between cloud-top height and temperature, with a few (6 and 8) parameters, is developed to simplify and speed up the retrieval process. Optimal parameters are estimated using the Shuffled Complex Evolution (SCE-UA) algorithm to minimize the discrepancies between the brightness temperatures of the same location as registered by the two satellites. The combination of the linear approximation and the fast optimization algorithm simplifies stereoscopic analysis and allows for its implementation on standard desktop computers. When compared to the standard isotherm matching approaches, the proposed method yields higher correlation between simultaneous GOES-8 and GOES-9 images after parallax adjustment. The validity of the linear approximation was also tested against temperature profiles obtained from ground sounding measurements of the TRMM-TEFLUN experiments. This comparison demonstrated a good fit between the optimized relationship and the atmospheric sounding profiles. The accuracy of cloud pixel geo-location was demonstrated through a spatial comparison of the correlation between ground-based radar rainfall rates and both the adjusted and the original satellite IR images. Higher correlation was obtained using displacement-adjusted IR images from both the high-altitude geostationary satellites (GOES) and the low-altitude satellite (TRMM). Higher correlation and lower RMSE between ground-based NEXRAD observations and rainfall rates estimated from the spatially adjusted IR images, using an artificial neural network algorithm (PERSIANN), demonstrate the improvement in rainfall retrieval. The ability to differentiate the ground surface, particularly snow-covered areas, from clouds in near-real-time is another useful application of the estimated cloud-top heights.
APA, Harvard, Vancouver, ISO, and other styles
33

Bade, Benjamin [Verfasser]. "Essays on PD-LGD dependencies : modeling issues, estimation procedures, and predictive accuracy / Benjamin Bade." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2013. http://d-nb.info/1036694488/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Engberg, Alexander. "An empirical comparison of extreme value modelling procedures for the estimation of high quantiles." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297063.

Full text
Abstract:
The peaks over threshold (POT) method provides an attractive framework for estimating the risk of extreme events such as severe storms or large insurance claims. However, the conventional POT procedure, where the threshold excesses are modelled by a generalized Pareto distribution, suffers from small samples and subjective threshold selection. In recent years, two alternative approaches have been proposed in the form of mixture models that estimate the threshold and a folding procedure that generates larger tail samples. In this paper the empirical performances of the conventional POT procedure, the folding procedure and a mixture model are compared by modelling data sets on fire insurance claims and hurricane damage costs. The results show that the folding procedure gives smaller standard errors of the parameter estimates and in some cases more stable quantile estimates than the conventional POT procedure. The mixture model estimates are dependent on the starting values in the numerical maximum likelihood estimation, and are therefore difficult to compare with those from the other procedures. The conclusion is that none of the procedures is overall better than the others but that there are situations where one method may be preferred.
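A minimal sketch of the conventional POT step that the comparison starts from: pick a threshold, take the excesses, and fit a generalized Pareto distribution to them (the simulated losses and the 95% threshold are assumptions, and the folding and mixture procedures are not reproduced):

```python
import numpy as np
from scipy import stats

losses = stats.lognorm(s=1.0).rvs(size=5000, random_state=0)

threshold = np.quantile(losses, 0.95)          # a subjective choice, as noted above
excesses = losses[losses > threshold] - threshold

# fit a GPD to the excesses (location fixed at 0): shape xi, scale sigma
xi, _, sigma = stats.genpareto.fit(excesses, floc=0)

# estimate a high quantile (here the 99.9% quantile) from the fitted tail
p, n, n_u = 0.999, len(losses), len(excesses)
q = threshold + sigma / xi * (((1 - p) * n / n_u) ** (-xi) - 1)
print(xi, sigma, q, np.quantile(losses, p))
```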
APA, Harvard, Vancouver, ISO, and other styles
35

Voorduin, Raquel. "A non-parametric procedure to estimate a linear discriminant function with an application to credit scoring." Thesis, University of Warwick, 2004. http://wrap.warwick.ac.uk/3710/.

Full text
Abstract:
The present work studies the application of two-group discriminant analysis in the field of credit scoring. The view given here provides a completely different approach from the way this problem is usually targeted. Credit scoring is widely used among financial institutions and is performed in a number of ways, depending on a wide range of factors, which include available information, support databases, and informatic resources. Since each financial institution has its own methods of measuring risk, the ways in which an applicant is evaluated for the concession of credit for a particular product are at least as numerous as the credit concessioners. However, there exist certain standard procedures for different products. For example, in the credit card business, when databases containing applicant information are available, credit score cards are usually constructed. These score cards provide an aid to qualify the applicant and decide if he or she represents a high risk for the institution or, on the contrary, a good investment. Score cards are generally used in conjunction with other criteria, such as the institution's own policies. In building score cards, parametric regression-based procedures are generally used, which require the assumption of an underlying model generating the data. Another aspect is that, in general, score cards are built taking into consideration only the probability that a particular applicant will not default. In this thesis, the objective will be to present a method of calculating a risk score that does not depend on the actual process generating the data and that takes into account the costs and profits related to accepting a particular applicant. The ultimate objective of the financial institution should be to maximise profit, and this view is a fundamental part of the procedure presented here.
APA, Harvard, Vancouver, ISO, and other styles
36

Katkuri, Jaipal. "Application of Dirichlet Distribution for Polytopic Model Estimation." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1210.

Full text
Abstract:
The polytopic model (PM) structure is often used in the areas of automatic control and fault detection and isolation (FDI). It is an alternative to the multiple model approach which explicitly allows for interpolation among local models. This thesis proposes a novel approach to PM estimation by modeling the set of PM weights as a random vector with a Dirichlet distribution (DD). A new approximate (adaptive) PM estimator, referred to as a Quasi-Bayesian Adaptive Kalman Filter (QBAKF), is derived and implemented. The model weight and state estimation in the QBAKF is performed adaptively by a simple QB weights' estimator and a single KF on the PM with the estimated weights. Since the PM estimation problem is nonlinear and non-Gaussian, a DD marginalized particle filter (DDMPF) is also developed and implemented, similar to the MPF. The simulation results show that the newly proposed algorithms have better estimation accuracy, design simplicity, and computational requirements for PM estimation.
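The two basic ingredients, a convex (polytopic) blend of local models and Dirichlet-distributed weights, can be sketched as follows (a toy scalar-state illustration; the QBAKF and DDMPF estimators themselves are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# two local (vertex) models of a scalar-state polytopic model
A = np.array([[[0.9]], [[0.5]]])          # state matrices of the local models
B = np.array([[[1.0]], [[0.2]]])          # input matrices

mu = rng.dirichlet(alpha=[2.0, 2.0])      # weight vector on the simplex
A_blend = np.tensordot(mu, A, axes=1)     # convex combination of local models
B_blend = np.tensordot(mu, B, axes=1)

# one step of the blended state equation with additive process noise
x, u = np.array([1.0]), 0.5
x_next = A_blend @ x + (B_blend * u).ravel() + rng.normal(scale=0.05, size=1)
print(mu, x_next)
```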
APA, Harvard, Vancouver, ISO, and other styles
37

Tardivel, Patrick. "Représentation parcimonieuse et procédures de tests multiples : application à la métabolomique." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30316/document.

Full text
Abstract:
Considérons un vecteur gaussien Y de loi N (m,sigma²Idn) et X une matrice de dimension n x p avec Y observé, m inconnu, sigma et X connus. Dans le cadre du modèle linéaire, m est supposé être une combinaison linéaire des colonnes de X. En petite dimension, lorsque n ≥ p et que ker (X) = 0, il existe alors un unique paramètre Beta* tel que m = X Beta* ; on peut alors réécrire Y sous la forme Y = X Beta* + Epsilon. Dans le cadre du modèle linéaire gaussien en petite dimension, nous construisons une nouvelle procédure de tests multiples contrôlant le FWER pour tester les hypothèses nulles Beta*i = 0 pour i appartenant à [[1,p]]. Cette procédure est appliquée en métabolomique au travers du programme ASICS qui est disponible en ligne. ASICS permet d'identifier et de quantifier les métabolites via l'analyse des spectres RMN. En grande dimension, lorsque n < p on a ker (X) ≠ 0, ainsi le paramètre Beta* décrit précédemment n'est pas unique. Dans le cas non bruité lorsque sigma = 0, impliquant que Y = m, nous montrons que les solutions du système linéaire d'équations Y = X Beta ayant un nombre de composantes non nulles minimal s'obtiennent via la minimisation de la "norme" l_alpha avec alpha suffisamment petit.
Let Y be a Gaussian vector distributed according to N (m,sigma²Idn) and X a matrix of dimension n x p, with Y observed, m unknown, sigma and X known. In the linear model, m is assumed to be a linear combination of the columns of X. In small dimension, when n ≥ p and ker (X) = 0, there exists a unique parameter Beta* such that m = X Beta*; we can then rewrite Y = X Beta* + Epsilon. In the small-dimensional linear Gaussian model framework, we construct a new multiple testing procedure controlling the FWER to test the null hypotheses Beta*i = 0 for i in [[1,p]]. This procedure is applied in metabolomics through the freeware ASICS, available online. ASICS makes it possible to identify and quantify metabolites via the analysis of NMR spectra. In high dimension, when n < p, we have ker (X) ≠ 0; consequently, the parameter Beta* described above is no longer unique. In the noiseless case, when sigma = 0 and thus Y = m, we show that the solutions of the linear system of equations Y = X Beta having a minimal number of non-zero components are obtained by minimizing the l_alpha "norm" with alpha small enough.
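The thesis' FWER-controlling procedure is not reproduced here; as a point of reference, a minimal sketch of a classical FWER-controlling multiple testing method, Holm's step-down procedure, applied to the p-values of the coefficient tests:

```python
import numpy as np

def holm_reject(p_values, alpha=0.05):
    """Holm step-down procedure: returns a boolean array of rejected hypotheses,
    controlling the family-wise error rate at level alpha."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break                      # stop at the first non-rejection
    return reject

print(holm_reject([0.001, 0.04, 0.03, 0.20]))
```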
APA, Harvard, Vancouver, ISO, and other styles
38

DAGRAMODJOPOULOS, TIFFANY. "A CASE STUDY OF THE PROCEDURE OF DEVELOPMENT OF A LARGE REAL ESTATE PROJECT IN SÃO PAULO, BRAZIL." Thesis, KTH, Bygg- och fastighetsekonomi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-77490.

Full text
Abstract:
This thesis report presents the process which has been followed to develop a large real estate project in São Paulo, Brazil, where the buildings remaining on the site are landmarks. The report includes an extensive case study of the Matarazzo Project, for which I analysed the procedure of development during my internship at SCPM – a French Project Management company. The ultimate goal of the thesis is to provide investors with brief recommendations to develop similar projects in Brazil with respect for the cultural values. A French Investor intends to develop the Matarazzo Project – a large and complex real estate project in São Paulo, Brazil – on a site defined as a national and municipal landmark by public authorities, respectively CONDEPHAAT and CONPRESP, due to the remaining buildings erected from 1904, which are witnesses of the well-organized institutions of Italian immigrants. The protection of the existing buildings involved a particular procedure to apply for permits. Indeed, it implies the presentation of the project to several organs such as IPHAN, CONDEPHAAT, CONPRESP, SEHAB, SMT, DEPAVE, etc. with a list of required documents – TAC, Projeto de Restauro, Relatorió de Impacto de Vicinhenza, plans, layout, renderings, etc. Thus, applying for building permits in such a situation requires a number of particular consultants, such as a Legal Authorization Specialist, Retrofit Specialist, DEPAVE Specialist, Cultural Centre Specialist and lawyers, added to the stakeholders normally present during the development of a real estate project – architects, engineers, land surveyor, quantity surveyor, insurance companies, etc. The case study involved, at this stage, more than 23 entities (2 from the Direction, 3 from the Supervision, and 18 from the Executive Stakeholders). The combination of actors was such because I carried out my internship at an early stage of the project – maybe the earliest. Indeed, when I started, the Master Plan had not been defined yet and the Work Cost Estimate had not been performed, even though the Project Manager already had an idea about the overall schedule and was hiring the appropriate stakeholders. By now, the Master Plan has been fixed and comprises both greenfield and brownfield areas. The existing buildings will host a Retail Centre (18.000m²) surrounded by a glazed roof and a Palace Hotel (10.000m²), the Chapel remains a religious place, and the Paediatric building will be replaced by a Village Hall (500m²). Underground constructions will be located throughout the site, with a Cultural Centre (18.500m²) and a Parking Lot (55.000m²). In addition, depending on the Right-to-Build, a Tower (21.000m²) will be erected near the Ponta. Consequently, in terms of time, the Project Manager forecasts the whole project to last no less than five years – including legal document approvals and works execution. In terms of budget, a Work Cost Estimate – more or less accurate depending on the level of completion of the plans of each specific area – has been done so that the Client can start to set up the Business Plan and develop the strategy to finance the project – finding financers, operators, tenants, etc. Having worked more than five months on the Matarazzo Project makes it possible to analyse what the situation has been and what it should be. It is crystal clear that mentalities and ways of proceeding between France and Brazil are different.
Nothing is said; it is the role of the consultants to establish which strategy to choose, or to state things such as what may be built and how to build it. Nothing is written either; indeed, there is no construction code, barely a Código de Obras e Edificações, which defines which permit to apply for depending on the work to be performed. So the spirit is 'do the best you can and let's see if it will be accepted by the legal authorities'. Conflict is avoided – problems are not pointed out directly, so they persist and become bigger, putting the whole project on hold. The solution to all this has been to hire a Project Manager Assistant to work directly from there, increasing communication between France and Brazil, researching similar projects, and trying to keep everyone on the right track because – due to the size of the project – minor points are often forgotten and become major points. For the future, the Project Manager is starting to plan the whole organization of the project, in particular for the detailed conception and execution phases. Regarding the work breakdown structure, having one representative for the architectural team and one general contractor is the favoured option to simplify communication, despite the disadvantages it implies (information retention, increased fees for management of sub-contractors, etc.). The analysis of the procedure of development of a large real estate project in São Paulo, Brazil, has resulted in recommendations on where attention should be focused. In short, the recommendations include the following: being aware of the local culture and local way of proceeding (steps of development, local institutions, subsequent required documents); having a good internal organization (being aware of what is due and by whom). For more details on the recommendations, cf. chapter 6.
APA, Harvard, Vancouver, ISO, and other styles
39

CURI, CLAUDIA. "An Improved procedure for Bootstrapping Malmquist Indices and its applications on the regional economic growth." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2008. http://hdl.handle.net/2108/725.

Full text
Abstract:
Extending the Färe et al. (1992) approach to the Malmquist productivity index, which can be decomposed into indices describing changes in technology and changes in efficiency, Simar and Wilson (1999) provided a statistical interpretation of the Malmquist productivity index and its components, and presented a bootstrap algorithm to estimate confidence intervals for the indices. Building on the recent developments introduced by Simar and Wilson (2007) for bandwidth specification in the univariate case, we propose new methods of density estimation based on a more accurate bandwidth specification. Monte Carlo experiments have been carried out for the first time in this context. They show poor performance of the Simar and Wilson (1999) bootstrap approximations and a high level of performance for the proposed methods. In particular, the best-performing method is the procedure based on density estimation that excludes the unit efficiency scores, revealing the severe problem that scores equal to one deteriorate the estimation of the continuous density of the efficiency scores. Moreover, data-driven methods have been applied to the Malmquist index framework and, at this stage of the research, they give results that differ from those provided by Simar and Wilson (1999), leaving ample room for future research. From an empirical point of view, Total Factor Productivity (TFP) growth of the Italian regions over the period 1980-2001 has been analyzed. The Malmquist Productivity Index (MPI) and its components (namely efficiency change and technical change), as well as their confidence intervals, have been estimated by applying the best-performing procedure identified in the previous stage of the research. Including human capital among the inputs, we estimate an overall bias-corrected productivity gain of 2.1 percent, an efficiency gain of 0.5 percent and a technical gain of 1.6 percent. The bootstrap analysis reveals that for most Italian regions efficiency and technical change did not show a statistically significant variation. According to these results, the inferential approach provides more rigorous and accurate insights into Italian regional TFP than the traditional Data Envelopment Analysis (DEA) estimation carried out by Leonida et al. (2004, Table 1, p. 2190), in which all the estimated values are interpreted as progress or regress without taking into account the bias of the estimates or their statistical significance.
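For orientation, the output-oriented decomposition of Färe et al. (1992) that the bootstrap procedure targets can be written, in conventional notation (not copied from the thesis), as

```latex
M^{t,t+1}
  = \underbrace{\frac{D^{t+1}\!\left(x^{t+1},\,y^{t+1}\right)}
                     {D^{t}\!\left(x^{t},\,y^{t}\right)}}_{\text{efficiency change}}
    \times
    \underbrace{\left[
        \frac{D^{t}\!\left(x^{t+1},\,y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1},\,y^{t+1}\right)}
        \cdot
        \frac{D^{t}\!\left(x^{t},\,y^{t}\right)}{D^{t+1}\!\left(x^{t},\,y^{t}\right)}
      \right]^{1/2}}_{\text{technical change}}
```

where D^s(x, y) is the distance function measured against the period-s frontier; values of M above one indicate a productivity gain, and the bootstrap replicates the DEA-estimated distance functions in order to attach confidence intervals to M and to its two factors.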
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Jian. "Long-horizon predictability of foreign currency prices and excess returns : alternative procedures for estimation and inference /." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1280251178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Konoshima, Risako. "A Study on Effective Spatial Pooling Procedure in Estimation of Return Values of Precipitation for Adaptation to Climate Change." 京都大学 (Kyoto University), 2012. http://hdl.handle.net/2433/157554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Bunn, Wendy Jill. "Sensitivity to Distributional Assumptions in Estimation of the ODP Thresholding Function." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/953.

Full text
Abstract:
Recent technological advances in fields like medicine and genomics have produced high-dimensional data sets and, with them, the challenge of correctly interpreting experimental results. The Optimal Discovery Procedure (ODP) (Storey 2005) builds on the framework of Neyman-Pearson hypothesis testing to optimally test thousands of hypotheses simultaneously. The method relies on the assumption of normally distributed data; however, many applications of the method will violate this assumption. This thesis investigates the sensitivity of the method to the detection of significant but nonnormal data. Overall, estimation of the ODP with the method described in this thesis is satisfactory, except when the nonnormal alternative distribution has high variance and an expectation only one standard deviation away from the null distribution.
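Under the normality assumption mentioned above, an ODP-style statistic for each test is the likelihood of its data summed over all tests' fitted alternative models, divided by the likelihood summed over the null-constrained models. The sketch below is a rough illustration of that idea, not Storey's exact estimator (which, among other refinements, weights the sums by estimated null probabilities); the function name and the m-tests-by-n-replicates layout of `data` are my own choices.

```python
import numpy as np
from scipy import stats

def odp_scores(data, null_mean=0.0):
    """Rough ODP-style statistic under normality: for each test, the summed
    likelihood of its observations under every test's fitted alternative model,
    divided by the summed likelihood under the null-constrained models."""
    m, n = data.shape                                      # m tests, n replicates each
    mu = data.mean(axis=1)                                 # fitted alternative means
    sd = data.std(axis=1, ddof=1)                          # fitted alternative scales
    sd0 = np.sqrt(((data - null_mean) ** 2).mean(axis=1))  # null-constrained scales

    scores = np.empty(m)
    for i in range(m):
        x = data[i]
        f_alt = stats.norm.pdf(x, loc=mu[:, None], scale=sd[:, None]).prod(axis=1)
        f_null = stats.norm.pdf(x, loc=null_mean, scale=sd0[:, None]).prod(axis=1)
        scores[i] = f_alt.sum() / f_null.sum()             # large scores suggest signal
    return scores
```

Significance would then typically be calibrated by recomputing the scores on data generated (or permuted) under the null and estimating false discovery rates from the two sets of scores.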
APA, Harvard, Vancouver, ISO, and other styles
43

Ezzo, Issa. "Determination of the conversion factor for the estimation of effective dose in lungs, urography and cardiac procedures." Thesis, Stockholm University, Medical Radiation Physics (together with KI), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8520.

Full text
Abstract:

Patient dose in diagnostic radiology is usually expressed in terms of organ dose and effective dose. The latter is used as a measure of the stochastic risk. These doses are determined either by measurement (thermoluminescent dosemeters) or by calculation (Monte Carlo simulation).

Conversion factors for calculating effective dose from dose-area product (DAP) values are commonly used in conventional x-ray imaging to assess the radiation risk of different investigations and age groups. The exposure can then easily be estimated by converting the DAP into an effective dose.

The aim of this study is to determine the conversion factor in procedures by computing the ratio between effective dose and DAP for fluoroscopic cardiac procedures in adults and for conventional lung and urography examinations in children.

Thermoluminescent dosemeters (TLD) were placed in an anthropomorphic phantom (Alderson Rando phantom) and in a child phantom (one year old) in order to measure the organ doses and compute the effective dose. A DAP meter was used to measure the dose-area product.

Monte Carlo (MC) calculations of radiation transport in mathematical anthropomorphic phantoms were used to obtain the effective dose for the same conditions, with DAP as input data.

The deviation between the measured and calculated data was less than 10%. The conversion factor for cardiac procedures was 0.19 mSv Gy⁻¹ cm⁻² for TLD and 0.18 mSv Gy⁻¹ cm⁻² for MC. For the paediatric simulation with a one-year-old phantom, the average conversion factor for urography was 1.34 mSv Gy⁻¹ cm⁻² for TLD and 1.48 mSv Gy⁻¹ cm⁻² for MC. The TLD-based factor decreases to 1.07 mSv Gy⁻¹ cm⁻² if the new ICRP (Publication 103) tissue weighting factors are used to calculate the effective dose.

For lung investigations, the conversion factor for children was 1.75 mSv Gy⁻¹ cm⁻² using TLD and 1.62 mSv Gy⁻¹ cm⁻² using MC simulation. The value increases to 2.02 mSv Gy⁻¹ cm⁻² when ICRP's new recommendation for tissue weighting factors is applied to the child phantom.
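As a worked illustration of how such a factor is applied (the conversion factor is the cardiac TLD value quoted above; the DAP reading is a made-up example, not data from the study):

```python
# Effective dose estimated from a DAP reading via a conversion factor (illustration only).
conversion_factor = 0.19   # mSv Gy^-1 cm^-2, cardiac procedures (TLD value above)
dap_reading = 50.0         # Gy cm^2, hypothetical DAP-meter reading for one procedure
effective_dose = conversion_factor * dap_reading
print(f"Estimated effective dose: {effective_dose:.1f} mSv")   # prints 9.5 mSv
```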

APA, Harvard, Vancouver, ISO, and other styles
44

Abbas, Qaisar. "Weak Boundary and Interface Procedures for Wave and Flow Problems." Doctoral thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-159440.

Full text
Abstract:
In this thesis, we have analyzed the accuracy and stability of weak boundary and interface conditions (WBCs) for high order finite difference methods on Summation-By-Parts (SBP) form. The numerical technique has been applied to wave propagation and flow problems. The advantage of WBCs over strong boundary conditions is that stability of the numerical scheme can be proven. The boundary procedures for the advection-diffusion equation in a boundary layer problem are analyzed. By performing Navier-Stokes calculations, it is shown that most of the conclusions from the model problem carry over to the fully nonlinear case. The work is complemented by the new idea of imposing WBCs at multiple grid points in a region where the data is known, instead of at a single point. It is shown that this approach yields high accuracy, an increased rate of convergence to steady state and non-reflecting boundary conditions. Using the SBP technique and WBCs, we have worked out how to construct conservative and energy-stable hybrid schemes for shocks using two different approaches. In the first method, we combine a high order finite difference scheme with a second order MUSCL scheme. In the second method, a procedure to locally change the order of accuracy of the finite difference scheme is developed. The main purpose is to obtain a higher order accurate scheme in smooth regions and a low order non-oscillatory scheme in the vicinity of shocks. Furthermore, we have analyzed the energy stability of the MUSCL scheme by reformulating it in the framework of SBP and artificial dissipation operators. It was found that many of the standard slope limiters in the MUSCL scheme do not lead to a negative semi-definite dissipation matrix, as required for pointwise stability. Finally, high order simulations of a shock diffracting over a convex wall with two facets were performed for a range of Reynolds numbers. By monitoring the velocities at the solid wall, it was shown that the computations were resolved in the boundary layer. Schlieren images obtained from the computational results displayed new and interesting flow features.
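To make the weak-boundary idea concrete, here is a minimal sketch (my own illustration, not code from the thesis) of a second-order diagonal-norm SBP operator with a SAT penalty that imposes the inflow condition weakly for the scalar advection equation u_t + a u_x = 0; the grid size, penalty value and boundary data are arbitrary choices for the example.

```python
import numpy as np

def advect_sbp_sat(n=101, a=1.0, t_end=1.0, g=lambda t: np.sin(4.0 * np.pi * t)):
    """Scalar advection u_t + a*u_x = 0 on [0, 1] with a weak (SAT) inflow condition
    u(0, t) = g(t), using a second-order diagonal-norm SBP operator and classic RK4."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]

    # Second-order SBP first-derivative operator: D = H^{-1} Q.
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = 0.5 * h
    Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    D = np.linalg.solve(H, Q)

    Hinv_e0 = np.linalg.solve(H, np.eye(n)[:, 0])   # H^{-1} e_0, used by the penalty
    tau = -a                                        # a standard stable SAT penalty for a > 0

    def rhs(v, t):
        # Weak enforcement: penalize the boundary mismatch instead of overwriting v[0].
        return -a * (D @ v) + tau * Hinv_e0 * (v[0] - g(t))

    u, t, dt = np.zeros(n), 0.0, 0.4 * h / abs(a)
    while t < t_end:
        step = min(dt, t_end - t)
        k1 = rhs(u, t)
        k2 = rhs(u + 0.5 * step * k1, t + 0.5 * step)
        k3 = rhs(u + 0.5 * step * k2, t + 0.5 * step)
        k4 = rhs(u + step * k3, t + step)
        u += (step / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += step
    return x, u
```

The energy method applied to this semi-discretization is what makes the stability proof possible: the SBP property of Q together with the penalty term bounds the growth of the discrete norm of u.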
APA, Harvard, Vancouver, ISO, and other styles
45

Charbonnier, Sylvie. "Contribution de l'automatique au développement d'un procédé de production d'enzymes : modélisation, estimation, commande." Grenoble INPG, 1994. http://www.theses.fr/1994INPG0125.

Full text
Abstract:
The work presented in this thesis is devoted to applying control engineering techniques to a lipase production process in order to design a control system that maximizes production. Since classical estimation and control techniques cannot solve this problem, original solutions that take the specific features of the process into account are developed and then tested in simulation and on the real process. They are simple to implement and their performance is guaranteed from a theoretical point of view. After a modelling study, at the end of which a knowledge-based model characterizing the dynamics of the process is proposed, an estimation strategy is defined to monitor the process and compensate for the lack of on-line measurements. It makes it possible to estimate the main state variables and the kinetic parameters of the process from the only available on-line measurement, provided by a gas analysis. The strategy consists in splitting the overall estimation problem into simpler sub-problems, for which solutions are proposed that rely as little as possible on the kinetic models in order to limit the impact of modelling errors. It is validated in simulation and experimentally for different operating modes of the process. A control strategy that maximizes lipase production is then designed from the process model, and a feedback-linearizing control law is developed. To implement it on the real process, the controller is coupled with the state and parameter estimator, and the performance of the combination is studied theoretically and in simulation.
APA, Harvard, Vancouver, ISO, and other styles
46

Glórias, Ludgero Miguel Carraça. "Estimating a knowledge production function and knowledge spillovers : a new two-step estimation procedure of a Spatial Autoregressive Poisson Model." Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/20711.

Full text
Abstract:
Mestrado em Econometria Aplicada e Previsão
Several econometric studies seek to explain the determinants of knowledge production using the number of patents in a region as the dependent variable. Some of them capture the effects of knowledge spillovers through linear models with a spatial autoregressive term. However, no study has been found that estimates such an effect while also taking into account the discrete nature of the dependent variable, a count variable. This dissertation aims to fill this gap by proposing a new two-step maximum likelihood estimator for a Spatial Autoregressive Poisson model. The properties of this estimator are evaluated in a set of Monte Carlo experiments. The simulation results suggest that it presents lower bias and lower RMSE than the alternative estimators considered, performing worse only when the spatial dependence is close to unity. An empirical example using the new estimator and a set of alternative estimators is then presented, in which knowledge creation in 234 NUTS II regions from 24 European countries is analyzed. The results show that there is strong spatial dependence in the creation of innovation. It is also concluded that the socio-economic environment is essential for knowledge formation and that, unlike public R&D institutions, private companies are efficient in producing innovation. It should be noted that regions with less capacity to transform R&D expenditure into new patents have a greater capacity to absorb and pass on knowledge, which shows that neighbouring regions that are less efficient in knowledge production tend to build strong relations with each other, taking advantage of the knowledge-sharing process.
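To fix ideas, one common (though not necessarily the thesis's exact) way to write a spatial autoregressive count model puts the spatially lagged outcome in the log-mean: y_i ~ Poisson(λ_i) with log λ_i = ρ Σ_j w_ij y_j + x_i'β. The naive pseudo-likelihood fit below ignores the simultaneity of W y that the proposed two-step procedure is designed to address, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def fit_sar_poisson_naive(y, X, W):
    """Naive ML fit of a Poisson model with a spatially lagged outcome in the
    log-mean: log(lambda) = rho * (W y) + X beta.  Illustration only; it treats
    W y as an ordinary regressor and so ignores its endogeneity."""
    n, k = X.shape
    Wy = W @ y                                   # spatial lag of the count outcome

    def negloglik(theta):
        rho, beta = theta[0], theta[1:]
        eta = rho * Wy + X @ beta                # linear predictor
        return -(y * eta - np.exp(eta) - gammaln(y + 1)).sum()

    res = minimize(negloglik, np.zeros(k + 1), method="BFGS")
    return res.x                                 # [rho, beta_1, ..., beta_k]
```

Here y is the vector of regional patent counts, X the socio-economic covariates and W a row-standardized spatial weight matrix; a two-step estimator would replace the endogenous lag with a first-stage prediction before maximizing the Poisson likelihood.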
APA, Harvard, Vancouver, ISO, and other styles
47

HUAMANI, LUIS ALBERTO NAVARRO. "A BAYESIAN PROCEDURE TO ESTIMATE THE INDIVIDUAL CONTRIBUTION OF INDIVIDUAL END USES IN RESIDENTIAL ELECTRICAL ENERGY CONSUMPTION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8691@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
The present dissertation investigates the use of seemingly unrelated multivariate regression models from a Bayesian point of view. These models were used to estimate the load curves of household end uses. A conditional demand analysis (CDA) structure was adopted, given its particular relevance to demand-side management of residential and commercial consumers. The work is divided into three main parts: a description of the classical statistical methodologies used for load-curve estimation, a study of seemingly unrelated multivariate regression models under a Bayesian approach, and the development of the model in a case study. A preliminary review of the CDA structure was carried out for the univariate case using multiple regression, and for the multivariate case using seemingly unrelated multivariate regression, where the performance of the structure depends on the correlation structure of the hourly consumption errors within a given day; the methodologies used to estimate the load curves were also reviewed. The study of seemingly unrelated multivariate regression models from a Bayesian perspective highlighted a factor that is important for the performance of the estimation methodology, namely prior information. In developing the model, the load curves of the main household appliances were estimated with the Bayesian approach, demonstrating the methodology's ability to combine two types of information: engineering estimates and CDA estimates. The results obtained with this method described the data better than the classical models did.
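As a rough illustration of the conditional demand idea behind the model (a simplification: the thesis's Bayesian seemingly unrelated formulation links the hourly equations and injects engineering priors, which plain OLS does not), each hour's household consumption is regressed on appliance-ownership indicators, so that the fitted coefficients trace out per-appliance load curves; the function and argument names are mine.

```python
import numpy as np

def cda_load_curves(loads, ownership):
    """Ordinary-least-squares conditional demand analysis, fitted hour by hour.

    loads:     (households, 24) array of metered hourly consumption (kWh)
    ownership: (households, appliances) 0/1 matrix of appliance ownership
    Returns a tuple (appliance_curves, base_load_curve), where appliance_curves
    has one row per appliance giving its estimated hourly load curve."""
    n_households, n_hours = loads.shape
    X = np.column_stack([np.ones(n_households), ownership])   # intercept = base load
    coefs = np.empty((ownership.shape[1] + 1, n_hours))
    for h in range(n_hours):
        beta, *_ = np.linalg.lstsq(X, loads[:, h], rcond=None)
        coefs[:, h] = beta
    return coefs[1:], coefs[0]
```

Estimating the 24 hourly equations jointly as a seemingly unrelated system, with priors taken from engineering end-use estimates, is what distinguishes the Bayesian procedure from this hour-by-hour fit.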
APA, Harvard, Vancouver, ISO, and other styles
48

Dufresne, Jean-Louis. "Etude et developpement d'une procedure experimentale pour l'identification des parametres d'un modele thermique de capteurs solaires a air en regime dynamique." Paris 7, 1987. http://www.theses.fr/1987PA077107.

Full text
Abstract:
Construction of a test bench for studying the behaviour of an air solar collector under dynamic conditions (variations of the irradiance and of the flow rate through the collector). Analysis of solar radiation data collected every minute over one year at Orsay (France). An identification method for ten free parameters of a dynamic model of the air collector, based on the measurement of a single system output: the extracted power. Discussion of the estimation of measurement errors. Application to an industrial collector (the ACRET collector).
APA, Harvard, Vancouver, ISO, and other styles
49

Liu, Peng. "Adaptive Mixture Estimation and Subsampling PCA." Case Western Reserve University School of Graduate Studies / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1220644686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

GRUNEWALD, CARLOS AUGUSTO. "A RECURSIVE PROCEDURE FOR THE FACE AND FCCE ESTIMATION IN THE IDENTIFICATION OF THE BOX & JENKINS AND TIAO & BOX MODELS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1985. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14445@1.

Full text
Abstract:
ELETROBRAS - CENTRAIS ELÉTRICAS BRASILEIRAS S. A.
In this work we discuss the implementation of a recursive procedure for estimating the Extended Sample Autocorrelation Function (FACE) for univariate time series and the Extended Sample Cross-Correlation Function (FCCE) for vectors of time series. A computer program written in PL/I was developed using the above procedure for the univariate case (FACE) only. The use of this procedure allows larger dimensions for the FACE matrix and, as a consequence, the identification of seasonal models via the FACE is included as a particular case.
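For intuition only, the sketch below tabulates residual autocorrelations from successive AR fits, a simplified relative of the extended sample autocorrelation idea; the exact extended ACF of Tsay and Tiao (1984), which underlies the FACE, is computed through iterated regressions rather than a single OLS fit per row, and this sketch is not the thesis's recursive algorithm.

```python
import numpy as np

def residual_acf_table(x, max_p=5, max_q=5):
    """For each AR order p, fit AR(p) by OLS and tabulate the residual
    autocorrelations at lags 1..max_q+1.  A rough diagnostic in the spirit of
    the extended sample ACF table used to identify ARMA orders."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)

    def acf(e, lag):
        e = e - e.mean()
        return float((e[lag:] * e[:-lag]).sum() / (e ** 2).sum())

    table = np.empty((max_p + 1, max_q + 1))
    for p in range(max_p + 1):
        if p == 0:
            resid = x.copy()
        else:
            Y = x[p:]
            X = np.column_stack([x[p - j:n - j] for j in range(1, p + 1)])
            phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
            resid = Y - X @ phi
        table[p] = [acf(resid, q + 1) for q in range(max_q + 1)]
    return table   # rows: AR order p; columns: residual ACF at lag q+1
```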
APA, Harvard, Vancouver, ISO, and other styles
