Dissertations / Theses on the topic 'Estimation theory'


Consult the top 50 dissertations / theses for your research on the topic 'Estimation theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Baur, Cordula. "Risk Estimation in Portfolio Theory." St. Gallen, 2007. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/05609706001/$FILE/05609706001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Feinstein, Jonathan S. "Detection controlled estimation : theory and applications." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Qin, Li. "Nonparametric estimation in actuarial ruin theory." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khorramzadeh, Yasamin. "Network Reliability: Theory, Estimation, and Applications." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/64383.

Full text
Abstract:
Network reliability is the probabilistic measure that determines whether a network remains functional when its elements fail at random. The definition of functionality varies depending on the problem of interest; thus network reliability has much potential as a unifying framework to study a broad range of problems arising in complex network contexts. However, since its introduction in the 1950s, network reliability has remained more of an interesting theoretical construct than a practical tool. In large part, this is due to well-established complexity costs for both its evaluation and approximation, which has led to the classification of network reliability as an NP-hard problem. In this dissertation we present an algorithm to estimate network reliability and then utilize it to evaluate the reliability of large networks under various descriptions of functionality. The primary goal of this dissertation is to pose network reliability as a general scheme that provides a practical and efficiently computable observable to distinguish different networks. Employing this concept, we are able to demonstrate how local structural changes can impose global consequences. We further use network reliability to assess the most critical network entities that ensure a network's reliability. We investigate each of these aspects of reliability by demonstrating some example applications.
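The reliability measure described in the abstract — the probability that a network remains functional when its elements fail at random — can be illustrated with a small Monte Carlo sketch for two-terminal reliability. The graph, failure probability and function names below are illustrative only; the dissertation develops its own estimation algorithm.

```python
import random
from collections import deque

def connected(nodes, edges, s, t):
    # BFS over the surviving edges: is terminal t reachable from source s?
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return t in seen

def reliability_mc(nodes, edges, p_fail, s, t, trials=20000, seed=0):
    # Crude Monte Carlo estimate of two-terminal reliability:
    # sample edge failures independently, count trials where s-t stays connected
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        survive = [e for e in edges if rng.random() > p_fail]
        ok += connected(nodes, survive, s, t)
    return ok / trials
```

For two edges in series, each failing with probability 0.1, the exact reliability is 0.9 × 0.9 = 0.81, which the estimator recovers to within sampling error.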
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Qian. "Stock bubbles : The theory and estimation." Thesis, Brunel University, 2006. http://bura.brunel.ac.uk/handle/2438/3597.

Full text
Abstract:
This work attempts to make a breakthrough in the empirical research of market inefficiency by introducing a new approach, the value frontier method, to estimate the magnitude of stock bubbles, a topic that has attracted considerable research attention. The theoretical framework stems from Blanchard & Watson's (1982) basic argument that the rationally expected asset value should equal the fundamental value of the stock, and the argument of Scheinkman & Xiong (2003) and Hong, Scheinkman & Xiong (2006) that bubbles are formed by heterogeneous beliefs, which can be refined as the optimism effect and the resale option effect. The applications of the value frontier methodology are demonstrated in this work at the market level and the firm level respectively. The estimated bubbles at the market level enable us to analyse bubble changes over time among 37 countries across the world, which helps further examine the relationship between economic factors (e.g. inflation) and bubbles. Firm-level bubbles are estimated in two developed markets, the US and the UK, as well as one emerging market, China. We found that the market-average bubble is less volatile than industry-level bubbles. This finding provides a compelling explanation for the failure of many existing studies in testing the existence of bubbles at the whole market level. In addition, the significant decreasing trend of Chinese bubbles and their co-moving tendency with the UK and the US markets offer us evidence in support of our argument that even in an immature market, investors can improve their investment perceptions towards rationality by learning not only from previous experience but also from other open markets.
Furthermore, following the arguments of “sustainable bubbles” from Binswanger (1999) and Scheinkman & Xiong (2003), we reinforce their claims at the end that a market with bubbles can also be labelled efficient; in particular, it has three forms of efficiency. First, a market without bubbles is completely efficient from the perspective of investors’ responsiveness to given information; secondly, a market with “sustainable bubbles” (bubbles that co-move with the economy), which results from rational responses to economic conditions, is in the strong form of information-responsive efficiency; thirdly, a market with “non-sustainable bubbles”, i.e. the bubble changes are not linked closely with economic foundations, is in the weak form of information-responsive efficiency.
APA, Harvard, Vancouver, ISO, and other styles
6

Whaley, Dewey Lonzo. "The Interquartile Range: Theory and Estimation." Thesis, Digital Commons @ East Tennessee State University, 2005. https://dc.etsu.edu/etd/1030.

Full text
Abstract:
The interquartile range (IQR) is used to describe the spread of a distribution. In an introductory statistics course, the IQR might be introduced as simply the “range within which the middle half of the data points lie.” In other words, it is the distance between the two quartiles, IQR = Q3 - Q1. We will compute the population IQR, the expected value, and the variance of the sample IQR for various continuous distributions. In addition, a bootstrap confidence interval for the population IQR will be evaluated.
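The quantities in this abstract are easy to make concrete: the sample IQR is Q3 − Q1, and a percentile bootstrap gives a confidence interval for the population IQR. The sketch below is a minimal illustration (the function names and the choice of the default 'exclusive' quartile method are assumptions, not details from the thesis):

```python
import random
import statistics

def sample_iqr(data):
    # IQR = Q3 - Q1, using statistics.quantiles' default 'exclusive' method
    q1, _, q3 = statistics.quantiles(data, n=4)
    return q3 - q1

def bootstrap_iqr_ci(data, level=0.95, n_boot=2000, seed=0):
    # Percentile bootstrap CI for the population IQR:
    # resample with replacement, recompute the IQR, take empirical quantiles
    rng = random.Random(seed)
    reps = sorted(
        sample_iqr([rng.choice(data) for _ in data])
        for _ in range(n_boot)
    )
    lo = reps[int((1 - level) / 2 * n_boot)]
    hi = reps[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi
```

For the integers 1 through 100, the exclusive-method quartiles are Q1 = 25.25 and Q3 = 75.75, so the sample IQR is 50.5, with the bootstrap interval bracketing it.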
APA, Harvard, Vancouver, ISO, and other styles
7

Pilota, Evdoxia. "Extreme value theory for value at risk estimation : Theory and empirical application." Thesis, University of Essex, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Al-Baidhani, Fadil Ajab. "Reliability theory in operational research." Thesis, University of St Andrews, 1991. http://hdl.handle.net/10023/13745.

Full text
Abstract:
This thesis is concerned principally with the problem of estimating the parameters of the Weibull and Beta distributions using several different techniques. These distributions are used in the area of reliability testing and it is important to achieve the best estimates possible of the parameters involved. After considering several accepted methods of estimating the relevant parameters, it is considered that the best method depends on the aim of the analysis, and on the value of the shape parameter β. For estimating the two-parameter Weibull distribution, it is recommended that Generalized Least Squares (GLS) is the best method to use for values of β between 0.5 and 30. However, the Maximum Likelihood Estimator (MLE) is a good method for estimating quantiles. On this basis, the three-parameter Weibull distribution is investigated. The traditional parametrization is compared with a new parametrization developed in this work. By considering parameter effects and intrinsic curvature it is shown that the new parametrization results in a linear effect of the shape parameter. It also has advantages in quantile estimation because of its ability to provide estimates for a wider range of data sets. A less frequently used distribution in the field of reliability is the Beta distribution. Its infrequent use is partly due to the difficulty in estimating its parameters. A simple, applicable method of estimating these parameters is developed here. This 'group method' involves estimating the two ends of the distribution. It is shown that this procedure can be used successfully, together with other methods of estimating the two-parameter Beta distribution, to estimate the four-parameter Beta distribution.
APA, Harvard, Vancouver, ISO, and other styles
9

Vishwanath, T. G. "Adaptive estimation theory with real-time implementation." Thesis, University of the West of Scotland, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yang, Ming. "When Decision Meets Estimation: Theory and Applications." ScholarWorks@UNO, 2007. http://scholarworks.uno.edu/td/627.

Full text
Abstract:
In many practical problems, both decision and estimation are involved. This dissertation intends to study the relationship between decision and estimation in these problems, so that more accurate inference methods can be developed. Hybrid estimation is an important formulation that deals with state estimation and model structure identification simultaneously. Multiple-model (MM) methods are the most widely used tool for hybrid estimation. A novel approach to predict the Internet end-to-end delay using MM methods is proposed. Based on preliminary analysis of the collected end-to-end delay data, we propose an off-line model set design procedure using vector quantization (VQ) and short-term time series analysis so that MM methods can be applied to predict on-line measurement data. Experimental results show that the proposed MM predictor outperforms two widely used adaptive filters in terms of prediction accuracy and robustness. Although hybrid estimation can identify model structure, it mainly focuses on the estimation part. When decision and estimation are of (nearly) equal importance, a joint solution is preferred. By noticing the resemblance, a new Bayes risk is generalized from those of decision and estimation, respectively. Based on this generalized Bayes risk, a novel, integrated solution to decision and estimation is introduced. Our study tries to give a more systematic view of the joint decision and estimation (JDE) problem, which we believe will greatly benefit work in various fields, such as target tracking, communications, and time series modeling. We apply this integrated Bayes solution to joint target tracking and classification, a very important topic in target inference, with simplified measurement models. The results of this new approach are compared with two conventional strategies. Finally, a surveillance testbed is being built for such purposes as algorithm development and performance evaluation.
We try to use the testbed to bridge the gap between theory and practice. In the dissertation, an overview as well as the architecture of the testbed is given and one case study is presented. The testbed is capable of serving tasks with decision and/or estimation aspects, and is helpful for the development of JDE algorithms.
APA, Harvard, Vancouver, ISO, and other styles
11

Kubitzki, Marcus. "State and Parameter Estimation in Quantum Theory." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10806359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Lopes, Renato da Rocha. "Iterative estimation, equalization and decoding." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Gaby, James Eliot. "Artificial intelligence applied to spectrum estimation." Thesis, Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/15715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Schön, Thomas B. "Estimation of Nonlinear Dynamic Systems : Theory and Applications." Doctoral thesis, Linköpings universitet, Reglerteknik, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7124.

Full text
Abstract:
This thesis deals with estimation of states and parameters in nonlinear and non-Gaussian dynamic systems. Sequential Monte Carlo methods are mainly used to this end. These methods rely on models of the underlying system, motivating some developments of the model concept. One of the main reasons for the interest in nonlinear estimation is that problems of this kind arise naturally in many important applications. Several applications of nonlinear estimation are studied. The models most commonly used for estimation are based on stochastic difference equations, referred to as state-space models. This thesis is mainly concerned with models of this kind. However, there will be a brief digression from this, in the treatment of the mathematically more intricate differential-algebraic equations. Here, the purpose is to write these equations in a form suitable for statistical signal processing. The nonlinear state estimation problem is addressed using sequential Monte Carlo methods, commonly referred to as particle methods. When there is a linear sub-structure inherent in the underlying model, this can be exploited by the powerful combination of the particle filter and the Kalman filter, presented by the marginalized particle filter. This algorithm is also known as the Rao-Blackwellized particle filter and it is thoroughly derived and explained in conjunction with a rather general class of mixed linear/nonlinear state-space models. Models of this type are often used in studying positioning and target tracking applications. This is illustrated using several examples from the automotive and the aircraft industry. Furthermore, the computational complexity of the marginalized particle filter is analyzed. The parameter estimation problem is addressed for a relatively general class of mixed linear/nonlinear state-space models. The expectation maximization algorithm is used to calculate parameter estimates from batch data. 
In devising this algorithm, the need to solve a nonlinear smoothing problem arises, which is handled using a particle smoother. The use of the marginalized particle filter for recursive parameter estimation is also investigated. The applications considered are the camera positioning problem arising from augmented reality and sensor fusion problems originating from automotive active safety systems. The use of vision measurements in the estimation problem is central to both applications. In augmented reality, the estimates of the camera’s position and orientation are imperative in the process of overlaying computer generated objects onto the live video stream. The objective in the sensor fusion problems arising in automotive safety systems is to provide information about the host vehicle and its surroundings, such as the position of other vehicles and the road geometry. Information of this kind is crucial for many systems, such as adaptive cruise control, collision avoidance and lane guidance.
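The sequential Monte Carlo machinery this abstract builds on can be illustrated with the simplest member of the family, the bootstrap particle filter. The sketch below, with hypothetical model functions and a measurement-based initialization, is generic background; it is not the marginalized (Rao-Blackwellized) filter the thesis derives, which additionally runs a Kalman filter over the linear sub-structure.

```python
import math
import random

def bootstrap_pf(ys, f, h, q_std, r_std, n=500, seed=0):
    # Generic bootstrap particle filter for the scalar model
    #   x_k = f(x_{k-1}) + v_k,   y_k = h(x_k) + e_k   (Gaussian noises)
    rng = random.Random(seed)
    # initialize particles around the first measurement (a simplifying assumption)
    parts = [ys[0] + rng.gauss(0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # propagate through the dynamics, then weight by the likelihood
        parts = [f(x) + rng.gauss(0, q_std) for x in parts]
        w = [math.exp(-0.5 * ((y - h(x)) / r_std) ** 2) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # multinomial resampling to combat weight degeneracy
        parts = rng.choices(parts, weights=w, k=n)
    return means
```

Fed a constant signal, the filtered mean settles near the true state, which is the basic sanity check for any particle filter implementation.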
APA, Harvard, Vancouver, ISO, and other styles
15

Schön, Thomas B. "Estimation of Nonlinear dynamic systems : theory and applications /." Linköping : Univ, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Tian, Yuandong. "Theory and Practice of Globally Optimal Deformation Estimation." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/269.

Full text
Abstract:
Nonrigid deformation modeling and estimation from images is a technically challenging task due to its nonlinear, nonconvex and high-dimensional nature. Traditional optimization procedures often rely on good initializations and give locally optimal solutions. On the other hand, learning-based methods that directly model the relationship between deformed images and their parameters either cannot handle complicated forms of mapping, or suffer from the Nyquist Limit and the curse of dimensionality due to high degrees of freedom in the deformation space. In particular, to achieve a worst-case guarantee of ε error for a deformation with d degrees of freedom, the sample complexity required is O(1/ε^d). In this thesis, a generative model for deformation is established and analyzed using a unified theoretical framework. Based on the framework, three algorithms, Data-Driven Descent, Top-down and Bottom-up Hierarchical Models, are designed and constructed to solve the generative model. Under Lipschitz conditions that rule out unsolvable cases (e.g., deformation of a blank image), all algorithms achieve globally optimal solutions to the specific generative model. The sample complexity of these methods is substantially lower than that of learning-based approaches, which are agnostic to deformation modeling. To achieve global optimality guarantees with lower sample complexity, the structure embedded in the deformation model is exploited. In particular, Data-Driven Descent relates two deformed images that are far away in the parameter space by compositional structures of deformation and reduces the sample complexity to O(C^d log(1/ε)). The Top-down Hierarchical Model factorizes the local deformation into patches once the global deformation has been estimated approximately and further reduces the sample complexity to O(C^(d/(1+C^2)) log(1/ε)). Finally, the Bottom-up Hierarchical Model builds representations that are invariant to local deformation.
With these representations, the global deformation can be estimated independently of local deformation, reducing the sample complexity to O((C/ε)^(d_0)) (d_0 ≪ d). From the analysis, this thesis shows the connections between approaches that are traditionally considered to be of very different nature. New theoretical conjectures on approaches like Deep Learning are also provided. In practice, broad applications of the proposed approaches have also been demonstrated to estimate water distortion, air turbulence, cloth deformation and human pose with state-of-the-art results. Some approaches even achieve near real-time performance. Finally, application-dependent physics-based models are built with good performance in document rectification and scene depth recovery in turbulent media.
APA, Harvard, Vancouver, ISO, and other styles
17

di Liberto, Adriana. "Human capital and convergence : theory, estimation and applications." Thesis, University College London (University of London), 2004. http://discovery.ucl.ac.uk/1446793/.

Full text
Abstract:
In growth theory, convergence analysis tries to answer three fundamental questions: "Are poor countries catching up with richer ones? How quickly? And what are the determinants of this process?" This thesis deals with issues that are relevant to all these questions. It begins by setting out the key theoretical contributions to the analysis of the role of human capital in growth and convergence. Secondly, attention is turned to the way that convergence is estimated from data. The econometric techniques used in the convergence literature usually assume that shocks are uncorrelated across countries. We claim that this is unlikely for most data sets and investigate the use of an estimator so far ignored, namely the annual panel estimator where shocks are allowed to be correlated. Our analysis indicates that this estimator is more efficient than conventional ones for plausible values of cross-country error correlation. The study then turns to the analysis of the third question. Although differences in human capital endowments and rates of investment have long been recognised as crucial elements for explaining observed GDP gaps, nevertheless, human capital proxies are rarely significant in growth regressions. In this study some possible solutions to this puzzle are explored. We estimate aggregate returns to education in Italy and Spain, and compare our results with the predictions of competing theoretical frameworks. In general, our empirical analysis identifies a positive role for human capital, and stresses the relevance of theoretical models in which human capital has a fundamental but indirect role in the catching up process. The final part of the thesis proposes a new methodology designed to estimate technology levels and to test whether part of observed convergence is due to technology convergence. The results seem to confirm the existence of technology catch-up among regions.
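The first question — whether poor countries catch up — is classically checked by regressing growth on initial income and looking for a negative slope (β-convergence). The cross-section OLS sketch below, with made-up numbers, illustrates that baseline only; the thesis argues for an annual panel estimator that allows correlated shocks instead.

```python
def ols(x, y):
    # simple-regression OLS: returns (intercept, slope)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

# growth regression: growth_i = a + b * log_initial_income_i
# b < 0 indicates (unconditional) beta-convergence
```

With initially poorer economies growing faster in the toy data, the fitted slope is negative, signalling convergence.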
APA, Harvard, Vancouver, ISO, and other styles
18

Kasonga, Raphael Abel. "Asymptotic parameter estimation theory for stochastic differential equations." Carleton University Dissertation (Mathematics), Ottawa, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
19

Breidert, Christoph. "Estimation of willingness-to-pay : theory, measurement, application /." Wiesbaden : Dt. Univ.-Verl, 2006. https://www.lib.umn.edu/slog.phtml?url=http://www.myilibrary.com?id=134357.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Iranzo, Susana. "Three essays on economic geography : theory and estimation /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2003. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Xue, Huitian, and 薛惠天. "Maximum likelihood estimation of parameters with constraints in normal and multinomial distributions." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47850012.

Full text
Abstract:
Motivated by problems in medicine, biology, engineering and economics, constrained parameter problems arise in a wide variety of applications. Among them the application to the dose-response of a certain drug in development has attracted much interest. To investigate such a relationship, we often need to conduct a dose-response experiment with multiple groups associated with multiple dose levels of the drug. The dose-response relationship can be modeled by a shape-restricted normal regression. We develop an iterative two-step ascent algorithm to estimate normal means and variances subject to simultaneous constraints. Each iteration consists of two parts: an expectation-maximization (EM) algorithm that is utilized in Step 1 to compute the maximum likelihood estimates (MLEs) of the restricted means when variances are given, and a newly developed restricted De Pierro algorithm that is used in Step 2 to find the MLEs of the restricted variances when means are given. These constraints include the simple order, tree order, umbrella order, and so on. A bootstrap approach is provided to calculate standard errors of the restricted MLEs. Applications to the analysis of two real datasets on radioimmunological assay of cortisol and bioassay of peptides are presented to illustrate the proposed methods. Liu (2000) discussed maximum likelihood estimation and Bayesian estimation in a multinomial model with simplex constraints by reformulating this constrained parameter problem as an unconstrained parameter problem in the framework of missing data. To utilize the EM and data augmentation (DA) algorithms, he introduced latent variables {Z_il, Y_il} (to be defined later). However, the DA algorithm proposed in his paper did not provide the necessary individual conditional distributions of Y_il given (the observed data and) the updated parameter estimates. Indeed, the EM algorithm developed in his paper is based on the assumption that {Y_il} take fixed, given values.
Fortunately, the EM algorithm is invariant under any choice of the value of Y_il, so the final result is always correct. We have derived the aforesaid conditional distributions and hence provide a valid DA algorithm. A real data set is used for illustration.
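Order restrictions such as the "simple order" mentioned above (μ1 ≤ μ2 ≤ … ≤ μk) are classically fitted with the pool-adjacent-violators algorithm (PAVA). The sketch below solves the weighted least-squares problem under a simple order and is offered as generic background only, not as the thesis's two-step EM / restricted De Pierro algorithm.

```python
def pava(y, w=None):
    # Pool-adjacent-violators: weighted least-squares fit of means
    # under the simple order mu_1 <= mu_2 <= ... <= mu_k
    w = w or [1.0] * len(y)
    level, weight, count = [], [], []
    for yi, wi in zip(y, w):
        level.append(float(yi))
        weight.append(float(wi))
        count.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(level) > 1 and level[-2] > level[-1]:
            tot = weight[-2] + weight[-1]
            level[-2] = (weight[-2] * level[-2] + weight[-1] * level[-1]) / tot
            weight[-2] = tot
            count[-2] += count[-1]
            del level[-1], weight[-1], count[-1]
    fitted = []
    for lv, c in zip(level, count):
        fitted.extend([lv] * c)
    return fitted
```

For example, the unconstrained means (1, 3, 2, 4) violate the order at the middle pair, which PAVA pools to their average 2.5.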
published_or_final_version
Statistics and Actuarial Science
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
22

Millar, R. B. "Estimation of mixing and mixed distributions /." Thesis, Connect to this title online; UW restricted, 1989. http://hdl.handle.net/1773/8984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Bober, Miroslaw Zbigniew. "General motion estimation and segmentation from image sequences." Thesis, University of Surrey, 1994. http://epubs.surrey.ac.uk/843929/.

Full text
Abstract:
This thesis is concerned with the problem of motion estimation and segmentation, mainly related to planar motion in the image plane. The emphasis is placed on several important issues, namely: the study of different motion models and their performance, the benefits resulting from the use of contextual information, the application of multiresolution strategies, and the use of Robust methods and confidence measures. The thesis investigates the application of global motion models, in particular the affine model, in different estimation and segmentation approaches. It is shown that the use of such models, which globally constrain the estimate, results in improved accuracy and robustness. Robust techniques, which can cope with outliers often present when larger data sets are used, are adopted and tested here. The performance is further improved by the use of confidence measures, and of contextual information such as intensity edges or moving feature information. Two broad classes of approach are developed and investigated. The first one is based on the theory of Markov Random Fields. Novel elements in this approach include the introduction of a complex motion model - capable of describing translation, rotation and change of scale - and confidence factors describing the reliability of the data. The application of the Supercoupling approach for multiresolution optimisation speeds up convergence and further improves the quality of the estimate. The second class of algorithms is based on the Hough Transform. An in-depth investigation of the behaviour of the standard Hough Transform is conducted. This leads to the adoption of a robust statistics method providing a better estimate accuracy, better motion segmentation and guaranteed convergence. The use of multiresolution representation in the image plane, in addition to multiresolution in the parameter space, brings the advantage of robust and fast convergence even for large displacements. 
An important contribution of the research is the evaluation of different kernel functions from the point of view of robustness to noise and change in illumination conditions. Two algorithms from this group have been developed. The first one processes an entire image and provides parallel motion segmentation and estimation. The other is used as a local and robust method for the estimation of optic flow, with the ability to detect multimodal motions. A comparative study with other state-of-the-art methods is conducted, and the results are strongly in favour of the new algorithms. In summary, all stages of motion estimation and segmentation have been investigated. At the low-level, a robust algorithm for optic flow estimation has been developed. It can cope with multiple moving objects, and detects motion boundaries and occluded/uncovered regions. The spatial coherence of motion is enforced here very strongly, resulting in an accurate estimate and reliable confidence measures. This low-level estimate may be globally interpreted, together with other clues and a priori knowledge of the world using a multi-scale Markov Random Field approach. Alternatively, motion estimation and segmentation may be performed in parallel globally using the Robust Hough Transform approach. At this stage meaningful objects can be segmented, thus providing a high-level description of the scene.
APA, Harvard, Vancouver, ISO, and other styles
24

Yoo, Hyungsuk. "Quality of the Volterra transfer function estimation /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Zheng, Xueying, and 郑雪莹. "Robust joint mean-covariance model selection and time-varying correlation structure estimation for dependent data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50899703.

Full text
Abstract:
In longitudinal and spatio-temporal data analysis, repeated measurements from a subject can be either regional- or temporal-dependent. The correct specification of the within-subject covariance matrix cultivates an efficient estimation for mean regression coefficients. In this thesis, robust estimation for the mean and covariance jointly for the regression model of longitudinal data within the framework of generalized estimating equations (GEE) is developed. The proposed approach integrates the robust method and joint mean-covariance regression modeling. Robust generalized estimating equations using bounded scores and leverage-based weights are employed for the mean and covariance to achieve robustness against outliers. The resulting estimators are shown to be consistent and asymptotically normally distributed. Robust variable selection method in a joint mean and covariance model is considered, by proposing a set of penalized robust generalized estimating equations to estimate simultaneously the mean regression coefficients, the generalized autoregressive coefficients and innovation variances introduced by the modified Cholesky decomposition. The set of estimating equations select important covariate variables in both mean and covariance models together with the estimating procedure. Under some regularity conditions, the oracle property of the proposed robust variable selection method is developed. For these two robust joint mean and covariance models, simulation studies and a hormone data set analysis are carried out to assess and illustrate the small sample performance, which show that the proposed methods perform favorably by combining the robustifying and penalized estimating techniques together in the joint mean and covariance model. Capturing dynamic change of time-varying correlation structure is both interesting and scientifically important in spatio-temporal data analysis. 
The time-varying empirical estimator of the spatial correlation matrix is approximated by groups of selected basis matrices which represent substructures of the correlation matrix. After projecting the correlation structure matrix onto the space spanned by basis matrices, varying-coefficient model selection and estimation for signals associated with relevant basis matrices are incorporated. The unique feature of the proposed model and estimation is that time-dependent local region signals can be detected by the proposed penalized objective function. In theory, model selection consistency on detecting local signals is provided. The proposed method is illustrated through simulation studies and a functional magnetic resonance imaging (fMRI) data set from an attention deficit hyperactivity disorder (ADHD) study.
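The modified Cholesky decomposition used in this abstract re-expresses a covariance matrix Σ through a unit lower-triangular T and a diagonal D with T Σ Tᵀ = D, so the negated sub-diagonal entries of T are the generalized autoregressive coefficients and D holds the innovation variances. A small pure-Python sketch via the ordinary Cholesky factor (illustrative only; the thesis embeds this decomposition in penalized robust GEEs):

```python
def cholesky(a):
    # lower-triangular L with a = L L^T (a symmetric positive definite)
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((a[i][i] - s) ** 0.5 if i == j
                       else (a[i][j] - s) / L[j][j])
    return L

def modified_cholesky(sigma):
    # returns (T, D): T unit lower-triangular, D diagonal, T sigma T^T = D
    n = len(sigma)
    L = cholesky(sigma)
    # write L = C S with S = diag(L); then sigma = C S^2 C^T, so T = C^{-1}
    C = [[L[i][j] / L[j][j] for j in range(n)] for i in range(n)]
    T = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(i):  # forward-substitute to invert unit-triangular C
            T[i][j] = -sum(C[i][k] * T[k][j] for k in range(j, i))
    D = [L[i][i] ** 2 for i in range(n)]
    return T, D
```

For Σ = [[1, 0.5], [0.5, 2]] the autoregressive coefficient of the second measurement on the first is 0.5 (so T[1][0] = −0.5) and the innovation variances are 1 and 1.75.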
published_or_final_version
Statistics and Actuarial Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
26

Warner, Carl Michael 1952. "ESTIMATION OF NONSTATIONARY SIGNALS IN NOISE (PROCESSING, ADAPTIVE, WIENER FILTERS, ESTIMATION, DIGITAL)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/291297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kouch, Richard (Banking & Finance, Australian School of Business, UNSW). "Efficient estimation in portfolio management." Awarded by: University of New South Wales, School of Banking and Finance, 2006. http://handle.unsw.edu.au/1959.4/26943.

Full text
Abstract:
This thesis investigates whether estimating the inputs of the Markowitz (1952) Mean-Variance framework using various econometric techniques leads to improved optimal portfolio allocations at the country, sector and stock levels over a number of time periods. We build upon previous work by using various combinations of conventional and Bayesian expected returns and covariance matrix estimators in a Mean-Variance framework that incorporates a benchmark reference, an allowable deviation range from the benchmark weights and short-selling constraints so as to achieve meaningful and realistic outcomes. We found that models based on the classical maximum likelihood method performed just as well as the more sophisticated Bayesian return estimators in the study. We also found that the covariance matrix estimators analysed created covariance matrices that were similar to one another and, as a result, did not seem to have a large effect on the overall portfolio allocation. A sensitivity analysis on the level of risk aversion confirmed that the simulation results were robust for the different levels of risk aversion.
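The Markowitz inputs the abstract refers to — expected returns μ and a covariance matrix Σ — feed a quadratic program; in the unconstrained case with risk aversion λ the optimal weights are w = (1/λ) Σ⁻¹ μ. A two-asset sketch with illustrative numbers (the thesis adds a benchmark reference, deviation bounds and short-selling constraints, which require a numerical QP solver):

```python
def mv_weights_2asset(mu, cov, risk_aversion):
    # unconstrained mean-variance solution w = (1/lambda) * inv(cov) @ mu,
    # written out explicitly for the 2x2 case
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [(inv[i][0] * mu[0] + inv[i][1] * mu[1]) / risk_aversion
            for i in range(2)]
```

With uncorrelated assets the weights decouple: each is its expected return over (λ × variance).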
APA, Harvard, Vancouver, ISO, and other styles
28

Navarro, Myra C. "Small area estimation : estimating selected economic statistics for provinces of the Philippines." Thesis, Canberra, ACT : The Australian National University, 1992. http://hdl.handle.net/1885/116918.

Full text
Abstract:
Different methods of small area estimation are discussed in detail, together with a summary of the applications of each method. A case study consisting of an application of small area estimation to selected Philippine economic data (employment, salary and value of output) is described. Seven different estimators are applied to this problem of estimating intercensal small area characteristics: three synthetic estimators, a SPREE estimator, a RATIO-SPREE estimator and two regression-based estimators. In addition, estimates of the variance of each estimator are derived. The various estimators are evaluated on the basis of their performance in replicating the 1988 Census of Establishments (CE) preliminary estimates. The percentage absolute relative error and the root mean square error are used in assessing the performance of each estimator. The SPREE estimator is the best choice based on percentage absolute relative error, while the RATIO-SPREE estimator is the most accurate in terms of root mean square error.
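Ratio-type small area estimators of the kind compared above share a simple core: scale a known auxiliary total by a sample ratio. A minimal sketch follows, using the classical ratio estimator and the error criterion named in the abstract; the data and function names are invented for illustration, and this is not the RATIO-SPREE construction itself.

```python
import numpy as np

def ratio_estimator(y_sample, x_sample, x_area_total):
    """Classical ratio estimator: scale a known auxiliary total for the
    small area by the ratio of study to auxiliary variable in the sample."""
    return (np.sum(y_sample) / np.sum(x_sample)) * x_area_total

def abs_relative_error_pct(estimate, truth):
    """Percentage absolute relative error, one of the two criteria above."""
    return 100.0 * abs(estimate - truth) / truth

# Toy sample: the ratio y/x is 2, so a known area total of 10 yields 20.
est = ratio_estimator([2.0, 4.0, 6.0], [1.0, 2.0, 3.0], x_area_total=10.0)
```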
APA, Harvard, Vancouver, ISO, and other styles
29

Tan, Li. "Asymptotic study of the change-point MLE under contiguous change and its applications." Pullman, Wash. : Washington State University, 2010. http://www.dissertations.wsu.edu/Dissertations/Summer2010/L_Tan_052710.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Banerjee, Moulinath. "Likelihood ratio inference in regular and non-regular problems /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/8938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Warwick, Jane. "Selecting tuning parameters in minimum distance estimators." n.p, 2001. http://ethos.bl.uk/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Carlsson, Martin. "Variance Estimation of the Calibration Estimator with Measurement Errors in the Auxiliary Information." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-68928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hu, Huilin. "Large sample theory for pseudo-maximum likelihood estimates in semiparametric models /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/8936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Podolskij, Mark. "New theory on estimation of integrated volatility with applications /." Bochum, 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=980588391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Zhang, Shunpu. "Some contributions to empirical Bayes theory and functional estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23100.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Amiri-Simkooei, AliReza. "Least-squares variance component estimation : theory and GPS applications /." Delft : NCG, Nederlandse Commissie voor Geodesie, 2007. http://www.loc.gov/catdir/toc/fy0803/2007416648.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wright, George Alfred Jr. "Nonparameter density estimation and its application in communication theory." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/14979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Mathur, Shailendra. "Edge localization in surface reconstruction using optimal estimation theory." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23750.

Full text
Abstract:
In this thesis the problem of localizing discontinuities while smoothing noisy data is solved for the surface reconstruction method known as Curvature Consistency. In this algorithm, noisy initial estimates of surface patches are refined according to a continuity model, using a relaxation process. The interaction between neighbouring pixels in local neighbourhoods during relaxation is shown to be equivalent to a multiple measurement fusion process, where each pixel acts as a measurement source. Using optimal estimation theory as a basis, an adaptive weighting technique is developed to estimate interpolant surface patch parameters from neighbouring pixels. By applying the weighting process iteratively within local neighbourhoods, discontinuities are localized and a piecewise-continuous surface description is achieved. The resulting discontinuity localization algorithm is adaptive over different signal to noise ratios, robust over discontinuities of different scales and independent of user set parameters.
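The multiple measurement fusion the abstract refers to rests on a standard result from optimal estimation: combining independent measurements with inverse-variance weights minimizes the variance of the fused estimate. A minimal sketch under that assumption, not the thesis's surface-patch algorithm:

```python
import numpy as np

def fuse_measurements(estimates, variances):
    """Minimum-variance fusion: weight each measurement source
    by the inverse of its variance."""
    estimates = np.asarray(estimates, dtype=float)
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    fused = (inv_var * estimates).sum() / inv_var.sum()
    fused_var = 1.0 / inv_var.sum()  # never noisier than any single source
    return fused, fused_var

# Two equally reliable sources: the fused value is their mean,
# and the fused variance is halved.
fused, fused_var = fuse_measurements([10.0, 12.0], [1.0, 1.0])
```

In the surface-reconstruction setting each neighbouring pixel plays the role of one measurement source, with the weights applied iteratively within local neighbourhoods.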
APA, Harvard, Vancouver, ISO, and other styles
39

Leung, Denis H. Y. "Robust estimation : efficiency and bias, in theory and practice." Thesis, University of Oxford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.236163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hemvanich, Sanha. "GMM estimation for nonignorable missing data : theory and practice." Thesis, University of Warwick, 2007. http://wrap.warwick.ac.uk/1148/.

Full text
Abstract:
This thesis develops several Generalized Method of Moments (GMM) estimators for analysing Not Missing at Random (NMAR) data, which is commonly referred to as the self-selection problem in an economic context. We extend the semiparametric estimation procedures of Ramalho and Smith (2003) to include the case where the missing data mechanism (MDM) depends on both a continuous response variable and covariates. Within this framework, it is possible to avoid imposing any assumptions on the missing data mechanism. We also discuss the asymptotic properties of the proposed GMM estimators and establish their connections to the GMM estimators of Ramalho and Smith and to the pseudolikelihood estimators of Tang, Little and Raghunathan (2003). All of the aforementioned estimators are then compared to other standard estimators for missing data, such as the inverse probability weighted and sample selection model estimators, in a number of Monte Carlo experiments. As an empirical application, these estimators are also applied to analyse the UK wage distribution. We found that, in many circumstances, our proposed estimators perform better than the other estimators described, especially when there is no exclusion restriction or other additional information available. Finally, we conclude that, since the true MDM is unlikely to be known, several estimators imposing different assumptions on the MDM should be used together to examine the sensitivity of the outcomes of interest with respect to the assumptions made and the estimation procedures adopted.
APA, Harvard, Vancouver, ISO, and other styles
41

Mansour, Tony, and Majdi Murtaja. "Applied estimation theory on power cable as transmission line." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-46583.

Full text
Abstract:
This thesis presents how to estimate the length of a power cable using the Maximum Likelihood Estimation (MLE) technique in Matlab. The model of the power cable is evaluated in the time domain with additive white Gaussian noise. Statistics have been used to evaluate the performance of the estimator, by repeating the experiment for a large number of samples, where the random additive noise is generated for each sample. The estimated sample variance is compared to the theoretical Cramér-Rao Lower Bound (CRLB) for unbiased estimators. At the end of the thesis, numerical results are presented that show when the resulting sample variance is close to the CRLB, and hence that the performance of the estimator will be more accurate.
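The Monte Carlo check of an MLE's sample variance against the CRLB described here can be illustrated with the simplest textbook case, a constant in additive white Gaussian noise; all numbers below are illustrative and this is not the thesis's cable model.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true, sigma, N, trials = 5.0, 1.0, 100, 2000

# For a constant in additive white Gaussian noise, the MLE is the sample mean.
# Repeat the experiment many times, regenerating the noise each time.
estimates = np.array([rng.normal(A_true, sigma, N).mean()
                      for _ in range(trials)])

crlb = sigma**2 / N                  # Cramér-Rao lower bound for this model
sample_var = estimates.var(ddof=1)   # should hover near the CRLB
```

The sample mean attains the CRLB for this model, so the estimated sample variance converges to the bound as the number of trials grows, mirroring the comparison made in the thesis.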
APA, Harvard, Vancouver, ISO, and other styles
42

Gechert, Sebastian. "On the Measurement, Theory and Estimation of Fiscal Multipliers." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-155008.

Full text
Abstract:
The study is intended to identify relevant channels and possibly biasing factors with respect to fiscal multipliers, and thus to contribute to improving the precision of multiplier forecasts. This is done by, first, defining the concept of the multiplier used in the present study, presenting the main theoretical channels of influence as discussed in the literature and the problems of empirical identification. Second, by conducting a meta-regression analysis on the reported multipliers from a unique data set of 1069 multiplier observations and the respective study characteristics in order to derive quantitative stylized facts. Third, by developing a simple multiplier model that explicitly takes into account the time elapsed in the multiplier process as an explanatory factor that has been largely overlooked by the relevant theoretical literature. Fourth, by identifying, for US macroeconomic time series data, the extent to which fiscal multiplier estimates could be biased in the presence of financial cycles that have not been taken into account by the relevant empirical literature.
APA, Harvard, Vancouver, ISO, and other styles
43

Atkinson, D. J. "The application of estimation theory to induction motor control." Thesis, University of Newcastle Upon Tyne, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Feng, Wei Pate Thomas H. "The QR algorithm for eigenvalue estimation theory and experiments /." Auburn, Ala, 2008. http://hdl.handle.net/10415/1488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Yu, Tinghui. "Estimation theory of a location parameter in small samples." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8097.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Dept. of Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
46

Thomas, Derek C. "Theory and Estimation of Acoustic Intensity and Energy Density." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2560.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Sanches, Fabio Miessi. "Essays on estimation of dynamic games." Thesis, London School of Economics and Political Science (University of London), 2013. http://etheses.lse.ac.uk/716/.

Full text
Abstract:
This thesis considers estimation of discrete choice stationary dynamic games. Chapter 1 shows that when payoffs are linear in the parameters, value functions are linear in the parameters and the equation system characterizing the Markovian equilibrium is linear in the parameters. This formulation allows us to estimate the model using Least Squares. We derive an optimal weight matrix for the Least Squares estimator and show that the efficient estimator is a Generalized Least Squares estimator. Chapter 2 shows that when time invariant unobservables are present the efficient estimator is a Generalized Fixed Effects estimator. Time invariant unobservables can be correlated with observed states. We do not need to impose any distributional assumption on time invariant unobservables. Our estimators have a closed form solution. In Chapter 3 we apply the framework developed in Chapters 1 and 2 to analyze the effects of the privatization of public banks on financial development. We build a dynamic entry game to analyze the Brazilian banking market. We show that profits of private banks are positively affected by the number of public branches operating in Brazilian isolated markets. The spill-over generated by public banks is quantified based on a dynamic oligopoly model. A counterfactual in which public banks are privatized is examined. It shows that the number of active branches operating in the long-run in a small market drops significantly.
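The closed-form Generalized Least Squares step that linearity in the parameters makes possible is, under standard assumptions, the textbook GLS formula. A minimal sketch under that assumption, with an invented toy system rather than the thesis's equilibrium equations:

```python
import numpy as np

def gls(X, y, Omega):
    """Closed-form Generalized Least Squares:
    beta = (X' W X)^{-1} X' W y with weight matrix W = Omega^{-1}."""
    W = np.linalg.inv(Omega)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Illustrative system: one parameter, two "equilibrium equations".
X = np.array([[1.0], [1.0]])
y = np.array([1.0, 3.0])
beta = gls(X, y, np.eye(2))
```

With an identity weight matrix this reduces to Ordinary Least Squares; a non-spherical Omega reweights the equations, which is the role the optimal weight matrix plays in the abstract.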
APA, Harvard, Vancouver, ISO, and other styles
48

Taylor, Luke. "Essays in nonparametric estimation and inference." Thesis, London School of Economics and Political Science (University of London), 2017. http://etheses.lse.ac.uk/3569/.

Full text
Abstract:
This thesis consists of three chapters which represent my journey as a researcher during this PhD. The uniting theme is nonparametric estimation and inference in the presence of data problems. The first chapter begins with nonparametric estimation in the presence of a censored dependent variable and endogenous regressors. For Chapters 2 and 3 my attention moves to problems of inference in the presence of mismeasured data. In Chapter 1 we develop a nonparametric estimator for the local average response of a censored dependent variable to endogenous regressors in a nonseparable model where the unobservable error term is not restricted to be scalar and where the nonseparable function need not be monotone in the unobservables. We formalise the identification argument put forward in Altonji, Ichimura and Otsu (2012), construct a nonparametric estimator, characterise its asymptotic property, and conduct a Monte Carlo investigation to study its small sample properties. We show that the estimator is consistent and asymptotically normally distributed. Chapter 2 considers specification testing for regression models with errors-in-variables. In contrast to the method proposed by Hall and Ma (2007), our test allows general nonlinear regression models. Since our test employs the smoothing approach, it complements the nonsmoothing one by Hall and Ma in terms of local power properties. We establish the asymptotic properties of our test statistic for the ordinary and supersmooth measurement error densities and develop a bootstrap method to approximate the critical value. We apply the test to the specification of Engel curves in the US. Finally, some simulation results endorse our theoretical findings: our test has advantages in detecting high frequency alternatives and dominates the existing tests under certain specifications. Chapter 3 develops a nonparametric significance test for regression models with measurement error in the regressors. To the best of our knowledge, this is the first test of its kind. We use a ‘semi-smoothing’ approach with nonparametric deconvolution estimators and show that our test is able to overcome the slow rates of convergence associated with such estimators. In particular, our test is able to detect local alternatives at the √n rate. We derive the asymptotic distribution under i.i.d. and weakly dependent data, and provide bootstrap procedures for both data types. We also highlight the finite sample performance of the test through a Monte Carlo study. Finally, we discuss two empirical applications. The first considers the effect of cognitive ability on a range of socio-economic variables. The second uses time series data, and a novel approach to estimate the measurement error without repeated measurements, to investigate whether future inflation expectations are able to stimulate current consumption.
APA, Harvard, Vancouver, ISO, and other styles
49

Parker, Imelda. "Transformations in regression, estimation, testing and modelling." Thesis, University of St Andrews, 1988. http://hdl.handle.net/10023/13759.

Full text
Abstract:
Transformation is a powerful tool for model building. In regression the response variable is transformed in order to achieve the usual assumptions of normality, constant variance and additivity of effects. Here the normality assumption is replaced by the Laplace distributional assumption, appropriate when more large errors occur than would be expected if the errors were normally distributed. The parametric model is enlarged to include a transformation parameter and a likelihood procedure is adopted for estimating this parameter simultaneously with other parameters of interest. Diagnostic methods are described for assessing the influence of individual observations on the choice of transformation. Examples are presented. In distribution methodology the independent responses are transformed in order that a distributional assumption is satisfied for the transformed data. Here the interest is in the family of distributions which are not dependent on an unknown shape parameter. The gamma distribution (known order), with special case the exponential distribution, is a member of this family. An information number approach is proposed for transforming a known distribution to the gamma distribution (known order). The approach provides an insight into the large-sample behaviour of the likelihood procedure considered by Draper and Guttman (1968) for investigating transformations of data which allow the transformed observations to follow a gamma distribution. The information number approach is illustrated for three examples, and the improvement towards the gamma distribution introduced by transformation is measured numerically and graphically. A graphical procedure is proposed for the general case of investigating transformations of data which allow the transformed observations to follow a distribution dependent on unknown threshold and scale parameters. The procedure is extended to include model testing and estimation for any distribution which with the aid of a power transformation can be put in the simple form of a distribution that is not dependent on an unknown shape parameter. The procedure is based on a ratio, R(y), which is constructed from the power transformation. Also described is a ratio-based technique for estimating the threshold parameter in important parametric models, including the three-parameter Weibull and lognormal distributions. Ratio estimation for the Weibull distribution is assessed and compared with the modified maximum likelihood estimation of Cohen and Whitten (1982) in terms of bias and root mean squared error, by means of a simulation study. The methods are illustrated with several examples and extend naturally to singly Type 1 and Type 2 censored data.
APA, Harvard, Vancouver, ISO, and other styles
50

Parker, I. "Transformation in regression, estimation, testing and modelling." Thesis, University of St Andrews, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384598.

Full text
APA, Harvard, Vancouver, ISO, and other styles