Dissertations / Theses on the topic 'Estimation and inference'


1

Cho, Young Su. "Empirical γ-divergence: estimation and inference." Bonn, 2005. http://www.gbv.de/dms/zbw/493498524.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Taylor, Luke. "Essays in nonparametric estimation and inference." Thesis, London School of Economics and Political Science (University of London), 2017. http://etheses.lse.ac.uk/3569/.

Full text
Abstract:
This thesis consists of three chapters which represent my journey as a researcher during this PhD. The uniting theme is nonparametric estimation and inference in the presence of data problems. The first chapter begins with nonparametric estimation in the presence of a censored dependent variable and endogenous regressors. For Chapters 2 and 3 my attention moves to problems of inference in the presence of mismeasured data. In Chapter 1 we develop a nonparametric estimator for the local average response of a censored dependent variable to endogenous regressors in a nonseparable model where the unobservable error term is not restricted to be scalar and where the nonseparable function need not be monotone in the unobservables. We formalise the identification argument put forward in Altonji, Ichimura and Otsu (2012), construct a nonparametric estimator, characterise its asymptotic properties, and conduct a Monte Carlo investigation to study its small sample properties. We show that the estimator is consistent and asymptotically normally distributed. Chapter 2 considers specification testing for regression models with errors-in-variables. In contrast to the method proposed by Hall and Ma (2007), our test allows general nonlinear regression models. Since our test employs the smoothing approach, it complements the nonsmoothing one by Hall and Ma in terms of local power properties. We establish the asymptotic properties of our test statistic for ordinary smooth and supersmooth measurement error densities and develop a bootstrap method to approximate the critical value. We apply the test to the specification of Engel curves in the US. Finally, some simulation results endorse our theoretical findings: our test has advantages in detecting high frequency alternatives and dominates the existing tests under certain specifications. Chapter 3 develops a nonparametric significance test for regression models with measurement error in the regressors.
To the best of our knowledge, this is the first test of its kind. We use a ‘semi-smoothing’ approach with nonparametric deconvolution estimators and show that our test is able to overcome the slow rates of convergence associated with such estimators. In particular, our test is able to detect local alternatives at the √n rate. We derive the asymptotic distribution under i.i.d. and weakly dependent data, and provide bootstrap procedures for both data types. We also highlight the finite sample performance of the test through a Monte Carlo study. Finally, we discuss two empirical applications. The first considers the effect of cognitive ability on a range of socio-economic variables. The second uses time series data - and a novel approach to estimate the measurement error without repeated measurements - to investigate whether future inflation expectations are able to stimulate current consumption.
APA, Harvard, Vancouver, ISO, and other styles
3

Amjad, Muhammad Jehangir. "Sequential data inference via matrix estimation : causal inference, cricket and retail." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120190.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 185-193).
This thesis proposes a unified framework to capture the temporal and longitudinal variation across multiple instances of sequential data. Examples of such data include sales of a product over a period of time across several retail locations; trajectories of scores across cricket games; and annual tobacco consumption across the United States over a period of decades. A key component of our work is the latent variable model (LVM) which views the sequential data as a matrix where the rows correspond to multiple sequences while the columns represent the sequential aspect. The goal is to utilize information in the data within the sequence and across different sequences to address two inferential questions: (a) imputation or "filling missing values" and "de-noising" observed values, and (b) forecasting or predicting "future" values, for a given sequence of data. Using this framework, we build upon the recent developments in "matrix estimation" to address the inferential goals in three different applications. First, a robust variant of the popular "synthetic control" method used in observational studies to draw causal statistical inferences. Second, a score trajectory forecasting algorithm for the game of cricket using historical data. This leads to an unbiased target resetting algorithm for shortened cricket games which is an improvement upon the biased incumbent approach (Duckworth-Lewis-Stern). Third, an algorithm which leads to a consistent estimator for the time- and location-varying demand of products using censored observations in the context of retail. As a final contribution, the algorithms presented are implemented and packaged as a scalable open-source library for the imputation and forecasting of sequential data with applications beyond those presented in this work.
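The generic matrix-estimation idea underlying this abstract — arrange the sequences as a matrix and recover missing or noisy entries through a low-rank approximation — can be sketched as follows. This is a minimal illustration (iterative hard-thresholded SVD, with a hypothetical function name), not the estimators or the open-source library developed in the thesis:

```python
import numpy as np

def impute_low_rank(M, mask, rank=1, n_iter=50):
    """Iteratively fill the missing entries of M (where mask is False)
    with a rank-`rank` SVD approximation, keeping observed entries fixed."""
    X = np.where(mask, M, M[mask].mean())          # initialise missing cells with the observed mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, low_rank)            # re-impose observed entries
    return X
```

On exactly low-rank data this alternating scheme recovers a held-out entry; on noisy data the low-rank fit also de-noises the observed values, which is the dual role the abstract describes.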
by Muhammad Jehangir Amjad.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhou, Min. "The estimation and inference of complex models." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/387.

Full text
Abstract:
In this thesis, we investigate estimation and inference problems for complex models. We emphasize two major categories of complex models: generalized linear models and time series models. For generalized linear models, we consider the fundamental problem of sure screening for interaction terms in ultra-high dimensional feature space; for time series models, we consider an important model assumption, the Markov property. The first part of this thesis addresses the significant-interaction pursuit problem for ultra-high dimensional models with two-way interaction effects. We propose a simple sure screening procedure (SSI) to detect significant interactions between the explanatory variables and the response variable in high or ultra-high dimensional generalized linear regression models. Sure screening is a simple but powerful tool for the first step of feature selection or variable selection for ultra-high dimensional data. We investigate the sure screening properties of the proposed method from a theoretical standpoint. Furthermore, we show that our proposed method can control the false discovery rate at a reasonable size, so regularized variable selection methods can easily be applied to obtain more accurate feature selection in subsequent model selection procedures. Moreover, for computational efficiency, we suggest a much more efficient algorithm, discretized SSI (DSSI), to implement our proposed sure screening method in practice. We investigate the properties of the two algorithms, SSI and DSSI, in simulation studies and apply them to real data analyses for illustration. The second part concerns testing the Markov property in time series processes. The Markov assumption plays an extremely important role in time series analysis and is also a fundamental assumption in economic and financial models.
However, little existing research has focused on how to test the Markov property for time series processes. We therefore propose a new test procedure to check whether a beta-mixing time series possesses the Markov property. Our test is based on the conditional distance covariance (CDCov). We investigate the theoretical properties of the proposed method: the asymptotic distribution of the test statistic under the null hypothesis is obtained, and the power of the test procedure under local alternative hypotheses is studied. Simulation studies are conducted to demonstrate the finite sample performance of our test.
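As a rough illustration of the sure-screening idea for interactions — rank candidate pairs by a marginal association measure and keep the strongest — here is a minimal sketch. The function name and the use of plain correlation as the screening statistic are assumptions for illustration; this is not the SSI/DSSI procedure itself:

```python
import itertools
import numpy as np

def screen_interactions(X, y, keep=3):
    """Rank pairwise interaction terms x_j * x_k by absolute marginal
    correlation with y and keep the top `keep` pairs."""
    scores = {}
    for j, k in itertools.combinations(range(X.shape[1]), 2):
        z = X[:, j] * X[:, k]                       # candidate interaction term
        scores[(j, k)] = abs(np.corrcoef(z, y)[0, 1])
    return sorted(scores, key=scores.get, reverse=True)[:keep]
```

The surviving pairs would then be passed to a regularized selection method, as the abstract suggests for the second-stage model selection.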
APA, Harvard, Vancouver, ISO, and other styles
5

Bispham, Francesco Devere. "Estimation and inference with nonstationary panel data." Thesis, University of Hull, 2005. http://hydra.hull.ac.uk/resources/hull:5635.

Full text
Abstract:
This PhD thesis applies the time-series concepts of unit-roots and cointegration to nonstationary panel data. The first three chapters set the scene for what follows and together form the first methodological core of the thesis, on nonstationary panel data estimation and testing. In chapter 1 we consider the established panel unit root tests of Levin, Lin and Chu (2002) and Im, Pesaran and Shin (2003), and also Pesaran (2005) for cross-sectional dependence, with a panel of 20 OECD inflation rates. In chapter 2 we consider the established panel cointegration tests of Kao (1999), Pedroni (1999) and Larsson, Lyhagen and Lothgren (2001) with a panel of 25 OECD exchange rates to test for long run PPP, again including cross-sectional dependence. In chapter 3 a more original contribution is given. We conduct an extensive empirical study of the long run determinants of consumption expenditure for a panel of 20 OECD countries. A panel data cointegrating regression is estimated using the panel DOLS and FMOLS estimators of Kao and Chiang (2000) and Pedroni (2000, 2001). Using Bai and Kao (2005) we again consider cross-sectional dependence. The second methodological core, in the last two chapters, is the statistical inference of nonstationary panel data. Chapter 4 contains another original contribution, using the bootstrap with nonstationary panel data. New bootstrap algorithms are presented for the panel DOLS estimators mentioned above and also the group-mean estimator of Pesaran and Smith (1995). In our last original contribution, in chapter 5, we consider the asymptotic properties of nonstationary panel data estimators. The asymptotic normality and consistency of our panel FMOLS, DOLS and OLS estimators are proved for the simple case of the panel cointegrating regression with a constant intercept and trend. The new sequential limit asymptotic theory of Phillips and Moon (1999) is highlighted.
APA, Harvard, Vancouver, ISO, and other styles
6

Hall, A. "Estimation and inference in simultaneous equation models." Thesis, University of Warwick, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Grant, Nicky Lee. "Estimation & inference under non-standard conditions." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Callahan, Margaret D. "Bayesian Parameter Estimation and Inference Across Scales." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459523006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lyons, Simon. "Inference and parameter estimation for diffusion processes." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10518.

Full text
Abstract:
Diffusion processes provide a natural way of modelling a variety of physical and economic phenomena. It is often the case that one is unable to observe a diffusion process directly, and must instead rely on noisy observations that are discretely spaced in time. Given these discrete, noisy observations, one is faced with the task of inferring properties of the underlying diffusion process. For example, one might be interested in inferring the current state of the process given observations up to the present time (this is known as the filtering problem). Alternatively, one might wish to infer parameters governing the time evolution of the diffusion process. In general, one cannot apply Bayes’ theorem directly, since the transition density of a general nonlinear diffusion is not computationally tractable. In this thesis, we investigate a novel method of simplifying the problem. The stochastic differential equation that describes the diffusion process is replaced with a simpler ordinary differential equation, which has a random driving noise that approximates Brownian motion. We show how one can exploit this approximation to improve on standard methods for inferring properties of nonlinear diffusion processes.
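For context, the standard Euler-Maruyama scheme for simulating a diffusion directly from its stochastic differential equation (the baseline the thesis's ODE-based approximation improves upon, not the proposed method itself) can be sketched as:

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, T=1.0, n=1000, rng=None):
    """Simulate one path of dX = drift(X) dt + diffusion(X) dW
    with the Euler-Maruyama scheme."""
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dW = rng.normal(scale=np.sqrt(dt))          # Brownian increment
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dW
    return x

# Ornstein-Uhlenbeck example: mean reversion toward zero
path = euler_maruyama(5.0, drift=lambda x: -2.0 * x, diffusion=lambda x: 0.3)
```

Observing such a path only at a few noisy time points is exactly the setting in which the filtering and parameter-inference problems described above arise.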
APA, Harvard, Vancouver, ISO, and other styles
10

Goldman, Nicholas. "Statistical estimation of evolutionary trees." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Wondmagegnehu, Eshetu Tesfaye. "Small area rates, methods of estimation and inference." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0002/MQ34435.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Shows, Justin Hall. "Sparse Estimation and Inference for Censored Median Regression." NCSU, 2009. http://www.lib.ncsu.edu/theses/available/etd-05152009-143030/.

Full text
Abstract:
Censored median regression models have been shown to be useful for analyzing a variety of censored survival data with the robustness property. We study sparse estimation and inference of censored median regression. The new method minimizes an inverse censoring probability weighted least absolute deviation subject to the adaptive LASSO penalty. We show that, with a proper choice of the tuning parameter, the proposed estimator has nice theoretical properties such as root-n consistency and asymptotic normality. The estimator can also identify the underlying sparse model consistently. We propose using a resampling method to estimate the variance of the proposed estimator. Furthermore, the new procedure enjoys great advantages in computation, since its entire solution path can be obtained efficiently. Also, the method can be extended to multivariate survival data, where there is a natural or artificial clustering structure. The performance of our estimator is evaluated by extensive simulations and two real data applications.
APA, Harvard, Vancouver, ISO, and other styles
13

Kyriacou, Maria. "Jackknife estimation and inference in non-stationary autoregression." Thesis, University of Essex, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.536965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Saunderson, James (James Francis). "Semidefinite representations with applications in estimation and inference." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99782.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 235-244).
Semidefinite optimization problems are an expressive family of convex optimization problems that can be solved efficiently. We develop semidefinite optimization-based formulations and approximations for a number of families of optimization problems, including problems arising in spacecraft attitude estimation and in learning tree-structured statistical models. We construct explicit exact reformulations of two families of optimization problems in terms of semidefinite optimization. The first family are linear optimization problems over the derivative relaxations of spectrahedral cones. The second family are linear optimization problems over rotation matrices, i.e. orthogonal matrices with unit determinant. We use our semidefinite description of linear optimization problems over rotation matrices to express a joint spin-rate and attitude estimation problem for a spinning spacecraft exactly as a semidefinite optimization problem. For families of optimization problems that are, in general, intractable, one cannot hope for efficient semidefinite optimization-based formulations. Nevertheless, there are natural ways to develop approximations for these problems called semidefinite relaxations. We analyze one such relaxation of a broad family of optimization problems with multiple variables interacting pairwise, including, for instance, certain multivariate optimization problems over rotation matrices. We characterize the worst-case gap between the optimal value of the original problem and a particular semidefinite relaxation, and develop systematic methods to round solutions of the semidefinite relaxation to feasible points of the original problem. Our results establish a correspondence between the analysis of rounding schemes for these problems and a natural geometric optimization problem that we call the normalized maximum width problem. We also develop semidefinite optimization-based methods for a statistical modeling problem. 
The problem involves realizing a given multivariate Gaussian distribution as the marginal distribution among a subset of variables in a Gaussian tree model. This is desirable because Gaussian tree models enjoy certain conditional independence relations that allow for very efficient inference. We reparameterize this realization problem as a structured matrix decomposition problem and show how it can be approached using a semidefinite optimization formulation. We establish sufficient conditions on the parameters and structure of an underlying Gaussian tree model so that our methods can recover it from the marginal distribution on its leaf-indexed variables.
by James Francis Saunderson.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
15

Banerjee, Moulinath. "Likelihood ratio inference in regular and non-regular problems /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/8938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Mukherjee, Rajarshi. "Statistical Inference for High Dimensional Problems." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11516.

Full text
Abstract:
In this dissertation, we study minimax hypothesis testing in high-dimensional regression against sparse alternatives and minimax estimation of the average treatment effect in a semiparametric regression with a possibly large number of covariates.
APA, Harvard, Vancouver, ISO, and other styles
17

Leung, Andy Chin Yin. "Robust estimation and inference under cellwise and casewise contamination." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/60145.

Full text
Abstract:
Cellwise outliers are likely to occur together with casewise outliers in datasets of relatively large dimension. Recent work has shown that traditional high breakdown point procedures may fail when applied to such datasets. In this thesis, we consider this problem when the goal is to (1) estimate multivariate location and scatter matrix and (2) estimate regression coefficients and confidence intervals for inference, both of which are cornerstones of multivariate data analysis. To address the first problem, we propose a two-step procedure to deal with casewise and cellwise outliers, which generally proceeds as follows: first, it uses a filter to identify cellwise outliers and replace them by missing values; then, it applies a robust estimator to the incomplete data to down-weight casewise outliers. We show that the two-step procedure is consistent under the central model provided the filter is appropriately chosen. The proposed two-step procedure for estimating location and scatter matrix is then applied in regression for the case of continuous covariates by simply adding a third step, which computes robust regression coefficients from the estimated robust multivariate location and scatter matrix obtained in the second step. We show that the three-step estimator is consistent and asymptotically normal at the central model, for the case of continuous covariates. Finally, the estimator is extended to handle both continuous and dummy covariates. Extensive simulation results and real data examples show that the proposed methods can handle both cellwise and casewise outliers similarly well.
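A heavily simplified version of the first step — a univariate filter that flags extreme cells and replaces them with missing values — might look like the following sketch. The MAD-based cutoff and function name are illustrative assumptions; the thesis's actual filter and the downstream robust estimator are considerably more sophisticated:

```python
import numpy as np

def filter_cells(X, c=3.0):
    """Flag cells further than c robust standard deviations (MAD-based)
    from their column median as cellwise outliers and set them to NaN."""
    X = X.astype(float).copy()
    med = np.median(X, axis=0)
    mad = 1.4826 * np.median(np.abs(X - med), axis=0)   # robust scale per column
    X[np.abs(X - med) > c * mad] = np.nan               # outlying cells become missing
    return X
```

Step two would then fit a robust estimator to this incomplete matrix, down-weighting any rows that remain outlying as a whole (the casewise outliers).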
Science, Faculty of
Statistics, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
18

Thiemann, Michael. "Uncertainty estimation of hydrological models using Bayesian inference methods." Thesis, The University of Arizona, 1999. http://hdl.handle.net/10150/626808.

Full text
Abstract:
Intensive investigations of hydrologic model calibration during the last two decades have resulted in a reasonably good understanding of the issues involved in the process of estimating the numerous parameters employed by these codes. Nevertheless, these classical "batch" calibration approaches require substantial amounts of data to be stable, and the subsequent model forecasts do not usually represent the various embedded uncertainties. Especially in the light of thousands of uncalibrated catchments in need of model simulations for streamflow predictions, a parameter estimation approach is required that is able to simultaneously perform model calibration and prediction without neglecting the substantial uncertainties in the computed forecasts. This thesis introduces the Bayesian Recursive Estimation scheme (BaRE), a method derived from Bayesian probability computation and adapted for use in "on-line" hydrologic model calibration. The results of preliminary case studies are presented to illustrate the practicality of this simple and efficient approach.
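The generic recursion behind such "on-line" Bayesian calibration — updating a prior over the parameters one observation at a time — can be sketched on a discrete parameter grid. This illustrates the principle only, with a Gaussian observation model assumed for simplicity; it is not the BaRE scheme itself:

```python
import numpy as np

def recursive_bayes(prior, grid, obs, sigma=1.0):
    """Update a discrete prior over a parameter grid one observation at a
    time: posterior ~ prior * Gaussian likelihood, renormalised each step."""
    post = prior.copy()
    for y in obs:
        post *= np.exp(-0.5 * ((y - grid) / sigma) ** 2)
        post /= post.sum()                          # renormalise after each update
    return post

grid = np.linspace(-5.0, 5.0, 101)                  # candidate parameter values
prior = np.full(grid.size, 1.0 / grid.size)         # flat prior
obs = np.random.default_rng(1).normal(2.0, 1.0, size=100)
post = recursive_bayes(prior, grid, obs)
```

Because the posterior is carried forward between observations, calibration and prediction proceed together, which is the appeal of the recursive formulation for uncalibrated catchments.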
APA, Harvard, Vancouver, ISO, and other styles
19

Dominicy, Yves. "Quantile-based inference and estimation of heavy-tailed distributions." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209311.

Full text
Abstract:
This thesis is divided into four chapters. The first two chapters introduce a parametric quantile-based estimation method for univariate heavy-tailed distributions and for elliptical distributions, respectively. For estimating the tail index without imposing a parametric form on the entire distribution function, but only on the tail behaviour, we propose a multivariate Hill estimator for elliptical distributions in chapter three. In the first three chapters we assume an independent and identically distributed setting; as a first step towards a dependent setting, using quantiles, we prove in the last chapter the asymptotic normality of marginal sample quantiles for stationary processes under the S-mixing condition.

The first chapter introduces a quantile- and simulation-based estimation method, which we call the Method of Simulated Quantiles, or simply MSQ. Since it is based on quantiles, it is a moment-free approach, and since it is based on simulations, we do not need closed-form expressions of any function that represents the probability law of the process. It is therefore useful when the probability density function has no closed form and/or moments do not exist. The method is based on a vector of functions of quantiles. The principle consists in matching functions of theoretical quantiles, which depend on the parameters of the assumed probability law, with those of empirical quantiles, which depend on the data. Since the theoretical functions of quantiles may not have a closed-form expression, we rely on simulations.
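A toy version of this matching principle, with the parameter chosen by grid search so that simulated quantiles best match the empirical ones, might look like the following. This is a deliberately simple Gaussian-scale example with hypothetical names, not the MSQ estimator of the chapter:

```python
import numpy as np

def msq_scale(data, candidates, probs=(0.25, 0.5, 0.75), n_sim=20000, seed=0):
    """Pick the scale parameter whose simulated N(0, s) quantiles best
    match the empirical quantiles of the data (smallest squared distance)."""
    rng = np.random.default_rng(seed)
    emp = np.quantile(data, probs)                  # empirical quantile vector
    best, best_err = None, np.inf
    for s in candidates:
        sim = np.quantile(rng.normal(0.0, s, n_sim), probs)   # simulated counterpart
        err = ((emp - sim) ** 2).sum()
        if err < best_err:
            best, best_err = s, err
    return best
```

In practice the matching is done by numerical optimisation over a continuous parameter space rather than a grid, but the quantile-matching objective is the same.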

The second chapter deals with the estimation of the parameters of elliptical distributions by means of a multivariate extension of MSQ. In this chapter we propose inference for vast dimensional elliptical distributions. Estimation is based on quantiles, which always exist regardless of the thickness of the tails, and testing is based on the geometry of the elliptical family. The multivariate extension of MSQ faces the difficulty of constructing a function of quantiles that is informative about the covariation parameters. We show that the interquartile range of a projection of pairwise random variables onto the 45 degree line is very informative about the covariation.

The third chapter consists in constructing a multivariate tail index estimator. In the univariate case, the most popular estimator for the tail exponent is the Hill estimator introduced by Bruce Hill in 1975. The aim of this chapter is to propose an estimator of the tail index in a multivariate context; more precisely, in the case of regularly varying elliptical distributions. Since, for univariate random variables, our estimator boils down to the Hill estimator, we name it after Bruce Hill. Our estimator is based on the distance between an elliptical probability contour and the exceedance observations.
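For reference, the univariate Hill (1975) estimator — the special case the chapter's multivariate estimator reduces to — can be written in a few lines:

```python
import numpy as np

def hill(x, k):
    """Hill (1975) estimator of the tail index alpha from the k largest
    order statistics of a positive sample."""
    x = np.sort(x)[::-1]                        # descending order statistics
    log_excess = np.log(x[:k]) - np.log(x[k])   # log-excesses over the (k+1)-th largest
    return 1.0 / log_excess.mean()
```

The multivariate extension described above replaces the order statistics with distances between an elliptical probability contour and the exceedance observations.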

Finally, the fourth chapter investigates the asymptotic behaviour of the marginal sample quantiles for p-dimensional stationary processes, and we obtain the asymptotic normality of the empirical quantile vector. We assume that the processes are S-mixing, a recently introduced and widely applicable notion of dependence. A remarkable property of S-mixing is the fact that it does not require any higher order moment assumptions to be verified. Since we are interested in quantiles and processes that are possibly heavy-tailed, this is of particular interest.


Doctorate in Economics and Management Sciences

APA, Harvard, Vancouver, ISO, and other styles
20

Patschkowski, Tim [author], and Angelika Rohde [academic supervisor]. "New approaches to locally adaptive nonparametric estimation and inference." Freiburg : Universität, 2017. http://d-nb.info/1135134197/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Park, In Kyoung, Dan C. Boger, and Michael G. Sovereign. "Software cost estimation through Bayesian inference of software size." Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kyriakou, S. "Reduced-bias estimation and inference for mixed-effects models." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10049958/.

Full text
Abstract:
A popular method for reducing the mean and median bias of the maximum likelihood estimator in regular parametric models is through the additive adjustment of the score equation (Firth, 1993; Kenne Pagui et al., 2017). The current work focuses on mean and median bias-reducing adjusted score equations in models with latent variables. First, we give estimating equations based on a mean bias-reducing adjustment of the score function for mean bias reduction in linear mixed models. Second, we propose an extension of the adjusted score equation approach (Firth, 1993) to obtain bias-reduced estimates for models with computationally infeasible adjusted score equations and/or intractable likelihood. The proposed bias-reduced estimator is obtained by solving an approximate adjusted score equation, which uses an approximation of the log-likelihood to obtain tractable derivatives, and a Monte Carlo approximation of the bias function to get feasible expressions. Under certain general conditions, we prove that the feasible and tractable bias-reduced estimator is consistent and asymptotically normally distributed. We present an "iterated bootstrap with likelihood adjustment" algorithm that computes the solution of the new bias-reducing adjusted score equation. The effectiveness of the proposed method is demonstrated via simulation studies and real data examples in the case of generalised linear models and generalised linear mixed models. Finally, we derive the median bias-reducing adjusted scores for linear mixed models and random-effects meta-analysis and meta-regression models.
APA, Harvard, Vancouver, ISO, and other styles
23

Jaakkola, Tommi S. (Tommi Sakari). "Variational methods for inference and estimation in graphical models." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Menzel, Konrad Ph D. Massachusetts Institute of Technology. "Essays on set estimation and inference with moment inequalities." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54638.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 141-145).
This thesis explores power and consistency of estimation and inference procedures with moment inequalities, and applications of the moment inequality framework to estimation of frontiers in finance. In the first chapter, I consider estimation of the identified set and inference on a partially identified parameter when the number of moment inequalities is large relative to sample size. Many applications in the recent literature on set estimation have this feature. Examples discussed in this paper include set-identified instrumental variables models, inference under conditional moment inequalities, and dynamic games. I show that GMM-type test statistics will often be poorly centered when the number of moment inequalities is large. My results establish consistency of the set estimator based on a Wald-type criterion, and I give conditions for uniformly valid inference under many weak moment asymptotics for both plug-in and subsampling procedures. The second chapter evaluates the performance of an Anderson-Rubin (AR) type test for a finite number of moment inequalities, and proposes modified Lagrange Multiplier (LM) and conditional minimum distance (CMD) statistics. The paper outlines a procedure to construct asymptotically valid critical values for both procedures. All three tests are robust to weak identification; however, in most settings, conservative inference using the LM statistic seems to have greater power against local alternatives than the AR-type test. Furthermore, confidence regions based on the LM statistic will remain non-empty if the model is misspecified.
(cont.) Finally, the third chapter, which is co-authored with Victor Chernozhukov and Emre Kocatulum, presents various set inference problems as they appear in finance and proposes practical and powerful inferential tools. Our tools will be applicable to any problem where the set of interest solves a system of smooth estimable inequalities, though we particularly focus on the following two problems: the admissible mean-variance sets of stochastic discount factors and the admissible mean-variance sets of asset portfolios. We propose to make inference on such sets using weighted likelihood-ratio and Wald type statistics, building upon and substantially enriching the available methods for inference on sets.
by Konrad Menzel.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
25

Bull, Adam. "Asymptotics of nonparametric methods in estimation, inference and optimisation." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Chan, Karen Pui-Shan. "Kernel density estimation, Bayesian inference and random effects model." Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/13350.

Full text
Abstract:
This thesis contains results of a study in kernel density estimation, Bayesian inference and random effects models, with application to forensic problems. Estimation of the Bayes' factor in a forensic science problem involved the derivation of predictive distributions in non-standard situations. The distribution of the values of a characteristic of interest among different items in forensic science problems is often non-Normal. Background, or training, data were available to assist in the estimation of the distribution for measurements on cat and dog hairs. An informative prior, based on the kernel method of density estimation, was used to derive the appropriate predictive distributions. The training data may be considered to be derived from a random effects model. This was taken into consideration in modelling the Bayes' factor. The usual assumption of the random factor being Normally distributed is unrealistic, so a kernel density estimate was used as the distribution of the unknown random factor. Two kernel methods were employed: the ordinary and adaptive kernel methods. The adaptive kernel method allowed for the longer tail, where little information was available. Formulae for the Bayes' factor in a forensic science context were derived assuming the training data were grouped or not grouped (for example, hairs from one cat would be thought of as belonging to the same group), and that the within-group variance was or was not known. The Bayes' factor, assuming known within-group variance, for the training data, grouped or not grouped, was extended to the multivariate case. The method was applied to a practical example in a bivariate situation. Similar modelling of the Bayes' factor was derived to cope with a particular form of mixture data. Boundary effects were also taken into consideration. Application of kernel density estimation to make inferences about the variance components under the random effects model was studied. 
Employing the maximum likelihood estimation method, it was shown that the between-group variance and the smoothing parameter in the kernel density estimation were related; they were not identifiable separately. With the smoothing parameter fixed at some predetermined value, the within- and between-group variance estimates from the proposed model were equivalent to the usual ANOVA estimates. Within the Bayesian framework, posterior distributions for the variance components were derived using various prior distributions for the parameters, incorporating kernel density functions. The modes of these posterior distributions were used as estimates for the variance components. A Student-t distribution within a Bayesian framework was derived after introduction of a prior for the smoothing parameter. Two methods of obtaining hyper-parameters for the prior were suggested, both involving empirical Bayes methods: a modified leave-one-out maximum likelihood method and a method of moments based on the optimum smoothing parameter determined under a Normality assumption.
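The ordinary and adaptive kernel methods this abstract contrasts can be sketched in a few lines. The following is a generic illustration (Gaussian kernel, Abramson-style square-root local bandwidths), not the thesis's forensic implementation; the function names are ours.

```python
import numpy as np

def kde(x_grid, data, h):
    """Ordinary (fixed-bandwidth) Gaussian kernel density estimate."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def adaptive_kde(x_grid, data, h):
    """Adaptive kernel estimate with Abramson-style square-root local
    bandwidths: wider kernels where the pilot density is low (the tails)."""
    pilot = kde(data, data, h)                       # pilot density at the data
    lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5
    h_i = h * lam                                    # one bandwidth per point
    u = (x_grid[:, None] - data[None, :]) / h_i[None, :]
    return (np.exp(-0.5 * u**2) / h_i[None, :]).sum(axis=1) / (len(data) * np.sqrt(2 * np.pi))
```

The adaptive variant widens the bandwidth where the pilot density is low, accommodating the longer tail the abstract mentions where little information is available.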
APA, Harvard, Vancouver, ISO, and other styles
27

Veraart, Almut Elisabeth Dorothea. "Volatility estimation and inference in the presence of jumps." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.670107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Xu. "Accelerated estimation and inference for heritability of fMRI data." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/67103/.

Full text
Abstract:
In this thesis, we develop some novel methods for univariate and multivariate analyses of additive genetic factors including heritability and genetic correlation. For the univariate heritability analysis, we present three newly proposed estimation methods: Frequentist ReML, LR-SD and LR-SD ReML. The comparison of these novel and currently available approaches demonstrates that the non-iterative LR-SD method is extremely fast and free of any convergence issues. The properties of this LR-SD method motivate the use of the non-parametric permutation and bootstrapping inference approaches. The permutation framework also allows the utilization of spatial statistics, which we find increases the statistical sensitivity of the test. For the bivariate genetic analysis, we generalize the univariate LR-SD method to the bivariate case, where the integration of univariate and bivariate LR-SD provides a new estimation method for genetic correlation. Although simulation studies show that our measure of genetic correlation is not ideal, we propose a closely related test statistic based on the ERV, which we show to be a valid hypothesis test for zero genetic correlation. The rapid implementation of this ERV estimator makes it feasible to use with permutation as well. Finally, we consider a method for high-dimensional multivariate genetic analysis based on pair-wise correlations of different subject pairs. While traditional genetic analysis models the correlation over subjects to produce an estimate of heritability, this approach estimates correlation over a (high-dimensional) phenotype for pairs of subjects, and then estimates heritability based on the difference in MZ-pair and DZ-pair correlations. A significant two-sample t-test comparing MZ and DZ correlations implies the existence of heritable elements.
The resulting summary measure of aggregate heritability, defined as twice the difference of MZ and DZ mean correlations, can be treated as a quick screening estimate of whole-phenotype heritability that is closely related to the average of traditional heritability.
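The aggregate heritability summary defined above, twice the difference of the MZ-pair and DZ-pair mean correlations, is simple to compute. A minimal sketch, assuming each twin pair is supplied as a pair of phenotype vectors (function name and data layout are ours):

```python
import numpy as np

def aggregate_heritability(mz_pairs, dz_pairs):
    """Aggregate heritability as twice the difference between the mean
    MZ-pair and mean DZ-pair phenotype correlations.

    mz_pairs, dz_pairs: lists of (x, y) tuples, one phenotype vector
    per twin in each pair."""
    mz_corrs = [np.corrcoef(x, y)[0, 1] for x, y in mz_pairs]
    dz_corrs = [np.corrcoef(x, y)[0, 1] for x, y in dz_pairs]
    return 2.0 * (np.mean(mz_corrs) - np.mean(dz_corrs))
```

As the abstract notes, this serves only as a quick screening estimate of whole-phenotype heritability.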
APA, Harvard, Vancouver, ISO, and other styles
29

Hamadeh, Lina. "Periodically integrated models : estimation, simulation, inference and data analysis." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/periodically-integrated-models-estimation-simulation-inference-and-data-analysis(f7b345e9-bad7-424a-9746-bfe771d7ba8c).html.

Full text
Abstract:
Periodically correlated time series arise in several fields including hydrology, climatology, economics and finance, and are commonly modelled using the periodic autoregressive (PAR) model. For a time series with a stochastic periodic trend, for which a unit root is expected, a periodically integrated autoregressive (PIAR) model with periodic and/or seasonal unit roots has been shown to be a satisfactory model. The existing theory used the multivariate methodology to study PIAR models. However, this theory is convoluted, the majority of it has been developed only for quarterly time series, and its generalisation to time series with a larger number of periods is quite cumbersome. This thesis studies the existing theory and highlights its restrictions and flaws. It provides a coherent presentation of the steps for analysing PAR and PIAR models for different numbers of periods. It presents the different unit root representations and compares the performance of different unit root tests available in the literature. The restrictions of existing studies gave us the impetus to develop a unified theory that gives a clear understanding of integration and unit roots in periodic models. This theory is based on the spectral information of the multi-companion matrix of the periodic models. It is more general than the existing theory, since it can be applied to any number of periods, whereas the existing methods are developed for quarterly time series. Using the multi-companion method, we specify and estimate the periodic models without the need to extract complicated restrictions on the model parameters corresponding to the unit roots, as required by the NLS method. The multi-companion estimation method performed well, and its performance is equivalent to the NLS estimation method that has been used in the literature. Analysing integrated multivariate models is a problematic issue in time series.
The multi-companion theory provides a more general approach than the error correction method that is commonly used to analyse such time series. A modified state space representation for the seasonal periodically integrated autoregressive (SPIAR) model with periodic and seasonal unit roots is presented. Alternative state space representations, from which the state space representations of the PAR, PIAR and seasonal periodic autoregressive (SPAR) models can be directly obtained, are also proposed. The seasons of the parameters in these representations have been clearly specified, which guarantees correctly estimated parameters. The Kalman filter has been used to estimate the parameters of these models, and better estimation results are obtained when the initial values are estimated rather than given.
APA, Harvard, Vancouver, ISO, and other styles
30

Almerström, Przybyl Simon. "A Trade-based Inference Algorithm for Counterfactual Performance Estimation." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254453.

Full text
Abstract:
A methodology for increasing the success rate in debt collection by matching individual call center agents with optimal debtors is developed. This methodology, called the trade algorithm, consists of the following steps. The trade algorithm first identifies groups of debtors for which agent performance varies. Based on these differences in performance, agents are put into clusters. An optimal call allocation for the clusters is then decided. Two methods to estimate the performance of an optimal call allocation are suggested. These methods are combined with Monte Carlo cross-validation and an alternative time-consistent validation procedure. Tests of significance are applied to the results and the effect size is estimated. The trade algorithm is applied to a dataset from the credit management services company Intrum and is shown to enhance performance.
APA, Harvard, Vancouver, ISO, and other styles
31

Jones, Mary Beatrix. "Likelihood inference for parametric models of dispersal /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/8934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Nissilä, M. (Mauri). "Iterative receivers for digital communications via variational inference and estimation." Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514286865.

Full text
Abstract:
Abstract In this thesis, iterative detection and estimation algorithms for digital communications systems in the presence of parametric uncertainty are explored and further developed. In particular, variational methods, which have been extensively applied in other research fields such as artificial intelligence and machine learning, are introduced and systematically used in deriving approximations to the optimal receivers in various channel conditions. The key idea behind the variational methods is to transform the problem of interest into an optimization problem via an introduction of extra degrees of freedom known as variational parameters. This is done so that, for fixed values of the free parameters, the transformed problem has a simple solution, solving approximately the original problem. The thesis contributes to the state of the art of advanced receiver design in a number of ways. These include the development of new theoretical and conceptual viewpoints of iterative turbo-processing receivers as well as a new set of practical joint estimation and detection algorithms. Central to the theoretical studies is to show that many of the known low-complexity turbo receivers, such as linear minimum mean square error (MMSE) soft-input soft-output (SISO) equalizers and demodulators that are based on the Bayesian expectation-maximization (BEM) algorithm, can be formulated as solutions to the variational optimization problem. This new approach not only provides new insights into the current designs and structural properties of the relevant receivers, but also suggests some improvements on them. In addition, SISO detection in multipath fading channels is considered with the aim of obtaining a new class of low-complexity adaptive SISOs. As a result, a novel, unified method is proposed and applied in order to derive recursive versions of the classical Baum-Welch algorithm and its Bayesian counterpart, referred to as the BEM algorithm. 
These formulations are shown to yield computationally attractive soft decision-directed (SDD) channel estimators for both deterministic and Rayleigh fading intersymbol interference (ISI) channels. Next, by modeling the multipath fading channel as a complex bandpass autoregressive (AR) process, it is shown that the statistical parameters of radio channels, such as frequency offset, Doppler spread, and power-delay profile, can be conveniently extracted from the estimated AR parameters which, in turn, may be conveniently derived via an EM algorithm. Such a joint estimator for all relevant radio channel parameters has a number of virtues, particularly its capability to perform equally well in a variety of channel conditions. Lastly, adaptive iterative detection in the presence of phase uncertainty is investigated. As a result, novel iterative joint Bayesian estimation and symbol a posteriori probability (APP) computation algorithms, based on the variational Bayesian method, are proposed for both constant-phase channel models and dynamic phase models, and their performance is evaluated via computer simulations.
APA, Harvard, Vancouver, ISO, and other styles
33

McGinnity, Shaun Joseph. "Nonlinear estimation techniques for target tracking." Thesis, Queen's University Belfast, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Korte, Robert A. "Inference in Power Series Distributions." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1352937611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Zhang, Bingwen. "Change-points Estimation in Statistical Inference and Machine Learning Problems." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/344.

Full text
Abstract:
"Statistical inference plays an increasingly important role in science, finance and industry. Despite the extensive research and wide application of statistical inference, most of the efforts focus on uniform models. This thesis considers the statistical inference in models with abrupt changes instead. The task is to estimate change-points where the underlying models change. We first study low dimensional linear regression problems for which the underlying model undergoes multiple changes. Our goal is to estimate the number and locations of change-points that segment available data into different regions, and further produce sparse and interpretable models for each region. To address challenges of the existing approaches and to produce interpretable models, we propose a sparse group Lasso (SGL) based approach for linear regression problems with change-points. Then we extend our method to high dimensional nonhomogeneous linear regression models. Under certain assumptions and using a properly chosen regularization parameter, we show several desirable properties of the method. We further extend our studies to generalized linear models (GLM) and prove similar results. In practice, change-points inference usually involves high dimensional data, hence it is prone to tackle for distributed learning with feature partitioning data, which implies each machine in the cluster stores a part of the features. One bottleneck for distributed learning is communication. For this implementation concern, we design communication efficient algorithm for feature partitioning data sets to speed up not only change-points inference but also other classes of machine learning problem including Lasso, support vector machine (SVM) and logistic regression."
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Joonhwan, and Iván Fernández-Val. "Panel data models with nonadditive unobserved heterogeneity : estimation and inference." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/87526.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Economics, 2014.
"February 2014." Abstract page contains the following information: "This paper is based in part on the second chapter of Fernández-Val (2005)'s MIT PhD dissertation." -- Authors: "Iván Fernández-Val and Joonhwan Lee." Cataloged from PDF version of thesis.
Includes bibliographical references (pages 25-27 (first group)).
This paper considers fixed effects estimation and inference in linear and nonlinear panel data models with random coefficients and endogenous regressors. The quantities of interest - means, variances, and other moments of the random coefficients - are estimated by cross sectional sample moments of GMM estimators applied separately to the time series of each individual. To deal with the incidental parameter problem introduced by the noise of the within-individual estimators in short panels, we develop bias corrections. These corrections are based on higher-order asymptotic expansions of the GMM estimators and produce improved point and interval estimates in moderately long panels. Under asymptotic sequences where the cross sectional and time series dimensions of the panel pass to infinity at the same rate, the uncorrected estimator has an asymptotic bias of the same order as the asymptotic variance. The bias corrections remove the bias without increasing variance. An empirical example on cigarette demand based on Becker, Grossman and Murphy (1994) shows significant heterogeneity in the price effect across U.S. states.
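The incidental-parameter bias described above can be illustrated with a half-panel jackknife flavour of correction applied to the cross-sectional mean of within-individual OLS slopes. The paper itself derives analytic corrections from higher-order expansions of GMM estimators, so the sketch below is only an assumption-laden stand-in (function name and data layout are ours).

```python
import numpy as np

def mean_random_coefficient(X, Y):
    """Cross-sectional mean of within-individual OLS slopes, with a
    half-panel jackknife correction that removes the O(1/T) bias term.
    X, Y have shape (N, T): N individuals, T time periods."""
    def slope(x, y):
        xc = x - x.mean()
        return (xc * y).sum() / (xc * xc).sum()
    N, T = X.shape
    full = np.mean([slope(X[i], Y[i]) for i in range(N)])
    half1 = np.mean([slope(X[i, :T // 2], Y[i, :T // 2]) for i in range(N)])
    half2 = np.mean([slope(X[i, T // 2:], Y[i, T // 2:]) for i in range(N)])
    return 2.0 * full - 0.5 * (half1 + half2)
```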
by Joonhwan Lee.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
37

Alquier, Pierre. "Transductive and inductive adaptative inference for regression and density estimation." Paris 6, 2006. http://www.theses.fr/2006PA066436.

Full text
Abstract:
Inférence Adaptative, Inductive et Transductive, pour l'Estimation de la Régression et de la Densité (Pierre Alquier) Cette thèse a pour objet l'étude des propriétés statistiques de certains algorithmes d'apprentissage dans le cas de l'estimation de la régression et de la densité. Elle est divisée en trois parties. La première partie consiste en une généralisation des théorèmes PAC-Bayésiens, sur la classification, d'Olivier Catoni, au cas de la régression avec une fonction de perte générale. Dans la seconde partie, on étudie plus particulièrement le cas de la régression aux moindres carrés et on propose un nouvel algorithme de sélection de variables. Cette méthode peut être appliquée notamment au cas d'une base de fonctions orthonormales, et conduit alors à des vitesses de convergence optimales, mais aussi au cas de fonctions de type noyau, elle conduit alors à une variante des méthodes dites "machines à vecteurs supports" (SVM). La troisième partie étend les résultats de la seconde au cas de l'estimation de densité avec perte quadratique.
APA, Harvard, Vancouver, ISO, and other styles
38

Çetin, Özgür. "Multi-rate modeling, model inference, and estimation for statistical classifiers /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/5849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Veenadhar, Katragadda. "An Interactive Tool to Investigate the Inference Performance of Network Dynamics From Data." Thesis, University of North Texas, 2012. https://digital.library.unt.edu/ark:/67531/metadc149617/.

Full text
Abstract:
Network structure plays a significant role in determining the performance of network inference tasks. An interactive tool to study the dependence of estimation performance on network topology was developed. The tool allows end-users to easily create and modify network structures and observe the performance of pole estimation measured by Cramer-Rao bounds. The tool also automatically suggests the best measurement locations to maximize estimation performance, and thus has broad application to the optimal design of data-collection experiments. Finally, a series of theoretical results that explicitly connect subsets of network structures with inference performance is obtained.
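The Cramer-Rao bounds used as the tool's performance measure follow a standard recipe: invert the Fisher information. A generic sketch for Gaussian measurement noise, not the tool's internal code:

```python
import numpy as np

def cramer_rao_bound(jacobian, noise_var):
    """Cramer-Rao lower bound on the covariance of any unbiased estimator,
    given the Jacobian of the measurement map with respect to the
    parameters and i.i.d. Gaussian measurement noise of variance noise_var."""
    fisher = jacobian.T @ jacobian / noise_var      # Fisher information matrix
    return np.linalg.inv(fisher)                    # CRB = inverse information
```

Adding measurement locations adds rows to the Jacobian, which can only increase the Fisher information; that is why measurement placement changes the bound.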
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Yan, and 王艷. "Statistical inference for capture-recapture studies in continuoustime." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31243721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Catlin, Sandra N. "Statistical inference for partially observed Markov population processes /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8940.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Chatterjee, Nilanjan. "Semiparametric inference based on estimating equations in regression models for two phase outcome dependent sampling /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/8959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ozbozkurt, Pelin. "Bayesian Inference In Anova Models." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611532/index.pdf.

Full text
Abstract:
Estimation of location and scale parameters from a random sample of size n is of paramount importance in Statistics. An estimator is called fully efficient if it attains the Cramer-Rao minimum variance bound besides being unbiased. The method that yields such estimators, at any rate for large n, is the method of modified maximum likelihood estimation. Apparently, such estimators cannot be made more efficient by using sample-based classical methods. That makes room for the Bayesian method of estimation, which engages prior distributions and likelihood functions. A formal combination of the prior knowledge and the sample information is called the posterior distribution. The posterior distribution is maximized with respect to the unknown parameter(s). That gives HPD (highest probability density) estimator(s). Locating the maximum of the posterior distribution is, however, enormously difficult (computationally and analytically) in most situations. To alleviate these difficulties, we use the modified likelihood function in the posterior distribution instead of the likelihood function. We derived the HPD estimators of location and scale parameters of distributions in the family of Generalized Logistic. We have extended the work to experimental design, one way ANOVA. We have obtained the HPD estimators of the block effects and the scale parameter (in the distribution of errors); they have beautiful algebraic forms. We have shown that they are highly efficient. We have given real life examples to illustrate the usefulness of our results. Thus, the enormous computational and analytical difficulties with the traditional Bayesian method of estimation are circumvented at any rate in the context of experimental design.
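The HPD estimator described above is the mode of the posterior. A minimal illustration for a location parameter, using a Normal likelihood, a Normal prior and a grid search; the thesis instead maximises a posterior built from the modified likelihood of the Generalized Logistic family, so everything below is an assumption for illustration only.

```python
import numpy as np

def hpd_location(data, prior_mean=0.0, prior_sd=10.0):
    """Posterior mode (HPD point estimate) of a location parameter:
    Normal likelihood (unit variance) times Normal prior, maximised by
    a grid search over the log-posterior."""
    grid = np.linspace(data.min() - 3.0, data.max() + 3.0, 20001)
    loglik = -0.5 * ((data[None, :] - grid[:, None]) ** 2).sum(axis=1)
    logprior = -0.5 * ((grid - prior_mean) / prior_sd) ** 2
    return grid[np.argmax(loglik + logprior)]
```

With a nearly flat prior the mode collapses to the maximum likelihood estimate, which is why the choice of prior and of (modified) likelihood carries the substance of the method.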
APA, Harvard, Vancouver, ISO, and other styles
44

Golinelli, Daniela. "Bayesian inference in hidden stochastic population processes /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/8969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lin, Lizhen. "Nonparametric Inference for Bioassay." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/222849.

Full text
Abstract:
This thesis proposes some new model independent or nonparametric methods for estimating the dose-response curve and the effective dosage curve in the context of bioassay. The research problem is also of importance in environmental risk assessment and other areas of health sciences. It is shown in the thesis that our new nonparametric methods while bearing optimal asymptotic properties also exhibit strong finite sample performance. Although our specific emphasis is on bioassay and environmental risk assessment, the methodology developed in this dissertation applies broadly to general order restricted inference.
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Yan. "Statistical inference for capture-recapture studies in continuous time /." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23501765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nagy, Béla. "Valid estimation and prediction inference in analysis of a computer model." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1561.

Full text
Abstract:
Computer models or simulators are becoming increasingly common in many fields in science and engineering, powered by the phenomenal growth in computer hardware over the past decades. Many of these simulators implement a particular mathematical model as a deterministic computer code, meaning that running the simulator again with the same input gives the same output. Often running the code involves some computationally expensive tasks, such as solving complex systems of partial differential equations numerically. When simulator runs become too long, it may limit their usefulness. In order to overcome time or budget constraints by making the most out of limited computational resources, a statistical methodology has been proposed, known as the "Design and Analysis of Computer Experiments". The main idea is to run the expensive simulator only at a relatively few, carefully chosen design points in the input space, and based on the outputs construct an emulator (statistical model) that can emulate (predict) the output at new, untried locations at a fraction of the cost. This approach is useful provided that we can measure how much the predictions of the cheap emulator deviate from the real response surface of the original computer model. One way to quantify emulator error is to construct pointwise prediction bands designed to envelope the response surface and make assertions that the true response (simulator output) is enclosed by these envelopes with a certain probability. Of course, to be able to make such probabilistic statements, one needs to introduce some kind of randomness. A common strategy that we use here is to model the computer code as a random function, also known as a Gaussian stochastic process. We concern ourselves with smooth response surfaces and use the Gaussian covariance function that is ideal in cases when the response function is infinitely differentiable. 
In this thesis, we propose Fast Bayesian Inference (FBI), which is both computationally efficient and can be implemented as a black box. Simulation results show that it can achieve remarkably accurate prediction uncertainty assessments, in the sense of matching the coverage probabilities of the prediction bands, and the associated reparameterizations can also help parameter uncertainty assessments.
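The emulator construction described in this abstract, a Gaussian process with the Gaussian (squared-exponential) covariance plus pointwise prediction bands, can be sketched as follows. This generic implementation is not the thesis's FBI procedure, and the hyperparameters are illustrative.

```python
import numpy as np

def gp_predict(x_train, y_train, x_new, length=1.0, noise=1e-8):
    """Gaussian-process emulator with the squared-exponential (Gaussian)
    covariance: returns the posterior mean and pointwise 95% bands at x_new,
    conditioning on noise-free (deterministic simulator) training outputs."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))  # jitter for stability
    Ks = k(x_new, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    sd = np.sqrt(np.maximum(var, 0.0))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```

Because the simulator is deterministic, the emulator interpolates the design points, and the bands collapse to zero width there and widen between them.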
APA, Harvard, Vancouver, ISO, and other styles
48

Kalnina, Ilze. "Essays on estimation and inference for volatility with high frequency data." Thesis, London School of Economics and Political Science (University of London), 2009. http://etheses.lse.ac.uk/3005/.

Full text
Abstract:
Volatility is a measure of risk, and as such it is crucial for finance. But volatility is not observable, which is why estimation and inference for it are important. Large high frequency data sets have the potential to increase the precision of volatility estimates. However, this data is also known to be contaminated by market microstructure frictions, such as bid-ask spread, which pose a challenge to estimation of volatility. The first chapter, joint with Oliver Linton, proposes an econometric model that captures the effects of market microstructure on a latent price process. In particular, this model allows for correlation between the measurement error and the return process and allows the measurement error process to have diurnal heteroskedasticity. A modification of the TSRV estimator of quadratic variation is proposed and asymptotic distribution derived. Financial econometrics continues to make progress in developing more robust and efficient estimators of volatility. But for some estimators, the asymptotic variance is hard to derive or may take a complicated form and be difficult to estimate. To tackle these problems, the second chapter develops an automated method of inference that does not rely on the exact form of the asymptotic variance. The need for a new approach is motivated by the failure of traditional bootstrap and subsampling variance estimators with high frequency data, which is explained in the paper. The main contribution is to propose a novel way of conducting inference for an important general class of estimators that includes many estimators of integrated volatility. A subsampling scheme is introduced that consistently estimates the asymptotic variance for an estimator, thereby facilitating inference and the construction of valid confidence intervals. The third chapter shows how the multivariate version of the subsampling method of Chapter 2 can be used to study the question of time variability in equity betas.
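For reference, the TSRV construction that the first chapter modifies combines realized variance on a slow subsampled scale with a rescaled fast-scale realized variance that cancels the microstructure-noise bias. The sketch below is the standard two-scales estimator, not the modified version proposed in the thesis.

```python
import numpy as np

def rv(p, k=1):
    """Realized variance of log-prices p on the k-spaced grid, averaged
    over the k possible grid offsets."""
    return np.mean([np.sum(np.diff(p[j::k]) ** 2) for j in range(k)])

def tsrv(p, k):
    """Two-scales realized volatility: slow-scale average RV minus a
    rescaled fast-scale RV, which removes the microstructure-noise bias."""
    n = len(p) - 1                       # number of high-frequency returns
    nbar = (n - k + 1) / k               # average sample size per slow grid
    return rv(p, k) - (nbar / n) * rv(p, 1)
```

On pure-noise prices the naive realized variance explodes with the sampling frequency while TSRV stays near zero, which is the bias problem the abstract describes.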
APA, Harvard, Vancouver, ISO, and other styles
49

Khatoon, Rabeya. "Estimation and inference of microeconometric models based on moment condition models." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/estimation-and-inference-of-microeconometric-models-based-on-moment-condition-models(fb572e1e-7238-4410-8e27-052b4a438962).html.

Full text
Abstract:
The existing estimation techniques for grouped data models can be analyzed as a class of estimators of instrumental variable-Generalized Method of Moments (GMM) type with the matrix of group indicators being the set of instruments. The econometric literature (e.g. Smith, 1997; Newey and Smith, 2004) shows that, in some cases of empirical relevance, GMM can have shortcomings in terms of the large sample behaviour of the estimator being different from the finite sample properties. Generalized Empirical Likelihood (GEL) estimators are developed that are not sensitive to the nature and number of instruments and possess improved finite sample properties compared to GMM estimators. In this thesis, with the assumption that the data vector is iid within a group, but inid across groups, we developed GEL estimators for grouped data models having population moment conditions of zero mean of errors in each group. First order asymptotic analysis of the estimators shows that they are √N consistent (N being the sample size) and normally distributed. The thesis explores second order bias properties that demonstrate sources of bias and differences between choices of GEL estimators. Specifically, the second order bias depends on the third moments of the group errors and the correlation among the group errors and explanatory variables. With symmetric errors and no endogeneity, all three estimators, Empirical Likelihood (EL), Exponential Tilting (ET) and the Continuous Updating Estimator (CUE), are unbiased. A detailed simulation exercise is performed to compare the performance of the EL and ET estimators, and their bias-corrected versions, with the standard 2SLS/GMM estimators.
Simulation results reveal that while, with a few strong instruments, we can simply use 2SLS/GMM estimators, in case of many and/or weak instruments, increased degree of endogeneity, or varied signal to noise ratio, bias corrected EL, ET estimators dominate in terms of both least bias and accurate coverage proportions of asymptotic confidence intervals even for a considerably large sample. The thesis includes a case where there are within group dependent data, to assess the consequences of a key assumption being violated, namely the within-group iid assumption. Theoretical analysis and simulation results show that ignoring this feature can result in misleading inference. The proposed estimators are used to estimate the returns to an additional year of schooling in the UK using Labour Force Survey data over 1997-2009. Pooling the 13 years data yields roughly the same estimate of 11.27% return for British-born men aged 25-50 using any of the estimation techniques. In contrast using 2009 LFS data only, for a relatively small sample and many weak instruments, the return to first degree holder men is 13.88% using EL bias corrected estimator, where 2SLS estimator yields an estimate of 6.8%.
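The abstract's starting point — that grouped data estimators can be read as IV-GMM with group indicators as instruments — can be illustrated with a minimal sketch. This is not code from the thesis: the data-generating process, sample sizes, and coefficient values below are invented for illustration. Projecting the endogenous regressor on the group dummies amounts to replacing it by its within-group mean, which is exploited here to compute the 2SLS/IV estimate directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DGP: 50 groups of 20 observations. The regressor x is
# endogenous through the individual-level shock v, while the group-level
# component g supplies the variation picked up by the group dummies.
G, m, beta = 50, 20, 2.0
group = np.repeat(np.arange(G), m)
g = rng.normal(size=G)[group]          # group-level component of x
v = rng.normal(size=G * m)             # individual-level component of x
e = rng.normal(size=G * m)
x = g + v
u = 0.8 * v + e                        # E[u | group] = 0, but cov(x, u) > 0
y = beta * x + u

# OLS is inconsistent because x is correlated with the error u.
beta_ols = (x @ y) / (x @ x)

# IV-GMM with group indicators as instruments: the projection of x on
# the group dummies is just the vector of within-group means of x.
x_bar = np.bincount(group, weights=x) / m      # group means of x
x_hat = x_bar[group]                           # fitted values P_Z x
beta_iv = (x_hat @ y) / (x_hat @ x)

print(beta_ols, beta_iv)
```

With this design the OLS estimate is pulled well above the true coefficient of 2, while the group-dummy IV estimate stays close to it, matching the endogeneity logic described in the abstract.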
APA, Harvard, Vancouver, ISO, and other styles
50

Guyonvarch, Yannick. "Essays in robust estimation and inference in semi- and nonparametric econometrics." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLG007/document.

Full text
Abstract:
In the introductory chapter, we compare views on estimation and inference in the econometric and statistical learning disciplines. In the second chapter, our interest lies in a generic class of nonparametric instrumental variable models. We extend the estimation procedure of Otsu (2011) by adding a regularisation term to it, and we prove the consistency of our estimator under Lebesgue's L2 norm. In the third chapter, we show that when observations are jointly exchangeable rather than independent and identically distributed (i.i.d), a modified version of the empirical process converges weakly towards a Gaussian process under the same conditions as in the i.i.d case. We obtain a similar result for a modified version of the bootstrapped empirical process. We apply our results to establish the asymptotic normality of several nonlinear estimators and the validity of bootstrap-based inference, and we revisit the empirical work of Santos Silva and Tenreyro (2006). In the fourth chapter, we address the issue of conducting inference on ratios of expectations. We find that when the denominator tends to zero slowly enough as the number of observations n increases, bootstrap-based inference is asymptotically valid. We then complement an impossibility result of Dufour (1997) by showing that whenever n is finite it is possible to construct confidence intervals which are not pathological, under some conditions on the denominator. In the fifth chapter, we present a Stata command which implements the estimators proposed in de Chaisemartin and d'Haultfoeuille (2018) to measure several types of treatment effects widely studied in practice.
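The fourth chapter's object — bootstrap inference on a ratio of expectations — can be sketched with the standard nonparametric (pairs) bootstrap and a percentile interval. This is a generic illustration under assumptions of our own (a denominator bounded away from zero, invented data), not the thesis's procedure or results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: inference on theta = E[Y] / E[X], with a
# denominator comfortably bounded away from zero.
n = 500
x = 2.0 + rng.uniform(size=n)          # E[X] = 2.5, far from zero
y = 3.0 * x + rng.normal(size=n)       # so theta = E[Y] / E[X] = 3

theta_hat = y.mean() / x.mean()

# Nonparametric bootstrap: resample (x_i, y_i) pairs with replacement,
# recompute the ratio, and read a percentile confidence interval off
# the empirical quantiles of the bootstrap draws.
B = 2000
idx = rng.integers(0, n, size=(B, n))
boot = y[idx].mean(axis=1) / x[idx].mean(axis=1)
lo, hi = np.quantile(boot, [0.025, 0.975])

print(theta_hat, lo, hi)
```

When the denominator can approach zero, the ratio becomes unstable and this naive interval can fail, which is exactly the regime the chapter's asymptotic validity condition and the Dufour (1997) impossibility discussion concern.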
APA, Harvard, Vancouver, ISO, and other styles
