To see the other types of publications on this topic, follow the link: Estimating function.

Dissertations / Theses on the topic 'Estimating function'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Estimating function.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Liang, Longjuan. "A semi-parametric approach to estimating item response functions." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180453363.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rahikainen, I. (Ilkka). "Direct methodology for estimating the risk neutral probability density function." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201404241289.

Full text
Abstract:
The aim of the study is to determine whether the direct methodology can provide the same information about the parameters of the risk-neutral probability density function (RND) as the reference RND methodologies. The direct methodology defines the parameters of the RND from the underlying asset by using futures contracts and only a few at-the-money (ATM) or close-to-the-money options on the asset. To assess the feasibility of the direct methodology, reference RNDs must first be estimated from option data. The parameter estimates obtained by the direct methodology are then compared to those obtained by the selected reference methodologies, to determine whether the direct methodology can be used to recover the key parameters of the RND. The study is based on S&P 500 index option data from 2008 for estimating the reference RNDs and for deriving the reference moments from them. S&P 500 futures contract data are used to obtain the expectation estimate for the direct methodology, and only a few ATM or close-to-the-money options from the S&P 500 index option data are needed to obtain its standard deviation estimate. Both parametric and non-parametric methods were implemented for defining the reference RNDs, and the estimation results are presented so that the reference methodologies can be compared to each other. The moments of the reference RNDs were calculated from the estimation results so that the moments of the direct methodology can be compared to those of the reference methodologies. In the direct methodology, the futures contract provides the expectation estimate of the RND, while the implied volatility calculated from a few ATM or close-to-the-money option prices provides the standard deviation estimate via time-scaling equations. Skewness and kurtosis are then calculated from the estimated expectation and standard deviation under the assumption of a lognormal distribution. Based on the results, the direct methodology is acceptable for obtaining the expectation estimate, using the futures contract value directly instead of the expectation calculated from the RND of the full option data, only when the time to maturity is relatively short. Likewise, the standard deviation estimate can be calculated from a few ATM or close-to-the-money options, instead of estimating the RND from the full option data, only when the time to maturity is relatively short. Skewness and kurtosis, however, could not be reliably estimated under the lognormal assumption, because the lognormal distribution is not a correct generic assumption for the RND.
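A minimal numeric sketch of the moment calculations the abstract describes, assuming a lognormal RND; the function name and the example inputs are hypothetical, not values from the thesis.

    import math

    def direct_rnd_moments(futures_price, atm_implied_vol, time_to_maturity):
        # Futures value proxies the RND expectation (short maturities only).
        mean = futures_price
        # sqrt-time scaling of the ATM implied volatility gives the standard
        # deviation estimate (a first-order approximation for small vol*T).
        sd = mean * atm_implied_vol * math.sqrt(time_to_maturity)
        # Lognormal-implied higher moments from (mean, sd):
        s2 = math.log(1.0 + (sd / mean) ** 2)       # lognormal sigma^2
        skew = (math.exp(s2) + 2.0) * math.sqrt(math.exp(s2) - 1.0)
        kurt = math.exp(4*s2) + 2*math.exp(3*s2) + 3*math.exp(2*s2) - 6.0  # excess
        return mean, sd, skew, kurt

    print(direct_rnd_moments(1400.0, 0.25, 30/365))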
APA, Harvard, Vancouver, ISO, and other styles
3

Farquharson, Maree Louise. "Estimating the parameters of polynomial phase signals." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16312/1/Maree_Farquharson_Thesis.pdf.

Full text
Abstract:
Nonstationary signals are common in many environments such as radar, sonar, bioengineering and power systems. The nonstationary nature of the signals found in these environments means that classical spectral analysis techniques are not appropriate for estimating the parameters of these signals. Therefore it is important to develop techniques that can accommodate nonstationary signals. This thesis seeks to achieve this by, first, modelling each component of the signal as having a polynomial phase and, second, developing techniques for estimating the parameters of these components. Several approaches can be used for estimating the parameters of polynomial phase signals, each with varying degrees of success. Criteria to consider in potential estimation algorithms are (i) the signal-to-noise ratio (SNR) threshold of the algorithm, (ii) the amount of computation required for running the algorithm, and (iii) the closeness of the resulting estimates' mean-square errors to the minimum theoretical bound. These criteria will be used to compare the new techniques developed in this thesis with existing techniques. The literature on polynomial phase signal estimation highlights the recurring trade-off between the accuracy of the estimates and the amount of computation required. For example, the Maximum Likelihood (ML) method provides near-optimal estimates above threshold, but also incurs a heavy computational cost for higher order phase signals. On the other hand, multi-linear techniques such as the high-order ambiguity function (HAF) method require little computation, but have a significantly higher SNR threshold than the ML method. Of the existing techniques, the cubic phase (CP) function method is promising because it provides an attractive trade-off between SNR threshold and computational complexity. For this reason, the analysis techniques developed in this thesis are derived from the CP function. A limitation of the CP function is its inability to accurately process phase orders greater than three. Therefore, the first novel contribution of this thesis develops a broadened class of discrete-time higher order phase (HP) functions to address this limitation. This broadened class is achieved by providing a multi-linear extension of the CP function. Monte Carlo simulations are performed to demonstrate the statistical advantage of the HP functions compared to the HAFs. A first order statistical analysis of the HP functions is presented, and this analysis verifies the simulation results. The next novel contribution is a technique called the lower SNR cubic phase function (LCPF) method. It is an extension of the CP function that enables performance at lower signal-to-noise ratios. The improvement in SNR threshold performance is achieved by coherently integrating the CP function over a compact interval in the two-dimensional CP function space. The computation of the new algorithm is quite moderate, especially when compared to the ML method. Above threshold, the LCPF method's parameter estimates are asymptotically efficient. Monte Carlo simulation results are presented, and a threshold analysis of the algorithm closely predicts the thresholds observed in these results. The next original contribution of this research extends the LCPF method so that it can process multicomponent cubic phase signals and higher order phase signals. The LCPF method is extended to higher orders by applying a windowing technique, as opposed to adjusting the order of the kernel as implemented in the HP function method. To demonstrate the extension of the LCPF method for processing higher order phase signals and multicomponent cubic phase signals, some Monte Carlo simulations are presented. Finally, these estimation techniques are applied to real-world scenarios in the fields of Power Systems Analysis, Neuroethology and Speech Analysis.
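The cubic phase function at the heart of this line of work has a compact discrete form. The sketch below is a generic implementation of that bilinear transform, not the thesis's HP or LCPF extensions; the toy signal and coefficients are illustrative. For a cubic phase a1*t + a2*t^2 + a3*t^3, the magnitude peaks near Omega = 2*a2 + 6*a3*n.

    import numpy as np

    def cp_function(s, n, omegas):
        # CP(n, Omega) = sum_m s(n+m) s(n-m) exp(-j Omega m^2)
        m = np.arange(min(n, len(s) - 1 - n) + 1)
        prod = s[n + m] * s[n - m]                  # bilinear product
        return np.array([abs(np.sum(prod * np.exp(-1j * w * m**2)))
                         for w in omegas])

    rng = np.random.default_rng(0)
    N = 257
    t = np.arange(N)
    a1, a2, a3 = 0.3, 1e-3, 2e-6                    # cubic phase coefficients
    s = np.exp(1j * (a1*t + a2*t**2 + a3*t**3))
    s += 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))
    omegas = np.linspace(0, 0.01, 2001)
    n = 128
    print(omegas[np.argmax(cp_function(s, n, omegas))], 2*a2 + 6*a3*n)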
APA, Harvard, Vancouver, ISO, and other styles
4

Farquharson, Maree Louise. "Estimating the parameters of polynomial phase signals." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16312/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

SALGADO, MARIA JOSE SEUANEZ. "MONETARY POLICY DURING THE REAL PLAN: ESTIMATING THE CENTRAL BANKS REACTION FUNCTION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2001. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14073@1.

Full text
Abstract:
This dissertation studies the Central Bank of Brazil's reaction function during the Real Plan. It is argued that the nominal interest rate was the most important monetary policy instrument, being adjusted in response to changes in the rate of inflation, the output gap, international reserves and its own lagged value. First, a linear model is estimated for the nominal interest rate. Second, a Threshold Autoregressive model with exogenous variables is used to explain a change in regime in interest rates. By using an indicator of currency crises, which is chosen endogenously, the model tries to explain the difference in the dynamics of nominal interest rates during and outside currency crises. The study then compares the linear and non-linear models and shows that the latter performs considerably better than the former.
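A minimal sketch of such a linear reaction function fitted by ordinary least squares; the variable names and synthetic data are illustrative, not the thesis's dataset or exact specification.

    import numpy as np

    def fit_reaction_function(i, inflation, gap, reserves):
        # OLS for i_t = c + rho*i_{t-1} + b1*pi_t + b2*gap_t + b3*res_t + e_t
        y = i[1:]
        X = np.column_stack([np.ones(len(y)), i[:-1],
                             inflation[1:], gap[1:], reserves[1:]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta                                  # [c, rho, b1, b2, b3]

    rng = np.random.default_rng(0)
    T = 200
    inflation, gap, reserves = rng.normal(size=(3, T))
    i = np.zeros(T)
    for t in range(1, T):
        i[t] = 0.5 + 0.8*i[t-1] + 0.4*inflation[t] + 0.2*gap[t] \
               - 0.1*reserves[t] + 0.1*rng.normal()
    print(fit_reaction_function(i, inflation, gap, reserves))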
APA, Harvard, Vancouver, ISO, and other styles
6

Alnaji, Lulah A. "Generalized Estimating Equations for Mixed Models." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1530292694012892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Glórias, Ludgero Miguel Carraça. "Estimating a knowledge production function and knowledge spillovers : a new two-step estimation procedure of a Spatial Autoregressive Poisson Model." Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/20711.

Full text
Abstract:
Master's in Applied Econometrics and Forecasting
Several econometric studies seek to explain the determinants of knowledge production using the number of patents in a region as the dependent variable. Some of these capture the effects of knowledge spillovers through linear models with a spatial autoregressive term. However, no study has been found that estimates such effects while also considering the discrete nature of the dependent variable: a count variable. This essay aims to fill this gap by proposing a new two-step maximum likelihood estimator for a Spatial Autoregressive Poisson model. The properties of this estimator are evaluated in a set of Monte Carlo experiments. The simulation results suggest that this estimator presents lower bias and lower RMSE than the alternative estimators proposed, showing worse results only when the spatial dependence is close to unity. An empirical example, using the new estimator and a set of alternative estimators, is carried out, in which the creation of knowledge in 234 NUTS II regions from 24 European countries is analyzed. The results show that there is strong spatial dependence in the creation of innovation. It is also concluded that the socio-economic environment is essential for knowledge formation and that, unlike public R&D institutions, private companies are efficient in producing innovation. It should be noted that regions with less capacity to transform R&D expenses into new patents have greater capacity for absorption and segregation of knowledge, which shows that neighboring regions that are less efficient in the production of knowledge tend to build strong knowledge-sharing relations with each other.
APA, Harvard, Vancouver, ISO, and other styles
8

Cheng, Gang. "The nonparametric least-squares method for estimating monotone functions with interval-censored observations." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/2839.

Full text
Abstract:
Monotone functions, such as growth functions and cumulative distribution functions, are often of interest in the statistical literature. In this dissertation, we propose a nonparametric least-squares method for estimating monotone functions induced from stochastic processes in which the starting time of the process is subject to interval censoring. We apply this method to estimate the mean function of tumor growth with data from either animal experiments or tumor screening programs to investigate tumor progression. In this type of application, the tumor onset time is observed within an interval. The proposed method can also be used to estimate the cumulative distribution function of the elapsed time between two related events in human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) studies, such as the HIV transmission time between two partners and the AIDS incubation time from HIV infection to AIDS onset. In these applications, both the initial event and the subsequent event are only known to occur within some intervals. Such data are called doubly interval-censored data. The common property of these stochastic processes is that the starting time of the process is subject to interval censoring. A unified two-step nonparametric estimation procedure is proposed for these problems. In the first step of this method, the nonparametric maximum likelihood estimate (NPMLE) of the cumulative distribution function for the starting time of the stochastic process is obtained within the framework of interval-censored data. In the second step, a specially designed least-squares objective function is constructed with the above NPMLE plugged in, and the nonparametric least-squares estimate (NPLSE) of the mean function of tumor growth or the cumulative distribution function of the elapsed time is obtained by minimizing the aforementioned objective function. The theory of modern empirical processes is applied to prove the consistency of the proposed NPLSE. Simulation studies are extensively carried out to provide numerical evidence for the validity of the NPLSE. The proposed estimation method is applied to two real scientific applications. For the first application, the California Partners' Study, we estimate the distribution function of HIV transmission time between two partners. In the second application, the NPLSEs of the mean functions of tumor growth are estimated for tumors with different stages at diagnosis based on data from a cancer surveillance program, the SEER program. An ad hoc nonparametric statistic is designed to test the difference between two monotone functions in this context. In this dissertation, we also propose a numerical algorithm, the projected Newton-Raphson algorithm, to compute the non- and semi-parametric estimates for M-estimation problems subject to linear equality or inequality constraints. By combining the Newton-Raphson algorithm and the dual method for strictly convex quadratic programming, the projected Newton-Raphson algorithm shows the desired convergence rate. Compared to the well-known iterative convex minorant algorithm, the projected Newton-Raphson algorithm achieves much quicker convergence when computing the non- and semi-parametric maximum likelihood estimates for panel count data.
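The basic building block the dissertation generalises is a least-squares fit under a monotonicity constraint. A minimal sketch using scikit-learn's pool-adjacent-violators implementation (not the two-step NPLSE itself, which additionally plugs in an interval-censored NPMLE):

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 10, 200))
    y = np.log1p(t) + rng.normal(0, 0.2, t.size)   # noisy monotone growth curve
    fit = IsotonicRegression(increasing=True).fit_transform(t, y)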
APA, Harvard, Vancouver, ISO, and other styles
9

Lim, L. L.-Y. "Statistical methods for the assessment of lung function : Estimating the distribution of ventilation-perfusion ratio from inert gas experiments." Thesis, University of Reading, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gavin, Victor S. "Evaluation of cost estimating methods for military software application in a COTS environment." Master's thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-02232010-020031/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Downey, Bruce W. J. "A regression based approach to estimating premorbid neuropsychological functioning in the older adult population using four tests of executive function." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/24534.

Full text
Abstract:
Objective: To build regression equations, based on a combination of demographic variables and an estimate of premorbid IQ (National Adult Reading Test), for four tests of executive function: the Trail Making Test (TMT), the Modified Six Elements Test (SET) and the Hayling and Brixton tests. Method: 106 neurologically stable community-dwelling older adults participated in the study. These volunteers completed all test measures. The data were analysed using descriptive statistics and correlation analysis. Hierarchical regression analysis was used to explore the relationship between potential predictor variables and test scores. Results: As expected, age was a significant predictor of test score on all four tests of executive function. The proportion of variance explained by age varied. For instance, age alone accounted for 40.2% of the variance in performance on the TMT Part B, but only 8.1% of the variance on the SET. The addition of estimated IQ and other demographic variables to the regression analysis significantly improved the prediction accuracy of test scores. Conclusion: Advancing age was associated with poorer test performance on all outcome measures (all ps<0.01). Poorer test performance was also associated with fewer years of education, lower educational achievements, lower socio-economic status, and lower estimated IQ. Incorporating such information, the set of equations produced provides clinicians with a practical means of estimating a client's executive function test performance. Clinicians can assess the abnormality of a client's executive function test performance by comparing the difference between their predicted and obtained test scores against a table of critical values. An example of how to apply these equations in clinical practice is presented. The findings presented here appear to provide further support for the hypothesis that normal ageing is associated with a decline in frontal executive functioning.
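A sketch of how such normative equations are used in practice; the coefficients and the residual standard deviation below are invented for illustration and are not the thesis's published equations.

    def predicted_tmt_b(age, years_education, estimated_iq):
        # Hypothetical regression equation for TMT Part B completion time.
        return 30.0 + 1.2 * age - 0.8 * years_education - 0.3 * estimated_iq

    def discrepancy(observed, predicted, residual_sd=25.0):
        # Standardised obtained-minus-predicted difference, to be compared
        # against a table of critical values when judging abnormality.
        return (observed - predicted) / residual_sd

    pred = predicted_tmt_b(age=72, years_education=12, estimated_iq=105)
    print(pred, discrepancy(observed=140.0, predicted=pred))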
APA, Harvard, Vancouver, ISO, and other styles
12

Li, Daoji. "Empirical likelihood and mean-variance models for longitudinal data." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/empirical-likelihood-and-meanvariance-models-for-longitudinal-data(98e3c7ef-fc88-4384-8a06-2c76107a9134).html.

Full text
Abstract:
Improving estimation efficiency has always been one of the important aspects of statistical modelling. Our goal is to develop new statistical methodologies yielding more efficient estimators in the analysis of longitudinal data. In this thesis, we consider two different approaches, empirical likelihood and joint modelling of the mean and variance, to improve estimation efficiency. In part I of this thesis, empirical likelihood-based inference for longitudinal data within the framework of the generalized linear model is investigated. The proposed procedure takes into account the within-subject correlation without involving direct estimation of nuisance parameters in the correlation matrix and retains optimality even if the working correlation structure is misspecified. The proposed approach yields more efficient estimators than conventional generalized estimating equations and achieves the same asymptotic variance as methods based on quadratic inference functions. The second part of this thesis focuses on joint mean-variance models. We propose a data-driven approach to modelling the mean and variance simultaneously, yielding more efficient estimates of the mean regression parameters than the conventional generalized estimating equations approach even if the within-subject correlation structure is misspecified in our joint mean-variance models. Joint mean-variance models in both parametric and semi-parametric form are investigated. Extensive simulation studies are conducted to assess the performance of our proposed approaches. Three longitudinal data sets, Ohio Children's wheeze status data (Ware et al., 1984), Cattle data (Kenward, 1987) and CD4+ data (Kaslow et al., 1987), are used to demonstrate our models and approaches.
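For orientation, the conventional GEE baseline that the thesis improves upon can be fitted with statsmodels; this sketch uses synthetic data and an exchangeable working correlation, and is not the proposed empirical likelihood or joint mean-variance estimator.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_subj, n_obs = 50, 4
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_obs),
        "x": rng.normal(size=n_subj * n_obs),
    })
    df["y"] = 1.0 + 0.5 * df["x"] + rng.normal(size=len(df))
    model = sm.GEE.from_formula("y ~ x", groups="subject", data=df,
                                cov_struct=sm.cov_struct.Exchangeable(),
                                family=sm.families.Gaussian())
    print(model.fit().summary())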
APA, Harvard, Vancouver, ISO, and other styles
13

Iyengar, Madhumita. "An economic approach towards estimating health impacts of major transport investments and transport policies: A case study of transport emission abatement policy." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/91723/1/Madhumita_Iyengar_Thesis.pdf.

Full text
Abstract:
The PhD thesis developed an economic model as an integral part of the current Health Impact Assessment (HIA) framework. Based on a Health Production Function approach, the model showed how to estimate the economic benefits of positive health gains generated by transport investment programs and transport policies. Using Australian mortality and morbidity statistics and applying econometric analysis, the case study quantified, in dollar terms for Australian households, the health benefits induced by transport emission abatement policies. Finally, the thesis demonstrated the transferability of the economic model through two example case studies, establishing the model's wider applicability.
APA, Harvard, Vancouver, ISO, and other styles
14

Jin, Lei. "Generalized score tests for missing covariate data." [College Station, Tex.]: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Ainkaran, Ponnuthurai. "Analysis of Some Linear and Nonlinear Time Series Models." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/582.

Full text
Abstract:
This thesis considers some linear and nonlinear time series models. In the linear case, the analysis of a large number of short time series generated by a first order autoregressive type model is considered. Conditional and exact maximum likelihood procedures are developed to estimate the parameters. Simulation results are presented, comparing the bias and mean square errors of the parameter estimates. In Chapter 3, five important nonlinear models are considered and their time series properties are discussed. The estimating function approach for nonlinear models is developed in detail in Chapter 4, with examples added to illustrate the theory. A simulation study is carried out to examine the finite sample behavior of the proposed estimates based on estimating functions.
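In the linear AR(1)-type case, the conditional maximum likelihood estimate mentioned above reduces to solving a simple estimating equation; a minimal sketch (illustrative, not the thesis's code):

    import numpy as np

    def ar1_conditional_mle(x):
        # With Gaussian errors, conditional ML for x_t = phi*x_{t-1} + e_t
        # solves sum (x_t - phi*x_{t-1}) x_{t-1} = 0, i.e. least squares.
        return np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

    rng = np.random.default_rng(0)
    x = np.zeros(500)
    for t in range(1, 500):
        x[t] = 0.7 * x[t-1] + rng.normal()
    print(ar1_conditional_mle(x))                   # close to 0.7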
APA, Harvard, Vancouver, ISO, and other styles
16

Kharoufeh, Jeffrey P. "Density estimation for functions of correlated random variables." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1177097417.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Ainkaran, Ponnuthurai. "Analysis of Some Linear and Nonlinear Time Series Models." University of Sydney. Mathematics & statistics, 2004. http://hdl.handle.net/2123/582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Kibua, Titus Kithanze. "Variance function estimation." Thesis, City University London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Demirer, Mert. "Essays on production function estimation." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127028.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 193-201).
The first chapter develops a new method for estimating production functions with factor-augmenting technology and assesses its economic implications. The method does not impose parametric restrictions and generalizes prior approaches that rely on the CES production function. I first extend the canonical Olley-Pakes framework to accommodate factor-augmenting technology. Then, I show how to identify output elasticities based on a novel control variable approach and the optimality of input expenditures. I use this method to estimate output elasticities and markups in manufacturing industries in the US and four developing countries. Neglecting labor-augmenting productivity and imposing parametric restrictions mismeasures output elasticities and heterogeneity in the production function. My estimates suggest that standard models (i) underestimate capital elasticity by up to 70 percent and (ii) overestimate labor elasticity by up to 80 percent. These biases propagate into markup estimates inferred from output elasticities: markups are overestimated by 20 percentage points. Finally, heterogeneity in output elasticities also affects estimated trends in markups: my estimates point to much more muted markup growth (about half) in the US manufacturing sector than recent estimates.
The second chapter develops partial identification results that are robust to deviations from the commonly used control function assumptions and to measurement errors in inputs. In particular, the model (i) allows for multi-dimensional unobserved heterogeneity, (ii) relaxes strict monotonicity to weak monotonicity, and (iii) accommodates a more flexible timing assumption for capital. I show that under these assumptions production function parameters are partially identified by an 'imperfect proxy' variable via moment inequalities. Using these moment inequalities, I derive bounds on the parameters and propose an estimator. An empirical application is presented to quantify the informativeness of the identified set.
The third chapter develops an approach in which endogenous networks are a source of identification in estimation with network data. In particular, I study a linear model where network data can be used to control for unobserved heterogeneity and partially identify the parameters of the linear model. My method does not rely on a parametric model of network formation. Instead, identification is achieved by assuming that the network satisfies latent homophily, the tendency of individuals to be linked with others who are similar to themselves. I first provide two definitions of homophily: weak and strong homophily. Then, based on these definitions, I characterize the identified sets and show that they are bounded under weak conditions. Finally, to illustrate the method in an empirical setting, I estimate the effects of education on risk preferences and peer effects using social network data from 150 Chinese villages.
APA, Harvard, Vancouver, ISO, and other styles
20

Louw, Markus. "A population Monte Carlo approach to estimating parametric bidirectional reflectance distribution functions through Markov random field parameter estimation." Doctoral thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/5179.

Full text
Abstract:
In this thesis, we propose a method for estimating the parameters of a parametric bidirectional reflectance distribution function (BRDF) for an object surface. The method uses a novel Markov Random Field (MRF) formulation on triplets of corner vertex nodes to model the probability of sets of reflectance parameters for arbitrary reflectance models, given probabilistic surface geometry, camera, illumination, and reflectance image information. In this way, the BRDF parameter estimation problem is cast as an MRF parameter estimation problem. We also present a novel method for estimating the MRF parameters, which uses Population Monte Carlo (PMC) sampling to yield a posterior distribution over the parameters of the BRDF. This PMC-based method for estimating the posterior distribution on MRF parameters is compared, using synthetic data, to other parameter estimation methods based on Markov Chain Monte Carlo (MCMC) and Levenberg-Marquardt nonlinear minimization, where it is found to have better convergence to the known correct synthetic data parameter sets than the MCMC-based methods, and similar convergence results to the LM method. The posterior distributions on the parametric BRDFs for real surfaces, which are represented as evolved sample sets calculated using a Population Monte Carlo algorithm, can be used as features in other high-level vision material or surface classification methods. A variety of probabilistic distances between these features, including the Kullback-Leibler divergence, the Bhattacharyya distance and the Patrick-Fisher distance, are used to test the classifiability of the materials, using the PMC evolved sample sets as features. In our experiments on real data, which comprise 48 material surfaces belonging to 12 classes of material, classification errors are counted by comparing the 1-nearest-neighbour classification results to the known (manually specified) material classes. Other classification error statistics such as WNN (worst nearest neighbour) are also calculated. The symmetric Kullback-Leibler divergence, used as a distance measure between the PMC developed sample sets, is the distance measure which gives the best classification results on the real data when using the 1-nearest-neighbour classification method. It is also found that the sets of samples representing the posterior distributions over the MRF parameter spaces are better features for material surface classification than the optimal MRF parameters returned by multiple-seed Levenberg-Marquardt minimization algorithms configured to find the same MRF parameters. The classifiability of the materials is also better when using the entire evolved sample sets (calculated by PMC) as classification features than when using only the maximum a-posteriori sample from the PMC evolved sample sets as the feature for each material. It is therefore possible to calculate usable parametric BRDF features for surface classification using our method.
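A generic one-dimensional Population Monte Carlo iteration (importance sampling from kernels around the current population, then resampling), as a toy stand-in for the thesis's PMC over the MRF parameter space; the bimodal target below is made up, not a BRDF posterior.

    import numpy as np

    def pmc(log_target, n=2000, iters=20, scales=(0.1, 0.5, 2.0), seed=0):
        rng = np.random.default_rng(seed)
        x = rng.normal(0.0, 5.0, n)                 # initial population
        for _ in range(iters):
            s = rng.choice(scales, size=n)          # kernel scale per particle
            prop = rng.normal(x, s)                 # propose around particles
            logq = -0.5 * ((prop - x) / s)**2 - np.log(s * np.sqrt(2*np.pi))
            logw = log_target(prop) - logq          # importance weights
            w = np.exp(logw - logw.max())
            w /= w.sum()
            x = rng.choice(prop, size=n, p=w)       # multinomial resampling
        return x

    samples = pmc(lambda z: np.logaddexp(-0.5*(z-3)**2, -0.5*(z+3)**2))
    print(samples.mean(), samples.std())            # modes near -3 and +3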
APA, Harvard, Vancouver, ISO, and other styles
21

Esterhuizen, Gerhard. "Generalised density function estimation using moments and the characteristic function." Thesis, Link to the online version, 2003. http://hdl.handle.net/10019.1/1001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Yang, Zejiang. "Multiple roots of estimating functions and applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ51239.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Lu. "Cure Rate Model with Spline Estimated Components." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28359.

Full text
Abstract:
In some survival analyses in medical studies, there are often long-term survivors who can be considered permanently cured. The goals in these studies are to estimate the cure probability of the whole population and the hazard rate of the non-cured subpopulation. Existing methods for cure rate models have been limited to parametric and semiparametric models. More specifically, the hazard function part is estimated by a parametric or semiparametric model in which the effect of covariates takes a parametric form, and the cure rate part is often estimated by a parametric logistic regression model. We introduce a non-parametric model employing smoothing splines. It provides non-parametric smooth estimates for both the hazard function and the cure rate. By introducing a latent cure status variable, we implement the method using a smooth EM algorithm. Louis's formula for covariance estimation in an EM algorithm is generalized to yield point-wise confidence intervals for both functions. A simple model selection procedure based on the Kullback-Leibler geometry is derived for the proposed cure rate model. Numerical studies demonstrate excellent performance of the proposed method in estimation, inference and model selection. The application of the method is illustrated by the analysis of a melanoma study.
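The E-step of such a mixture cure model has a simple closed form for the posterior probability that a censored subject belongs to the uncured subpopulation; a hedged sketch, with pi denoting P(uncured) and surv the survival function of the uncured group at the observed time:

    import numpy as np

    def posterior_uncured(pi, surv, event):
        # Observed failures (event == 1) are certainly uncured; censored
        # subjects get weight pi*S_u(t) / (1 - pi + pi*S_u(t)).
        return np.where(event == 1, 1.0, pi * surv / (1.0 - pi + pi * surv))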
APA, Harvard, Vancouver, ISO, and other styles
24

Amezziane, Mohamed. "SMOOTHING PARAMETER SELECTION IN NONPARAMETRIC FUNCTIONAL ESTIMATION." Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3488.

Full text
Abstract:
This study develops new techniques for obtaining completely data-driven choices of the smoothing parameter in functional estimation, under minimal assumptions. The focus of the study is the estimation of the distribution function, the density function and their multivariate extensions, along with some of their functionals such as the location and the integrated squared derivatives.
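As one concrete instance of a fully data-driven smoothing parameter choice in density estimation, least-squares cross-validation for a Gaussian-kernel density estimator fits in a few lines; a sketch, not the techniques developed in the dissertation.

    import numpy as np

    def lscv(h, x):
        # LSCV(h) = int fhat^2 - (2/n) sum_i fhat_{-i}(x_i), in closed form
        # for the Gaussian kernel.
        n = len(x)
        d = x[:, None] - x[None, :]
        phi = lambda u, s: np.exp(-0.5 * (u / s)**2) / (s * np.sqrt(2*np.pi))
        int_f2 = phi(d, np.sqrt(2) * h).sum() / n**2
        loo = (phi(d, h).sum() - n * phi(0.0, h)) / (n * (n - 1))
        return int_f2 - 2 * loo

    rng = np.random.default_rng(0)
    x = rng.normal(size=300)
    hs = np.linspace(0.05, 1.0, 60)
    print(hs[np.argmin([lscv(h, x) for h in hs])])  # data-driven bandwidth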
APA, Harvard, Vancouver, ISO, and other styles
25

Laeuchli, Jesse Harrison. "Methods for Estimating The Diagonal of Matrix Functions." W&M ScholarWorks, 2016. https://scholarworks.wm.edu/etd/1477067934.

Full text
Abstract:
Many applications, such as path integral evaluation in Lattice Quantum Chromodynamics (LQCD), variance estimation of least squares solutions and spline fits, and centrality measures in network analysis, require computing the diagonal of a function of a matrix, Diag(f(A)), where A is a sparse matrix and f is some function. Unfortunately, when A is large, this can be computationally prohibitive. Because of this, many applications resort to Monte Carlo methods. However, Monte Carlo methods tend to converge slowly. One method for dealing with this shortcoming is probing. Probing assumes that nodes that have a large distance between them in the graph of A have only a small weight connection in f(A). To determine the distances between nodes, probing forms A^k. Coloring the graph of this matrix will group nodes that have a high distance between them together, and thus a small connection in f(A). This enables the construction of certain vectors, called probing vectors, that can capture the diagonal of f(A). One drawback of probing is that in many cases it is too expensive to compute and store A^k for the k that adequately determines which nodes have a strong connection in f(A). Additionally, it is unlikely that the set of probing vectors required for A^k is a subset of the probing vectors needed for A^{k+1}. This means that if more accuracy in the estimation is required, all previously computed work must be discarded. In the case where the underlying problem arises from a discretization of a partial differential equation (PDE) onto a lattice, we can make use of our knowledge of the geometry of the lattice to quickly create hierarchical colorings for the graph of A^k. A hierarchical coloring is one in which colors for A^{k+1} are created by splitting groups of nodes sharing a color in A^k. The hierarchical property ensures that the probing vectors used to estimate Diag(f(A)) are nested subsets, so if the results are inaccurate the estimate can be improved without discarding the previous work. If we do not have knowledge of the intrinsic geometry of the matrix, we propose two new classes of methods that improve on the results of probing. One method seeks to determine structural properties of the matrix f(A) by obtaining random samples of the columns of f(A). The other method leverages ideas arising from similar problems in graph partitioning, and makes use of the eigenvectors of f(A) to form effective hierarchical colorings. Our methods have thus far seen successful use in computational physics, where they have been applied to compute observables arising in LQCD. We hope that the refinements presented in this work will enable interesting applications in many other fields.
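A minimal sketch of classical probing as described above: colour the graph of A^k, build probing vectors from the colour classes, and accumulate v * (f(A) @ v). For self-containment the demo applies f(A) exactly via an eigendecomposition, where a real application would use only matrix-vector products.

    import numpy as np
    from scipy.linalg import expm

    def greedy_coloring(adj):
        color = np.full(adj.shape[0], -1)
        for i in range(adj.shape[0]):
            used = {color[j] for j in np.nonzero(adj[i])[0] if color[j] >= 0}
            c = 0
            while c in used:
                c += 1
            color[i] = c
        return color

    def probing_diag(A, f, k=2):
        conn = np.linalg.matrix_power((np.abs(A) > 0).astype(int), k) > 0
        adj = conn & ~np.eye(len(A), dtype=bool)    # distance-<=k neighbours
        color = greedy_coloring(adj)
        w, V = np.linalg.eigh(A)
        fA = (V * f(w)) @ V.T                       # exact f(A), demo only
        diag = np.zeros(len(A))
        for c in range(color.max() + 1):
            v = (color == c).astype(float)          # probing vector, colour c
            diag += v * (fA @ v)                    # keeps same-colour terms
        return diag

    n = 40                                          # 1-D Laplacian demo
    A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    print(np.max(np.abs(probing_diag(A, np.exp, k=4) - np.diag(expm(A)))))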
APA, Harvard, Vancouver, ISO, and other styles
26

Kim, Heeyoung. "Statistical methods for function estimation and classification." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44806.

Full text
Abstract:
This thesis consists of three chapters. The first chapter focuses on adaptive smoothing splines for fitting functions with varying roughness. In the first part of the first chapter, we study an asymptotically optimal procedure to choose the value of a discretized version of the variable smoothing parameter in adaptive smoothing splines. With the choice given by the multivariate version of the generalized cross validation, the resulting adaptive smoothing spline estimator is shown to be consistent and asymptotically optimal under some general conditions. In the second part, we derive the asymptotically optimal local penalty function, which is subsequently used for the derivation of the locally optimal smoothing spline estimator. In the second chapter, we propose a Lipschitz regularity based statistical model, and apply it to coordinate measuring machine (CMM) data to estimate the form error of a manufactured product and to determine the optimal sampling positions of CMM measurements. Our proposed wavelet-based model takes advantage of the fact that the Lipschitz regularity holds for the CMM data. The third chapter focuses on the classification of functional data which are known to be well separable within a particular interval. We propose an interval based classifier. We first estimate a baseline of each class via convex optimization, and then identify an optimal interval that maximizes the difference among the baselines. Our interval based classifier is constructed based on the identified optimal interval. The derived classifier can be implemented via a low-order-of-complexity algorithm.
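A toy version of the interval-search idea in the third chapter, with the class baselines replaced by plain sample means (the thesis estimates baselines via convex optimization); the data are synthetic.

    import numpy as np

    def best_interval(class0, class1, width):
        # Fixed-width window where the class mean curves differ most.
        gap = np.abs(class0.mean(axis=0) - class1.mean(axis=0))
        start = int(np.argmax(np.convolve(gap, np.ones(width), mode="valid")))
        return start, start + width

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 100)
    class0 = np.sin(2*np.pi*t) + 0.3*rng.normal(size=(30, 100))
    class1 = np.sin(2*np.pi*t) + 0.3*rng.normal(size=(30, 100))
    class1[:, 40:55] += 1.0                         # separable only on [40, 55)
    print(best_interval(class0, class1, width=15))  # roughly (40, 55)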
APA, Harvard, Vancouver, ISO, and other styles
27

Ilvedson, Corinne Rachel 1974. "Transfer function estimation using time-frequency analysis." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50472.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1998.
Includes bibliographical references (p. 135-136).
Given limited and noisy data, identifying the transfer function of a complex aerospace system may prove difficult. In order to obtain a clean transfer function estimate despite noisy data, a time-frequency analysis approach to system identification has been developed. The method is based on the observation that for a linear system, an input at a given frequency should result in a response at the same frequency, and a time-localized frequency input should result in a response that is nearby in time to the input. Using these principles, the noise in the response can be separated from the physical dynamics. In addition, the impulse response of the system can be restricted to be causal and of limited duration, thereby reducing the number of degrees of freedom in the estimation problem. The estimation method consists of finding a rough estimate of the impulse response from the sampled input and output data. The impulse response estimate is then transformed to a two-dimensional time-frequency mapping. The mapping provides a clear graphical method for distinguishing the noise from the system dynamics. The information believed to correspond to noise is discarded and a cleaner estimate of the impulse response is obtained from the remaining information. The new impulse response estimate is then used to obtain the transfer function estimate. The results indicate that the time-frequency transfer function estimation method can provide estimates that are often less noisy than those obtained from other methods such as the Empirical Transfer Function Estimate and Welch's Averaged Periodogram Method.
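For comparison, the standard cross-spectral ('H1') transfer function estimate that such methods are benchmarked against can be computed with Welch averaging; a sketch on a toy second-order system, not the thesis's time-frequency method.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    fs = 1000.0
    u = rng.normal(size=20000)                      # broadband input
    b, a = signal.butter(2, 100, fs=fs)             # toy 'unknown' lowpass
    y = signal.lfilter(b, a, u) + 0.05 * rng.normal(size=u.size)
    f, Puu = signal.welch(u, fs=fs, nperseg=1024)
    _, Puy = signal.csd(u, y, fs=fs, nperseg=1024)
    H1 = Puy / Puu                                  # transfer function estimate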
APA, Harvard, Vancouver, ISO, and other styles
28

Patwardhan, Rohit S. "Frequency Response and Coherence function estimation methods." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592169805143687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ptáček, Martin. "Spatial Function Estimation with Uncertain Sensor Locations." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-449288.

Full text
Abstract:
This thesis addresses the task of spatial function estimation by Gaussian process regression (GPR) under uncertainty in the training positions (sensor positions). First, the theory behind the GPR method with known training positions is described. This theory is then applied to derive expressions for the GPR predictive distribution at a test position when the uncertainty of the training positions is taken into account. Because these expressions have no analytical solution, they were approximated by the Monte Carlo method. The derived method was shown to improve the quality of the spatial function estimate compared to the standard use of the GPR method and also compared to a simplified solution reported in the literature. The thesis then considers the possibility of using the GPR method with uncertain training positions in combination with expressions that do have an analytical solution. It turns out that considerable assumptions must be introduced to obtain these expressions, which makes the predictive distribution inaccurate from the outset. It also turns out that the resulting method uses the standard GPR expressions in combination with a modified covariance function. Simulations show that this method produces estimates very similar to the basic GPR method that assumes known training positions. On the other hand, its predictive variance (estimation uncertainty) is increased, which is the desired effect of accounting for the uncertainty of the training positions.
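A minimal sketch of the Monte Carlo approximation described above, using scikit-learn's GPR and averaging the predictive distribution over draws of the uncertain sensor positions (combined by the law of total variance); the kernel, noise levels and data are illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    reported_pos = np.linspace(0, 10, 15)[:, None]  # noisy sensor locations
    y = np.sin(reported_pos).ravel() + 0.1 * rng.normal(size=15)
    pos_sd = 0.3                                    # location uncertainty
    x_test = np.linspace(0, 10, 200)[:, None]

    means, variances = [], []
    for _ in range(100):                            # Monte Carlo over positions
        draw = reported_pos + rng.normal(0, pos_sd, reported_pos.shape)
        gpr = GaussianProcessRegressor(kernel=RBF(1.0), alpha=0.1**2)
        gpr.fit(draw, y)
        m, s = gpr.predict(x_test, return_std=True)
        means.append(m)
        variances.append(s**2)
    mix_mean = np.mean(means, axis=0)
    mix_var = np.mean(variances, axis=0) + np.var(means, axis=0)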
APA, Harvard, Vancouver, ISO, and other styles
30

Yoo, Hyungsuk. "Quality of the Volterra transfer function estimation /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Yake, Bronson Thomas. "Self-Smoothing Functional Estimation." MSSTATE, 2002. http://sun.library.msstate.edu/ETD-db/theses/available/etd-09032002-090546/.

Full text
Abstract:
Analysis of measured data is often required when there is no deep understanding of the mathematics that accurately describes the process being measured. Additionally, realistic estimation of the derivative of measured data is often useful. Current techniques for accomplishing this type of data analysis are labor intensive, prone to significant error, and highly dependent on the expertise of the engineer performing the analysis. The 'Self-Smoothing Functional Estimation' (SSFE) algorithm was developed to automate the analysis of measured data and to provide a reliable basis for the extraction of derivative information. In addition to the mathematical development of the SSFE algorithm, an example is included in Chapter III that illustrates several of the innovative features of the SSFE and associated algorithms. Conclusions are drawn about the usefulness of the algorithm from an engineering perspective and additional possible uses are mentioned.
APA, Harvard, Vancouver, ISO, and other styles
32

Kohatsu, Higa Arturo, and Kazuhiro Yasuda. "Estimating multidimensional density functions using the Malliavin-Thalmaier formula." Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/96672.

Full text
Abstract:
The Malliavin-Thalmaier formula was introduced for the simulation of high-dimensional probability density functions. However, we show that when this integration-by-parts formula is applied directly in computer simulations, it is unstable. We propose an approximation to the Malliavin-Thalmaier formula. In this paper, we find the order of the bias and the variance of the approximation error, and we obtain an explicit Malliavin-Thalmaier formula for the calculation of Greeks in finance. The weights obtained are free from the curse of dimensionality.
APA, Harvard, Vancouver, ISO, and other styles
33

Mantzel, William. "Parametric estimation of randomly compressed functions." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49053.

Full text
Abstract:
Within the last decade, a new type of signal acquisition has emerged called Compressive Sensing that has proven especially useful in providing a recoverable representation of sparse signals. This thesis presents similar results for Compressive Parametric Estimation. Here, signals known to lie on some unknown parameterized subspace may be recovered via randomized compressive measurements, provided the number of compressive measurements is a small factor above the product of the parametric dimension and the subspace dimension, plus an additional logarithmic term. In addition to potential applications that simplify the acquisition hardware, there is also the potential to reduce the computational burden in other applications, and we explore one such application in depth in this thesis. Source localization by matched-field processing (MFP) generally involves solving a number of computationally intensive partial differential equations. We introduce a technique that mitigates this computational workload by 'compressing' these computations. Drawing on key concepts from the recently developed field of compressed sensing, we show how a low-dimensional proxy for the Green's function can be constructed by backpropagating a small set of random receiver vectors. Then, the source can be located by performing a number of 'short' correlations between this proxy and the projection of the recorded acoustic data in the compressed space. Numerical experiments in a Pekeris ocean waveguide are presented which demonstrate that this compressed version of MFP is as effective as traditional MFP even when the compression is significant. The results are particularly promising in the broadband regime, where using as few as two random backpropagations per frequency performs almost as well as traditional broadband MFP, but with the added benefit of generic applicability. That is, the computationally intensive backpropagations may be computed offline independently from the received signals, and may be reused to locate any source within the search grid area. This thesis also introduces a round-robin approach for multi-source localization based on matched-field processing. Each new source location is estimated from the ambiguity function after nulling from the data vector the current source location estimates using a robust projection matrix. This projection matrix effectively minimizes mean-square energy near current source location estimates subject to a rank constraint that prevents excessive interference with sources outside of these neighborhoods. Numerical simulations are presented for multiple sources transmitting through a generic Pekeris ocean waveguide that illustrate the performance of the proposed approach, which compares favorably against other previously published approaches. Furthermore, the efficacy with which randomized backpropagations may be incorporated for computational advantage (as in the case of compressive parametric estimation) is also presented.
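A toy of the compressed-correlation idea: random receiver combinations compress both the replica matrix (offline) and the data, and the source is located on the compressed ambiguity surface. The replica vectors here are random stand-ins, not an ocean waveguide model.

    import numpy as np

    rng = np.random.default_rng(0)
    n_rx, n_grid, m = 64, 500, 32
    G = rng.normal(size=(n_rx, n_grid)) + 1j * rng.normal(size=(n_rx, n_grid))
    G /= np.linalg.norm(G, axis=0)                  # toy replica vectors
    src = 137
    d = G[:, src] + 0.05 * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
    Phi = rng.normal(size=(m, n_rx)) / np.sqrt(m)   # random combinations
    proxy = Phi @ G                                 # computed offline, reusable
    amb = np.abs((Phi @ d).conj() @ proxy)          # compressed ambiguity
    print(np.argmax(amb), src)                      # typically coincide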
APA, Harvard, Vancouver, ISO, and other styles
34

Bissey, Marie-Edith. "Semi-parametric estimation of preference functions." Thesis, University of York, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Jiang, Yong. "Estimation of Hazard Function for Right Truncated Data." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/94.

Full text
Abstract:
This thesis centers on nonparametric inference for the cumulative hazard function of a right-truncated variable. We present three variance estimators for the Nelson-Aalen estimator of the cumulative hazard function and conduct a simulation study to investigate their performance. A close match between the sampling standard deviation and the estimated standard error is observed when an estimated survival probability is not close to 1. However, the problem of poor tail performance exists due to the limitations of the proposed variance estimators. We further analyze an AIDS blood transfusion sample for which the disease latent time is right truncated. We compute the three variance estimators, yielding three sets of confidence intervals. This work provides insights for two-sample tests for right-truncated data in future research.
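For reference, the Nelson-Aalen estimator itself is a short computation; the sketch below is the usual right-censored version (the thesis adapts it to right truncation), with tied times handled observation by observation.

    import numpy as np

    def nelson_aalen(times, events):
        order = np.argsort(times)
        times, events = times[order], events[order]
        at_risk = len(times) - np.arange(len(times))   # risk-set sizes
        return times, np.cumsum(events / at_risk)      # sum of dN_i / Y_i

    t = np.array([2., 3., 3., 5., 8., 9., 12.])
    d = np.array([1, 1, 0, 1, 1, 0, 1])                # 1 = event, 0 = censored
    print(nelson_aalen(t, d))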
APA, Harvard, Vancouver, ISO, and other styles
36

Bury, Samuel Gary. "The Estimation of the RapidScat Spatial Response Function." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6797.

Full text
Abstract:
RapidScat is a pencil-beam wind scatterometer which operated from September 2014 to August 2016. Mounted aboard the International Space Station (ISS), RapidScat experiences significant altitude and attitude variations over its dataset. These variations need to be properly accounted for to ensure accurate calibration and to produce high resolution scatterometer images. Both the antenna pose and the one-way antenna pattern need to be validated. The spatial response function (SRF) is the two-way antenna pattern for a scatterometer combined with the processing and filtering done in the radar system electronics, and is dominated by the two-way pattern. To verify the pointing of the RapidScat antenna, the RapidScat SRF is estimated using on-orbit data. A rank reduced least squares estimate is used, which was developed previously for the Oceansat-2 (OSCAT) scatterometer [1]. This algorithm uses a small, isolated island as a delta function to sample the SRF. The island used is Rarotonga Island of the Cook Islands. The previously developed algorithm is updated to estimate the SRF in terms of beam azimuth and elevation angle rather than in kilometers on the ground. The angle-based coordinate system promotes greater understanding of how the SRF responds to biases and errors in antenna geometry. The estimation process is simulated to verify its accuracy by calculating the SRF for several thousand measurements in the region of Rarotonga. The calculated SRFs are multiplied by a corresponding synthetically created surface and integrated to yield simulated backscatter measurements, with added white noise. The SRF estimation algorithm is then performed. The results of the simulation show that the SRF estimation process yields a close estimate of the original SRF. The antenna pointing is validated by introducing a fixed offset in azimuth angle into the simulation and observing that the SRF is correspondingly shifted in the azimuth-elevation grid. The SRF computed from real data shows that there is an azimuth rotation angle bias of about 0.263 degrees for the inner beam and about 0.244 degrees for the outer beam. Since the SRF is dominated by the two-way antenna pattern, it can be modeled as the product of two identical one-way antenna patterns which are slightly offset from each other due to antenna rotation during the transmit/receive cycle. A method is developed based on this model to derive the one-way antenna pattern from the estimated SRF. Using a Taylor series expansion the one-way antenna pattern is computed from the SRF. The derived pattern recovers the SRF with small error, but there is significant error in the inferred one-way pattern when compared to the pre-launch estimated RapidScat one-way antenna pattern.
APA, Harvard, Vancouver, ISO, and other styles
37

Lin, Huey-Shyan, and 林惠賢. "Estimating the Number of Species via Martingale Estimating Function." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/76229192223030643651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Tsai, Shu-Jane, and 蔡淑貞. "Wavelets in Estimating Smooth Function." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/56261501731366539447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Huang, Hsu-Pang, and 黃旭邦. "Estimating the Number of Population via Martingale Estimating Function in Continuous Time." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/36120523705143432221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Huang, Xu-Bang, and 黃旭邦. "Estimating the Number of Population via Martingale Estimating Function in Continuous Time." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/72000746060389566872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kuang-Chen, Hsiao. "ON ESTIMATING REGRESSION FUNCTION WITH CHANGE POINTS." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-0607200503585200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hsiao, Kuang-Chen, and 蕭光呈. "ON ESTIMATING REGRESSION FUNCTION WITH CHANGE POINTS." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/93209714801005389344.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Mathematics
Academic year 93 (2004/05)
Local polynomial fitting is known as a powerful nonparametric regression method for dealing with correlated data and for finding implicit connections between variables, since it relaxes assumptions on the form of the regression function under investigation. Nevertheless, when fitting a regression curve with precipitous changes by the general local polynomial method, the fitted curve is oversmoothed near points where the true regression function has sharp features. Since local polynomial modelling fits a polynomial, a continuous and smooth function, to the regression function at each point of estimation, this drawback is intrinsic. Here, we suggest a modified estimator of the conventional local polynomial method. The asymptotic mean squared error is derived, and several numerical results are presented.
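As context for the oversmoothing issue, here is a minimal Python sketch of a conventional local linear (p = 1) kernel smoother applied to a curve with a jump. The thesis's change-point modification is not reproduced; the Gaussian kernel, bandwidth, and test function are illustrative assumptions.

import numpy as np

def local_linear(x0, x, y, h):
    # Weighted least squares fit of a line centered at x0 with Gaussian
    # kernel weights; the intercept is the fitted value at x0.
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0]

x = np.sort(np.random.uniform(0, 1, 300))
y = np.sin(4 * x) + 2.0 * (x > 0.5) + 0.1 * np.random.randn(300)
fit = np.array([local_linear(x0, x, y, h=0.05) for x0 in x])
# Near x = 0.5 the fitted curve smears the jump: the intrinsic
# oversmoothing described in the abstract.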
APA, Harvard, Vancouver, ISO, and other styles
43

Siou, Zeng Yi, and 曾怡琇. "Estimating Linear Regression Using Integrated Likelihood Function." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/73370160829649554047.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Statistics
Academic year 101 (2012/13)
In linear regression modeling, the method of least squares is the standard way to find the optimal linear relation between a dependent variable and multiple independent variables (covariates), provided the covariates are assumed given or deterministic. In practice, the covariates are collected from real data sources and by nature follow some distributions, and the ordinary least squares estimates can be less efficient when the covariates are stochastic. In this study, we propose a new method to estimate the regression: we estimate the parameters by maximizing the integrated likelihood function, that is, the joint marginal distribution of the dependent variable. We approximate the integrated likelihood function using selected Monte Carlo samples of covariates, through which only the important probability weights are accumulated in the likelihood function. The maximum likelihood estimate is obtained by applying Newton-Raphson iterations to the approximated likelihood function. Simulation examples are given and the results are compared to the least squares estimates.
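A minimal Python sketch of the Monte Carlo approximation follows, under illustrative assumptions: a single covariate, normal errors, covariate draws taken by resampling the observed covariates, and a generic simplex optimizer in place of the thesis's Newton-Raphson iterations and importance-weight selection. It illustrates the computation of the approximated integrated likelihood, not the full estimator.

import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
x_obs = rng.normal(2.0, 1.0, 100)                    # stochastic covariates
y_obs = 1.0 + 0.5 * x_obs + rng.normal(0.0, 0.3, 100)
x_draws = rng.choice(x_obs, size=500, replace=True)  # Monte Carlo covariate samples

def neg_integrated_loglik(theta):
    # theta = (intercept, slope, log error sd); the marginal density of
    # each y_i averages the conditional normal density over the draws.
    a, b, log_s = theta
    dens = stats.norm.pdf(y_obs[:, None], a + b * x_draws[None, :],
                          np.exp(log_s)).mean(axis=1)
    return -np.sum(np.log(dens + 1e-300))

res = optimize.minimize(neg_integrated_loglik, x0=[0.0, 0.0, 0.0],
                        method="Nelder-Mead")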
APA, Harvard, Vancouver, ISO, and other styles
44

WENG, HONG-MING, and 翁宏明. "Estimating the distribution function of a symmetric distribution." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/88754651802105097413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Castro, Inês Maria Lucas Crista de Sousa. "Estimating residual Kidney function: present and future challenge." Master's thesis, 2019. https://hdl.handle.net/10216/121364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Castro, Inês Maria Lucas Crista de Sousa. "Estimating residual Kidney function: present and future challenge." Dissertation, 2019. https://hdl.handle.net/10216/121364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Jie, Lin Zhe, and 林哲頡. "Estimating Time Series Regression Using Integrated Likelihood Function." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/42050102018910577968.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Statistics
Academic year 101 (2012/13)
Time series regression provides an explicit analysis in which one time series (the dependent variable) is expressed as linearly related to other time series (covariates), with model errors that may be correlated or simply white noise. The method of least squares is a naive approach that estimates the regression conditioned on the covariates; when the covariates are non-Gaussian stochastic time series, the least squares estimators may not be efficient. We propose a new method that takes the distributional properties into account: we estimate the parameters by maximizing the unconditional likelihood, which is obtained via convolution. Because the calculation of the multi-fold convolution is intractable, we approximate the unconditional likelihood by Monte Carlo, re-sampling the covariates and counting only selected probability weights in the approximation. The maximum likelihood estimate is obtained by applying Newton-Raphson iterations to the approximated likelihood function. Simulation examples are given and the results are compared to the least squares estimates.
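Both this abstract and the preceding one obtain the maximum likelihood estimate by Newton-Raphson iterations on an approximated likelihood. A generic Python sketch of that optimization step follows, with gradient and Hessian taken by finite differences; the step sizes and tolerance are illustrative assumptions.

import numpy as np

def newton_raphson(loglik, theta0, tol=1e-8, max_iter=50, eps=1e-4):
    # Iterate theta <- theta - H^{-1} g, where g and H are the numerical
    # gradient and Hessian of the log-likelihood at theta.
    theta = np.asarray(theta0, dtype=float)
    k = len(theta)
    for _ in range(max_iter):
        grad = np.zeros(k)
        hess = np.zeros((k, k))
        for i in range(k):
            ei = np.eye(k)[i] * eps
            grad[i] = (loglik(theta + ei) - loglik(theta - ei)) / (2 * eps)
            for j in range(k):
                ej = np.eye(k)[j] * eps
                hess[i, j] = (loglik(theta + ei + ej) - loglik(theta + ei - ej)
                              - loglik(theta - ei + ej) + loglik(theta - ei - ej)) / (4 * eps ** 2)
        step = np.linalg.solve(hess, grad)
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta

# Example: normal mean and log-variance estimated from data y
y = np.random.normal(1.5, 2.0, 400)
loglik = lambda th: -0.5 * len(y) * th[1] - 0.5 * np.sum((y - th[0]) ** 2) / np.exp(th[1])
theta_hat = newton_raphson(loglik, [0.0, 0.0])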
APA, Harvard, Vancouver, ISO, and other styles
48

Weng, Cheng-Hsuan, and 翁正軒. "Method of Estimating the Atrial Function and Wall Motion." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/90710561450142136606.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Biomedical Engineering
Academic year 96 (2007/08)
Atrial fibrillation is caused by disorderly action potentials generated by the sinoatrial node in the atrium. Some excited regions of cardiac muscle are re-stimulated by recurrent impulses, which makes the left atrium contract irregularly. As the heart beats, these changing cardiac patterns can be captured immediately by CT scanning, and with CT images acquired at different times the abnormal rhythmic wall motion of the heart chamber can be analyzed. To calculate wall motion, a reference, a fixed feature across the CT image series at different times, must be calibrated slice by slice. In the analysis of pattern changes, image registration is the usual method for matching characteristic references between images. This study uses high-quality cardiac CT images to efficiently calculate the degree of matching between CT images, ruling out interference from non-left-atrium motion during the CT scan. To delineate the atrial profile, we adopt a seed region-growing algorithm; from the profile data, we analyze atrial wall motion to evaluate the extent of contraction in all regions of the left atrium. Using the system built in this research, analysis of 20 cardiac CT image sets (eleven normal and nine with atrial fibrillation) shows clear differences between the normal subjects and the patients. For the normal subjects, the average lateral motion at the anterior and posterior left inferior wall is 6.37±1.81 mm and 7.01±1.72 mm; for the patients, the readings are 8.76±1.46 mm and 9.20±1.63 mm (p < 0.01). A further difference appears in the vector analysis of the LA areas circling the right inferior pulmonary vein: for the normal subjects, the magnitude of the difference at the inferior and posterior patterns is 0.045±0.016 and 0.051±0.022, while for the patients the readings are 0.089±0.038 and 0.085±0.028 (p < 0.01). These analyses indicate that AF patients develop partial malfunction of the left atrium, so these measures may serve as diagnostic indices in clinical diagnosis and treatment.
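A minimal Python sketch of seed region growing on a 2-D image follows: starting from a seed pixel, the region grows over 4-connected neighbors whose intensity stays within a tolerance of the seed value. The tolerance, connectivity, and random test image are illustrative assumptions; the thesis applies the idea to cardiac CT slices.

import numpy as np
from collections import deque

def region_grow(image, seed, tol=30.0):
    # Breadth-first growth from the seed; a pixel joins the region if it
    # is in bounds, not yet included, and close in intensity to the seed.
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    base = float(image[seed])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if abs(float(image[r, c]) - base) > tol:
            continue
        mask[r, c] = True
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

img = np.random.randint(0, 255, (64, 64))
atrium_mask = region_grow(img, seed=(32, 32), tol=40.0)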
APA, Harvard, Vancouver, ISO, and other styles
49

"A data-driven bandwidth selector for estimating conditional density function." 2003. http://library.cuhk.edu.hk/record=b5891506.

Full text
Abstract:
Yim Tsz-ho.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2003.
Includes bibliographical references (leaves 47-49).
Abstracts in English and Chinese.
Table of contents:
Abstract
Acknowledgement
Chapter 1: Introduction
Chapter 2: Local Polynomial Modeling
2.1 Local Polynomial Fitting
2.1.1 Methodology
2.1.2 The kernel K
2.1.3 The bandwidth h
2.1.4 The order p
2.2 Estimation of Conditional Density
Chapter 3: Bandwidth Selection
3.1 Rule of Thumb
3.2 Bootstrap Bandwidth Selection
3.3 A Cross-Validation Method
Chapter 4: A Theoretical Justification
4.1 Proof of (4.1)
4.2 Proof of (4.2)
Chapter 5: Simulation Studies
Chapter 6: Real Data Applications
6.1 Case Study With Canadian Lynx Data
6.2 Case Study With U.S. Twelve-Month Treasury Bill Data
Chapter 7: Conclusions
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
50

Van, Deventer Hendrick Emanuel. "Estimating glomerular filtration rate in black South Africans." Thesis, 2010. http://hdl.handle.net/10539/7996.

Full text
Abstract:
MMed, Chemical Pathology, Faculty of Health Sciences, University of the Witwatersrand, 2009
Background: The 4-variable Modification of Diet in Renal Disease (4-v MDRD) and Cockcroft-Gault (CG) equations are commonly used for estimating glomerular filtration rate (GFR); however, neither equation has been validated in an indigenous African population. The aim of this study was to evaluate the performance of the 4-v MDRD and CG equations for estimating GFR in black South Africans against measured GFR, and to assess whether the ethnicity factor established for African Americans in the 4-v MDRD equation is appropriate for the local population.
Methods: We enrolled 100 patients in the study. The plasma clearance of chromium-51 EDTA (51Cr-EDTA) was used to measure GFR, and serum creatinine was measured using an isotope dilution mass spectrometry (IDMS) traceable assay. We estimated GFR using both the re-expressed 4-v MDRD and CG equations and compared it to measured GFR using four modalities: correlation coefficient, weighted Deming regression analysis, percentage bias, and the proportion of estimated GFR within 30% of measured GFR (P30).
Results: The Spearman correlation coefficient between measured and estimated GFR was similar for both equations (4-v MDRD R2 = 0.80 and CG R2 = 0.79). Using the 4-v MDRD equation with the ethnicity factor of 1.212 established for African Americans resulted in a median positive bias of 13.1 (95% CI 5.5 to 18.3) mL/min/1.73m2. Without the ethnicity factor, the median bias was 1.9 (95% CI -0.8 to 4.5) mL/min/1.73m2.
Conclusion: The 4-v MDRD equation, without the ethnicity factor of 1.212, can be used for estimating GFR in black South Africans.
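For concreteness, here is a short Python sketch of the re-expressed (IDMS-traceable) 4-v MDRD equation and the P30 metric described above. Serum creatinine is in mg/dL, and the example values are hypothetical.

def egfr_mdrd(scr_mg_dl, age, female=False, ethnicity_factor=1.0):
    # Re-expressed 4-v MDRD: 175 x Scr^-1.154 x age^-0.203, times 0.742
    # for females and an optional ethnicity factor (1.212 for African
    # Americans, which the study found inappropriate locally).
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203 * ethnicity_factor
    return egfr * 0.742 if female else egfr

def p30(estimated, measured):
    # Proportion of estimates falling within 30% of the measured GFR.
    return sum(abs(e - m) <= 0.3 * m for e, m in zip(estimated, measured)) / len(measured)

# Hypothetical example: a 45-year-old male with creatinine 1.1 mg/dL
with_factor = egfr_mdrd(1.1, 45, ethnicity_factor=1.212)
without_factor = egfr_mdrd(1.1, 45)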
APA, Harvard, Vancouver, ISO, and other styles
